Hackers Hate AI Slop Even More Than You Do

You know that feeling when you open a forum and see yet another low-effort, AI-generated post that adds nothing? Cybercriminals feel it too. And they're not shy about saying so.

Researchers from the University of Edinburgh, Cambridge, and Strathclyde analyzed 97,895 AI-related conversations on cybercrime forums from ChatGPT's launch in 2022 through 2023. Their finding: the initial excitement about AI for hacking has soured into open hostility.

"People don't like it," says Ben Collier, a security researcher at the University of Edinburgh. The study documents a growing backlash against generative AI in underground forums and hacking groups.

The Complaints Sound Familiar

On Hack Forums, users vent about members who paste AI-generated "bullet-pointed explainers" of basic cybersecurity concepts. One post reads: "I see a lot of members using AI for making their threads/posts and it pisses me off since they don't even take the time to write a simple sentence or two." Another is blunter: "Stop posting AI shit."

The frustration isn't just about quality. These forums are social spaces where users build reputations through reliable, human interaction. AI undermines that. "I think a lot of them are a bit ambivalent about AI because it undermines their claim to be a skilled person," Collier explains.

AI Doesn't Make Friends

One forum user captured the sentiment: "If I wanted to talk to an AI chatbot, there are many websites for me to do so … I come here for human interaction." The social dynamic is real — users want to make friends, not talk to bots.

The Hype vs. Reality

Despite breathless headlines about AI-powered cyberattacks, the study found no evidence that AI has lowered the barrier to entry for low-level crime. "It has not significantly reduced the skill barrier to entry, nor has it led to serious disruptions to established business models or practices," the researchers state. AI's main impact has been in already automated areas like SEO fraud, social media bots, and some romance scams.

More sophisticated threat actors are aware of AI's limitations. Ian Gray, VP of intelligence at security firm Flashpoint, notes that advanced hackers know how to jailbreak commercial models but are wary of AI-generated projects shared in forums — which often contain vulnerabilities that expose the builder's infrastructure.

Even Criminals Have Standards

Some hackers have started disparaging peers who rely on AI. According to Flashpoint, one group dismissed its rivals with the sneer "all they can do is use AI." The irony is thick: the same people who might use AI for crime hate seeing it in their communities.

A Possible Middle Ground?

Not everyone wants AI banned entirely. Some Hack Forums users said they'd welcome an AI assistant that helps structure posts or fix grammar — as long as it doesn't post for them. One user warned: "An AI generator for posts would turn this into a clanker forum of AI's talking to each other."

Meanwhile, Flashpoint spotted hackers discussing an "AI-enhanced" cybercrime market to speed up buying stolen data. The response? "IT'S A STUPID FUCKING IDEA TO PUT AI INTO YOUR MARKET."

What This Means for Developers

This study is a reality check. The hype around AI-powered hacking often outstrips the actual impact. For defenders, it's reassuring: AI hasn't democratized cybercrime the way some feared. For developers building AI tools, it's a reminder that even in shady corners, people crave genuine human interaction over automated noise.

The Takeaway

If cybercriminals — who have every incentive to use AI — are pushing back against it, maybe the rest of us should rethink how we deploy generative AI in social spaces. Sometimes, the best feature is no feature at all.