Study co-led by University of Strathclyde finds AI has yet to transform cybercrime

Cybercriminals have been struggling to adopt AI in their activities, according to a new study that analysed more than 100 million posts from underground cybercrime communities.

Co-authored by researchers at the University of Strathclyde, the study suggests that many cybercrime actors lack the specialist skills, time or resources needed to turn new AI tools into genuinely new criminal capabilities.

AI was used most successfully to conceal patterns that can be detected by cybersecurity defenders, and to run social media bots used for harassment and fraud.

Dark web

Researchers from Strathclyde, the University of Edinburgh and the University of Cambridge analysed discussions from the CrimeBB database, which contains more than 100 million posts scraped from underground and dark web cybercrime forums.

These conversations were analysed using a combination of machine learning tools and manual sampling, searching for posts that discussed how cybercrime actors were experimenting with AI technologies from November 2022 onwards, when ChatGPT was released.

Through their analysis, researchers found that AI coding assistants are mostly proving useful to already skilled actors rather than lowering the barrier to committing cybercrime, as the tools still require significant skills and knowledge to be used effectively.

They also found some evidence of the use of AI tools in more advanced forms of automation, especially in social engineering and bot farming. 

Because most cybercrime is already heavily industrialised, deskilled, and reliant on automated tools and pre-made assets, this represents an evolution rather than a revolution in criminal practices, experts say.

Poorly secured

Dr Daniel Thomas in the Department of Computer & Information Sciences at Strathclyde, said: “This research helps us understand what is happening in cybercrime communities in the wake of tools like ChatGPT.

“While we saw experimentation, it is not yet translating into widespread, step-change capability.

“The more immediate risk is the rapid adoption of poorly secured AI systems by organisations and individuals, which could create new vulnerabilities that criminals can exploit.”

Reassuringly, the study found that guardrails on the major chatbots are significantly reducing harm. But researchers say there is still cause for concern after observing early evidence that these communities are having some success in manipulating the outputs of mainstream chatbots.

Interestingly, many people in these cybercrime communities were also observed worrying about losing their ‘day jobs’ in IT as AI disrupts mainstream software industries, a pressure that could drive them and others towards more cybercriminal activity.

Contrary to reports from the cybersecurity industry to date, the authors warn that the most pressing risks are likely to come from the adoption of poorly secured agentic AI systems – a form of AI that can act autonomously, making decisions and carrying out actions on specific tasks.

There are also risks around insecure ‘vibecoded’ products – where computer code has been written using AI – deployed by legitimate industry. These weaknesses, rather than the adoption of AI tools by cybercriminals, are likely to pose the greater threat.

The findings have been peer reviewed and will be presented at the Workshop on the Economics of Information Security in Berkeley, USA, in June 2026.