
A peer-reviewed study analyzing over 100 million posts from underground cybercrime forums finds that cybercriminals have yet to fully harness AI tools since ChatGPT's release in November 2022. Researchers from the University of Edinburgh, Cambridge, and Strathclyde report that AI is mainly used to evade detection and automate social media bots, benefiting skilled users more than novices. The study suggests AI represents an evolution rather than a revolution in cybercrime, requiring significant technical expertise for effective use.
The articles present a neutral, research-focused perspective without political framing. They emphasize academic findings from multiple universities, highlighting technical challenges faced by cybercriminals in using AI. The coverage avoids partisan viewpoints, focusing instead on empirical analysis and expert commentary, reflecting a balanced approach to the topic.
The tone across the articles is largely neutral and analytical, concentrating on the current limitations of AI in cybercrime rather than sensationalizing threats. While acknowledging cybercriminal experimentation, the coverage underscores the lack of significant benefits so far, resulting in a measured and fact-based sentiment.
Each source's own headline, political lean, and sentiment are listed below, so you can see framing differences at a glance.
| Source | Their headline | Bias | Sentiment |
|---|---|---|---|
| mint | Study says AI has yet to transform cybercrime | Center | Neutral |
| news18 | Study says AI has yet to transform cybercrime | Center | Neutral |
news18 broke this story on 6 May, 12:35 pm. Other outlets followed.
Well-covered story: coverage matches public importance.