- AI-assisted fraud has grown sharply, making phishing campaigns more convincing
- Deepfake-enabled identity attacks caused verified losses of $347 million globally
- Subscription-based AI crimeware creates a stable, growing underground market
Artificial intelligence is now used by cybercriminals to automate fraud, scale phishing campaigns, and industrialize impersonation at a level that was previously impractical.
Unfortunately, AI-assisted attacks could be among the biggest security threats your business faces this year, but staying aware and acting promptly can keep you a step ahead.
Group-IB’s Weaponized AI report shows that the growing use of AI by criminals represents a distinct fifth wave of cybercrime, driven by the commercial availability of AI tools rather than isolated experimentation.
Rise in AI-driven cybercrime activity
Evidence from dark web monitoring shows that AI-related cybercrime activity is not a short-term response to new technologies.
Group-IB says first-time dark web posts referencing AI-related keywords increased by 371% between 2019 and 2025.
The most pronounced acceleration followed the public release of ChatGPT in late 2022, after which interest levels remained persistently high.
By 2025, tens of thousands of forum discussions each year referenced AI misuse, indicating a stable underground market rather than experimental curiosity.
Group-IB analysts identified at least 251 posts explicitly focused on large language model exploitation, with most references linked to OpenAI-based systems.
A structured AI crimeware economy has emerged, with at least three vendors offering self-hosted Dark LLMs without safety restrictions.
Subscription prices range from $30 to $200 per month, with some vendors claiming more than 1,000 users.
One of the fastest-growing segments is impersonation services, with mentions of deepfake tools linked to identity verification bypass rising by 233% year on year.
Entry-level synthetic identity kits are sold for as little as $5, while real-time deepfake platforms cost between $1,000 and $10,000.
Group-IB recorded 8,065 deepfake-enabled fraud attempts at a single institution between January and August 2025, with verified global losses reaching $347 million.
AI-assisted malware and API abuse have grown sharply, with AI-generated phishing now embedded in malware-as-a-service platforms and remote access tools.
Experts warn that AI-powered attacks can bypass traditional defenses unless teams continuously monitor and update systems.
Networks should be protected by firewalls that can identify unusual traffic and AI-generated phishing attempts.
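To make that kind of screening concrete, here is a minimal sketch of heuristic phishing scoring of the sort a mail gateway or firewall rule set might apply. The patterns, weights, and example message are illustrative assumptions, not rules from Group-IB's report or any particular product.

```python
# Minimal sketch: heuristic scoring of an inbound email for phishing traits.
# The indicators and weights below are illustrative assumptions only.
import re

SUSPICIOUS_PATTERNS = {
    r"verify your account": 2.0,                  # credential-harvesting language
    r"urgent|immediately|within 24 hours": 1.5,   # manufactured urgency
    r"https?://\d{1,3}(\.\d{1,3}){3}": 3.0,       # links to raw IP addresses
    r"\.zip\b|\.scr\b|\.exe\b": 2.5,              # risky attachment extensions
}

def phishing_score(subject: str, body: str) -> float:
    """Return a crude risk score; higher means more phishing-like."""
    text = f"{subject}\n{body}".lower()
    return sum(weight for pattern, weight in SUSPICIOUS_PATTERNS.items()
               if re.search(pattern, text))

if __name__ == "__main__":
    score = phishing_score(
        "Urgent: verify your account",
        "Click http://192.168.4.7/login immediately to avoid suspension.",
    )
    print(f"risk score: {score}")  # flag for review above a tuned threshold
```

A production gateway would combine signals like these with sender reputation and learned models rather than fixed weights, which matters precisely because AI-written phishing tends to avoid the crudest telltale phrasing.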
With appropriate endpoint protection, companies can detect suspicious activity before malware or remote access tools spread.
Rapid and adaptive malware removal remains critical because AI-enabled attacks can execute and propagate faster than standard methods can respond.
Combined with a layered security approach and anomaly detection, these measures help stop attacks such as deepfake calls, cloned voices, and fake login attempts.
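For the anomaly-detection layer, a minimal sketch of what baseline-and-deviation alerting looks like on a single metric, hourly login attempts in this case. The window size, threshold, and sample data are assumptions for illustration; real deployments monitor many signals at once.

```python
# Minimal sketch: flag anomalous spikes in hourly login attempts using a
# rolling mean and standard deviation (z-score). Window and threshold are
# illustrative assumptions; production systems use far richer baselines.
from statistics import mean, stdev

def is_anomalous(history: list[int], current: int,
                 window: int = 24, threshold: float = 3.0) -> bool:
    """Return True if `current` deviates more than `threshold` sigmas
    from the recent baseline."""
    baseline = history[-window:]
    if len(baseline) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

if __name__ == "__main__":
    hourly_logins = [40, 42, 38, 41, 39, 43, 40, 37, 44, 41, 38, 42]
    print(is_anomalous(hourly_logins, 41))   # False: within normal range
    print(is_anomalous(hourly_logins, 400))  # True: a 10x spike worth alerting on
```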