AI is not just technology; it is also a “new weapon” in cyberspace.

The use of AI in cybercrime is evolving rapidly, driving AI phishing attacks, AI scam calls, voice cloning scams, and even Deepfake attacks. The enormous losses caused by AI-enabled crime in recent years are deeply concerning, and as Deepfakes grow more sophisticated, users and businesses are losing the ability to distinguish real from fake in calls, online meetings, and videos.

According to Cybersecurity Ventures, global cybercrime will cause an estimated $10.5 trillion in damages annually by 2025, with a significant share attributable to AI-driven attacks. A 2024 Europol report likewise warns that “AI is blurring the lines between amateur and professional attackers,” threatening national cybersecurity systems.

According to Group-IB, a leading global cybersecurity company, five prominent trends in how AI is being applied in cybercrime stand out:

  • Sophisticated AI phishing.
  • AI-assisted scam calls.
  • Voice cloning scams.
  • Integration of AI into cyber attack tools.
  • Dark LLMs – large language models stripped of their control mechanisms for illegal purposes.

Together, these trends are producing:

  • Higher success rates for scams.
  • More convincing impersonation of real people.
  • Shortened attack cycles.
  • The spread of AI crime to newly emerging hacker groups.
  • Investigations that are increasingly difficult.
  1. Voice Cloning Scams

  • Among all AI attack trends, Deepfake stands out as a powerful and dangerous tool. Hacker groups are continuously ramping up Deepfake-based cyberattacks, underscoring how high a priority the technique has become. In the second quarter of 2025 alone, damages caused by Deepfake attacks are estimated to reach $350 million.
  • In Deepfake attacks, hackers mainly use the following common methods:
    • Executive Impersonation: Deepfake videos directly impersonate CEOs, CFOs, or trusted clients in real-time calls to pressure victims into making transfers or disclosing data. This is often combined with Business Email Compromise (BEC) scams.
    • Romance and Investment Scams: AI-generated fake personas build trust and manipulate victims over time.
    • KYC Bypass: Deepfake videos and images deceive identity verification processes during account creation or financial service registration.


  • Deepfake and lip-sync services have been around for a few years, from fake Elon Musk cryptocurrency promotions to politicians explaining nonexistent tax laws. Today, Deepfake videos are openly sold on social platforms for as little as $5 to $50, which is why AI Deepfake campaigns are becoming increasingly dangerous and widespread.


  • Initially, attackers exploited AI to create videos that impersonated legitimate individuals with natural gestures and movements during meetings. Misled users believed they were on authentic calls with their superiors and transferred large sums of money. Over the past two years, many high-profile cases involving millions in fraudulent transfers have emerged, such as the Arup case, in which a finance employee transferred $25 million after a video meeting populated by Deepfake recreations of the company’s CFO and colleagues.
  • The latest reports show that hackers strongly favor “Impersonation-as-a-Service” (IaaS), in which providers sell impersonation capabilities to other criminals for scams, financial fraud, unauthorized system access, or identity theft. Common offerings include:
    • Impersonating IT engineers.
    • Impersonating banks.
    • Impersonating CEOs (CEO fraud).
    • Deepfake voice/video.
    • Impersonating customers.


  2. AI-Assisted Scam Calls

  • This trend has grown significantly of late. Criminals exploit AI’s ability to learn a person’s voice from a short recording (just a few seconds) and then use that voice to:
    • Make calls pretending to be a relative asking for money transfers.
    • Impersonate a boss to instruct financial staff to make a transfer (Business Email Compromise in voice form).
    • Imitate the voice of an IT engineer or bank employee to obtain OTPs or login information.


  • Additionally, criminals use AI voicebots to create smart automated calls impersonating:
    • Police.
    • Banks.
    • Tax agencies.
    • Shipping companies (e.g., impersonating FedEx, DHL…).
  • These bots then ask victims to provide OTPs, card information, ID numbers, and bank account details, and can even listen and respond naturally like a human.
  3. Dark LLMs

  • Dark LLMs (Dark Large Language Models) are rising fast in the cybercrime underground. They are large language models customized by cybercriminals, with all ethical barriers and safety policies removed, to support illegal activities, fraud, and cyberattacks.


  • Dark LLMs are gaining ground partly because they give attackers significant support across criminal activities:
    • Scam/Fraud Content Generation.
    • Social Engineering & Phishing Kit.
    • Malware Support.
    • Reconnaissance & Initial Access.
  • According to reports, at least three Dark LLM providers are currently active, with rental costs ranging from $30 to $200 per month, which is why the potential threats from this form of AI attack keep multiplying.
  • A notable point of this trend is that criminals avoid public APIs (such as OpenAI’s or Anthropic’s) because those are easy to track and log. Instead, they self-host LLMs on private infrastructure or the dark web to:
    • Remain anonymous.
    • Optimize according to their own needs.
    • Be difficult to shut down.
  4. Sophisticated AI Phishing

  • In this trend, attackers use AI-powered phishing and spam email tools that are smarter, more automated, and able to scale social engineering campaigns dramatically.


  • Unlike traditional spam emails, AI-powered mailers are bulk email systems integrated with large language models (LLMs) or other NLP models, aimed at:
    • Automatically generating phishing email content tailored to each target.
    • Customizing and personalizing content (name, company, position, etc.).
    • Optimizing subject lines, tone, and sending times to increase open rates and click-through rates.
  • Furthermore, on the dark web, new spam platforms are emerging with the “Mailer-as-a-Service” model, where:
    • Users pay to rent bots or AI for spam as needed.
    • Service packages include:
      • Sending 10,000 emails per month.
      • Personalizing content with AI.
      • Spam performance reports.
      • Options to send from lists of stolen email addresses.
  5. Integrating AI into Cyber Attack Tools

  • This is a particularly important trend in modern cybercrime: AI is integrated into cyber attack tools but does not yet operate completely independently.
  • Previously, cybercriminals used chatbots like ChatGPT or WormGPT to generate malicious code one piece at a time, such as an obfuscated PowerShell script or a phishing email. Now AI is integrated directly into malware builders, phishing toolkits, and reconnaissance frameworks, with attackers either abusing public APIs from legitimate AI providers (OpenAI, Google, etc.) or running their own Dark LLMs.


  • A typical campaign goes through six main stages:
    • Reconnaissance: Analyze the target system, list resources, and assess vulnerabilities.
    • Vulnerability Scanning: Automatically generate queries for Shodan/Censys or write attack PoC from CVE.
    • Exploitation & Evasion: Generate exploit code suitable for the operating system, use AI to obfuscate payloads to avoid EDR detection.
    • Persistence & Privilege Escalation: Suggest ways to maintain access and escalate privileges based on the system environment.
    • Tactical Code Generation: Generate shellcode, backdoors, and droppers with AI according to specifications.
    • Phishing/Social Engineering: Write email content, chatbot scams, and voice phishing with AI-generated voices.
To counter these threats, defenders should act on several fronts:

  1. Smart Multi-Layer Defense

  • Implement multi-layer defense combining behavior analysis, biometric authentication, session analysis, and anomaly detection; a minimal sketch of one such layer follows below.
  • Use explainable AI to monitor user and device behavior and identify complex AI-driven fraud.
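
As an illustration of the anomaly-detection layer, here is a minimal Python sketch using scikit-learn’s IsolationForest on a few hypothetical session features (login hour, data volume, countries observed, failed MFA attempts). The features, sample data, and thresholds are illustrative assumptions, not a production design.

```python
# Minimal anomaly-detection sketch for one defensive layer.
# Assumption: each row is one user session described by four features:
# [login_hour, mb_transferred, distinct_countries_24h, failed_mfa_attempts].
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical baseline of normal sessions (toy data).
baseline_sessions = np.array([
    [9, 12.0, 1, 0],
    [10, 8.5, 1, 0],
    [14, 20.1, 1, 1],
    [11, 15.3, 1, 0],
    [16, 9.9, 1, 0],
])

model = IsolationForest(contamination=0.05, random_state=42)
model.fit(baseline_sessions)

# A suspicious session: 3 a.m. login, huge transfer, two countries, MFA failures.
new_session = np.array([[3, 950.0, 2, 4]])
score = model.decision_function(new_session)[0]  # lower means more anomalous
if model.predict(new_session)[0] == -1:
    print(f"Anomalous session (score={score:.3f}) - escalate for review")
```

In a real deployment this layer would feed a risk engine alongside biometric and session signals rather than act alone.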
  2. Monitoring the Underground Ecosystem

  • Invest in a comprehensive Threat Intelligence platform to:
    • Monitor forums, dark markets, and underground services.
    • Provide early warnings about new AI tools, attack tactics, and Crime-as-a-Service offerings (a minimal monitoring sketch follows below).
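
What such monitoring might look like in code, as a minimal sketch: the feed URL and JSON shape below are hypothetical stand-ins for whatever API a real Threat Intelligence platform exposes.

```python
# Minimal sketch of underground-ecosystem monitoring.
# Assumption: ti.example.com is a hypothetical feed returning a JSON list of
# posts shaped like {"source": ..., "title": ..., "body": ...}.
import json
import urllib.request

FEED_URL = "https://ti.example.com/v1/posts"  # hypothetical endpoint
WATCH_TERMS = ("dark llm", "deepfake", "impersonation-as-a-service",
               "mailer-as-a-service", "voice cloning")

def poll_feed(url: str) -> list[dict]:
    """Fetch recent underground posts from the (hypothetical) feed."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def matched_terms(post: dict) -> list[str]:
    """Return the watch terms mentioned in a post, if any."""
    text = f"{post.get('title', '')} {post.get('body', '')}".lower()
    return [term for term in WATCH_TERMS if term in text]

def scan() -> None:
    for post in poll_feed(FEED_URL):
        hits = matched_terms(post)
        if hits:
            # In practice this would open a ticket or alert an analyst.
            print(f"[EARLY WARNING] {post['source']}: {post['title']} -> {hits}")

if __name__ == "__main__":
    scan()
```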
  3. Advanced Awareness Training

  • Contextual Warnings: Analyze content and situations (e.g., urgent requests, appeals to authority); a rule-based sketch follows this list.
  • Clear Reporting and Verification Flow: Guide employees on how to respond to and escalate suspicious requests.
  • Encourage Skeptical Thinking: Train staff to recognize behaviors that create a sense of urgency, pressure, or claimed authority.
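
A minimal rule-based sketch of the contextual-warning idea: flag messages that combine urgency cues with claimed authority before they reach the user. The cue lists and banner text are illustrative assumptions.

```python
# Minimal contextual-warning sketch: attach a banner when a message combines
# urgency cues with authority cues. Cue lists are illustrative, not exhaustive.
import re

URGENCY_CUES = [r"\burgent\b", r"\bimmediately\b", r"\bright now\b",
                r"\bwithin the hour\b", r"\bbefore end of day\b"]
AUTHORITY_CUES = [r"\bceo\b", r"\bcfo\b", r"\bon behalf of\b",
                  r"\bconfidential transfer\b", r"\bdo not tell\b"]

def context_warning(message: str) -> str | None:
    """Return a warning banner if the message mixes urgency and authority cues."""
    text = message.lower()
    urgent = any(re.search(cue, text) for cue in URGENCY_CUES)
    authoritative = any(re.search(cue, text) for cue in AUTHORITY_CUES)
    if urgent and authoritative:
        return ("CAUTION: this message combines urgency with claimed authority. "
                "Verify through a known channel before acting.")
    return None

print(context_warning(
    "This is the CEO. I need a confidential transfer done immediately."))
```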
  4. AI Defense Applications

  • Integrate AI into monitoring systems to:
    • Detect anomalies on a large scale
    • Automate classification, alerts, and initial response
  • Apply machine learning models to reduce the time from detection to response (MTTD/MTTR); a toy triage sketch follows below.
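
As a sketch of AI-assisted triage, the toy classifier below routes raw alert text to an escalate-or-ignore queue. The training examples and labels are invented; a real deployment would train on the organization’s own incident history.

```python
# Toy alert-triage sketch: a text classifier that routes alerts, shortening
# the manual step between detection and response. Training data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

alerts = [
    "multiple failed logins followed by success from new country",
    "routine weekly vulnerability scan completed",
    "powershell spawned by office process with encoded command",
    "user requested password reset via helpdesk",
]
labels = ["escalate", "ignore", "escalate", "ignore"]

triage = make_pipeline(TfidfVectorizer(), LogisticRegression())
triage.fit(alerts, labels)

new_alert = "encoded powershell command launched from excel macro"
print(triage.predict([new_alert])[0])  # expected to route to "escalate"
```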

AI is increasingly transforming how cybercrime operates, from making phishing more sophisticated to creating deepfakes and automating attacks through Dark LLMs. Being aware of these threats and implementing comprehensive countermeasures (people, technology, collaboration) will help minimize damage and maintain digital security in the AI era.

References:

  • From Deepfakes to Dark LLMs: 5 use-cases of how AI is Powering Cybercrime | Group-IB Blog
  • AI in Cybersecurity: How AI Is Impacting the Fight Against Cybercrime

Exclusive article by FPT IS Technology Experts

Luu Tuan Anh – FPT IS Cyber Security Center
