
AI & Cybercrime: How Attackers Exploit Large Language Models

Technological progress is never neutral. Every innovation brings not only new opportunities but also new forms of misuse. AI systems - especially large language models (LLMs) - are now being used by attackers to reach their goals more efficiently, more quickly, and more cleverly than ever before. The question is no longer if AI will be misused, but how we can detect, limit, and prevent such abuse.
Table of Contents:
What Are Large Language Models—And Why Are They So Powerful?
LLMs are AI models trained on massive amounts of text data to understand and generate natural language. They can:
- answer complex questions,
- write or debug code,
- carry on conversations,
- and create content in fluent, natural language.
Examples:
GPT-4 (OpenAI), Claude 3 (Anthropic), Gemini (Google), LLaMA 3 (Meta), Mistral (Open Source)
These models offer immense potential - but that’s exactly what makes them dangerous. Their capabilities are general-purpose, meaning they can be used for both good and bad. The model itself is neutral - how it's used makes all the difference.
How Cybercriminals Are Already Exploiting LLMs
Security researchers have already documented numerous cases where AI models are used to plan, execute, or enhance cyberattacks. Common misuse includes:
- crafting convincing phishing emails in flawless language,
- generating social engineering scripts for calls, job scams, or fraud,
- automating ransomware communications and extortion messages,
- producing malicious code or macros for infected documents,
- enabling “low-code hacking” by less experienced attackers.
Particularly concerning: LLMs often still produce results even when users attempt to “jailbreak” them—that is, bypass built-in safety mechanisms.
Typical Attack Vectors: How Hackers Use LLMs
Here are four common LLM-enabled attack scenarios:
1. Phishing & CEO Fraud on a New Level
LLMs can write highly personalized emails in any language, based on publicly available data about the target. Even internal company tone and style can be convincingly mimicked.
2. Malware & Exploits at the Push of a Button
Despite safeguards, cleverly designed prompts can still produce malware snippets, keyloggers, or obfuscated JavaScript - especially in open-source models with no content filtering.
3. Deepfakes & Synthetic Identities
Combined with text-to-speech and video generation tools, realistic deepfakes can be created and misused in job interviews, ID verifications, or fraud scenarios.
4. Prompt Injection & Jailbreaks
Attackers use specific inputs to manipulate chatbots or AI applications, potentially gaining access to confidential data or tricking the system into harmful behavior.
Case Studies: When LLMs Become Part of the Attack Architecture
Case 1: WormGPT – The “Dark Twin” of ChatGPT
A tool distributed in hacker forums, based on an open-source LLM and optimized for cybercrime, no content filters, designed specifically for automating attacks.
Case 2: Prompt Injection via Public AI Chatbots
Researchers have shown that chatbots embedded in websites can be manipulated to reveal internal data or user information—by carefully crafting the context of the conversation.
Case 3: “LLM-as-a-Service” in Criminal Telegram Groups
Cybercriminals offer API access to unfiltered LLMs—used to automate phishing campaigns, generate fake profiles, or deploy scam bots.
Enterprise Risks: How LLMs Can Become Internal Threats
LLMs are not just an external threat—they can also become internal vulnerabilities when:
- employees enter confidential data into public chatbots,
- customer interactions are automated without proper safeguards,
- internal systems respond to manipulative prompts unintentionally.
Example:
An internal support chatbot reveals sensitive information (e.g., pricing details, internal processes) when prompted with misleading questions or contextual tricks.
Other risks include:
- data leaks via external API integrations,
- suppliers using unsecured LLM tools within the value chain,
- reputational damage from public AI misbehavior.
Security Measures & Compliance Recommendations
Companies can protect themselves through a combination of technical, organizational, and procedural measures.
Technical:
- Use verified, GDPR-compliant LLM providers (e.g., on-premise or EU-hosted models).
- Implement prompt filtering and logging for internal AI systems.
- Restrict sensitive features (e.g., data access, output forwarding).
Organizational:
- Create internal policies for safe AI usage.
- Train staff to recognize deepfakes and AI-powered phishing.
- Evaluate suppliers for AI-related risks (Third-Party Risk Management).
Procedural:
- Establish AI governance frameworks.
- Conduct LLM risk assessments (e.g., prompt injection testing).
- Ensure transparent incident communication protocols.
What the AI Act and Other Regulations Say
The EU AI Act explicitly addresses the misuse of AI, including:
- banning high-risk applications (e.g., covert manipulation, unlabelled deepfakes),
- requiring transparency and control for generative AI,
- enforcing documentation, risk assessments, and human oversight.
Other frameworks like the NIS2 Directive, Digital Services Act, and Cyber Resilience Act expand the legal foundation by demanding:
- safeguards against the misuse of digital services,
- resilience of software-based products,
- accountability when integrating external AI systems.
Conclusion: Responsible AI Use Requires Security Awareness
LLMs are tools - not inherently good or evil. But like any powerful tool, they can be abused. Businesses, public institutions, and developers must take proactive responsibility for understanding and addressing these new threats.
Those deploying LLMs must:
- understand their risks,
- set clear boundaries,
- and actively prevent misuse.
Only then can AI be integrated responsibly and sustainably into business operations - without unintended side effects or compliance violations.
Important: The content of this article is for informational purposes only and does not constitute legal advice. The information provided here is no substitute for personalized legal advice from a data protection officer or an attorney. We do not guarantee that the information provided is up to date, complete, or accurate. Any actions taken on the basis of the information contained in this article are at your own risk. We recommend that you always consult a data protection officer or an attorney with any legal questions or problems.


