AI Security & Risks in 2025: From AI-Enhanced Swatting and Adversarial Attacks to Deepfakes and Governance Challenges

AI security risks 2025

Artificial Intelligence (AI) is revolutionizing business, governance, and everyday life—but it’s also bringing serious security risks. In 2025, adversaries and policymakers alike are grappling with fast-evolving threats, from AI-enhanced swatting at universities to deepfake impersonation scams, adversarial attacks, AI-model weaponization, and insider risks. This blog post dives deep into the most recent developments in AI security & risks, offering actionable insights for organizations navigating this high-stakes landscape.


1. AI-Enhanced Swatting Attacks Shake U.S. Universities

Between August 21 and 25, 2025, a cybercriminal group nicknamed “Purgatory” orchestrated a dangerous wave of AI-amplified swatting targeting at least 10 U.S. universities. Attackers used AI to simulate gunfire and panic-inducing screams during 911 calls, triggering large-scale emergency responses and causing significant resource strain. Cybercriminals are reportedly monetizing the chaos—charging up to $95 per call—earning the group as much as $100,000. Experts warn that foreign intelligence or terrorist groups could exploit such tactics unless law enforcement develops more advanced digital tools to counteract this growing threat.New York Post


2. Adversarial Attacks Now Fuel Nearly One-Third of AI Cyber Incidents

A recent Deloitte Threat Report underscores the dramatic rise of adversarial AI cyber threats. Around 30% of all AI-related cyberattacks now involve sophisticated techniques such as training data poisoning, model theft, and manipulative inputs—designed to corrupt AI behavior or leak sensitive outcomes.Security Boulevard


3. AI Impersonation Scams Soar—Deepfakes Fuel a New Wave of Deception

Impersonation scams using AI—voice cloning, facial deepfakes, and AI-generated messaging—have surged by 148% in 2025. Fraudsters are weaponizing these technologies to mimic trusted individuals in calls, video meetings, emails, and texts, tricking victims into transferring money or sharing sensitive information. One high-profile case involved scammers impersonating a CFO to steal $25 million. Security experts recommend deliberate skepticism, multi-factor authentication (MFA), and the “Take9” pause-and-assess approach to thwart such scams.TechRadar


4. AI Tools Themselves Get Weaponized—Claude Used in Ransomware, Fraud, Extortion

Anthropic’s AI model Claude has been exploited in alarming ways, including employment fraud schemes, ransomware development, and large-scale extortion campaigns. In one instance, North Korean hackers used Claude to pass as legitimate tech workers at U.S. companies, generating revenue illicitly. Anthropic has responded with account bans and tougher security protocols—but the incident highlights the looming threat of AI tool misuse.IT Proanthropic.com


5. OWASP Lists Prompt Injection as Critical Risk in LLM Security

The security threat known as prompt injection—where malicious prompts manipulate AI responses—has emerged as a critical vulnerability. The OWASP Gen AI Security Project lists it among the top risks in developing and deploying generative AI and LLM applications. Prompt injection can lead to data manipulation, misinformation, or denial-of-service attacks.genai.owasp.org Wikipedia also details notable incidents, such as hidden prompts in academic papers and direct exploits in AI systems like Bing Chat, ChatGPT search tools, DeepSeek-R1, and Google Gemini.Wikipedia


6. CISOs Sound the Alarm: AI Governance, Insider Threats, and Data Privacy Top the Agenda in UAE

According to Proofpoint’s 2025 Voice of the CISO report, cybersecurity leaders in the UAE are increasingly prioritizing AI governance and insider threats. They identified staggering risks—100% linked data loss to departing employees, and 55% flagged concerns about customer data exposure via public GenAI tools. Safe GenAI practices are underway, with 60% enforcing internal guidelines and 58% exploring AI defense strategies.intelligentciso.com


7. AI Attacks Loom Daily: Security Leaders Brace for Constant Threats

Trend Micro’s State of AI Security Report for the first half of 2025 reveals that 93% of security leaders expect daily AI-driven attacks. Additionally, 66% of organizations believe AI will have the most significant impact on cybersecurity this year. These findings highlight the urgency for organizations to bolster AI-specific threat detection, adversarial training, and real-time model monitoring systems.Trend Micro


8. Broader Cybersecurity Landscape: From Malvertising to AI-Powered Malware

Recent vulnerabilities include the “Grokking” tactic, where attackers bypass protections on X’s (formerly Twitter) AI platform to push malicious links. Another case involves threat actors using HexStrike AI—an AI offensive tool designed for red teaming—to exploit Citrix flaws immediately after disclosure.The Hacker News Simultaneously, novel ransomware variants like “PromptLock” are using open-weight models to generate real-time malicious scripts capable of cross-platform attacks.The Hacker News Additionally, the rise of AI-driven malware creation and polymorphic attacks is stressing the need for security embedded within app development (RASP, runtime protection, threat monitoring).TechRadar


9. Organizational Costs Skyrocket—Insider Risks and AI Complexity Drive File Security Breaches

A study by OPSWAT reveals that insider threats paired with the complexity of AI systems are fueling file security breaches, costing enterprises millions. 61% of organizations have experienced insider breaches, highlighting the need for unified, multi-layered defenses that include AI risk mitigation.OPSWATHelp Net Security


Why This Matters (Summary)

Risk AreaKey ConcernRecommended Action
Physical SafetyAI-powered Swatting attacksDevelop law enforcement digital response protocols
Model IntegrityAdversarial attacks, prompt injectionDeploy adversarial training, model monitoring
Identity SecurityDeepfake impersonation scamsEducate users, enforce MFA, encourage verification
AI Tool MisuseClaude and other tools weaponizedImplement AI usage monitoring and misuse detection
Governance WeaknessInsider threats, policy gapsEstablish AI governance policies, data controls
Infrastructure ResilienceDaily AI-driven cyber threatsEquip CISOs with AI-specific threat intelligence

✅ External Links


AI security risks 2025
AI security risks 2025