
How Fraudsters Use AI to Get Ahead

AI-driven fraud now operates at machine speed, launching thousands of coordinated attacks simultaneously with minimal human oversight. Defense requires the same velocity. Organizations need AI-powered tools that identify deepfakes in real time and deploy countermeasures at the speed of the attacks themselves.

Last year, Chinese operators exploited Anthropic’s Claude Code AI tool to target approximately 30 global organizations. The attackers jailbroke Claude by disguising malicious tasks as legitimate defensive cybersecurity work, breaking complex attack chains into smaller, innocuous-seeming requests that avoided the system’s guardrails. Once compromised, the AI operated with remarkable autonomy: inspecting target systems, scanning for high-value databases, writing custom exploit code, and harvesting usernames and passwords. The result was four successful breaches with minimal human supervision. As Anthropic noted in its incident disclosure, “The highest-privilege accounts were identified, backdoors were created, and data were exfiltrated” largely through automated processes.

The economics prove equally striking. Stanford University’s ARTEMIS AI agent cost €15 per hour yet outperformed most human hackers in a university network assessment. The AI generated sub-agents to investigate vulnerabilities in the background while its primary process continued scanning for additional threats. Human penetration testers had to investigate each finding sequentially, but the AI pursued parallel attack paths.

Fraud has always relied on speed and scale, but AI delivers both at unprecedented levels without forcing attackers to choose between sophistication and velocity. If adversaries now deploy AI at machine speed with minimal human oversight, can defenses that are still operating at human speed keep pace?

How AI Transforms Fraud

AI transforms fraud in four fundamental ways that work in concert to overwhelm traditional defenses.

Scale and automation enable thousands of operations per second versus human-limited sequential tasks. AI executes parallel attack streams, scanning networks, exploiting vulnerabilities, and harvesting credentials simultaneously. The ARTEMIS agent demonstrated this capability, spawning background sub-agents while its primary process kept scanning for new threats. Defensive teams staffed for human-speed threats suddenly face an exponentially larger attack surface.
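
The parallel-versus-sequential gap is easy to see in miniature. The sketch below is hypothetical (the host addresses and the fixed 0.1-second "probe" are stand-ins, not a real scanner): twenty concurrent probes complete in roughly the time of one, while a sequential loop would take twenty times as long.

```python
import asyncio
import time

async def probe(host: str) -> str:
    # Stand-in for network latency on a single check
    await asyncio.sleep(0.1)
    return f"{host}: checked"

async def scan_parallel(hosts: list[str]) -> list[str]:
    # All probes run concurrently, so total wall time is close to
    # one probe's latency rather than len(hosts) * latency.
    return await asyncio.gather(*(probe(h) for h in hosts))

hosts = [f"10.0.0.{i}" for i in range(20)]
start = time.perf_counter()
results = asyncio.run(scan_parallel(hosts))
elapsed = time.perf_counter() - start
print(f"{len(results)} hosts in {elapsed:.2f}s")
```

A sequential version of the same loop would accumulate latency per host, which is exactly the constraint human testers face and AI agents avoid.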

Sophistication without expertise democratizes advanced techniques that previously required specialized skills. AI handles automated reconnaissance, exploit generation, and custom code development that once demanded years of training. Claude Code autonomously wrote exploits, identified high-privilege accounts, and created backdoors. The barrier to entry has collapsed and amateur attackers now deploy professional-grade tactics.

Personalization at industrial scale allows AI to analyze target data and craft convincing social engineering campaigns. Unlike generic phishing attempts, AI generates attacks tailored to individual contexts, interests, and professional relationships. Kaspersky recently detected AI-generated websites mimicking popular applications like crypto wallets, antivirus tools, and password managers. Generic fraud detection systems fail against individually customized attacks.

Real-time adaptation enables AI to test defensive responses, identify what works, and iterate mid-campaign. These systems learn from blocked attempts, adjust their approach, and probe different vectors continuously without human intervention. Static defenses become obsolete as AI discovers and exploits gaps autonomously.

The most advanced attacks deploy all four capabilities simultaneously. AI provides speed, sophistication, scale, and adaptation in a single tool, fundamentally changing the economics attackers face.

AI-Powered Social Engineering

AI amplifies social engineering by transforming what were labor-intensive, specialized attacks into automated campaigns operating at industrial scale.

The personalization engine begins by scraping social media profiles, LinkedIn connections, and data breach dumps to build detailed target profiles. AI analyzes communication patterns, professional relationships, and recent activities to generate phishing that references specific projects, colleagues, and timelines. Unlike generic “urgent wire transfer” emails, AI crafts contextually accurate requests that align with actual workplace dynamics.

Traditional social engineering required skilled operators crafting individual approaches over days or weeks, while AI generates thousands of personalized attacks in minutes. AI-generated phishing sites distributed through search engine optimization attract victims organically. Attackers iterate through victim lists, adjusting messaging based on response patterns.

AI also eliminates the fraud signals employees were trained to recognize. Grammatically perfect communications remove traditional tells like spelling errors or awkward phrasing. AI analyzes legitimate corporate communications to match tone, formatting, and terminology precisely. It generates supporting evidence (fake documents, websites, email trails) that create false credibility. Victims lose reliable signals for distinguishing legitimate requests from fraudulent ones.

Voice and video impersonation represents the newest vector. AI clones voices from as little as 10 seconds of audio gathered from social media, earnings calls, or podcasts. Real-time deepfake video calls create false legitimacy through visual presence. CEO impersonation attacks combine voice, video, and contextual knowledge with devastating effect. Americans lost nearly $2 billion to scams last year, with phone scams averaging $1,500 per victim.

Attack Automation and Autonomous Exploitation

AI has collapsed the distinction between reconnaissance, exploitation, and credential harvesting into a single automated process. Where human attackers move sequentially through attack phases over days or weeks, AI executes the entire lifecycle in hours with minimal oversight.

The Claude Code incident reveals how this works in practice. Attackers jailbroke Anthropic’s AI by disguising malicious operations as legitimate defensive security work. By breaking complex attack chains into smaller, innocuous-seeming requests, they bypassed the system’s guardrails. The compromised AI then operated largely on its own: inspecting target systems, scanning databases, writing custom exploits, and harvesting credentials. It even generated post-operation reports documenting which systems it breached, which backdoors it created, and which accounts it compromised. Four organizations were successfully breached with what Anthropic described as “minimal human supervision.”

Stanford’s ARTEMIS agent demonstrated similar capabilities during a university network assessment, outperforming most human participants. The critical advantage wasn’t just speed – it was parallelization. While human testers investigated each vulnerability sequentially, ARTEMIS deployed sub-agents to examine findings in the background while its primary process continued scanning for additional weaknesses.

This autonomy creates specific challenges for identity systems. AI doesn’t just find credentials faster than humans – it systematically identifies where they’re stored (databases, configuration files, memory dumps), extracts them, tests them across multiple systems, and maps privilege escalation paths from standard user accounts through administrator access to domain controller compromise. Traditional defenses assume attackers work at human speed, giving security teams time to detect anomalies and respond. AI eliminates that window.

The persistence problem proves equally troubling. AI establishes multiple backdoors rather than relying on single access points, monitors how security teams respond to initial detection, and adapts its techniques to maintain footholds even after discovery. Blocking one access route doesn’t eliminate the threat when AI has already established several alternatives.

AI doesn’t just automate attacks. It generates the fraudulent infrastructure supporting them.

AI-Generated Fraudulent Infrastructure

AI has industrialized the creation of fraudulent infrastructure that once required weeks of manual work. By analyzing legitimate applications to understand their design patterns, security indicators, and user flows, AI generates convincing replicas of crypto wallets, password managers, and banking portals. Recent fraud campaigns involved AI-generated websites mimicking popular applications; distributed through search engines, these sites attracted users organically. They deployed legitimate software like the Syncro remote access tool, then leveraged it for malicious purposes.

The SEO manipulation layer makes detection particularly difficult. AI optimizes fake sites for search rankings, generating content, backlinks, and metadata that mimic established applications. Users searching “download [legitimate app]” encounter fraudulent versions appearing in top results. Trust signals like HTTPS certificates, professional design, and “verified” badges mask malicious intent behind familiar legitimacy markers.

Domain and email infrastructure rotates faster than defenders can respond. AI generates convincing lookalike domains and creates email systems with appropriate headers that suggest legitimacy. As previous infrastructure gets blocked, AI rapidly regenerates new domains and addresses. Defenders face an asymmetric challenge: blocklists can’t keep pace with AI-generated rotation, URL reputation systems lag behind creation speed, and user education about “checking the URL carefully” fails when fraudulent domains appear entirely legitimate.

Organizations need verification that authenticates identity regardless of how plausible the surrounding infrastructure appears. AI’s greatest amplification may be its systematic approach to credential exploitation.

AI-Assisted Credential Exploitation

Credential stuffing has existed for years, but AI transforms it from opportunistic testing into systematic exploitation. AI tests leaked username and password combinations across thousands of services simultaneously, identifies which credentials work, catalogs which services each one accesses, and prioritizes high-value targets like banking systems, email accounts, and corporate networks. The operation runs at network bandwidth limits rather than human-limited testing speeds.
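One detectable signature of stuffing at this scale is a single source attempting many distinct usernames in a short window, rather than one user fumbling a password. A hypothetical sketch (the window, limit, and IP are invented parameters):

```python
from collections import defaultdict, deque

WINDOW, LIMIT = 60.0, 20   # seconds; distinct usernames per source

class StuffingDetector:
    def __init__(self):
        self.attempts = defaultdict(deque)   # ip -> deque of (ts, user)

    def observe(self, ip: str, user: str, ts: float) -> bool:
        """Record a login attempt; return True if this IP should be flagged."""
        q = self.attempts[ip]
        q.append((ts, user))
        while q and ts - q[0][0] > WINDOW:   # drop attempts outside the window
            q.popleft()
        return len({u for _, u in q}) > LIMIT

det = StuffingDetector()
flags = [det.observe("203.0.113.9", f"user{i}", float(i)) for i in range(30)]
print(flags[0], flags[-1])   # quiet at first, flagged once the limit is crossed
```

Real deployments would combine several such signals (IP reputation, per-account velocity, device fingerprints), since attackers rotate sources precisely to evade any single counter.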

When exact credential matches fail, AI analyzes password patterns from breach databases to generate likely variations. It tests modifications based on common user behaviors—adding numbers, changing special characters, updating seasonal references—increasing success rates without requiring additional leaked data.
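The same mechanical mutations can be run defensively, to reject a "new" password that is just a variation of a breached one. A hypothetical sketch (the substitution table and suffix list are illustrative, not exhaustive):

```python
def variations(base: str) -> set[str]:
    """Generate common mechanical mutations of a breached password."""
    subs = {"a": "@", "o": "0", "e": "3", "s": "5", "i": "1"}
    stems = {base, base.capitalize()}
    for ch, repl in subs.items():
        # Layer substitutions on top of previously generated stems
        stems |= {s.replace(ch, repl) for s in list(stems)}
    suffixes = ["", "!", "#", "1", "123", "2024", "2025"]
    return {stem + suf for stem in stems for suf in suffixes}

breached = "sunshine"
print("5un5hine2025" in variations(breached))   # leet-speak + year suffix
```

Even this tiny rule set catches combinations like substituted characters plus a year suffix, which is exactly the pattern space AI explores far more thoroughly.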

After gaining initial access, AI systematically probes for privilege escalation opportunities. It identifies misconfigured permissions, unpatched vulnerabilities, and weak administrator credentials, then maps relationships between accounts to find paths toward high-privilege access. Claude Code autonomously identified “highest-privilege accounts” during its attacks.

AI also coordinates credential exploitation with other attack vectors. It generates convincing support tickets requesting multi-factor authentication resets, then combines stolen passwords with social engineering and precise timing.

Traditional account takeover requires attackers to test credentials manually and investigate privilege paths sequentially. AI conducts parallel testing, automated privilege mapping, and systematic exploitation. Defenders monitoring for “suspicious login patterns” face attackers operating within normal behavioral parameters. Organizations need continuous authentication that verifies identity rather than trusting initial credentials throughout a session.
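Continuous authentication can be sketched as a running risk score: every in-session event adjusts it, benign activity lets it decay, and crossing a threshold forces re-verification instead of trusting the original login forever. The signal names and weights below are invented for illustration:

```python
# Invented per-event risk contributions
RISK = {"typing_rhythm_drift": 0.3, "new_ip": 0.25,
        "impossible_travel": 0.6, "sensitive_action": 0.2}
THRESHOLD = 0.7   # above this, force re-verification
DECAY = 0.9       # older signals fade as the session continues

def session_risk(events: list[str]) -> float:
    risk = 0.0
    for ev in events:
        risk = min(risk * DECAY + RISK.get(ev, 0.0), 1.0)
    return risk

quiet = session_risk(["sensitive_action"])
hot = session_risk(["new_ip", "typing_rhythm_drift", "impossible_travel"])
print(quiet < THRESHOLD, hot >= THRESHOLD)
```

The point of the decay term is that a single odd signal doesn't lock out a legitimate user, while a cluster of correlated anomalies mid-session does trigger a challenge, regardless of how cleanly the session began.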

AI-Powered Defense

The speed mismatch creates an asymmetric disadvantage for defenders. AI executes thousands of operations per second while security teams analyze alerts at human speed. Traditional fraud detection relies on rules-based systems flagging known patterns. AI fraud deploys constantly evolving tactics, personalized attacks, and novel exploitation methods that rules-based systems weren’t designed to catch. Organizations need defenses operating at comparable speed with comparable intelligence.

The recently introduced AI Scam Prevention Act (December 2025) represents regulatory recognition of the threat. The legislation prohibits using AI to replicate voices or images with fraudulent intent, codifies FTC bans on impersonating government or business officials, and updates legal definitions, unchanged since 1996, to cover text messages and video calls. It creates an enforcement framework and establishes inter-agency coordination through an Advisory Committee.

What the Act doesn’t address proves equally important: technical detection requirements, automated attack prevention, credential exploitation, or synthetic infrastructure generation. Regulation establishes consequences for convicted fraudsters but doesn’t prevent attacks or help organizations detect them before damage occurs.

Effective defense requires AI meeting AI. Fraud detection systems must analyze authentication patterns, transaction behaviors, and communication anomalies in real time. Technologies like xDeTECH distinguish human voices from AI-generated audio during phone-based verification. xFace PAD (Presentation Attack Detection) algorithms detect presentation attacks and deepfakes at the point of capture. Machine learning models trained on evolving attack patterns adapt defenses as threats emerge.

Orchestration platforms like TrustX coordinate defensive layers including biometric verification, document authentication, behavioral analysis, and injection detection. These systems respond to AI-speed attacks with automated countermeasures calibrated to risk level. No-code orchestration enables rapid defensive iteration that matches attacker adaptation speed while providing centralized visibility into which attacks were attempted, which defenses triggered, and how techniques evolved.
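Risk-calibrated orchestration can be sketched as layers that each contribute a weighted risk component, with the response escalating as combined risk rises. This is a hypothetical illustration in the spirit of such platforms, not the actual API of TrustX; the layer names, weights, and thresholds are invented:

```python
def orchestrate(signals: dict[str, float]) -> str:
    """Map per-layer risk signals (0.0-1.0) to a calibrated response."""
    weights = {"biometric": 0.4, "document": 0.3,
               "behavior": 0.2, "injection": 0.1}
    risk = sum(weights[layer] * signals.get(layer, 0.0) for layer in weights)
    if risk < 0.3:
        return "allow"
    if risk < 0.6:
        return "step-up"    # e.g. require a fresh biometric check
    return "block"

print(orchestrate({"biometric": 0.1, "behavior": 0.2}))
print(orchestrate({"biometric": 0.9, "injection": 1.0}))
```

Keeping the decision logic in one declarative place is what makes rapid iteration possible: tuning a weight or threshold changes defensive behavior everywhere at once, without touching the individual detection layers.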

Machine learning models analyzing blocked attacks identify emerging patterns before they become widespread. Defensive systems learn from attempted exploits, strengthening protection proactively through feedback loops where each blocked attack improves detection for subsequent attempts. This shifts security posture from reactive responses after breaches to adaptive evolution alongside threats.
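The feedback loop can be shown with a deliberately minimal online learner: each blocked attack becomes a labeled example that nudges a linear detector, so the score for similar attempts rises over time. The feature names, learning rate, and toy model below are assumptions for illustration, far simpler than a production system:

```python
import math

weights = {"new_device": 0.0, "odd_hour": 0.0, "geo_mismatch": 0.0}
LR = 0.5   # learning rate

def score(event: dict) -> float:
    """Estimated probability that an event is fraudulent (logistic model)."""
    z = sum(weights[k] * event.get(k, 0.0) for k in weights)
    return 1 / (1 + math.exp(-z))

def learn(event: dict, is_fraud: bool) -> None:
    """One gradient step on a single labeled example."""
    err = (1.0 if is_fraud else 0.0) - score(event)
    for k in weights:
        weights[k] += LR * err * event.get(k, 0.0)

attack = {"new_device": 1.0, "odd_hour": 1.0, "geo_mismatch": 1.0}
before = score(attack)
for _ in range(20):          # twenty blocked attacks of the same shape
    learn(attack, True)
after = score(attack)
print(round(before, 2), round(after, 2))   # the score rises as the loop learns
```

The same structure, scaled up in features and model capacity, is what lets defenses strengthen with every blocked attempt instead of waiting for the next rules update.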

The effective model pairs AI with human judgment. AI handles speed and scale by analyzing thousands of authentication attempts, detecting anomalies, and responding in milliseconds. Humans provide judgment by investigating edge cases, validating complex scenarios, and refining detection thresholds. Neither proves sufficient alone, but together they amplify security teams rather than replace them.

Time to Act

Attackers are already deploying AI at scale. Meanwhile, most organizations still rely on human-speed defenses: manual fraud review, rules-based detection systems, and periodic security updates. Americans lost nearly $2 billion to scams last year, and AI will drive that figure sharply higher as tools become more accessible and attacks grow more sophisticated.

The strategic imperatives are straightforward. Organizations must implement AI-powered detection that analyzes authentication patterns and transaction behaviors at transaction speed. They need orchestrated defensive layers responding automatically to threats rather than requiring human intervention at each stage. Organizations implementing AI-powered defense report measurable fraud reduction. Those relying on legacy detection face mounting losses as AI attacks scale.