Smile ID has released its 2026 Digital Identity Fraud Report, arguing that artificial intelligence is reshaping digital fraud by making it cheaper to run, easier to scale, and harder to detect. The company says identity checks are no longer just a compliance step, because attackers increasingly treat verification as a targetable part of the security stack. The report is based on anonymised insights from more than 200 million identity verification checks run in 2025 across 37 industries and more than 35 countries.
AI is accelerating deepfake-driven fraud
Smile ID says generative AI has improved the realism and consistency of synthetic biometric attacks while cutting the cost of production. That combination, it argues, enables criminal groups to run high-volume campaigns rather than relying on one-off document manipulation. The report frames this as a shift in fraud economics, where automation and commoditised tooling reward repetition and experimentation.
Signals are overtaking images in fraud detection
The report’s headline operational change is the move from assessing what is seen to validating how it was captured. Smile ID says nearly 90% of fraud blocked in 2025 was triggered by mobile SDK signals rather than image analysis alone, up from 68% in 2024. It concludes that device, session, and environment integrity are increasingly decisive in stopping modern attacks.
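To make the shift concrete, here is a minimal sketch of what "signals over images" decisioning can look like. All field names, thresholds, and the decision policy are invented for illustration; they are not Smile ID's actual SDK interface.

```python
# Hypothetical sketch: capture-environment signals are checked before the
# image-based liveness score is even consulted. Field names and the 0.8
# threshold are illustrative assumptions, not Smile ID's real API.
from dataclasses import dataclass

@dataclass
class CaptureSignals:
    is_emulator: bool       # device exhibits emulator characteristics
    virtual_camera: bool    # a virtual camera driver was detected
    session_replayed: bool  # session token has been seen before
    image_liveness: float   # 0.0 (spoof) .. 1.0 (live), from an image model

def assess(s: CaptureSignals) -> str:
    # Hard environment signals block on their own, no matter how
    # convincing the submitted imagery is.
    if s.is_emulator or s.virtual_camera or s.session_replayed:
        return "block"
    # Only a clean capture environment lets image analysis decide.
    return "pass" if s.image_liveness >= 0.8 else "review"

print(assess(CaptureSignals(False, True, False, 0.99)))   # → block
print(assess(CaptureSignals(False, False, False, 0.99)))  # → pass
```

The ordering is the point: a photorealistic deepfake scores well on image analysis, so the decisive evidence comes from how the media reached the pipeline, not what it depicts.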
Injection attacks target the capture pipeline
Smile ID highlights “injection-style” fraud, where attackers attempt to bypass the camera by feeding synthetic or pre-recorded media into the verification session. It says that in 2025 it observed more than 100,000 injection attempts per month tied to emulators, virtual cameras, and manipulated environments. The company argues this marks a move away from classic visual spoofing toward systematic interference with the verification process itself.
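One common defence against injection, sketched below under simplifying assumptions: the server issues a per-session nonce and the capture SDK authenticates each frame with a device-held key, so media fed in from outside the trusted capture path cannot produce a valid tag. The key handling here is deliberately simplified (a real deployment would anchor the key in device secure hardware); this is an illustration of the pattern, not Smile ID's implementation.

```python
# Hypothetical capture-integrity sketch: frames are HMAC-tagged at capture
# time with a per-session nonce. Pre-recorded or synthetic media injected
# into the session cannot carry a valid tag.
import hashlib
import hmac
import os

DEVICE_KEY = os.urandom(32)  # illustrative; real keys live in secure hardware

def sdk_capture(frame: bytes, nonce: bytes) -> tuple[bytes, bytes]:
    """Trusted capture path: tag the frame with the session nonce."""
    tag = hmac.new(DEVICE_KEY, nonce + frame, hashlib.sha256).digest()
    return frame, tag

def server_verify(frame: bytes, tag: bytes, nonce: bytes) -> bool:
    """Server side: recompute the tag and compare in constant time."""
    expected = hmac.new(DEVICE_KEY, nonce + frame, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

nonce = os.urandom(16)
frame, tag = sdk_capture(b"genuine-camera-frame", nonce)
print(server_verify(frame, tag, nonce))                 # genuine → True
print(server_verify(b"injected-deepfake", tag, nonce))  # injected → False
```

The nonce also defeats replay: a tag recorded from an earlier legitimate session fails verification against the current session's nonce.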
Authentication is now the main battleground
According to the report, fraud attempts aimed at authentication now outnumber onboarding-related fraud by more than five to one. Smile ID says attackers are operating inside verified accounts, focusing on login flows, account recovery, device changes, and high-value transactions. It adds that AI-driven automation helps criminals reuse verified biometrics, take over accounts mid-journey, and move funds across platforms at scale.
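A simple way to operationalise this inside verified accounts is a velocity rule over post-onboarding events: individually routine actions (a password reset, a new device, a large transfer) become suspicious when they cluster. The event names and the ten-minute window below are invented for illustration.

```python
# Hypothetical velocity rule over authenticated-session events. Flags a
# session when two or more risky actions land within a short window,
# a pattern typical of mid-journey account takeover.
from datetime import datetime, timedelta

RISKY = {"password_reset", "device_change", "high_value_transfer"}

def flag_session(events: list[tuple[str, datetime]],
                 window_min: int = 10) -> bool:
    # Keep only risky events, in time order.
    risky = sorted(t for name, t in events if name in RISKY)
    # Any two consecutive risky events inside the window trip the rule.
    return any(b - a <= timedelta(minutes=window_min)
               for a, b in zip(risky, risky[1:]))

t0 = datetime(2025, 6, 1, 12, 0)
takeover = [("login", t0),
            ("device_change", t0 + timedelta(minutes=1)),
            ("high_value_transfer", t0 + timedelta(minutes=4))]
print(flag_session(takeover))  # → True
```

Real systems layer many such rules with model scores, but the unit of analysis is the same one the report points to: the authenticated session, not the onboarding check.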
Duplicate reuse and networked abuse are rising
Smile ID reports that duplicate attempts—where stolen or fraudulent identity data is reused—more than doubled year over year in 2025 and nearly tripled the combined total seen across 2023 and 2024. It says its position as shared infrastructure helps it identify coordinated patterns that may appear legitimate when viewed by a single institution. The company claims it feeds privacy-preserving risk indicators into a dynamic defence network, combining traditional algorithms, controlled capture methods, and internally tuned language models to adapt protections across clients.
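One standard building block for this kind of cross-client signal sharing, sketched under stated assumptions: each participant contributes only a keyed hash of a normalised identity attribute, so the network can count reuse without ever holding the raw value. The shared key, the normalisation, and the reuse threshold are illustrative; Smile ID does not disclose its exact mechanism.

```python
# Hypothetical privacy-preserving duplicate detection: institutions share
# a keyed hash (a "risk token") rather than raw identity data, and the
# network flags the token once it recurs across attempts.
import hashlib
import hmac
from collections import Counter

NETWORK_KEY = b"illustrative-shared-key"  # real deployments rotate/manage keys

def risk_token(national_id: str) -> str:
    norm = national_id.strip().upper()  # normalise so trivial edits collide
    return hmac.new(NETWORK_KEY, norm.encode(), hashlib.sha256).hexdigest()

seen: Counter[str] = Counter()

def report_attempt(national_id: str) -> bool:
    """Record an attempt; return True once the same identity reappears."""
    token = risk_token(national_id)
    seen[token] += 1
    return seen[token] > 1

print(report_attempt("AB123456"))   # first use anywhere → False
print(report_attempt("ab123456 "))  # reuse at another client → True
```

This is why a single institution may see each attempt as legitimate while the shared network sees a pattern: the duplicate only becomes visible when tokens from many clients land in one counter.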
Smile ID’s report argues that identity has become a continuous security surface, with fraud evolving from “get verified once” tactics into ongoing attacks against sessions, devices, and authenticated users. CEO Mark Straub said fraud is no longer only a KYC issue and that “identity has entered the security era,” where ecosystem-level intelligence is needed for real-time defence. With deepfakes, injection attacks, and credentialed abuse rising, the report positions capture integrity and authentication controls as the next priorities for digital platforms.