Over the last few years, headlines have captured the growing impact of deepfake-enabled fraud, from executives tricked into wiring millions to families targeted with AI-generated voice scams. Last week on Capitol Hill, a panel of experts on fraud, identity, and cybersecurity delivered a clear message: deepfakes are no longer a future risk. They are now a primary driver of fraud at scale.

The Better Identity Coalition (BIC) partnered with the Congressional Stop Scams Caucus to put on a briefing for House staff, featuring leaders from the financial services sector, identity research community, and fraud prevention industry. Together, they outlined how generative AI is reshaping the threat landscape. The discussion focused on real-world activity, and the ways in which adversaries are leveraging new tools to scam Americans. What emerged was a clear picture of a fraud ecosystem that is faster, more coordinated, and increasingly difficult to counter using traditional tools.

The takeaway for policymakers is clear. While fraud itself is not new, the capabilities now available to adversaries require a step change in how the U.S. approaches digital identity, authentication, and information sharing.

Fraud Has Gone Industrial and AI Is Accelerating It

Fraudsters have always evolved alongside technology. The difference now is the speed and scale at which they can evolve and operate.

Panelists described highly organized cybercriminal operations that resemble sophisticated businesses. These groups are using generative AI to produce synthetic identities, clone voices, and generate realistic video impersonations with minimal cost and effort. In financial services, institutions are seeing a surge in deepfake-enabled fraud attempts, often involving coordinated campaigns that submit thousands of slightly varied identity verification requests to evade detection systems.

This is not opportunistic fraud. It is industrialized, data-driven, and continuously optimized.

AI is also compressing the feedback loop for attackers. Criminals can rapidly test different approaches, whether targeting onboarding systems, call centers, or consumers, and refine their tactics in near real time. The result is a dynamic threat environment where defenses that were effective even a year ago may now fall short.

A Rapidly Expanding Attack Surface

Identity attacks are no longer confined to a single channel.

Voice-based fraud is emerging as a primary vector. Call centers, long relied upon for account recovery and customer support, are increasingly vulnerable to AI-generated voice cloning. At the same time, attackers are targeting consumers directly, using publicly available data and AI tools to build detailed profiles that make scams more convincing and more personalized.

The convergence of these tactics is expanding the attack surface beyond any individual organization. Even institutions with strong internal controls can be undermined if attackers successfully manipulate customers or exploit weaker points in adjacent systems.

This reflects a broader shift. Digital identity is not just an institutional challenge. It is an ecosystem-wide issue that requires coordination across sectors.

Why Current Defenses Are Struggling to Keep Pace

Despite meaningful advances in fraud detection, panelists pointed to persistent structural gaps.

Many identity proofing systems still rely on methods that are increasingly vulnerable to synthetic identity attacks. Authentication mechanisms often lack phishing resistance, leaving users exposed to credential theft and social engineering. While stronger solutions exist, implementation challenges have slowed their adoption.

At the same time, defenders face constraints that attackers do not. Financial institutions and other organizations must operate within privacy, security, and compliance frameworks that can limit real-time information sharing. Criminal networks, by contrast, collaborate freely, share tactics, and scale successful approaches quickly.

The result is an uneven playing field: attackers iterate and scale at will, while defenders remain constrained by fragmentation, regulation, and limited information sharing.

The Role for Policymakers

The discussion reinforced that addressing these challenges will require more than incremental improvements. Policymakers have an important role to play in accelerating the adoption of more secure, privacy-enhancing approaches to digital identity. The panel walked through key recommendations from the Financial Services Sector Coordinating Council (FSSCC) paper, Recommendations for Policymakers: Mitigating AI-Powered Attacks Against Identity and Authentication.

These include:

  1. Advancing next-generation identity proofing and verification. Existing remote identity solutions are increasingly vulnerable to AI-enabled attacks. Governments can help catalyze more resilient, user-friendly approaches that strengthen security while improving the consumer experience, such as advancing mobile driver's licenses and other verifiable digital credentials.
  2. Accelerating adoption of phishing-resistant authentication. Passwords and many common forms of multifactor authentication are no longer sufficient. Policymakers can encourage a shift toward stronger, phishing-resistant methods that reduce reliance on shared secrets and other easily compromised credentials.
  3. Promoting international coordination and interoperability. As identity and fraud challenges cross borders, the United States should work with allies to align standards and frameworks where feasible, while preserving core U.S. values around privacy and security.
  4. Expanding education and awareness. Consumers and businesses need a better understanding of emerging identity solutions and evolving threats. Public-private collaboration can help drive adoption of best practices and improve resilience across the ecosystem.

The coalition also expressed support for H.R. 7270, the Stop Identity Fraud and Identity Theft Act of 2026, introduced by Representatives Pete Sessions (R-TX) and Bill Foster (D-IL). The bipartisan bill would create a new identity fraud prevention innovation grant program aimed at catalyzing the development, deployment, and use of more resilient, interoperable solutions that Americans can use to protect and assert their identity online, and at stopping identity fraud and theft in financial services.

The U.S. has an opportunity to modernize its digital identity infrastructure in ways that enhance security, protect privacy, and improve user experience. Doing so will require coordinated action across government and industry.

Looking Ahead

The May 6 briefing made one point unmistakably clear: generative AI is not just enhancing existing fraud tactics. It is redefining them.

Without stronger, more scalable approaches to identity and authentication, both institutions and consumers will remain vulnerable. With the right policy framework, there is a path forward that can reduce fraud, strengthen trust, and support innovation.

Peyton Kelleher
