AI-Powered Impersonation Is Emerging as a Top Cyberthreat for 2026

Consumers are increasingly worried about identity theft, with 68% ranking it as their top fraud concern.


Macy is a writer on the AI Team. She covers how AI is changing daily life and how to make the most of it. This includes writing about consumer AI products and their real-world impact, from breakthrough tools reshaping daily life to the intimate ways people interact with AI technology day-to-day. Macy is a North Carolina native who graduated from UNC-Chapel Hill with a BA in English and a second BA in Journalism. You can reach her at mmeyer@cnet.com.

Generative AI is expected to supercharge online scams and impersonation attacks in 2026, pushing fraud ahead of ransomware as the top cyber-risk for businesses and consumers alike, according to a new warning from the World Economic Forum.

Nearly three-quarters (73%) of CEOs surveyed by the WEF said they or someone in their professional or personal network had been affected by cyber-enabled fraud in 2025. That shift has moved executives' concerns away from ransomware, which dominated corporate threat lists just a year ago, and toward AI-driven scams that are easier to launch and harder to detect.

"The challenge for leaders is no longer just understanding the threat but acting collectively to stay ahead of it," Jeremy Jurgens, managing director at the WEF, said. "Building meaningful cyber resilience will require coordinated action across governments, businesses and technology providers to protect trust and stability in an increasingly AI-driven world."


Consumers are feeling the impact as well. A recent Experian report found 68% of people now see identity theft as their top concern -- ahead of stolen credit card data. And that anxiety is backed up by federal data. The US Federal Trade Commission reported $12.5 billion in consumer fraud losses in 2024, a 25% year-over-year increase.

Experts say generative AI is helping fuel that growth by making scams easier to create and more convincing. The WEF report found 62% of executives had encountered phishing attempts, including voice- and text-based scams, while 37% reported invoice or payment fraud. Nearly a third (32%) said they had seen identity theft cases, too. 

Increased use of AI tools is lowering the barriers for cybercriminals while raising the sophistication of attacks. Scammers can now quickly localize messages, clone voices and launch realistic impersonation attempts that are harder for victims to spot. The WEF also warns that generative AI is amplifying digital safety risks for groups like children and women, who are increasingly targeted through impersonation and synthetic image abuse.

At the same time, many businesses and organizations lack staff and expertise to defend against cyberthreats. While AI could help, the report cautions that poorly implemented tools can introduce new risks.

It's not just businesses facing more threats. In its May 2025 Scamplified report, the Consumer Federation of America warned that tools that generate highly personalized phishing emails, deepfake voices and realistic-looking alerts are stripping away many of the traditional red flags we once relied on to spot a scam.

Read more: Meet the AI Fraud Fighters: A Deepfake Granny, Digital Bots and a YouTube Star

For consumers, the advice for safeguarding your privacy is straightforward but increasingly important.

The CFA urged consumers to slow down and question unexpected calls, texts or emails that create a sense of urgency or pressure to act quickly. It advised against sharing personal, financial or authentication information in response to unsolicited outreach. It also recommended independently verifying requests by looking up official phone numbers or websites, rather than trusting caller ID, links or contact details provided in a message. You should also consider reporting suspected scams to authorities, such as through the Federal Trade Commission's ReportFraud.ftc.gov website.

Generally, experts continue to recommend staying alert for suspicious messages, using strong, unique passwords, enabling multifactor authentication and keeping up with basic online security measures as AI-driven scams evolve in 2026 and beyond.
