AI's Biggest Risk Is the Story We're Not Being Told
In one of the opening shots of Un Chien Andalou, a 1929 French film co-written by Salvador Dalí, often cited as one of the first surrealist films, a young woman stares directly at the camera as a razor blade slices across her eye. 

OK, she didn't actually have her eye slit open, thanks to movie magic and all. But the movie uses surrealism as a powerful new way of seeing and interpreting the world. It's supposed to shock us out of passive viewing and spectatorship, and take us beyond traditional perception. 

Last Thursday, as I sat in a lecture hall at the Salvador Dalí Museum in St. Petersburg, Florida, listening to a talk about emerging technology and innovation in 2026, I hoped for a discussion about similarly revolutionary modern innovations. 

But far too often, when we talk about AI, we don't confront this potentially revolutionary technology with our eyes wide open. Instead, whether it's in small lectures, social media posts or Super Bowl commercials, we get a one-sided marketing pitch that masks the real risks and concerns surrounding AI.


Based on the audience's questions during the Q&A, this was likely the first real introduction to generative and physical AI for many of them. The group absorbed everything uncritically, nodding along and blooming with excitement as the lecture painted a picture of a future transformed entirely for the better. 

In one particularly grating instance, we were shown a video of LG's laundry-folding robot that debuted last month at the CES 2026 trade show in Las Vegas. Having seen the robot for myself, I knew how slow it was at folding just one uniform-sized T-shirt. A robot that can actually assist with home chores is years away. 

"Who wants this robot?" the speaker shouted, and hands raised all over the room. 

Was there any mention of the technology's limitations, like the fact that it needs human help to reach into the hamper? Was there any mention of the prohibitive cost? Of course not. The crowd left that room with their understanding of AI shaped by someone who had carefully avoided mentioning any of the technology's downsides. 

This is a problem. 

The people with platforms -- whether they're tech experts, museum lecturers or influencers with millions of followers -- have a responsibility to tell the truth about AI. Not just the exciting parts. Not just the parts that make for good marketing. All of it. 

When public figures highlight AI's capabilities, they gloss over its risks: the devastating environmental impact, the proclivity of chatbots to hallucinate and make things up, the concerning ways AI use affects memory skills, and the rising incidents of AI-induced psychosis and suicide.

These dangers are conveniently left out of the conversations that shape public perception, conversations steered in a way that serves the interests of a select few, not the world's.

We've seen this dangerous pattern before.

Since a 2018 US Supreme Court decision allowed states to legalize sports betting, celebrities and influencers have lined up to promote betting apps, pocketing massive checks while their followers face rising rates of gambling addiction and financial ruin. 

The 2021 crypto boom also brought a parade of celebrities hawking digital coins, many of which later crashed, leaving regular people holding worthless assets. Kim Kardashian settled with the SEC for $1.26 million in penalties for promoting a crypto token without disclosing that she was paid to do so. Matt Damon told us "fortune favors the brave" in a February 2022 Crypto.com Super Bowl ad that aged terribly in the wake of that year's crypto crash. 

We're watching the same story unfold with AI. Household-name actors are jumping into Super Bowl commercials to champion AI companies before an audience of 100 million people. Influencers are taking money from AI companies to promote tools they probably don't use and likely don't understand, to audiences who have grown to trust them.

The difference is that AI's risks go beyond financial loss. We're talking about job displacement, the erosion of creative industries, the spread of misinformation at scale, deepfakes that can destroy reputations and, as mentioned earlier, the environmental cost of running these massive models. 

This is why I appreciate artists like Guillermo del Toro, who speaks realistically about AI. When models that referenced his distinctive visual style went viral, he didn't mince words about generative AI trained on artists' work without their permission, compensation or respect for copyright laws. He called it theft.

Other artists and public figures have been similarly direct about the threat AI poses to their livelihoods and craft. Meanwhile, tech executives and developers dismiss these concerns as the latest wave of Luddism. 

While I generally believe that famous people are not role models to follow or trust, many people do. They assume that if someone with credentials or celebrity is enthusiastically promoting something, then it must be safe, beneficial and inevitable. That public trust comes with responsibility. 

If you insist on talking about AI in public, whether you're taking $600,000 to promote Microsoft Copilot to millions on social media or, if you're the NFL, partnering with an AI company on a commercial airing during the biggest sporting event in America, you have an obligation to present the full picture, especially to audiences who are just learning about the technology.

Speak about the limitations. Talk about the jobs that are being eliminated. Mention the artists whose work is being scraped without consent to train these models. Acknowledge the staggering energy consumption. Explain how easy it is to generate convincing misinformation. Disclose when you're paid by an AI company to say what you're saying. 

This doesn't mean you can't discuss the possibilities and benefits of AI. It has real potential to accelerate drug discovery, improve disease outcomes and solve complex problems. But framing it as pure progress and innovation -- as an unalloyed good -- is ignorant or deceptive. 

Like the surrealist work that emerged after World War I, AI is revolutionary, provocative and disruptive. Both challenge the ways we see the world.

But surrealism was intentional and deeply human, rooted in our minds and expressions and emotions. Generative AI is machine-driven pattern recognition. Surrealism was created to defy conventions and reach the ultimate truth and authenticity. 

We still deserve the truth now. The conversation around AI is happening, whether we like it or not, and it's happening fast. The least we can ask is that the people leading that conversation tell us the facts of the matter. 
