Players are always ready to play, and haters, well, they’re always ready to hate. But when it comes to the shocking, distasteful, and viral AI-generated deepfakes of Taylor Swift, which were enough to send Elon Musk scrambling to hire an additional 100 content moderators and Microsoft to pledge more safeguards on its Designer AI app, I have a message for AI companies: You can’t just “shake it off.”
I understand your desire to shake it off, to keep moving forward. You claim you can’t stop, you won’t stop. It’s as if you have this melody in your mind reassuring you that “it’s gonna be alright.”
Echoing Taylor Swift, ‘now we got problems’
Indeed, Marc Andreessen’s “Techno-Optimist Manifesto” proclaimed, “Technology is the glory of human ambition and achievement, the spearhead of progress, and the realization of our potential.” OpenAI’s mission is to develop artificial general intelligence (AGI) that benefits all of humanity. Anthropic is so confident in its ability to build reliable, interpretable, and steerable AI systems that it’s already building them. And just yesterday, Meta’s chief AI scientist Yann LeCun reminded us that the “world didn’t end” five years after GPT-2 was deemed too dangerous to release. “In fact, nothing bad happened,” he posted on X.
But Yann, I beg to differ — yes, bad things are happening with AI. That doesn’t negate the good things that are also happening, or suggest that overall optimism isn’t justified when we consider the broad arc of technological evolution.
However, it’s undeniable — bad things are happening, and perhaps the “normies” understand this better than most in the AI industry, as it’s their lives and livelihoods that are directly impacted by AI. It’s crucial that AI companies fully acknowledge this, in the most respectful way possible, and clarify how they are addressing these issues.
Only then, I believe, can they avoid the precipice of disillusionment I discussed back in October. Alongside the rapid pace of impressive, even awe-inspiring AI developments, AI also grapples with a host of complex challenges — from election misinformation and AI-generated porn to workforce displacement and plagiarism. AI holds immense positive potential for humanity’s future, but I don’t think companies are effectively communicating what that is.
And now, they’re clearly not doing a great job of communicating how they plan to fix what’s already broken. As Swifties know all too well, “now we got problems…you made a really big cut.”
I’m cheering for the AI anti-hero
I’m passionate about the AI beat. It’s thrilling, promising, and utterly fascinating. However, it can be draining to constantly cheer for what many perceive as a morally ambiguous anti-hero technology. Sometimes, I wish the most vocal AI leaders would take responsibility and say, “I’m the problem, it’s me, at teatime, everybody agrees, I’ll stare directly at the sun but never in the mirror.”
But they need to face the mirror: Regardless of how many well-intentioned AI researchers, executives, academics, and policymakers exist, there should be no doubt that the Taylor Swift AI deepfake scandal is just the tip of the iceberg. Millions of women and girls are at risk of being targeted with AI-generated porn. Experts predict AI will turn the 2024 election into a “hot mess.” Whether they can prove it or not, thousands of workers will blame AI for their layoffs.
Many “normies” I speak to already scoff at the term “AI.” I’m sure that’s incredibly frustrating for those who see the power and promise of AI as a beacon of hope with the potential to solve many of humanity’s greatest challenges.
But if AI companies can’t devise a way forward that doesn’t trample over the very humans they hope will use, appreciate — and not misuse — the technology? Well, if that happens — baby, now we got bad blood.