What the Taylor Swift Deepfakes Say About the Dangers of AI

How AI-generated images of Taylor Swift raised alarms about the serious risks AI poses.
This is "The AI Economy," a weekly LinkedIn-first newsletter about AI's influence on business, work, society and tech, written by Ken Yeung. Sign up here.

Welcome back for another issue of “The AI Economy,” a newsletter exploring artificial intelligence’s impact on business, work, society and tech.

This week: The fallout from the explicit deepfakes depicting the “Anti-Hero” singer, and the shift they triggered across the AI industry. Plus, what you need to know about investing in AI, and a round-up of news headlines you may have missed.

The Prompt

Deepfakes have existed for years, long before generative AI became popular. But alarm over the synthetic media reached a new level last month when sexually explicit AI-generated images of Taylor Swift flooded X, the platform formerly known as Twitter. That someone created deepfakes of one of the most popular personalities in the world set off alarms, drawing outrage not only from Swifties and the general public but also from those in the AI and tech fields.

White House officials are worried and have asked Congress to take action. A new bipartisan bill, the DEFIANCE Act, was recently introduced in the U.S. Senate. If passed, the law would allow victims to sue anyone who creates fake explicit images of them without their consent.

And in an interview, Microsoft CEO Satya Nadella called for AI guardrails to ensure that safer content is produced. The company also patched its AI text-to-image generation tool to close the loophole that had allowed the deepfakes to be created.

Microsoft isn’t alone in developing technology to prevent harmful images. Adobe has established watermarking systems that indicate when work is AI-generated, and tools like PhotoGuard aim to protect photos from AI manipulation. Meanwhile, startups are developing data-poisoning tools to fend off AI scraping.

But will legislation be strong enough to curb deepfakes? Lawmakers in Hawaii, South Dakota, Massachusetts, Oklahoma, Nebraska, Indiana and Wyoming have introduced bills aimed at banning these fake AI-generated images. Additional protection will surely be needed from both our civic and tech leaders.

Just how seriously should we be taking the Swift deepfakes? Journalists with 404 Media, who have been reporting on this issue perhaps longer than most, offered this assessment: “The Taylor Swift images, nonconsensual images of other celebrities, and nonconsensual images of non-public people are not going to stop until something far worse happens.”

This week brought additional stories about celebrity deepfakes and AI scams. YouTube removed more than a thousand videos connected to an AI-driven ad campaign that used celebrity deepfakes, videos that had collectively garnered almost 200 million views. And NBC News reported on YouTube videos pushing fake news about Black celebrities.

Additionally, I’m reminded of the “Treehouse of Horror XIX” episode of “The Simpsons,” in which Homer Simpson is tasked with eliminating celebrities on behalf of ad men. Once his job is done, deepfakes of his victims are used on billboards and in TV commercials.

It can all seem humorous (remember the Balenciaga Pope Francis deepfake?), but deepfakes carry real-world consequences. A fake Biden robocall went out to New Hampshire voters ahead of the state’s presidential primary, trying to convince them not to vote. And remember the deepfake of former President Barack Obama in 2018?

Deepfakes risk spreading misinformation and jeopardizing our financial and personal security. It’s crucial to remember that 2024 is a major election year in the U.S. and other countries, so there’s a real possibility of AI being used to influence voters or discourage them from voting.

Businesses aren’t immune either. It’s prudent for companies to think now about how they would respond if they become targets.

In the end, the Taylor Swift deepfakes could be written off as just another incident. But the heightened media attention surrounding the victimization of the 10-time Grammy winner puts this issue front and center like never before. Ideally, it will catalyze people in Washington, D.C. and Silicon Valley to actively pursue ways to prevent such deepfakes from being created in the future.

🚀 Seeking captivating stories for “The AI Economy” newsletter! If you’re immersed in AI – whether through building, investing, or witnessing intriguing developments – I want to hear from you! 🌐✨

Drop me a message or share your insights in the comments below.

Ready to share your expertise? I’m also conducting interviews for the newsletter – connect with me to be featured!

A Closer Look

Curious about the AI playing field and its investment potential? Journalist-turned-venture capitalist and AI startup founder Ben Parr published a lengthy presentation highlighting investment trends in the space. It aggregates data from across 2023 to give founders and investors a better idea of where money in AI is going.

Among the topics he covers in this 110+ page slide deck: Who’s getting funded in AI and why, the effect on job displacement, how AI impacts the global economy, AI technologies and trends worth watching, and Parr’s predictions for 2024.

Here are some takeaways from his presentation:

  • AI startups received $50 billion in funding globally in 2023 — $19 billion went to just three companies: OpenAI, Anthropic and Inflection
  • 11 AI companies made up 50% of AI funding in 2023
  • 25% of VC money went to AI-related startups in 2023 — there were 5,208 AI deals in the first nine months of that year (down nearly 27% year-over-year)
  • AI could increase global GDP by 7% over the next decade
  • Technologies he’s watching: state space models, an alternative to the Transformer architecture currently powering most LLMs; liquid neural networks; causal AI; autonomous agents; verticalized AI for specific industries; and AI-centric hardware such as the Humane AI Pin
  • Parr predicts dealmaking will increase in 2024 as interest rates ease, more AI unicorns will emerge and there will be more exits this year
  • He doesn’t believe AI copyright issues will be resolved in 2024, unsurprisingly, though dealmaking shouldn’t be impacted
  • AI startups will need less capital after raising their Series A round, so future funding amounts should be smaller

Today’s Visual Snapshot

With generative AI’s popularity growing, it’s easy to mistake the technology for the whole of artificial intelligence. But “gen AI” and “AI” are not interchangeable: generative AI is a subset of a larger ecosystem, much as search engine advertising is one part of the broader online marketing field.

Marily Nika, the founder of the AI Product Academy and former AI product lead at Google and Meta, published an infographic illustrating the landscape. This visual representation highlights that what we commonly know as gen AI is just a small part of the vast capabilities of AI. It’s a reminder that significant work is ongoing in other areas of AI, despite how familiar tools like ChatGPT have become in our everyday conversations.

It also helps clarify what OpenAI and Meta mean when they say they want to achieve artificial general intelligence (AGI).

Nika’s featured use cases are by no means an exhaustive list, but a selection of applicable scenarios.

Quote This

“One of the things that I feel that’s very healthy is we’re not just talking about all of the things this new technology can do, but we’re also talking about the unintended consequences. We have learned, even as a tech industry, that we have to simultaneously address both of these. How do you really amplify the benefits and dampen the unintended consequences?”

— Microsoft CEO Satya Nadella, when asked if the negative news around AI gives him pause about how far the company can push the technology (NBC Nightly News with Lester Holt)

Neural Nuggets

An AI-generated image of a robot reading a newspaper.

🏭 Industry Insights

💻 Work

🤖 Machine Learning

✏️ Generative AI

🛒 Finance and Commerce

☁️ Enterprise

💼 Business and Marketing

📺 Media and Entertainment

💰 Funding

⚖️ Copyright and Regulatory Issues

💥 Disruption and Misinformation

🔎 Opinions and Research

🎧 Podcasts

End Output

I hope you enjoyed diving into the latest articles on “The AI Economy!”

I’m eager to hear your thoughts on this edition. What struck a chord with you, and what left you scratching your head? Leave a comment or shoot me a message on LinkedIn with your feedback — it’s the secret sauce that makes this journey worthwhile.

Missed any articles this week? I know staying up-to-date on all the AI news can feel overwhelming. Fret not; I’m curating the big stories in my Flipboard Magazine, “The AI Economy.”

Follow my Flipboard Magazine for all the latest AI news I curate for “The AI Economy” newsletter.

Connect with me on LinkedIn and check out my blog to read more insights and thoughts on business and technology.

Got a story you think would be a great fit for “The AI Economy”? Awesome! Shoot me a message – I’m all ears for your pitches. Let’s chat, share ideas, and better understand the AI landscape together!

Thanks for reading, and be sure to subscribe to receive future editions.

Until next week, stay curious!

Subscribe to “The AI Economy”

New issues published on Fridays, exclusively on LinkedIn
