Celebrities and Artists Slam AI Training, Calling It a ‘Major Threat’ to Livelihoods

Actors, artists, and authors demand better protection from AI startups that use unlicensed work to train models. Plus, Apple Intelligence is almost here!
"The AI Economy," a newsletter exploring AI's impact on business, work, society and tech.
This is "The AI Economy," a weekly LinkedIn-first newsletter about AI's influence on business, work, society and tech and written by Ken Yeung. Sign up here.

IN THIS ISSUE: Those in the arts, media, and entertainment industries are among the most vocal critics of AI, sounding the alarm and calling for stronger protections for their work. Plus, iPhone users will finally get AI on their devices when the first set of Apple Intelligence features is released next week; OpenAI may soon release its next flagship model in December; and be sure to check out this week’s roundup of AI news you may have missed.

The Prompt

The SAG-AFTRA strike in 2023 was only the start of celebrities highlighting the dangers of artificial intelligence. Last November, the trade union approved a contract that established guardrails around AI, but that hasn’t stopped actors and creative professionals from warning about the technology’s risk to their livelihoods.

“The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.”

More than 10,000 artists signed a statement this week decrying AI companies’ unlicensed use of their work as a “major, unjust threat” to their professions. Signatories include actor Julianne Moore, Radiohead singer Thom Yorke, Abba’s Björn Ulvaeus, and comedian Kate McKinnon. The American Federation of Musicians, SAG-AFTRA, the European Writers’ Council, and Universal Music Group also support the statement.

The petition’s organizer, Ed Newton-Rex, a British composer and the former vice president of audio at Stability AI, told The Guardian:

“There are three key resources that generative AI companies need to build AI models: people, compute, and data. They spend vast sums on the first two—sometimes a million dollars per engineer and up to a billion dollars per model. But they expect to take the third—training data—for free.”

Actor Joseph Gordon-Levitt has also joined the growing chorus of AI critics. The “Inception” star spoke at the WSJ Tech Live conference, saying what AI companies are doing is the equivalent of sleight of hand—it “makes you ignore the fact that these were created by humans.” He called for all licensing deals to be “renegotiated in light of this new technology.”

Billie Eilish, Kacey Musgraves, J Balvin, Ja Rule, Jon Bon Jovi, the Jonas Brothers, Katy Perry, Miranda Lambert, and hundreds of other artists have previously voiced concern about AI. Nicolas Cage has also urged young actors to protect themselves from the technology. And let’s not forget the Recording Industry Association of America (RIAA) suing two AI startups this summer over copyright infringement.

These are just the latest displays of opposition from the entertainment industry as AI continues to proliferate. However, not every actor and artist is against the technology: Awkwafina, Dame Judi Dench, John Cena, Keegan-Michael Key, and Kristen Bell have lent their voices to Meta’s AI assistant. Last year, the musician Grimes invited fans to make songs using an AI-generated version of her voice. And soccer superstar David Beckham and actor Bruce Willis have toyed with deepfake technology.

As new contracts are negotiated, artists are asking their unions to codify protections in future deals. This week, SAG-AFTRA announced that 49 companies, covering more than 120 games, have agreed to AI protections.

Companies continue to push the boundaries of AI with new models and applications, which will likely draw more criticism from the creative industry. And it’s not just voice assistants: text-to-audio and text-to-video generation apps are drawing scrutiny, too. More legal action may follow until a framework is established that safeguards artists’ rights while giving tech companies access to the data they need to train their models and remain competitive.


Apple Intelligence Is Coming Next Week

Generative AI is coming to the iPhone, iPad, and Mac with the public release of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1. Introduced in June, Apple Intelligence brings a revamped Siri, Genmoji, Visual Intelligence, Image Wand, and other generative AI features. However, some features won’t arrive until 2025, and not every iPhone, iPad, or Mac supports Apple Intelligence.

To use it, you’ll need to have:

  • An iPhone 15 Pro or Pro Max (or newer)
  • An iPad with an A17 Pro or M1 chip or later
  • A Mac with an M1 chip or later

If you’re eager to try more of Apple Intelligence on your iPhone, you can install the iOS 18.2 beta, bugs and all. Keep in mind that the official release next week will include only a limited set of features; the rest of the capabilities previewed in 18.2 will officially arrive later.

Apple Intelligence’s debut marks Apple’s entry into the generative AI space and brings sophisticated AI models and features to more edge devices. Yet while a growing number of phones and laptops now ship with generative AI built in, many consumers apparently remain unimpressed by what it can do.



OpenAI May Release Its Next Flagship Model in December—Or Not?

Sources tell The Verge that OpenAI is getting ready to release Orion, its next model, in two months. If accurate, it would arrive on the second anniversary of ChatGPT, though Orion reportedly won’t initially be available through the popular AI chatbot. Instead, the company could first make it available to select partners, allowing them to build their own products and features.

The report is also unclear about whether Orion is considered to be GPT-5.

However, hours after the news broke, OpenAI attempted to refute The Verge’s reporting. On X, chief executive Sam Altman replied to one of the reporters that the article was “fake news out of control.” As my former colleague Carl Franzen outlines, Altman’s comment was vague and not exactly a “direct denial of the claims”:

He didn’t write “No” or “this is false,” much less describe which part of the detailed article is wrong: is OpenAI not working on a new frontier model called Orion? That would contradict prior reporting from outlets including The Information that it does have such an effort internally — which to my knowledge, OpenAI never directly denied. Is it not planning to release later this year? But it is clearly an attempt to push back on the reporting as it stands.

Orion would be OpenAI’s latest model, arriving several months after the introduction of its o1 reasoning model series (code-named “Strawberry”). The company may be using o1 to generate synthetic data to help train Orion.

Updated as of Oct. 25 at 10:37 a.m. PT: OpenAI officially denied plans to release Orion this year. In a statement to TechCrunch, the company revealed it plans “to release a lot of other great technology.”


Today’s Visual Snapshot

23 AI startups that have raised over $1 billion in venture funding. Source: Crunchbase News

Venture capitalists are pouring money into artificial intelligence startups. According to Crunchbase, at least 23 private AI companies have each raised more than $1 billion over the past couple of years, and five of them have raised over $6 billion.

Underscoring how hot the AI industry is right now, VCs have reportedly invested $3.9 billion in generative AI startups across 206 deals, excluding OpenAI.


Quote This

“The incredible progress in AI over the past five years can be summarized in one word: Scale. Yes, there have been uplink advances, but the frontier models of today are still based on the same transformer architecture that was introduced in 2017. The main difference is the scale of the data and the compute that goes into it.”

OpenAI lead research scientist Noam Brown at the TED AI conference in San Francisco, where he spoke about the future of AI and how OpenAI’s o1 model may transform industries through strategic reasoning, advanced coding, and scientific research (VentureBeat)


This Week’s AI News

🏭 AI Trends and Industry Impacts

🤖 AI Models and Technologies

✏️ Generative AI and Content Creation

💰 Funding and Investments

☁️ Enterprise AI Solutions

⚙️ Hardware, Robotics, and Autonomous Systems

🔬 Science and Breakthroughs

💼 Business, Marketing, Media and Consumer Applications

⚖️ Legal, Regulatory, and Ethical Issues

💥 Disruption, Misinformation, and Risks

🔎 Opinions, Analysis, and Editorials


End Output

Thanks for reading. Be sure to subscribe so you don’t miss any future issues of this newsletter.

Did you miss any AI articles this week? Fret not; I’m curating the big stories in my Flipboard Magazine, “The AI Economy.”

Follow my Flipboard Magazine for all the latest AI news I curate for “The AI Economy” newsletter.

Connect with me on LinkedIn and check out my blog to read more insights and thoughts on business and technology. 

Do you have a story you think would be a great fit for “The AI Economy”? Awesome! Shoot me a message – I’m all ears!

Until next time, stay curious!

Subscribe to “The AI Economy”

New issues published on Fridays, exclusively on LinkedIn
