This is "The AI Economy," a weekly LinkedIn-first newsletter about AI's influence on business, work, society and tech, written by Ken Yeung. Sign up here.
Welcome back for another issue of “The AI Economy,” a newsletter exploring artificial intelligence’s impact on business, work, society and tech.
This week: the fallout from the explicit deepfakes depicting the “Anti-Hero” singer and the reckoning they triggered across the AI industry. Plus, what you need to know about investing in AI, and a roundup of news headlines you may have missed.
The Prompt
Deepfakes have existed for years, long before generative AI became popular. But alarm over synthetic media reached a new level last month when sexually explicit AI-generated images of Taylor Swift flooded X, the platform formerly known as Twitter. That someone created deepfakes of one of the most popular personalities in the world set off alarms, drawing outrage not only from Swifties and the general public but also from people in the AI and tech industry.
White House officials are worried and have asked Congress to take action. A new bipartisan bill, The DEFIANCE Act, was recently introduced in the U.S. Senate. If passed, this law would permit victims to sue if someone creates fake explicit images of them without their consent.
And in an interview, Microsoft CEO Satya Nadella called for AI guardrails to ensure safer content is produced. The company also patched its AI text-to-image generation tool to close a loophole that had allowed the deepfakes to be created.
It’s not alone in developing technology to prevent harmful images. Adobe and projects such as PhotoGuard have developed watermarking techniques to indicate when AI-generated work is being used. And startups are building data-poisoning tools to fend off AI scraping.
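Approaches differ widely here, and production systems such as Adobe's Content Credentials rely on cryptographically signed metadata rather than anything this simple. Purely as a toy illustration of the invisible-watermarking idea (not how any of these vendors actually do it), here is a minimal sketch that hides a short provenance tag in the least-significant bits of raw pixel bytes:

```python
# Toy invisible watermark: stash a provenance tag ("AI-GEN") in the
# least-significant bit of successive pixel bytes, then read it back.
# Illustrative only; real watermarks survive cropping and re-encoding.

TAG = b"AI-GEN"

def embed_tag(pixels: bytearray, tag: bytes = TAG) -> bytearray:
    """Write each bit of `tag` into the lowest bit of successive bytes."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("image too small to hold the tag")
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear the LSB, set it to the tag bit
    return out

def read_tag(pixels: bytearray, length: int = len(TAG)) -> bytes:
    """Reassemble `length` bytes from the lowest bits of the pixel data."""
    result = bytearray()
    for byte_index in range(length):
        value = 0
        for i in range(8):
            value |= (pixels[byte_index * 8 + i] & 1) << i
        result.append(value)
    return bytes(result)

if __name__ == "__main__":
    image = bytearray(range(256))   # stand-in for raw pixel data
    marked = embed_tag(image)
    print(read_tag(marked))         # prints b'AI-GEN'
```

Because only the lowest bit of each byte changes, the marked image is visually indistinguishable from the original; the hard part, which this sketch skips entirely, is making the mark robust against compression, cropping, and deliberate removal.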
But will legislation be strong enough to curb deepfakes? Lawmakers in Hawaii, South Dakota, Massachusetts, Oklahoma, Nebraska, Indiana, and Wyoming have introduced bills aimed at banning such AI-generated images. Additional protection will surely be needed from both our civic and tech leaders.
Just how seriously should we be taking the Swift deepfakes? Journalists with 404 Media, who have been reporting on this issue perhaps longer than most, offered this assessment: “The Taylor Swift images, nonconsensual images of other celebrities, and nonconsensual images of non-public people are not going to stop until something far worse happens.”
This week, there were additional stories about celebrity deepfakes and AI scams. YouTube removed over a thousand videos connected to an AI-driven ad campaign that used celebrity deepfakes, and these videos collectively garnered almost 200 million views. And NBC News reported on videos with fake news content about Black celebrities being posted on YouTube.
Additionally, I’m reminded of the “Treehouse of Horror XIX” episode of “The Simpsons,” in which Homer Simpson is tasked with eliminating celebrities on behalf of ad men. His task finished, deepfakes of his victims appear on billboards and in TV commercials.
Some of this is humorous (remember the Balenciaga Pope Francis deepfake?), but there are real-world consequences. A fake Biden robocall targeted New Hampshire voters ahead of the state’s presidential primary, trying to convince them not to vote. And remember the deepfake of former President Barack Obama in 2018?
The spread of deepfakes risks fueling misinformation and jeopardizing our financial and personal security. It’s crucial to remember that 2024 is a major election year in the U.S. and other countries, so there’s a real possibility of AI being used to influence voters or deter them from voting.
Businesses aren’t immune either. It’s prudent for companies to plan how they would respond if they’re targeted.
In the end, the Taylor Swift deepfakes could be dismissed as just another incident. But the heightened media attention surrounding the victimization of the Grammy-winning singer brings this issue front and center like never before. Ideally, it will push people in Washington, D.C. and Silicon Valley to actively pursue solutions that prevent such deepfakes from being created in the future.
🚀 Seeking captivating stories for “The AI Economy” newsletter! If you’re immersed in AI – whether through building, investing, or witnessing intriguing developments – I want to hear from you! 🌐✨
Drop me a message or share your insights in the comments below.
Ready to share your expertise? I’m also conducting interviews for the newsletter – connect with me to be featured!
A Closer Look
Curious about the AI playing field and its investment potential? Journalist-turned-venture capitalist and AI startup founder Ben Parr published a lengthy presentation highlighting investment trends in the space. It aggregates data from across 2023 to give founders and investors a clearer picture of where money is flowing in AI.
Among the topics he covers in this 110+ page slide deck: Who’s getting funded in AI and why, the effect on job displacement, how AI impacts the global economy, AI technologies and trends worth watching, and Parr’s predictions for 2024.
Here are some takeaways from his presentation:
- AI startups received $50 billion in funding globally in 2023 — $19 billion went to just three companies: OpenAI, Anthropic and Inflection
- 11 AI companies made up 50% of AI funding in 2023
- 25% of VC money went to AI-related startups in 2023 — there were 5,208 AI deals in the first nine months of that year (down nearly 27% year-over-year)
- AI could boost global GDP by 7% over the next decade
- Technologies he’s watching: state space models, an alternative to the Transformer architecture currently powering most LLMs; liquid neural networks; causal AI; autonomous agents; verticalized AI for specific industries; and AI-centric hardware such as the Humane AI Pin
- Parr predicts that dealmaking will increase in 2024 due to interest rates, that more AI unicorns will emerge, and that there will be more exits this year
- He doesn’t believe AI copyright issues will be resolved in 2024, unsurprisingly, though dealmaking shouldn’t be impacted
- AI startups will need less capital after raising their Series A round, so future funding amounts should be smaller
Today’s Visual Snapshot
With generative AI’s popularity growing, it’s easy to mistake the technology for artificial intelligence as a whole. But “gen AI” and “AI” are not interchangeable: gen AI is a subset of a larger field, much as search engine advertising is one part of online marketing.
Marily Nika, the founder of the AI Product Academy and a former AI product lead at Google and Meta, published an infographic illustrating the landscape. It highlights that what we commonly know as gen AI is just a small part of AI’s vast capabilities, and that significant work is ongoing in other areas of AI, even as terms like ChatGPT dominate everyday conversation.
It’s also helpful for understanding what companies like OpenAI and Meta mean when they say they want to achieve artificial general intelligence (AGI).
Nika’s featured use cases are by no means an exhaustive list, but a selection of applicable scenarios.
Quote This
“One of the things that I feel that’s very healthy is we’re not just talking about all of the things this new technology can do, but we’re also talking about the unintended consequences. We have learned, even as a tech industry, that we have to simultaneously address both of these. How do you really amplify the benefits and dampen the unintended consequences?”
— Microsoft CEO Satya Nadella when asked if the negative news around AI gives him pause about how far the company can push the technology (NBC’s Nightly News with Lester Holt)
Neural Nuggets
🏭 Industry Insights
- A guide to becoming AI literate (Worklife)
- European Union countries find agreement on the Artificial Intelligence Act. The law would ban some AI apps, impose strict limits on high-risk use cases, and require transparency and stress-testing for the most advanced software models (Politico)
- What role does effective altruism play in AI security? (VentureBeat)
💻 Work
- The rise of the Chief Artificial Intelligence Officer: Why law firms, hospitals, insurance companies, government agencies and universities have created this new role in corporate America (The New York Times)
- People are worried that AI will take everyone’s jobs. We’ve been here before. (MIT Technology Review)
- Stop looking at Big Tech for your AI talent. Instead, source from these startups. (Signalfire)
🤖 Machine Learning
- A practical guide to using LLMs for policy-driven content moderation (Tech Policy)
- Leak reveals Mistral’s new open source AI model performing nearly as well as GPT-4 (VentureBeat)
- The economy and ethics of AI training data (Marketplace)
- Allen Institute releases open-source LLM to help researchers better understand how AI systems work (Axios)
✏️ Generative AI
- Apple working on gen AI software features that will launch “later this year,” CEO Tim Cook confirms (The Verge)
- AI chatbots aim to solve the problem of human loneliness (Axios)
- Microsoft AI engineer claims his company thwarted his attempts to expose DALL-E 3 safety problems (Geekwire)
- OpenAI now lets you invoke GPTs directly within the chat prompt (TechCrunch)
- Gen AI helping scammers develop new and sophisticated ways to target people looking for work (Axios)
- Google Maps to use gen AI to answer user queries for restaurant or shopping recommendations, starting in the U.S. (The Verge)
- Google’s Bard AI chatbot gets an image generation feature and a more capable version of Gemini Pro to take on ChatGPT (VentureBeat)
🛒 Finance and Commerce
- Amazon launches Rufus, an in-app AI-powered shopping assistant that helps shoppers find products, compare them, and get recommendations (TechCrunch)
- Y Combinator-backed Metal’s AI assistant automates the due diligence process for financial services and private equity funds (VentureBeat)
- Can AI “trading bots” transform the world of investing? (BBC)
☁️ Enterprise
- 16 examples of how open-source LLMs are used in the enterprise (VentureBeat)
- Study: Most organizations fear implementing AI, but those that do report benefits (ZDNet)
- Profile of Salesforce AI CEO Clara Shih, what her gen AI ‘a ha’ moment was, and how she explores new ideas (VentureBeat)
💼 Business and Marketing
- WPP to invest $317 million annually to support an AI strategy aimed at bolstering the advertising group’s growth (Reuters)
- YouTube deletes 1,000 videos of celebrity AI scam ads (404 Media)
💰 Funding
- Kore.ai raises $150 million for its enterprise-focused conversation AI and Gen AI products (TechCrunch)
- Humanoid robot maker Figure reportedly in talks to raise as much as $500 million in new funding led by Microsoft and OpenAI (Bloomberg)
- AI costs are soaring to a point where some startups are now considering selling (The Information)
- Two Silicon Valley investors dish on the risks and rewards of funding AI tech (TechCrunch)
⚖️ Copyright and Regulatory Issues
- Meta claimed copyright protection in trying to get a version of its Llama AI model removed from GitHub, but argues against others using similar tactics (Business Insider)
💥 Disruption and Misinformation
- The New York Times, which is currently suing OpenAI over alleged copyright infringement, says it’s building a team to explore AI in the newsroom, but states journalists will still write, edit and report the news (The Verge)
- Fake news YouTube creators using AI-generated media to flood platform with disinformation about Black celebrities (NBC News)
- Law enforcement concerned about flood of child sex abuse images generated by AI (The New York Times)
- Machine learning and molecular image recognition are helping pharmaceutical companies speed drug discovery, but how effective are the resulting medicines? (Bloomberg Business)
🔎 Opinions and Research
- Former Rep. Will Hurd says he was “freaked out” by an OpenAI briefing, calls for guard rails to be put in place to ensure AGI is “a force for good” (Will Hurd/Politico)
- OpenAI: GPT-4 provides “at most a mild uplift” in the creation of biological threats (VentureBeat)
- Forget the Turing Test. AI needs to pass the Summer Camp Test before it can take over the world (Kathy Pham/Fortune)
- The cloud industry is splitting into two: AI and everything else (Business Insider)
- Drupal creator Dries Buytaert argues websites are still relevant in the gen AI era (The New Stack)
- The Cult of AI (Rolling Stone)
- AI is Powered by GPUs. It’s Time to Understand What They Are. (Technormal)
🎧 Podcasts
- How Kevin Leneway, a software engineer with Pioneer Square Labs, uses AI beyond the workplace (Geekwire Podcast)
End Output
I hope you enjoyed diving into the latest articles in “The AI Economy”!
I’m eager to hear your thoughts on this edition. What struck a chord with you, and what left you scratching your head? Leave a comment or shoot me a message on LinkedIn with your feedback — it’s the secret sauce that makes this journey worthwhile.
Missed any articles this week? I know staying up-to-date on all the AI news can feel overwhelming. Fret not; I’m curating the big stories in my Flipboard Magazine, “The AI Economy.”
Connect with me on LinkedIn and check out my blog to read more insights and thoughts on business and technology.
Got a story you think would be a great fit for “The AI Economy”? Awesome! Shoot me a message – I’m all ears for your pitches. Let’s chat, share ideas, and better understand the AI landscape together!
Thanks for reading and be sure to subscribe to receive future editions.
Until next week, stay curious!
Subscribe to “The AI Economy”
New issues published on Fridays, exclusively on LinkedIn