This is "The AI Economy," a weekly LinkedIn-first newsletter about AI's influence on business, work, society, and technology, written by Ken Yeung. Sign up here.
Believe it or not, this issue of “The AI Economy” is not about Salesforce. Instead, we’ll look at how California lawmakers are trying to legislate AI and whether there’s been any progress. Plus, the week’s roundup of AI news you may have missed.
The Prompt
If there’s one thing most people agree on, it’s that artificial intelligence needs regulating. How much and in what way, however, remains hotly debated. European Union lawmakers appear more willing than their U.S. counterparts to enact safeguards. And while Congress remains at an impasse on regulating AI, multiple states have taken it upon themselves to pass laws designed to curb the technology’s perceived harmful effects.
In California, Silicon Valley’s home state, legislators have spent the year working on multiple AI bills. The Legislature is believed to have considered approximately 15 this session, and more than a dozen have advanced to Governor Gavin Newsom’s desk, covering everything from regulating AI and testing for threats to critical infrastructure to curbing algorithms aimed at children and limiting deepfake usage. My friend and former colleague Khari Johnson has an excellent overview published in the nonprofit news publication CalMatters.
California State Senator Scott Wiener (D-San Francisco) is perhaps the most vocal proponent of AI regulation. He introduced Senate Bill 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which has received the most attention and is one of the bills awaiting the governor’s signature. Wiener was recently recognized in this year’s Time100 AI edition. If enacted into law, SB 1047 would mandate safety measures for developers of future AI models, specifically those that cost more than $100 million to develop or require a “defined amount of computing power.” Developers would also be required to install a kill switch to turn off the AI models in an emergency. California’s Attorney General would have the power to sue for non-compliance.
SB 1047 received massive backlash from not only Silicon Valley but also Wiener’s fellow Democrats, including some in Congress. Speaker Emerita Nancy Pelosi called the bill “well-intentioned but ill-informed.” Big Tech firms Alphabet and Meta have come out in opposition, claiming Wiener’s legislation may harm research efforts. OpenAI argued its passage could damage California’s standing with the startup community and drive entrepreneurs to leave. “Godmother of AI” Fei-Fei Li says SB 1047, though well-intended, will harm the U.S. AI ecosystem.
One notable proponent of SB 1047: Elon Musk.
Multiple amendments were attached to SB 1047, enough to garner some support from Anthropic, though not an endorsement. In a letter to Wiener (PDF), company CEO Dario Amodei wrote that the amended bill’s benefits “likely outweigh its costs” while also warning that there were still some aspects “which seem concerning or ambiguous to us.”
The California Legislature passed SB 1047, but it’s far from smooth sailing to becoming law. Newsom has not indicated whether he will sign or veto the bill, saying only, “This measure will be evaluated on its merits.” The Democratic governor has 30 days after the Legislature adjourns its session to take action, a deadline of September 30. If he does nothing by then, SB 1047 automatically becomes law.
Other AI bills are also awaiting Newsom’s signature. As Johnson notes in his coverage, the governor must also decide on SB 942, which would require companies to supply free public AI detection tools; SB 896, which would force government agencies to evaluate the risks of generative AI and disclose when it’s being used; and bills that would make gen AI-created child pornography a crime, require social media apps to disable algorithmic curation of content for users under 18, punish those who create or publish deceptive AI-made content, require large online platforms to remove or label deepfakes within 72 hours of being reported, require political campaigns to disclose AI usage in advertising, and more.
The state has certainly been active in debating regulation, but passing laws won’t end the conversation. Even with California’s laws on the books, AI providers will still have to navigate a fragmented system across the remaining 49 states. And it’s unclear when, or if, Congress will take up the debate itself and pass federal legislation most of us can be content with.
Suggested Reading:
- One of California’s most influential unions weighs in on AI safety bill (The Verge)
- Will California flip the AI industry on its head? (The Verge)
Today’s Visual Snapshot
Investment firm Blitzscaling Ventures published a new infographic outlining four layers of the AI agent ecosystem. Its release is timely, as the enterprise tech world will discuss this subject next week at Dreamforce. Considered a work in progress that will be updated periodically, it provides a good snapshot of how these bots are evolving and their impact on work and society.
Quote This
And in the last few years, they’ve definitely been sold this vision of Copilots. But for the vast majority of these Copilots, and for these customers that are implementing Copilots, they failed to deliver customer value. Not only did they fail to deliver customer value, they really permeated the trust of our customers. They found data leaking. They found the technology was not ready for prime time.
— Salesforce CEO Marc Benioff, explaining at a press conference introducing his firm’s Agentforce platform why organizations aren’t pleased with Microsoft Copilot
This Week’s AI News
🏭 Industry Insights
- 44 of the most promising AI startups of 2024, according to top VCs (Business Insider)
- Forget jobs. AI is coming for your water (Context)
- Facebook admits to scraping every Australian adult user’s public photos and posts to train AI, without giving an opt-out option (ABC)
- Meta reignites plans to train AI using UK users’ public Facebook and Instagram posts (TechCrunch)
- iPhone 16’s unfinished Apple Intelligence is useful except when it’s bonkers (The Washington Post)
- Apple partners with third parties, like Google on iPhone 16’s visual search (TechCrunch)
🤖 General AI and Machine Learning
- OpenAI releases o1, its first model with “reasoning” abilities (The Verge)
- Is OpenAI’s new “o1” model the big step forward we’ve been waiting for? (Big Technology)
- What OpenAI’s new o1-preview and o1-mini models mean for developers (VentureBeat)
- Mistral releases Pixtral 12B, its first multimodal model (TechCrunch)
- Google debuts DataGemma, a pair of open-source, instruction-tuned models designed to reduce hallucinations around statistical data (VentureBeat)
- AI2’s new model aims to be open and powerful yet cost-effective (VentureBeat)
- LLaMA-Omni: The open-source AI that’s giving Siri and Alexa a run for their money (VentureBeat)
✏️ Generative AI
- OpenAI’s COO says ChatGPT has more than 11 million paying subscribers (The Information)
- Facebook and Instagram are making AI labels less prominent on edited content (The Verge)
- Adobe says video generation is coming to Firefly this year (TechCrunch)
🛒 Retail and Commerce
- Amazon starts testing ads in its Rufus chatbot (TechCrunch)
☁️ Enterprise
- AI takes center stage: The message Salesforce must deliver at Dreamforce (My Two Cents)
- Everything you need to know about Salesforce’s Agentforce (My Two Cents)
- ServiceNow introduces a library of enterprise AI agents you can customize to fit your workflow (VentureBeat)
- Is Anthropic’s new “Workspaces” feature the future of enterprise AI management (VentureBeat)
- Cavela is using AI to automate manufacturing and e-commerce. Here’s how the startup raised $2 million without using a pitch deck. (Business Insider)
⚙️ Hardware and Robotics
- Face-to-face with Figure’s new humanoid robot (TechCrunch)
- Google DeepMind teaches a robot to autonomously tie its shoes and fix fellow robots (TechCrunch)
💼 Business and Marketing
- The godmother of AI, Fei-Fei Li, has a new startup to teach AI systems deep knowledge of physical reality (Wired)
- Sergey Brin says he’s working on AI at Google “pretty much every day” (TechCrunch)
- Microsoft’s hypocrisy on AI: Can AI really enrich fossil-fuel companies and fight climate change at the same time? (The Atlantic)
- Mastercard buys Recorded Future, which uses AI-powered analytics to identify potential threats, for $2.65 billion (Reuters)
- How a viral AI image catapulted a Mexican startup to a major Adidas contract (TechCrunch)
- UBS has an AI tool that can scan 300,000 firms in 20 seconds (Bloomberg)
📺 Media and Entertainment
- Oprah just had an AI special with Sam Altman and Bill Gates—here are the highlights (TechCrunch)
- “If journalism is going up in smoke, I might as well get high off the fumes”: Confessions of a chatbot helper (The Guardian)
- Amazon is allowing Audible narrators to clone themselves with AI (The Verge)
- Actor James Earl Jones signed paperwork years before his passing to voice Darth Vader using AI (Futurism)
- Hawaii’s The Garden Island newspaper discovered using janky AI newscasters instead of human journalists (404 Media)
💰 Funding
- OpenAI fundraising set to vault startup’s valuation to $150 billion (Bloomberg)
- New bidding war for AI’s biggest brains (Axios)
- AI startups struggle to keep up with Big Tech’s spending spree (Bloomberg)
⚖️ Copyright and Regulatory Issues
- Nvidia, OpenAI, Anthropic and Google executives meet with the White House to talk AI energy and data centers (CNBC)
- White House extracts voluntary commitments from Adobe, Cohere, Microsoft, Anthropic, OpenAI and Common Crawl to combat deepfake nudes (TechCrunch)
💥 Disruption and Misinformation
- AI chatbots might be better at swaying conspiracy theorists than humans (Ars Technica)
- AP-NORC/USAFacts poll: Most Americans don’t trust AI-powered election information (Associated Press)
- Google’s AI will help decide whether unemployed workers get benefits (Gizmodo)
🔎 Opinions, Analysis and Research
- Why generalists own the future: In the age of AI, it’s better to know a little about a lot than a lot about a little (Dan Shipper/Chain of Thought/Every)
- Notes on OpenAI’s new o1 chain-of-thought models (Simon Willison’s Weblog)
Thanks for reading. Be sure to subscribe so you don’t miss any future issues of this newsletter.
Did you miss any AI articles this week? Fret not; I’m curating the big stories in my Flipboard Magazine, “The AI Economy.”
Connect with me on LinkedIn and check out my blog to read more insights and thoughts on business and technology.
Do you have a story you think would be a great fit for “The AI Economy”? Awesome! Shoot me a message – I’m all ears!
Until next time, stay curious!
Subscribe to “The AI Economy”
New issues published on Fridays, exclusively on LinkedIn