This is "The AI Economy," a weekly LinkedIn-first newsletter about AI's influence on business, work, society, and tech, written by Ken Yeung. Sign up here.
Artificial intelligence (AI) is baked into most, if not all, of the social networks we use today. Some use cases are evident, especially for consumer-facing platforms such as Facebook. But what about those sites tailored more towards business and working professionals? I’m looking at you, LinkedIn!
A professional social network doesn’t need an image-generation tool, does it? How will users leverage AI to find employment, grow their network, or close a sale?
In a post, LinkedIn’s engineering team explained their thought process regarding using AI, detailing what worked and what didn’t and previewing what they’re working on next. Read on to learn more.
But first, a warm welcome to the more than 1,000 subscribers to “The AI Economy!” I’m honored you’ve signed up — if you haven’t, you can do that right here — and I hope you find this weekly newsletter rewarding. Reach out if there’s anything you’d like to see more or less coverage of, or if you have a great story I might want to share here. 🤗
The Prompt
LinkedIn first launched its AI-powered experience for premium subscribers in November 2023. The idea was to create a copilot to help you stay ahead of your professional life, whether that involved changing careers, building a business, learning a new skill or developing your voice.
Among the capabilities promised:
- The ability to extract salient information from your feed’s posts and tell you what you need to take action on
- Tailored AI-powered profile writing suggestions and message recommendations to help engage with hiring managers
- A way for job seekers to let potential employers know there’s a role at their company they really want
This may seem like an easy implementation, but engineers Juan Pablo Bottaro and Karthik Ramgopal said it was anything but. “We tried many ideas which didn’t really click,” they admitted. Eventually, they hit their eureka moment, which resulted in the feature set Premium subscribers have today.
How Does It Work?
Bottaro and Ramgopal explain what happens behind the scenes when you tap one of the AI-powered questions that might appear next to a post in your LinkedIn feed. These questions are intended to help the system better assess what interests you about that post’s topic.
- It starts with the right agent: LinkedIn examines your query and chooses which of its AI agents can best handle it. Doing so provides tailored responses based on your interests and objectives.
- Next, intelligence gathering: Using a combination of internal APIs and Microsoft’s Bing search engine, LinkedIn’s AI agent will crawl through the dataset to find answers to your question. “We are creating a dossier to ground our response.”
- Finally, here’s your answer: With the information in hand, the AI agent will display a response, showing data coherently and informatively. LinkedIn utilizes its internal APIs to present the information smartly, avoiding giant blocks of text and making the experience interactive.
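The three-step flow above can be sketched in miniature. Everything here is illustrative — the agent names, routing logic, and data sources are stand-ins I’ve invented, since LinkedIn’s actual system relies on LLMs and internal services for each step:

```python
def pick_agent(query: str) -> str:
    """Step 1: choose the agent best suited to the query (stubbed with keywords;
    the real system uses an LLM to route)."""
    routes = {
        "job": "job_assessment_agent",
        "summar": "post_summarization_agent",
        "interview": "interview_tips_agent",
    }
    for keyword, agent in routes.items():
        if keyword in query.lower():
            return agent
    return "general_knowledge_agent"

def gather_dossier(agent: str, query: str) -> dict:
    """Step 2: call internal APIs and web search to build a grounding
    dossier (stubbed here)."""
    return {
        "agent": agent,
        "query": query,
        "sources": [f"internal_api:{agent}", "web_search"],
    }

def respond(dossier: dict) -> str:
    """Step 3: present the grounded answer in a structured, readable way."""
    return (f"[{dossier['agent']}] Answer to '{dossier['query']}' "
            f"(grounded in {len(dossier['sources'])} sources)")

query = "Am I a good fit for this job?"
answer = respond(gather_dossier(pick_agent(query), query))
print(answer)
```

The point of the sketch is the separation of concerns: routing, grounding, and presentation are distinct stages, which is what lets LinkedIn assign each to a different team and agent.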
The Wins
LinkedIn attributes its success in building its AI-powered experience to several factors. The first was choosing Retrieval-Augmented Generation (RAG) to handle user queries, which simplified building the framework. RAG also reduces the odds that AI responses will contain hallucinations — a serious problem when you’re trying to get business done. Check out my February interview with Kyndryl’s Dennis Perpetua for more information about RAG.
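For readers unfamiliar with the pattern, here’s a toy illustration of how RAG reduces hallucinations: relevant documents are retrieved first, and the model is told to answer only from that context rather than from memory. The corpus and word-overlap retriever below are my own simplified stand-ins; production systems use embeddings and a vector index:

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved context so the model answers from it, not from memory."""
    context = "\n".join(retrieve(query, corpus))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\nAnswer using only the context above.")

corpus = [
    "Acme Corp is hiring a staff engineer in Berlin.",
    "The weather in Berlin is mild in spring.",
    "Acme Corp raised a Series B in 2023.",
]
prompt = build_prompt("Is Acme Corp hiring an engineer?", corpus)
```

Because the prompt instructs the model to stick to retrieved facts, a wrong answer becomes a retrieval problem you can measure and fix, instead of an opaque hallucination.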
Not wanting to move slowly, the team opted to split tasks across multiple AI agents focused on general knowledge, job assessment, post takeaways, and more. However, doing so created fragmentation costs and made maintaining a uniform user experience difficult.
To counter this, LinkedIn said it adopted a “simple organizational structure”: a “horizontal” engineering team handles common components and owns the holistic experience, while several “vertical” engineering teams are responsible for individual agents, such as personalized post summarization, job fit assessment, and interview tips.
The Struggles
It wasn’t all rainbows and sunshine for LinkedIn. Bottaro and Ramgopal laid out areas the company found the most difficult, namely when it came to developing guidelines, scaling annotations and handling automatic evaluations.
The first hurdle involved developing guidelines to ensure responses are “factual but also empathetic,” and consistent and detailed enough that annotators could score them uniformly.
The second obstacle involved establishing a more scalable approach to annotation as demand grows. LinkedIn’s linguist team built tools and processes that allowed it to evaluate up to 500 conversations daily, analyzing each for quality, hallucination rate, coherence, style, and whether it violated the company’s Responsible AI policy.
The last challenge is considered a work in progress and involves developing a model-based evaluation system to track metrics and allow for faster experimentation.
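The shape of such a system can be sketched briefly: an evaluator model scores each conversation on the same axes the linguists check by hand, so experiments can be compared automatically. The scoring function below is a deliberately toy stub — in practice it would be an LLM prompted with a grading rubric, and the metric names are my own placeholders:

```python
from statistics import mean

METRICS = ("quality", "hallucination_free", "coherence", "style")

def score_conversation(conversation: str) -> dict:
    """Stub evaluator: a real system would prompt a judge model with a rubric.
    Toy heuristic so the sketch runs: longer replies score slightly higher."""
    base = min(len(conversation) / 100, 1.0)
    return {metric: round(base, 2) for metric in METRICS}

def evaluate_batch(conversations: list[str]) -> dict:
    """Aggregate per-conversation scores into experiment-level metrics."""
    scores = [score_conversation(c) for c in conversations]
    return {m: round(mean(s[m] for s in scores), 2) for m in METRICS}

report = evaluate_batch([
    "Short reply.",
    "A much longer, detailed reply " * 5,
])
```

Once per-experiment numbers exist, each prompt or model change can be compared against the previous run automatically — which is exactly the faster iteration loop LinkedIn says it’s after.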
What’s Next
LinkedIn acknowledged it’s only getting started, especially since this AI-powered experience is still limited to Premium subscribers. Bottaro and Ramgopal reveal some of the things the company is working on, including:
- Improved automatic evaluation that enables faster iterations
- Developing a skill registry to dynamically discover and implement AI agents and APIs across LinkedIn’s gen AI products
- Utilizing in-house, fine-tuned models to handle simpler tasks
- Producing predictable deployment infrastructure for large language models
- Better budgeting of token usage
▶️ Read LinkedIn’s piece on building a generative AI product
Today’s Visual Snapshot
Cloud security provider Zscaler ThreatLabz has published its 2024 AI Security Report (PDF) detailing the rise in AI-driven phishing attacks. The above chart illustrates how malicious actors might use artificial intelligence to drive a ransomware attack, starting with reconnaissance, in which gen AI is used to identify vulnerabilities in exposed assets (e.g., “create a table showing the known vulnerabilities for all firewalls and VPNs in this organization”).
From there, hackers might generate polymorphic malware and ransomware and use deepfakes or phishing attacks to try to compromise systems. Using AI to automate critical portions of the attack chain allows threat actors to develop “faster, more sophisticated, and more targeted attacks against enterprises.”
Zscaler suggests that companies implement a Zero Trust architecture with advanced AI-powered phishing prevention controls to defend their systems against intruders.
Quote This
Biggest safety risk of AI is concentration of power and I doubt this board will help fight it!
— Hugging Face CEO Clement Delangue on X responding to criticism that the new U.S. AI Safety and Security Board does not include a representative of open-source AI.
This Week’s AI News
🏭 Industry Insights
- Saudi Arabia is investing big to become an AI superpower (The New York Times)
- OpenAI, Meta, Google and others agree to new child exploitation safety measures (The Wall Street Journal)
- AI advances could trigger a spike in electricity demand that could complicate the U.S.’ climate goals (Axios)
- AI is hitting a hard ceiling it can’t pass (Will Lockett/Medium)
🤖 Machine Learning
- Apple releases OpenELM, small open-source AI models designed to run on-device (VentureBeat)
- DeepMind researchers discover new learning capabilities within long-context large language models (VentureBeat)
✏️ Generative AI
- Why I use Microsoft Copilot instead of OpenAI’s ChatGPT (ZDNet)
- This tool from Cleanlab is designed to help you detect when chatbots don’t tell you the truth (MIT Technology Review)
- DeepL announces an AI writing assistant for businesses (The Next Web)
- Google testing new “Speaking practice” feature in Search to help users improve their conversational English speaking skills (TechCrunch)
- Are we headed for another generative AI winter? (Fast Company)
- Generative AI will not fulfill your autonomous SOC hopes (or even your demo dreams) (Allie Mellen and Rowan Curran/Forrester Research)
☁️ Enterprise
- OpenAI releases new enterprise-grade AI features for building and programming on GPT-4 Turbo (VentureBeat)
- Perplexity launches first business offering (Axios)
- HubSpot unveils Zendesk-like updates to its Service Hub and other AI tools for small businesses (VentureBeat)
- Cohere releases developer toolkit to accelerate generative AI app development in the enterprise (VentureBeat)
⚙️ Hardware
- REVIEW: The Rabbit R1 is a fun, funky, unfinished AI gadget (The Verge)
- The Ray-Ban Meta smart glasses have multimodal AI now (The Verge)
🔬 Science and Breakthroughs
- AI drug discovery startup Xaira launches with $1 billion in funding, says it’s ready to start developing drugs (TechCrunch)
- Companies and governments race to develop new chips to unlock AI’s potential in space (Axios)
💼 Business and Marketing
- Mark Zuckerberg warns it will take years for Meta to make money from generative AI (The Verge)
- How Palantir Technologies is using software boot camps to sell its AI platform (Bloomberg)
- Estée Lauder and Microsoft partner to help beauty brands use generative AI (VentureBeat)
📺 Media and Entertainment
- William Shatner was criticized for using an AI art cover on his new music album (VentureBeat)
- Drake deleted his AI-generated Tupac track after Shakur’s estate threatened to sue (Engadget)
💰 Funding
- Perplexity joins the Unicorn club after raising $62.7 million at a $1.04 billion valuation (Fast Company)
- Elon Musk’s xAI reportedly wants to raise $6 billion at an $18 billion valuation (TechCrunch)
- Cognition AI raises $175 million from Founders Fund at a $2 billion valuation (The Information)
- Augment, a GitHub Copilot rival, raises $252 million (TechCrunch)
- Nooks receives $22 million in funding to empower sales reps with its AI-powered call platform (VentureBeat)
- Yoneda Labs raises $4M from Khosla Ventures to build the ‘OpenAI for chemistry’ (VentureBeat)
⚖️ Copyright and Regulatory Issues
- OpenAI’s Sam Altman, Microsoft’s Satya Nadella, Alphabet’s Sundar Pichai and other tech leaders join the U.S. AI Safety and Security Board to advise Homeland Security on deploying AI safely within the country’s infrastructure (Engadget)
- UK launches probe of Amazon and Microsoft over their AI partnerships with Mistral, Anthropic and Inflection (TechCrunch)
- Connecticut Senate passes bill to regulate AI, but its fate remains uncertain (Associated Press)
💥 Disruption and Misinformation
- Microsoft deleted its WizardLM 2 LLM because it hadn’t undergone “toxicity testing,” but it’s already available on the internet (404 Media)
- Synthesia made a hyperrealistic deepfake of this reporter that’s “so good it’s scary” (MIT Technology Review)
- Baltimore County high school gym teacher arrested for using AI voice clone in an attempt to get high school principal fired (The Verge)
Thanks for reading. Be sure to subscribe so you don’t miss any future issues of this newsletter.
Did you miss any AI articles this week? Fret not; I’m curating the big stories in my Flipboard Magazine, “The AI Economy.”
Connect with me on LinkedIn and check out my blog to read more insights and thoughts on business and technology.
Do you have a story you think would be a great fit for “The AI Economy”? Awesome! Shoot me a message – I’m all ears!
Until next time, stay curious!
Subscribe to “The AI Economy”
New issues published on Fridays, exclusively on LinkedIn