AI Is Reshaping How We Learn to Code—And How to Build ‘The AI Team’ at Work

Luis Ceze, a professor at the University of Washington and CEO of OctoAI, explains how programming is evolving because of AI. Plus, a way to gauge how employees feel about AI at work.
"The AI Economy," a newsletter exploring AI's impact on business, work, society and tech.
This is "The AI Economy," a weekly LinkedIn-first newsletter about AI's influence on business, work, society and tech and written by Ken Yeung. Sign up here.

Welcome back to “The AI Economy.” I took some time away from the newsletter this summer. During the hiatus, I got hooked on photographing velodrome racing at a nearby track to recharge my creative mind. Now I’m back to share the latest AI news with you!

For this week’s issue, in honor of students returning to the classroom, I examine how artificial intelligence is influencing the future of software development. We’ll then look at new research from Slack on AI sentiment among full-time desk workers and what it takes for companies to assemble so-called “AI Teams.”

Stick around to check out the latest roundup of headlines you may have missed!

The Prompt

With developers turning to AI agents to help write software, how is that shift changing the way we build technical solutions, both in the real world and in academic settings? To answer this question, I reached out to entrepreneur Luis Ceze, co-founder of OctoAI. But he’s not just a startup founder; he’s also an active professor at the University of Washington’s Paul G. Allen School of Computer Science and Engineering and a venture partner at Madrona.

Who better than an educator, builder and investor, all in one?

How AI Is Transforming Formal Programming Education

“Students will increasingly need to integrate AI tools into their learning process early on to meet industry expectations,” Ceze warns. “This shift implies a greater emphasis on mastering fundamental computing principles over transient techniques and frameworks of the day.”

He believes students should focus their learning on systems architecture and design, two areas developers typically learn early in their careers. However, as AI takes over routine coding tasks, humans can dedicate more time to these architectural aspects, which Ceze claims are essential for remaining competitive in the job market.

“Designing complex software systems involves not just understanding functional requirements but also integrating considerations such as scalability, security, and long-term maintainability. Human architects bring a blend of intuition, experience, and contextual understanding that is difficult for AI systems to replicate fully.”

On the Evolution of Programming Languages

As in academia, Ceze expects software developers to shift their focus toward systems architecture. “This means they will concentrate more on designing robust, scalable systems and defining overall project architecture rather than delving into the specifics of programming language semantics.”

The way we code will eventually evolve thanks to AI. New programming languages might emerge “whose semantics are less focused on human brain capabilities and more on [Large Language Model] capabilities.” Ceze opines that these languages could leverage “AI/natural language processing, pattern recognition, and automated reasoning to improve code readability, efficiency and maintainability based on AI-generated insights.”

However, he raises one area of concern: debugging. Though AI agents may improve code quality and reduce the time human developers spend scrutinizing their work, they are not necessarily without fault. “Although you’re likely to see fewer bugs, debugging could still become complex when issues arise in the higher-level architectural design or interactions between different AI-managed components. The developer will also have less context because they did not write the code.”

▶️ Read more about my interview with Luis Ceze (My Two Cents)


A Closer Look

Slack has released a short quiz you can take to assess how you feel about artificial intelligence in the workplace. When you complete it, you’ll be assigned one of five personas. The goal is for everyone on your team, in your department, or across your company to take the evaluation, giving employers contemplating AI adoption a benchmark to work from.

“The AI-powered future of work isn’t just about enterprises, it’s also about employees—and it’s redefining everything from careers to workplace culture. But to realize the promise of AI, companies need to make AI work for workers and bring everyone on board ‘The AI Team,’” Christina Janzer, Slack’s Senior Vice President of Research and Analytics, says in a statement.

The quiz’s launch coincides with new research from Slack’s Workforce Lab, a group tasked with studying how to improve work. After surveying 5,000 full-time desk workers across the U.S., Australia, India, Singapore, Ireland, and the United Kingdom, it identified five personas based on workers’ comfort level with AI.

The Five Personas

  • The Maximalist: Workers who use AI multiple times per week to improve their work and are shouting from the rooftops about it
  • The Underground: “Maximalists in disguise” who use AI often but are hesitant to tell their coworkers that they’re using it
  • The Rebel: Workers who don’t subscribe to the AI hype, avoid using it, and consider it unfair when their coworkers opt to use it
  • The Superfan: Workers who are excited about the tech and admire the advances made in AI but haven’t made the most of it at work
  • The Observer: Workers who haven’t yet integrated AI into their workflow and are watching with interest and caution

Slack’s study finds that many respondents use AI, though fewer than half are enthusiastic enough to boast about it. A third of those polled say they use AI multiple times a week. However, 35 percent say they are comfortable not using it: 16 percent would rather observe from the sidelines, and 19 percent say they don’t believe in the tech.

The research is meant to give companies a data-driven snapshot of their employees’ feelings toward AI. Failing to consider worker attitudes while rolling out artificial intelligence could spell disaster for a company. With that benchmark in hand, executives can assess what programs and tactics to implement next to motivate workers to join them in building out this so-called “AI Team.”

The launch comes more than a week before Slack’s parent company, Salesforce, hosts its annual Dreamforce customer conference, where more AI announcements are expected, including Agentforce, a platform that will help businesses create more autonomous AI agents.

▶️ Read more about Slack’s 5 AI Personas (My Two Cents)


Today’s Visual Snapshot

Leadership position responsible for driving generative AI strategy. Source: eMarketer

Without a Chief AI Officer, companies may wonder who should be responsible for implementing artificial intelligence in the workplace. Should that responsibility fall to the chief executive, the chief information officer, the chief technology officer, someone else in the C-suite, or someone further down the org chart?

The above chart, designed by eMarketer, provides a snapshot of what executives believe: the CTO should oversee the execution of a generative AI strategy. That decision shouldn’t be surprising, since it’s wise to have the leader overseeing tech throughout the company be responsible for AI.

The data comes from a survey of 2,508 executives worldwide, conducted by National Research Group and Google Cloud between February 23 and April 5, 2024.


Quote This

“We have inquired with the US Department of Justice and have not been subpoenaed. Nonetheless, we are happy to answer any questions regulators may have about our business.”

— Nvidia pushes back against news reports claiming the company has been subpoenaed by federal regulators looking into whether the chip maker violated antitrust laws.


This Week’s AI News

🏭 Industry Insights

🤖 General AI and Machine Learning

✏️ Generative AI

☁️ Enterprise

⚙️ Hardware and Robotics

🔬 Science and Breakthroughs

💼 Business and Marketing

📺 Media and Entertainment

💰 Funding

⚖️ Copyright and Regulatory Issues

💥 Disruption and Misinformation

🔎 Opinions, Analysis and Research


End Output

Thanks for reading. Be sure to subscribe so you don’t miss any future issues of this newsletter.

Did you miss any AI articles this week? Fret not; I’m curating the big stories in my Flipboard Magazine, “The AI Economy.”

Follow my Flipboard Magazine for all the latest AI news I curate for “The AI Economy” newsletter.

Connect with me on LinkedIn and check out my blog to read more insights and thoughts on business and technology. 

Do you have a story you think would be a great fit for “The AI Economy”? Awesome! Shoot me a message – I’m all ears!

Until next time, stay curious!

Subscribe to “The AI Economy”

New issues published on Fridays, exclusively on LinkedIn
