Microsoft Pushes AI to the Edge

At Build, Microsoft showed its vision for AI that works wherever you are.
Welcome to "The AI Economy," a weekly newsletter by Ken Yeung on how AI is influencing business, work, society, and technology. Subscribe now to stay ahead with expert insights and curated updates—delivered straight to your inbox.

IN THIS ISSUE: Microsoft pushes AI innovation to the edge. Will OpenAI crack the AI hardware market, a space where many have stumbled, after acquiring Sir Jony Ive’s AI startup for nearly $6.5 billion? Plus, catch up on this week’s key headlines you might have missed, including what was announced at Google I/O and the newest Claude model from Anthropic.

The Prompt

At this year’s Build developer conference, Microsoft made a decisive bet on the future of AI-powered productivity: one where human workers partner with autonomous agents. The company rolled out a broad set of tools to help developers build this agentic future, not just by expanding agents’ cloud capabilities but by bringing them to the edge, embedding bots into browsers, websites, the operating system, and everyday workflows.

Unlike last year’s Copilot-centric focus, this time Microsoft placed greater emphasis on creating more dynamic agents, powered through integrations with third-party systems using the Model Context Protocol (MCP). This marks a shift from showcasing single-use AI assistants to enabling broader, integrated ecosystems.
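For readers curious what an MCP integration actually looks like on the wire, here’s a minimal sketch. MCP is built on JSON-RPC 2.0, and “tools/call” is the method a host uses to ask a connected server to run one of its tools. The tool name and arguments below are hypothetical, invented purely for illustration.

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize a JSON-RPC 2.0 request asking an MCP server to run a tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",  # standard MCP method for invoking a server's tool
        "params": {"name": tool_name, "arguments": arguments},
    })

# Example: an agent delegating a (hypothetical) calendar lookup to a server.
print(make_tool_call(1, "find_free_slots", {"attendees": ["alice", "bob"]}))
```

In practice, developers would use an MCP SDK rather than hand-rolling messages, but the point stands: because every tool speaks this one envelope format, an agent can delegate work to any third-party system that exposes an MCP server.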

The vision: Agents can operate across all use cases and integrate seamlessly with any atomic unit of digital infrastructure.

Among the many announcements made at Build—GitHub Copilot agent mode, a no-code digital twin builder tool, the addition of model tuning in Microsoft 365 Copilot, and more intelligent agents in Microsoft Teams—several signaled a deeper strategic push. These included a platform for building on-device agents, the ability to bring AI to web apps in the Edge browser, and developer capabilities to deploy bots directly on Windows.

While Microsoft attributes part of its surge in daily active users to these AI agents, it acknowledges the full potential of this ecosystem is still unfolding. “There’s a bunch of stuff in this ecosystem that has to get built out so that agents can more fully deliver what it is we hope they can deliver,” says Microsoft’s Chief Technology Officer, Kevin Scott. “What that is, is you want agents to be able to delegate work to [other agents], and you want the work that you’re delegating…to be increasingly complicated over time…We want to solve everyone’s problems no matter where they’re at because the thing you want is human imagination to be as unconstrained as humanly possible.”

That ambition was on full display at Build, as Microsoft unveiled new developer tools designed to help builders deliver intelligent, embedded solutions wherever their users are. Whether agents run inside an application, draw on the cloud, or operate locally in a browser or on a device, the company wants to show that the future of computing is AI everywhere, working alongside people and embedded into the fabric of digital life.

The embrace of the edge isn’t accidental; it’s a response to customer demand. Not that customers want an AI device fully cut off from the cloud. Instead, as Microsoft’s Vice President of Products for Azure AI, Marco Casalaina, tells me, they’re looking for a hybrid option: developers want their AI apps to operate in any environment, regardless of internet connectivity. That’s now possible thanks to more capable small language models like Microsoft’s Phi Silica and Mistral’s Ministral. “Up to this point, small models haven’t really been very capable. They have not been very good until just recently,” Casalaina says.

The rise of lightweight models allows Microsoft to extend AI to the edge, giving developers more freedom in how and where they deploy agents. Instead of being confined to a single cloud-based chat interface tethered to the internet like a toddler in a kids’ leash backpack, agents can move more freely across browsers, devices, and operating systems.

This all comes a year after Microsoft unveiled its take on the AI PC. “We’re entering this new era where computers not only understand us, but can anticipate what we want and our intents,” company CEO Satya Nadella said in 2024. The Copilot+ PC is meant to give consumers and workers hardware powerful enough to harness AI in their daily lives. With the new tools introduced at Build, Microsoft hopes to accelerate agent adoption on these state-of-the-art machines, showing they are genuinely useful in the AI era rather than laptops with a few AI features slapped on.

This is likely why the company launched Windows AI Foundry, an evolution of the Windows Copilot Runtime. It gives developers the tools and infrastructure to build, fine-tune, and deploy small language models directly on the machine, with the ultimate goal of having agents run natively on Windows. Microsoft wants to make Windows a first-class platform for edge AI, where intelligent assistants can access contextual data, respond in real time, and deliver richer user experiences while preserving privacy and performance. And the potential reach is massive: Windows 11, the minimum OS version required to support agents, is already installed on more than 500 million devices worldwide.

In showcasing its growing arsenal of edge-ready tools and developer infrastructure at Build, Microsoft made one thing clear: The future of AI isn’t just in the cloud; it’s everywhere users are. This isn’t just about faster assistants or flashier features. The company demonstrated how it’s building a new web, one organized around agents that work across connectivity boundaries and integrate deeply into our daily workflows. Microsoft isn’t waiting for the edge to arrive. It’s laying the tracks for the AI-powered productivity loop to run in real time, wherever you are.


Don’t Miss Out on Future Issues of ‘The AI Economy’

The AI Economy is expanding! While you’ve been getting weekly insights on LinkedIn, I’m gearing up to bring you even more—deep dives into AI breakthroughs, more interviews with industry leaders and entrepreneurs, and in-depth looks at the startups shaping the future. To ensure you don’t miss a thing, subscribe now on Substack, where we’ll be rolling out more frequent updates.

Don’t worry; the weekly newsletter will still be published on LinkedIn, but other stories will be available on Substack.

Subscribe to The AI Economy


A Closer Look

OpenAI has made its biggest acquisition to date, both financially and strategically. This week, the company announced it has purchased io, the AI startup founded by Sir Jony Ive, the legendary designer behind many of Apple’s most popular products. The all-stock deal is valued at around $6.5 billion, an astonishing sum for a company founded two years ago that operated in stealth with no public products to show.

The news felt slightly awkward because it was framed simultaneously as Ive launching io and OpenAI acquiring it. It might not have been this way had io remained a standalone company or a subsidiary of its new parent. Why launch it on the same day it was sold and then essentially shutter it as the team is absorbed? Are we being trolled?

Regardless, OpenAI plans to leverage io’s team to strengthen its creative and design chops, potentially helping to develop an AI “companion,” a device aimed at a hardware category where AI has yet to find real traction. Ultimately, Chief Executive Sam Altman wants to ship more than 100 million of them and make them part of our everyday lives. That said, history is not in OpenAI’s favor following the dismal performances of the Humane AI Pin and the Rabbit R1 assistant.

OpenAI could learn from Meta, which has sold more than 2 million of its Ray-Ban smart glasses. The wearables pair cameras and audio with artificial intelligence to help inform the wearer about what’s happening around them in the real world. Google is working on something similar, though it’s perhaps too early to dub it Google Glass 2.0. Nevertheless, whatever form factor Altman’s team devises, the Jony Ive touch should help it stand out in a crowded marketplace. After all, he’s the design genius behind the iMac, iPod, iPhone, iPad, Apple Watch, and AirPods.

Of all the options, it’s safe to say that OpenAI will not develop a phone. The announcement post suggests the company wants a device that doesn’t needlessly keep our eyes on a screen; instead, it should spark creativity and inspiration and rekindle human connection. And we shouldn’t expect it to be designed like existing products or interfaces. The Wall Street Journal reports that it might not be glasses, or even something worn on the body.

Will OpenAI succeed? It’s tempting to think so. After all, it’s well-funded, can integrate its own first-party models, and has Ive on board. But success hinges on the details, especially how willing people are to adopt yet another device, especially when the experience could be delivered through the one device already central to their daily lives: the smartphone. Still, the growing momentum behind AI agents and the push toward on-device intelligence could tilt the odds in OpenAI’s favor. Could Altman’s team produce a lightweight model that rivals the power of its large language models and have it run locally?

Whatever the plan is, the first version could come as soon as late 2026.


This Week’s AI News

🏭 AI Trends and Industry Impact

🤖 AI Models and Technologies

✏️ Generative AI and Content Creation

💰 Funding and Investments

☁️ Enterprise AI Solutions

⚙️ Hardware, Robotics, and Autonomous Systems

🔬 Science and Breakthroughs

💼 Business, Marketing, Media, and Consumer Applications

🛒 Retail and Commerce

⚖️ Legal, Regulatory, and Ethical Issues

💥 Disruption, Misinformation, and Risks

🔎 Opinions, Analysis, and Editorials


End Output

Thanks for reading. Be sure to subscribe so you don’t miss any future issues of this newsletter.

Did you miss any AI articles this week? Fret not; I’m curating the big stories in my Flipboard Magazine, “The AI Economy.”

Follow my Flipboard Magazine for all the latest AI news I curate for “The AI Economy” newsletter.

Connect with me on LinkedIn and check out my blog to read more insights and thoughts on business and technology. 

Do you have a story you think would be a great fit for “The AI Economy”? Awesome! Shoot me a message – I’m all ears!

Until next time, stay curious!

Subscribe to “The AI Economy”

Exploring AI’s impact on business, work, society, and technology.
