
You’re reading an issue of "The AI Economy," my newsletter exploring the forces shaping the AI era—tracking how AI is rewriting business, work, technology, and culture. Subscribe to get expert insights and curated updates delivered straight to your inbox.
The tech industry embraced vibe coding in 2025, as software development became one example of how artificial intelligence can disrupt professions. Instead of traditional line-by-line hand coding, developers can now instruct a bot to code an application through an intent-driven prompt. Vibe coding startups like Lovable have attracted tremendous investor interest, reaching unicorn status in remarkably short periods. Underscoring the momentum, Microsoft, Google, Salesforce, ServiceNow, and other incumbent platforms have all rushed to ship their own versions.
Companies are paying attention. Research from Palo Alto Networks shows near-universal adoption of AI-assisted coding among surveyed organizations, signaling that vibe-driven workflows are moving from fringe experiments toward the mainstream. “Vibe coding is taking off partly because it’s the best way for companies to adopt AI in a real impactful way,” says entrepreneur Amjad Masad on the “Beyond the Pilot” podcast. His startup, Replit, is one platform that has benefited tremendously from the rise of AI coding.
And “vibing” doesn’t seem limited to coding. The mindset is poised to spawn similar approaches in other spaces.
“Something is happening to the world of code that is hidden from most people, but shouldn’t be, because it is going to happen to more than just code, soon,” Sam Schillace, Microsoft’s deputy chief technology officer and the creator of Google Docs, writes on Substack.
He argues that AI models have crossed a threshold, becoming not just competent coders, but capable thinkers, delivering significant, measurable gains. The trajectory seen with vibe coding is expected to repeat across other domains: models start out limited, then improve rapidly, as new tools emerge and best practices take shape.
Schillace doesn’t explicitly list the domains that will be impacted, but we’ve already seen “vibe-first” workflows spread across productivity, marketing, design, and entrepreneurship.
Cautiously Vibing

Now seemingly mainstream, vibe coding is drawing in more developers. New data from code analysis firm Sonar finds that 72 percent of developers who have tried AI coding tools now rely on them daily. Moreover, 42 percent of all committed code is AI-generated, a figure developers believe will rise to 55 percent this year and 65 percent the year after. Still, for all the enthusiasm, serious concerns remain. While vibe coding is accelerating development, developers haven’t put their full faith in the technology—they’re spending more time reviewing the AI’s work, creating new challenges and bottlenecks. In fact, 96 percent of respondents tell Sonar they don’t fully believe that AI-generated code is “functionally correct.”
Their skepticism doesn’t appear to have shifted—Stack Overflow’s July 2025 study found developers held similar sentiments.

“We are witnessing a fundamental shift in software engineering where value is no longer defined by the speed of writing code, but by the confidence in deploying it,” Tariq Shaukat, Sonar’s CEO, remarks in a statement. “While AI has made code generation nearly effortless, it has created a critical trust gap between output and deployment.”
The winners, he adds, will be those “who empower their developers to use AI as a true force multiplier, pairing rapid generation with the automated and comprehensive review and verification needed to ensure strictly high-quality, highly maintainable, secure code.”
So which vibe coding tools are most popular? Microsoft’s GitHub Copilot takes top billing among developers at 75 percent, with OpenAI’s ChatGPT a close second at 74 percent. Anthropic’s Claude Code is used by nearly half (48 percent), followed by Google Gemini (37 percent), Cursor (31 percent), and Perplexity (21 percent).

Sonar’s study claims the industry is in a period of transition. Teams are using four different AI coding tools, and 64 percent of developers are using autonomous AI agents. However, any time saved is being reinvested in quality assurance. In other words, the dream of “AI taking the wheel” still has yet to be realized. Instead, companies are opting for a “vibe, then verify” approach with AI tools.
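What a “vibe, then verify” gate might look like in practice can be sketched in a few lines: AI-generated code is only accepted after automated checks pass and anything suspicious is flagged for human review. This is a hypothetical illustration, assuming a Python codebase—the function name `verify_generated_code` and the specific checks are my own, not from Sonar or any cited tool; real pipelines would layer linting, tests, and security scanning on top.

```python
import ast

def verify_generated_code(source: str) -> list[str]:
    """Run lightweight checks on AI-generated Python before accepting it.

    Returns a list of findings; an empty list means the code passed this
    gate. (Illustrative sketch only -- not a substitute for full review.)
    """
    findings = []
    # Check 1: the code must at least parse.
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return [f"syntax error: {exc}"]
    # Check 2: flag risky calls that warrant mandatory human review.
    risky = {"eval", "exec"}
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in risky:
                findings.append(
                    f"risky call: {node.func.id}() at line {node.lineno}"
                )
    return findings

# Usage: merge AI output only when the gate reports no findings.
print(verify_generated_code("result = eval(user_input)"))
```

The design point is that the gate is cheap and automatic, so the human reviewer’s time is spent only on code the machine has already flagged—which is exactly where the Sonar respondents say their time is going today.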
This caution is shared by some of the builders behind AI coding tools themselves. Michael Truell, co-founder and CEO of Cursor, warns that what’s often labeled as “vibe coding” can lead developers to disengage from the code entirely. That disengagement, he argues, could create fragile systems over time. By his description, vibe coding—AI building end-to-end software without anyone examining what’s under the hood—works for prototypes and small projects, but creates a “shaky foundation” for larger-scale programs.
Put another way, he equates it to house building, where you’re putting up walls and a roof, but don’t know what’s under the floorboards or how the wiring will be done.
Ways to Protect Your Vibe
Vibe coding is also creating a new class of security challenges for developers and enterprises alike. By prioritizing speed and intent over implementation details, AI-assisted workflows can surface vulnerabilities or weaken controls that are typically caught through manual review and established security practices. According to Sonar, 61 percent of developers agree that AI “often produces code that looks correct but isn’t reliable,” and the same percentage say extensive effort is needed to generate high-quality code through prompting and fixing.
It’s one thing when trained software developers use vibe coding tools—they are more likely to detect and fix bugs or vulnerabilities than citizen developers with no background in cybersecurity or quality assurance. Unit 42, a division of Palo Alto Networks, recently released its SHIELD framework, an approach intended to ensure “properly designed security controls” are part of the coding process, regardless of who—or what—is coding.
“Most evaluated organizations allow employees to use vibe coding tools due to the absence of hard blocks (e.g., blocking tools at the firewall),” Kate Middagh, Unit 42’s senior director, and Michael Spisak, its managing director of cybersecurity research, write in a blog post. “However, very few of these organizations have performed a formal risk assessment on the use of these tools, and very few are monitoring inputs, outputs, and security outcomes.”
Here’s the basis of the SHIELD framework:
- (S) Separation of Duties: Ensure that AI agents only have access to development and test environments—do not grant bots access to production.
- (H) Human in the Loop: For any code involving critical system functions, require a secure code review done by a human, as well as pull request approval before code merging.
- (I) Input/Output Validation: Protect AI systems from being tricked or influenced by bad input, especially when prompts include untrustworthy data. Require AI to conduct logic check validation and prove that it’s safe and correct before it’s released into production.
- (E) Enforce Security-Focused Helper Models: Use external/independent helper models—specialized agents that provide automated security validation for vibe-coded apps—that can scan code to see if it’s safe and correct before it’s deployed.
- (L) Least Agency: Limit the power that AI systems have to only what they absolutely need to do their job. They shouldn’t have the authority to act broadly, autonomously, or destructively.
- (D) Defensive Technical Controls: Set up defensive measures around the supply chain and execution management—what code gets pulled in and what gets executed. Don’t blindly trust software and actions coming from vibe coding.
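Several of these controls can be expressed as a simple authorization gate in front of an AI agent. The sketch below is my own illustration of three SHIELD ideas—separation of duties (S), human in the loop (H), and least agency (L)—and does not come from Unit 42’s tooling; the class and function names are hypothetical.

```python
from dataclasses import dataclass

# Environments an AI agent may touch -- never production (separation of duties).
ALLOWED_AGENT_ENVIRONMENTS = {"dev", "test"}

@dataclass
class AgentAction:
    environment: str        # where the agent wants to act
    operation: str          # e.g. "write_code", "deploy", "delete_data"
    human_approved: bool = False

def authorize(action: AgentAction) -> bool:
    """Grant an AI agent only the minimum authority it needs."""
    # (S) Separation of duties: agents never act on production directly.
    if action.environment not in ALLOWED_AGENT_ENVIRONMENTS:
        return False
    # (L) Least agency: destructive operations are denied outright.
    if action.operation == "delete_data":
        return False
    # (H) Human in the loop: deployments require explicit approval.
    if action.operation == "deploy" and not action.human_approved:
        return False
    return True

# Usage: an unapproved deploy is blocked; an approved one in "test" passes.
print(authorize(AgentAction("test", "deploy")))
print(authorize(AgentAction("test", "deploy", human_approved=True)))
```

The deny-by-default shape matters: the gate enumerates what the agent *may* do, rather than trying to blacklist everything it shouldn’t.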
What are the main causes of vibe coding risks? The models are to blame, according to Middagh and Spisak. “AI agents are optimized to provide a working answer, fast,” the authors state. “They are not inherently optimized to ask critical security questions, resulting in a nature that is insecure by default.” Additionally, Unit 42 cites a lack of awareness of critical context, hallucinations of libraries or code packages that could create unresolvable dependencies, and untrained developers—specifically, those not trained in software development and also technical personnel who over-rely on AI.
“The age of vibe coding has arrived, but the vibes will benefit from careful tuning,” Middagh and Spisak conclude. “Speed without security rigor can quickly lead to irreversible outcomes. Identifying tactical security controls that adequately address the risk ensures we can take the safe path to vibe coding bliss and avoid the difficult path to catastrophic scenarios we cannot take back.”
Even as vibe coding gains widespread adoption and enthusiasm, it is far from flawless. Developers continue to grapple with trust gaps, quality concerns, and security challenges, spending significant time reviewing and correcting AI-generated output. The technology accelerates coding, but it cannot yet replace human judgment or expertise. Despite the excitement and rapid uptake, most developers view vibe coding as a powerful assistant rather than a fully autonomous solution, underscoring that careful oversight and rigorous verification remain essential for safe and reliable software development…
…for now.
Featured Image: An AI-generated image of a developer sitting in front of his computer at the office. Credit: Adobe Firefly