Deciphering the AI Tech Stack for the Enterprise

Adobe Firefly-generated image showing multiple layers of a "smart computer"

Excitement about generative AI is growing across business organizations, yet many employees remain unsure how to capitalize on it in their work. And adopting cutting-edge technology is trickier for large companies than it is for consumers or small businesses: success depends on having the proper infrastructure in place. So what does an AI tech stack look like for the enterprise?

Menlo Ventures, a venture capital firm long known for making bets on enterprise startups, has shared its perspective on the future of AI development. In a follow-up to its “State of Enterprise AI” report, published last November, the firm posits that companies should treat AI development like building with LEGO blocks, assembling modular components into the structures that best fit their needs.

Supplying the pieces of that stack presents a significant opportunity. Enterprise companies spent more than $1.1 billion on it last year alone, which Menlo equates to the “largest new market in [gen AI] and a massive opportunity for startups.”

Four Layers of the AI Tech Stack

Figuring out the components of the current AI stack can be challenging, since everyone seems to have a different opinion on what they are. Menlo Ventures identified four key layers that enterprises need in order to make AI work for their businesses.

First Layer: Compute and Foundation Models

Consider this the brains of the operation. It’s where you’ll find foundation models from providers such as OpenAI, Anthropic, Mistral, and Hugging Face, along with open models like Llama 2. This layer also contains the infrastructure used to train, fine-tune, optimize, and deploy those LLMs.
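The report’s LEGO-block framing implies that, at this layer, application code should not be welded to any single provider. As a minimal illustration (my own sketch, not from the report, with a stand-in `EchoModel` in place of a real vendor SDK), a thin common interface lets one foundation model be swapped for another:

```python
from typing import Protocol


class FoundationModel(Protocol):
    """Common interface so providers (OpenAI, Anthropic, Mistral...) are interchangeable."""

    def generate(self, prompt: str) -> str: ...


class EchoModel:
    """Hypothetical stand-in; a production class would wrap a provider's SDK call."""

    def generate(self, prompt: str) -> str:
        return f"[echo] {prompt}"


def answer(model: FoundationModel, question: str) -> str:
    # Application code depends only on the interface, not on a vendor SDK.
    return model.generate(question)


print(answer(EchoModel(), "What is an AI tech stack?"))
```

Swapping in a different model then means changing one constructor, not rewriting the application.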

Second Layer: Data

This layer connects the LLMs to the right context within the enterprise’s data systems. Alongside data infrastructure such as Databricks, Upstash, Pinecone, and Momento, you’ll find ETL and data pipelines and pre-processing components.
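To make “connecting LLMs to the right context” concrete, here is a toy retrieval sketch of my own (an in-memory dictionary stands in for a vector database like Pinecone, and the `retrieve` helper and hard-coded embeddings are illustrative assumptions): documents are stored as embeddings, and the closest ones to a query are pulled in as context.

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


# Toy "vector database": document name -> embedding.
# A real system would store embeddings produced by a model, not hand-written ones.
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
}


def retrieve(query_embedding, k=1):
    """Return the k documents closest to the query - the 'right context' step."""
    ranked = sorted(docs, key=lambda d: cosine(docs[d], query_embedding), reverse=True)
    return ranked[:k]


print(retrieve([0.85, 0.15, 0.0]))  # -> ['refund policy']
```

The retrieved text would then be prepended to the LLM prompt, which is the basic shape of retrieval-augmented generation.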

Third Layer: Deployment

Developers manage AI applications here, organizing and controlling how the LLMs perform their tasks. Components include agent and tool frameworks such as LlamaIndex, LangChain, and Fixie. Prompt management and orchestration also live at this layer.
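Stripped to its essentials, orchestration at this layer is about chaining steps: build a prompt from a managed template, call a model, and post-process the result. A minimal sketch of my own (the `fill_template` and `orchestrate` helpers are assumptions, and a stub lambda stands in for a real model that a framework like LangChain would wire in):

```python
def fill_template(template: str, **vars) -> str:
    """Minimal prompt management: templates kept separate from application code."""
    return template.format(**vars)


def orchestrate(llm, question: str, context: str) -> str:
    """Chain the steps: build prompt -> call model -> post-process the reply."""
    prompt = fill_template(
        "Answer using only this context:\n{context}\n\nQuestion: {question}",
        context=context,
        question=question,
    )
    return llm(prompt).strip()


# Stub LLM for illustration only; it ignores the prompt and returns a fixed reply.
stub_llm = lambda p: "  42  "
print(orchestrate(stub_llm, "What is the answer?", "The answer is 42."))  # -> 42
```

Frameworks add a great deal on top of this (tool calling, memory, retries), but the chain-of-steps shape is the same.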

Fourth Layer: Observability

Here you’ll find components specifically designed to monitor the performance of LLMs at run time and safeguard against potential threats. Tools from Credal.ai, Humanloop, Truera, BrainTrust, and Patronus AI are used at this layer.
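The monitoring half of this layer can be pictured as a wrapper around every model call that records basic metrics. This is my own simplified sketch (the `observed` wrapper is an assumption, with a stub model in place of a real one); dedicated tools capture far richer signals, such as token counts, cost, and quality scores:

```python
import time


def observed(llm):
    """Wrap a model call to record latency and payload sizes - basic run-time monitoring."""
    metrics = []

    def wrapped(prompt: str) -> str:
        start = time.perf_counter()
        reply = llm(prompt)
        metrics.append(
            {
                "latency_s": time.perf_counter() - start,
                "prompt_chars": len(prompt),
                "reply_chars": len(reply),
            }
        )
        return reply

    wrapped.metrics = metrics
    return wrapped


llm = observed(lambda p: "ok")  # stub model standing in for a real LLM
llm("hello")
print(llm.metrics[0]["prompt_chars"])  # -> 5
```

The collected metrics would feed dashboards and alerts, while the safeguarding half of the layer would add checks on the prompt and reply themselves.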

Together, these four layers serve as a roadmap for companies keen on deploying LLMs. Pre-existing models make it simple to get started with AI, but if you have a substantial dataset and are hesitant to hand it over to off-the-shelf commercial LLMs, understanding these tools gives you the knowledge needed to build a sustainable solution.

Read more about the new enterprise AI tech stack here, plus learn about the key design principles companies should know about, and what’s next.

Subscribe to “The AI Economy”

New issues published on Fridays, exclusively on LinkedIn

