Enterprise AI Success Hinges on Proprietary Data, Not Commoditized LLMs

TL;DR

  • Enterprise AI projects fail at a reported 95% rate because successful implementations require significant engineering effort and a strong team to productionize and differentiate from competitors, not just quick deployments.
  • Durable competitive advantage in enterprise AI shifts from commoditized LLMs to proprietary data and specialized workflow integration, requiring companies to leverage unique business processes and data assets.
  • The fundamental difference between current AI and past automation like RPA lies in AI's learning and generalization capabilities, enabling it to understand patterns and improve over time, unlike brittle rule-based systems.
  • Value accrual in the AI stack will likely favor applications and platforms that deeply understand user data and workflows, rather than solely the underlying LLM providers, which are becoming commoditized.
  • Successful enterprise AI adoption hinges on a robust data strategy, with winners emerging by leveraging unique company data and building AI that understands proprietary business processes.
  • The future of enterprise interaction will move beyond keyboards to voice and generative AI, with tools like Zoom and note-taking applications becoming critical for capturing and structuring data for AI systems.
  • Companies should experiment with AI vendors using shorter-term contracts due to market immaturity, prioritizing products that offer quick testing and demonstrable value rather than lengthy implementations.

Deep Dive

The enterprise AI landscape is marked by a stark contrast between the rapid commoditization of Large Language Models (LLMs) and the persistent challenge of achieving tangible business value, evidenced by the widely cited statistic that 95% of AI projects fail. This failure rate, however, is presented not as a sign of underlying technological deficiency, but as a natural outcome of the experimental phase inherent in adopting any transformative technology, signaling a need for aggressive exploration and learning. The critical differentiator for successful enterprise AI lies not in the LLM itself, which is becoming a commodity, but in proprietary data and the ability to build AI systems that deeply understand unique business processes.

Real-world AI adoption is yielding significant results across various sectors when focused on these differentiating factors. In finance, agents are automating equity research by synthesizing earnings reports and market data in minutes, drastically reducing the time from earnings calls to actionable reports. Healthcare is seeing advancements in drug discovery with transformer models like "Teddy" that can predict gene expression and regulatory networks, a capability previously out of reach. Retail is leveraging AI agents to automate and hyper-personalize marketing campaigns by segmenting audiences and generating tailored content, enabling faster and more granular targeting than manual processes allowed. These successes underscore that AI's efficacy in enterprises is contingent on its ability to integrate with and leverage unique company data and complex, proprietary workflows, rather than relying on generic LLM capabilities.

The broader implications of this AI transition reveal a shift in competitive advantage. While LLMs are becoming interchangeable, the true moat for businesses is their proprietary data and the specialized AI applications built upon it. This necessitates a robust data strategy as the foundation for any AI initiative. The comparison to Robotic Process Automation (RPA) highlights a fundamental difference: RPA was brittle and rule-based, lacking learning capabilities, whereas current AI systems are learning, agentic, and capable of generalization. This learning aspect is key to overcoming the unexpected complexities that plagued RPA. For CIOs navigating AI budgets, the advice is to experiment widely with shorter-term contracts, prioritizing AI products that demonstrate rapid testing and integration, as the ultimate winners in the AI market are yet to be definitively identified.

The economic calculus of AI investment, particularly the massive capital expenditure on hardware like GPUs, is being reframed by the potential for AI to disrupt not just the software industry but also the significantly larger services sector. While some view the quest for superintelligence as the primary driver of this capital expenditure, others argue that Artificial General Intelligence (AGI) has already been achieved and the focus should shift to making existing AI useful within enterprises. This latter perspective suggests that the current AI capabilities are sufficient to automate many tasks and generate substantial economic value, without necessarily requiring the speculative outcomes of superintelligence. The value accrual in the AI stack is expected to heavily favor applications and platforms that leverage proprietary data and unique workflows, rather than the underlying LLM providers, which are increasingly seen as commodity infrastructure. Companies that can effectively integrate AI into their core business processes and create proactive, personalized user experiences will capture the most significant value.

Action Items

  • Audit AI project failures: For 3-5 recent initiatives, analyze root causes of failure beyond technical issues, focusing on data strategy or workflow integration gaps.
  • Create data moat strategy: Define 3-5 proprietary data assets that offer a competitive advantage, and outline how to leverage them for unique AI applications.
  • Implement agentic workflow automation: Identify 2-3 high-overhead business processes and pilot agent-based automation to reduce manual labor and improve efficiency.
  • Develop AI literacy program: Design a 1-week pilot program for 10-15 employees to improve their ability to collaborate effectively with AI tools.
  • Evaluate LLM commodity strategy: For 2-3 core AI functions, compare the cost and performance of using interchangeable LLMs versus specialized models.
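The "LLM as commodity" framing above implies a concrete engineering choice: code against a provider-agnostic interface so vendors can be swapped and compared on price like gas stations. The sketch below is illustrative only; the provider names, prices, and interface are hypothetical placeholders, not real vendor APIs or quotes.

```python
# Minimal sketch of a provider-agnostic LLM layer for cost comparison.
# All names and per-token prices here are hypothetical placeholders.
from dataclasses import dataclass
from typing import Protocol


class LLMClient(Protocol):
    name: str
    usd_per_1k_tokens: float

    def complete(self, prompt: str) -> str: ...


@dataclass
class ProviderA:
    name: str = "provider-a"
    usd_per_1k_tokens: float = 0.50  # placeholder price, not a real quote

    def complete(self, prompt: str) -> str:
        # Stub response; a real implementation would call the vendor API here.
        return f"[{self.name}] response to: {prompt}"


@dataclass
class ProviderB:
    name: str = "provider-b"
    usd_per_1k_tokens: float = 0.30  # placeholder price, not a real quote

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] response to: {prompt}"


def cheapest(providers: list[LLMClient], monthly_tokens: int) -> LLMClient:
    """Pick the provider with the lowest projected monthly spend."""
    return min(providers, key=lambda p: p.usd_per_1k_tokens * monthly_tokens / 1000)


if __name__ == "__main__":
    pick = cheapest([ProviderA(), ProviderB()], monthly_tokens=2_000_000)
    print(pick.name)
```

Because callers depend only on the `LLMClient` protocol, swapping vendors becomes a configuration change rather than a rewrite, which is the practical payoff of treating the model layer as interchangeable.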

Key Quotes

"I think we have AGI. I think we have artificial general intelligence, we really have it. You hear these '95% of projects fail,' but that's actually what you want. I think the LLM is a commodity. People are not saying that, but it is a commodity. We can get gas from this gas station, we can get gas from that gas station, it doesn't matter, just compare price."

Arvind Jain argues that artificial general intelligence (AGI) has already been achieved, and reframes the widely cited 95% project failure rate: a high failure rate is not a negative indicator but a sign of active experimentation with new technology, which is desirable for progress. Jain also posits that Large Language Models (LLMs) are becoming commodities, similar to gasoline, where price and availability are the primary differentiators rather than inherent superiority.


"The LLM is a commodity. People are not saying that, but it is a commodity. When I took econ classes, a commodity was when it's interchangeable: you can get gas from this gas station or you can get gas from that gas station, it doesn't matter, just compare price. LLMs have become that way. It doesn't really matter: this one is better right now, next week that one is better, you can't even keep up anymore with what's happening. So they're a commodity. So it's not about that. It really comes down to your company: what data does your company have that's special, that your competitors don't have? Can you leverage that, and can you build AI that really understands that data? Because that's not a commodity."

Ali Ghodsi emphasizes that Large Language Models (LLMs) are rapidly commoditizing, meaning they are becoming interchangeable and differentiated primarily by price rather than unique capabilities. Ghodsi explains that true competitive advantage in AI will stem from a company's proprietary data and its ability to build AI systems that deeply understand this unique data, which he contrasts with the commoditized nature of LLMs themselves.


"The last time enterprises got this excited about a tool, it was called RPA, and we know how that ended: it unfortunately fizzled out. Somebody in the audience yesterday was like, 'Hey, how is this time different from RPA? It seems like the same movie, bigger budgets, better actors. What's different this time?' How is the nature or the architecture of the technology different from the previous automation cycle, either of you?"

Apoorv Agrawal raises a critical question about the current excitement surrounding AI, drawing a parallel to Robotic Process Automation (RPA) and its eventual fizzling out. Agrawal prompts the guests to articulate what fundamental differences in AI's nature or architecture distinguish it from RPA, suggesting a need to understand why this time might be different despite superficial similarities in enterprise enthusiasm and investment.


"Well, first of all, RPA didn't capture my attention at all. I would not compare these two technologies at all. What we're seeing now with AI is so fundamental. When we saw it first, it was basically magic, and you couldn't believe that this is a machine that is doing this work. Machines simply cannot do these kinds of things that we saw them do, like writing on their own, having emotion, understanding emotion."

Arvind Jain dismisses the comparison between AI and RPA, stating that AI is fundamentally different and "so fundamental" that it was initially perceived as magic. Jain highlights that AI's ability to perform tasks previously thought impossible for machines, such as exhibiting emotion or understanding complex human nuances, sets it apart from RPA, which he found unengaging.


Resources

External Resources

Research & Studies

  • MIT report - Stated as indicating that 95% of AI deployments do not work.

Tools & Software

  • Databricks - Discussed as a platform for building AI solutions and automating tasks.
  • Glean - Discussed as an AI platform for automating organizational overhead and tasks, and as a personal companion for work.
  • ChatGPT - Mentioned as a widely used AI tool for personal and work lives.
  • Cursor - Mentioned as an AI tool used by hundreds of millions of users on the SMB and developer side.
  • Codex - Mentioned as an AI tool used by hundreds of millions of users on the SMB and developer side.
  • Claude Code - Mentioned as an AI tool used by hundreds of millions of users on the SMB and developer side.
  • RPA (Robotic Process Automation) - Referenced as a previous automation technology that fizzled out due to being rule-based and brittle.
  • Granola - Mentioned as an AI note-taking tool.
  • Fathom - Mentioned as an AI note-taking tool.
  • Zoom - Discussed as a potential platform for data entry and conversation capture.

People

  • Ali Ghodsi - Co-interviewee, CEO of Databricks.
  • Arvind Jain - Co-interviewee, CEO of Glean.
  • Apoorv Agrawal - Host of the BG2Pod interview.
  • Brad Gerstner - Host of the BG2Pod interview.
  • Bill Gurley - Host of the BG2Pod interview.
  • Dan Shevchuk - Producer of the BG2Pod episode.
  • Yung Spielberg - Provided music for the BG2Pod episode.
  • Rich Sutton - Pioneer of reinforcement learning, mentioned in relation to Camp 2 of AI approaches.
  • Yann LeCun - Deep learning pioneer, mentioned in relation to Camp 2 of AI approaches.

Organizations & Institutions

  • Databricks - Company represented by Ali Ghodsi, discussed as a platform for AI solutions.
  • Glean - Company represented by Arvind Jain, discussed as an AI platform for organizational overhead and personal work companions.
  • BG2Pod - Podcast where the interview took place.
  • Altimeter - Mentioned as the firm of Apoorv Agrawal.
  • Royal Bank of Canada (RBC) - Mentioned as a customer using Databricks for an AI agent that analyzes earnings reports.
  • Merck - Mentioned as a customer using Databricks to create a drug discovery model called Teddy.
  • 7-Eleven - Mentioned as a customer using Databricks for AI agents that automate the marketing stack.
  • Nvidia - Mentioned in the context of capital expenditure on the semiconductor side of AI.
  • Turing Award - Mentioned as a computer science award related to AI technology.
  • OpenAI - Discussed in the rapid fire section regarding stock performance and AI bubble.
  • Gemini - Mentioned as an AI model that is growing and widely used.
  • Anthropic - Mentioned in the rapid fire section regarding stock performance.

Websites & Online Resources

  • www.bg2pod.com - Website where the podcast is available.
  • x.com/apoorv03 - Twitter profile for Apoorv Agrawal.
  • x.com/BG2Pod - Twitter profile for BG2 Pod.

Podcasts & Audio

  • BG2Pod with Brad Gerstner and Bill Gurley - The podcast where the interview was conducted.

Other Resources

  • Enterprise AI - The primary topic of the interview.
  • Artificial General Intelligence (AGI) - Discussed as a concept, with differing views on its current state.
  • Large Language Models (LLMs) - Discussed as a commodity in the AI landscape.
  • Superintelligence - Discussed as a quest in one of the AI camps.
  • Scaling Laws - Mentioned as a mentality driving the quest for superintelligence.
  • Next Token Prediction - Described as the mechanism of LLMs by researchers in Camp 2.
  • Reinforcement Learning - Mentioned as a foundational technology for AI, created by Rich Sutton.
  • Data Strategy - Emphasized as the starting point for AI strategy.
  • Agentic Systems - Mentioned as a key area for durable advantage in AI.
  • Workflow Integration - Mentioned as a key area for durable advantage in AI.
  • AI CapEx - Discussed in relation to AI spend and revenue.
  • Revenue Math - Discussed in relation to AI spend and revenue.
  • Computer Use - Mentioned as a hard problem to nail with AI.
  • Data Layer - One of the three layers discussed for value accrual.
  • Intelligence Layer - One of the three layers discussed for value accrual.
  • Software/Application Layer - One of the three layers discussed for value accrual.
  • CRUD Apps - Term used by Satya Nadella to describe certain software applications.
  • System of Record - Mentioned in the context of storing information.
  • AI Literacy - Discussed as a goal for broader AI adoption.
  • Personal Companion - Vision for Glean as a confidential AI assistant.
  • Speech Interaction - Discussed as a future interaction paradigm that will replace keyboards.
  • Coding - Mentioned as potentially overhyped in the context of AI.
  • Customer Service and Support Automation - Mentioned as potentially overhyped.
  • Proactive AI Products - Desired future development in AI.
  • Agents - Mentioned as a long-term investment area.

---
Handpicked links, AI-assisted summaries. Human judgment, machine efficiency.
This content is a personally curated review and synopsis derived from the original podcast episode.