AI Capital Flywheel Risks Ecosystem Value, Undervaluing "Boring" Software
The AI investment landscape is a bewildering spectacle of soaring valuations and rapid innovation, far removed from the steady, predictable growth of traditional software. This conversation with Martin Casado and Sarah Wang reveals a critical, often overlooked truth: the current AI boom is not merely about building better models, but about a fundamental shift in how value is created and captured. The non-obvious implication? The immense capital flowing into foundational AI models is creating a "capital flywheel" that risks overshadowing the ecosystem built upon them, potentially leading to a concentration of power and a distortion of investment priorities. This analysis is crucial for founders, investors, and technologists navigating this complex terrain, offering a strategic lens to identify durable advantages beyond the immediate hype.
The Undervalued Core: Why "Boring" Software Still Matters
The current venture capital narrative is dominated by the allure of the "hot thing" -- either cutting-edge AI or deep tech. This has created a peculiar barbell effect, leaving a vast swath of traditional yet fundamentally sound software companies starved for attention and investment. As the speakers highlight, a company building databases, monitoring tools, or essential enterprise software, even with solid growth in a large market, struggles to capture investor interest if it isn't growing from zero to 100 overnight. This "meme" of explosive growth overlooks a crucial reality: many investors, including Limited Partners (LPs), are perfectly content with a 3x net return over a fund's lifecycle, and portfolio companies that each return a steady 5x in a large market can deliver that comfortably. The relentless pursuit of "strong growth" in the current climate leads to systematic under-investment in the foundational infrastructure that powers much of the digital economy. This creates a hidden opportunity for those who can look beyond the immediate frenzy and recognize the enduring value of robust, well-understood software solutions.
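The fund math behind that claim fits in a few lines. The sketch below is illustrative only: the fee terms (a 2% annual management fee over a ten-year fund life, 20% carry on profits) are assumed, not figures from the conversation, and real fund accounting is considerably messier.

```python
def net_multiple(deal_multiple, mgmt_fee_rate=0.02, fund_years=10, carry=0.20):
    """Toy fund model: management fees reduce investable capital,
    and carry is taken on profits above committed capital.
    Returns the LP net multiple per $1 of committed capital."""
    fees = mgmt_fee_rate * fund_years      # e.g. 20% of committed capital
    invested = 1.0 - fees                  # capital actually deployed
    gross = deal_multiple * invested       # gross proceeds per $1 committed
    profit = max(gross - 1.0, 0.0)
    return gross - carry * profit          # what LPs keep

print(round(net_multiple(5.0), 2))  # 3.4 -- clears the 3x net bar
```

Under these assumed terms, a portfolio averaging 5x gross on deployed capital nets LPs roughly 3.4x: no zero-to-100 breakout required.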
"So I would say that that's probably the most under-invested sector right now: boring software, boring enterprise software, just traditional, no AI here."
-- Martin Casado
The consequence of this neglect is twofold. First, promising traditional software companies may struggle to secure the capital needed for expansion, innovation, or even basic operational scaling, potentially hindering their long-term viability. Second, it creates a distorted market where capital is misallocated towards fleeting trends rather than sustainable, essential infrastructure. The "silly" metric of zero-to-100 growth ignores the profit margins and long-term stability that these "boring" companies offer, representing a significant missed opportunity for investors seeking reliable returns.
The Hardware Hurdle: Why Robotics Remains a Diligence Challenge
While AI investment booms, the hardware and robotics sector presents a different kind of challenge, one that even well-resourced firms find difficult to navigate. The prevailing sentiment is that robotics is important, yet the actual investment patterns reveal a cautious approach. The speakers note that a "ChatGPT moment" -- a clear, transformative breakthrough that unlocks massive demand and investment -- has yet to materialize for hardware in the same way it has for AI models. Instead, funding often flows as if this moment is already a given, creating a potential disconnect between perceived potential and actual market validation.
A key insight here is the inherent verticality of hardware. A robot designed for agriculture is, in essence, an agricultural company, subject to the pricing, supply chains, and competitive dynamics of that specific market. This makes it difficult for firms focused on horizontal technology investing, like the speakers' own, to diligence effectively. They prefer to invest in horizontal solutions that can serve multiple industries, such as the software enabling autonomous vehicles (Applied Intuition, DeepMath) or general robotics data platforms (Scale AI). The actual robot interacting with the physical world, however, requires deep domain expertise in the target market, a capability better suited to specialized teams like a16z's American Dynamism (AD) fund.
"So the AD team does a lot of that type of stuff because they're actually set up to diligence that type of work. But for horizontal technology investing, there's very little when it comes to robots just because it's so fit for purpose."
-- Sarah Wang
The implication is that while capital is available for robotics, the right kind of capital, with the right kind of diligence, is scarce. The presence of high-profile figures like Elon Musk pursuing humanoid robots certainly attracts capital and attention, potentially "willing into being" an industry. However, for investors who prioritize understanding the specific market dynamics and competitive landscapes, the vertical nature of most hardware applications remains a significant barrier. This creates a situation where promising hardware companies might struggle to secure funding not due to a lack of technological merit, but due to the difficulty in assessing their market viability through a horizontal investment lens.
The ASIC Arms Race: Custom Silicon as a Competitive Moat
The conversation around custom ASICs for AI models highlights a profound shift in the economics of AI development, driven by the sheer scale of training compute. The once-unthinkable idea of designing a custom Application-Specific Integrated Circuit (ASIC) for a single model training run is now economically justifiable. The speakers recall an earlier prediction that a $1 billion training run would necessitate custom silicon. The logic is stark: if a training run costs $1 billion, saving even 20% through an ASIC ($200 million) is enough to cover the cost of designing the chip, to say nothing of the potential for much larger savings (up to a factor of two, or $500 million). This economic reality has already begun to manifest, with companies like OpenAI confirming custom silicon deals.
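The break-even arithmetic the speakers describe is simple enough to write down. The function below just restates their figures; the ~$200 million design cost is the number implied in the discussion, and real non-recurring engineering costs will vary.

```python
def asic_breakeven(run_cost, compute_savings_frac, asic_design_cost):
    """Surplus (or shortfall) from designing a custom ASIC for one
    training run: compute dollars saved minus the chip's design cost."""
    return run_cost * compute_savings_frac - asic_design_cost

# Figures from the discussion: a $1B run, a ~$200M design cost,
# and savings anywhere from 20% up to a factor of two (50%).
print(round(asic_breakeven(1_000_000_000, 0.20, 200_000_000)))  # 0: breakeven
print(round(asic_breakeven(1_000_000_000, 0.50, 200_000_000)))  # 300000000 surplus
```

At 20% savings a single run already pays for the chip; at a factor of two, every subsequent run is $500 million of pure margin relative to generic GPUs.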
This trend has significant downstream consequences. It creates a powerful moat for companies that can afford this level of investment. By optimizing hardware specifically for their models, they achieve far greater efficiency and lower inference costs than those relying on generic, off-the-shelf GPUs. This not only reduces operational expenses but also allows them to potentially offer more competitive pricing or reinvest savings into further model development.
"So now you can literally justify economically -- not time-wise, that's a different issue -- an ASIC per model, because that's how much we leave on the table every single time we do it with generic Nvidia."
-- Alessio Fanelli
The implication for the broader AI ecosystem is a potential bifurcation. Companies with access to immense capital can build highly optimized, cost-effective inference engines, while those without may be left behind. This reinforces the "capital flywheel" concept: the more capital a frontier model company can raise, the more it can invest in custom silicon, which in turn improves efficiency and profitability, enabling it to raise even more capital. This dynamic suggests that the race for AI dominance may increasingly be a race for capital, enabling the creation of bespoke hardware that becomes a significant competitive advantage, difficult for smaller players or those building on top of existing models to overcome.
The Capital Flywheel: AI Models Outpacing Their Ecosystem
A central theme emerging from the discussion is the "capital flywheel" effect in AI, where foundational model companies can raise more capital than the aggregate of all companies building on top of them. This is a radical departure from previous technology cycles. Historically, infrastructure (like operating systems or cloud platforms) enabled a vast ecosystem of applications, and the value was distributed across this ecosystem. In the current AI cycle, the immense cost of training frontier models, coupled with the potential for massive efficiency gains through custom silicon and the sheer speed of capability breakthroughs, has inverted this dynamic.
The speakers articulate that if a company can raise more money than the total revenue of everyone using its models, it possesses an inherent advantage, regardless of whether it has achieved true Artificial General Intelligence (AGI). This is because capital provides the ammunition to out-invest, out-innovate, and potentially acquire or out-compete any entity built on its platform. The high margins of API businesses, coupled with the ability to train increasingly powerful models, create a self-reinforcing cycle.
"If I can raise more money than the aggregate of everybody that's using it, I will consume them whether I'm AGI or not."
-- Martin Casado
This creates a precarious situation for the broader AI ecosystem. Companies building applications or specialized models on top of foundational models face the constant threat of being out-capitalized by their own providers. The "innovator's dilemma" is amplified, as a foundational model provider can, with sufficient capital, develop its own specialized applications or models that directly compete with its partners. The market's willingness to fund these frontier models at astronomical valuations, often before significant revenue is generated, subsidizes this growth and reinforces the flywheel. This suggests that the long-term competitive advantage may lie not just in technological innovation, but in the ability to harness and direct vast amounts of capital, potentially leading to an oligopolistic market structure where a few dominant players control the core AI capabilities.
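One way to see why the flywheel condition bites is a deliberately crude simulation. Every parameter below (starting amounts, growth rates) is invented for illustration; the only thing carried over from the conversation is the comparison itself, capital raised versus aggregate ecosystem revenue.

```python
def simulate_flywheel(years, initial_raise, ecosystem_rev,
                      raise_growth=0.6, ecosystem_growth=0.5):
    """Toy model of the capital flywheel: a provider whose raises
    compound even slightly faster than its ecosystem's revenue
    eventually out-capitalizes that ecosystem. Returns, per year,
    whether raise > aggregate ecosystem revenue."""
    history = []
    r, e = initial_raise, ecosystem_rev
    for _ in range(years):
        history.append(r > e)
        r *= 1 + raise_growth      # capital -> efficiency -> larger next raise
        e *= 1 + ecosystem_growth  # ecosystem grows too, just more slowly
    return history

# Even starting behind ($2B raised vs $3B of ecosystem revenue), the
# faster compounding rate flips the comparison within a few years.
print(simulate_flywheel(8, 2.0, 3.0))
```

The point is not the specific numbers but the structure: once the provider's compounding rate exceeds the ecosystem's, the crossover is a matter of when, not if, which is exactly the "consume them whether I'm AGI or not" dynamic.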
The Enduring Importance of "Bedside Manner" in AI
Amidst the focus on raw capabilities and competitive capital, a nuanced insight emerges regarding the practical utility of AI models: the "bedside manner" is as crucial as the core competency. While specialized models might excel at specific tasks, the most effective models, particularly for complex, long-term endeavors, are those that can also engage in effective collaboration and communication. This is exemplified by the comparison between coding models and more general-purpose models like GPT-4/5.
The speakers observe that while a specialized coding model might identify difficult bugs faster, a more general model possesses a superior "bedside manner." This translates to better brainstorming, more effective partnership in complex problem-solving, and a more intuitive user experience. The ability of a model to act as a collaborator, to understand context, and to communicate effectively is not a secondary feature but a primary driver of value, especially in tasks that require iterative development, creative input, or navigating intricate domains like enterprise software implementation or legal frameworks.
"But I think Opus 4.5 actually has a great bedside manner. And it really matters if you're building something very complex, because you're a partner, a brainstorming partner, for somebody. And I think we don't discuss enough how every task kind of has that quality."
-- Swyx
This has significant implications for investment theses. While the market is fixated on raw capability benchmarks, the true differentiator for many applications may lie in the user experience and collaborative potential of the AI. This suggests that companies focusing solely on task-specific optimization might miss out on the broader value created by models that can integrate seamlessly into human workflows and act as true partners. The "bedside manner" isn't just about politeness; it's about the model's ability to augment human intelligence effectively across a wide range of tasks, making it a critical, though often under-discussed, factor in the success of AI-driven products and services.
Key Action Items
- Re-evaluate Traditional Software Investments: Actively seek out and invest in well-established, "boring" enterprise software companies with solid growth trajectories, recognizing their under-valuation in the current market. (Immediate to Ongoing)
- Develop Specialized Diligence for Hardware: For hardware and robotics, create or partner for specialized diligence capabilities that deeply understand specific vertical markets, rather than relying solely on horizontal tech expertise. (Next 6-12 months)
- Explore Custom Silicon Opportunities: For frontier model companies, investigate the economic viability and strategic advantage of developing custom ASICs to optimize inference costs and create a durable moat. (12-18 months)
- Build Defensible Niches in AI Ecosystems: For companies building on top of foundational models, focus on deep specialization, unique data moats, or exceptional user experience ("bedside manner") to create value that is difficult for model providers to replicate. (Ongoing)
- Prioritize Capital Efficiency and Funding Strategy: Founders of frontier AI models must develop robust strategies for capital raising, understanding that sustained access to capital is critical for hardware optimization and continued model development. (Immediate to Ongoing)
- Invest in AI Collaboration Skills: For individuals and teams, focus on developing skills that leverage AI as a collaborative partner, emphasizing communication, brainstorming, and complex problem-solving alongside technical AI proficiency. (Next quarter)
- Monitor the Capital Flywheel Dynamics: Investors should closely track the capital flows into frontier AI models relative to their ecosystems, anticipating potential consolidation and identifying opportunities in companies that can thrive despite this dynamic. (Ongoing)