AI's Exponential Curve Challenges Linear Economic Systems
The viral "AI Doom Scenario" isn't a forecast, but a stark warning about the velocity of technological change and the market's unpreparedness for a rapid AI-driven transition. This conversation reveals hidden consequences: not just job displacement, but the potential for systemic financial instability and the erosion of established business moats, driven by AI's exponential capability curve. Investors, strategists, and policymakers should read this to understand the non-obvious risks and opportunities presented by AI's accelerating pace, gaining an advantage by preparing for a future that arrives faster than anticipated.
The Unforeseen Cascade: When AI's Exponential Curve Meets a Linear World
The market's recent skittishness, punctuated by the unexpected virality of a Substack post, highlights a profound disconnect: the exponential march of AI capabilities versus the largely linear, historically-paced adaptations of our financial and economic systems. James van Geelen of Citrine Research, a guest familiar to Odd Lots listeners for his prescient calls on AI infrastructure and GLP-1 drugs, co-authored "The 2028 Global Intelligence Crisis." While explicitly framed as a scenario analysis, not a prediction, its resonance underscores a deep-seated anxiety about the speed of AI's advancement and its potential downstream effects. The piece didn't merely tally prospective job losses; it mapped a potential cascade of economic disruption, revealing how a rapid capability curve could stress-test financial systems built on slower, more predictable technological adoption cycles.
The core of the Citrine argument is deceptively simple: AI's progress isn't following the familiar sigmoid curve of past technological revolutions, which unfolded over decades, allowing societies and economies to adapt. Instead, it's exhibiting an almost straight, upward trajectory, with capabilities expanding at an unprecedented rate. This acceleration has profound implications, particularly for white-collar work, which has historically been insulated from the immediate, disruptive impacts of automation.
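The distinction between the two trajectories can be made concrete with a small numerical sketch. The parameters below (midpoint, rate, slope) are illustrative assumptions chosen only to contrast a saturating sigmoid with a non-saturating line; they are not drawn from the Citrine analysis.

```python
import math

# Illustrative contrast: past technology adoption is often modeled as a
# logistic (sigmoid) curve that saturates, while the scenario described
# here assumes AI capability keeps climbing without a visible plateau.
# All parameters are hypothetical, chosen only to show the shapes.

def logistic(t, midpoint=25.0, rate=0.25):
    """S-curve: slow start, rapid middle, saturation near 1.0."""
    return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))

def straight_line(t, slope=0.04):
    """Non-saturating trajectory: capability keeps growing with time."""
    return slope * t

for years in (10, 25, 50):
    s = logistic(years)
    l = straight_line(years)
    print(f"year {years:2d}: sigmoid={s:.2f} (bounded), linear={l:.2f} (unbounded)")

# The gap that matters for adaptation: the sigmoid's growth *rate*
# collapses after its midpoint, giving institutions time to adjust;
# the straight line's pace never slackens.
assert logistic(50) - logistic(40) < logistic(30) - logistic(20)      # decelerating
assert straight_line(50) - straight_line(40) == straight_line(30) - straight_line(20)  # constant pace
```

The point of the sketch is the decade-by-decade increment, not the absolute level: on the sigmoid, most of the change is concentrated in one window and then tapers; on the straight line, every decade delivers the same jolt.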
"Every time that we get into a market that's similar to this, people start asking, what if this time is different? And I guess the thing that this piece did differently was it asked, what if this time is different where the period of transition has to respond to a very, very fast accelerating capability curve."
This isn't just about AI becoming "smarter." It's about the economics of cognitive tasks shifting dramatically. A task that was uneconomical to automate a year ago might be trivially cheap today. This rapid deflation in the cost of cognitive labor, van Geelen suggests, could lead to a faster-than-expected displacement of white-collar roles. The historical precedent of technological revolutions--from agriculture to the internet--involved transitions spanning 50 years or more. AI's current trajectory, compressing immense capability gains into months and years, bypasses this crucial adaptation period. This speed is precisely why conventional wisdom, which assumes ample time for retraining and economic rebalancing, may falter. The competitive advantage, therefore, lies not in resisting this change, but in understanding and preparing for its accelerated timeline.
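The break-even logic behind "uneconomical a year ago, trivially cheap today" can be shown with back-of-the-envelope arithmetic. The dollar figures and 75% annual cost decline below are hypothetical assumptions for illustration, not estimates from the piece.

```python
# Illustrative break-even arithmetic for the "cost of cognitive labor"
# point: a task flips to automation once the per-task AI cost falls
# below the per-task human cost. All figures are hypothetical.

human_cost_per_task = 25.00  # e.g. one hour of white-collar labor, fully loaded

def ai_cost_per_task(year: int, initial_cost: float = 120.0, annual_drop: float = 0.75) -> float:
    """Assume the AI cost per task falls by 75% each year from $120."""
    return initial_cost * (1 - annual_drop) ** year

for year in range(4):
    cost = ai_cost_per_task(year)
    status = "uneconomical" if cost > human_cost_per_task else "cheaper than human"
    print(f"year {year}: ${cost:7.2f}/task  ({status})")
```

Under these assumptions the crossover happens between year 1 and year 2: a task that made no economic sense to automate at $120 is suddenly a fraction of the human cost, which is the compressed adaptation window the argument turns on.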
The Unraveling of Moats: Agentic AI and the Erosion of Network Effects
Beyond the macroeconomic shockwaves, the Citrine scenario probes a more subtle, yet equally disruptive, consequence: the potential disintegration of business moats built on network effects and intermediation. Companies in sectors like payments, delivery, and even enterprise software have long relied on these entrenched advantages. However, the rise of agentic AI--autonomous agents capable of performing complex tasks with persistent, tireless efficiency--threatens to dismantle these defenses.
Consider the delivery industry. For years, platforms have aggregated customers and drivers, creating a powerful network effect. The cost of building and maintaining such a network, coupled with the inherent friction of matching supply and demand, has allowed these companies to extract significant rents. But what happens when an AI agent, with no capacity for tedium, is instructed to find the absolute cheapest option for a burrito? It wouldn't be constrained by brand loyalty or existing platform usage. Instead, it would scour all available listings, potentially bypassing established intermediaries altogether.
"AI agents do not experience tedium, right? So the kind of way that there are a lot of layered intermediation and rent kind of extraction layer in the economy. And then there are a lot of places where having a like an oligopoly essentially has allowed margins to really be artificially increased."
This is a critical distinction from earlier comparison shopping websites. Those required active user effort. Agentic AI, however, operates proactively and exhaustively. This capability could fundamentally alter pricing power for incumbents. While enterprises might be slower to adopt radical alternatives, the mere possibility of an AI agent finding a cheaper, albeit perhaps "shoddier," alternative could pressure pricing. The "paperclip problem" analogy--where an AI, given a simple objective, pursues it with relentless, unforeseen consequences--aptly describes how agentic commerce could reroute value away from established players, creating an unexpected advantage for nimble, AI-native solutions. This suggests that long-term competitive moats may need to be rebuilt on foundations that AI cannot easily replicate.
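The burrito example above reduces to a simple exhaustive search. The sketch below is a minimal, hypothetical illustration of that "tireless agent" behavior; the platforms, prices, and fee structures are invented for the example.

```python
# A minimal sketch of the "tireless agent" idea: given a goal
# ("cheapest burrito"), the agent scans every listing across every
# channel rather than defaulting to one intermediary. All listings,
# fees, and platform names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Listing:
    platform: str
    item_price: float
    platform_fee: float   # the intermediation "rent" layer
    delivery_fee: float

    @property
    def total(self) -> float:
        return self.item_price + self.platform_fee + self.delivery_fee

def cheapest(listings: list[Listing]) -> Listing:
    """Exhaustive search: no brand loyalty, no tedium, no default platform."""
    return min(listings, key=lambda l: l.total)

listings = [
    Listing("BigDeliveryApp",   9.00, 3.50, 4.00),  # incumbent, high fees
    Listing("RivalApp",         9.50, 2.00, 3.00),
    Listing("RestaurantDirect", 10.00, 0.00, 2.50), # bypasses the intermediary
]

best = cheapest(listings)
print(f"Agent orders from {best.platform} at ${best.total:.2f}")
```

Note what the objective function omits: brand, habit, and installed-app inertia contribute nothing to `total`, so the incumbent's network effect carries zero weight in the decision, even when its item price is lowest.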
The Financial System's Blind Spot: Private Credit and the Unmodeled Risk
The conversation also illuminated a significant vulnerability within the financial system: its potential unpreparedness for a wave of defaults in AI-disrupted industries, particularly within private credit. While the immediate market reaction focused on software stocks, the underlying concern extends to the broader credit landscape. Private credit, by its nature, often assumes a degree of stability in its underlying borrowers and their revenue streams. However, the rapid, unpredictable disruption enabled by AI introduces a new class of risk that may not be adequately captured by traditional underwriting models.
While the scenario doesn't single out private credit as a guaranteed failure, it poses a critical question: have lenders sufficiently adjusted their assumptions regarding income, recurring revenue, and the long-term viability of businesses in sectors facing rapid AI-driven obsolescence? The "cockroach scenario" of financial stability, where systems absorb shocks without widespread contagion, is a plausible outcome. However, the systemic nature of AI's potential impact--affecting not just one sector but a broad range of white-collar industries simultaneously--presents a novel challenge. The rapid integration of life insurers into private credit, providing "permanent capital," offers a buffer, but regulatory shifts in how this capital is treated could introduce unforeseen vulnerabilities. The core insight here is that financial systems, accustomed to gradual technological evolution, may struggle to adapt to AI's accelerated disruption, creating a delayed payoff for those who can anticipate and navigate these emerging credit risks.
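One way to make the underwriting question concrete is a simple stress test that haircuts borrower cash flows in proportion to AI exposure and recomputes interest coverage. Everything below (borrowers, exposure scores, the 50% loss assumption) is a hypothetical sketch, not a model from the scenario.

```python
# Hypothetical stress test: shrink EBITDA in proportion to an assumed
# "AI exposure" score and check interest coverage. All borrower data
# and parameters are illustrative assumptions.

borrowers = [
    {"name": "LegacySoftwareCo", "ebitda": 50.0, "interest": 20.0, "ai_exposure": 0.40},
    {"name": "LogisticsCo",      "ebitda": 30.0, "interest": 10.0, "ai_exposure": 0.10},
    {"name": "BPOServicesCo",    "ebitda": 15.0, "interest": 12.0, "ai_exposure": 0.60},
]

def coverage(ebitda: float, interest: float) -> float:
    """Interest coverage ratio; below 1.0x the borrower cannot service debt."""
    return ebitda / interest

def stressed_coverage(b: dict, haircut_scale: float = 0.5) -> float:
    """Assume half of AI-exposed earnings are actually lost (haircut_scale)."""
    stressed_ebitda = b["ebitda"] * (1 - b["ai_exposure"] * haircut_scale)
    return coverage(stressed_ebitda, b["interest"])

for b in borrowers:
    base = coverage(b["ebitda"], b["interest"])
    stressed = stressed_coverage(b)
    flag = "  <-- below 1.0x under stress" if stressed < 1.0 else ""
    print(f'{b["name"]}: {base:.2f}x -> {stressed:.2f}x{flag}')
```

The instructive case is the borrower that looks serviceable at baseline but breaches 1.0x under the haircut: a book underwritten on stable recurring revenue can hide exactly this kind of correlated, sector-wide sensitivity.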
The Policy Void: A Chasm Between Awareness and Action
Perhaps one of the most striking, and concerning, aspects of the discourse surrounding the Citrine scenario is the apparent chasm between public and market awareness of AI's potential disruption and the lack of substantive policy discussion in Washington, D.C. While executives at the AI labs themselves are reportedly urging greater government attention to the potential for widespread disruption and redistribution, policymakers seem hesitant to engage with the implications.
This inaction is problematic. While government intervention can indeed stabilize economies during transitions, as seen during the COVID-19 pandemic, effective policy requires foresight. The Citrine piece, despite its potentially aggressive timeline, serves as a valuable exercise in mapping potential future outcomes and identifying key indicators to monitor. The lack of robust data collection on white-collar job composition and the specific impacts of AI leaves policymakers ill-equipped to respond proactively. The historical parallel of the Luddites, who resisted technological change with devastating consequences for their livelihoods, serves as a cautionary tale. While outright resistance to AI is likely futile, the speed of the current transition--estimated by van Geelen to be closer to 5-15 years rather than the 20-30 years of previous revolutions--demands a more urgent and informed policy response. The advantage lies with those who recognize this urgency and advocate for proactive frameworks, rather than waiting for a crisis to unfold.
Key Action Items
Immediate Action (Next 1-3 Months):
- Scenario Planning Integration: Incorporate AI-driven disruption scenarios into existing business risk assessments and strategic planning processes.
- Skills Gap Analysis: Identify critical roles and skills within your organization that are most susceptible to AI automation and begin mapping potential reskilling pathways.
- Competitive Landscape Monitoring: Actively track AI adoption by competitors and emergent AI-native startups, focusing on their impact on customer acquisition and retention.
- Financial System Stress Testing: For financial institutions, review credit underwriting models for assumptions related to AI-driven industry disruption and technological obsolescence.
Medium-Term Investment (Next 6-18 Months):
- AI Literacy & Adoption Programs: Invest in training and development to foster AI literacy across the workforce, encouraging experimentation with AI tools for productivity enhancement.
- Enterprise Software Review: Re-evaluate existing enterprise software contracts and vendor relationships, assessing their long-term defensibility against AI-powered alternatives and potential pricing pressures.
- Policy Engagement & Advocacy: Actively engage with industry groups and policymakers to advocate for proactive policy frameworks that address AI's societal and economic impacts, focusing on data collection and adaptive regulatory approaches.
Long-Term Strategic Investment (18+ Months):
- AI-Native Business Model Development: Explore and pilot business models that are inherently designed to leverage AI capabilities, rather than retrofitting existing structures.
- Continuous Learning Culture: Cultivate an organizational culture that prioritizes continuous learning and adaptability, recognizing that the pace of technological change will only accelerate.
- Economic Transition Support Mechanisms: For governments and large organizations, begin developing robust support mechanisms for workers displaced by AI, focusing on retraining, income support, and fostering new employment sectors.