
AI Hype Confronts Economic Reality: Unsustainable Growth and Diminishing Returns

Original Title: Ep 386: Was 2025 a Great or Terrible Year for AI? (w/ Ed Zitron)

TL;DR

  • The AI industry's focus on "agents" in early 2025 was largely a marketing tactic to sustain investment rather than a reflection of genuine capability, as demonstrated by OpenAI's subsequent de-emphasis of the technology.
  • The narrative around AI's rapid progress and potential for job displacement, exemplified by figures like Dario Amodei, often extrapolates from narrow benchmark successes to broad economic impacts, masking underlying inefficiencies and profitability challenges.
  • Despite significant investment and hype, the core AI models of 2025, including GPT-5, showed diminishing returns from pure scaling, forcing a shift towards less efficient but more benchmark-friendly tuning and reasoning techniques.
  • The vast sums spent on AI inference, particularly by companies like OpenAI, reveal a fundamental economic challenge: operational expenses scale directly with revenue, making the underlying business model unsustainable.
  • The proliferation of AI-generated content and the shift towards algorithmic curation on social media platforms represent a loss of competitive advantage for these platforms, as they increasingly resemble undifferentiated "slop" battling for attention.
  • The AI safety discourse, particularly concerning existential risks, is often driven by a desire for attention and influence rather than concrete solutions to present-day harms like data theft, environmental impact, or labor exploitation.
  • The perceived "intelligence" of AI models, as in the story of an AI "blackmailing" an engineer, is often a misinterpretation of their text-completion capabilities: the model is merely finishing a narrative based on patterns in its training data.

Deep Dive

2025 was a year of intense AI hype, characterized by inflated claims, massive spending, and a subsequent confrontation with economic reality. The initial excitement surrounding new models and ambitious predictions about AI's impact on jobs and society began to falter as the year progressed, revealing significant inefficiencies and a lack of genuine progress beyond incremental improvements. This realization marked a crucial turning point, shifting the dominant narrative from boundless optimism to a more critical examination of AI's true capabilities and economic viability.

The early months of 2025 were dominated by a series of high-profile AI announcements that, in retrospect, failed to deliver on their revolutionary promises. The emergence of DeepSeek, a Chinese AI startup that claimed to train models at a fraction of the cost of American counterparts, initially spooked the market by highlighting potential cost efficiencies. However, this story was largely memory-holed, possibly due to xenophobic reactions and a reluctance from major AI companies to acknowledge that cheaper, effective models were possible. Simultaneously, the concept of AI agents, heavily promoted as the next frontier of digital labor, proved to be largely marketing hype. Despite Sam Altman's pronouncements and Salesforce's ambitious "Agentforce" vision, the actual capabilities of these agents, such as OpenAI's Operator, were rudimentary, failing to deliver on the promise of autonomous digital labor and eventually leading OpenAI to de-emphasize them.

As the year unfolded, the economic realities of AI development began to surface, challenging the prevailing narrative of explosive growth. The much-hyped GPT-5 release, while technically notable in areas like model routing (a dispatcher that sends each query to a cheaper or more expensive underlying model), did not represent a quantum leap in intelligence and was plagued by significant cost inefficiencies. Reports emerged that the underlying technology was less efficient than advertised, increasing operational costs rather than reducing them, a stark contrast to the initial claims. This underwhelming performance, coupled with the proliferation of increasingly expensive GPU infrastructure, fueled growing skepticism about the AI bubble. Major publications began to question the sustainability of AI investments, comparing the current landscape to the dot-com bust and highlighting the massive discrepancy between what AI companies spend on inference and what they earn in revenue. This economic scrutiny intensified with leaked financial data revealing OpenAI's staggering inference costs, far exceeding its revenue, and similar patterns at other AI startups like Anthropic.
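
To make the routing idea concrete, here is a minimal sketch of the general pattern reported for GPT-5: a dispatcher that decides, query by query, between a cheap fast model and an expensive reasoning model. The model names, heuristics, and per-token prices below are invented for illustration; this is not OpenAI's actual routing logic.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # illustrative dollars, not real pricing

FAST = Model("fast-chat", 0.0005)          # hypothetical cheap default
REASONING = Model("deep-reasoner", 0.015)  # hypothetical expensive reasoner

def route(query: str) -> Model:
    """Pick a model from crude surface features of the query.

    Real routers are learned classifiers; this keyword check only shows
    the shape of the idea: most traffic goes to the cheap model, and only
    queries that look hard pay for reasoning-style inference.
    """
    hard_signals = ("prove", "step by step", "debug", "optimize")
    looks_hard = len(query) > 400 or any(s in query.lower() for s in hard_signals)
    return REASONING if looks_hard else FAST

if __name__ == "__main__":
    for q in ("What's the capital of France?",
              "Prove that this algorithm terminates, step by step."):
        model = route(q)
        print(f"{model.name:>13} (${model.cost_per_1k_tokens}/1k tokens) <- {q!r}")
```

Even in this toy, the economic catch the episode dwells on is visible: every query sent to the reasoning model costs 30x more to serve, so tuning for benchmark-friendly reasoning pushes inference spend up rather than down.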

The latter half of 2025 saw a significant shift away from speculative existential risks and towards a more grounded assessment of AI's current limitations and economic unsustainability. The "AI 2027" scenario, predicting imminent superhuman AI and existential threats, was largely dismissed as a grift by many, including the episode's guest, who argued that its underlying premise of AI researching AI was unproven and detached from current LLM capabilities. Similarly, prominent AI figures like Geoffrey Hinton, while raising concerns about future AI risks, were criticized for focusing on distant, hypothetical threats rather than addressing current harms like environmental impact and labor exploitation. The narrative began to coalesce around the idea that the AI boom, while producing impressive demonstrations, was economically unsustainable. The year concluded with a series of deals between AI companies and hardware providers that, upon closer examination, appeared to be more about future promises than present capabilities, further highlighting the precarious financial footing of the industry.

The overarching implication of 2025 for AI is that the era of unchecked hype and speculative investment is likely drawing to a close. The disconnect between the promised transformative capabilities and the economic realities of development and deployment has become too significant to ignore. While AI technology will continue to advance, the focus is likely to shift from grand, abstract visions to more practical, cost-effective applications, forcing a recalibration of expectations and business models. The year's events suggest a move from a "growth at all costs" mentality to one that prioritizes demonstrable utility and economic sustainability.

Action Items

  • Audit AI industry narratives: For 3-5 key AI claims (e.g., job displacement, agent capabilities), identify supporting evidence versus marketing spin.
  • Track AI company financial viability: For 2-3 major AI companies, analyze revenue vs. inference costs to assess long-term economic sustainability (a minimal calculation sketch follows this list).
  • Evaluate AI's impact on user behavior: For 3-5 AI-driven platforms, measure changes in user engagement patterns and content consumption habits.
  • Investigate AI model efficiency claims: For 2-3 recent AI model releases, research and document actual inference costs and performance gains.
  • Analyze executive AI adoption strategies: For 2-3 non-AI companies investing in AI, document the strategic rationale and expected ROI.
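
For the financial-viability item above, a back-of-the-envelope check is enough to test the episode's central economic claim: if inference costs scale roughly linearly with usage, growing revenue does not close the gap. The figures below are placeholders, not real numbers for any company; swap in values pulled from earnings reports or credible leaks.

```python
def inference_margin(revenue: float, inference_cost: float) -> float:
    """Gross margin on inference as a fraction of revenue (negative means losing money on every sale)."""
    return (revenue - inference_cost) / revenue

# Assumed scenario: serving costs are proportional to usage, so costs grow
# in lockstep with revenue. The 1.3 ratio is invented for illustration.
COST_TO_REVENUE_RATIO = 1.3  # $1.30 of inference spend per $1.00 of revenue

for revenue_bn in (1.0, 2.0, 4.0):  # revenue in $B, doubling each period
    cost_bn = revenue_bn * COST_TO_REVENUE_RATIO
    print(f"revenue ${revenue_bn:.1f}B | inference ${cost_bn:.1f}B | "
          f"margin {inference_margin(revenue_bn, cost_bn):+.0%}")
```

Every row prints the same -30% margin: as long as the cost-to-revenue ratio stays above 1, growth only scales the losses, which is why that ratio, not headline revenue, is the number to audit.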

Key Quotes

"so much happened in the world of ai in 2025 that it can actually be hard to keep track of it all i mean remember deepseek that was in 2025 as was dario amodei saying that we were going to lose half of white collar jobs to ai as well as gpt 5's release the release of assora ai looking like the best investment ever followed by ai being described as a giant bubble that was going to bring down the economy followed by that bubble being described as actually not being so bad this was also the year where nvidia ceo jensen huang took the stage in a conference wearing a jacket that well i'll be honest looks like it came from the prop department for a mad max movie jesse let's put this on the screen here i mean dude you're a computer scientist you're wearing a racer jacket i love it i'm here for it what i'm trying to say is a lot happened in the world of ai in the year that just ended and the key question that i've been grappling with is did this year end up being a great year for ai or a terrible one i would believe either answer and so much happened it could be really hard to try to keep it all straight"

Cal Newport introduces the overwhelming volume of AI news in 2025, highlighting the difficulty in assessing whether the year was ultimately positive or negative for the field. He notes the rapid succession of major AI developments and claims, suggesting that either a "great" or "terrible" conclusion could be argued.


"the secret to ed's success is pretty simple he just does his homework like he actually talks to sources he talks to reporters he reads earning reports he gets leaked information he talks to people within these companies he puts together the pieces old fashioned shoe leather reporting on what's actually happening with these businesses as opposed to reporting on the stories these businesses are telling about what their technology may or may not do"

Cal Newport explains Ed Zitron's effectiveness as an AI commentator, emphasizing his commitment to thorough research and direct engagement with sources. Newport contrasts Zitron's approach with reporting based on corporate narratives, highlighting the value of "shoe leather reporting" in uncovering the reality of AI businesses.


"deepseek was a really interesting one i remember i was on a plane i was the i was just it was like just got started to move back to new york and such like i spent a lot of time there and i remember reading about this thing and what it was was that it was a model that was trained for less money than other american models so american models that cost like 50 100 million dollars or more to train deepseek apparently cost 5 3 million i think to train it's really weird because it spooked the entire market like everyone freaked out i remember thinking yeah i remember thinking this is an obtuse story to freak people out like it was just like even trying to explain because i did like a bunch of media at the time i was explaining it to people i was shocked that people even had any interest in model training but the big thing that spooked people was it was kind of the thing that shone a spotlight on the nvidia problem which is that nvidia is like the only company really making money in this era or just and i think that people started to realize oh crap our entire stock market is based on that and it also made it clear that all the american model companies don't really give a crap about any kind of efficiency or anything"

Ed Zitron discusses the DeepSeek AI model, noting its significantly lower training cost compared to American counterparts. Zitron explains that this efficiency revelation spooked the market and highlighted the industry's reliance on Nvidia, while also suggesting that major American AI companies were not prioritizing efficiency.


"axios does this by the way i don't know if you've seen this as a reporter it's really a pain they invent a quote they paraphrase what someone said into a better form and then we'll say like this headline is 2025 is the year of agents open ai cpo says you would assume that means that the open ai cpo said 2025 is the year of ai agents he did not now he talked about 20 he said things that were less quotable they did the same thing with the blood bath and dario amodei he never actually said it was going to be a blood bath but they had a headline that said it this year is going to be a blood bath dario amodei says so anyways i watched the no axios had a quote as well where it was like this is proof that ai is taking jobs and then you read the study and it's one line saying yeah we kind of see some effect it's very frustrating because it's just marketing it's not helping sorry i'm just going to yeah i know it's so it does it does i actually told i was i was talking to i was talking to uh speaking of ai skeptics i was talking to gary marcus not long ago and he happened to be on his way to do something at axios and i was like you got to tell them to stop doing the headlines because i keep getting dinged by fact checkers afterwards like we cannot find evidence of this"

Ed Zitron expresses frustration with news outlets like Axios for paraphrasing or inventing quotes to create more sensational headlines. Zitron provides examples of this practice, including claims about AI agents and economic "bloodbaths," arguing that this marketing-driven approach distorts the actual reporting and misleads the public.


"gpt 4 5 is ready good news it's the first model that feels like talking to a thoughtful person to me i've had several moments where i've sat back in my chair and been astonished getting actually good advice from an ai bad news it's a giant expensive model we really wanted to launch it to plus and pro at the same time but we've been growing a lot and are out of gpus we will be adding tens of thousands of gpus next week and roll it out to the plus tier then hundreds of thousands of gpus coming soon i'm pretty sure y'all will use every one we can rack up this isn't how we want to operate but it's hard to perfectly perfectly even predict growth surges that lead to gpu shortages a heads up this isn't a reasoning model and won't crush benchmarks it's a different kind of intelligence and there's a magic to it i haven't felt before really excited for people to try it"

Sam Altman's tweet from February 27th describes GPT-4.5 as a significant advancement, feeling like conversing with a thoughtful person and offering good advice. However, Altman also acknowledges its high cost and the GPU shortage impacting its rollout, clarifying that it is not a reasoning model and its "magic" lies in a different form of intelligence.


"the big thing that that seemed to really open up was a story that really only you had been covering for a year so for a year plus you had been actually gathering earnings revenue numbers you've been looking at earnings reports and you had been making the case for about a

Resources

External Resources

Books

  • "The Attention Merchants" by Tim Wu - Referenced as a source discussing the emergence of advertising-supported media and the attention economy.

Articles & Papers

  • "Scaling Laws for Neural Language Models" - Mentioned in relation to the project at OpenAI that aimed to make models ten times larger.
  • "What If AI Doesn't Get Much Better Than This?" (New Yorker) - Discussed as an article that questioned the future trajectory of AI development following the underwhelming release of GPT-5.
  • "Spending on AI at Epic Levels: Will It Ever Pay Off?" (Wall Street Journal) - Referenced as an example of financial analysis concerning AI companies.
  • "AI 2027" - Described as a scenario that predicted significant impacts of superhuman AI, including potential existential risks.
  • "Your Brain on ChatGPT" (MIT Media Lab) - Discussed as a study that introduced the concept of "cognitive debt" and suggested AI use could lead to worse writing and reduced learning.
  • "The Information" - Mentioned as a publication that reported on OpenAI's internal memo regarding economic headwinds and de-emphasizing agents.

People

  • Ed Zitron - Guest on the episode; host of the "Better Offline" podcast and author of the "Where's Your Ed At" newsletter, noted for his AI commentary.
  • Dario Amodei - Quoted regarding predictions about AI eliminating white-collar jobs and his views on AI capability levels.
  • Jensen Huang - CEO of Nvidia, mentioned for his appearance at a conference and discussions about GPU shipments and AI scaling.
  • Sam Altman - CEO of OpenAI, quoted on GPT-4.5, AI agents, and GPT-5, and his comparison to Oppenheimer.
  • Marc Benioff - CEO of Salesforce, mentioned in relation to the concept of "digital labor" and AI agents.
  • Carl Brown - Host of the "Internet of Bugs" YouTube channel, quoted on the utility and limitations of AI code completion.
  • Gary Marcus - AI skeptic, mentioned in discussions about AI safety and harms.
  • Geoffrey Hinton - AI pioneer, discussed in relation to AI safety concerns and his motivations for speaking out.
  • Daniel Kokotajlo - Former OpenAI employee, associated with the "AI 2027" scenario and concerns about AI safety.
  • Eliezer Yudkowsky - Associated with LessWrong and effective altruism, mentioned for his views on AI risks.
  • Michael Burry - Mentioned in relation to Disney's investment in OpenAI's Sora.
  • Bob Iger - CEO of Disney, discussed in relation to Disney's investment in OpenAI.
  • Kevin Roose - New York Times reporter, mentioned for his reporting on AI, including a story about GPT-4 and a "blackmailing" AI.
  • Derek Thompson - Substack essayist, mentioned for his work on the internet becoming like television.
  • Tim Wu - Author, mentioned for his book "The Attention Merchants."
  • Walter Ong - Mentioned in relation to the concept of "culture of literacy."
  • Charlie Munger - Mentioned for his blog on scaling laws.
  • Nate - Mentioned as someone who assisted in analyzing a comment for AI generation.
  • Jesse - Mentioned in relation to the cost of his truck repairs and putting comments on screen.
  • Cal Newport - Host of the "Deep Questions" podcast and author of a newsletter.
  • Kevin Scott - CTO of Microsoft, mentioned in a conversation with Kevin Roose.

Organizations & Institutions

  • OpenAI - Central organization discussed throughout the episode, particularly regarding GPT models, agents, and financial performance.
  • Nvidia - Referenced for its role in supplying GPUs for AI training and inference, and its financial performance.
  • DeepSeek - Chinese AI startup mentioned for its cost-effective model training and its impact on the market.
  • Anthropic - AI company mentioned in relation to its models and financial performance.
  • Microsoft - Mentioned in relation to its partnership with OpenAI and its own AI investments.
  • Google - Referenced for its Gemini models and infrastructure efficiency.
  • Meta - Discussed in relation to its social media platforms (Facebook, Instagram), the metaverse, and AI strategy.
  • Salesforce - Mentioned in relation to the concept of "digital labor" and AI agents.
  • Apple - Mentioned in relation to a paper on AI reasoning.
  • ASU (Arizona State University) - Mentioned as a source of research poking holes in AI reasoning narratives.
  • MIT Media Lab - Source of the "Your Brain on ChatGPT" study.
  • MIT Tech Review - Mentioned for a piece on compute usage in AI.
  • The Information - Publication that reported on OpenAI's internal memo regarding economic headwinds and de-emphasizing agents.
  • Fortune - Publication that featured an interview with Dario Amodei.
  • The New Yorker - Publication that featured an article on AI bubbles.
  • The Wall Street Journal - Publication that covered AI spending and OpenAI's Orion project.
  • The New York Times - Publication that covered AI, including a story on GPT-5 and OpenAI's financial situation.
  • BBC - Mentioned for an article about DeepSeek.
  • Axios - Mentioned for paraphrasing quotes and creating headlines.
  • Effective Altruism (EA) - Movement discussed in relation to AI safety and existential risk concerns.
  • LessWrong - Online forum associated with effective altruism.
  • FTX - Mentioned in connection with effective altruism.
  • Disney - Referenced for its investment in OpenAI's Sora and its use of AI.
  • Oracle - Mentioned for its deal with OpenAI regarding data centers and compute.
  • AMD - Mentioned for a deal with OpenAI regarding data centers and stock.
  • Broadcom - Mentioned for a deal with OpenAI regarding data centers.
  • Amazon Web Services (AWS) - Discussed in comparison to OpenAI's infrastructure costs and its own development path.
  • Amazon - Mentioned in relation to AWS and the dot-com bubble.
  • Cisco - Mentioned as a company that survived the dot-com bubble.
  • Lucent Technologies - Mentioned in relation to the dot-com bubble and its loan to Winstar Communications.
  • Winstar Communications - Mentioned in relation to the dot-com bubble.
  • Netflix - Mentioned in relation to ExpressVPN.
  • CNET - Mentioned as a tech reviewer.
  • The Verge - Mentioned as a tech reviewer.
  • Caldera Lab - Mentioned as a sponsor.
  • Reclaim AI - Mentioned as a sponsor.
  • ExpressVPN - Mentioned as a sponsor.
  • BetterHelp - Mentioned as a sponsor.
  • YouTube - Platform where comments were pulled from.
  • Facebook - Social media platform discussed in comments.
  • Instagram - Social media platform discussed in comments.
  • TikTok - Social media platform discussed in comments.
  • Twitter - Social media platform discussed in comments.
  • Sora (app) - Mentioned as an AI distraction source.
  • Nielsen Audimeters - Devices used to measure TV usage.
  • Zenith Color TV - Mentioned as an example of older television technology.
  • CNN - Mentioned in relation to Kevin Roose's reporting.

---
Handpicked links, AI-assisted summaries. Human judgment, machine efficiency.
This content is a personally curated review and synopsis derived from the original podcast episode.