US Tech Fascism: AI Drives State-Capital Alliance and Economic Precariousness

Original Title: The Year in Tech w/ Jathan Sadowski & Brian Merchant

The year 2025 revealed a stark, unsettling convergence: the ascendance of a techno-fascist ideology, not just within Silicon Valley's libertarian fringes, but deeply embedded within the state apparatus. This conversation with Jathan Sadowski and Brian Merchant unpacks how this fusion is reshaping political and economic landscapes, exposing the hidden consequences of prioritizing capital over democratic principles. For tech leaders, policymakers, and informed citizens, understanding this dynamic is crucial to navigating an increasingly complex and potentially authoritarian future. It also underscores the urgent need to ask who truly benefits from unchecked technological advancement.

The year 2025 was not merely a continuation of existing trends in the tech industry; it marked a definitive shift, a consolidation of power where the radical fringes of Silicon Valley and emergent fascist politics in the US found a potent, symbiotic alliance. This isn't about a sudden ideological awakening, but rather the mainstreaming of long-held, often esoteric, right-wing tendencies within the tech sector. As Jathan Sadowski and Brian Merchant discuss, the core tenet of this new paradigm is the subordination of the state to capital, where the market's expansion and the needs of capital become the primary, if not sole, purpose of governance. This has manifested in tangible ways, from direct political alliances to the strategic use of technology to reshape state functions and labor markets.

One of the most striking developments, as highlighted by Brian Merchant, was the formalization of the alliance between Silicon Valley and the Trump White House through an executive order aimed at preempting state-level AI legislation. This move, while perhaps legally limited, was symbolically potent, signaling a clear intent to leverage federal power to protect and advance specific tech interests, particularly in AI. The narrative around this alliance is crucial: it’s driven by figures like David Sacks and Peter Thiel, whose influence extends to direct advisory roles and even attempts to monetize access to political power. The willingness to push such measures, even against some internal opposition within the MAGA movement, underscores the deep entrenchment of this symbiotic relationship. This isn't just about lobbying; it's about a shared worldview where the state's role is to facilitate and protect capital's dominion.

"The purpose of the state is to support the needs of capital, to support the expansion of markets, and that has always been the role of the state in a capitalist political economy. But I think what we're seeing right now is a lot of the quiet parts, a lot of the more fringy parts, becoming very loud and very mainstream: in Silicon Valley, in the state, and in the wider public."

-- Jathan Sadowski

This fusion of state and capital has had profound implications for labor. The discussion around the "DOGE" initiative, an AI-first strategy within the government, serves as a stark example. While presented as a move toward efficiency, it became a tool for mass layoffs, hollowing out crucial state capacity under the guise of technological advancement. This wasn't just about job losses; it was about dismantling the administrative state, the very machinery that ensures day-to-day governance and regulatory oversight. The long-term consequence of this capacity elimination is a weakened state, a project actively pursued by right-wing think tanks like the Heritage Foundation, which view the administrative state as an obstacle to unfettered capital accumulation. Rebuilding this capacity, as Sadowski points out, is a monumental, multi-year undertaking, far more difficult than its destruction.

The impact on the labor market extends beyond the public sector. Companies like Amazon, Duolingo, and various tech firms have leveraged AI to justify layoffs, creating a climate of precarity for entry-level and creative workers. The symbolic Disney-OpenAI deal, in which a media giant essentially cedes its intellectual property to an AI firm, exemplifies how established industries are not fighting back against AI but are instead rolling over, seeking to profit from, or at least survive, the shift by aligning with the perceived locus of power. This capitulation, rather than a defense of intellectual property or industry foundations, highlights a broader trend of media companies prioritizing expediency over long-term value preservation.

The international dimension of this techno-fascist push is equally concerning. While the US consolidates its power, efforts towards digital sovereignty in places like Canada and Europe have faltered. The US, through its economic and political leverage, has managed to roll back or influence regulations that could challenge its tech giants. This global dynamic creates a bipolar world, forcing nations to align with either the US-centric "patriotic tech stack" or China's model, a geopolitical framing that simplifies complex realities and often serves the interests of dominant tech players. The EU's weakening of its own tech regulations, once seen as a global model, is a particularly disheartening development, suggesting a failure to meet the moment against an adversary unconcerned with traditional regulatory frameworks.

The AI bubble itself is another critical area to watch. The sheer scale of investment, the circularity of funding between AI companies and their investors, and the rapid depreciation of hardware like GPUs point to a precarious financial ecosystem. The comparison to subprime mortgages, with "subprime GPUs" being bundled and sold as financial instruments, is a chilling illustration of the financial engineering at play. The industry's relentless pursuit of growth, even in the face of studies showing low returns on AI pilot projects, suggests a desperate need to maintain valuations and secure liquidity events for early investors and employees. This race to cash out before the bubble bursts is a powerful, self-preservation motive that could lead to significant state intervention, including potential bailouts, further entrenching the state-capital nexus.

"The music is still playing, right? Once the music stops, I think things will fall in a way where everyone knows it's a bubble. I mean, everyone, people in tech, people on Wall Street, they all acknowledge it's a bubble pretty openly, so there's not even that cynical denial of it anymore."

-- Jathan Sadowski

Looking ahead to 2026, the political battleground over AI will be a key focus. We are seeing a fracturing of opposition, with the populist left, exemplified by Bernie Sanders' call for a moratorium on data center expansion, and a reactionary right, concerned with censorship and discrimination, both finding common cause against unfettered AI development. This creates an interesting dynamic, challenging the technocratic, pro-tech stance of many mainstream Democrats. Simultaneously, the economic pressures of the AI bubble will likely intensify. The pursuit of massive IPOs for companies like OpenAI, aiming for trillion-dollar valuations, will be a critical indicator of this financial maneuvering. The ultimate goal for many appears to be securing personal wealth and liquidity before any potential market correction, a strategy that may well involve state support and bailouts.

Key Action Items

  • Immediate Action (This Quarter):

    • Educate Yourself on State Capacity: Understand how government functions are being eroded or reshaped by technology and political agendas. This involves reading critical analyses and following the work of researchers like those mentioned in the conversation.
    • Analyze AI's Impact on Your Industry: Assess how AI is being implemented in your field, focusing not just on efficiency gains but on potential job displacement, skill obsolescence, and the concentration of power.
    • Support Independent Journalism: Subscribe to and support publications and newsletters that offer critical perspectives on tech and its societal impact, as these are vital in counteracting dominant narratives.
  • Medium-Term Investment (Next 6-12 Months):

    • Advocate for Regulation with Nuance: Engage in policy discussions, advocating for regulations that address the systemic risks of AI, including its environmental impact (e.g., data centers) and its effects on labor, rather than focusing solely on superficial issues.
    • Build Alternative Networks: Foster and participate in communities and organizations that prioritize ethical technology development, worker rights, and democratic oversight, creating counterweights to the dominant techno-capitalist model.
    • Diversify Skillsets: For professionals in tech and creative industries, focus on developing skills that are less susceptible to immediate AI automation or that complement AI capabilities, emphasizing human judgment, creativity, and critical thinking.
  • Long-Term Investment (12-18 Months and Beyond):

    • Champion Digital Sovereignty Initiatives: Support and advocate for policies that promote data localization, local tech development, and regulatory frameworks that protect national and regional interests from unchecked foreign tech influence.
    • Invest in Public Infrastructure: Advocate for and support investments in public institutions and knowledge-building capacities (e.g., research, education, public services) that have been hollowed out, recognizing their long-term societal value beyond immediate economic metrics.
    • Develop a Critical Framework for AI: Continuously refine your understanding of AI's limitations and risks, pushing back against hype and demanding transparency, accountability, and democratic control over its development and deployment.

---
Handpicked links, AI-assisted summaries. Human judgment, machine efficiency.
This content is a personally curated review and synopsis derived from the original podcast episode.