AI's Inevitable Collision With State Power and National Security
The government's threat to destroy Anthropic for refusing to remove safeguards against mass domestic surveillance and autonomous weapons is not a hypothetical scenario; it is a stark illustration of how powerful AI will inevitably collide with state power. This conversation surfaces the hidden consequences of AI development: unavoidable entanglement with national security interests, and fundamental questions about who controls powerful technologies and to what end. Anyone working in AI, policy, or national security should read it to understand the trade-offs at stake. AI's power will push governments to assert control, potentially through forceful means, and private companies must grapple with their role in a world increasingly shaped by state interests.
The Inescapable Embrace of State Power: AI's Collision Course with National Security
The recent standoff between the U.S. Department of Defense and AI company Anthropic, where the government designated Anthropic a "supply chain risk" for refusing to remove safeguards against mass domestic surveillance and autonomous weapons, serves as a critical inflection point. Ben Thompson, founder of Stratechery, dissects this event not through a lens of right or wrong, but by illuminating the underlying, often unstated, dynamics at play. His analysis, rooted in a long-standing concern about AI's interaction with "the world of guns," suggests that the power of AI, if as transformative as its creators claim, will inevitably draw the attention and intervention of those who wield state power. This isn't about the morality of AI or the specific actions of any single entity; it's about the systemic response to unprecedented technological capability.
Thompson argues that the fundamental question facing private AI companies is whether they are merely American companies subject to American law, or if they possess a moral imperative to support the U.S. military. This tension, he posits, has been brewing for years, manifesting in debates around selling advanced chips to China and the ethical implications of AI in warfare. The Anthropic incident, however, brings these theoretical discussions into sharp, real-world focus.
The Unintended Consequences of "Alignment"
The concept of "alignment" in AI, often discussed as aligning AI with humanity's general interests, takes on a far more specific and potentially adversarial meaning when viewed through the lens of state power. Thompson highlights the danger of unilaterally imposing restrictions, even those perceived as beneficial, without fully accounting for the geopolitical ramifications.
"if ai is as powerful as people say it is going to be then there are going to be real world reactions to that and if we're going to analogize it to nuclear weapons as dario amodei has done repeatedly you have to think through what's what would happen in a world where a private company developed nuclear weapons what would the government's response be"
This analogy to nuclear weapons is potent. Just as nations grappled with the proliferation and control of nuclear technology, the development of powerful AI by private entities presents similar, if not more complex, challenges. The government's potential response, Thompson suggests, could range from compelling access to technologies to actively hindering their development if they are perceived as a threat or not aligned with national interests. This could manifest as actions against companies, or even against critical infrastructure like chip foundries, if a nation-state perceives a strategic advantage in doing so.
The nuance Thompson seeks is in recognizing that while certain restrictions might seem ethically sound in isolation, their absolute imposition can create dangerous equilibria. For instance, restricting chip sales to China, while seemingly a way to slow their AI advancement, could, in a world of super-powerful AI, incentivize a more aggressive approach from China if they feel strategically disadvantaged. This creates a complex web of trade-offs where the "optimal" decision for one actor might be catastrophic for another, leading to unpredictable and potentially violent reactions.
The Shifting Landscape of Governance and Property Rights
The conversation touches upon a broader societal shift where fundamental questions about governance and property rights, long considered settled, are being re-examined in the age of AI. Thompson expresses frustration with the lack of public discourse on the implications of AI for national sovereignty and international law.
"all these rights all these laws are subject to the agreement of those governed by them to follow them and the final say is those who successfully inflict violence"
This statement underscores a critical, albeit uncomfortable, reality: legal frameworks and societal norms are ultimately underpinned by the capacity for coercion. As AI becomes a significant source of power, it inevitably challenges existing power structures and the laws that govern them. This, Thompson implies, is why the government might feel compelled to act decisively, even if it means confronting private companies. Letting a private executive like Dario Amodei, rather than an elected body, make weighty decisions about AI deployment amounts to a surrender of the democratic process; however understandable given the current political climate, that is a perilous notion.
The historical parallel with Intel is illustrative: Intel sold chips to the government but declined to design chips specifically for it. Its foresight was to prioritize the largest possible market--consumers and businesses--to accelerate technological advancement. This approach, driven by economic necessity, contrasts sharply with government-led efforts like nuclear weapons development, which began under state control with assumptions of guaranteed demand. AI, a capital-intensive technology that requires a broad market to be viable, is inherently starting from a private-sector, consumer-facing position. This creates a fundamental tension when the government, one customer among many, seeks to impose its specific requirements, potentially disrupting the economic engine that makes AI development possible.
The Uncomfortable Middle Ground: Lobbying for New Laws
Thompson proposes a pragmatic, albeit challenging, path forward: instead of outright confrontation, companies like Anthropic could engage with the government by lobbying for new legislation. This approach, he suggests, could address concerns about digital surveillance while also aligning with democratic processes.
"my prescription for anthropic to give in is to allow these massive loopholes to be exploited and for the nsa to allegedly in the service of investigating foreign adversaries but by you know the process basically surveilling the domestic population i think is bad and the reality is the nature of tradeoffs is you're choosing between multiple bad options"
The frustration with the current situation stems from the perception that existing laws, designed for a pre-digital era, are ill-equipped to handle the scale and nature of digital surveillance enabled by computers and AI. Antitrust laws, for example, historically focused on controlling supply, but the power of modern surveillance stems from controlling demand and the ability to process vast amounts of data. Thompson's argument is that retrofitting old laws is ineffective; new legal frameworks are needed. The common objection--that passing new laws is impossible--carries its own implications: accepting it means accepting that unelected, unaccountable individuals will make critical decisions, a scenario that undermines democratic principles.
The podcast also touches on the differing approaches of AI companies like OpenAI and Anthropic in their dealings with the Pentagon. While OpenAI appears to have agreed to limits based on "lawful capabilities" and to focus on preventing digital surveillance, Thompson frames this as a "jailbreak competition," highlighting the ongoing, often adversarial negotiation between tech and government. He notes that Anthropic benefits from a San Francisco talent base that tends to share its resistance to assisting the government, whereas the broader public may be more amenable to supporting national security efforts. This dynamic, he predicts, will be fascinating to watch play out.
Ultimately, Thompson's analysis pushes beyond the immediate headlines to reveal the deeper, systemic forces at play. The Anthropic incident is not an isolated event, but a harbinger of future conflicts where the immense power of AI will necessitate a redefinition of the relationship between private innovation and state authority, forcing difficult conversations about control, ethics, and the very nature of governance in the 21st century.
Key Action Items
- Recognize the Inevitability of State Interest: Understand that as AI capabilities grow, governments will assert influence and control. This is not a matter of "if" but "when" and "how."
- Engage in Policy Advocacy: Actively lobby for new, appropriate legal frameworks to address digital surveillance and AI's role in national security, rather than relying on outdated legislation.
- Map Geopolitical Trade-offs: For companies operating in the AI space, critically assess the global implications of decisions, such as chip sales or safeguard policies, considering potential reactions from competing nation-states.
- Prioritize Long-Term Viability: When making architectural or policy decisions, consider their resilience and defensibility not just in the immediate term, but over longer horizons, especially in the face of potential government intervention or geopolitical shifts.
- Develop a "Government Relations" Strategy: Proactively build relationships and dialogue with governmental bodies to navigate these complex intersections, rather than waiting for adversarial confrontations.
- Invest in Public Understanding: Educate the public and policymakers about the nuances of AI development and its societal implications, fostering a more informed discourse that can lead to constructive policy.
- Embrace Discomfort for Future Advantage: Be prepared for decisions that cause short-term friction with government or public opinion but are strategically sound for long-term technological independence and ethical AI development. The payoff is a more robust and defensible position over time.