Human Agency Is Key to AI's Future, Not Extremes
The core thesis of this conversation with Shyam Sankar, CTO of Palantir Technologies, is that the narrative surrounding Artificial Intelligence is dangerously polarized, obscuring the critical role of human agency in shaping its future. Instead of succumbing to either utopian fantasy or dystopian doomerism, Sankar argues for a pragmatic "iron man suit" approach in which AI empowers human workers and helps reindustrialize America. The hidden consequence he identifies is that by fixating on the extremes, we neglect the daily decisions that are actively determining AI's impact. This conversation is essential for business leaders, policymakers, and anyone concerned about the future of work and national competitiveness, offering a framework for reclaiming agency and building a more prosperous future through deliberate, human-guided technological adoption.
The discourse around Artificial Intelligence often devolves into two extremes: the promise of a utopian future with untold abundance, or the dread of mass unemployment and societal collapse. Shyam Sankar, Chief Technology Officer at Palantir Technologies, cuts through this polarized narrative, asserting that neither vision accurately reflects reality because both fundamentally ignore human agency. "AI doesn't do anything. Humans use AI to do something," Sankar states, emphasizing that the future of AI is not predetermined but is being actively shaped by the choices we make daily. This perspective shifts the focus from passive anticipation to active creation, highlighting the profound implications of our decisions in wielding this powerful technology.
One of the most significant downstream effects of this polarized AI discussion is the erosion of confidence in our institutions and the potential for a "new dark age." Sankar draws a parallel to the collapse of the Roman Empire, when the loss of fundamental knowledge deepened a long societal decline. He argues that over-reliance on AI, without understanding its underlying principles or actively engaging with its development, risks a similar regression. The danger lies in becoming so detached from how things are made and governed that we lose the capacity to innovate, understand, or debug them. This highlights a critical consequence: the immediate convenience of AI could lead to long-term intellectual and operational atrophy, making us vulnerable to unforeseen systemic failures.
The conventional wisdom often suggests listening to the inventors of technology for guidance on its application and governance. However, Sankar posits this is a recurring fallacy. He uses the analogy of the telescope: Galileo's impact on physics stemmed from his use of the telescope, not its invention. Similarly, he argues, the true understanding of AI's impact and governance lies not with its creators, but with its users -- the American worker on the factory floor, the nurse in the ICU, the operator in the field. This insight has a cascading effect: by devaluing the lived experience of end-users, we risk developing AI tools that are misaligned with real-world needs, potentially creating more problems than they solve. The immediate payoff of novel AI features might mask the downstream cost of solutions that don't truly serve the people they are intended to empower.
"AI doesn't do anything. Humans use AI to do something."
-- Shyam Sankar
This leads to a crucial point about the nature of progress and competitive advantage. Sankar's work, exemplified by Project Maven, demonstrates how embracing difficult, often disruptive, technological advancements can create significant strategic advantages. Project Maven, a rogue AI effort within the Pentagon, faced immense bureaucratic resistance and skepticism. Colonel Drew Cukor, the driving force behind it, endured personal attacks and investigations, yet his perseverance in integrating AI into targeting cycles led to a dramatic reduction in the time from detection to engagement. This illustrates how embracing immediate pain -- the bureaucratic battles, the internal disruption, the need for new skills -- can yield substantial, long-term payoffs. The conventional approach, which avoids such friction, often leads to stagnation and an inability to adapt to evolving threats.
"The best, always anonymous of course -- that this Marine officer was accepting bribes, he had stashes of money at his house, he is somehow housing illegal aliens in his basement. Holy shit, thing after thing."
-- Shyam Sankar (describing the resistance faced by Colonel Drew Cukor)
The conversation also delves into the reindustrialization of America, framing it as a critical component of national security and economic prosperity. Sankar argues that a strong industrial base is not merely about manufacturing goods but about national sovereignty and the ability to deter adversaries. He highlights how the over-reliance on global supply chains, particularly from China, has created significant vulnerabilities, from pharmaceuticals to rare earth minerals essential for defense. The "Jevons Paradox," where increased efficiency leads to increased consumption, is reinterpreted in the context of AI-driven productivity. Instead of leading to mass unemployment, Sankar suggests that AI can automate the "drag" -- the inefficiencies and administrative burdens -- freeing up human workers to be more productive, potentially leading to job creation as companies expand to meet increased demand. This requires a deliberate choice to use AI for empowerment, not just automation.
"The fact he could bring all this together -- and he had had this deep experience from his time in the Marines. Like, the government instinct to try to invent everything internally is going to fail; let's go out to Google, let's go out to the leading technology companies in America and ask them to help us solve this problem. And it was a heretical approach that led to lots of pushback, lots of bullshit."
-- Shyam Sankar (on Colonel Drew Cukor's approach to Project Maven)
Ultimately, Sankar advocates for a shift in mindset, moving from a reactive stance on AI to a proactive one. This involves fostering a culture of experimentation, embracing "civil-military fusion" through programs like Detachment 201, and valuing the "heretics and heroes" who challenge the status quo. The ultimate advantage lies not in avoiding difficulty, but in embracing it strategically, building capabilities that are both robust and adaptable. This requires a belief in human ingenuity and a willingness to invest in the skills and tools that empower individuals to solve complex problems, thereby underwriting both national security and economic prosperity.
Key Action Items
- Embrace AI as an "Iron Man Suit": Focus on how AI can augment and empower your workforce, rather than solely on its potential for replacement. This requires identifying specific tasks and workflows where AI can provide significant productivity gains for existing employees.
  - Immediate Action: Conduct pilot programs to explore AI augmentation in key departments.
- Prioritize User Experience and Domain Expertise: Actively involve end-users and subject matter experts in the development and deployment of AI solutions. Their lived experience is crucial for identifying true needs and potential pitfalls.
  - Immediate Action: Establish cross-functional teams that include frontline workers in AI project planning.
- Invest in Technical Literacy Across the Organization: Foster an environment where employees, particularly those in operational roles, develop a foundational understanding of AI and its capabilities. This enables them to become better "customers" of AI technology.
  - Action (pays off in 6-12 months): Develop internal training programs or workshops on AI fundamentals.
- Rebuild the Bridge Between Technology and Operations: Encourage collaboration and knowledge sharing between technical teams and operational units, whether in government or industry. This cross-pollination is vital for identifying and solving real-world problems.
  - Action (pays off in 12-18 months): Implement structured programs for knowledge exchange, such as rotational assignments or joint project teams.
- Challenge Bureaucratic Inertia with a Focus on Outcomes: Advocate for agility and a willingness to discard outdated processes that hinder innovation. Prioritize solutions that demonstrably work, even if they disrupt established norms.
  - Immediate Action: Identify and challenge one bureaucratic process that impedes AI adoption within your sphere of influence.
- Cultivate a Culture of Experimentation and Grit: Recognize that innovation is often messy and requires perseverance. Tolerate calculated risks and support individuals who are willing to challenge the status quo to achieve ambitious goals.
  - Action (pays off in 18-24 months): Implement reward systems that recognize innovative risk-taking and successful experimentation, even if initial attempts fail.
- Develop a Clear Theory of Change for AI Adoption: Define what "winning" looks like with AI and how the technology will contribute to achieving those outcomes. This provides a strategic compass for decision-making and resource allocation.
  - Immediate Action: Articulate a concise vision for how AI will contribute to your organization's strategic objectives.