Blockchain and Verifiable AI for Personal Autonomy and Trust

TL;DR

  • The transformer architecture's parallel processing and attention mechanism enable faster training on massive datasets, overcoming the sequential limitations of recurrent neural networks and accelerating AI development.
  • Blockchain offers a decentralized solution for managing AI-driven economies and ensuring data provenance, mitigating risks of manipulation and establishing verifiable trust in AI-mediated information.
  • Personal AI ownership, facilitated by blockchain and secure enclaves, ensures AI agents act in users' best interests, preventing exploitation and aligning AI with individual well-being.
  • The "human experience edge" emphasizes that deep domain expertise and unique human insights are crucial multipliers for effectively collaborating with AI, leading to differential performance.
  • Verifiable and private AI, utilizing secure enclaves and cryptographic elements, provides users with guaranteed privacy and model provenance, addressing concerns about data misuse and manipulation.
  • The innovator's dilemma at large tech companies often hinders the release of novel, potentially disruptive AI products, making startups essential for market validation and iteration.
  • Understanding the underlying training data and processes is critical for identifying AI biases and potential "sleeper agent" behaviors, which open weights alone do not reveal.

Deep Dive

Illia Polosukhin, a co-author of the foundational "Attention Is All You Need" paper and founder of NEAR Protocol, argues that the rapid advancement of AI necessitates a fundamental shift in how we manage information and ownership. The core implication is that as AI becomes more pervasive, ensuring individual control and trust in AI-generated information is paramount to preventing manipulation and preserving personal autonomy. This requires infrastructure that prioritizes provenance, privacy, and verifiable AI interactions, moving beyond current models where centralized entities control and profit from user data.

The development of transformer architecture, which powers models like GPT, was driven by a need for parallel processing to overcome the sequential limitations of earlier AI models. This breakthrough, enabled by advancements in GPU computing, allowed for significantly faster training on massive datasets. Polosukhin's early recognition of AI's accelerating trajectory led him to leave Google in 2017 to pursue these advancements, aiming to build products that could capitalize on this impending "step change." His experience with the practical challenges of paying global contributors for AI training data highlighted the limitations of existing financial systems and spurred his interest in blockchain as a solution for decentralized, efficient, and trustless transactions.
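
To make the parallelism contrast concrete, here is a minimal NumPy sketch, not the paper's architecture: a recurrent pass that must walk tokens one at a time, versus a single-head attention pass that scores every token against every other token in one matrix product. All dimensions and weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 8, 16                        # 8 tokens, 16-dim embeddings
x = rng.normal(size=(seq_len, d))         # toy token embeddings

# Recurrent-style pass: each step depends on the previous hidden state,
# so the loop over tokens cannot be parallelized.
W_h = rng.normal(size=(d, d)) / d ** 0.5
W_x = rng.normal(size=(d, d)) / d ** 0.5
h = np.zeros(d)
for t in range(seq_len):                  # strictly sequential
    h = np.tanh(W_h @ h + W_x @ x[t])

# Attention-style pass: a single matrix product scores every token against
# every other token at once, which is the work a GPU can do in parallel.
W_q = rng.normal(size=(d, d)) / d ** 0.5
W_k = rng.normal(size=(d, d)) / d ** 0.5
W_v = rng.normal(size=(d, d)) / d ** 0.5
Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / np.sqrt(d)             # all pairwise scores in one shot
scores -= scores.max(axis=1, keepdims=True)   # numerical stability
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
out = weights @ V                         # every token updated simultaneously
print(out.shape)                          # (8, 16)
```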

The crucial second-order implication of AI's proliferation is the potential for information pollution and mass manipulation. As AI increasingly mediates our access to information, the control over these systems translates to control over perception and decision-making. Polosukhin posits that blockchain offers a solution by providing a framework for verifiable data and AI interactions, ensuring provenance (the origin and history of information) and enabling individuals to own and direct their AI agents. This contrasts with the current paradigm where AI models are largely proprietary, their training data opaque, and their outputs subject to the biases and agendas of their creators.
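
The provenance primitive itself can be sketched in a few lines. In the hypothetical example below (using the widely available cryptography package; key distribution and the on-chain anchoring of digests are elided), a publisher signs a content digest, and any reader can later verify that the content is unchanged and came from that source:

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher hashes the content and signs the digest; the digest and
# signature are what a chain would anchor, not the content itself.
content = b"model output: ..."
digest = hashlib.sha256(content).digest()

publisher_key = Ed25519PrivateKey.generate()   # hypothetical publisher identity
signature = publisher_key.sign(digest)

# Any reader holding the public key (e.g. resolved from an on-chain registry)
# can later check that the content is exactly what the publisher signed.
public_key = publisher_key.public_key()
try:
    public_key.verify(signature, hashlib.sha256(content).digest())
    print("provenance verified")
except InvalidSignature:
    print("content was altered or came from a different source")
```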

Polosukhin's work with NEAR AI and NEAR Protocol aims to build this trust infrastructure. Their "verifiable and private AI" product, for example, utilizes hardware secure enclaves to ensure end-to-end encryption and verifiable model outputs, even allowing users to confirm the specific system prompts and models used. This is a critical step toward an AI ecosystem where users control their data and AI agents act in their best interest, rather than serving as tools for corporate data collection and manipulation. The drive towards verifiable training processes, not just open weights, is essential to understanding and mitigating inherent biases and potential "sleeper agent" vulnerabilities within AI models.
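
The user-side half of that guarantee can be sketched as follows. This is a schematic only: the enclave's hardware-signed quote and its validation are omitted, and the field names are assumptions for illustration, not NEAR AI's actual interface.

```python
import hashlib
from dataclasses import dataclass

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

@dataclass
class Attestation:
    """Fields an enclave could attest to for one inference (schematic)."""
    model_hash: str           # digest of the weights loaded in the enclave
    system_prompt_hash: str   # digest of the exact system prompt used
    output_hash: str          # digest of the response returned to the user

def verify_inference(att: Attestation, expected_model_hash: str,
                     expected_prompt: bytes, output: bytes) -> bool:
    """Client-side check that the response came from the expected model and
    prompt. A real flow would also validate the hardware quote that signs
    these fields; that step is omitted here."""
    return (att.model_hash == expected_model_hash
            and att.system_prompt_hash == sha256(expected_prompt)
            and att.output_hash == sha256(output))

prompt = b"You are a helpful assistant."
answer = b"Here is the summary you asked for."
att = Attestation(model_hash="abc123",    # published by the model provider
                  system_prompt_hash=sha256(prompt),
                  output_hash=sha256(answer))
print(verify_inference(att, "abc123", prompt, answer))  # True
```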

Furthermore, Polosukhin envisions an "intent-based" economy where AI agents interact with each other to fulfill user requests. This protocol, underpinned by blockchain for settlement and verification, would enable complex economic activities and replace traditional contracting and payment systems. The departure from large tech companies like Google to pursue these ventures stems from an innovator's dilemma; large organizations often struggle to pursue disruptive innovations that may not immediately align with their core revenue streams or carry high reputational risk. Startups, conversely, are better positioned to explore and validate these nascent markets, with the potential for successful ventures to be re-integrated or acquired by larger entities. Ultimately, Polosukhin's work underscores that deep expertise and individual ownership of AI are vital for navigating the future, ensuring that AI serves humanity rather than controlling it.
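
As a rough illustration of what an "intent" might look like as a data structure, here is a hypothetical sketch; the fields and the settlement step (reduced here to a content-addressed digest a chain could store) are assumptions, not NEAR's actual intent format.

```python
import hashlib
import json
import time
from dataclasses import dataclass, field

@dataclass
class Intent:
    """A user-stated outcome; the fields are illustrative assumptions."""
    user: str
    goal: str                   # e.g. "exchange 100 USDC for NEAR"
    constraints: dict           # machine-checkable limits on any solution
    deadline: float = field(default_factory=lambda: time.time() + 3600)

def settle(intent: Intent, solution: dict) -> str:
    """Bind intent and solution into one digest: the record a chain could
    store so both sides can later prove what was agreed."""
    record = json.dumps({"intent": intent.__dict__, "solution": solution},
                        sort_keys=True, default=str)
    return hashlib.sha256(record.encode()).hexdigest()

# Competing agents propose solutions; whichever satisfies the constraints
# best wins, and the settlement digest is what gets anchored on-chain.
intent = Intent("alice.near", "exchange 100 USDC for NEAR",
                {"max_slippage_pct": 0.5})
print(settle(intent, {"agent": "solver-7", "rate": 2.91}))
```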

Action Items

  • Audit AI information sources: Identify 3-5 key AI models or platforms used for information consumption and assess their potential for subtle manipulation or bias.
  • Design personal AI ownership framework: Define 3-5 core principles for ensuring AI agents act on individual behalf, focusing on privacy and well-being.
  • Implement verifiable AI inference: Integrate secure hardware enclaves and end-to-end encryption for 3-5 critical AI interactions to ensure data privacy and output provenance.
  • Measure AI-assisted development impact: Track 5-10 key metrics (e.g., code generation speed, refactoring efficiency) to quantify the leverage gained by engineers with deep system understanding.
  • Draft AI collaboration guidelines: Establish 3-5 best practices for working with AI, emphasizing deep understanding and human expertise as a multiplier.

Key Quotes

"there's biases that go into it and like they are changing how we see the reality depending on this and so the point here is like whoever controls that effectively controls how you perceive the information how do you make decisions and censorship from which information you can see and so money obviously is like a one of the core primitives of society but information is becoming more and more important and more valuable than even money can be and more powerful"

Illia Polosukhin argues that control over information, influenced by AI biases, grants significant power. He highlights that information is becoming more valuable and influential than money, suggesting that those who control AI's perception of reality effectively control decision-making and access to information.


"it's all about how to ensure you own your ai your ai should be working on your side ensuring that your well being and success are accounted for"

Illia Polosukhin emphasizes the importance of personal AI ownership. He believes that AI should be aligned with individual interests, acting as a personal assistant that prioritizes the user's well-being and success.


"the challenge was like if you use this neural network method back in 2014 2015 2016 they were too slow right because there was this method called recurrent neural network and the way to think about it it's how we read right it reads one word at a time and so you give it you know 10 articles from search results it will read one word at a time and somewhere two minutes later it will try to answer the question right but you know if you're at google nobody wants to wait for two minutes for it to read the text right it wants to answer right away"

Illia Polosukhin explains a key technical challenge in early AI development: the slowness of recurrent neural networks. He likens their sequential processing to reading word-by-word, which was too slow for real-time applications like Google Search, where immediate responses are critical.


"what if we dropped the recurrence what if we doesn't read every word at one time sequentially what if it just reads everything in parallel and then tries to make sense of it over you know few layers of kind of processing and this is again this is goes back to nvidia gpus became pretty kind of available and i was like hey we have this massive parallel computing and when we're reading one word at a time we're utilizing it for like 10 right there's like 90 of gpu just sitting there and not being used"

Illia Polosukhin describes the conceptual shift that led to the transformer architecture. He explains the idea of processing text in parallel, leveraging the power of GPUs, rather than sequentially, which was a significant departure from previous methods and unlocked greater computational efficiency.


"transformers kind of treats text in parallel it has this mechanism of attention where every word effectively looks at all the words around it and tries to make sense of itself in within the context then you do another step of transformers you do that again and again and again and so what happens is inside it effectively builds the relationship map between every word and everything around it but not in a one to one way but in this like multi hop way because every layer of this transformation effectively adds another hop of reasoning to this uh mental representation"

Illia Polosukhin details the core mechanism of transformers: attention. He explains that this allows each word to consider all other words in the context, building a complex relationship map through multiple processing layers, which enables a deeper understanding of text.
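
One way to picture the multi-hop claim: treat a layer's attention weights as a relation map over tokens; stacking layers composes the maps, so the context a token can draw on grows with depth. Below is a toy NumPy illustration (real attention maps are input-dependent and interleaved with learned projections):

```python
import numpy as np

# Toy attention map over 5 tokens: each token attends only to itself and
# its immediate right-hand neighbour (a one-hop relation).
A = np.array([[0.5, 0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.0, 0.0, 1.0]])

# Stacking layers composes the maps: after k layers, token 0's
# representation can depend on tokens up to k hops away.
reach = np.eye(5)
for layer in range(1, 5):
    reach = reach @ A
    print(f"after layer {layer}: token 0 draws on tokens",
          np.nonzero(reach[0] > 0)[0].tolist())
```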


"the reality is it's like where you are in the ecosystem is always changing and similar right now right google brain merged into deepmind and so things things are always changing well but just take for example google research i think very few people probably appreciate how i think it's it's wildly unique in the history of organizations that google has prioritized research as a core function"

Illia Polosukhin notes the dynamic nature of organizational structures within large tech companies like Google. He emphasizes that Google's unique commitment to research as a core function, distinct from typical product development, is often underestimated.


"what blockchain introduced is actually an ownership that's not relies on them kind of violence it relies on code it relies on effectively a social consensus around everybody agreeing that this is the rules and that is like you know as an engineer as a nerd it's a very like fundamental like hey okay well this is like really interesting and really new"

Illia Polosukhin explains the fundamental innovation of blockchain regarding ownership. He contrasts it with traditional ownership, which relies on governmental enforcement and potential violence, highlighting that blockchain's ownership is based on code and social consensus, which he finds fundamentally compelling as an engineer.


"the deeper understanding you have the better you can use the tool and he was of course referring to engineers who can have the context and architecture of a system you know far that far surpasses a model's kind of context window but i think that's actually a principle that we have heard is probably one of the most recurring principles on this show from all of our conversations the deeper understanding you have the better possibilities you can experience in collaboration with ai"

The podcast hosts emphasize that deeper human understanding enhances AI collaboration. They argue that individuals with a profound grasp of a system's architecture and context, exceeding an AI's limitations, can achieve superior outcomes when working with AI tools.

Resources

External Resources

Books

  • "How Minds Change" by David McRaney - Mentioned in relation to research on how language models are incredible at changing people's minds.

Articles & Papers

  • "Attention Is All You Need" - Mentioned as the paper that introduced the "T" in GPT and the core thesis behind transformers.

People

  • Illia Polosukhin - Co-author of "Attention Is All You Need" and founder of NEAR Protocol, discussed for his insights on AI, transformers, blockchain, and personal AI ownership.
  • Ilya Sutskever - Mentioned in the conversation; co-founder and former chief scientist of OpenAI (not among the co-authors of "Attention Is All You Need").
  • Henrik - Co-host of the podcast, involved in discussions about AI, innovation, and collaboration with AI.
  • Jeremy - Co-host of the podcast, involved in discussions about AI, innovation, and collaboration with AI.
  • David McRaney - Author of "How Minds Change," mentioned in relation to research on language models' ability to change minds.
  • Jenny Nicholson - Mentioned in a previous conversation for the insight that "your humanity is the only thing that the model doesn't have."

Organizations & Institutions

  • Google - Discussed as a place where Illia Polosukhin worked in Google Research, and for its organizational structure and approach to research.
  • NEAR Protocol - Mentioned as a project founded by Illia Polosukhin, related to blockchain and AI.
  • NEAR AI - Mentioned as a project started by Illia Polosukhin, focused on code generation and supervised training data.
  • OpenAI - Mentioned in relation to the market validation for AI products like ChatGPT.
  • Meta - Mentioned as one of the companies with a strong research organization.
  • Microsoft - Mentioned as one of the companies with a strong research organization.
  • Apple - Mentioned as the company where a former Google VP went.
  • DeepMind - Mentioned as having merged with Google Brain.
  • Google Brain - Mentioned as having merged into DeepMind.

Websites & Online Resources

  • Google.com - Mentioned as an example of expressing an intent to achieve a goal, such as finding information or products.

Other Resources

  • Transformers - Discussed as a core concept in modern AI, introduced by the "Attention Is All You Need" paper, enabling parallel processing of text.
  • Recurrent Neural Network (RNN) - Mentioned as a previous method for processing text sequentially, which was too slow for practical applications.
  • Neural GPU - Mentioned as a research concept related to memory in neural networks.
  • Neural Turing Machine - Mentioned as a research concept related to memory in neural networks.
  • Bag of Words - Described as a simpler, older method for text processing that was still useful at scale.
  • MapReduce - Mentioned as a Google technology for parallel processing.
  • Blockchain - Discussed as a technology that can provide ownership not reliant on violence, and as a solution for global payments and data provenance.
  • Personal AI Ownership - Discussed as a future concept where AI works on behalf of the individual.
  • AI Slop - Mentioned as a concern regarding the internet filling with AI-generated content without provenance.
  • Intent - Described as a protocol for AI agents to interact and achieve outcomes for users, with blockchain used for settlement and verification.
  • Verifiable and Private AI - Mentioned as a product released by NEAR AI, enabling privacy and verifiability for AI inferences.
  • Hardware Secure Enclaves - Mentioned as a technology used for running AI inferences privately.
  • Open Weights Models - Discussed in contrast to open-source, referring to models where the weights are available but the training data is unknown.
  • Sleeper Agents - Explained as potential malicious behavior embedded in AI models during training, which cannot be detected by inspecting the released weights alone.
  • Synthetic Data - Mentioned as a type of data increasingly used for training AI models.
  • GDPR - Mentioned as a privacy law that complicates data removal after AI training.
  • Llama - Mentioned as an example of an open weights model.
  • OSS (Open Source Software) - Mentioned in the context of open weights models.
  • DeepSeek - Mentioned as an example of an open weights model.
  • Qwen - Mentioned as an example of an open weights model.
  • Crypto Economics - Mentioned as a way to enable large-scale AI training and development by aligning economic incentives.
  • Human Experience Edge - Proposed as a concept for leveraging one's unique expertise in collaboration with AI.
  • Collaboration Hygiene - Mentioned in the context of working with AI.
  • AI Augmentation - Discussed in relation to teams leveraging AI for faster and more productive work.
  • Context Window - Mentioned as a current limitation of AI models.

---
Handpicked links, AI-assisted summaries. Human judgment, machine efficiency.
This content is a personally curated review and synopsis derived from the original podcast episode.