Resource Rationality: Intelligence as Optimal Strategy Under Constraints

Original Title: 343 | Tom Griffiths on The Laws of Thought

The quest to mathematically define "thought" has a long and winding history, stretching from Aristotle's syllogisms to modern AI. In his conversation with Sean Carroll, cognitive scientist Tom Griffiths argues that the "laws of thought" are not a single set of rigid rules, but a framework of principles governing how intelligent systems should operate, especially under constraints. This perspective reframes our understanding of human irrationality, suggesting that many apparent biases are in fact rational adaptations to limited cognitive resources. For anyone building AI, designing cognitive systems, or simply seeking to understand the mechanics of their own mind, this conversation offers a crucial lens: viewing intelligence not as a fixed ideal, but as an optimal strategy within a bounded system. It also highlights the trade-offs of optimizing for immediate computational efficiency, which can produce surprising downstream effects, and it fosters a deeper appreciation for the evolutionary and learned biases that shape our decision-making.

The Ghost in the Machine: Logic, Probability, and the Ideal Thinker

The pursuit of a mathematical theory of mind is not a new endeavor. As Tom Griffiths explains, early thinkers like Aristotle sought to codify good arguments, laying the groundwork for later mathematicians like Leibniz and Boole. Leibniz, in particular, envisioned a "universal character" where mathematical representation could make consequences self-evident, even attempting to express thought through arithmetic. This ambition, though ultimately unrealized by Leibniz, foreshadowed the computational theories of mind that would emerge centuries later. George Boole, building on this legacy, developed an algebra that could formalize logical reasoning, and crucially, began to explore probability theory as a means to understand uncertain inference. This marked a pivotal shift: moving beyond a binary true/false logic to acknowledge and quantify degrees of belief.

"The quest for these laws of thought is what we're talking about today with Tom Griffiths, who's a cognitive scientist. He has a new book coming out called, guess what, The Laws of Thought: The Quest for a Mathematical Theory of the Mind."

-- Sean Carroll

This historical thread, from Aristotle to Boole, establishes a foundational understanding of thought as a system governed by formal rules. However, as Griffiths points out, this purely logical approach often fails to capture the messy reality of human cognition. The limitations of human minds--finite time, energy, and imperfect information--mean that perfect logical deduction is often impractical, if not impossible. This is where probability theory, particularly Bayesian reasoning, becomes essential. Griffiths frames Bayesian inference not as a departure from logic, but as its natural extension. Where logic deals with certainty, probability theory allows us to assign degrees of belief to propositions and update them as new evidence emerges. This probabilistic framework offers a more nuanced and realistic model for how intelligent systems, including humans, should navigate uncertainty.
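The core of Bayesian updating described above can be made concrete with a short sketch. This is an illustrative example, not code from the episode or the book; the function name and the specific numbers are hypothetical.

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E), where the evidence term
# P(E) marginalizes over the hypothesis being true or false.

def bayes_update(prior: float, likelihood: float, likelihood_if_false: float) -> float:
    """Return the posterior P(H|E) given P(H), P(E|H), and P(E|~H)."""
    evidence = likelihood * prior + likelihood_if_false * (1 - prior)
    return likelihood * prior / evidence

# A weak prior (0.1) combined with moderately strong evidence
# (P(E|H)=0.9, P(E|~H)=0.2) yields a revised, intermediate belief.
posterior = bayes_update(prior=0.1, likelihood=0.9, likelihood_if_false=0.2)
print(round(posterior, 3))  # 0.333
```

The point of the sketch is the shift Griffiths describes: instead of declaring the hypothesis true or false, the system carries a graded degree of belief and revises it as evidence arrives.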

Resource Rationality: The Art of Thinking with Less

The ideal of perfect Bayesian reasoning, while mathematically elegant, presents a significant challenge: it is computationally expensive. This is the core insight of "resource rationality," a concept Griffiths explores that redefines intelligence not by its adherence to abstract ideals, but by its efficiency in achieving goals given finite cognitive resources. The apparent "irrationalities" and biases observed in human behavior, from a purely logical standpoint, can often be understood as rational heuristics--shortcuts--developed by evolution and learning to make the best possible decisions with limited processing power.

"We are, you know, good at solving the kinds of problems that we face with the resources that we have, right? And you can think about that as being the consequences of different kinds of adaptation, so evolution as well as learning."

-- Tom Griffiths

This perspective has profound implications. Instead of viewing human decision-making as flawed, we can see it as a sophisticated optimization problem. For example, instead of exhaustively considering every possible outcome, humans often employ sampling strategies, focusing on a few plausible scenarios. Similarly, the ability to set goals and subgoals, a hallmark of complex problem-solving, is not a sign of limited intellect but a necessary strategy for navigating challenges with finite cognitive capacity. This reframing suggests that many cognitive biases are not bugs, but features--adaptive solutions to the inherent constraints of biological intelligence. The challenge then becomes not to eliminate these biases, but to understand how to best leverage our cognitive resources and, perhaps, design AI systems that do the same.
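The sampling strategy mentioned above can be sketched as a simple trade-off: rather than exhaustively enumerating every outcome, estimate an expected payoff from a handful of sampled scenarios. This is a hypothetical illustration of the resource-rational idea, not a model from the episode; the payoff values and function names are invented for the example.

```python
import random

def expected_payoff_exact(payoffs: dict, probs: dict) -> float:
    """Exhaustive expectation: weigh every outcome by its probability."""
    return sum(probs[o] * payoffs[o] for o in payoffs)

def expected_payoff_sampled(payoffs: dict, probs: dict, n_samples: int = 5, seed: int = 0) -> float:
    """Approximate the same expectation from a few sampled outcomes,
    trading accuracy for far less computation when outcomes are many."""
    rng = random.Random(seed)
    outcomes = list(payoffs)
    weights = [probs[o] for o in outcomes]
    draws = rng.choices(outcomes, weights=weights, k=n_samples)
    return sum(payoffs[o] for o in draws) / n_samples

payoffs = {"good": 10.0, "ok": 2.0, "bad": -5.0}
probs = {"good": 0.2, "ok": 0.5, "bad": 0.3}
print(expected_payoff_exact(payoffs, probs))    # 1.5
print(expected_payoff_sampled(payoffs, probs))  # a cheap, noisy estimate of the same quantity
```

With three outcomes the exhaustive sum is trivial, but when the outcome space is astronomically large, a few samples are the only tractable option, which is precisely the kind of "bias as adaptive shortcut" the resource-rationality framing describes.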

Beyond Logic: Space, Networks, and the Embodied Mind

While logic and probability theory provide abstract frameworks for ideal thought, understanding how these principles are implemented in physical systems requires a different approach. Griffiths highlights the evolution of thinking about cognition through the lens of "spaces, features, and networks." Early work by psychologists like Eleanor Rosch demonstrated that human categories are often fuzzy and gradient-based, defying strict logical definitions. This led to the idea of representing concepts as points in a conceptual space, where proximity signifies similarity.

This spatial view of cognition found a powerful computational correlate in artificial neural networks. Building on early work by McCulloch and Pitts, and later Rosenblatt with the perceptron, the development of algorithms like backpropagation (itself rooted in Leibniz's calculus) enabled the training of multi-layer neural networks. These networks, by transforming points in one space into points in another, offer a mechanism for approximating complex computations and learning from data.

"You can think about a neural network precisely as a way of computing with spaces: it takes a point in one space and then it transforms it to a point in another space."

-- Tom Griffiths
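Griffiths's "computing with spaces" picture can be shown in a few lines: a single network layer maps a point in a three-dimensional space to a point in a two-dimensional one via a weighted sum and a nonlinearity. The weights below are hypothetical fixed values; in practice they would be learned by backpropagation.

```python
import math

def layer(point, weights, biases):
    """Map a point between spaces: out_j = tanh(sum_i w[j][i] * x[i] + b[j])."""
    return [math.tanh(sum(w * x for w, x in zip(row, point)) + b)
            for row, b in zip(weights, biases)]

# Hypothetical weights defining the transformation from R^3 to R^2.
W = [[0.5, -0.2, 0.1],
     [0.3, 0.8, -0.5]]
b = [0.0, 0.1]

p = [1.0, 2.0, -1.0]   # a point in the 3-dimensional input space
q = layer(p, W, b)     # its image in the 2-dimensional output space
print(len(q))  # 2
```

Stacking such layers composes maps between spaces, which is how multi-layer networks approximate complex transformations of their input representations.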

However, the current dominance of large language models (LLMs) trained on vast datasets reveals a critical difference between human and artificial intelligence. LLMs, while capable of impressive feats, often exhibit peculiar biases, such as being influenced by the statistical frequency of answers in their training data rather than pure logical correctness. This suggests that while LLMs excel at pattern matching and prediction based on massive exposure, they may lack the "inductive biases"--innate predispositions or learned priors--that allow humans to learn complex concepts from limited data. The quest for AI that can truly learn like humans involves bridging this gap, potentially by incorporating more human-like inductive biases into their architecture, leading to more generalizable and perhaps more interpretable intelligence.

Key Action Items

  • Embrace Probabilistic Thinking: Actively reframe uncertain situations not as black and white, but as a spectrum of probabilities. Practice updating your beliefs based on new evidence, even if it's uncomfortable.
  • Recognize Cognitive Constraints: Acknowledge that your own thinking is subject to limitations of time, energy, and information. Understand that heuristics and biases can be rational adaptations, not just errors.
  • Prioritize Understanding Over Speed: When solving problems, resist the urge for the quickest, most obvious solution. Instead, map out the potential downstream consequences, even if it requires more effort upfront; the extra effort pays off in better decisions later.
  • Develop Meta-Reasoning Skills: Consciously think about your thinking process. Ask yourself: "Am I using the best strategy for this problem given my resources?" This self-awareness is key to effective resource rationality.
  • Seek Diverse Perspectives: Understand that intelligence can be viewed from multiple levels--computational, algorithmic, and implementation. Recognize that different frameworks offer complementary insights, and no single perspective holds all the answers.
  • Invest in Foundational Understanding: For complex systems (technical or otherwise), focus on the underlying principles (logic, probability, network dynamics) rather than just surface-level behaviors. This deeper understanding allows for more robust and adaptable solutions.
  • Experiment with "Slower" Learning: If developing AI or optimizing learning processes, consider how to imbue systems with inductive biases that allow for learning from less data, mimicking human developmental trajectories. Over the long term, this pays off in efficiency and potentially more generalizable intelligence.

---
Handpicked links, AI-assisted summaries. Human judgment, machine efficiency.
This content is a personally curated review and synopsis derived from the original podcast episode.