Context Portfolio Solves AI's "Context Repetition Tax"

Original Title: How to Build a Personal Context Portfolio and MCP Server

The persistent friction of AI interaction stems not from the AI's limitations, but from our own failure to provide consistent, accessible context. This episode highlights a critical, often overlooked challenge of the agentic era: the "context repetition tax." Every new AI tool or agent requires us to re-explain our roles, projects, and preferences from scratch, a process that wastes time and degrades output quality. The non-obvious implication is that this inefficiency creates a significant barrier to AI adoption and productivity, for individuals and organizations alike. The conversation offers a practical solution: building a personal context portfolio and serving it through an MCP server. Anyone seeking to move beyond superficial AI interactions and unlock genuine productivity gains will find value in this strategy, and a distinct advantage in navigating the evolving AI landscape.

The Hidden Cost of Starting Over: Why Context is King

The promise of the agentic era is that AI will fundamentally change how we work, augmenting our capabilities and streamlining our workflows. Yet, a pervasive, often unacknowledged, problem plagues this vision: the sheer effort required to onboard any new AI agent or tool. As the podcast highlights, "every new agent, project, or tool requires you to re-explain yourself from scratch." This isn't just an annoyance; it's a fundamental friction point that actively hinders progress. Michael Chen's observations from deploying AI in enterprise settings underscore this, noting that "the gap between 'we have data' and 'we have data in a format that an AI system can learn from' is enormous." While Chen refers to organizational data, the principle extends powerfully to personal context. Without a structured way to convey who you are, what you do, and how you operate, every AI interaction becomes a fresh start, a costly repetition of onboarding.

This "context repetition tax" has immediate and downstream consequences. In the short term, it consumes valuable time and mental energy. Explaining your role, your current projects, your team dynamics, and your communication preferences to a new AI is tedious. But the effects are more profound. The podcast suggests that the effort involved often leads to incomplete context sharing: "The sheer time and effort it takes to explain everything fully means that there was probably a lot that was left out." This incomplete information directly impacts the quality of the AI's output. An agent that doesn't fully grasp your constraints or goals will produce recommendations or actions that are suboptimal, misaligned, or even counterproductive. This is where conventional wisdom fails; simply throwing more tools at the problem ignores the foundational need for coherent context.

The Product Lock-In of Personal Memory

The problem of context portability became starkly apparent during the recent shifts in AI platform usage, where users scrambled to migrate between systems like ChatGPT and Claude. The podcast points out that the primary barrier to switching wasn't the AI's capabilities, but the "idea of having to explain to a new LLM all of those things once again." Claude's rudimentary "import saved memories" feature, essentially a prompt to export data, highlights the industry's nascent understanding of this issue. It was a step, but a simplistic one, demonstrating that current solutions often treat personal context as a disposable byproduct rather than a critical asset.

"The challenge of the portability, or lack thereof, of personal context reared its ugly head in the wake of the Pentagon threatening and then following through on their designation of Anthropic as a supply chain risk, and OpenAI's quickly regretted decision to announce their deal with the Department of Defense on the same night."

This event underscored a critical vulnerability: our personal context, the accumulated knowledge and preferences that make us effective, is often locked within proprietary systems. This creates a form of product lock-in, where the sunk cost of training an AI on our personal context discourages migration and innovation. The podcast proposes a solution that directly addresses this: a "personal context portfolio." This isn't just about exporting memories; it's about creating a portable, machine-readable representation of "you."

Building Your AI Operating Manual

The core of the proposed solution is a structured set of Markdown files that collectively serve as an "operating manual for any AI that works with you." This approach, dubbed the "personal context portfolio," is designed with several key principles. Firstly, it's "Markdown first," leveraging a universally understood format that any AI system can parse. This ensures maximum compatibility and portability, a stark contrast to proprietary data formats. Secondly, it emphasizes modularity: "separate files and separate templates for separate parts of the whole that is you." This allows different agents to access only the relevant information, preventing information overload and ensuring specificity. For example, an agent handling your calendar might only need access to your roles_and_responsibilities.md and goals_and_priorities.md, while a content generation AI might need your communication_style.md and domain_knowledge.md.
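The modularity principle can be made concrete with a small sketch. The directory name, the per-agent file mappings, and the loader function below are illustrative assumptions (only the file names come from the podcast): each agent is handed just the slice of the portfolio it needs.

```python
from pathlib import Path

# Hypothetical per-agent views of the portfolio: each agent reads only
# the files relevant to its job (file names follow the podcast's examples).
AGENT_VIEWS = {
    "calendar": ["roles_and_responsibilities.md", "goals_and_priorities.md"],
    "content": ["communication_style.md", "domain_knowledge.md"],
}

def load_context(portfolio_dir: str, agent: str) -> str:
    """Concatenate only the portfolio files a given agent needs."""
    root = Path(portfolio_dir)
    sections = []
    for name in AGENT_VIEWS[agent]:
        path = root / name
        if path.exists():  # tolerate files not yet written
            sections.append(f"<!-- {name} -->\n{path.read_text()}")
    return "\n\n".join(sections)

# Demo against a throwaway portfolio so the sketch runs end to end.
import tempfile
demo = Path(tempfile.mkdtemp())
(demo / "communication_style.md").write_text("# Communication Style\nConcise, direct.")
(demo / "domain_knowledge.md").write_text("# Domain Knowledge\nB2B SaaS marketing.")
print(load_context(str(demo), "content"))
```

The key design point is that the selection logic lives outside the files themselves, so the same Markdown portfolio can serve many agents without duplication.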

This modularity directly supports the third principle: the portfolio is "living and not static." It's not a one-time setup but a dynamic document that evolves with you. As projects change and priorities shift, the portfolio should be updated, ideally with the assistance of AI agents themselves. This continuous maintenance ensures that the context provided to AI remains current and accurate, preventing the degradation of output quality over time.

"Effectively, it's API documentation, but for you, a single source of machine-readable truth about who you are that any agentic system can read."

This framing is crucial. By treating personal context as API documentation, we shift from a reactive, ad-hoc explanation process to a proactive, structured one. The podcast outlines ten key dimensions for this portfolio, including identity.md, roles_and_responsibilities.md, current_projects.md, team_and_relationships.md, tools_and_systems.md, communication_style.md, goals_and_priorities.md, preferences_and_constraints.md, domain_knowledge.md, and decision_log.md. Each file serves a distinct purpose, providing a comprehensive yet granular view of an individual. The decision_log.md, for instance, is highlighted as potentially the "most underrated file," offering invaluable insight into past reasoning for future AI-assisted decision-making.
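The decision_log.md file in particular lends itself to light automation. A minimal sketch of appending a dated, structured entry; the entry format is an assumption for illustration, not something the podcast prescribes:

```python
from datetime import date
from pathlib import Path

def log_decision(log_path: str, decision: str, reasoning: str) -> None:
    """Append a dated entry to decision_log.md so future agents can see
    not just what was decided, but why."""
    entry = (
        f"\n## {date.today().isoformat()}: {decision}\n"
        f"**Reasoning:** {reasoning}\n"
    )
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(entry)

# Example usage against a throwaway file.
import os, tempfile
path = os.path.join(tempfile.mkdtemp(), "decision_log.md")
log_decision(path, "Adopt Markdown-first portfolio",
             "Universally parseable by any AI system.")
print(Path(path).read_text())
```

Because entries are append-only and timestamped, the log doubles as a lightweight audit trail of your reasoning over time.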

From Static Files to Dynamic Servers

The true power of the personal context portfolio is unlocked when it moves beyond static files. The podcast introduces an MCP (Model Context Protocol) server as the way to make this context dynamically accessible: the Markdown files are deployed behind a server that AI agents can query. The process, as described, involves using AI itself as a "tutor and build partner" to navigate the technical setup. This is where the "context repetition tax" is truly eliminated: instead of re-explaining yourself, you let agents query the MCP server.

The technical implementation, while involving some troubleshooting, is presented as an achievable goal. The podcast details the steps: setting up a local server, configuring it for remote access, and deploying it, often using platforms like GitHub and Railway. The crucial takeaway is that this infrastructure allows for a seamless, on-demand provision of context. When an AI needs to understand your work, it doesn't ask you to type it out; it queries your personal MCP server. This not only saves time but ensures that the AI receives the most up-to-date and comprehensive information available.
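The podcast does not include code, but the core idea, context files reachable over the network rather than retyped into each chat, can be sketched with a plain HTTP server standing in for a real MCP server. This is a simplified stand-in: a production setup would speak the actual Model Context Protocol (e.g., via its official SDK) and be deployed to a host like Railway, and the file contents here are invented for the demo.

```python
import functools
import http.server
import pathlib
import tempfile
import threading
import urllib.request

# Throwaway portfolio directory standing in for your real one.
root = pathlib.Path(tempfile.mkdtemp())
(root / "identity.md").write_text("# Identity\nStaff engineer, platform team.\n")

# Serve the portfolio directory on an OS-assigned local port.
handler = functools.partial(
    http.server.SimpleHTTPRequestHandler, directory=str(root)
)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# An agent "queries" the server instead of asking you to retype context.
body = urllib.request.urlopen(
    f"http://127.0.0.1:{port}/identity.md"
).read().decode()
print(body)
server.shutdown()
```

The shape of the interaction is what matters: the agent pulls current context on demand, so updates to the files are immediately visible to every tool that queries the server.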

"The reality is messier. When you are trying to get something explained step-by-step, even if it tries to race ahead, demand that it go back and do things more simply."

This iterative process of building and refining the context portfolio, supported by AI, represents a significant shift. It moves us from a world where AI is a passive recipient of our explanations to one where AI actively accesses and utilizes a structured representation of our professional selves. This creates a lasting advantage, freeing up cognitive load and enabling more sophisticated AI-driven workflows.

  • Develop Your Personal Context Portfolio: Begin by creating the foundational Markdown files outlined in the podcast; this is the first step in combating context repetition. (Immediate Action)
  • Leverage AI for Drafting: Use AI assistants (like Claude or ChatGPT) as interviewers to help populate your portfolio files. This transforms the tedious task of self-explanation into an interactive process. (Immediate Action)
  • Establish a GitHub Repository: House your personal context portfolio files in a public or private GitHub repository. This provides a centralized, version-controlled location. (Immediate Action)
  • Build a Local MCP Server: Follow the podcast's guidance to set up a local MCP server that serves your context portfolio. This is a technical investment that pays off in immediate efficiency gains for local AI interactions. (Next 1-2 Weeks)
  • Deploy a Remote MCP Server: Extend your context availability by deploying your MCP server to a cloud platform. This enables broader access for various AI tools and agents. (Next 1-2 Months)
  • Iteratively Refine Your Portfolio: Treat your context portfolio as a living document. Regularly update files as your roles, projects, and priorities evolve. This ongoing maintenance is crucial for sustained advantage. (Ongoing Investment)
  • Explore Advanced Agent Interactions: Once your MCP server is live, experiment with different agents and tools, directing them to query your server for context. This will reveal new possibilities for AI-assisted workflows. (Next 3-6 Months)

---
Handpicked links, AI-assisted summaries. Human judgment, machine efficiency.
This content is a personally curated review and synopsis derived from the original podcast episode.