Building Personalized AI Workflows for Research, Writing, and Task Management

Original Title: Claude Code for product managers: research, writing, context libraries, custom to-do system, and more | Teresa Torres
How I AI · Listen to Original Episode →

Teresa Torres, author of "Continuous Discovery Habits," has meticulously engineered a personalized productivity system using Claude Code, transcending the limitations of off-the-shelf tools. This conversation reveals the profound, non-obvious implications of embracing AI not just as an assistant, but as a pair programmer for every facet of work. The hidden consequence of conventional tools is their inability to adapt to individual workflows, leading to data lock-in and friction. Torres’s approach, however, demonstrates how non-developers can leverage AI to build highly specific, searchable, and efficient systems. Anyone seeking to reclaim control over their information flow, automate tedious tasks, and gain a competitive edge through personalized workflows will find immense value here. This is for the practitioner who understands that true productivity comes from a system that mirrors one's own thinking, not the other way around.

The Hidden Cost of Generic Workflows: Why Customization Pays Off

The allure of pre-built productivity tools is undeniable: they promise immediate functionality and a standardized approach. Yet, as Teresa Torres illustrates, this convenience often masks a deeper inefficiency. Her journey from Trello to a custom Claude Code-based system highlights a critical systems-level insight: generic tools, by their very nature, impose a one-size-fits-all structure that can stifle individual workflows and create data silos. The immediate benefit of a tool like Trello is quickly overshadowed by the long-term consequence of data being locked into a proprietary format, making it difficult to extract, search, or integrate with other systems.

Torres’s move to Claude Code, coupled with Obsidian for Markdown-based notes, represents a deliberate choice to trade immediate ease of use for ultimate control and searchability. This isn't just about task management; it's about creating an information ecosystem that works for her. The system she’s built allows her to interact with her tasks and research in a fluid, conversational manner. Instead of navigating clunky interfaces and manual tagging, she can simply tell Claude what she needs, and the AI, armed with context and specific instructions, can generate reports, update tasks, or even draft summaries.
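The episode describes this system at a high level rather than showing the files themselves, but because the tasks live as plain Markdown in an Obsidian vault, a to-do file might look something like the hypothetical sketch below -- anything Claude Code can read and edit as text, it can also query and update conversationally. The layout, tags, and file name here are assumptions, not Torres's actual schema.

```markdown
<!-- tasks/2025-01-15.md -- hypothetical layout; the real schema is whatever you and Claude agree on -->
# Today
- [ ] Draft reply to workshop inquiry  #writing  @claude-can-do
- [ ] Review new arXiv digest and flag papers worth a deep read  #research
- [x] Update course landing page copy  #marketing

# Waiting on
- [ ] Podcast edits back from producer (due Friday)
```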

"By moving my task management to Claude, Claude now sees my tasks. I can literally start my day and be like, 'Claude, what's on my to-do list that you can just do for me?'"

This simple statement encapsulates the power of her approach. Claude, acting as a pair programmer for her daily tasks, doesn't just present a list; it can act on it. This shifts the paradigm from passive task management to active task automation and augmentation. The competitive advantage here is not in doing more tasks, but in doing tasks more intelligently and with less friction. Conventional wisdom might suggest sticking with established tools, but Torres’s experience shows this can lead to a compounding problem of inefficiency and data inaccessibility. Her system, built iteratively, allows for a level of personalization that generic apps simply cannot match, turning mundane tasks into opportunities for AI-driven assistance.

The Research Firehose: Taming Information Overload with AI

The modern professional is bombarded with information. Academic papers, industry reports, blog posts -- the sheer volume can be overwhelming, leading to a state of analysis paralysis or, worse, missed opportunities. Torres’s automated research digest workflow directly confronts this challenge, demonstrating a powerful application of AI for filtering and synthesizing external knowledge. The conventional approach often involves manual searching, bookmarking, and attempting to read articles later, a process prone to procrastination and information loss.

Torres’s system, however, transforms this by integrating research collection directly into her daily workflow through Claude Code. By setting up daily digests that pull from preprint servers like arXiv and weekly searches of Google Scholar, she’s created a continuous stream of relevant research. The immediate payoff is a curated list of papers delivered to her to-do list. But the true downstream effect, the lasting advantage, lies in the AI-powered summarization. Claude generates detailed summaries focusing on methods and effect sizes, enabling Torres to quickly assess the value of a paper without needing to read it in full.
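The episode describes this digest workflow conceptually rather than sharing code, but the collection step is straightforward to sketch. The snippet below is a minimal Python example, with the query terms and output path as assumptions: it pulls the newest matching preprints from the public arXiv Atom API and writes them to a Markdown file that a Claude Code session could later summarize or turn into to-do items.

```python
# Minimal sketch of a daily arXiv pull for a research digest.
# QUERY and DIGEST_PATH are hypothetical; swap in your own topics and vault path.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET
from datetime import date

ATOM = "{http://www.w3.org/2005/Atom}"
QUERY = 'all:"continuous discovery" OR all:"product management"'
DIGEST_PATH = f"{date.today()}-arxiv-digest.md"

def fetch_recent(query: str, max_results: int = 10) -> list[dict]:
    """Pull the newest matching preprints from the public arXiv Atom API."""
    url = "http://export.arxiv.org/api/query?" + urllib.parse.urlencode({
        "search_query": query,
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "max_results": max_results,
    })
    with urllib.request.urlopen(url) as resp:
        feed = ET.fromstring(resp.read())
    papers = []
    for entry in feed.findall(f"{ATOM}entry"):
        papers.append({
            "title": " ".join(entry.findtext(f"{ATOM}title", "").split()),
            "link": entry.findtext(f"{ATOM}id", ""),
            "abstract": " ".join(entry.findtext(f"{ATOM}summary", "").split()),
        })
    return papers

if __name__ == "__main__":
    # Write a Markdown digest that a later Claude Code session can summarize,
    # critique, or convert into tasks.
    lines = [f"# arXiv digest - {date.today()}", ""]
    for p in fetch_recent(QUERY):
        lines += [f"## {p['title']}", p["link"], "", p["abstract"], ""]
    with open(DIGEST_PATH, "w", encoding="utf-8") as f:
        f.write("\n".join(lines))
```

Scheduling the script daily (via cron or a similar scheduler) and pointing a weekly Google Scholar search at the same digest folder would approximate the cadence described in the episode.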

"The only reason why I could do that is because I had this system, and I'd already looked at the paper that had just come out. I'd already analyzed it and critiqued it, and is there something we can learn from this? Then I wrote a really detailed LinkedIn post about it, and it's honestly one of my most best-performing posts on LinkedIn ever."

This anecdote perfectly illustrates the competitive edge gained from such a system. By having pre-digested, summarized research readily available, Torres was able to quickly identify a flawed study, critique it effectively, and leverage that insight into a high-performing LinkedIn post. Taming the information firehose lets her not just stay informed, but lead the conversation. Conventional approaches would have left her struggling to keep up, let alone produce original analysis. Her method, while requiring some initial setup, creates a sustainable system for knowledge acquisition that pays dividends in terms of insight and influence. The key is not just collecting information, but making it immediately actionable and digestible through AI.

Context as a Competitive Moat: Small Files, Big Intelligence

The effectiveness of any AI, especially large language models like Claude, hinges on the quality and relevance of the context provided. A common pitfall is the temptation to dump all available information into a single, massive document, assuming more context equals better results. Torres’s experience reveals the opposite: a strategy of small, focused context files, indexed and managed intelligently, creates a more powerful and efficient AI assistant. This is where the concept of a “context library” becomes a significant competitive advantage.

Torres’s approach of creating numerous small, specific context files--a writing style guide, a business profile, product details--and an index file that maps requests to these files, is a masterful application of systems thinking. The immediate benefit is that Claude can access precisely the information it needs for a given task, rather than being overwhelmed by irrelevant data. This prevents the AI from getting "stuck" or producing generic responses. The downstream effect is a highly personalized and responsive AI partner.
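The episode doesn't share her actual files, but the pattern is easy to sketch: a short index that tells Claude which small context file to load for which kind of request. The file names and categories below are hypothetical illustrations of that structure.

```markdown
<!-- context/INDEX.md -- hypothetical index mapping request types to context files -->
| When the request involves...        | Load this file                       |
|-------------------------------------|--------------------------------------|
| Blog posts, newsletters, LinkedIn   | context/writing-style-guide.md       |
| Audience or positioning questions   | context/audience-profile.md          |
| Course or workshop logistics        | context/product-discovery-course.md  |
| Business model, pricing, partners   | context/business-profile.md          |
```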

"I realized the more context I provide to Claude, the more Claude can do for me. ... I don't want one file with all my products because if we're working on one product, it doesn't need to know about the other products."

This insight is crucial. By segmenting context, Torres ensures that when she asks Claude for feedback on a blog post, it accesses her writing style guide and audience profile, not her personal dog-sitting schedule. This granular control allows for a level of precision that a single, large context file would obscure. The long-term advantage is a system that scales with her needs, remains manageable, and consistently delivers high-quality, tailored output. Conventional wisdom might suggest a single repository, but Torres’s method builds a competitive moat by making her AI’s intelligence highly specific and efficient, allowing her to be “lazy” with her prompts because the system is so well-organized. This strategy minimizes the AI’s cognitive load, leading to better, faster results and enabling her to focus on higher-level strategic thinking rather than managing AI inputs.

Key Action Items

  • Build a Personalized Task Management System:

    • Immediate Action: Identify your most friction-filled task management processes. Explore using Claude Code (or a similar LLM with code execution capabilities) and a Markdown-based note-taking app (like Obsidian) to create custom commands or scripts for task creation, updating, and retrieval (see the command sketch after this list).
    • Time Horizon: Begin implementation within the next quarter.
  • Automate Information Filtering and Summarization:

    • Immediate Action: Identify one recurring information source (e.g., industry newsletters, specific blogs, academic preprints) that feels overwhelming. Experiment with scripting (e.g., Python) to pull this information and use an LLM to generate daily summaries.
    • Time Horizon: Develop a basic prototype within the next 3-6 months.
  • Develop a Granular Context Library:

    • Immediate Action: Start by identifying one area where you frequently provide context to AI (e.g., writing style, brand guidelines, product details). Create a dedicated, small Markdown file for this specific context.
    • Longer-Term Investment: Systematically break down your knowledge base into small, focused context files, creating an index to help AI navigate them. This pays off in 6-12 months as your AI becomes more capable and requires less explicit prompting.
  • Embrace AI as a "Pair Programmer" for Writing:

    • Immediate Action: When writing, use an LLM alongside your editor. Ask it to critique your work based on specific criteria (e.g., clarity, tone, audience relevance) rather than just for general feedback.
    • Time Horizon: Integrate this practice immediately into your writing process.
  • Define "Automation vs. Augmentation" for Tasks:

    • Immediate Action: For each new task, ask: "Can Claude automate this entirely, or should Claude augment my efforts?" This reflection helps prioritize where to invest in custom workflows.
    • Time Horizon: Apply this thinking consistently over the next quarter to refine your system.
  • Iteratively Refine AI Interactions:

    • Immediate Action: When an AI interaction doesn't go as planned, instead of just restarting, document what went wrong or what context was missing. Use this to improve your prompts or context files.
    • Time Horizon: This is an ongoing practice that yields compounding benefits over time.
  • Invest in Terminal-Based Workflows (Where Appropriate):

    • Longer-Term Investment: For tasks involving code, text manipulation, or data processing, explore terminal-based AI tooling (such as Claude Code, optionally paired with an editor like VS Code) for increased efficiency and direct interaction with AI.
    • Time Horizon: This pays off in 12-18 months as you build proficiency and discover new efficiencies.
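For the first action item above, one concrete starting point: Claude Code lets you define reusable slash commands as Markdown files under a project's .claude/commands/ directory. The command below is a hypothetical sketch of a task-capture command; the file name, vault layout, and tagging conventions are assumptions, not Torres's actual setup (check the current Claude Code docs for the exact command format).

```markdown
<!-- .claude/commands/add-task.md -- invoked in Claude Code as: /add-task Draft Q3 roadmap summary -->
Add a new task to today's to-do file.

1. Open tasks/<today's date>.md, creating it from tasks/TEMPLATE.md if it doesn't exist.
2. Append "- [ ] $ARGUMENTS" under the "# Today" heading.
3. If the task looks like something you can complete yourself (drafting, summarizing,
   research), tag it with @claude-can-do and ask me whether to start on it now.
```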

---
Handpicked links, AI-assisted summaries. Human judgment, machine efficiency.
This content is a personally curated review and synopsis derived from the original podcast episode.