Codify Intuition: AI Review Editors for Objective, Scalable Expertise
This episode of The ChatGPT Experiment, "Creating Your Own Personal Review Editor," reveals a profound, yet often overlooked, method for enhancing productivity and objectivity: codifying intuitive review processes into reusable AI tools. Host Cary Weston argues that the "curse of knowledge" and reliance on instinct prevent many from delivering consistently high-quality work. By articulating these hidden review criteria, individuals can use AI tools like ChatGPT or Claude to apply a consistent, objective scoring matrix. This approach offers a significant advantage to anyone who repeatedly produces or evaluates similar content, transforming subjective experience into a scalable, efficient, and more accurate system. Those who embrace it gain a competitive edge by formalizing their expertise, making their feedback faster, more consistent, and less prone to personal bias.
The Hidden Cost of Intuition: Why Your Gut Feeling Isn't Scalable
Most of us operate with a set of review criteria that live entirely in our heads. We "just know" what good looks like, especially for tasks we perform repeatedly. Cary Weston, host of The ChatGPT Experiment, highlights this common pitfall: the reliance on instinct and experience, which, while valuable in the moment, fundamentally limits scalability and consistency. This isn't about a lack of expertise; it's about the inability to externalize that expertise. The immediate benefit of instinct is speed, but the long-term consequence is a bottleneck. When you can't articulate your review process, you can't delegate it, automate it, or even consistently apply it yourself.
Weston proposes a solution: formalize your intuitive review process into a structured scoring matrix using AI tools like ChatGPT or Claude. This isn't about replacing human judgment but about augmenting it. The core idea is to extract the implicit checklist that already exists in your mind and make it explicit.
"If you can articulate what you're actually looking for, you can build that into a reusable editor inside ChatGPT or Claude."
This statement is the linchpin of the episode. It reframes AI not just as a content generator or a brainstorming partner, but as a repository for codified expertise. The immediate payoff is a tool that can perform reviews with a level of consistency that a human, prone to fatigue or shifting moods, simply cannot match. The delayed payoff, and the true competitive advantage, comes from the ability to scale this consistent, objective feedback loop. Imagine a team where every piece of client-facing material is reviewed by an AI trained on the lead strategist's exact criteria. This ensures brand consistency and quality across all outputs, something incredibly difficult to achieve with manual reviews alone. The conventional wisdom is that experience is king; Weston suggests that codified experience, amplified by AI, is the true kingmaker.
From Autopilot to Automation: Extracting Expertise with the Four-Part Framework
The process of turning intuitive review into an AI-driven tool hinges on a structured conversation. Weston introduces a four-part framework designed for productive interactions with AI, which he applies here to the creation of a "review editor." This framework is crucial because it guides the user to articulate the implicit knowledge that AI needs to function effectively.
The framework consists of:
1. Identifying what you're doing: Clearly stating the task (e.g., reviewing sales proposals).
2. Explaining why you're doing it: Articulating the goal (e.g., ensuring consistency, quality, and client focus).
3. Defining what success looks like: Detailing the specific criteria, attributes, or elements that constitute a "good" output, often on a defined scale (e.g., 0-5, with 4 being the minimum acceptable).
4. Inviting questions: Allowing the AI to probe for clarification before it begins its task.
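Put together, the four parts might read like the following starting prompt. This is an illustrative sketch only; the task, goal, and criteria are placeholders drawn from the episode's sales-proposal example, to be replaced with your own:

```text
1. What: You are a review editor for our sales proposals.
2. Why: The goal is consistency, quality, and client focus across
   everything we send.
3. Success: Score each proposal 0-5 on each criterion below; 4 is the
   minimum acceptable score. Example criteria:
   - addresses the client's stated problem
   - internally consistent pricing
   - tone matching our brand voice
4. Questions: Before reviewing anything, ask me clarifying questions
   about any criterion that seems ambiguous.
```

The point of part 4 is that the AI then interviews you, surfacing criteria you hold implicitly before any review begins.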
Weston emphasizes that this isn't a one-off prompt but a guided interview. When you engage ChatGPT or Claude with this framework, the AI doesn't just passively receive instructions; it actively helps you define "good." It will ask clarifying questions about each category, pushing you to articulate nuances you might otherwise overlook. This interview process itself is a powerful exercise in metacognition, forcing you to confront the specifics of your own evaluation standards.
"It's hard to look objectively at something because we put so much time into it or we do it over and over again, and it's hard to, I'll tell you, this is specifically, it's hard to think about the other person, right? It's hard to put ourselves in the position of the other person that this document is being meant for."
This quote perfectly captures the "curse of knowledge" problem Weston aims to solve. When you're too close to your work, objectivity suffers. The AI, acting as an external, unbiased reviewer, can help bridge this gap. By interviewing you, it extracts your criteria and then applies them without personal bias. The immediate benefit is a clearer understanding of your own standards. The downstream effect is the creation of a reusable tool (a custom GPT, a Claude skill, or a saved prompt) that can consistently apply these standards. This is where the delayed payoff lies: building a repeatable, objective review process that elevates the quality of all subsequent work, creating a subtle but significant competitive advantage over those still relying solely on instinct.
Building a Moat of Objectivity: From Conversation to Custom Tool
The ultimate outcome of this structured conversation with AI is a tangible, reusable tool. This is where the concept of a "review editor" or "scoring matrix" truly takes shape, moving beyond a simple prompt to a persistent asset. The AI, having been "interviewed" about your specific review criteria, can then be directed to create a final output that encapsulates this knowledge. This could manifest as a custom GPT within ChatGPT, a "skill" in Claude, or a detailed prompt saved for repeated use.
The power of this transformation lies in its ability to operationalize subjective expertise. Instead of relying on an individual's experience, which is inherently limited and prone to inconsistency, you create a system that embodies that experience. This system can then be applied to any piece of work within its domain, providing objective, data-driven feedback.
Consider the example of reviewing sales proposals. A human reviewer might miss a subtle inconsistency in pricing or a tone that doesn't align with the company's brand voice. An AI editor, trained on the specific criteria for "good" sales proposals, can meticulously check each element, assign a score, and even suggest improvements. The immediate benefit is a faster, more thorough review. The more significant, long-term advantage is the creation of a "moat" of quality and consistency. Competitors who rely on ad-hoc, instinctual reviews will inevitably produce work with wider variance in quality. Your team, armed with a consistent AI reviewer, can deliver a more predictable and often higher standard of output.
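To make "scoring matrix" concrete, here is a minimal sketch of what the extracted criteria might look like once codified. The 0-5 scale and the minimum acceptable score of 4 come from the episode; the specific criterion names are hypothetical assumptions for illustration:

```python
# Hypothetical scoring matrix for sales proposals, codified from
# intuitive review criteria. Scale is 0-5; anything below 4 on a
# criterion flags that section for revision.

MIN_ACCEPTABLE = 4

CRITERIA = [
    "addresses the client's stated problem",
    "internally consistent pricing",
    "tone matches brand voice",
    "explicit next steps",
]

def review(scores: dict[str, int]) -> dict:
    """Apply the matrix: flag weak criteria and report an overall verdict."""
    flagged = [c for c in CRITERIA if scores.get(c, 0) < MIN_ACCEPTABLE]
    return {
        "flagged": flagged,
        "passes": not flagged,
        "average": sum(scores.get(c, 0) for c in CRITERIA) / len(CRITERIA),
    }

# Example: a proposal that is strong overall but weak on pricing.
result = review({
    "addresses the client's stated problem": 5,
    "internally consistent pricing": 3,
    "tone matches brand voice": 4,
    "explicit next steps": 4,
})
print(result["passes"])   # False: pricing scored below the minimum
print(result["flagged"])  # ["internally consistent pricing"]
```

In practice the AI editor holds this matrix in its instructions and assigns the scores itself; the value of writing it down, even informally like this, is that "good" becomes something inspectable rather than a gut feeling.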
"The goal here is to define the elements and the range by which what bad and good looks like."
This statement underscores the analytical rigor required. It’s not enough to say "make it good." You must define "good" and "bad" with enough specificity that an AI can parse it. This requires effort and a willingness to confront the often-unarticulated standards that govern your work. The immediate discomfort of this deep dive into your own processes pays off by creating a durable asset. This asset allows for more efficient scaling of quality assurance, making your team or your personal output more reliable and, by extension, more competitive. It’s about transforming personal intuition into a systemic advantage.
Key Action Items
- Immediate Action (Within the next week):
- Identify one recurring task or document type you review or produce.
- Begin articulating the specific criteria you use, even if they feel intuitive. Jot down categories and what "good" looks like for each.
- Engage ChatGPT or Claude using Weston's four-part framework to initiate the creation of a review editor for this task.
- Short-Term Investment (Over the next month):
- Refine the AI's output by asking clarifying questions and providing examples of good and bad work.
- Save the final structured prompt or instructions. Test its effectiveness on a few pieces of work.
- If using ChatGPT, explore creating a custom GPT. If using Claude, investigate creating a "skill."
- Mid-Term Investment (Over the next 3-6 months):
- Implement the AI-driven review editor for regular use on your chosen task.
- Track the impact on consistency, speed, and perceived quality of your output or feedback.
- Consider expanding the framework to a second recurring task or document type.
- Long-Term Investment (6-18 months):
- Develop a library of AI-powered review editors for multiple critical functions within your work or team.
- Train colleagues on how to use these tools, fostering a culture of objective, AI-assisted evaluation.
- Monitor and periodically update the review editors as your standards or the nature of the work evolves, ensuring your "moat of objectivity" remains robust.