The Intercom Playbook: Doubling Engineering Velocity Through AI-Driven Transformation
In a landscape where AI's impact on productivity is often debated, Intercom's experience offers a compelling counter-narrative. This conversation with Brian Scanlan, Senior Principal Engineer at Intercom, reveals not just a doubling of engineering velocity in nine months, but a fundamental shift in how an entire R&D organization operates. The non-obvious implication? AI isn't just an augmentation tool; it's a catalyst for reimagining workflows, fostering a culture of rapid iteration, and unlocking latent potential within engineering teams. This analysis is crucial for CTOs, VPs of Engineering, and product leaders seeking to harness AI not just for incremental gains, but for transformative competitive advantage. It demonstrates that by treating the engineering organization itself as a product, and by embracing a high-trust, permission-giving culture, organizations can achieve unprecedented throughput and innovation.
The Unseen Accelerator: How AI Re-Architected Intercom's Engineering Engine
The journey Intercom has taken with AI, particularly with Claude Code, is less about simply adopting a new tool and more about a systemic re-engineering of their development process. Brian Scanlan articulates a vision where "all technical work will become agent first," a bold assertion that underpins their strategy. This isn't about faster typing; it's about fundamentally changing the nature of how code is conceived, written, and delivered. The immediate, visible outcome is a dramatic increase in merged PRs per R&D employee, a metric that, while seemingly crude, serves as a powerful leading indicator of a more profound shift.
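As a concrete illustration, the metric itself is cheap to compute. The sketch below estimates merged PRs per R&D employee over a window using GitHub's search API; the org name, token variable, and headcount are placeholders, not Intercom's actual figures.

```python
# Minimal sketch: estimate merged PRs per R&D employee over a time window,
# using GitHub's search API. Org name, token, and headcount are placeholders.
import os
import requests

ORG = "example-org"      # hypothetical; substitute your GitHub org
HEADCOUNT = 250          # hypothetical R&D headcount
SINCE = "2024-01-01"     # start of the measurement window

resp = requests.get(
    "https://api.github.com/search/issues",
    params={"q": f"org:{ORG} is:pr is:merged merged:>={SINCE}", "per_page": 1},
    headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
    timeout=30,
)
resp.raise_for_status()

merged_prs = resp.json()["total_count"]  # total matches, not just the returned page
print(f"Merged PRs since {SINCE}: {merged_prs}")
print(f"Merged PRs per R&D employee: {merged_prs / HEADCOUNT:.1f}")
```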
Resistance to AI's transformative potential persists even within advanced tech circles, and Scanlan acknowledges the skepticism that AI truly moves velocity at all. Intercom's approach bypasses this debate by not merely adopting AI but by proactively building a framework around it: custom "skills" and "hooks" that integrate deeply with the existing codebase and development practices. The "create PR" skill, for instance, emerged from observing that AI-generated PR descriptions were often superficial, restating the code rather than conveying intent. By enforcing the use of this skill through hooks, Intercom holds descriptions to a higher standard, turning a potential quality degradation into an opportunity for better communication and review. This is consequence mapping in practice: the obvious move (using AI for code generation) invites a downstream negative (poor PR descriptions), so Intercom engineered the mitigation in advance.
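To make the mechanics concrete, here is a minimal sketch of the kind of guardrail such a hook could apply: a script that inspects an attempted `gh pr create` call and blocks it when the description looks superficial. The blocking convention (a pre-tool-use hook reading JSON on stdin and exiting with code 2) follows Claude Code's documented hook interface, but the quality heuristics are invented for illustration and are not Intercom's implementation.

```python
#!/usr/bin/env python3
# Sketch of a pre-tool-use hook that rejects superficial PR descriptions.
# The quality checks below are illustrative assumptions, not Intercom's rules.
import json
import sys

payload = json.load(sys.stdin)
command = payload.get("tool_input", {}).get("command", "")

# Only inspect attempts to open a PR from the shell.
if "gh pr create" not in command:
    sys.exit(0)

# Crude heuristics for "restates the code instead of conveying intent".
problems = []
if "--body" not in command and "--body-file" not in command:
    problems.append("PR has no body; describe the intent, not just the diff.")
if len(command) < 200:
    problems.append("Description looks too short to explain motivation and risk.")

if problems:
    # A non-zero blocking exit feeds stderr back to the agent for another attempt.
    print("\n".join(problems), file=sys.stderr)
    sys.exit(2)
```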
"The pattern repeats everywhere Chen looked: distributed architectures create more work than teams expect. And it's not linear--every new service makes every other service harder to understand. Debugging that worked fine in a monolith now requires tracing requests across seven services, each with its own logs, metrics, and failure modes."
Intercom's "software factory" concept, inspired by predictable assembly lines, aims to bring similar determinism to code creation. By building a repository of skills and enforcing standards, the team is not creating a "slop factory" but smoothing the "golden path" for engineers. The "flaky spec" skill is a testament to this: what started as a tedious problem became, through iterative refinement and the skill's ability to update itself, a roughly "100x" improvement. This wasn't just about fixing tests; it was about building a system that learns and improves, with a second-order payoff of increased reliability and reduced developer frustration, which in turn frees capacity for higher-level problem-solving.
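A skill like this ultimately reduces to a loop a machine can run. The sketch below shows the core check such a skill might automate, assuming an RSpec suite invoked with `bundle exec rspec`; the rerun count and classification labels are arbitrary choices, not Intercom's.

```python
# Sketch of the "is this spec flaky?" check a skill could automate: rerun a failing
# example several times in isolation and classify it by whether the outcome is stable.
import subprocess
import sys

def classify_spec(spec: str, runs: int = 5) -> str:
    """Rerun a single RSpec example; report 'stable-pass', 'stable-fail', or 'flaky'."""
    results = []
    for _ in range(runs):
        proc = subprocess.run(
            ["bundle", "exec", "rspec", spec],
            capture_output=True,
            text=True,
        )
        results.append(proc.returncode == 0)
    if all(results):
        return "stable-pass"
    if not any(results):
        return "stable-fail"
    return "flaky"

if __name__ == "__main__":
    # Usage: python classify_spec.py spec/models/conversation_spec.rb:42
    print(classify_spec(sys.argv[1]))
```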
The cost trade-off is acknowledged as a significant concern, and treating AI spend as an investment rather than a pure operational cost is key. Intercom's strategy of turning everything on for everyone and worrying about the bill later reflects a belief that the immediate gains in velocity and innovation outweigh the current expenditure; it is a calculated risk, betting on the long-term benefits of rapid adoption and learning. The telemetry infrastructure, built on tools like Honeycomb, is crucial here, providing visibility into skill usage, session data, and personalized insights. This data-driven approach lets Intercom not only track adoption but also identify areas for improvement, ensuring the AI investment yields tangible results rather than just higher costs.
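For teams that want to replicate the telemetry piece, the sketch below shows one way to emit a per-invocation event to Honeycomb over OTLP with the OpenTelemetry Python SDK. The span name and attributes are hypothetical; only the endpoint and API-key header follow Honeycomb's documented OTLP setup.

```python
# Sketch: emit one span per AI-skill invocation to Honeycomb over OTLP.
# Span name and attributes are invented for illustration.
import os
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "ai-skill-telemetry"}))
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="https://api.honeycomb.io/v1/traces",
            headers={"x-honeycomb-team": os.environ["HONEYCOMB_API_KEY"]},
        )
    )
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("skill-telemetry")

def record_skill_use(skill: str, engineer: str, session_id: str, accepted: bool) -> None:
    """Record one skill invocation so adoption and outcomes can be queried later."""
    with tracer.start_as_current_span("skill.invocation") as span:
        span.set_attribute("skill.name", skill)
        span.set_attribute("engineer.id", engineer)
        span.set_attribute("session.id", session_id)
        span.set_attribute("skill.output_accepted", accepted)

record_skill_use("create-pr", "eng-123", "session-abc", accepted=True)
```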
The 18-Month Payoff: Turning Discomfort into Durable Advantage
The most potent insights from Intercom's experience lie in how they leverage immediate challenges to build lasting competitive advantages. Their proactive approach to AI adoption, while requiring significant upfront effort and investment in infrastructure and skills, is designed to yield benefits far beyond the initial nine months.
- The "And Then" Workflow for Skill Development: Brian Scanlan highlights the "and then" workflow for building comprehensive skills. This iterative process (fix a flaky spec, then document it, then find similar issues, then apply the fix to other codebases) is a powerful system-level approach that turns a single fix into a cascading improvement. It requires patience, since the immediate payoff is not always apparent, but the downstream effect is a more robust and adaptable system.
- Permission and Accountability as a Lever: The culture of "giving permission" and "telling people they can do things," with accountability rolling up to leadership, is a critical differentiator. This directly addresses the apprehension many engineers feel about AI. By de-risking experimentation and taking on the burden of potential failures, Intercom empowers its teams to explore AI's capabilities fully. This creates a feedback loop where successful adoption and innovation are encouraged, building confidence and accelerating learning.
- Agent-Friendly Product Design: The realization that SaaS products must become "agent-friendly" is a forward-looking strategy. Anticipating that agents will increasingly interact with their products, Intercom is investing in CLIs, MCPs, and ephemeral APIs so those products remain accessible and usable in an agent-first world (a minimal sketch of an agent-facing tool surface follows this list). The observation that "conversion drop-off" becomes invisible in agent-driven workflows is a critical insight; companies that don't adapt their interfaces and APIs for AI interaction risk becoming irrelevant.
- The "Software Factory" for Predictable Quality: The move towards a "software factory" model, while sounding industrial, is about creating predictable quality and reliable processes. By building and enforcing standardized skills and workflows, Intercom reduces the variability inherent in human-led development. This allows them to maintain high standards even as velocity increases, ensuring that the speed-up doesn't come at the expense of quality.
- Investing in Tech Debt and Developer Experience: The ability to tackle technical debt and improve developer experience, once constrained by business priorities, becomes tractable when AI compresses the cost of these efforts. Intercom's experience suggests that investing in these areas, often perceived as non-revenue-generating, actually unlocks future velocity and innovation. The "flaky spec" skill is a perfect example of turning a persistent pain point into a solvable problem, freeing up engineering bandwidth.
Key Action Items
Immediate Action (Next 1-3 Months):
- Establish AI Adoption Telemetry: Implement basic tracking for AI tool usage within your engineering organization. Use tools like Honeycomb or internal logging to understand which AI features or skills are being used and how often.
- Pilot "Permission-Giving" Sessions: Designate specific times or projects where engineers are explicitly encouraged to experiment with AI tools, with clear communication that leadership will absorb initial risks.
- Identify One "Flaky" Process: Pinpoint a recurring, low-level technical annoyance (e.g., flaky tests, tedious PR descriptions, repetitive setup tasks) and task a small team or individual with exploring how AI can automate or significantly improve it.
- Review Agent Interaction Points: Audit your core product's user flows. Identify areas that are complex for humans and consider how they might be simplified or exposed via CLI or API for agent interaction.
Medium-Term Investment (Next 6-12 Months):
- Develop a Core Skills Repository: Begin building a centralized repository for custom AI skills or prompts relevant to your organization's unique workflows. Prioritize skills that address common pain points or enforce critical standards.
- Investigate Session Data Analysis: Explore collecting and anonymizing AI session data to gain deeper insights into user behavior, identify common challenges, and provide personalized feedback to your team.
- Prototype Agent-Friendly Interfaces: Develop prototypes for agent interaction with your product, focusing on CLIs, simplified APIs, or even multi-step workflows that agents can execute with minimal human intervention.
Longer-Term Strategic Play (12-18+ Months):
- Treat Engineering as a Product: Formalize the practice of treating your engineering organization's processes and tools as a product that requires continuous improvement, measurement, and iteration.
- Develop a "Software Factory" Mindset: Aim to build predictable, high-quality development workflows that leverage AI to ensure consistency and efficiency, allowing for rapid iteration on core product features.
- Strategic AI Cost Management: As adoption matures, shift focus from pure adoption to optimizing AI spend without sacrificing innovation, exploring model choices and usage patterns that balance cost and capability.