In a rare glimpse behind the curtain of one of Silicon Valley’s most closely watched AI companies, Boris Cherny, the creator of Claude Code, has shared an extensive playbook detailing how Anthropic’s own engineering team uses the tool to supercharge their development workflows. The insights, shared via a detailed thread on X, reveal a sophisticated approach to AI-assisted coding that goes far beyond simple autocomplete suggestions, instead treating Claude as a collaborative partner in a multi-threaded development environment.
The revelations come at a pivotal moment for the AI coding tools market, where GitHub Copilot, Amazon Q Developer (the successor to CodeWhisperer), and various other solutions compete for developer mindshare. Yet Cherny’s recommendations suggest that the real competitive advantage may not lie in the underlying models themselves, but rather in how developers structure their workflows around these tools. His emphasis on parallel processing, meticulous planning, and systematic knowledge management points to a maturation of AI-assisted development from experimental novelty to production-critical infrastructure.
What makes these insights particularly valuable is their source: these aren’t theoretical best practices from consultants or marketing materials, but battle-tested techniques from the team building Claude Code itself. The recommendations range from terminal configuration tweaks to sophisticated multi-agent orchestration strategies, painting a picture of a development environment where AI assistants are deeply integrated into every phase of the software lifecycle, from initial planning through debugging and analytics.
Parallel Processing: The Productivity Multiplier That Changes Everything
The single most impactful recommendation from Cherny’s team is deceptively simple: run multiple Claude sessions simultaneously using git worktrees. According to Cherny, spinning up three to five worktrees at once, each running its own Claude session in parallel, represents “the single biggest productivity unlock” and tops the list of internal team recommendations. This approach fundamentally reimagines the developer’s relationship with AI assistance, transforming it from a single-threaded consultation model into a parallel processing powerhouse.
The technical implementation relies on git worktrees, a feature that allows developers to check out multiple branches simultaneously in different directories, all sharing the same repository. While Cherny himself prefers multiple git checkouts, he notes that most of the Claude Code team has standardized on worktrees, a preference so strong that team member Adam Morris built native support for them directly into the Claude Desktop app. This integration eliminates friction and makes parallel development the path of least resistance.
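For readers who haven’t used worktrees, a minimal setup looks like this (directory and branch names are illustrative):

```bash
# From an existing clone, create sibling checkouts that share one repository
git worktree add ../myapp-review feature/review-flow   # check out an existing branch
git worktree add -b feature/export ../myapp-export     # create and check out a new branch
git worktree list                                      # list all active checkouts

# Each directory then hosts its own Claude session
cd ../myapp-review && claude
```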
The team has developed sophisticated organizational systems around this parallel workflow. Some engineers name their worktrees and create shell aliases—za, zb, zc—enabling single-keystroke navigation between different development contexts. Others maintain a dedicated “analysis” worktree exclusively for reading logs and running BigQuery queries, separating exploratory data work from active development. This specialization of worktrees mirrors the way developers might organize physical or virtual desktops, but with the added dimension of each workspace having its own AI assistant with focused context.
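The alias setup is trivial to reproduce; a sketch, with hypothetical paths matching the worktrees above:

```bash
# ~/.zshrc: one alias per worktree for near-instant context switches
alias za='cd ~/code/myapp-review'
alias zb='cd ~/code/myapp-export'
alias zc='cd ~/code/myapp-analysis'   # dedicated worktree for logs and queries
```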
The Plan-First Philosophy: Why Rushing to Code Costs More Time
Cherny’s second major recommendation represents a counterintuitive discipline in an era where speed is often conflated with productivity: start every complex task in plan mode, and invest significant energy in the planning phase so Claude can execute the implementation in a single attempt. This approach inverts the typical developer instinct to immediately begin coding, instead treating the planning phase as the highest-leverage activity.
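Plan mode is built into Claude Code itself: Shift+Tab cycles between modes mid-session, and (per the current documentation, so verify against your version) a session can be started directly in it:

```bash
# Start a session that researches and plans before touching any files
claude --permission-mode plan
```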
The team has developed creative variations on this planning-first approach. One engineer runs two Claude instances sequentially for complex tasks: the first Claude writes the plan, then they spin up a second Claude instance and instruct it to review the plan as a staff engineer would. This adversarial review process catches logical gaps and architectural issues before any code is written, dramatically reducing the cost of changes. The technique essentially creates a virtual design review process, with AI assistants playing both the proposer and reviewer roles.
Perhaps more importantly, team members have learned to recognize when to return to planning mode. As Cherny explains, “the moment something goes sideways, they switch back to plan mode and re-plan. Don’t keep pushing.” This discipline—stopping implementation when problems arise and returning to the planning phase—prevents the common trap of accumulating technical debt through rushed fixes. Engineers also explicitly tell Claude to enter plan mode for verification steps, treating validation as a first-class planning activity rather than an implementation detail.
Self-Documenting AI: Teaching Claude to Learn From Its Own Mistakes
The third pillar of the team’s approach involves systematic knowledge management through a CLAUDE.md file, a practice that turns every correction into a learning opportunity. Cherny recommends that after every correction, developers should end with the instruction: “Update your CLAUDE.md so you don’t make that mistake again.” According to Cherny, “Claude is eerily good at writing rules for itself,” suggesting that the AI can effectively self-document its own failure modes and guardrails.
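The resulting entries tend to be short, concrete prohibitions. A hypothetical example of the kind of rule Claude might append to the file after a correction:

```bash
# Illustrative only; in practice Claude writes the rule itself
cat >> CLAUDE.md <<'EOF'
- Never edit generated files under src/gen/; change the .proto sources
  and run `make codegen` instead.
EOF
```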
This approach transforms the CLAUDE.md file from static documentation into a living knowledge base that evolves with each project. The team emphasizes ruthless editing of this file over time, continuously iterating until Claude’s mistake rate measurably drops. This data-driven approach to AI guidance treats the CLAUDE.md file as a tunable parameter in the development system, one that can be optimized through systematic experimentation and measurement.
One engineer has extended this concept even further, instructing Claude to maintain a notes directory for every task and project, updated after every pull request. The CLAUDE.md file then points to these notes, creating a hierarchical knowledge structure that provides both general guidelines and project-specific context. This architecture mirrors how human engineers maintain both general coding standards and project-specific documentation, but with the AI assistant actively participating in the documentation process rather than merely consuming it.
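A sketch of that hierarchy, with invented file names:

```bash
# General rules live in CLAUDE.md; per-task context lives in notes/
mkdir -p notes
cat >> CLAUDE.md <<'EOF'
Before starting a task, read the matching file in notes/ (for example,
notes/2025-11-rate-limiter.md) and update it after every pull request.
EOF
```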
Building a Personal AI Toolkit: Skills as Reusable Development Assets
The fourth recommendation elevates Claude Code usage from ad-hoc assistance to systematic automation through custom skills committed to git and reused across projects. The team’s philosophy is straightforward: if you do something more than once a day, turn it into a skill or command. This transforms repetitive tasks into reusable assets that compound in value over time, building a personal library of AI-powered automation.
The team has developed several creative applications of this principle. One engineer built a /techdebt slash command that runs at the end of every session to identify and eliminate duplicated code, treating technical debt reduction as a continuous process rather than a periodic cleanup effort. Another created a slash command that syncs seven days of content from Slack, Google Drive, Asana, and GitHub into a single context dump, solving the problem of scattered information across multiple platforms.
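Custom slash commands in Claude Code are plain Markdown prompt files picked up from a commands directory in the repo, with the filename becoming the command name. A minimal sketch of a /techdebt command, with hypothetical prompt wording:

```bash
# The file name determines the command, so this is invoked as /techdebt
mkdir -p .claude/commands
cat > .claude/commands/techdebt.md <<'EOF'
Review the code touched in this session for duplicated logic. For each
duplicate, propose a shared helper, apply the refactor, and re-run the
affected tests.
EOF
```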
Perhaps most ambitiously, some team members have built analytics-engineer-style agents that write dbt models, review code, and test changes in development environments. These specialized agents represent a significant evolution beyond general-purpose coding assistance, creating domain-specific AI tools tailored to particular workflows and toolchains. By committing these skills to git, they become team assets that can be shared, versioned, and collaboratively improved.
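Skills follow a similar file-based convention: a directory containing a SKILL.md whose frontmatter description tells Claude when to reach for it. A hedged sketch of what an analytics-engineer-style skill might look like (the name and instructions are invented, not the team’s actual skill):

```bash
mkdir -p .claude/skills/dbt-models
cat > .claude/skills/dbt-models/SKILL.md <<'EOF'
---
name: dbt-models
description: Write, review, and test dbt models against the dev target.
---
When changing a model, edit the SQL, run `dbt build` for the affected
models with `--target dev`, and summarize any failures before proposing
fixes.
EOF
```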
Autonomous Debugging: Letting Claude Fix Its Own Mistakes
The fifth major insight challenges the traditional developer instinct to micromanage debugging processes. According to Cherny, “Claude fixes most bugs by itself” when given appropriate access to context and autonomy. The team’s approach emphasizes providing Claude with the necessary information and trusting it to determine the debugging strategy, rather than prescribing specific steps.
The integration with collaboration tools streamlines this process considerably. By enabling the Slack MCP (Model Context Protocol), engineers can paste an entire Slack bug thread into Claude and simply say “fix,” eliminating context switching between communication and development tools. This seamless integration treats bug reports as first-class inputs to the development process, with Claude capable of parsing conversational context, identifying the core issue, and implementing a fix.
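Registering an MCP server is a one-line operation; the exact server package and credentials depend on which Slack MCP implementation a team runs, so treat this as a sketch with placeholder values:

```bash
# Register a Slack MCP server for this project (package name and token
# are placeholders)
claude mcp add slack \
  -e SLACK_BOT_TOKEN=xoxb-your-token \
  -- npx -y @modelcontextprotocol/server-slack
```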
The team also embraces high-level debugging directives that leave implementation details to Claude’s discretion. Instructions like “Go fix the failing CI tests” provide the objective without constraining the approach. Team members even point Claude at Docker logs to troubleshoot distributed systems, a task that typically requires significant expertise to parse complex, multi-service log streams. Cherny notes that Claude is “surprisingly capable” at this type of systems-level debugging, suggesting that AI assistants may be approaching or exceeding human performance in certain diagnostic tasks.
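Claude Code’s non-interactive print mode makes this kind of directive scriptable. For example, recent logs can be piped straight into a one-shot diagnosis (the service names are hypothetical):

```bash
# Pipe half an hour of logs from two services into a debugging prompt
docker compose logs --since 30m api worker |
  claude -p "These services intermittently return 502s. Identify the likely root cause and suggest a fix."
```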
Advanced Prompting Techniques: Making Claude Your Toughest Reviewer
The sixth set of recommendations focuses on prompt engineering techniques that invert traditional power dynamics, positioning Claude as a critical reviewer rather than a passive assistant. One technique involves instructing Claude to “Grill me on these changes and don’t make a PR until I pass your test,” effectively making the AI assistant the gatekeeper for code quality. This approach forces developers to articulate and defend their design decisions, often surfacing issues that might otherwise slip through review.
Another technique addresses the common problem of accepting mediocre solutions due to time pressure or cognitive fatigue. After receiving a suboptimal fix, developers can prompt Claude with: “Knowing everything you know now, scrap this and implement the elegant solution.” This instruction leverages the context Claude has accumulated during the initial implementation attempt, but frees it from the anchoring bias of the existing solution. The result is often a significantly better implementation that benefits from lessons learned during the first attempt.
The team also emphasizes the importance of detailed specifications before handing off work to Claude. The principle is straightforward: the more specific and unambiguous the requirements, the better the output. This recommendation aligns with traditional software engineering wisdom about requirements specification, but takes on new importance when the implementer is an AI assistant that cannot ask clarifying questions in the same way a human colleague might.
Terminal Optimization: Infrastructure for AI-Assisted Development
The seventh set of recommendations addresses the often-overlooked infrastructure layer: terminal and environment configuration. The team has standardized on Ghostty as their preferred terminal emulator, with multiple engineers citing its synchronized rendering, 24-bit color support, and proper Unicode handling as key advantages. These technical capabilities matter more in an AI-assisted development environment where developers frequently juggle multiple sessions and need clear visual differentiation between contexts.
The team uses Claude Code’s /statusline command to customize their status bars, always displaying context usage and the current git branch. This persistent visibility of context consumption helps developers manage Claude’s token budget more effectively, avoiding situations where context limits interrupt productive sessions. Many engineers also color-code and name their terminal tabs, sometimes using tmux, with one tab per task or worktree. This visual organization system makes it easier to maintain mental models of multiple parallel development streams.
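The status line is just a script that Claude Code invokes, passing session details as JSON on stdin. A minimal sketch showing model and branch (the stdin field name follows the documented status-line input, but verify against your version):

```bash
# ~/.claude/statusline.sh, wired up via /statusline or the statusLine
# entry in settings.json
cat > ~/.claude/statusline.sh <<'EOF'
#!/bin/bash
input=$(cat)                                     # session details as JSON
model=$(echo "$input" | jq -r '.model.display_name')
branch=$(git branch --show-current 2>/dev/null)
echo "[$model] ${branch:-no branch}"
EOF
chmod +x ~/.claude/statusline.sh
```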
Perhaps most surprisingly, Cherny recommends voice dictation as a productivity multiplier, noting that “you speak 3x faster than you type, and your prompts get way more detailed as a result.” On macOS, this feature is accessible by pressing the function key twice. The recommendation suggests that the bottleneck in AI-assisted development is often the quality and detail of human input rather than the AI’s processing capability. By removing the friction of typing, voice dictation encourages more thorough and nuanced prompts, which in turn produce better results. Additional configuration tips are available in the official Claude Code documentation.
Subagent Architecture: Distributing Cognitive Load Across Multiple AI Instances
The eighth recommendation introduces a more advanced architectural pattern: using subagents to distribute work and manage context. The basic technique is simple—append “use subagents” to any request where you want Claude to apply more computational resources to a problem. However, the strategic applications of this pattern reveal sophisticated thinking about how to structure AI-assisted work.
One key use case involves offloading individual tasks to subagents to keep the main agent’s context window clean and focused. As Claude processes more information and generates more code, its context window fills up, potentially degrading performance or requiring a new session. By delegating discrete tasks to subagents, developers can maintain a lean main context focused on high-level coordination while subagents handle specific implementation details. This mirrors how human engineering teams structure work, with tech leads maintaining architectural context while individual contributors focus on specific components.
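Beyond appending “use subagents” to a prompt, Claude Code also supports defining named subagents as Markdown files with a frontmatter header. A hedged sketch of a context-saving delegate (name, tool list, and instructions are illustrative):

```bash
mkdir -p .claude/agents
cat > .claude/agents/log-reader.md <<'EOF'
---
name: log-reader
description: Digs through large log files and reports only the findings.
tools: Read, Grep, Bash
---
You analyze logs on behalf of the main agent. Return at most five
suspicious lines with one-sentence explanations; never paste raw log
output back into the conversation.
EOF
```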
The team has also developed sophisticated automation around subagents, particularly for security-sensitive operations. Using hooks, they route permission requests to Opus 4.5, instructing it to scan for potential attacks and auto-approve safe operations. This implementation, documented in the official Claude Code hooks guide, creates a security layer that balances developer velocity with protection against potentially dangerous operations. The pattern demonstrates how multiple AI models with different capabilities can be composed into a security architecture, with more capable models serving as gatekeepers for sensitive operations.
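The wiring for such a gatekeeper lives in the hooks section of the project settings. The sketch below registers a PreToolUse hook that hands each Bash permission request to a reviewer script; the script itself is hypothetical, but it could call a stronger model (for instance via `claude -p --model opus`) and resolve the request by emitting a permission decision as JSON:

```bash
# .claude/settings.json: route Bash permission requests to a reviewer
# script, which replies with a permissionDecision (allow / deny / ask)
cat > .claude/settings.json <<'EOF'
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": ".claude/review-permission.sh" }
        ]
      }
    ]
  }
}
EOF
```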
Analytics Without SQL: Turning Claude Into a Data Analyst
The ninth recommendation addresses a use case that extends beyond traditional software development: using Claude Code for data analytics and business intelligence. The team leverages Claude’s ability to interact with command-line tools, particularly the BigQuery CLI, to pull and analyze metrics on the fly. Cherny reveals that the team has a BigQuery skill checked into their codebase, and “everyone on the team uses it for analytics queries directly in Claude Code.” In a striking testament to the tool’s effectiveness, Cherny states: “Personally, I haven’t written a line of SQL in 6+ months.”
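Under the hood, Claude is simply composing and running standard BigQuery CLI commands. A representative example of what it might execute when asked how many sessions the product served per day this week (the project and table names are invented):

```bash
bq query --use_legacy_sql=false '
  SELECT DATE(created_at) AS day, COUNT(*) AS sessions
  FROM `myproject.telemetry.sessions`
  WHERE created_at >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
  GROUP BY day
  ORDER BY day'
```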
This application of Claude Code represents a significant expansion of its utility beyond code generation and debugging. By treating data analysis as a natural extension of the development workflow, the team eliminates the context switching typically required to answer analytical questions. Instead of opening a separate SQL client, formulating queries, and interpreting results, developers can ask Claude natural language questions about their data and receive analyzed results within the same environment where they’re writing code.
The approach generalizes to any database or data source with a CLI, MCP, or API, suggesting a broad applicability beyond BigQuery. This flexibility means teams can integrate their existing data infrastructure into Claude Code workflows without requiring specialized integrations. The pattern effectively turns Claude into a universal data interface, capable of translating natural language questions into appropriate queries for whatever data systems a team uses, then interpreting and visualizing the results.
Claude as Teacher: Transforming AI Assistance Into a Learning Platform
The final set of recommendations reframes Claude Code as an educational tool, not just a productivity enhancer. The team has developed several techniques for using Claude to accelerate learning and deepen understanding of unfamiliar codebases and technologies. The first approach involves enabling the “Explanatory” or “Learning” output style in Claude’s configuration, which instructs Claude to explain the reasoning behind its changes rather than simply implementing them. This transparency transforms each interaction into a teaching moment, helping developers understand not just what changed, but why.
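Switching styles happens inside a session; at the time of writing the command looks like this, though readers should check /output-style in their own version for the available options:

```bash
# Typed at the Claude Code prompt rather than in the shell
/output-style explanatory   # explain the reasoning behind each change
/output-style learning      # collaborative mode that leaves pieces for you
```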
The team has also discovered that Claude can generate surprisingly effective educational materials. One technique involves asking Claude to create a visual HTML presentation explaining unfamiliar code, with team members reporting that “it makes surprisingly good slides.” This capability addresses a common challenge in software engineering: the difficulty of quickly getting up to speed on complex codebases or architectural patterns. By generating visual explanations on demand, Claude accelerates the onboarding process for new projects or technologies.
For understanding system architectures and protocols, engineers ask Claude to draw ASCII diagrams, providing visual representations that aid comprehension. Perhaps most ambitiously, one team member built a spaced-repetition learning skill that implements a structured learning process: the developer explains their understanding, Claude asks follow-up questions to identify gaps, and the system stores the results for future review. This application transforms Claude from a passive reference tool into an active tutor that adapts to individual learning needs and systematically addresses knowledge gaps.
Implications for the Software Development Industry
The practices detailed by Cherny and his team suggest that we’re witnessing the emergence of a new development paradigm, one where AI assistants are deeply integrated into every phase of the software lifecycle. The sophistication of these workflows—parallel processing with git worktrees, adversarial planning reviews, self-documenting AI systems, and composed multi-agent architectures—indicates that the industry has moved well beyond treating AI coding tools as glorified autocomplete features.
What’s particularly striking is how these practices mirror and extend traditional software engineering disciplines rather than replacing them. The emphasis on planning before implementation, systematic documentation, code review, and continuous learning all have direct parallels in established best practices. However, the AI layer amplifies the effectiveness of these practices by reducing the friction and time cost associated with each. Planning becomes more thorough when Claude can rapidly prototype and critique approaches. Documentation becomes more comprehensive when the AI can write and maintain it as a natural part of the workflow. Code review becomes more rigorous when developers can instantly spin up an AI reviewer to challenge their decisions.
The economic implications are significant. If three to five parallel Claude sessions can genuinely multiply developer productivity in the way Cherny suggests, the effective cost of software development could drop dramatically while quality simultaneously improves. However, realizing these gains requires substantial upfront investment in workflow design, tooling configuration, and skill development. The detailed nature of Cherny’s recommendations—covering everything from terminal emulators to prompt engineering techniques—suggests that effectively leveraging AI assistance is itself becoming a specialized skill that separates high-performing developers from those who achieve mediocre results.
The Evolution of Developer Workflows in an AI-Augmented World
The practices emerging from the Claude Code team also raise important questions about how software development roles and responsibilities may evolve. When AI assistants can autonomously fix bugs, write analytics queries, and generate educational materials, what becomes the distinctive value of human developers? Cherny’s recommendations suggest an answer: humans increasingly focus on high-level architecture, requirement specification, and critical evaluation, while AI handles implementation details and routine tasks.
This division of labor appears in multiple recommendations. The emphasis on detailed planning and specification suggests that clearly articulating requirements becomes more important, not less, in an AI-assisted environment. The practice of having Claude serve as a critical reviewer positions humans as the ultimate decision-makers who must defend their choices against AI scrutiny. The use of voice dictation to create more detailed prompts suggests that the bottleneck is human ability to articulate nuanced requirements, not AI capability to implement them.
However, the technical sophistication required to implement these workflows—setting up git worktrees, configuring MCP integrations, writing custom skills, and architecting multi-agent systems—indicates that effective AI assistance doesn’t reduce the need for technical expertise. Instead, it shifts that expertise toward meta-level concerns: designing systems of humans and AI working together, rather than simply writing code directly. This evolution may favor developers who combine deep technical knowledge with strong communication skills and systems thinking, while potentially disadvantaging those who have focused narrowly on implementation mechanics.
Security, Reliability, and the Trust Equation
Implicit in many of Cherny’s recommendations is a high degree of trust in Claude’s capabilities and judgment. Instructions like “Go fix the failing CI tests” without specifying how, or pasting a bug report and simply saying “fix,” delegate significant autonomy to the AI assistant. The practice of routing permission requests to Opus 4.5 for automatic approval of safe operations creates an AI-powered security layer that operates with minimal human oversight. These patterns raise important questions about reliability, security, and accountability in AI-assisted development.
The team’s approach to these concerns appears to emphasize systematic learning and continuous improvement rather than restrictive guardrails. The practice of updating CLAUDE.md after every correction creates a feedback loop where mistakes inform future behavior. The use of planning mode for verification steps builds validation into the workflow rather than treating it as an afterthought. The emphasis on ruthlessly editing documentation until mistake rates measurably drop suggests a data-driven approach to reliability improvement.
Nevertheless, the delegation of significant autonomy to AI assistants represents a departure from traditional software development practices where humans directly review every change. The team’s confidence in these practices likely reflects both the capabilities of Claude specifically and their deep familiarity with its strengths and limitations. Organizations adopting similar practices will need to develop their own understanding of where AI assistance is reliable and where human oversight remains critical. The sophisticated workflows described by Cherny may represent an aspirational target rather than an immediately achievable state for teams just beginning to integrate AI into their development processes.
The Competitive Dynamics of AI-Assisted Development Tools
Cherny’s detailed recommendations also shed light on the competitive dynamics in the AI coding tools market. While much public attention focuses on which AI model is most capable—comparing Claude against OpenAI’s GPT models, Google’s Gemini, or other alternatives—these insights suggest that workflow integration and user experience may be equally or more important differentiators. The native support for git worktrees, the MCP integration for Slack and other tools, the hooks system for permission management, and the skills framework for custom automation all represent significant product engineering beyond the base language model.
This observation has important implications for both startups and established players in the developer tools market. Simply wrapping a capable language model in a basic interface may not be sufficient to compete against tools that have deeply considered how AI assistance integrates into real development workflows. The practices described by Cherny—parallel sessions, persistent context management, custom skills, multi-agent orchestration—require substantial product infrastructure to support effectively. Building this infrastructure requires both technical sophistication and deep understanding of developer workflows, creating potential barriers to entry for new competitors.
At the same time, the emphasis on customization and extensibility—through CLAUDE.md files, custom skills, and configurable hooks—suggests that developer preferences and workflows vary significantly. A one-size-fits-all approach is unlikely to serve all users equally well. This diversity may create opportunities for specialized tools targeting specific development contexts, programming languages, or workflow patterns. The market may evolve toward a mix of general-purpose platforms like Claude Code and specialized tools optimized for particular domains or use cases.
Open Questions and Future Directions
Despite the detailed nature of Cherny’s recommendations, significant questions remain about the scalability and generalizability of these practices. The Claude Code team presumably consists of highly skilled engineers working on a sophisticated codebase with substantial resources. How well do these practices transfer to smaller teams, less experienced developers, or codebases with different characteristics? The recommendation to run three to five parallel Claude sessions assumes both the financial resources to pay for that compute and the cognitive capacity to manage multiple concurrent development streams—assumptions that may not hold for all developers or organizations.
The emphasis on extensive customization—maintaining CLAUDE.md files, building custom skills, configuring terminal environments—also raises questions about the learning curve and time investment required to achieve the productivity gains Cherny describes. For individual developers or small teams, the upfront cost of implementing these practices may be substantial, and the return on investment may depend on factors like project duration, codebase complexity, and team size. Organizations will need to carefully consider whether the potential productivity gains justify the investment in training, tooling, and process changes.
Looking forward, the practices described by Cherny may represent an early glimpse of how software development will evolve as AI capabilities continue to improve. If current trends continue, AI assistants will become more capable, more reliable, and better integrated into development workflows. The sophisticated multi-agent architectures and parallel processing patterns used by the Claude Code team today may become standard practice across the industry. Alternatively, further advances in AI capability may enable even more dramatic changes in how software is created, potentially moving beyond the human-AI collaboration model toward more autonomous development systems. The practices emerging today provide valuable data points for understanding this evolution, even if the ultimate destination remains uncertain.

