Tech Lead Reveals Simple Documentation Fix for AI-Generated Code That Passes Tests but Breaks Architecture

<h2 id="breaking">Breaking: AI Code Review Bottleneck Solved by Moving Team Memory Into Files</h2> <p>A tech lead has identified a critical gap in AI-generated pull requests: code that passes all tests but violates architectural rules because the AI lacks context that exists only in team memory. The fix is a pair of plain-text files stored in the repository.</p><figure style="margin:20px 0"><img src="https://cdn.hashnode.com/uploads/covers/5e1e335a7a1d3fcc59028c64/c94dff21-66d0-4256-bf3e-25c1978364d9.png" alt="Tech Lead Reveals Simple Documentation Fix for AI-Generated Code That Passes Tests but Breaks Architecture" style="width:100%;height:auto;border-radius:8px" loading="lazy"><figcaption style="font-size:12px;color:#666;margin-top:5px">Source: www.freecodecamp.org</figcaption></figure> <p>"The AI wrote clean code. Tests passed. But it imported the wrong authentication middleware because the migration policy was in a six-month-old Slack thread, not in the diff," said the author, a tech lead at a fast-moving engineering team. "I caught it on the second read, but the third reviewer wouldn't have."</p> <h2 id="background">Background: The Slow-Burn Problem AI Accelerated</h2> <p>Code generation tools like Claude Code, Cursor, and GitHub Copilot have increased pull request throughput dramatically. But with faster generation comes a longer review queue. The hardest reviews are those where everything looks right—only one wrong import or missing line that lives only in tribal knowledge.</p> <p>The author's team spent a quarter migrating from a v1 authentication middleware (MongoDB) to v2 (MySQL). New endpoints were required to use v2. An AI agent used v1 for three new endpoints. Tests passed because user records still existed in both databases. The error was invisible to automated checks.</p> <p>"Every new endpoint we shipped was reinforcing the legacy auth path we had just spent a quarter trying to retire," the author noted.</p> <h2 id="solution">The Solution: Two Files That Changed Everything</h2> <p>The fix was not a new tool but a new structure: <strong>AGENTS.md</strong> and <strong>CLAUDE.md</strong> files stored at the repository root. These documents contain the rules, conventions, and migration policies that previously lived only in team members' heads or scattered across Slack threads and meeting notes.</p> <p>"The realization was simple: move the team's memory into a place the AI could actually read," the author wrote. "The structure matters more than the tool."</p> <h3>How It Works</h3> <ol> <li><strong>Repository-level memory files</strong>: AGENTS.md for general AI agent instructions, CLAUDE.md for Claude-specific guidance. Each file contains actionable rules like "New endpoints must use v2 auth middleware."</li> <li><strong>Per-service memory files</strong>: For microservice architectures, each service gets its own memory file documenting service-specific conventions and dependencies.</li> <li><strong>Read-only guardrails</strong>: The AI PR reviewer is configured to read these files but not modify them, ensuring consistency.</li> </ol> <h2 id="what-this-means">What This Means for Engineering Teams</h2> <p>This approach addresses the root cause of AI-generated architectural violations: missing context. Instead of trying to train a model on every team rule, teams inject that knowledge directly into the codebase in a format the AI can parse.</p> <p>"The bottleneck wasn't the AI; it was that we had no mechanism to share unwritten rules with the AI," the author explained. 
"Now the AI catches those mistakes before a human is pulled in."</p> <p>The method is tool-agnostic and works with Claude Code, Cursor, Cline, GitHub Copilot, and any combination thereof. Early results show a <strong>meaningful reduction</strong> in review cycles spent on context-related errors.</p> <h2 id="implementation">Implementation: Two-Week Setup Plan</h2> <p>Teams can start from zero on an existing project. The author recommends:</p> <ul> <li><strong>Week 1</strong>: Create AGENTS.md and CLAUDE.md with the top five rules that trip up AI agents. Include migration targets, preferred libraries, and deprecation notes.</li> <li><strong>Week 2</strong>: Add per-service memory files for each microservice. Configure the PR review command to read these files by default (read-only).</li> <li><strong>Ongoing</strong>: Update the files whenever team conventions change. Generated documentation can be added as a side effect of the review process.</li> </ul> <p>"The files become a living contract between the team and the AI," the author said. "They also serve as onboarding documentation for new engineers."</p> <h2 id="limitations">What Still Needs Human Review</h2> <p>Not all review can be automated. Architectural decisions involving tradeoffs between performance, security, and maintainability still require human judgment. The AI reviewer catches mistakes in rules that are <em>explicitly documented</em>—it does not replace human reasoning about novel design choices.</p> <p>"We still do human review for anything that isn't a cut-and-dried rule," the author clarified. "But we've eliminated the category of errors that were invisible because the rule lived somewhere no one could quote it."</p> <h2 id="outcome">Outcome: A Compounding Loop of Improved Reviews</h2> <p>The feedback loop works as follows: each time a human catches a violation during review, they add the rule to the memory files. Over time, the AI catches more errors, reducing cycle time and building a shared knowledge base. The author reports that after two months, the team's review queue shrank by an estimated 30%.</p> <p>"The compounding effect is real. Every rule we add saves about 20 minutes of back-and-forth per PR," the author estimated.</p> <h2 id="sources">Sources</h2> <p>Based on a firsthand account published by a tech lead at a mid-sized engineering organization. The original guide includes detailed implementation instructions for teams using Claude Code, Cursor, Cline, or GitHub Copilot.</p>