Encoding Team Standards
April 1, 2026

I have observed this pattern repeatedly. A senior engineer, when asking
the AI to generate a new service, instinctively specifies: follow our
functional style, use the existing error-handling middleware, place it in
lib/services/, make types explicit, use our logging utility rather than
console.log. When asking the AI to refactor, she specifies: preserve the
public contract, avoid premature abstraction, keep functions small and
single-purpose. When asking it to check security, she knows to specify:
check for SQL injection, verify authorization on every endpoint, ensure
secrets are not hardcoded.

A less experienced developer, faced with the same tasks, asks the AI to
“create a notification service” or “clean up this code” or “check if this
is secure.” Same codebase. Same AI. Completely different quality gates,
across every interaction, not just review.

This is a systems problem, not a skills problem. And it requires a
systems solution.
The style, conventions, and patterns of a codebase are things its developers slowly internalize through experience and exposure to the code, rarely by sitting down to read documentation about them. LLMs, by contrast, are very good at taking exactly such documentation and incorporating it into the context in which they work.
Rahul Garg therefore proposes treating the instructions that govern AI interactions (generation, refactoring, security checks, review) as infrastructure: versioned, reviewed, and shared artifacts that encode tacit team knowledge into executable instructions, making quality consistent regardless of who is at the keyboard.
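What might such an artifact look like? The sketch below is a hypothetical instructions file checked into the repository root; the filename, paths, and rules are illustrative (drawn from the examples above), not a prescribed format. Many AI coding tools can read a repo-level instructions file like this, and because it is an ordinary file, it goes through pull-request review like any other code.

```markdown
<!-- AI-INSTRUCTIONS.md (hypothetical filename) — versioned and code-reviewed -->

## Generation
- Follow our functional style; make types explicit.
- New services live in lib/services/.
- Use the existing error-handling middleware.
- Use our logging utility, never console.log.

## Refactoring
- Preserve the public contract.
- Avoid premature abstraction.
- Keep functions small and single-purpose.

## Security review
- Check for SQL injection; use parameterized queries only.
- Verify authorization on every endpoint.
- Ensure secrets are not hardcoded.
```

With this in place, the junior developer's bare "create a notification service" prompt inherits the same quality gates the senior engineer would have typed by hand, and a change to any gate is itself reviewed and versioned.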
