
The Problem with Tribal Knowledge
Most teams have coding guidelines. They live in many places, including wikis, onboarding documents, and the heads of senior engineers who will politely point out in code review that you shouldn't log the request body. The problem is that guidelines only work if they're applied consistently, and consistency requires either constant vigilance or automation.
I wanted to find out whether I could move my team's logging rules out of a document and into the tool itself, so that Claude would apply them automatically, every time, without being asked.
Two Mechanisms Worth Knowing
Claude Code offers two complementary ways to encode behaviour:
The first is instruction-based enforcement via CLAUDE.md. Any markdown file with this name is automatically loaded into every conversation. A global ~/.claude/CLAUDE.md applies everywhere; a project-level .claude/CLAUDE.md applies only within that project. Whatever you write there becomes part of Claude's context before you type a single message.
The second is hooks: shell commands that run automatically in response to events such as a file being edited. A PostToolUse hook fires after Claude uses a tool, so a script can inspect every change Claude makes and feed the result back into the conversation. This is useful for mechanical checks, such as flagging console.log in production code or detecting known sensitive field names in log calls.
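A minimal sketch of the kind of check such a hook might run, assuming a Node-based script is wired into the PostToolUse event. The field names and patterns here are illustrative, not our real list:

```typescript
// Sketch of a mechanical check a PostToolUse hook could run over an
// edited file. Both the sensitive-field list and the regexes are
// illustrative placeholders.
const SENSITIVE_FIELDS = ["password", "cardNumber", "ssn", "authToken"];

function findLoggingViolations(source: string): string[] {
  const violations: string[] = [];
  source.split("\n").forEach((line, i) => {
    // Pattern-matchable rule 1: no console.log in production code.
    if (/\bconsole\.log\(/.test(line)) {
      violations.push(`line ${i + 1}: console.log in production code`);
    }
    // Pattern-matchable rule 2: known sensitive field names in log calls.
    for (const field of SENSITIVE_FIELDS) {
      if (/\blogger\.\w+\(/.test(line) && line.includes(field)) {
        violations.push(`line ${i + 1}: sensitive field "${field}" in log call`);
      }
    }
  });
  return violations;
}
```

The hook would print these violations so Claude sees them immediately after the edit and can fix them in the same turn.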
For nuanced, semantic rules like:
- Is this log message meaningful?
- Does this error context carry enough information?
instructions win. Hooks are better suited to enforcing things a pattern-matcher can check. The two are complementary, not competing.
Starting with the Source
I had two sources to draw from: a blog post I'd written summarising the principles, and a detailed working document going deeper into the mechanics. Rather than summarise them myself, I pointed Claude at both and let it read them directly: the blog post via URL, the document from a local file.
The first step was simply getting Claude to read the blog post on its own, so it could internalise the philosophy behind the rules. The blog post contained the “golden rule” questions, such as “Would this log help someone diagnose a real problem?”, and the “scaffolding” metaphor, which frames debug logs as temporary supports that should be removed once the structure is sound.
The second step was having Claude read the working document, this time for the mechanics. For example, structured context shapes define what information belongs in each log, correlation IDs tie related logs together across services, and the logging library pattern ensures every service logs in a consistent way.
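To make those mechanics concrete, here is a hedged sketch of what a shared logging wrapper of the kind the document describes might look like; the names and context shape are illustrative, not the actual library:

```typescript
// Illustrative "logging library" pattern: a shared wrapper that
// enforces a structured context shape and attaches a correlation ID
// to every log line, so every service logs the same way.
type LogContext = {
  correlationId: string;   // ties related logs together across services
  service: string;
  [key: string]: unknown;  // additional structured fields per call
};

function makeLogger(service: string, correlationId: string) {
  const base: LogContext = { correlationId, service };
  // Emit one JSON object per log line; callers never hand-build shapes.
  const emit = (level: string, message: string, extra: Record<string, unknown> = {}) =>
    JSON.stringify({ level, message, ...base, ...extra });
  return {
    debug: (msg: string, extra?: Record<string, unknown>) => emit("DEBUG", msg, extra),
    info:  (msg: string, extra?: Record<string, unknown>) => emit("INFO", msg, extra),
    error: (msg: string, extra?: Record<string, unknown>) => emit("ERROR", msg, extra),
  };
}
```

The point of the pattern is that the shape is decided once, in the wrapper, rather than re-decided at every call site.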
The result was a single set of rules that merged both sources without duplication: the blog post's philosophy (the "golden rule" questions, the scaffolding metaphor for debug logs) combined with the document's technical specifics (structured context shapes, correlation ID propagation, the logging library pattern).
This merge was the moment the guidelines stopped being tribal knowledge and became something Claude could actually enforce.
Making It Portable
The natural instinct might be to put everything into the global CLAUDE.md directly. But that creates a problem: if you want the same rules to apply in multiple projects, you end up either duplicating them or having nothing project-specific at all.
Claude Code supports an @filename import syntax in CLAUDE.md. This means you can keep the rules in a dedicated file, logging-guidelines.md, and have CLAUDE.md simply reference it:
@logging-guidelines.md
Now the rules file is portable. You can drop it into any project's .claude/ directory, add the import line to that project's CLAUDE.md, and the same standards apply. You can also layer team- or organisation-specific rules on top. In our case, that meant a checkout-logging-guidelines.md covering a utility function specific to our HTTP client library.
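Concretely, a project's configuration might end up laid out something like this (paths illustrative):

```
.claude/
├── CLAUDE.md                        # contains the line: @logging-guidelines.md
├── logging-guidelines.md            # portable rules, shared across projects
└── checkout-logging-guidelines.md   # project-specific layer on top
```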
After applying these rules to a project, I discovered we needed another rule. We created checkout-logging-guidelines.md to encode our convention of using handleException, a function from our HTTP client library, so errors are sanitised before they reach the logger.
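The real handleException lives in our HTTP client library; a hypothetical sketch of the convention it encodes might look like this:

```typescript
// Hypothetical stand-in for the sanitising helper the rule mandates.
// The idea: errors are reduced to safe fields before they reach the
// logger, so stack traces, response bodies, and request config objects
// (which can all carry PII) never land in log context.
function handleException(err: unknown): { name: string; message: string } {
  const e = err instanceof Error ? err : new Error(String(err));
  return { name: e.name, message: e.message };
}

// Usage: never pass the raw error into log context.
// logger.error("payment failed", { error: handleException(err) });
```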
Applying It to a Real Project
Setting up the configuration is only part of the story. The real test is whether it changes what Claude actually does.
I pointed Claude at a service and asked it to find and fix violations of the new logging rules. It explored the codebase, identified every logger call that broke the rules, and produced a branch with fixes, grouping them into logical commits: log level corrections, PII removal, and then a separate pass to apply the new handleException pattern across all catch blocks.
The kinds of issues it found were the kinds that slip through code review: INFO logs for internal chatter that should be DEBUG; ERROR calls for configuration problems that aren't actually errors; caught exceptions passed raw into log context, bypassing the sanitisation utility the team already had.
Introducing handleException surfaced something subtler. When Claude started updating tests, it tried to mock the sanitisation function itself, which meant the tests would pass without ever exercising the real sanitisation guarantees. To prevent that, we added another rule: the HTTP client module (and its sanitisation utility) should never be mocked.
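In the guidelines file, the added rule can be as short as this (wording illustrative):

```
## Testing
- Never mock the HTTP client module or its sanitisation utility
  (handleException). Mocking it lets tests pass without exercising
  the real sanitisation guarantees.
```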
When a Copilot review later flagged a genuine bug, a JSON.parse call that would throw uncaught if the environment variable contained malformed JSON, Claude addressed the comment, fixed the code, added a test, and replied directly on the pull request.
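The fix follows a familiar shape: guard the parse so malformed input degrades gracefully instead of crashing at startup. A minimal sketch, with a hypothetical variable name and fallback:

```typescript
// JSON.parse throws a SyntaxError on malformed input, so reading
// config from an environment variable needs a guard. The function
// name and empty-object fallback are illustrative.
function parseFeatureFlags(raw: string | undefined): Record<string, boolean> {
  if (!raw) return {};
  try {
    return JSON.parse(raw);
  } catch {
    // Malformed JSON in the environment should not crash the service.
    return {};
  }
}
```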
What this demonstrated was the full loop: applying the rules to a real project, discovering gaps like the mocking strategy, and watching Claude enforce the refined rules end to end.
What This Actually Is
What we built here is closer to a linting configuration than a prompt. The rules are written once, stored in files, and loaded automatically. Claude enforces them the same way a well-configured ESLint rule does, except the rules can be semantic ("is this log message meaningful?") rather than merely syntactic.
The broader point is that Claude Code is configurable in ways that go beyond telling it your preferred language or framework. You can encode your team's standards, your organisation's security requirements, your hard-won opinions about how things should be done, and have them applied consistently, on every project, by default.
Next, I gave the resulting PR to my team to review. I suspect that will be a whole other story.
28 April 2026
