© 2026 DEV BAK - TECH BLOG. All rights reserved.

# Controlling Claude Code & Coding Agent Behavior with AGENTS.md: A Practical Guide to Three-Tier Context Engineering — Always / Ask First / Never

If you've used AI coding agents like Claude Code, Cursor, or OpenAI Codex, you've probably had this experience. You clearly said "just fix the tests," but the agent suddenly touches package.json or quietly peeks at your .env file. I used to find myself frantically hitting Ctrl+Z and thinking "wait, hold on..." And honestly, that's the lucky scenario. In practice, it's not uncommon for agents to accidentally add dependencies, break a package build, or in serious cases, push commits directly to the production branch.

The problem isn't that agents are "dumb." If anything, the problem is that they work too diligently and too autonomously. Because no one ever told them where the boundaries were. It's like telling someone "just figure it out" and throwing a new team member straight into the production environment without a single onboarding document.

This article covers how to define those boundaries in a single file. By codifying "what's always allowed / what to ask first / what's absolutely forbidden" in AGENTS.md, agents behave far more predictably, and teams can delegate autonomy with greater confidence. Drawing from real-world experience, I'll walk through rule-writing principles, hierarchical override patterns, and an honest take on "does this actually work?"


## Core Concepts

### Three Buckets — A Map of Agent Behavior

The idea behind AGENTS.md is simple: divide everything an agent can do into exactly three zones.

| Bucket | Meaning | Representative keywords |
| --- | --- | --- |
| Always | Actions that can be executed immediately without approval | Allow, Always, Use |
| Ask first | Actions that require user approval before execution | Prompt, Ask before, Confirm |
| Never | Actions that cannot be executed under any circumstances | Forbidden, Never, Avoid |

Priority order is forbidden > prompt > allow. The more restrictive rule always wins. There's a deliberate design reason for this. If "allow" wins when "allow" and "forbidden" conflict, a single missing rule could cause the agent to behave unpredictably. Conversely, if "forbidden" wins, the worst case is that the agent is overly cautious — which is far safer. This principle is designed with predictability and safety as the top priorities.
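The "most restrictive rule wins" logic is simple enough to sketch in a few lines of Python. This is a toy illustration of the precedence idea, not any particular tool's implementation:

```python
# Severity ranking: higher number = more restrictive.
SEVERITY = {"allow": 0, "prompt": 1, "forbidden": 2}

def resolve(verdicts):
    """Given the verdicts of every matching rule, return the effective one."""
    if not verdicts:
        return "allow"  # no rule matched; the default policy is up to the tool
    return max(verdicts, key=lambda v: SEVERITY[v])

print(resolve(["allow", "forbidden"]))  # forbidden
print(resolve(["allow", "prompt"]))     # prompt
```

Because `forbidden` always outranks `prompt` and `allow`, a single overlooked "allow" rule can never silently unlock something a "forbidden" rule was meant to block.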

### Context Engineering — The Next Step Beyond Prompt Engineering

If you think of AGENTS.md as simply "writing instructions in a file," you're only seeing half the picture.

Context Engineering: A concept defined by Anthropic as "a set of strategies for curating and maintaining the optimal set of tokens during LLM reasoning." While prompt engineering focuses on what to say, context engineering is systems engineering that designs what information to provide and in what order.

Providing a rules file to an agent is similar to giving an onboarding document to a new team member. It's like pre-loading the context of "this is how our team works." The difference is that an agent doesn't read and remember a document — it processes the rules as context on every single request. There's also a tradeoff: since there's a limit to how much information an agent can process at once (token limits), the longer the rules get, the less space remains for the actual task.

### 4 Principles for Writing Good Rules

Here are the criteria I've refined through real-world use.

  1. Start with action verbs — The first word should be an imperative verb like Use, Avoid, Prefer, Always, or Never. "Pursue clean code" is far less clear than "Use named exports only."
  2. Keep it under 25 words — Short, specific rules produce more consistent results than long, vague instructions.
  3. Separate positives from negatives — Mixing "do this" and "don't do this" confuses the agent too. Explicitly separating the buckets is more effective.
  4. Concrete over abstract — Instead of "write type-safely," use "TypeScript strict mode, no any types, named exports only." That actually produces consistent results.
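Put together, the four principles turn a vague instruction into short, bucketed, checkable rules. Here's a before/after sketch — the rules themselves are invented for illustration:

```markdown
<!-- Before: vague, mixed positives and negatives, hard to act on -->
- Write clean, well-typed code and avoid bad practices where possible

<!-- After: action verbs, under 25 words, buckets separated -->
Always:
- Use TypeScript strict mode
- Prefer named exports over default exports

Never:
- Use the `any` type
```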

## Practical Application

Now that you understand the concepts, let's look at what the actual files look like.

### Example 1: Three-Tier Permission Model (Team-Level Baseline)

The following is a permission model reconstructed from public examples and real-world experience across several teams. The logic: freely allow file reading, formatting, and single-file checks; add a confirmation step for external-impact actions (package installs, git push); and fully block security- and deployment-related actions.

```markdown
## Permissions

### Allowed without prompt
- Read files, list files
- TypeScript single-file type check (tsc --noEmit specific-file.ts)
- Prettier, ESLint
- Vitest single test

### Ask first
- Package installs (pnpm add)
- Git push
- Deleting files, chmod
- Running full build or end-to-end test suites

### Never
- Rotate API keys or secrets
- Modify CI/CD pipeline configuration
- Push directly to main branch
```

The act of agreeing on "which tasks go in which bucket" is itself a valuable conversation for the team. It becomes an opportunity to explicitly discuss what you're ready to delegate to an agent.

Before building a complex setup, try starting with just these three lines. You'll immediately notice a difference in agent behavior.

```markdown
Always: Run pnpm type-check before finishing
Ask before: Installing dependencies
Never: Commit .env files
```

### Example 2: Coding-Convention-Focused AGENTS.md

It's not just permission models — code style rules can be expressed the same way. Here's a form you can use directly in a TypeScript project.

```markdown
## Coding Conventions

Always:
- Use async/await instead of .then()
- Prefer named exports over default exports
- Run `pnpm type-check` before marking a task complete

Ask before:
- Installing new production dependencies
- Modifying shared configuration files (tsconfig.json, vite.config.ts)
- Creating new top-level directories

Never:
- Use `any` type in TypeScript
- Commit .env files or secrets
- Bypass pre-commit hooks with --no-verify
```

Each rule has a reason behind it.

| Rule | Reason |
| --- | --- |
| Enforce async/await | Prevents callback hell and mixed usage of .then() chaining |
| Run pnpm type-check first | Catches type errors before the agent marks a task as complete |
| Ban `any` | Core of TypeScript strict mode; prevents runtime errors |
| Ban `--no-verify` | Prevents quality standard degradation from bypassing pre-commit hooks |

### Example 3: Hierarchical Override Pattern

For team projects, you can manage global settings, project-level settings, and package-level settings separately. More specific files (deeper in the directory tree) take priority.

```text
~/.codex/AGENTS.md             # Global personal defaults (my dev habits)
/repo/AGENTS.md                # Shared team standards (applies to all contributors)
/repo/packages/api/AGENTS.md   # Package-specific local rules (API server only)
```

Let's look at how this works concretely when there's a conflict. If the global file defines running Prettier as Always, but a specific package has a formatter that conflicts with auto-generated files, you can override it like this:

```markdown
# ~/.codex/AGENTS.md (global)
Always: Run Prettier before committing

# /repo/packages/api/AGENTS.md (package-level override)
Ask before: Running Prettier on generated/* directories
```

This structure lays down shared team standards while still respecting individual contexts.
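The merge behavior can be sketched as "read shallow to deep, deeper file wins on conflicts." This is a simplification — consult your tool's documentation for its exact resolution order — and the paths and rule names below are hypothetical:

```python
# Each file is reduced to a {action: verdict} dict for illustration.
global_rules  = {"run-prettier": "allow"}    # ~/.codex/AGENTS.md
repo_rules    = {"git-push": "prompt"}       # /repo/AGENTS.md
package_rules = {"run-prettier": "prompt"}   # /repo/packages/api/AGENTS.md

effective = {}
for layer in (global_rules, repo_rules, package_rules):  # shallow → deep
    effective.update(layer)  # a deeper file overrides shallower ones

print(effective)  # {'run-prettier': 'prompt', 'git-push': 'prompt'}
```

The package-level file only overrides the keys it mentions; everything else still inherits from the global and repo layers.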

### Example 4: OpenAI Codex .rules — Programmable Permissions Beyond Markdown

Writing rules as text in a Markdown file is a form where the agent "references" the rules. There's room for the agent to interpret or overlook rules differently depending on context. I spent a while unable to find a good answer to the question "what if I need stronger guarantees?"

OpenAI Codex's .rules file takes this one step further. Using Starlark syntax, it intercepts tool calls themselves and returns one of allow / prompt / forbidden.

```python
# .rules — Starlark syntax
def rule(event, config):
    # Always prompt for bash commands related to migrations
    if event.tool == "bash" and "migration" in event.input:
        return "prompt"
    # Never allow rm -rf
    if event.tool == "bash" and "rm -rf" in event.input:
        return "forbidden"
    return "allow"
```

Starlark: A Python-like configuration language developed by Google for the Bazel build system. Its guaranteed deterministic execution makes it well suited to domains like permission models, where predictability is critical.

If Markdown AGENTS.md is a request saying "please behave this way," then .rules is closer to "when this behavior is detected, always handle it this way." Text-based rules are sufficient in the vast majority of cases, but for security- and data-related prohibitions, this kind of code-level blocking is far more trustworthy.


## Pros and Cons Analysis

### Advantages

Honestly, the second item was the one that hit home the most for me. You don't really know how good it feels when style complaints like "why did you use any?" disappear from PR reviews — until you try it yourself.

| Item | Details |
| --- | --- |
| Behavioral predictability | Forbidden rules operate as "blocks," not "suggestions." Combined with tool permission settings, you get strong guarantees. |
| Externalizing team context | Gathering code conventions, commit practices, and forbidden patterns in a single file reduces onboarding costs and eliminates style debates in PR reviews. |
| Incremental refinement | AGENTS.md is a living document. When an agent makes the same mistake twice, you add that correction to the file — team experience accumulates as rules. |
| Tool-agnostic scalability | As the AGENTS.md open format becomes standardized, the "One AGENTS.md to Rule Them All" paradigm — controlling Claude Code, Codex, Cursor, and others with a single file — is becoming a reality. |

### Downsides and Caveats

It's not all upside. Here are the pitfalls I've encountered honestly laid out.

Actual effectiveness deserves a clear-eyed look. When I first saw a 2025 study measuring the effectiveness of AGENTS.md context files, I was genuinely surprised. Developer-written context files improved agent performance by an average of just 4%, while LLM-generated files actually caused a 3% decline. "Only 4%?" was my first thought — but these figures measure coding-performance benchmarks; things like reduced team communication costs or style consistency weren't part of the measurement. It's more realistic to treat this as a tool for reducing team communication overhead than to expect agents to write better code.

Another thing to watch out for: agents don't execute rules like a program. A rule is just one signal among many that influences a probabilistic outcome. Writing something in the Never bucket does not guarantee the agent will never do it. For truly important prohibitions, double-protecting with tool permission settings is necessary. Tool permission settings means restricting the list of tools the agent can use in the first place. For example, if you remove the Write tool from the allowlist in Claude Code, the agent simply cannot modify files at all. "What if it doesn't follow the rule?" is a worry better replaced with "make it impossible in the first place."
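As one concrete illustration, Claude Code reads permission rules from a settings file (typically `.claude/settings.json`) with allow/deny lists of tool patterns. The snippet below is a sketch based on that documented mechanism — the exact pattern syntax varies by version, so verify against current docs before relying on it:

```json
{
  "permissions": {
    "allow": [
      "Read",
      "Bash(pnpm type-check:*)"
    ],
    "deny": [
      "Read(./.env)",
      "Bash(git push:*)"
    ]
  }
}
```

A `deny` entry here is enforced by the tool harness itself, not interpreted by the model — which is exactly the "make it impossible in the first place" guarantee a Markdown rule cannot give you.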

There's also the file bloat problem. Stuffing in every conceivable rule makes the file huge, which backfires. Since there's a limit to how much information an agent can process at once, longer rule files leave less space for actual work. Twenty short, precise rules beat one hundred long, vague ones.

| Item | Details | Mitigation |
| --- | --- | --- |
| Smaller effect than expected | Context files averaged only a 4% improvement. | Use as a tool for reducing team communication costs. |
| Rules are suggestions, not commands | Agents reflect rules probabilistically. | Implement hard guardrails via tool permission settings. |
| File bloat | Rules consume tokens, eating into actual work space. | Keep only short, essential rules. |
| Incomplete cross-tool compatibility | Each agent still interprets the file differently. | Share only common-denominator rules; manage tool-specific rules separately. |

### Most Common Mistakes in Practice

These are cases I've personally experienced or that have caused real problems for teams.

  1. Having an LLM auto-generate AGENTS.md — I initially thought "wouldn't it be convenient to show GPT our project code and have it write the AGENTS.md?" and tried it. It looks plausible, but the rules that came out were disconnected from the actual team context. The research results mentioned earlier back this up — auto-generated files actually caused a performance decline. Rules written by an agent didn't come from pain the team actually experienced. Writing them yourself is strongly recommended.

  2. Making the "Always" bucket too large — For the sake of convenience, putting too many tasks in Always means the agent autonomously executes large tasks at unexpected times. Starting conservatively and gradually expanding is recommended. Rather than "I wish this were allowed," use "how bad would it be if this went wrong?" as your criterion for sorting into buckets.

  3. Trusting only the rules file and skipping tool permission settings — Writing something in the Never bucket doesn't guarantee the agent will never execute it. For truly dangerous actions (modifying a production DB, rotating secrets, etc.), it's safer to remove the tool permission entirely or hard-block it with middleware.

HumanLayer: A middleware tool that helps automate the "ask first" tier. It automatically detects specific tool calls (e.g., Bash commands containing migrations) based on rules, blocks them, and escalates to the user — providing a middle ground between soft rules and hard blocks.


## Closing Thoughts

When you explicitly define the boundaries of behavior, agents become far more predictable. Instead of expecting an agent to "figure it out," declaring boundaries in a single AGENTS.md file lets the entire team delegate autonomy with greater confidence. As rule files accumulate, they become onboarding documentation, style debates disappear from PR reviews, and team experience accumulates as context rather than code. Not agents that "just get it right," but agents that move the way the team designed them to — that's what a team that collaborates well with agents looks like to me.

Here are three steps you can start on right now.

  1. Create an AGENTS.md file in your project root. Three lines are enough to start. Always: Run pnpm type-check before finishing / Ask before: Installing dependencies / Never: Commit .env files. Even this much will make a noticeable difference in agent behavior.

  2. Gather "moments when the agent surprised you most" with your team. That list becomes the first draft of your Never bucket. Rules born from real incidents are the most effective.

  3. For truly important prohibitions, double-protect with tool permission settings. It's worth considering disabling specific tools in your agent's permission settings, or using middleware like HumanLayer to automatically block specific command patterns.


Next article: How to design AGENTS.md for multi-agent orchestration — context separation strategies between orchestrators and subagents, and role-based permission delegation patterns


## References

  • AGENTS.md — a simple, open format for guiding coding agents | GitHub
  • Custom instructions with AGENTS.md – OpenAI Codex Developers
  • Improve your AI code output with AGENTS.md | Builder.io Blog
  • Writing a good CLAUDE.md | HumanLayer Blog
  • Advanced Context Engineering for Coding Agents | HumanLayer GitHub
  • Context Engineering for Coding Agents | Martin Fowler
  • Effective context engineering for AI agents | Anthropic Engineering
  • Harness engineering for coding agent users | Martin Fowler
  • Evaluating AGENTS.md: Are Context Files Helpful for Coding Agents?
  • AGENTS.md Best Practices | agentsmd.io
