The problem: agents don’t know your project
Every time an AI coding agent opens your repository for the first time, it knows nothing. It doesn’t know how to build your project, which framework you’re using, where the tests live, what your commit message format is, or that npm test silently passes even when nothing runs. It has to explore: reading READMEs, scanning file trees, grepping for build scripts, running commands that fail, retrying with different flags.
This exploration is expensive. It burns context window tokens on file reads that could be spent on the actual task. It slows down the agent, increases the chance of errors, and often leads to PRs that fail CI because the agent missed a build step or violated a convention that was never documented.
AGENTS.md solves this by giving the agent a map of your project before it starts working. It’s a structured Markdown file placed at the root of your repository that describes everything an agent needs to orient itself: how to build, how to test, what conventions to follow, and where the important code lives.
In PDRC terms, AGENTS.md is a persistent Plan artifact. It front-loads the context that every Plan phase needs, so agents (and the humans using them) don’t waste time rediscovering the same information for every task.
What is AGENTS.md?
AGENTS.md is a simple, open format for guiding coding agents. It was created collaboratively by teams at OpenAI (Codex), Google (Jules), Cursor, Amp, Factory, and others. It’s now stewarded by the Agentic AI Foundation under the Linux Foundation.
Think of it as a README for robots:
| File | Audience | Purpose |
|---|---|---|
| README.md | Humans | Project description, quick start, contribution guidelines |
| AGENTS.md | AI agents | Build commands, test steps, architecture map, conventions, gotchas |
| copilot-instructions.md | GitHub Copilot | Copilot-specific behavioral instructions (higher priority than AGENTS.md in the Copilot stack) |
Key characteristics
- Just Markdown. No proprietary format, no YAML schema, no required fields. Any heading structure works.
- Cross-agent compatible. AGENTS.md is recognized by GitHub Copilot, OpenAI Codex, Google Jules, Cursor, Zed, Aider, Warp, Semgrep, Factory, and more.
- Hierarchical. In monorepos, place an AGENTS.md at the root and additional ones inside subpackages. The nearest file to the edited code takes precedence.
- Living documentation. Treat it like code; update it when you change build steps, add packages, or refactor architecture.
As of 2026, over 60,000 open-source projects on GitHub include an AGENTS.md file — from Apache Airflow to Temporal’s Java SDK.
AGENTS.md vs. other instruction files
In Ch 3, we introduced the three types of custom instruction files. Let’s clarify how AGENTS.md fits alongside them.
The instruction hierarchy
When a conflict exists, higher-priority instructions win. But in practice, these files serve different purposes and rarely conflict:
| File | What it’s for | Who reads it |
|---|---|---|
| copilot-instructions.md | Copilot-specific behavior: response tone, code style preferences, framework conventions | GitHub Copilot (Chat, coding agent, code review) |
| .instructions.md | Path-specific rules: “use Zod for validation in this folder”, “tests here use Vitest” | GitHub Copilot |
| AGENTS.md | Project-wide operational knowledge: how to build, test, deploy; architecture overview; gotchas | All agents — Copilot, Codex, Jules, Cursor, etc. |
The mental model: copilot-instructions.md tells Copilot how to behave. AGENTS.md tells any agent how your project works.
If you only use GitHub Copilot, you could put everything in copilot-instructions.md. But AGENTS.md has a key advantage: it’s agent-agnostic. If your team also uses Cursor, Codex, or any other tool, they all benefit from the same file.
What to include in AGENTS.md
GitHub’s official documentation provides a detailed prompt for auto-generating this file (we’ll cover that shortly). The prompt categorizes content into three layers. Let’s break them down.
Layer 1: High-level context (what is this project?)
Start with the essentials that help an agent orient itself in seconds:
```markdown
# AGENTS.md

## Project overview

This is a Next.js 15 web application for managing inventory across
multiple warehouses. It uses TypeScript, Prisma for database access,
and tRPC for type-safe API routes.

### Tech stack

- **Runtime**: Node.js 22
- **Framework**: Next.js 15 (App Router)
- **Language**: TypeScript 5.7 (strict mode)
- **Database**: PostgreSQL 16 via Prisma ORM
- **Testing**: Vitest (unit), Playwright (e2e)
- **Linter**: Biome
- **Package manager**: pnpm 10
```
This saves the agent from scanning package.json, tsconfig.json, and framework config files just to understand the basic stack. It can jump straight into the task.
Layer 2: Build and validation steps (how do I work here?)
This is the most operationally important section. Document every command the agent might need, in the order they should run:
```markdown
## Setup

pnpm install          # Install dependencies
cp .env.example .env  # Create environment file
pnpm db:generate      # Generate Prisma client
pnpm db:push          # Push schema to local database

## Development

pnpm dev              # Start dev server on port 3000

## Build

pnpm build            # Type-check + build (runs `next build`)

## Testing

pnpm test                          # Run all Vitest unit tests
pnpm test:e2e                      # Run Playwright e2e tests (requires dev server)
pnpm test -- --run -t "test name"  # Run a specific test

## Linting

pnpm lint        # Run Biome linter with auto-fix
pnpm lint:check  # Check without fixing (used in CI)

## Important notes

- Always run `pnpm install` before building or testing after any
  dependency change.
- The `pnpm db:generate` step is required after any schema change,
  or the TypeScript build will fail with missing type errors.
- E2E tests require the dev server running on port 3000.
- CI runs: lint:check → build → test → test:e2e (in that order).
```
Why this matters: GitHub’s official guidance says that documenting build steps, including common failures and workarounds, is the single most effective way to reduce rejected PRs from the coding agent. The agent will attempt to run these commands, verify they pass, and fix failures before submitting.
Layer 3: Architecture and layout (where is everything?)
Help the agent find what it needs without searching:
```markdown
## Project structure

src/
  app/              # Next.js App Router pages and layouts
    api/            # tRPC API routes
    (auth)/         # Authentication-related pages (grouped route)
  components/       # Shared React components
    ui/             # Primitive UI components (Button, Input, etc.)
  lib/              # Shared utilities and configuration
    db.ts           # Prisma client singleton
    auth.ts         # Authentication helpers
  server/           # Server-side code
    routers/        # tRPC routers (one per domain entity)
    services/       # Business logic (called by routers)
prisma/
  schema.prisma     # Database schema
  migrations/       # Migration files (auto-generated)

## Architecture decisions

- tRPC routers call service functions; services contain business logic.
  Routers should NOT contain business logic directly.
- All database access goes through Prisma. No raw SQL unless there's a
  documented performance reason.
- Components in `ui/` are headless — no business logic, no API calls.
- Authentication uses NextAuth.js with JWT strategy.

## CI/CD pipeline

The GitHub Actions workflow (`.github/workflows/ci.yml`) runs on every PR:

1. Install dependencies (`pnpm install --frozen-lockfile`)
2. Lint (`pnpm lint:check`)
3. Type-check and build (`pnpm build`)
4. Unit tests (`pnpm test`)
5. E2E tests (`pnpm test:e2e`)

A PR cannot merge unless all steps pass.
```
Monorepo strategy: nested AGENTS.md files
In monorepos, a single root-level AGENTS.md often isn’t specific enough. Each package has its own build steps, testing framework, and conventions. The solution: nested AGENTS.md files.
```text
my-monorepo/
├── AGENTS.md              # Root: shared conventions, monorepo commands
├── packages/
│   ├── api/
│   │   ├── AGENTS.md      # API-specific: Express setup, test DB, migration steps
│   │   └── src/
│   ├── web/
│   │   ├── AGENTS.md      # Web-specific: Next.js, component patterns, e2e tests
│   │   └── src/
│   └── shared/
│       ├── AGENTS.md      # Shared lib: no framework deps, pure functions only
│       └── src/
```
How precedence works
When an agent edits a file in packages/api/src/, it reads:
1. `packages/api/AGENTS.md` (nearest — highest priority)
2. `AGENTS.md` (root — lower priority, but still included)
The root file covers shared conventions (commit format, PR guidelines, monorepo tooling). Each package file covers its specific build steps and patterns. This mirrors how human developers think: “the general rules apply everywhere, but this package has its own quirks.”
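The nearest-first lookup can be pictured as a walk up the directory tree from the edited file. This is an illustrative sketch of the resolution order only — each agent implements its own discovery logic:

```shell
# Illustrative only: collect every AGENTS.md that applies to a file,
# nearest-first, by walking up from the file's directory to the repo root.
agents_for() {
  dir=$(dirname "$1")
  while :; do
    [ -f "$dir/AGENTS.md" ] && echo "$dir/AGENTS.md"
    if [ "$dir" = "." ] || [ "$dir" = "/" ]; then break; fi
    dir=$(dirname "$dir")
  done
}

# Run from the monorepo root, with the layout above this prints:
#   packages/api/AGENTS.md
#   ./AGENTS.md
agents_for packages/api/src/index.ts
```

The agent merges these in order, letting the nearer file win on conflicts.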
Root AGENTS.md for a monorepo
```markdown
# AGENTS.md (root)

## Monorepo overview

This is a pnpm workspace monorepo with three packages: `api`, `web`,
and `shared`. Each has its own AGENTS.md with specific setup steps.

## Global commands

pnpm install                      # Install everything
pnpm turbo run build              # Build all packages
pnpm turbo run test               # Test all packages
pnpm turbo run test --filter=api  # Test one package

## Commit conventions

- Format: `type(scope): description`
- Types: feat, fix, docs, refactor, test, chore
- Scope: package name (api, web, shared)
- Example: `feat(api): add inventory search endpoint`

## PR guidelines

- Title format: `[package-name] Description`
- Always run `pnpm lint` and `pnpm test` before committing
- One logical change per PR
```
Auto-generating AGENTS.md with the coding agent
You don’t have to write AGENTS.md from scratch. GitHub provides an official prompt that instructs the Copilot coding agent to analyze your repository and generate a comprehensive instructions file. This is the recommended starting point for existing projects.
How to use it
- Navigate to github.com/copilot/agents (or click the Copilot icon in the GitHub search bar → Agents)
- Select the target repository from the dropdown
- Paste the prompt below and submit
GitHub’s official prompt (abridged for clarity — the full version is in the GitHub documentation):
```text
Your task is to "onboard" this repository to Copilot coding agent by
adding a .github/copilot-instructions.md file that contains information
describing how a coding agent seeing it for the first time can work
most efficiently.

You will do this task only one time per repository and doing a good job
can SIGNIFICANTLY improve the quality of the agent's work, so take your
time, think carefully, and search thoroughly before writing the
instructions.

<Goals>
- Reduce the likelihood of a coding agent PR getting rejected due to
  generating code that fails CI, fails validation, or has misbehavior.
- Minimize bash command and build failures.
- Allow the agent to complete its task more quickly by minimizing the
  need for exploration using grep, find, and code search tools.
</Goals>

<WhatToAdd>
- A summary of what the repository does.
- High level repository information (size, type, languages, frameworks).
- For build, test, lint, and every scripted step: the exact sequence
  of commands, validated by running them.
- Major architectural elements with relative paths.
- CI/CD checks the agent should replicate locally.
</WhatToAdd>
```
The agent will:
- Inventory the codebase (README, config files, scripts, workflows)
- Run commands to validate they work
- Document errors and workarounds
- Generate the file and open a PR for you to review
Important: The official prompt generates `copilot-instructions.md`, which is Copilot-specific. If you want a cross-agent AGENTS.md, take the generated output and adapt it. Most of the content applies to any agent — the build steps, architecture, and conventions are universal.
Adapting the output to AGENTS.md
After the coding agent generates copilot-instructions.md, you can create AGENTS.md from it:
- Copy the content to a new `AGENTS.md` file at the repo root
- Remove any Copilot-specific behavioral instructions (response tone, code generation preferences) — those belong in `copilot-instructions.md`
- Keep the operational content: build steps, test commands, architecture, conventions
- Add any agent-agnostic context the prompt might not cover (deployment, environment setup, data seeding)
This way you maintain both files: copilot-instructions.md for Copilot-specific behavior, and AGENTS.md for any agent that touches the repo.
Writing an effective AGENTS.md: best practices
1. Be imperative, not descriptive
Agents follow instructions better than they interpret descriptions.
| Weak (descriptive) | Strong (imperative) |
|---|---|
| “The project uses Vitest for testing” | “Run `pnpm test` to execute all tests. Run `pnpm vitest run -t 'test name'` for a specific test.” |
| “Biome is configured for linting” | “Always run `pnpm lint` before committing. If lint fails, run `pnpm lint --fix` to auto-fix.” |
| “We use conventional commits” | “Format every commit as `type(scope): description`. Valid types: feat, fix, docs, refactor, test, chore.” |
2. Document the gotchas
The most valuable sections in AGENTS.md are the ones that prevent mistakes a newcomer would make:
```markdown
## Common gotchas

- After changing `prisma/schema.prisma`, you MUST run `pnpm db:generate`
  before building. Otherwise TypeScript will report missing types.
- The `AUTH_SECRET` environment variable must be at least 32 characters.
  Shorter values cause a silent authentication failure.
- E2E tests assume the database is seeded. Run `pnpm db:seed` before
  `pnpm test:e2e`.
- The `shared` package must be built before `api` or `web`. Use
  `pnpm turbo run build` (it respects the dependency graph).
```
3. Validate your commands
GitHub’s official guidance emphasizes this: run every command yourself and document the exact sequence that works. Agents will attempt to execute what you document. If the commands are wrong:
- The agent wastes time debugging your documentation
- Token budget is consumed on error output instead of the actual task
- The resulting PR may fail CI because the agent couldn’t run tests locally
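One way to keep the documented commands honest is a small script you rerun whenever AGENTS.md changes. A minimal sketch — the command list here is an example; substitute the commands your own AGENTS.md documents:

```shell
# Hypothetical helper (not part of any official tooling): run each documented
# command in order and stop at the first failure, so AGENTS.md never ships a
# command that doesn't actually work.
validate_cmds() {
  for cmd in "$@"; do
    echo "==> $cmd"
    sh -c "$cmd" || { echo "FAILED: $cmd" >&2; return 1; }
  done
  echo "All documented commands pass."
}

# Example, matching the CI order documented in the AGENTS.md above:
# validate_cmds "pnpm install" "pnpm lint:check" "pnpm build" "pnpm test"
```

If a command fails here, either fix the command in AGENTS.md or add the workaround to the gotchas section.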
4. Keep it under 2 pages
AGENTS.md is read before every task. If it’s too long, it consumes a significant portion of the context window before the agent even starts working. GitHub’s official prompt explicitly constrains output to two pages. Aim for the same.
If you need more detail, use nested AGENTS.md files (for monorepos) or link to detailed docs rather than inlining everything.
5. Don’t duplicate what’s already in other files
If your CONTRIBUTING.md already documents the build steps, AGENTS.md can reference it:
```markdown
## Build instructions

Follow the build steps in [CONTRIBUTING.md](./CONTRIBUTING.md#building).
The key commands for quick reference:

- `pnpm install && pnpm build`
- `pnpm test`
```
But be careful: agents parse Markdown links and may follow them, consuming tokens to read the referenced file. A concise summary in AGENTS.md plus a link for details is the best balance.
6. Use CLAUDE.md or GEMINI.md for agent-specific overrides
GitHub also recognizes CLAUDE.md and GEMINI.md at the repository root for agent-specific instructions. Use these when a particular agent needs different guidance:
```text
my-repo/
├── AGENTS.md                      # Universal instructions (all agents)
├── CLAUDE.md                      # Claude-specific overrides (if needed)
├── .github/
│   └── copilot-instructions.md    # Copilot-specific behavior
```
7. Use AGENTS.md as a lightweight documentation index
As your project matures, you’ll accumulate knowledge that doesn’t belong in AGENTS.md itself — detailed architecture decisions, security policies, glossaries, contribution guides. The temptation is to inline everything. Resist it.
A more scalable pattern: AGENTS.md stays thin and acts as an index. Each section gives a one-line description of what the document covers and where to find it. The agent reads the referenced file only when it’s relevant to the current task — not on every session.
```markdown
# Project context

This is a Node.js API service. See the documentation index below for details.

## Quick reference

- Build: `pnpm build`
- Test: `pnpm test`
- Lint: `pnpm lint`

## Documentation index

| Document | What it covers | Read when... |
|---|---|---|
| [docs/ARCHITECTURE.md](./docs/ARCHITECTURE.md) | Service boundaries, data flow, ADRs | Designing or refactoring a feature |
| [docs/SECURITY.md](./docs/SECURITY.md) | Auth model, input validation rules, secrets handling | Touching auth, inputs, or env vars |
| [docs/GLOSSARY.md](./docs/GLOSSARY.md) | Domain terms and their precise meanings | Naming things or writing user-facing copy |
| [docs/CONTRIBUTING.md](./docs/CONTRIBUTING.md) | PR conventions, branch naming, review process | Opening a PR or reviewing one |
| [docs/DATABASE.md](./docs/DATABASE.md) | Schema conventions, migration workflow, query patterns | Modifying the data model |
```
The Read when... column is the key addition. It tells the agent when the document is relevant, so the agent can make an informed decision about whether to fetch it. Without that guidance, the agent either reads everything (expensive) or reads nothing (uninformed).
Why this works
Agents have file-reading capabilities. When you link to a document and describe what it contains, the agent can decide: “this task involves auth — I should read docs/SECURITY.md before proceeding.” The context cost is only paid for the documents that matter to the current task.
The alternative — inlining all of that content in AGENTS.md — means every task pays the full context cost regardless of relevance. A 500-line AGENTS.md consumes a meaningful chunk of the context window before the agent writes a single line of code.
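To see roughly what an instructions file costs per task, a word-to-token estimate is enough. A sketch, assuming the common ~1.3 tokens-per-word rule of thumb (real tokenization varies by model):

```shell
# Rough context-cost estimate for an instructions file. The ~1.3
# tokens-per-word ratio is an assumption, not an exact measure.
context_cost() {
  words=$(wc -w < "$1" | tr -d ' ')
  echo "$1: $words words (~$((words * 13 / 10)) tokens)"
}

if [ -f AGENTS.md ]; then context_cost AGENTS.md; fi
```

If the estimate is a meaningful fraction of your model's context window, that's a strong signal to switch to the index pattern.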
When to use this pattern
This pattern fits best when:
- Your project has documentation that already exists in `docs/` (or `CONTRIBUTING.md`, `SECURITY.md`, etc.)
- Different tasks need different subsets of that documentation
- AGENTS.md is starting to exceed the “2 page” guideline from best practice #4
If your project is small and all the relevant context fits comfortably in two pages, keep it all in AGENTS.md. The index pattern is a scaling strategy, not a default.
VS Code integration
In Ch 3, we covered how VS Code uses instruction files. Here’s a quick refresher specific to AGENTS.md:
How VS Code discovers AGENTS.md
When you use the Agent mode in VS Code Chat, Copilot automatically loads instruction files in this order:
- Personal instructions (VS Code settings or `~/.github/copilot-instructions.md`)
- Repository instructions (`copilot-instructions.md` in `.github/`)
- Path-specific instructions (`.instructions.md` files matching the current file)
- AGENTS.md (nearest in the directory tree)
You can verify which instructions are active by checking the references panel in Chat. When AGENTS.md is loaded, it appears in the list of context files.
Auto-generating via /init
VS Code offers a built-in way to generate instruction files:
- Open the Chat panel (`Shift+Cmd+I`)
- Type `/init`
- Copilot analyzes your workspace and generates `copilot-instructions.md` or suggests updating `AGENTS.md`
- Review the generated content and commit it
This is a quick alternative to the GitHub web-based coding agent approach described above.
A real-world AGENTS.md: woliveiras.com blog
Let’s look at a real example: the AGENTS.md file for the woliveiras.com blog (the site you’re reading this series on). It follows the patterns we’ve discussed:
```markdown
# Blog Post Authoring Guidelines (for AI agents)

This repo is an Astro blog. New posts live in `src/content/blog/*.mdx`
and are validated by the schema in `src/content.config.ts`.

## 1) Where and how to create a post

- Location: create a new `*.mdx` file under `src/content/blog/`.
- Filename/slug: use `kebab-case` (lowercase + hyphens).
- No duplicate H1: do not add `# Title` in the body.

## 2) Frontmatter (required + optional)

- title (string): Title Case is preferred.
- description (string): 1–2 sentences.
- pubDate (date-like string): use ISO `YYYY-MM-DD`.
- published (boolean): use `true` for new posts.
- tags (string[]): required by convention.

## 5) Markdown/MDX conventions used in this repo

- Use `##` for main sections, `###` for subsections.
- Always set the code block language.
- Use Mermaid for diagrams.
- Use tables for comparisons.

## 7) Quality checklist

- Frontmatter validates: required keys present; pubDate is ISO.
- Commands are explicit and use `sh` fences.
- Long posts include `## Conclusion`.
- Tag names are consistent.
```
Notice the structure: it tells an agent what the project is, where content goes, what format to follow, and what the quality bar is. An agent reading this file can immediately start writing a blog post that matches the site’s conventions — without exploring the codebase first.
Hands-on: create and validate an AGENTS.md
Let’s write an AGENTS.md for the agent-lab project from Ch 6 (or adapt this to your own project).
Step 1: Create the file
```sh
cd agent-lab
cat > AGENTS.md << 'EOF'
# AGENTS.md — agent-lab

## Project overview

A TypeScript calculator library used as a learning project for the
Hands-on Coding Assistants series. Includes basic arithmetic
operations with comprehensive test coverage.

## Tech stack

- Language: TypeScript 5.x (strict mode)
- Runtime: Node.js 22
- Testing: Vitest
- Package manager: npm

## Setup

npm install     # Install dependencies
npx tsc --init  # Initialize TypeScript (already done)

## Build

npx tsc  # Compile TypeScript to JavaScript

## Testing

npx vitest run            # Run all tests
npx vitest run -t "name"  # Run a specific test by name
npx vitest --watch        # Run tests in watch mode

## Project structure

src/
  calculator.ts   # Main module: add, subtract, multiply, divide
src/__tests__/    # Test files (mirror the src/ structure)
.github/
  agents/         # Custom agent definitions (see Ch 6)

## Conventions

- All functions are pure (no side effects, no global state)
- Division by zero throws an Error
- Test each function for: positive numbers, negative numbers,
  zero, large numbers, and decimal precision
- Test files use the pattern: `src/__tests__/<module>.test.ts`

## Common issues

- If TypeScript reports missing types, run `npm install` again.
- `npx tsc` must succeed before running tests.
EOF
```
Step 2: Validate the commands
Run through each command to verify they work:
```sh
# Verify install
npm install

# Verify build
npx tsc

# Verify tests (if you created them in Ch 6)
npx vitest run
```
If any command fails, update AGENTS.md with the correct command or add a note about the failure and its workaround.
Step 3: Test with an agent
- Open the Chat panel in VS Code (`Shift+Cmd+I`)
- Make sure you’re in Agent mode
- Ask: *What are the build and test commands for this project?* The agent should cite your AGENTS.md and list the exact commands you documented.
- Now ask: *Add a modulus (remainder) function to the calculator, with tests.* Observe whether the agent:
  - Follows the conventions (pure function, test file in `__tests__/`)
  - Runs `npx tsc` and `npx vitest run` to validate its changes
  - Matches the testing patterns (positive, negative, zero, edge cases)
What to verify
- The agent cited AGENTS.md as a reference in its response
- It followed the documented conventions without you having to repeat them
- It ran the documented test commands to verify its work
- The resulting code matches the patterns described in the file
If the agent didn’t follow a convention, your AGENTS.md might be too vague. Strengthen the instruction; change “tests should cover edge cases” to “test each function for: positive numbers, negative numbers, zero, large numbers, and decimal precision.”
AGENTS.md as a living document
One of the most common mistakes is treating AGENTS.md as a one-time setup. It’s not. It’s a living document that should evolve with your project:
| Event | What to update |
|---|---|
| New dependency added | Update setup commands and any dependency notes |
| Build step changed | Update the build/test commands section |
| New package in monorepo | Add a nested AGENTS.md for the new package |
| Architecture refactor | Update the project structure and architecture decisions |
| CI pipeline changed | Update the CI/CD section and validation steps |
| Common failure discovered | Add it to the gotchas section (“after doing X, you must do Y”) |
A good trigger: whenever a PR fails CI because the agent missed something, check whether AGENTS.md could have prevented it. If yes, update the file in the same PR that fixes the issue.
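A lightweight way to enforce that trigger is a CI check that flags PRs changing build configuration without touching AGENTS.md. A sketch (an assumption, not an official Copilot feature; the file patterns match the pnpm setup used in this chapter — adjust for your stack):

```shell
# Hypothetical CI guard: given the newline-separated list of changed files,
# warn when build-related files changed but AGENTS.md did not.
# Wire it up with: check_agents_fresh "$(git diff --name-only main...HEAD)"
check_agents_fresh() {
  changed="$1"
  if echo "$changed" | grep -qE '^(package\.json|pnpm-lock\.yaml|\.github/workflows/)' &&
     ! echo "$changed" | grep -qxF 'AGENTS.md'; then
    echo "Reminder: build config changed. Does AGENTS.md need an update?"
  fi
}
```

Printing a reminder (rather than failing the build) keeps the check from blocking PRs where AGENTS.md genuinely needs no change.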
Common pitfalls
| Pitfall | What happens | Fix |
|---|---|---|
| No AGENTS.md at all | Agent wastes time exploring; PRs fail CI frequently | Create one — even a minimal file with build/test commands helps |
| Outdated commands | Agent runs old commands that fail; wastes tokens debugging | Validate commands regularly; update AGENTS.md when build steps change |
| Too long (5+ pages) | Consumes too much context window; agent may ignore later sections | Keep it under 2 pages; use nested files for monorepos |
| Only descriptions, no commands | Agent knows “we use Vitest” but not the exact command to run tests | Always include the exact pnpm/npm commands, not just tool names |
| Duplicated across files | copilot-instructions.md and AGENTS.md say conflicting things | Separate concerns: behavior in copilot-instructions.md, operations in AGENTS.md |
| No gotchas section | Agent hits the same traps a new developer would | Document every “you must do X before Y” that isn’t obvious from the code |
Copilot Memory: persistent context that grows with your codebase
AGENTS.md is context you write and maintain. Copilot Memory is context the agent builds and updates automatically.
Launched in early access in December 2025, Copilot Memory allows the agent to learn from your codebase over time and store repository-specific facts that persist across sessions. Where AGENTS.md gives the agent what you consciously decide to document, Memory captures what emerges from repeated interactions: which patterns you consistently prefer, which corrections you make repeatedly, and what project-specific context surfaces across multiple sessions.
How Memory relates to AGENTS.md and custom instructions
Think of the three systems as layers with different authorship:
| File | Author | Persistence | Scope |
|---|---|---|---|
| Custom instructions (copilot-instructions.md) | You | Permanent (until you edit it) | Repository-wide, always loaded |
| AGENTS.md | You | Permanent (until you edit it) | Cross-agent, project context |
| Copilot Memory | The agent (from interactions) | Persistent, auto-updated | Repository-specific, auto-loaded |
Memory doesn’t replace either of the other two. It complements them:
- Custom instructions hold your deliberate decisions: conventions, patterns, rules.
- AGENTS.md holds your curated project knowledge: architecture, build steps, gotchas.
- Memory holds what the agent learned through use: your correction patterns, preferences that emerge in context, codebase facts that are too dynamic or granular to maintain manually.
What gets stored in Memory
Memory captures facts at the repository level. Examples of what the agent might learn and store:
- “This project uses `Result<T, E>` return types instead of throwing exceptions”
- “The developer always prefers early returns over nested conditionals”
- “Tests in this project are organized by feature, not by test type”
- “The `UserService` is the authoritative source for user-related operations; don’t add user logic elsewhere”
These facts emerge from corrections, explicit instructions, and observed patterns across sessions — not from a single configuration file.
Memory vs. context window
A common confusion: Memory is not the same as session context. Session context is temporary and lives only for the duration of one conversation. Memory persists across sessions and is selectively loaded based on relevance to the current task.
This means Memory doesn’t solve the context window management problem from Ch 5. If your session grows too large, Memory doesn’t extend the window. But when you start a new session, Memory pre-populates the agent with relevant repository knowledge, so you don’t have to re-establish context from scratch every time.
Current status and availability
As of April 2026, Copilot Memory is available in early access for Copilot Pro+ users. You can view and edit the facts the agent has stored by going to your repository settings under Copilot > Memory. If you see a fact that’s wrong or outdated, edit or delete it directly — the memory is yours, not a black box.
When to create agents and skills — and when not to
Ch 6 showed you how to build custom agents and sub-agents. This chapter has covered AGENTS.md as shared context. Before you commit to building either, there’s a prior question: does anything need to be created at all?
Copilot now decides autonomously when to spawn sub-agents. When you give the agent a complex task, it identifies which subtasks benefit from isolated context, spins up sub-agents to handle them independently, and incorporates their results — all without you needing to design a coordinator-worker structure. The main agent orchestrates automatically.
This changes the calculus from earlier years. Many agents teams used to maintain manually — a “summarizer agent,” a “search agent,” a “file explorer agent” — are now Copilot’s built-in behavior. You don’t have to create an agent for a task just because you want it to work in isolation.
The default: do nothing
If your task:
- Doesn’t repeat across the team on a regular basis
- Has no special tool restrictions
- Works fine with a clear prompt
- Doesn’t involve a specific domain procedure the agent would need to learn
…then don’t create anything. Write a good prompt. Copilot’s built-in agent mode, with access to your AGENTS.md context, handles the vast majority of development tasks without any custom configuration.
When to create a custom agent (.agent.md)
Create a custom agent when you want a persistent, named persona that the team can select from the dropdown. The agent defines who the AI is for this class of task — and it stays consistent across every invocation:
| Signal | Example |
|---|---|
| Same workflow repeated by multiple team members | A “PR Reviewer” agent everyone on the team uses |
| Tool restrictions needed for safety or governance | A planning agent with only read-only tools |
| Model routing (faster model for narrow tasks) | A documentation agent using a cheaper/faster model |
| Handoff between stages in a defined workflow | Plan → Implement → Review chain |
| You want it hidden from the dropdown but callable as a sub-agent | Internal worker agents that the coordinator invokes |
Custom agents do NOT make sense for:
- Tasks that only one person does once in a while
- Tasks the main agent already handles well with a prompt
- Adding agent files just to add structure when none is needed
When to create a skill (SKILL.md)
Create a skill when you have a portable, domain-specific procedure the agent should pull in based on task relevance — not a role, but a capability.
| Signal | Example |
|---|---|
| Multi-step procedure with scripts or external resources | A “run integration tests” skill that includes the test runner script |
| Cross-tool portability needed | The same procedure should work in Copilot, Cursor, and Claude Code |
| Agent should recognize and load it automatically | An “accessibility audit” skill that activates when editing UI components |
| Domain expertise that doesn’t fit in AGENTS.md | A “migration guide” skill specific to your data schema evolution process |
The practical difference: an agent is a persona you select. A skill is a capability the agent loads when it decides the task calls for it. Copilot reads skill descriptions and activates them contextually — you don’t have to invoke them manually.
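A skill can be sketched in the same heredoc style used in the hands-on section earlier. The `name`/`description` frontmatter fields and the `.github/skills/` location are assumptions based on common skill conventions — check your agent’s documentation for the exact schema and path it expects:

```shell
# Hypothetical skill file: frontmatter fields and directory layout are
# assumptions; verify against your agent's skill documentation.
mkdir -p .github/skills/integration-tests
cat > .github/skills/integration-tests/SKILL.md << 'EOF'
---
name: integration-tests
description: Run the integration test suite against a seeded local database.
  Use when a task touches API routes or database queries.
---

# Running integration tests

1. Start the database: `docker compose up -d db`
2. Seed it: `pnpm db:seed`
3. Run the suite: `pnpm test:integration`
EOF
```

The `description` does the heavy lifting: it’s what the agent reads when deciding whether the skill is relevant to the current task.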
The decision flow
What “Copilot auto-spawns” means for your setup
When Copilot automatically spawns a sub-agent, it creates an isolated context for the subtask and returns a summary. This is transparent — it appears as a collapsible tool call in the chat UI. You don’t need to configure it.
What this means in practice:
- A “summarizer” or “search helper” agent is rarely needed anymore
- The coordinator-worker pattern only needs explicit agents when you need tool restrictions on the workers or want those workers invocable by name across the team
- For one-off parallelism (e.g., `/fleet` in Copilot CLI), no agent files are required — the built-in behavior handles it
- Agent files add value when the configuration needs to be shared, versioned, and consistent across team members, not for private one-off tasks
The general heuristic: if you’re the only one who would benefit from this agent, don’t create it. If the team would invoke it dozens of times a week, make it an agent file checked into source control.
Key takeaways
- AGENTS.md is a README for robots. It gives every agent — Copilot, Codex, Jules, Cursor, and more — the project context they need before starting work.
- Three layers of content. Project overview (what is this?), build/test commands (how do I work?), and architecture/conventions (where is everything?).
- Be imperative and specific. “Run `pnpm test`” is better than “the project uses Vitest.” Document exact commands, validated by running them yourself.
- Use nested files in monorepos. The nearest AGENTS.md to the edited file takes precedence, so each package can have tailored instructions.
- Auto-generate, then refine. Use the coding agent or `/init` to generate a first draft, then validate commands and add gotchas from your experience.
- Treat it as living documentation. Update AGENTS.md whenever build steps, conventions, or architecture change.
- Use AGENTS.md as a documentation index. Keep AGENTS.md thin. For larger projects, replace inline content with a table linking to `docs/ARCHITECTURE.md`, `docs/SECURITY.md`, `docs/GLOSSARY.md`, etc. Include a “Read when…” column so the agent knows which documents are relevant to which tasks.
- Copilot Memory complements AGENTS.md. Use AGENTS.md for deliberate, curated project context. Let Memory capture the patterns and corrections that emerge from real use.
- Default to doing nothing. Copilot auto-spawns sub-agents and handles most tasks with a good prompt and AGENTS.md context. Create a custom agent when a persona needs to be shared and versioned across the team. Create a skill when you have a portable, domain-specific procedure. If only you benefit, use a prompt — not a file.
In Ch 8, we’ll explore repository custom instructions with copilot-instructions.md and path-specific .instructions.md files — the complement to AGENTS.md that gives Copilot-specific behavioral guidance for different parts of your codebase.