
CLAUDE.md and Agent Rules: Why Project Context Files Beat Prompt Engineering


Sebastjan Mislej · 2026-03-20 · 8 min read


The shift from prompt engineering to context engineering: how plain markdown files in your repo make AI agents dramatically more consistent, without writing a single new prompt.

Six months ago, I rewrote every AI prompt in my workflow. Not the words. The entire approach.

I deleted my carefully crafted prompt templates. Hundreds of lines of "you are a helpful assistant who..." instructions. Gone. In their place, I created a set of markdown files that live inside my project repositories. Files named AGENTS.md, SOUL.md, TOOLS.md. Plain text. Version controlled. Readable by humans and AI alike.

The results were immediate. My AI agents got better at their jobs overnight. Not because the model improved. Because the context did.

This is the shift from prompt engineering to context engineering. If you build with AI tools, you need to understand it.

Prompt Engineering Hit a Wall

I spent months tuning prompts. Adding examples. Tweaking system messages. Getting one task to work perfectly, only to watch the same prompt fail on a slightly different input.

Sound familiar?

The problem with prompt engineering is scope. A prompt is a single instruction. It lacks memory. It lacks structure. It lacks awareness of your project, your team, your goals.

You can write the most elegant prompt in the world. But if the AI doesn't know what your codebase looks like, what conventions you follow, or what mistakes to avoid, that prompt will produce generic output.

Anthropic published a piece on this exact topic. They call it "context engineering" and define it as the set of strategies for curating and maintaining the optimal set of tokens during LLM inference. That sounds academic. Here's what it means in practice: stop obsessing over the perfect sentence and start building the right information environment around your AI.

What Project Context Files Actually Look Like

A project context file is a markdown document that lives in your repo. It tells the AI agent who it is, what it works on, and how to behave. Not through a prompt. Through persistent, structured context.

Here's a real example from my setup. I run multiple AI agents that handle different parts of my content pipeline. Each agent has its own workspace with these files:

AGENTS.md defines the workflow:

```markdown
## Every Session
1. Read SOUL.md
2. Check for assigned work
3. Execute using your writer skill
4. Report what you did
```

SOUL.md defines personality and constraints:

```markdown
You are Pino, the Writer.
You write structured, SEO-optimized blog content.
No shortcuts, no filler, no AI slop.
Every word earns its place.
```

TOOLS.md maps the environment. Camera names, SSH hosts, API endpoints. Everything specific to the deployment.
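For illustration, a TOOLS.md for a deployment like this might look as follows. Every name, host, and URL below is a placeholder, not from a real setup:

```markdown
## Hosts
- `blog-prod` — SSH target for deploys

## APIs
- Content DB: `https://api.example.com/briefs`

## Cameras
- `front-door` — RTSP stream used by the monitoring agent
```

The point is specificity: concrete identifiers the agent can use verbatim, not general advice.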

These files aren't prompts. They're project documentation that happens to be machine-readable.

Why This Works Better Than Prompts

Context files are version controlled. When I change how an agent behaves, I see the diff. I can revert it. I can review it in a pull request. Try doing that with a prompt you typed into a chat window.

Context files compose. AGENTS.md handles workflow. SOUL.md handles personality. TOOLS.md handles environment. Each file has a single job. You can swap one without touching the others. A prompt is a monolith. Context files are modular.

Context files persist across sessions. Every time an AI agent starts a new session, it reads these files. It doesn't lose memory. It doesn't need you to paste the same prompt again. The context is always there, in the repo, ready to go.
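A minimal sketch of what "reads these files" can mean in practice: a hypothetical harness that rebuilds the agent's preamble from the repo at every session start. The function name and the priority order are assumptions for illustration; the actual model call is elided.

```python
from pathlib import Path

# File names from this article; read in a fixed priority order.
CONTEXT_FILES = ["AGENTS.md", "SOUL.md", "TOOLS.md"]

def load_context(repo_root: str) -> str:
    """Concatenate whichever context files exist into one preamble string."""
    parts = []
    for name in CONTEXT_FILES:
        path = Path(repo_root) / name
        if path.exists():
            # Label each section so the model knows which file it came from.
            parts.append(f"# {name}\n{path.read_text()}")
    return "\n\n".join(parts)
```

Because the preamble is rebuilt from disk on every session, an edit to any file takes effect immediately, with no prompt re-pasting.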

| Prompt Engineering | Context Engineering |
| --- | --- |
| Single instruction per task | Structured files for every concern |
| Disappears when the session ends | Persists in the repo forever |
| No version history | Git-tracked, diffable, reversible |
| Monolithic wall of text | Modular files, single responsibility |
| You micromanage every session | AI reads context, runs independently |

The Real Shift: From Instructions to Environment

Prompt engineering asks: "What should I tell the AI to do?"

Context engineering asks: "What information does the AI need to make good decisions on its own?"

Key Insight

With prompts, you're micromanaging. With context files, you're building an environment where the AI can be competent without constant hand-holding.

I have agents that read a brief from a database, write a full article, run quality checks, and submit for review. All without a single runtime prompt from me. The context files tell them everything they need.

This works because modern AI models are good at following structured information. They read markdown. They understand hierarchy. They parse YAML frontmatter. Give them well-organized context and they perform well. Give them a wall of prompt text and they get confused.
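As a concrete example of structured, machine-readable input, a brief with YAML frontmatter might look like this. The fields are illustrative, not a required schema:

```markdown
---
title: "Context files vs prompts"
status: assigned
max_words: 1200
---

Brief: explain why project context files beat one-off prompts.
```

The agent can parse the frontmatter for constraints and treat the body as the task description.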

Pitfalls I Hit Along the Way

This approach isn't magic. Here are the mistakes I made so you can skip them.

Too much context kills performance. Anthropic calls this "context rot." The more tokens you shove into the window, the less attention the model gives each one. I learned to keep context files lean. If a section isn't directly useful for the current task, it doesn't belong.

Vague context is worse than no context. Early versions of my SOUL.md files had lines like "be helpful and professional." That's noise. The AI already tries to be helpful. Specific constraints work. "No em dashes. No rule of three. Max 18 words per sentence." Those change behavior.

Forgetting to update context files. These files are code. They rot if you ignore them. When I added a new database table, I forgot to update TOOLS.md. The agent spent 10 minutes trying to query a table that didn't exist. Context files need maintenance, just like any documentation.

Not testing context in isolation. When an agent misbehaves, check the context first. Read what the AI actually sees at session start. Nine times out of ten, the problem is in the files, not the model.

How to Start With Context Files

You don't need a complex setup. Start with one file.

Create a CLAUDE.md (or equivalent) in your project root. Add three sections:

Starting template (keep it under 50 lines):

  1. Project overview. Two sentences. What is this repo? What does it do?
  2. Conventions. How do you name files? What framework do you use? What's forbidden?
  3. Common tasks. The things you ask the AI to do most often. Write the steps.
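Put together, those three sections might look like this in a starter CLAUDE.md. The contents are illustrative; substitute your own stack and conventions:

```markdown
# CLAUDE.md

## Project overview
Static blog built with Astro. Posts live in `src/content/posts/`.

## Conventions
- Post filenames: kebab-case, e.g. `context-engineering.md`
- TypeScript only; no `any`
- Never edit generated files in `dist/`

## Common tasks
- New post: create the markdown file, add frontmatter, run `npm run check`
```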

Run your AI tool and see what changes.

Then iterate. If the AI makes a mistake, add a rule to the context file. If it asks you the same question twice, put the answer in the file. Over weeks, your context file becomes a living document that encodes your project's institutional knowledge.

Context Engineering Is the Skill That Matters Now

Prompt engineering was the right skill for 2023. You had limited context windows, simple chat interfaces, and models that needed careful coaxing.

In 2026, context windows are massive. AI tools read your entire repo. Agents run multi-step workflows on their own. The bottleneck isn't "how do I phrase this request." It's "what information should surround this request."

The developers who build the best AI-powered workflows won't be the ones writing the cleverest prompts. They'll be the ones structuring the best project context. Markdown files in a Git repo. Simple, boring, and wildly effective.

I proved it to myself by rebuilding my entire content pipeline around context files. The AI got better. The output got more consistent. And I stopped writing prompts altogether.

Your move.

Building with AI agents?

I write about context engineering, AI workflows, and building in public. New posts a few times a week.

Follow along at sebastjanm.com

FAQ

What's the difference between CLAUDE.md and a system prompt?

A system prompt is a single block of text sent at the start of a conversation. CLAUDE.md is a file in your project that the AI reads at session start. The key difference: CLAUDE.md is version controlled, lives in your repo, and can be split into multiple files for modularity. A system prompt disappears when the session ends.

Do context files work with tools other than Claude Code?

Yes. Cursor uses .cursorrules files. GitHub Copilot reads project context from your repo. The concept is tool-agnostic. Most AI coding tools now support some form of project-level context files. The syntax varies but the principle is the same: give the AI structured information about your project.

How long should a context file be?

Short. Aim for under 100 lines per file. If your AGENTS.md is 500 lines, the AI will skim it like you skim a terms-of-service page. Keep each file focused on one concern. Split into multiple files if needed. Quality over quantity.

Can context files replace fine-tuning?

For most developer workflows, yes. Fine-tuning changes model weights. Context files change what the model sees. For project-specific behavior, context files are faster to create, easier to update, and require zero ML expertise. Fine-tuning still has its place for specialized domains, but context files cover most customization needs.

How do I know if my context file is working?

Start a fresh session and give the AI a task without any additional instruction. If it follows your conventions, uses the right tools, and avoids your forbidden patterns, your context file is working. If it produces generic output, your context file needs work.