Agent Prompts Are Different
Prompting a chatbot is about getting a good text response. Prompting an agent is about shaping behavior across dozens of tool calls and decisions. Agent prompts define personality, capabilities, constraints, and workflows — they're closer to programming than to writing.
The three critical prompt surfaces for agents are: system prompts (define the agent's identity and rules), tool descriptions (guide how tools are used), and project context (CLAUDE.md, .cursorrules, etc.).
Writing Effective System Prompts
System prompts for agents should cover:
1. Role and Identity — Who is this agent? 'You are a senior security engineer reviewing code for vulnerabilities.'
2. Capabilities and Boundaries — What can and can't it do? 'You have access to file read/write and shell execution. You cannot make network requests or access secrets.'
3. Behavioral Rules — How should it operate? 'Always read a file before editing it. Never commit without running tests first. Ask for clarification rather than guessing.'
4. Output Format — What should responses look like? 'Provide code changes as unified diffs. Summarize changes in bullet points.'
5. Error Handling — What to do when things go wrong? 'If a test fails, investigate the root cause. Don't retry the same approach more than twice.'
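The five sections above can be sketched as a small prompt-assembly helper. This is a minimal illustration, not a required schema; the section names, wording, and `build_system_prompt` helper are all hypothetical.

```python
# Hypothetical sketch: assembling an agent system prompt from the five
# sections above. Section names and wording are illustrative.
SECTIONS = {
    "Role": "You are a senior security engineer reviewing code for vulnerabilities.",
    "Capabilities": (
        "You have access to file read/write and shell execution. "
        "You cannot make network requests or access secrets."
    ),
    "Rules": (
        "Always read a file before editing it. "
        "Never commit without running tests first. "
        "Ask for clarification rather than guessing."
    ),
    "Output format": (
        "Provide code changes as unified diffs. "
        "Summarize changes in bullet points."
    ),
    "Error handling": (
        "If a test fails, investigate the root cause. "
        "Don't retry the same approach more than twice."
    ),
}

def build_system_prompt(sections: dict[str, str]) -> str:
    """Join named sections into one system prompt string."""
    return "\n\n".join(f"# {name}\n{text}" for name, text in sections.items())

system_prompt = build_system_prompt(SECTIONS)
```

Keeping the sections as named data rather than one string makes it easy to swap individual sections per task later.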
Tool Descriptions That Work
The LLM reads your tool descriptions to decide when and how to use each tool. Poor descriptions lead to wrong tool choices and bad parameters.
Bad: name: 'query', description: 'Run a query'
Good: name: 'search_codebase', description: 'Search the codebase for files matching a pattern or content matching a regex. Use this when you need to find where a function is defined, locate usage of an API, or discover relevant files. Returns file paths and matching line numbers.'
Key principles:
• Describe WHEN to use the tool, not just what it does
• Include examples of good and bad inputs
• Specify what the tool returns
• Mention limitations and edge cases
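Putting these principles together, a full tool definition might look like the sketch below, written in the JSON-schema style most LLM APIs accept. The field values (match cap, glob filter, example inputs) are illustrative assumptions, not a real tool's contract.

```python
# Hypothetical tool definition applying the principles above: says WHEN to
# use it, gives good/bad input examples, states the return value and limits.
search_codebase_tool = {
    "name": "search_codebase",
    "description": (
        "Search the codebase for files matching a pattern or content "
        "matching a regex. Use this when you need to find where a function "
        "is defined, locate usage of an API, or discover relevant files. "
        "Returns file paths and matching line numbers. Limitations: skips "
        "binary files; results are capped at 100 matches (assumed limits)."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "pattern": {
                "type": "string",
                "description": (
                    "Regex to match file contents. "
                    "Good: 'def authenticate\\('. Bad: 'auth' (too broad)."
                ),
            },
            "glob": {
                "type": "string",
                "description": "Optional file filter, e.g. 'src/**/*.ts'.",
            },
        },
        "required": ["pattern"],
    },
}
```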
CLAUDE.md and Project Context
Project context files (CLAUDE.md for Claude Code, .cursorrules for Cursor) are one of the most powerful prompt surfaces. They let you encode project-specific knowledge that persists across sessions.
What to include:
• Tech stack and key dependencies
• Project structure overview
• Code conventions (naming, patterns, style)
• Build and test commands
• Common pitfalls and how to avoid them
• Architecture decisions and their rationale
What to avoid:
• Copying entire documentation (too much context dilutes quality)
• Contradicting the agent's built-in behavior
• Overly rigid rules that prevent the agent from adapting
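A CLAUDE.md following these guidelines might look like the sketch below. The project name, stack, commands, and rules are all invented for illustration; the point is the shape, which is short, specific, and actionable.

```markdown
# Project: Acme API (hypothetical example)

## Tech stack
- TypeScript, Node 20, Fastify, PostgreSQL via Prisma

## Commands
- Build: `npm run build`
- Test: `npm test` (run before every commit)

## Conventions
- Named exports only; no default exports
- Errors: throw `AppError` subclasses, never raw strings

## Pitfalls
- `prisma migrate dev` resets the local DB; use `migrate deploy` against shared envs
```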
Common Prompting Mistakes
Mistakes that reduce agent effectiveness:
• Being too vague — 'Fix the code' vs 'Fix the null pointer exception in auth.ts:42 by checking if user is undefined before accessing user.email.'
• Being too prescriptive — Specifying every single step prevents the agent from using its judgment and adapting.
• Ignoring tool descriptions — Spending hours on the system prompt but leaving tool descriptions as one-liners.
• No examples — Agents work dramatically better with 1-2 examples of desired behavior.
• Conflicting instructions — 'Be concise' alongside 'Explain everything in detail' leaves the agent guessing which rule wins.
Advanced: Dynamic Prompting
Production agent systems often use dynamic prompts that change based on context:
• Task-specific instructions — Load different system prompts based on the task type (code review vs feature development vs debugging).
• User-specific preferences — Adjust tone, verbosity, and tool preferences based on user history.
• Context-aware guardrails — Tighten restrictions in production environments, loosen in development.
• Progressive disclosure — Start with simple instructions and add complexity as the agent demonstrates competence.
Frameworks like LangChain and CrewAI support dynamic prompt templates. For Claude Code, you can structure CLAUDE.md with sections scoped to particular task types — the file itself is static, but the agent applies the relevant section when the task matches.
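The first and third bullets above can be sketched as a simple prompt dispatcher. The task types, prompt text, and `production` flag are illustrative assumptions, not any framework's API.

```python
# Minimal sketch of dynamic prompting: task-specific instructions plus a
# context-aware guardrail. All names and wording here are hypothetical.
BASE_PROMPT = "You are a coding agent for this repository."

TASK_PROMPTS = {
    "code_review": "Focus on correctness, security, and style. Do not modify files.",
    "feature": "Implement the requested feature. Write tests before committing.",
    "debugging": "Reproduce the failure first, then fix the root cause.",
}

def build_prompt(task_type: str, production: bool = False) -> str:
    """Compose base prompt + task instructions + environment guardrails."""
    parts = [BASE_PROMPT, TASK_PROMPTS.get(task_type, "")]
    if production:  # tighten restrictions in production environments
        parts.append("Never run destructive shell commands without confirmation.")
    return "\n\n".join(p for p in parts if p)
```

A real system would likely load these fragments from files or a database, but the dispatch logic stays this simple.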
Explore the Tools Mentioned
Browse our curated directory of AI agents, frameworks, and MCP servers — with live GitHub signals.