Create Agent Markdown
Create high-quality AI agent definition files that follow the Agent Skills specification. Produces behavioral prompts with real mental models, decision frameworks, and domain expertise — not generic filler.
Tags: agents, agent-skills, behavioral-prompts, system-prompts, personas
$ npx skills add The-AI-Directory-Company/(…) --skill create-agent-markdown

section-guide.md
# Section-by-Section Guide

Detailed guidance for writing each section of an agent definition file.

## Section ordering and purpose

The section order is intentional. It follows how experts actually think about their work:

1. **Identity** (opening paragraph) — Who am I?
2. **Perspective** — How do I see the world?
3. **Method** — How do I work?
4. **Communication** — How do I interact with others?
5. **Decision-making** — How do I handle tradeoffs?
6. **Boundaries** — What do I refuse to do?
7. **Scenarios** — How do I handle specific situations?

This mirrors Anthropic's "right altitude" principle: start with the high-level mental model, then add specificity layer by layer. The agent loads identity first, then progressively applies more detailed guidance.
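
Assembled from the patterns this guide defines for each section, a full file skeleton looks like this (all bracketed placeholders are to be filled in, not copied):

```
You are a [role] with [experience qualifier]. You [core perspective].

## Your perspective

- You [believe/think/prioritize] X. [Why this matters / what it implies.]

## How you [verb]

1. **[Step]** — [What you do and why.]

## How you communicate

**With [audience]**: [principle]. [Concrete example or anti-example.]

## Decision-making heuristics

When [situation], [resolution]. [Specific example.]

## What you refuse to do

You don't [action]. [Why this is outside your scope.]

## How you handle common requests

**"[Request in quotes]"** — [What the agent does.]
```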

## 1. Opening paragraph

**Length:** 1-3 sentences
**Purpose:** Establish identity, experience level, and core worldview in the most token-efficient way possible

**Structure:** `You are a [role] with [experience qualifier]. You [core perspective].`

The experience qualifier matters because it sets the tone for everything that follows. "A junior developer" and "a staff engineer with 15 years across multiple codebases" produce dramatically different agent behaviors.

**Effective openers by domain:**

| Domain | Example |
|--------|---------|
| Engineering | "You are a senior security auditor who has reviewed hundreds of production systems. You think like an attacker first, then a defender." |
| Product | "You are a VP of Product with 15+ years at high-growth startups. You obsess over user problems and business impact, not feature checklists." |
| Leadership | "You are an Engineering Manager who was a senior IC before moving to management. You understand code deeply but your job is now people and systems, not pull requests." |
| Data | "You are a data engineer who builds pipelines that other teams depend on. You think in terms of data contracts, not just schemas." |

## 2. "Your perspective" section

**Length:** 3-5 bullet points
**Purpose:** Install the agent's mental models — the lenses through which it interprets every request

**Each bullet must be opinionated and falsifiable.** If no one could disagree with it, it's not a perspective — it's a platitude.

**Pattern:** `You [believe/think/prioritize] X. [Why this matters / what it implies].`

**Testing each bullet:** Ask "Could a competent professional in this role hold the opposite view?" If yes, it's a real perspective. If no, it's filler.

Example test:
- "You think in dependencies, not timelines." → A PM could believe timelines should drive dependencies. **Real perspective.**
- "You care about code quality." → No engineer would disagree. **Filler.**

## 3. "How you [verb]" section

**Length:** 5-8 numbered steps
**Purpose:** Reveal the agent's systematic approach to its core activity

The verb in the heading should be the agent's PRIMARY activity:
- Code reviewer → "How you review"
- Architect → "How you design"
- PM → "How you break down work"
- Security auditor → "How you audit"

**Each step should explain both WHAT and WHY:**

Good: "1. **Understand intent** — Read the PR title and description first. What is this change trying to accomplish? If the intent is unclear, ask before reviewing details."

Bad: "1. Read the code carefully."

The good version reveals the reasoning (understanding intent before details) and includes a decision point (ask if unclear). The bad version is just a generic instruction.
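
A hypothetical "How you review" section built at this level of detail (step 1 is the good example above; steps 2-4 are invented for illustration):

```
## How you review

1. **Understand intent** — Read the PR title and description first. What is this change trying to accomplish? If the intent is unclear, ask before reviewing details.
2. **Check the tests** — Tests show what the author believes the change does. Missing tests for changed behavior are a finding, not a nitpick.
3. **Trace the riskiest path** — Find the code path most likely to break in production and read it line by line. Skim the rest.
4. **Write actionable comments** — Every comment names the problem, its severity, and a concrete fix or question.
```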

## 4. "How you communicate" section

**Length:** 3-5 audience-specific patterns
**Purpose:** Define audience-aware communication style

**Pattern:** `**With [audience]**: [principle]. [concrete example or anti-example].`

The key insight from Anthropic's research: LLMs respond better to audience-based framing than abstract style guides. Saying "With executives: lead with the 'so what'" is more effective than "Be concise when appropriate."

**Common audiences by domain:**

| Domain | Key audiences |
|--------|--------------|
| Engineering | Other engineers, product, design, leadership |
| Product | Executives, engineering, design, customers |
| Security | Engineering, compliance, management, incident response |
| Data | Business stakeholders, engineering, analysts |
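
A hypothetical sketch of this section for an engineering-domain agent (the executives line restates the framing above; the other two are invented):

```
## How you communicate

**With executives**: Lead with the "so what." One sentence of impact before any technical detail.
**With other engineers**: Be precise and point at code. "This allocates on every request" beats "this might be slow."
**With product**: Translate constraints into options with costs, never a bare "no." Offer the cheaper version of the feature.
```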

## 5. "Decision-making heuristics" section

**Length:** 4-6 heuristics
**Purpose:** Give the agent concrete rules for handling tradeoffs

**Structure:** `When [situation], [resolution]. [Specific example].`

Each heuristic should:
1. Name the tradeoff explicitly
2. Take a side
3. Give a concrete example

**Example:**
"When two technical approaches are debated, ask: 'Which one is easier to change later?' Pick that one unless there's a compelling performance or cost reason not to."

This is effective because it names the tradeoff (approach A vs B), provides a decision rule (reversibility), and carves out an exception (performance/cost).
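
Two more hypothetical heuristics in the same shape, each naming a tradeoff, taking a side, and giving an example:

```
- When speed and review quality conflict, ship the smaller change. Example: split a 900-line PR into three reviewable pieces rather than merging it half-read.
- When a codebase convention and a local optimization conflict, follow the convention. Example: keep the repo's error-handling pattern even when a shortcut would save ten lines.
```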

## 6. "What you refuse to do" section

**Length:** 3-5 refusals
**Purpose:** Define scope boundaries and prevent role confusion

**Pattern:** `You don't [action]. [Why this is outside your scope].`

Each refusal should:
1. Be something the agent COULD be asked to do (not something absurd)
2. Explain WHY it's outside scope (not just "it's not your job")
3. Often point to WHO should do it instead

**Example:**
"You don't write production code. You write specs, pseudocode, and architecture diagrams." (Redirects to what the agent DOES do instead.)

## 7. "How you handle common requests" section

**Length:** 3-4 scenarios
**Purpose:** Show the agent's approach to concrete situations

**Format:**
```
**"[Request in quotes]"** — [What the agent does: what it asks for first, how it structures its response, what it produces]
```

These should be the requests that users most commonly make of this role. Each scenario should demonstrate a unique aspect of the agent's approach.

**Selection criteria for scenarios:**
1. Most frequent request type
2. Request that reveals the agent's unique value
3. Request where the agent's approach differs from a naive solution
4. Edge case that shows how the agent handles ambiguity
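
A hypothetical scenario entry in that format, for a code-reviewer agent:

```
**"Can you review this PR?"** — Ask for the PR's intent if the description doesn't state it. Review tests before implementation. Group findings by severity, each with a concrete fix or a pointed question.
```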

## Optional sections

### Security/quality checklists

For agents that review or audit, include a concrete checklist they run through. This is highly effective because it's specific, actionable, and turns the agent into a systematic validator.
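
A hypothetical checklist fragment for a security-reviewer agent (all items invented for illustration):

```
## Your review checklist

- [ ] All user input validated at the trust boundary, not just in the UI
- [ ] Secrets absent from code, config, and logs
- [ ] Authorization checked on every endpoint, not only at login
- [ ] Errors fail closed and leak no internal detail
```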

### Categorization/severity frameworks

For agents that classify or prioritize, include the specific framework. (Example: the code-reviewer agent's severity levels: Critical, Warning, Suggestion, Note.)
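
Written into an agent file, that severity framework might look like this (the four level names come from the example; the descriptions are illustrative):

```
## Severity levels

- **Critical** — Block the merge: correctness, security, or data-loss issues.
- **Warning** — Should be fixed: likely bugs or risky patterns.
- **Suggestion** — Worth considering: a clearer or simpler alternative exists.
- **Note** — No action needed: context for future readers.
```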

### Situational protocols

For agents that handle different contexts differently, add a section like "How you handle common situations" with bolded scenario headers and specific approaches for each.