# Technical Spec Writing

Write technical specifications that bridge product requirements and implementation — with problem statements, proposed solutions, data model changes, API changes, migration plans, rollback strategies, and open questions.

## Before you start

Gather the following from the user. If anything is missing, ask before writing:

1. **What are you building?** — Feature name and one-sentence summary. Link to the PRD if one exists.
2. **Why now?** — Business driver, user pain, or technical urgency behind this work.
3. **Scope** — New feature, refactor, migration, or extension of an existing system?
4. **Constraints** — Tech stack, timeline, team size, backward-compatibility requirements.
5. **Audience** — Who reviews this spec? (e.g., engineering team, staff engineer, CTO, security)

If the user says "write a spec for X" with no context, push back: "What problem does this solve, and what does the system look like today?"

## Technical spec template

### Title

`[Feature/Change Name] — Technical Specification`

### 1. Problem Statement (3-5 sentences)

State the problem from the user's or system's perspective. Reference the PRD or incident that motivates the work. Describe the current state and why it is insufficient.

### 2. Context

Summarize relevant prior art: existing systems, past decisions, constraints inherited from other teams. A reviewer who has no background should understand the landscape after reading this section.

### 3. Proposed Solution

Describe the approach at a level of detail sufficient for another engineer to implement it. Include component interactions, sequence of operations, and key design decisions. Call out alternatives you considered and why you rejected them.

### 4. Data Model Changes

Document every schema change: new tables, columns, indexes, or type modifications. Use DDL or a clear table format. State whether changes are additive (safe to deploy independently) or breaking.

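To make the additive/breaking distinction concrete, here is a minimal sketch (the table, column, and index names are invented for illustration, using SQLite only because it needs no setup) of a change that is safe to deploy independently of application code:

```python
import sqlite3

# Hypothetical schema: a users table that predates this spec.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")

# Additive changes: existing readers and writers are unaffected, so the
# schema can ship before the application code that uses it.
conn.execute(
    "ALTER TABLE users ADD COLUMN email_verified INTEGER NOT NULL DEFAULT 0"
)
conn.execute("CREATE INDEX idx_users_email_verified ON users (email_verified)")

# Old-style inserts still work; the new column takes its default.
conn.execute("INSERT INTO users (email) VALUES ('b@example.com')")
```

A breaking change (renaming or dropping a column, tightening a type) would instead need coordinated deploy ordering, which belongs in the migration plan.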
### 5. API Changes

For each new or modified endpoint, document the method, path, request/response shapes, error codes, and auth requirements. If there are no API changes, state "No API changes" explicitly so reviewers know it was considered.

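One lightweight way to enumerate an endpoint alongside its error cases is a structured stub like the following; the endpoint, scope, and status codes here are purely hypothetical:

```python
# Hypothetical endpoint specification; every name below is illustrative.
EXPORT_ENDPOINT = {
    "method": "POST",
    "path": "/v1/exports",
    "auth": "Bearer token with export:write scope",
    "request": {
        "format": "string, one of 'csv' | 'json'",
        "filters": "object, optional",
    },
    "responses": {
        201: "Export job created; body contains the job id",
        400: "Unknown format or malformed filters",
        401: "Missing or invalid token",
        403: "Token lacks export:write scope",
        429: "Rate limited; Retry-After header set",
    },
}

# A spec that lists only 2xx responses is incomplete by construction.
error_codes = [code for code in EXPORT_ENDPOINT["responses"] if code >= 400]
```

Whatever format you choose, the point is that reviewers can see the error cases next to the happy path instead of inferring them.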
### 6. Migration Plan

Step-by-step plan for moving from current state to target state. Include: ordering of deploys, data backfill strategy, estimated duration, and whether the migration requires downtime.

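A common backfill pattern the migration plan can reference is batched updates with a checkpoint per batch, so the migration never holds long locks and can resume if interrupted. This sketch (schema, column, and batch size are invented) shows the shape:

```python
import sqlite3

# Hypothetical setup: 250 existing rows that need a new column populated.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total_cents INTEGER)")
conn.executemany(
    "INSERT INTO orders (total_cents) VALUES (?)", [(n,) for n in range(250)]
)
conn.execute("ALTER TABLE orders ADD COLUMN total_display TEXT")

BATCH = 100
while True:
    rows = conn.execute(
        "SELECT id, total_cents FROM orders "
        "WHERE total_display IS NULL ORDER BY id LIMIT ?",
        (BATCH,),
    ).fetchall()
    if not rows:
        break  # backfill complete
    conn.executemany(
        "UPDATE orders SET total_display = ? WHERE id = ?",
        [(f"${cents / 100:.2f}", order_id) for order_id, cents in rows],
    )
    conn.commit()  # checkpoint per batch; safe to resume from here
```

The spec should state the batch size, expected duration, and what the `IS NULL` predicate costs on production data volumes.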
### 7. Rollback Plan

What happens if this goes wrong after deploy? Cover: feature flag strategy, database rollback steps (are migrations reversible?), and the decision criteria for triggering a rollback.

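The most concrete rollback lever is a flag-gated code path that can be disabled without a deploy. In this sketch the flag name and both exporters are hypothetical stand-ins:

```python
import json
import os

def legacy_exporter(data: list) -> str:
    # Current behavior: the rollback target must keep working untouched.
    return "\n".join(str(row) for row in data)

def new_exporter(data: list) -> str:
    # New behavior introduced by this spec, shipped dark behind the flag.
    return json.dumps(data)

def export_report(data: list) -> str:
    # Flipping the env var (or a flag service equivalent) is the rollback:
    # no deploy, no data migration, instant revert to legacy behavior.
    if os.environ.get("USE_NEW_EXPORTER", "false") == "true":
        return new_exporter(data)
    return legacy_exporter(data)
```

Flags only roll back code paths; if the feature also migrated data, the plan must separately state whether that migration is reversible.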
### 8. Monitoring & Alerting

Define what signals indicate success or failure post-launch. Specify: key metrics to track, alert thresholds, dashboards to create or update, and who gets paged.

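Alert definitions can be captured directly in the spec rather than deferred to "we'll add monitoring later"; the metric names, thresholds, and rotation below are placeholders the author would replace with real SLOs:

```python
# Hypothetical alert rules: every name and number is a placeholder.
ALERTS = {
    "export_error_rate": {
        "threshold": 0.01,      # alert above 1% errors
        "window_min": 5,        # evaluated over a 5-minute window
        "page": "exports-oncall",
    },
    "export_p95_latency_ms": {
        "threshold": 2000,      # alert above 2s p95
        "window_min": 10,
        "page": "exports-oncall",
    },
}

def should_page(metric: str, value: float) -> bool:
    # Unknown metrics never page; known metrics page above threshold.
    rule = ALERTS.get(metric)
    return rule is not None and value > rule["threshold"]
```

Writing the thresholds down forces the team to agree on the rollback decision criteria before launch, not during an incident.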
### 9. Security Considerations

Address: authentication/authorization changes, new data sensitivity (PII, secrets), input validation, and attack surface changes. If none, state "No security impact" with justification.

### 10. Testing Strategy

Specify: unit tests for new logic, integration tests for cross-service flows, load/stress tests if performance-sensitive, and manual QA scenarios that cannot be automated.

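As a sketch of what "unit tests for new logic" should cover (the validator and its formats are invented for illustration), error cases deserve tests as much as happy paths:

```python
# Hypothetical unit under test: a validator the spec's new code would add.
def validate_format(fmt: str) -> str:
    allowed = {"csv", "json"}
    if fmt not in allowed:
        raise ValueError(f"unsupported format: {fmt!r}")
    return fmt

def test_accepts_known_formats():
    assert validate_format("csv") == "csv"
    assert validate_format("json") == "json"

def test_rejects_unknown_format():
    # The spec's error behavior is a contract; pin it down in a test.
    try:
        validate_format("xml")
    except ValueError as exc:
        assert "xml" in str(exc)
    else:
        raise AssertionError("expected ValueError for unknown format")
```

Listing the scenarios in the spec lets reviewers spot missing coverage before any code exists.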
### 11. Open Questions

List unresolved decisions. For each, note who owns the answer and whether it blocks implementation. Do not bury open questions inside other sections.

### 12. Out of Scope

Explicitly list related work you are NOT doing. This prevents scope creep during review and sets expectations for follow-up specs.

## Quality checklist

Before delivering the spec, verify:

- [ ] Problem statement explains WHY, not just WHAT
- [ ] Proposed solution includes rejected alternatives with reasoning
- [ ] Data model changes specify whether they are additive or breaking
- [ ] API changes include error cases, not just happy paths
- [ ] Migration plan has a step-by-step ordering, not just a goal state
- [ ] Rollback plan is concrete — not "we will roll back if needed"
- [ ] Open questions list an owner and blocking status for each item
- [ ] Out of scope is explicit — reviewers should not wonder "what about X?"
- [ ] A teammate unfamiliar with the project can understand the spec without a walkthrough

## Common mistakes to avoid

- **Solution without problem**. Jumping straight to implementation without establishing why the change matters. Reviewers cannot evaluate a solution whose motivation they do not understand.
- **Vague rollback plans**. "Roll back the deploy" is not a plan when you have run a data migration. Specify whether migrations are reversible and what data loss, if any, a rollback entails.
- **Missing the current state**. Describing only the target state. Reviewers need to understand what exists today to evaluate the delta.
- **Conflating spec with design doc**. A tech spec answers "what are we building and how do we ship it safely." A system design doc answers "what is the architecture." If you need both, write both and cross-reference.
- **Burying open questions**. Scattering unresolved decisions across sections instead of surfacing them in a dedicated section. Reviewers miss them, and the spec ships with silent assumptions.