
Test Plan Writing

Write risk-based test plans with coverage matrices, test level decisions, pass/fail criteria, and environment requirements — deciding what to test, at which level, and why.


# Test Plan Writing

## Before you start

Gather the following from the user. If anything is missing, ask before proceeding:

1. **What is being tested?** (Feature, service, migration, integration, or full release)
2. **What are the requirements?** (Link to PRD, tickets, or acceptance criteria)
3. **What is the risk profile?** (User-facing? Payment flow? Data migration? First launch or incremental?)
4. **What is the tech stack?** (Languages, frameworks, third-party integrations, data stores)
5. **What testing infrastructure exists?** (CI pipeline, staging environments, test data factories)
6. **What is the timeline?** (Release date, testing window, hard deadlines)

## Test plan template

### 1. Scope

Define what is in scope and what is explicitly out of scope. Every out-of-scope item should state why it is excluded (separate ticket, future phase, code unchanged).

### 2. Risk Analysis

Rank components by risk. This drives where you invest testing effort.

| Component | Likelihood | Impact | Risk Level | Testing Investment |
|-----------|-----------|--------|------------|-------------------|
| Payment processing | Medium | Critical | **High** | Extensive |
| Cart calculations | Low | High | **Medium** | Moderate |
| Confirmation page | Low | Low | **Low** | Minimal |

Rules:
- Anything involving money, PII, or data loss is automatically High impact
- New third-party integrations are Medium likelihood minimum
- Components with no existing test coverage get a likelihood bump

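The ranking rules above can be sketched as a small scoring helper. This is a minimal illustration, not a standard formula: the four-step scale, the score cutoffs, and the flag names are assumptions chosen to reproduce the example table.

```python
# Risk level from a likelihood x impact scale, with the override rules
# above (money/PII/data loss raises impact; no coverage bumps likelihood).
LEVELS = ["Low", "Medium", "High", "Critical"]

def risk_level(likelihood: str, impact: str,
               touches_money_or_pii: bool = False,
               has_existing_coverage: bool = True) -> str:
    i = LEVELS.index(impact)
    if touches_money_or_pii:
        i = max(i, LEVELS.index("High"))      # automatically High impact
    l = LEVELS.index(likelihood)
    if not has_existing_coverage:
        l = min(l + 1, len(LEVELS) - 1)       # likelihood bump
    score = l + i                             # 0..6
    return "High" if score >= 4 else "Medium" if score >= 2 else "Low"

print(risk_level("Medium", "Critical"))  # High, as in the table above
```

Tuning the cutoffs against a few components you already know the answer for (as the table does) keeps the scoring honest.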
### 3. Test Levels per Component

For each component, decide which test levels apply and why.

| Component | Unit | Integration | E2E | Manual | Rationale |
|-----------|------|-------------|-----|--------|-----------|
| Price calculation | Yes | No | No | No | Pure logic, no external deps |
| Stripe integration | No | Yes | Yes | No | Must verify real API contract |
| Checkout flow | No | No | Yes | Yes | User-facing critical path |

Key principle: **test at the lowest level that gives you confidence.** Unit tests for logic, integration tests for contracts, E2E for critical user journeys only.

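To illustrate the lowest-level choice for the "price calculation" row: pure logic like this needs only a unit test, with no mocks, network, or environment. The `cart_total` function and its figures are hypothetical.

```python
# Pure logic is cheapest to test at the unit level: no external deps,
# instant feedback, precise failure messages.
from decimal import Decimal

def cart_total(items: list[tuple[Decimal, int]], tax_rate: Decimal) -> Decimal:
    """Sum of price * quantity, plus tax, rounded to cents."""
    subtotal = sum(price * qty for price, qty in items)
    return (subtotal * (1 + tax_rate)).quantize(Decimal("0.01"))

def test_cart_total_applies_tax():
    items = [(Decimal("19.99"), 2), (Decimal("5.00"), 1)]
    assert cart_total(items, Decimal("0.10")) == Decimal("49.48")
```

Nothing here would get more trustworthy by running through a browser, which is why the table marks E2E "No" for this component.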
### 4. Coverage Targets

Set targets per risk level, not a single blanket number:

```
High-risk: 90%+ line coverage, 100% of acceptance criteria
Medium-risk: 75%+ line coverage, all happy paths + known edge cases
Low-risk: 50%+ line coverage, happy path only
```

A single "80% coverage" target incentivizes testing easy code instead of risky code.

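Tiered targets become enforceable with a small gate script in CI. A sketch follows; the module names and risk mapping are illustrative, and the per-module percentages would come from your coverage tool's report rather than being hard-coded.

```python
# Per-risk coverage gate: flag any module below the threshold for its
# risk tier, instead of applying one blanket number.
THRESHOLDS = {"high": 90.0, "medium": 75.0, "low": 50.0}

def coverage_failures(risk_by_module: dict[str, str],
                      coverage_by_module: dict[str, float]) -> list[str]:
    failures = []
    for module, risk in risk_by_module.items():
        covered = coverage_by_module.get(module, 0.0)
        if covered < THRESHOLDS[risk]:
            failures.append(
                f"{module}: {covered:.0f}% < {THRESHOLDS[risk]:.0f}% ({risk}-risk)")
    return failures

risks = {"payments": "high", "cart": "medium", "confirmation": "low"}
coverage = {"payments": 84.0, "cart": 78.0, "confirmation": 60.0}
print(coverage_failures(risks, coverage))  # only payments fails its tier
```

Note that a blanket 80% gate would have passed `payments` at 84% while failing nothing else, which is exactly the inversion the tiered targets exist to prevent.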
### 5. Test Cases

Group by component and priority. Each case must have a clear pass/fail condition:

```
P0 — Must pass before release:
- [ ] Successful charge with valid card returns order confirmation
- [ ] Declined card shows user-facing error, no order created
- [ ] Network timeout triggers retry (max 2), then fails gracefully

P1 — Should pass, release-blocking if broken:
- [ ] Duplicate submission within 5s is idempotent
- [ ] Partial failure (charge succeeds, DB write fails) triggers compensation

P2 — Nice to verify, not release-blocking:
- [ ] Charge amount matches cart total across currency formats
```

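A P0 case like "declined card shows user-facing error, no order created" translates directly into an automated check with two explicit pass conditions. All names here (`FakeGateway`, `checkout`, the declined-card trigger value) are hypothetical stand-ins, not a real payment API.

```python
# Sketch of the declined-card P0 case: a declined charge must surface a
# user-facing error AND must not create an order record.
class DeclinedCard(Exception):
    pass

class FakeGateway:
    def charge(self, card: str, amount_cents: int) -> dict:
        if card == "4000-declined":
            raise DeclinedCard("card_declined")
        return {"status": "succeeded"}

def checkout(gateway, orders: list, card: str, amount_cents: int) -> str:
    try:
        gateway.charge(card, amount_cents)
    except DeclinedCard:
        return "Your card was declined."  # user-facing error, no order
    orders.append({"card": card, "amount": amount_cents})
    return "Order confirmed."

def test_declined_card_creates_no_order():
    orders = []
    message = checkout(FakeGateway(), orders, "4000-declined", 4999)
    assert message == "Your card was declined."
    assert orders == []  # the second pass condition: no order exists
```

Both assertions matter: a test that only checks the error message would miss a bug where the order is created anyway.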
### 6. Environment Requirements

Specify what each test level needs to run:

```
Unit tests: Local, no external deps, mock all I/O
Integration: CI environment, test-mode API keys, test database
E2E: Staging environment, seeded test data, service sandboxes
Manual: Staging with production-like data volume
```

Call out blockers explicitly: "Staging must have Stripe sandbox keys configured before E2E tests can run."

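Environment requirements can also be enforced inside the suite, so a missing sandbox key reads as "blocked" rather than as a test failure. A sketch using the standard library's `unittest` skips; the `STRIPE_TEST_KEY` variable name is an assumption for illustration.

```python
# Skip integration tests cleanly when the environment is not set up,
# with the blocker named in the skip reason.
import os
import unittest

requires_stripe_sandbox = unittest.skipUnless(
    os.environ.get("STRIPE_TEST_KEY"),
    "integration env missing: set STRIPE_TEST_KEY (test-mode key)",
)

class StripeIntegrationTests(unittest.TestCase):
    @requires_stripe_sandbox
    def test_charge_against_sandbox(self):
        pass  # real test-mode API call would go here
```

The skip reason then shows up in the test report, turning an environment blocker into a visible, named gap instead of a red X someone has to diagnose.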
### 7. Pass/Fail Criteria

```
Release is GO when:
- All P0 test cases pass
- All P1 cases pass OR have documented workarounds approved by eng lead
- No open P0/P1 bugs
- Coverage targets met per risk level

Release is NO-GO when:
- Any P0 test case fails
- More than 2 P1 cases fail without workarounds
- A new High-risk bug is discovered outside the original plan
```

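The gate above is mechanical enough to encode, which keeps "are we done?" out of judgment-call territory. A sketch; the inputs would come from your test results and bug tracker, and the fallthrough for states the criteria leave undefined is a conservative assumption.

```python
# GO/NO-GO release gate from the criteria above.
def release_decision(p0_failures: int, p1_failures: int,
                     p1_with_approved_workarounds: int,
                     open_p0_p1_bugs: int,
                     coverage_targets_met: bool,
                     new_high_risk_bug: bool) -> str:
    unresolved_p1 = p1_failures - p1_with_approved_workarounds
    if p0_failures > 0 or new_high_risk_bug or unresolved_p1 > 2:
        return "NO-GO"
    if unresolved_p1 == 0 and open_p0_p1_bugs == 0 and coverage_targets_met:
        return "GO"
    return "NO-GO"  # anything between the GO and NO-GO criteria: hold

print(release_decision(p0_failures=0, p1_failures=1,
                       p1_with_approved_workarounds=1,
                       open_p0_p1_bugs=0,
                       coverage_targets_met=True,
                       new_high_risk_bug=False))  # GO
```

Notice the criteria leave a gap (one or two unresolved P1 failures is neither explicitly GO nor NO-GO); the sketch resolves it conservatively, but a real plan should state which way that goes.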
### 8. Schedule

Map testing activities to the timeline. Always include buffer for bug fixes — plans that allocate 100% of time to writing tests and 0% to fixing failures are fiction.

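Carving the buffer out of the window up front makes it harder to quietly spend on test writing. A sketch; the one-third buffer fraction is an illustrative rule of thumb, not a figure from this plan.

```python
# Split a testing window into test writing/execution and a protected
# bug-fix buffer before the release date.
from datetime import date, timedelta

def schedule(start: date, release: date, buffer_fraction: float = 1 / 3) -> dict:
    total_days = (release - start).days
    buffer_days = max(1, round(total_days * buffer_fraction))
    test_end = release - timedelta(days=buffer_days)
    return {"test_writing": (start, test_end),
            "bug_fix_buffer": (test_end, release)}

plan = schedule(date(2026, 3, 2), date(2026, 3, 20))
print(plan["bug_fix_buffer"])  # last 6 days reserved for fixes
```

If the buffer gets consumed by test writing, that is a schedule slip to report, not slack to absorb silently.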
## Quality checklist

Before delivering a test plan, verify:

- [ ] Every in-scope component has a risk rating with reasoning
- [ ] Test levels are justified per component, not applied uniformly
- [ ] Coverage targets vary by risk level, not a single blanket number
- [ ] Every test case has a clear pass/fail condition, not just a description
- [ ] Pass/fail criteria define both GO and NO-GO conditions
- [ ] Environment requirements call out setup dependencies and blockers
- [ ] Out-of-scope items are listed explicitly, not just omitted silently
- [ ] P0 test cases cover failure modes, not just happy paths
- [ ] Schedule accounts for bug-fix time, not just initial test writing

## Common mistakes to avoid

- **Testing everything at E2E level.** E2E tests are slow and flaky. Reserve them for critical user journeys. Test logic with unit tests, contracts with integration tests.
- **Flat priority lists.** If every test case is "high priority," none are. Use P0/P1/P2 to force real prioritization.
- **Missing failure modes.** For every success case, ask: "What happens when this fails? Times out? Returns unexpected data?"
- **Coverage theater.** Hitting 80% by testing getters while ignoring retry logic. Coverage targets must pair with risk analysis.
- **No exit criteria.** Without pass/fail criteria, "are we done?" becomes a judgment call. Define the gate up front.
- **Ignoring test data.** Plans that assume test data exists without specifying who creates it, how, and when.