
Performance Audit

Conduct systematic performance audits — profiling frontend rendering, backend latency, and database queries to produce a prioritized optimization roadmap with measurable targets.

performance · profiling · optimization · latency · Core-Web-Vitals

Works well with agents

Frontend Engineer Agent · Performance Engineer Agent · SRE Engineer Agent

Works well with skills

System Design Document · Technical SEO Audit · Test Plan Writing
performance-audit/
  • ecommerce-page-load.md (5.1 KB)
  • SKILL.md (5.5 KB)

SKILL.md

# Performance Audit

## Before you start

Gather the following from the user. If anything is missing, ask before proceeding:

1. **What is slow?** — Specific page, endpoint, query, or workflow (not "the app feels sluggish")
2. **How slow is it?** — Current measured latency, load time, or throughput numbers
3. **What is the target?** — Acceptable performance threshold (e.g., "page load under 2 seconds at p95")
4. **What is the architecture?** — Frontend framework, backend services, database(s), CDN, caching layers
5. **What is the traffic profile?** — Average and peak request volume, geographic distribution
6. **What has already been tried?** — Previous optimization attempts and their outcomes

## Audit template

### 1. Establish Baselines

Measure before you optimize. Record current metrics for every area under audit:

**Frontend** (Lighthouse, WebPageTest, or RUM): LCP, INP, CLS, TTFB, TBT, total page weight broken down by JS/CSS/images/fonts. Include Core Web Vitals targets: LCP <2500ms, INP <200ms, CLS <0.1. (INP replaced FID as a Core Web Vital in 2024.)

**Backend** (APM, logs, or load testing): For each slow endpoint, record p50/p95/p99 latency, throughput (req/s), and error rate.

**Database**: List the top 3-5 slowest queries by total time (avg latency × call frequency), not single-execution time.

Record all baselines with timestamps, traffic level, and environment.
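
The p50/p95/p99 figures above can be computed directly from raw latency samples. A minimal sketch using the nearest-rank method (function names are illustrative):

```typescript
// Nearest-rank percentile over raw latency samples (in ms).
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Summarize an endpoint the way the baseline record expects.
function summarize(samples: number[]): { p50: number; p95: number; p99: number } {
  return {
    p50: percentile(samples, 50),
    p95: percentile(samples, 95),
    p99: percentile(samples, 99),
  };
}
```

With 10 samples of 100..1000ms, `summarize` reports p50 = 500 and p95 = p99 = 1000 — note how the tail percentiles surface the worst samples that an average would hide.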

### 2. Frontend Audit

- [ ] Run bundle analyzer — identify largest modules and duplicate dependencies
- [ ] Verify code splitting: route-specific modules lazy-loaded, tree-shaking eliminating unused exports
- [ ] Identify unnecessary re-renders using React Profiler or equivalent
- [ ] Check for layout thrashing: interleaved DOM reads/writes in loops
- [ ] Verify images use modern formats (WebP/AVIF), correct sizing, and lazy loading
- [ ] Check fonts: limit weights, use `font-display: swap`
- [ ] Verify assets served from CDN with cache headers, no render-blocking resources in critical path
- [ ] Check for unnecessary API calls on page load
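
One quick win for the last item: when several components mount at once and request the same resource, dedupe identical in-flight requests so only one network call goes out. A sketch, with illustrative names (`dedupedFetch` is not a browser API):

```typescript
// Callers asking for the same key while a request is in flight
// share one promise instead of issuing a duplicate call.
const inFlight = new Map<string, Promise<unknown>>();

function dedupedFetch<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  const existing = inFlight.get(key);
  if (existing) return existing as Promise<T>;
  const p = fetcher().finally(() => inFlight.delete(key));
  inFlight.set(key, p);
  return p;
}
```

Libraries like SWR and TanStack Query do this (plus caching and revalidation) out of the box; the sketch only shows why duplicate calls disappear.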

### 3. Backend Audit

- [ ] Trace slow requests end-to-end — where does time accumulate?
- [ ] Check middleware chain for unexpectedly slow steps (auth, logging, parsing)
- [ ] Identify synchronous operations that could be asynchronous
- [ ] Check for missing timeouts on external service calls
- [ ] Identify repeatedly computed results that could be cached
- [ ] Verify cache hit rates — low rates indicate poor key design or short TTLs
- [ ] Check connection pool sizes against actual demand
- [ ] Identify sequential operations that could be parallelized
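
The timeout and parallelization items combine naturally. A sketch, assuming two independent upstream calls (`fetchProfile` and `fetchOrders` are hypothetical stand-ins for real service clients):

```typescript
// Hypothetical upstream calls standing in for real service clients.
const fetchProfile = async (id: string) => ({ id, name: "demo" });
const fetchOrders = async (id: string) => [{ orderId: 1 }];

// Wrap an external call with a timeout so a slow dependency
// cannot stall the whole request (Promise.race against a timer).
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Independent lookups run in parallel instead of sequentially:
// total latency becomes max(a, b) rather than a + b.
async function loadDashboard(userId: string) {
  const [profile, orders] = await Promise.all([
    withTimeout(fetchProfile(userId), 500),
    withTimeout(fetchOrders(userId), 500),
  ]);
  return { profile, orders };
}
```

Note that `Promise.race` abandons the slow promise rather than cancelling it; real clients should also pass an `AbortSignal` so the underlying request is actually torn down.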

### 4. Database Audit

- [ ] Run EXPLAIN/ANALYZE on the top 10 slowest queries
- [ ] Check for missing indexes on columns in WHERE, JOIN, and ORDER BY
- [ ] Identify N+1 query patterns — loops issuing individual queries instead of batch/JOIN
- [ ] Check for full table scans on tables with >100K rows
- [ ] Verify connection pooling is configured and sized appropriately
- [ ] Check for lock contention on frequently updated rows
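
The N+1 item deserves a concrete illustration: instead of one query per loop iteration, collect the IDs and issue a single batched query. A sketch (the `query` parameter is a stand-in for a real database driver):

```typescript
type Order = { id: number; customerId: number };
type Customer = { id: number; name: string };

// N+1 anti-pattern — one query per order:
//   for (const order of orders) {
//     customers.push(await db.query("SELECT ... WHERE id = ?", order.customerId));
//   }

// Batched fix: one query for all distinct customer IDs at once,
// e.g. SELECT * FROM customers WHERE id IN (...).
async function loadCustomers(
  orders: Order[],
  query: (ids: number[]) => Promise<Customer[]>,
): Promise<Map<number, Customer>> {
  const ids = [...new Set(orders.map((o) => o.customerId))];
  const rows = await query(ids);
  return new Map(rows.map((c) => [c.id, c]));
}
```

For N orders this turns N+1 round trips into 2 (orders, then customers), which is exactly the kind of p95 collapse the roadmap example below claims.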

### 5. Prioritized Optimization Roadmap

Rank every finding by impact and effort:

| Priority | Finding | Impact | Effort | Expected Gain |
|----------|---------|--------|--------|---------------|
| P0 | N+1 queries on /dashboard | High | Low | p95 from 3200ms to 800ms |
| P1 | No CDN for static assets | High | Medium | LCP from 4100ms to 2200ms |
| P2 | Unoptimized images | Medium | Low | Page weight -40% |

- **P0**: High impact, low effort — free wins, do first
- **P1**: High impact, higher effort — schedule immediately
- **P2**: Medium impact, low effort — batch into one sprint
- **P3**: Lower impact or high effort — backlog
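
The bucketing above is mechanical enough to encode, which keeps prioritization consistent across findings. A sketch of one possible mapping:

```typescript
type Level = "High" | "Medium" | "Low";

// Map an impact/effort pair onto the P0-P3 buckets defined above.
function prioritize(impact: Level, effort: Level): "P0" | "P1" | "P2" | "P3" {
  if (impact === "High") return effort === "Low" ? "P0" : "P1";
  if (impact === "Medium" && effort === "Low") return "P2";
  return "P3"; // lower impact, or medium impact with high effort
}
```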

### 6. Set Targets and Monitoring

For each P0/P1 item, define: baseline measurement, target threshold, monitoring alert condition, and verification date. Never optimize without a way to measure the result.
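
One lightweight way to make those four fields concrete is a per-finding record plus a verification check. A sketch (the field names are illustrative, not a standard schema):

```typescript
// One tracked optimization target: baseline, goal, and the alert
// condition that catches regressions after the fix ships.
type OptimizationTarget = {
  finding: string;
  baselineMs: number;    // measured before the change
  targetMs: number;      // acceptance threshold
  alertAboveMs: number;  // monitoring alert fires above this
  verifyBy: string;      // ISO date for the follow-up measurement
};

function verify(t: OptimizationTarget, measuredMs: number): "met" | "missed" {
  return measuredMs <= t.targetMs ? "met" : "missed";
}
```

Setting `alertAboveMs` between the target and the old baseline gives early warning of a regression before it fully undoes the win.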

## Quality checklist

Before delivering a performance audit, verify:

- [ ] Baselines are recorded with timestamps, conditions, and tools used
- [ ] Frontend, backend, and database layers are each assessed
- [ ] Every finding includes measured data, not subjective impressions
- [ ] The roadmap is prioritized by impact/effort, not listed in discovery order
- [ ] P0/P1 items have specific, measurable targets (not "make it faster")
- [ ] Monitoring is defined so regressions are caught

## Common mistakes

- **Optimizing without measuring first.** Intuition about what is slow is wrong more often than right. Profile, then optimize.
- **Focusing on micro-optimizations.** Shaving 2ms off a function while a 3-second database query runs unchecked is wasted effort.
- **Ignoring p99 latency.** p50 looks fine, but p99 reveals the worst user experience. Report percentile distributions, not averages.
- **Missing the N+1 pattern.** The most common backend performance bug. Every loop that issues a query is suspect.
- **Caching without an invalidation strategy.** Stale data creates bugs that are harder to diagnose than slowness.
- **Declaring victory after one test.** Verify improvements under realistic load, data volume, and conditions.