
prd-writing

Expert product requirements document writing for shipping high-quality features. Covers PRD structure, user story format with acceptance criteria, problem framing techniques, requirement types, edge cases, design review, success metrics, and keeping PRDs as living documents. Trigger phrases: write a

MoltbotDen
Product & Design

PRD Writing

A Product Requirements Document is the primary artifact that aligns product, engineering, and design on what to build and why. A great PRD reduces ambiguity, prevents scope creep, and gives engineers enough context to make good technical decisions independently. A bad PRD is either a feature wish list with no "why," or a tome no one reads. This skill covers the craft of writing PRDs that actually get used.

Core Mental Model

A PRD answers three questions: Why this? Why now? What exactly? In that order. Teams that skip to "what exactly" without answering the first two produce technically correct features that don't matter.

The PRD is a decision-making document, not a project plan. Its job is to create shared understanding so the team can make good decisions during implementation without constant PM involvement. If engineers have to ask you the same questions repeatedly, the PRD failed.

Think of the PRD in three phases:

  1. Problem frame (Why): Evidence for the problem, user jobs, opportunity size

  2. Solution frame (What): User stories, requirements, edge cases, design

  3. Success frame (Metrics): What "good" looks like after shipping


PRD Structure

# [Feature Name] PRD
**Author:** [PM Name]
**Status:** Draft | In Review | Approved | Shipped
**Last updated:** [Date]
**Engineering DRI:** [Name]
**Design DRI:** [Name]

---

## Problem Statement
One paragraph. What user pain are we solving? What's the evidence?

## Goals
2-3 bullet goals for this feature. Each should be a user or business outcome.

## Non-Goals  
What this feature will NOT do. This is as important as goals.

## Background
Context a new team member would need. Links to research, related features, strategy docs.

## User Stories
The primary way to specify "what." See User Story section.

## Functional Requirements
Detailed behavior specifications.

## Non-Functional Requirements
Performance, security, accessibility, i18n.

## Edge Cases & Error States
What happens when things go wrong.

## Design
Links to Figma mocks, open design questions.

## Success Metrics
How we'll know this worked. Baseline + target.

## Dependencies
Other teams, systems, or features this depends on.

## Timeline
Milestones, not necessarily dates (or use rough estimates).

## Open Questions
Unresolved decisions. Assign an owner and due date for each.

Problem Statement (The Most Important Section)

Most PRDs rush to requirements. The problem statement is where value is created or destroyed.

5 Whys for Root Cause

User complaint: "The export feature is too slow."

Why #1: The export takes 30+ seconds for large datasets.
Why #2: We're generating the file synchronously in the request.
Why #3: We designed it for small datasets; we didn't anticipate 100k+ rows.
Why #4: We didn't have usage data on export behavior when we built it.
Why #5: There's no monitoring on export size distribution.

Root cause: We're missing async job processing AND observability on exports.

Wrong PRD: "Make exports faster." (treats the symptom)
Right PRD: "Implement async export processing with progress indication + 
            add P95 export latency monitoring."

User Journey Map to Identify Pain

Document the user journey BEFORE your feature exists. Pain points in the current journey become your requirements.

Journey: User wants to share project report with their manager

Step 1: User opens project                    → Works fine
Step 2: User looks for export                → Unclear where it is (pain: navigation)
Step 3: User finds export in share menu      → Unexpected location (pain: mental model)
Step 4: User starts export                   → 35 second wait with no feedback (pain: uncertainty)
Step 5: User receives generic CSV            → No formatting, needs cleanup in Excel (pain: quality)
Step 6: User emails CSV to manager           → Manager can't open it correctly (pain: format)

Opportunities identified:
1. Move Export out of Share menu
2. Add async processing with progress bar
3. Offer Excel-formatted export option
4. Consider shareable link instead of file

User Story Format

Standard Format

As a [specific user type],
I want to [perform a specific action],
So that [I achieve a specific outcome].

Good:
As a project manager,
I want to export my project's task data to Excel,
So that I can create a status report for stakeholders without manual data entry.

Bad:
As a user,
I want to export things,
So that I can use the data.
(too vague — no specificity on who, what, or why)

User Types to Be Specific About

  • Don't write "user" — write "free plan user," "account admin," "first-time visitor," "API consumer," "mobile user," "screen reader user"
  • Different user types often need different implementations of the same feature

Acceptance Criteria: GIVEN/WHEN/THEN

Acceptance criteria are the definition of done. They should be independently verifiable.

User Story: As a project manager, I want to export my project to Excel.

Acceptance Criteria:

GIVEN I am logged in as a project owner
WHEN I click "Export" on a project with fewer than 10,000 tasks
THEN a download begins immediately and completes within 5 seconds

GIVEN I am logged in as a project owner
WHEN I click "Export" on a project with more than 10,000 tasks
THEN I see a progress indicator and receive an email notification when the export is ready

GIVEN the export has been processing for more than 2 minutes
WHEN the job fails
THEN I receive an email notification with an error message and a "retry" link

GIVEN I am a free plan user
WHEN I click "Export"
THEN I see an upgrade prompt explaining that exports require the Pro plan

GIVEN I am a team member (not an owner) on a project
WHEN I navigate to the project settings
THEN the Export button is visible but disabled with tooltip "Only project owners can export"

Every "GIVEN/WHEN/THEN" should be automatable as a test case. If your QA engineer can't write a test for it, the criterion is too vague.
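To make "automatable" concrete, here is a minimal sketch of how the free-plan criterion above might translate into an automated test. `ExportService`, `UpgradeRequired`, and `start_export` are hypothetical stand-ins for the real application code, not part of this PRD:

```python
class UpgradeRequired(Exception):
    """Raised when a plan does not include the export feature."""

class ExportService:
    # Hypothetical service; the plan gate mirrors FR10 (Pro and Enterprise only).
    PLANS_WITH_EXPORT = {"pro", "enterprise"}

    def start_export(self, user_plan: str) -> str:
        # GIVEN a user on a given plan, WHEN they click "Export"...
        if user_plan not in self.PLANS_WITH_EXPORT:
            # THEN free-plan users get an upgrade prompt instead of a download
            raise UpgradeRequired("Exports require the Pro plan")
        return "export_started"

def test_free_plan_sees_upgrade_prompt():
    svc = ExportService()
    try:
        svc.start_export("free")
        assert False, "free plan should not be able to export"
    except UpgradeRequired as e:
        assert "Pro plan" in str(e)

def test_pro_plan_can_export():
    assert ExportService().start_export("pro") == "export_started"
```

If a criterion can't be expressed this directly, it usually means the GIVEN or THEN is underspecified.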

Requirement Types

Functional Requirements

What the system must do. Use declarative statements, not implementation details.
## Functional Requirements

### Export Initiation
- FR1: Users shall be able to initiate an export from the project overview page
- FR2: Export shall include all tasks, subtasks, assignees, due dates, status, and custom fields
- FR3: Users shall be able to choose between CSV and Excel (.xlsx) formats
- FR4: Export shall reflect project data as of the moment export was initiated

### Async Processing
- FR5: Exports exceeding 10,000 rows shall be processed asynchronously
- FR6: Users shall see a progress indicator for async exports
- FR7: Users shall receive an email when an async export is ready to download
- FR8: Export download links shall expire after 48 hours

### Permissions
- FR9: Only users with Owner or Admin role on a project may export
- FR10: Export shall be available on Pro and Enterprise plans only

Non-Functional Requirements

How the system must perform. These are often the ones that cause production incidents when forgotten.
## Non-Functional Requirements

### Performance
- NFR1: Synchronous exports (< 10k rows) must complete in < 5 seconds at P95
- NFR2: Async export jobs must begin processing within 30 seconds of initiation
- NFR3: Maximum export size: 500,000 rows (larger should return an error, not hang)

### Security
- NFR4: Export download URLs must be signed and user-specific (not guessable)
- NFR5: Download links must expire after 48 hours
- NFR6: Export initiation must respect rate limiting: 10 exports per user per day

### Accessibility
- NFR7: Export button must be keyboard accessible
- NFR8: Progress indicator must have appropriate ARIA live region

### Reliability
- NFR9: Failed export jobs must retry automatically up to 3 times with exponential backoff
- NFR10: Export jobs must complete or fail within 15 minutes (no infinite hangs)
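NFR9's retry behavior is worth pinning down precisely, because "exponential backoff" hides several parameters. A minimal sketch, with illustrative names (`run_with_retries`, a 1s base delay) that the engineering team would choose for real:

```python
import time

def run_with_retries(job, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Run an export job, retrying up to max_attempts times with
    exponential backoff (1s, 2s, 4s, ...), per NFR9."""
    for attempt in range(max_attempts):
        try:
            return job()
        except Exception:
            if attempt == max_attempts - 1:
                # Retries exhausted: surface the failure so the
                # "export failed" email path (edge cases below) can fire.
                raise
            sleep(base_delay * 2 ** attempt)
```

Injecting `sleep` keeps the backoff schedule testable without real waiting, which is exactly the kind of detail the acceptance criteria should let QA verify.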

MoSCoW Prioritization

When requirements exceed one sprint:
Must Have (M):    Feature is dead without this
Should Have (S):  High value but can ship without it
Could Have (C):   Nice-to-have if time allows
Won't Have (W):   Explicitly out of scope for this version

V1 Scope:
M: CSV export for all project data
M: Async processing for large exports
M: Email notification when ready
S: Excel format support
S: Progress indicator
C: Custom field selection
C: Scheduled recurring exports
W: Real-time collaborative export (future)

Edge Cases and Error States

This is where most PRDs fall short. Engineering will encounter these — document them now.

## Edge Cases

### Empty States
- Empty project (0 tasks): Export a valid file with headers only, no data rows
- Project with only archived tasks: Export includes archived tasks (with status "Archived")

### Large Data
- 500,000+ rows: Return error "This export is too large. Contact support for bulk data export."
- User with 50+ simultaneous exports pending: Queue and process sequentially; show queue position

### Permissions Changes Mid-Export
- If user's role is downgraded to Member while export is processing: complete the export 
  but do not send the email download link; silently expire the job

### Network / System Errors
- If export job fails after 3 retries: Send email "Your export failed. Please try again or contact support."
- If user tries to download an expired link: Show "This link has expired. Generate a new export from the project."
- If S3/storage is unavailable: Queue the job, retry when storage recovers

### Special Data
- Tasks with null custom field values: Include column with empty cell (not "null" string)
- Task names with commas (CSV): Properly escaped with quotes per RFC 4180
- Unicode characters in task names: UTF-8 encoding, BOM prefix for Excel compatibility
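The three "special data" cases above are cheap to get right if they're specified together. A sketch of what a compliant CSV writer looks like in Python (the `export_csv` helper is illustrative; the stdlib `csv` module does the RFC 4180 escaping):

```python
import csv
import io

def export_csv(rows, fieldnames):
    """Write rows to a CSV string that opens cleanly in Excel:
    - commas/quotes in values are escaped per RFC 4180 (csv module default)
    - CRLF line endings per RFC 4180
    - None custom-field values become empty cells, not the string "null"
    - a UTF-8 BOM prefix tells Excel the file is UTF-8
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames, lineterminator="\r\n")
    writer.writeheader()
    for row in rows:
        writer.writerow(
            {k: ("" if row.get(k) is None else row[k]) for k in fieldnames}
        )
    return "\ufeff" + buf.getvalue()  # BOM prefix for Excel compatibility
```

For example, a task named `a, b` with a null custom field produces a quoted first cell and an empty (not `"null"`) second cell.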

Success Metrics

Metrics that measure user value, not feature adoption.

## Success Metrics

### Primary (measures if we solved the problem)
- Baseline: Time for users to create a project status report (from research: ~45 min avg, mostly manual formatting)
- Target: Users with export feature spend < 15 min on status reports

### Secondary (leading indicators)
- Export completion rate: > 90% of initiated exports succeed (baseline: N/A, new feature)
- P95 export latency: < 5s for < 10k rows, < 3 min for async exports
- Export feature adoption: 30% of active project owners use export within 30 days of launch

### Guardrail (must not degrade)
- Overall project page load time must not increase
- Customer support tickets about exports: < 2% of export users contact support

### Anti-metrics (what we're NOT optimizing)
- Number of exports per user (a high count could mean each export is painful and users keep re-exporting, not that they're getting more value)
- Export button clicks that don't complete (a sign of confusion, not success)

### Measurement Plan
- Event: project_exported (properties: format, row_count, async, duration_ms)
- Dashboard: Amplitude funnel: export_started → export_completed
- Alert: error rate > 5% on export_failed event → PagerDuty
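A measurement plan only works if the event contract is unambiguous. One way to nail it down is a tiny builder that mirrors the properties listed above (the function name and plan gate are illustrative; the analytics client itself is out of scope here):

```python
def project_exported_event(fmt, row_count, is_async, duration_ms):
    """Build the project_exported analytics event from the measurement plan.
    Property names intentionally match the plan: format, row_count, async, duration_ms."""
    assert fmt in ("csv", "xlsx"), "format must match FR3"
    return {
        "event": "project_exported",
        "properties": {
            "format": fmt,
            "row_count": row_count,
            "async": is_async,
            "duration_ms": duration_ms,
        },
    }
```

Agreeing on this shape before launch prevents the common failure mode where the dashboard is built against property names the client never sent.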

API Design Input for Engineering

Product should inform the API design, especially for features with consumer-facing APIs.

## API Considerations

The engineering team should evaluate an async job pattern:

POST /api/projects/{id}/exports
Body: { format: "csv" | "xlsx", fields: ["tasks", "subtasks", ...] }
Response 202 Accepted: { export_id: "exp_abc123", status: "queued", poll_url: "..." }

GET /api/exports/{export_id}
Response: { status: "queued" | "processing" | "ready" | "failed", download_url?: "...", expires_at?: "..." }

Questions for engineering:
1. Do we use polling or webhooks for status updates?
2. Should exports be idempotent? (same request within 5 min returns existing job?)
3. What's our storage strategy for export files? (S3 lifecycle policy?)
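If engineering chooses polling (question 1), the client-side loop follows directly from the spec above plus NFR10's 15-minute bound. A sketch, assuming a `get_status` callable that wraps `GET /api/exports/{export_id}` and returns the JSON body as a dict (all names here are illustrative):

```python
import time

def wait_for_export(get_status, poll_interval=2.0, timeout=900,
                    sleep=time.sleep, clock=time.monotonic):
    """Poll the export status endpoint until the job is ready or failed,
    giving up after `timeout` seconds (900s = the 15-minute SLA in NFR10).
    get_status() should return e.g. {"status": "ready", "download_url": "..."}."""
    deadline = clock() + timeout
    while clock() < deadline:
        job = get_status()
        if job["status"] == "ready":
            return job["download_url"]
        if job["status"] == "failed":
            raise RuntimeError("export failed")
        sleep(poll_interval)
    raise TimeoutError("export did not finish within the SLA")
```

Injecting `sleep` and `clock` keeps the loop testable; the same shape also makes the webhook-vs-polling tradeoff concrete (webhooks eliminate this loop but add delivery and retry concerns on the server side).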

Keeping PRDs as Living Documents

## Change Log
| Date       | Change                          | Author |
|------------|---------------------------------|--------|
| 2025-01-10 | Initial draft                   | @will  |
| 2025-01-15 | Added async export spec         | @will  |
| 2025-01-20 | Engineering feedback: S3 lifecycle, polling vs webhooks | @eng |
| 2025-01-22 | Removed Excel format (V2)       | @will  |

After launch: Add a "What we learned" section:

## Post-Launch Notes (added after shipping)
- Excel format was requested by 40% of users in first week → accelerate to V1.1
- 3-minute async SLA was too long — users were abandoning and not returning for download
- Email notification open rate: 89% (users DO check email for this)

Anti-Patterns

No "non-goals" section — Without explicit out-of-scope items, every adjacent feature is implicitly in scope. Scope creep always follows.

Solution-first PRDs — "Add an export button that opens a modal with CSV and Excel options" is an implementation spec, not a requirements doc. Start with the problem.

Vague acceptance criteria — "The export should be fast" is not testable. "P95 export latency < 5 seconds" is.

Missing error states — A feature with five happy paths and zero error paths will generate support tickets. Engineers default to their best guess for error handling without spec.

PRD as a contract — When PMs use the PRD to hold engineering "accountable," trust breaks down. It's a living document, not a legal agreement.

Metrics that measure output, not outcome — "% of users who clicked the export button" measures feature existence; "time-to-report-creation" measures user value.

PRD with no open questions — Every feature has ambiguity. Documenting open questions forces resolution before code is written.

Quick Reference

PRD Checklist

  • [ ] Problem statement includes evidence (data, research, user feedback)
  • [ ] User stories specify the user type (not just "user")
  • [ ] Acceptance criteria written as testable GIVEN/WHEN/THEN
  • [ ] Non-goals explicitly listed
  • [ ] Edge cases documented: empty states, error states, permissions edge cases
  • [ ] NFRs include performance, security, accessibility targets
  • [ ] Success metrics measure user value, not feature adoption
  • [ ] Open questions have owners and due dates
  • [ ] Change log initialized

User Story Quality Test

Ask each story:
  1. Who: Is the user type specific enough to differentiate behavior?
  2. What: Is the action concrete and unambiguous?
  3. Why: Does the outcome describe user value, not a feature capability?
  4. AC: Can a QA engineer write an automated test for every AC?
  5. Independent: Can this story ship and be useful on its own?
