

User Research

User research is how product teams replace opinions with evidence. Done well, it creates shared empathy across engineering, design, and product — and it's the single best way to avoid building the wrong thing. Done poorly, it generates a mountain of data that influences nothing. This skill covers methods, techniques, and synthesis approaches used by research leads at top product companies.

Core Mental Model

Research answers questions. Before running any study, write your research question in one sentence: "We want to understand [X] so that we can decide [Y]." If you can't complete that sentence, you're not ready to recruit participants.

Generative vs Evaluative:

  • Generative research: "What problems should we solve?" (discovery, early stage)

  • Evaluative research: "Are we solving it well?" (validation, later stage)


Qualitative vs Quantitative:
  • Qualitative: Explains WHY. Small samples (5-8 for usability, 15-20 for discovery). Generates hypotheses.

  • Quantitative: Confirms HOW MANY. Statistical samples (100+ for surveys, thousands for A/B tests). Tests hypotheses.


The worst research crime is using qualitative data to make quantitative claims ("7 out of 8 users said...") or dismissing qualitative insights because "it's not statistically significant."
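As a sanity check on those quantitative sample sizes, a standard back-of-envelope margin-of-error estimate (normal approximation, worst-case p = 0.5; not from this skill's source) shows why 100 responses is a floor, not a target:

```python
import math

# 95% margin of error for a survey proportion, worst case p = 0.5.
# MoE = z * sqrt(p * (1 - p) / n), with z = 1.96 for 95% confidence.
def margin_of_error(n: int, p: float = 0.5) -> float:
    return 1.96 * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1000):
    print(f"n={n}: ±{margin_of_error(n):.1%}")
# n=100 gives roughly ±10%; quadrupling to n=400 halves that to ~±5%.
```

Note the square-root relationship: precision improves slowly, which is why quantitative claims need hundreds of responses while qualitative work does not try to make them at all.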

Choosing the Right Method

Question Type                        → Method
-----------------------------------------
What do users value/struggle with?   → User interviews
Does the UI make sense?              → Usability test (5-second, think-aloud)
What do users do (not say)?          → Analytics, session recording
How satisfied are users?             → NPS, CSAT survey
How many users have this problem?    → Survey, analytics
What should we build next?           → Continuous discovery (Teresa Torres)
Why did someone switch to/from us?   → JTBD switching interview
What do users think on first glance? → First-click test, 5-second test

The 5×5 rule for usability testing: 5 users per user segment × 5 tasks reveals ~85% of usability problems. Adding users beyond 5 yields rapidly diminishing returns (per Nielsen Norman Group research).
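The diminishing-returns curve behind that rule comes from the Nielsen/Landauer model: with n users and an average per-user problem-discovery rate L (Nielsen reports L ≈ 0.31 across projects, though it varies by product), the share of problems found is 1 − (1 − L)^n. A quick sketch:

```python
# Nielsen/Landauer model of usability-problem discovery.
# L is the average share of problems a single user exposes; 0.31 is the
# published average, but your product and task set may differ.
def problems_found(n_users: int, L: float = 0.31) -> float:
    return 1 - (1 - L) ** n_users

for n in (1, 3, 5, 10):
    print(f"{n:>2} users: {problems_found(n):.0%} of problems found")
# 5 users lands near the ~85% figure cited above; 10 users adds little.
```

This is also the argument for many small rounds: three tests of 5 users each (fixing problems between rounds) beat one test of 15.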

User Interview Technique

Before the Interview

Write a discussion guide — not a script, but a structured set of topics with example questions. The conversation should feel like a natural discussion.
DISCUSSION GUIDE: Onboarding Experience
Research question: Why do new users drop off in week 1?
Duration: 45-60 min
Format: Zoom, recorded (with consent)

WARM-UP (5 min)
- Tell me about your role and what a typical day looks like for you.
- How did you first hear about [Product]?

CURRENT SITUATION (15 min)
- Walk me through how you currently handle [relevant workflow].
- What tools do you use for that? How long have you been using them?
- What's the most frustrating part of that process?

PRODUCT EXPERIENCE (20 min)
- Tell me about the first time you tried [Product].
- Walk me through what happened when you [signed up / opened the app].
- [Let them narrate — don't interrupt]
- What were you hoping to accomplish?
- Was there a moment where things got confusing or you weren't sure what to do next?

DECISION MOMENT (10 min)
- You mentioned [earlier statement about stopping]. Can you walk me through that?
- What made you decide to [continue / stop]?
- What would have made you more likely to [keep going / come back]?

WRAP-UP (5 min)
- If you could change one thing about the experience, what would it be?
- Is there anything we didn't cover that you think would be helpful for us to know?

TEDW Probing Technique

Use these stems to dig deeper without leading the user:
Tell me more about that.
Explain what you mean by [X].
Describe what that experience was like.
Walk me through [that moment / how you did that].

vs. Leading questions to AVOID:
"Did that make you frustrated?"  → "How did that make you feel?"
"Was the button hard to find?"   → "What happened when you tried to [task]?"
"Do you prefer A or B?"          → "Which did you end up using, and why?"

First-Five-Minutes Rule for Opening

Don't start with the product. Spend the first 5-10 minutes on the user's world, not your product. If the conversation reaches your product within the first five minutes, you've lost generative research value.

Avoiding Bias

Bias                  Prevention
---------------------------------------------------------------------------
Confirmation bias     Two researchers, independent coding
Social desirability   Normalize criticism: "Others have told us X is hard —
                      did you find that?"
Leading questions     Record interviews, review transcripts with fresh eyes
Selection bias        Recruit non-users and churned users, not just power users
Recency bias          Ask about specific past events, not general feelings
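The "two researchers, independent coding" defense can be checked numerically: Cohen's kappa measures how much two coders agree beyond what chance would produce. A minimal sketch (the theme labels are invented for illustration; values above ~0.6 are conventionally read as substantial agreement):

```python
from collections import Counter

# Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement).
def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    n = len(coder_a)
    observed = sum(x == y for x, y in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement: probability both coders pick the same label at random,
    # given each coder's own label frequencies.
    chance = sum(freq_a[k] * freq_b[k] for k in freq_a) / n ** 2
    return (observed - chance) / (1 - chance)

a = ["nav", "nav", "cta", "error", "nav", "cta"]
b = ["nav", "cta", "cta", "error", "nav", "cta"]
print(round(cohens_kappa(a, b), 2))
```

Low kappa does not mean the research failed; it means the coding scheme is ambiguous and the two researchers should reconcile definitions before synthesis.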

Usability Testing

Task Design

Tasks should be realistic scenarios, not instructions:
BAD:  "Click the 'New Project' button and create a project called 'Test'."
GOOD: "Imagine you're starting a new client engagement. 
       Set up a workspace for it."

BAD:  "Find the analytics dashboard."
GOOD: "Your manager asked for last month's revenue numbers. 
       Where would you find that?"

Each task should have:

  • Scenario context (why they're doing this)

  • Clear success criteria (how you know they completed it)

  • Expected path (so you can spot deviation)


Think-Aloud Protocol


Prime participants at the start:

"As you work through these tasks, I'd like you to think out loud — narrate what you're looking at, what you're thinking, what you expect to happen. There are no right or wrong answers. We're testing the product, not you. If something is confusing, that's important data for us."

Moderator rules during testing:

  • Silently observe. Take timestamped notes.

  • Don't answer questions. Redirect: "What would you expect to happen?"

  • Don't explain. If they're stuck, note it and move on.

  • Ask "What are you thinking right now?" when they go silent.


Rainbow Spreadsheet Synthesis


After 5 sessions, use a rainbow spreadsheet to synthesize findings:

Columns: Participant 1 | P2 | P3 | P4 | P5
Rows:    Task 1 - found nav        ✅ | ✅ | ❌ | ✅ | ❌
         Task 1 - confused by CTA  ❌ | ✅ | ✅ | ❌ | ✅
         Task 2 - error on submit  ❌ | ❌ | ✅ | ✅ | ✅

Colors: Red = critical fail, Yellow = struggle/confusion, Green = success
Tally each row: 3+ reds on the same issue = ship-blocker
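The tally step is easy to automate once the spreadsheet is kept as data. A minimal sketch, assuming True means the issue was observed for that participant (labels, data, and the 3-of-5 threshold are illustrative):

```python
# Rows = observations, columns = participants (5 sessions).
observations = {
    "Task 1 - missed nav":      [False, False, True, False, True],
    "Task 1 - confused by CTA": [True, False, False, True, False],
    "Task 2 - error on submit": [True, True, True, False, False],
}

SHIP_BLOCKER_AT = 3  # 3+ of 5 participants hitting the same issue
for label, hits in observations.items():
    count = sum(hits)
    flag = "SHIP-BLOCKER" if count >= SHIP_BLOCKER_AT else "monitor"
    print(f"{label}: {count}/5 -> {flag}")
```

Keeping the raw per-participant booleans (rather than just totals) preserves the ability to re-cut by segment later.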

JTBD Interview Technique

Jobs-to-be-Done interviews focus on the switching moment — when someone decided to change how they do something (buy a product, cancel, start using a workaround).

The JTBD Timeline Protocol

"I'd like to understand the journey that led you to [use our product / make that decision].
Let's go back to [the first time you decided to look for a solution / the day you signed up]."

PHASES TO EXPLORE:
1. Passive looking: "When did you first feel like what you had wasn't working?"
2. Active looking: "What made you actually start searching for alternatives?"
3. The moment: "Tell me about the day you decided to try [Product]."
4. Consuming: "What were your first few days like?"
5. Hiring (or firing): "At what point did you feel like it was working / not working?"

KEY QUESTIONS:
- "What were you doing when you first realized you had this problem?"
- "What else did you try before this?"
- "What almost stopped you from trying [Product]?"
- "What pushed you over the edge to actually [sign up / switch / cancel]?"

Forces Model (JTBD)

Four forces drive every switch:
PUSHING away from old solution:
  + Push of the situation (frustrations, inadequacies)
  + Pull of the new solution (promises, appeal)

PULLING back toward status quo:
  - Anxiety of new (will it work? learning curve)
  - Habit of present (familiar, good enough)

Your job in research: understand all four forces.
Your job in product: amplify push + pull, reduce anxiety + habit.

Survey Design

Question Types and Pitfalls

LIKERT SCALE (1-5 or 1-7):
✅ Use 7-point for nuance, 5-point for simplicity
✅ Balanced: equal positive and negative options
❌ Avoid: "Rate your satisfaction 1-10" (ordinal, not interval data)
❌ Avoid: Combining multiple statements in one question

OPEN-ENDED:
✅ Place at the end (reduces drop-off)
✅ "What else would you like us to know?" as a final catch-all
❌ Avoid: Starting with open-ended (increases abandonment)

MULTIPLE CHOICE:
✅ "Select all that apply" when overlap is possible
✅ Include "None of the above" and "Other (please specify)"
❌ Avoid: Leading options that prime a specific answer

NPS FOLLOW-UP:
After score: "What's the main reason for your score?"
Detractor: "What would have to change for you to give a higher score?"
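For reference, the NPS arithmetic itself is standard: promoters score 9-10, detractors 0-6, passives 7-8, and the score is the percentage-point difference. A quick sketch:

```python
# NPS on a 0-10 scale: % promoters (9-10) minus % detractors (0-6).
# Passives (7-8) count in the denominator but in neither group.
def nps(scores: list[int]) -> float:
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

print(nps([10, 9, 9, 8, 7, 6, 3, 10]))  # 4 promoters, 2 detractors of 8 -> 25.0
```

The score ranges from -100 (all detractors) to +100 (all promoters); the follow-up questions above are what make it actionable.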

Sampling Bias Prevention

Who you SHOULD recruit:             Common mistake: recruiting only
Non-users                           Power users
Churned users                       Recent signups
Different customer segments         Your social network
Users from different contexts       Users who already like you

Survey Length Rule

Every additional question = lower completion rate. Target:
  • Pulse survey: 3-5 questions, < 2 minutes
  • Periodic survey: 10-15 questions, < 5 minutes
  • Comprehensive: 20-30 questions max, ~10 minutes; state the expected length up front

Research Synthesis Methods

Affinity Mapping (bottom-up clustering)

  1. Write every observation on a sticky note (FigJam/Miro)
  2. One observation per note — don't combine
  3. Sort similar notes together silently (no discussion yet)
  4. Name clusters with a user statement, not a category label
     ❌ "Navigation issues" → ✅ "I couldn't find where to go next"
  5. Look for patterns across clusters — these are your themes

Opportunity Solution Tree (Teresa Torres)

Outcome (business goal)
├── Opportunity 1: [unmet user need]
│   ├── Solution A → Experiments
│   └── Solution B → Experiments
└── Opportunity 2: [unmet user need]
    ├── Solution C → Experiments
    └── Solution D → Experiments

Map research findings to opportunities. Resist jumping to solutions in the synthesis phase.
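If the tree lives outside a whiteboard tool, a small data shape is enough to keep it versioned alongside research notes. A sketch with hypothetical labels (not from Torres's materials):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: list["Node"] = field(default_factory=list)

# Outcome -> opportunities -> candidate solutions (each tested via experiments)
tree = Node("Outcome: improve week-1 retention", [
    Node("Opportunity: new users can't find the next step", [
        Node("Solution: guided setup checklist"),
        Node("Solution: contextual empty-state prompts"),
    ]),
])

def render(node: Node, depth: int = 0) -> None:
    print("    " * depth + node.label)
    for child in node.children:
        render(child, depth + 1)

render(tree)
```

The structure enforces the discipline the method asks for: a solution can only exist as a child of an opportunity, never attached directly to the outcome.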

Recruiting Participants

Screener Design

A screener filters participants to match your target. Key fields:
SCREENER EXAMPLE: B2B SaaS customer discovery
1. What is your job title? [open text]
   → Screen OUT: Student, unemployed, retired
   → Screen IN: Manager, Director, VP, Analyst, Owner

2. How many employees does your company have?
   → 1-10 / 11-50 / 51-200 / 201-500 / 500+
   → Screen IN: 11-200 for SMB product

3. How often do you use project management software?
   → Daily / Several times/week / Monthly / Rarely / Never
   → Screen IN: Daily or Several times/week

4. Which of these tools do you currently use? (select all that apply)
   → [list your target tools] + None of these
   → Screen IN: [2+ target tools] (indicates sophistication)
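Screener answers can be auto-filtered before scheduling. A sketch of the example's logic; field names and thresholds mirror the text above but are assumptions, not a real panel API:

```python
QUALIFYING_TITLES = ("manager", "director", "vp", "analyst", "owner")
QUALIFYING_SIZES = {"11-50", "51-200"}            # SMB target: 11-200 employees
QUALIFYING_USAGE = {"daily", "several times/week"}

def screen_in(title: str, company_size: str, usage: str, tools: set[str]) -> bool:
    # Each check mirrors one screener question above.
    return (
        any(word in title.lower() for word in QUALIFYING_TITLES)
        and company_size in QUALIFYING_SIZES
        and usage.lower() in QUALIFYING_USAGE
        and len(tools) >= 2                        # 2+ target tools = sophistication
    )

print(screen_in("Engineering Manager", "51-200", "Daily", {"Asana", "Jira"}))  # True
print(screen_in("Student", "1-10", "Rarely", set()))                           # False
```

Keeping the qualifying sets as data makes it easy to audit why any given respondent was screened out.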

Incentive Setting

Study type            Length    Incentive
------------------------------------------------------------------
Usability test        45 min    $50-100
Discovery interview   60 min    $75-150
JTBD interview        75 min    $100-200
Survey                < 5 min   Raffle entry or $5 gift card
B2B executive         45 min    Donation to charity, or a courteous
                                direct ask with no incentive

Recruiting Sources

  • Respondent.io — best quality for B2B and specialized audiences
  • UserTesting — fastest for consumer research with think-aloud
  • Ethnio — intercept users visiting your live product
  • Your CRM — email existing users (highest quality, lowest cost)
  • Prolific — academic-grade panel, good for surveys
  • LinkedIn — great for B2B targeting, manual outreach

Translating Research to Decisions

The "So What?" Test

For every insight, ask: "So what should we do differently?"
Observation: "5 of 8 users couldn't find the Export button."
Insight:     "Users expect Export to live in File > Save, not in the Share menu."
So what:     Move Export to the File menu OR add it as a primary action button.

Observation: "Users described setting up integrations as 'IT's job.'"
Insight:     "Non-technical users are blocked at the integration step and pass it to IT."
So what:     Create a template gallery requiring no setup, or send an IT-targeted email.

Building Research Buy-In

  1. Share synthesis in real-time — invite stakeholders to observe sessions live
  2. Use video clips — one 30-second clip beats a 20-page report every time
  3. Quantify the "how many" estimate — "We believe this affects ~30% of users based on..."
  4. Connect to existing data — "This explains the drop-off we see in Mixpanel at step 3"
  5. Propose next actions — never just report, always recommend

Anti-Patterns

Leading questions — "Didn't you find that frustrating?" biases the response before it's given.

Recruiting only happy users — Power users and fans confirm what you want to hear. Churned and non-users reveal root cause problems.

Asking for feature requests — "What features would you want?" yields a wish list, not jobs to be done. Ask about problems, not solutions.

Treating 5 interviews as statistically significant — "7/8 users said" is a pattern, not a statistic. Use it for direction, not proof.

Research without a decision in mind — "Let's do some user research" without specifying what decision it informs creates shelf-ware.

Data-dump reports — Nobody reads 40-page research decks. Lead with the top 3 insights, then provide supporting evidence.

Quick Reference

Interview Cheat Sheet

Open with:    "Tell me about a time when..."
Dig with:     TEDW — Tell/Explain/Describe/Walk me through
Follow with:  "What made you decide to...?"
Probe with:   "Can you say more about that?"
Avoid:        "Would you / Did you / Do you like...?"
Wrap with:    "Is there anything else you think I should know?"

Research Method Decision Tree

Need to understand:
├── User behavior patterns  → Interviews (n=15-20)
├── Specific UI problems    → Usability test (n=5 per segment)
├── Switching reasons       → JTBD timeline interview (n=10-15)
├── Satisfaction at scale   → Survey + NPS (n=100+)
├── What happens currently  → Session recording (Hotjar, FullStory)
└── Preference between options → Unmoderated A/B test

Research-to-Decision Bridge

Research Finding → Insight → Opportunity → Solution

Finding:      "Users exported to Excel to email it"
Insight:      "Email is the real distribution channel"
Opportunity:  Make sharing easier
Solution:     Share-via-link feature

Skill Information

Source: MoltbotDen
Category: Product & Design