TL;DR:
- Start with 5-8 interviews per segment, expand to 12-15 if patterns unclear
- Track saturation signals: repeated themes, predictable responses, diminishing insights
- Use progressive sampling: validate core problem first, then solution fit, then pricing/positioning
- Stop when 3 consecutive interviews add no new insights and key hypotheses have clear evidence
Table of contents
- Context and why it matters in 2025
- Step-by-step playbook
- Templates and examples
- Metrics to track
- Common mistakes and how to fix them
- FAQ
- Further reading
- Why CraftUp helps
Context and why it matters in 2025
Most PMs and founders either interview too few people (basing decisions on 2-3 conversations) or get stuck in research paralysis, conducting 50+ interviews without clear stopping criteria. Both approaches waste time and lead to poor product decisions.
The challenge intensifies in 2025 because user behavior changes faster than ever. Sample-size conventions from traditional market research don't apply to rapid product validation. You need enough signal to make confident decisions while moving fast enough to stay relevant.
Success means reaching saturation: the point where additional interviews provide diminishing returns and your key hypotheses have sufficient evidence to act on. This typically happens after 8-15 interviews per user segment, but the exact number depends on your validation goals, segment diversity, and hypothesis complexity.
Step-by-step playbook
1. Define your validation scope and segments
Goal: Establish clear boundaries for what you're validating and who you need to talk to.
Actions:
- List 3-5 core hypotheses you need to validate (problem severity, current solutions, willingness to pay)
- Define 1-2 primary user segments with specific characteristics
- Set confidence thresholds: what evidence would make you confident enough to proceed?
Example: For a project management tool, you might validate "Marketing managers at 50-500 person companies struggle with campaign coordination" and "They'd pay $50/month for better visibility."
Pitfall: Trying to validate everything in one interview round. Focus on your riskiest assumptions first.
Done when: You have written hypotheses, defined segments, and clear success criteria for each.
2. Start with a small batch (5-8 interviews per segment)
Goal: Get initial signal while preserving flexibility to adjust your approach.
Actions:
- Schedule 5-8 interviews within your primary segment
- Use a consistent question set (see Customer Interview Questions That Get Real Stories) so responses stay comparable
- Document insights immediately after each conversation
- Track which hypotheses each interview supports or contradicts
Example: After 6 interviews with marketing managers, you might find 5/6 mention campaign visibility issues but only 2/6 consider it a top-3 problem worth paying to solve.
Pitfall: Changing your questions mid-batch, making it impossible to compare responses.
Done when: You've completed the batch with consistent methodology and documented all insights.
3. Analyze for saturation signals
Goal: Determine if you have enough data or need more interviews.
Actions:
- Map responses to your core hypotheses
- Count how many interviews mentioned each key theme (see the sketch after this step for a quick way to automate the tally)
- Note when you started hearing repeated stories or predictable responses
- Identify any surprising insights that emerged
Example: If interviews 4, 5, and 6 all mentioned the same 3 pain points with similar language, and no new themes emerged, you're approaching saturation for problem validation.
Pitfall: Mistaking surface-level similarity for true saturation. Dig deeper into the "why" behind similar responses.
Done when: You can predict what the next interviewee might say about your core questions.
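The theme counting above lends itself to a quick script. Here's a minimal sketch in Python, assuming each interview's notes have already been coded into short theme labels (the interview data below is hypothetical):

```python
# Minimal saturation check: flag when recent interviews stop adding new themes.
# Theme labels are hypothetical; in practice you'd code them from your notes.
interviews = [
    {"id": 1, "themes": {"campaign visibility", "manual status updates"}},
    {"id": 2, "themes": {"campaign visibility", "tool sprawl"}},
    {"id": 3, "themes": {"manual status updates", "approval delays"}},
    {"id": 4, "themes": {"campaign visibility", "approval delays"}},
    {"id": 5, "themes": {"campaign visibility"}},
    {"id": 6, "themes": {"manual status updates"}},
]

seen = set()
new_per_interview = []
for interview in interviews:
    fresh = interview["themes"] - seen  # themes not heard in any earlier interview
    new_per_interview.append(len(fresh))
    seen |= interview["themes"]

# Saturation signal: the last 3 interviews surfaced nothing new.
WINDOW = 3
saturated = len(new_per_interview) >= WINDOW and sum(new_per_interview[-WINDOW:]) == 0
print("New themes per interview:", new_per_interview)  # [2, 1, 1, 0, 0, 0]
print("Approaching saturation:", saturated)            # True
```

Widen WINDOW for complex hypotheses, and remember the pitfall above: zero new theme labels only counts as saturation if the labels capture the "why", not just surface wording.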
4. Expand strategically if needed (up to 12-15 total)
Goal: Fill gaps in understanding or validate edge cases that could change your direction.
Actions:
- Identify specific gaps: unclear hypotheses, conflicting signals, or underrepresented user types
- Schedule 4-7 additional interviews targeting these gaps
- Adjust questions slightly to probe deeper into unclear areas
- Continue tracking saturation signals
Example: If 6 interviews showed problem fit but unclear solution preferences, focus additional interviews on solution validation rather than re-confirming the problem.
Pitfall: Adding interviews without clear purpose. Each additional conversation should target specific unknowns.
Done when: Key hypotheses have consistent evidence and no major gaps remain.
5. Apply progressive validation across stages
Goal: Right-size your sample for different validation stages.
Actions:
- Problem validation: 8-12 interviews to confirm pain points and current solutions
- Solution validation: 5-8 interviews showing mockups or prototypes
- Pricing/positioning: 6-10 interviews with refined concepts
- Track cumulative insights across stages
Example: Start with a problem validation scorecard (see Problem Validation Scorecard: Compare Segments, Decide Tests), then move to solution interviews with a subset of engaged participants.
Pitfall: Treating each stage as completely separate. Build on previous insights and relationships.
Done when: Each validation stage reaches saturation and informs the next stage.
Templates and examples
Interview Saturation Tracker
## Validation Goal: [Problem/Solution/Pricing]
**Target Segment:** [Specific user type]
**Key Hypotheses:**
1. [Hypothesis 1]
2. [Hypothesis 2]
3. [Hypothesis 3]
## Interview Log
| Interview | Date | Participant | H1 Evidence | H2 Evidence | H3 Evidence | New Insights |
|-----------|------|-------------|-------------|-------------|-------------|--------------|
| 1 | MM/DD | [Role, Company] | Support/Contradict/Unclear | Support/Contradict/Unclear | Support/Contradict/Unclear | [Key surprises] |
| 2 | MM/DD | [Role, Company] | Support/Contradict/Unclear | Support/Contradict/Unclear | Support/Contradict/Unclear | [Key surprises] |
## Saturation Signals Checklist
After interview #___:
- [ ] Last 3 interviews mentioned same top pain points
- [ ] No new themes emerged in last 2 conversations
- [ ] Can predict likely responses to core questions
- [ ] Each hypothesis has 5+ data points
- [ ] Contradictory evidence explained by segment differences
## Decision Point
**Evidence Summary:**
- H1: [X supporting, Y contradicting, conclusion]
- H2: [X supporting, Y contradicting, conclusion]
- H3: [X supporting, Y contradicting, conclusion]
**Next Action:** [Continue interviews / Move to next stage / Pivot approach]
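If you keep the interview log as structured records rather than a table, the Decision Point summary can be tallied automatically. A minimal sketch mirroring the tracker above (hypothesis labels and verdicts are hypothetical):

```python
from collections import Counter

# Each row mirrors one line of the interview log: one verdict per hypothesis.
# Verdicts: "support", "contradict", or "unclear". Rows are hypothetical.
log = [
    {"H1": "support", "H2": "support",    "H3": "unclear"},
    {"H1": "support", "H2": "contradict", "H3": "unclear"},
    {"H1": "support", "H2": "support",    "H3": "support"},
]

for hypothesis in ("H1", "H2", "H3"):
    counts = Counter(row[hypothesis] for row in log)
    print(f"{hypothesis}: {counts['support']} supporting, "
          f"{counts['contradict']} contradicting, {counts['unclear']} unclear")
```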
Metrics to track
1. Hypothesis Confirmation Rate
Formula: (Supporting interviews / Total interviews) × 100
Instrumentation: Track in your saturation tracker after each interview
Example range: 70-80% for strong hypotheses; <50% suggests a pivot is needed
2. New Insight Frequency
Formula: New themes discovered per interview
Instrumentation: Count unique insights that weren't mentioned in previous interviews
Example range: 2-3 new insights per early interview; <0.5 when approaching saturation
3. Response Predictability Score
Formula: Percentage of responses you could predict before the interview
Instrumentation: Before each interview, write down expected responses, then compare
Example range: <30% early on; >80% at saturation
4. Segment Consistency Index
Formula: (Interviews with consistent themes / Total interviews) × 100
Instrumentation: Track how often core themes repeat within your target segment
Example range: >70% indicates a well-defined segment; <50% suggests targeting that is too broad
5. Evidence Confidence Level
Formula: Hypotheses with 5+ supporting data points / Total hypotheses
Instrumentation: Count interviews that provide clear evidence for each hypothesis
Example range: Aim for 100% (every hypothesis backed by 5+ interviews); a minimum of 3 per hypothesis is acceptable for early decisions
6. Time to Saturation
Formula: Number of interviews until 3 consecutive provide no new insights
Instrumentation: Track when diminishing returns begin
Example range: 6-10 interviews for focused problems; 10-15 for complex solutions
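Most of these metrics fall out of the same tracker data. A minimal sketch of three of them in Python (all sample numbers are hypothetical):

```python
def confirmation_rate(supporting: int, total: int) -> float:
    """Metric 1: (supporting interviews / total interviews) x 100."""
    return 100 * supporting / total

def time_to_saturation(new_insights: list, window: int = 3) -> int:
    """Metric 6: interviews completed before `window` consecutive
    interviews produced no new insights (-1 if not yet reached)."""
    for i in range(len(new_insights) - window + 1):
        if sum(new_insights[i : i + window]) == 0:
            return i
    return -1

def predictability(predicted_hits: int, questions: int) -> float:
    """Metric 3: share of responses you predicted before the interview."""
    return 100 * predicted_hits / questions

# Hypothetical numbers pulled from the tracker:
print(confirmation_rate(supporting=5, total=6))        # ~83.3 -> strong hypothesis
print(time_to_saturation([3, 2, 1, 0, 0, 0]))          # 3 -> saturated after interview 3
print(predictability(predicted_hits=8, questions=10))  # 80.0 -> at the saturation threshold
```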
Common mistakes and how to fix them
- Stopping after 3-4 interviews because early signals look positive. Fix: Set minimum thresholds (5+ per segment) and track saturation signals objectively.
- Continuing interviews indefinitely without clear stopping criteria. Fix: Define what evidence you need upfront and stick to your framework.
- Mixing different user segments in one validation round. Fix: Validate segments separately, then compare patterns across groups.
- Changing interview questions mid-stream and losing comparability. Fix: Lock core questions for each batch; only add follow-up probes.
- Confusing polite agreement with validation. Fix: Ask questions that surface real stories and actual behavior, not opinions (see Customer Interview Questions That Get Real Stories).
- Treating saturation as universal across all aspects. Fix: You might reach saturation on problem validation but still need more interviews for solution fit.
- Ignoring contradictory evidence from later interviews. Fix: Weight recent insights equally and investigate why patterns changed.
- Using sample sizes from academic research for product validation. Fix: Product validation needs speed and directional confidence, not statistical significance.
FAQ
How many interviews to validate a new product idea?
Plan on 8-12 interviews per user segment for problem validation: start with 5-8 and expand if patterns are unclear. If you're targeting 2 segments, that's 16-24 total interviews. Go beyond 15 per segment only if initial patterns remain unclear or contradictory.
What's the difference between interview sample size for B2B vs B2C validation?
B2B typically needs fewer interviews (8-10) because purchase decisions involve multiple stakeholders and follow more rational, consistent criteria. B2C often requires 12-15+ because individual behavior varies more and emotional factors play a larger role.
When should I stop conducting validation interviews?
Stop when you hit 3 consecutive interviews with no new insights, your key hypotheses have consistent evidence (5+ data points each), and you can predict responses to core questions with 80%+ accuracy.
Can I validate multiple user segments with the same interviews?
No. Each segment needs separate validation because pain points, solutions, and willingness to pay vary significantly. However, you might discover new segments during interviews with your primary target.
How do I know if my interview sample size is too small?
Your sample is too small if you're still discovering new major themes, getting contradictory evidence without clear explanations, or feel uncertain about core hypotheses after completing your planned interviews.
Further reading
- Nielsen Norman Group's research on qualitative sample sizes explains why 5 users find 85% of usability problems, providing a foundation for validation sample sizes.
- Harvard Business Review's guide to customer discovery covers how successful startups balance speed with confidence in validation decisions.
- Strategyzer's evidence guide details how to measure evidence strength and make go/no-go decisions with limited data.
Why CraftUp helps
Learning how many interviews you need to validate an idea takes practice across different product types and user segments.
- 5-minute daily lessons for busy people who need to master validation without spending weeks in research
- AI-powered, up-to-date workflows PMs need for modern validation techniques and saturation detection
- Mobile-first, practical exercises to apply immediately, including interview planning and insight synthesis
Start free on CraftUp to build a consistent product habit: https://craftuplearn.com

