Survey Builder + Scorer (Free)

Build product and customer feedback surveys and calculate NPS, CSAT, CES, PMF, and SUS — not an HR survey suite

This tool builds product and customer feedback surveys and calculates common product scores (NPS, CSAT, CES, PMF, SUS). It is not an HR survey suite. Create surveys with templates, score results from paste or CSV, and export reports — no login.

  • Build Survey: 7 templates (NPS, CSAT, CES, PMF, SUS, Discovery, Churn), quality lint, share URL, export JSON/CSV/MD
  • Score Results: paste counts or import CSV; NPS/CSAT/CES/PMF/SUS formulas; executive summary and report export
  • No login, autosave in browser, shareable survey link, explicit Clear data


How it works

  1. Build your survey: pick a template (NPS, CSAT, CES, PMF, SUS, Discovery, Churn), edit the questions and end screen, then run the quality lint.

  2. Export the survey (JSON, CSV template, or Markdown) or share a link; collect responses in your app, email, or another tool.

  3. In Score Results, paste counts or import a CSV, choose the score type (NPS/CSAT/CES/PMF/SUS), then view the score and export the report (Markdown or JSON).

Which survey to use when

Use NPS for loyalty and likelihood to recommend (one question plus follow-up). Use CSAT after a specific interaction (support, purchase, feature use). Use CES after a task (ease of completing X). Use PMF (Sean Ellis) to gauge how disappointed users would be if the product disappeared. Use SUS for standardized usability (10 items). Use Discovery for problem frequency, alternatives, and willingness-to-pay proxy. Use Churn for why they left, what they switched to, and what would bring them back.

How scoring works

NPS: % Promoters (9–10) − % Detractors (0–6); range −100 to +100. CSAT/CES: (top-2 responses ÷ total) × 100. PMF: % answering "Very disappointed"; 40% is the common threshold. SUS: each item normalized to 0–4 (odd items: response − 1; even items: 5 − response), sum × 2.5 for a 0–100 score.
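These formulas can be sketched in Python. This is a minimal illustration, not the tool's implementation; note that odd-numbered SUS items (positively worded) contribute response − 1 and even-numbered items (negatively worded) contribute 5 − response:

```python
from typing import Sequence

def nps(responses: Sequence[int]) -> float:
    """NPS from 0-10 responses: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(r >= 9 for r in responses)
    detractors = sum(r <= 6 for r in responses)
    return (promoters - detractors) / len(responses) * 100

def top2(responses: Sequence[int], scale_max: int = 5) -> float:
    """CSAT/CES: share of responses in the top two boxes, as a percentage."""
    return sum(r >= scale_max - 1 for r in responses) / len(responses) * 100

def pmf(answers: Sequence[str]) -> float:
    """PMF (Sean Ellis): % answering 'Very disappointed'; 40%+ is the common threshold."""
    return sum(a == "Very disappointed" for a in answers) / len(answers) * 100

def sus(items: Sequence[int]) -> float:
    """SUS from ten 1-5 responses: odd-numbered items contribute r - 1,
    even-numbered items 5 - r; the 0-40 sum is scaled by 2.5 to 0-100."""
    if len(items) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(items, start=1):  # i is the 1-based item number
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

For example, `nps([10, 10, 0, 0])` yields 0.0 (two promoters cancel two detractors), and a SUS response pattern of all "best" answers (5 on odd items, 1 on even items) yields 100.0.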

Common survey mistakes

Leading questions (e.g. "How much do you love…") bias answers; use neutral wording. Omitting an open-text follow-up after scored questions loses the "why"; add a short "What is the main reason?" after NPS/CSAT/CES. Biased samples (e.g. only power users, or only recent signups for PMF) distort the score; define a sampling rule and sample across segments.

What to do after the score

Triage: segment by cohort or plan, follow up with detractors or dissatisfied users to understand drivers. Run experiments (e.g. fix top friction points, then re-measure). Share the report with stakeholders and tie insights to the product backlog. Document open-text themes and track score over time.

Pro tips

  • Keep NPS to one question plus one open follow-up; avoid stacking multiple NPS in the same survey.
  • Use CSAT after a specific interaction (support, purchase, feature use); tie the question to the moment.
  • CES works best when the task is clear; ask 'How easy was it to [complete X]?' not a generic 'How easy was it?'
  • For PMF, use the exact Sean Ellis question and 40% 'Very disappointed' as the threshold; sample across segments.
  • SUS has 10 standard items; don't skip or reword them or the score isn't comparable to benchmarks.
  • Add a short open-text 'Why?' after every scored question so you can triage and theme responses.
  • Avoid leading questions ('How much do you love...'); use neutral wording ('How would you rate...').
  • If mixing 1–5 and 1–7 in one survey, add a note so analysts don't merge scales by mistake.
  • Export a CSV template and collect responses in a sheet or tool; then re-import here to score.
  • Use outcome routing (end-screen messages by score range) to give respondents immediate feedback.

Common mistakes

Symptom: Scores don't match benchmarks.

Cause: Wording or scale changed from standard (e.g. SUS items, PMF question).

Fix: Use the exact standard question and scale; then compare to published benchmarks.

Symptom: Low response rate or drop-off.

Cause: Survey too long or asked at the wrong moment.

Fix: Keep under 12 questions for quick surveys; trigger after the relevant action.

Symptom: Leading questions flagged in lint.

Cause: Phrases like 'How much do you love' or 'Don't you agree'.

Fix: Rewrite in neutral language; the lint suggests alternatives.

Symptom: No follow-up after NPS/CSAT.

Cause: Only score question, no open-text 'why'.

Fix: Add a short text or long text question after every scored question.

Symptom: PMF sample biased.

Cause: Only power users or only recent signups.

Fix: Define sampling rule (e.g. active in last 30 days, all plans); lint warns if unclear.

Symptom: Scale inconsistency.

Cause: Mixing 1–5 and 1–7 in same survey.

Fix: Use one scale or document why sections differ; lint warns on mix.

Symptom: Outcome ranges overlap.

Cause: End-screen outcomes have overlapping score ranges.

Fix: Set non-overlapping ranges (e.g. 0–30, 31–70, 71–100); lint fails until fixed.

Symptom: Can't compare waves.

Cause: Question order or options changed between runs.

Fix: Keep survey definition stable; use versioning or export JSON to track changes.

FAQ

What is the difference between NPS, CSAT, and CES?

NPS measures likelihood to recommend (0–10); score is % Promoters minus % Detractors (-100 to +100). CSAT measures satisfaction after an interaction (e.g. 1–5); score is % in top 2 boxes. CES measures ease of a specific task (1–5); score is % in top 2. Use NPS for loyalty, CSAT for touchpoint satisfaction, CES for task-level ease.

How is PMF score calculated?

The Product-Market Fit (Sean Ellis) question asks how disappointed you would be if you could no longer use the product. The score is the percentage who answer 'Very disappointed.' A score of 40% or higher is often used as a signal of product-market fit. This tool computes that percentage and highlights whether you're at or above the threshold.

How is SUS calculated?

SUS (System Usability Scale) has 10 items, each answered on a 1–5 scale. Odd-numbered items are scored as response minus 1; even-numbered items as 5 minus response (so each item contributes 0–4). The sum of the 10 normalized items is multiplied by 2.5 to give a 0–100 score. Scores above 68 are considered above average. Don't change the item wording, or the score isn't comparable.

Can I import responses from a CSV?

Yes. Export a CSV template from the Build tab (question IDs as columns), collect responses in a sheet or tool, then in the Score Results tab use Import CSV. The tool will map columns to the score type (NPS, CSAT, etc.) and compute scores, distribution, and a report. You can also paste counts manually for a quick score.
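As an illustration, scoring an imported CSV might look like the sketch below. The column name `q_nps` and the CSV shape are assumptions for the example, not the tool's actual template format:

```python
import csv
import io

# Hypothetical CSV exported from the Build tab: one column per question ID,
# one row per respondent.
raw = """respondent,q_nps
a,10
b,9
c,7
d,3
"""

rows = list(csv.DictReader(io.StringIO(raw)))
scores = [int(row["q_nps"]) for row in rows]

promoters = sum(s >= 9 for s in scores)   # 9-10
detractors = sum(s <= 6 for s in scores)  # 0-6
nps = (promoters - detractors) / len(scores) * 100
print(nps)  # 2 promoters, 1 detractor out of 4 -> 25.0
```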

What are outcome ranges?

Outcome ranges let you show different end-screen messages based on score (e.g. 0–30 'We'd love to improve,' 31–70 'Thanks,' 71–100 'You're a fan!'). Each range is defined per scoring category with a min and max. The lint fails if ranges overlap so every response maps to one outcome.
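The overlap check can be sketched as follows. The function name is an assumption, and the gap check is an illustrative extra beyond the documented lint behavior:

```python
from typing import List, Tuple

def validate_outcomes(ranges: List[Tuple[int, int]],
                      lo: int = 0, hi: int = 100) -> List[str]:
    """Return lint errors if outcome ranges overlap, leave gaps,
    or fail to cover the full [lo, hi] score range."""
    errors = []
    ordered = sorted(ranges)
    for (a1, b1), (a2, b2) in zip(ordered, ordered[1:]):
        if a2 <= b1:
            errors.append(f"overlap: {a1}-{b1} and {a2}-{b2}")
        elif a2 > b1 + 1:
            errors.append(f"gap: {b1 + 1}-{a2 - 1} maps to no outcome")
    if ordered and (ordered[0][0] > lo or ordered[-1][1] < hi):
        errors.append("ranges do not cover the full score range")
    return errors
```

With the non-overlapping example from above, `validate_outcomes([(0, 30), (31, 70), (71, 100)])` returns no errors, so every response maps to exactly one outcome.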

Does the tool collect responses?

No. This tool builds the survey definition and scores results. You export the survey (or share link) and collect responses elsewhere (email, in-app, Typeform, etc.). Then you paste counts or import CSV here to compute NPS, CSAT, CES, PMF, or SUS and export the report.

Why does the lint flag 'leading question'?

Leading questions bias answers (e.g. 'How much do you love our product?' assumes they love it). The lint detects phrases like 'how much do you love,' 'don't you agree,' and suggests neutral rewrites ('How would you rate...'). Fixing these improves response quality and comparability.

Can I use custom scoring?

Yes. For multiple choice questions you can enable scoring and assign numeric values to each option. You can define multiple categories (e.g. Ease, Trust, Value) for a lightweight assessment. The Score Results tab supports custom score types when you import or paste data.
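A custom-scored survey of this shape could be computed as in the sketch below. The question IDs, option values, and category names are illustrative assumptions, not the tool's actual schema:

```python
# Hypothetical custom-scored multiple-choice questions, grouped into categories.
questions = {
    "ease_1": {"category": "Ease", "values": {"Very easy": 4, "Easy": 3, "Hard": 1}},
    "trust_1": {"category": "Trust", "values": {"Fully": 4, "Somewhat": 2, "Not at all": 0}},
}

responses = [
    {"ease_1": "Very easy", "trust_1": "Somewhat"},
    {"ease_1": "Easy", "trust_1": "Fully"},
]

# Collect each answer's numeric value under its question's category,
# then average per category across respondents.
totals = {}
for resp in responses:
    for qid, answer in resp.items():
        q = questions[qid]
        totals.setdefault(q["category"], []).append(q["values"][answer])

averages = {cat: sum(vals) / len(vals) for cat, vals in totals.items()}
print(averages)  # {'Ease': 3.5, 'Trust': 3.0}
```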

Is this tool for HR or employee surveys?

This tool is built for product and customer feedback surveys (NPS, CSAT, CES, PMF, SUS, discovery, churn). It is not an HR or employee survey suite. Use it for customer-facing and product-led feedback, not for internal engagement or 360 reviews.

Does it require login?

No. The tool runs fully client-side with autosave to browser storage, explicit Clear data, and a shareable URL that reconstructs the survey. No account or login. Your data stays in your browser until you clear it or share the link.
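One way a share URL can reconstruct a survey entirely client-side is to carry the definition in the URL itself, e.g. as base64-encoded JSON in the fragment. The tool's actual encoding is not specified here; this round-trip is an assumption for illustration:

```python
import base64
import json

# Hypothetical survey definition to embed in a share URL.
survey = {"title": "NPS", "questions": [{"id": "q1", "type": "nps"}]}

# Encode the definition into the URL fragment (never sent to a server).
encoded = base64.urlsafe_b64encode(json.dumps(survey).encode()).decode()
share_url = f"https://example.com/survey#{encoded}"

# On load, the fragment is decoded back into the same definition.
decoded = json.loads(base64.urlsafe_b64decode(share_url.split("#", 1)[1]))
assert decoded == survey
```

Keeping the payload in the fragment (after `#`) means the survey definition stays in the browser, which is consistent with the no-server, no-login claim above.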

Learn more with CraftUp

Turn feedback into product decisions

Use CraftUp to go from survey scores to triage, experiments, and roadmap.

Freshness

Last updated: 2026-03-06

  • Launched Survey Builder + Scorer: Build and Score tabs, 7 templates (NPS, CSAT, CES, PMF, SUS, Discovery, Churn).
  • Scoring packs: NPS, CSAT, CES, PMF, SUS with correct formulas; paste counts or CSV import; report export MD/JSON.
  • Lint: leading questions, missing follow-up, scale consistency, too long, outcome overlap. Share URL, 3 survey + 3 result examples. No login.