This tool builds product and customer feedback surveys and calculates common product scores (NPS, CSAT, CES, PMF, SUS). It is not an HR survey suite. Create surveys with templates, score results from pasted counts or CSV, and export reports — no login.
Use templates above to load NPS, CSAT, CES, PMF, SUS, Discovery, or Churn. Edit title and end message as needed.
Step 1
Build your survey: pick a template (NPS, CSAT, CES, PMF, SUS, Discovery, Churn), edit questions and end screen, then run the quality lint.
Step 2
Export the survey (JSON, CSV template, or Markdown) or share a link; collect responses in your app, email, or another tool.
Step 3
In Score Results, paste counts or import CSV, choose score type (NPS/CSAT/CES/PMF/SUS), then view the score and export the report (Markdown or JSON).
Use NPS for loyalty and likelihood to recommend (one question plus follow-up). Use CSAT after a specific interaction (support, purchase, feature use). Use CES after a task (ease of completing X). Use PMF (Sean Ellis) to gauge how disappointed users would be if the product disappeared. Use SUS for standardized usability (10 items). Use Discovery for problem frequency, alternatives, and willingness-to-pay proxy. Use Churn for why they left, what they switched to, and what would bring them back.
NPS: % Promoters (9–10) − % Detractors (0–6), range -100 to +100. CSAT/CES: (top 2 satisfied / total) × 100. PMF: % answering "Very disappointed"; 40% is the common threshold. SUS: each item normalized to 0–4 (odd items: response−1, even items: 5−response), sum × 2.5 for a 0–100 score.
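These formulas can be sketched in a few lines of Python. This is illustrative only (the function names are not part of this tool), and it assumes responses have already been tallied into counts per rating:

```python
def nps(counts):
    """counts: dict mapping a 0-10 rating to its number of responses."""
    total = sum(counts.values())
    promoters = sum(n for r, n in counts.items() if r >= 9)
    detractors = sum(n for r, n in counts.items() if r <= 6)
    return round(100 * (promoters - detractors) / total, 1)

def top2_box(counts, scale_max=5):
    """CSAT/CES: percentage of responses in the top two boxes of a 1-N scale."""
    total = sum(counts.values())
    top2 = counts.get(scale_max, 0) + counts.get(scale_max - 1, 0)
    return round(100 * top2 / total, 1)

def pmf(very_disappointed, total):
    """Sean Ellis PMF: % answering 'Very disappointed'; 40%+ signals fit."""
    return round(100 * very_disappointed / total, 1)

# 10 promoters, 5 passives, 5 detractors out of 20 responses -> NPS 25.0
print(nps({10: 6, 9: 4, 8: 3, 7: 2, 5: 3, 2: 2}))
```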
Leading questions (e.g. "How much do you love…") bias answers; use neutral wording. Missing open-text follow-up after scored questions loses the "why"; add a short "What is the main reason?" after NPS/CSAT/CES. Biased samples (e.g. only power users or only recent signups for PMF) distort the score; define a sampling rule and sample across segments.
Triage: segment by cohort or plan, follow up with detractors or dissatisfied users to understand drivers. Run experiments (e.g. fix top friction points, then re-measure). Share the report with stakeholders and tie insights to the product backlog. Document open-text themes and track score over time.
Symptom: Scores don't match benchmarks.
Cause: Wording or scale changed from standard (e.g. SUS items, PMF question).
Fix: Use the exact standard question and scale; then compare to published benchmarks.
Symptom: Low response rate or drop-off.
Cause: Survey too long or asked at the wrong moment.
Fix: Keep under 12 questions for quick surveys; trigger after the relevant action.
Symptom: Leading questions flagged in lint.
Cause: Phrases like 'How much do you love' or 'Don't you agree'.
Fix: Rewrite in neutral language; the lint suggests alternatives.
Symptom: No follow-up after NPS/CSAT.
Cause: Only score question, no open-text 'why'.
Fix: Add a short text or long text question after every scored question.
Symptom: PMF sample biased.
Cause: Only power users or only recent signups.
Fix: Define sampling rule (e.g. active in last 30 days, all plans); lint warns if unclear.
Symptom: Scale inconsistency.
Cause: Mixing 1–5 and 1–7 in same survey.
Fix: Use one scale or document why sections differ; lint warns on mix.
Symptom: Outcome ranges overlap.
Cause: End-screen outcomes have overlapping score ranges.
Fix: Set non-overlapping ranges (e.g. 0–30, 31–70, 71–100); lint fails until fixed.
Symptom: Can't compare waves.
Cause: Question order or options changed between runs.
Fix: Keep survey definition stable; use versioning or export JSON to track changes.
NPS measures likelihood to recommend (0–10); score is % Promoters minus % Detractors (-100 to +100). CSAT measures satisfaction after an interaction (e.g. 1–5); score is % in top 2 boxes. CES measures ease of a specific task (1–5); score is % in top 2. Use NPS for loyalty, CSAT for touchpoint satisfaction, CES for task-level ease.
The Product-Market Fit (Sean Ellis) question asks how disappointed users would be if they could no longer use the product. The score is the percentage who answer 'Very disappointed.' A score of 40% or higher is commonly used as a signal of product-market fit. This tool computes that percentage and highlights whether you're at or above the threshold.
SUS (System Usability Scale) has 10 items, each rated 1–5. Odd-numbered items are scored as the response minus 1; even-numbered items as 5 minus the response (so each item contributes 0–4). The sum of the 10 normalized items is multiplied by 2.5 to give a 0–100 score. Scores above 68 are considered above average. Don't change the item wording, or the score won't be comparable to published benchmarks.
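The SUS arithmetic can be sketched as follows (an illustrative Python snippet, not this tool's code):

```python
def sus_score(responses):
    """responses: 10 answers on a 1-5 scale, in item order (item 1 first)."""
    if len(responses) != 10:
        raise ValueError("SUS needs exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items: response - 1; even items: 5 - response (each 0-4).
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# All items answered 3: every item contributes 2, sum 20, score 50.0
print(sus_score([3] * 10))
```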
Yes. Export a CSV template from the Build tab (question IDs as columns), collect responses in a sheet or tool, then in the Score Results tab use Import CSV. The tool will map columns to the score type (NPS, CSAT, etc.) and compute scores, distribution, and a report. You can also paste counts manually for a quick score.
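The CSV round-trip amounts to tallying a ratings column into counts and applying the score formula. A rough Python sketch; the column name q1_nps is a made-up example, since the real template uses your survey's question IDs:

```python
import csv
import io

# Hypothetical exported CSV with one NPS question column.
raw = """respondent,q1_nps
a,10
b,9
c,7
d,3
"""

# Tally ratings into counts, as the Score Results import does conceptually.
counts = {}
for row in csv.DictReader(io.StringIO(raw)):
    rating = int(row["q1_nps"])
    counts[rating] = counts.get(rating, 0) + 1

total = sum(counts.values())
promoters = sum(n for r, n in counts.items() if r >= 9)
detractors = sum(n for r, n in counts.items() if r <= 6)
print(round(100 * (promoters - detractors) / total))  # NPS from 4 responses
```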
Outcome ranges let you show different end-screen messages based on score (e.g. 0–30 'We'd love to improve,' 31–70 'Thanks,' 71–100 'You're a fan!'). Each range is defined per scoring category with a min and max. The lint fails if ranges overlap so every response maps to one outcome.
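The non-overlap rule the lint enforces can be sketched like this (illustrative Python, not the lint's actual code):

```python
def ranges_overlap(outcomes):
    """outcomes: list of (min, max) inclusive score ranges for one category.
    Returns True if any two ranges overlap (the lint's failing condition)."""
    ordered = sorted(outcomes)
    # After sorting by min, an overlap exists iff some range's max reaches
    # into the next range's min.
    return any(ordered[i][1] >= ordered[i + 1][0]
               for i in range(len(ordered) - 1))

print(ranges_overlap([(0, 30), (31, 70), (71, 100)]))  # False: disjoint
print(ranges_overlap([(0, 40), (31, 70), (71, 100)]))  # True: 31-40 overlaps
```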
No. This tool builds the survey definition and scores results. You export the survey (or share link) and collect responses elsewhere (email, in-app, Typeform, etc.). Then you paste counts or import CSV here to compute NPS, CSAT, CES, PMF, or SUS and export the report.
Leading questions bias answers (e.g. 'How much do you love our product?' assumes they love it). The lint detects phrases like 'how much do you love,' 'don't you agree,' and suggests neutral rewrites ('How would you rate...'). Fixing these improves response quality and comparability.
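A toy version of this check in Python; the real lint's phrase list and rewrite suggestions may differ:

```python
import re

# Two of the phrases mentioned above; the actual lint likely checks more.
LEADING = re.compile(r"how much do you love|don't you agree", re.IGNORECASE)

def is_leading(question):
    """Flag a question that presupposes a positive answer."""
    return bool(LEADING.search(question))

print(is_leading("How much do you love our product?"))  # True
print(is_leading("How would you rate our product?"))    # False
```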
Yes. For multiple choice questions you can enable scoring and assign numeric values to each option. You can define multiple categories (e.g. Ease, Trust, Value) for a lightweight assessment. The Score Results tab supports custom score types when you import or paste data.
This tool is built for product and customer feedback surveys (NPS, CSAT, CES, PMF, SUS, discovery, churn). It is not an HR or employee survey suite. Use it for customer-facing and product-led feedback, not for internal engagement or 360 reviews.
No. The tool runs fully client-side with autosave to browser storage, explicit Clear data, and a shareable URL that reconstructs the survey. No account or login. Your data stays in your browser until you clear it or share the link.
Use CraftUp to go from survey scores to triage, experiments, and roadmap.
Last updated: 2026-03-06