Kano Survey Builder + Scorer (Free)

Use this Kano survey builder to run a Kano model survey for product feature prioritization (paired functional / dysfunctional questions), so your team can make roadmap choices from structured evidence instead of loose opinions.

  • Upload or paste raw response data and score A/O/M/I/R/Q instantly
  • Get coefficients, ambiguity warnings, ranking, and segment comparison views
  • Export CSV, markdown, and JSON outputs for stakeholder planning

Runs in your browser. We do not store your uploaded file.

Download template

Allowed responses: I like it, I expect it (must-be), I am neutral, I can tolerate it / live with it, I dislike it.

Upload or paste data to compute Kano results.

How it works

  1. Upload a CSV/XLSX file or paste your Kano questionnaire responses in the template schema.

  2. The scorer maps each functional/dysfunctional answer pair to A, O, M, I, R, or Q using the standard matrix.

  3. Review categories, coefficients, flags, ranking, and segment differences, then export a stakeholder-ready summary.

What the Kano model measures

The Kano model separates customer expectation types so teams do not confuse baseline requirements with growth opportunities. A feature can land in one of six categories: Attractive (A), One-dimensional (O), Must-be (M), Indifferent (I), Reverse (R), or Questionable (Q). This classification helps prioritize reliability gaps first, then performance drivers, then delight opportunities that can differentiate the product.

The evaluation matrix concept

Kano analysis depends on paired questions: how users feel when a feature is present (functional) and how they feel when it is absent (dysfunctional). Each pair maps to a category through the standard matrix. This pairing prevents misleading conclusions from one-sided preference questions and makes category assignment consistent across analysts.
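The standard 5×5 evaluation matrix maps each functional/dysfunctional answer pair to one category. A minimal sketch of that mapping, using the widely published Kano table (illustrative, not the tool's exact source):

```python
# Answer order matches the allowed responses listed above.
ANSWERS = ["like", "must-be", "neutral", "live-with", "dislike"]

# A=Attractive, O=One-dimensional, M=Must-be, I=Indifferent, R=Reverse, Q=Questionable
MATRIX = [
    # dysfunctional: like  must-be neutral live-with dislike
    ["Q", "A", "A", "A", "O"],   # functional: like
    ["R", "I", "I", "I", "M"],   # functional: must-be
    ["R", "I", "I", "I", "M"],   # functional: neutral
    ["R", "I", "I", "I", "M"],   # functional: live-with
    ["R", "R", "R", "R", "Q"],   # functional: dislike
]

def classify(functional: str, dysfunctional: str) -> str:
    """Map one functional/dysfunctional answer pair to a Kano category."""
    return MATRIX[ANSWERS.index(functional)][ANSWERS.index(dysfunctional)]
```

For example, "I like it" when present plus "I dislike it" when absent classifies as One-dimensional, while the mirrored pair classifies as Reverse.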

How to write good Kano features

Keep feature prompts concrete, singular, and easy to imagine in real product use. Avoid combining multiple behaviors in one item, and avoid terms that imply value judgment. High Questionable rates usually come from ambiguous wording, overlong descriptions, or confusing response labels. Pilot test wording with a small audience before broad rollout so the full sample produces interpretable data.

How to interpret results

Start with Must-be items that show strong dissatisfaction risk if missing. Next, rank One-dimensional items by coefficient strength and delivery feasibility. Attractive items become leverage once baseline expectations are stable. Use ambiguity, high-Q, and low sample flags as guardrails: they indicate where research quality is still too weak for confident roadmap commitments.

Pro tips

  • Write each feature as one concrete behavior, not a bundle of multiple outcomes.
  • Anchor every feature to one target segment before collecting responses.
  • Use the same response option wording across all questions to reduce noise.
  • Pilot with five respondents and fix wording before full distribution.
  • Keep feature descriptions under two sentences to limit interpretation drift.
  • Track questionable rate by feature weekly and rewrite unclear items first.
  • Use a minimum sample threshold per feature before making roadmap decisions.
  • Compare the satisfaction (CS) and dissatisfaction (DS) coefficients together so you do not overvalue delighters with high risk.
  • Review segment splits before final prioritization to catch hidden disagreement.
  • Export a markdown summary after every run so stakeholders review the same snapshot.

Common mistakes

Symptom: Many features show high Questionable rates.

Cause: Functional and dysfunctional prompts are vague or double-barreled.

Fix: Rewrite each pair with one clear behavior and rerun a short pilot.

Symptom: Every feature looks Must-be.

Cause: Respondents were primed by onboarding copy before the survey.

Fix: Randomize question order and remove leading context from introductions.

Symptom: Conflicting categories appear between teams.

Cause: Segments with different needs were mixed into one overall analysis.

Fix: Add segment labels and review category distribution by segment first.

Symptom: Roadmap decisions change every week.

Cause: Low sample features are treated as final evidence.

Fix: Enforce a minimum respondent threshold before locking prioritization.

Symptom: Attractive ideas are over-prioritized over reliability work.

Cause: CS is reviewed without DS and Must-be signals.

Fix: Use CS and DS together and place Must-be gaps first in planning.

Symptom: Imported files fail validation frequently.

Cause: Header names or response values do not match the accepted schema.

Fix: Use the template file and normalize responses before uploading.

Symptom: Reverse features are ignored in final decisions.

Cause: Team assumes all reverse signals are noise.

Fix: Investigate reverse signals by segment before removing the feature.

Symptom: Stakeholders distrust the analysis output.

Cause: No shared summary is provided after scoring runs.

Fix: Export markdown with flags, coefficients, and next actions after each run.

FAQ

Is this Kano survey builder free?

Yes. The no-login scorer is fully free and runs in your browser. You can upload CSV or XLSX files, paste survey tables, compute Kano categories, and export stakeholder-ready outputs without paywalls. There is no limit on how many features you analyze in one session.

Do you store my uploaded file?

No. The scoring flow is client-side and does not upload your file to CraftUp servers. File parsing and calculations happen in your browser session. If you use exports, generated files are saved locally on your device, and you can clear the working dataset at any time.

What is the difference between functional and dysfunctional questions?

Functional asks how users feel when a feature exists. Dysfunctional asks how users feel when that same feature does not exist. The pair is required for Kano classification because customer sentiment depends on both presence and absence, not only positive preference in isolation.

How do you decide the primary category when results are mixed?

The tool selects the highest-frequency category excluding Questionable results, then calculates category strength as the share of the most frequent category minus the share of the second most frequent. If strength is below the ambiguity threshold, the feature is flagged as Ambiguous so teams avoid overconfident prioritization from noisy distributions.
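The selection rule can be sketched as follows; the 15% threshold here is an assumed value for illustration, not the tool's actual setting:

```python
from collections import Counter

AMBIGUITY_THRESHOLD = 0.15  # assumed value, not the tool's configured threshold

def primary_category(categories):
    """Pick the modal category (excluding Q) and flag ambiguous features."""
    counted = Counter(c for c in categories if c != "Q")
    if not counted:
        return "Q", True  # all answers were Questionable
    total = sum(counted.values())
    ranked = counted.most_common(2)
    top_share = ranked[0][1] / total
    second_share = ranked[1][1] / total if len(ranked) > 1 else 0.0
    strength = top_share - second_share
    return ranked[0][0], strength < AMBIGUITY_THRESHOLD
```

A feature with responses M, M, M, O would score a strength of 0.75 − 0.25 = 0.5 and pass cleanly; a near-even split would be flagged.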

Why do CS and DS matter together?

CS estimates upside from delivering a feature, while DS estimates downside risk if it is missing. Looking at both prevents one-sided decisions. A feature with modest CS but very negative DS can still deserve priority because it protects baseline satisfaction and reduces dissatisfaction risk.
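These coefficients are commonly computed with the Berger et al. formulas; a minimal sketch, assuming the tool follows that convention:

```python
def cs_ds(counts):
    """counts: category -> respondent count, e.g. {"A": 12, "O": 8, "M": 5, "I": 3}."""
    a, o, m, i = (counts.get(k, 0) for k in "AOMI")
    total = a + o + m + i  # Reverse and Questionable answers are excluded
    if total == 0:
        return 0.0, 0.0
    cs = (a + o) / total   # upside if the feature is delivered
    ds = -(o + m) / total  # downside if it is missing (negative by convention)
    return cs, ds
```

An even split across A, O, M, and I yields CS = 0.5 and DS = −0.5: modest upside, but real dissatisfaction risk if the feature is dropped.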

What does a high Questionable rate mean?

High Questionable rates usually indicate wording or survey design issues, not customer clarity. Common causes include ambiguous feature definitions, inconsistent answer labels, or respondent fatigue. Rewrite prompts, run a small pilot, and compare rates again before making roadmap decisions from that feature's category.

How many respondents do I need per feature?

A practical minimum is twenty respondents per feature for early directional decisions, with higher thresholds for high-stakes roadmap commitments. The tool flags low-sample features so teams can separate hypotheses from stronger evidence. Segment-level decisions often require additional responses in each subgroup.

Can I compare segments in the same run?

Yes. If your dataset includes a segment column, the tool adds segment comparison outputs automatically. You can review category distribution shifts plus CS and DS differences by segment, which helps detect where one audience sees a Must-be while another experiences the same feature as Indifferent.
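The per-segment breakdown amounts to grouping classified responses by feature and segment; a sketch, with a (feature, segment, category) row shape assumed for the example:

```python
from collections import Counter, defaultdict

def by_segment(rows):
    """rows: iterable of (feature_id, segment, category) tuples (assumed shape).

    Returns a category distribution per (feature, segment) pair, so shifts
    such as M-in-one-segment vs I-in-another become visible."""
    dist = defaultdict(Counter)
    for feature, segment, category in rows:
        dist[(feature, segment)][category] += 1
    return dist
```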

What file format should I use for imports?

You can use CSV or XLSX. Required columns are feature_id, feature_name, respondent_id, functional_response, and dysfunctional_response. Optional segment and timestamp columns are supported. Use the template download to avoid header mismatches and ensure response values map correctly to the Kano evaluation matrix.
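A quick pre-upload header check, using the required column names listed above, can catch most validation failures early; a sketch with Python's standard csv module:

```python
import csv
import io

# Required columns from the template schema described above.
REQUIRED = {"feature_id", "feature_name", "respondent_id",
            "functional_response", "dysfunctional_response"}

def missing_headers(csv_text):
    """Return required columns absent from the file's header row."""
    reader = csv.reader(io.StringIO(csv_text))
    header = {h.strip().lower() for h in next(reader)}
    return REQUIRED - header
```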

How should I use the ranking output in roadmap planning?

Start with Must-be gaps that show high dissatisfaction risk, then evaluate One-dimensional opportunities with strong CS and DS signals. Attractive items are valuable when baseline quality is stable. Always review Ambiguous and High-Q flags before final prioritization, then publish the markdown summary for stakeholder alignment.

Learn more with CraftUp

Turn survey evidence into roadmap clarity

Use CraftUp lessons and workflows to move from customer signals to execution-ready priorities.

Freshness

Last updated: 2026-03-03

  • Launched no-login Kano scorer with CSV/XLSX import, paste parsing, and full A/O/M/I/R/Q categorization.
  • Added CS/DS coefficients, ambiguity detection, high-Q and low-sample warnings, and segment comparison outputs.
  • Added export stack for CSV, markdown, and JSON plus stakeholder-friendly ranking and plotting views.