Use this Kano survey builder to run a Kano model survey for product feature prioritization (paired functional/dysfunctional questions) so your team can make roadmap choices from structured evidence instead of loose opinions.
Runs in your browser. We do not store your uploaded file.
Allowed responses: I like it, I expect it (must-be), I am neutral, I can tolerate it / live with it, I dislike it.
Step 1
Upload a CSV/XLSX file or paste your Kano questionnaire responses using the template schema.
Step 2
The scorer maps each functional/dysfunctional answer pair to A, O, M, I, R, or Q using the standard matrix.
Step 3
Review categories, coefficients, flags, ranking, and segment differences, then export a stakeholder-ready summary.
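The pair-to-category mapping in Step 2 follows the standard Kano evaluation matrix. As a minimal sketch (the names and layout here are illustrative, not the tool's actual code), the matrix can be encoded as a 5×5 lookup over the five allowed responses:

```python
# Standard Kano evaluation matrix: (functional, dysfunctional) -> category.
# Rows/columns follow the five allowed responses, abbreviated as constants.
LIKE, EXPECT, NEUTRAL, TOLERATE, DISLIKE = range(5)

KANO_MATRIX = [
    # dysfunctional: LIKE  EXPECT NEUTRAL TOLERATE DISLIKE
    ["Q", "A", "A", "A", "O"],  # functional = LIKE
    ["R", "I", "I", "I", "M"],  # functional = EXPECT
    ["R", "I", "I", "I", "M"],  # functional = NEUTRAL
    ["R", "I", "I", "I", "M"],  # functional = TOLERATE
    ["R", "R", "R", "R", "Q"],  # functional = DISLIKE
]

def classify(functional: int, dysfunctional: int) -> str:
    """Map one functional/dysfunctional answer pair to a Kano category."""
    return KANO_MATRIX[functional][dysfunctional]

# "I like it" when present, "I dislike it" when absent -> One-dimensional.
print(classify(LIKE, DISLIKE))  # O
```

Note how the matrix makes the pairing requirement concrete: the same functional answer ("I like it") maps to Questionable, Attractive, or One-dimensional depending entirely on the dysfunctional answer.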
The Kano model separates customer expectation types so teams do not confuse baseline requirements with growth opportunities. A feature can land in one of six categories: Attractive (A), One-dimensional (O), Must-be (M), Indifferent (I), Reverse (R), or Questionable (Q). This classification helps prioritize reliability gaps first, then performance drivers, then delight opportunities that can differentiate the product.
Kano analysis depends on paired questions: how users feel when a feature is present (functional) and how they feel when it is absent (dysfunctional). Each pair maps to a category through the standard matrix. This pairing prevents misleading conclusions from one-sided preference questions and makes category assignment consistent across analysts.
Keep feature prompts concrete, singular, and easy to imagine in real product use. Avoid combining multiple behaviors in one item, and avoid terms that imply value judgment. High Questionable rates usually come from ambiguous wording, overlong descriptions, or confusing response labels. Pilot test wording with a small audience before broad rollout so the full sample produces interpretable data.
Start with Must-be items that show strong dissatisfaction risk if missing. Next, rank One-dimensional items by coefficient strength and delivery feasibility. Attractive items become leverage once baseline expectations are stable. Use ambiguity, high-Q, and low sample flags as guardrails: they indicate where research quality is still too weak for confident roadmap commitments.
Symptom: Many features show high Questionable rates.
Cause: Functional and dysfunctional prompts are vague or double-barreled.
Fix: Rewrite each pair with one clear behavior and rerun a short pilot.
Symptom: Every feature looks Must-be.
Cause: Respondents were primed by onboarding copy before the survey.
Fix: Randomize question order and remove leading context from introductions.
Symptom: Conflicting categories appear between teams.
Cause: Segments with different needs were mixed into one overall analysis.
Fix: Add segment labels and review category distribution by segment first.
Symptom: Roadmap decisions change every week.
Cause: Low sample features are treated as final evidence.
Fix: Enforce a minimum respondent threshold before locking prioritization.
Symptom: Attractive ideas are over-prioritized over reliability work.
Cause: CS is reviewed without DS and Must-be signals.
Fix: Use CS and DS together and place Must-be gaps first in planning.
Symptom: Imported files fail validation frequently.
Cause: Header names or response values do not match the accepted schema.
Fix: Use the template file and normalize responses before uploading.
Symptom: Reverse features are ignored in final decisions.
Cause: Team assumes all reverse signals are noise.
Fix: Investigate reverse signals by segment before removing the feature.
Symptom: Stakeholders distrust the analysis output.
Cause: No shared summary is provided after scoring runs.
Fix: Export markdown with flags, coefficients, and next actions after each run.
Yes. The no-login scorer is fully free and runs in your browser. You can upload CSV or XLSX files, paste survey tables, compute Kano categories, and export stakeholder-ready outputs without paywalls. There is no limit on how many features you analyze in one session.
No. The scoring flow is client-side and does not upload your file to CraftUp servers. File parsing and calculations happen in your browser session. If you use exports, generated files are saved locally on your device, and you can clear the working dataset at any time.
Functional asks how users feel when a feature exists. Dysfunctional asks how users feel when that same feature does not exist. The pair is required for Kano classification because customer sentiment depends on both presence and absence, not only positive preference in isolation.
The tool selects the highest-frequency category excluding Questionable results, then calculates category strength as the top category's share minus the runner-up's share. If strength is below the ambiguity threshold, the feature is flagged as Ambiguous so teams avoid overconfident prioritization from noisy distributions.
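That selection rule can be sketched in a few lines. This is an assumption-laden illustration, not the tool's implementation; in particular, the 0.10 threshold is a placeholder, since the actual ambiguity threshold is not stated above:

```python
from collections import Counter

AMBIGUITY_THRESHOLD = 0.10  # placeholder value; the tool's threshold may differ

def categorize_feature(categories: list[str]) -> tuple[str, bool]:
    """Pick the modal category (excluding Q) and flag low-strength results."""
    counts = Counter(c for c in categories if c != "Q")
    total = sum(counts.values())
    ranked = counts.most_common(2)
    top_share = ranked[0][1] / total
    runner_up_share = ranked[1][1] / total if len(ranked) > 1 else 0.0
    strength = top_share - runner_up_share
    return ranked[0][0], strength < AMBIGUITY_THRESHOLD

# 3 of 5 non-Q answers say Must-be; strength = 0.6 - 0.4 = 0.2, not ambiguous.
print(categorize_feature(["M", "M", "M", "O", "O", "Q"]))  # ('M', False)
```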
CS estimates upside from delivering a feature, while DS estimates downside risk if it is missing. Looking at both prevents one-sided decisions. A feature with modest CS but very negative DS can still deserve priority because it protects baseline satisfaction and reduces dissatisfaction risk.
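CS and DS as described match the widely used Kano satisfaction ("better") and dissatisfaction ("worse") coefficients, which divide by the A+O+M+I count and exclude Reverse and Questionable answers. A sketch under that assumption (the exact formula the tool uses is not stated above):

```python
def satisfaction_coefficients(counts: dict[str, int]) -> tuple[float, float]:
    """Compute CS (upside if delivered) and DS (downside if missing).

    Standard definitions: CS = (A+O)/(A+O+M+I) in [0, 1],
    DS = -(O+M)/(A+O+M+I) in [-1, 0]. R and Q answers are excluded.
    """
    a, o, m, i = (counts.get(k, 0) for k in "AOMI")
    denom = a + o + m + i
    return (a + o) / denom, -(o + m) / denom

# Mostly Must-be answers: modest CS, strongly negative DS -> protect first.
cs, ds = satisfaction_coefficients({"A": 5, "O": 10, "M": 20, "I": 5})
print(cs, ds)  # 0.375 -0.75
```

This is the numeric shape of the advice above: a feature with CS of 0.375 but DS of -0.75 carries far more dissatisfaction risk than upside, so it behaves like a baseline requirement.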
High Questionable rates usually indicate wording or survey-design problems rather than genuinely contradictory customer opinions. Common causes include ambiguous feature definitions, inconsistent answer labels, or respondent fatigue. Rewrite prompts, run a small pilot, and compare rates again before making roadmap decisions from that feature's category.
A practical minimum is twenty respondents per feature for early directional decisions, with higher thresholds for high-stakes roadmap commitments. The tool flags low-sample features so teams can separate hypotheses from stronger evidence. Segment-level decisions often require additional responses in each subgroup.
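The twenty-respondent guideline translates directly into a gating check. A minimal sketch (function and data names are hypothetical):

```python
MIN_RESPONDENTS = 20  # directional threshold from the guidance above

def low_sample_features(respondents_per_feature: dict[str, int]) -> list[str]:
    """List features whose evidence is too thin to lock prioritization on."""
    return [name for name, n in respondents_per_feature.items()
            if n < MIN_RESPONDENTS]

print(low_sample_features({"dark_mode": 34, "sso": 12}))  # ['sso']
```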
Yes. If your dataset includes a segment column, the tool adds segment comparison outputs automatically. You can review category distribution shifts plus CS and DS differences by segment, which helps detect where one audience sees a Must-be while another experiences the same feature as Indifferent.
You can use CSV or XLSX. Required columns are feature_id, feature_name, respondent_id, functional_response, and dysfunctional_response. Optional segment and timestamp columns are supported. Use the template download to avoid header mismatches and ensure response values map correctly to the Kano evaluation matrix.
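A quick pre-upload check against the required columns can catch the header mismatches described in the troubleshooting list above. This sketch validates only headers, not response values, and the helper name is illustrative:

```python
import csv
import io

REQUIRED = {"feature_id", "feature_name", "respondent_id",
            "functional_response", "dysfunctional_response"}
OPTIONAL = {"segment", "timestamp"}

def missing_headers(csv_text: str) -> list[str]:
    """Return required columns absent from a CSV's header row."""
    header_row = next(csv.reader(io.StringIO(csv_text)))
    return sorted(REQUIRED - set(header_row))

sample = ("feature_id,feature_name,respondent_id,functional_response\n"
          "F1,Dark mode,R1,I like it\n")
print(missing_headers(sample))  # ['dysfunctional_response']
```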
Start with Must-be gaps that show high dissatisfaction risk, then evaluate One-dimensional opportunities with strong CS and DS signals. Attractive items are valuable when baseline quality is stable. Always review Ambiguous and High-Q flags before final prioritization, then publish the markdown summary for stakeholder alignment.
Use CraftUp lessons and workflows to move from customer signals to execution-ready priorities.
Last updated: 2026-03-03