This RICE score calculator helps product teams prioritize feature ideas in one view, with consistent math and practical exports for roadmap planning.
Compare one initiative or an entire backlog, normalize mixed timeframes, and share ranking decisions without extra tooling.
No login. Runs in your browser. We do not store your inputs.
Calculator mode
Normalization factors: week ×13, month ×3, quarter ×1, year ÷4. Import and export CSV headers, in order: name, reach_input, reach_timeframe, reach_per_quarter, impact, confidence_percent, effort_person_months, notes, rice_score, rank.
| Initiative name | Reach | Reach time frame | Impact | Confidence | Effort | Notes / assumptions | RICE score | Rank | Actions |
|---|---|---|---|---|---|---|---|---|---|
High score and low effort initiatives.
Low-confidence rows that likely need discovery.
Use the calculator as a working prioritization workflow, not a one-time scoring exercise. The value comes from keeping assumptions visible and rerunning ranking as evidence changes, so roadmap conversations stay grounded in current information.
Step 1
Add one or more initiatives with Reach, Impact, Confidence, and Effort inputs. Reach can be weekly, monthly, quarterly, or yearly.
Step 2
The calculator normalizes every initiative to reach per quarter, computes the RICE score, and ranks rows with stable sorting.
Step 3
Review quick wins, stakeholder rationale, and confidence sensitivity, then export or share for roadmap decisions.
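The three steps above can be sketched in a few lines. This is an illustrative sketch, not the tool's actual source; the type and function names are assumptions.

```typescript
// Hypothetical shape for one scored row; field names are illustrative.
interface Initiative {
  name: string;
  reachPerQuarter: number;    // already normalized to per-quarter (step 2)
  impact: number;             // 0.25–3 scale
  confidence: number;         // 0–1
  effortPersonMonths: number; // minimum 0.5, in 0.5 increments
}

// RICE = reach per quarter × impact × confidence ÷ effort
const riceScore = (i: Initiative): number =>
  (i.reachPerQuarter * i.impact * i.confidence) / i.effortPersonMonths;

// Array.prototype.sort is stable in modern engines, so rows with equal
// scores keep their input order, matching the "stable sorting" behavior.
function rank(rows: Initiative[]): Initiative[] {
  return [...rows].sort((a, b) => riceScore(b) - riceScore(a));
}
```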
RICE is short for Reach, Impact, Confidence, and Effort. The framework works best when each value is estimated in a repeatable way, then discussed as assumptions rather than facts. A good score is useful only if your team can explain why each input was chosen and what evidence could change it next week.
Estimate how many users or accounts are affected in a given period. You can enter a weekly, monthly, quarterly, or yearly figure, and the tool normalizes it to reach per quarter for clean ranking.
Use the default Intercom-style scale: Massive 3, High 2, Medium 1, Low 0.5, Minimal 0.25. Tie impact to one primary metric, not multiple outcomes.
Confidence reflects evidence strength. Start with 100%, 80%, or 50%, then use a custom percentage when your evidence sits between standard levels.
Effort is entered in person-months, with a minimum of 0.5 and increments of 0.5. The formula divides by effort, so oversized estimates quickly push initiatives down the ranking.
Keep scoring pragmatic. If your team uses vague scoring language, RICE devolves into a debate about interpretation. Clear rubrics reduce that friction and make ranking changes easier to communicate in roadmap prioritization meetings.
If your evidence is mixed, set custom confidence and note what experiment would increase certainty for the next scoring pass.
These three prefilled scenarios are realistic product initiatives. Load one to see how the RICE scoring model behaves with different effort and confidence profiles.
Reach 1200 per quarter, impact High (2), confidence 80%, effort 1.5 person-months, activation-focused assumptions.
Strong RICE candidate with solid confidence and moderate effort. Good fit for near-term roadmap prioritization.
Reach 350 per month (normalized to 1,050 per quarter), impact Medium (1), confidence 50%, effort 0.5 person-months.
Quick to run and cheap in effort, but confidence is low. Prioritize as a fast experiment, not a full commitment.
Reach 120 per quarter, impact Massive (3), confidence 80%, effort 4 person-months, cross-team dependencies.
Potentially high-impact initiative that ranks lower because effort is substantial. Keep as a strategic track.
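Plugging the three profiles into the RICE formula (reach per quarter × impact × confidence ÷ effort) reproduces the ranking described above. The variable names below are placeholders for the three scenarios in order:

```typescript
// Scenario 1: 1200/qtr × High (2) × 80% ÷ 1.5 person-months
const scenarioA = (1200 * 2 * 0.8) / 1.5; // 1280

// Scenario 2: monthly reach is normalized first (350 × 3 = 1050/qtr)
const scenarioB = (1050 * 1 * 0.5) / 0.5; // 1050

// Scenario 3: 120/qtr × Massive (3) × 80% ÷ 4 person-months
const scenarioC = (120 * 3 * 0.8) / 4; // 72
```

The division by effort is what pulls the high-impact strategic bet to the bottom: even a Massive impact rating cannot offset a 4 person-month denominator against sub-2 person-month competitors.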
Symptom: Scores look inflated across every initiative.
Cause: Impact is set to Massive by default without a shared rubric.
Fix: Define impact anchors before scoring and challenge every Massive score with concrete metric evidence.
Symptom: A weekly initiative outranks long-term bets unexpectedly.
Cause: Reach timeframe was mixed but normalization was ignored in discussion.
Fix: Review normalized reach per quarter in the tooltip and align on quarter-based comparisons.
Symptom: Top-ranked rows change every meeting with no clear reason.
Cause: Inputs are edited without updating assumptions in notes.
Fix: Require one note update whenever confidence, reach, or effort changes.
Symptom: Import CSV fails even though values look correct.
Cause: Headers or order differ from the required template.
Fix: Use the downloadable sample CSV and preserve the exact header names and order.
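The header check implied by this fix can be sketched as below, using the documented header list. This is an assumption about how validation behaves, not the tool's actual code; the real parser may handle quoting and whitespace differently.

```typescript
// The documented import/export headers, in their required order.
const REQUIRED_HEADERS = [
  "name", "reach_input", "reach_timeframe", "reach_per_quarter",
  "impact", "confidence_percent", "effort_person_months",
  "notes", "rice_score", "rank",
];

// Returns true only when every header matches by name AND position.
function validateHeaderRow(line: string): boolean {
  const headers = line.split(",").map((h) => h.trim());
  return headers.length === REQUIRED_HEADERS.length &&
    headers.every((h, i) => h === REQUIRED_HEADERS[i]);
}
```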
Symptom: Initiatives with low confidence still dominate ranking.
Cause: Confidence was set to 100% despite limited evidence.
Fix: Use 50% or custom low confidence until quantitative proof exists.
Symptom: Effort inputs are rejected with validation errors.
Cause: Effort values are below 0.5 or not in 0.5 increments.
Fix: Use person-month estimates with minimum 0.5 and step size 0.5.
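The 0.5-minimum, 0.5-step rule reduces to a one-line check: doubling a valid effort value must yield a whole number. A minimal sketch, assuming the tool validates this way (the function name is illustrative):

```typescript
// Valid efforts: 0.5, 1, 1.5, 2, ... (person-months).
// Halves are exactly representable as doubles, so ×2 is an exact test.
function isValidEffort(personMonths: number): boolean {
  return personMonths >= 0.5 && Number.isInteger(personMonths * 2);
}
```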
Symptom: Stakeholders reject the ranked table as too technical.
Cause: Raw math is shared without practical rationale and assumptions.
Fix: Switch to Stakeholder View and present top-five rationale with notes.
Symptom: Priorities collapse under uncertainty checks.
Cause: Backlog relies on fragile confidence assumptions.
Fix: Run the confidence -20% sensitivity mode and promote initiatives that remain stable.
Yes. The RICE score calculator is fully free, requires no login, and supports unlimited scoring sessions. You can compare a full backlog in batch mode, run one-off checks in single mode, and export results to CSV, Markdown, or JSON without any paywall steps.
No server storage is used for your initiative inputs. Data stays in your browser, with optional local autosave so you can resume work later. You can remove everything at any time with Clear data. CraftUp does not persist your initiative names, notes, or scores on backend systems.
A chat tool can brainstorm, but it does not enforce consistent RICE structure by default. This calculator standardizes inputs, normalizes reach per quarter, validates effort steps, ranks initiatives, and gives export workflows. That consistency is what makes recurring roadmap prioritization credible across cycles.
Yes. The output format is built for practical product and growth planning, including stakeholder review decks and backlog triage docs. Teams commonly export Markdown into planning notes and share compressed URLs internally. Just review notes before sharing externally if they contain sensitive assumptions.
Start with the default 100%, 80%, and 50% bands to keep scoring calibrated across teams. Use 100% only for evidence-backed initiatives, 80% when you have good directional support, and 50% when assumptions are still weak. You can also enter a custom percentage for edge cases.
Teams often mix weekly, monthly, and quarterly reach estimates. Without normalization, ranking becomes misleading because time windows differ. The calculator converts all entries to reach per quarter using explicit factors, so initiatives can be compared on one baseline while preserving original input values.
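Using the factors the tool documents (week ×13, month ×3, quarter ×1, year ÷4), the conversion is a one-line lookup. The names below are illustrative, not the tool's actual source:

```typescript
type Timeframe = "week" | "month" | "quarter" | "year";

// Multipliers that convert a per-timeframe figure to per-quarter.
const QUARTER_FACTOR: Record<Timeframe, number> = {
  week: 13,    // 13 weeks per quarter
  month: 3,    // 3 months per quarter
  quarter: 1,
  year: 1 / 4, // a quarter is one fourth of a year
};

function reachPerQuarter(reach: number, timeframe: Timeframe): number {
  return reach * QUARTER_FACTOR[timeframe];
}
```

For example, a reach of 350 per month becomes 1,050 per quarter, so it can be ranked directly against quarterly estimates.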
Use implementation effort for the scoped initiative, not broad program effort. Include engineering, design, and QA where relevant, then round to 0.5 increments to keep estimates practical. If effort is uncertain, run a lower-confidence score first and refine after technical discovery.
Yes. Share URL compresses your batch table and notes client-side and recreates the session in a fresh browser. This makes async reviews straightforward because everyone sees the same rows and assumptions. If the URL becomes long, trim notes before sharing outside your internal tools.
Sensitivity mode lowers each row's confidence by 20 percentage points and compares the top-five ranking against the baseline. It highlights priorities that are fragile when evidence weakens. Use it before roadmap commitments to avoid overcommitting to initiatives with unstable confidence assumptions.
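A sketch of this check, assuming the drop is subtractive (confidence minus 20 points, floored at zero). A purely multiplicative drop applied uniformly would leave the ordering unchanged, so the subtractive form is assumed here, and all names are illustrative:

```typescript
interface Row {
  name: string;
  reachPerQuarter: number;
  impact: number;
  confidence: number; // 0–1
  effort: number;     // person-months
}

// Score with an optional confidence drop, floored at zero.
const score = (r: Row, drop = 0): number =>
  (r.reachPerQuarter * r.impact * Math.max(r.confidence - drop, 0)) / r.effort;

const topNames = (rows: Row[], drop = 0, n = 5): string[] =>
  [...rows]
    .sort((a, b) => score(b, drop) - score(a, drop))
    .slice(0, n)
    .map((r) => r.name);

// Fragile priorities: baseline top-n rows that fall out of the top n
// once every confidence is stressed by 20 points.
function fragileRows(rows: Row[], n = 5): string[] {
  const stressed = new Set(topNames(rows, 0.2, n));
  return topNames(rows, 0, n).filter((name) => !stressed.has(name));
}
```

Low-confidence rows lose proportionally more under a subtractive drop (0.25 falls to 0.05, an 80% cut), which is what surfaces fragile priorities.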
A weekly or biweekly cadence is practical for most teams. Re-score when reach forecasts change, confidence improves from new research, or effort estimates shift after technical grooming. Frequent lightweight reruns keep product backlog prioritization aligned with current evidence instead of outdated assumptions.
Use CraftUp lessons and workflows to turn ranked initiatives into shipped outcomes.
Last updated: 2026-03-03