This is a questionnaire to define a product North Star Metric and its input metrics (product strategy); it is not an attribution tool. You get 3 candidates, a scorecard with anti-vanity and lagging checks, 3–5 input metrics mapped to dimensions, and a metric tree ready for the KPI Tree Builder.
No login. Autosave in browser. Shareable URL.
A good North Star (1) measures user value: it reflects the value moment, not just presence; (2) is within product and marketing influence: you can move it with product and growth work; and (3) is a leading indicator of revenue: it predicts business outcomes rather than being revenue itself. Avoid vanity metrics (DAU/MAU without a value action) and purely lagging metrics (revenue only).
Attention: value increases with time or consumption (e.g. engaged minutes, sessions with value activity). Transaction: value increases with successful transactions (e.g. orders, GMV, completed bookings). Productivity: value increases when users complete work efficiently (e.g. tasks completed, workflows done). Pick one game that fits your product; the tool generates candidates from that game.
Map 3–5 input metrics to four dimensions: Breadth (how many users or accounts get value), Depth (how much value per interaction—e.g. lessons per user, AOV), Frequency (how often they return—e.g. return rate, weekly active), Efficiency (reliability and success of value delivery—e.g. completion rate, on-time delivery). This keeps the driver tree clear and exportable to the KPI Tree Builder.
Vanity metrics: DAU/MAU or login count without a value action don't represent value. Rewrite to include the value moment (e.g. 'Weekly users who completed X'). Lagging metrics: revenue or profit as the North Star is lagging only; use an upstream metric that product can influence. Multiple North Stars: pick one engagement game and one candidate so the team has a single North Star to optimize for.
Symptom: North Star is a vanity metric.
Cause: Choosing DAU/MAU or logins without a value action.
Fix: Rewrite so the metric includes the value moment (e.g. 'Weekly users who completed X'). Use the anti-vanity check in the scorecard.
Symptom: North Star is lagging only.
Cause: Picking revenue or profit as the North Star.
Fix: Use a leading indicator that product/marketing can influence; keep revenue as a business KPI downstream. Use the lagging warning and suggested upstream metric.
Symptom: Multiple North Stars.
Cause: Not forcing one primary game or one selected candidate.
Fix: Pick one engagement game and one winning candidate. Use the scorecard to rank and select.
Symptom: Input metrics don't map to dimensions.
Cause: Listing metrics without assigning breadth/depth/frequency/efficiency.
Fix: Assign each of the 3–5 input metrics to one dimension so the tree is clear and exportable to the KPI Tree Builder.
Symptom: Vision and definition are vague.
Cause: One-word vision or no time window on the metric.
Fix: Write a one-sentence vision (who + what success). Define the NSM with unit and time window (e.g. weekly, monthly).
Symptom: Scorecard scores are all 5.
Cause: No discrimination between candidates.
Fix: Rate honestly; controllability and measurability often differ between candidates. The rank order should help you pick one candidate (see the scoring sketch after this list).
Symptom: Export doesn't open in KPI Tree Builder.
Cause: JSON shape or IDs don't match.
Fix: Use Export JSON; the metric_tree key is a full KPI Tree state. In KPI Tree Builder, use Import and paste the metric_tree object or use a dedicated 'Import from North Star Finder' if available.
Symptom: Workshop runs over time.
Cause: No agenda or too many candidates.
Fix: Use the facilitation agenda; limit to 3 candidates and one game. Allocate 15 min vision, 20 min candidates + scorecard, 25 min input metrics and owners.
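Where the scorecard fails to discriminate, a weighted total makes the ranking explicit. A minimal sketch in TypeScript, assuming hypothetical criterion names and weights (the tool's actual scorecard fields may differ):

```typescript
// Hypothetical criteria and weights; the tool's actual scorecard
// fields may differ.
type Criterion = "userValue" | "controllability" | "measurability" | "leadsRevenue";

interface Candidate {
  name: string;
  scores: Record<Criterion, number>; // 1-5 rating per criterion
}

const WEIGHTS: Record<Criterion, number> = {
  userValue: 0.35,
  controllability: 0.25,
  measurability: 0.2,
  leadsRevenue: 0.2,
};

// Highest weighted total first; ties usually mean the ratings
// need another honest pass.
function rank(candidates: Candidate[]): Candidate[] {
  const total = (c: Candidate) =>
    (Object.keys(WEIGHTS) as Criterion[]).reduce(
      (sum, k) => sum + c.scores[k] * WEIGHTS[k],
      0,
    );
  return [...candidates].sort((a, b) => total(b) - total(a));
}
```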
This is a questionnaire to define a product North Star Metric and its input metrics (product strategy). It is not an attribution or marketing mix tool. You get a North Star statement, 3–5 input metrics mapped to dimensions, and a metric tree draft exportable to the KPI Tree Builder.
Attention: value increases with time or consumption (e.g. engaged minutes, sessions with value). Transaction: value increases with successful transactions (e.g. orders, GMV). Productivity: value increases when users complete work efficiently (e.g. tasks completed, workflows done). Pick one game that fits your product.
Breadth, depth, frequency, and efficiency cover the main drivers; 3–5 inputs keep the tree actionable. Fewer and you miss a dimension; more and the tree becomes hard to monitor. The KPI Tree Builder can expand the tree later with sub-metrics.
The tool flags candidates that look like DAU, MAU, or login count without a value action. Vanity metrics don't represent value moments. The scorecard shows FAIL and suggests a rewrite (e.g. add 'who completed X'). You can still select the candidate, but the warning stays.
If the candidate is pure revenue or profit, the tool warns that it is lagging (outcome only) and suggests an upstream metric (e.g. conversions, active subscribers). North Stars should be leading indicators that product and marketing can influence. Use the suggested upstream metric so the team can act on leading, not lagging, outcomes.
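For illustration, the anti-vanity and lagging checks could be as simple as a keyword heuristic. The patterns and function name below are assumptions for this sketch, not the tool's documented logic:

```typescript
// Illustrative heuristics only; the tool's real checks may differ.
const VANITY = /\b(dau|mau|logins?|sign-?ins?|visits?)\b/i;
const VALUE_ACTION = /\b(completed|purchased|booked|published|created)\b/i;
const LAGGING = /\b(revenue|profit|arr|mrr)\b/i;

function checkCandidate(metric: string): string[] {
  const warnings: string[] = [];
  // Presence metric without a value action -> vanity warning.
  if (VANITY.test(metric) && !VALUE_ACTION.test(metric)) {
    warnings.push("Vanity: add the value moment, e.g. 'who completed X'.");
  }
  // Pure business outcome -> lagging warning.
  if (LAGGING.test(metric)) {
    warnings.push("Lagging: use an upstream metric product can influence.");
  }
  return warnings;
}

// checkCandidate("Monthly logins")                       -> one vanity warning
// checkCandidate("Weekly users who completed a lesson")  -> []
```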
Export JSON from this tool. The JSON includes a metric_tree key with mode, metrics, and edges in the same format the KPI Tree Builder uses. You can paste that into the KPI Tree Builder import or use the metric_tree as the initial state. No manual edits are required for a valid tree; the North Star and input metrics appear as nodes with dimensions as groups. Then you can add formulas and run the scenario calculator there.
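For reference, a plausible shape of the exported object in TypeScript. Only the metric_tree key with mode, metrics, and edges is stated above; the node and edge fields below are assumptions for illustration:

```typescript
// Assumed shape; only metric_tree with mode, metrics, and edges
// is documented. Node and edge field names are hypothetical.
interface MetricNode {
  id: string;
  name: string;
  dimension?: "breadth" | "depth" | "frequency" | "efficiency";
}

interface MetricTree {
  mode: string;                           // e.g. "north-star"
  metrics: MetricNode[];                  // North Star plus 3-5 input metrics
  edges: { from: string; to: string }[];  // driver -> North Star relationships
}

const example: { metric_tree: MetricTree } = {
  metric_tree: {
    mode: "north-star",
    metrics: [
      { id: "nsm", name: "Weekly users who completed a lesson" },
      { id: "m1", name: "New learners per week", dimension: "breadth" },
      { id: "m2", name: "Lessons per learner", dimension: "depth" },
    ],
    edges: [
      { from: "m1", to: "nsm" },
      { from: "m2", to: "nsm" },
    ],
  },
};
```

If the import rejects the payload, compare the field names against a tree exported from the KPI Tree Builder itself rather than editing by hand.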
Workshop mode is for a 60–120 minute facilitated session. It provides an agenda and prompts: align on vision and value moment, pick the game, brainstorm candidates, run the scorecard, define inputs, assign owners and cadence. Same wizard steps, with facilitation guidance.
Candidates are generated with a default time window (usually weekly). When you select a candidate and build the output, the definition uses that window. You can refine the metric name and definition in the North Star statement or later in the KPI Tree Builder.
Yes. The tool is free, runs in your browser, and requires no login. You get a 6-step wizard (Solo or Workshop), 3 candidates and scorecard, 3–5 input metrics, and Copy/Markdown/CSV/JSON export. The JSON metric tree imports into the KPI Tree Builder. Autosave and shareable URL included.
Breadth: how many users or accounts get value. Depth: how much value per interaction (e.g. lessons per user, AOV). Frequency: how often they return (e.g. return rate, weekly active). Efficiency: reliability and success of value delivery (e.g. completion rate, on-time delivery). Map each input metric to one dimension.
Courses, blog, and glossary for product and discovery skills.
Use CraftUp tools and courses to turn your North Star and input metrics into clear execution and learning loops.
Last updated: 2026-03-05