North Star Metric Finder (Free)

This is a questionnaire to define a product North Star Metric and its input metrics (product strategy), not an attribution tool. Get 3 candidates, a scorecard with anti-vanity and lagging checks, 3–5 input metrics by dimension, and a metric tree ready for the KPI Tree Builder.

  • 6-step wizard: context, engagement game (Attention/Transaction/Productivity), 3 candidates, scorecard and selection, input metrics (breadth/depth/frequency/efficiency), output and metric tree.
  • Anti-vanity check (e.g. DAU/MAU without a value action) and lagging warning (e.g. revenue as North Star). Exports: Copy, Markdown, CSV, JSON; the JSON metric_tree imports into the KPI Tree Builder.
  • Solo (10 min) or Workshop (60–120 min) mode. Three loadable examples. No login.

No login. Autosave in browser. Shareable URL.

How it works

  1. Answer the context questions (business model, audience, value moment, revenue driver). Pick one engagement game (Attention, Transaction, or Productivity). The tool generates 3 candidate North Star metrics.
  2. Score each candidate 1–5 on customer value, leading indicator, controllability, and measurability (see the scoring sketch after this list). Use the anti-vanity and lagging checks, then select one candidate. Define 3–5 input metrics mapped to breadth, depth, frequency, and efficiency.
  3. Get the North Star statement, input metrics table, metric tree draft, and a weekly monitoring list. Export Markdown, CSV, or JSON; the JSON metric_tree imports into the KPI Tree Builder. Share the URL or copy the summary.
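A minimal sketch of the step-2 scorecard math, in TypeScript. This is illustrative only, not the tool's internal code; the criterion names mirror the four scorecard dimensions above, and equal weighting is an assumption.

```typescript
// Hypothetical scorecard: four 1–5 criteria per candidate, highest
// total wins. Equal weighting is assumed, not confirmed by the tool.
type Criterion =
  | "customerValue"
  | "leadingIndicator"
  | "controllability"
  | "measurability";

interface Candidate {
  name: string;
  scores: Record<Criterion, number>; // each score is 1–5
}

function rankCandidates(candidates: Candidate[]): Candidate[] {
  const total = (c: Candidate) =>
    Object.values(c.scores).reduce((sum, s) => sum + s, 0);
  // Sort descending by total; the first entry is the suggested pick.
  return [...candidates].sort((a, b) => total(b) - total(a));
}

const ranked = rankCandidates([
  {
    name: "Weekly users who completed a lesson",
    scores: { customerValue: 5, leadingIndicator: 4, controllability: 4, measurability: 5 },
  },
  {
    name: "Monthly active users",
    scores: { customerValue: 2, leadingIndicator: 2, controllability: 3, measurability: 5 },
  },
]);
// ranked[0].name === "Weekly users who completed a lesson" (total 18 vs 12)
```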

What makes a good North Star metric

A good North Star metric (1) measures user value: it reflects the value moment, not just presence; (2) sits within product and marketing influence: you can move it with product and growth work; and (3) is a leading indicator of revenue: it predicts business outcomes rather than being revenue itself. Avoid vanity metrics (DAU/MAU without a value action) and purely lagging metrics (revenue alone).

The 3 engagement games

Attention: value increases with time or consumption (e.g. engaged minutes, sessions with value activity). Transaction: value increases with successful transactions (e.g. orders, GMV, completed bookings). Productivity: value increases when users complete work efficiently (e.g. tasks completed, workflows done). Pick one game that fits your product; the tool generates candidates from that game.

Input metrics: breadth, depth, frequency, efficiency

Map 3–5 input metrics to four dimensions: Breadth (how many users or accounts get value), Depth (how much value per interaction—e.g. lessons per user, AOV), Frequency (how often they return—e.g. return rate, weekly active), Efficiency (reliability and success of value delivery—e.g. completion rate, on-time delivery). This keeps the driver tree clear and exportable to the KPI Tree Builder.
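As a rough data-model sketch (assumed, not the tool's actual schema), each input metric carries exactly one of the four dimensions, which is what keeps the driver tree clean:

```typescript
// Assumed shape: one input metric, exactly one dimension.
type Dimension = "breadth" | "depth" | "frequency" | "efficiency";

interface InputMetric {
  name: string;         // e.g. "Lessons completed per active user"
  dimension: Dimension; // exactly one of the four
  owner?: string;       // optional: who watches it weekly
}

const inputs: InputMetric[] = [
  { name: "Weekly users with a completed lesson", dimension: "breadth" },
  { name: "Lessons completed per active user", dimension: "depth" },
  { name: "Week-over-week return rate", dimension: "frequency" },
  { name: "Lesson completion rate", dimension: "efficiency" },
];
```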

Common traps (vanity, lagging, multiple north stars)

Vanity metrics: DAU/MAU or login counts without a value action don’t represent value; rewrite them to include the value moment (e.g. "Weekly users who completed X"). Lagging metrics: revenue or profit as the North Star is purely lagging; use an upstream metric that product can influence. Multiple North Stars: pick one engagement game and one selected candidate so the team has a single North Star to optimize for.

Pro tips

  • Pick one engagement game (Attention, Transaction, Productivity) and stick to it; mixing games dilutes the North Star.
  • Ensure the North Star measures a value moment, not just presence (avoid DAU/MAU without a value action).
  • Rate candidates on all four scorecard dimensions; the highest total often aligns with 'leading indicator of revenue' and 'controllability'.
  • Define 3–5 input metrics across breadth, depth, frequency, and efficiency so you have a full driver tree.
  • Export the metric tree to the KPI Tree Builder to add formulas, scenario calculator, and stakeholder views.
  • Keep the vision sentence to one line: who you serve and what success looks like. Make the NSM definition precise (numerator/denominator, time window); see the definition sketch after these tips.
  • What to monitor weekly: pick 5 leading indicators from your input metrics so the team knows what to watch before lagging results.
  • Anti-vanity: if a candidate is just 'logins' or 'sessions', rewrite it to include the value action (e.g. 'Sessions with at least one completed lesson').
  • Workshop mode: use the facilitation agenda to run a 60–120 min session with roles; align on vision and value moment before scoring candidates.
  • Next steps should include instrumenting events, validating the value moment with data or interviews, and setting baselines then targets.
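To make the precise-definition tip concrete, here is one way to pin down an NSM with an explicit numerator, denominator, and time window. The field names are illustrative, not part of the tool:

```typescript
// Illustrative only: an NSM definition with explicit numerator,
// denominator, and time window, per the pro tip above.
interface NsmDefinition {
  name: string;
  numerator: string;    // the value event being counted
  denominator?: string; // optional normalizer (omit for a plain count)
  window: "daily" | "weekly" | "monthly";
}

const nsm: NsmDefinition = {
  name: "Weekly learning users",
  numerator: "users who completed at least one lesson",
  window: "weekly",
};
```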

Common mistakes

Symptom: North Star is a vanity metric.

Cause: Choosing DAU/MAU or logins without a value action.

Fix: Rewrite so the metric includes the value moment (e.g. 'Weekly users who completed X'). Use the anti-vanity check in the scorecard.

Symptom: North Star is lagging only.

Cause: Picking revenue or profit as the North Star.

Fix: Use a leading indicator that product/marketing can influence; keep revenue as a business KPI downstream. Use the lagging warning and suggested upstream metric.

Symptom: Multiple North Stars.

Cause: Not forcing one primary game or one selected candidate.

Fix: Pick one engagement game and one winning candidate. Use the scorecard to rank and select.

Symptom: Input metrics don't map to dimensions.

Cause: Listing metrics without assigning breadth/depth/frequency/efficiency.

Fix: Assign each of the 3–5 input metrics to one dimension so the tree is clear and exportable to the KPI Tree Builder.

Symptom: Vision and definition are vague.

Cause: One-word vision or no time window on the metric.

Fix: Write a one-sentence vision (who + what success). Define the NSM with unit and time window (e.g. weekly, monthly).

Symptom: Scorecard scores are all 5.

Cause: No discrimination between candidates.

Fix: Rate honestly: controllability and measurability often differ. The rank order should help you pick one candidate.

Symptom: Export doesn't open in KPI Tree Builder.

Cause: JSON shape or IDs don't match.

Fix: Use Export JSON; the metric_tree key is a full KPI Tree state. In KPI Tree Builder, use Import and paste the metric_tree object or use a dedicated 'Import from North Star Finder' if available.

Symptom: Workshop runs over time.

Cause: No agenda or too many candidates.

Fix: Use the facilitation agenda; limit to 3 candidates and one game. Allocate 15 min vision, 20 min candidates + scorecard, 25 min input metrics and owners.

FAQ

Is this for defining a product North Star or for attribution?

This is a questionnaire to define a product North Star Metric and its input metrics (product strategy). It is not an attribution or marketing mix tool. You get a North Star statement, 3–5 input metrics mapped to dimensions, and a metric tree draft exportable to the KPI Tree Builder.

What are the three engagement games?

Attention: value increases with time or consumption (e.g. engaged minutes, sessions with value). Transaction: value increases with successful transactions (e.g. orders, GMV). Productivity: value increases when users complete work efficiently (e.g. tasks completed, workflows done). Pick one game that fits your product.

Why only 3–5 input metrics?

Breadth, depth, frequency, and efficiency cover the main drivers; 3–5 inputs keep the tree actionable. Fewer and you miss a dimension; more and the tree becomes hard to monitor. The KPI Tree Builder can expand the tree later with sub-metrics.

What is the anti-vanity check?

The tool flags candidates that look like DAU, MAU, or login count without a value action. Vanity metrics don't represent value moments. The scorecard shows FAIL and suggests a rewrite (e.g. add 'who completed X'). You can still select a candidate but the warning stays.
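As a toy illustration (the tool's real heuristics are not documented here), an anti-vanity check could flag presence-only wording that lacks a value action:

```typescript
// Toy anti-vanity heuristic, assumed rather than the tool's real logic:
// flag candidates that count presence (DAU/MAU, logins, sessions)
// without naming a value action.
const PRESENCE_TERMS = /\b(dau|mau|logins?|sessions?|visits?)\b/i;
const VALUE_TERMS = /\b(completed|purchased|booked|published|shipped)\b/i;

function looksLikeVanity(candidate: string): boolean {
  return PRESENCE_TERMS.test(candidate) && !VALUE_TERMS.test(candidate);
}

looksLikeVanity("Monthly logins"); // true: rewrite to include the value action
looksLikeVanity("Sessions with at least one completed lesson"); // false
```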

What is the lagging warning?

If the candidate is pure revenue or profit, the tool warns that it is lagging (outcome only) and suggests an upstream metric (e.g. conversions, active subscribers). North Stars should be leading indicators that product and marketing can influence. Use the suggested upstream metric so the team can act on leading, not lagging, outcomes.

How do I export to the KPI Tree Builder?

Export JSON from this tool. The JSON includes a metric_tree key with mode, metrics, and edges in the same format the KPI Tree Builder uses. You can paste that into the KPI Tree Builder import or use the metric_tree as the initial state. No manual edits are required for a valid tree; the North Star and input metrics appear as nodes with dimensions as groups. Then you can add formulas and run the scenario calculator there.
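A sketch of that export shape in TypeScript. Only the top-level keys (metric_tree with mode, metrics, and edges) come from the description above; the node and edge field names are assumptions:

```typescript
// Sketch of the exported JSON. mode/metrics/edges are documented above;
// the field details inside each node and edge are assumed.
interface MetricNode {
  id: string;
  name: string;
  group?: string; // e.g. a dimension: breadth/depth/frequency/efficiency
}

interface MetricTree {
  mode: string;                          // KPI Tree Builder mode
  metrics: MetricNode[];                 // North Star + input metrics as nodes
  edges: { from: string; to: string }[]; // parent-to-child links
}

interface NorthStarExport {
  metric_tree: MetricTree; // paste this object into the KPI Tree Builder import
}
```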

What is Workshop mode?

Workshop mode is for a 60–120 minute facilitated session. It provides an agenda and prompts: align on vision and value moment, pick the game, brainstorm candidates, run the scorecard, define inputs, assign owners and cadence. Same wizard steps, with facilitation guidance.

Can I change the time window (weekly vs monthly)?

Candidates are generated with a default time window (usually weekly). When you select a candidate and build the output, the definition uses that window. You can refine the metric name and definition in the North Star statement or later in the KPI Tree Builder.

Is the North Star Metric Finder free?

Yes. The tool is free, runs in your browser, and requires no login. You get a 6-step wizard (Solo or Workshop), 3 candidates and scorecard, 3–5 input metrics, and Copy/Markdown/CSV/JSON export. The JSON metric tree imports into the KPI Tree Builder. Autosave and shareable URL included.

What are breadth, depth, frequency, and efficiency?

Breadth: how many users or accounts get value. Depth: how much value per interaction (e.g. lessons per user, AOV). Frequency: how often they return (e.g. return rate, weekly active). Efficiency: reliability and success of value delivery (e.g. completion rate, on-time delivery). Map each input metric to one dimension.

Learn more with CraftUp

Courses, blog, and glossary for product and discovery skills.

From North Star to execution

Use CraftUp tools and courses to turn your North Star and input metrics into clear execution and learning loops.

Freshness

Last updated: 2026-03-05

  • 2026-03-05: Launched North Star Metric Finder: 6-step wizard, 3 games, scorecard, anti-vanity and lagging checks.
  • 2026-03-05: Input metrics 3–5 with breadth/depth/frequency/efficiency; metric tree export compatible with KPI Tree Builder.
  • 2026-03-05: Solo and Workshop modes; 3 loadable examples (B2C, B2B, Marketplace). Copy, MD, CSV, JSON export.