This opportunity solution tree builder is made for product discovery in the Teresa Torres style, not for generic decision trees: teams connect outcome metrics to opportunities, solutions, and experiments with clear evidence.
No login. Runs in your browser. We do not store your inputs.
Keyboard shortcuts: with a title input selected, Enter adds a child node and Ctrl+Enter adds a sibling.
Step 1
Define one measurable outcome, then add opportunity branches from customer evidence.
Step 2
Map candidate solutions and assumption tests under each branch while scoring opportunity quality.
Step 3
Pick one target opportunity path, align stakeholders, and export decision-ready artifacts.
The Opportunity Solution Tree is a product discovery structure that starts with one outcome, branches into opportunities, then maps candidate solutions and assumption tests. The outcome defines the measurable target. Opportunities describe customer problems. Solutions are ideas to address those opportunities. Experiments validate assumptions before full implementation. This sequence keeps teams from jumping straight to feature delivery before confirming which problem branch is truly worth solving first.
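The four-level hierarchy can be sketched as a small typed model that rejects level skips. The type and field names here are illustrative, not the builder's actual schema:

```typescript
// Illustrative node model for an opportunity solution tree.
// Names are hypothetical, not the builder's real data format.
type NodeKind = "outcome" | "opportunity" | "solution" | "experiment";

interface TreeNode {
  kind: NodeKind;
  title: string;
  children: TreeNode[];
}

// Each level may only parent the next level down.
const allowedChild: Record<NodeKind, NodeKind | null> = {
  outcome: "opportunity",
  opportunity: "solution",
  solution: "experiment",
  experiment: null,
};

function addChild(parent: TreeNode, child: TreeNode): boolean {
  // Reject attempts to skip a level, e.g. a solution directly under the outcome.
  if (allowedChild[parent.kind] !== child.kind) return false;
  parent.children.push(child);
  return true;
}
```

Enforcing the parent-child rule in the data model is what keeps the tree from degrading into a free-form mind map.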
Use metric language with baseline, target, and timeframe. Example formats: "Increase activation from 32% to 42% by Q3", "Reduce month-1 churn from 18% to 12% in two quarters", or "Increase trial-to-paid conversion from 7.8% to 11% by Q4". Avoid wording like "improve onboarding" because it does not create a measurable success boundary for discovery decisions.
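A rough lint for the baseline-target-timeframe rule can be written as a heuristic check. This is a sketch, not the tool's validation logic; it only looks for two numeric values plus a timeframe token:

```typescript
// Heuristic (illustrative only): a measurable outcome statement should
// contain at least two numbers (baseline and target) and a timeframe
// token such as a quarter label or a month/week count.
function looksMeasurable(statement: string): boolean {
  const numbers = statement.match(/\d+(\.\d+)?%?/g) ?? [];
  const hasTimeframe =
    /\bby Q[1-4]\b|\bquarters?\b|\bmonths?\b|\bweeks?\b/i.test(statement);
  return numbers.length >= 2 && hasTimeframe;
}
```

Under this check, "Increase activation from 32% to 42% by Q3" passes while "Improve onboarding" fails, mirroring the guidance above.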
Opportunity branches should come from customer interviews, product analytics, support patterns, and sales objections. Add an evidence note and up to three links so each branch has traceable context. To avoid duplicates, use shared tags and merge near-equal nodes during weekly cleanup. If two branches describe the same behavior with different wording, consolidate them before scoring.
Start by comparing sibling opportunities with OpportunityScore (importance × frequency × confidence), then validate evidence quality before committing. The target opportunity should represent the best balance of impact potential and confidence in the problem signal. Once selected, use stakeholder view to align on top solutions and planned experiments for that specific path.
Symptom: Tree becomes a feature backlog.
Cause: Opportunity nodes are phrased as solutions.
Fix: Rewrite opportunities as customer struggles backed by evidence.
Symptom: Priority debates never converge.
Cause: Scores are entered without evidence quality checks.
Fix: Require evidence notes before scoring opportunities.
Symptom: Experiments never start.
Cause: Nodes miss hypothesis, metric, or timebox fields.
Fix: Treat those fields as mandatory before planning.
Symptom: Outcome does not guide decisions.
Cause: Outcome statement is broad or non-measurable.
Fix: Use one metric, one baseline, and one deadline.
Symptom: Duplicate opportunities keep appearing.
Cause: No shared tags or cleanup cadence.
Fix: Tag branches consistently and merge duplicates weekly.
Symptom: Target opportunity changes every meeting.
Cause: No explicit target selection rule.
Fix: Lock one target per cycle and re-open only with new evidence.
Symptom: Exports are hard to read.
Cause: Node titles are long and inconsistent.
Fix: Keep titles concise and move detail to notes.
Symptom: Large trees feel slow to manage.
Cause: Everything stays expanded all the time.
Fix: Collapse inactive subtrees and use search jump for navigation.
Yes. The Opportunity Solution Tree Builder is fully free and does not require login. You can create a complete product discovery tree, score opportunities, pick a target path, and export markdown, CSV, JSON, and visual snapshots in one session. There is no paywall for the core workflow.
No. The tool runs in your browser and saves state only in your own localStorage for resume convenience. CraftUp servers do not store your tree nodes, notes, evidence links, or experiment plans in this lite mode. You can wipe local state at any moment with the Clear data action.
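A minimal sketch of this browser-local persistence model, assuming a hypothetical storage key and payload shape (in the browser, `store` would be `window.localStorage`):

```typescript
// Minimal localStorage-style persistence sketch. The key and payload
// shape are assumptions, not the builder's actual implementation.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
  removeItem(key: string): void;
}

const STORAGE_KEY = "ost-tree"; // hypothetical key

function saveTree(store: KVStore, tree: unknown): void {
  store.setItem(STORAGE_KEY, JSON.stringify(tree));
}

function loadTree(store: KVStore): unknown {
  const raw = store.getItem(STORAGE_KEY);
  return raw === null ? null : JSON.parse(raw);
}

function clearTree(store: KVStore): void {
  store.removeItem(STORAGE_KEY); // what a "Clear data" action would do
}
```

Because everything lives in one key, wiping local state is a single `removeItem` call and nothing ever leaves the browser.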
It is Teresa Torres style product discovery, not a generic decision tree. The flow is constrained to Outcome, Opportunities, Solutions, and Assumption tests or experiments. This structure keeps teams connected to measurable outcomes and evidence, instead of drifting into broad brainstorming without prioritization discipline.
Mind maps are flexible, but they rarely enforce discovery-specific levels, scoring fields, evidence notes, and experiment planning. This builder is intentionally opinionated so teams can compare opportunities consistently, select one target branch, and produce exports that are immediately usable for roadmap and stakeholder reviews.
Yes. Share URL creates a compressed snapshot that rebuilds the tree in a fresh browser session. You can also export markdown for async review in docs and chat threads. Most teams combine both: share link for context, markdown for decision history and meeting records.
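The rebuild-from-URL idea can be sketched with base64-encoded JSON in the fragment. The tool describes a compressed snapshot, so it likely uses a real compressor; plain base64 here only illustrates the round trip, and `btoa` assumes Latin-1-safe titles:

```typescript
// Sketch only: encode a tree as JSON in a URL fragment and rebuild it.
// The actual tool likely applies real compression before encoding.
function encodeSnapshot(tree: unknown): string {
  const json = JSON.stringify(tree);
  // encodeURIComponent keeps +, /, and = safe inside a URL fragment.
  return "#tree=" + encodeURIComponent(btoa(json));
}

function decodeSnapshot(fragment: string): unknown {
  const match = fragment.match(/^#tree=(.+)$/);
  return match ? JSON.parse(atob(decodeURIComponent(match[1]))) : null;
}
```

Keeping the payload in the fragment (after `#`) means it is never sent to a server, which matches the no-server-storage model described above.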
Each opportunity supports optional importance, frequency, and confidence values from one to five. The tool computes OpportunityScore as importance multiplied by frequency multiplied by confidence. Use score to compare sibling opportunities under the same parent, then validate with evidence quality before making final prioritization calls.
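The scoring rule as described reduces to one multiplication over three 1-to-5 values, so scores range from 1 to 125:

```typescript
// OpportunityScore = importance × frequency × confidence,
// each an integer from 1 to 5 as described above.
function opportunityScore(
  importance: number,
  frequency: number,
  confidence: number
): number {
  for (const v of [importance, frequency, confidence]) {
    if (!Number.isInteger(v) || v < 1 || v > 5) {
      throw new RangeError("each value must be an integer from 1 to 5");
    }
  }
  return importance * frequency * confidence;
}
```

For example, an opportunity rated importance 5, frequency 4, confidence 3 scores 60; sorting siblings by this value gives the initial comparison order before the evidence-quality pass.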
Use one hypothesis, one method, one success metric, and one timebox. Keep each experiment narrow enough for one discovery cycle. If an experiment node lacks these fields, it usually becomes a vague delivery task. Tight definitions make learning loops faster and easier to communicate across product, design, and engineering.
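Treating those four fields as mandatory can be expressed as a simple completeness check. The field names are illustrative, not the builder's schema:

```typescript
// Illustrative experiment record: one hypothesis, one method,
// one success metric, one timebox. Field names are hypothetical.
interface Experiment {
  hypothesis?: string;
  method?: string;
  successMetric?: string;
  timeboxDays?: number;
}

// Returns the list of fields still missing before planning can start.
function missingFields(e: Experiment): string[] {
  const missing: string[] = [];
  if (!e.hypothesis?.trim()) missing.push("hypothesis");
  if (!e.method?.trim()) missing.push("method");
  if (!e.successMetric?.trim()) missing.push("successMetric");
  if (!e.timeboxDays || e.timeboxDays <= 0) missing.push("timeboxDays");
  return missing;
}
```

An experiment only moves to planning when `missingFields` comes back empty, which is exactly the gate the pitfall list above recommends.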
Yes. The builder is designed to stay responsive with at least two hundred nodes when you use collapse and search features. Search jump helps you navigate quickly, while subtree collapse keeps rendering light. Stakeholder view and export functions still work even when maps become deeply nested.
Start with higher sibling scores, then verify evidence quality and strategic relevance to the current outcome. Target selection is a commitment tool: it focuses the team on one path from outcome to experiments. Revisit target only when new evidence materially changes confidence or problem frequency.
Absolutely. Markdown and CSV exports are formatted for planning docs, roadmap conversations, and client updates. JSON keeps full structure for tooling handoffs, and snapshots help explain branches visually. The goal is to move from discovery thinking to decision-ready artifacts without recreating the work manually.
Use CraftUp workflows to move from opportunity mapping to execution-ready experiments.
Last updated: 2026-03-05