Score your product backlog by Reach, Impact, Confidence, and Effort. See what to build first.
[Interactive RICE calculator: add features, score them on Reach, Impact, Confidence, and Effort, and see ranked priorities, quick wins, ICE comparison, and backlog stats.]
Scoring is step one. Prioritization is the whole story.
The Feature Prioritization skill takes your backlog, scores it with RICE, and produces a ranked scorecard with a narrative brief explaining the trade-offs and recommendations.
claude skill install feature-prio-skill
What Is RICE Scoring?
RICE is a prioritization framework that helps product managers make data-driven decisions about what to build next. Developed at Intercom, it replaces gut-feel prioritization with a repeatable scoring system that balances user impact against engineering cost.
Every feature in your backlog gets scored on four dimensions, and the formula (Reach × Impact × Confidence) ÷ Effort collapses them into a single comparable metric for ranking.
Reach — How many users or customers will this feature affect in a given time period? Measure in real numbers: users per quarter, customers per month. Avoid abstract scales.
Impact — How much will each affected user benefit? Use a consistent scale: 3 (massive), 2 (high), 1 (medium), 0.5 (low), 0.25 (minimal). Resist the urge to rate everything as "high."
Confidence — How sure are you about your Reach and Impact estimates? 100% means data-backed. 80% means informed guess. 50% means speculation. This penalizes wishful thinking.
Effort — How much work is required? Measured in person-months, person-weeks, or story points. Include design, engineering, QA, and rollout. Underestimating effort is the most common RICE mistake.
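The four dimensions combine into one number per feature. A minimal sketch in Python (the feature names, reach figures, and effort estimates are illustrative, not real data):

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    reach: float       # users affected per quarter
    impact: float      # 3, 2, 1, 0.5, or 0.25
    confidence: float  # 1.0 = data-backed, 0.8 = informed guess, 0.5 = speculation
    effort: float      # person-months, including design, QA, and rollout

def rice_score(f: Feature) -> float:
    # RICE = (Reach x Impact x Confidence) / Effort
    return (f.reach * f.impact * f.confidence) / f.effort

backlog = [
    Feature("SSO login", reach=4000, impact=2, confidence=0.8, effort=4),
    Feature("Dark mode", reach=9000, impact=0.5, confidence=1.0, effort=2),
    Feature("CSV export", reach=1500, impact=1, confidence=0.8, effort=1),
]

for f in sorted(backlog, key=rice_score, reverse=True):
    print(f"{f.name}: {rice_score(f):.0f}")
```

Note how the low-impact "Dark mode" still ranks first: huge reach and low effort can outweigh per-user impact, which is exactly the trade-off RICE is designed to surface.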
RICE vs ICE vs WSJF
ICE scoring uses three factors (Impact, Confidence, Ease) and is simpler to calculate, but it ignores Reach entirely: a feature that delights 10 power users can score the same as one that helps 10,000. RICE fixes this by making audience size explicit.
WSJF (Weighted Shortest Job First) comes from SAFe and divides cost of delay by job size. It's useful for teams already using SAFe ceremonies but adds overhead for most product teams. RICE hits the sweet spot: rigorous enough to be defensible, simple enough to actually use.
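The Reach blind spot is easy to see in code. A sketch (one common ICE formulation multiplies three 1-10 ratings; the numbers are illustrative):

```python
def ice(impact, confidence, ease):
    # One common ICE formulation: the product of three 1-10 ratings
    return impact * confidence * ease

def rice(reach, impact, confidence, effort):
    return reach * impact * confidence / effort

# Two features with identical per-user impact, confidence, and ease:
niche = dict(impact=8, confidence=7, ease=5)   # helps 10 power users
broad = dict(impact=8, confidence=7, ease=5)   # helps 10,000 users
print(ice(**niche), ice(**broad))              # 280 280 -- ICE can't tell them apart

# RICE makes audience size an explicit input and breaks the tie
print(rice(reach=10, impact=2, confidence=0.8, effort=2))      # 8.0
print(rice(reach=10_000, impact=2, confidence=0.8, effort=2))  # 8000.0
```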
Common RICE Mistakes
Inflating Reach to get the score you want — If you're not sure, lower Confidence instead. That's what it's there for.
Using Impact as a vote — Impact isn't "how much I like this idea." It's how much each affected user's experience will change.
Forgetting to include all effort — Design, documentation, QA, migration, and support costs all count. A "quick" feature that needs a migration plan isn't quick.
Scoring once and never updating — RICE scores reflect a point in time. Revisit quarterly as your data, customer mix, and strategic priorities change.
Treating the score as an absolute — RICE is a ranking tool, not a decision engine. Use it to surface the conversation, not end it.
When to Use RICE
RICE works best when you have a backlog of 5+ features competing for the same engineering resources and you need a defensible way to rank them. It's particularly valuable for quarterly planning, roadmap reviews, and stakeholder negotiations where "gut feel" won't cut it.
It's less useful for comparing fundamentally different types of work (infrastructure vs. features), for very early-stage products with no usage data, or when strategic bets require ignoring short-term scores in favor of long-term positioning.
Frequently Asked Questions
What is a good RICE score?
RICE scores are relative, not absolute. A score of 500 means nothing on its own — it only matters compared to other features in the same backlog. Focus on the ranking order, not the raw numbers. The spread between your highest and lowest scores tells you more than any individual score.
How do I estimate Reach without usage data?
Use customer segment sizes, support ticket volumes, or feature request counts as proxies. If a feature targets "all users," use your total active user count. If you're genuinely guessing, set Confidence to 50% — that's what it's for. A rough estimate with honest confidence beats a precise estimate with false certainty.
Can I use the same Impact scale across different types of work?
Yes. Consistency is more important than precision. A "high impact" (2) rating should mean the same thing whether you're scoring a new feature or a bug fix. Write down what each level means for your team and reference it during scoring. This prevents score inflation over time.
How often should I update RICE scores?
Re-score at natural planning cadences: quarterly for roadmap planning, monthly for sprint planning. Also re-score when you get significant new data — a spike in customer requests, a competitor launch, or a change in strategic direction. Stale scores lead to stale priorities.
How does RICE compare to a priority matrix?
A priority matrix (also called an Eisenhower matrix or value-effort matrix) plots features on two axes. It's visual and intuitive but only considers two dimensions. RICE considers four dimensions and produces a single ranked score. They complement each other well — this calculator shows both: a ranked score list and a priority matrix visualization.
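The two views combine naturally: use RICE score as the "value" axis and Effort as the other. A sketch of the quadrant logic (the median split and the quadrant labels are conventional choices, not this calculator's documented rules):

```python
from statistics import median

def quadrant(score, effort, score_cut, effort_cut):
    # Value-effort matrix: split the backlog on a score cut-off and an
    # effort cut-off (medians below); labels are conventional, not canonical
    if score >= score_cut and effort <= effort_cut:
        return "Quick win"
    if score >= score_cut:
        return "Big bet"
    if effort <= effort_cut:
        return "Fill-in"
    return "Time sink"

# name -> (RICE score, effort in person-months); illustrative numbers
scored = {"SSO": (1600, 4), "Dark mode": (2250, 2), "CSV export": (1200, 1)}
score_cut = median(s for s, _ in scored.values())
effort_cut = median(e for _, e in scored.values())

for name, (score, effort) in scored.items():
    print(name, "->", quadrant(score, effort, score_cut, effort_cut))
```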
Can I customize the RICE weights?
Yes. Switch to Advanced mode to enable weighted scoring. This lets you adjust the relative importance of each RICE factor. For example, if your team values confidence-backed decisions, increase the Confidence weight. If you're optimizing for broad adoption, increase the Reach weight. Default weights of 1.0 give you standard RICE.
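The calculator's exact weighting formula isn't documented here, so the sketch below is an assumed formulation: one common approach raises each factor to its weight, which reduces to standard RICE when every weight is 1.0.

```python
def weighted_rice(reach, impact, confidence, effort,
                  w_reach=1.0, w_impact=1.0, w_confidence=1.0, w_effort=1.0):
    # Exponent weights (an assumed formulation -- calculators differ):
    # weights of 1.0 on every factor reproduce standard RICE
    return (reach ** w_reach
            * impact ** w_impact
            * confidence ** w_confidence) / effort ** w_effort

standard = weighted_rice(4000, 2, 0.8, 4)                  # plain RICE
reach_heavy = weighted_rice(4000, 2, 0.8, 4, w_reach=1.2)  # rewards broad adoption
print(standard, reach_heavy)
```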
How many features should I score at once?
Score everything that's competing for the same resources in the same time period. For most teams, that's 5–20 features per quarter. Scoring fewer than 5 features doesn't give you meaningful ranking. Scoring more than 30 usually means your items aren't at the same level of granularity.