
RICE: a prioritization framework for product roadmaps

Updated: 2026-05-03

The RICE framework is a prioritization methodology developed by Intercom[1] to decide which initiatives enter a roadmap and in what order. Four factors — Reach, Impact, Confidence, and Effort — combine into a single score that compares heterogeneous projects on the same basis.

Key takeaways

  • RICE converts subjective priority debate into a conversation about measurable assumptions.
  • Intercom’s impact scale is discrete: 3 = massive, 2 = high, 1 = medium, 0.5 = low, 0.25 = minimal.
  • Confidence explicitly separates data-backed assumptions from informed guesses.
  • RICE doesn’t remove human judgment: it structures it.
  • For technical debt, strategic bets, and mandatory compliance initiatives, RICE is not the right framework.

What does RICE mean?

  • Reach: number of people affected by the initiative over a defined period — e.g., “2,000 users per month”.
  • Impact: how much each affected person’s experience changes. Intercom uses a discrete scale: 3 = massive, 2 = high, 1 = medium, 0.5 = low, 0.25 = minimal.
  • Confidence: percentage reflecting how solid the three previous numbers are — 100% backed by data, 80% medium-high, 50% informed guess.
  • Effort: total estimate in person-months of the full initiative (design, development, QA, launch).

The formula

RICE = (Reach × Impact × Confidence) / Effort

The result represents total impact weighted by confidence, per person-month invested. Higher score means higher relative priority.
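As a minimal sketch (the function name and signature are my own, not from any particular tool), the formula translates directly into a few lines of Python:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Confidence-weighted total impact per person-month invested.

    reach:      people affected per period (e.g. users/month)
    impact:     Intercom's discrete scale (3, 2, 1, 0.5, 0.25)
    confidence: fraction between 0 and 1 (e.g. 0.8 for 80%)
    effort:     total estimate in person-months
    """
    if effort <= 0:
        raise ValueError("effort must be a positive number of person-months")
    return (reach * impact * confidence) / effort

# 2,000 users/month, high impact (2), 80% confidence, 2 person-months
print(rice_score(2000, 2, 0.8, 2))  # → 1600.0
```

Keeping Confidence as a fraction (0.8, not 80) is what makes the score read as "weighted impact per person-month"; mixing percentages and fractions in the same sheet is the most common calculation error.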

Practical example

A SaaS product team weighing three candidates for the next quarter:

  • A · AI-guided onboarding: Reach 1,200 new users/month, Impact 2 (high), Confidence 80%, Effort 2 person-months → RICE = (1200 × 2 × 0.8) / 2 = 960.
  • B · Slack integration: Reach 400 users/month, Impact 1 (medium), Confidence 100%, Effort 1 person-month → RICE = (400 × 1 × 1) / 1 = 400.
  • C · Billing panel redesign: Reach 800 users/month, Impact 0.5 (low), Confidence 50%, Effort 1.5 person-months → RICE = (800 × 0.5 × 0.5) / 1.5 ≈ 133.

The execution order is clear: A (960) > B (400) > C (133). The most visible feature doesn’t always win; here, effort drags C below despite touching many users.
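The ranking above can be reproduced in a short script (the variable names are illustrative); sorting by score makes the execution order explicit:

```python
# Candidate initiatives: (name, reach, impact, confidence, effort)
candidates = [
    ("A · AI-guided onboarding", 1200, 2, 0.80, 2.0),
    ("B · Slack integration", 400, 1, 1.00, 1.0),
    ("C · Billing panel redesign", 800, 0.5, 0.50, 1.5),
]

# RICE = (Reach × Impact × Confidence) / Effort
scored = [
    (name, reach * impact * confidence / effort)
    for name, reach, impact, confidence, effort in candidates
]

# Highest score first: the quarter's execution order
for name, score in sorted(scored, key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.0f}")
# A · AI-guided onboarding: 960
# B · Slack integration: 400
# C · Billing panel redesign: 133
```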

Advantages over other frameworks

  • Comparability across heterogeneous initiatives. A UI tweak and a B2B integration land on the same scale.
  • Confidence is explicit. Forces separating “we’re sure this works” from “we think it’ll work” — something MoSCoW[2] or Kano[3] don’t do explicitly.
  • Penalizes shiny-object syndrome. A big feature with low confidence (say 30%) quickly scores worse than several well-backed small ones.
  • Executable in a spreadsheet. No tooling needed; Google Sheets or a Notion table is enough to start today.

Where RICE falls short

When not to use it:

  • Infrastructure or technical-debt work. Reach and Impact are hard to estimate for “migrate the database” — use Jobs-to-be-Done or a risk × cost-of-not-doing board instead.
  • Long-term strategic bets. A 2-year bet with low Confidence almost always scores poorly, when it may be precisely the most important.
  • Irreversible or compliance-driven decisions. If something is mandatory (GDPR, WCAG accessibility), it doesn’t compete in the same list. Just as in enterprise agent governance, mandatory work isn’t prioritized — it’s done.

For these cases, RICE coexists well with complementary frameworks applied by initiative type. Combining RICE for product initiatives with another criterion for infrastructure is a common pattern in mature teams. See also how product discovery with AI feeds the data that makes RICE more precise.

Ready-to-use template

| Initiative | Reach | Impact | Confidence | Effort | RICE  | Notes |
|------------|-------|--------|------------|--------|-------|-------|
| A · ...    | 1200  | 2      | 0.8        | 2      | 960   |       |
| B · ...    | 400   | 1      | 1.0        | 1      | 400   |       |

Modern product-management tools (Productboard[4], Airfocus[5], Notion[6], Linear[7]) ship pre-built RICE templates, but a shared Sheet is usually enough for small teams — and creates less adoption friction.

Conclusion

RICE doesn’t remove human judgment: it structures it. The real value isn’t the final number, but the conversation it forces when estimating each factor with the team. If two people score the same initiative 600 and 120, that gap is exactly the discussion needed before moving the ticket to “in progress”. The RICE score converts hidden assumptions into explicit hypotheses — and that alone justifies the exercise.

  1. Intercom
  2. MoSCoW
  3. Kano
  4. Productboard
  5. Airfocus
  6. Notion
  7. Linear

Written by

CEO - Jacar Systems

Passionate about technology, cloud infrastructure and artificial intelligence. Writes about DevOps, AI, platforms and software from Madrid.