Idea in short

New ideas can come from anywhere, both internal and external to the organization. Customers, engineers, sales and marketing personnel, partners, and even competitors can inspire product or service ideas. As a Product Manager, identifying which ideas will be the most beneficial for the business is challenging. Frameworks and models such as Kano and DFV help identify features that customers love. Yet, a structured and quantified method helps Product Managers make objective, data-driven trade-offs.

The RICE scoring model is a prioritization framework that helps Product Managers determine which products, features and other initiatives to put on their roadmaps. The model scores these items according to four factors, which form the acronym RICE:

  1. Reach
  2. Impact
  3. Confidence, and
  4. Effort

Scoring models

Using a scoring model such as RICE offers product teams a three-fold benefit:

  1. It enables Product Managers to make informed, data-driven decisions,
  2. It eliminates or minimizes personal biases in decision making, and
  3. It establishes and communicates priorities with other stakeholders based on tangible business benefits

Origin

Messaging-software maker Intercom developed the RICE prioritization model to improve its own internal decision-making processes. Although the company’s product teams were aware of and had employed several other prioritization models, its Product Managers struggled to find an approach that worked for Intercom’s unique set of competing project ideas. To address this challenge, the team developed its own scoring model based on the aforementioned four factors. In addition, it developed a formula for quantifying and combining them. This formula provides a single score that can be applied consistently across disparate ideas, giving the team an objective way to decide which initiatives belong on its product roadmap, and in what order.

RICE scoring model

To use the RICE scoring model, you evaluate each of your ideas (new products, product extensions, features, etc.) by scoring them across the four factors using the following formula:

Feature Score = (Reach × Impact × Confidence) / Effort

All features can then be ranked based on their scores: the higher the score, the better the feature. On top of the RICE formula, you can add another factor for even more granularity. This external factor reflects what the competition is doing (see the sketch after this list):

  • 3 = basic feature
  • 2 = feature to be on par with the competition
  • 1 = feature to differentiate from the competition
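
To see how the formula plays out in practice, here is a minimal sketch in Python. The rice_score helper, the sample backlog, and the choice to fold the competition factor in as an extra multiplier are illustrative assumptions, not part of the original model:

    def rice_score(reach, impact, confidence, effort, competition=1):
        # Confidence is expressed as a fraction (1.0 = 100%).
        # The optional competition factor (1-3) adds the extra granularity
        # described above; leaving it at 1 gives the plain RICE formula.
        return (reach * impact * confidence * competition) / effort

    # Hypothetical backlog: feature -> (reach, impact, confidence, effort)
    backlog = {
        "Feature A": (150, 3, 1.0, 2),
        "Feature B": (360, 1, 0.8, 4),
    }

    # Rank features from highest to lowest score
    for name, factors in sorted(backlog.items(),
                                key=lambda kv: rice_score(*kv[1]),
                                reverse=True):
        print(f"{name}: {rice_score(*factors):.1f}")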

Reach

This factor represents the number of customers/users that would be impacted by the feature within a defined period of time. It helps avoid bias towards features that you would use yourself. To score it, you have to decide:

  • What reach means in this context, and
  • The timeframe over which you want to measure it

You can choose any time period, such as a month, a quarter, etc. Then, you can decide what the reach refers to, such as the number of customer transactions, free-trial signups, or how many existing users try your new feature.

Your reach score will be the number you’ve estimated. For example, if you expect your project will lead to 150 new customers within the next quarter, your reach score is 150. On the other hand, if you estimate your project will deliver 1,200 new prospects to your trial-download page within the next month, and that 30% of those prospects will sign up, your reach score is 360.

Example
Project 1: 100 customers reach a point in the signup funnel each month, and 20% choose this option. The reach is 100 × 20% × 3 = 60 customers per quarter
Project 2: Every customer who uses this feature (about 100 per quarter) will see this change. The reach is 100 customers per quarter
Project 3: This change will have a one-time effect on 1000 existing customers, with no ongoing effect. The reach is 1000 customers per quarter

Use real measurements from product metrics as much as possible instead of doctoring the numbers.
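
As a quick sketch of how such an estimate can be computed, the funnel arithmetic from Project 1 above translates directly into code (the variable names are illustrative):

    # Project 1: funnel traffic per month, option take-rate, quarterly horizon
    monthly_visitors = 100     # customers reaching this point each month
    take_rate = 0.20           # share who choose this option
    months_per_quarter = 3

    reach = monthly_visitors * take_rate * months_per_quarter
    print(reach)  # 60.0 customers per quarter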

Impact

This factor represents the degree to which the new feature or initiative will help customers use your product/service. To focus on projects that move the needle on your goal, estimate the impact of each individual initiative. Impact could reflect a:

  • Quantitative goal (such as how many new conversions your project will result in when users encounter it), or
  • Qualitative objective (such as increasing customer delight)

Sometimes, even when using a quantitative metric, measuring impact will be difficult, because you won’t necessarily be able to isolate your new project as the primary or only reason why users take action. And if measuring the impact of a project after you’ve collected the data is difficult, estimating it beforehand is an even greater challenge.

Intercom developed a five-tiered scoring system for estimating a project’s impact:

  • 3 = Massive Impact
  • 2 = High Impact
  • 1 = Medium Impact
  • 0.5 = Low Impact
  • 0.25 = Minimal Impact

Example
Project 1: For each customer who sees it, this will have a huge impact. The impact score is 3.
Project 2: This will have a lesser impact for each customer. The impact score is 1.
Project 3: This is somewhere in-between in terms of impact. The impact score is 2.
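
Because the tiers are discrete, they can be kept as a shared lookup table so that everyone scores against the same scale. A minimal sketch (the table and its labels are illustrative):

    IMPACT_TIERS = {
        "massive": 3,
        "high": 2,
        "medium": 1,
        "low": 0.5,
        "minimal": 0.25,
    }

    impact = IMPACT_TIERS["massive"]  # e.g. Project 1 from the example above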

Confidence

This factor represents your level of confidence in the accuracy of the metrics and estimates defined for a particular feature. The confidence component of the RICE score helps control for projects in which the team has data to support one factor of the score but is relying more on intuition for another.

For example, if you have data backing up your reach estimate, but your impact score represents more of a gut feeling or anecdotal evidence, then the confidence score will help account for this.

In other words, this factor curbs enthusiasm for exciting but ill-defined ideas. If you think an initiative could have huge impact, but don’t have data to back it up, the confidence factor lets you control for that.

As it did with impact, Intercom created a tiered set of discrete percentages to score confidence, so that its teams wouldn’t get stuck trying to decide on an exact percentage between 1 and 100. When determining your confidence score for a given project, your options are:

  • 100% = High Confidence
  • 80% = Medium Confidence
  • 50% = Low Confidence

Example
Project 1: We have quantitative metrics for reach, user research for impact, and an engineering estimate for effort. This project gets a 100% confidence score
Project 2: I have data to support the reach and effort, but I’m unsure about the impact. This project gets an 80% confidence score
Project 3: The reach and impact may be lower than estimated, and the effort may be higher. This project gets a 50% confidence score
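
Since confidence enters the formula as a multiplier, a 50% score literally halves an otherwise identical project’s total. A small illustrative sketch (the tier table and sample numbers are assumptions):

    CONFIDENCE_TIERS = {"high": 1.0, "medium": 0.8, "low": 0.5}

    # Two otherwise identical projects: low confidence halves the score
    base = (100 * 2) / 4                    # reach x impact / effort
    print(base * CONFIDENCE_TIERS["high"])  # 50.0
    print(base * CONFIDENCE_TIERS["low"])   # 25.0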

Effort

This factor represents the difficulty (and thus the feasibility) for the organization to implement the feature: an initiative may require many months of development, larger teams, or skills the organization does not have internally. Effort is measured in person-months and forms the denominator of the RICE scoring model.

In other words, if you think of RICE as a cost-benefit analysis, the other three components are all potential benefits, while effort is the single score that represents the costs.

Quantifying effort in this model is similar to scoring reach: you estimate the total amount of work (product, design, engineering, testing, etc.) needed to complete the initiative over a given period of time, typically in person-months, and that is your score.

In other words, if you estimate a project will take a total of three person-months, your effort score will be 3. Intercom scores anything less than a month as a 0.5.

Example
Project 1: This will take about a week of planning, 1-2 weeks of design, and 2-4 weeks of engineering time. I’ll give it an effort score of 2 person-months
Project 2: This project will take several weeks of planning, a significant amount of design time, and at least two months of one engineer’s time. I’ll give it an effort score of 4 person-months
Project 3: This only requires a week of planning, no new design, and a few weeks of engineering time. I’ll give it an effort score of 1 person-month
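
Putting the four factors together for the three running example projects gives the following worked calculation (a sketch using the scores estimated in the sections above):

    # project -> (reach per quarter, impact, confidence, effort in person-months)
    projects = {
        "Project 1": (60, 3, 1.0, 2),
        "Project 2": (100, 1, 0.8, 4),
        "Project 3": (1000, 2, 0.5, 1),
    }

    for name, (reach, impact, confidence, effort) in projects.items():
        score = (reach * impact * confidence) / effort
        print(f"{name}: {score:g}")
    # Project 1: 90, Project 2: 20, Project 3: 1000

Despite its lower confidence, Project 3 wins on sheer reach and low effort; surfacing exactly this kind of trade-off in a single, comparable number is what the model is designed for.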
