How to: Score up the value of a project
Reach, Conversion Rate, and Confidence Level are all you need.
When determining what projects should be on the roadmap, it’s valuable to have some way to quantify your beliefs and, through that quantification, determine whether projects are actually worthy (or just appear to be at face value).
Scoring is the process through which you can add some rigor to your roadmap building and, as a result, improve your odds of success. Overall, scoring brings a lot of positive dynamics to your team:
It forces you to define value: Scoring pushes you to describe value in tangible terms. This adds rigor to the way projects are selected and how they extract value. Over time, this habit will improve your ability to identify high-upside projects, and it might also spread to others across your organization.
It helps shape projects in ways that improve their odds of success: By scoring projects, you’ll more consistently identify why some projects are unlikely to succeed and mitigate those risks ahead of time.
It creates clarity and alignment about the value you’re trying to unlock: Scoring allows everyone to understand the value to be extracted by doing the project by clearly describing that value. With that understanding, others can better form opinions about how best to tackle the project and debate whether that value is worth the investment in the first place.
With all this in mind, I want to propose a simple exercise you can use to score projects quickly.
Value = Reach * Conversion Rate * Confidence Level
A project’s value is almost always defined by the combination of two factors:
The addressable market of your solution (Reach)
The conversion rate of that market to your solution (Conversion Rate)
Ultimately, these two variables are nearly all you need to score any project. There are many frameworks out there (most of which work); however, I’ve found that with these two variables I have 90% of what I need to pick projects with a reasonable degree of accuracy.
The final variable is Confidence Level, which, simply put, mitigates opinion and ego. By putting reach, conversion rate, and confidence level together you have something that easily describes the value of any given project (as long as that project is defined by a metric of some kind).
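To make the formula concrete, here’s a minimal sketch in Python (the function name and signature are my own, not part of any standard framework):

```python
def score(reach, conversion_rate, confidence_level):
    """Value = Reach * Conversion Rate * Confidence Level.

    reach: addressable entities over the chosen time period
    conversion_rate: fraction of reach the solution will win (0-1)
    confidence_level: evidence-based discount on the estimate (0-1)
    """
    return reach * conversion_rate * confidence_level
```

For instance, `score(20000, 0.5, 0.67)` yields roughly 6,700 — the shape of the signup example worked through later in this post.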
Let’s dig into these variables a bit more closely:
Reach
Reach describes the market/opportunity size of your solution. Reach is not tied to impact; instead, it quantifies the number of addressable entities for your project.
Reach is measured in the number of people, events, or hours of time impacted over a specific period. The time period is variable and generally depends on your company’s payback period for initiatives.
Sometimes a project’s reach spans more than one dimension; for example, some projects improve retention while also improving organizational efficiency. I often score such projects against each dimension separately to put forth the clearest argument for their value.
Conversion rate:
Conversion rate measures the % of your reach that the solution will actually solve for. Conversion rate (or win rate) is essentially your gut feeling about how effective your solution will be.
Discussions about conversion rates often lead to realizations about your ability, or lack thereof, to extract value.
This ends up being the heart of value discussions as reach tends to be more easily agreed upon. If everyone agrees on the reach, conversion rate is then a tangible way to talk through impact for a project and align expectations.
Conversion rate is also a tangible metric that can be validated (through experimentation, research, etc.), which will often let you improve a project’s odds even after it’s been rejected upfront.
Confidence level:
Confidence level is an easy way to ensure that opinions are weighted against good practices.
The confidence level is set based on whether you’ve undergone specific activities that derisk the project and, as a result, make you more confident that the project’s value is what you say it is.
These are the options available for Confidence level:
Supported by Experiment / Existing Feature = 90%
Supported by Feedback / Product Data = 67%
Supported by Market Analysis = 45%
Supported by Vision Alignment = 23%
As you’ve probably surmised from the above, the fuzzier a project’s foundation of insight, the more its value expectation suffers. This creates an incentive to derisk projects by pursuing activities that naturally increase confidence in your problem and its execution.
Confidence level can be an especially good tool for getting past situations where the highest-paid person’s opinion wins. Projects that are backed only by opinion will naturally score worse and be harder to defend, whereas projects that have been derisked in some way will score better.
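To keep these tiers consistent across a team, they can be captured as a simple lookup. This is just a sketch; the key names are my own shorthand for the evidence types listed above:

```python
# Confidence tiers from the table above, keyed by the strongest
# evidence supporting the estimate (key names are my own shorthand).
CONFIDENCE_LEVELS = {
    "experiment_or_existing_feature": 0.90,
    "feedback_or_product_data": 0.67,
    "market_analysis": 0.45,
    "vision_alignment": 0.23,
}

def confidence_for(evidence):
    """Look up the confidence level for the kind of evidence behind a score."""
    return CONFIDENCE_LEVELS[evidence]
```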
Example Scoring:
Let’s put these variables into practice. Below are 3 different projects scored using the above variables:
1. Signup Optimization Project:
Let’s say you’re thinking about a project related to optimizing a key step within your signup process.
Reach:
In this situation, reach is the total number of people who attempt to sign up but don’t complete it within a time period.
More specifically, let’s say it’s the number of people who bounce during a specific step that you’ve identified is problematic. Let’s say per year it’s 20,000 people.
Reach Used: 20,000 over 12 months.
Conversion Rate:
In this situation, the conversion rate is the expected percentage of people who currently attempt to sign up but don’t complete it, and who would now complete it because that step’s friction is removed.
Let’s say that you believe that you can convert about 50% of the people you’ve identified in reach by addressing the issues with that step.
Conversion Rate Used: 50%.
Confidence Level:
Let’s say you’ve done a funnel analysis of signup and drop-off points, you’ve localized the problem with data, and you have a sense of why it happens, but you haven’t really tested it or talked to customers.
Confidence Level Used: In this situation, you’re using the 67% confidence level as your score is supported by product data.
Final Score:
20,000 people * 50% of those converting via signup * 67% confidence level = 6,700
If you pursued this project you’d generate 6,700 incremental signups every year.
2. Value Expanding Feature Project:
Let’s say you’re building a feature that expands the value of your business by introducing a way to solve a problem you haven’t addressed.
Reach: In this situation, reach is the total number of existing or prospective customers that could become customers of this new solution over the course of 1 year.
Let’s say it’s 500 new customers at an average ARR value of $10,000: 500 * $10,000 = $5M.
Reach Used: $5 Million in ARR signed up within 1 year.
Conversion Rate: In this situation, conversion rate is % of existing or new customers that will become customers of this new solution.
Let’s say you believe that within a year, you can convert 10% of those customers to this new product.
Conversion Rate Used: 10%
Confidence Level:
Let’s say you’ve looked at the market, you have a sense of competitors doing this and of it being desired, but you’ve never connected the dots on whether your customers would actually care to have this.
Confidence Level Used: In this situation, you’re using the 45% confidence level as your score is only supported by market analysis.
Final Score:
5,000,000 * 10% of those converting to the new solution * 45% confidence level = 225,000
If you built this new solution you’d be able to generate $225k in additional ARR in the first year of launching it.
3. Internal Efficiency Feature Project:
Let’s say you’re building a feature that reduces the duration of a certain internal operations process.
Reach:
In this situation, reach is the total number of hours that the internal process currently takes to perform per quarter. Let’s say it’s 300 hours quarterly.
Reach Used: 300 hours per quarter.
Conversion Rate:
In this situation, conversion rate is the expected % of hours that are removed as a result of the solution.
Let’s say you believe you can get 70% of those hours removed with this solution.
Conversion Rate Used: 70%
Confidence Level:
Let’s say you’ve done the internal process yourself and hacked together a new way to do it that the solution will be based on.
Confidence Level Used: In this situation, you’re using the 90% confidence level as your score is supported by an experiment (you’ve already validated the new way of working yourself).
Final Score:
300 hours * 70% of those hours removed * 90% confidence level = 189
If you pursued this project you’d remove 189 hours of operations work per quarter.
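Putting the three worked examples side by side, a quick sketch reproduces each final score (variable names are mine; note that each score keeps the units of its own reach, so the numbers aren’t directly comparable across projects):

```python
def score(reach, conversion_rate, confidence_level):
    return reach * conversion_rate * confidence_level

# 1. Signup optimization: people per year, product-data confidence (67%)
signup = score(20_000, 0.50, 0.67)
# 2. Value-expanding feature: ARR dollars, market-analysis confidence (45%)
feature = score(5_000_000, 0.10, 0.45)
# 3. Internal efficiency: hours per quarter, experiment confidence (90%)
internal = score(300, 0.70, 0.90)

print(round(signup), round(feature), round(internal))  # 6700 225000 189
```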
Scoring Related Notes:
There are a lot of important ideas that pertain to scoring (too many to share in one post). The following are some of the most important learnings I felt were worth including in this article:
Scoring doesn’t replace doing PM work.
Scoring can easily become a crutch for poor product management, as it can give a false sense of rigor and sophistication. Ultimately, someone who has a clear vision, business sense, a deep understanding of customers, and a strong team can build an effective roadmap without ever scoring a project.
To mitigate this risk, at different times I’ve encouraged my team to drop the rigor and trust their beliefs. In many cases, if there’s a deep belief in the project built on a strong knowledge base (one primarily developed through customer contact), then you should probably just do the project regardless of how it scores.
Scoring isn’t what determines which projects get done and which ones don’t.
I originally believed that the purpose of scoring was to decide which projects should get done and which ones shouldn’t.
I soon realized that when you pursue a highly regimented scoring approach, all you’re left with is measurable work, and unfortunately a lot of really important work (brand sentiment, experience quality, system health, debt creation/mitigation, to name a few) is immeasurable or hard to measure.
A project’s score within a framework is not the deciding element for whether it should be done; it is a data point in your consideration and an entry point for everyone else to help validate your project’s value.
Cost isn’t a part of this model because it does not impact value; it impacts return on investment instead.
The model does not go into timing or cost as those are properties that define return on investment. Conversation around those variables is important but when evaluating a project I prefer to start with a pure value assessment since doing investment management analysis for a project that has no upside, and as a result that you’ll never do, is a waste of time.
Once a project is scored you should be able to have thorough conversations about the worth and timing of projects and the necessary adjustments to extract value.
Standardization can be counterproductive:
A lot of frameworks try to compare projects with totally different goals against one another by consolidating value into an abstract score (if no one knows what the number means then it’s not a useful number).
I’ve found this to be detrimental to my ability to identify value and pick the right projects. It’s really hard to compare apples to oranges; if you do, all you’re doing is following whatever natural bias you have towards either apples or oranges.
I prefer just talking about value and making decisions on what the business values more over trying to translate that value into some opaque metric and building a dependency on that metric to make decisions.
Don’t score for the sake of scoring.
Not every project needs scoring and in fact scoring can be highly detrimental to your ability to get certain projects through even when they’re overwhelmingly the right decision.
Scoring is effective for large projects with clear, tangible value. When you have strategic partnerships, key customer pain points, small iterations, or other hard-to-measure opportunities, you should likely revisit your need to score, as the scoring framework might unfairly punish these fuzzier items.
For projects that don’t score well but that I’m still going to do, I won’t have an actual metric value for doing it because that wasn’t why the project made it to the roadmap. It made it to the roadmap because of my belief in its value and I’d rather express that value the best I can rather than make up a value number I don’t believe in.
Re-Scoring is as valuable as scoring.
Once projects are done, do your best to re-score them with the actual results. This can be an incredibly insightful process for you and your team, as you’ll learn more about the nature of your product and roadmap and why things do or don’t work.
This type of review will help you refine the types of projects you go for and sharpen your instincts for which projects work and which ones don’t.
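As a sketch of what a re-scoring pass might look like (the project names and “actual” numbers here are invented purely for illustration), comparing predicted scores against measured results makes the calibration gap explicit:

```python
# Hypothetical re-scoring data: predicted score vs. measured outcome.
rescored = {
    "Signup Optimization": {"predicted": 6700, "actual": 5200},
    "Internal Efficiency": {"predicted": 189, "actual": 205},
}

for name, result in rescored.items():
    delivered = result["actual"] / result["predicted"]
    print(f"{name}: delivered {delivered:.0%} of the predicted value")
```

Over time, a log like this shows whether your conversion rates and confidence levels are systematically optimistic or pessimistic for certain kinds of projects.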