Measuring Impact — Picking the right metrics in Product

Why picking the right metric to track success is so important, and yet so difficult

Picking a metric is easy. Picking the right metric is a lot harder. With the movement in the product community to ensure we’re building outcome-focused products instead of simply shipping features, the question of picking the right success metric comes up time and time again.

I’ve always loved using data, but over my career I have made plenty of mistakes in how I used data and measured things. Over time, thanks to working with amazing colleagues across product and analytics on different projects, I’ve built a better understanding of how different metrics work together and how we can better measure our product’s success.

A model for thinking about metrics

Making a change to your product is similar to throwing a stone into a still lake. Visualise it. Before you throw the stone, the water is smooth and untouched. When you throw the stone, you’ll see a big splash where it enters the water. From the epicentre, ripples spread outwards with decreasing size. The ripple next to where the stone landed is much bigger than the ripples further away. In fact, if you look far enough away from where you threw the stone, you may not see any ripples at all.

Making changes to your product affects metrics in a similar way. The stone (the change you’ve made) will generally have a more noticeable impact closer to where the change was made. The more removed a metric is from the change, the harder it will be for you to see an impact or movement.

For this article, let’s use an example from a B2C transactional product:

Example Case: We are the product team looking after the funnel for an accommodation rental company. We’ve seen there is a large drop-off in the funnel between the accommodation overview screen and the final booking screen (measured as click-through rate). We’ve learned from talking to customers that the current CTA (call to action) of “Book Now” feels too committal at this stage, so we want to test changing the copy to “Review Trip Booking.” We believe doing this will increase the CTR (click-through rate) on this screen.

There is a range of metrics from very close to the change (hovers on the CTA?) through to very far from the change (e.g. company share price). [NB: I’ve intentionally stretched the metric chain to this extreme for example purposes — I don’t believe share price is ever going to be the right success metric for a product development team.] In this example, which are the right metrics to track?

We have an underlying assumption that moving more traffic to the next step of the funnel, using the right driver, will consistently increase overall conversion. Here, we can use CTR (click-through rate) as a proxy for CVR (conversion rate), so for this experiment CTR is probably the right metric for three reasons.

  1. It helps you measure the problem you identified in the case above — a large drop between two steps of the funnel.
  2. The metric is extremely close to the change you’re making. This means it will be more sensitive to the change.
  3. The change is low risk and doesn’t affect other critical levers (e.g. pricing).

However, I’d also monitor CVR (conversion rate) to confirm we are driving the actual behaviour (a transaction) we want.
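
To make the pairing concrete, here is a minimal sketch of how both metrics might be computed from funnel counts. The event names and numbers are invented for illustration and aren’t tied to any particular analytics tool.

```python
# Minimal sketch: computing CTR (proxy metric) and CVR (outcome metric)
# from hypothetical funnel counts. All names and numbers are illustrative.

funnel = {
    "overview_screen_views": 50_000,  # users reaching the accommodation overview
    "cta_clicks": 9_000,              # clicks on the "Review Trip Booking" CTA
    "bookings_completed": 2_250,      # completed transactions
}

ctr = funnel["cta_clicks"] / funnel["overview_screen_views"]
cvr = funnel["bookings_completed"] / funnel["overview_screen_views"]

print(f"CTR (overview -> next step): {ctr:.1%}")  # 18.0%
print(f"CVR (overview -> booking):   {cvr:.1%}")  # 4.5%
```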

There are a few things we may witness from running this test. We may see movement in both metrics, in which case: yay, we’ve learned something and had an impact 🎉. However, we may also see a movement in CTR but not CVR, or no movement in either 🤔. In this situation, it’s about asking: are we solving the right problem but in the wrong way? Are we solving the wrong problem entirely? In either case, you and your team need to iterate on the next step to move your product forward.

📐 Recommended Tools Although iteration is not the focus of this article, two of the best frameworks I’ve used for iterative, learning-based development are the Product Kata from Melissa Perri and the Opportunity Solution Tree from Teresa Torres. Both provide methods for laying out what you know, where you want to be, and how you think you may get there. They’re a great boost to stakeholder comms too!

Factors affecting your metrics

Size of the impact

Throwing a smaller stone will have less of an impact. This reduces the influence on nearby metrics and makes downstream impacts even less visible. Here, a smaller stone doesn’t simply mean a smaller change, so don’t assume that a minor copy change is low impact while a new feature will be high impact. Impact is about how well the change solves a problem for a group of users or customers, and that depends on how well you and the team understand the problem space of the users you’re supporting.

Remember, changes with a larger impact close to the source will tend to have a more visible impact further out. Smaller-impact changes will be harder to detect the further out you look.

Where the metric lake gets murkier

You may be asking: in a B2C transactional model, wouldn’t you always just want to optimise for CVR? Well, no, not necessarily. Conversion rate is only one piece of the picture of making the business successful. CVR tells us how likely a user is to convert, but not the value of that transaction — the AOV (average order value).

Pricing is one area of product development that adds a lot of complexity to picking the right metric. Let’s take discounts as one aspect of pricing. Using discounts as a conversion incentive will likely help your CVR metric (yay) but negatively impact AOV (average order value). In the short term, this means you may reduce your revenue unless your CVR lift is high enough to offset that drop. Can your unit economics support that, and is it a trade-off your organisation wants to make?
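
To see that trade-off in numbers, here is a rough back-of-the-envelope sketch. All figures are invented; the point is that revenue per visitor is CVR × AOV, so a discount only pays off if the CVR lift covers the AOV drop.

```python
# Back-of-the-envelope sketch of the discount trade-off. All numbers are
# invented for illustration. Revenue per visitor = CVR * AOV.

baseline_cvr, baseline_aov = 0.045, 200.0  # 4.5% conversion, 200 per order
discount = 0.10                            # 10% discount applied to orders

baseline_rev = baseline_cvr * baseline_aov  # 9.00 per visitor

# CVR needed just to break even on revenue after the AOV drop:
breakeven_cvr = baseline_rev / (baseline_aov * (1 - discount))
lift_needed = breakeven_cvr / baseline_cvr - 1

print(f"Revenue per visitor today: {baseline_rev:.2f}")
print(f"CVR needed to break even:  {breakeven_cvr:.2%} "
      f"(a {lift_needed:.1%} relative lift)")  # 5.00%, +11.1% lift
```

And even this break-even point ignores margin: on unit economics the bar is higher still, because the discount comes straight out of your contribution margin.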

Then there is the further complexity of the long-term impact of this change. Are you training your customers to expect a discount, potentially hurting revenue in the long term? There is also a risk of increasing rates of fraud or abuse, or of driving behaviour that is difficult to turn into long-term profit.

Some companies do this intentionally. Domino’s Pizza UK is priced at a premium but offers an assortment of offers and deals to incentivise conversion. Many of the drop-shipping companies with “amazing discounts” operate in a similar way. This works because the product is strategically priced to support that model. However, if you’re making these changes, you may want to look at a cohort level over a longer term — at metrics such as ARPU (average revenue per user) or CLTV (customer lifetime value).
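
As a small illustration of what looking “at a cohort level” can mean in practice, here is a sketch that computes ARPU per signup cohort. The data and column names are invented; in a real analysis you would compare cohorts acquired with and without the discount.

```python
# Sketch: ARPU by signup cohort. Data and column names are invented; the
# point is to view revenue per user cohort-by-cohort rather than as one
# blended number.
import pandas as pd

orders = pd.DataFrame({
    "user_id":      [1, 1, 2, 3, 3, 4],
    "signup_month": ["2023-01", "2023-01", "2023-01",
                     "2023-02", "2023-02", "2023-02"],
    "order_value":  [90.0, 60.0, 120.0, 80.0, 40.0, 100.0],
})

cohort = (orders.groupby("signup_month")
                .agg(revenue=("order_value", "sum"),
                     users=("user_id", "nunique")))
cohort["arpu"] = cohort["revenue"] / cohort["users"]
print(cohort)  # 2023-01: ARPU 135.0, 2023-02: ARPU 110.0
```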

The complexity with our lake analogy is that these metrics further from the centre of the ripple also have other factors affecting them.

So now what is the right metric to look at? CVR, AOV, ARPU or something else? And over what period of time should we look at these metrics? Higher-level metrics get more complex at this point because there are many different factors affecting them.

Creating a map of your metrics using something like a KPI or Driver Tree will help you and your teams visualise some of this complexity. I’d recommend reading more about this concept in the link highlighted below, but essentially the metrics near the root of the tree are the higher-level ones (further from your change and less sensitive), while the drivers at the leaves sit closest to the work you do.

💡 You can use KPI or Driver Trees to help map out the different factors affecting a particular metric. You can read more in this article: Driver Trees — How and Why to use them to Improve your Product.
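
As a toy illustration of the idea (a sketch, not the framework from the linked article), a driver tree can be as simple as a nested structure with the headline metric at the root and its drivers as children:

```python
# Toy driver tree: revenue decomposed into its drivers. The structure and
# metric names are illustrative assumptions, not a prescribed model.

driver_tree = {
    "metric": "revenue",
    "drivers": [
        {"metric": "sessions"},
        {"metric": "conversion_rate (CVR)",
         "drivers": [{"metric": "click_through_rate (CTR)"},
                     {"metric": "checkout_completion_rate"}]},
        {"metric": "average_order_value (AOV)",
         "drivers": [{"metric": "items_per_order"},
                     {"metric": "price_per_item"}]},
    ],
}

def print_tree(node, depth=0):
    """Walk the tree: the deeper the node, the closer it sits to day-to-day work."""
    print("  " * depth + node["metric"])
    for child in node.get("drivers", []):
        print_tree(child, depth + 1)

print_tree(driver_tree)
```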

Feedback loops

The other factor to consider in all of this is how long we need our feedback loop to be. Agile, and agility, is fundamentally about feedback loops: how quickly do you learn from the actions or changes you make?

👤 This concept of agile and feedback loops was introduced to me by Mohammed Rizwan. He’s one of the clearest thinkers around agility and agile frameworks that I’ve ever had the privilege to work with. He writes Agile Bites (on Medium) and you can also follow him on LinkedIn.

As a rule of thumb, lower-risk initiatives where you’re micro-optimising more granular metrics will have shorter feedback loops. The time taken to see change in these metrics will generally be shorter, so your learnings will come faster. Higher-risk initiatives dealing with higher-level metrics may need to run longer (months? quarters?) until you’re confident you understand what the actual behaviour change is.
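
One way to put rough numbers on the length of a feedback loop is a standard two-proportion sample-size calculation. The sketch below uses the textbook formula; the baseline CTR, target lift and traffic figures are all assumptions.

```python
# Rough sketch: how long must an A/B test run to detect a CTR lift?
# Standard two-proportion sample-size formula; all inputs are assumptions.
from math import ceil, sqrt
from scipy.stats import norm

p1 = 0.18                      # baseline CTR
p2 = 0.189                     # CTR we hope to reach (+5% relative lift)
alpha, power = 0.05, 0.80

z_a = norm.ppf(1 - alpha / 2)  # ~1.96
z_b = norm.ppf(power)          # ~0.84
p_bar = (p1 + p2) / 2

n_per_arm = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
              + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
             / (p2 - p1) ** 2)

daily_visitors_per_arm = 2_500  # assumed traffic per variant
print(f"~{ceil(n_per_arm):,} users per arm, "
      f"~{ceil(n_per_arm / daily_visitors_per_arm)} days")  # ~29k, ~12 days
```

Notice how quickly the required runtime grows as the effect you are chasing gets smaller or the metric gets rarer, which is exactly why higher-level metrics need longer loops.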

Generally speaking, you should aim to run many low-risk experiments and a handful of higher-risk experiments. This comes with the universal product caveat of “it depends”, based on the phase of your product and the priorities and capabilities of your organisation.

💡 You can read more about the “Initiative Pyramid” concept in 6 diagrams I use for explaining Product Management concepts.

Remember: experimentation, testing and effective product development are fundamentally about feedback loops. These loops have two parts — it is helpful to learn what works (or doesn’t), but to iterate effectively you need to gain an understanding of why it worked. This gives you a jumping-off point for the next step of development.

Picking the wrong metric

One of the big mistakes I made earlier in my career was to try to tie everything to CLTV (customer lifetime value). CLTV is an aggregate metric that combines the acquisition cost of a customer with the total profit (revenues less costs) of that customer. I operated on the assumption that because this was a key goal of the business, I should be able to tie everything to it. There were a few problems with this position:

  1. Some changes that are valuable won’t make a visible impact on this metric.
  2. I overlooked more experiential metrics that were also important.
  3. CLTV is a lagging indicator, meaning it takes time to report. This may unnecessarily elongate the feedback loops of any learning cycles we wanted to run.

Don’t get me wrong — CLTV absolutely has a place and should be something the product teams collectively track, but it won’t be the definition of success for every piece of work you undertake. I could have spent more time finding the leading metrics — either those that were future predictors of success (aspects of engagement with account management, for example) or experiential metrics (number of technical errors, availability of inventory/choice, for example).

💡 Leading and lagging metrics are a way of grouping metrics by whether they tell us what will happen (future predictors of success) or what has happened (e.g. revenue). I highly recommend Tim Herbig’s article on the topic.

Evolving metrics

The model above has focused on more common metrics — they’ve been reasonably generic. However, the step change comes when you can evolve these metrics. Over time, you gain a deeper understanding of the relationship between different metrics and the behaviour of your users.

You will start to understand which metrics are future predictors of success beyond the basic ones outlined above. Consider the quote below:

“After all the testing, all the iterating, you know what the single biggest thing we realized? Get any individual to 7 friends in 10 days. That was it.”

This is the often-quoted Facebook “aha” metric shared by Chamath Palihapitiya (Facebook’s first VP of Growth). It allowed the Facebook Growth team to drive longer-term retention. The metric itself is simple once you have the understanding and the confidence that it is a future predictor of success. However, building up the deep knowledge to understand whether that is the right metric (and why) is much more complex.

A similar, less cited example is how Netflix measured its personalisation strategy as a retention driver. Their key metric was the percentage of people that rated at least 50 movies in a month. This metric, they found, drove the success of personalisation and, in turn, retention.
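
Once you have a candidate “aha” metric like either of these, the first sanity check is conceptually simple: compare retention between users who crossed the threshold and those who didn’t. Here is a sketch with invented data and column names; a real analysis would also need to worry about correlation versus causation, since highly engaged users may both make friends quickly and retain for other reasons.

```python
# Sketch: testing a "7 friends in 10 days" style aha-metric against
# retention. The DataFrame and column names are invented for illustration.
import pandas as pd

users = pd.DataFrame({
    "user_id":            [1, 2, 3, 4, 5, 6],
    "friends_in_10_days": [9, 2, 7, 0, 12, 3],
    "retained_at_day_90": [True, False, True, False, True, True],
})

users["hit_aha"] = users["friends_in_10_days"] >= 7
retention = users.groupby("hit_aha")["retained_at_day_90"].mean()

print(retention)
# hit_aha
# False    0.33  <- retention of users below the threshold
# True     1.00  <- retention of users at or above the threshold
```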

Recommended Listen: 🎙 Former Netflix VP of Product Gibson Biddle via Melissa Perri’s Product Thinking

Gaining this understanding is key to the long-term health and success of your business. For companies, be they B2C or B2B, e-commerce, SaaS or social, retention is key. But as outlined above, retention is a lagging indicator, and the lag differs across industries (high-frequency e-commerce may be measured in weeks, subscription services like Netflix and gyms in months, and SaaS products in years). These metrics take time to show movement — lengthening your feedback loops and reducing your rate of iteration. Maturing your understanding of your metrics to learn how you can predict the future success of your customers will help you build better products, faster.

The truth about metrics

There is ambiguity in the work we do as product teams. It isn’t easy to build successful products. Listing metrics in a slide deck isn’t going to magically reveal, in a beacon of light, how to make the product successful. That’s not how it works.

There is a trend in the industry now to be a data-driven Product Manager. However, that isn’t a role that operates in a vacuum. It’s a skill that builds collective knowledge in your domain through a close working relationship with your data counterparts — quantitative analysts, qualitative researchers — and your team. The best Product Analysts and User Researchers I’ve worked with are not the ones that simply do a good job on the exact question I’ve asked, but the ones that act as true thought partners. Together you ask what you are really trying to understand and why it is important.

Remember, every metric has a context, and alone it can’t give you the complete picture. In an e-commerce world, you may see a metric like items removed from cart. One assumption may be that reducing this number will increase AOV (average order value); another take may be that customers are struggling to compare multiple products and are using the cart as a workaround.

📚 Recommended Read One of the best books I’ve read in the last few years around data and statistics is Daniel Levitin’s A Field Guide to Lies and Statistics. It is an excellent read on how data can be (mis)collected, (mis)interpreted and (mis)represented, which I found useful not just as a product person but also generally as a human being.

Considerations

Traditionally, I prefer writing articles that offer the reader (👋) tangible tips or even templates for what they can do next. With the metrics topic, it’s a bit more complex. However, there are some prompts I keep in mind when thinking about the right metrics for the products I work with.

  1. How large an impact do we believe this change will make on the product? Generally speaking, smaller-effort changes (copy, images) have a smaller impact than larger changes (new features, pricing tiers), but this is a guide, not a rule. Even small-effort changes like copy can have an outsized impact if you’re addressing a real problem.
  2. Does this metric act as a reasonable proxy for the outcome you really want to drive? CTR (click-through rate) is often used as a proxy metric for CVR (conversion rate) as there is usually a correlation (more people moving through the funnel should result in more people converting), but that doesn’t mean every single thing that increases CTR will increase CVR.
  3. What risk does this metric not measure? As in the pricing example, leveraging discounts will affect CVR but doesn’t account for the financial implications of the change. How do you ensure you have visibility of the broader impact?

Conclusion

The main thing I’d like you to take away from this article is that picking the right success metric is not always easy or obvious, and that’s okay. There is ambiguity in the work we do as product teams (which in my mind is what makes it fun!). By paying close attention and working together as a team, you can start to figure out what works best. Remember the lake and the stone, and reflect with your team on which ripple is the right one to measure the thing you really want to achieve.

Product Lead at @DeliveryHeroCom. Formerly @HelloFresh, @BBC, @Atos. Passion for product, business & tech. I like helping people solve problems. Berlin
