Know the Good and the Bad to Avoid the Ugly—Notes on Building Metrics

Inspired by an interview question I recently got (Yes! I’m actively looking!), I decided to share some notes on this topic, since I’ve been dealing with metrics throughout my career (first financial metrics, and more recently product metrics).

As usual, TL;DR is provided at the bottom of the page.

Before I go deeper into what constitutes a good metric, I’d like to point out one thing that is even more important but won’t be covered in this essay: how to make better decisions with the data. I didn’t fully realize how hard that is until I went through a rollercoaster intrapreneurship experience myself. There were moments when the “facts” hinted at a likely outcome, but I was too emotionally attached to the product and the team to accept the looming failure. Generally speaking, this topic involves both rational and irrational factors, and it is worth a Nobel Prize as much as behavioral finance is, so I won’t be tackling such an ambitious subject in this essay.

To make it more digestible, I’ll present a set of criteria for good monitoring metrics, coupling each with the signals of “bad” metrics and the “ugly” real-world cases I experienced.

1. Good metrics should fit the stage and the nature of the business/product

Bad: 

  • using metrics designed for a mature business to evaluate early-stage startups, or vice versa;

  • using one-size-fits-all metrics across all types of businesses/products

Ugly Cases: 

You may think this is an easy mistake to avoid, but even industry leaders make it.

Partly because DAU was so widely used by investors to evaluate businesses, a company assigned a DAU goal to a group of emerging businesses, quite a few of which were still at the “customer development” stage, validating product-market fit. At that stage, user retention is usually low and steady growth almost nonexistent. Measuring progress by DAU prematurely led to aggressive user acquisition strategies and less attention on user retention, which should have been the focus. In the end, the resource-hungry user acquisition pushed DAU even higher than expected (which sounds good, right? The goal was DAU, after all). But after the resources ran out, we still hadn’t validated our value proposition.

Using the right metrics at the right time is really the first step to success. So make sure you choose the right “north star” for your business, and if you run a product portfolio, don’t try to use one-size-fits-all metrics.

2. Good metrics should come in layers

Bad: 

  • dumping all kinds of metrics on the team without prioritization;

  • using the same type of metrics, or even one single so-called “north star”, without breaking it down into intermediate metrics;

  • having only metrics that matter in the short term, but none that are strategically important for the long term

Ugly Cases:

There are two types of ugly cases: too many metrics without focus, and too few metrics without the necessary breakdowns. The problem with too many metrics is obvious: no one knows where the business stands, especially when the metrics move in different directions. So I’d like to focus on the second type of problem: having too few monitoring metrics.

While many people have emphasized the importance of having a “north star” metric, I’d like to make the case for monitoring products through decomposed intermediate metrics. You may think that, as a leader, focusing on one single ultimate metric will make all decisions simple and neat, but in reality it is almost impossible to locate a problem, form a hypothesis, or obtain effective experiment feedback without breaking the metric down to a certain level.

For example, if a video sharing platform only uses generic engagement metrics such as time spent on the platform, without breaking them down into separate metrics for watching behavior and uploading behavior, there’s no way to know which side of the marketplace is the bottleneck (though in consumer-facing marketplaces, the supply side is the more likely one).

Just as in the financial world we decompose return on equity into profit margin, asset turnover, and financial leverage, in product analytics we can break a top-line metric down into LTV, CAC, etc., and do all sorts of funnel analysis.
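To make the decomposition idea concrete, here is a minimal sketch (illustrative numbers and helper names of my own, not from any real product) showing how a headline metric factors into intermediate drivers, in both the financial DuPont style and a simple LTV/CAC form:

```python
# Illustrative sketch: decomposing a headline metric into intermediate drivers.
# All numbers and function names are hypothetical.

def dupont_roe(net_income, revenue, assets, equity):
    """DuPont decomposition: ROE = profit margin * asset turnover * leverage."""
    margin = net_income / revenue   # profitability
    turnover = revenue / assets     # efficiency
    leverage = assets / equity      # financing
    return margin * turnover * leverage

def ltv_to_cac(arpu_per_month, gross_margin, monthly_churn, cac):
    """Simple LTV/CAC ratio: lifetime value over acquisition cost."""
    ltv = arpu_per_month * gross_margin / monthly_churn  # geometric lifetime
    return ltv / cac

roe = dupont_roe(net_income=120, revenue=1000, assets=800, equity=400)
ratio = ltv_to_cac(arpu_per_month=5.0, gross_margin=0.6, monthly_churn=0.05, cac=30.0)
print(f"ROE = {roe:.2%}, LTV/CAC = {ratio:.1f}x")  # → ROE = 30.00%, LTV/CAC = 2.0x
```

The point is not the exact formulas but the shape: once the top-line number is a product (or funnel) of intermediate metrics, you can see which factor moved and act on that team’s lever.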

3. Good metrics should come in pairs

Bad:

  • focusing on “volume” only and overlooking “quality”

  • focusing on one side of a marketplace and overlooking the other

  • focusing on revenue only without paying attention to the margin

Ugly Cases:

Take talent partnerships on a video platform as an example. In the initial stage, we used the number of creators signed to measure the success of our talent partners, and as you can imagine we hit our “volume” goal perfectly, but quite a few creators weren’t producing “quality” content after they got on board, so the experience on the user end was still not good.

We fixed the issue by changing the goal from the number of creators signed to their share of video views, which let us measure the contribution of each talent we partnered with and, in turn, of all partners combined. However, not until we compared the per-video performance of partner creators to that of organic creators, whom we didn’t invest in beyond providing the tools (here’s another pair: partner creators vs. organic creators), did we realize we needed to improve the ROI of our partnerships.

Generally speaking, having key metrics come in pairs provides benefits such as: 1) avoiding distorted action plans in an OKR-driven cross-functional environment; 2) reflecting the opposing or competing priorities present in most businesses: volume vs. quality, supply side vs. demand side, revenue vs. margin, monetization vs. user experience, investment vs. shareholder dividends, …
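A paired readout can be as simple as never reporting the volume number without its quality counterpart. Here is a tiny sketch (hypothetical names and thresholds) in the spirit of the creator-partnership example above:

```python
# Hypothetical sketch: report a "volume" metric together with its paired
# "quality" guardrail, so neither can be gamed in isolation.

def paired_readout(creators_signed, quality_share, min_quality_share=0.30):
    """Return the headline volume plus a pass/fail on its paired guardrail.

    creators_signed: volume metric (e.g. creators onboarded this quarter)
    quality_share:   paired quality metric (e.g. their share of video views)
    """
    healthy = quality_share >= min_quality_share
    return {"volume": creators_signed, "quality": quality_share, "healthy": healthy}

# High volume, but the paired quality metric flags the goal as unhealthy:
print(paired_readout(500, 0.12))
```

The same pattern applies to any of the pairs listed above; the guardrail threshold is a judgment call per business.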

4. Good metrics should be comparable to your competitors, to your own history, and even among competing teams

Bad:

  • comparable data from competitors is not available

  • the way the metrics are calculated changes from time to time

  • there is no pattern, or even no change, in the historical time series whatsoever (the metric is insensitive to any impact)

  • using different metrics to measure internal teams that deliver similar value

Ugly Cases:

Comparing yourself to your old self is somewhat satisfying, especially when your business is young and crappy, because the baseline is so low. And internal teams can be so overwhelmed with “local optimization”, chasing one more percentage point of improvement. But we live in a world full of competition. There’s no way to know where you stand in the industry unless you look at how your competitors are doing and how fast they are improving (and even if you don’t, your investors will make sure you do). Therefore, having some metrics widely adopted by the industry is also important. For the consumer-facing content industry, time spent is still a universal language; in social media, DAU and ARPU are still the most common ones.

5. Good metrics should be aligned with the team structure or the processes, so that they’re actionable

Bad: 

  • the metrics cannot be broken down into action items for specific teams

  • the team is not structured in a way that its success can be measured by a certain metric

  • teams working on related metrics operate in silos

Ugly Cases:

In points 2 and 3 above, we talked about the importance of having enough intermediate metrics and even opposing pairs of metrics. To make sure we can act on the information, the team structure and the processes need to be fine-tuned to align with the metrics.

Take the famous AARRR funnel for example. If the teams aren’t structured in a way that somewhat matches this funnel, it will be hard to know where the bottleneck is and where to invest more. Especially for teams whose strength is not data analysis, empowering them with data and action plans at the same time can yield a great return.
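The mechanics of “find the bottleneck” can be sketched in a few lines. The stage counts below are made up for illustration; the point is that each funnel step maps to a team that owns the corresponding conversion rate:

```python
# Sketch with hypothetical numbers: locating the bottleneck in an AARRR funnel
# by finding the stage transition with the worst step-to-step conversion.

FUNNEL = [  # (stage, users reaching this stage)
    ("acquisition", 100_000),
    ("activation", 40_000),
    ("retention", 30_000),
    ("referral", 6_000),
    ("revenue", 3_000),
]

def bottleneck(funnel):
    """Return (transition, rate) for the lowest step-to-step conversion."""
    rates = [
        (funnel[i][0] + " -> " + funnel[i + 1][0], funnel[i + 1][1] / funnel[i][1])
        for i in range(len(funnel) - 1)
    ]
    return min(rates, key=lambda r: r[1])

stage, rate = bottleneck(FUNNEL)
print(f"worst step: {stage} ({rate:.0%})")  # → worst step: retention -> referral (20%)
```

With teams structured along the same stages, the worst transition immediately names the team (and the hypothesis space) to invest in.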

Another issue with a mismatch between metrics and processes is that teams can end up working in silos. There was a period when the teams in acquisition (user growth, social media) and activation (product, video ranking) worked in silos. While the product team was working hard to forge a certain user experience, the user growth and ranking teams were doing their own “local optimization”. The ads our users saw on social media could be quite different from the experience they got after onboarding: the content was different, and sometimes even the UI. By bringing together teams that would otherwise be far-flung, we were able to achieve growth in conversion and retention.


The list could go on and on, but these are my current top 5. If you are interested in sharing yours, don’t hesitate to reach out or comment!


TL;DR

Good metrics for a business usually have the following traits:

  1. fit the stage and the nature of the business/product. Choose the right “north star” for your business, and for a complex product portfolio, don’t try to apply one-size-fits-all metrics. The consequence of measuring an emerging business with metrics for mature products can be fatal.

  2. metrics come in layers: don’t dump metrics without focus, but also don’t provide too few metrics without the necessary breakdowns

  3. metrics come in pairs. Opposing or competing priorities are common in most businesses: volume vs. quality, supply side vs. demand side, revenue vs. margin, monetization vs. user experience, investment vs. shareholder dividends, … For a business to succeed, you need to strike a balance and find your sweet spot through metrics

  4. comparable to your competitors and to your own history. It’s self-explanatory.

  5. metrics are aligned with the team structure or the processes so that they’re actionable. Not structuring the team this way makes it hard to locate the bottleneck and to determine where to invest more. It’s also hard to improve conversion rates without bringing related teams closer together to provide a seamless experience.


References and Further Reading

— — — — — — — — — — — — — — — — — — — — — — — — 

Alistair Croll, Benjamin Yoskovitz, Lean Analytics: Use Data to Build a Better Startup Faster, O’Reilly Media, March 2013 (I just finished the preface and find it still super relevant even though 8 years have passed since it was published)

Andrew Chen, Benefit-Driven Metrics: Measure the lives you save, not the life preservers you sell (an interesting perspective on how to better set your north star to reflect the value you create for customers)
