Measuring product success with data and metrics

Whether you are building a new feature or an entirely new product, it's critical to measure success. Measurement allows the product team to understand the current state, plan a new iteration with better targets, or even shelve the idea completely and work on something else. Qualitative methods like customer interviews and surveys help us identify users' pain points and learn about the strengths and weaknesses of our product, but to evaluate a product's success it's better to look at the data. Quantitative insights give us a bigger picture and a sounder judgment about product performance.

These insights must be available to everybody on the product team, including PMs, designers, and engineers. Some of the data may also be interesting to other stakeholders, management, and even clients.

Measurement may look like an easy job, but most of the time it's not as straightforward as we expect. Common problems include measuring the wrong metrics, lacking sufficiently precise data, collecting biased data, over-complicating the measurement, and simply getting lost in it. So how do we do it right? Here you go:

Define product success

Sometimes teams jump into the solution design stage too fast. Before planning any prototype, we must clarify which problems we want to fix and who the beneficiaries are. Only then can we define what success looks like. We should also have a clear understanding of the product and business strategies that will get us there; the strategy helps us define the goals for the current step. These definitions and clarifications belong at the very first step of building any product or feature. We should make sure that what we are trying to build moves us toward our ultimate goal and is aligned with our current product and business strategies.
For example, suppose we intend to build a video-sharing platform that lets everyone share their videos easily. Success can take different shapes. We can define the product as successful when it

  • is being used by everyone to share their short creative videos (TikTok)
  • empowers all professional content makers to share their content (Vimeo)
  • gives everyone a voice and shows everyone the world (YouTube)

As you can see, all three examples above describe a video-sharing product, but their definitions of success differ according to their purpose. None of these platforms achieved that success in a day; they may adopt different strategies every few years. In the example above, what success looks like at this stage depends on the strategy: does it focus on attracting more viewers or on getting more videos uploaded? On acquiring new users or on converting existing users into daily active users?

Find the key metrics to measure

To measure anything we need an indicator, which is why we must find the right metrics. Sometimes suitable metrics already exist; in many cases, however, we need to build a custom metric for the specific problem we are trying to solve. Let's start with our goals. Our definition of success gives us the ultimate goal, which should be measured by one key metric; Sean Ellis and Morgan Brown call it the North Star in their book Hacking Growth. Every other metric we define must help us improve our North Star. In addition to the North Star, we can define metrics based on our current strategy. For example, the total number of daily video views might be our North Star, but at this stage our strategy is to improve it by increasing the video supply, and to achieve that we want to redesign the video upload wizard. Then we have specific metrics to measure, such as the number of daily uploaded videos, the number of new video creators, or the churn rate of active video creators.

Defining the metrics alone is not enough. We should also set a minimum desired value for each indicator. This minimum expectation helps us decide, at the end of Product Discovery, whether the improvement justifies the effort and cost of building a scalable and reliable solution.

We should also not forget negative impacts and tradeoffs. Most of the time, building a new feature achieves one goal by sacrificing some other benefit. Try to identify the likely negative impacts and define metrics for those assumptions as well. In the example above, if a significantly higher number of video uploads degrades upload speed, we have a problem: it may end up reducing the number of uploads. Then we should probably also measure the average time spent on an upload and the total number of failed uploads.
When the product team defined these metrics, they had certain assumptions or questions in mind. In my experience, writing those questions down helps you later in the analyze-and-validate phase. At that stage, you may realize the defined metrics were not enough to answer your questions and you need new metrics to shed light on the discovery.
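
To make this concrete, here is a minimal sketch of how such a metric plan could be written down, with the North Star, the supporting metrics, and the guardrail metrics for the negative impacts side by side. All names and target numbers are hypothetical placeholders.

```python
# A sketch of a metric plan for the video-upload example above.
# Names and targets are made up for illustration.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    description: str
    target: float               # minimum desired value for this iteration
    is_guardrail: bool = False   # guardrail metrics must not get worse

north_star = Metric(
    name="daily_video_views",
    description="Total number of video views per day",
    target=1_000_000,
)

iteration_metrics = [
    Metric("daily_uploaded_videos", "Videos uploaded per day", target=5_000),
    Metric("new_video_creators", "First-time uploaders per day", target=300),
    Metric("creator_churn_rate", "Share of active creators lost per month", target=0.05),
    # Guardrails covering the negative impacts discussed above
    Metric("avg_upload_time_sec", "Average time to complete an upload", target=60, is_guardrail=True),
    Metric("failed_uploads", "Uploads that never complete, per day", target=100, is_guardrail=True),
]
```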

Build unbiased data for measurement

Now we have the metrics to measure and a goal for each metric. Do we have the right data? Sometimes it's simple: with a single query against the database we can find the total number of videos uploaded per day. But what about the total number of incomplete uploads? Do we even record that anywhere? If not, we have to find a way to track it. Sometimes we have all the records, but they sit in big data stores and huge logs; then, with the help of data engineers, we need to make the data visible and accessible enough to query and wrangle it.
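
As a rough illustration of the kind of query described above, the sketch below counts started, completed, and failed uploads per day. It assumes a hypothetical `uploads` table with `created_at` and `status` columns; adjust it to your own schema.

```python
# Count daily uploads by outcome from a hypothetical analytics replica.
import sqlite3

conn = sqlite3.connect("analytics.db")  # placeholder for your analytics database

daily_uploads = conn.execute(
    """
    SELECT date(created_at)           AS day,
           COUNT(*)                   AS started_uploads,
           SUM(status = 'completed')  AS completed_uploads,
           SUM(status = 'failed')     AS failed_uploads
    FROM uploads
    GROUP BY date(created_at)
    ORDER BY day
    """
).fetchall()

for day, started, completed, failed in daily_uploads:
    print(day, started, completed, failed)
```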

We should also remember that data is not reliable at every volume. We need to find out at what volume we can rely on our data to make the right decision. Time helps a lot here: it increases the quantity of data and exposes different patterns in the results. We should make sure our data is not biased and is affected only by the change we want to validate. For example, if everyone uploads more videos around New Year, we may not be able to attribute the extra uploads to the better wizard design. We should collect data over a longer period to make sure the result holds outside special occasions as well.
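
One simple sanity check, sketched below, is to compare the measurement window against the same calendar window a year earlier: if uploads also spiked last New Year, the spike is seasonal rather than evidence for the new wizard. The file and column names are hypothetical.

```python
# Year-over-year comparison as a rough seasonality check.
import pandas as pd

uploads = pd.read_csv("daily_uploads.csv", parse_dates=["day"])  # columns: day, completed_uploads

test_window = uploads[(uploads.day >= "2023-12-15") & (uploads.day <= "2024-01-15")]
last_year   = uploads[(uploads.day >= "2022-12-15") & (uploads.day <= "2023-01-15")]

lift = test_window.completed_uploads.mean() / last_year.completed_uploads.mean() - 1
print(f"Year-over-year change in daily uploads: {lift:+.1%}")
# If uploads rose just as much last New Year, the increase is likely seasonal.
```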

Analyze the data and validate assumptions

Remember the questions and metrics we defined at planning time? Now it's time to analyze the data and validate our assumptions. A/B tests are the best-known practice at this stage. Make sure the only difference between the versions of your application is the change you are testing; that's why it's important to run A/B tests at the same time, in the same region, and with similar user types. If you still can't validate your assumptions, you may need new metrics or even a new iteration; perhaps you should record new data and rerun the A/B test.
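
For example, a minimal way to check whether an A/B result is more than noise is a two-proportion z-test on the upload completion rates of the old wizard (A) and the new one (B). The counts below are made up for illustration.

```python
# Two-proportion z-test on A/B upload completion rates.
from statsmodels.stats.proportion import proportions_ztest

completed = [420, 510]        # completed uploads in variants A and B
visitors  = [10_000, 10_000]  # users exposed to each variant

stat, p_value = proportions_ztest(count=completed, nobs=visitors)
print(f"z = {stat:.2f}, p = {p_value:.4f}")

# A p-value below the significance level chosen up front (commonly 0.05)
# suggests the difference is unlikely to be random noise.
```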

Always try to measure the impact on the North Star. Don't forget: we don't want to hurt our North Star metric. Measuring the North Star impact also helps us verify the effectiveness of our product strategy. If we repeatedly see that improving the current metrics doesn't improve the North Star, it may be time to revise the strategy.

Although we are running a quantitative test, it's an excellent practice to interview some customers to understand their perspective on the data. We can find out what range they expect, or whether we missed another metric that matters to them.

Monitor the metrics regularly

Product success measurement is not a one-off task. Good product teams monitor their key metrics constantly. When you need to measure something more than once, consider the time and effort each measurement takes; that's why automating the data wrangling becomes important. The combination of automation and proper alerts helps us reach operational excellence by catching sudden issues before we hear any complaints from customers. If the data will only be used in a Product Discovery sprint, a simple Python script in a Jupyter notebook, or even an automated spreadsheet in Microsoft Excel, may be enough. But if business stakeholders or other teams need to check the data, we should use a proper BI tool to visualize it and make it accessible to everyone who needs it.
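
As a sketch of such an automated check, the script below pulls yesterday's completed uploads and posts an alert to a hypothetical webhook when the value falls below the expected threshold; scheduled daily (for example with cron), it catches drops before customers complain. The query, threshold, and URL are placeholders.

```python
# Daily metric check with a simple webhook alert. Schema, threshold,
# and webhook URL are hypothetical placeholders.
import sqlite3
import json
import urllib.request

THRESHOLD = 4_000  # minimum acceptable daily completed uploads

conn = sqlite3.connect("analytics.db")
(yesterday_uploads,) = conn.execute(
    "SELECT SUM(status = 'completed') FROM uploads "
    "WHERE date(created_at) = date('now', '-1 day')"
).fetchone()

if yesterday_uploads is None or yesterday_uploads < THRESHOLD:
    payload = json.dumps({"text": f"Daily uploads dropped to {yesterday_uploads}"}).encode()
    req = urllib.request.Request(
        "https://hooks.example.com/alerts",  # e.g. a chat tool's incoming webhook
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```
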
Usually, after measuring product success, we find new insights. These insights can spark new ideas or new iterations of the current product. If we are serious about creating an innovative environment for our teams, we must share these insights and learnings with relevant people outside the product team. Don't forget that data and the measurement of product success are not the exclusive responsibility of the people who build or maintain a product; other business departments share it too. Data visibility facilitates cross-team work and enables ordinary people to innovate and create products that solve real problems. I explained more about how to build an innovative work environment in Creating a Culture of Product and Innovation.
