
Measuring Success

Forecasting is easy, but measuring is hard

For the OrderUp product team, forecasting and measuring are two of our primary responsibilities. I’ve found that, with a bit of discipline, forecasting the expected results of a project is a fairly straightforward process. Measuring, on the other hand, can turn out to be much more difficult.


(User traffic with annotations of what’s being tested. Source: Google Analytics)

Why measure?

Our team always has far more initiatives than time to pursue them. For the product team, every change we make to the site can affect a large number of different metrics. Without a strong set of data and a cohesive strategy to digest that data, it would be nearly impossible for us to know whether or not we’re moving the ship in the right direction.

Most changes that we make are not huge shifts in the way that users and restaurants interact with our software; for the most part they are much more contained. But all of these small changes add up: something as modest as increasing our average order amount by $0.50 translates to a large amount of money for our partners at scale.
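To put that in concrete terms, here’s a quick back-of-envelope sketch. The daily order volume below is a made-up number for illustration, not our actual figure:

```python
# Back-of-envelope illustration; the volume figure is hypothetical,
# not OrderUp's actual order count.
orders_per_day = 10_000        # assumed daily orders across the platform
ticket_lift = 0.50             # the $0.50 bump in average order amount

annual_impact = orders_per_day * ticket_lift * 365
print(f"Extra annual volume for partners: ${annual_impact:,.0f}")
# -> Extra annual volume for partners: $1,825,000
```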


(A clickmap, showing where most of our users are clicking. Source: CrazyEgg)

How to measure.

Given a large number of possible choices which each yield an incremental return, how does one measure the efficacy of these changes?

The first step in our process is to have a strong idea of what the purpose of the change is. Who does it affect, what problem does it solve, and which specific aspect of our customers’ lifecycle does it address? For example, there are a number of features we’ve implemented to increase the number of first-time users, adding more total users to our base. Other features have been tested to increase the frequency of orders a user places, increasing the lifetime value of our users.

Since there are multiple variables, it’s important to decide up front which of them you expect to be affected. Once you’ve established that, it’s much easier to instrument analytics against that metric. For example, if we’re expecting to increase the amount of food a user orders at one time, we’re going to specifically segment out the ticket average. For a deeper dive into how we instrument A/B tests for customer-facing changes, take a look at Oren’s recent post.
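As a rough sketch of what instrumenting against the ticket average can look like, here’s a minimal example that groups orders by test variant. The event fields and variant names are hypothetical, not our actual schema:

```python
from collections import defaultdict

# Hypothetical order events tagged with the A/B variant the user saw.
# Field and variant names are illustrative, not OrderUp's real schema.
orders = [
    {"variant": "control", "ticket": 18.40},
    {"variant": "control", "ticket": 22.10},
    {"variant": "upsell_prompt", "ticket": 24.75},
    {"variant": "upsell_prompt", "ticket": 19.90},
]

totals = defaultdict(lambda: [0.0, 0])  # variant -> [sum, count]
for order in orders:
    totals[order["variant"]][0] += order["ticket"]
    totals[order["variant"]][1] += 1

for variant, (total, count) in totals.items():
    print(f"{variant}: average ticket ${total / count:.2f} over {count} orders")
```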

Once that target metric is identified, we put tracking in place to measure that specific change. We have a large number of tools at our disposal, ranging from Google Analytics to more specialized products like Crazy Egg and Kissmetrics, and we’ll sometimes even go directly to the database to measure changes. The critical step is to make sure you have a way to track the change before you get started.
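When we do go straight to the database, the query itself is usually simple. Here’s a self-contained sketch using an in-memory SQLite table as a stand-in for the real orders table; the schema, dates, and amounts are all made up:

```python
import sqlite3

# Illustrative only: an in-memory table standing in for the real
# orders table, whose schema is internal. Data is fabricated.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (placed_at TEXT, amount REAL)")
db.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("2015-05-01", 18.40), ("2015-05-02", 22.10),
     ("2015-06-01", 24.75), ("2015-06-02", 19.90)],
)

launch_date = "2015-06-01"  # hypothetical date the change shipped
for label, op in [("before", "<"), ("after", ">=")]:
    (avg,) = db.execute(
        f"SELECT AVG(amount) FROM orders WHERE placed_at {op} ?",
        (launch_date,),
    ).fetchone()
    print(f"Average ticket {label} launch: ${avg:.2f}")
```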

Even if the change we’re making is subjective, we still make a huge effort to measure what effect our change has made. If we’re fixing an issue that affects how our brand is perceived, we may need to hit the street and interview our restaurants or users directly to get their honest feedback.

Finally, we can take all the data we’ve captured and compare it against the baseline to see how successful we were.
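At its simplest, that comparison is a lift calculation plus a sanity check that the difference isn’t just noise. Here’s a minimal sketch, assuming you’ve exported per-order ticket amounts for the baseline and test periods; the numbers are placeholders:

```python
from statistics import mean
from math import sqrt

# Placeholder samples of per-order ticket amounts; in practice these
# come from the tracking set up earlier, over far more orders.
baseline = [18.40, 22.10, 19.75, 20.30, 17.95]
test     = [21.20, 23.80, 20.90, 24.10, 22.45]

lift = mean(test) - mean(baseline)
print(f"Observed lift in average ticket: ${lift:.2f}")

def var(xs):
    # Sample variance.
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Welch's t-statistic as a rough noise check; with real traffic you'd
# run a proper test (e.g. scipy.stats.ttest_ind) on the full dataset.
t = lift / sqrt(var(test) / len(test) + var(baseline) / len(baseline))
print(f"Welch's t-statistic: {t:.2f}")
```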


(Time to transaction tracking, showing some users need to visit the site multiple times before placing an order. Source: Google Analytics)

How this benefits our partners.

While this process takes a bit of time to set up and continuously maintain, we’ve found it to be a great investment once we take all that data and analyze it. By sticking to this system, we have confidence that the most important issues are being addressed and that any changes we make add value for everyone in our ecosystem.

We find that this system works really well, and we urge everyone to find the version of it that works for them. The next time you kick off a project, launch a marketing campaign, or sign up for a new service, ask yourself: who benefits, how will they benefit, and how can you measure that change?
