It is generally known that only about 30% of experiments lead to a successful outcome. This is the nature of the game; after all, it is an experiment: if you knew it would be a success, there would be no need to test.
So if the probability of success is so low, why bother running an experimentation program at all? Well, the fact that 70% of your experiments have no winner does not mean there is no value in them. It is extremely important to capture the value of your CXO program and know how to explain it to senior management. In the worst-case scenario, you have to live through a number of unsuccessful experiments before you can show an uplift. This will test the patience of senior management and increase the pressure on the CXO team to deliver results.
If the pressure gets too high, the CXO team might opt for low-risk experiments: experiments that can be considered best practices, where the outcome is sure to have a positive impact. This will help the CXO program survive, but pleasing management will kill long-term CXO success.
Here are three ways to demonstrate the value of your CXO program (even if the outcome is negative):
1 – Report on Risk avoided
Include the metric “Risk avoided” in your dashboard and reporting. If the variant outperforms the control in an A/B test, you have a winner and you can calculate the uplift in revenue for the period the experiment was running. You can even include the additional revenue it will generate over time, including a diminishing effect, once the winning variant is implemented.
The opposite of uplift is risk avoided. In this scenario, the control performs better than the variant. Thanks to the brilliant minds of the CXO team, the new variant was tested instead of being put live just because it looked so sexy. The users have spoken, so you now know it is better to stick with the control and not implement the variant. Because you don’t implement the variant, you avoid a loss in revenue. You have avoided risk.
At the level of a single experiment this might not be a huge amount of money; however, summing up all the amounts related to risk avoided over time, it becomes significant. Besides the money that was not wasted on implementing the wrong variant, “risk avoided” will raise awareness of the importance of experimentation. However good or sexy an idea might seem, letting users make the final judgement is the better option.
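The arithmetic behind uplift and risk avoided can be sketched as follows. This is a minimal illustration, not a prescribed method: the visitor counts, conversion rates, and average order value are entirely hypothetical, and a real program would also account for statistical significance.

```python
# Illustrative sketch: uplift vs. "risk avoided" for a simple A/B test.
# All figures (visitors, conversion rates, order value) are hypothetical.

def revenue_delta(visitors, control_cr, variant_cr, avg_order_value):
    """Revenue difference (variant minus control) over the test period,
    assuming each arm received the same number of visitors."""
    control_revenue = visitors * control_cr * avg_order_value
    variant_revenue = visitors * variant_cr * avg_order_value
    return variant_revenue - control_revenue

delta = revenue_delta(visitors=10_000, control_cr=0.030,
                      variant_cr=0.025, avg_order_value=50.0)

if delta >= 0:
    # Variant won: report the extra revenue as uplift.
    print(f"Uplift: {delta:.2f} extra revenue during the test")
else:
    # Control won: not shipping the variant prevents this loss,
    # which is exactly the "risk avoided" amount to report.
    print(f"Risk avoided: {-delta:.2f} revenue loss prevented")
```

Summing the negative deltas across all "losing" experiments gives the cumulative risk-avoided figure for the dashboard.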
2 – Put learning first
Every experiment reveals a puzzle piece of user behaviour and preferences. One puzzle piece will not give you deeper insights or allow you to complete the puzzle. But with many experiments in your pocket (or better, of course, in your CXO library), the contours become clear. The more puzzle pieces fall into place, the better you understand the end result. The desire to learn and understand user behaviour is perhaps the most important force in your journey towards an optimal user experience.
3 – Draft new hypothesis
In some cases, it is not immediately clear why the outcome of an experiment was negative, especially when you did everything right: you researched user behaviour, used the available data, defined a strong hypothesis and designed a great solution for it. Expectations are high, and you just wait for the A/B test to finish so you can shovel in those extra dollars.
Much to your surprise, the expected uplift is nowhere to be seen. But this is an opportunity to learn, so you dig into the results and crunch the numbers, yet no meaningful learning can be extracted. This sucks. However, there is still value in this experiment. Store it in your CXO knowledge base: it is very likely that when you later combine this “failed” experiment with the results and learnings of other experiments, you will see a new trend in user behaviour. A new insight reveals itself and allows you to draft a new hypothesis.
We all like winners…
We all like winners, but winning is the outcome of a structured, repetitive process. If you are not comfortable accepting the nature of an experimentation program (winners and losers), then don’t run a program, and accept the fact that you will miss out on the opportunity to learn what your customers really want.
This series is inspired by an article published by GO Group Digital. Check out the other posts in this series on Why experimentation programs fail: