Conversion optimisation can be complex. There might be a million reasons why the effectiveness of your program is not living up to expectations and your experiments just don't deliver the desired uplift. Googling for conversion optimisation mistakes returns over 3 million results, and the top ten pages alone list a hundred or so mistakes every CRO makes. Talk to a CXO expert about possible pitfalls and you end up discussing false positives, statistical significance or the difference between causality and correlation. Although that can be an interesting discussion, improving effectiveness might be easier than you think. But first, let's have a look at the most common conversion optimisation mistakes.

The most common mistakes fall into one of these categories:

1 – Ignoring factors that affect your conversion rate

Examples are a slow web page, no clear value proposition for the user, seasonality, and traffic from bots or from channels that deliver poor-quality traffic (such as affiliates). Lacking the traffic or conversions to even run a test is also a common mistake.


2 – Poor data quality

Not having a sound analytics implementation makes your CXO program useless. If you can’t trust the data, you can’t trust the outcome of your experiments.
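One cheap sanity check is to reconcile the conversions your analytics tool reports against the orders your backend records for the same period. The Python sketch below is an illustration only: the function name, the 5% tolerance and the daily totals are invented assumptions, not a prescription.

```python
def reconcile(analytics_conversions: int, backend_orders: int,
              tolerance: float = 0.05) -> bool:
    """Flag the data as untrustworthy when the analytics tool and the
    backend disagree by more than `tolerance` (5% here, an assumption)."""
    if backend_orders == 0:
        return analytics_conversions == 0
    discrepancy = abs(analytics_conversions - backend_orders) / backend_orders
    return discrepancy <= tolerance

# Hypothetical daily totals: 487 conversions in analytics, 531 orders in the backend.
if not reconcile(487, 531):
    print("Data quality issue: fix the analytics implementation before testing.")
```

If a check like this fails regularly, fixing the implementation comes before running any experiment.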

3 – Absence of basic UX principles

If your website does not work properly on mobile devices, or has a poor information architecture, you can test every element you like, but you will have to fix usability and accessibility first.

4 – Poor execution of test program

Common examples of poor execution are running multiple tests with overlapping traffic, or switching experiments off before they reach stable traffic volumes. Sometimes tests run too short to cover a complete business cycle, or an experiment coincides with another event such as a website migration.
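Before launching, it helps to estimate how long an experiment needs to run so it both covers a full business cycle and collects enough visitors. The sketch below uses the standard two-proportion z-test sample-size formula; the 2% baseline rate, 10% target uplift and 10,000 visitors per day are hypothetical numbers, and a real program would plug in its own traffic data.

```python
import math
from statistics import NormalDist

def required_sample_size(p_baseline, mde_relative, alpha=0.05, power=0.8):
    """Visitors needed per variant for a two-sided two-proportion z-test."""
    p1 = p_baseline
    p2 = p_baseline * (1 + mde_relative)           # rate we hope to detect
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # e.g. 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

# Hypothetical: 2% baseline conversion, 10% relative uplift, 10,000 visitors/day.
n = required_sample_size(0.02, 0.10)
days = 2 * n / 10_000
print(f"{n:,} visitors per variant, roughly {days:.0f} days of traffic")
```

If the estimate comes out far beyond a full business cycle, the honest conclusion is that the site lacks the traffic for that test, which loops back to category 1.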

5 – Wrong interpretation of the data

Statistics and their interpretation are ingredients for a nice debate. Purists will tell you that only 99% significance is good enough, while other CXO experts take an 80% Bayesian methodology as their guiding principle. Both have their pros and cons. The debate about which one to choose has to take place, but it has to be settled before the experiment program starts.
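To make the debate concrete, the snippet below computes a frequentist p-value for a hypothetical experiment and shows how the very same result is a winner at 95% but inconclusive at 99%. The visitor and conversion counts are invented for illustration.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in conversion rates (pooled z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical experiment: 50,000 visitors per arm, 2.0% vs 2.2% conversion.
p = two_proportion_p_value(1000, 50_000, 1100, 50_000)
for threshold in (0.05, 0.01):  # 95% vs 99% significance
    verdict = "winner" if p < threshold else "inconclusive"
    print(f"p = {p:.4f} -> {verdict} at the {1 - threshold:.0%} level")
```

Whichever threshold (or Bayesian decision rule) the team picks, the point is that it is agreed up front, not after peeking at the results.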

The first three categories are about fixing the basics: put in the right resources and these mistakes and pitfalls will be solved. Categories 4 and 5 depend on the skills of the team; a well-trained CXO team is certainly able to overcome the challenges described above.

So let's assume the basics are right and the team is skilled: what is the easiest way to improve the effectiveness of your CXO program? The answer is simple: avoid double work, i.e. running the exact same test twice (or more). In the excellent article "Why experimentation programs fail", the absence of a centralised CXO knowledge base is named as one of the main reasons. If you don't have a CXO library, how can you keep track of your progress, and how do you remember what you tested last year, or even a few months ago? Having a CXO knowledge base is therefore the easiest step to avoid double work and improve the overall effectiveness of your test program.
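What such a knowledge base looks like depends on your tooling; many teams use a spreadsheet or a dedicated platform. Purely as a sketch of the idea, with invented field names, a minimal in-memory version in Python could warn you before you repeat a test:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    page: str          # e.g. "/checkout"
    element: str       # e.g. "cta-button"
    hypothesis: str
    outcome: str = "running"

class ExperimentLibrary:
    """Deliberately simple in-memory CXO knowledge base (illustration only)."""

    def __init__(self):
        self._experiments: list[Experiment] = []

    def find(self, page: str, element: str) -> list[Experiment]:
        return [e for e in self._experiments
                if e.page == page and e.element == element]

    def add(self, exp: Experiment) -> None:
        earlier = self.find(exp.page, exp.element)
        if earlier:
            print(f"Warning: {len(earlier)} earlier test(s) on "
                  f"{exp.page} / {exp.element}. Review them first.")
        self._experiments.append(exp)

library = ExperimentLibrary()
library.add(Experiment("green-cta", "/checkout", "cta-button",
                       "A green button lifts checkout starts"))
library.add(Experiment("bigger-cta", "/checkout", "cta-button",
                       "A larger button lifts checkout starts"))  # triggers warning
```

The tool matters far less than the habit: every hypothesis, variant and outcome gets logged in one shared place, and every new test idea is checked against it first.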