Customer Experience Optimisation is a costly process. Brainstorming and researching ideas, selecting the most promising ones, running the experiments and ultimately implementing the winning variants requires a lot of time and resources from many different specialists: designers, developers, web analysts, online marketers and of course CXO experts. However, when executed properly, the returns are significant. Because resources are scarce and growth targets tend to increase every year, prioritising your experiments is a crucial part of the CXO process.
But how do you prioritise effectively, and what are the most important criteria for effective prioritisation? Before diving in, let's have a look at the most common prioritisation frameworks available in the CXO space.
| Prioritisation framework | Publication date |
| --- | --- |
| ICE (1): Impact, Cost, Effect | 2012 |
| PIE: Potential, Importance, Ease | 2013 |
| ICE (2): Impact, Confidence, Ease | 2015 |
| Hotwire prioritisation model | 2015 |
I am not going to elaborate on the pros and cons of each framework. I can highly recommend this ConversionXL article, which perfectly explains the differences between the frameworks and describes really well how prioritisation has evolved over the years. I think all of them have helpful elements, but there is no perfect prioritisation model that works for everyone.
Customisation it is
So if there is no such thing as a perfect prioritisation model, the only option is to create a custom model that works for you. MetaDimensions supports the PXL and PIE models for prioritisation by default, but also offers the option to create a custom model. So if a customised model is the best way forward, how do you start, and what is important when setting up your own framework?
Criteria for a prioritisation framework
The essence of a prioritisation framework is to get a good balance between effort and gain. The following criteria can help you with that.
1 – Customer driven
As we are optimising the customer experience, the proposed changes must be customer driven. Bug fixes and technical issues are therefore not part of the framework; they should simply be solved as quickly as possible. Only elements that create value for users or relieve their frustration should be taken into account.
2 – Backed by data
Related to the first criterion, the customer-driven insights have to be backed by data. It doesn't matter whether it is quantitative or qualitative data, but guessing or assuming what users want is a setup for failure. The PXL model contains two good questions that force people to work data driven:
- Is it addressing an issue discovered via user testing?
- Is it addressing an issue discovered via qualitative feedback (surveys, polls, interviews)?
3 – Related to business objectives
Indecisiveness and hyper-creativity are fatal for CXO. If there is no clear guidance on what to test, you end up testing everything without any tangible result. The same goes for creativity: the wildest test ideas might come up during brainstorm sessions, and I think this is a vital part of the creative process, but in the end A/B tests must be related to business objectives in order to be relevant. An experiment that is not tied to business objectives is a fun project that sooner or later will end up without funding.
4 – Objectivity
Another important criterion is objectivity. You want to eliminate guesswork as much as possible. Discussion about the relevance of a certain test is good, but if the debate is only about assumptions, expectations or hopes, it is a waste of time. Try to include questions in your framework that are specific and can be quantified as much as possible.
5 – Flexibility in scoring
As all organisations are different, the scoring in your framework needs to be flexible. Some companies depend on suppliers to implement changes. Some CXO teams (to their great frustration) have to wait endlessly before the IT department follows up on CXO requests. So ease of implementation might differ a lot between organisations, which should be reflected in your framework by assigning the right scores. The same goes for other guiding principles you might use: some companies require that any experiment complies with a certain principle from consumer psychology. If that is the case, it should be included in your prioritisation model.
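To make the five criteria concrete, here is a minimal sketch of what a custom prioritisation model could look like in code. The criterion names, weights and scores below are illustrative assumptions, not part of any published framework; the two 0/1 criteria echo the PXL-style data questions mentioned earlier.

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    name: str
    # Each criterion is scored by the team; the two data-driven
    # questions (PXL style) are binary, the rest are scored 0-10.
    scores: dict = field(default_factory=dict)

# Flexible weights (criterion 5): e.g. raise "ease" when IT capacity
# is scarce and implementation effort dominates. Purely illustrative.
WEIGHTS = {
    "customer_value": 3,   # criterion 1: customer driven
    "user_testing": 2,     # criterion 2: issue found via user testing (0/1)
    "qual_feedback": 2,    # criterion 2: issue found via qualitative feedback (0/1)
    "business_impact": 3,  # criterion 3: related to business objectives
    "ease": 2,             # ease of implementation
}

def priority(exp: Experiment) -> int:
    """Weighted sum of criterion scores; higher means test sooner."""
    return sum(WEIGHTS[c] * exp.scores.get(c, 0) for c in WEIGHTS)

ideas = [
    Experiment("Simplify checkout form",
               {"customer_value": 8, "user_testing": 1, "qual_feedback": 1,
                "business_impact": 9, "ease": 4}),
    Experiment("New hero banner",
               {"customer_value": 3, "user_testing": 0, "qual_feedback": 0,
                "business_impact": 4, "ease": 9}),
]

for exp in sorted(ideas, key=priority, reverse=True):
    print(f"{exp.name}: {priority(exp)}")
# → Simplify checkout form: 63
# → New hero banner: 39
```

The point is not the exact numbers but the mechanism: weights encode what matters in your organisation, and every idea is scored against the same criteria so the comparison stays objective.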
Now that you have your priorities right, you have a new problem…
What all prioritisation models mentioned in the table above have in common is that they use a spreadsheet (Excel or Google Sheets) to calculate the priority. Although spreadsheets are flexible and widely accepted, they lead to fragmentation of data and insights. If you have one spreadsheet per experiment, you are still lost: somehow you need to create an overview with all experiments and their scores. This is really difficult to realise with spreadsheets, especially when multiple people and teams are involved. For that reason a centralised CXO library where all scores are logged is mandatory for CXO experts.
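As a rough sketch of what such a centralised library buys you: instead of one spreadsheet per experiment, all scores live in a single shared log, and the roadmap is just that log sorted by priority. The CSV columns, experiment names and scores below are hypothetical.

```python
import csv
import io

# Hypothetical central log: every team appends its experiments and
# prioritisation scores to one shared source instead of separate files.
LIBRARY = """name,team,score
Simplify checkout form,Checkout,63
New hero banner,Homepage,39
Shorter signup flow,Growth,55
"""

def roadmap(csv_text: str) -> list[dict]:
    """Read all logged experiments and rank them by priority score."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return sorted(rows, key=lambda r: int(r["score"]), reverse=True)

for rank, row in enumerate(roadmap(LIBRARY), start=1):
    print(rank, row["name"], row["score"])
# → 1 Simplify checkout form 63
# → 2 Shorter signup flow 55
# → 3 New hero banner 39
```

Because everything is in one place, the ranked output doubles as the experiment roadmap, and nobody has to reconcile scores scattered across per-experiment spreadsheets.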
So in order to prioritise effectively, it is important to create a custom prioritisation framework using the five criteria in this post. Once your model is done and has proven to be successful, make sure you build a CXO library where you keep track of your prioritisation scores. Rank them and you have your experiment roadmap.