According to Forrester’s latest State of Retailing Online report, 38% of web merchants report A/B testing of their site design as “effective.” Does that sound low to you?
This doesn’t necessarily mean 62% consider site testing ineffective; the survey likely included retailers who aren’t doing any testing at all. For example, if only 50% of respondents were running site tests, then 76% of those actually testing would be calling it effective. Nevertheless, if site testing is as powerful and life-changing as all of us ecommerce bloggers say it is, why do some folks feel it’s not effective?
1. Our expectations for conversion lift are too high. Perhaps a good lift is 3%, where the team was expecting double digits. A small lift may simply indicate that the existing process is already pretty good. There is only so much you can squeeze out of an already-juiced lemon.
2. We test the wrong things. Sometimes we’re squeezing the wrong end. Tweaking one field at a time, or making overly conservative changes at the start, typically leads to ho-hum test differentials. Go for radical changes if you want radical results. Supplementing your analytics data with user testing (even with only 5 testers) to form your hypothesis before you start helps you stay on track and test the conversion elements with the most impact.
3. Cart abandonment is often due to factors unrelated to usability. Remember this, and focus not only on checkout page design but also on pre-checkout persuasion factors like value propositions, creating urgency, and supporting multi-visit conversions with persistent carts.
An inconclusive test is not a failure. Testing an element or combination of elements that produces an insignificant change still provides valuable information: it answers your hypothesis. The key is to adjust your strategy based on what you learn, and to never stop testing.
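If you want to check for yourself whether a test result is significant or merely inconclusive, here is a minimal sketch of a two-proportion z-test, one common way to evaluate an A/B result. The visitor and conversion counts are hypothetical, purely for illustration:

```python
# Minimal sketch: two-proportion z-test for an A/B conversion difference,
# using only the Python standard library. Numbers below are hypothetical.
from math import sqrt, erf

def two_proportion_z_test(conv_a, visitors_a, conv_b, visitors_b):
    """Return the z statistic and two-sided p-value for the difference
    between two observed conversion rates."""
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)  # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical example: control converts 200 of 10,000 visitors,
# the variant converts 220 of 10,000 (about a 10% relative lift).
z, p = two_proportion_z_test(200, 10_000, 220, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p > 0.05 here: inconclusive, not a failure
```

In this made-up example the lift looks encouraging but the p-value stays above 0.05, so the honest answer is "not proven yet" rather than "it failed"; run the test longer, or test a bolder change.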
Looking for help with A/B and multivariate testing? Contact the Elastic Path consulting team at firstname.lastname@example.org to learn how our conversion optimization services can improve your business results.