Quantitative site analysis (web analytics and A/B or multivariate testing) is phenomenal for learning about your site’s performance and user behavior. The drawback is that this data tells you the “what” but not the “why.” To get into users’ heads, you need to perform qualitative research (interviews, surveys, user tests and eye-tracking/click-tracking heat maps).
Today we’re going to home in on one of these research tools – heatmapping. The good folks at Gazehawk graciously provided Get Elastic with a sample landing page study to serve as an example for this post. (This page was submitted for testing by me, not TurboTax.)
I chose to compare the TurboTax US and Canada home pages because they are true “landing pages” – pages that would typically be landed on from a search engine or affiliate link for tax software. They are very different in design, and may be viewed as a quasi-radical-redesign. Also, TurboTax is a product that has many calls to action, so this study makes for interesting analysis!
Above: TurboTax US site
Above: TurboTax Canada site
10 Tips for Conducting and Analyzing Heat Map Studies
In this study, each page was viewed by 10 test participants (different users for each page), their eye movements recorded and their written feedback provided. In my Gazehawk account, I can see both aggregate and individual sessions (along with a video replay of the progression of attention).
1. Start with a research question / hypothesis.
There are many questions I have on how users “see” these pages, including:
- Do visitors carefully evaluate all options?
- Do they look at top/left-most options first?
- Do they notice anything in the sidebar?
- Does the hero image attract attention or repel it? Does it lead the eye toward the call to action?
- Do they look at the call to action?
- Do users scroll below the fold?
- Does the most “prominent” option attract attention? (For TurboTax, evidently not – the eye did not go there first; in fact, more folks started their gaze at the second option from the right, small business.)
Of course, this is just scratching the surface of the questions you may ask.
Brainstorm the questions you want to answer about your pages, and use one or more of them to form a hypothesis.
For example, you might expect that the highlighted option on the US site gets the most attention of its 5 available options, and the first option on the Canadian site of its 3. Your hypothesis could be that the prominence of one option will attract more attention to it over the others, affirmed by the 2 pages’ designs. Or, you could hypothesize that a stacked presentation draws more attention to the “featured” choice than a horizontal presentation, and that the treatment page is more effective at getting sign-ups than the “cluttered” control.
Above: US website
Above: Canadian website
Is it just me, or do the heat maps resemble these countries’ respective geographic maps?
2. Create a panel from your customers or target market.
Investing in a custom panel may be more expensive, but you will get better quality data than from the general population, who may follow your guided instructions but view your page as “user testers” rather than buyers. Also consider qualifying participants based on whether they’re existing customers or new visitors. Existing customers may be more familiar with your product, need to do less reading about product options, and have different “goals” (repurchase or renew/upgrade vs. initial purchase).
Another benefit to a custom panel is that you’re not “recycling” testers. After a few tests, testers start to sound like “experts,” and feedback leans more toward suggestions for improvement than toward descriptions of their user experience. For example, one user commented: “The left side square box should be less graphic, because it looks like an ad. Try to substitute it with text and icons of features of your product. Overall, it looks pretty clear.”
The last thing you need is more cooks in the kitchen!
3. Be very specific in your task requirements.
Like a traditional usability test, you’ll want to give the tester a clue about the task they are to perform. You don’t just sit someone in front of a page and say “tell me what you think.” In the TurboTax example, you may tell the tester she is a small business owner looking to purchase a tax software product for the first time, and that she, as a user, is not sure what differentiates one tax prep software from another. She is aware of the TurboTax brand but has not purchased this or any other tax product before.
4. Choose at least one alternative design (a radical redesign) to compare.
Remember, these are tips, not hard rules – but I recommend not only looking at existing pages, but testing challengers as well.
5. Don’t stop at the aggregate.
Insights often hide beneath aggregate data. While the aggregate is important so you can gauge what matters across the board, it still only tells you the “what” and not the “why.” Analyzing individual sessions (including playback videos and written feedback) will give you some context around each visit.
For example, a user who gazes for a long time at a hero shot image may be confused by it. He may be thinking “how does the ‘family man’ represent business software? Am I on the right page?” His feedback may be that the options weren’t clear enough, and his eye movements reveal he was “stuck” on the graphic, and not even reading text.
6. Don’t skip the playback video.
The final plot can also conceal nuggets of insight. This tester’s plot looks like everything was important:
But watching the playback, you discover that attention was initially concentrated on the Home and Business option. In fact, it took a good 2 minutes before this user even glanced over the other product options.
Think about whether the way a user navigates your page supports or refutes your hypothesis.
The progression playback may show that he started at the hero shot and eventually began to skim the rest of the page, scrolling down and back up, whereas other users who found the site clear also gazed at the hero shot, but quickly moved on to the product options.
7. Use participant feedback.
Encourage every tester to leave feedback, or videotape each session to capture the “out loud” sentiments. Facial expressions can also tell you whether a person is interested or confused. (Your vendor/service provider may have this capability.) Feedback will always help you unpack what’s going on behind the behavior.
8. Avoid drawing conclusions from a small sample.
Remember, this is qualitative, not quantitative, data. You can’t conclude that 50% of users will scroll just because 5 of your 10 testers did – that’s not statistically valid.
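To see why, you can put a confidence interval around a proportion observed in a small sample. Here’s a minimal sketch using the Wilson score interval (the function name is mine, and the 5-of-10 figures are just the example above) – with only 10 testers, “50% scrolled” is compatible with anything from roughly a quarter to three-quarters of real users:

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion (95% by default)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# 5 of 10 testers scrolled below the fold
low, high = wilson_interval(5, 10)
print(f"Observed 50%, but the 95% interval is {low:.0%} to {high:.0%}")
```

The interval only narrows with sample sizes far beyond what a typical eye-tracking panel provides – which is exactly why these studies generate hypotheses rather than prove them.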
9. Track clicks and mouseovers.
If possible, track whatever you can! This helps you identify which calls to action stand out, and which tasks were completed vs. no action taken.
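If your tracking tool can export raw click coordinates, even a simple binning pass will surface hotspots you can compare against the gaze plots. A sketch under assumed data (the cell size and sample coordinates are illustrative, not from the TurboTax study):

```python
from collections import Counter

def click_grid(clicks, cell=100):
    """Bucket (x, y) click coordinates into cell-sized squares to reveal hotspots."""
    grid = Counter()
    for x, y in clicks:
        grid[(x // cell, y // cell)] += 1
    return grid

# Hypothetical exported clicks: three near a call to action, one stray
clicks = [(120, 80), (130, 95), (620, 400), (125, 90)]
hotspots = click_grid(clicks).most_common()
print(hotspots)  # the cell covering x 100-199, y 0-99 received 3 of the 4 clicks
```

Cross-referencing click hotspots with gaze hotspots tells you whether attention actually converted into action.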
10. A/B test.
Complement eye-tracking plots with quantitative A/B tests run on real, unobserved customers. Heat maps alone can mislead: hotspots may be due to confusion rather than interest, and the plot with more “red” may actually result in fewer clicks. Quantitative data can confirm (or refute) your hypothesis.
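As an illustration of that quantitative check, a two-proportion z-test is one common way to compare conversion rates between a control and a treatment page (the visitor and conversion counts below are made up):

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing the conversion rates of two page variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via erf; p-value for a two-sided test
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: control converted 120 of 1000 visitors, treatment 150 of 1000
z, p = two_proportion_z(120, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Note the sample sizes: you need hundreds or thousands of visitors per variant to detect a lift like this, which is exactly what a 10-person eye-tracking panel cannot give you.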
Eye tracking can help you understand more about user behavior (the “why,” not just the “what”), but like quantitative data, qualitative data has its drawbacks. When performing eye tracking studies, it’s important to approach them knowing what questions you want to answer (hypotheses) and who you are going to survey, and to ensure your testing service provides you with as much feedback as possible (written comments, video, mouse movements, clicks, etc.). Eye tracking data should not be relied upon on its own; it’s important to sanity-check your hypotheses with quantitative data for the full picture.
Looking for help with ecommerce? Contact the Elastic Path consulting team at firstname.lastname@example.org to learn how our ecommerce strategy and conversion optimization services can improve your business results.