“It performed well in usability testing.”
This type of statement tends to surface in digital team discussions when a new page or feature isn’t performing as well as it should and comes under scrutiny for improvement.
But the thing about usability testing is that it only shows us if something works in a functional sense. It doesn’t tell us whether it’s any good at converting customers.
And what users do on their own matters far more than what they tell us they would do (or what they do under the gaze of a researcher).
Even if you were to run a follow-me-home study, or observe users doing their best to mimic what they would genuinely do in that scenario, you’ll never completely replicate the real conditions in which a real user visits your website looking for a solution to a real problem.
Psychology is at the heart of CRO and, as human beings, we’re all eager to please – that’s why users who are being watched as they use a website or prototype will try harder than they would in real life. It’s what’s known as the Hawthorne effect.
So, while usability testing might identify an improvement to the fourth page of your site journey, the reality may well be that in real life the user would have abandoned the website on page one. (This is also why it’s a good idea to run lab-based testing when possible, rather than remote video-based research: the remote format doesn’t allow the same visibility of useful ‘physical data’, such as facial expressions and body language.)
With that in mind, here are three ways to look beyond your analytics data and observe how your users behave on your website whilst no one’s watching.
1. User session recordings
Session recordings are a great way to eliminate guesswork from your understanding of how visitors behave on-site. By watching a visitor’s real clicks, taps and mouse movements, you can see the issues that they encounter whilst trying to complete their real-life action.
The key thing with session recordings is to watch them with a focus. Unless your website only receives a handful of visitors each week, it isn’t practical to sit and watch every single session recording. And even if you pick a random sample to watch, the chances of stumbling across an actionable insight tend to be slim.
Instead, use clickstream data or user feedback to identify a potential pain point within your journey, then watch only the recordings of real users who hit that point – it’s at this stage that you’re likely to identify the issues.
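As a minimal sketch of that narrowing-down step, the snippet below filters a session export to long, non-converting sessions that ended on a suspected pain point. All field names and the sample data are invented for illustration – in practice you’d pull these from your analytics or recording tool’s export.

```python
# Hypothetical session export (fields invented for illustration).
sessions = [
    {"id": "s1", "last_page": "/checkout/payment", "duration_s": 312, "converted": False},
    {"id": "s2", "last_page": "/checkout/payment", "duration_s": 45,  "converted": False},
    {"id": "s3", "last_page": "/thank-you",        "duration_s": 290, "converted": True},
    {"id": "s4", "last_page": "/checkout/payment", "duration_s": 250, "converted": False},
]

# Suppose clickstream data flags /checkout/payment as the big drop-off.
pain_point = "/checkout/payment"

# Watch only long, non-converting sessions that ended on that page:
# these are the visitors who tried hard and still gave up.
watchlist = [
    s["id"] for s in sessions
    if s["last_page"] == pain_point and not s["converted"] and s["duration_s"] >= 60
]

print(watchlist)  # → ['s1', 's4']
```

A shortlist like this turns hours of random viewing into a handful of recordings that are likely to contain the issue.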
2. Feedback tools
The feedback mechanisms you choose could be delivered through prompts to the user, e.g. a purchase confirmation page poll asking how they found the checkout process, or an NPS survey opt-in message delivered on the initial landing page. Another option is an ad-hoc mechanism the visitor can use whenever they want, e.g. a ‘Leave Feedback’ CTA displayed consistently across the site, which they can reach for immediately after encountering a problem.
These techniques will allow you to collect unfiltered opinions from your users in real time and on an ongoing basis, giving you an evolving view of what the customer needs and exactly what is stopping them from achieving it.
Regardless of the size of your site, the user feedback you gather carries an intrinsic sampling bias, and you need to decide which pieces of it you take seriously. If an issue has generated negative feedback from three visitors, and your clickstream data doesn’t suggest the problem is any wider, should it really be at the top of your priority list to fix?
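One way to make that call is a simple sanity check: only escalate a complaint if the quantitative data backs up the anecdotes. The sketch below is illustrative, with made-up visitor counts and an arbitrary 1.5× threshold.

```python
# Hypothetical numbers for the step that drew negative feedback.
step_visitors = 4_800        # visitors who reached the step in question
step_exits = 310             # visitors who left the site at that step
site_visitors = 52_000
site_exits = 3_100           # exits across all comparable steps

step_exit_rate = step_exits / step_visitors      # ~6.5%
baseline_exit_rate = site_exits / site_visitors  # ~6.0%

negative_feedback_count = 3

# Escalate only if the step is losing notably more visitors than the
# baseline (the 1.5x multiplier is an arbitrary illustrative threshold).
prioritise = (
    negative_feedback_count >= 3
    and step_exit_rate > 1.5 * baseline_exit_rate
)

print("Prioritise" if prioritise
      else "Park it: no clickstream evidence the problem is widespread")
```

Here the step’s exit rate barely exceeds the baseline, so despite three complaints the check says to park it – matching the intuition above.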
3. Interaction maps
By this I mean clickmaps, scrollmaps and heatmaps. It’s important to remember that these aren’t perfect solutions, but if you have enough traffic, they will give you a reasonably solid view of where your users click, scroll and hover on your webpages. Furthermore, they’re more time-efficient to analyse than session recordings.
They give you an aggregated, one-shot view of what your users are doing on a particular page. Of course, we don’t always hover the cursor over the things we look at, so it’s important not to read a heatmap as if it were an eye-tracking study.
You may run into problems using heatmaps on SPAs, or on pages that use accordions and other expandable sections such as drop-down menus. Some tools are improving at this; with others, you may just have to fill in the gaps when you see a random patch of activity in the space where an accordion would typically expand.
Ultimately, guidelines – whether based on usability testing or not – are simply hypotheses waiting to be proven or evolved by data. As CRO practitioners, it’s our job to challenge guidelines when we believe the evidence points to a more successful approach – and then use A/B or multivariate testing to prove or disprove the hypothesis. Either way, the data always wins.
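As a toy illustration of letting the data win, here is a two-proportion z-test of the kind that sits behind many A/B testing tools, using only the standard library. The visitor counts and conversions are invented; a real test would also need a pre-registered sample size and stopping rule.

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z-score and two-tailed p-value for variant B vs A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value via the standard normal CDF: Phi(x) = 0.5*(1+erf(x/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented results: 4.8% vs 5.6% conversion on 10,000 visitors each.
z, p = two_proportion_z(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers the difference comes out significant at the conventional p < 0.05 level, so the hypothesis would survive; with a flatter result, it wouldn’t – and that verdict, not the guideline, is what you act on.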