User Testing 101: How We Drive Real Value From Our User Tests
For most marketers, “user testing” is a scary term — it’s something you know you should be doing, but you aren’t.
The fact is, every website could benefit from some good, consistent user testing. Whether it’s smoothing out your navigation or gauging public perception of your site and its functions, testing is a great way to improve your users’ experience and make sure they’re accomplishing the tasks you want them to accomplish.
But where do you start? And once you’ve run the tests, how do you turn user insights into meaningful strategy? In this article, we’ll walk you through our user testing process, and pass on some lessons we’ve learned over our decade-plus in the industry. By the end, you should be ready to embark on your own user testing journey.
What is user testing?
Think about your website for a second. Now answer this question: In terms of functionality and UX, which of its features are most important? That is, what essential tasks do you want your users to be able to accomplish?
Your answer will likely differ depending on your business type and the services you provide, but here are some common ones:
- We want our users to be able to complete the actions we want of them (e.g. check out, fill out a form, download something, etc.).
- We want a functional and smooth experience from start to finish.
In our industry, we talk a lot about how to bring new users to our site — but once they’re there, our job isn’t over. It’s incredibly important that they’re able to do the things we want them to, and that the entire experience feels logical and intuitive from beginning to end.
This is exactly why we do user tests. User testing lets us get a better understanding of our users, and tells us why and how they interact with our product (in this case, the product is our site).
At its core, user testing looks something like this: You pay a group of users to perform some actions on your website, they give feedback (Where were the pain points? How could the experience be better?), and you take those notes and revise. And you do that again, and again, and again, until, eventually, it’s perfect (or something close to it).
In this way, the user testing process is very similar to the writing/editing process, and the end goal is ultimately the same: to remove areas of friction and uncover opportunities to improve the overall experience.
But getting started with user testing isn’t as easy and straightforward as it may seem. Finding the right platform, defining your goals, and writing your script all take time and experience to get right. That’s why, to help you along on your user testing journey, we wrote this glimpse into our own testing process. Follow along below to learn how we’re able to drive real value from our user tests and end up with results that our clients love.
(For more information on user testing, be sure to read this blog post on 3 reasons you should incorporate user testing into your strategy.)
Part 1: Conducting the test
1. Choose a platform
When it comes to user testing, everything starts with the platform you pick.
There are two main platforms that we use all the time at Wheelhouse, and we’re big fans of them both. Each one offers moderated and unmoderated testing, so they’ll work for a wide variety of different experiments. (We’ll talk more about moderated vs. unmoderated testing in the next section.)
Playbook UX has participants in countries across the world, meaning you can test with international users if needed. The platform also provides analytics so you can view demographic charts and data based on specific keywords and quotes, as well as analyze rating scales, written responses, multiple choice questions, and time on task.
UserTesting.com offers same-day video interviews, so if you’re in a rush and need to run a usability test in a short amount of time, it’s a great option. They also do highlight reels, which are video recordings of key moments during the test that can help you glean valuable insights and convince stakeholders when necessary.
We’ll typically use one platform or the other, but feel free to use both in combination if the situation demands it. Frequent, regular testing is the gold standard here, and you should aim to run as many tests as possible if you’re really committed to improving your site.
2. Decide whether you want to run a moderated or unmoderated test
There are two types of user tests: moderated and unmoderated.
Moderated tests are done with a facilitator or moderator — they’ll walk the participant through the test from start to finish. A moderated test will require that the facilitator ask questions and take notes for the entire duration of the test.
Unmoderated tests are done without a facilitator or moderator. These tests are typically taken online and on the participant’s own time. The user will be asked to respond verbally or write out their thoughts and answers as the platform records them.
Choosing between moderated and unmoderated testing will depend largely on your budget and what you’re trying to accomplish.
For those who have less experience with user testing, we recommend starting with unmoderated testing. This method requires much less preparation and is usually cheaper, meaning you can run more tests with a more diverse set of participants.
On the flip side, moderated testing is great for those times when you need a higher level of interaction between you and your participant. Of the two methods, moderated testing is much more qualitative, allowing you to ask questions and get more elaboration when the participant becomes stuck or confused.
3. Define goals and target users
If you want to gain any kind of real value from user tests, you’ll need something to aim for. The best way to figure that out is by defining your user goals and creating task-based scenarios in order to test and validate areas of your product.
A user goal is the fancy-sounding name for the key actions we discussed in the beginning of this article. They answer the question, “What essential tasks do you want your users to be able to accomplish?”
If you’re doing user testing for an airline, then a good user goal may be: Research and book a flight. It’s OK if the goal seems broad or overly general at the moment — the idea here is to define the end state that we want our users to reach.
Once our user goals are squared away, we can go about refining those end states into something more specific and testable.
A task-based scenario is an extension of our user goal — one that puts the goal in context and defines the actual task. Essentially, this is the action that we’ll ask the participant to perform on our site.
In the case of our airline, a good task-based scenario may be: Book a round-trip flight from Richmond, VA to Seattle, WA for August 28 – September 1. Try to keep the total under $300.
A task-based scenario like this one is good because it does everything we need it to: It asks the user to accomplish a well-defined task, and it’s specific enough that we can easily measure their ability to complete it.
Finally, you should ensure users fit into your target group by creating screening questions.
Much like picking a jury, we should make sure the people testing our site are right for the job — that is, they should be similar to the people who will actually be using it in the future.
Screening questions are a good way to make sure of this. They should be simple, straightforward, yes-or-no questions that will help refine your testing pool.
Here are some examples of things to ask when drafting your screening questions:
- Does your product only cater to people living in the United States?
- Does your product only cater to users who identify as female?
- Is your product only applicable to a specific field or industry?
The number of screening questions you decide to draft is up to you. It should be enough to ensure your audience is relevant, but not so many that you drastically reduce the size of your testing pool. It’s a delicate balance, and one that may take a few tries to get right!
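To make that balance concrete, here’s a minimal sketch of how yes-or-no screener answers narrow a participant pool. The questions, field names, and participants below are invented for illustration — your screeners will depend on your own target audience:

```python
# Hypothetical screener: keep only participants whose yes/no answers
# match the target profile. All data below is invented for illustration.

required_answers = {
    "lives_in_us": True,        # "Do you live in the United States?"
    "works_in_healthcare": True # "Do you work in the healthcare industry?"
}

participants = [
    {"name": "A", "lives_in_us": True,  "works_in_healthcare": True},
    {"name": "B", "lives_in_us": True,  "works_in_healthcare": False},
    {"name": "C", "lives_in_us": False, "works_in_healthcare": True},
]

# A participant qualifies only if every screener answer matches.
qualified = [
    p for p in participants
    if all(p[question] == answer for question, answer in required_answers.items())
]

print([p["name"] for p in qualified])  # only "A" passes both screeners
```

Notice how each added question shrinks the pool — with two screeners, only one of three participants qualifies. That’s the trade-off in action.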
4. Write your test script
After the platform, goals, tasks, and users have been decided on, next comes the test script.
When you’re writing a test script, you’re asking users to speak truthfully about their experience with your site. They’ll share their thoughts page by page or file by file, and hopefully, their notes will help you understand what’s working on the site and what’s not.
An important note here: This script is an opportunity and a prompt to get your users speaking candidly. The better you get at writing them, the more valuable each subsequent test will become.
Here are some sample questions for your script:
- What are your first impressions of the site?
- If you were shopping for [product], does [page] provide you with everything you’d need to make a purchasing decision?
- What frustrated you most about the site?
- (After the user completes the task) On a scale of 1-10, how difficult was it to complete that task?
5. Conduct the test
Finally, at long last, it’s time to conduct the actual user test.
You should expect an unmoderated test to run for at least 1-2 weeks. For moderated tests, the timeline can vary depending on the moderator’s schedule and the number of tests you’re doing.
In our case, each individual test typically takes around 20-30 minutes to complete, depending on how many questions we want to ask the participants.
After that, you’re done! Well, not quite yet. The hard part is over — now you just have to turn that big pile of notes into something resembling strategy.
But how do you do that? In the next section, we’ll tell you how we analyze our testing data and turn it into something concrete and actionable.
Part 2: Analyzing your findings
1. Prioritizing your findings with pain points
As you’re watching the user interact with your site, it’s important to note every single issue they run into along the way. This should take the form of one long list of user problems — let’s call them “pain points.”
After you’ve gone through every test and compiled your list of pain points, your next step is to group them by severity.
Did the user run into an issue where the check-out button wasn’t working? Did they mention something about the look of the website or a spelling error? When we’re thinking about the severity of the issue, we should consider whether the issue prevents users from accomplishing an essential task, or if it’s just a cosmetic one (one that hurts your image but doesn’t necessarily impact the user’s overall experience). The former will take priority over the latter.
Here’s how we break down our pain points by severity:
High – These include issues that have to be fixed as soon as possible, as they cause friction and prevent the user from accomplishing key tasks on the site.
Medium – These include issues that might not interfere with the overall UX, but can still be frustrating for users to experience.
Low – These include cosmetic issues like the color of a button or a grammatical error, but they don’t necessarily hinder the user’s experience or their ability to complete a task.
2. Turning your pain points into strategy
Now that we’ve categorized the issues by severity, we need to consider the level of effort required to remedy each one. This will allow us to prioritize our findings by business value.
Here are some questions to consider when you’re defining the level of effort needed for each issue:
- Does the issue require support from multiple resources (e.g. designer, developer, analyst, etc.)?
- How much time will be required to make the changes (e.g. 10-15 hours of dev support vs. 3 hours of analyst support)?
- Can the issue be remedied with available resources or will the work require support from an outside agency?
- Do you need to test a new design or run a CRO test in order to gather more data?
When paired with our pain points from the last section, these new considerations can help us decide how best to proceed with our site improvements.
For example: If you identified the issue as low in terms of severity but it requires support from a designer as well as an outside agency, you may want to hold off on implementing those changes until you’ve got more resources available to you.
Conversely, if the issue is high in severity and can be remedied relatively quickly by an immediate team member, you should tackle that one ASAP.
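Triage decisions like these can be sketched as a simple scoring exercise. Here’s a hypothetical Python snippet that ranks a backlog by severity first and effort second — the pain points, severity weights, and effort estimates are illustrative, not from a real test:

```python
# Hypothetical pain-point triage: rank issues so that high-severity,
# low-effort fixes rise to the top of the backlog.
# All issues and effort estimates below are invented for illustration.

SEVERITY_WEIGHT = {"high": 3, "medium": 2, "low": 1}

pain_points = [
    {"issue": "Spelling error on About page",   "severity": "low",    "effort_hours": 1},
    {"issue": "Check-out button unresponsive",  "severity": "high",   "effort_hours": 3},
    {"issue": "Confusing navigation labels",    "severity": "medium", "effort_hours": 10},
]

def priority(p):
    # Sort by severity (descending, hence the negative weight),
    # then by effort (ascending) to break ties.
    return (-SEVERITY_WEIGHT[p["severity"]], p["effort_hours"])

for p in sorted(pain_points, key=priority):
    print(f'{p["severity"]:<6} | ~{p["effort_hours"]}h | {p["issue"]}')
```

In practice the “scores” live in a spreadsheet rather than a script, but the logic is the same: severity sets the order, and effort tells you what you can realistically tackle first.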
Using this method, we can prioritize our site changes and implementations in a way that feels logical and proactive. In other words, we’re able to walk away from our user tests with a concrete strategy for getting things done.
The system lets us take a methodical approach to the work we love, and our clients appreciate the value that our tests provide. It’s a win-win!
We like to look at user testing as a skill and an art — one that you can get better at over time. While it may seem pretty straightforward at first glance, it takes time and repetition to master.
The simple truth is that a person who’s done one user test will have a very different process than someone who’s done a thousand — and that’s okay! What matters is that you’re doing it, preferably on a regular basis, with a clear idea of how you’ll proceed after each test is complete.
We’re a great example of this. Our process is one that’s grown and evolved over a decade in the industry, and it’s still improving all the time. Now, our strategists are user testing pros who are well versed in assessing and validating the quality of all things website related, from UX to navigation to user perceptions and beyond.
If you’re looking to get your user testing work off the ground, we can help. At Wheelhouse, we’ve conducted user tests for businesses ranging from small local startups to Fortune 500 companies. In one case, our experiments resulted in a staggering 632% increase in conversions for a client — and that was just in the first year.
Whether you’re looking for advice on platforms or you’re ready to start a longer conversation, we’d love to hear from you. Give us a call, use our online form, or leave a comment below to get in touch.