Do You Know What Type of User Test Is Best?

How to choose the right usability test for your next UX project

Dr Anna Harrison
8 min read · Mar 6, 2017


Usability testing. Photo by UX Store on Unsplash

Testing is a core part of the design process. Google does it. Facebook does it. And here at Tiny, we do it too. With so many methodologies and tools available, getting started with user testing can be a little daunting.

Video: a hilarious look at usability testing in the life of a UX designer (at 1:22)

The right user-testing strategy depends on the type of company you work for and the type of product you are bringing to market. As with any creative process, the art in the science becomes more predictable with experience. I hope that sharing my journey of establishing a solid, evidence-based UX practice at Ephox will give you some tips you can apply to your own.

A little about me and user testing at Tiny

In my pre-Ephox working life, I spent some time doing formal research using observational research methodologies, semi-structured interviews and a variety of other techniques (the very interested reader is invited to read about it here).

Until my arrival at Tiny, both design and user testing were informal processes led by the company’s veterans of text editing: the likes of Andrew Roberts, Andrew Herron, Joakim Lindkvist and Spocke Sörlin (you can find their friendly faces here). User testing often happened through anecdotal conversations with clients, occasional surveys and changes made in response to support tickets filed by customers.

This system can work well enough if you are in the “maintenance” phase of a product’s lifecycle. Where it falls short is in its ability to proactively design for product futures that your clients will love. This is because the methods above are mostly reactive and, more importantly, rest on the assumption that your customers can identify their future needs (an assumption that has been disproven time and time again).

Part of my role at Tiny has been to establish robust processes for user testing and to integrate them into our existing development and release cycles. No doubt the particulars of how we do things will change over time, but for right now, our toolbox includes the following user-testing techniques:

  • Anecdotes from customers
  • Surveys
  • Semi-structured interviews
  • Observational user tests
  • Online user tests
  • Design critique sessions

I’ll describe each of these techniques below, point out their pros and cons, and offer recommendations for when to use each approach. But before jumping into that, let’s talk about user-testing payoffs.

Does user testing pay off?

I won’t go into the reasons why user-testing is a good idea — lots of outstanding people have written about it, talked about it, and made films about it.

What I can share here is my experience at Tiny: in only four months, we are seeing the benefits of user testing come through in our products. In the current Textbox.io 2.0 release, we saved weeks of effort on the updated image-editing feature by testing it before release; we also shaved months off our development effort by tightly integrating internal design-test-build processes into the delivery of the new skin-creator.

Our product pipeline is currently filled with designs that we have prototyped and tested using a combination of our user-testing techniques.

Customer stories

Customer anecdotes, support tickets, and informal user feedback provide an excellent entry point into the user testing process. Each of these channels of information gives you direct access to areas where the customer “feels pain”.

The temptation here is to immediately jump into “solution mode” to solve the customer’s problem or create their “must-have” feature. But before you do that, stop!

Pause for a moment to validate and ask yourself these questions:

  • In addressing the issue reported by the customer, are you solving the right problem?
  • If not, what is the real problem the customer is experiencing?
  • Is the issue a problem for other customers? If so, how many?

I use the remaining techniques to validate potential issues brought to us from direct customer channels.

Surveys

Surveys are an excellent way to get an indication of the general feelings of your customers on a single, uncomplicated issue.

The trick with surveys is in phrasing the questions and knowing what to ask. As an example, in a recent user test comparing online and in-person testing, I observed that 8 out of 10 participants struggled with a specific feature, describing it as “frustrating” and “hard to find”. Despite this frustration, only 1 of the 8 reported the feature as “not intuitive” to use.

The lesson to take away here is that survey results can be misleading: had I relied on self-reports alone, seven visibly frustrated participants would have looked perfectly satisfied. Given the low cost of conducting an online survey via platforms such as SurveyMonkey, it can be tempting to “throw a survey out there”. Before you do, spend a bit of time thinking about how to phrase your questions, and validate whether you are asking the right questions in the first place. This post will help you avoid the most common mistakes in UX testing.

Semi-structured interviews

The semi-structured interview is one of my personal favorites for its effectiveness in uncovering the drivers behind customer behaviors. A semi-structured interview can be a powerful way to follow up on a customer issue or anecdotal problem and discover the why behind it. To me as a UX designer, uncovering the why is like hitting the future-proofing product jackpot.

Conducting a great interview is a complicated verbal dance: you need to establish rapport with the interviewee and keep the conversation focused on the topic under investigation, yet never bias or lead the participant.

It takes a while to become proficient at conducting a great interview. The great news is that you can practice these skills anywhere: on the bus, on a date, buying groceries and even talking to Grandma at the next family BBQ. You’ll know you are getting close to perfection when strangers start to mistake you for a journalist.

Observational user tests

Traditionally, observational research places the tester as a “fly on the wall”, observing participants using products in their natural setting. Over the years, I have adapted this technique to be a little more efficient: when conducting in-house observational tests, I combine traditional observational methods with semi-structured interview techniques. This hybrid approach has proven effective at drawing out the whys from participants.

My biggest tip for observational research is to start each test by reassuring your participant that

…this is not a test. There are no right or wrong answers. By watching how you use our product, I get valuable information about how to make the design better …

Irrespective of how technology-literate your participant is, they will feel silly if they think they don’t “get it”. As you become more experienced in user testing, you will notice the subtle changes in tone and body language that signal this. Use these moments to reassure your participant and make them comfortable again; they often lead to the most interesting insights!

Online user tests

There are a variety of online user-testing platforms that provide quick and inexpensive access to remote observational research. This mode of user testing is new to me, so I recently ran a small experiment to compare online and in-house user testing.

I was impressed with the speed and cost of online testing with Validately. After setting up the test (which was easy) and launching it, I had all participant results on my desk within a few hours, at a cost of only $10 per participant.

From my experiment, I can see online user testing being a perfect complement to the more expensive and time-consuming in-house observational approach. While I still believe that nothing beats observational research for its ability to provide breakthrough insights, a platform like Validately is an excellent way to verify preliminary results from smaller, in-person tests against a larger sample size.

Design critique

I also incorporated design critique sessions with both internal and external stakeholders into the user-testing process at Ephox. If you are thinking of adopting this technique, consider using the Google Ventures approach to structure these meetings (a shout-out to Jai Mitchell and the Brisbane Product Design Group for introducing me to this format!).

An external, peer-review design critique can be very helpful in refining elements of interaction, identifying likely sticking points and making sure that your design stands up to current best practices in UX. If you feel daunted by the prospect of showing your idea to your peers, or think (like all designers do) that your peers are far superior designers who hold hero-like status in your head, don’t be discouraged! We all have moments of insecurity. The beauty of a peer review is that most design professionals are trained in how to provide feedback and can do so with tact and care.

Internal design critique sessions, however, are more likely to veer off-track. Depending on the mix of individuals providing feedback, it may be a good idea to make the session goals explicit and stick to a well-defined format; you can use something similar to this meeting agenda, based on the Google Sprints methodology.

My final tip for surviving a design critique session is to de-personalise. Feedback, or criticism, from your superiors at work can sometimes require a thick skin. I find that keeping an impersonal perspective and following a well-defined meeting structure brings out the best in these sessions. Ultimately, your internal stakeholders are an ideal audience for design feedback from a business-feasibility perspective: the only thing better than a great design is a great design sold to a gazillion customers.

In summary

There are a number of ways to test design concepts and product ideas with participants from your target market. At Ephox, we base our user-testing process on the following techniques:

  • Anecdotes from customers: extremely useful; they continue to provide a direct glimpse into the perceived “pain points” of our 500 million users around the world.
  • Surveys: an excellent tool for gauging the general feelings of our users on a single, uncomplicated issue.
  • Semi-structured interviews: an opportunity to interrogate customers’ issues or ideas for new features and designs, and to uncover the hidden drivers behind them.
  • Observational user tests: let us road-test new concepts with a small set of users and observe, first-hand, the actual points of pain or joy for our users.
  • Online user tests: an opportunity to validate findings from the (more expensive) observational research tests against a larger sample.
  • Design critique: slightly formal sessions, both internal and external, in which new designs are stress-tested by a panel of stakeholders.

How do you choose between the various user-testing techniques in your work? Share your thoughts below so we can all learn from your experience!

Originally published on the Tiny Blog on March 6, 2017
