It’s well-known in the SaaS community that usability testing is a powerful tool to use throughout the product design process.

Doing usability testing at Toggl


For Toggl, though, usability testing is still pretty new. Until a few months ago, we relied mostly on in-house testing. Although this was a great approach for finding weird bugs, it wasn’t so great for collecting insights into how actual users would interact with the product.

Recently, we decided to refine our process and start conducting usability testing on Toggl’s marketing pages and on the app itself. Our goal is to find out what we can do to increase signup rates and improve the onboarding flow and general usability of our app.

Here’s a little overview of how we run these tests and what we’ve learned so far.


1 – Start by setting clear goals for your test

I’m a curious person – so when I conducted my first tests, I wanted to know everything.

I asked tons of questions about anything that caught my attention. I ended up with lots of information about small details, but no useful insights or clear action plan.


To avoid this rookie mistake, limit each test to a single problem: one new flow or one page layout. The more objectives you test at once, the more room there is for confusion and error.

2 – Choose the right type of test

The main reason to do usability testing is to gain a deep understanding of which aspects of a workflow are not meeting user expectations, and why.

In general, there are two types of remote tests: moderated and unmoderated.

We’ve chosen to run unmoderated usability tests, which means there is no facilitator asking follow-up questions or clarifying tasks during the test. Participants take the test alone, following a script we’ve given them, and we get screen recordings of their sessions.

We chose this type of testing because it has the added benefits of being speedy and efficient — you don’t have to spend a lot of time on recruiting or running tests, which ultimately means more valuable data in less time.

Although there are a bunch of usability testing tools available, we’re using WhatUsersDo because it allows us to easily select participants who fit our target demographic, run tests across platforms, and tag the most interesting findings in the videos for easier sharing.


3 – Write your test script & start testing

Once you’ve defined a goal for your test, you should write a clear scenario for the participants.

When testing onboarding flows, I try to recruit participants who are likely to try Toggl, but are not yet familiar with the product.

My scenarios are there to disguise the true objective of the experiment, and the test script is meant to coax participants toward the area that interests me. For example, if my hypothesis is that users will be delighted to see colorful charts representing their data, I put together a series of tasks that gets them there quickly, then observe their reactions as well as any frustrations they encounter along the way.

I try to be as precise as possible with my wording without accidentally dropping hints about how the tasks should be completed. Still, I’ve found that no matter how perfect I think a test is, and how easy it seems to understand, there’s always something I missed that trips up participants.

A good tip is to run a pilot test on a single user to identify any trouble spots before rolling out the test to more people.


4 – Collect insights in a shareable document

Finally, the most interesting part!

Once I’ve gathered around 5–10 videos per test, I collect all interesting findings into a simple document. I also check metrics like ‘Task Completion Rate’ and ‘Perceived Ease of Use’ on the WhatUsersDo app to get a better picture of the overall test completion and user satisfaction.

In this step I usually look at 4 key factors:

  • Efficiency → How much time and how many steps did it take for participants to complete the tasks in the test script?
  • Accuracy → How many tries did it take the participants to perform these tasks and were they successful in the end?
  • Recall → How much did the participants remember about our product and what words did they use to describe it?
  • Emotional response → How did the participants feel about the tasks they had to complete? Were they stressed, confident, confused, happy?

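To make these factors a bit more concrete, here is a minimal sketch of how they could be tallied for a single test. The session notes below are invented for illustration (this is not a WhatUsersDo export format); the point is simply the arithmetic: completion rate, average attempts and time on task, average perceived ease, and a count of emotional responses.

```python
from collections import Counter
from statistics import mean

# Invented notes for one test: one entry per participant video,
# with the fields I care about when reviewing the recordings.
sessions = [
    {"completed": True,  "attempts": 1, "minutes": 4.5, "ease": 5, "emotion": "confident"},
    {"completed": True,  "attempts": 3, "minutes": 9.0, "ease": 3, "emotion": "confused"},
    {"completed": False, "attempts": 2, "minutes": 7.2, "ease": 2, "emotion": "stressed"},
    {"completed": True,  "attempts": 1, "minutes": 5.1, "ease": 4, "emotion": "happy"},
]

completion_rate = sum(s["completed"] for s in sessions) / len(sessions)  # accuracy: did they succeed?
avg_attempts = mean(s["attempts"] for s in sessions)                     # accuracy: how many tries?
avg_minutes = mean(s["minutes"] for s in sessions)                       # efficiency: how long did it take?
avg_ease = mean(s["ease"] for s in sessions)                             # perceived ease of use (1–5 rating)
emotions = Counter(s["emotion"] for s in sessions)                       # emotional response tally

print(f"Task completion rate: {completion_rate:.0%}")
print(f"Average attempts: {avg_attempts:.1f}, average time on task: {avg_minutes:.1f} min")
print(f"Perceived ease of use: {avg_ease:.1f}/5")
print("Emotional responses:", dict(emotions))
```

With only 5–10 videos per test, the exact percentages matter less than the pattern they point to.
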
After going through all these points, I prioritize the identified problems and form hypotheses for A/B testing, using ConversionXL’s prioritization framework.

For instance, after going through usability testing videos of our public website, I learned that the short, “quote of the day” style testimonials on our main page often failed to motivate visitors to sign up – in fact, sometimes they even left visitors with more unanswered questions about the app.

Knowing this helped me realize that to successfully motivate our web visitors with social proof, the testimonials need to be more personal and showcase the different use cases more thoroughly. This in turn helped me come up with lots of new ideas for A/B testing social proof, like using longer, use-case style testimonials or creating a whole new page on our website for customer stories.

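To show how the prioritization step might look in practice, here is a minimal sketch that scores each candidate hypothesis on a few criteria and ranks them. The criteria (potential, importance, ease) and the scores are placeholders chosen for this example, not ConversionXL’s actual scoring sheet, and the third hypothesis is made up.

```python
# Rough sketch of ranking A/B test hypotheses with a simple priority score.
# Criteria and scores are placeholders, not ConversionXL's actual framework.
hypotheses = {
    "Longer, use-case style testimonials": {"potential": 8, "importance": 9, "ease": 6},
    "Dedicated customer stories page":     {"potential": 7, "importance": 6, "ease": 3},
    "Shorter signup form":                 {"potential": 5, "importance": 5, "ease": 9},  # made-up example
}

def priority(scores: dict) -> float:
    """Average the 1-10 ratings into a single priority value."""
    return sum(scores.values()) / len(scores)

# Print the hypotheses, highest priority first.
for name, scores in sorted(hypotheses.items(), key=lambda kv: priority(kv[1]), reverse=True):
    print(f"{priority(scores):.1f}  {name}")
```

Whatever criteria you settle on, the goal is the same: a consistent score that makes it obvious which hypothesis to A/B test first.
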

TL;DR:

Usability testing is a must-have step in any medium- to large-scale project. It provides insights into where and why a design is failing that other methods of data collection (such as user behavior data) cannot. The simple process we follow at Toggl is:

  • Defining clear goals for your test (ideally one goal per test)
  • Choosing the right type of test (we, for instance, use unmoderated testing with WhatUsersDo)
  • Writing as precise a test script as you can
  • Collecting insights, prioritizing findings, and forming hypotheses for further A/B testing


We’re no strangers to testing at Toggl. To learn how we got started with A/B testing, check out this blog post.