
Using Usabilla for simple A/B testing

A/B testing is a popular method to optimize conversion on a website. You set up two or more variants, measure the differences in conversion rates between them, and select a winner based on the test results. The winning variant is the one with the highest conversion rate. A/B testing is a great way to improve your web pages one step at a time. Unfortunately, implementing tests is not always as simple as it sounds, even with a nifty interface like Google Website Optimizer's. To set up a test, you need access to the source code and someone who can adapt it, a live website with visitors or a working prototype, and so on. We'll show you an example of how Usabilla's One-Click-Tasks can be used as an interesting, low-budget alternative to A/B testing.

The original Firefox A/B test

The website abtests.com shows examples of A/B tests and their results. To demonstrate how Usabilla can quickly measure differences between multiple variants, we picked a case submitted by Firefox (original write-up at the Mozilla blog). Firefox tested two variants of the call-to-action button on their homepage that prompts users to download the latest Firefox version: 'Try Firefox 3' and 'Download Now – Free'.

Download Firefox – Variant A: 'Try Firefox 3'

Download Firefox – Variant B: 'Download Now – Free'

In the original A/B test conducted by Mozilla, 9.73% of the roughly 150,000 visitors who saw variant A ('Try Firefox 3') downloaded Firefox. With variant B ('Download Now – Free'), 10.07% of roughly 150,000 visitors downloaded Firefox. That gives variant B a conversion rate about 3.6% higher (relative) than variant A.
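As a quick sanity check, the relative uplift can be recomputed from the rounded rates above; the visitor counts are approximate and the reported 3.6% presumably reflects the unrounded figures, so treat this only as a rough confirmation:

    # Relative conversion uplift, using the rounded rates reported above.
    rate_a = 0.0973  # variant A: 'Try Firefox 3'
    rate_b = 0.1007  # variant B: 'Download Now - Free'

    uplift = (rate_b - rate_a) / rate_a
    print(f"Relative uplift: {uplift:.1%}")  # ~3.5% with these rounded inputs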

The Usabilla test

We set up two similar tests in Usabilla in about five minutes, using screenshots of variants A and B. We created one test (let's call it 'A') with a screenshot of the Firefox homepage showing the call-to-action button with the text 'Try Firefox 3', and another test ('B') with the screenshot of variant B ('Download Now – Free'). In both tests we asked participants to perform only one task: 'Where do you click to download Firefox?'. To compare the performance of tests A and B we focused on 'time per task', which is a standard Usabilla feature.

Short introduction for participants

Participants read the task before they start

Participants perform the task by clicking on the screen

We sent about half of the participants to test A and the other half to test B. We invited participants via Twitter and, at the end of the test, asked them to help spread it. We used the 'Redirect URL' option in Usabilla to send participants back to Twitter to post a message about this case.
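The exact redirect link isn't shown in the original post, but a Twitter web-intent URL with a pre-filled message is one simple way to set this up; a minimal sketch, with a made-up message text:

    from urllib.parse import quote

    # Hypothetical pre-filled tweet; the real message used in the test isn't shown here.
    message = "I just took part in a quick Firefox download test on Usabilla"
    redirect_url = "https://twitter.com/intent/tweet?text=" + quote(message)
    print(redirect_url)  # use this as the 'Redirect URL' of the Usabilla test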

Test results

We exported the test results for both cases as XML and imported them into a Google Docs spreadsheet. We used the test data to calculate the geometric mean of the task times (the preferred average for task times, since it dampens the effect of a few very slow outliers).
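For anyone repeating the calculation outside a spreadsheet, here is a minimal sketch of the geometric mean; the task times below are made up, the real values come from the XML export:

    import math

    def geometric_mean(times):
        # n-th root of the product of the values, computed via logs.
        # Preferred over the arithmetic mean for task times because
        # a few very slow participants skew the arithmetic mean upward.
        return math.exp(sum(math.log(t) for t in times) / len(times))

    task_times = [3.9, 5.1, 6.2, 4.8, 7.0]  # seconds, hypothetical
    print(round(geometric_mean(task_times), 2))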

                          Test A             Test B
                          'Try Firefox 3'    'Download Now – Free'
Participants              201                187
Geometric mean (seconds)  5.30               4.84

Chart: task performance (geometric mean time per task) per variant

Usabilla as an alternative for A/B tests

In this demo case we set up two simple Usabilla tests in about five minutes as a quick-and-dirty alternative to A/B testing. We measured task performance (time) on the task 'Where do you click to download Firefox?' for two variants of the Firefox homepage, with a total of 388 participants split into two groups. Participants in test A ('Try Firefox 3') took on average 5.3 seconds to click the download button; participants in test B ('Download Now – Free') did so in 4.8 seconds, almost 10% faster.

We demonstrated that simple comparative Usabilla cases like this one can help you improve your conversion. The results of quick tests can give you guidance when you want to improve your website, without worrying about development, a release protocol, server access, and other obstacles. Use mockups or (edited) screenshots to test multiple variants in a short time span.

2 comments

  1. Harry Brignull

    Your test involves an initial screen that instructs users what to do: “You want to get the latest version of Firefox on the page we’re about to show you. Where do you click to get Firefox? Click anywhere on the screen to add a point.”

    This is fundamentally different to what most people would consider to be "real" A/B testing (e.g. Google Website Optimizer). Why? In real A/B testing (for want of a better term), the users' goals are not predefined. They are real users, doing their own thing on your site. They need to be enticed to engage in an action. Your test completely skips this important aspect. Consider the old "AIDA" conceptual model (Attention / Interest / Decision / Action). Your test only looks at the very last step – Action.

    The test you describe above looks useful, your system looks very neat, but this isn’t an alternative to real A/B testing. It’s a different type of user research entirely.

  2. Paul Veugen

    Thanks for the comment, Harry.

    We indeed cover only a small part of a 'real' A/B test and guide users with a predefined task. What we measured in this test was the task performance of users on the 'goal' task of this page: download Firefox. Comparing this performance can help you optimize a page if you don't have the ability to run real A/B tests on a live website. Quick and dirty.

    We could possibly use different types of tasks to compare pages in an "A/B" test to cover other aspects of the AIDA model, like: "What draws your attention on this page?", "Which elements are interesting?", etc.
