
Paul Veugen

Founder / CEO @ Usabilla. User experience designer, entrepreneur, and metrics junkie. Follow the author on Twitter.

Upgrade: New monthly accounts

Today we’ve made an important change to our accounts. We’ve switched from yearly plans to flexible monthly plans. We’re very happy to have you as a customer, so we’ll be doing our very best to make this switch a valuable upgrade for all of our current customers.

Our new plans

We’re switching from yearly accounts to monthly plans. New clients can now upgrade and downgrade their accounts at any time. Accounts are also no longer restricted to a limited number of pages. Instead, you can set up one or more tests simultaneously. Once you’ve hit your limit of active tests, you can deactivate a test and start a new one, as often as you like.


Questions? Feedback?

We’ve automatically migrated all current accounts to the new plans. We carefully planned this migration and manually checked the upgraded accounts. Still, we might have missed something, or maybe you’re not happy with this change at all. If that’s the case, please let us know. I’m sure we can help you.


Submitting form data to a Usabilla test

Not so long ago we added ‘custom variables’ to our tests. We store every variable you specify in the URL of your test, and include these variables and a unique participant id (pid) in your redirect URL. That might sound a bit complicated, but it enables you to store additional data for individual participants in Usabilla and to pass data through to other services as well. Let me explain with a simple example that uses a web form to submit data to a Usabilla test:

A simple web form

You can submit a simple web form to a Usabilla test. If you use ‘GET’ as the form method, the fields in your form are submitted as URL-encoded variables. Example:

<form method="get" id="screener" name="screener" action="">
	<label for="name">Name:</label>
	<input id="name" name="name" type="text" />
	<label for="group">Group:</label>
	<select id="group" name="group">
		<option value="A">A</option>
		<option value="B">B</option>
	</select>
	<input type="submit" value="Submit" />
</form>

This example form submits two variables (name and group) to a Usabilla test. The form input is stored together with your participant’s test data. At the end of the test, the variables are also attached to the redirect URL. This redirect URL could be another survey as well: read about combining Usabilla with a PollDaddy survey.
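To see what the browser actually sends on a GET submission, you can reproduce the encoding in a few lines of Python. The field values and the action URL below are made-up placeholders, not a real Usabilla test URL:

```python
from urllib.parse import urlencode

# Values a participant might enter in the form above.
fields = {"name": "Ada", "group": "B"}

# A GET submission appends the fields to the form's action URL
# as a URL-encoded query string. The action URL is a placeholder.
action = "https://example.com/usabilla-test"
submit_url = action + "?" + urlencode(fields)
print(submit_url)  # https://example.com/usabilla-test?name=Ada&group=B
```

Usabilla reads these query-string variables off the test URL and stores them with the participant’s results.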



Design for Conversion NYC – November 18th

After three inspiring events in Holland and one in Germany, the team behind Design for Conversion will be organizing a New York City edition. The fifth edition of the Design for Conversion conference will take place on Thursday, November 18th, at the Galapagos Art Space (Brooklyn, NY). A good reason to visit New York and meet conversion experts from all over the world.

Design for Conversion

Experts from all over the world will gather in New York for the fifth Design for Conversion conference. The event was created to bring together a diverse group of talented professionals and thought leaders. The goal is to share new research and high-level professional insight, and to invite participation in groundbreaking discussions and hands-on activities. These will help designers of technology, website developers, and marketing professionals understand changing trends in consumer behavior, the use of persuasive technology, and how these issues converge to influence consumer or societal action.

When: Thursday, November 18, 2010
Where: Galapagos Art Space, Brooklyn, New York



Why do you trust Mint? Takeaways from a visual survey

Two weeks ago we published four tiny visual surveys to help inspire Usabilla users. Our users and visitors participated in these surveys and shared their feedback on different webpages. We will share the results of these test cases to give you an idea of how short visual surveys can help you understand your users. As requested, we start with a selection of results from the Mint demo case: “Click on the things that make you trust Mint. Please explain why.”

We analyzed the results of 177 participants in this survey. They answered the question with 596 points (3.4 points per participant) and explained their clicks with 140 notes (0.8 per participant). We will share four takeaways based on a selection of their feedback.
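The per-participant averages above are simply the totals from the survey divided by the number of participants; a quick check:

```python
participants = 177
points, notes = 596, 140  # totals reported for the Mint survey

points_per_participant = round(points / participants, 1)
notes_per_participant = round(notes / participants, 1)

print(points_per_participant)  # 3.4
print(notes_per_participant)   # 0.8
```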




Feeding the hunger of data junkies with Google Docs

Last week we silently released the first version of our API. Sido explained the new XML export options and the new API key in his last blog post. In short: we’ve implemented two XML export options that let you create XML files with the content and the results of your tests. We built this feature as a first step in opening up Usabilla and making the data you collect accessible in any other tool. To demonstrate some of the possibilities of these new feeds, I’ve created a Google Spreadsheet that imports test results. In this post I will explain step by step how this spreadsheet works and how you can build your own.
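The same feeds the Spreadsheet imports can be consumed in any environment that parses XML. As a minimal sketch in Python, using an invented XML layout (the element and attribute names below are illustrative only; see Sido’s post for the real export format):

```python
import xml.etree.ElementTree as ET

# Hypothetical results export -- the schema here is invented for
# illustration, not the actual Usabilla export format.
sample = """
<results>
  <participant pid="p1">
    <point x="120" y="80" note="Looks trustworthy" />
  </participant>
  <participant pid="p2">
    <point x="300" y="45" note="" />
  </participant>
</results>
"""

root = ET.fromstring(sample)
rows = []
for participant in root.findall("participant"):
    for point in participant.findall("point"):
        rows.append((participant.get("pid"), point.get("x"), point.get("y")))

print(rows)  # [('p1', '120', '80'), ('p2', '300', '45')]
```

In Google Docs the equivalent step is an import formula pointed at your export URL; in a script you would fetch the feed with your API key and walk the tree as above.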




Usabilla has been selected as a White Bull finalist

Usabilla has been selected as a finalist for the Bully Awards, which honor the leading European technology, media, and telecommunications companies. We’ve been selected as a finalist in the ‘White Bulls’ category and are thrilled to get the opportunity to present Usabilla at the White Bull Summit in Barcelona.

A total of sixty European companies were named as finalists for the 2010 Bully Award. The deserving finalists were selected from a pool of entries that included hundreds of nominated European TMT companies. The finalists fall into three categories:

• Yearlings: firms that seek or have received angel/seed rounds or equivalent
• Young Bulls: firms that seek or have received Series A financing
• Longhorns: post Series A firms

“The Bully Award finalists were selected first and foremost for values of excellence and innovation in the TMT sector,” stated Farley Duvall, Founder and Chairman of White Bull Summits. “Each firm has been recognized as a leader in its field, with a bright business proposition and meaningful market strategies, driven by a rich understanding of customer needs and technological solutions.”

Thirty winners (ten winners per category) will be announced live at the inaugural, invitation-only White Bull Summit 2010, Pathways to Exit event in Barcelona, September 20 – 22.


Some statistics on the use of different tasks

When you set up your Usabilla test, you can pick one of our suggested tasks or add your own. Most tasks in Usabilla tests are predefined tasks in one of the 20 languages we offer. I was curious which tasks are most popular, so I analyzed the tasks and results in our database. I want to share a selection of my findings about how our users use tasks and how participants respond to them.

Which tasks are the most popular?

When we launched our first beta, we didn’t provide any task suggestions, and most of our early users were completely lost without guidelines. Based on tests with our prototypes and the input of usability pros, we made a small list of predefined tasks, hoping they would inspire our users to add their own research questions. The following chart shows the ‘market share’ of each of the 9 predefined tasks (out of all predefined tasks).

Which predefined task is the most popular?

Roughly 80% of all Usabilla tasks are predefined tasks; the other 20% are custom tasks added by users, who are experimenting more and more with different types of tasks. Some custom tasks defined by users, to inspire you:

  • ‘Test’
  • ‘Where do you click for more information about …’
  • ‘Mark the things that you find confusing and tell us why.’
  • ‘Where would you click to continue after reading the content?’
  • ‘Mark your three least favorite elements of this page.’
  • ‘Take a look at this page and give us some comments about layout and styling.’
  • ‘You would like to add a comment to an article you’re reading. Where would you go?’
  • ‘Where do you click if you want to know more about …’
  • ‘Where do you click first?’
  • ‘Click on the search box.’
  • ‘Where do you click to select a date.’
  • ‘Where would you click to learn about … ?’

How many points do participants add?

We offer two types of tasks in Usabilla. A standard task allows participants to answer a question or task with zero or more points, and they can attach notes to these points to share additional feedback. A one-click task can be answered with only one point, and participants can’t add notes. These tasks are especially interesting for measuring first impressions and task performance (incl. time). All our predefined tasks are standard tasks, where participants can add multiple points and comment with notes. The chart below shows how many points participants add on average for each of the predefined tasks.

How many points do participants use to answer a task?

The positive predefined tasks seem to make participants more trigger-happy: participants answer positive tasks with 12 points on average, and use only 10.5 points to answer negative questions.

How many points do participants use for negative and positive tasks?

What works for you?

We would love to hear which tasks work best for you. We’re planning to expand the list of predefined tasks in the near future and to add options that make it easier to set up different types of tests (for example measuring call-to-actions, attention, recognition, credibility, findability, collecting design feedback, information scent, etc.). Feel free to share your experiences and wild ideas in the comments, on Twitter (@usabilla), or by mail (support@…).


Improvements in the test interface

One of our most important focus points is the usability of our own test interface. Participating in a Usabilla test must be simple, fun, and way more exciting than a standard survey. In the past year we’ve released about 15 iterations on our test interface. Each iteration was based on feedback from participants, users, and experts using Usabilla. We’ve just released a small update and we think this update solves the most important usability issues of our test interface.

Problem: Participants don’t have a clue that they can add notes.

Usabilla is primarily a quantitative tool. Notes add a qualitative aspect to the tests and are in many cases of great help in interpreting test results. If you run a test with, for example, 250 participants and every one of them adds 20 points with notes, your ‘lean & mean’ test is probably no longer ‘lean & mean’. This is why we treat points as the primary response and notes as secondary. Unfortunately, in our previous releases we didn’t manage to find a good balance between these two interactions.
