
Test Everything You Got, Regardless Of Its Polish Or Fidelity

Sketches, wireframes, et al -- all worthy of testing

Whether you test your work on a regular cadence or only once or twice per cycle, the inevitable question is what to actually test. We wrestle with the pressure of maximizing the time and money spent on testing and getting the most insight for that expense. Is it best to put a rough sketch of an idea in front of potential or existing customers, or to wait until there's a more fleshed-out version to show? Should it be clickable (really clickable, i.e., working code) or a mocked-up experience created in Axure, PowerPoint, Fireworks or any other tool?

The short answer is test everything. By "everything" I mean whatever you have ready, regardless of its polish or fidelity. The challenge is to set your expectations about the feedback you'll receive for each type of asset you present and what you will actually learn from it.

Why does testing everything make sense?

Adopting a policy of testing everything relieves your team of the pressure of hitting certain test-based deadlines. Ultimately, the testing schedule shouldn't drive your design and development cycles; plenty of other factors shape that timeline. Testing should provide insight into it, not drive it. By accepting that whatever you have ready on test day is what goes in front of a test participant, you are no longer hounded by yet another deadline.

Additionally, you start to develop a testing rubric that delivers insight at various phases of the product cycle. In many traditional waterfall shops, testing is done too late in the process when the design is baked, developed and ready to be deployed. Feedback at this point is educational, perhaps enlightening, but fruitless from a product launch perspective (at least for that phase of the project). Freeing the organization from the obligation of testing complete, polished experiences allows testing to happen earlier and more frequently. No longer does your company have to wait for an ideal state to proceed with user validation (only to find out that ideal state was totally off the mark). Increasing your time with customers throughout the design and build process improves the outcome of your project by continually nudging the interface in a more appropriate direction. As an added side benefit, you also begin to build a user-centric culture within the company if it didn’t already exist – a huge plus.

What should I expect from each type of test?

The challenge with testing whatever is ready on testing day is interpreting the feedback you receive. Each level of fidelity and refinement elicits a different reaction from your target audience. It's important to realize this and understand what to expect so that you can apply your findings appropriately.

Sketches

Let's start with the lowest-fidelity option first – sketches. Yes, it's OK to show sketches to your customers. The feedback you'll receive, however, will only guide your concept directionally. It will validate whether or not your approach generally alleviates the customer's pain points and whether your proposed workflow fits the way they go about their day-to-day tasks. What you won't get at this level is tactical, step-by-step feedback on the process, insight about design elements or even meaningful feedback on copy choices.

Sketches are easily iterated, especially during testing sessions and from participant to participant, which adds to their viability as a testing tool.

Wireframes

Next come wireframes. Showing test participants wireframes allows you to assess the information hierarchy and layout of your experience. In addition, you’ll get feedback on the taxonomy, navigation and general information architecture. The beginnings of workflow feedback start to trickle in but at this point your test participants are focused primarily on the words and the selections they’re making. Wireframes provide a good opportunity to start testing copy choices.

Updating wireframe decks is relatively easy between testing days but becomes a bit more challenging between test participants (depending on the tool used to create them).

Mockups (not clickable)

As soon as you move into visually designed assets, the level of feedback becomes increasingly tactical. Reactions from your test participants cover branding, aesthetics and visual hierarchy, as well as figure/ground relationships, the grouping of elements and the clarity of your calls to action. The success of your color palette is validated (or not) by your test participants, which translates directly into how well your ideas come across to your customers.

Unclickable mockups, however, don't allow your customers to react naturally to the design shown or to any subsequent steps in the experience. Instead of asking your customers how they feel about the outcome of each click, you have to ask them what they would expect and then validate those responses against your planned experience.

Mockups are difficult to iterate on during testing day, leaving incremental learnings for the next time participants see the design.

Mockups (clickable)

A set of clickable mockups avoids the pitfalls of showing screens that don't link together, as described above, by letting users actually see the results of their clicks. There is, however, significant overhead in mocking up multiple paths through each experience, which limits the amount of material that can be tested this way. In addition, the linked mocks will likely not behave the way a real web page behaves, creating an artificial experience that affects the quality of the feedback (see the sketch below). Early workflow feedback (along with the elements mentioned in the Mockups section) is best collected with this tactic.
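One lightweight way linked mocks are sometimes stitched together is with plain HTML image maps over exported screens. The sketch below is only an illustration; the file names, coordinates and screens are hypothetical, not from any particular tool or project. It also shows where the artificiality comes from: only the traced region is clickable, and nothing else on the "page" behaves like a real page.

```
<!-- mockup-home.html: one exported mockup image with a single clickable
     hotspot. All file names and coordinates are hypothetical examples. -->
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Home mockup</title>
</head>
<body style="margin: 0">
  <!-- The entire screen is a static image; only the mapped area responds to clicks -->
  <img src="home.png" usemap="#hotspots" alt="Home screen mockup">
  <map name="hotspots">
    <!-- Rectangle traced over the mockup's "Sign up" button -->
    <area shape="rect" coords="40,300,200,340" href="mockup-signup.html" alt="Sign up">
  </map>
</body>
</html>
```

Every additional path means another exported image and another set of mapped regions, which is exactly where the overhead mentioned above comes from.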

HTML prototype

Prototyping with HTML is the next step up from clickable mockups. The HTML itself may not replicate the entire visual design perfectly (nor does it absolutely have to), but the workflow, page behavior, scrolling and other native browser behaviors all work as the user would experience them in the real world, giving your test participants a realistic product experience to assess.

These prototypes also allow for quick iterations, as it is quite simple to clone a page, make some tweaks and repoint certain links, as in the sketch below.
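To make "clone a page, tweak it, repoint a link" concrete, here is a minimal sketch of one step in such a prototype. The file names and copy are hypothetical examples, not from the original post:

```
<!-- signup-step1.html: first step of a hypothetical signup flow.
     To test a variation between sessions, copy confirm.html to
     confirm-v2.html, edit its copy, and change the href below. -->
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Sign up - Step 1</title>
</head>
<body>
  <h1>Create your account</h1>
  <input type="email" placeholder="Work email">
  <!-- A real link in a real browser: scrolling, the back button and
       other native behaviors work the way participants expect -->
  <a href="confirm.html">Continue</a>
</body>
</html>
```

Because this is a real page in a real browser, participants get native behavior for free, which is what separates this level of fidelity from linked static mocks.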

Code

Live code is likely the best experience you can put in front of your test participants. It replicates the design, behavior and workflow of your product. The feedback is real and can be applied directly to the experience shown. The challenge is that at this point you're in some kind of "done" state (whether that's QA, staging or production), and making fundamental changes to the experience becomes more difficult. In a waterfall environment, if you're testing code you're likely near the end of the project, and incorporating feedback from usability testing will likely have to wait for phase 2 (if there is one). In Agile environments, live code is generated much sooner and, if the experience fails to meet users' needs in the lab, can be updated before the end of the iteration. In a worst-case situation, the outcome of that iteration can be scrapped. Hence, testing live code is typically best done in more agile situations.

Test early, test often

That should be your motto when it comes to validating your designs. Getting whatever is available on testing day in front of your users will provide feedback on the experience. The depth, focus and relevance of that feedback will largely depend on the fidelity of the designs shown. While the range of feedback is broad, knowing what to expect, and bringing customers into your organization's product design conversation, makes it worth showing them whatever you have ready when testing day rolls around.

Join the discussion on Hacker News or let me know what you think below!


Jeff Gothelf is a user experience designer, blogger, speaker and Lean UX advocate based in metro NYC. He has spent his 13-year career defining and designing engaging experiences for clients big and small. He is currently the Director of User Experience at TheLadders.com, where he helps executive jobseekers and recruiters make meaningful connections with each other. Previously he helped shape the designs at Publicis Modem, AOL, Webtrends and Fidelity. Jeff publishes his thoughts on his blog and on Twitter @jboogie.

3 comments

  1. Bex

    TEST everything? Please define this concept TEST. Getting feedback at all times is useful but not mandatory. Testing everything all the time develops a testing mentality to find problems – but also tends to get an organization aligned around the idea that usability will guarantee good design. Not so. SO very not so.

  2. Ifraz Mughal

    Great article. Slight issue with following statement “There is, however, significant overhead in mocking up multiple paths of each experience thereby limiting the amount of material that can be tested this way”.

    Axure cuts through any of this overhead and the beauty is that one doesn’t need to write a single line of code.

  3. John Daniel Castillo

    Really helpful.. Will share this to our product owner, they thought wireframes and mock ups are just set of images and layouts.. This should give them some insights..
