One of the primary goals of any user test is to determine whether the user can successfully accomplish the task set before them. But usability is only one component. Is the feature fun to use? It’s sometimes difficult to get test participants to be completely honest, because they tend to want to please the test facilitator rather than speak ill of a feature.
Microsoft developed a toolkit that helps encourage test participants to be more truthful about what they like and do not like about the features they experience in a test. The participant chooses 5 words from a list, typically about 60% positive terms and 40% negative terms. The goal is to get participants to open up and provide more detail about what they like and dislike about the system, regardless of how successfully they completed their tasks. The key is not only to ask the participant to choose 5 words, but to have them elaborate on what they mean by each. Sometimes a participant appears to choose polar-opposite terms, but their explanation makes perfect sense once I listen to it.
In my practice, I’ve used a list of 50 words, which I’ve found helps sore spots and bright spots stand out a bit better. The Microsoft team used a list of over 100 words. The standard question I ask after the test is complete is the following:
“Please select 5 words from the following list that best describe your experience with _____. Explain…”
Only recently did I realize how best to compile this data. I had been looking for frequency patterns, but then my boss suggested I simply plug the data into a word cloud generator, such as Wordle.
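If you'd rather script that compilation step than count by hand, a minimal sketch in Python might look like the following. The session data here is made up for illustration; the output format (one "word:count" line per word) is the weighted-text input that many word-cloud generators accept.

```python
# Sketch: tally how often each reaction word was chosen across test
# sessions, then print weighted "word:count" lines for a cloud generator.
# The participant selections below are invented illustrative data.
from collections import Counter

sessions = [
    ["useful", "slow", "clean", "confusing", "reliable"],
    ["useful", "clean", "fresh", "slow", "impressive"],
    ["useful", "reliable", "slow", "dated", "clean"],
]

# Flatten all sessions and count each word's frequency.
counts = Counter(word for session in sessions for word in session)

# Emit the words from most to least frequent.
for word, count in counts.most_common():
    print(f"{word}:{count}")
```

Words chosen by several participants carry the most weight, so they render largest in the cloud, which is exactly the "frequency pattern" I had been hunting for manually.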
Because we use this measurement tool for each feature we test, it’s easy to communicate the “pulse” of a feature to the entire design team and determine whether we are meeting our goals for usability and desirability. It also provides a more holistic assessment: if users can successfully accomplish tasks with the feature but the word cloud suggests otherwise, we rely on participants’ self-reports to understand what is not pleasing to them.
Using this tool is not a complete solution; it’s part art and part science. But it truly does add quantitative data to a qualitative measurement. I would be remiss if I didn’t point you to a nice resource from the folks at UserFocus. I hadn’t even realized that others had the same idea of using a word cloud to represent this data. They also provide a nice spreadsheet if you want a resource to get you started.