
I used to teach public relations. In public relations, your livelihood depends on details. If you send out a promotional brochure with spelling errors, it ruins credibility. In public relations classes, one spelling error in a final project would bring down a course grade by one entire letter. I assume that in professions such as engineering details matter even more. One misplaced number may result in a collapsed bridge and casualties.

Details matter for you, too. When you send out your resume, or even a relatively important email, a lack of attention to detail (spelling errors, for example) can cost you a job or an opportunity.

You’ve done hard work this semester. Don’t let its value and your credibility be ruined by lack of attention to detail: Spelling, consistency in formatting and alignment, use of punctuation – these details matter.

I’m trying to proofread your presentations and reports and point out as many details as I can find that need correcting. But don’t let me do this alone. Pay obsessive attention to detail – as if your life depended on it. As if your grade would go down one letter for each small error. Without attention to detail, there’s no such thing as excellence.

This might be the most painful and the most important lesson you learn from me this semester.

Let’s assume you measure a variable, “hotness,” on a scale of 1 to 5. According to most people’s intuition, 1 is less hot, and 5 is very hot. A shorter column in a graph means less hotness; a longer one means more hotness. It makes sense, doesn’t it?

Now, look at all your scales and all the graphs you created. Do they ALL make sense?

Sometimes, because of the way you laid out your answers in Qualtrics and because of the way Qualtrics assigns values to answers, you may end up with a reverse scale that is very confusing.

In the examples below, the scales are very confusing. In a culture that reads left to right, where things increase from left to right and from bottom to top, the image below suggests that the actual difficulty was higher than the expected difficulty – but that’s not what the authors mean!

Similarly, when you look at the column graph below, you’d think the blue one indicates more difficulty, and the red, less. Alas, that’s not true…

Solution: Flip the scale!

For this measure, go into Excel, replace 5 with 1, 4 with 2, and vice versa (3 stays the same), draw the graphs again, and voila! – they make sense.
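If you prefer to script the recode instead of doing it by hand, here is a minimal sketch in Python (the file name and column name are hypothetical – adjust them to your own Qualtrics export). On a 1-to-5 scale, the flipped value is simply 6 minus the original:

    import pandas as pd

    # Hypothetical export file and column name -- replace with your own.
    df = pd.read_csv("qualtrics_export.csv")

    # On a 1-5 scale, flipping is just 6 - value:
    # 5 -> 1, 4 -> 2, 3 -> 3, 2 -> 4, 1 -> 5.
    df["expected_difficulty"] = 6 - df["expected_difficulty"]

    df.to_csv("qualtrics_export_flipped.csv", index=False)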

In usability principles, this falls under consistency and standards – use accepted standards in your interface.

Please check your scales and graphs and make sure they make sense – in the way generally accepted in American culture.

I have noticed several times in the past that, although students write perfectly clear sentences in emails and blog posts, when they get into “paper writing” mode the quality of writing decreases dramatically: Sentences become long, wordy, and impossible to follow. Passive voice is used more often than it should be (as opposed to “They use passive voice a lot”).

Good writing is simple, clear, direct. Your writing will be easier to understand if:

  1. You use short sentences.
  2. You use simple sentence structures: Start with the Subject (Who is doing the action), follow with the Verb (the action) and then qualify as needed. In each sentence, Someone is Doing Something (Subject, Verb, Object). Try to stick to this structure as much as you can. Avoid passive voice: Something is being Done to Someone (Object, Verb, Subject).
  3. You use fewer words. Examine your sentences and see how many words you can take away without compromising meaning. I tell students to imagine each word costs 10 cents. Try to save your money when you write.

As you write, the main goal to keep in mind should be: How can I communicate this clearly? – NOT: How can I sound more elegant/academic? Focus on the reader (user), not on yourself.

Here is an example of rephrasing a sentence to make it shorter and clearer:

First of all, the open-ended questions after the post-task questionnaire as qualitative research were asked to the participants to analyze the nanoHUB website usability.

Start by asking yourself: What do I REALLY want to say? Then, just say it:

After each task, we asked participants two open-ended questions.

Some more tips/reminders for writing the final report:

  • It’s OK to use “We” – as in “We conducted usability research.”
  • It’s OK to use numbers inside sentences, but spell them out if they are at the beginning of a sentence: “Three out of 5 participants completed the task.”
  • Be consistent across sections. Use the same style. If you refer to participants as P1, P2, do so in all sections. If you capitalize Task 1, Task 2, then do so in all sections.

I know that one of the most difficult challenges of your final report and presentation is figuring out the most effective ways to communicate data. It takes scientific precision, artistic creativity, and great communication skills. It should be a fun challenge for graduate students – but, at this time of the semester, it gets quite painful, I know…

Take a break, watch the video below. It will remind you that there’s power and joy in data visualization:


Oh, and… never mind PowerPoint. Here’s the new requirement for your final presentation! /badjoke.

Remember that the end goal of conducting usability testing doesn’t stop at timing how long it takes users to accomplish tasks. Ultimately, we need to identify usability issues: aspects of the site’s design, organization, and functionality that presented problems to users. This is where your observations and the interviews provide useful data.

Make sure that, in addition to a detailed and clear presentation of usability metrics (as discussed in my previous posts), you identify and explain usability issues. Your report should make clear to the reader which aspects of the website presented problems.

You can identify major issues for each task, and have a separate section where you list the usability issues, and make recommendations for fixing them. Remember to be very specific about what the issue is and what your recommendations are.

When presenting this information on slides, I recommend placing each issue and recommendation on a separate slide. The slide could look something like this:

Screen shots are helpful here. If a screen shot refers to a specific URL, make sure you write the URL at the bottom of the image, so it’s visible and maybe even clickable. If you use screen shots in the slides, then you will need two slides per issue: one with the screenshot, and another like the one above.

How do you decide what is the best way to present information? When should you use a table, a bar graph, or a pie chart? It takes a bit of thinking about the nature of the information and the message you want to get across – then, Excel can do the rest.

Here is a quick resource for you that explains a bit about making these decisions (link opens pdf).

I believe tables and bar graphs will be most useful to you. So, how do you decide whether to use a table or a bar graph?

  • Tables are great for presenting individual values, but if you cram too much information into one table, it becomes overwhelming. Tables do not facilitate comparisons. In a table, the reader has to search for comparisons among data and compute them mentally, then remember them for later. This is a lot of hard work!
  • Bar graphs, on the other hand, are great for rankings and comparisons.

One thing to be careful about is the axes of bar graphs: What do the axes represent? Are they accurately titled? If you display scales along the axes, are they correct? Check what types of units you are displaying (absolute numbers, percentages, etc.). When you work with 5 participants, one participant represents 20%. While marking 20% on an axis is technically accurate, it is a bit misleading. When working with such small numbers, I suggest being very cautious about using percentages, if using them at all.
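If you do chart counts directly, here is a rough sketch in Python with matplotlib (the task names and completion counts are invented for illustration) that labels the axis in whole participants rather than percentages:

    import matplotlib.pyplot as plt

    # Hypothetical data: how many of the 5 participants completed each task.
    tasks = ["Task 1", "Task 2", "Task 3"]
    completed = [5, 3, 4]

    fig, ax = plt.subplots()
    ax.bar(tasks, completed)
    ax.set_yticks(range(0, 6))  # whole participants, not percentages
    ax.set_ylabel("Participants who completed the task (out of 5)")
    ax.set_title("Task completion (n = 5)")
    fig.savefig("completion_counts.png")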

Let’s look below at three ways of presenting the same information: level of agreement on a 5-point SA-SD scale, where 5 is SA and 1 is SD. Imagine the statement being rated is “The task was easy to complete.”

The table presents all the individual values. Look at it for 3 seconds or less and answer: Which task was easiest to accomplish?

Here is a different view of the values each participant gave. It doesn’t show the averages, but can a quick look at the colors give you an overall idea of which task was the easiest?

(Stacked bar graphs can be very effective. We saw a great example in one presentation yesterday of a stacked bar chart showing task completion rates.)

Below is a simple column graph representing only the means for each task. You lose the detailed information about each participant’s responses, but you gain even more clarity about what task was easiest to complete:
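If you want to build that last graph yourself, here is a minimal sketch in Python with pandas and matplotlib (the ratings are invented for illustration) that computes one mean per task and plots it:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical ratings: rows are participants P1-P5, columns are tasks.
    # 5 = Strongly Agree ("The task was easy to complete"), 1 = Strongly Disagree.
    ratings = pd.DataFrame(
        {"Task 1": [5, 4, 5, 4, 5],
         "Task 2": [2, 3, 1, 2, 3],
         "Task 3": [4, 3, 4, 5, 4]},
        index=["P1", "P2", "P3", "P4", "P5"],
    )

    means = ratings.mean()  # one mean per column, i.e., per task

    fig, ax = plt.subplots()
    ax.bar(means.index, means.values)
    ax.set_ylim(0, 5)  # show the full scale from zero so bars aren't misleading
    ax.set_ylabel("Mean agreement (1 = SD, 5 = SA)")
    ax.set_title("Mean ease-of-completion rating per task")
    fig.savefig("task_means.png")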

The examples in this post are all useful for the first part of the results section, where you present combined results. Things should be much easier in the second part, where you present the data for each task. See my previous post for an overview of the three parts of the results section.

I talked about this in class, but I want to provide a written explanation of how to structure the Results section of your reports.

The results section is the most important one. You spent a lot of time and effort collecting data, and now is the time to analyze and present it. The results section shows off your work. Use and present all the data you collected – don’t keep it secret!

The results section should progress from broad to more and more specific: The first part should present results across tasks, and the second part, results for each task. Then, include in the Appendix the data for each user. So we move from an aggregate of data to individual data points.

Overall Results Across Tasks

This sub-section presents data that enables comparisons across tasks. Compare the tasks on each metric, and show averages across tasks. This is broad-level data that sets expectations for what’s to come: Which was the easiest task? Which was the most difficult? How does expected difficulty compare to actual difficulty across tasks? And so on. Comparisons are best illustrated with bar graphs.

Results for Each Task

This sub-section presents the metrics for each one of the tasks, and enables comparisons across individual users. This is where we begin to have access to individual-level data. Which participant completed the task fastest? Which one took the most time?

Within the sub-sub-section for each task, present the quantitative and qualitative data for all the metrics you collected, and discuss anything you know from observations that might explain the results. Include quotations that illustrate the main points you extracted from the qualitative data.

This blog post has some charts that show metrics per task, and then overall metrics at the bottom. Take a look and note the difference.

Appendix

The Appendix presents the results at an even more granular level: the results for each participant. So, your appendix will have 5 sections, one for each participant. Start with the demographics (but withhold information that may compromise the participant’s anonymity), then present the participant’s results for the pre-session questionnaire, metrics for each task (the qualitative data can be a summary with 1-2 quotes, not a full transcription), and the post-session questionnaire.

If you follow this progression, you give your reader different views of the data, starting with a broader picture and moving on to individual data points.

The usability report templates I pointed you to, chapter 8 in the Tullis & Albert book, and the class presentations give you options for presenting this information. See also sample reports by Tullis that give you more ideas about how to present information.

Bonus link: Nielsen’s famous article Why You Only Need to Test with Five Users. You can cite it to back up the number of participants you tested with.