4. User Testing — I’ve never been so aware of what to say

Fernanda Alves
4 min read · Jan 24, 2021


“Speech is silver, silence is golden” — Arabic Proverb

One of my most intense UX experiences was the testing sessions. Like a romantic date, you have to be strategic, but no matter what you do, you will never feel 100% prepared. You have a plan in your head that you have discussed many times before. At least, that is exactly what my group and I did.

The plan included two tasks to be performed in remote sessions, using Zoom to record the sessions, Figma to share the prototype, and Microsoft Word to ensure we were all following the same script. We also considered platforms such as Loop11, but because of time constraints we left that for the future.

Firstly, we had to define our bot's behaviour and script before we could develop an adequate general test script. Our bot was the core of our innovative approach; not so much because of its functions, since Alexa, Google Assistant, and many customer-service bots are already in use, but because of its objective and its use within a job board. Each of us brought a version of what we considered the ideal "bot language" for the project. Through comparison, and after comments and adjustments from all team members and the lecturer, we reached the final version, as can be seen in Figure 1.

Figure 1

Note. We had a total of 5 reviews of our bot script, some aimed at reducing the information load, others at fixing language errors or even adjusting the tone.
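To make the idea of a "bot script" concrete, here is a minimal sketch of how a job-board bot's flow could be organised as states and prompts. Every state name and prompt below is a hypothetical example for illustration, not our actual script (which is shown in Figure 1):

```python
# A minimal sketch of a job-board bot script as a simple state machine.
# All states, prompts, and transitions are hypothetical examples.
BOT_SCRIPT = {
    "greeting": {
        "prompt": "Hi! I can help you find your next job. What role are you looking for?",
        "next": "collect_role",
    },
    "collect_role": {
        "prompt": "Got it. Where would you like to work?",
        "next": "collect_location",
    },
    "collect_location": {
        "prompt": "Here are some openings that match. Want to refine the search?",
        "next": "show_results",
    },
}

def run_turn(state: str, script: dict = BOT_SCRIPT) -> str:
    """Print the bot's prompt for the current state and return the next state."""
    step = script[state]
    print(f"bot> {step['prompt']}")
    return step["next"]

state = "greeting"
state = run_turn(state)  # bot> Hi! I can help you find your next job. ...
```

Writing the script down in a structure like this is also what makes review rounds easier: each prompt can be discussed, toned down, or shortened in isolation.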

After that, we created the screener questionnaire to give us further knowledge of our potential users' preferences (Rubin et al., 2008). We ran our questionnaires on channels such as Facebook and WhatsApp, and within our social circles. Participants were selected based on their age group, technology literacy, and job-hunting profile, as we wanted a profile similar to Zack, our main persona. As a result, we had a total of 8 people in the usability testing phase.
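For illustration, this is a minimal sketch of how screener responses could be filtered programmatically against a persona's profile. The field names, thresholds, and respondents below are hypothetical, not our actual criteria or data:

```python
# Hypothetical screener responses; in practice these would come from the
# questionnaire export (e.g. a CSV of Facebook/WhatsApp respondents).
respondents = [
    {"name": "A", "age": 24, "tech_literacy": "high", "job_hunting": True},
    {"name": "B", "age": 47, "tech_literacy": "low", "job_hunting": False},
    {"name": "C", "age": 29, "tech_literacy": "medium", "job_hunting": True},
]

def matches_persona(r: dict) -> bool:
    """Keep respondents close to the persona: young-ish, comfortable
    with technology, and actively looking for a job. Thresholds are
    illustrative assumptions."""
    return (
        18 <= r["age"] <= 35
        and r["tech_literacy"] in {"medium", "high"}
        and r["job_hunting"]
    )

participants = [r for r in respondents if matches_persona(r)]
print([r["name"] for r in participants])  # -> ['A', 'C']
```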

Initially, our test script followed a strict structure of tasks and questions. It was divided into two parts, as shown in Figures 2 and 3.

Figures 2 and 3

Note. The first testing script was divided into two tasks with pre-defined questions (Part 1 and Part 2).

However, our final version was simplified into a single file based on the lecturer's feedback and our research, as shown in Figure 4.

Figure 4

Note. Concise version based on feedback and research.

The benefit of having scripts is that it brings uniformity to usability testing. Beyond that, you can turn the insights from these user testing sessions into actionable, fixable items (when possible). We collected qualitative data through techniques such as thinking aloud and observation, and quantitative data through SUS (System Usability Scale) questionnaires to measure how satisfied our users were (Rubin et al., 2008).
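For context, standard SUS scoring turns ten 1-to-5 Likert responses into a 0-to-100 score: odd-numbered (positively worded) items contribute their response minus 1, even-numbered (negatively worded) items contribute 5 minus their response, and the sum is multiplied by 2.5. A minimal sketch in Python (the example responses are made up, not our data):

```python
def sus_score(responses):
    """Compute the SUS score (0-100) from ten 1-5 Likert responses.

    Odd-numbered items are positively worded: contribution = response - 1.
    Even-numbered items are negatively worded: contribution = 5 - response.
    The summed contributions are scaled by 2.5 to reach the 0-100 range.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    total = sum(
        r - 1 if i % 2 == 0 else 5 - r  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# One hypothetical participant's answers to the ten SUS statements:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```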

We had a small SUS sample, mainly because of time constraints. However, it was enough to show us that we were heading in the right direction. Figures 5 and 6 show some of the changes we made as a result of the feedback from the first and second iterations.

Figure 5

Note. Onboarding card improvements: more suitable illustrations, less text, clearer hierarchy, a better tone of voice, and clearer calls to action.

Figure 6

Note. Bot improvements: fewer calls to action, removal of the bot icon, the introduction of carousel job cards, and a better visual hierarchy indicating who is talking.

We used Figma's commenting tool to share findings from the user testing sessions, as shown in Figure 7.

Figure 7

Note. Placing comments directly where the issues happened made it easy to identify the critical screens.

In conclusion, if we had run more iterative rounds and collected more SUS results, we would have discovered more things to improve and refine. It is interesting to observe how user testing sessions are shaped by how we, as moderators and note-takers, manage them, and how directly that can affect our users' behaviour. Still, our results were satisfactory and compatible with our plan and deadline.

Reference List:

Krug, S. (2009). Rocket Surgery Made Easy: The Do-It-Yourself Guide to Finding and Fixing Usability Problems (1st ed.). New Riders.

Horton, S., Quesenbery, W., & Gustafson, A. (2014). A Web for Everyone: Designing Accessible User Experiences (1st ed.). Rosenfeld Media.

Rubin, J., Chisnell, D., & Spool, J. (2008). Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests (2nd ed.). Wiley.

Shevat, A. (2017). Designing Bots: Creating Conversational Experiences (1st ed.). O’Reilly Media.
