Remote Usability Testing

Head note: This lecture assumes that you have read the usability testing lecture.  

Remote usability testing is usability testing conducted at a distance over the internet, rather than in a lab or face-to-face. Remote usability testing techniques are classified into two types:

  • Moderated usability testing
  • Unmoderated usability testing

Moderated usability testing means that a test administrator controls the testing process in real time, as described in my usability testing lecture. Unmoderated usability testing means that participants perform the testing tasks without a test administrator controlling the process.

Moderated and unmoderated testing each have advantages and disadvantages.

Moderated
  • Advantages: process control; agile to user and context; more observations
  • Disadvantages: slow/time consuming; more expensive; smaller participant pool

Unmoderated
  • Advantages: cheaper; faster; larger participant pool
  • Disadvantages: not agile to user and context; requires participant initiative; fewer observations

Moderated usability testing has the advantage that the testing process can be controlled and manipulated by the test administrator. The test administrator can control the platform used and the entry point for scenarios. The test administrator can also be agile to the participants' responses and the context of the testing. For example, if the participant runs into an error, the test administrator can guide the participant out of the error. Typically, more observations are possible with moderated testing, for example observing participants' facial expressions or vocalizations. Moderated testing has disadvantages compared to unmoderated testing. Moderated testing is slower and more time consuming because the test administrator can conduct only one test at a time. Consequently, moderated testing is more expensive. In addition, moderated testing typically requires a usability lab or a room and more equipment. Because moderated tests are more expensive and time consuming, typically fewer participants are tested.

Unmoderated usability testing is cheaper and faster than moderated testing. Many participants can perform the tests simultaneously. Because unmoderated testing uses few tools, it is cheaper, although the commercial applications now available for conducting unmoderated usability tests can be expensive. Because unmoderated testing is faster and cheaper, many more participants can be tested. Unmoderated testing has disadvantages. It is not agile to the participant or the testing context. If the participant makes an error, the participant must figure out how to correct it or, more typically, abandons the test. Another serious disadvantage is that participants have to initiate the testing, which requires that they have intrinsic motivation to perform the test. Fewer observations are made in unmoderated tests, typically limited to app logging and participants' responses to surveys.

An app tested using an unmoderated usability test should be bug free and aesthetically pleasing. Participants naturally assume that they are testing a finished app. If there are bugs, participants will become frustrated and stop testing. Participants will find your bugs, even bugs that you told them about before the testing, so all software bugs must be fixed before testing. If the app is not aesthetically pleasing, most of the feedback from participants will be about the app's aesthetics, even if the usability test instructions stress that only the functionality of the app is being evaluated.

Because of these advantages and disadvantages, moderated and unmoderated usability testing are generally appropriate for different contexts. 

Moderated
  • Testing early or late in development
  • Testing complex apps
  • Pilot studies

Unmoderated
  • Testing mature apps
  • Exploring complex apps
  • Testing simple apps

Moderated usability testing is more appropriate than unmoderated testing for apps early in development because of the possibility of usability or app failures. The test administrator can help the participant recover from the scenario or the app failure. In addition, direct visual observation of the participants' reactions to the app can be important for apps early in development. Moderated pilot studies are more beneficial for designing the final usability test because the intuition gained from direct observation of the participants can be very instructive.

Unmoderated usability testing can be successful for mature apps where failures are not expected. It is most typically used for very directed tasks or simple apps, but it can also be used for participants' freeform exploration of complex apps, especially if the app logs the navigation.
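
To make the logging concrete, below is a minimal sketch of a server-side navigation log, assuming a Python/Flask backend; the /log route and the event fields are hypothetical, not part of any particular app.

    # Minimal navigation-logging endpoint (sketch; Flask assumed).
    # The client is assumed to POST JSON events such as
    # {"participant": "p01", "event": "navigate", "page": "/observations"}.
    import json
    import time

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/log", methods=["POST"])
    def log_event():
        event = request.get_json(force=True)    # parse the posted JSON event
        event["server_time"] = time.time()      # timestamp the event on arrival
        with open("navigation.log", "a") as f:  # append one JSON line per event
            f.write(json.dumps(event) + "\n")
        return "", 204                          # no content needed in the reply

Each line of navigation.log can later be replayed to reconstruct a participant's path through the app.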

I am acquainted with two specific remote usability testing techniques:

  • Unmoderated Usability Testing using Emails and Google Forms
  • Moderated Usability Testing using Zoom

The following sections describe the details of the techniques.

Email Usability Testing

Unmoderated usability testing using emails is a discount usability test for small development teams without access to user experience experts or test administrators. It is cheap and requires only minimal time from the developers. Developers can quickly design the test, deploy the app for testing on the production server, and continue developing the app on the development server. The challenge in designing the usability test is that there is limited control over the participant during testing. Consequently, unmoderated usability testing using emails is most successful for small, simple apps late in development.

The general process of unmoderated usability testing is:

  1. Design usability test and deploy the test app
  2. Acquire participant email list
  3. Email the participants
  4. Collect participant responses
  5. Analyze participant responses

Each step is a small task that fits into a sprint for an agile development team. 

Sections below describe some details of each step.  

Designing the Usability Test

The usability test consists of three parts:

  • User manual
  • Instructions for performing tasks
  • Survey for collecting responses

User Manuals

The test designers need to decide if the usability test will include a user manual or instructions on how to use the app. It is appropriate for complex apps to have user manuals. 

If the designers believe that the app is "intuitive" and are interested in the first-time use experience, then the usability test will not include a user manual, or the instructions will be minimal. Alternatively, designers may expect that users will eventually become experts with the app or will receive instruction in how to use it. If the design and development team want to know the efficiency of expert use, then the user manual should be very detailed, with screenshots of the app in use.

Scenarios/Task Instructions

The instructions for performing the tasks depend on:

  • Task specifics 
  • Expected participants
  • Test Goals

Generally, the instructions should not be a step-by-step procedure for achieving the scenario's goal; rather, they should resemble a usability test scenario. But if the participants are not acquainted with the task scenario, they may not be able to imagine or recreate the task context, so the task instructions may need to be more detailed.

The test goals may be to explore the general usability of the app or its navigational features, in which case the instructions should be more general, allowing the participant to explore the app.

An email usability test should have only one or two scenarios to perform because participants monitor themselves and will get lost in a long list of scenarios. Also, participants are unlikely to have the patience to perform more than one or two scenarios. 

Survey

Observations of the participants are typically made through survey responses. The survey can be questions in the email or a Google Form. Google Forms are more convenient for the development team and have many features, but they require the participant to follow a link.

Because there is no test administrator to follow up with an interview, the survey should be short and contain open-ended questions. You cannot assume that the participant understands the survey or will complete it. I like to keep the survey to six or fewer open-ended questions. Typical open-ended questions you can ask:

  • What difficulties did you have performing the task? Please be as specific as possible.
  • What did you like about the app?
  • Do you have suggestions for improving the app?

These questions are few enough that you can be reasonably assured that every participant will answer all of them. The questions are also general enough that participants can write as much as they want. Most participants will answer with short sentences, but some will answer with a wealth of information in many paragraphs. These questions accommodate both styles and cover the basic information that the developers need to know.

A question that must be in every survey is:

  • What device did you use to perform the task? Please include device model, operating system, and browser. 

You may want to ask specific questions about the app design, but I recommend asking specific questions only if the designers and development team are truly concerned about the specifics. The survey should not have a long list of app specifics; unmoderated usability tests do not have the observational tools or measurements to make reliable observations about app specifics.

You may consider having a couple of questions asking the participant to rate the app:  

  • Please rate how easy it was to perform the task, where 1 means very difficult and 5 means very easy. 
  • Please rate how efficient it was for you to perform the task, where 1 means very inefficient and 5 means very efficient.

The ratings can help determine the severity of the problems that the participant describes in the open-ended questions. Responses to the scales can also be used by the development team to gauge the completion of app development.
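
As a quick illustration of gauging completion from the scales, here is a small sketch that summarizes the 1-to-5 ease ratings; the ratings shown are invented example data.

    # Summarize 1-5 ease ratings from the survey (sketch; example data).
    from statistics import mean

    ratings = [4, 2, 5, 3, 1, 2]  # hypothetical: one ease rating per participant

    average = mean(ratings)
    struggled = sum(1 for r in ratings if r <= 2)  # count the low ratings
    print(f"mean ease: {average:.1f}/5; "
          f"{struggled} of {len(ratings)} participants rated 2 or lower")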

Acquiring Participant Email List

Typically, the development team acquires a participant email list from the client. Although clients are generally domain experts, most clients have little understanding of user-centered design. Consequently, the client will need guidance:

  • Identifying the correct user types
  • Ensuring appropriate participants

The development team should identify the user types. For example, a crowd-sourced citizen science app generally has two types of users: citizens making observations and scientists using the data. If the observational part of the app is to be tested, then naturally the participants should represent the citizens actually using the app. Even if the client understands the user types needed for the usability test, they will typically generate a participant list of people who are not actual users of the app. For example, in the crowd-sourced citizen science app, the client may suggest that other scientists or supervisors participate in the test instead of actual citizens. The scientists and supervisors are called user surrogates. Sometimes it is not possible to avoid user surrogates, but clients seldom understand what makes a good user surrogate. Scientists may be appropriate surrogates if they have worked with the citizens making the observations, but if the scientists have never met the citizens, they will not be good surrogates.

Emailing Participants

A decision to make is who should send the email requesting participation in the usability testing. Either the client or the development team can send it. The advantage of having the client send the email is that the recipients are generally acquainted with the client, and the client can appeal to the recipients to participate. But the development team will lose control over the process: any direct reply to the email will go to the client and not to the development team. If the development team sends the email request, they can control the timing of the testing and send reminders to participants to perform the test. The best compromise might be to have the client make an email introduction of the development team to the potential participants.

The Email

Although the email requesting participation will be longer than an ordinary email, it should be as short as possible. Good structure and formatting of the email can help the potential participant read and follow instructions. 

A typical outline for the email:

  1. Thank the participants for volunteering 
  2. Quick explanation of the reasons for the usability test
  3. Ask for consent
  4. General description of the process, i.e. performing tasks and answering a survey
  5. User manual
  6. Scenario/Task instruction
  7. Survey
  8. Encourage replies to this email, if the participant wishes
  9. Thank the participant

The email can be kept short by putting the user manual and task instructions in attachments and linking to the survey. Attachments have the additional benefit that participants can print the user manual and task instructions for reference while performing the tasks. This is particularly helpful if the usability test is to be performed on a phone.
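
If the development team sends the emails, the mailing can be scripted. Below is a minimal sketch using Python's standard library; the addresses, SMTP server, attachment file name, and survey link are placeholders, not real values.

    # Send the participation request with the user manual attached
    # (sketch; server, addresses, and file names are placeholders).
    import smtplib
    from email.message import EmailMessage

    participants = ["participant@example.com"]  # list acquired from the client

    with open("user_manual.pdf", "rb") as f:
        manual = f.read()

    with smtplib.SMTP("smtp.example.com") as server:
        for address in participants:
            msg = EmailMessage()
            msg["Subject"] = "Usability test: thank you for volunteering"
            msg["From"] = "team@example.com"
            msg["To"] = address
            msg.set_content(
                "Thank you for volunteering to test our app.\n"
                "The user manual and task instructions are attached.\n"
                "Survey: https://forms.gle/EXAMPLE\n"
            )
            # Attach the manual so participants can print it for reference.
            msg.add_attachment(manual, maintype="application",
                               subtype="pdf", filename="user_manual.pdf")
            server.send_message(msg)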

Offer participants the option of responding to the usability test by replying directly to the email. Participants may have difficulties with the survey, or the survey may not ask the questions that they want to address. A direct reply is an alternative for these participants.

Collect Participant Responses

Ideally, all the participants respond by answering the Google Form survey, but that is seldom the case. Typically, some participants answer the survey and other participants reply to the email. Consequently, the responses need to be collected into a single document. Even if all the participants respond by answering the survey, some participants will have many ideas in their answers. The participant answers need to be parsed into a list of single ideas or issues with using the app.
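
Collecting the two response channels into one document can be partly scripted. Here is a sketch, assuming the Google Form responses were exported as a CSV file and the email replies were copied by hand into a tab-separated text file; the file names and column headers are hypothetical.

    # Merge Google Form responses (CSV export) with copied email replies
    # into one list for parsing (sketch; file and column names assumed).
    import csv

    responses = []  # (participant, answer) pairs

    with open("form_responses.csv", newline="") as f:
        for row in csv.DictReader(f):
            responses.append((row["Email Address"],
                              row["What difficulties did you have?"]))

    # Email replies copied by hand, one "address<TAB>answer" per line.
    with open("email_replies.txt") as f:
        for line in f:
            address, answer = line.rstrip("\n").split("\t", 1)
            responses.append((address, answer))

    print(f"collected {len(responses)} responses")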

Analysis

After the participants' responses have been collected and parsed, they can be analyzed. Because many web app issues are browser and device dependent, the participant's device, operating system, and browser must be associated with each response.

Generally participants will report four categories of problems or ideas:

  • Bugs
  • Missing or additional features
  • Usability problems
  • User errors

The above order is the common frequency of the categories. Participants are very good at finding bugs. If the participants like the app, they are eager to recommend new features. Participants seldom recognize usability problems; if they do express them, they may be expressed as user errors, but a reported user error is usually really a usability problem. Sometimes, though, a described issue may be a genuine user error because the participant did not understand either the app or the scenario.

The analysis process is for one or two development team members to categorize the participant responses into one of the four categories above. After categorizing, each participant’s response should be ranked. A ranking I like to use:

  1. Severe
  2. Minor
  3. Enhancement
  4. Wrong

Severe represents problems that cause failure of the tasks, while minor represents problems that have a workaround or whose failure affects only a small aspect of the task. Enhancements can be new features or styling changes; implementing them can improve the usability of the app. Wrong represents ideas about the app that participants have because they misunderstand the app's goals. User errors should be carefully considered to determine whether they should be categorized as wrong or as usability problems ranked severe or minor.

After all the issues, problems, and participants' responses have been categorized and ranked, the development team can write user stories and decide on the priority of the user stories.
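
As a sketch of the bookkeeping for this step: each parsed response is recorded with its category and rank, then sorted so severe problems surface first when writing user stories. The example issues below are invented.

    # Record each parsed response with a category and rank, then sort so
    # severe problems surface first when writing user stories (sketch).
    from dataclasses import dataclass

    RANK_ORDER = {"severe": 1, "minor": 2, "enhancement": 3, "wrong": 4}

    @dataclass
    class Issue:
        participant: str
        text: str
        category: str  # bug, feature, usability, or user error
        rank: str      # key into RANK_ORDER

    issues = [
        Issue("p01", "Submit button does nothing on Safari", "bug", "severe"),
        Issue("p02", "Add a map view of observations", "feature", "enhancement"),
        Issue("p03", "Could not find the login link", "usability", "minor"),
    ]

    for issue in sorted(issues, key=lambda i: RANK_ORDER[i.rank]):
        print(f"[{issue.rank}] ({issue.category}) {issue.text}")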

Zoom Usability Testing

Video conferencing software can be used to conduct moderated remote usability tests. Features that the video conferencing software should have:

  • Screen sharing
  • Multiple attendee capability
  • Video recording
  • Host control of audio and video muting
  • Polling or questionnaires

The list of features is ordered by need. Zoom has these features and is free for Michigan Tech faculty and students, so this section assumes that Zoom is used for remote usability testing. Using Zoom to conduct a usability test remotely is very similar to conducting a usability test in the lab, but there are differences, advantages, and disadvantages:

In a lab
  • Advantages: more observations; more context control
  • Disadvantages: requires travel; more setup

Video conference
  • Advantages: convenience; automatic video and audio recording; better interaction observation
  • Disadvantages: fewer observations; less control of testing; takes longer; participant must have a laptop

Using Zoom for usability testing allows fewer observations than testing in a lab. Although the participant's face can be seen in a video call, only one view is possible. In addition, the test administrator does not have control over the video camera's orientation; rather, the test administrator has to direct the participant to orient the camera. In general, the test administrator has less control over the testing environment. The participants choose where to conduct the usability test, although the test administrator can make suggestions. Using Zoom, the usability test will take longer, and the flow of the testing will not be as natural as face-to-face testing in the lab.

Using Zoom, the usability test can be recorded automatically, and the test administrator can use the recording to determine the exact interactions and timing. But the test administrator will need to instruct the participant to share the screen. All these instructions will cause the usability test to take longer, and the usability test plan must anticipate the increased time required.

Finally, to use Zoom, the participant must have a laptop or smartphone. If testing is performed on a mobile device, there is less control of the video, and the video may be inadequate. Also, screen sharing will not give as much information because there will be no cursor. Consequently, the test administrator may choose to have the participant use a laptop and direct the participant to use the browser's developer tools to simulate a mobile device. The test administrator will have to instruct the participant on how to use the developer tools.

Observations

The potential observations that can be made using video conferencing software:

  • Facial expressions from video recordings
  • Vocalizations from audio recordings
  • Screen sharing recordings
  • App logging
  • Interviews 
  • Surveys

Using Zoom, facial expressions can be observed, but probably not body language. The test administrator should make effective use of the audio recording by asking the participant to "think aloud". The screen sharing recording is a very effective observation that can be used to determine:

  • exact interaction
  • interaction time 
  • user error

Reviewing the screen sharing recording, the test administrator can determine the exact widget that the participant interacted with. Interaction time can be determined exactly; the analysis does not need to count frames in the video but can use a stopwatch. In the case of a user error or a bug, the screen sharing recording can be reviewed to determine the exact cause of the user error or the app state prior to the appearance of the bug.
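
The timing itself is simple arithmetic on the recording's playback positions. A trivial sketch, with invented timestamps:

    # Compute task time from positions noted in the recording (sketch).
    def to_seconds(timestamp: str) -> int:
        """Convert an "mm:ss" video position to seconds."""
        minutes, seconds = timestamp.split(":")
        return int(minutes) * 60 + int(seconds)

    task_time = to_seconds("12:47") - to_seconds("10:05")  # hypothetical positions
    print(f"task took {task_time} seconds")                # 162 seconds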

Surveys or questionnaires can be used with usability testing over Zoom. Although Zoom has a polling feature that can be used like a multiple-choice questionnaire, I do not recommend using it. Questions are limited to multiple choice, and the meeting records have to be downloaded and parsed in order to recover the participants' responses. A better alternative is to direct the participant to a Google Form, but then the participant will need the link to the form and must open another browser tab. The Zoom chat panel can be used to communicate the link.

Remote Usability Testing Process  

The Zoom usability testing process is similar to the face-to-face testing in the lab with obvious modifications:

  1. Email the participant the Zoom connection information
  2. Start the Zoom meeting before the participant arrives
  3. Greet the participant: purpose of the test and verbal consent
  4. Explain and/or demo the app
  5. Read the scenario. Ask for "think aloud".
  6. Instruct the participant to share their screen.
  7. Share the URL via chat
  8. Participant performs the task
  9. Repeat 5–8 for each scenario.
  10. Interview the participant
  11. Participant fills out the questionnaire
  12. Thank the participant

Although the process is very similar, the subtle differences are important.

Email Participant

Email the participant a day or two ahead of the testing appointment. Realize that when you email your participants for the first time, they do not know you. Your participants may think that the email is a scam, especially when it requests their address or gives a Zoom link. You should start your email by describing who you are and the purpose of the email. Realize that your first email to your participant will have no context, so explain everything. I don't believe the email needs to be long, but everything needs to be explained.

The email should give a brief and general explanation of the testing process, explaining that the testing will be conducted using Zoom to record the participant performing tasks on an app. Participants need to clearly understand that they will be video recorded and will share their screen. In addition, the email should specify the hardware requirements and ask the participant to verify the hardware. Finally, ask the participant to reply, verifying the hardware and their willingness to be video recorded and to share their screen. A typical email outline:

  1. Introduction and purpose of the email
  2. Brief description of the process: Zoom will record the participant performing tasks using a web app
  3. Explanation of the video recording and screen sharing
  4. Minimum hardware requirements: laptop and/or smartphone
  5. Request for a reply verifying the hardware and willingness to be video recorded and to share the screen
  6. Zoom connection information

Note that some laptops may not have functioning video cameras, so you may not want to insist that a working video camera is necessary.  The recording will still have audio and the screen sharing will still work.

You can ask your participant to test their hardware and practice joining a Zoom meeting by visiting:

https://zoom.us/test

A final comment about using Zoom for observing tasks: participants can join the meeting using the web client. The web client's image quality is good, but it is slower than the downloaded Zoom client. This is particularly noticeable when tracking the cursor, so for usability testing the participant should use the downloaded Zoom client.

Prepare Zoom Meeting

The test administrator must start the Zoom meeting before the participant arrives. All the team members observing the testing should join the meeting before the participant. Their video and audio should be muted. Check that recording is on. 

Greet Participant

As in a face-to-face usability test in the lab, the participant should be greeted by thanking them for arriving and explaining the purpose of the testing. It is important that participants understand  that it is the app that is being tested, not them. 

Explain that you will use verbal consent, and read the consent form to them. Ask them if they give consent. Be sure to explain that the testing will be recorded and that their screen will be shared, but that the recordings will remain confidential. If the participant does not consent to video recording, instruct them that they can mute their video. Also instruct the participant to clear their desktop of any opened windows or applications.

App Demo

If the usability test plan includes explaining how to use the app or demonstrating the app, one of the development team members can share their screen to give the demonstration; their mic should be unmuted. Demonstrating the app by screen sharing also shows the participant what the test administrator will see while they are performing tasks on the app.

After the app is demonstrated, the demonstrator will need to stop screen sharing and mute their mic. 

Read Scenario

Reading the scenario during Zoom testing is similar to face-to-face testing in the lab, but the test administrator should confirm the participants’ attention by having them answer a question. 

After reading the scenario, the test administrator asks the participant to "think aloud" while performing the task.

Screen Sharing & Preparing the Browser

Screen sharing needs to be set up. The test administrator should first ask the participant whether they have multiple monitors or a single screen, because screen sharing is more precarious with multiple monitors. Using Zoom, if the participant moves the window to another monitor, the test administrator will see a frozen image if the screen is being shared, or no image if only the browser window is being shared. Setting up screen sharing and preparing the browser window is easier on single-screen devices. I prefer that the participant share the screen rather than the browser window because it can be done before the browser is open. While the screen is shared, the test administrator can monitor and assist with opening the browser and navigating to the website. Explaining to participants how to share their screen can be particularly hard because the toolbar showing the "green square share" icon disappears if the mouse cursor is not in the Zoom window.

After the screen sharing is set up, the test administrator should ask the participant to open the browser and should share the URL via Zoom's chat. Verify that the participant has the chat open and has copied the URL. Once the browser has navigated to the website, the participant can be instructed to resize and position the browser window depending on the testing specifics.

Interviewing Participant

I recommend interviewing the participant immediately after they perform the tasks, while their impressions are fresh. Interviewing can occur while the participant is still sharing their screen. The test administrator will probably have to explain how to stop sharing the screen anyway, so interviewing during screen sharing may be more convenient.

Questionnaire

During remote testing, a paper questionnaire cannot be used. Although Zoom has polling capabilities that can record responses to multiple-choice questions, I recommend using a Google Form. The link to the form can be shared via Zoom chat; again, participants might need instructions on how to open the chat window. The test administrator should explain to the participants that the responses will not be reviewed until after the testing is completed.

While the participant is filling out the form, the test administrator and test observers should remain quiet and mute their video. Recall that the participant may feel acutely observed, especially after performing the scenarios. 

Testing Mobile Apps

There are four alternatives for Zoom usability testing of mobile apps:

  • Use desktop with possibly smaller browser window for the app
  • Use developer tools to simulate the mobile device
  • Connect to Zoom using the mobile device
  • Connect to Zoom using the desktop and use a mobile device to perform tasks

There are advantages and disadvantages to using either the desktop or the mobile device for the task scenarios. If the participant uses the desktop, the participant will be using the mouse or trackpad rather than their fingers, but the cursor movements will be recorded. If the participant uses the mobile device to connect to Zoom and perform the task scenarios, the participant will use their fingers on the actual device that users are expected to use. Although the screen can be shared from the mobile device, the participant's finger will not be tracked, and you will not be sure what the participant tapped on. The think-aloud protocol can alleviate some of the unknowns, but the participant will not precisely describe their interactions. The decision for designing the usability test is to determine what observations are most critical for the evaluation:

  • precise observation of the interaction
  • precise simulation of the experience for the user

If the cognitive tasks of interacting with or navigating the app are important, then the participant should use the desktop to perform the task scenarios. If the physical interaction aspects are more important, e.g. whether icons are properly located and sized, then the participant should connect to Zoom using their mobile device and perform the scenarios on their device.

Note that participants connecting to Zoom using the desktop and using their mobile device to perform the tasks is probably the worst possible choice. The test administrator will need to interview the participant immediately after completing the scenario and ask them to recall the task. Using a second device to perform a scenario may be a good choice when combined with performing scenarios on the desktop.  

References 

Remote Usability Testing

Remote Testing on usability.gov
https://www.usability.gov/how-to-and-tools/methods/remote-testing.html

Remote Usability Tests: Moderated and Unmoderated on Nielsen Norman Group
https://www.nngroup.com/articles/remote-usability-tests/

Unmoderated, Remote Usability Testing: Good or Evil?
By Kyle Soucy
https://www.uxmatters.com/mt/archives/2010/01/unmoderated-remote-usability-testing-good-or-evil.php

Remote usability testing tools: A guide (and infographic) for moderated research.
By Prototypr.io
https://blog.prototypr.io/remote-usability-testing-tools-3fdc0c18053e

Zoom

Daily Webinars 
https://livetraining.zoom.us/webinar/register/1415808540529/WN_5B1e6RECQyuMtbo_DeENOA?timezone_id=America%2FNew_York

Recorded Webinar
https://livetraining.zoom.us/rec/play/vJR5d-j5q283HNGcsgSDV_5wW9Tpe_qs0CVM_PIEmknkVnAAYVGlZuMaZ7NAORKJB_fFeWJ9L4d5ilrn?continueMode=true

Support
https://support.zoom.us/hc/en-us

Testing 
https://zoom.us/test