LiveLike Mobile Application
Product Testing
About the Project
Synopsis
The LiveLike mobile audience engagement suite gives sports broadcasters the capability to activate their mobile audience. We evaluated the usability of the suite's interactive widgets. The project was part of the academic syllabus for the HCIN730: Usability Testing class in Spring 2019.
Logistics
Role: Team lead, user researcher and usability evaluator
Duration: Sept 2018 - Dec 2018
Product Understanding
Kick-Off Meeting
The kickoff meeting marked our first interaction with the client. The major topics covered were:
Client Expectations - Identify usability issues and hindrances in using the product.
Participant Profile - Audiences who engage with live-streamed sports services (familiarity principle).
Project Scope - Qualitative, formative usability testing to gather preference data about the product.
Context of use of the product
The mobile engagement suite integrates with a live-streaming broadcast application, so we treated the application's usage context as the product's usage context. This includes recreational environments: watching games alone, with friends, or in social settings.
User Profiles
We identified the target audience and their user profiles. The user profiles include sports fans looking for social experience and non-sports fans looking for an interactive social experience.
Table 1: User Profiles
Heuristic Evaluation
We evaluated the system on the iOS and Android platforms for heuristic and accessibility concerns derived from Jakob Nielsen's usability heuristics.
External Consistency
Icons in the system did not follow industry expectations
Internal Consistency
Similar functionalities in the system behaved differently
User Control and Freedom
The system restricted user actions in landscape mode
Flexibility and efficiency of use
The system displayed different information using the same colors
1. External Consistency
The iPhone X and later versions have a home indicator (red box in Figure 1) that supports tapping and dragging. The LiveLike app has a visually similar indicator (red box in Figure 2), used to switch between the split chat-video screen and the full video screen; it can be tapped but not dragged. Because these indicators are visually similar but functionally different, they can confuse users. Thus, the heuristic of external consistency between the iOS home indicator and the app is not met.
Figure 1: iPhone X home indicator, which supports tap and swipe interactions.
Figure 2: LiveLike app indicator when the phone is in landscape mode.
2. Internal Consistency
The LiveLike app supports polls in various formats. Among them, the image poll and the text poll function similarly; however, their layouts differ. The question of the image poll (red box in Figure 3) is at the bottom of the quiz box, while the question of the text poll (red box in Figure 4) is at the top. Thus, the design pattern (location of the polling question) is not applied consistently, and the internal consistency usability heuristic is not met.
Figure 3: Example of an image poll with the poll question below the options.
Figure 4: Example of text poll with the poll question above the options.
3. User Control and Freedom
iPhone users have to tap an empty area of the chat panel (anywhere other than the keyboard) to dismiss the keyboard (Figure 5). In landscape mode, when the keyboard is open and a quiz box is displayed, the system restricts users to interacting only with the quiz box or the chat box. There is no way to minimize the keyboard or dismiss the quiz box. Because users are restricted to this limited set of actions, the user control and freedom heuristic is not met.
Figure 5: Landscape mode with keypad open and instance when a quiz box pops up.
4. Flexibility and Efficiency of Use
A poll quiz is a trivia question that displays the correct answer at the end of the quiz and indicates to viewers whether the answer they chose was correct. The system assumes users will read green as the right answer and red as the wrong answer (Figure 6). Three different meanings are conveyed through the solution box, with no context to explain the colors or the blocks. This makes the solution box difficult to interpret.
Figure 6: Solution box interpretation: a red border around an option means it is the option the user chose, the red slider shows the percentage of viewers who chose that option, red indicates the selected answer was incorrect, and green indicates the correct answer.
Usability Test Planning
System Aspects
We identified the system components to be evaluated and designed research questions around them.
Date Range
The test was conducted and the report was presented within a span of 3 weeks - April 10, 2019 to May 1, 2019.
Test Environment
The test was conducted in a usability lab with an observation room and a test room.
Recruitment
Based on the target demographics of the product, participants were recruited with the help of flyers and screening surveys.
System Aspects
In this evaluation, five aspects of the LiveLike app were tested:
- Chat screen (contains the ‘LIVE’ button) (Figure 7.g)
- Quiz box (Figure 7.d)
  - Cheer meter quiz box
  - Image quiz / text quiz
- Custom stickers (Figure 7.e)
- Transition modes - fullscreen / landscape / portrait (Figure 7 a, b, c)
- The overall experience of the app
Participants were exposed to these components through tasks devised around the three research questions covering the three main components of the system (chat screen, transition modes, and quiz boxes).
Figure 7: System Components - (a) Portrait mode, (b) Fullscreen mode, (c) Landscape mode, (d) Quiz Box, (e) Stickers, (f) Chat box, (g) Live button
Test Environment
The lab setup included three recording sources:
- A digital camera to record the participant's hand gestures (see Figure 10.c)
- A webcam to record the participant's facial expressions (see Figure 10.e)
- The test mobile phone, screen-recorded to capture the participant's on-screen interactions (see Figure 10.d)
During the test, the participant and the test moderator sit in the test room, with the participant facing the test room computer (webcam). On the other side, the facial expression observer, the time/error tracking observer, and the CMS operator sit in the observation room.
Figure 8: Test environment - Testing Room
Figure 9: Test environment - Observation Room
Figure 10: Usability Testing Lab Setup: (a) A computer for the CMS operator to send quiz questions; (b) A computer for observing participants’ facial expression; (c) A digital camera for recording hand gestures; (d) A test mobile phone with screen recording; (e) A webcam for recording facial expressions
Participant Recruitment
Based on the target demographics for the LiveLike app, we divided our test participants into two categories: sports followers and non-sports followers. We further separated sports followers into two groups, hockey fans and other sports fans, since a hockey game was streamed in the LiveLike app during the tests; fans of the streamed sport (hockey, in this case) were expected to engage more with the app than fans of other sports. We included other sports fans for their experience with, and expectations of, live-streaming sports applications.
Table 2: Participant Recruitment plan
Testing Goal
Test objective: The purpose of this study is to gather and analyze qualitative and quantitative data about the interactions and usability of different components of the LiveLike app, to provide input towards improving their usability.
Research questions: The research questions focus on testing different components of the LiveLike app.
- Do the components meet the user’s expectations?
- Can users understand the meaning of the “LIVE” button (Figure 2.1.d) on the chat screen?
- Can users understand, answer, and interact with the quiz boxes (Figure 2.1.a) successfully?
- What is the user's preference of screen orientation (landscape/portrait) (Figure 2.2) when watching the game?
Tasks and scenarios
Each scenario is designed around the research goals, and the tasks aim to answer the research questions.
Task matrix
A within-subject test was designed, and the task sequence was mapped out with the help of Latin-square counterbalancing.
Test session
A test session was divided into segments, which helped the test moderator and the test observers stay in sync.
1. Tasks and scenarios
For the usability test, three tasks were designed to test different components of the LiveLike app. Each task comprised a task explanation, the scenario in which the component would be used, the research question the task addresses, the roles of the team members during the task, the entry and exit criteria, and the post-task questions to be answered by participants after the task.
While testing the quiz box component, the team had to maintain coordination between the test moderator and the CMS operator, who would send a quiz box at runtime.
Figure 11: Example of a task distribution
2. Task matrix
The task matrix maps out the sequence of tasks for every participant. We conducted usability testing with 8 participants in a within-subject design; each participant completed all 3 tasks. The tasks were independent of each other, and their sequence was varied using the Latin-square counterbalancing technique to avoid transfer of learning effects.
Table 3: Task Matrix for test participants
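The counterbalancing scheme above can be sketched in a few lines of code. This is a minimal illustration, not the exact matrix we used: the task labels, and the convention of cycling participants through the square's rows, are illustrative assumptions.

```python
def latin_square(n):
    """Build an n x n Latin square of cyclic shifts: every task
    appears exactly once in every ordinal position."""
    return [[(row + col) % n for col in range(n)] for row in range(n)]

def task_matrix(tasks, n_participants):
    """Assign each participant a task order by cycling through the
    rows of the Latin square."""
    square = latin_square(len(tasks))
    return [[tasks[i] for i in square[p % len(tasks)]]
            for p in range(n_participants)]

orders = task_matrix(["T1", "T2", "T3"], 8)
for p, order in enumerate(orders, start=1):
    print(f"P{p}: {' -> '.join(order)}")
# P1 runs T1 -> T2 -> T3, P2 runs T2 -> T3 -> T1, and so on.
```

With 3 tasks the square has 3 distinct orders, so 8 participants cannot be split evenly across them; this is why the row assignment simply wraps around.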
3. Test session
Participants were recruited through screening surveys. Each usability test was scheduled for 30 minutes. An informed consent form and a pre-test questionnaire were completed before the test began. The participant then performed the 3 tasks according to the task matrix, completing a corresponding post-task questionnaire after each task. Finally, after completing all the tasks, the participant filled out a post-test questionnaire. The test distribution is explained in detail in the table below.
Table 4: Usability Test detail plan
Usability Testing
Emergent Flexibility
The test did not proceed entirely as planned, and we accommodated changes as required. The biggest issue was participant recruitment: due to limited reach and time, we changed the number of participants recruited from each demographic.
Table 5: Participant planned recruitment versus actual participants recruited.
Data Collected
The usability tests were evaluated based on the success rate of the tasks, qualitative data collected from open-ended questions in the various questionnaires, participant videos recorded during the tests, and responses to the System Usability Scale (SUS) questions. Together, this data helped us provide informed recommendations to the LiveLike development team.
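For readers unfamiliar with SUS, the standard scoring rule converts ten 1-5 item responses into a 0-100 score. The sketch below shows that rule; the sample responses are illustrative, not data from this study.

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from one
    participant's 10 item responses, each on a 1-5 scale."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses):
        if i % 2 == 0:
            # Odd-numbered items (1st, 3rd, ...) are positively
            # worded: contribution is (response - 1).
            total += r - 1
        else:
            # Even-numbered items are negatively worded:
            # contribution is (5 - response).
            total += 5 - r
    # Sum of contributions (0-40) is scaled to 0-100.
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best case -> 100.0
```

A score around 68 is commonly cited as the average benchmark, which is why SUS results are usually reported against that reference point rather than as raw percentages.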
Project Takeaway
Identifying the right tasks for the expected outcomes
It is very important to understand what is to be measured by each task. It can also be helpful to associate a hypothesis with each task.
Needs a lot of planning and preparation
While working on this project, I realized that an immense amount of effort needs to go into designing a usability test. The tests should be timed properly, and the moderator needs to be trained and ready for surprises.
Professionalism
I acted as the usability test moderator. This role taught me professionalism, subtlety in answering participants' questions, and multitasking.