
LiveLike Mobile Application

Product Testing

About the Project

Synopsis

The LiveLike mobile audience engagement suite gives sports broadcasters the tools to activate their mobile audiences. We evaluated the usability of the suite's interactive widgets. The project was part of the academic syllabus for the HCIN730: Usability Testing class in Spring 2019.

Logistics

Team: 4 Graduate Students

Role: Team lead, user researcher and usability evaluator

Duration: Sept 2018 - Dec 2018


Product Understanding

Kick-Off Meeting

The kickoff meeting marked our first interaction with the client. The major topics covered were:

Client Expectations: Identify usability issues and hindrances in using the product.

Participant Profile: Audiences who engage with live sports streaming services (familiarity principle).

Project Scope: Qualitative (formative) usability testing to gather preference data about the product.

Context of use of the product

The mobile engagement suite integrates with a live streaming broadcast application, so we treated the streaming application's usage context as the product's usage context: recreational settings where viewers watch games alone, with friends, or in social environments.

User Profiles

We identified the target audience and their user profiles. The user profiles include sports fans looking for a social experience and non-sports fans looking for an interactive social experience.

Table 1: User Profiles

Project Planning
Heuristic Evaluation


External Consistency

Icons in the system did not follow industry conventions.

Internal Consistency

Similar functionalities in the system behaved differently.

User Control and Freedom

The system restricted actions in landscape mode.

Flexibility and Efficiency of Use

The system displayed different information using the same colors.

1.   External Consistency

The iPhone X and later models have a home indicator (red box in Figure 1) that supports both tapping and dragging. The LiveLike app has a very similar indicator (highlighted with a red box in Figure 2), which is used to switch between the split chat-video screen and the full video screen; it can be tapped but not dragged. Because the two indicators look alike but behave differently, they can confuse users while they use the app. Thus, the heuristic of external consistency between the iOS home indicator and the app is not met.

Figure 1: iPhone X home indicator, which supports tap and swipe interactions


Figure 2: LiveLike app indicator when the phone is in landscape mode.

2.   Internal Consistency

The LiveLike app supports polls in various formats. Among them, the image poll and the text poll function similarly, yet their layouts differ. The question of the image poll (red box in Figure 3) sits at the bottom of the quiz box, while the question of the text poll (red box in Figure 4) sits at the top. The design pattern (the location of the polling question) is therefore not used consistently, which fails the internal consistency usability heuristic.

Figure 3: Example of an image poll with the poll question below the options


Figure 4: Example of text poll with the poll question above the options.

3.   User Control and Freedom

iPhone users must tap an empty area of the chat panel (anywhere other than the keyboard) to dismiss the keyboard (Figure 5). In landscape mode, when the keyboard is open and a quiz box is displayed, the system limits users to interacting only with the quiz box or the chat box; there is no way to minimize the keyboard or dismiss the quiz box. Restricting users to such a limited set of actions means the user control and freedom heuristic is not met.


Figure 5: Landscape mode with keypad open and instance when a quiz box pops up.

4.   Flexibility and Efficiency of Use

A quiz poll is a trivia question that reveals the correct answer at the end and tells viewers whether the answer they chose was correct. The system assumes viewers will read green as the right answer and red as the wrong answer (Figure 6). Three different meanings are conveyed through the solution box, with no context to explain what the colors or the bars mean. This makes the solution box difficult to interpret.

Figure 6: Solution box interpretation: a red border marks the option the user chose, the slider shows the percentage of viewers who chose that option, red indicates the selected answer was incorrect, and green marks the correct answer.


Usability Test Planning

System Aspects

We identified the system components to be evaluated and designed research questions around them.

Date Range

The test was conducted and the report was presented within a span of three weeks, from April 10 to May 1, 2019.

Test Environment

The test was conducted in a usability lab with an observation room and a test room.

Recruitment

Based on the target demographics of the product, participants were recruited with the help of flyers and screening surveys.

System Aspects

In this evaluation, 5 aspects of the LiveLike app were tested:

  1. Chat screen (contains the ‘LIVE’ button) (Figure 7.g)  

  2. Quiz box (Figure 7.d)

    1. Cheer meter Quiz Box 

    2. Image Quiz / Text Quiz 

  3. Custom stickers (Figure 7.e)

  4. Transition modes - Fullscreen / Landscape / Portrait (Figure 7.a, b, c)

  5. The overall experience of the app.

The participants engaged with these components through tasks we devised around the research questions for the three main components of the system (chat screen, transition modes, and quiz boxes).


Figure 7: System Components - (a)  Portrait mode, (b) Fullscreen mode, (c) Landscape mode, (d) Quiz Box, (e) Stickers, (f) Chat box, (g) Live button

Test Environment

The tests were conducted in the controlled environment of a usability testing lab. The lab consists of two rooms (an observation room and a test room) divided by a one-way mirror. Three recording devices were used:

  • The digital camera is to record hand gestures of the participant (See Figure 10.c),

  • The webcam is to record the facial expressions of the participants (See Figure 10.e),

  • The test mobile phone is screen-recorded to see the screen interaction of the participant (See Figure 10.d).

During a test, the participant and the test moderator sit in the test room, with the participant facing the computer (and its webcam). On the other side of the mirror, the facial-expression observer, the time/error-tracking observer, and the CMS operator sit in the observation room.


Figure 8: Test environment - Testing Room


Figure 9: Test environment - Observation Room


Figure 10: Usability Testing Lab Setup: (a) A computer for the CMS operator to send quiz questions; (b) A computer for observing participants' facial expressions; (c) A digital camera for recording hand gestures; (d) A test mobile phone with screen recording; (e) A webcam for recording facial expressions

Participant Recruitment

Based on the target demographics for the LiveLike app, we divided our test participants into two categories: sports followers and non-sports followers. We further split sports followers into two groups, hockey fans and other sports fans, since a hockey game was streamed in the LiveLike app during the tests; fans of the streamed sport (in this case, hockey) were expected to engage more with the app than fans of other sports. We included other sports fans for their experience with, and expectations of, live sports streaming applications.


Table 2: Participant Recruitment plan

Testing Goal 

Test objective: The purpose of this study is to gather and analyze qualitative and quantitative data about the interactions and usability of different components of the LiveLike app, to provide input toward improving the usability of those components.

Research questions: The research questions focus on different components of the LiveLike app.

  • Do the components meet the users' expectations?

  • Can users understand the meaning of the "LIVE" button (Figure 7.g) on the chat screen?

  • Can users understand, answer, and interact with the quiz boxes (Figure 7.d) successfully?

  • What is the users' preferred screen orientation (landscape/portrait) (Figure 7.a, c) when watching the game?

Test Setup

Tasks and scenarios

Each scenario is designed around the research goals, and the tasks aim to answer the research questions.

Task matrix

A within-subject test was designed, and the task sequence was mapped out with the help of Latin-square counterbalancing.

Test session

A test session was divided into segments, which helped the test moderator and the test observers stay in sync.

1.   Tasks and scenarios

For the usability test, three tasks were designed to test different components of the LiveLike app. Each task comprised a task explanation; the scenario in which users would encounter the component; the research question the task addresses; the roles of the team members during the task; entry and exit criteria; and the post-task questions participants answered after the task.

When testing the quiz box component, the team had to coordinate between the test moderator and the CMS operator, who would send a quiz box at runtime.


Figure 11: Example of a task distribution

2.   Task matrix

The task matrix maps out the sequence of tasks for every participant. We conducted usability testing with 8 participants in a within-subject design. Each participant completed all 3 tasks, which were independent of one another. The sequence of tasks was varied using the Latin-square counterbalancing technique to counteract learning (order) effects.


Table 3: Task Matrix for test participants
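The counterbalancing scheme above can be sketched in a few lines of code. This is a minimal illustration, not the team's actual tool, and the task labels are assumed stand-ins for the three study tasks:

```python
def latin_square_orders(tasks):
    """Cyclic Latin square: row i starts at task i, so every task
    appears exactly once in every ordinal position across rows."""
    n = len(tasks)
    return [[tasks[(i + j) % n] for j in range(n)] for i in range(n)]

# Assumed labels for the three tasks in this study.
tasks = ["Chat screen", "Quiz box", "Transition modes"]
orders = latin_square_orders(tasks)

# Assign 8 participants by cycling through the 3 orderings, so each
# task appears in each position a roughly equal number of times.
for p in range(8):
    print(f"P{p + 1}: {orders[p % len(orders)]}")
```

With 8 participants and only 3 orderings, the balance is approximate rather than perfect (two orderings are used three times and one twice), which is the usual compromise when the participant count is not a multiple of the square size.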

3.   Test session

Participants were recruited through screening surveys. Each usability test was scheduled for 30 minutes. Informed consent and a pre-test questionnaire were completed before the usability test began. The participant was then asked to perform the 3 tasks in the order given by the task matrix, completing a corresponding post-task questionnaire after each task. After finishing all the tasks, the participant filled out a post-test questionnaire. The test distribution is explained in detail in the table below.


Table 4:  Usability Test detail plan

Usability Testing

Emergent Flexibility

The test did not proceed exactly as planned, and we adapted as needed along the way. The biggest issue we faced was participant recruitment: due to limited reach and time, we changed the number of participants recruited from each demographic.

participants recruited

Table 5: Participant planned recruitment versus actual participants recruited.

Project Takeaway

Identifying the right tasks for the expected outcomes

It is very important to understand what is to be measured from each task. It can also be helpful to associate a hypothesis with each task.

Needs a lot of planning and preparation

While working on this project, I realized an immense amount of effort needs to go into designing a usability test. Sessions must be timed properly, and the moderator needs to be trained and ready for surprises.

Professionalism

I acted as the usability test moderator. This role taught me to be professional, to answer participants' questions carefully without leading them, and to multitask.
