Preface  xv
Acknowledgments  xix
A Special Note From Cheryl Tullis Sirois  xxi
Biographies  xxiii
Chapter 1 …  1
  1.1 What Is User Experience?  4
  1.2 What Are User Experience Metrics?  8
  1.3 The Value of UX Metrics  10
  1.4 …  11
  1.5 New Technologies in UX Metrics  12
  1.6 Ten Myths About UX Metrics  13
    Myth 1 Metrics Take Too Much Time to Collect  13
    Myth 2 UX Metrics Cost Too Much Money  14
    Myth 3 UX Metrics Are Not Useful When Focusing on Small Improvements  14
    Myth 4 UX Metrics Don't Help Us Understand Causes  14
    Myth 5 UX Metrics Are Too Noisy  14
    Myth 6 You Can Just Trust Your Gut  15
    Myth 7 Metrics Don't Apply to New Products  15
    Myth 8 No Metrics Exist for the Type of Issues We Are Dealing With  15
    Myth 9 Metrics Are Not Understood or Appreciated by Management  16
    Myth 10 It's Difficult to Collect Reliable Data With a Small Sample Size  16
|
Chapter 2 …  17
  2.1 Independent and Dependent Variables  18
  2.2 …  18
    2.2.1 …  19
    2.2.2 …  19
    2.2.3 …  21
    2.2.4 …  22
  2.3 Descriptive Statistics  22
    2.3.1 Measures of Central Tendency  22
    2.3.2 Measures of Variability  24
    2.3.3 Confidence Intervals  25
    2.3.4 Displaying Confidence Intervals as Error Bars  27
  2.4 …  29
    2.4.1 Independent Samples  29
    2.4.2 …  30
    2.4.3 Comparing More Than Two Samples  32
  2.5 Relationships Between Variables  33
    2.5.1 …  33
  2.6 …  34
    2.6.1 The Chi-Square Test  35
  2.7 Presenting Your Data Graphically  36
    2.7.1 Column or Bar Graphs  37
    2.7.2 …  39
    2.7.3 …  40
    2.7.4 Pie or Donut Charts  42
    2.7.5 …  43
  2.8 …  45
Chapter 3 …  47
  3.1 …  48
    3.1.1 Formative User Research  48
    3.1.2 Summative User Research  49
  3.2 …  50
    3.2.1 …  50
    3.2.2 …  50
    3.2.3 …  51
  3.3 …  51
  3.4 Choosing the Right UX Metrics  53
    3.4.1 Completing an eCommerce Transaction  53
    3.4.2 …  55
    3.4.3 Evaluating Frequent Use of the Same Product  55
    3.4.4 Evaluating Navigation and/or Information Architecture  56
    3.4.5 Increasing Awareness  56
    3.4.6 …  57
    3.4.7 Maximizing Usability for a Critical Product  58
    3.4.8 Creating an Overall Positive User Experience  59
    3.4.9 Evaluating the Impact of Subtle Changes  59
    3.4.10 Comparing Alternative Designs  60
  3.5 User Research Methods and Tools  60
    3.5.1 Traditional (Moderated) Usability Tests  61
    3.5.2 Unmoderated Usability Tests  62
    3.5.3 …  63
    3.5.4 Information Architecture Tools  64
    3.5.5 Click and Mouse Tools  64
  3.6 …  64
    3.6.1 Budgets and Timelines  64
    3.6.2 …  66
    3.6.3 …  67
    3.6.4 …  68
  3.7 …  69
Chapter 4 Performance Metrics  71
  4.1 …  73
    4.1.1 …  74
    4.1.2 …  78
    4.1.3 Issues in Measuring Success  81
  4.2 …  82
    4.2.1 Importance of Measuring Time-on-Task  83
    4.2.2 How to Collect and Measure Time-on-Task  83
    4.2.3 Analyzing and Presenting Time-on-Task Data  86
    4.2.4 Issues to Consider When Using Time Data  89
  4.3 …  91
    4.3.1 When to Measure Errors  91
    4.3.2 What Constitutes an Error?  92
    4.3.3 Collecting and Measuring Errors  92
    4.3.4 Analyzing and Presenting Errors  93
    4.3.5 Issues to Consider When Using Error Metrics  96
  4.4 Other Efficiency Metrics  96
    4.4.1 Collecting and Measuring Efficiency  97
    4.4.2 Analyzing and Presenting Efficiency Data  98
    4.4.3 Efficiency as a Combination of Task Success and Time  100
  4.5 …  101
    4.5.1 Collecting and Measuring Learnability Data  103
    4.5.2 Analyzing and Presenting Learnability Data  104
    4.5.3 Issues to Consider When Measuring Learnability  106
  4.6 …  106
Chapter 5 Self-Reported Metrics  109
  5.1 Importance of Self-Reported Data  111
  5.2 …  111
    5.2.1 …  111
    5.2.2 Semantic Differential Scales  112
    5.2.3 When to Collect Self-Reported Data  113
    5.2.4 How to Collect Ratings  113
    5.2.5 Biases in Collecting Self-Reported Data  114
    5.2.6 General Guidelines for Rating Scales  115
    5.2.7 Analyzing Rating-Scale Data  116
  5.3 …  120
    5.3.1 …  120
    5.3.2 After-Scenario Questionnaire  120
    5.3.3 Expectation Measure  121
    5.3.4 A Comparison of Post-Task Self-Reported Metrics  122
  5.4 Overall User Experience Ratings  124
    5.4.1 System Usability Scale  125
    5.4.2 Computer System Usability Questionnaire  128
    5.4.3 Product Reaction Cards  129
    5.4.4 User Experience Questionnaire  129
    5.4.5 …  131
    5.4.6 …  133
    5.4.7 Additional Tools for Measuring Self-Reported User Experience  134
    5.4.8 A Comparison of Selected Overall Self-Reported Metrics  136
  5.5 Using SUS to Compare Designs  138
  5.6 …  139
    5.6.1 Website Analysis and Measurement Inventory  139
    5.6.2 American Customer Satisfaction Index  140
    5.6.3 …  140
    5.6.4 Issues With Live-Site Surveys  141
  5.7 Other Types of Self-Reported Metrics  141
    5.7.1 Assessing Attribute Priorities  142
    5.7.2 Assessing Specific Attributes  143
    5.7.3 Assessing Specific Elements  145
    5.7.4 Open-Ended Questions  145
    5.7.5 Awareness and Comprehension  148
    5.7.6 Awareness and Usefulness Gaps  150
  5.8 …  150
Chapter 6 Issues-Based Metrics  153
  6.1 What Is a Usability Issue?  154
    6.1.1 Real Issues Versus False Issues  155
  6.2 How to Identify an Issue  156
    6.2.1 Using Think-Aloud From One-on-One Studies  158
    6.2.2 Using Verbatim Comments From Automated Studies  159
    6.2.3 Using Web Analytics  160
    6.2.4 …  160
  6.3 …  161
    6.3.1 Severity Ratings Based on the User Experience  161
    6.3.2 Severity Ratings Based on a Combination of Factors  163
    6.3.3 Using a Severity Rating System  164
    6.3.4 Some Caveats About Rating Systems  164
  6.4 Analyzing and Reporting Metrics for Usability Issues  165
    6.4.1 Frequency of Unique Issues  165
    6.4.2 Frequency of Issues per Participant  166
    6.4.3 Percentage of Participants  167
    6.4.4 …  167
    6.4.5 …  169
  6.5 Consistency in Identifying Usability Issues  169
  6.6 Bias in Identifying Usability Issues  170
  6.7 Number of Participants  172
    6.7.1 Five Participants Is Enough  172
    6.7.2 Five Participants Is Not Enough  173
    6.7.3 …  174
    6.7.4 …  175
  6.8 …  175
|
Chapter 7 …  177
  7.1 How Eye Tracking Works  178
  7.2 …  180
    7.2.1 Measuring Glanceability  181
    7.2.2 Understanding Mobile Users in Context  182
    7.2.3 Mobile Eye Tracking Technology  183
    7.2.4 …  183
    7.2.5 …  183
    7.2.6 Software-Based Eye Tracking  185
  7.3 Visualizing Eye Tracking Data  186
  7.4 …  187
  7.5 Common Eye Tracking Metrics  189
    7.5.1 …  189
    7.5.2 Number of Fixations  190
    7.5.3 …  190
    7.5.4 …  190
    7.5.5 Time to First Fixation  190
    7.5.6 …  191
    7.5.7 …  191
  7.6 Tips for Analyzing Eye Tracking Data  191
  7.7 …  192
  7.8 …  193
Chapter 8 Measuring Emotion  195
  8.1 Defining the Emotional User Experience  196
  8.2 Methods to Measure Emotions  199
    8.2.1 Five Challenges in Measuring Emotions  200
  8.3 Measuring Emotions Through Verbal Expressions  202
  8.4 …  203
  8.5 Facial Expression Analysis  206
  8.6 Galvanic Skin Response  210
  8.7 Case Study: The Value of Biometrics  212
  8.8 …  215
Chapter 9 Combined and Comparative Metrics  217
  9.1 …  217
    9.1.1 Combining Metrics Based on Target Goals  218
    9.1.2 Combining Metrics Based on Percentages  219
    9.1.3 Combining Metrics Based on Z-Scores  226
    9.1.4 Using SUM: Single Usability Metric  229
  9.2 UX Scorecards and Framework  231
    9.2.1 …  231
    9.2.2 …  236
  9.3 Comparison to Goals and Expert Performance  237
    9.3.1 Comparison to Goals  237
    9.3.2 Comparison to Expert Performance  239
  9.4 …  241
Chapter 10 Special Topics  243
  10.1 …  243
    10.1.1 Basic Web Analytics  244
    10.1.2 Click-Through Rates  247
    10.1.3 …  249
    10.1.4 …  250
  10.2 …  251
    10.2.1 Analyses of Open Card-Sort Data  252
    10.2.2 Analyses of Closed Card-Sort Data  258
  10.3 …  260
  10.4 …  265
  10.5 Accessibility Metrics  267
  10.6 Return-on-Investment Metrics  270
  10.7 …  274
|
Chapter 11 …  277
  11.1 Thinking Fast and Slow in the Netflix TV User Interface  278
    11.1.1 …  278
    11.1.2 …  279
    11.1.3 …  281
    11.1.4 …  283
    11.1.5 …  283
  11.2 Participate/Compete/Win (PCW) Framework: Evaluating Products and Features in the Marketplace  285
    11.2.1 …  285
    11.2.2 Outlining Objective Criteria  286
    11.2.3 …  287
    11.2.4 "PCW" (Summative) Usability Testing  289
  11.3 Enterprise UX Case Study: Uncovering the "UX Revenue Chain"  292
    …  292
    11.3.1 Metric Identification and Selection  293
    11.3.2 …  294
    11.3.3 …  298
    11.3.4 …  299
    11.3.5 …  302
  11.4 Competitive UX Benchmarking of Four Healthcare Websites  302
    11.4.1 …  303
    11.4.2 …  305
    11.4.3 Summary and Recommendations  310
    11.4.4 Acknowledgment and Contributions  311
    11.4.5 …  311
  11.5 Closing the SNAP Gap  312
    11.5.1 …  313
    11.5.2 …  314
    11.5.3 Application Questions  314
    11.5.4 …  316
    11.5.5 Testing Prototypes  317
    11.5.6 …  318
    11.5.7 …  318
  11.6 …  320
Chapter 12 Ten Keys to Success  321
  12.1 Make the Data Come Alive  321
  12.2 Don't Wait to Be Asked to Measure  323
  12.3 Measurement Is Less Expensive Than You Think  324
  12.4 …  325
  12.5 Benchmark Your Products  325
  12.6 …  326
  12.7 Speak the Language of Business  327
  12.8 Show Your Confidence  328
  12.9 Don't Misuse Metrics  329
  12.10 Simplify Your Presentation  329
References  333
Index  345