Preface  xv
Acknowledgments  xvii

Introduction  1
    Organization of This Book  2
    What Is Usability?  4
    Why Does Usability Matter?  5
    What Are Usability Metrics?  7
    The Value of Usability Metrics  8
    Ten Common Myths about Usability Metrics  10

Background  15
    Designing a Usability Study  15
        Selecting Participants  16
        Sample Size  17
        Within-Subjects or Between-Subjects Study  18
        Counterbalancing  19
        Independent and Dependent Variables  20
    Types of Data  20
        Nominal Data  20
        Ordinal Data  21
        Interval Data  22
        Ratio Data  23
        …  23
    Descriptive Statistics  24
        Measures of Central Tendency  25
        Measures of Variability  26
        Confidence Intervals  27
    Comparing Means  28
        Independent Samples  28
        Paired Samples  29
        Comparing More Than Two Samples  30
    Relationships between Variables  31
        Correlations  32
    Nonparametric Tests  33
        The Chi-Square Test  33
    Presenting Your Data Graphically  35
        Column or Bar Graphs  36
        Line Graphs  38
        Scatterplots  40
        Pie Charts  42
        Stacked Bar Graphs  42
    Summary  44

Planning a Usability Study  45
    Study Goals  45
        Formative Usability  45
        Summative Usability  46
    User Goals  47
        Performance  47
        Satisfaction  47
    Choosing the Right Metrics: Ten Types of Usability Studies  48
        Completing a Transaction  48
        Comparing Products  50
        Evaluating Frequent Use of the Same Product  50
        Evaluating Navigation and/or Information Architecture  51
        Increasing Awareness  52
        Problem Discovery  52
        Maximizing Usability for a Critical Product  53
        Creating an Overall Positive User Experience  54
        Evaluating the Impact of Subtle Changes  54
        Comparing Alternative Designs  55
    Other Study Details  55
        Budgets and Timelines  55
        Evaluation Methods  57
        Participants  58
        Data Collection  59
        Data Cleanup  60
    Summary  61

Performance Metrics  63
    Task Success  64
        Collecting Any Type of Success Metric  65
        Binary Success  66
        Levels of Success  69
        Issues in Measuring Success  73
    Time-on-Task  74
        Importance of Measuring Time-on-Task  74
        How to Collect and Measure Time-on-Task  74
        Analyzing and Presenting Time-on-Task Data  77
        Issues to Consider When Using Time Data  79
    Errors  81
        When to Measure Errors  81
        What Constitutes an Error?  82
        Collecting and Measuring Errors  83
        Analyzing and Presenting Errors  84
        Issues to Consider When Using Error Metrics  86
    Efficiency  87
        Collecting and Measuring Efficiency  87
        Analyzing and Presenting Efficiency Data  88
        Efficiency as a Combination of Task Success and Time  90
    Learnability  92
        Collecting and Measuring Learnability Data  93
        Analyzing and Presenting Learnability Data  94
        Issues to Consider When Measuring Learnability  96
    Summary  97

Issues-Based Metrics  99
    Identifying Usability Issues  99
    What Is a Usability Issue?  100
        Real Issues versus False Issues  101
    How to Identify an Issue  102
        In-Person Studies  103
        Automated Studies  103
        When Issues Begin and End  103
        …  104
        …  104
    Severity Ratings  105
        Severity Ratings Based on the User Experience  105
        Severity Ratings Based on a Combination of Factors  106
        Using a Severity Rating System  107
        Some Caveats about Severity Ratings  108
    Analyzing and Reporting Metrics for Usability Issues  108
        Frequency of Unique Issues  109
        Frequency of Issues per Participant  111
        Frequency of Participants  111
        Issues by Category  112
        Issues by Task  113
        Reporting Positive Issues  114
    Consistency in Identifying Usability Issues  114
    Bias in Identifying Usability Issues  116
    Number of Participants  117
        Five Participants Is Enough  118
        Five Participants Is Not Enough  119
        Our Recommendation  119
    Summary  121

Self-Reported Metrics  123
    Importance of Self-Reported Data  123
    Collecting Self-Reported Data  124
        Likert Scales  124
        Semantic Differential Scales  125
        When to Collect Self-Reported Data  125
        How to Collect Self-Reported Data  126
        Biases in Collecting Self-Reported Data  126
        General Guidelines for Rating Scales  127
        Analyzing Self-Reported Data  127
    Post-Task Ratings  128
        Ease of Use  128
        After-Scenario Questionnaire  129
        Expectation Measure  129
        Usability Magnitude Estimation  132
        Comparison of Post-Task Self-Reported Metrics  133
    Post-Session Ratings  135
        Aggregating Individual Task Ratings  137
        System Usability Scale  138
        Computer System Usability Questionnaire  139
        Questionnaire for User Interface Satisfaction  139
        Usefulness, Satisfaction, and Ease of Use Questionnaire  142
        Product Reaction Cards  142
        Comparison of Post-Session Self-Reported Metrics  144
    Using SUS to Compare Designs  147
        Comparison of "Senior-Friendly" Websites  147
        Comparison of Windows ME and Windows XP  147
        Comparison of Paper Ballots  148
    Online Services  150
        Website Analysis and Measurement Inventory  150
        American Customer Satisfaction Index  151
        OpinionLab  153
        Issues with Live-Site Surveys  157
    Other Types of Self-Reported Metrics  158
        Assessing Specific Attributes  158
        Assessing Specific Elements  161
        Open-Ended Questions  162
        Awareness and Comprehension  163
        Awareness and Usefulness Gaps  165
    Summary  166

Behavioral and Physiological Metrics  167
    Observing and Coding Overt Behaviors  167
        Verbal Behaviors  168
        Nonverbal Behaviors  169
    Behaviors Requiring Equipment to Capture  171
        Facial Expressions  171
        Eye-Tracking  175
        Pupillary Response  180
        Skin Conductance and Heart Rate  183
        Other Measures  186
    Summary  188

Combined and Comparative Metrics  191
    Single Usability Scores  191
        Combining Metrics Based on Target Goals  192
        Combining Metrics Based on Percentages  193
        Combining Metrics Based on z-Scores  198
        Using SUM: Single Usability Metric  202
    Usability Scorecards  203
    Comparison to Goals and Expert Performance  206
        Comparison to Goals  206
        Comparison to Expert Performance  208
    Summary  210

Special Topics  211
    Live Website Data  211
        Server Logs  211
        Click-Through Rates  213
        Drop-Off Rates  215
        A/B Studies  216
    Card-Sorting Data  217
        Analyses of Open Card-Sort Data  218
        Analyses of Closed Card-Sort Data  225
    Accessibility Data  227
    Return-on-Investment Data  231
    Six Sigma  234
    Summary  236

Case Studies  237
    Redesigning a Website Cheaply and Quickly  237
        Phase 1: Testing Competitor Websites  237
        Phase 2: Testing Three Different Design Concepts  239
        Phase 3: Testing a Single Design  243
        …  244
        …  244
    Usability Evaluation of a Speech Recognition IVR  244
        …  244
        Results: Task-Level Measurements  245
        …  246
        …  246
        …  247
        …  247
        Recommendations Based on Participant Behaviors and Comments  250
        …  251
        …  251
        …  252
    Redesign of the CDC.gov Website  252
        …  253
        …  253
        …  254
        …  255
        Wireframing and FirstClick Testing  256
        Final Prototype Testing (Prelaunch Test)  258
        …  261
        …  262
        …  262
    Usability Benchmarking: Mobile Music and Video  263
        Project Goals and Methods  263
        Qualitative and Quantitative Data  263
        …  263
        …  264
        Study Operations: Number of Respondents  264
        …  265
        …  265
        …  266
        …  266
        …  266
        …  266
        …  267
        …  267
        Summary Findings and SUM Metrics  267
        Data Manipulation and Visualization  267
        …  269
        Benchmark Changes and Future Work  270
        …  270
        …  270
    Measuring the Effects of Drug Label Design and Similarity on Pharmacists' Performance  271
        …  272
        …  272
        …  272
        …  275
        …  276
        …  277
        …  279
        …  279
    …  280
        OneStart: Indiana University's Enterprise Portal Project  280
        Designing and Conducting the Study  281
        Analyzing and Interpreting the Results  282
        Sharing the Findings and Recommendations  283
        …  286
        …  287
        …  287
        …  287
        …  287

Moving Forward  289
    Sell Usability and the Power of Metrics  289
    Start Small and Work Your Way Up  290
    Make Sure You Have the Time and Money  291
    …  292
    …  293
    …  294
    Speak the Language of Business  295
    …  295
    …  296
    Simplify Your Presentation  297

References  299
Index  307