
Measuring the User Experience: Collecting, Analyzing, and Presenting UX Metrics, 3rd edition [Paperback]

Thomas S. Tullis (Senior Vice President of User Experience, Fidelity Investments, USA), William Albert (Director, Design and Usability Center, Bentley University, USA)
  • Format: Paperback / softback, 384 pages, height x width: 235x191 mm, weight: 610 g
  • Series: Interactive Technologies
  • Publication date: 17-Mar-2022
  • Publisher: Morgan Kaufmann Publishers Inc
  • ISBN-10: 0128180803
  • ISBN-13: 9780128180808
Measuring the User Experience: Collecting, Analyzing, and Presenting UX Metrics, Third Edition provides the quantitative analysis training that students and professionals need. This book updates the first resource to focus on how to quantify the user experience. In this third edition, the authors have expanded the coverage of behavioral and physiological metrics, splitting that material into chapters on eye tracking and on measuring emotion. The book also contains new research and updated examples, several new case studies, and new examples using the most recent version of Excel.
  • Helps readers learn which metrics to select for every case, including behavioral, physiological, emotional, aesthetic, gestural, verbal and physical, as well as more specialized metrics such as eye-tracking and clickstream data
  • Provides a vendor-neutral examination on how to measure the user experience with websites, digital products, and virtually any other type of product or system
  • Contains new and in-depth global case studies that show how organizations have successfully used metrics, along with the information they revealed
  • Includes a companion site, www.measuringux.com, that has articles, tools, spreadsheets, presentations and other resources that help readers effectively measure user experience

Reviews

"UX metrics are important but can be intimidating. Tullis and Albert ride to the rescue with a generous dose of demystification spray. Based on vast practical experience, this book covers everything that researchers should know to start running good quant studies, striking the right balance between detail and approachability." -- Jakob Nielsen, PhD, Principal, Nielsen Norman Group

"Nowadays, there are so many really good books about UX that it can be hard to choose where to spend your precious time. Ill make it easy for you: Read this one. There are less than a dozen books on the Really Great UX Books shelf in my office, and this has been one of them since the first edition came out. Regardless of whether youre a quantitative person or a qualitative person, everyone whos involved in producing something people use or interact with (from UX/UI professionals, designers, and developers to marketing people, project/product managers and CEOs) should read this." -- Steve Krug, author of "Dont Make Me Think" and "Rocket Surgery Made Easy"

Table of Contents

Preface
Acknowledgments
A Special Note From Cheryl Tullis Sirois
Biographies

Chapter 1 Introduction
  1.1 What Is User Experience?
  1.2 What Are User Experience Metrics?
  1.3 The Value of UX Metrics
  1.4 Metrics for Everyone
  1.5 New Technologies in UX Metrics
  1.6 Ten Myths About UX Metrics
    Myth 1 Metrics Take Too Much Time to Collect
    Myth 2 UX Metrics Cost Too Much Money
    Myth 3 UX Metrics Are Not Useful When Focusing on Small Improvements
    Myth 4 UX Metrics Don't Help Us Understand Causes
    Myth 5 UX Metrics Are Too Noisy
    Myth 6 You Can Just Trust Your Gut
    Myth 7 Metrics Don't Apply to New Products
    Myth 8 No Metrics Exist for the Type of Issues We Are Dealing With
    Myth 9 Metrics Are Not Understood or Appreciated by Management
    Myth 10 It's Difficult to Collect Reliable Data With a Small Sample Size

Chapter 2 Background
  2.1 Independent and Dependent Variables
  2.2 Types of Data
    2.2.1 Nominal Data
    2.2.2 Ordinal Data
    2.2.3 Interval Data
    2.2.4 Ratio Data
  2.3 Descriptive Statistics
    2.3.1 Measures of Central Tendency
    2.3.2 Measures of Variability
    2.3.3 Confidence Intervals
    2.3.4 Displaying Confidence Intervals as Error Bars
  2.4 Comparing Means
    2.4.1 Independent Samples
    2.4.2 Paired Samples
    2.4.3 Comparing More Than Two Samples
  2.5 Relationships Between Variables
    2.5.1 Correlations
  2.6 Nonparametric Tests
    2.6.1 The Chi-Square Test
  2.7 Presenting Your Data Graphically
    2.7.1 Column or Bar Graphs
    2.7.2 Line Graphs
    2.7.3 Scatterplots
    2.7.4 Pie or Donut Charts
    2.7.5 Stacked Bar Graphs
  2.8 Summary

Chapter 3 Planning
  3.1 Study Goals
    3.1.1 Formative User Research
    3.1.2 Summative User Research
  3.2 UX Goals
    3.2.1 User Performance
    3.2.2 User Preferences
    3.2.3 User Emotions
  3.3 Business Goals
  3.4 Choosing the Right UX Metrics
    3.4.1 Completing an eCommerce Transaction
    3.4.2 Comparing Products
    3.4.3 Evaluating Frequent Use of the Same Product
    3.4.4 Evaluating Navigation and/or Information Architecture
    3.4.5 Increasing Awareness
    3.4.6 Problem Discovery
    3.4.7 Maximizing Usability for a Critical Product
    3.4.8 Creating an Overall Positive User Experience
    3.4.9 Evaluating the Impact of Subtle Changes
    3.4.10 Comparing Alternative Designs
  3.5 User Research Methods and Tools
    3.5.1 Traditional (Moderated) Usability Tests
    3.5.2 Unmoderated Usability Tests
    3.5.3 Online Surveys
    3.5.4 Information Architecture Tools
    3.5.5 Click and Mouse Tools
  3.6 Other Study Details
    3.6.1 Budgets and Timelines
    3.6.2 Participants
    3.6.3 Data Collection
    3.6.4 Data Cleanup
  3.7 Summary

Chapter 4 Performance Metrics
  4.1 Task Success
    4.1.1 Binary Success
    4.1.2 Levels of Success
    4.1.3 Issues in Measuring Success
  4.2 Time-on-Task
    4.2.1 Importance of Measuring Time-on-Task
    4.2.2 How to Collect and Measure Time-on-Task
    4.2.3 Analyzing and Presenting Time-on-Task Data
    4.2.4 Issues to Consider When Using Time Data
  4.3 Errors
    4.3.1 When to Measure Errors
    4.3.2 What Constitutes an Error?
    4.3.3 Collecting and Measuring Errors
    4.3.4 Analyzing and Presenting Errors
    4.3.5 Issues to Consider When Using Error Metrics
  4.4 Other Efficiency Metrics
    4.4.1 Collecting and Measuring Efficiency
    4.4.2 Analyzing and Presenting Efficiency Data
    4.4.3 Efficiency as a Combination of Task Success and Time
  4.5 Learnability
    4.5.1 Collecting and Measuring Learnability Data
    4.5.2 Analyzing and Presenting Learnability Data
    4.5.3 Issues to Consider When Measuring Learnability
  4.6 Summary

Chapter 5 Self-Reported Metrics
  5.1 Importance of Self-Reported Data
  5.2 Rating Scales
    5.2.1 Likert Scales
    5.2.2 Semantic Differential Scales
    5.2.3 When to Collect Self-Reported Data
    5.2.4 How to Collect Ratings
    5.2.5 Biases in Collecting Self-Reported Data
    5.2.6 General Guidelines for Rating Scales
    5.2.7 Analyzing Rating-Scale Data
  5.3 Post-Task Ratings
    5.3.1 Ease of Use
    5.3.2 After-Scenario Questionnaire
    5.3.3 Expectation Measure
    5.3.4 A Comparison of Post-Task Self-Reported Metrics
  5.4 Overall User Experience Ratings
    5.4.1 System Usability Scale
    5.4.2 Computer System Usability Questionnaire
    5.4.3 Product Reaction Cards
    5.4.4 User Experience Questionnaire
    5.4.5 AttrakDiff
    5.4.6 Net Promoter Score
    5.4.7 Additional Tools for Measuring Self-Reported User Experience
    5.4.8 A Comparison of Selected Overall Self-Reported Metrics
  5.5 Using SUS to Compare Designs
  5.6 Online Services
    5.6.1 Website Analysis and Measurement Inventory
    5.6.2 American Customer Satisfaction Index
    5.6.3 OpinionLab
    5.6.4 Issues With Live-Site Surveys
  5.7 Other Types of Self-Reported Metrics
    5.7.1 Assessing Attribute Priorities
    5.7.2 Assessing Specific Attributes
    5.7.3 Assessing Specific Elements
    5.7.4 Open-Ended Questions
    5.7.5 Awareness and Comprehension
    5.7.6 Awareness and Usefulness Gaps
  5.8 Summary

Chapter 6 Issues-Based Metrics
  6.1 What Is a Usability Issue?
    6.1.1 Real Issues Versus False Issues
  6.2 How to Identify an Issue
    6.2.1 Using Think-Aloud From One-on-One Studies
    6.2.2 Using Verbatim Comments From Automated Studies
    6.2.3 Using Web Analytics
    6.2.4 Using Eye-Tracking
  6.3 Severity Ratings
    6.3.1 Severity Ratings Based on the User Experience
    6.3.2 Severity Ratings Based on a Combination of Factors
    6.3.3 Using a Severity Rating System
    6.3.4 Some Caveats About Rating Systems
  6.4 Analyzing and Reporting Metrics for Usability Issues
    6.4.1 Frequency of Unique Issues
    6.4.2 Frequency of Issues per Participant
    6.4.3 Percentage of Participants
    6.4.4 Issues by Category
    6.4.5 Issues by Task
  6.5 Consistency in Identifying Usability Issues
  6.6 Bias in Identifying Usability Issues
  6.7 Number of Participants
    6.7.1 Five Participants Is Enough
    6.7.2 Five Participants Is Not Enough
    6.7.3 What to Do?
    6.7.4 Our Recommendation
  6.8 Summary

Chapter 7 Eye Tracking
  7.1 How Eye Tracking Works
  7.2 Mobile Eye Tracking
    7.2.1 Measuring Glanceability
    7.2.2 Understanding Mobile Users in Context
    7.2.3 Mobile Eye Tracking Technology
    7.2.4 Glasses
    7.2.5 Device Stand
    7.2.6 Software-Based Eye Tracking
  7.3 Visualizing Eye Tracking Data
  7.4 Areas of Interest
  7.5 Common Eye Tracking Metrics
    7.5.1 Dwell Time
    7.5.2 Number of Fixations
    7.5.3 Fixation Duration
    7.5.4 Sequence
    7.5.5 Time to First Fixation
    7.5.6 Revisits
    7.5.7 Hit Ratio
  7.6 Tips for Analyzing Eye Tracking Data
  7.7 Pupillary Response
  7.8 Summary

Chapter 8 Measuring Emotion
  8.1 Defining the Emotional User Experience
  8.2 Methods to Measure Emotions
    8.2.1 Five Challenges in Measuring Emotions
  8.3 Measuring Emotions Through Verbal Expressions
  8.4 Self-Report
  8.5 Facial Expression Analysis
  8.6 Galvanic Skin Response
  8.7 Case Study: The Value of Biometrics
  8.8 Summary

Chapter 9 Combined and Comparative Metrics
  9.1 Single UX Scores
    9.1.1 Combining Metrics Based on Target Goals
    9.1.2 Combining Metrics Based on Percentages
    9.1.3 Combining Metrics Based on Z-Scores
    9.1.4 Using SUM: Single Usability Metric
  9.2 UX Scorecards and Frameworks
    9.2.1 UX Scorecards
    9.2.2 UX Frameworks
  9.3 Comparison to Goals and Expert Performance
    9.3.1 Comparison to Goals
    9.3.2 Comparison to Expert Performance
  9.4 Summary

Chapter 10 Special Topics
  10.1 Web Analytics
    10.1.1 Basic Web Analytics
    10.1.2 Click-Through Rates
    10.1.3 Drop-off Rates
    10.1.4 A/B Tests
  10.2 Card-Sorting Data
    10.2.1 Analyses of Open Card-Sort Data
    10.2.2 Analyses of Closed Card-Sort Data
  10.3 Tree Testing
  10.4 First Click Testing
  10.5 Accessibility Metrics
  10.6 Return-on-Investment Metrics
  10.7 Summary

Chapter 11 Case Studies
  11.1 Thinking Fast and Slow in the Netflix TV User Interface
    11.1.1 Background
    11.1.2 Methods
    11.1.3 Results
    11.1.4 Discussion
    11.1.5 Impact
  11.2 Participate/Compete/Win (PCW) Framework: Evaluating Products and Features in the Marketplace
    11.2.1 Introduction
    11.2.2 Outlining Objective Criteria
    11.2.3 Feature Analysis
    11.2.4 "PCW" (Summative) Usability Testing
  11.3 Enterprise UX Case Study: Uncovering the "UX Revenue Chain"
    11.3.1 Introduction
    11.3.2 Metric Identification and Selection
    11.3.3 Methods
    11.3.4 Analysis
    11.3.5 Results
    11.3.6 Conclusion
  11.4 Competitive UX Benchmarking of Four Healthcare Websites
    11.4.1 Methodology
    11.4.2 Results
    11.4.3 Summary and Recommendations
    11.4.4 Acknowledgment and Contributions
    11.4.5 Biography
  11.5 Closing the SNAP Gap
    11.5.1 Field Research
    11.5.2 Weekly Reviews
    11.5.3 Application Questions
    11.5.4 Surveys
    11.5.5 Testing Prototypes
    11.5.6 Success Metric
    11.5.7 Organizations
    11.5.8 Biography

Chapter 12 Ten Keys to Success
  12.1 Make the Data Come Alive
  12.2 Don't Wait to Be Asked to Measure
  12.3 Measurement Is Less Expensive Than You Think
  12.4 Plan Early
  12.5 Benchmark Your Products
  12.6 Explore Your Data
  12.7 Speak the Language of Business
  12.8 Show Your Confidence
  12.9 Don't Misuse Metrics
  12.10 Simplify Your Presentation

References
Index
William (Bill) Albert is Senior Vice President and Global Head of Customer Development at Mach49, a growth incubator for global businesses. Prior to joining Mach49, Bill was Executive Director of the Bentley University User Experience Center (UXC) for almost 13 years. He was also Director of User Experience at Fidelity Investments, Senior User Interface Researcher at Lycos, and Post-Doctoral Researcher at Nissan Cambridge Basic Research. He has more than twenty years of experience in user experience research, design, and strategy. Bill has presented his research at more than 50 national and international conferences and published in many peer-reviewed academic journals in the fields of user experience, usability, and human-computer interaction. In 2010 he co-authored, with Tom Tullis and Donna Tedesco, "Beyond the Usability Lab: Conducting Large-Scale Online User Experience Studies," published by Elsevier/Morgan Kaufmann.

Thomas S. (Tom) Tullis retired as Vice President of User Experience Research at Fidelity Investments in 2017. He was also an Adjunct Professor in Human Factors in Information Design at Bentley University, beginning in 2004. He joined Fidelity in 1993 and was instrumental in the development of the company's User Research department, whose facilities include state-of-the-art usability labs. Prior to joining Fidelity, he held positions at Canon Information Systems, McDonnell Douglas, Unisys Corporation, and Bell Laboratories. He and Fidelity's usability team have been featured in a number of publications, including Newsweek, Business 2.0, Money, The Boston Globe, The Wall Street Journal, and The New York Times.