
E-book: Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics

4.08/5 (1,019 ratings on Goodreads)
Thomas Tullis (Senior Vice President of User Experience, Fidelity Investments, USA) and William Albert (Director, Design and Usability Center, Bentley University, USA)
  • Format: PDF+DRM
  • Series: Interactive Technologies
  • Publication date: 27-Jul-2010
  • Publisher: Morgan Kaufmann Publishers Inc.
  • Language: English
  • ISBN-13: 9780080558264
  • Price: 41.98 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste): not allowed
  • Printing: not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You will also need to create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android).

    To read on a PC or Mac, install Adobe Digital Editions. (This is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

Effectively measuring the usability of any product requires choosing the right metric, applying it, and making good use of the information it reveals. Measuring the User Experience provides the first single source of practical information to enable usability professionals and product developers to do just that. Authors Tullis and Albert organize dozens of metrics into six categories: performance, issues-based, self-reported, web navigation, derived, and behavioral/physiological. They explore each metric, considering the best methods for collecting, analyzing, and presenting the data, and they provide step-by-step guidance for measuring the usability of any type of product using any type of technology.

• Presents criteria for selecting the most appropriate metric for every case
• Takes a product- and technology-neutral approach
• Presents in-depth case studies to show how organizations have successfully used the metrics and the information they revealed

Reviews

"If Tom and Bill could convince me, perhaps the worlds biggest fan of qualitative testing, that usability metrics are really valuablewhich they have, in this wonderful bookthen theres no doubt theyll convince you. I loved reading this book, because it was exactly like having a fascinating conversation with a very smart, very seasoned, and very articulate practitioner. They tell you everything you need to know (and no more) about all the most useful usability metrics, explain the pros and cons of each one (with remarkable clarity and economy), and then reveal exactly how they actually use them after years and years of real world experience. Invaluable!" --Steve Krug, author of Dont Make Me Think: A Common Sense Approach to Web Usability

"This book is a great resource about the many ways you can gather usability metrics without busting your budget. If youre ready to take your user experience career to the next level of professionalism, Tullis and Albert are here for you and share generously of their vast experience. Highly recommended." --Jakob Nielsen, Principal, Nielsen Norman Group, author of Usability Engineering and Eyetracking Web Usability

"If you do any type of usability testing, you need this book. Tullis and Albert have written a clear and comprehensive guide with a common-sense approach to usability metrics." --Ginny Redish, President of Redish and Associates, Inc., author of Letting Go of the Words

Table of contents

Preface xv
Acknowledgments xvii
Introduction 1
Organization of This Book 2
What Is Usability? 4
Why Does Usability Matter? 5
What Are Usability Metrics? 7
The Value of Usability Metrics 8
Ten Common Myths about Usability Metrics 10
Background 15
Designing a Usability Study 15
Selecting Participants 16
Sample Size 17
Within-Subjects or Between-Subjects Study 18
Counterbalancing 19
Independent and Dependent Variables 20
Types of Data 20
Nominal Data 20
Ordinal Data 21
Interval Data 22
Ratio Data 23
Metrics and Data 23
Descriptive Statistics 24
Measures of Central Tendency 25
Measures of Variability 26
Confidence Intervals 27
Comparing Means 28
Independent Samples 28
Paired Samples 29
Comparing More Than Two Samples 30
Relationships between Variables 31
Correlations 32
Nonparametric Tests 33
The Chi-Square Test 33
Presenting Your Data Graphically 35
Column or Bar Graphs 36
Line Graphs 38
Scatterplots 40
Pie Charts 42
Stacked Bar Graphs 42
Summary 44
Planning a Usability Study 45
Study Goals 45
Formative Usability 45
Summative Usability 46
User Goals 47
Performance 47
Satisfaction 47
Choosing the Right Metrics: Ten Types of Usability Studies 48
Completing a Transaction 48
Comparing Products 50
Evaluating Frequent Use of the Same Product 50
Evaluating Navigation and/or Information Architecture 51
Increasing Awareness 52
Problem Discovery 52
Maximizing Usability for a Critical Product 53
Creating an Overall Positive User Experience 54
Evaluating the Impact of Subtle Changes 54
Comparing Alternative Designs 55
Other Study Details 55
Budgets and Timelines 55
Evaluation Methods 57
Participants 58
Data Collection 59
Data Cleanup 60
Summary 61
Performance Metrics 63
Task Success 64
Collecting Any Type of Success Metric 65
Binary Success 66
Levels of Success 69
Issues in Measuring Success 73
Time-on-Task 74
Importance of Measuring Time-on-Task 74
How to Collect and Measure Time-on-Task 74
Analyzing and Presenting Time-on-Task Data 77
Issues to Consider When Using Time Data 79
Errors 81
When to Measure Errors 81
What Constitutes an Error? 82
Collecting and Measuring Errors 83
Analyzing and Presenting Errors 84
Issues to Consider When Using Error Metrics 86
Efficiency 87
Collecting and Measuring Efficiency 87
Analyzing and Presenting Efficiency Data 88
Efficiency as a Combination of Task Success and Time 90
Learnability 92
Collecting and Measuring Learnability Data 93
Analyzing and Presenting Learnability Data 94
Issues to Consider When Measuring Learnability 96
Summary 97
Issues-Based Metrics 99
Identifying Usability Issues 99
What Is a Usability Issue? 100
Real Issues versus False Issues 101
How to Identify an Issue 102
In-Person Studies 103
Automated Studies 103
When Issues Begin and End 103
Granularity 104
Multiple Observers 104
Severity Ratings 105
Severity Ratings Based on the User Experience 105
Severity Ratings Based on a Combination of Factors 106
Using a Severity Rating System 107
Some Caveats about Severity Ratings 108
Analyzing and Reporting Metrics for Usability Issues 108
Frequency of Unique Issues 109
Frequency of Issues per Participant 111
Frequency of Participants 111
Issues by Category 112
Issues by Task 113
Reporting Positive Issues 114
Consistency in Identifying Usability Issues 114
Bias in Identifying Usability Issues 116
Number of Participants 117
Five Participants Is Enough 118
Five Participants Is Not Enough 119
Our Recommendation 119
Summary 121
Self-Reported Metrics 123
Importance of Self-Reported Data 123
Collecting Self-Reported Data 124
Likert Scales 124
Semantic Differential Scales 125
When to Collect Self-Reported Data 125
How to Collect Self-Reported Data 126
Biases in Collecting Self-Reported Data 126
General Guidelines for Rating Scales 127
Analyzing Self-Reported Data 127
Post-Task Ratings 128
Ease of Use 128
After-Scenario Questionnaire 129
Expectation Measure 129
Usability Magnitude Estimation 132
Comparison of Post-Task Self-Reported Metrics 133
Post-Session Ratings 135
Aggregating Individual Task Ratings 137
System Usability Scale 138
Computer System Usability Questionnaire 139
Questionnaire for User Interface Satisfaction 139
Usefulness, Satisfaction, and Ease of Use Questionnaire 142
Product Reaction Cards 142
Comparison of Post-Session Self-Reported Metrics 144
Using SUS to Compare Designs 147
Comparison of "Senior-Friendly" Websites 147
Comparison of Windows ME and Windows XP 147
Comparison of Paper Ballots 148
Online Services 150
Website Analysis and Measurement Inventory 150
American Customer Satisfaction Index 151
OpinionLab 153
Issues with Live-Site Surveys 157
Other Types of Self-Reported Metrics 158
Assessing Specific Attributes 158
Assessing Specific Elements 161
Open-Ended Questions 162
Awareness and Comprehension 163
Awareness and Usefulness Gaps 165
Summary 166
Behavioral and Physiological Metrics 167
Observing and Coding Overt Behaviors 167
Verbal Behaviors 168
Nonverbal Behaviors 169
Behaviors Requiring Equipment to Capture 171
Facial Expressions 171
Eye-Tracking 175
Pupillary Response 180
Skin Conductance and Heart Rate 183
Other Measures 186
Summary 188
Combined and Comparative Metrics 191
Single Usability Scores 191
Combining Metrics Based on Target Goals 192
Combining Metrics Based on Percentages 193
Combining Metrics Based on z-Scores 198
Using SUM: Single Usability Metric 202
Usability Scorecards 203
Comparison to Goals and Expert Performance 206
Comparison to Goals 206
Comparison to Expert Performance 208
Summary 210
Special Topics 211
Live Website Data 211
Server Logs 211
Click-Through Rates 213
Drop-Off Rates 215
A/B Studies 216
Card-Sorting Data 217
Analyses of Open Card-Sort Data 218
Analyses of Closed Card-Sort Data 225
Accessibility Data 227
Return-on-Investment Data 231
Six Sigma 234
Summary 236
Case Studies 237
Redesigning a Website Cheaply and Quickly (Hoa Loranger) 237
Phase 1: Testing Competitor Websites 237
Phase 2: Testing Three Different Design Concepts 239
Phase 3: Testing a Single Design 243
Conclusion 244
Biography 244
Usability Evaluation of a Speech Recognition IVR (James R. Lewis) 244
Method 244
Results: Task-Level Measurements 245
PSSUQ 246
Participant Comments 246
Usability Problems 247
Adequacy of Sample Size 247
Recommendations Based on Participant Behaviors and Comments 250
Discussion 251
Biography 251
References 252
Redesign of the CDC.gov Website (Robert Bailey, Cari Wolfson, and Janice Nall) 252
Usability Testing Levels 253
Baseline Test 253
Task Scenarios 254
Qualitative Findings 255
Wireframing and FirstClick Testing 256
Final Prototype Testing (Prelaunch Test) 258
Conclusions 261
Biographies 262
References 262
Usability Benchmarking: Mobile Music and Video (Scott Weiss and Chris Whitby) 263
Project Goals and Methods 263
Qualitative and Quantitative Data 263
Research Domain 263
Comparative Analysis 264
Study Operations: Number of Respondents 264
Respondent Recruiting 265
Data Collection 265
Time to Complete 266
Success or Failure 266
Number of Attempts 266
Perception Metrics 266
Qualitative Findings 267
Quantitative Findings 267
Summary Findings and SUM Metrics 267
Data Manipulation and Visualization 267
Discussion 269
Benchmark Changes and Future Work 270
Biographies 270
References 270
Measuring the Effects of Drug Label Design and Similarity on Pharmacists' Performance (Agnieszka Bojko) 271
Participants 272
Apparatus 272
Stimuli 272
Procedure 275
Analysis 276
Results and Discussion 277
Biography 279
References 279
Making Metrics Matter (Todd Zazelenchuk) 280
OneStart: Indiana University's Enterprise Portal Project 280
Designing and Conducting the Study 281
Analyzing and Interpreting the Results 282
Sharing the Findings and Recommendations 283
Reflecting on the Impact 286
Conclusion 287
Acknowledgment 287
Biography 287
References 287
Moving Forward 289
Sell Usability and the Power of Metrics 289
Start Small and Work Your Way Up 290
Make Sure You Have the Time and Money 291
Plan Early and Often 292
Benchmark Your Products 293
Explore Your Data 294
Speak the Language of Business 295
Show Your Confidence 295
Don't Misuse Metrics 296
Simplify Your Presentation 297
References 299
Index 307
About the authors

William (Bill) Albert is Senior Vice President and Global Head of Customer Development at Mach49, a growth incubator for global businesses. Prior to joining Mach49, Bill was Executive Director of the Bentley University User Experience Center (UXC) for almost 13 years. Before that, he was Director of User Experience at Fidelity Investments, Senior User Interface Researcher at Lycos, and Post-Doctoral Researcher at Nissan Cambridge Basic Research. He has more than twenty years of experience in user experience research, design, and strategy. Bill has presented his research at more than 50 national and international conferences and has published in many peer-reviewed academic journals in the fields of user experience, usability, and human-computer interaction. In 2010 he co-authored (with Tom Tullis and Donna Tedesco) Beyond the Usability Lab: Conducting Large-Scale Online User Experience Studies, published by Elsevier/Morgan Kaufmann.

Thomas S. (Tom) Tullis retired as Vice President of User Experience Research at Fidelity Investments in 2017. He was also an Adjunct Professor in Human Factors in Information Design at Bentley University beginning in 2004. He joined Fidelity in 1993 and was instrumental in the development of the company's User Research department, whose facilities include state-of-the-art usability labs. Prior to joining Fidelity, he held positions at Canon Information Systems, McDonnell Douglas, Unisys Corporation, and Bell Laboratories. He and Fidelity's usability team have been featured in a number of publications, including Newsweek, Business 2.0, Money, The Boston Globe, The Wall Street Journal, and The New York Times.