E-raamat: Beyond the Usability Lab: Conducting Large-scale Online User Experience Studies

By William Albert (Director, Design and Usability Center, Bentley University, USA), Thomas Tullis (Senior Vice President of User Experience, Fidelity Investments, USA), and Donna Tedesco (Senior Usability Specialist)
  • Format: PDF+DRM
  • Publication date: 21-Dec-2009
  • Publisher: Morgan Kaufmann Publishers Inc
  • Language: English
  • ISBN-13: 9780080953854
  • Price: 40.74 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has issued this e-book in encrypted form, which means you need to install special software to read it. You also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on mobile devices (phone or tablet), you need to install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, you need to install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is most likely already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

Usability testing and user experience research typically take place in a controlled lab with small groups. While this type of testing is essential to user experience design, more companies are also looking to test large sample sizes so they can compare data across specific user populations and see how experiences differ across user groups. But few usability professionals have experience in setting up these studies, analyzing the data, and presenting it in effective ways. Online usability testing offers a solution by allowing testers to elicit feedback simultaneously from thousands of users. Beyond the Usability Lab offers tried and tested methodologies for conducting online usability studies. It gives practitioners the guidance they need to collect a wealth of data through cost-effective, efficient, and reliable practices. The reader will develop a solid understanding of the capabilities of online usability testing, learn when it is and is not appropriate to use, and learn about the various types of online usability testing techniques.

    • The first guide for conducting large-scale user experience research using the internet
    • Presents how to conduct online tests with thousands of participants, from start to finish
    • Outlines essential tips for online studies to ensure cost-efficient and reliable results



    Reviews

    "If you need to reach a large audience using online tools, this book will guide you, inform you, and help you with practical tips from a team that has a wealth of practical and academic experience. They explain how to plan, organise, conduct, analyse, and present your study, and they arent afraid to tackle tricky topics like dealing with possibly fraudulent participants. Highly recommended." --Caroline Jarrett, Effortmark, Ltd, author of Forms that Work"Beyond the Usability Lab will become the gold standard for how to conduct online usability studies. Albert, Tedesco and Tullis masterfully combine strategic design related issues with tactical and practical step-by-step instructions on how to implement an online usability study." --Jim Berling, Managing Director, Burke Institute "Beyond the Usability Lab is intended to help you move your usability testing out of the lab environment and into the online world as a cost-effective way to reach more participants. Albert and colleagues write from their own experience conducting online studies and describe situations when online studies are most and least effective, employing real-world examples with abundant black-and-white screen shots and other graphics. A quest to share knowledge seems to be a driving force behind the book, and I am particularly impressed with how often the authors recommend other, potentially rival texts as references for additional information. Beyond the Usability Lab also has a companion Web site, www.beyondtheusabilitylab.com, to which youre often referred for support material. Though simple in design, the site is a helpful auxiliary source." --Devor Barton, Technical Communication, Vol 58, Number 1, February 2011"The book is well-structured for the practitioner. It is a real pleasure to encounter a book that not only takes the reader on a journey through the rich possibilities of technique, but does so in a a manner that is clear, readable, and accessible. I was particularly pleased with the simple explanations of statistical techniques, which are so often presented as incomprehensible. Whether youve conducted remote studies in the past and want to extend your capability and knowledge, or if you are a complete newcomer, this excellent book is a necessary companion on your journey from the lab into the outside world. You will refer to it often, and it will alert you to opportunities and dangers. What more could you ask of a book?" --as reviewed in User Experience

    Other information

    Gather feedback from thousands of users simultaneously with this first-ever guide to conducting user research and usability testing online.
    PREFACE
    ACKNOWLEDGMENTS
    DEDICATION
    AUTHOR BIOGRAPHIES
    CHAPTER 1 INTRODUCTION
    1.1 What Is an Online Usability Study?
    1.2 Strengths and Limitations of Online Usability Testing
    1.2.1 Comparing designs
    1.2.2 Measuring the user experience
    1.2.3 Finding the right participants
    1.2.4 Focusing design improvements
    1.2.5 Insight into users' real experience
    1.2.6 Where are users going (click paths)?
    1.2.7 What users are saying about their experiences
    1.2.8 Saving time and money
    1.2.9 Limitations of online usability testing
    1.3 Combining Online Usability Studies with Other User Research Methods
    1.3.1 Usability lab (or remote) testing
    1.3.2 Expert review
    1.3.3 Focus groups
    1.3.4 Web traffic analysis
    1.4 Organization of the Book
    CHAPTER 2 PLANNING THE STUDY
    2.1 Target Users
    2.2 Type of Study
    2.3 Between-Subjects Versus Within-Subjects
    2.4 Metrics
    2.4.1 Task-based data
    2.4.2 End-of-session data
    2.5 Budget and Timeline
    2.5.1 Budget
    2.5.2 Timeline
    2.6 Participant Recruiting
    2.6.1 True intent intercept
    2.6.2 Panels
    2.6.3 Direct and targeted recruiting
    2.7 Participant Sampling
    2.7.1 Number of participants
    2.7.2 Sampling techniques
    2.8 Participant Incentives
    2.9 Summary
    CHAPTER 3 DESIGNING THE STUDY
    3.1 Introducing the Study
    3.1.1 Purpose, sponsor information, motivation, and incentive
    3.1.2 Time estimate
    3.1.3 Technical requirements
    3.1.4 Legal information and consent
    3.1.5 Instructions
    3.2 Screening Questions
    3.2.1 Types of screening questions
    3.2.2 Misrepresentation checks
    3.2.3 Exit strategy
    3.3 Starter Questions
    3.3.1 Product, computer, and Web experience
    3.3.2 Expectations
    3.3.3 Reducing bias later in the study
    3.4 Constructing Tasks
    3.4.1 Making the task easy to understand
    3.4.2 Writing tasks with task completion rates in mind
    3.4.3 Anticipating various paths to an answer
    3.4.4 Multiple-choice answers
    3.4.5 Including a "none of the above" option
    3.4.6 Including a "don't know" or "give up" option
    3.4.7 Randomizing task order and answer choices
    3.4.8 Using a subset of tasks
    3.4.9 Self-generated and self-selected tasks
    3.4.10 Self-reported task completion
    3.5 Post-Task Questions and Metrics
    3.5.1 Self-reported data
    3.5.2 Open-ended responses
    3.6 Post-Session Questions and Metrics
    3.6.1 Overall rating scales
    3.6.2 Overall assessment tools
    3.6.3 Open-ended questions
    3.7 Demographic Questions and Wrap-Up
    3.7.1 Demographic questions
    3.7.2 Wrap-up
    3.8 Special Topics
    3.8.1 Progress indicators
    3.8.2 Pausing
    3.8.3 Speed traps
    3.9 Summary
    CHAPTER 4 PILOTING AND LAUNCHING THE STUDY
    4.1 Pilot Data
    4.1.1 Technical checks
    4.1.2 Usability checks
    4.1.3 Full pilot with data checks
    4.1.4 Preview of results
    4.2 Timing the Launch
    4.2.1 Finding the right time to launch
    4.2.2 Singular and phased launches
    4.3 Monitoring Results
    4.4 Summary
    CHAPTER 5 DATA PREPARATION
    5.1 Downloading/Exporting Data
    5.2 Data Quality Checks
    5.3 Removing Participants
    5.3.1 Incomplete data
    5.3.2 Participants who misrepresent themselves
    5.3.3 Mental cheaters
    5.3.4 Tips on removing participants
    5.4 Removing and Modifying Data for Individual Tasks
    5.4.1 Outliers
    5.4.2 Contradictory responses
    5.4.3 Removing a task for all participants
    5.4.4 Modifying task success
    5.5 Recoding Data and Creating New Variables
    5.5.1 Success data
    5.5.2 Time variables
    5.5.3 Self-reported variables
    5.5.4 Clickstream data
    5.6 Summary
    CHAPTER 6 DATA ANALYSIS AND PRESENTATION
    6.1 Task Performance Data
    6.1.1 Task success
    6.1.2 Task times
    6.1.3 Efficiency
    6.2 Self-Reported Data
    6.2.1 Rating scales
    6.2.2 Open-ended questions, comments, and other verbatims
    6.2.3 Overall assessment tools
    6.3 Clickstream Data
    6.4 Correlations and Combinations
    6.4.1 Correlations
    6.4.2 Combinations (or deriving an overall usability score)
    6.5 Segmentation Analysis
    6.5.1 Segmenting by participants
    6.5.2 Segmenting by tasks
    6.6 Identifying Usability Issues and Comparing Designs
    6.6.1 Identifying usability issues
    6.6.2 Comparing alternative designs
    6.7 Presenting the Results
    6.7.1 Set the stage appropriately
    6.7.2 Make the participants real
    6.7.3 Organize your data logically
    6.7.4 Tell a story
    6.7.5 Use pictures
    6.7.6 Simplify data graphs
    6.7.7 Show confidence intervals
    6.7.8 Make details available without boring your audience
    6.7.9 Make the punch line(s) clear
    6.7.10 Clarify the next steps
    6.8 Summary
    CHAPTER 7 BUILDING YOUR ONLINE STUDY USING COMMERCIAL TOOLS
    7.1 Loop11
    7.1.1 Creating a study
    7.1.2 From the participant's perspective
    7.1.3 Data analysis
    7.1.4 Summary of strengths and limitations
    7.2 RelevantView
    7.2.1 Creating a study
    7.2.2 From the participant's perspective
    7.2.3 Data analysis
    7.2.4 Summary of strengths and limitations
    7.3 UserZoom
    7.3.1 Creating a study
    7.3.2 From the participant's perspective
    7.3.3 Data analysis
    7.3.4 Summary of strengths and limitations
    7.4 WebEffective
    7.4.1 Creating a study
    7.4.2 From the participant's perspective
    7.4.3 Data analysis
    7.4.4 Summary of strengths and limitations
    7.5 Checklist of Questions
    7.6 Summary
    CHAPTER 8 DISCOUNT APPROACHES TO BUILDING AN ONLINE STUDY
    8.1 The Basic Approach
    8.2 Measuring Task Success
    8.3 Ratings for Each Task
    8.4 Conditional Logic for a Comment or Explanation
    8.5 Task Timing
    8.6 Randomizing Task Order
    8.7 Positioning of Windows
    8.8 Random Assignment of Participants to Conditions
    8.9 Pulling It All Together
    8.10 Summary
    CHAPTER 9 CASE STUDIES
    9.1 Access-Task Surveys: A Low-Cost Method for Evaluating the Efficacy of Software User Interfaces
    9.1.1 Background
    9.1.2 Access Task Survey tool
    9.1.3 Methodology
    9.1.4 Results
    9.1.5 Discussion and conclusions
    9.2 Using Self-Guided Usability Tests During the Redesign of IBM Lotus Notes
    9.2.1 Methodology
    9.2.2 Results
    9.2.3 Self-guided usability testing: Discussion and conclusions
    9.3 Longitudinal Usability and User Engagement Testing for the Complex Web Site Redesign of MTV.com
    9.3.1 Project background
    9.3.2 Why a longitudinal study design
    9.3.3 Task structure
    9.3.4 Data gathering technology and process
    9.3.5 Respondent recruiting and incentives
    9.3.6 Lab study and online data gathering methodology verification
    9.3.7 Data analysis
    9.3.8 Results and discussion
    9.3.9 Conclusion
    9.4 An Automated Study of the UCSF Web Site
    9.4.1 Methodology
    9.4.2 Results and discussion
    9.4.3 Conclusions
    9.5 Online Usability Testing of Tax Preparation Software
    9.5.1 Methodology
    9.5.2 Results and discussion
    9.5.3 Advantages and challenges
    9.5.4 Conclusions
    9.6 Online Usability Testing of FamilySearch.org
    9.6.1 Study goals
    9.6.2 Why online usability testing?
    9.6.3 Methodology
    9.6.4 Metrics and data
    9.6.5 Results and discussion
    9.6.6 Data and user experience
    9.6.7 Getting results heard and integrated
    9.6.8 Conclusions
    9.6.9 Lessons learned
    9.7 Using Online Usability Testing Early in Application Development: Building Usability in from the Start
    9.7.1 Project background
    9.7.2 Creating the usability study environment
    9.7.3 Study goals
    9.7.4 Methodology
    9.7.5 Results and discussion
    9.7.6 Conclusions
    9.7.7 Study limitations and lessons learned
    CHAPTER 10 TEN KEYS TO SUCCESS
    10.1 Choose the Right Tool
    10.2 Think Outside of the (Web) Box
    10.3 Test Early in the Design Phase
    10.4 Compare Alternatives
    10.5 Consider the Entire User Experience
    10.6 Use Your Entire Research Toolkit
    10.7 Explore Data
    10.8 Sell Your Results
    10.9 Trust Data (Within Limits)
    10.10 You Don't Have to Be an Expert—Just Dive In!
    REFERENCES
    INDEX
    William (Bill) Albert is Senior Vice President and Global Head of Customer Development at Mach49, a growth incubator for global businesses. Prior to joining Mach49, Bill was Executive Director of the Bentley University User Experience Center (UXC) for almost 13 years. He was also Director of User Experience at Fidelity Investments, Senior User Interface Researcher at Lycos, and Post-Doctoral Researcher at Nissan Cambridge Basic Research. He has more than twenty years of experience in user experience research, design, and strategy. Bill has published and presented his research at more than 50 national and international conferences, and has published in many peer-reviewed academic journals within the fields of user experience, usability, and human-computer interaction. In 2010 he co-authored (with Tom Tullis and Donna Tedesco) "Beyond the Usability Lab: Conducting Large-Scale Online User Experience Studies," published by Elsevier/Morgan Kaufmann.

    Thomas S. (Tom) Tullis retired as Vice President of User Experience Research at Fidelity Investments in 2017. He was also an Adjunct Professor in Human Factors in Information Design at Bentley University, beginning in 2004. He joined Fidelity in 1993 and was instrumental in the development of the company's User Research department, whose facilities include state-of-the-art usability labs. Prior to joining Fidelity, he held positions at Canon Information Systems, McDonnell Douglas, Unisys Corporation, and Bell Laboratories. He and Fidelity's usability team have been featured in a number of publications, including Newsweek, Business 2.0, Money, The Boston Globe, The Wall Street Journal, and The New York Times.

    Donna Tedesco is a Senior User Experience Specialist with over ten years of user research experience. She has published and presented at local, national, and international conferences, and is co-author, with Bill Albert and Tom Tullis, of the book "Beyond the Usability Lab: Conducting Large-Scale Online User Experience Studies." Donna received a BS in Engineering Psychology/Human Factors from Tufts University School of Engineering and an MS in Human Factors in Information Design from Bentley University.