E-book: Why Programs Fail: A Guide to Systematic Debugging

(Saarland University, Saarbruecken, Germany)
  • Format: PDF+DRM
  • Publication date: 22-Jul-2009
  • Publisher: Morgan Kaufmann Publishers In
  • Language: eng
  • ISBN-13: 9780080923000
  • Price: 48,15 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You must also create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions. (This is a free application designed specifically for reading e-books. It should not be confused with Adobe Reader, which is most likely already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

This fully updated second edition includes 100+ pages of new material, including new chapters on Verifying Code, Predicting Errors, and Preventing Errors. Cutting-edge tools such as FindBUGS and AGITAR are explained, techniques from integrated environments like Jazz.net are highlighted, and all-new demos with ESC/Java and Spec#, Eclipse and Mozilla are included.

This complete and pragmatic overview of debugging is authored by Andreas Zeller, the talented researcher who developed the GNU Data Display Debugger (DDD), a tool that over 250,000 professionals use to visualize the data structures of programs while they are running. Unlike other books on debugging, Zeller's text is product agnostic, appropriate for all programming languages and skill levels.

Why Programs Fail explains best practices ranging from systematically tracking error reports, to observing symptoms, reproducing errors, and correcting defects. It covers a wide range of tools and techniques from hands-on observation to fully automated diagnoses, and also explores the author's innovative techniques for isolating minimal input to reproduce an error and for tracking cause and effect through a program. It even includes instructions on how to create automated debugging tools.

  • The new edition of this award-winning productivity-booster is for any developer who has ever been frustrated by elusive bugs.
  • Brand new chapters demonstrate cutting-edge debugging techniques and tools, enabling readers to put the latest time-saving developments to work for them.
  • Learn by doing. New exercises and detailed examples focus on emerging tools, languages and environments, including AGITAR, FindBUGS, Python and Eclipse.
  • The text includes exercises and extensive references for further study, and a companion website with source code for all examples and additional debugging resources.


This book is proof that debugging has graduated from a black art to a systematic discipline. It demystifies one of the toughest aspects of software programming, showing clearly how to discover what caused software failures, and fix them with minimal muss and fuss.


Reviews

Praise from the experts for the first edition: "In this book, Andreas Zeller does an excellent job introducing useful debugging techniques and tools invented in both academia and industry. The book is easy to read and actually very fun as well. It will not only help you discover a new perspective on debugging, but it will also teach you some fundamental static and dynamic program analysis techniques in plain language." --Miryung Kim, Software Developer, Motorola Korea

"Today every computer program written is also debugged, but debugging is not a widely studied or taught skill. Few books beyond this one present a systematic approach to finding and fixing programming errors." --James Larus, Microsoft Research

"From the author of ODD, the famous data display debugger, now comes the definitive book on debugging. Zeller's book is chock-full with advice, insight, and tools to track down defects in programs, for all levels of experience and any programming language. The book is lucidly written, explaining the principles of every technique without boring the reader with minutiae. And best of all, at the end of each chapter it tells you where to download all those fancy tools. A great book for the software professional as well as the student interested in the frontiers of automated debugging." --Walter F. Tichy, Professor, University Karlsruhe, Germany

"Andreas Zeller's Why Programs Fail lays an excellent foundation far practitioners, educators, and researchers alike. Using a disciplined approach based on the scientific method, Zeller provides deep insights, detailed approaches, and illustrative examples." --David Notkin, Professor Computer Science & Engineering, University of Washington

Other information

The award-winning guide to faster and easier debugging is now updated with the latest tools and techniques.
Foreword xv
Preface xvii
CHAPTER 1 How Failures Come to Be 1
1.1 My Program Does Not Work! 1
1.2 From Defects to Failures 2
1.3 Lost in Time and Space 5
1.4 From Failures to Fixes 8
1.4.1 Track the Problem 8
1.4.2 Reproduce the Failure 9
1.4.3 Automate and Simplify the Test Case 9
1.4.4 Find Possible Infection Origins 9
1.4.5 Focus on the Most Likely Origins 12
1.4.6 Isolate the Origin of the Infection 12
1.4.7 Correct the Defect 13
1.5 Automated Debugging Techniques 14
1.6 Bugs, Faults, or Defects? 18
1.7 Concepts 19
How to debug a program 20
1.8 Tools 20
1.9 Further Reading 21
Exercises 22
CHAPTER 2 Tracking Problems 25
2.1 Oh! All These Problems 25
2.2 Reporting Problems 26
2.2.1 Problem Facts 26
2.2.2 Product Facts 28
2.2.3 Querying Facts Automatically 29
2.3 Managing Problems 31
2.4 Classifying Problems 32
2.4.1 Severity 33
2.4.2 Priority 33
2.4.3 Identifier 33
2.4.4 Comments 34
2.4.5 Notification 34
2.5 Processing Problems 34
2.6 Managing Problem Tracking 36
2.7 Requirements as Problems 37
2.8 Managing Duplicates 39
2.9 Relating Problems and Fixes 40
2.10 Relating Problems and Tests 43
2.11 Concepts 44
How to obtain the relevant problem information 44
How to write an effective problem report 44
How to organize the debugging process 44
How to track requirements 44
How to keep problem tracking simple 44
How to restore released versions 45
How to separate fixes and features 45
How to relate problems and fixes 45
How to relate problems and tests 45
How to make a problem report obsolete 45
2.12 Tools 45
2.13 Further Reading 46
Exercises 46
CHAPTER 3 Making Programs Fail 49
3.1 Testing for Debugging 49
3.2 Controlling the Program 50
3.3 Testing at the Presentation Layer 53
3.3.1 Low-Level Interaction 53
3.3.2 System-Level Interaction 55
3.3.3 Higher-Level Interaction 55
3.3.4 Assessing Test Results 56
3.4 Testing at the Functionality Layer 57
3.5 Testing at the Unit Layer 59
3.6 Isolating Units 63
3.7 Designing for Debugging 66
3.8 Preventing Unknown Problems 69
3.9 Concepts 70
How to test for debugging 70
How to automate program execution 71
How to test at the presentation layer 71
How to test at the functionality layer 71
How to test at the unit layer 71
How to isolate a unit 71
How to design for debugging 71
How to prevent unknown problems 71
3.10 Tools 72
3.11 Further Reading 72
Exercises 73
CHAPTER 4 Reproducing Problems 75
4.1 The First Task in Debugging 75
4.2 Reproducing the Problem Environment 76
4.3 Reproducing Program Execution 78
4.3.1 Reproducing Data 80
4.3.2 Reproducing User Interaction 80
4.3.3 Reproducing Communications 82
4.3.4 Reproducing Time 83
4.3.5 Reproducing Randomness 83
4.3.6 Reproducing Operating Environments 84
4.3.7 Reproducing Schedules 86
4.3.8 Physical Influences 88
4.3.9 Effects of Debugging Tools 89
4.4 Reproducing System Interaction 90
4.5 Focusing on Units 91
4.5.1 Setting Up a Control Layer 92
4.5.2 A Control Example 92
4.5.3 Mock Objects 95
4.5.4 Controlling More Unit Interaction 97
4.6 Reproducing Crashes 97
4.7 Concepts 101
How to reproduce a problem 101
How to reproduce the problem environment 101
How to reproduce the problem execution 101
How to reproduce unit behavior 101
How to mock objects 101
How to reproduce a crash 101
4.8 Tools 101
4.9 Further Reading 102
Exercises 102
CHAPTER 5 Simplifying Problems 105
5.1 Simplifying the Problem 105
5.2 The Gecko BugAThon 106
5.3 Manual Simplification 109
5.4 Automatic Simplification 110
5.5 A Simplification Algorithm 112
5.6 Simplifying User Interaction 117
5.7 Random Input Simplified 118
5.8 Simplifying Faster 119
5.8.1 Caching 119
5.8.2 Stop Early 120
5.8.3 Syntactic Simplification 120
5.8.4 Isolate Differences, Not Circumstances 121
5.9 Concepts 123
How to simplify a test case 123
How to automate simplification 123
How to speed up automatic simplification 123
5.10 Tools 123
5.11 Further Reading 123
Exercises 124
CHAPTER 6 Scientific Debugging 129
6.1 How to Become a Debugging Guru 129
6.2 The Scientific Method 130
6.3 Applying the Scientific Method 132
6.3.1 Debugging sample—Preparation 132
6.3.2 Debugging sample—Hypothesis 1 132
6.3.3 Debugging sample—Hypothesis 2 133
6.3.4 Debugging sample—Hypothesis 3 133
6.3.5 Debugging sample—Hypothesis 4 133
6.4 Explicit Debugging 134
6.5 Keeping a Logbook 135
6.6 Debugging Quick-and-Dirty 136
6.7 Algorithmic Debugging 137
6.8 Deriving a Hypothesis 140
6.8.1 The Description of the Problem 140
6.8.2 The Program Code 140
6.8.3 The Failing Run 140
6.8.4 Alternate Runs 141
6.8.5 Earlier Hypotheses 141
6.9 Reasoning about Programs 142
6.10 Concepts 144
How to isolate a failure cause 144
How to understand the problem at hand 144
How to avoid endless debugging sessions 144
How to locate an error in a functional or logical program 144
How to debug quick-and-dirty 144
How to derive a hypothesis 144
How to reason about programs 144
6.11 Further Reading 144
Exercises 145
CHAPTER 7 Deducing Errors 147
7.1 Isolating Value Origins 147
7.2 Understanding Control Flow 148
7.3 Tracking Dependences 152
7.3.1 Effects of Statements 152
7.3.2 Affected Statements 153
7.3.3 Statement Dependences 154
7.3.4 Following Dependences 156
7.3.5 Leveraging Dependences 156
7.4 Slicing Programs 157
7.4.1 Forward Slices 157
7.4.2 Backward Slices 158
7.4.3 Slice Operations 158
7.4.4 Leveraging Slices 160
7.4.5 Executable Slices 160
7.5 Deducing Code Smells 161
7.5.1 Reading Uninitialized Variables 161
7.5.2 Unused Values 162
7.5.3 Unreachable Code 162
7.6 Limits of Static Analysis 166
7.7 Concepts 170
How to isolate value origins 170
How to slice a program 170
7.8 Tools 170
7.9 Further Reading 171
Exercises 171
CHAPTER 8 Observing Facts 175
8.1 Observing State 175
8.2 Logging Execution 176
8.2.1 Logging Functions 177
8.2.2 Logging Frameworks 180
8.2.3 Logging with Aspects 182
8.2.4 Logging at the Binary Level 186
8.3 Using Debuggers 188
8.3.1 A Debugging Session 189
8.3.2 Controlling Execution 192
8.3.3 Postmortem Debugging 192
8.3.4 Logging Data 193
8.3.5 Invoking Functions 194
8.3.6 Fix and Continue 194
8.3.7 Embedded Debuggers 194
8.3.8 Debugger Caveats 195
8.4 Querying Events 196
8.4.1 Watchpoints 196
8.4.2 Uniform Event Queries 197
8.5 Hooking into the Interpreter 199
8.6 Visualizing State 200
8.7 Concepts 202
How to observe state 203
How to encapsulate and reuse debugging code 203
How to observe the final state of a crashing program 203
How to automate observation 203
8.8 Tools 203
8.9 Further Reading 204
Exercises 204
CHAPTER 9 Tracking Origins 211
9.1 Reasoning Backward 211
9.2 Exploring Execution History 211
9.3 Dynamic Slicing 213
9.4 Leveraging Origins 216
9.5 Tracking Down Infections 219
9.6 Concepts 220
How to explore execution history 220
How to isolate value origins for a specific run 220
How to track down an infection 220
9.7 Tools 221
9.8 Further Reading 221
Exercises 221
CHAPTER 10 Asserting Expectations 223
10.1 Automating Observation 223
10.2 Basic Assertions 224
10.3 Asserting Invariants 226
10.4 Asserting Correctness 229
10.5 Assertions as Specifications 232
10.6 From Assertions to Verification 233
10.7 Reference Runs 235
10.8 System Assertions 238
10.8.1 Validating the Heap with MALLOC_CHECK_ 239
10.8.2 Avoiding Buffer Overflows with ELECTRICFENCE 239
10.8.3 Detecting Memory Errors with VALGRIND 240
10.8.4 Language Extensions 241
10.9 Checking Production Code 242
10.10 Concepts 244
How to automate observation 244
How to use assertions 245
How to check a program against a reference program 245
How to check memory integrity 245
How to prevent memory errors in a low-level language 245
10.11 Tools 245
10.12 Further Reading 246
Exercises 247
CHAPTER 11 Detecting Anomalies 253
11.1 Capturing Normal Behavior 253
11.2 Comparing Coverage 254
11.3 Statistical Debugging 259
11.4 Collecting Data in the Field 260
11.5 Dynamic Invariants 262
11.6 Invariants On-the-Fly 265
11.7 From Anomalies to Defects 266
11.8 Concepts 267
How to determine abnormal behavior 267
How to summarize behavior 267
How to detect anomalies 267
How to compare coverage 267
How to sample return values 267
How to collect data from the field 267
How to determine invariants 267
11.9 Tools 268
11.10 Further Reading 268
Exercises 269
CHAPTER 12 Causes and Effects 271
12.1 Causes and Alternate Worlds 271
12.2 Verifying Causes 272
12.3 Causality in Practice 273
12.4 Finding Actual Causes 275
12.5 Narrowing Down Causes 276
12.6 A Narrowing Example 277
12.7 The Common Context 277
12.8 Causes in Debugging 278
12.9 Concepts 279
How to show causality 279
How to find a cause 279
How to find an actual cause 279
12.10 Further Reading 279
Exercises 280
CHAPTER 13 Isolating Failure Causes 283
13.1 Isolating Causes Automatically 283
13.2 Isolating versus Simplifying 284
13.3 An Isolation Algorithm 286
13.4 Implementing Isolation 288
13.5 Isolating Failure-Inducing Input 290
13.6 Isolating Failure-Inducing Schedules 291
13.7 Isolating Failure-Inducing Changes 293
13.8 Problems and Limitations 299
13.9 Concepts 301
How to isolate a failure cause in the input 301
How to isolate a failure cause in the thread schedule 301
How to isolate a failure-inducing code change 301
13.10 Tools 301
13.11 Further Reading 301
Exercises 302
CHAPTER 14 Isolating Cause-Effect Chains 305
14.1 Useless Causes 305
14.2 Capturing Program States 307
14.3 Comparing Program States 311
14.4 Isolating Relevant Program States 312
14.5 Isolating Cause-Effect Chains 316
14.6 Isolating Failure-Inducing Code 320
14.7 Issues and Risks 324
14.8 Concepts 326
How to understand how a failure cause propagates through the program run 326
How to capture program states 326
How to compare program states 326
How to isolate failure-inducing program states 326
How to find the code that causes the failure 326
How to narrow down the defect along a cause-effect chain 326
14.9 Tools 326
14.10 Further Reading 327
Exercises 327
CHAPTER 15 Fixing the Defect 329
15.1 Locating the Defect 329
15.2 Focusing on the Most Likely Errors 330
15.3 Validating the Defect 332
15.3.1 Does the Error Cause the Failure? 333
15.3.2 Is the Cause Really an Error? 333
15.3.3 Think Before You Code 335
15.4 Correcting the Defect 335
15.4.1 Does the Failure No Longer Occur? 336
15.4.2 Did the Correction Introduce New Problems? 336
15.4.3 Was the Same Mistake Made Elsewhere? 337
15.4.4 Did I Do My Homework? 338
15.5 Workarounds 338
15.6 Concepts 339
How to isolate the infection chain 339
How to find the most likely origins 339
How to correct the defect 339
How to ensure your correction is successful 339
How to avoid introducing new problems 339
15.7 Further Reading 340
Exercises 340
CHAPTER 16 Learning from Mistakes 343
16.1 Where the Defects Are 343
16.2 Mining the Past 344
16.3 Where Defects Come From 346
16.4 Errors during Specification 347
16.4.1 What You Can Do 347
16.4.2 What You Should Focus On 348
16.5 Errors during Programming 349
16.5.1 What You Can Do 349
16.5.2 What You Should Focus On 350
16.6 Errors during Quality Assurance 351
16.6.1 What You Can Do 352
16.6.2 What You Should Focus On 353
16.7 Predicting Problems 353
16.7.1 Predicting Errors from Imports 354
16.7.2 Predicting Errors from Change Frequency 355
16.7.3 A Cache for Bugs 355
16.7.4 Recommendation Systems 356
16.7.5 A Word of Warning 356
16.8 Fixing the Process 357
16.9 Concepts 359
How to learn from mistakes 359
How to map defects to components 359
How to reduce the risk of errors in specification 359
How to reduce the risk of errors in the code 359
How to reduce the risk of errors in quality assurance 359
How to allocate quality-assurance resources wisely 359
16.10 Further Reading 359
Exercises 360
APPENDIX Formal Definitions 363
A.1 Delta Debugging 363
A.1.1 Configurations 363
A.1.2 Passing and Failing Run 363
A.1.3 Tests 363
A.1.4 Minimality 364
A.1.5 Simplifying 364
A.1.6 Differences 364
A.1.7 Isolating 365
A.2 Memory Graphs 365
A.2.1 Formal Structure 365
A.2.2 Unfolding Data Structures 367
A.2.3 Matching Vertices and Edges 368
A.2.4 Computing the Common Subgraph 368
A.2.5 Computing Graph Differences 369
A.2.6 Applying Partial State Changes 371
A.2.7 Capturing C State 372
A.3 Cause-Effect Chains 374
Glossary 377
Bibliography 381
Index 391
Andreas Zeller is a full professor of Software Engineering at Saarland University in Saarbruecken, Germany. His research concerns the analysis of large software systems and their development processes; his students are funded by companies such as Google, Microsoft, and SAP. In 2010, Zeller was inducted as a Fellow of the ACM for his contributions to automated debugging and mining software archives. In 2011, he received an ERC Advanced Grant, Europe's highest and most prestigious individual research grant, for work on specification mining and test case generation. His book "Why Programs Fail", the "standard reference on debugging", received the 2006 Software Development Jolt Productivity Award.