Preface xvii
List of Figures xxi
List of Tables xxvii

CHAPTER 1 BASIC CONCEPTS AND PRELIMINARIES 1
1.1 Quality Revolution 1
1.2 Software Quality 5
1.3 Role of Testing 7
1.4 Verification and Validation 7
1.5 Failure, Error, Fault, and Defect 9
1.6 Notion of Software Reliability 10
1.7 Objectives of Testing 10
1.8 What Is a Test Case? 11
1.9 Expected Outcome 12
1.10 Concept of Complete Testing 13
1.11 Central Issue in Testing 13
1.12 Testing Activities 14
1.13 Test Levels 16
1.14 Sources of Information for Test Case Selection 18
1.15 White-Box and Black-Box Testing 20
1.16 Test Planning and Design 21
1.17 Monitoring and Measuring Test Execution 22
1.18 Test Tools and Automation 24
1.19 Test Team Organization and Management 26
1.20 Outline of Book 27
References 28
Exercises 30

CHAPTER 2 THEORY OF PROGRAM TESTING 31
2.1 Basic Concepts in Testing Theory 31
2.2 Theory of Goodenough and Gerhart 32
2.2.1 Fundamental Concepts 32
2.2.2 Theory of Testing 34
2.2.3 Program Errors 34
2.2.4 Conditions for Reliability 36
2.2.5 Drawbacks of Theory 37
2.3 Theory of Weyuker and Ostrand 37
2.4 Theory of Gourlay 39
2.4.1 Few Definitions 40
2.4.2 Power of Test Methods 42
2.5 Adequacy of Testing 42
2.6 Limitations of Testing 45
2.7 Summary 46
Literature Review 47
References 48
Exercises 49

CHAPTER 3 UNIT TESTING 51
3.1 Concept of Unit Testing 51
3.2 Static Unit Testing 53
3.3 Defect Prevention 60
3.4 Dynamic Unit Testing 62
3.5 Mutation Testing 65
3.6 Debugging 68
3.7 Unit Testing in eXtreme Programming 71
3.8 JUnit: Framework for Unit Testing 73
3.9 Tools for Unit Testing 76
3.10 Summary 81
Literature Review 82
References 84
Exercises 86

CHAPTER 4 CONTROL FLOW TESTING 88
4.1 Basic Idea 88
4.2 Outline of Control Flow Testing 89
4.3 Control Flow Graph 90
4.4 Paths in a Control Flow Graph 93
4.5 Path Selection Criteria 94
4.5.1 All-Path Coverage Criterion 96
4.5.2 Statement Coverage Criterion 97
4.5.3 Branch Coverage Criterion 98
4.5.4 Predicate Coverage Criterion 100
4.6 Generating Test Input 101
4.7 Examples of Test Data Selection 106
4.8 Containing Infeasible Paths 107
4.9 Summary 108
Literature Review 109
References 110
Exercises 111

CHAPTER 5 DATA FLOW TESTING 112
5.1 General Idea 112
5.2 Data Flow Anomaly 113
5.3 Overview of Dynamic Data Flow Testing 115
5.4 Data Flow Graph 116
5.5 Data Flow Terms 119
5.6 Data Flow Testing Criteria 121
5.7 Comparison of Data Flow Test Selection Criteria 124
5.8 Feasible Paths and Test Selection Criteria 125
5.9 Comparison of Testing Techniques 126
5.10 Summary 128
Literature Review 129
References 131
Exercises 132

CHAPTER 6 DOMAIN TESTING 135
6.1 Domain Error 135
6.2 Testing for Domain Errors 137
6.3 Sources of Domains 138
6.4 Types of Domain Errors 141
6.5 ON and OFF Points 144
6.6 Test Selection Criterion 146
6.7 Summary 154
Literature Review 155
References 156
Exercises 156

CHAPTER 7 SYSTEM INTEGRATION TESTING 158
7.1 Concept of Integration Testing 158
7.2 Different Types of Interfaces and Interface Errors 159
7.3 Granularity of System Integration Testing 163
7.4 System Integration Techniques 164
7.4.1 Incremental 164
7.4.2 Top Down 167
7.4.3 Bottom Up 171
7.4.4 Sandwich and Big Bang 173
7.5 Software and Hardware Integration 174
7.5.1 Hardware Design Verification Tests 174
7.5.2 Hardware and Software Compatibility Matrix 177
7.6 Test Plan for System Integration 180
7.7 Off-the-Shelf Component Integration 184
7.7.1 Off-the-Shelf Component Testing 185
7.7.2 Built-in Testing 186
7.8 Summary 187
Literature Review 188
References 189
Exercises 190

CHAPTER 8 SYSTEM TEST CATEGORIES 192
8.1 Taxonomy of System Tests 192
8.2 Basic Tests 194
8.2.1 Boot Tests 194
8.2.2 Upgrade/Downgrade Tests 195
8.2.3 Light Emitting Diode Tests 195
8.2.4 Diagnostic Tests 195
8.2.5 Command Line Interface Tests 196
8.3 Functionality Tests 196
8.3.1 Communication Systems Tests 196
8.3.2 Module Tests 197
8.3.3 Logging and Tracing Tests 198
8.3.4 Element Management Systems Tests 198
8.3.5 Management Information Base Tests 202
8.3.6 Graphical User Interface Tests 202
8.3.7 Security Tests 203
8.3.8 Feature Tests 204
8.4 Robustness Tests 204
8.4.1 Boundary Value Tests 205
8.4.2 Power Cycling Tests 206
8.4.3 On-Line Insertion and Removal Tests 206
8.4.4 High-Availability Tests 206
8.4.5 Degraded Node Tests 207
8.5 Interoperability Tests 208
8.6 Performance Tests 209
8.7 Scalability Tests 210
8.8 Stress Tests 211
8.9 Load and Stability Tests 213
8.10 Reliability Tests 214
8.11 Regression Tests 214
8.12 Documentation Tests 215
8.13 Regulatory Tests 216
8.14 Summary 218
Literature Review 219
References 220
Exercises 221

CHAPTER 9 FUNCTIONAL TESTING 222
9.1 Functional Testing Concepts of Howden 222
9.1.1 Different Types of Variables 224
9.1.2 Test Vector 230
9.1.3 Testing a Function in Context 231
9.2 Complexity of Applying Functional Testing 232
9.3 Pairwise Testing 235
9.3.1 Orthogonal Array 236
9.3.2 In Parameter Order 240
9.4 Equivalence Class Partitioning 244
9.5 Boundary Value Analysis 246
9.6 Decision Tables 248
9.7 Random Testing 252
9.8 Error Guessing 255
9.9 Category Partition 256
9.10 Summary 258
Literature Review 260
References 261
Exercises 262

CHAPTER 10 TEST GENERATION FROM FSM MODELS 265
10.1 State-Oriented Model 265
10.2 Points of Control and Observation 269
10.3 Finite-State Machine 270
10.4 Test Generation from an FSM 273
10.5 Transition Tour Method 273
10.6 Testing with State Verification 277
10.7 Unique Input–Output Sequence 279
10.8 Distinguishing Sequence 284
10.9 Characterizing Sequence 287
10.10 Test Architectures 291
10.10.1 Local Architecture 292
10.10.2 Distributed Architecture 293
10.10.3 Coordinated Architecture 294
10.10.4 Remote Architecture 295
10.11 Testing and Test Control Notation Version 3 (TTCN-3) 295
10.11.1 Module 296
10.11.2 Data Declarations 296
10.11.3 Ports and Components 298
10.11.4 Test Case Verdicts 299
10.11.5 Test Case 300
10.12 Extended FSMs 302
10.13 Test Generation from EFSM Models 307
10.14 Additional Coverage Criteria for System Testing 313
10.15 Summary 315
Literature Review 316
References 317
Exercises 318

CHAPTER 11 SYSTEM TEST DESIGN 321
11.1 Test Design Factors 321
11.2 Requirement Identification 322
11.3 Characteristics of Testable Requirements 331
11.4 Test Objective Identification 334
11.5 Example 335
11.6 Modeling a Test Design Process 345
11.7 Modeling Test Results 347
11.8 Test Design Preparedness Metrics 349
11.9 Test Case Design Effectiveness 350
11.10 Summary 351
Literature Review 351
References 353
Exercises 353

CHAPTER 12 SYSTEM TEST PLANNING AND AUTOMATION 355
12.1 Structure of a System Test Plan 355
12.2 Introduction and Feature Description 356
12.3 Assumptions 357
12.4 Test Approach 357
12.5 Test Suite Structure 358
12.6 Test Environment 358
12.7 Test Execution Strategy 361
12.7.1 Multicycle System Test Strategy 362
12.7.2 Characterization of Test Cycles 362
12.7.3 Preparing for First Test Cycle 366
12.7.4 Selecting Test Cases for Final Test Cycle 369
12.7.5 Prioritization of Test Cases 371
12.7.6 Details of Three Test Cycles 372
12.8 Test Effort Estimation 377
12.8.1 Number of Test Cases 378
12.8.2 Test Case Creation Effort 384
12.8.3 Test Case Execution Effort 385
12.9 Scheduling and Test Milestones 387
12.10 System Test Automation 391
12.11 Evaluation and Selection of Test Automation Tools 392
12.12 Test Selection Guidelines for Automation 395
12.13 Characteristics of Automated Test Cases 397
12.14 Structure of an Automated Test Case 399
12.15 Test Automation Infrastructure 400
12.16 Summary 402
Literature Review 403
References 405
Exercises 406

CHAPTER 13 SYSTEM TEST EXECUTION 408
13.1 Basic Ideas 408
13.2 Modeling Defects 409
13.3 Preparedness to Start System Testing 415
13.4 Metrics for Tracking System Test 419
13.4.1 Metrics for Monitoring Test Execution 420
13.4.2 Test Execution Metric Examples 420
13.4.3 Metrics for Monitoring Defect Reports 423
13.4.4 Defect Report Metric Examples 425
13.5 Orthogonal Defect Classification 428
13.6 Defect Causal Analysis 431
13.7 Beta Testing 435
13.8 First Customer Shipment 437
13.9 System Test Report 438
13.10 Product Sustaining 439
13.11 Measuring Test Effectiveness 441
13.12 Summary 445
Literature Review 446
References 447
Exercises 448

CHAPTER 14 ACCEPTANCE TESTING 450
14.1 Types of Acceptance Testing 450
14.2 Acceptance Criteria 451
14.3 Selection of Acceptance Criteria 461
14.4 Acceptance Test Plan 461
14.5 Acceptance Test Execution 463
14.6 Acceptance Test Report 464
14.7 Acceptance Testing in eXtreme Programming 466
14.8 Summary 467
Literature Review 468
References 468
Exercises 469

CHAPTER 15 SOFTWARE RELIABILITY 471
15.1 What Is Reliability? 471
15.1.1 Fault and Failure 472
15.1.2 Time 473
15.1.3 Time Interval between Failures 474
15.1.4 Counting Failures in Periodic Intervals 475
15.1.5 Failure Intensity 476
15.2 Definitions of Software Reliability 477
15.2.1 First Definition of Software Reliability 477
15.2.2 Second Definition of Software Reliability 478
15.2.3 Comparing the Definitions of Software Reliability 479
15.3 Factors Influencing Software Reliability 479
15.4 Applications of Software Reliability 481
15.4.1 Comparison of Software Engineering Technologies 481
15.4.2 Measuring the Progress of System Testing 481
15.4.3 Controlling the System in Operation 482
15.4.4 Better Insight into Software Development Process 482
15.5 Operational Profiles 482
15.5.1 Operation 483
15.5.2 Representation of Operational Profile 483
15.6 Reliability Models 486
15.7 Summary 491
Literature Review 492
References 494
Exercises 494

CHAPTER 16 TEST TEAM ORGANIZATION 496
16.1 Test Groups 496
16.1.1 Integration Test Group 496
16.1.2 System Test Group 497
16.2 Software Quality Assurance Group 499
16.3 System Test Team Hierarchy 500
16.4 Effective Staffing of Test Engineers 501
16.5 Recruiting Test Engineers 504
16.5.1 Job Requisition 504
16.5.2 Job Profiling 505
16.5.3 Screening Resumes 505
16.5.4 Coordinating an Interview Team 506
16.5.5 Interviewing 507
16.5.6 Making a Decision 511
16.6 Retaining Test Engineers 511
16.6.1 Career Path 511
16.6.2 Training 512
16.6.3 Reward System 513
16.7 Team Building 513
16.7.1 Expectations 513
16.7.2 Consistency 514
16.7.3 Information Sharing 514
16.7.4 Standardization 514
16.7.5 Test Environments 514
16.7.6 Recognitions 515
16.8 Summary 515
Literature Review 516
References 516
Exercises 517

CHAPTER 17 SOFTWARE QUALITY 519
17.1 Five Views of Software Quality 519
17.2 McCall's Quality Factors and Criteria 523
17.2.1 Quality Factors 523
17.2.2 Quality Criteria 527
17.2.3 Relationship between Quality Factors and Criteria 527
17.2.4 Quality Metrics 530
17.3 ISO 9126 Quality Characteristics 530
17.4 ISO 9000:2000 Software Quality Standard 534
17.4.1 ISO 9000:2000 Fundamentals 535
17.4.2 ISO 9001:2000 Requirements 537
17.5 Summary 542
Literature Review 544
References 544
Exercises 545

CHAPTER 18 MATURITY MODELS 546
18.1 Basic Idea in Software Process 546
18.2 Capability Maturity Model 548
18.2.1 CMM Architecture 549
18.2.2 Five Levels of Maturity and Key Process Areas 550
18.2.3 Common Features of Key Practices 553
18.2.4 Application of CMM 553
18.2.5 Capability Maturity Model Integration (CMMI) 554
18.3 Test Process Improvement 555
18.4 Testing Maturity Model 568
18.5 Summary 578
Literature Review 578
References 579
Exercises 579

GLOSSARY 581
INDEX 600