Preface  xiii
Acknowledgments  xxv
|
PART 1  QUALITY FUNCTION DEPLOYMENT  1

An Introduction to QFD: Driving Vision Vertically Through the Project  5
  QFD in Use Case-Driven Projects  8
  The "Chaos" of Projects and the Importance of Prioritization  11
  Running a QFD Workshop: Mega Motors Example  12
    Analyze Relationship of Use Cases to Business Drivers  26
    Analyze Correlations Between Use Cases  32
    First Matrix Complete; QFD Workshop Status Check  34
    "Flipping the Matrix": Deployment to Quality Requirements  35
    Flipping the Matrix: Deployment to Vehicle Components  43
    Workshop Conclusion and Summary  45

Aligning Decision Making and Synchronizing Distributed Development Horizontally in the Organization  49
  Using QFD to Align Decision Making Horizontally Across a Company  50
    A Brief Overview of Oil and Gas Exploration  50
    The Problem: Selecting a Shared Earth Modeling Development Kit  51
    Matrix 1: Prioritize Use Cases  54
    Matrix 2: Prioritize Non-Functional Requirements  56
    Matrix 3: Prioritize Earth Modeling Techniques  58
    Matrix 4: Prioritize Shared Earth Modeling Dev Kits  59
    Example Conclusion and Summary  60
  Using QFD to Synchronize Distributed Development Horizontally Across Component Teams  61
    Entropy Happens in Distributed Software Development  61
    Planning the Length of Iterations and Number of Use Cases per Iteration in Distributed Software Development  64
|
PART 2  SOFTWARE RELIABILITY ENGINEERING  75

Operational Profiles: Quantifying Frequency of Use of Use Cases  77
  Operational Profile of Use Case Scenarios  78
    Pareto Principle and Guesstimates  82
  Working Smarter: Scenarios of a Use Case  85
    Time-Boxing an Inspection  86
    Bottom-Up Estimation of Tests Needed per Scenario  87
  Operational Profile of a Use Case Package  90
    Sanity Check Before Proceeding  90
    Probability that Include/Extend Use Cases Are Actually Used  98
    Concluding Thoughts About Use Case Relationships  103
  Working Smarter: Use Case Packages  104
    Time-Boxing for a Package of Use Cases  104
    Transitioning from High-Level to Low-Level Planning  105
  Air Bags and Hawaiian Shirts  107
  Extending Operational Profiles to Address Critical Use Cases  109
    What Does "Critical" Mean?  109
    Profiling Risk in Use Cases  112
    What Have You Got to Lose?  118

Reliability and Knowing When to Stop Testing  121
  Software Reliability Is User-Centric and Dynamic  123
  Software Reliability Is Quantifiable  124
  Reliability: Software Versus Hardware  126
  Visualizing Failure Intensity with a Reliability Growth Curve  127
  Selecting a Unit of Measure for Failure Intensity  128
  Setting a Failure Intensity Objective  129
  But What's the Right Failure Intensity Objective?  131
  Establish Planned Test Coverage as per Operational Profile  141
  Initialize Dashboard Before Each Test Iteration  142
  Update the Dashboard at the End of Each Test Iteration  145
  Tracking the Swamp Through Time  152
  Determining the Effectiveness of Your SRE-Based Test Process  153
|
PART 3  MODEL-BASED SPECIFICATION (PRECONDITIONS, POSTCONDITIONS, AND INVARIANTS)  159

Use Case Preconditions, Postconditions, and Invariants: What They Didn't Tell You, But You Need to Know!  161
  Sanity Check Before Proceeding  162
  A Brief History of Preconditions and Postconditions  163
  Calculating Preconditions from Postconditions  165
    Step 1. Find a "Risky" Postcondition: Model as an Equation  166
    Step 2. Identify a Potential Failure: State an Invariant  167
    Step 3. Compute the Precondition  168
  Model-Based Specification  174
  Reasoning About State Through Time  174
    Step 1. Find "Risky" Postconditions: Model as Equations  176
    Step 2. Identify a Potential Failure: State an Invariant  176
    Step 3. Calculate Preconditions  178
  Exploring Boundary Condition Failures  180
    Step 1. Identify Postconditions Associated with Boundaries of Operation  180
    Step 2. State an Invariant the Postconditions Should Not Violate  181
    Step 3. Calculate Preconditions  181
  Further Thoughts: Preconditions, Postconditions, and Invariants in Use Cases  183
    Preconditions and Postconditions of Individual Operations Versus the Use Case as a Whole  183
    Scope of Preconditions and Postconditions: Scenario Versus Whole Use Case  184
    Postconditions Can Have More than One Precondition  185
    Weak and Strong Preconditions  185
    Types of Invariants in Use Cases  187
  Working Smart in How You Apply What You've Learned  191
    Prioritize Where You Apply Model-Based Specification  192
    Stick to Numeric Problems  193
    The Absolute Least You Need to Know: One Fundamental Lesson and Three Simple Rules  193

Triple Threat Test Design for Use Cases  197
  "Triple Threat" Test Cases?  197
    Threat #1—The Precondition  198
    Threat #2—The Postcondition  198
    Threat #3—The Invariant  198
  Applying the Extended Use Case Test Design Pattern  200
    Step 1. Identify Operational Variables  201
    Step 2. Define Domains of the Operational Variables  202
    Step 3. Develop the Operational Relation  203
|
PART 4  USE CASE CONFIGURATION MANAGEMENT  217

Calculating Your Company's ROI in Use Case Configuration Management  221
  Requirements Management Tools  223
  Conventions and Starting Assumptions  224
    Assumptions About Cost of a Fully Burdened Employee  224
    Initial Actual Data about Use Cases  225
  Cost of Tools, Training, Consulting, and Rollout Team  226
  Cost of Tool Use Overhead  226
  Cost of Added Review and Rigor  227
  Savings from Staff Working More Efficiently  229
  Savings from Avoiding the Cost of Lost Use Cases from Staff Churn  230
  Savings from Avoiding Cost of Unnecessary Development  231
  Savings from Reducing the Cost of Requirements-Related Defects  232
  Bottom Line: Benefit to Cost Ratio  236
  Dealing with Uncertainty in the Model  237

Leveraging Your Investment in Use Case CM in Project Portfolio Management  241
  What This Chapter Is (and Isn't) About  242
  The Good Thing About Use Cases...  244
  Use Case Metadata (Requirements Attributes)  246
  How Are You Currently Invested?  246
    Metadata Needed for Use Cases  250
    Assign Use Case to Project and Estimate Effort  251
  Full Time Equivalent (FTE) Models of the Project Portfolio  256
  Run Chart of FTEs Through Time  257
  Tracking the Status of the Portfolio via Use Cases  259
    Tracking the Progress of Projects with the Status of Use Cases  261
|
Appendix A  Sample Use Case  269

Appendix B  Bare-Bones Project Portfolio Database and Use Case Metadata  273
  Bare-Bones Portfolio Database  273
  Checking the Mix of Project Types  274

Appendix C  Run Chart of FTEs Required by Project Portfolio  277
  Query to Sum Use Case Effort by Project Code  277
  Query to Prepare Data for Import to Microsoft Project  280

Appendix D  Reports for Tracking Progress of Projects in Portfolio  283
  Metadata for Use Case Status  283
  Report for Tracking Status of Projects in the Portfolio by Use Case Status  284
References  287
Index  293