Preface | xiii
Acknowledgments | xvii
About the Author | xix

Section I: Background

1 Introduction | 3 | (10)
Dependable, Embedded Software | 3 | (1)
Choosing the Techniques to Describe | 7 | (1)
|
2 Terminology of Safety | 13 | (14)
General Safety Terminology | 13 | (7)
Software-Specific Terminology | 20 | (4)
|
3 Safety Standards and Certification | 27 | (20)
Accreditation and Certification | 29 | (2)
Why Do We Need These Standards? | 31 | (1)
Goal- and Prescription-Based Standards | 32 | (1)
Functional Safety Standards | 33 | (8)
Process and the Standards | 43 | (1)
|
4 Representative Companies | 47 | (6)
Alpha Device Corporation | 47 | (1)
Beta Component Incorporated | 48 | (1)
Using a Certified Component | 48 | (5)

Section II: The Project

5 Foundational Analyses | 53 | (24)
Analyses by Example Companies | 72 | (2)
|
6 Certified and Uncertified Components | 77 | (12)
Certified or Uncertified SOUP | 78 | (1)
Using Non-Certified Components | 79 | (4)
Using a Certified Component | 83 | (2)

Section III: Design Patterns

7 Architectural Balancing | 89 | (8)
Availability/Reliability Balance | 90 | (1)
Usefulness/Safety Balance | 91 | (1)
Security/Performance/Safety Balance | 92 | (2)
Performance/Reliability Balance | 94 | (1)
|
8 Error Detection and Handling | 97 | (26)
Error Detection and the Standards | 98 | (1)
A Note on the Diverse Monitor | 120 | (1)
|
9 Expecting the Unexpected | 123 | (8)
Anticipation of the Unexpected by the Example Companies | 128 | (1)
|
10 Replication and Diversification | 131 | (24)
History of Replication and Diversification | 131 | (1)
Replication in the Standards | 132 | (1)
Component or System Replication? | 132 | (1)

Section IV: Design Validation

11 Markov Models | 155 | (10)
Markov Models and the Standards | 156 | (1)
The Markovian Assumptions | 156 | (2)
Markovian Advantages and Disadvantages | 162 | (1)
|
12 Fault Tree Analysis | 165 | (14)
Fault Tree Analysis in the Standards | 166 | (1)
Example 1: Boolean Fault Tree | 167 | (2)
Example 2: Extended Boolean Fault Tree | 169 | (2)
Example 3: Bayesian Fault Tree | 171 | (5)
|
13 Software Failure Rates | 179 | (8)
Compiler and Hardware Effects | 181 | (1)
|
14 Semi-Formal Design Verification | 187 | (24)
Verification of a Reconstructed Design | 188 | (2)
Discrete Event Simulation | 190 | (9)
Simulation and the Example Companies | 207 | (1)
|
15 Formal Design Verification | 211 | (28)
History of Formal Methods | 212 | (1)
Formal Methods and the Standards | 213 | (3)
Automatic Code Generation | 218 | (1)
Formal Modeling by the Example Companies | 230 | (1)

Section V: Coding

16 Coding Guidelines | 239 | (8)
Programming Language Selection | 239 | (1)
Programming Languages and the Standards | 240 | (1)
So, What Is the Best Programming Language? | 246 | (1)

17 Code Coverage Metrics | 247 | (16)
Coverage and the Standards | 254 | (1)
Effectiveness of Coverage Testing | 255 | (1)

18 Static Analysis | 263 | (16)
What Static Analysis Is Asked to Do | 263 | (2)
Static Code Analysis and the Standards | 265 | (1)

Section VI: Verification

19 Integration Testing | 279 | (14)
Back-to-Back Comparison Test between Model and Code | 284 | (4)
Requirements-Based Testing | 288 | (3)
Anomaly Detection During Integration Testing | 291 | (1)

20 The Tool Chain | 293 | (14)
Validation of the Tool Chain | 293 | (1)
BCI's Tools Classification | 295 | (1)
ADC's and BCI's Compiler Verification | 302 | (3)

Section VII: Appendices

A Goal Structuring Notation | 311 | (4)
|
B Bayesian Belief Networks | 315 | (12)
Frequentists and Bayesians | 315 | (1)
What Do the Arrows Mean in a BBN? | 319 | (1)
BBNs in Safety Case Arguments | 320 | (4)
BBN or GSN for a Safety Case? | 324 | (2)

C Calculating (2oo3) Failure Rates | 327 | (8)
Components in Parallel and Series | 329 | (1)

Index | 335