Preface  xiii

Section 1: Background

    Selecting the Techniques to Describe  7

    General Safety Terminology  13
    Software-Specific Terminology  20

3 Safety Standards and Certification  27
    Accreditation and Certification  29
    Why Do We Need These Standards?  31
    Goal- and Prescription-Based Standards  32
    Functional Safety Standards  33
    Machine Learning and SOTIF  45
    Process and the Standards  49

4 Representative Companies  53
    Beta Component Incorporated  54
    Using a Certified Component  54

Section 2: The Project

    Analyses by Example Companies  80

6 Certified and Uncertified Components  85
    Certified or Uncertified SOUP  86
    Using Non-Certified Components  87
    Using a Certified Component  92

Section 3: Design Patterns

7 Architectural Balancing  97
    Availability/Reliability Balance  98
    Usefulness/Safety Balance  99
    Security/Performance/Safety Balance  101
    Performance/Reliability Balance  103

8 Error Detection and Handling  105
    Error Detection and the Standards  106
    A Note on the Diverse Monitor  128

9 Expecting the Unexpected  131
    Anticipation of the Unexpected by the Example Companies  136

10 Replication and Diversification  139
    History of Replication and Diversification  140
    Replication in the Standards  140
    Component or System Replication?  140

Section 4: Design Validation

    Markov Models and the Standards  164
    The Markovian Assumptions  164
    Markovian Advantages and Disadvantages  170

    Fault Tree Analysis in the Standards  174
    Example 1: Boolean Fault Tree  175
    Example 2: Extended Boolean Fault Tree  177
    Example 3: Bayesian Fault Tree  178

13 Software Failure Rates  187
    Compiler and Hardware Effects  189

14 Semi-Formal Design Verification  199
    Verification of a Reconstructed Design  200
    Discrete Event Simulation  202
    Simulation and the Example Companies  221

15 Formal Design Verification  223
    History of Formal Methods  224
    Formal Methods and the Standards  225
    Automatic Code and Test Generation  230
    Formal Modeling by the Example Companies  243

Section 5: Coding

    Programming Language Selection  251
    Programming Languages and the Standards  252
    So, What Is the Best Programming Language?  259
    Programming with Floating Point  259

    Coverage and the Standards  268
    Effectiveness of Coverage Testing  268

    What Static Analysis Is Asked to Do  273
    Static Code Analysis and the Standards  275

Section 6: Verification

    Back-to-Back Comparison Test Between Model and Code  295
    Requirements-Based Testing  302
    Anomaly Detection During Integration Testing  306

    Validation of the Tool Chain  309
    BCI's Tools Classification  311
    ADC's and BCI's Compiler Verification  318

Section 7: Appendices

A Goal Structuring Notation  327
    Eliminative Argumentation  330

B Bayesian Belief Networks  333
    Frequentists and Bayesians  333
    What Do the Arrows Mean in a BBN?  337
    BBNs in Safety Case Arguments  338
    BBN or GSN for a Safety Case?  342

    Components in Parallel and Series  351

Index  357