
Safety of Computer Architectures [Hardback]

  • Format: Hardback, 512 pages, height x width x thickness: 234x155x33 mm, weight: 885 g
  • Publication date: 16-Jul-2010
  • Publisher: ISTE Ltd and John Wiley & Sons Inc
  • ISBN-10: 184821197X
  • ISBN-13: 9781848211971
It is currently quite easy for students or designers/engineers to find very general books on the various aspects of safety, reliability and dependability of computer system architectures, and partial treatments of the elements that comprise an effective system architecture. It is not so easy to find a single source reference for all these aspects of system design. The purpose of this book is therefore to present, in a single volume, a full description of all the constraints (including legal contexts around performance, reliability norms, etc.) and examples of architectures from various fields of application, including railways, aeronautics, space, automobiles and industrial automation. The content of the book is drawn from the experience of numerous people who are deeply immersed in the design and delivery (from conception to test and validation), safety (safety analyses: FMEA, HA, etc.) and evaluation of critical systems. Real-world industrial applications are handled in such a way as to avoid problems of confidentiality, which allows for the inclusion of new, useful information (photos, architecture plans/schematics, real examples).

Reviews

"The text is clearly written, well-illustrated, and includes a helpful glossary." (Booknews, 1 February 2011)

Introduction xiii
Chapter 1 Principles 1(46)
Jean-Louis Boulanger
1.1 Introduction 1(1)
1.2 Presentation of the basic concepts: faults, errors and failures 1(6)
1.2.1 Obstruction to functional safety 1(5)
1.2.2 Safety demonstration studies 6(1)
1.2.3 Assessment 6(1)
1.3 Safe and/or available architecture 7(1)
1.4 Resetting a processing unit 7(1)
1.5 Overview of safety techniques 8(37)
1.5.1 Error detection 8(7)
1.5.2 Diversity 15(1)
1.5.3 Redundancy 16(26)
1.5.4 Error recovery and retrieval 42(2)
1.5.5 Partitioning 44(1)
1.6 Conclusion 45(1)
1.7 Bibliography 45(2)
Chapter 2 Railway Safety Architecture 47(22)
Jean-Louis Boulanger
2.1 Introduction 47(1)
2.2 Coded secure processor 47(6)
2.2.1 Basic principle 47(1)
2.2.2 Encoding 48(3)
2.2.3 Hardware architecture 51(1)
2.2.4 Assessment 52(1)
2.3 Other applications 53(7)
2.3.1 TVM 430 53(1)
2.3.2 SAET-METEOR 54(6)
2.4 Regulatory and normative context 60(6)
2.4.1 Introduction 60(3)
2.4.2 CENELEC and IEC history 63(1)
2.4.3 Commissioning evaluation, certification, and authorization 64(2)
2.5 Conclusion 66(1)
2.6 Bibliography 66(3)
Chapter 3 From the Coded Uniprocessor to 2oo3 69(36)
Gilles Legoff
Christophe Girard
3.1 Introduction 69(2)
3.2 From the uniprocessor to the dual processor with voter 71(9)
3.2.1 North LGV requirements and the Channel Tunnel 71(2)
3.2.2 The principles of the dual processor with voter by coded uniprocessor 73(1)
3.2.3 Architecture characteristics 74(2)
3.2.4 Requirements for the Mediterranean LGV 76(4)
3.3 CSD: available safety computer 80(13)
3.3.1 Background 80(2)
3.3.2 Functional architecture 82(3)
3.3.3 Software architecture 85(3)
3.3.4 Synchronization signals 88(2)
3.3.5 The CSD mail system 90(3)
3.4 DIVA evolutions 93(6)
3.4.1 ERTMS equipment requirements 93(3)
3.4.2 Functional evolution 96(1)
3.4.3 Technological evolution 97(2)
3.5 New needs and possible solutions 99(2)
3.5.1 Management of the partitions 99(1)
3.5.2 Multicycle services 100(1)
3.6 Conclusion 101(1)
3.7 Assessment of installations 102(1)
3.8 Bibliography 103(2)
Chapter 4 Designing a Computerized Interlocking Module: a Key Component of Computer-Based Signal Boxes Designed by the SNCF 105(44)
Marc Antoni
4.1 Introduction 105(2)
4.2 Issues 107(9)
4.2.1 Persistent bias 107(1)
4.2.2 Challenges for tomorrow 108(1)
4.2.3 Probability and computer safety 109(1)
4.2.4 Maintainability and modifiability 110(3)
4.2.5 Specific problems of critical systems 113(2)
4.2.6 Towards a targeted architecture for safety automatons 115(1)
4.3 Railway safety: fundamental notions 116(8)
4.3.1 Safety and availability 116(3)
4.3.2 Intrinsic safety and closed railway world 119(1)
4.3.3 Processing safety 120(1)
4.3.4 Provability of the safety of computerized equipment 121(1)
4.3.5 The signal box 122(2)
4.4 Development of the computerized interlocking module 124(21)
4.4.1 Development methodology of safety systems 125(5)
4.4.2 Technical architecture of the system 130(6)
4.4.3 MEI safety 136(6)
4.4.4 Petri net type modeling 142(3)
4.5 Conclusion 145(2)
4.6 Bibliography 147(2)
Chapter 5 Command Control of Railway Signaling Safety: Safety at Lower Cost 149(50)
Daniel Drago
5.1 Introduction 149(1)
5.2 A safety coffee machine 149(1)
5.3 History of the PIPC 150(5)
5.4 The concept basis 155(2)
5.5 Postulates for safety requirements 157(2)
5.6 Description of the PIPC architecture 159(14)
5.6.1 MCCS architecture 160(2)
5.6.2 Input and output cards 162(7)
5.6.3 Watchdog card internal to the processing unit 169(1)
5.6.4 Head of bus input/output card 170(1)
5.6.5 Field watchdog 171(2)
5.7 Description of availability principles 173(3)
5.7.1 Redundancy 173(2)
5.7.2 Automatic reset 175(1)
5.8 Software architecture 176(10)
5.8.1 Constitution of the Kernel 176(1)
5.8.2 The language and the compilers 177(1)
5.8.3 The operating system (OS) 178(1)
5.8.4 The integrity of execution and of data 179(1)
5.8.5 Segregation of resources of different safety level processes 180(1)
5.8.6 Execution cycle and vote and synchronization mechanism 181(5)
5.9 Protection against causes of common failure 186(2)
5.9.1 Technological dissimilarities of computers 186(1)
5.9.2 Time lag during process execution 187(1)
5.9.3 Diversification of the compilers and the executables 187(1)
5.9.4 Antivalent acquisitions and outputs 187(1)
5.9.5 Galvanic isolation 188(1)
5.10 Probabilistic modeling 188(6)
5.10.1 Objective and hypothesis 188(1)
5.10.2 Global model 189(2)
5.10.3 "Simplistic" quantitative evaluation 191(3)
5.11 Summary of safety concepts 194(3)
5.11.1 Concept 1: 2oo2 architecture 194(1)
5.11.2 Concept 2: protection against common modes 195(1)
5.11.3 Concept 3: self-tests 196(1)
5.11.4 Concept 4: watchdog 196(1)
5.11.5 Concept 5: protection of safety-related data 197(1)
5.12 Conclusion 197(1)
5.13 Bibliography 198(1)
Chapter 6 Dependable Avionics Architectures: Example of a Fly-by-Wire System 199(34)
Pascal Traverse
Christine Bezard
Jean-Michel Camus
Isabelle Lacaze
Herve Leberre
Patrick Ringeard
Jean Souyris
6.1 Introduction 199(6)
6.1.1 Background and statutory obligation 200(2)
6.1.2 History 202(2)
6.1.3 Fly-by-wire principles 204(1)
6.1.4 Failures and dependability 204(1)
6.2 System breakdowns due to physical failures 205(10)
6.2.1 Command and monitoring computers 205(3)
6.2.2 Component redundancy 208(4)
6.2.3 Alternatives 212(3)
6.3 Manufacturing and design errors 215(8)
6.3.1 Error prevention 215(6)
6.3.2 Error tolerance 221(2)
6.4 Specific risks 223(2)
6.4.1 Segregation 223(1)
6.4.2 Ultimate back-up 224(1)
6.4.3 Alternatives 224(1)
6.5 Human factors in the development of flight controls 225(4)
6.5.1 Human factors in design 225(2)
6.5.2 Human factors in certification 227(1)
6.5.3 Challenges and trends 228(1)
6.5.4 Alternatives 229(1)
6.6 Conclusion 229(1)
6.7 Bibliography 229(4)
Chapter 7 Space Applications 233(74)
Jean-Paul Blanquart
Philippe Miramont
7.1 Introduction 233(1)
7.2 Space system 233(4)
7.2.1 Ground segment 234(1)
7.2.2 Space segment 234(3)
7.3 Context and statutory obligation 237(6)
7.3.1 Structure and purpose of the regulatory framework 237(1)
7.3.2 Protection of space 238(1)
7.3.3 Protection of people, assets, and the environment 239(2)
7.3.4 Protection of the space system and the mission 241(1)
7.3.5 Summary of the regulatory context 241(2)
7.4 Specific needs 243(9)
7.4.1 Reliability 243(3)
7.4.2 Availability 246(2)
7.4.3 Maintainability 248(1)
7.4.4 Safety 249(2)
7.4.5 Summary 251(1)
7.5 Launchers: the Ariane 5 example 252(29)
7.5.1 Introduction 252(1)
7.5.2 Constraints 253(2)
7.5.3 Object of the avionics launcher 255(1)
7.5.4 Choice of onboard architecture 256(1)
7.5.5 General description of avionics architecture 257(3)
7.5.6 Flight program 260(21)
7.5.7 Conclusion 281(1)
7.6 Satellite architecture 281(11)
7.6.1 Overview 281(1)
7.6.2 Payload 282(1)
7.6.3 Platform 283(5)
7.6.4 Implementation 288(4)
7.6.5 Exploration probes 292(1)
7.7 Orbital transport: ATV example 292(10)
7.7.1 General information 292(3)
7.7.2 Dependability requirements 295(1)
7.7.3 ATV avionic architecture 296(3)
7.7.4 Management of ATV dependability 299(3)
7.8 Summary and conclusions 302(2)
7.8.1 Reliability, availability, and continuity of service 302(2)
7.8.2 Safety 304(1)
7.9 Bibliography 304(3)
Chapter 8 Methods and Calculations Relative to "Safety Instrumented Systems" at Total 307(38)
Yassine Chaabi
Jean-Pierre Signoret
8.1 Introduction 307(1)
8.2 Specific problems to be taken into account 308(14)
8.2.1 Link between classic parameters and standards' parameters 308(1)
8.2.2 Problems linked to sawtooth waves 309(1)
8.2.3 Definition 310(2)
8.2.4 Reliability data 312(1)
8.2.5 Common mode failure and systemic incidences 313(2)
8.2.6 Other parameters of interest 315(1)
8.2.7 Analysis of tests and maintenance 315(6)
8.2.8 General approach 321(1)
8.3 Example 1: 2/3 system modeled by fault trees 322(6)
8.3.1 Modeling without CCF 322(1)
8.3.2 Introduction of the CCF by factor β 323(1)
8.3.3 Influence of test staggering 324(2)
8.3.4 Elements for the calculation of PFH 326(2)
8.4 Example 2: 2/3 system modeled by the stochastic Petri net 328(5)
8.5 Other considerations regarding HIPS 333(9)
8.5.1 SIL objectives 333(1)
8.5.2 HIPS on topside facilities 334(6)
8.5.3 Subsea HIPS 340(2)
8.6 Conclusion 342(1)
8.7 Bibliography 343(2)
Chapter 9 Securing Automobile Architectures 345(34)
David Liaigre
9.1 Context 345(2)
9.2 More environmentally-friendly vehicles involving more embedded electronics 347(1)
9.3 Mastering the complexity of electronic systems 348(2)
9.4 Security concepts in the automotive field 350(14)
9.4.1 Ensure minimum security without redundancy 350(3)
9.4.2 Hardware redundancy to increase coverage of dangerous failures 353(5)
9.4.3 Hardware and functional redundancy 358(6)
9.5 Which security concepts for which security levels of the ISO 26262 standard? 364(12)
9.5.1 The constraints of the ISO 26262 standard 364(9)
9.5.2 The security concepts adapted to the constraints of the ISO 26262 standard 373(3)
9.6 Conclusion 376(1)
9.7 Bibliography 377(2)
Chapter 10 SIS in Industry 379(46)
Gregory Buchheit
Olaf Malasse
10.1 Introduction 379(5)
10.2 Safety loop structure 384(23)
10.2.1 "Sensor" sub-system 386(2)
10.2.2 "Actuator" sub-system 388(1)
10.2.3 Information processing 388(13)
10.2.4 Field networks 401(6)
10.3 Constraints and requirements of the application 407(6)
10.3.1 Programming 407(2)
10.3.2 Program structure 409(2)
10.3.3 The distributed safety library 411(1)
10.3.4 Communication between the standard user program and the safety program 412(1)
10.3.5 Generation of the safety program 413(1)
10.4 Analysis of a safety loop 413(10)
10.4.1 Calculation elements 414(2)
10.4.2 PFH calculation for the "detecting" sub-system 416(1)
10.4.3 PFH calculation for the "actuator" sub-system 417(1)
10.4.4 PFH calculation for the "logic processing" sub-system 418(1)
10.4.5 PFH calculation for a complete loop 419(4)
10.5 Conclusion 423(1)
10.6 Bibliography 424(1)
Chapter 11 A High-Availability Safety Computer 425(22)
Sylvain Baro
11.1 Introduction 425(1)
11.2 Safety computer 426(7)
11.2.1 Architecture 427(5)
11.2.2 Properties 432(1)
11.3 Applicative redundancy 433(1)
11.4 Integrated redundancy 433(10)
11.4.1 Goals 434(1)
11.4.2 Overview of operation 435(1)
11.4.3 Assumptions 435(1)
11.4.4 Hardware architecture 436(1)
11.4.5 Synchronization and time stamping 437(1)
11.4.6 Safety inputs 437(1)
11.4.7 Application and context 438(2)
11.4.8 Safety output 440(3)
11.4.9 Passivation 443(1)
11.5 Conclusion 443(3)
11.6 Bibliography 446(1)
Chapter 12 Safety System for the Protection of Personnel in the CERN Large Hadron Collider 447(30)
Pierre Ninin
Silvia Grau
Tomasz Ladzinski
Francesco Valentini
12.1 Introduction 447(3)
12.1.1 Introduction to CERN 447(1)
12.1.2 Legislative and regulatory context 448(1)
12.1.3 Goal of the system 449(1)
12.2 LACS 450(2)
12.3 LASS 452(7)
12.3.1 IIS Beams 452(2)
12.3.2 LASS architecture 454(2)
12.3.3 Performance of safety functions 456(1)
12.3.4 Wired Loop 457(2)
12.4 Functional safety methodology 459(7)
12.4.1 Functional safety plan 459(1)
12.4.2 Preliminary risk analysis (PRA) 459(1)
12.4.3 Specification of safety functions 460(1)
12.4.4 Provisional safety study 461(4)
12.4.5 Definitive safety study 465(1)
12.4.6 Verification and validation plan 465(1)
12.4.7 Operation and maintenance plan 465(1)
12.5 Test strategy 466(6)
12.5.1 Detailed description of the validation process 466(1)
12.5.2 Architecture of the test and the simulation platform 466(1)
12.5.3 Organization of tests 467(1)
12.5.4 Test platforms 468(1)
12.5.5 Unit validation on site 469(2)
12.5.6 Functional validation on site 471(1)
12.6 Feedback 472(1)
12.7 Conclusions 473(1)
12.8 Bibliography 474(3)
Glossary 477(8)
List of Authors 485(2)
Index 487
Jean-Louis Boulanger is an Independent Safety Assessor (ISA) for software in the railway industry. After 15 years at the RATP (the authority that manages the Paris subway) and 6 years as a researcher and teacher at the University of Technology of Compiègne in France, he currently works as an expert for the French notified body CERTIFER in the certification of safety-critical, software-based railway applications (ERTMS, SCADA, automatic subways, etc.). His research interests include requirements, software verification and validation, traceability, and RAMS, with a special focus on safety.