
Introduction to Concurrency in Programming Languages [Hardcover]

Matthew J. Sottile (University of Oregon, Eugene, USA), Craig E. Rasmussen (Los Alamos National Laboratory, New Mexico, USA), Timothy G. Mattson (Intel Corporation, Dupont, Washington, USA)
  • Format: Hardback, 344 pages, height x width: 234x156 mm, weight: 612 g
  • Publication date: 28-Sep-2009
  • Publisher: Chapman & Hall/CRC
  • ISBN-10: 1420072137
  • ISBN-13: 9781420072136
Exploring how concurrent programming can be assisted by language-level techniques, Introduction to Concurrency in Programming Languages presents high-level language techniques for dealing with concurrency in a general context. It provides an understanding of programming languages that offer concurrency features as part of the language definition.

The book supplies a conceptual framework for different aspects of parallel algorithm design and implementation. It first addresses the limitations of traditional programming techniques and models when dealing with concurrency. The book then explores the current state of the art in concurrent programming and describes high-level language constructs for concurrency. It also discusses the historical evolution of hardware, corresponding high-level techniques that were developed, and the connection to modern systems, such as multicore and manycore processors. The remainder of the text focuses on common high-level programming techniques and their application to a range of algorithms. The authors offer case studies on genetic algorithms, fractal generation, cellular automata, game logic for solving Sudoku puzzles, pipelined algorithms, and more.

Illustrating the effect of concurrency on programs written in familiar languages, this text focuses on novel language abstractions that truly bring concurrency into the language and aid analysis and compilation tools in generating efficient, correct programs. It also explains the complexity involved in taking advantage of concurrency with regard to program correctness and performance.

Reviews

"...a clear focus in this book is on keeping the material accessible. The authors succeed at this brilliantly. ...if you are just jumping into the world of concurrent programming, or taking a more theoretical look at the approaches we've all been taking for granted for the past 20 years in an attempt to make things better, then this book is a great start. The authors present a clear motivation for the relevance of continuing this work, and provide both the historical context and knowledge of present-day practice that you'll need to get off on the right foot. That they manage to do this while keeping the language clear and the text accessible is a tribute to the effort Sottile, Mattson, and Rasmussen put into the creation of the text." (insideHPC.com, October 2010)

"Sottile, Mattson, and Rasmussen have successfully managed to provide a nice survey of the current state of the art of parallel algorithm design and implementation in this well-written 300-page textbook, suitable for undergraduate computer science students. ...this concise yet thorough book provides an outstanding introduction to the important field of concurrent programming and the techniques currently employed to design parallel algorithms. It is clearly written, well organized, and cuts to the point. ...It is an informative read that I highly recommend to those interested in the design and implementation of parallel algorithms." (Fernando Berzal, Computing Reviews, May 2010)

Table of Contents

Introduction 1(16)
Motivation 3(3)
Navigating the concurrency sea 3(3)
Where does concurrency appear? 6(3)
Why is concurrency considered hard? 9(2)
Real-world concurrency 9(2)
Timeliness 11(1)
Approach 12(2)
Intended audience 13(1)
Acknowledgments 14(1)
Exercises 14(3)
Concepts in Concurrency 17(26)
Terminology 19(10)
Units of execution 19(4)
Parallelism versus concurrency 23(2)
Dependencies and parallelism 25(3)
Shared versus distributed memory 28(1)
Concepts 29(11)
Atomicity 30(4)
Mutual exclusion and critical sections 34(2)
Coherence and consistency 36(2)
Thread safety 38(2)
Exercises 40(3)
Concurrency Control 43(22)
Correctness 44(8)
Race conditions 44(2)
Deadlock 46(3)
Liveness, starvation and fairness 49(2)
Nondeterminism 51(1)
Techniques 52(10)
Synchronization 52(2)
Locks 54(2)
Semaphores 56(1)
Monitors 57(3)
Transactions 60(2)
Exercises 62(3)
The State of the Art 65(20)
Limitations of libraries 66(3)
Explicit techniques 69(7)
Message passing 69(6)
Explicitly controlled threads 75(1)
Higher-level techniques 76(4)
Transactional memory 77(1)
Event-driven programs 78(1)
The Actor model 79(1)
The limits of explicit control 80(2)
Pointers and aliasing 81(1)
Concluding remarks 82(1)
Exercises 83(2)
High-Level Language Constructs 85(24)
Common high-level constructs 88(6)
Expressions 89(2)
Control flow primitives 91(1)
Abstract types and data structures 92(2)
Using and evaluating language constructs 94(8)
Cognitive dimensions 98(3)
Working with the cognitive dimensions 101(1)
Implications of concurrency 102(2)
Sequential constructs and concurrency 103(1)
Interpreted languages 104(2)
Exercises 106(3)
Historical Context and Evolution of Languages 109(40)
Evolution of machines 111(9)
Multiprogramming and interrupt driven I/O 111(1)
Cache-based memory hierarchies 112(1)
Pipelining and vector processing 113(1)
Dataflow 114(1)
Massively parallel computers 115(2)
Clusters and distributed memory systems 117(1)
Integration 118(1)
Flynn's taxonomy 118(2)
Evolution of programming languages 120(25)
In the beginning, there was FORTRAN 120(2)
The ALGOL family 122(3)
Coroutines 125(1)
CSP and process algebras 125(3)
Concurrency in Ada 128(3)
Declarative and functional languages 131(7)
Parallel languages 138(6)
Modern languages 144(1)
Limits to automatic parallelization 145(2)
Exercises 147(2)
Modern Languages and Concurrency Constructs 149(26)
Array abstractions 150(8)
Array notation 152(3)
Shifts 155(2)
Index sets and regions 157(1)
Message passing 158(5)
The Actor model 160(1)
Channels 160(1)
Co-arrays 161(2)
Control flow 163(5)
ALGOL collateral clauses 163(1)
PAR, SEQ and ALT in occam 164(2)
Parallel loops 166(2)
Functional languages 168(1)
Functional operators 169(3)
Discussion of functional operators 171(1)
Exercises 172(3)
Performance Considerations and Modern Systems 175(22)
Memory 176(10)
Architectural solutions to the performance problem 177(1)
Examining single threaded memory performance 178(2)
Shared memory and cache coherence 180(5)
Distributed memory as a deeper memory hierarchy 185(1)
Amdahl's law, speedup, and efficiency 186(2)
Locking 188(3)
Serialization 188(1)
Blocking 189(1)
Wasted operations 190(1)
Thread overhead 191(3)
Exercises 194(3)
Introduction to Parallel Algorithms 197(10)
Designing parallel algorithms 198(1)
Finding concurrency 199(1)
Strategies for exploiting concurrency 200(1)
Algorithm patterns 201(2)
Patterns supporting parallel source code 203(1)
Demonstrating parallel algorithm patterns 204(1)
Exercises 205(2)
Pattern: Task Parallelism 207(26)
Supporting algorithm structures 208(7)
The Master-worker pattern 209(1)
Implementation mechanisms 210(2)
Abstractions supporting task parallelism 212(3)
Genetic algorithms 215(7)
Population management 218(2)
Individual expression and fitness evaluation 220(1)
Discussion 221(1)
Mandelbrot set computation 222(8)
The problem 222(1)
Identifying tasks and separating master from worker 223(3)
Cilk implementation 226(3)
OpenMP implementation 229(1)
Discussion 230(1)
Exercises 230(3)
Pattern: Data Parallelism 233(14)
Data parallel algorithms 233(3)
Matrix multiplication 236(2)
Cellular automaton 238(2)
Limitations of SIMD data parallel programming 240(2)
Beyond SIMD 242(2)
Approximating data parallelism with tasks 243(1)
Geometric Decomposition 244(1)
Exercises 245(2)
Pattern: Recursive Algorithms 247(16)
Recursion concepts 248(6)
Recursion and concurrency 252(1)
Recursion and the divide and conquer pattern 253(1)
Sorting 254(3)
Sudoku 257(4)
Exercises 261(2)
Pattern: Pipelined Algorithms 263(16)
Pipelining as a software design pattern 265(1)
Language support for pipelining 266(1)
Pipelining in Erlang 267(5)
Pipeline construction 268(1)
Pipeline stage structure 269(1)
Discussion 270(2)
Visual cortex 272(4)
PetaVision code description 274(2)
Exercises 276(3)
Appendix A OpenMP Quick Reference 279(16)
OpenMP fundamentals 280(1)
Creating threads and their implicit tasks 280(2)
OpenMP data environment 282(3)
Synchronization and the OpenMP memory model 285(3)
Work sharing 288(3)
OpenMP runtime library and environment variables 291(1)
Explicit tasks and OpenMP 3.0 292(3)
Appendix B Erlang Quick Reference 295(10)
Language basics 295(5)
Execution and memory model 300(1)
Message passing syntax 301(4)
Appendix C Cilk Quick Reference 305(10)
Cilk keywords 306(4)
Cilk model 310(2)
Work and span metrics 310(1)
Memory model 311(1)
Cilk standard library 312(2)
Further information 314(1)
References 315(8)
Index 323
Matthew J. Sottile is a research associate and adjunct assistant professor in the Department of Computer and Information Sciences at the University of Oregon. He has a significant publication record in both high performance computing and scientific programming. Dr. Sottile is currently working on research in concurrent programming languages and parallel algorithms for signal and image processing in neuroscience and medical applications.

Timothy G. Mattson is a principal engineer at Intel Corporation. Dr. Mattson's noteworthy projects include the world's first TFLOP computer, OpenMP, the first generally programmable TFLOP chip (Intel's 80-core research chip), OpenCL, and pioneering work on design patterns for parallel programming.

Craig E. Rasmussen is a staff member in the Advanced Computing Laboratory at Los Alamos National Laboratory (LANL). Along with extensive publications in computer science, space plasma, and medical physics, Dr. Rasmussen is the principal developer of PetaVision, a massively parallel, spiking neuron model of visual cortex that ran at 1.14 petaflops on LANL's Roadrunner computer in 2008.