
The Art of Multiprocessor Programming, Revised Reprint [Paperback]

Nir Shavit (Professor of Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA), Maurice Herlihy (Brown University, Providence, RI, USA)
  • Format: Paperback / softback, 536 pages, height x width: 235x191 mm, weight: 1090 g, Contains 1 Digital (delivered electronically)
  • Published: 25-Jun-2012
  • Publisher: Morgan Kaufmann Publishers Inc
  • ISBN-10: 0123973376
  • ISBN-13: 9780123973375

Revised and updated with improvements conceived in parallel programming courses, The Art of Multiprocessor Programming is an authoritative guide to multicore programming. It introduces a higher-level set of software development skills than that needed for efficient single-core programming. This book provides comprehensive coverage of the new principles, algorithms, and tools necessary for effective multiprocessor programming. Students and professionals alike will benefit from thorough coverage of key multiprocessor programming issues.

  • This revised edition incorporates much-demanded updates throughout the book, based on feedback and corrections reported from classrooms since 2008
  • Learn the fundamentals of programming multiple threads accessing shared memory
  • Explore mainstream concurrent data structures and the key elements of their design, as well as synchronization techniques from simple locks to transactional memory systems
  • Visit the companion site and download source code, example Java programs, and materials to support and enhance the learning experience
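As a taste of the material in Part I, here is a minimal sketch of the classic two-thread Peterson lock (Section 2.3.3) in Java, the book's language. The class and field names are illustrative rather than the book's exact listing, and `AtomicBoolean` flags stand in for the volatile arrays the book discusses alongside the Java memory model:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class PetersonDemo {
    static class PetersonLock {
        // One "I am interested" flag per thread. AtomicBoolean gives
        // per-element visibility, which a plain boolean[] would not
        // guarantee under the Java memory model.
        private final AtomicBoolean[] flag =
            { new AtomicBoolean(false), new AtomicBoolean(false) };
        private volatile int victim;   // the thread that defers

        void lock(int me) {            // me must be 0 or 1
            int other = 1 - me;
            flag[me].set(true);        // declare interest
            victim = me;               // let the other thread go first
            // spin while the other thread is interested and I am the victim
            while (flag[other].get() && victim == me) { }
        }

        void unlock(int me) {
            flag[me].set(false);       // withdraw interest
        }
    }

    static final PetersonLock lock = new PetersonLock();
    static int counter = 0;            // protected by the lock

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) {
                lock.lock(0); counter++; lock.unlock(0);
            }
        });
        Thread b = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) {
                lock.lock(1); counter++; lock.unlock(1);
            }
        });
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter);   // 200000: no increments are lost
    }
}
```

The lock coordinates only two threads; Chapter 2 goes on to generalize the idea to n threads with the Filter lock and Lamport's Bakery algorithm.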

Reviews

"The book could be used for a short course for practitioners looking for solutions to particular problems, a medium course for non-computer science majors who would use multiprocessor programming in their own field, or a semester-long course for computer science majors." --Reference and Research Book News

Additional information

Update of the best-selling multicore programming text with more than 100 pages of updates in response to reader feedback
Acknowledgments xvii
Preface xxi
Suggested Ways to Teach the Art of Multiprocessor Programming xxiii
1 Introduction 1(18)
1.1 Shared Objects and Synchronization 3(3)
1.2 A Fable 6(4)
1.2.1 Properties of Mutual Exclusion 8(1)
1.2.2 The Moral 9(1)
1.3 The Producer-Consumer Problem 10(2)
1.4 The Readers-Writers Problem 12(1)
1.5 The Harsh Realities of Parallelization 13(2)
1.6 Parallel Programming 15(1)
1.7 Chapter Notes 15(1)
1.8 Exercises 16(3)
I PRINCIPLES 19(120)
2 Mutual Exclusion 21(24)
2.1 Time 21(1)
2.2 Critical Sections 22(6)
2.3 2-Thread Solutions 24(1)
2.3.1 The LockOne Class 25(1)
2.3.2 The LockTwo Class 26(1)
2.3.3 The Peterson Lock 27(1)
2.4 The Filter Lock 28(3)
2.5 Fairness 31(1)
2.6 Lamport's Bakery Algorithm 31(2)
2.7 Bounded Timestamps 33(4)
2.8 Lower Bounds on the Number of Locations 37(3)
2.9 Chapter Notes 40(1)
2.10 Exercises 41(4)
3 Concurrent Objects 45(26)
3.1 Concurrency and Correctness 45(3)
3.2 Sequential Objects 48(1)
3.3 Quiescent Consistency 49(2)
3.3.1 Remarks 51(1)
3.4 Sequential Consistency 51(3)
3.4.1 Remarks 52(2)
3.5 Linearizability 54(1)
3.5.1 Linearization Points 55(1)
3.5.2 Remarks 55(1)
3.6 Formal Definitions 55(4)
3.6.1 Linearizability 57(1)
3.6.2 Compositional Linearizability 57(1)
3.6.3 The Nonblocking Property 58(1)
3.7 Progress Conditions 59(2)
3.7.1 Dependent Progress Conditions 60(1)
3.8 The Java Memory Model 61(3)
3.8.1 Locks and Synchronized Blocks 62(1)
3.8.2 Volatile Fields 63(1)
3.8.3 Final Fields 63(1)
3.9 Remarks 64(1)
3.10 Chapter Notes 65(1)
3.11 Exercises 66(5)
4 Foundations of Shared Memory 71(28)
4.1 The Space of Registers 72(5)
4.2 Register Constructions 77(10)
4.2.1 MRSW Safe Registers 78(1)
4.2.2 A Regular Boolean MRSW Register 78(1)
4.2.3 A Regular M-Valued MRSW Register 79(2)
4.2.4 An Atomic SRSW Register 81(1)
4.2.5 An Atomic MRSW Register 82(3)
4.2.6 An Atomic MRMW Register 85(2)
4.3 Atomic Snapshots 87(6)
4.3.1 An Obstruction-Free Snapshot 87(1)
4.3.2 A Wait-Free Snapshot 88(2)
4.3.3 Correctness Arguments 90(3)
4.4 Chapter Notes 93(1)
4.5 Exercises 94(5)
5 The Relative Power of Primitive Synchronization Operations 99(26)
5.1 Consensus Numbers 100(3)
5.1.1 States and Valence 101(2)
5.2 Atomic Registers 103(3)
5.3 Consensus Protocols 106(1)
5.4 FIFO Queues 106(4)
5.5 Multiple Assignment Objects 110(2)
5.6 Read-Modify-Write Operations 112(2)
5.7 Common2 RMW Operations 114(2)
5.8 The compareAndSet() Operation 116(1)
5.9 Chapter Notes 117(1)
5.10 Exercises 118(7)
6 Universality of Consensus 125(14)
6.1 Introduction 125(1)
6.2 Universality 126(1)
6.3 A Lock-Free Universal Construction 126(4)
6.4 A Wait-Free Universal Construction 130(6)
6.5 Chapter Notes 136(1)
6.6 Exercises 137(2)
II PRACTICE 139(312)
7 Spin Locks and Contention 141(36)
7.1 Welcome to the Real World 141(3)
7.2 Test-And-Set Locks 144(2)
7.3 TAS-Based Spin Locks Revisited 146(1)
7.4 Exponential Backoff 147(2)
7.5 Queue Locks 149(8)
7.5.1 Array-Based Locks 150(1)
7.5.2 The CLH Queue Lock 151(3)
7.5.3 The MCS Queue Lock 154(3)
7.6 A Queue Lock with Timeouts 157(2)
7.7 A Composite Lock 159(8)
7.7.1 A Fast-Path Composite Lock 165(2)
7.8 Hierarchical Locks 167(6)
7.8.1 A Hierarchical Backoff Lock 167(1)
7.8.2 A Hierarchical CLH Queue Lock 168(5)
7.9 One Lock To Rule Them All 173(1)
7.10 Chapter Notes 173(1)
7.11 Exercises 174(3)
8 Monitors and Blocking Synchronization 177(18)
8.1 Introduction 177(1)
8.2 Monitor Locks and Conditions 178(5)
8.2.1 Conditions 179(2)
8.2.2 The Lost-Wakeup Problem 181(2)
8.3 Readers-Writers Locks 183(4)
8.3.1 Simple Readers-Writers Lock 184(1)
8.3.2 Fair Readers-Writers Lock 185(2)
8.4 Our Own Reentrant Lock 187(2)
8.5 Semaphores 189(1)
8.6 Chapter Notes 189(1)
8.7 Exercises 190(5)
9 Linked Lists: The Role of Locking 195(28)
9.1 Introduction 195(1)
9.2 List-Based Sets 196(2)
9.3 Concurrent Reasoning 198(2)
9.4 Coarse-Grained Synchronization 200(1)
9.5 Fine-Grained Synchronization 201(4)
9.6 Optimistic Synchronization 205(3)
9.7 Lazy Synchronization 208(5)
9.8 Non-Blocking Synchronization 213(5)
9.9 Discussion 218(1)
9.10 Chapter Notes 219(1)
9.11 Exercises 219(4)
10 Concurrent Queues and the ABA Problem 223(22)
10.1 Introduction 223(1)
10.2 Queues 224(1)
10.3 A Bounded Partial Queue 225(4)
10.4 An Unbounded Total Queue 229(1)
10.5 An Unbounded Lock-Free Queue 230(6)
10.6 Memory Reclamation and the ABA Problem 236(2)
10.6.1 A Naive Synchronous Queue 237(1)
10.7 Dual Data Structures 238(3)
10.8 Chapter Notes 241(1)
10.9 Exercises 241(4)
11 Concurrent Stacks and Elimination 245(14)
11.1 Introduction 245(1)
11.2 An Unbounded Lock-Free Stack 245(3)
11.3 Elimination 248(1)
11.4 The Elimination Backoff Stack 249(5)
11.4.1 A Lock-Free Exchanger 249(2)
11.4.2 The Elimination Array 251(3)
11.5 Chapter Notes 254(1)
11.6 Exercises 255(4)
12 Counting, Sorting, and Distributed Coordination 259(40)
12.1 Introduction 259(1)
12.2 Shared Counting 259(1)
12.3 Software Combining 260(9)
12.3.1 Overview 261(6)
12.3.2 An Extended Example 267(2)
12.3.3 Performance and Robustness 269(1)
12.4 Quiescently Consistent Pools and Counters 269(1)
12.5 Counting Networks 270(12)
12.5.1 Networks That Count 270(3)
12.5.2 The Bitonic Counting Network 273(7)
12.5.3 Performance and Pipelining 280(2)
12.6 Diffracting Trees 282(4)
12.7 Parallel Sorting 286(1)
12.8 Sorting Networks 286(4)
12.8.1 Designing a Sorting Network 287(3)
12.9 Sample Sorting 290(1)
12.10 Distributed Coordination 291(1)
12.11 Chapter Notes 292(1)
12.12 Exercises 293(6)
13 Concurrent Hashing and Natural Parallelism 299(30)
13.1 Introduction 299(1)
13.2 Closed-Address Hash Sets 300(9)
13.2.1 A Coarse-Grained Hash Set 302(1)
13.2.2 A Striped Hash Set 303(2)
13.2.3 A Refinable Hash Set 305(4)
13.3 A Lock-Free Hash Set 309(7)
13.3.1 Recursive Split-Ordering 309(3)
13.3.2 The BucketList Class 312(1)
13.3.3 The LockFreeHashSet<T> Class 313(3)
13.4 An Open-Addressed Hash Set 316(9)
13.4.1 Cuckoo Hashing 316(2)
13.4.2 Concurrent Cuckoo Hashing 318(4)
13.4.3 Striped Concurrent Cuckoo Hashing 322(2)
13.4.4 A Refinable Concurrent Cuckoo Hash Set 324(1)
13.5 Chapter Notes 325(1)
13.6 Exercises 326(3)
14 Skiplists and Balanced Search 329(22)
14.1 Introduction 329(1)
14.2 Sequential Skiplists 329(2)
14.3 A Lock-Based Concurrent Skiplist 331(8)
14.3.1 A Bird's-Eye View 331(2)
14.3.2 The Algorithm 333(6)
14.4 A Lock-Free Concurrent Skiplist 339(9)
14.4.1 A Bird's-Eye View 339(2)
14.4.2 The Algorithm in Detail 341(7)
14.5 Concurrent Skiplists 348(1)
14.6 Chapter Notes 348(1)
14.7 Exercises 349(2)
15 Priority Queues 351(18)
15.1 Introduction 351(1)
15.1.1 Concurrent Priority Queues 351(1)
15.2 An Array-Based Bounded Priority Queue 352(1)
15.3 A Tree-Based Bounded Priority Queue 353(2)
15.4 An Unbounded Heap-Based Priority Queue 355(8)
15.4.1 A Sequential Heap 356(1)
15.4.2 A Concurrent Heap 357(6)
15.5 A Skiplist-Based Unbounded Priority Queue 363(3)
15.6 Chapter Notes 366(1)
15.7 Exercises 366(3)
16 Futures, Scheduling, and Work Distribution 369(28)
16.1 Introduction 369(6)
16.2 Analyzing Parallelism 375(3)
16.3 Realistic Multiprocessor Scheduling 378(2)
16.4 Work Distribution 380(2)
16.4.1 Work Stealing 380(1)
16.4.2 Yielding and Multiprogramming 381(1)
16.5 Work-Stealing Dequeues 382(9)
16.5.1 A Bounded Work-Stealing Dequeue 382(4)
16.5.2 An Unbounded Work-Stealing Dequeue 386(3)
16.5.3 Work Balancing 389(2)
16.6 Chapter Notes 391(1)
16.7 Exercises 392(5)
17 Barriers 397(20)
17.1 Introduction 397(1)
17.2 Barrier Implementations 398(1)
17.3 Sense-Reversing Barrier 399(1)
17.4 Combining Tree Barrier 400(2)
17.5 Static Tree Barrier 402(2)
17.6 Termination Detecting Barriers 404(4)
17.7 Chapter Notes 408(1)
17.8 Exercises 409(8)
18 Transactional Memory 417(34)
18.1 Introduction 417(4)
18.1.1 What is Wrong with Locking? 417(1)
18.1.2 What is Wrong with compareAndSet()? 418(2)
18.1.3 What is Wrong with Compositionality? 420(1)
18.1.4 What can We Do about It? 421(1)
18.2 Transactions and Atomicity 421(3)
18.3 Software Transactional Memory 424(21)
18.3.1 Transactions and Transactional Threads 427(1)
18.3.2 Zombies and Consistency 428(1)
18.3.3 Atomic Objects 429(2)
18.3.4 Dependent or Independent Progress? 431(1)
18.3.5 Contention Managers 431(2)
18.3.6 Implementing Atomic Objects 433(1)
18.3.7 An Obstruction-Free Atomic Object 434(4)
18.3.8 A Lock-Based Atomic Object 438(7)
18.4 Hardware Transactional Memory 445(3)
18.4.1 Cache Coherence 446(1)
18.4.2 Transactional Cache Coherence 447(1)
18.4.3 Enhancements 447(1)
18.5 Chapter Notes 448(1)
18.6 Exercises 449(2)
III APPENDIX 451(32)
A Software Basics 453(16)
A.1 Introduction 453(1)
A.2 Java 453(1)
A.2.1 Threads 453(2)
A.2.2 Monitors 455(3)
A.2.3 Yielding and Sleeping 458(1)
A.2.4 Thread-Local Objects 458(2)
A.3 C# 460(1)
A.3.1 Threads 460(1)
A.3.2 Monitors 461(1)
A.3.3 Thread-Local Objects 462(2)
A.4 Pthreads 464(1)
A.4.1 Thread-Local Storage 465(1)
A.5 Chapter Notes 466(3)
B Hardware Basics 469(14)
B.1 Introduction (and a Puzzle) 469(3)
B.2 Processors and Threads 472(1)
B.3 Interconnect 472(1)
B.4 Memory 473(1)
B.5 Caches 473(1)
B.5.1 Coherence 474(2)
B.5.2 Spinning 476(1)
B.6 Cache-Conscious Programming, or the Puzzle Solved 476(1)
B.7 Multi-Core and Multi-Threaded Architectures 477(1)
B.7.1 Relaxed Memory Consistency 478(1)
B.8 Hardware Synchronization Instructions 479(2)
B.9 Chapter Notes 481(1)
B.10 Exercises 481(2)
Bibliography 483(12)
Index 495
Maurice Herlihy received an A.B. in Mathematics from Harvard University and a Ph.D. in Computer Science from M.I.T. He has served on the faculty of Carnegie Mellon University and on the staff of DEC Cambridge Research Lab, and is currently a Professor in the Computer Science Department at Brown University. Dr. Herlihy is an ACM Fellow and the recipient of the 2003 Dijkstra Prize in Distributed Computing. He shared the 2004 Gödel Prize with Nir Shavit, with whom he also shared the 2012 Edsger W. Dijkstra Prize in Distributed Computing.

Nir Shavit received a B.A. and M.Sc. from the Technion and a Ph.D. from the Hebrew University, all in Computer Science. From 1999 to 2011 he served as a member of technical staff at Sun Labs and Oracle Labs. He shared the 2004 Gödel Prize with Maurice Herlihy, with whom he also shared the 2012 Edsger W. Dijkstra Prize in Distributed Computing. He is a Professor in the Electrical Engineering and Computer Science Department at M.I.T. and the Computer Science Department at Tel-Aviv University.