
E-book: The Art of Multiprocessor Programming

Michael Spear (Computer Science and Engineering, Lehigh University), Maurice Herlihy (Brown University, Providence, RI, USA), Victor Luchangco (Senior Algorithms Researcher, Algorand, Cambridge, MA, USA), Nir Shavit (Professor of Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA)
  • Format: PDF+DRM
  • Publication date: 08-Sep-2020
  • Publisher: Morgan Kaufmann Publishers Inc
  • Language: English
  • ISBN-13: 9780123914064
  • Price: 80.47 €*
  • * the price is final, i.e., no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not permitted

  • Printing:

    not permitted

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You must also create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions. (This is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is most likely already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

The Art of Multiprocessor Programming, Second Edition, provides users with an authoritative guide to multicore programming. This updated edition introduces higher-level software development skills relative to those needed for efficient single-core programming, and includes comprehensive coverage of the new principles, algorithms, and tools necessary for effective multiprocessor programming. The book is an ideal resource for students and professionals alike who will benefit from its thorough coverage of key multiprocessor programming issues.
  • Features new exercises developed for instructors using the text, with more algorithms, new examples, and other updates throughout the book
  • Presents the fundamentals of programming multiple threads for accessing shared memory
  • Explores mainstream concurrent data structures and the key elements of their design, as well as synchronization techniques, from simple locks to transactional memory systems
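
Of the topics above, the "simple locks" end of the spectrum is the easiest to preview. Below is a minimal sketch of the two-thread Peterson lock treated in Section 2.3.3, written in Java, the language of the book's main text. The class name, the explicit thread-ID parameter, and the use of AtomicIntegerArray are illustrative choices made to keep the example self-contained; they are not the book's exact listing.

import java.util.concurrent.atomic.AtomicIntegerArray;

// Two-thread Peterson lock (cf. Section 2.3.3). Callers identify
// themselves as thread 0 or thread 1.
class PetersonLock {
    // AtomicIntegerArray gives per-element volatile semantics;
    // a plain boolean[] would not provide the visibility
    // guarantees the algorithm relies on.
    private final AtomicIntegerArray flag = new AtomicIntegerArray(2);
    private volatile int victim;

    public void lock(int i) {        // i: this thread's ID, 0 or 1
        int j = 1 - i;
        flag.set(i, 1);              // announce interest
        victim = i;                  // defer to the other thread
        while (flag.get(j) == 1 && victim == i)
            Thread.onSpinWait();     // spin until the other thread defers or exits
    }

    public void unlock(int i) {
        flag.set(i, 0);              // withdraw interest
    }
}

Mutual exclusion holds because a thread enters the critical section only when the other thread is uninterested or has volunteered to wait (by writing victim last); the book develops this argument formally and then generalizes it to n threads with the filter lock (Section 2.5).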

Reviews

"The book is largely self-contained, has countless examples, and focuses on what really matters. As such, it is very well suited for both a teaching environment and for practitioners looking for an opportunity to learn about this topic...The book is written in a way that makes multiprocessor programming accessible. This updated version will further confirm its status as a classic." --ComputingReviews.com, 2013

Preface xv
Acknowledgments xix
Suggested ways to teach the art of multiprocessor programming xxi
Chapter 1 Introduction
1(20)
1.1 Shared objects and synchronization
3(3)
1.2 A fable
6(3)
1.2.1 Properties of a mutual exclusion protocol
8(1)
1.2.2 The moral
9(1)
1.3 The producer-consumer problem
9(2)
1.4 The readers-writers problem
11(1)
1.5 The harsh realities of parallelization
12(2)
1.6 Parallel programming
14(1)
1.7 Chapter notes
15(1)
1.8 Exercises
15(6)
PART 1 Principles
Chapter 2 Mutual exclusion
21(28)
2.1 Time and events
21(1)
2.2 Critical sections
22(3)
2.3 Two-thread solutions
25(4)
2.3.1 The LockOne class
25(1)
2.3.2 The LockTwo class
26(1)
2.3.3 The Peterson lock
27(2)
2.4 Notes on deadlock
29(1)
2.5 The filter lock
30(3)
2.6 Fairness
33(1)
2.7 Lamport's Bakery algorithm
34(1)
2.8 Bounded timestamps
35(4)
2.9 Lower bounds on the number of locations
39(2)
2.10 Chapter notes
41(1)
2.11 Exercises
42(7)
Chapter 3 Concurrent objects
49(26)
3.1 Concurrency and correctness
49(3)
3.2 Sequential objects
52(1)
3.3 Sequential consistency
53(5)
3.3.1 Sequential consistency versus real-time order
55(1)
3.3.2 Sequential consistency is nonblocking
56(1)
3.3.3 Compositionality
57(1)
3.4 Linearizability
58(1)
3.4.1 Linearization points
58(1)
3.4.2 Linearizability versus sequential consistency
59(1)
3.5 Quiescent consistency
59(1)
3.5.1 Properties of quiescent consistency
60(1)
3.6 Formal definitions
60(4)
3.6.1 Histories
60(1)
3.6.2 Linearizability
61(2)
3.6.3 Linearizability is compositional
63(1)
3.6.4 Linearizability is nonblocking
63(1)
3.7 Memory consistency models
64(1)
3.8 Progress conditions
64(4)
3.8.1 Wait-freedom
65(1)
3.8.2 Lock-freedom
65(1)
3.8.3 Obstruction-freedom
66(1)
3.8.4 Blocking progress conditions
67(1)
3.8.5 Characterizing progress conditions
67(1)
3.9 Remarks
68(1)
3.10 Chapter notes
69(1)
3.11 Exercises
70(5)
Chapter 4 Foundations of shared memory
75(28)
4.1 The space of registers
76(5)
4.2 Register constructions
81(11)
4.2.1 Safe MRSW registers
82(1)
4.2.2 A regular Boolean MRSW register
83(1)
4.2.3 A regular M-valued MRSW register
84(1)
4.2.4 An atomic SRSW register
85(2)
4.2.5 An atomic MRSW register
87(3)
4.2.6 An atomic MRMW register
90(2)
4.3 Atomic snapshots
92(6)
4.3.1 An obstruction-free snapshot
92(1)
4.3.2 A wait-free snapshot
93(4)
4.3.3 Correctness arguments
97(1)
4.4 Chapter notes
98(1)
4.5 Exercises
99(4)
Chapter 5 The relative power of primitive synchronization operations
103(26)
5.1 Consensus numbers
104(2)
5.1.1 States and valence
105(1)
5.2 Atomic registers
106(3)
5.3 Consensus protocols
109(1)
5.4 FIFO queues
110(3)
5.5 Multiple assignment objects
113(3)
5.6 Read-modify-write operations
116(1)
5.7 Common2 RMW operations
117(2)
5.8 The compareAndSet operation
119(1)
5.9 Chapter notes
120(1)
5.10 Exercises
121(8)
Chapter 6 Universality of consensus
129(18)
6.1 Introduction
129(1)
6.2 Universality
130(1)
6.3 A lock-free universal construction
130(4)
6.4 A wait-free universal construction
134(6)
6.5 Chapter notes
140(1)
6.6 Exercises
141(6)
PART 2 Practice
Chapter 7 Spin locks and contention
147(36)
7.1 Welcome to the real world
147(3)
7.2 Volatile fields and atomic objects
150(1)
7.3 Test-and-set locks
150(4)
7.4 Exponential back-off
154(2)
7.5 Queue locks
156(7)
7.5.1 Array-based locks
156(3)
7.5.2 The CLH queue lock
159(2)
7.5.3 The MCS queue lock
161(2)
7.6 A queue lock with timeouts
163(3)
7.7 Hierarchical locks
166(5)
7.7.1 A hierarchical back-off lock
167(1)
7.7.2 Cohort locks
167(3)
7.7.3 A cohort lock implementation
170(1)
7.8 A composite lock
171(7)
7.9 A fast path for threads running alone
178(2)
7.10 One lock to rule them all
180(1)
7.11 Chapter notes
180(1)
7.12 Exercises
181(2)
Chapter 8 Monitors and blocking synchronization
183(18)
8.1 Introduction
183(1)
8.2 Monitor locks and conditions
183(6)
8.2.1 Conditions
185(2)
8.2.2 The lost-wakeup problem
187(2)
8.3 Readers-writers locks
189(5)
8.3.1 Simple readers-writers lock
190(2)
8.3.2 Fair readers-writers lock
192(2)
8.4 Our own reentrant lock
194(1)
8.5 Semaphores
194(2)
8.6 Chapter notes
196(1)
8.7 Exercises
197(4)
Chapter 9 Linked lists: The role of locking
201(28)
9.1 Introduction
201(1)
9.2 List-based sets
202(2)
9.3 Concurrent reasoning
204(2)
9.4 Coarse-grained synchronization
206(1)
9.5 Fine-grained synchronization
207(4)
9.6 Optimistic synchronization
211(4)
9.7 Lazy synchronization
215(5)
9.8 Nonblocking synchronization
220(5)
9.9 Discussion
225(1)
9.10 Chapter notes
226(1)
9.11 Exercises
226(3)
Chapter 10 Queues, memory management, and the ABA problem
229(22)
10.1 Introduction
229(1)
10.2 Queues
230(1)
10.3 A bounded partial queue
230(5)
10.4 An unbounded total queue
235(1)
10.5 A lock-free unbounded queue
236(4)
10.6 Memory reclamation and the ABA problem
240(4)
10.6.1 A naive synchronous queue
244(1)
10.7 Dual data structures
244(4)
10.8 Chapter notes
248(1)
10.9 Exercises
248(3)
Chapter 11 Stacks and elimination
251(14)
11.1 Introduction
251(1)
11.2 An unbounded lock-free stack
251(3)
11.3 Elimination
254(1)
11.4 The elimination back-off stack
255(5)
11.4.1 A lock-free exchanger
255(2)
11.4.2 The elimination array
257(3)
11.5 Chapter notes
260(1)
11.6 Exercises
261(4)
Chapter 12 Counting, sorting, and distributed coordination
265(40)
12.1 Introduction
265(1)
12.2 Shared counting
265(1)
12.3 Software combining
266(10)
12.3.1 Overview
267(7)
12.3.2 An extended example
274(1)
12.3.3 Performance and robustness
275(1)
12.4 Quiescently consistent pools and counters
276(1)
12.5 Counting networks
276(12)
12.5.1 Networks that count
276(3)
12.5.2 The bitonic counting network
279(8)
12.5.3 Performance and pipelining
287(1)
12.6 Diffracting trees
288(4)
12.7 Parallel sorting
292(1)
12.8 Sorting networks
293(3)
12.8.1 Designing a sorting network
294(2)
12.9 Sample sorting
296(2)
12.10 Distributed coordination
298(1)
12.11 Chapter notes
299(1)
12.12 Exercises
300(5)
Chapter 13 Concurrent hashing and natural parallelism
305(30)
13.1 Introduction
305(1)
13.2 Closed-address hash sets
306(9)
13.2.1 A coarse-grained hash set
308(2)
13.2.2 A striped hash set
310(1)
13.2.3 A refinable hash set
311(4)
13.3 A lock-free hash set
315(8)
13.3.1 Recursive split-ordering
315(3)
13.3.2 The BucketList class
318(1)
13.3.3 The LockFreeHashSet<T> class
319(4)
13.4 An open-address hash set
323(9)
13.4.1 Cuckoo hashing
323(1)
13.4.2 Concurrent cuckoo hashing
324(5)
13.4.3 Striped concurrent cuckoo hashing
329(2)
13.4.4 A refinable concurrent cuckoo hash set
331(1)
13.5 Chapter notes
332(2)
13.6 Exercises
334(1)
Chapter 14 Skiplists and balanced search
335(24)
14.1 Introduction
335(1)
14.2 Sequential skiplists
335(2)
14.3 A lock-based concurrent skiplist
337(8)
14.3.1 A bird's-eye view
337(2)
14.3.2 The algorithm
339(6)
14.4 A lock-free concurrent skiplist
345(10)
14.4.1 A bird's-eye view
345(3)
14.4.2 The algorithm in detail
348(7)
14.5 Concurrent skiplists
355(1)
14.6 Chapter notes
356(1)
14.7 Exercises
356(3)
Chapter 15 Priority queues
359(18)
15.1 Introduction
359(1)
15.1.1 Concurrent priority queues
359(1)
15.2 An array-based bounded priority queue
360(1)
15.3 A tree-based bounded priority queue
361(2)
15.4 An unbounded heap-based priority queue
363(8)
15.4.1 A sequential heap
363(2)
15.4.2 A concurrent heap
365(6)
15.5 A skiplist-based unbounded priority queue
371(3)
15.6 Chapter notes
374(1)
15.7 Exercises
375(2)
Chapter 16 Scheduling and work distribution
377(28)
16.1 Introduction
377(7)
16.2 Analyzing parallelism
384(3)
16.3 Realistic multiprocessor scheduling
387(2)
16.4 Work distribution
389(1)
16.4.1 Work stealing
389(1)
16.4.2 Yielding and multiprogramming
390(1)
16.5 Work-stealing deques
390(10)
16.5.1 A bounded work-stealing deque
391(4)
16.5.2 An unbounded work-stealing deque
395(2)
16.5.3 Work dealing
397(3)
16.6 Chapter notes
400(1)
16.7 Exercises
401(4)
Chapter 17 Data parallelism
405(26)
17.1 MapReduce
407(7)
17.1.1 The MapReduce framework
408(2)
17.1.2 A MapReduce-based WordCount application
410(1)
17.1.3 A MapReduce-based KMeans application
411(1)
17.1.4 The MapReduce implementation
411(3)
17.2 Stream computing
414(8)
17.2.1 A stream-based WordCount application
416(1)
17.2.2 A stream-based KMeans application
417(2)
17.2.3 Making aggregate operations parallel
419(3)
17.3 Chapter notes
422(1)
17.4 Exercises
423(8)
Chapter 18 Barriers
431(20)
18.1 Introduction
431(1)
18.2 Barrier implementations
432(1)
18.3 Sense reversing barrier
433(1)
18.4 Combining tree barrier
434(2)
18.5 Static tree barrier
436(2)
18.6 Termination detection barriers
438(4)
18.7 Chapter notes
442(1)
18.8 Exercises
443(8)
Chapter 19 Optimism and manual memory management
451(16)
19.1 Transitioning from Java to C++
451(1)
19.2 Optimism and explicit reclamation
451(3)
19.3 Protecting pending operations
454(1)
19.4 An object for managing memory
455(1)
19.5 Traversing a list
456(2)
19.6 Hazard pointers
458(4)
19.7 Epoch-based reclamation
462(3)
19.8 Chapter notes
465(1)
19.9 Exercises
466(1)
Chapter 20 Transactional programming
467(30)
20.1 Challenges in concurrent programming
467(5)
20.1.1 Problems with locking
467(1)
20.1.2 Problems with explicit speculation
468(2)
20.1.3 Problems with nonblocking algorithms
470(1)
20.1.4 Problems with compositionality
471(1)
20.1.5 Summary
472(1)
20.2 Transactional programming
472(3)
20.2.1 An example of transactional programming
473(2)
20.3 Hardware support for transactional programming
475(3)
20.3.1 Hardware speculation
475(1)
20.3.2 Basic cache coherence
475(1)
20.3.3 Transactional cache coherence
476(1)
20.3.4 Limitations of hardware support
477(1)
20.4 Transactional lock elision
478(3)
20.4.1 Discussion
480(1)
20.5 Transactional memory
481(2)
20.5.1 Run-time scheduling
482(1)
20.5.2 Explicit self-abort
483(1)
20.6 Software transactions
483(9)
20.6.1 Transactions with ownership records
485(5)
20.6.2 Transactions with value-based validation
490(2)
20.7 Combining hardware and software transactions
492(1)
20.8 Transactional data structure design
493(1)
20.9 Chapter notes
494(1)
20.10 Exercises
494(3)
Appendix A Software basics
497(22)
A.1 Introduction
497(1)
A.2 Java
497(7)
A.2.1 Threads
497(1)
A.2.2 Monitors
498(3)
A.2.3 Yielding and sleeping
501(1)
A.2.4 Thread-local objects
502(1)
A.2.5 Randomization
503(1)
A.3 The Java memory model
504(4)
A.3.1 Locks and synchronized blocks
505(1)
A.3.2 Volatile fields
506(1)
A.3.3 Final fields
506(2)
A.4 C++
508(6)
A.4.1 Threads in C++
508(1)
A.4.2 Locks in C++
509(1)
A.4.3 Condition variables
510(2)
A.4.4 Atomic variables
512(1)
A.4.5 Thread-local storage
513(1)
A.5 C#
514(4)
A.5.1 Threads
514(1)
A.5.2 Monitors
515(2)
A.5.3 Thread-local objects
517(1)
A.6 Appendix notes
518(1)
Appendix B Hardware basics
519(14)
B.1 Introduction (and a puzzle)
519(3)
B.2 Processors and threads
522(1)
B.3 Interconnect
522(1)
B.4 Memory
523(1)
B.5 Caches
523(3)
B.5.1 Coherence
524(2)
B.5.2 Spinning
526(1)
B.6 Cache-conscious programming, or the puzzle solved
526(1)
B.7 Multicore and multithreaded architectures
527(2)
B.7.1 Relaxed memory consistency
528(1)
B.8 Hardware synchronization instructions
529(1)
B.9 Appendix notes
530(1)
B.10 Exercises
531(2)
Bibliography 533(8)
Index 541
Maurice Herlihy received an A.B. in Mathematics from Harvard University and a Ph.D. in Computer Science from M.I.T. He has served on the faculty of Carnegie Mellon University and on the staff of DEC Cambridge Research Lab, and is currently a Professor in the Computer Science Department at Brown University. Dr. Herlihy is an ACM Fellow and the recipient of the 2003 Dijkstra Prize in Distributed Computing. He shared the 2004 Gödel Prize with Nir Shavit, with whom he also shared the 2012 Edsger W. Dijkstra Prize in Distributed Computing.

Nir Shavit received a B.A. and M.Sc. from the Technion and a Ph.D. from the Hebrew University, all in Computer Science. From 1999 to 2011 he served as a member of technical staff at Sun Labs and Oracle Labs. He shared the 2004 Gödel Prize with Maurice Herlihy, with whom he also shared the 2012 Edsger W. Dijkstra Prize in Distributed Computing. He is a Professor in the Electrical Engineering and Computer Science Department at M.I.T. and the Computer Science Department at Tel-Aviv University.

Victor Luchangco is a Senior Algorithms Researcher at Algorand in Cambridge, MA, USA.

Michael Spear is a Professor of Computer Science and Engineering at Lehigh University. His research interests are broadly in concurrency, programming languages, and computer architecture. His goal is to make it easier for programmers to write correct, scalable applications.