
E-book: Using MPI: Portable Parallel Programming with the Message-Passing Interface

Anthony Skjellum (Auburn University), Ewing Lusk (Argonne National Laboratory), William Gropp (University of Illinois Urbana-Champaign)
  • Format - PDF+DRM
  • Price: 135,20 €*
  • * the price is final, i.e., no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste): not allowed
  • Printing: not allowed
  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You must also create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (a free application designed specifically for reading e-books; not to be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

This book offers a thoroughly updated guide to the MPI (Message-Passing Interface) standard library for writing programs for parallel computers. Since the publication of the previous edition of Using MPI, parallel computing has become mainstream. Today, applications run on computers with millions of processors; multiple processors sharing memory and multicore processors with multiple hardware threads per core are common. The MPI-3 Forum recently brought the MPI standard up to date with respect to developments in hardware capabilities, core language evolution, the needs of applications, and experience gained over the years by vendors, implementers, and users. This third edition of Using MPI reflects these changes in both text and example code.

The book takes an informal, tutorial approach, introducing each concept through easy-to-understand examples, including actual code in C and Fortran. Topics include using MPI in simple programs, virtual topologies, MPI datatypes, parallel libraries, and a comparison of MPI with sockets. For the third edition, example code has been brought up to date; applications have been updated; and references reflect the recent attention MPI has received in the literature. A companion volume, Using Advanced MPI, covers more advanced topics, including hybrid programming and coping with large data.
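To give a flavor of that tutorial style, here is a minimal sketch (our illustration, not code taken from the book) of the kind of first MPI program Chapter 3 begins with: every process learns its rank, nonzero ranks send a greeting to rank 0, and rank 0 prints them. Only core MPI calls are used.

/* Minimal "first MPI program" sketch (illustration, not from the book).
 * Each nonzero rank sends a greeting string to rank 0, which prints them.
 * Build: mpicc hello.c -o hello    Run: mpiexec -n 4 ./hello */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    int rank, size;
    char msg[64];

    MPI_Init(&argc, &argv);                  /* start MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */

    if (rank != 0) {
        snprintf(msg, sizeof msg, "Greetings from process %d of %d", rank, size);
        MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    } else {
        printf("Process 0 of %d is running\n", size);
        for (int src = 1; src < size; src++) {
            MPI_Recv(msg, sizeof msg, MPI_CHAR, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("%s\n", msg);
        }
    }

    MPI_Finalize();                          /* shut down MPI */
    return 0;
}

Launched with mpiexec -n 4, this prints one greeting per process; the same send/receive pattern underlies the self-scheduling matrix-vector example of section 3.6.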

Series Foreword xiii
Preface to the Third Edition xv
Preface to the Second Edition xix
Preface to the First Edition xxi
1 Background 1(12)
1.1 Why Parallel Computing? 1(1)
1.2 Obstacles to Progress 2(1)
1.3 Why Message Passing? 3(7)
1.3.1 Parallel Computational Models 3(6)
1.3.2 Advantages of the Message-Passing Model 9(1)
1.4 Evolution of Message-Passing Systems 10(1)
1.5 The MPI Forum 11(2)
2 Introduction to MPI 13(10)
2.1 Goal 13(1)
2.2 What Is MPI? 13(1)
2.3 Basic MPI Concepts 14(4)
2.4 Other Interesting Features of MPI 18(2)
2.5 Is MPI Large or Small? 20(1)
2.6 Decisions Left to the Implementer 21(2)
3 Using MPI in Simple Programs 23(46)
3.1 A First MPI Program 23(5)
3.2 Running Your First MPI Program 28(1)
3.3 A First MPI Program in C 29(1)
3.4 Using MPI from Other Languages 29(2)
3.5 Timing MPI Programs 31(1)
3.6 A Self-Scheduling Example: Matrix-Vector Multiplication 32(6)
3.7 Studying Parallel Performance 38(11)
3.7.1 Elementary Scalability Calculations 39(2)
3.7.2 Gathering Data on Program Execution 41(1)
3.7.3 Instrumenting a Parallel Program with MPE Logging 42(1)
3.7.4 Events and States 43(1)
3.7.5 Instrumenting the Matrix-Matrix Multiply Program 43(4)
3.7.6 Notes on Implementation of Logging 47(1)
3.7.7 Graphical Display of Logfiles 48(1)
3.8 Using Communicators 49(6)
3.9 Another Way of Forming New Communicators 55(2)
3.10 A Handy Graphics Library for Parallel Programs 57(3)
3.11 Common Errors and Misunderstandings 60(2)
3.12 Summary of a Simple Subset of MPI 62(1)
3.13 Application: Computational Fluid Dynamics 62(7)
3.13.1 Parallel Formulation 63(2)
3.13.2 Parallel Implementation 65(4)
4 Intermediate MPI 69(44)
4.1 The Poisson Problem 70(3)
4.2 Topologies 73(8)
4.3 A Code for the Poisson Problem 81(10)
4.4 Using Nonblocking Communications 91(3)
4.5 Synchronous Sends and "Safe" Programs 94(1)
4.6 More on Scalability 95(3)
4.7 Jacobi with a 2-D Decomposition 98(2)
4.8 An MPI Derived Datatype 100(1)
4.9 Overlapping Communication and Computation 101(4)
4.10 More on Timing Programs 105(1)
4.11 Three Dimensions 106(1)
4.12 Common Errors and Misunderstandings 107(1)
4.13 Application: Nek5000/NekCEM 108(5)
5 Fun with Datatypes 113(42)
5.1 MPI Datatypes 113(6)
5.1.1 Basic Datatypes and Concepts 113(3)
5.1.2 Derived Datatypes 116(2)
5.1.3 Understanding Extents 118(1)
5.2 The N-Body Problem 119(17)
5.2.1 Gather 120(4)
5.2.2 Nonblocking Pipeline 124(3)
5.2.3 Moving Particles between Processes 127(5)
5.2.4 Sending Dynamically Allocated Data 132(2)
5.2.5 User-Controlled Data Packing 134(2)
5.3 Visualizing the Mandelbrot Set 136(10)
5.3.1 Sending Arrays of Structures 144(2)
5.4 Gaps in Datatypes 146(2)
5.5 More on Datatypes for Structures 148(1)
5.6 Deprecated and Removed Functions 149(1)
5.7 Common Errors and Misunderstandings 150(2)
5.8 Application: Cosmological Large-Scale Structure Formation 152(3)
6 Parallel Libraries 155(34)
6.1 Motivation 155(6)
6.1.1 The Need for Parallel Libraries 155(1)
6.1.2 Common Deficiencies of Early Message-Passing Systems 156(2)
6.1.3 Review of MPI Features That Support Libraries 158(3)
6.2 A First MPI Library 161(9)
6.3 Linear Algebra on Grids 170(9)
6.3.1 Mappings and Logical Grids 170(5)
6.3.2 Vectors and Matrices 175(2)
6.3.3 Components of a Parallel Library 177(2)
6.4 The LINPACK Benchmark in MPI 179(4)
6.5 Strategies for Library Building 183(1)
6.6 Examples of Libraries 184(1)
6.7 Application: Nuclear Green's Function Monte Carlo 185(4)
7 Other Features of MPI 189(56)
7.1 Working with Global Data 189(12)
7.1.1 Shared Memory, Global Data, and Distributed Memory 189(1)
7.1.2 A Counter Example 190(3)
7.1.3 The Shared Counter Using Polling Instead of an Extra Process 193(3)
7.1.4 Fairness in Message Passing 196(2)
7.1.5 Exploiting Request-Response Message Patterns 198(3)
7.2 Advanced Collective Operations 201(7)
7.2.1 Data Movement 201(1)
7.2.2 Collective Computation 201(5)
7.2.3 Common Errors and Misunderstandings 206(2)
7.3 Intercommunicators 208(8)
7.4 Heterogeneous Computing 216(1)
7.5 Hybrid Programming with MPI and OpenMP 217(1)
7.6 The MPI Profiling Interface 218(8)
7.6.1 Finding Buffering Problems 221(2)
7.6.2 Finding Load Imbalances 223(1)
7.6.3 Mechanics of Using the Profiling Interface 223(3)
7.7 Error Handling 226(8)
7.7.1 Error Handlers 226(3)
7.7.2 Example of Error Handling 229(1)
7.7.3 User-Defined Error Handlers 229(3)
7.7.4 Terminating MPI Programs 232(1)
7.7.5 Common Errors and Misunderstandings 232(2)
7.8 The MPI Environment 234(3)
7.8.1 Processor Name 236(1)
7.8.2 Is MPI Initialized? 236(1)
7.9 Determining the Version of MPI 237(2)
7.10 Other Functions in MPI 239(1)
7.11 Application: No-Core Configuration Interaction Calculations in Nuclear Physics 240(5)
8 Understanding How MPI Implementations Work 245(8)
8.1 Introduction 245(4)
8.1.1 Sending Data 245(1)
8.1.2 Receiving Data 246(1)
8.1.3 Rendezvous Protocol 246(1)
8.1.4 Matching Protocols to MPI's Send Modes 247(1)
8.1.5 Performance Implications 248(1)
8.1.6 Alternative MPI Implementation Strategies 249(1)
8.1.7 Tuning MPI Implementations 249(1)
8.2 How Difficult Is MPI to Implement? 249(1)
8.3 Device Capabilities and the MPI Library Definition 250(1)
8.4 Reliability of Data Transfer 251(2)
9 Comparing MPI with Sockets 253(6)
9.1 Process Startup and Shutdown 255(2)
9.2 Handling Faults 257(2)
10 Wait! There's More! 259(4)
10.1 Beyond MPI-1 259(1)
10.2 Using Advanced MPI 260(1)
10.3 Will There Be an MPI-4? 261(1)
10.4 Beyond Message Passing Altogether 261(1)
10.5 Final Words 262(1)
Glossary of Selected Terms 263(10)
A The MPE Multiprocessing Environment 273(6)
A.1 MPE Logging 273(2)
A.2 MPE Graphics 275(1)
A.3 MPE Helpers 276(3)
B MPI Resources Online 279(2)
C Language Details 281(6)
C.1 Arrays in C and Fortran 281(4)
C.1.1 Column and Row Major Ordering 281(1)
C.1.2 Meshes vs. Matrices 281(1)
C.1.3 Higher Dimensional Arrays 282(3)
C.2 Aliasing 285(2)
References 287(14)
Subject Index 301(4)
Function and Term Index 305