
E-book: Accelerating MATLAB Performance: 1001 tips to speed up MATLAB programs

  • Pages: 785
  • Publication date: 11-Dec-2014
  • Publisher: Chapman & Hall/CRC
  • Language: English
  • ISBN-13: 9781040078068
  • Format: EPUB+DRM
  • Price: 84,49 €*
  • * The price is final, i.e. no further discounts will apply.
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), you need to install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, you need to install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

The MATLAB® programming environment is often perceived as a platform suitable for prototyping and modeling but not for "serious" applications. One of the main complaints is that MATLAB is just too slow.

Accelerating MATLAB Performance aims to correct this perception by describing multiple ways to greatly improve MATLAB program speed. Packed with thousands of helpful tips, it leaves no stone unturned, discussing every aspect of MATLAB.

Ideal for novices and professionals alike, the book describes MATLAB performance at a scale and depth never before published. It takes a comprehensive approach to MATLAB performance, illustrating numerous ways to attain the desired speedup.

The book covers MATLAB, CPU, and memory profiling and discusses various tradeoffs in performance tuning. It describes the application of standard industry techniques in MATLAB, as well as methods that are specific to MATLAB, such as using different data types or built-in functions.
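
For readers unfamiliar with the timing tools that profiling relies on, here is a minimal sketch (not taken from the book) of the two standard MATLAB approaches, the tic/toc stopwatch and the built-in Profiler; myAlgorithm and data are placeholder names:

    % Stopwatch timing with tic/toc (wall-clock time)
    tic;
    result = myAlgorithm(data);    % placeholder function and input
    fprintf('Elapsed time: %.3f seconds\n', toc);

    % The MATLAB Profiler gives a per-function and per-line breakdown
    profile on
    result = myAlgorithm(data);
    profile off
    profile viewer                 % opens the interactive profiling report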

The book covers MATLAB vectorization, parallelization (implicit and explicit), optimization, memory management, chunking, and caching. It explains MATLAB's memory model and details how it can be leveraged. It describes the use of GPU, MEX, FPGA, and other forms of compiled code, as well as techniques for speeding up deployed applications. It details specific tips for MATLAB GUI, graphics, and I/O. It also reviews a wide variety of utilities, libraries, and toolboxes that can help to improve performance.
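
As a small taste of the vectorization and preallocation topics listed above, the following illustrative sketch (my own, not an excerpt from the book) contrasts a preallocated loop with an equivalent vectorized expression:

    n = 1e6;
    x = rand(n, 1);

    % Loop version: preallocating y avoids repeated array growth
    y = zeros(n, 1);
    for k = 1:n
        y(k) = x(k)^2 + 3*x(k);
    end

    % Vectorized version: one array expression, no explicit loop
    y2 = x.^2 + 3*x;

    max(abs(y - y2))   % ~0: both forms compute the same values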

Sufficient information is provided to allow readers to immediately apply the suggestions to their own MATLAB programs. Extensive references are also included to allow those who wish to expand the treatment of a particular topic to do so easily.

Supported by an active website and numerous code examples, the book will help readers rapidly attain significant reductions in development costs and program run times.

Reviews

"a very interesting new book on MATLAB® performance covering basic tools and an appropriate range of specific programming techniques. The book seems to take a whole-system approach, helping readers understand the big picture of how to get better performance."
Michelle Hirsch, Ph.D., Head of MATLAB® Product Management, The MathWorks Inc.

Preface xix
Author xxv
1 Introduction to Performance Tuning 1(24)
1.1 Why Should We Bother? 1(2)
1.2 When to Performance-Tune and When Not to Bother 3(2)
1.3 The Iterative Performance-Tuning Cycle 5(4)
1.3.1 Pareto's Principle and the Law of Diminishing Returns 6(1)
1.3.2 When to Stop Tuning 7(2)
1.3.3 Periodic Performance Maintenance 9(1)
1.4 What to Tune 9(1)
1.5 Performance Tuning Pitfalls 10(3)
1.5.1 When to Tune 10(1)
1.5.2 Performance Goals 10(1)
1.5.3 Profiling 11(1)
1.5.4 Optimization 11(2)
1.6 Performance Tuning Tradeoffs 13(3)
1.7 Vertical versus Horizontal Scaling 16(2)
1.8 Perceived versus Actual Performance 18(7)
1.8.1 Presenting Continuous Feedback for Ongoing Tasks 19(1)
1.8.2 Placing the User in Control 20(1)
1.8.3 Enabling User Interaction during Background Processing 20(1)
1.8.4 Streaming Data as It Becomes Available 21(1)
1.8.5 Streamlining the Application 21(1)
1.8.6 Reducing the Run-Time Variability 22(1)
1.8.7 Performance and Real Time 22(3)
2 Profiling MATLAB® Performance 25(34)
2.1 The MATLAB Profiler 26(25)
2.1.1 The Detailed Profiling Report 28(4)
2.1.2 A Sample Profiling Session 32(5)
2.1.3 Programmatic Access to Profiling Data 37(2)
2.1.4 Function-Call History Timeline 39(3)
2.1.5 CPU versus Wall-Clock Profiling 42(1)
2.1.6 Profiling Techniques 42(6)
2.1.7 Profiling Limitations 48(1)
2.1.8 Profiling and MATLAB's JIT 49(2)
2.2 Tic, Toc and Relatives 51(5)
2.2.1 The Built-In tic, toc Functions 51(2)
2.2.2 Comparison between the Profiler and tic, toc 53(1)
2.2.3 Related Tools 54(2)
2.3 Timed Log Files and Printouts 56(1)
2.4 Non-MATLAB Tools 57(2)
3 Standard Performance-Tuning Techniques 59(76)
3.1 Loop Optimization 60(21)
3.1.1 Move Loop-Invariant Code Out of the Loop 60(4)
3.1.2 Minimize Function Call Overheads 64(1)
3.1.3 Employ Early Bail-Outs 65(2)
3.1.4 Simplify Loop Contents 67(2)
3.1.5 Unroll Simple Loops 69(1)
3.1.6 Optimize Nested Loops 70(2)
3.1.7 Switch the Order of Nested Loops 72(2)
3.1.8 Minimize Dereferencing 74(1)
3.1.9 Postpone I/O and Graphics until the Loop Ends 75(1)
3.1.10 Merge or Split Loops 75(1)
3.1.11 Loop Over the Shorter Dimension 76(1)
3.1.12 Run Loops Backwards 77(1)
3.1.13 Partially Optimize a Loop 77(1)
3.1.14 Use the Loop Index Rather than Counters 78(1)
3.1.15 MATLAB's JIT 78(3)
3.2 Data Caching 81(11)
3.2.1 Read-Only Caches 82(1)
3.2.2 Common Subexpression Elimination 83(1)
3.2.3 Persistent Caches 83(2)
3.2.4 Writable Caches 85(3)
3.2.5 A Real-Life Example: Writable Cache 88(2)
3.2.6 Optimizing Cache Fetch Time 90(2)
3.3 Smart Checks Bypass 92(3)
3.4 Exception Handling 95(2)
3.5 Improving Externally Connected Systems 97(12)
3.5.1 Database 97(8)
3.5.2 File System and Network 105(2)
3.5.3 Computer Hardware 107(2)
3.6 Processing Smaller Data Subsets 109(4)
3.6.1 Reading from a Database 109(1)
3.6.2 Reading from a Data File 110(1)
3.6.3 Processing Data 111(2)
3.7 Interrupting Long-Running Tasks 113(2)
3.8 Latency versus Throughput 115(4)
3.8.1 Lazy Evaluation 115(3)
3.8.2 Prefetching 118(1)
3.9 Data Analysis 119(7)
3.9.1 Preprocessing the Data 120(2)
3.9.2 Controlling the Target Accuracy 122(1)
3.9.3 Reducing Problem Complexity 123(3)
3.10 Other Techniques 126(9)
3.10.1 Coding 126(6)
3.10.2 Data 132(1)
3.10.3 General 133(2)
4 MATLAB®-Specific Techniques 135(88)
4.1 Effects of Using Different Data Types 135(8)
4.1.1 Numeric versus Nonnumeric Data Types 135(1)
4.1.2 Nondouble and Multidimensional Arrays 136(1)
4.1.3 Sparse Data 137(3)
4.1.4 Modifying Data Type in Run Time 140(1)
4.1.5 Concatenating Cell Arrays 141(1)
4.1.6 Datasets, Tables, and Categorical Arrays 142(1)
4.1.7 Additional Aspects 142(1)
4.2 Characters and Strings 143(10)
4.2.1 MATLAB's Character/Number Duality 144(1)
4.2.2 Search and Replace 144(5)
4.2.3 Converting Numbers to Strings (and Back) 149(1)
4.2.4 String Comparison 150(2)
4.2.5 Additional Aspects 152(1)
4.3 Using Internal Helper Functions 153(4)
4.3.1 A Sample Debugging Session 154(3)
4.4 Date and Time Functions 157(5)
4.5 Numeric Processing 162(19)
4.5.1 Using inf and NaN 162(1)
4.5.2 Matrix Operations 163(5)
4.5.3 Real versus Complex Math 168(1)
4.5.4 Gradient 169(1)
4.5.5 Optimization 170(5)
4.5.6 Fast Fourier Transform 175(2)
4.5.7 Updating the Math Libraries 177(3)
4.5.8 Random Numbers 180(1)
4.6 Functional Programming 181(12)
4.6.1 Invoking Functions 181(7)
4.6.2 On Cleanup 188(1)
4.6.3 Conditional Constructs 189(1)
4.6.4 Smaller Functions and M-files 190(1)
4.6.5 Effective Use of the MATLAB Path 191(1)
4.6.6 Overloaded Built-In MATLAB Functions 191(2)
4.7 Object-Oriented MATLAB 193(9)
4.7.1 Object Creation 193(2)
4.7.2 Accessing Properties 195(4)
4.7.3 Invoking Methods 199(2)
4.7.4 Using System Objects 201(1)
4.8 MATLAB Start-Up 202(8)
4.8.1 The MATLAB Startup Accelerator 202(4)
4.8.2 Starting MATLAB in Batch Mode 206(1)
4.8.3 Slow MATLAB Start-Up 206(2)
4.8.4 Profiling MATLAB Start-Up 208(1)
4.8.5 Java Start-Up 209(1)
4.9 Additional Techniques 210(13)
4.9.1 Reduce the Number of Workspace Variables 210(1)
4.9.2 Loop Over the Smaller Data Set 211(1)
4.9.3 Referencing Dynamic Struct Fields and Object Properties 212(1)
4.9.4 Use Warning with a Specific Message ID 213(1)
4.9.5 Prefer num2cell Rather than mat2cell 213(1)
4.9.6 Avoid Using containers.Map 213(2)
4.9.7 Use the Latest MATLAB Release and Patches 215(1)
4.9.8 Use is Functions Where Available 216(1)
4.9.9 Specify the Item Type When Using ishghandle or exist 216(1)
4.9.10 Use Problem-Specific Tools 217(1)
4.9.11 Symbolic Arithmetic 217(1)
4.9.12 Simulink 217(2)
4.9.13 Mac OS 219(2)
4.9.14 Additional Ideas 221(2)
5 Implicit Parallelization (Vectorization and Indexing) 223(62)
5.1 Introduction to MATLAB® Vectorization 223(10)
5.1.1 So What Exactly Is MATLAB Vectorization? 223(3)
5.1.2 Indexing Techniques 226(3)
5.1.3 Logical Indexing 229(4)
5.2 Built-In Vectorization Functions 233(14)
5.2.1 Functions for Common Indexing Usage Patterns 233(1)
5.2.2 Functions That Create Arrays 234(1)
5.2.3 Functions That Accept Vectorized Data 234(4)
5.2.4 Functions That Apply Another Function in a Vectorized Manner 238(7)
5.2.5 Set-Based Functions 245(2)
5.3 Simple Vectorization Examples 247(9)
5.3.1 Trivial Transformations 247(1)
5.3.2 Partial Data Summation 248(1)
5.3.3 Thresholding 249(1)
5.3.4 Cumulative Sum 250(1)
5.3.5 Data Binning 251(1)
5.3.6 Using meshgrid and bsxfun 251(1)
5.3.7 A meshgrid Variant 252(1)
5.3.8 Euclidean Distances 253(1)
5.3.9 Range Search 254(1)
5.3.10 Matrix Computations 255(1)
5.4 Repetitive Data 256(5)
5.4.1 A Simple Example 258(1)
5.4.2 Using repmat Replacements 259(1)
5.4.3 Repetitions of Internal Elements 260(1)
5.5 Multidimensional Data 261(3)
5.6 Real-Life Example: Synthetic Aperture Radar Matched Filter 264(4)
5.6.1 Naive Approach 264(2)
5.6.2 Using Vectorization 266(2)
5.7 Effective Use of MATLAB Vectorization 268(17)
5.7.1 Vectorization Is Not Always Faster 268(1)
5.7.2 Applying Smart Indexing 269(1)
5.7.3 Breaking a Problem into Simpler Vectorizable Subproblems 270(1)
5.7.4 Using Vectorization as Replacement for Iterative Data Updates 271(1)
5.7.5 Minimizing Temporary Data Allocations 271(1)
5.7.6 Preprocessing Inputs, Rather than Postprocessing the Output 272(1)
5.7.7 Interdependent Loop Iterations 272(2)
5.7.8 Reducing Loop Complexity 274(2)
5.7.9 Reducing Processing Complexity 276(1)
5.7.10 Nested Loops 276(1)
5.7.11 Analyzing Loop Pattern to Extract a Vectorization Rule 277(1)
5.7.12 Vectorizing Structure Elements 278(1)
5.7.13 Limitations of Internal Parallelization 279(1)
5.7.14 Using MATLAB's Character/Number Duality 280(1)
5.7.15 Acklam's Vectorization Guide and Toolbox 281(1)
5.7.16 Using Linear Algebra to Avoid Looping Over Matrix Indexes 281(1)
5.7.17 Intersection of Curves: Reader Exercise 282(3)
6 Explicit Parallelization Using MathWorks Toolboxes 285(68)
6.1 The Parallel Computing Toolbox --- CPUs 286(22)
6.1.1 Using parfor-Loops 287(5)
6.1.2 Using spmd 292(4)
6.1.3 Distributed and Codistributed Arrays 296(4)
6.1.4 Interactive Parallel Development with pmode 300(2)
6.1.5 Profiling Parallel Blocks 302(3)
6.1.6 Running Example: Using parfor Loops 305(1)
6.1.7 Running Example: Using spmd 306(2)
6.2 The Parallel Computing Toolbox --- GPUs 308(20)
6.2.1 Introduction to General-Purpose GPU Computing 308(2)
6.2.2 Parallel Computing with GPU Arrays 310(6)
6.2.3 Running Example: Using GPU Arrays 316(1)
6.2.4 Running Example: Using Multiple GPUs with spmd Construct 317(2)
6.2.5 Executing CUDA Kernels from MATLAB 319(3)
6.2.6 Running Example: Using CUDA Kernels 322(2)
6.2.7 Programming GPU Using MATLAB MEX 324(3)
6.2.8 Accessing GPUs from within Parallel Blocks 327(1)
6.3 The MATLAB Distributed Computing Server 328(14)
6.3.1 Using MDCS 329(2)
6.3.2 Parallel Jobs Overview 331(2)
6.3.3 Setting Up a Scheduler Interface 333(3)
6.3.4 Programming Independent Jobs 336(1)
6.3.5 Programming Communicating Jobs 337(2)
6.3.6 Using Batch Processing on a Cluster 339(1)
6.3.7 Running Example: Using Communicating Jobs on a Cluster 340(2)
6.4 Techniques for Effective Parallelization in MATLAB 342(11)
6.4.1 General Performance Tips 342(3)
6.4.2 Performance Tips for Parallel CPU Programming 345(3)
6.4.3 Performance Tips for Parallel GPU Programming 348(5)
7 Explicit Parallelization by Other Means 353(36)
7.1 GPU Acceleration Using Jacket 353(11)
7.1.1 Key Ideas of Jacket Design 353(1)
7.1.2 Jacket Interface to MATLAB 354(2)
7.1.3 Using Parallel gfor Loops 356(1)
7.1.4 Compiling M-Code to a CUDA Kernel with gcompile 357(1)
7.1.5 Multi-GPU Support 358(2)
7.1.6 Running Example: Using Parallel gfor-Loop 360(1)
7.1.7 Running Example: Using gcompile 361(1)
7.1.8 Running Example: Using spmd and Multi-GPU Support 362(2)
7.2 Alternative/Related Technologies 364(10)
7.2.1 Using GPUmat 364(4)
7.2.2 Multicore Library for Parallel Processing on Multiple Cores 368(2)
7.2.3 Using ArrayFire Library via MEX Interface 370(3)
7.2.4 Additional Alternatives 373(1)
7.3 Multithreading 374(11)
7.3.1 Using POSIX Threads 374(3)
7.3.2 Using OpenMP 377(2)
7.3.3 Using Java Threads 379(3)
7.3.4 Using .Net Threads 382(2)
7.3.5 Using MATLAB Timers 384(1)
7.4 Spawning External Processes 385(4)
8 Using Compiled Code 389(70)
8.1 Using MEX Code 389(24)
8.1.1 Introduction 389(1)
8.1.2 Our First MEX Function 390(4)
8.1.3 MEX Function Inputs and Outputs 394(1)
8.1.4 Accessing MATLAB Data 395(6)
8.1.5 A Usage Example 401(3)
8.1.6 Memory Management 404(2)
8.1.7 Additional Aspects 406(7)
8.2 Using the MATLAB Coder Toolbox 413(20)
8.2.1 Code Adaptation 414(2)
8.2.2 A Simple Example: Euclidean Distances Algorithm 416(4)
8.2.3 A More Realistic Example: Dijkstra's Shortest-Path Algorithm 420(7)
8.2.4 Configuring the Coder for Maximal Performance 427(6)
8.3 Porting MATLAB Algorithms to FPGA 433(12)
8.3.1 Algorithm Adaptation 434(1)
8.3.2 HDL Workflow 435(9)
8.3.3 Run-Time Measurements 444(1)
8.4 Deployed (Compiled) MATLAB Programs 445(5)
8.5 Using External Libraries 450(9)
8.5.1 Introduction 450(1)
8.5.2 Java 451(1)
8.5.3 NAG: Numerical Algorithms Group 452(3)
8.5.4 MATLAB Toolbox 455(1)
8.5.5 MCT: Multi-Precision Computing Toolbox 456(1)
8.5.6 Additional Libraries 457(2)
9 Memory-Related Techniques 459(66)
9.1 Why Memory Affects Performance 459(2)
9.2 Profiling Memory Usage 461(14)
9.2.1 Workspace Browser 461(1)
9.2.2 Whos Function 462(1)
9.2.3 Memory Function 463(1)
9.2.4 Feature memstats and feature dumpmem 464(2)
9.2.5 Feature mtic/mtoc 466(1)
9.2.6 Profiler's Memory-Monitoring Feature 467(2)
9.2.7 Profiling Java Memory 469(1)
9.2.8 Using Third-Party Tools 470(2)
9.2.9 Using the Operating System's Tools 472(1)
9.2.10 Format debug 473(2)
9.3 MATLAB's Memory Storage and Looping Order 475(6)
9.3.1 Memory Storage of MATLAB Array Data 475(2)
9.3.2 Loop Down Columns Rather than Rows 477(3)
9.3.3 Effect of Subindexing 480(1)
9.4 Array Memory Allocation 481(12)
9.4.1 Dynamic Array Growth 481(1)
9.4.2 Effects of Incremental JIT Improvements 482(1)
9.4.3 Preallocate Large Data Arrays 483(5)
9.4.4 Preallocation Location within the Code 488(1)
9.4.5 Preallocating Nondouble Data 488(3)
9.4.6 Alternatives for Enlarging Arrays 491(2)
9.5 Minimizing Memory Allocations 493(25)
9.5.1 MATLAB's Copy-on-Write Mechanism 493(3)
9.5.2 In-Place Data Manipulations 496(6)
9.5.3 Reusing Variables (with Utmost Care) 502(3)
9.5.4 Clearing Unused Workspace Variables 505(2)
9.5.5 Global and Persistent Variables 507(2)
9.5.6 Scoping Rules and Nested Functions 509(2)
9.5.7 Passing Handle References (Not Data) to Functions 511(1)
9.5.8 Reducing Data Precision/Type 512(1)
9.5.9 Devectorizing Huge Data Operations 513(1)
9.5.10 Assign Anonymous Functions in Dedicated Wrapper Functions 514(2)
9.5.11 Represent Objects by Simpler Data Types 516(2)
9.6 Memory Packing 518(1)
9.7 Additional Recommendations 519(6)
9.7.1 MATLAB Variables 519(3)
9.7.2 Java Objects 522(3)
10 Graphics and GUI 525(68)
10.1 Initial Graphs Generation 526(23)
10.1.1 Reduce the Number of Plotting Elements 526(4)
10.1.2 Use Simple or No Plot Markers 530(2)
10.1.3 Use Vectorized Data for Plotting 532(1)
10.1.4 Use Static Axes Properties 533(1)
10.1.5 Only Use drawnow when You Are Finished Plotting 534(2)
10.1.6 Use Low-Level Rather than High-Level Graphic Functions 536(1)
10.1.7 Generate Plots while the Figure Is Hidden 537(1)
10.1.8 Apply Data Reduction to Plotted Data to Fit the Display Area 538(2)
10.1.9 Reuse Plot Axes 540(1)
10.1.10 Avoid Using the axes Function 540(2)
10.1.11 Use the Painters Figure Renderer with Fast Axes DrawMode 542(1)
10.1.12 Images 543(2)
10.1.13 Patches and Volume Surfaces 545(1)
10.1.14 Colorbars 546(1)
10.1.15 Legends 546(1)
10.1.16 Reopen Presaved Figure and/or Plot Axes 547(1)
10.1.17 Set Axes SortMethod to Childorder and Reduce Transparency 548(1)
10.2 Updating Graphs and Images in Real Time 549(10)
10.2.1 Axes Update 549(1)
10.2.2 Plot and Image Update 550(2)
10.2.3 Legends and Colorbars 552(1)
10.2.4 Accessing Object Properties 553(1)
10.2.5 Listeners and Callbacks 554(1)
10.2.6 Trading Accuracy for Speed 555(2)
10.2.7 Avoid Update to the Same Value 557(1)
10.2.8 Cache Graphic Handles 557(1)
10.2.9 Avoid Interlacing Property get and set 558(1)
10.2.10 Use hgtransform to Transform Graphic Objects 559(1)
10.3 Figure Window Performance Aspects 559(6)
10.3.1 Use Hardware-Accelerated OpenGL Renderer and Functionality 560(1)
10.3.2 Set a Nondefault WVisual/XVisual Property Value 561(1)
10.3.3 Disable BackingStore 562(1)
10.3.4 Disable DoubleBuffer 562(1)
10.3.5 Set a Manual DitherMapMode on Old Platforms 563(1)
10.3.6 Reuse Figure Windows 563(1)
10.3.7 Sharing Data between GUI Callback Functions 564(1)
10.3.8 Disable Anti-Aliasing 564(1)
10.3.9 Use Smaller and Fewer Figure Windows 564(1)
10.4 GUI Preparation and Responsiveness 565(26)
10.4.1 Creating the Initial GUI 565(10)
10.4.2 Presenting User Feedback 575(9)
10.4.3 Performing Asynchronous Actions 584(7)
10.5 Avoiding Common Pitfalls 591(2)
10.5.1 Minimize Intentional Pauses 591(1)
10.5.2 Delete Unused Graphic Objects 592(1)
11 I/O Techniques 593(38)
11.1 Reducing the Amount of I/O 593(2)
11.2 Avoiding Repeated File Access 595(2)
11.3 Reading and Writing Files 597(10)
11.3.1 Text versus Binary Format 597(1)
11.3.2 Text File Pre-Processing 598(1)
11.3.3 Memory-Mapped Files 598(2)
11.3.4 Reading Files Efficiently 600(4)
11.3.5 Writing Files Efficiently 604(3)
11.4 Data Compression and the save Function 607(7)
11.5 Excel Files (and Microsoft Office Files in General) 614(6)
11.6 Image Files 620(2)
11.7 Using Java and C I/O 622(3)
11.8 Searching, Parsing, and Comparing Files 625(2)
11.8.1 Searching for Files 625(1)
11.8.2 Parsing and Scanning Files 626(1)
11.9 Additional Aspects 627(4)
11.9.1 Using p-Code 627(1)
11.9.2 Network I/O 628(2)
11.9.3 Miscellaneous 630(1)
Appendix A Additional Resources 631(12)
Appendix B Performance Tuning Checklist 643(2)
References and Notes 645(78)
Index 723
Yair M. Altman