
Program Specialization [Hardback]

  • Format: Hardback, 544 pages, height x width x thickness: 243x163x34 mm, weight: 957 g
  • Publication date: 18-Dec-2012
  • Publisher: ISTE Ltd and John Wiley & Sons Inc
  • ISBN-10: 1848213999
  • ISBN-13: 9781848213999
This book presents the principles and techniques of program specialization — a general method for making programs faster (and possibly smaller) when some of their inputs can be known in advance. As an illustration, it describes the architecture of Tempo, an offline program specializer for C that can also specialize code at runtime, and provides figures for concrete applications in various domains. Technical details address issues related to program analysis precision, value reification, incomplete program specialization, strategies for exploiting specialized programs, incremental specialization, and data specialization. The book, which targets both researchers and software engineers, also opens up scientific and industrial perspectives.
Chapter 1 Main Principles of Program Specialization 1(42)
1.1 Specialized program
2(14)
1.1.1 Program specialization
2(4)
1.1.2 Context of specialization
6(2)
1.1.3 Specialization of a fragment of program
8(1)
1.1.4 Partial computations
9(2)
1.1.5 Range of specializations
11(4)
1.1.6 Equivalence between the specialized program and the generic program
15(1)
1.2 Specializing to improve performance
16(6)
1.2.1 Execution time
17(1)
1.2.2 Memory space
17(1)
1.2.3 Effect of the compiler
18(2)
1.2.4 Opacity of the code generated
20(1)
1.2.5 Effect of the memory cache
21(1)
1.3 Automatic specialization
22(5)
1.3.1 Specializer
22(2)
1.3.2 Operation of specialization
24(1)
1.3.3 Execution times
25(1)
1.3.4 Advantages and disadvantages to automatic specialization
25(2)
1.4 Main applications of specialization
27(6)
1.4.1 Application 1: compiling using an interpreter
27(3)
1.4.2 Application 2: transforming an interpreter into a compiler
30(1)
1.4.3 Application 3: creating a compiler generator
31(2)
1.5 Specialization times
33(4)
1.5.1 Compile-time specialization
33(1)
1.5.2 Runtime specialization
34(1)
1.5.3 Specialization server
35(1)
1.5.4 Specialized code cache
36(1)
1.6 Financial viability of specialization
37(6)
1.6.1 Specialization gain
38(1)
1.6.2 Specialization time
39(1)
1.6.3 Size of the specializer
40(1)
1.6.4 Specialization before execution
40(2)
1.6.5 Runtime specialization and break-even point
42(1)
Chapter 2 Specialization Techniques 43(28)
2.1 Transforming specialization programs
44(13)
2.1.1 Partial evaluation
44(1)
2.1.2 Specialization strategies
44(1)
2.1.3 Formulation of specialization using general transformations
45(3)
2.1.4 Formulation of specialization using ad hoc transformations
48(2)
2.1.5 Techniques for executing precomputations
50(1)
2.1.6 Speculative specialization
51(4)
2.1.7 Interprocedural specialization
55(1)
2.1.8 Polyvariant specialization
56(1)
2.2 Termination of specialization
57(3)
2.2.1 Online control
58(1)
2.2.2 Offline control
59(1)
2.3 Correctness of specialization
60(5)
2.3.1 Soundness, completeness and correctness
60(1)
2.3.2 Remedying laziness
61(1)
2.3.3 Execution error handling
62(1)
2.3.4 Portability
63(1)
2.3.5 Pre-processor
64(1)
2.4 Other forms of specialization
65(6)
2.4.1 Driving and supercompilation
65(1)
2.4.2 Generalized partial computation
66(1)
2.4.3 Configurable partial computation
66(1)
2.4.4 Program slicing
67(1)
2.4.5 Comparison with a compiler
67(1)
2.4.6 Comparison with a multilevel language
68(3)
Chapter 3 Offline Specialization 71(46)
3.1 Main principles of offline specialization
72(20)
3.1.1 Specification of input binding times
72(2)
3.1.2 Binding-time analysis
74(4)
3.1.3 Specialization by binding-time interpretation
78(1)
3.1.4 Action analysis
79(2)
3.1.5 Specialization by action interpretation
81(1)
3.1.6 Generating extension
82(2)
3.1.7 Compiler generator
84(1)
3.1.8 Generation of a specialized program
85(3)
3.1.9 Offline specializer
88(1)
3.1.10 Correction of offline specialization
89(1)
3.1.11 Specialization grammar
89(2)
3.1.12 Polyvariant offline specialization
91(1)
3.2 Compared advantages of offline specialization
92(7)
3.2.1 Evaluation a priori of the specialization degree
92(1)
3.2.2 Visualization of specialization information
93(1)
3.2.3 Declaration of expected binding times
94(1)
3.2.4 Specialization debugging
95(1)
3.2.5 Improvement of binding times
95(1)
3.2.6 Specialization speed
96(1)
3.2.7 Specialization time
97(1)
3.2.8 Task and expertise distribution
97(1)
3.2.9 Intellectual property
98(1)
3.2.10 Limits of offline specialization
98(1)
3.3 Main components of binding-time analysis
99(10)
3.3.1 Definition and use of memory locations
100(2)
3.3.2 Standard binding times and composition operations
102(1)
3.3.3 Static-and-dynamic binding time
103(1)
3.3.4 Undefined values and dead code
104(3)
3.3.5 Preliminary alias analysis requirement
107(1)
3.3.6 Formal definition and analysis implementation
108(1)
3.4 When static inputs become dynamic
109(8)
3.4.1 Initial and actual binding times
109(1)
3.4.2 Preservation of specialization interfaces
110(3)
3.4.3 Program specialization with revision of the static inputs
113(4)
Chapter 4 A Specializer for C: Tempo 117(28)
4.1 History
118(3)
4.1.1 Origins
118(1)
4.1.2 The "Tempo project"
119(1)
4.1.3 The tool Tempo
120(1)
4.2 Disruptive technologies
121(2)
4.2.1 Revision of specialization principles
121(1)
4.2.2 Revision of specialization analyses
121(1)
4.2.3 Revision of specialization transformations
122(1)
4.3 Architecture
123(9)
4.3.1 Preprocessing
123(5)
4.3.2 Processing
128(3)
4.3.3 Post-processing
131(1)
4.3.4 Interface
131(1)
4.4 Engineering economics
132(7)
4.4.1 Pragmatics of knowledge
133(1)
4.4.2 Language construction coverage
133(2)
4.4.3 The case of function pointers
135(4)
4.5 Beyond Tempo
139(3)
4.5.1 Certified runtime specialization
140(1)
4.5.2 Java program specialization
140(1)
4.5.3 C++ program specialization
141(1)
4.5.4 Specialization declaration and verification
141(1)
4.6 Other specializers for the C language
142(3)
4.6.1 C-Mix
142(1)
4.6.2 DyC
143(2)
Chapter 5 Applications of Specialization 145(40)
5.1 Applications in operating systems and networks
146(13)
5.1.1 Sun's RPC
148(7)
5.1.2 BSD Packet Filter
155(1)
5.1.3 Unix signals
156(1)
5.1.4 Chorus IPC
157(1)
5.1.5 Summary
158(1)
5.2 Applications to numerical computation
159(1)
5.3 Applications to compilation using an interpreter
160(4)
5.4 Applications to the optimization of software architectures
164(16)
5.4.1 Issue of flexibility
165(1)
5.4.2 Sources of inefficiency in the implementation of software architectures
166(3)
5.4.3 Improving efficiency while preserving flexibility
169(1)
5.4.4 Some case studies
169(5)
5.4.5 Framework of application of specialization
174(2)
5.4.6 Other approaches to optimizing software architectures
176(2)
5.4.7 Program specialization to optimize software architectures
178(2)
5.5 Specialization as a software engineering tool
180(5)
5.5.1 High-level optimizer
180(1)
5.5.2 Think generic
181(2)
5.5.3 Predefined adaptable components
183(1)
5.5.4 Other uses of specialization in software engineering
183(2)
Chapter 6 Precision of Program Analysis 185(36)
6.1 Choosing the precision of an analysis
186(3)
6.1.1 Degrees of freedom
186(1)
6.1.2 Too much of a good thing
187(1)
6.1.3 Targeting a program class
188(1)
6.1.4 Analysis combination
188(1)
6.1.5 Different analysis sensitivities
189(1)
6.2 Sensitivity to (control) flow
189(4)
6.2.1 Concrete specialization requirements
192(1)
6.3 Sensitivity to speculative evaluation
193(1)
6.3.1 Concrete specialization requirements
193(1)
6.4 Sensitivity to data structure components
194(2)
6.4.1 Concrete specialization requirements
196(1)
6.5 Sensitivity to data structure instances
196(5)
6.5.1 Concrete specialization requirements
200(1)
6.6 Sensitivity to use (of memory locations)
201(7)
6.6.1 Mapping of definitions and uses
201(1)
6.6.2 Static-and-dynamic binding time
202(1)
6.6.3 Sensitivity to the dynamic use of memory locations
203(1)
6.6.4 Case of non-reifiable values
204(2)
6.6.5 Sensitivity to the static use of memory locations
206(1)
6.6.6 Sensitivity to the use of memory locations
207(1)
6.7 Sensitivity to use of literal constants
208(3)
6.7.1 Concrete specialization requirements
210(1)
6.8 Intraprocedural versus interprocedural analysis
211(2)
6.8.1 Concrete specialization requirements
212(1)
6.9 Sensitivity to the context (of function call)
213(1)
6.9.1 Concrete specialization requirements
214(1)
6.10 Sensitivity to the return value
214(2)
6.10.1 Concrete specialization requirements
215(1)
6.11 Other precision forms
216(1)
6.11.1 Sensitivity to the execution context of code templates
216(1)
6.11.2 Sensitivity to continuations
217(1)
6.12 Precision of the existing C specializers
217(4)
6.12.1 Precision of Tempo
218(1)
6.12.2 Precision of C-Mix
219(1)
6.12.3 Precision of DyC
220(1)
Chapter 7 Reification: From a Value to a Term 221(28)
7.1 Different types of reification
222(4)
7.1.1 Direct literal reification
222(1)
7.1.2 Latent literal lifting
223(1)
7.1.3 Indirect literal lifting
223(1)
7.1.4 Computable lifting
223(1)
7.1.5 Complete and partial lifting
224(1)
7.1.6 Incremental lifting
224(1)
7.1.7 Memory zone of lifting
225(1)
7.1.8 Optimized lifting
225(1)
7.1.9 Instrumented lifting
226(1)
7.2 Constraints of lifting
226(5)
7.2.1 Reflexiveness constraints
226(1)
7.2.2 Syntactic constraints
227(1)
7.2.3 Semantic constraints
228(1)
7.2.4 Influence of the moment of specialization
229(1)
7.2.5 Efficiency constraints
230(1)
7.2.6 Decision to lift and processing of non-liftable values
230(1)
7.3 Lifting of immutable data
231(3)
7.3.1 Lifting of an elementary piece of data
231(1)
7.3.2 Lifting of an immutable composite piece of data
232(2)
7.4 Lifting of a non-shared mutable piece of data
234(3)
7.4.1 Lifting of a non-shared table
234(2)
7.4.2 Reification of a structure
236(1)
7.5 Reification of a shared mutable piece of data
237(1)
7.6 Reification of a reference
238(5)
7.6.1 Reference and memory address
238(1)
7.6.2 Symbolic entities, dynamic data, and visibility
239(1)
7.6.3 From a pointer to a symbol
239(1)
7.6.4 Type-based reification of a reference
240(1)
7.6.5 Offset-based reification of a pointer
241(1)
7.6.6 Profitability of pointer reification
242(1)
7.7 Physical data sharing between execution times
243(2)
7.7.1 Lifespan
243(1)
7.7.2 Controlled access
244(1)
7.8 Reification and binding time
245(4)
7.8.1 Dynamic treatment of non-reifiable values
245(1)
7.8.2 Superfluous copy elimination
245(4)
Chapter 8 Specialization of Incomplete Programs 249(34)
8.1 Constraints on the code to be specialized
250(4)
8.1.1 The "outside" of a program
250(1)
8.1.2 Availability of the code and resource sharing
251(1)
8.1.3 The specialization profitability in a development process
252(1)
8.1.4 Complex scaling of specialization techniques
252(1)
8.1.5 Software component specialization
252(1)
8.1.6 Modular and separated specialization
253(1)
8.2 Specialization module and language module
254(2)
8.2.1 Specialization module
254(1)
8.2.2 Modularity in C
255(1)
8.2.3 Modularity in Tempo
255(1)
8.3 Revision of the expression of specialization
256(8)
8.3.1 Componential semantics
256(2)
8.3.2 Interactions of an incomplete program
258(1)
8.3.3 Observability and equivalence of incomplete programs
259(1)
8.3.4 Observability and specialization
260(2)
8.3.5 Incomplete program specialization and binding times
262(2)
8.4 Calling context of a function to be specialized
264(2)
8.4.1 Calling context and specialization
265(1)
8.4.2 Modeling of a calling context
266(1)
8.5 Effect of external function calls
266(3)
8.5.1 External functions and specialization
266(2)
8.5.2 Modeling of an external function
268(1)
8.6 Abstract modeling languages
269(3)
8.6.1 Existing modeling languages
269(2)
8.6.2 Advantages and drawbacks
271(1)
8.7 Concrete modeling
272(11)
8.7.1 Concrete models
272(2)
8.7.2 Concrete calling contexts
274(1)
8.7.3 Concrete calling effects
275(3)
8.7.4 Advantages and drawbacks
278(2)
8.7.5 Experiment with concrete models
280(3)
Chapter 9 Exploitation of Specialization 283(26)
9.1 Means of exploiting specialization
284(2)
9.1.1 Specialization of programs versus specialization of subprograms
284(1)
9.1.2 Correctly exploiting specialization
285(1)
9.1.3 Knowing the input values
286(1)
9.2 Invariant execution context
286(2)
9.2.1 Fixed exploitation of a specialized program
287(1)
9.2.2 Fixed exploitation of a specialized function
287(1)
9.2.3 Fixed exploitation of runtime specialization
287(1)
9.3 Optimistic specialization
288(6)
9.3.1 Case-based specialization of a function call
288(1)
9.3.2 Moments of optimistic specialization
289(1)
9.3.3 Profitability of optimistic specialization
290(1)
9.3.4 Specialized function selection
290(1)
9.3.5 Explicit optimistic specialization
291(1)
9.3.6 Implicit optimistic specialization: The Trick
292(2)
9.3.7 Comparison of explicit and implicit optimistic specializations
294(1)
9.4 Selection by necessity of a specialized function
294(4)
9.4.1 Case-based specialization of a function
295(1)
9.4.2 Centralized selection by necessity
295(1)
9.4.3 Selection by necessity at the call site
296(1)
9.4.4 Inlining of selection-by-necessity functions
297(1)
9.5 Selection by anticipation of a specialized function
298(11)
9.5.1 General idea
298(2)
9.5.2 Specialized-call variable and indirect call
300(1)
9.5.3 Adaptation to different conformations of specialization
301(1)
9.5.4 Generic case by default
301(2)
9.5.5 Modification of a determined context
303(1)
9.5.6 Guards and complex execution contexts
303(2)
9.5.7 Placing of guards
305(1)
9.5.8 For or against selection by anticipation
306(3)
Chapter 10 Incremental Runtime Specialization 309(34)
10.1 Data availability staging
310(3)
10.1.1 Staging and program specialization
311(1)
10.1.2 Staging in program loops
311(1)
10.1.3 Advantages of incremental specialization
312(1)
10.2 Models for incremental specialization
313(9)
10.2.1 Mandatory or optional incrementality
314(1)
10.2.2 Multiple program specialization
314(2)
10.2.3 Multilevel generating extension
316(1)
10.2.4 Multilevel compiler generator
317(1)
10.2.5 Bi-level iterated specialization
317(2)
10.2.6 Understanding the nature of incremental specialization
319(1)
10.2.7 Multiple self-application
320(1)
10.2.8 Multistage specialization
321(1)
10.3 Binding-time analyses for incremental specialization
322(1)
10.3.1 Multilevel binding-time analysis
322(1)
10.3.2 Iterated bi-level binding-time analysis
322(1)
10.3.3 Comparison of the multilevel analysis with the iterated bi-level analysis
323(1)
10.4 Implementation
323(12)
10.4.1 (Bi-level) runtime specialization
324(4)
10.4.2 Iterated runtime specialization
328(4)
10.4.3 Optimized iterated specialization
332(1)
10.4.4 Experimental results
333(2)
10.5 Compared advantages of iterated specialization
335(4)
10.5.1 Degree of specialization
335(3)
10.5.2 Engineering
338(1)
10.6 Related works
339(2)
10.7 Improving incremental runtime specialization
341(2)
Chapter 11 Data Specialization 343(50)
11.1 Program specialization and loop unrolling
344(6)
11.1.1 Principle of (offline) program specialization
345(1)
11.1.2 Staging in program loops
345(3)
11.1.3 Manual control of the loop unrolling
348(2)
11.2 General concept of data specialization
350(10)
11.2.1 Principle of data specialization
351(2)
11.2.2 Example of specialization encoded in the form of data
353(5)
11.2.3 Loading integrated at the first execution
358(1)
11.2.4 Data specialization times
358(1)
11.2.5 Exploitation of data specialization
359(1)
11.3 Caching and binding time
360(5)
11.3.1 Selection of the static expressions to be cached
360(3)
11.3.2 Speculative specialization
363(1)
11.3.3 Loader-reader analyses and separations
364(1)
11.3.4 Common inputs at the loader and at the reader
365(1)
11.4 Structuring the cache
365(6)
11.4.1 Cache structured by expressions to be stored
365(1)
11.4.2 Cache structured by iteration
366(1)
11.4.3 Cache structured in homogeneous data flow
366(2)
11.4.4 Cache structured in flow of heterogeneous data
368(1)
11.4.5 Cache structured as a flow of aligned heterogeneous data
368(1)
11.4.6 Cache structured as a flow of compact heterogeneous data
369(1)
11.4.7 Dynamic management of the cache size
370(1)
11.5 The question of control in data specialization
371(4)
11.5.1 Preserved control
371(2)
11.5.2 Cached control
373(1)
11.5.3 Rebuilt control
374(1)
11.6 Reconstructions of control
375(7)
11.6.1 Reconstruction of a simple loop
375(3)
11.6.2 Reconstruction of nested loops
378(2)
11.6.3 Reconstruction and interprocedural representation
380(2)
11.7 Program specialization versus data specialization
382(5)
11.7.1 Comparison of program and data specialization
382(1)
11.7.2 From statement locality to data locality
383(1)
11.7.3 Combination of two specialization encodings
384(3)
11.8 Experimental results
387(6)
11.8.1 Integration in Tempo
387(1)
11.8.2 Experiments on various program types
387(6)
Chapter 12 Scientific Perspectives 393(28)
12.1 Improving the specialized code
394(10)
12.1.1 Improving the analyses of specialization
394(2)
12.1.2 Choice of online specialization among alternative offline techniques
396(1)
12.1.3 Partial unfolding of specialization grammars
397(1)
12.1.4 Post-processing for runtime specialization
398(2)
12.1.5 Binding-time improvement
400(1)
12.1.6 Better integration of program specializations and data specializations
401(1)
12.1.7 Choice of differed marshaling
402(2)
12.2 Complexity of the process of specialization
404(4)
12.2.1 Optimizing using a specializer
404(2)
12.2.2 Optimizing using a compiler
406(1)
12.2.3 Fine optimization with a compiler versus with a specializer
407(1)
12.3 Simplifying the process of specialization
408(10)
12.3.1 Automation of tasks peripheral to the specialization
408(1)
12.3.2 Automatically seeking and exploiting specialization opportunities
409(4)
12.3.3 Integration into a compiler
413(1)
12.3.4 Monitoring and debugging of binding times
414(3)
12.3.5 Binding-time improvement
417(1)
12.4 Integration into a software engineering process
418(3)
12.4.1 Integration into the software's lifecycle
418(1)
12.4.2 Methodology for writing specializable programs
419(1)
12.4.3 A specialization-oriented programming environment
420(1)
Chapter 13 Conclusion: From Prototype to Product 421(14)
13.1 The race for performance
422(1)
13.1.1 Pareto's law
422(1)
13.1.2 Proebsting's law
423(1)
13.2 A different viewpoint
423(2)
13.2.1 Specializing for a better performance
424(1)
13.2.2 Specialization to better produce
424(1)
13.3 Difficulties for investing in software engineering
425(4)
13.3.1 Critical thinking
425(2)
13.3.2 Critical path
427(1)
13.3.3 Critical mass
428(1)
13.3.4 Critical moment
429(1)
13.3.5 Critical situation
429(1)
13.4 Niche uses
429(3)
13.4.1 Niche applications
430(1)
13.4.2 Niche functionalities
431(1)
13.5 Developing a specialization platform
432(3)
13.5.1 Magnitude of the task
432(1)
13.5.2 The economic model
433(1)
13.5.3 Specializing to study
434(1)
Appendix. Basic Facts about Languages and Programs 435(52)
A1 Programming languages
436(9)
A1.1 Code
436(3)
A1.2 Data
439(1)
A1.3 Programs and subprograms
440(2)
A1.4 Input
442(2)
A1.5 Output
444(1)
A2 Semantics
445(13)
A2.1 Semantic functions
446(2)
A2.2 Semantic framework
448(1)
A2.3 Multiple or undefined semantics
449(2)
A2.4 Non-determinism
451(1)
A2.5 Under-specification
451(1)
A2.6 Undefined errors
452(1)
A2.7 Defined errors
453(1)
A2.8 Non-termination and infinite data
454(1)
A2.9 Output of an abnormal execution
455(2)
A2.10 Interactions of a program and an external code
457(1)
A3 Program equivalence
458(4)
A3.1 Domain of definition
458(1)
A3.2 Strict or lazy equivalence
458(2)
A3.3 Non-termination with partial output
460(1)
A3.4 Equivalence of subprograms
460(2)
A4 Execution
462(11)
A4.1 Execution process
462(3)
A4.2 Interpretation
465(4)
A4.3 Compilation
469(3)
A4.4 Observation, modification and code generation
472(1)
A5 Program performance
473(6)
A5.1 Execution time
474(3)
A5.2 Memory size
477(1)
A5.3 Performance optimization
478(1)
A6 Program analysis
479(2)
A6.1 Abstract executions
479(1)
A6.2 Flow analysis
480(1)
A6.3 Result of a program analysis
481(1)
A7 Program transformation
481(6)
A7.1 Program transformation
481(1)
A7.2 Observation and equivalence
482(1)
A7.3 Soundness, completeness and correctness
483(1)
A7.4 Transformation of subprograms
484(1)
A7.5 Transformation and termination algorithm
485(2)
Bibliography 487(36)
Index 523
Renaud Marlet is a senior researcher at École des Ponts ParisTech (ENPC), head of the IMAGINE research group, and interim deputy director of the Laboratoire d'Informatique Gaspard Monge (LIGM) for École des Ponts ParisTech, France.