
E-book: Engineering a Compiler

Keith D. Cooper (Department of Computer Science, Rice University, Houston, Texas, USA), Linda Torczon (Principal Investigator on the Massively Scalar Compiler Project, Rice University, Houston, Texas, USA)
  • Format: PDF+DRM
  • Publication date: 20-Aug-2022
  • Publisher: Morgan Kaufmann Publishers Inc.
  • Language: English
  • ISBN-13: 9780128189269
  • Price: 87,35 €*
  • * The price is final, i.e. no further discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You must also create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), you must install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, you must install Adobe Digital Editions. (This is a free application designed specifically for reading e-books. It should not be confused with Adobe Reader, which is most likely already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

Engineering a Compiler, Third Edition covers the latest developments in compiler technology, with new chapters focusing on semantic elaboration (the problems that arise in generating code from the ad hoc syntax-directed translation schemes in a generated parser), on runtime support for naming and addressability, and on code shape for expressions, assignments, and control structures. Leading educators and researchers Keith Cooper and Linda Torczon have revised this popular text with a fresh approach to learning important techniques for constructing a modern compiler, combining basic principles with pragmatic insights from their own experience building state-of-the-art compilers.
  • Presents in-depth treatments of algorithms and techniques used in the front end of a modern compiler
  • Pays particular attention to code optimization and code generation, both primary areas of recent research and development
  • Focuses on how compilers (and interpreters) implement abstraction, tying the underlying knowledge to students’ own experience and to the languages in which they have been taught to program
  • Covers bottom-up methods of register allocation at the local scope
Table of Contents

About the Authors
About the Cover
Preface
Chapter 1 Overview of Compilation
1.1 Introduction
1.2 Compiler Structure
1.3 Overview of Translation
1.3.1 The Front End
1.3.2 The Optimizer
1.3.3 The Back End
1.4 Engineering
1.5 Summary and Perspective
Chapter Notes
Exercises
Chapter 2 Scanners
2.1 Introduction
2.2 Recognizing Words
2.2.1 A Formalism for Recognizers
2.2.2 Recognizing More Complex Words
2.3 Regular Expressions
2.3.1 Formalizing the Notation
2.3.2 Examples of Regular Expressions
2.3.3 Closure Properties of REs
2.4 From Regular Expression to Scanner
2.4.1 Nondeterministic Finite Automata
2.4.2 RE to NFA: Thompson's Construction
2.4.3 NFA to DFA: The Subset Construction
2.4.4 DFA to Minimal DFA
2.4.5 Using a DFA as a Scanner
2.5 Implementing Scanners
2.5.1 Table-Driven Scanners
2.5.2 Direct-Coded Scanners
2.5.3 Hand-Coded Scanners
2.5.4 Practical Implementation Issues
2.6 Advanced Topics
2.6.1 DFA to Regular Expression
2.6.2 Closure-Free Regular Expressions
2.6.3 An Alternative DFA Minimization Algorithm
2.7 Summary and Perspective
Chapter Notes
Exercises
Chapter 3 Parsers
3.1 Introduction
3.2 Expressing Syntax
3.2.1 Why Not Use Regular Expressions?
3.2.2 Context-Free Grammars
3.2.3 More Complex Examples
3.2.4 Encoding Meaning into Structure
3.2.5 Discovering a Derivation for an Input String
3.3 Top-Down Parsing
3.3.1 Transforming a Grammar
3.3.2 Top-Down Recursive-Descent Parsers
3.3.3 Table-Driven LL(1) Parsers
3.4 Bottom-Up Parsing
3.4.1 The LR(1) Parsing Algorithm
3.4.2 Building LR(1) Tables
3.4.3 Errors in the Table Construction
3.5 Practical Issues
3.5.1 Error Recovery
3.5.2 Unary Operators
3.5.3 Handling Context-Sensitive Ambiguity
3.6 Advanced Topics
3.6.1 Optimizing a Grammar
3.6.2 Reducing the Size of LR(1) Tables
3.7 Summary and Perspective
Chapter Notes
Exercises
Chapter 4 Intermediate Representations
4.1 Introduction
4.2 An IR Taxonomy
4.3 Graphical IRs
4.3.1 Syntax-Related Trees
4.3.2 Graphs
4.4 Linear IRs
4.4.1 Stack-Machine Code
4.4.2 Three-Address Code
4.4.3 Representing Linear Codes
4.4.4 Building the CFG from Linear Code
4.5 Symbol Tables
4.5.1 Name Resolution
4.5.2 Table Implementation
4.6 Name Spaces
4.6.1 Name Spaces in the IR
4.6.2 Static Single-Assignment Form
4.7 Placement of Values in Memory
4.7.1 Memory Models
4.7.2 Keeping Values in Registers
4.7.3 Assigning Values to Data Areas
4.8 Summary and Perspective
Chapter Notes
Exercises
Chapter 5 Syntax-Driven Translation
5.1 Introduction
5.2 Background
5.3 Syntax-Driven Translation
5.3.1 A First Example
5.3.2 Translating Expressions
5.3.3 Translating Control-Flow Statements
5.4 Modeling the Naming Environment
5.4.1 Lexical Hierarchies
5.4.2 Inheritance Hierarchies
5.4.3 Visibility
5.4.4 Performing Compile-Time Name Resolution
5.5 Type Information
5.5.1 Uses for Types in Translation
5.5.2 Components of a Type System
5.5.3 Type Inference for Expressions
5.6 Storage Layout
5.6.1 Storage Classes and Data Areas
5.6.2 Layout Within a Virtual Address Space
5.6.3 Storage Assignment
5.6.4 Fitting Storage Assignment into Translation
5.6.5 Alignment Restrictions and Padding
5.7 Advanced Topics
5.7.1 Grammar Structure and Associativity
5.7.2 Harder Problems in Type Inference
5.7.3 Relative Offsets and Cache Performance
5.8 Summary and Perspective
Chapter Notes
Exercises
Chapter 6 Implementing Procedures
6.1 Introduction
6.2 Background
6.3 Runtime Support for Naming
6.3.1 Runtime Support for Algol-Like Languages
6.3.2 Runtime Support for Object-Oriented Languages
6.4 Passing Values Between Procedures
6.4.1 Passing Parameters
6.4.2 Returning Values
6.4.3 Establishing Addressability for Nonlocal Variables
6.5 Standardized Linkages
6.6 Advanced Topics
6.6.1 Explicit Heap Management
6.6.2 Implicit Deallocation
6.7 Summary and Perspective
Chapter Notes
Exercises
Chapter 7 Code Shape
7.1 Introduction
7.2 Arithmetic Operators
7.2.1 Function Calls in an Expression
7.2.2 Mixed-Type Expressions
7.2.3 Reducing Demand for Registers
7.3 Access Methods for Values
7.3.1 Access Methods for Scalar Variables
7.3.2 Access Methods for Aggregates
7.3.3 Range Checks
7.4 Boolean and Relational Operators
7.4.1 Hardware Support for Relational Expressions
7.4.2 Variations in Hardware Support
7.5 Control-Flow Constructs
7.5.1 Conditional Execution
7.5.2 Loops and Iteration
7.5.3 Case Statements
7.6 Operations on Strings
7.6.1 String Length
7.6.2 String Assignment
7.6.3 String Concatenation
7.6.4 Optimization of String Operations
7.7 Procedure Calls
7.7.1 Evaluating Actual Parameters
7.7.2 Saving and Restoring Registers
7.8 Summary and Perspective
Chapter Notes
Exercises
Chapter 8 Introduction to Optimization
8.1 Introduction
8.2 Background
8.2.1 Examples
8.2.2 Considerations for Optimization
8.2.3 Opportunities for Optimization
8.3 Scope of Optimization
8.4 Local Optimization
8.4.1 Local Value Numbering
8.4.2 Tree-Height Balancing
8.5 Regional Optimization
8.5.1 Superlocal Value Numbering
8.5.2 Loop Unrolling
8.6 Global Optimization
8.6.1 Finding Uninitialized Variables with Live Sets
8.6.2 Global Code Placement
8.7 Interprocedural Optimization
8.7.1 Inline Substitution
8.7.2 Procedure Placement
8.7.3 Pragmatics of Interprocedural Optimization
8.8 Summary and Perspective
Chapter Notes
Exercises
Chapter 9 Data-Flow Analysis
9.1 Introduction
9.2 Iterative Data-Flow Analysis
9.2.1 Dominance
9.2.2 Live-Variable Analysis
9.2.3 Limitations on Data-Flow Analysis
9.2.4 Other Data-Flow Problems
9.3 Static Single-Assignment Form
9.3.1 A Naive Method for Building SSA Form
9.3.2 Dominance Frontiers
9.3.3 Placing φ-Functions
9.3.4 Renaming
9.3.5 Translation out of SSA Form
9.3.6 Using SSA Form
9.4 Interprocedural Analysis
9.4.1 Call-Graph Construction
9.4.2 Interprocedural Constant Propagation
9.5 Advanced Topics
9.5.1 Structural Data-Flow Analysis and Reducibility
9.5.2 Speeding up the Iterative Dominance Framework
9.6 Summary and Perspective
Chapter Notes
Exercises
Chapter 10 Scalar Optimization
10.1 Introduction
10.2 Dead Code Elimination
10.2.1 Eliminating Useless Code
10.2.2 Eliminating Useless Control Flow
10.2.3 Eliminating Unreachable Code
10.3 Code Motion
10.3.1 Lazy Code Motion
10.3.2 Code Hoisting
10.4 Specialization
10.4.1 Tail-Call Optimization
10.4.2 Leaf-Call Optimization
10.4.3 Parameter Promotion
10.5 Redundancy Elimination
10.5.1 Value Identity Versus Name Identity
10.5.2 Dominator-Based Value Numbering
10.6 Enabling Other Transformations
10.6.1 Superblock Cloning
10.6.2 Procedure Cloning
10.6.3 Loop Unswitching
10.6.4 Renaming
10.7 Advanced Topics
10.7.1 Combining Optimizations
10.7.2 Strength Reduction
10.7.3 Choosing an Optimization Sequence
10.8 Summary and Perspective
Chapter Notes
Exercises
Chapter 11 Instruction Selection
11.1 Introduction
11.2 Background
11.2.1 The Impact of ISA Design on Selection
11.2.2 Motivating Example
11.2.3 Ad-Hoc Matching
11.3 Selection via Peephole Optimization
11.3.1 Peephole Optimization
11.3.2 The Simplifier
11.3.3 The Matcher
11.4 Selection via Tree-Pattern Matching
11.4.1 Representing Trees
11.4.2 Rewrite Rules
11.4.3 Computing Tilings
11.4.4 Tools
11.5 Advanced Topics
11.5.1 Learning Peephole Patterns
11.5.2 Generating Instruction Sequences
11.6 Summary and Perspective
Chapter Notes
Exercises
Chapter 12 Instruction Scheduling
12.1 Introduction
12.2 Background
12.2.1 Architectural Features That Affect Performance
12.2.2 The Instruction Scheduling Problem
12.3 Local Scheduling
12.3.1 The Algorithm
12.3.2 Renaming
12.3.3 Building the Dependence Graph
12.3.4 Computing Priorities
12.3.5 List Scheduling
12.3.6 Forward Versus Backward List Scheduling
12.4 Regional Scheduling
12.4.1 Superlocal Scheduling
12.4.2 Trace Scheduling
12.4.3 Cloning for Context
12.5 Advanced Topics
12.5.1 The Strategy Behind Software Pipelining
12.5.2 An Algorithm for Software Pipelining
12.5.3 A Final Example
12.6 Summary and Perspective
Chapter Notes
Exercises
Chapter 13 Register Allocation
13.1 Introduction
13.2 Background
13.2.1 A Name Space for Allocation: Live Ranges
13.2.2 Interference
13.2.3 Spill Code
13.2.4 Register Classes
13.3 Local Register Allocation
13.3.1 Renaming in the Local Allocator
13.3.2 Allocation and Assignment
13.4 Global Allocation via Coloring
13.4.1 Find Global Live Ranges
13.4.2 Build an Interference Graph
13.4.3 Coalesce Copy Operations
13.4.4 Estimate Global Spill Costs
13.4.5 Color the Graph
13.4.6 Insert Spill and Restore Code
13.4.7 Handling Overlapping Register Classes
13.5 Advanced Topics
13.5.1 Variations on Coalescing
13.5.2 Variations on Spilling
13.5.3 Other Forms of Live Ranges
13.6 Summary and Perspective
Chapter Notes
Exercises
Chapter 14 Runtime Optimization
14.1 Introduction
14.2 Background
14.2.1 Execution Model
14.2.2 Compilation Triggers
14.2.3 Granularity of Optimization
14.2.4 Sources of Improvement
14.2.5 Building a Runtime Optimizer
14.3 Hot-Trace Optimization
14.3.1 Flow of Execution
14.3.2 Linking Traces
14.4 Hot-Method Optimization
14.4.1 Hot Methods in a Mixed-Mode Environment
14.4.2 Hot Methods in a Native-Code Environment
14.5 Advanced Topics
14.5.1 Levels of Optimization
14.5.2 On-Stack Replacement
14.5.3 Code Cache Management
14.5.4 Managing Changes to the Source Code
14.6 Summary and Perspective
Chapter Notes
Exercises
Appendix A ILOC
A.1 Introduction
A.2 Naming Conventions
A.3 Computational Operations
A.4 Data Movement Operations
A.5 Control-Flow Operations
A.6 Opcode Summary Tables
Appendix B Data Structures
B.1 Introduction
B.2 Representing Sets
B.2.1 Representing Sets as Ordered Lists
B.2.2 Representing Sets as Bit Vectors
B.2.3 Representing Sparse Sets
B.2.4 The Role of Hash Tables
B.3 IR Implementation
B.3.1 Graphical Intermediate Representations
B.3.2 Linear Intermediate Forms
B.4 Implementing Hash Tables
B.4.1 Choosing a Hash Function
B.4.2 Open Hashing
B.4.3 Open Addressing
B.4.4 Storing Symbol Records
B.5 A Flexible Symbol-Table Design
Appendix Notes
Bibliography
Index
Dr. Keith D. Cooper, Professor in the Department of Computer Science at Rice University, leads the Massively Scalar Compiler Project at Rice, which investigates issues relating to optimization and code generation for modern machines. He is also a member of the Center for High Performance Software Research, the Computer and Information Technology Institute, and the Center for Multimedia Communication, all at Rice. He teaches courses in compiler construction at the undergraduate and graduate levels.

Linda Torczon is a principal investigator on the Massively Scalar Compiler Project at Rice University and on the Grid Application Development Software Project, sponsored by the Next Generation Software program of the National Science Foundation. She also serves as the executive director of HiPerSoft and of the Los Alamos Computer Science Institute. Her research interests include code generation, interprocedural data-flow analysis and optimization, and programming environments.