Computer Organization and Design ARM Edition: The Hardware Software Interface [Paperback / softback]

3.97/5 (1696 ratings by Goodreads)
David A. Patterson (Pardee Professor of Computer Science, Emeritus, University of California, Berkeley, USA), John L. Hennessy (Departments of Electrical Engineering and Computer Science, Stanford University, USA)
The new ARM Edition of Computer Organization and Design features a subset of the ARMv8-A architecture, which is used to present the fundamentals of hardware technologies, assembly language, computer arithmetic, pipelining, memory hierarchies, and I/O.

With the post-PC era now upon us, Computer Organization and Design moves forward to explore this generational change with examples, exercises, and material highlighting the emergence of mobile computing and the Cloud. Updated content featuring tablet computers, Cloud infrastructure, and the ARM (mobile computing devices) and x86 (cloud computing) architectures is included.

An online companion Web site provides links to a free version of the ARMv8 Foundation Platform (a virtual platform incorporating an AArch64 architecture simulation model), as well as additional advanced content for further study, appendices, glossary, references, and recommended reading.

More info

Features the ARMv8-A architecture to present the fundamentals of computer system design!
Preface xv
1 Computer Abstractions and Technology 2(58)
1.1 Introduction 3(8)
1.2 Eight Great Ideas in Computer Architecture 11(2)
1.3 Below Your Program 13(3)
1.4 Under the Covers 16(8)
1.5 Technologies for Building Processors and Memory 24(4)
1.6 Performance 28(12)
1.7 The Power Wall 40(3)
1.8 The Sea Change: The Switch from Uniprocessors to Multiprocessors 43(3)
1.9 Real Stuff: Benchmarking the Intel Core i7 46(3)
1.10 Fallacies and Pitfalls 49(3)
1.11 Concluding Remarks 52(2)
1.12 Historical Perspective and Further Reading 54(1)
1.13 Exercises 54(6)
2 Instructions: Language of the Computer 60(126)
2.1 Introduction 62(1)
2.2 Operations of the Computer Hardware 63(4)
2.3 Operands of the Computer Hardware 67(8)
2.4 Signed and Unsigned Numbers 75(7)
2.5 Representing Instructions in the Computer 82(8)
2.6 Logical Operations 90(3)
2.7 Instructions for Making Decisions 93(7)
2.8 Supporting Procedures in Computer Hardware 100(10)
2.9 Communicating with People 110(5)
2.10 LEGv8 Addressing for Wide Immediates and Addresses 115(10)
2.11 Parallelism and Instructions: Synchronization 125(3)
2.12 Translating and Starting a Program 128(9)
2.13 A C Sort Example to Put It All Together 137(9)
2.14 Arrays versus Pointers 146(4)
2.15 Advanced Material: Compiling C and Interpreting Java 150(1)
2.16 Real Stuff: MIPS Instructions 150(2)
2.17 Real Stuff: ARMv7 (32-bit) Instructions 152(2)
2.18 Real Stuff: x86 Instructions 154(9)
2.19 Real Stuff: The Rest of the ARMv8 Instruction Set 163(6)
2.20 Fallacies and Pitfalls 169(2)
2.21 Concluding Remarks 171(2)
2.22 Historical Perspective and Further Reading 173(1)
2.23 Exercises 174(12)
3 Arithmetic for Computers 186(68)
3.1 Introduction 188(1)
3.2 Addition and Subtraction 188(3)
3.3 Multiplication 191(6)
3.4 Division 197(8)
3.5 Floating Point 205(25)
3.6 Parallelism and Computer Arithmetic: Subword Parallelism 230(2)
3.7 Real Stuff: Streaming SIMD Extensions and Advanced Vector Extensions in x86 232(2)
3.8 Real Stuff: The Rest of the ARMv8 Arithmetic Instructions 234(4)
3.9 Going Faster: Subword Parallelism and Matrix Multiply 238(4)
3.10 Fallacies and Pitfalls 242(3)
3.11 Concluding Remarks 245(3)
3.12 Historical Perspective and Further Reading 248(1)
3.13 Exercises 249(5)
4 The Processor 254(132)
4.1 Introduction 256(4)
4.2 Logic Design Conventions 260(3)
4.3 Building a Datapath 263(8)
4.4 A Simple Implementation Scheme 271(12)
4.5 An Overview of Pipelining 283(14)
4.6 Pipelined Datapath and Control 297(19)
4.7 Data Hazards: Forwarding versus Stalling 316(12)
4.8 Control Hazards 328(8)
4.9 Exceptions 336(6)
4.10 Parallelism via Instructions 342(13)
4.11 Real Stuff: The ARM Cortex-A53 and Intel Core i7 Pipelines 355(8)
4.12 Going Faster: Instruction-Level Parallelism and Matrix Multiply 363(3)
4.13 Advanced Topic: An Introduction to Digital Design Using a Hardware Design Language to Describe and Model a Pipeline and More Pipelining Illustrations 366(1)
4.14 Fallacies and Pitfalls 366(1)
4.15 Concluding Remarks 367(1)
4.16 Historical Perspective and Further Reading 368(1)
4.17 Exercises 368(18)
5 Large and Fast: Exploiting Memory Hierarchy 386(128)
5.1 Introduction 388(4)
5.2 Memory Technologies 392(5)
5.3 The Basics of Caches 397(15)
5.4 Measuring and Improving Cache Performance 412(20)
5.5 Dependable Memory Hierarchy 432(6)
5.6 Virtual Machines 438(3)
5.7 Virtual Memory 441(24)
5.8 A Common Framework for Memory Hierarchy 465(7)
5.9 Using a Finite-State Machine to Control a Simple Cache 472(5)
5.10 Parallelism and Memory Hierarchy: Cache Coherence 477(4)
5.11 Parallelism and Memory Hierarchy: Redundant Arrays of Inexpensive Disks 481(1)
5.12 Advanced Material: Implementing Cache Controllers 482(1)
5.13 Real Stuff: The ARM Cortex-A53 and Intel Core i7 Memory Hierarchies 482(5)
5.14 Real Stuff: The Rest of the ARMv8 System and Special Instructions 487(1)
5.15 Going Faster: Cache Blocking and Matrix Multiply 488(3)
5.16 Fallacies and Pitfalls 491(5)
5.17 Concluding Remarks 496(1)
5.18 Historical Perspective and Further Reading 497(1)
5.19 Exercises 497(17)
6 Parallel Processors from Client to Cloud 514
6.1 Introduction 516(2)
6.2 The Difficulty of Creating Parallel Processing Programs 518(5)
6.3 SISD, MIMD, SIMD, SPMD, and Vector 523(7)
6.4 Hardware Multithreading 530(3)
6.5 Multicore and Other Shared Memory Multiprocessors 533(5)
6.6 Introduction to Graphics Processing Units 538(7)
6.7 Clusters, Warehouse Scale Computers, and Other Message-Passing Multiprocessors 545(5)
6.8 Introduction to Multiprocessor Network Topologies 550(3)
6.9 Communicating to the Outside World: Cluster Networking 553(1)
6.10 Multiprocessor Benchmarks and Performance Models 554(10)
6.11 Real Stuff: Benchmarking and Rooflines of the Intel Core i7 960 and the NVIDIA Tesla GPU 564(5)
6.12 Going Faster: Multiple Processors and Matrix Multiply 569(3)
6.13 Fallacies and Pitfalls 572(2)
6.14 Concluding Remarks 574(3)
6.15 Historical Perspective and Further Reading 577(1)
6.16 Exercises 577
ACM named David A. Patterson a recipient of the 2017 ACM A.M. Turing Award for pioneering a systematic, quantitative approach to the design and evaluation of computer architectures with enduring impact on the microprocessor industry. David A. Patterson is the Pardee Chair of Computer Science, Emeritus, at the University of California, Berkeley. His teaching has been honored by the Distinguished Teaching Award from the University of California, the Karlstrom Award from ACM, and the Mulligan Education Medal and Undergraduate Teaching Award from IEEE. Patterson received the IEEE Technical Achievement Award and the ACM Eckert-Mauchly Award for contributions to RISC, and he shared the IEEE Johnson Information Storage Award for contributions to RAID. He also shared the IEEE John von Neumann Medal and the C & C Prize with John Hennessy. Like his co-author, Patterson is a Fellow of the American Academy of Arts and Sciences, the Computer History Museum, ACM, and IEEE, and he was elected to the National Academy of Engineering, the National Academy of Sciences, and the Silicon Valley Engineering Hall of Fame. He served on the Information Technology Advisory Committee to the U.S. President, as chair of the CS division in the Berkeley EECS department, as chair of the Computing Research Association, and as President of ACM. This record led to Distinguished Service Awards from ACM, CRA, and SIGARCH.

ACM named John L. Hennessy a recipient of the 2017 ACM A.M. Turing Award for pioneering a systematic, quantitative approach to the design and evaluation of computer architectures with enduring impact on the microprocessor industry. John L. Hennessy is a Professor of Electrical Engineering and Computer Science at Stanford University, where he has been a member of the faculty since 1977 and was, from 2000 to 2016, its tenth President. Prof. Hennessy is a Fellow of the IEEE and ACM; a member of the National Academy of Engineering, the National Academy of Sciences, and the American Philosophical Society; and a Fellow of the American Academy of Arts and Sciences. Among his many awards are the 2001 Eckert-Mauchly Award for his contributions to RISC technology, the 2001 Seymour Cray Computer Engineering Award, and the 2000 John von Neumann Award, which he shared with David Patterson. He has also received seven honorary doctorates.