
E-book: Computer Organization and Design RISC-V Edition: The Hardware Software Interface

3.97/5 (1,698 ratings on Goodreads)
John L. Hennessy (Departments of Electrical Engineering and Computer Science, Stanford University, USA), David A. Patterson (Pardee Professor of Computer Science, Emeritus, University of California at Berkeley, USA)
  • Format: EPUB+DRM
  • Price: €106.40*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has issued this e-book in encrypted form, which means you must install special software to read it. You must also create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions. (This is a free application designed specifically for reading e-books. It should not be confused with Adobe Reader, which is probably already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

Computer Organization and Design RISC-V Edition: The Hardware Software Interface, Second Edition, the award-winning textbook from Patterson and Hennessy that is used by more than 40,000 students per year, continues to present the most comprehensive and readable introduction to this core computer science topic. This version of the book features the RISC-V open source instruction set architecture, the first open source architecture designed for use in modern computing environments such as cloud computing, mobile devices, and other embedded systems. Readers will enjoy an online companion website that provides advanced content for further study, appendices, glossary, references, links to software tools, and more.
  • Covers parallelism in-depth, with examples and content highlighting parallel hardware and software topics
  • Shifts the focus from the 64-bit address ISA to the 32-bit address ISA for RISC-V, because the 32-bit RISC-V ISA is simpler to explain and 32-bit address computers are still best suited for applications like embedded computing and IoT
  • Includes new sections in each chapter on Domain Specific Architectures (DSA)
  • Provides updates on all the real-world examples in the book
Table of Contents

Preface
1 Computer Abstractions and Technology
1.1 Introduction
1.2 Seven Great Ideas in Computer Architecture
1.3 Below Your Program
1.4 Under the Covers
1.5 Technologies for Building Processors and Memory
1.6 Performance
1.7 The Power Wall
1.8 The Sea Change: The Switch from Uniprocessors to Multiprocessors
1.9 Real Stuff: Benchmarking the Intel Core i7
1.10 Going Faster: Matrix Multiply in Python
1.11 Fallacies and Pitfalls
1.12 Concluding Remarks
1.13 Historical Perspective and Further Reading
1.14 Self-Study
1.15 Exercises
2 Instructions: Language of the Computer
2.1 Introduction
2.2 Operations of the Computer Hardware
2.3 Operands of the Computer Hardware
2.4 Signed and Unsigned Numbers
2.5 Representing Instructions in the Computer
2.6 Logical Operations
2.7 Instructions for Making Decisions
2.8 Supporting Procedures in Computer Hardware
2.9 Communicating with People
2.10 RISC-V Addressing for Wide Immediates and Addresses
2.11 Parallelism and Instructions: Synchronization
2.12 Translating and Starting a Program
2.13 A C Sort Example to Put It All Together
2.14 Arrays versus Pointers
2.15 Advanced Material: Compiling C and Interpreting Java
2.16 Real Stuff: MIPS Instructions
2.17 Real Stuff: ARMv7 (32-bit) Instructions
2.18 Real Stuff: ARMv8 (64-bit) Instructions
2.19 Real Stuff: x86 Instructions
2.20 Real Stuff: The Rest of the RISC-V Instruction Set
2.21 Going Faster: Matrix Multiply in C
2.22 Fallacies and Pitfalls
2.23 Concluding Remarks
2.24 Historical Perspective and Further Reading
2.25 Self-Study
2.26 Exercises
3 Arithmetic for Computers
3.1 Introduction
3.2 Addition and Subtraction
3.3 Multiplication
3.4 Division
3.5 Floating Point
3.6 Parallelism and Computer Arithmetic: Subword Parallelism
3.7 Real Stuff: Streaming SIMD Extensions and Advanced Vector Extensions in x86
3.8 Going Faster: Subword Parallelism and Matrix Multiply
3.9 Fallacies and Pitfalls
3.10 Concluding Remarks
3.11 Historical Perspective and Further Reading
3.12 Self-Study
3.13 Exercises
4 The Processor
4.1 Introduction
4.2 Logic Design Conventions
4.3 Building a Datapath
4.4 A Simple Implementation Scheme
4.5 Multicycle Implementation
4.6 An Overview of Pipelining
4.7 Pipelined Datapath and Control
4.8 Data Hazards: Forwarding versus Stalling
4.9 Control Hazards
4.10 Exceptions
4.11 Parallelism via Instructions
4.12 Putting It All Together: The Intel Core i7 6700 and ARM Cortex-A53
4.13 Going Faster: Instruction-Level Parallelism and Matrix Multiply
4.14 Advanced Topic: An Introduction to Digital Design Using a Hardware Design Language to Describe and Model a Pipeline and More Pipelining Illustrations
4.15 Fallacies and Pitfalls
4.16 Concluding Remarks
4.17 Historical Perspective and Further Reading
4.18 Self-Study
4.19 Exercises
5 Large and Fast: Exploiting Memory Hierarchy
5.1 Introduction
5.2 Memory Technologies
5.3 The Basics of Caches
5.4 Measuring and Improving Cache Performance
5.5 Dependable Memory Hierarchy
5.6 Virtual Machines
5.7 Virtual Memory
5.8 A Common Framework for Memory Hierarchy
5.9 Using a Finite-State Machine to Control a Simple Cache
5.10 Parallelism and Memory Hierarchy: Cache Coherence
5.11 Parallelism and Memory Hierarchy: Redundant Arrays of Inexpensive Disks
5.12 Advanced Material: Implementing Cache Controllers
5.13 Real Stuff: The ARM Cortex-A8 and Intel Core i7 Memory Hierarchies
5.14 Real Stuff: The Rest of the RISC-V System and Special Instructions
5.15 Going Faster: Cache Blocking and Matrix Multiply
5.16 Fallacies and Pitfalls
5.17 Concluding Remarks
5.18 Historical Perspective and Further Reading
5.19 Self-Study
5.20 Exercises
6 Parallel Processors from Client to Cloud
6.1 Introduction
6.2 The Difficulty of Creating Parallel Processing Programs
6.3 SISD, MIMD, SIMD, SPMD, and Vector
6.4 Hardware Multithreading
6.5 Multicore and Other Shared Memory Multiprocessors
6.6 Introduction to Graphics Processing Units
6.7 Domain-Specific Architectures
6.8 Clusters, Warehouse Scale Computers, and Other Message-Passing Multiprocessors
6.9 Introduction to Multiprocessor Network Topologies
6.10 Communicating to the Outside World: Cluster Networking
6.11 Multiprocessor Benchmarks and Performance Models
6.12 Real Stuff: Benchmarking the Google TPUv3 Supercomputer and an NVIDIA Volta GPU Cluster
6.13 Going Faster: Multiple Processors and Matrix Multiply
6.14 Fallacies and Pitfalls
6.15 Concluding Remarks
6.16 Historical Perspective and Further Reading
6.17 Self-Study
6.18 Exercises
Appendix A The Basics of Logic Design
A.1 Introduction
A.2 Gates, Truth Tables, and Logic Equations
A.3 Combinational Logic
A.4 Using a Hardware Description Language
A.5 Constructing a Basic Arithmetic Logic Unit
A.6 Faster Addition: Carry Lookahead
A.7 Clocks
A.8 Memory Elements: Flip-Flops, Latches, and Registers
A.9 Memory Elements: SRAMs and DRAMs
A.10 Finite-State Machines
A.11 Timing Methodologies
A.12 Field Programmable Devices
A.13 Concluding Remarks
A.14 Exercises
Index
David Patterson is the Pardee Professor of Computer Science, Emeritus, at the University of California at Berkeley, which he joined after graduating from UCLA in 1977. His teaching has been honored by the Distinguished Teaching Award from the University of California, the Karlstrom Award from ACM, and the Mulligan Education Medal and Undergraduate Teaching Award from IEEE. Prof. Patterson received the IEEE Technical Achievement Award and the ACM Eckert-Mauchly Award for contributions to RISC, and he shared the IEEE Johnson Information Storage Award for contributions to RAID. He also shared the IEEE John von Neumann Medal and the C & C Prize with John Hennessy. Like his co-author, Prof. Patterson is a Fellow of the American Academy of Arts and Sciences, the Computer History Museum, ACM, and IEEE, and he was elected to the National Academy of Engineering, the National Academy of Sciences, and the Silicon Valley Engineering Hall of Fame. He served on the Information Technology Advisory Committee to the U.S. President, as chair of the CS division in the Berkeley EECS department, as chair of the Computing Research Association, and as President of ACM. This record led to Distinguished Service Awards from ACM, CRA, and SIGARCH.

John L. Hennessy is a Professor of Electrical Engineering and Computer Science at Stanford University, where he has been a member of the faculty since 1977 and was, from 2000 to 2016, its tenth President. ACM named him a recipient of the 2017 ACM A.M. Turing Award for pioneering a systematic, quantitative approach to the design and evaluation of computer architectures with enduring impact on the microprocessor industry. Prof. Hennessy is a Fellow of the IEEE and ACM; a member of the National Academy of Engineering, the National Academy of Science, and the American Philosophical Society; and a Fellow of the American Academy of Arts and Sciences. Among his many awards are the 2001 Eckert-Mauchly Award for his contributions to RISC technology, the 2001 Seymour Cray Computer Engineering Award, and the 2000 John von Neumann Award, which he shared with David Patterson. He has also received seven honorary doctorates.