E-book: Computer Organization and Design RISC-V Edition: The Hardware Software Interface

3.97/5 (1,693 ratings on Goodreads)
John L. Hennessy (Departments of Electrical Engineering and Computer Science, Stanford University, USA), David A. Patterson (Pardee Professor of Computer Science, Emeritus, University of California at Berkeley, USA)
  • Format: EPUB+DRM
  • Price: €75.06*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You also need to create an Adobe ID. More info here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), you need to install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, you need to install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

The new RISC-V Edition of Computer Organization and Design features the RISC-V open source instruction set architecture, the first open source architecture designed to be used in modern computing environments such as cloud computing, mobile devices, and other embedded systems.

With the post-PC era now upon us, Computer Organization and Design moves forward to explore this generational change with examples, exercises, and material highlighting the emergence of mobile computing and the Cloud. Updated content featuring tablet computers, Cloud infrastructure, and the ARM (mobile computing devices) and x86 (cloud computing) architectures is included.

An online companion Web site provides advanced content for further study, appendices, glossary, references, and recommended reading.

  • Covers parallelism in depth with examples and content highlighting parallel hardware and software topics
  • Features the Intel Core i7, RISC-V, and NVIDIA Fermi GPU as real-world examples throughout the book
  • Adds a new concrete example, "Going Faster," to demonstrate how understanding hardware can inspire software optimizations that improve performance by 200X (see the sketch after this list)
  • Discusses and highlights the "Eight Great Ideas" of computer architecture: Performance via Parallelism; Performance via Pipelining; Performance via Prediction; Design for Moore's Law; Hierarchy of Memories; Abstraction to Simplify Design; Make the Common Case Fast; and Dependability via Redundancy.
  • Includes a full set of updated exercises
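
To give a flavor of the "Going Faster" theme, here is a minimal sketch in C (not code from the book) of one hardware-aware rewrite of matrix multiply: a naive triple loop next to a cache-blocked version. The matrix size N, the tile size BLOCK, and the function names are arbitrary choices for this illustration.

    /* Hypothetical sketch, not code from the book: naive vs. cache-blocked
       matrix multiply. N, BLOCK, and the function names are illustrative.
       Compile with e.g.: cc -O2 dgemm_sketch.c */
    #include <stdio.h>
    #include <stdlib.h>

    #define N 512        /* matrix dimension; assumed divisible by BLOCK */
    #define BLOCK 32     /* tile size chosen so three tiles fit in cache */

    /* Naive triple loop: C += A * B on row-major doubles. Once the matrices
       outgrow the cache, each column of B is refetched from memory N times. */
    static void dgemm_naive(const double *a, const double *b, double *c) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                double sum = c[i * N + j];
                for (int k = 0; k < N; k++)
                    sum += a[i * N + k] * b[k * N + j];
                c[i * N + j] = sum;
            }
    }

    /* Blocked version: the same arithmetic, reordered over BLOCK x BLOCK
       tiles so each tile of A, B, and C is reused while cache-resident. */
    static void dgemm_blocked(const double *a, const double *b, double *c) {
        for (int ii = 0; ii < N; ii += BLOCK)
            for (int jj = 0; jj < N; jj += BLOCK)
                for (int kk = 0; kk < N; kk += BLOCK)
                    for (int i = ii; i < ii + BLOCK; i++)
                        for (int j = jj; j < jj + BLOCK; j++) {
                            double sum = c[i * N + j];
                            for (int k = kk; k < kk + BLOCK; k++)
                                sum += a[i * N + k] * b[k * N + j];
                            c[i * N + j] = sum;
                        }
    }

    int main(void) {
        double *a = calloc(N * N, sizeof(double));
        double *b = calloc(N * N, sizeof(double));
        double *c = calloc(N * N, sizeof(double));
        if (!a || !b || !c) return 1;
        dgemm_naive(a, b, c);    /* time these two calls separately */
        dgemm_blocked(a, b, c);  /* to see the cache-blocking payoff */
        printf("done\n");
        free(a); free(b); free(c);
        return 0;
    }

Cache blocking alone does not reach 200X; the book's "Going Faster" thread (Sections 3.8, 4.12, 5.15, and 6.12) layers subword parallelism, instruction-level parallelism, blocking, and multiple processors on top of one another to get there.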

Other information

Covers relevant examples, exercises, and material highlighting the emergence of mobile computing and the cloud
Preface  xv
1 Computer Abstractions and Technology  2
1.1 Introduction  3
1.2 Eight Great Ideas in Computer Architecture  11
1.3 Below Your Program  13
1.4 Under the Covers  16
1.5 Technologies for Building Processors and Memory  24
1.6 Performance  28
1.7 The Power Wall  40
1.8 The Sea Change: The Switch from Uniprocessors to Multiprocessors  43
1.9 Real Stuff: Benchmarking the Intel Core i7  46
1.10 Fallacies and Pitfalls  49
1.11 Concluding Remarks  52
1.12 Historical Perspective and Further Reading  54
1.13 Exercises  54
2 Instructions: Language of the Computer  60
2.1 Introduction  62
2.2 Operations of the Computer Hardware  63
2.3 Operands of the Computer Hardware  67
2.4 Signed and Unsigned Numbers  74
2.5 Representing Instructions in the Computer  81
2.6 Logical Operations  89
2.7 Instructions for Making Decisions  92
2.8 Supporting Procedures in Computer Hardware  98
2.9 Communicating with People  108
2.10 RISC-V Addressing for Wide Immediates and Addresses  113
2.11 Parallelism and Instructions: Synchronization  121
2.12 Translating and Starting a Program  124
2.13 A C Sort Example to Put it All Together  133
2.14 Arrays versus Pointers  141
2.15 Advanced Material: Compiling C and Interpreting Java  144
2.16 Real Stuff: MIPS Instructions  145
2.17 Real Stuff: x86 Instructions  146
2.18 Real Stuff: The Rest of the RISC-V Instruction Set  155
2.19 Fallacies and Pitfalls  157
2.20 Concluding Remarks  159
2.21 Historical Perspective and Further Reading  162
2.22 Exercises  162
3 Arithmetic for Computers  172
3.1 Introduction  174
3.2 Addition and Subtraction  174
3.3 Multiplication  177
3.4 Division  183
3.5 Floating Point  191
3.6 Parallelism and Computer Arithmetic: Subword Parallelism  216
3.7 Real Stuff: Streaming SIMD Extensions and Advanced Vector Extensions in x86  217
3.8 Going Faster: Subword Parallelism and Matrix Multiply  218
3.9 Fallacies and Pitfalls  222
3.10 Concluding Remarks  225
3.11 Historical Perspective and Further Reading  227
3.12 Exercises  227
4 The Processor  234
4.1 Introduction  236
4.2 Logic Design Conventions  240
4.3 Building a Datapath  243
4.4 A Simple Implementation Scheme  251
4.5 An Overview of Pipelining  262
4.6 Pipelined Datapath and Control  276
4.7 Data Hazards: Forwarding versus Stalling  294
4.8 Control Hazards  307
4.9 Exceptions  315
4.10 Parallelism via Instructions  321
4.11 Real Stuff: The ARM Cortex-A53 and Intel Core i7 Pipelines  334
4.12 Going Faster: Instruction-Level Parallelism and Matrix Multiply  342
4.13 Advanced Topic: An Introduction to Digital Design Using a Hardware Design Language to Describe and Model a Pipeline and More Pipelining Illustrations  345
4.14 Fallacies and Pitfalls  345
4.15 Concluding Remarks  346
4.16 Historical Perspective and Further Reading  347
4.17 Exercises  347
5 Large and Fast: Exploiting Memory Hierarchy  364
5.1 Introduction  366
5.2 Memory Technologies  370
5.3 The Basics of Caches  375
5.4 Measuring and Improving Cache Performance  390
5.5 Dependable Memory Hierarchy  410
5.6 Virtual Machines  416
5.7 Virtual Memory  419
5.8 A Common Framework for Memory Hierarchy  443
5.9 Using a Finite-State Machine to Control a Simple Cache  449
5.10 Parallelism and Memory Hierarchy: Cache Coherence  454
5.11 Parallelism and Memory Hierarchy: Redundant Arrays of Inexpensive Disks  458
5.12 Advanced Material: Implementing Cache Controllers  459
5.13 Real Stuff: The ARM Cortex-A53 and Intel Core i7 Memory Hierarchies  459
5.14 Real Stuff: The Rest of the RISC-V System and Special Instructions  464
5.15 Going Faster: Cache Blocking and Matrix Multiply  465
5.16 Fallacies and Pitfalls  468
5.17 Concluding Remarks  472
5.18 Historical Perspective and Further Reading  473
5.19 Exercises  473
6 Parallel Processors from Client to Cloud  490
6.1 Introduction  492
6.2 The Difficulty of Creating Parallel Processing Programs  494
6.3 SISD, MIMD, SIMD, SPMD, and Vector  499
6.4 Hardware Multithreading  506
6.5 Multicore and Other Shared Memory Multiprocessors  509
6.6 Introduction to Graphics Processing Units  514
6.7 Clusters, Warehouse Scale Computers, and Other Message-Passing Multiprocessors  521
6.8 Introduction to Multiprocessor Network Topologies  526
6.9 Communicating to the Outside World: Cluster Networking  529
6.10 Multiprocessor Benchmarks and Performance Models  530
6.11 Real Stuff: Benchmarking and Rooflines of the Intel Core i7 960 and the NVIDIA Tesla GPU  540
6.12 Going Faster: Multiple Processors and Matrix Multiply  545
6.13 Fallacies and Pitfalls  548
6.14 Concluding Remarks  550
6.15 Historical Perspective and Further Reading  553
6.16 Exercises  553
APPENDIX
A The Basics of Logic Design  2
A.1 Introduction  3
A.2 Gates, Truth Tables, and Logic Equations  4
A.3 Combinational Logic  9
A.4 Using a Hardware Description Language  20
A.5 Constructing a Basic Arithmetic Logic Unit  26
A.6 Faster Addition: Carry Lookahead  37
A.7 Clocks  47
A.8 Memory Elements: Flip-Flops, Latches, and Registers  49
A.9 Memory Elements: SRAMs and DRAMs  57
A.10 Finite-State Machines  66
A.11 Timing Methodologies  71
A.12 Field Programmable Devices  77
A.13 Concluding Remarks  78
A.14 Exercises  79
Index  1
David Patterson is the Pardee Professor of Computer Science, Emeritus at the University of California at Berkeley, which he joined after graduating from UCLA in 1977. His teaching has been honored by the Distinguished Teaching Award from the University of California, the Karlstrom Award from ACM, and the Mulligan Education Medal and Undergraduate Teaching Award from IEEE. Prof. Patterson received the IEEE Technical Achievement Award and the ACM Eckert-Mauchly Award for contributions to RISC, and he shared the IEEE Johnson Information Storage Award for contributions to RAID. He also shared the IEEE John von Neumann Medal and the C & C Prize with John Hennessy. Like his co-author, Prof. Patterson is a Fellow of the American Academy of Arts and Sciences, the Computer History Museum, ACM, and IEEE, and he was elected to the National Academy of Engineering, the National Academy of Sciences, and the Silicon Valley Engineering Hall of Fame. He served on the Information Technology Advisory Committee to the U.S. President, as chair of the CS division in the Berkeley EECS department, as chair of the Computing Research Association, and as President of ACM. This record led to Distinguished Service Awards from ACM, CRA, and SIGARCH.

John L. Hennessy is a Professor of Electrical Engineering and Computer Science at Stanford University, where he has been a member of the faculty since 1977 and was, from 2000 to 2016, its tenth President. ACM named him a recipient of the 2017 ACM A.M. Turing Award for pioneering a systematic, quantitative approach to the design and evaluation of computer architectures with enduring impact on the microprocessor industry. Prof. Hennessy is a Fellow of the IEEE and ACM; a member of the National Academy of Engineering, the National Academy of Sciences, and the American Philosophical Society; and a Fellow of the American Academy of Arts and Sciences. Among his many awards are the 2001 Eckert-Mauchly Award for his contributions to RISC technology, the 2001 Seymour Cray Computer Engineering Award, and the 2000 John von Neumann Award, which he shared with David Patterson. He has also received seven honorary doctorates.