
E-book: Computer Organization and Design MIPS Edition: The Hardware/Software Interface

3.97/5 (1,828 ratings on Goodreads)
David A. Patterson (Pardee Professor of Computer Science, Emeritus, University of California, Berkeley, USA) and John L. Hennessy (Departments of Electrical Engineering and Computer Science, Stanford University, USA)
  • Format: PDF+DRM
  • Price: €70.97*
  • * The price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has issued this e-book in encrypted form, which means you must install special software to read it. You must also create an Adobe ID; more information is available here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), you must install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, you must install Adobe Digital Editions. (This is a free application designed specifically for reading e-books; do not confuse it with Adobe Reader, which is most likely already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

The fifth edition of Computer Organization and Design, winner of a 2014 Textbook Excellence Award (Texty) from the Text and Academic Authors Association, moves forward into the post-PC era with new examples, exercises, and material highlighting the emergence of mobile computing and the cloud. This generational change is emphasized and explored with updated content featuring tablet computers, cloud infrastructure, and the ARM (mobile computing devices) and x86 (cloud computing) architectures.

Because an understanding of modern hardware is essential to achieving good performance and energy efficiency, this edition adds a new concrete example, "Going Faster," used throughout the text to demonstrate extremely effective optimization techniques. Also new to this edition is discussion of the "Eight Great Ideas" of computer architecture.
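To give a flavor of what the "Going Faster" sections build toward (the table of contents below applies subword parallelism, instruction-level parallelism, and cache blocking to matrix multiply), here is a minimal sketch in C of the cache-blocking step. The function name, the BLOCK size, and the restriction to matrix dimensions that are a multiple of BLOCK are illustrative assumptions for this sketch, not the book's own code.

    #include <stddef.h>

    /* Tunable assumption: pick BLOCK so that three BLOCK x BLOCK
     * tiles (of A, B, and C) fit comfortably in the cache. */
    #define BLOCK 32

    /* C = C + A * B for n x n matrices in row-major order.
     * For brevity, n is assumed to be a multiple of BLOCK. */
    void dgemm_blocked(size_t n, const double *A, const double *B, double *C)
    {
        for (size_t si = 0; si < n; si += BLOCK)
            for (size_t sj = 0; sj < n; sj += BLOCK)
                for (size_t sk = 0; sk < n; sk += BLOCK)
                    /* Multiply one tile; the tiles of A, B, and C stay
                     * resident in the cache while they are being reused,
                     * instead of being evicted between passes over memory. */
                    for (size_t i = si; i < si + BLOCK; i++)
                        for (size_t j = sj; j < sj + BLOCK; j++) {
                            double cij = C[i * n + j];
                            for (size_t k = sk; k < sk + BLOCK; k++)
                                cij += A[i * n + k] * B[k * n + j];
                            C[i * n + j] = cij;
                        }
    }

The point of the technique is that the naive triple loop streams each matrix through the cache repeatedly, while the blocked version reuses each loaded tile many times before moving on.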

As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, computer arithmetic, pipelining, memory hierarchies and I/O.

Instructors looking for fourth edition teaching materials should e-mail textbook@elsevier.com.

  • Winner of a 2014 Texty Award from the Text and Academic Authors Association
  • Includes new examples, exercises, and material highlighting the emergence of mobile computing and the cloud
  • Covers parallelism in depth with examples and content highlighting parallel hardware and software topics
  • Features the Intel Core i7, ARM Cortex-A8 and NVIDIA Fermi GPU as real-world examples throughout the book
  • Adds a new concrete example, "Going Faster," to demonstrate how understanding hardware can inspire software optimizations that improve performance by 200 times
  • Discusses and highlights the "Eight Great Ideas" of computer architecture: Performance via Parallelism; Performance via Pipelining; Performance via Prediction; Design for Moore's Law; Hierarchy of Memories; Abstraction to Simplify Design; Make the Common Case Fast; and Dependability via Redundancy
  • Includes a full set of updated and improved exercises

Reviews

"...the fundamental computer organization book, both as an introduction for readers with no experience in computer architecture topics, and as an up-to-date reference for computer architects." --Computing Reviews, July 22 2014

Other information

The classic introduction to computer organization, now updated for mobile computing and the cloud!
Preface
1 Computer Abstractions and Technology
1.1 Introduction
1.2 Eight Great Ideas in Computer Architecture
1.3 Below Your Program
1.4 Under the Covers
1.5 Technologies for Building Processors and Memory
1.6 Performance
1.7 The Power Wall
1.8 The Sea Change: The Switch from Uniprocessors to Multiprocessors
1.9 Real Stuff: Benchmarking the Intel Core i7
1.10 Fallacies and Pitfalls
1.11 Concluding Remarks
1.12 Historical Perspective and Further Reading
1.13 Exercises
2 Instructions: Language of the Computer
2.1 Introduction
2.2 Operations of the Computer Hardware
2.3 Operands of the Computer Hardware
2.4 Signed and Unsigned Numbers
2.5 Representing Instructions in the Computer
2.6 Logical Operations
2.7 Instructions for Making Decisions
2.8 Supporting Procedures in Computer Hardware
2.9 Communicating with People
2.10 MIPS Addressing for 32-Bit Immediates and Addresses
2.11 Parallelism and Instructions: Synchronization
2.12 Translating and Starting a Program
2.13 A C Sort Example to Put It All Together
2.14 Arrays versus Pointers
2.15 Advanced Material: Compiling C and Interpreting Java
2.16 Real Stuff: ARMv7 (32-bit) Instructions
2.17 Real Stuff: x86 Instructions
2.18 Real Stuff: ARMv8 (64-bit) Instructions
2.19 Fallacies and Pitfalls
2.20 Concluding Remarks
2.21 Historical Perspective and Further Reading
2.22 Exercises
3 Arithmetic for Computers
3.1 Introduction
3.2 Addition and Subtraction
3.3 Multiplication
3.4 Division
3.5 Floating Point
3.6 Parallelism and Computer Arithmetic: Subword Parallelism
3.7 Real Stuff: Streaming SIMD Extensions and Advanced Vector Extensions in x86
3.8 Going Faster: Subword Parallelism and Matrix Multiply
3.9 Fallacies and Pitfalls
3.10 Concluding Remarks
3.11 Historical Perspective and Further Reading
3.12 Exercises
4 The Processor
4.1 Introduction
4.2 Logic Design Conventions
4.3 Building a Datapath
4.4 A Simple Implementation Scheme
4.5 An Overview of Pipelining
4.6 Pipelined Datapath and Control
4.7 Data Hazards: Forwarding versus Stalling
4.8 Control Hazards
4.9 Exceptions
4.10 Parallelism via Instructions
4.11 Real Stuff: The ARM Cortex-A8 and Intel Core i7 Pipelines
4.12 Going Faster: Instruction-Level Parallelism and Matrix Multiply
4.13 Advanced Topic: An Introduction to Digital Design Using a Hardware Design Language to Describe and Model a Pipeline and More Pipelining Illustrations
4.14 Fallacies and Pitfalls
4.15 Concluding Remarks
4.16 Historical Perspective and Further Reading
4.17 Exercises
5 Large and Fast: Exploiting Memory Hierarchy
5.1 Introduction
5.2 Memory Technologies
5.3 The Basics of Caches
5.4 Measuring and Improving Cache Performance
5.5 Dependable Memory Hierarchy
5.6 Virtual Machines
5.7 Virtual Memory
5.8 A Common Framework for Memory Hierarchy
5.9 Using a Finite-State Machine to Control a Simple Cache
5.10 Parallelism and Memory Hierarchies: Cache Coherence
5.11 Parallelism and Memory Hierarchy: Redundant Arrays of Inexpensive Disks
5.12 Advanced Material: Implementing Cache Controllers
5.13 Real Stuff: The ARM Cortex-A8 and Intel Core i7 Memory Hierarchies
5.14 Going Faster: Cache Blocking and Matrix Multiply
5.15 Fallacies and Pitfalls
5.16 Concluding Remarks
5.17 Historical Perspective and Further Reading
5.18 Exercises
6 Parallel Processors from Client to Cloud
6.1 Introduction
6.2 The Difficulty of Creating Parallel Processing Programs
6.3 SISD, MIMD, SIMD, SPMD, and Vector
6.4 Hardware Multithreading
6.5 Multicore and Other Shared Memory Multiprocessors
6.6 Introduction to Graphics Processing Units
6.7 Clusters, Warehouse Scale Computers, and Other Message-Passing Multiprocessors
6.8 Introduction to Multiprocessor Network Topologies
6.9 Communicating to the Outside World: Cluster Networking
6.10 Multiprocessor Benchmarks and Performance Models
6.11 Real Stuff: Benchmarking Intel Core i7 versus NVIDIA Tesla GPU
6.12 Going Faster: Multiple Processors and Matrix Multiply
6.13 Fallacies and Pitfalls
6.14 Concluding Remarks
6.15 Historical Perspective and Further Reading
6.16 Exercises
APPENDICES
A Assemblers, Linkers, and the SPIM Simulator
A.1 Introduction
A.2 Assemblers
A.3 Linkers
A.4 Loading
A.5 Memory Usage
A.6 Procedure Call Convention
A.7 Exceptions and Interrupts
A.8 Input and Output
A.9 SPIM
A.10 MIPS R2000 Assembly Language
A.11 Concluding Remarks
A.12 Exercises
B The Basics of Logic Design
B.1 Introduction
B.2 Gates, Truth Tables, and Logic Equations
B.3 Combinational Logic
B.4 Using a Hardware Description Language
B.5 Constructing a Basic Arithmetic Logic Unit
B.6 Faster Addition: Carry Lookahead
B.7 Clocks
B.8 Memory Elements: Flip-Flops, Latches, and Registers
B.9 Memory Elements: SRAMs and DRAMs
B.10 Finite State Machines
B.11 Timing Methodologies
B.12 Field Programmable Devices
B.13 Concluding Remarks
B.14 Exercises
Index
ACM named David A. Patterson a recipient of the 2017 ACM A.M. Turing Award for pioneering a systematic, quantitative approach to the design and evaluation of computer architectures with enduring impact on the microprocessor industry. David A. Patterson is the Pardee Chair of Computer Science, Emeritus, at the University of California, Berkeley. His teaching has been honored by the Distinguished Teaching Award from the University of California, the Karlstrom Award from ACM, and the Mulligan Education Medal and Undergraduate Teaching Award from IEEE. Patterson received the IEEE Technical Achievement Award and the ACM Eckert-Mauchly Award for contributions to RISC, and he shared the IEEE Johnson Information Storage Award for contributions to RAID. He also shared the IEEE John von Neumann Medal and the C & C Prize with John Hennessy. Like his co-author, Patterson is a Fellow of the American Academy of Arts and Sciences, the Computer History Museum, ACM, and IEEE, and he was elected to the National Academy of Engineering, the National Academy of Sciences, and the Silicon Valley Engineering Hall of Fame. He served on the Information Technology Advisory Committee to the U.S. President, as chair of the CS division in the Berkeley EECS department, as chair of the Computing Research Association, and as President of ACM. This record led to Distinguished Service Awards from ACM, CRA, and SIGARCH.

ACM named John L. Hennessy a recipient of the 2017 ACM A.M. Turing Award for pioneering a systematic, quantitative approach to the design and evaluation of computer architectures with enduring impact on the microprocessor industry. John L. Hennessy is a Professor of Electrical Engineering and Computer Science at Stanford University, where he has been a member of the faculty since 1977 and was, from 2000 to 2016, its tenth President. Prof. Hennessy is a Fellow of the IEEE and ACM; a member of the National Academy of Engineering, the National Academy of Science, and the American Philosophical Society; and a Fellow of the American Academy of Arts and Sciences. Among his many awards are the 2001 Eckert-Mauchly Award for his contributions to RISC technology, the 2001 Seymour Cray Computer Engineering Award, and the 2000 John von Neumann Award, which he shared with David Patterson. He has also received seven honorary doctorates.