
E-book: Network Algorithmics: An Interdisciplinary Approach to Designing Fast Networked Devices

George Varghese (Jonathan B. Postel Professor of Networking, University of California, Los Angeles, California, USA)
  • Format: EPUB+DRM
  • Publication date: 31-Dec-2004
  • Publisher: Morgan Kaufmann Publishers Inc
  • Language: English
  • ISBN-13: 9780080479644
  • Price: 64.21 €*
  • * The price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

In designing a network device, you make dozens of decisions that affect the speed with which it will perform, sometimes for better but sometimes for worse. Network Algorithmics provides a complete, coherent methodology for maximizing speed while meeting your other design goals.

Author George Varghese begins by laying out the implementation bottlenecks that are most often encountered at four disparate levels of implementation: protocol, OS, hardware, and architecture. He then derives 15 solid principles, ranging from the commonly recognized to the groundbreaking, that are key to breaking these bottlenecks.

The rest of the book is devoted to a systematic application of these principles to bottlenecks found specifically in endnodes, interconnect devices, and specialty functions such as security and measurement that can be located anywhere along the network. This immensely practical, clearly presented information will benefit anyone involved with network implementation, as well as students who have made this work their goal.

Addresses the bottlenecks found in all kinds of network devices (data copying, control transfer, demultiplexing, timers, and more) and offers ways to break them.
Presents techniques suitable specifically for endnodes, including Web servers.
Presents techniques suitable specifically for interconnect devices, including routers, bridges, and gateways.
Written as a practical guide for implementers but full of valuable insights for students, teachers, and researchers.
Includes end-of-chapter summaries and exercises.
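Among the bottlenecks listed above is timer maintenance. As a flavor of the kind of technique the book covers, here is a minimal sketch of a single-level timing wheel, the data structure behind the book's timer chapter; the class name, slot count, and API are illustrative choices of this sketch, not taken from the book.

```python
class TimingWheel:
    """Minimal single-level timing wheel.

    Timers are hashed into slots by expiry tick, so starting a timer and
    advancing the clock are O(1), versus O(log n) for a sorted structure.
    This sketch only handles delays shorter than one wheel revolution.
    """

    def __init__(self, num_slots=256):
        self.num_slots = num_slots
        self.slots = [[] for _ in range(num_slots)]
        self.current = 0  # index of the slot corresponding to "now"

    def start_timer(self, ticks_from_now, callback):
        # Valid for 1 <= ticks_from_now < num_slots; a real wheel adds
        # overflow handling (e.g., a remaining-revolutions count per timer).
        slot = (self.current + ticks_from_now) % self.num_slots
        self.slots[slot].append(callback)

    def tick(self):
        # Advance the clock one tick and fire everything in the new slot.
        self.current = (self.current + 1) % self.num_slots
        expired, self.slots[self.current] = self.slots[self.current], []
        for cb in expired:
            cb()


# Usage: schedule a hypothetical retransmit timer 3 ticks out.
wheel = TimingWheel(num_slots=8)
fired = []
wheel.start_timer(3, lambda: fired.append("retransmit"))
for _ in range(3):
    wheel.tick()
# fired is now ["retransmit"]
```

The point of the structure is that per-tick work is proportional only to the timers that actually expire, which is what makes it attractive inside a protocol stack with thousands of outstanding timers.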

Reviews

"George Varghese has had a remarkable impact on the real world of networking with his algorithmic innovations over many years. The networking research and development community is fortunate that he has now distilled his knowledge in this very readable, insightful, and much-needed book." --Bruce Davie, Cisco Fellow, Cisco Systems

"This book nicely describes implementation tricks for building fast networking stacks, particularly in routers. This is a much needed book, I don't know of any other that covers this sort of implementation advice. George Varghese has invented several techniques to help speed up the Internet and in his book he provides interesting insight into this, and much more." --Radia Perlman, Distinguished Engineer, Sun Microsystems

Other information

An interdisciplinary approach to applying principles for efficient implementation in network devices, addressing and offering solutions to the problem of network implementation bottlenecks.
PREFACE
PART I The Rules of the Game
CHAPTER 1 Introducing Network Algorithmics
1.1 The Problem: Network Bottlenecks
1.1.1 Endnode Bottlenecks
1.1.2 Router Bottlenecks
1.2 The Techniques: Network Algorithmics
1.2.1 Warm-up Example: Scenting an Evil Packet
1.2.2 Strawman Solution
1.2.3 Thinking Algorithmically
1.2.4 Refining the Algorithm: Exploiting Hardware
1.2.5 Cleaning Up
1.2.6 Characteristics of Network Algorithmics
1.3 Exercise
CHAPTER 2 Network Implementation Models
2.1 Protocols
2.1.1 Transport and Routing Protocols
2.1.2 Abstract Protocol Model
2.1.3 Performance Environment and Measures
2.2 Hardware
2.2.1 Combinatorial Logic
2.2.2 Timing and Power
2.2.3 Raising the Abstraction Level of Hardware Design
2.2.4 Memories
2.2.5 Memory Subsystem Design Techniques
2.2.6 Component-Level Design
2.2.7 Final Hardware Lessons
2.3 Network Device Architectures
2.3.1 Endnode Architecture
2.3.2 Router Architecture
2.4 Operating Systems
2.4.1 Uninterrupted Computation via Processes
2.4.2 Infinite Memory via Virtual Memory
2.4.3 Simple I/O via System Calls
2.5 Summary
2.6 Exercises
CHAPTER 3 Fifteen Implementation Principles
3.1 Motivating the Use of Principles - Updating Ternary Content-Addressable Memories
3.2 Algorithms versus Algorithmics
3.3 Fifteen Implementation Principles - Categorization and Description
3.3.1 Systems Principles
3.3.2 Principles for Modularity with Efficiency
3.3.3 Principles for Speeding Up Routines
3.4 Design versus Implementation Principles
3.5 Caveats
3.5.1 Eight Cautionary Questions
3.6 Summary
3.7 Exercises
CHAPTER 4 Principles in Action
4.1 Buffer Validation of Application Device Channels
4.2 Scheduler for Asynchronous Transfer Mode Flow Control
4.3 Route Computation Using Dijkstra's Algorithm
4.4 Ethernet Monitor Using Bridge Hardware
4.5 Demultiplexing in the X-Kernel
4.6 Tries with Node Compression
4.7 Packet Filtering in Routers
4.8 Avoiding Fragmentation of Link State Packets
4.9 Policing Traffic Patterns
4.10 Identifying a Resource Hog
4.11 Getting Rid of the TCP Open Connection List
4.12 Acknowledgment Withholding
4.13 Incrementally Reading a Large Database
4.14 Binary Search of Long Identifiers
4.15 Video Conferencing via Asynchronous Transfer Mode
PART II Playing with Endnodes
CHAPTER 5 Copying Data
5.1 Why Data Copies
5.2 Reducing Copying via Local Restructuring
5.2.1 Exploiting Adaptor Memory
5.2.2 Using Copy-on-Write
5.2.3 Fbufs: Optimizing Page Remapping
5.2.4 Transparently Emulating Copy Semantics
5.3 Avoiding Copying Using Remote DMA
5.3.1 Avoiding Copying in a Cluster
5.3.2 Modern-Day Incarnations of RDMA
5.4 Broadening to File Systems
5.4.1 Shared Memory
5.4.2 IO-Lite: A Unified View of Buffering
5.4.3 Avoiding File System Copies via I/O Splicing
5.5 Broadening beyond Copies
5.6 Broadening beyond Data Manipulations
5.6.1 Using Caches Effectively
5.6.2 Direct Memory Access versus Programmed I/O
5.7 Conclusions
5.8 Exercises
CHAPTER 6 Transferring Control
6.1 Why Control Overhead?
6.2 Avoiding Scheduling Overhead in Networking Code
6.2.1 Making User-Level Protocol Implementations Real
6.3 Avoiding Context-Switching Overhead in Applications
6.3.1 Process per Client
6.3.2 Thread per Client
6.3.3 Event-Driven Scheduler
6.3.4 Event-Driven Server with Helper Processes
6.3.5 Task-Based Structuring
6.4 Fast Select
6.4.1 A Server Mystery
6.4.2 Existing Use and Implementation of Select()
6.4.3 Analysis of Select()
6.4.4 Speeding Up Select() without Changing the API
6.4.5 Speeding Up Select() by Changing the API
6.5 Avoiding System Calls
6.5.1 The Virtual Interface Architecture (VIA) Proposal
6.6 Reducing Interrupts
6.6.1 Avoiding Receiver Livelock
6.7 Conclusions
6.8 Exercises
CHAPTER 7 Maintaining Timers
7.1 Why Timers?
7.2 Model and Performance Measures
7.3 Simplest Timer Schemes
7.4 Timing Wheels
7.5 Hashed Wheels
7.6 Hierarchical Wheels
7.7 BSD Implementation
7.8 Obtaining Fine-Granularity Timers
7.9 Conclusions
7.10 Exercises
CHAPTER 8 Demultiplexing
8.1 Opportunities and Challenges of Early Demultiplexing
8.2 Goals
8.3 CMU/Stanford Packet Filter: Pioneering Packet Filters
8.4 Berkeley Packet Filter: Enabling High-Performance Monitoring
8.5 Pathfinder: Factoring Out Common Checks
8.6 Dynamic Packet Filter: Compilers to the Rescue
8.7 Conclusions
8.8 Exercises
CHAPTER 9 Protocol Processing
9.1 Buffer Management
9.1.1 Buffer Allocation
9.1.2 Sharing Buffers
9.2 Cyclic Redundancy Checks and Checksums
9.2.1 Cyclic Redundancy Checks
9.2.2 Internet Checksums
9.2.3 Finessing Checksums
9.3 Generic Protocol Processing
9.3.1 UDP Processing
9.4 Reassembly
9.4.1 Efficient Reassembly
9.5 Conclusions
9.6 Exercises
PART III Playing with Routers
CHAPTER 10 Exact-Match Lookups
10.1 Challenge 1: Ethernet under Fire
10.2 Challenge 2: Wire Speed Forwarding
10.3 Challenge 3: Scaling Lookups to Higher Speeds
10.3.1 Scaling via Hashing
10.3.2 Using Hardware Parallelism
10.4 Summary
10.5 Exercise
CHAPTER 11 Prefix-Match Lookups
11.1 Introduction to Prefix Lookups
11.1.1 Prefix Notation
11.1.2 Why Variable-Length Prefixes?
11.1.3 Lookup Model
11.2 Finessing Lookups
11.2.1 Threaded Indices and Tag Switching
11.2.2 Flow Switching
11.2.3 Status of Tag Switching, Flow Switching, and Multiprotocol Label Switching
11.3 Nonalgorithmic Techniques for Prefix Matching
11.3.1 Caching
11.3.2 Ternary Content-Addressable Memories
11.4 Unibit Tries
11.5 Multibit Tries
11.5.1 Fixed-Stride Tries
11.5.2 Variable-Stride Tries
11.5.3 Incremental Update
11.6 Level-Compressed (LC) Tries
11.7 Lulea-Compressed Tries
11.8 Tree Bitmap
11.8.1 Tree Bitmap Ideas
11.8.2 Tree Bitmap Search Algorithm
11.9 Binary Search on Ranges
11.10 Binary Search on Prefix Lengths
11.11 Memory Allocation in Compressed Schemes
11.11.1 Frame-Based Compaction
11.12 Lookup-Chip Model
11.13 Conclusions
11.14 Exercises
CHAPTER 12 Packet Classification
12.1 Why Packet Classification?
12.2 Packet-Classification Problem
12.3 Requirements and Metrics
12.4 Simple Solutions
12.4.1 Linear Search
12.4.2 Caching
12.4.3 Demultiplexing Algorithms
12.4.4 Passing Labels
12.4.5 Content-Addressable Memories
12.5 Two-Dimensional Schemes
12.5.1 Fast Searching Using Set-Pruning Trees
12.5.2 Reducing Memory Using Backtracking
12.5.3 The Best of Both Worlds: Grid of Tries
12.6 Approaches to General Rule Sets
12.6.1 Geometric View of Classification
12.6.2 Beyond Two Dimensions: The Bad News
12.6.3 Beyond Two Dimensions: The Good News
12.7 Extending Two-Dimensional Schemes
12.8 Using Divide-and-Conquer
12.9 Bit Vector Linear Search
12.10 Cross-Producting
12.11 Equivalenced Cross-Producting
12.12 Decision Tree Approaches
12.13 Conclusions
12.14 Exercises
CHAPTER 13 Switching
13.1 Router versus Telephone Switches
13.2 Shared-Memory Switches
13.3 Router History: From Buses to Crossbars
13.4 The Take-a-Ticket Crossbar Scheduler
13.5 Head-of-Line Blocking
13.6 Avoiding Head-of-Line Blocking via Output Queuing
13.7 Avoiding Head-of-Line Blocking by Using Parallel Iterative Matching
13.8 Avoiding Randomization with iSLIP
13.8.1 Extending iSLIP to Multicast and Priority
13.8.2 iSLIP Implementation Notes
13.9 Scaling to Larger Switches
13.9.1 Measuring Switch Cost
13.9.2 Clos Networks for Medium-Size Routers
13.9.3 Benes Networks for Larger Routers
13.10 Scaling to Faster Switches
13.10.1 Using Bit Slicing for Higher-Speed Fabrics
13.10.2 Using Short Links for Higher-Speed Fabrics
13.10.3 Memory Scaling Using Randomization
13.11 Conclusions
13.12 Exercises
CHAPTER 14 Scheduling Packets
14.1 Motivation for Quality of Service
14.2 Random Early Detection
14.3 Token Bucket Policing
14.4 Multiple Outbound Queues and Priority
14.5 A Quick Detour into Reservation Protocols
14.6 Providing Bandwidth Guarantees
14.6.1 The Parochial Parcel Service
14.6.2 Deficit Round-Robin
14.6.3 Implementation and Extensions of Deficit Round-Robin
14.7 Schedulers That Provide Delay Guarantees
14.8 Scalable Fair Queuing
14.8.1 Random Aggregation
14.8.2 Edge Aggregation
14.8.3 Edge Aggregation with Policing
14.9 Summary
14.10 Exercises
CHAPTER 15 Routers as Distributed Systems
15.1 Internal Flow Control
15.1.1 Improving Performance
15.1.2 Rescuing Reliability
15.2 Internal Striping
15.2.1 Improving Performance
15.2.2 Rescuing Reliability
15.3 Asynchronous Updates
15.3.1 Improving Performance
15.3.2 Rescuing Reliability
15.4 Conclusions
15.5 Exercises
PART IV Endgame
CHAPTER 16 Measuring Network Traffic
16.1 Why Measurement Is Hard
16.1.1 Why Counting Is Hard
16.2 Reducing SRAM Width Using DRAM Backing Store
16.3 Reducing Counter Width Using Randomized Counting
16.4 Reducing Counters Using Threshold Aggregation
16.5 Reducing Counters Using Flow Counting
16.6 Reducing Processing Using Sampled NetFlow
16.7 Reducing Reporting Using Sampled Charging
16.8 Correlating Measurements Using Trajectory Sampling
16.9 A Concerted Approach to Accounting
16.10 Computing Traffic Matrices
16.10.1 Approach 1: Internet Tomography
16.10.2 Approach 2: Per-Prefix Counters
16.10.3 Approach 3: Class Counters
16.11 Sting as an Example of Passive Measurement
16.12 Conclusion
16.13 Exercises
CHAPTER 17 Network Security
17.1 Searching for Multiple Strings in Packet Payloads
17.1.1 Integrated String Matching Using Aho-Corasick
17.1.2 Integrated String Matching Using Boyer-Moore
17.2 Approximate String Matching
17.3 IP Traceback via Probabilistic Marking
17.4 IP Traceback via Logging
17.4.1 Bloom Filters
17.4.2 Bloom Filter Implementation of Packet Logging
17.5 Detecting Worms
17.6 Conclusion
17.7 Exercises
CHAPTER 18 Conclusions
18.1 What This Book Has Been About
18.1.1 Endnode Algorithmics
18.1.2 Router Algorithmics
18.1.3 Toward a Synthesis
18.2 What Network Algorithmics Is About
18.2.1 Interdisciplinary Thinking
18.2.2 Systems Thinking
18.2.3 Algorithmic Thinking
18.3 Network Algorithmics and Real Products
18.4 Network Algorithmics: Back to the Future
18.4.1 New Abstractions
18.4.2 New Connecting Disciplines
18.4.3 New Requirements
18.5 The Inner Life of a Networking Device
APPENDIX Detailed Models
A.1 TCP and IP
A.1.1 Transport Protocols
A.1.2 Routing Protocols
A.2 Hardware Models
A.2.1 From Transistors to Logic Gates
A.2.2 Timing Delays
A.2.3 Hardware Design Building Blocks
A.2.4 Memories: The Inside Scoop
A.2.5 Chip Design
A.3 Switching Theory
A.3.1 Matching Algorithms for Clos Networks with k = n
A.4 The Interconnection Network Zoo
Bibliography
Index


George Varghese is a widely recognized authority on the art of network protocol implementation. Currently he holds the Jonathan B. Postel Chair of Networking at the University of California, Los Angeles. Earlier he was a Partner at Microsoft Research, and served as a professor in the departments of Computer Science at UC San Diego and Washington University. He was elected to the American Academy of Arts and Sciences in 2022, to the Internet Hall of Fame in 2021, to the National Academy of Inventors in 2020, to the National Academy of Engineering in 2017, and as a Fellow of the ACM in 2002. He co-founded a startup called NetSift in 2004 that was acquired by Cisco in 2005. With colleagues, he holds 26 patents in the general field of network algorithmics. Several algorithms that he helped develop have found their way into commercial systems, including Linux (timing wheels), the Cisco GSR (DRR), and MS Windows (IP lookups). Varghese has written more than 100 papers on networking, computer architecture, genomics, and databases.
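The biography above mentions deficit round-robin (DRR), the fair-queuing scheduler Varghese co-invented. As an illustration of the idea, here is a minimal sketch: each flow earns a fixed byte quantum per round and may transmit only packets covered by its accumulated deficit. The function name, flow contents, and quantum value are illustrative, not taken from the book.

```python
from collections import deque


def deficit_round_robin(queues, quantum):
    """Drain per-flow packet queues in DRR order.

    queues:  list of deques of (packet_name, size_in_bytes)
    quantum: bytes of credit added to each active flow per round (> 0)

    Each packet is sent only once the flow's deficit counter covers its
    size, giving approximate byte-level fairness with O(1) work per packet.
    Returns the packet names in transmission order.
    """
    deficits = [0] * len(queues)
    sent = []
    while any(queues):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0  # an idle flow must not hoard credit
                continue
            deficits[i] += quantum
            while q and q[0][1] <= deficits[i]:
                name, size = q.popleft()
                deficits[i] -= size
                sent.append(name)
    return sent


# Usage: flow A sends 500-byte packets, flow B 200-byte packets.
flows = [
    deque([("a1", 500), ("a2", 500)]),
    deque([("b1", 200), ("b2", 200)]),
]
order = deficit_round_robin(flows, quantum=500)
# order == ["a1", "b1", "b2", "a2"]: each flow gets ~500 bytes per round,
# so B's two small packets go out in the same round as A's one large one.
```

The deficit counter is what distinguishes DRR from plain round-robin: a flow with large packets banks unused credit across rounds instead of being starved, without the per-packet sorting cost of classic fair queuing.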