danny writes "Modern network devices have to handle traffic in huge volumes at low latencies; achieving this requires ideas and approaches from all of computer science: hardware, algorithms, protocols, software engineering and their integration in a discipline which Varghese calls 'network algorithmics'." Read the rest of Danny's review.
After an introduction in chapter one to bottlenecks and techniques for avoiding them, Varghese provides a brief overview of protocols, hardware, network architectures, and operating systems, with examples, in chapter two. This is best suited as a refresher: Network Algorithmics assumes a general familiarity with networking protocols, operating systems and computer architecture, and if used as a text it would be suitable for upper-level undergraduates. (The exercises at the end of each chapter are often open-ended.)
Chapter three introduces the fifteen implementation principles that are the core of Network Algorithmics: P1 Avoid obvious waste; P2 Shift computation in time (precompute, evaluate lazily, share expenses or batch); P3 Relax system requirements (trade certainty or accuracy for time, shift computation in space); P4 Leverage off system components (exploit locality, trade memory for speed, exploit existing hardware); P5 Add hardware (use memory interleaving and pipelining, use wide word parallelism, combine DRAM and SRAM effectively); P6 Create efficient specialized routines; P7 Avoid unnecessary generality; P8 Don't be tied to a reference implementation; P9 Pass hints in layer interfaces; P10 Pass hints in protocol headers; P11 Optimize the expected case (use caches); P12 Add state for speed (compute incrementally); P13 Optimize degrees of freedom; P14 Use bucket sorting and bitmaps; and P15 Create efficient data structures. Chapter four presents fifteen problems that illustrate these principles in action, with hints to solutions.
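The list reads densely in the abstract, so a toy example may help (my own, not one of the book's): precomputation (P2) and trading memory for speed (P4) combine in the classic table-driven bit count, where a 256-entry table built once at startup replaces a per-bit loop on every call.

```python
# Precomputed once at startup (P2, precompute): the popcount of every
# possible byte value. Trades 256 table entries of memory for speed (P4).
POPCOUNT = [bin(b).count("1") for b in range(256)]

def ones_in_word(word: int) -> int:
    """Count set bits in a 32-bit word with four table lookups
    instead of thirty-two shift-and-test steps."""
    return (POPCOUNT[word & 0xFF]
            + POPCOUNT[(word >> 8) & 0xFF]
            + POPCOUNT[(word >> 16) & 0xFF]
            + POPCOUNT[(word >> 24) & 0xFF])
```

Counting set bits is exactly the kind of inner-loop operation (bitmap scans, for instance) where such constant-factor tricks pay off.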
Part II of Network Algorithmics is devoted to end-nodes. A chapter "Copying Data" takes a web server, delivering files from disk to network, as the prototype, and explores different approaches to reduce pressure on the memory and I/O bus by reducing the number of copies required: copy-on-write, fbufs, RDMA, IO-Lite, and more. It also touches on making cache use more effective and the tantalizing possibilities of "integrated layer processing".
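The copy-avoidance techniques the chapter covers live in the kernel, but the flavor can be hinted at in user space. This sketch (mine, not the book's) uses Python's memoryview to pass around slices of one buffer without duplicating bytes, much as fbufs and IO-Lite pass buffer references between layers rather than copying payloads.

```python
# One buffer holding a (hypothetical) 4-byte header plus payload.
packet = bytearray(b"HDR!" + b"x" * 1500)

# Views into the buffer: no bytes are copied by either line.
header = memoryview(packet)[:4]
payload = memoryview(packet)[4:]

# Both views alias the same underlying storage, so mutating through
# the payload view is visible in the original packet buffer.
payload[0:2] = b"ab"
assert bytes(header) == b"HDR!"
assert bytes(packet[4:6]) == b"ab"
```

The analogy is loose (the real wins come from avoiding kernel-to-user copies and extra bus traffic), but the principle of sharing one buffer across layers is the same.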
"Transferring Control" looks at minimizing scheduling overhead and maximizing concurrency: at context-switches, processes, threads, and event-driven servers. It evaluates the existing Unix select() call and considers ways of speeding it up, with and without changing the API. And it touches on ways of avoiding system calls and reducing interrupt overhead.
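As a minimal illustration of the event-driven style built around select() (my own sketch, not code from the book): one loop iteration asks the kernel which registered sockets are readable and services only those, avoiding a blocked thread per connection.

```python
import select
import socket

def echo_once(conns, timeout=1.0):
    """One pass of a (hypothetical) event loop: echo data back on
    whichever of the given sockets select() reports as readable."""
    readable, _, _ = select.select(conns, [], [], timeout)
    for sock in readable:
        data = sock.recv(4096)
        if data:                 # empty read means the peer closed
            sock.sendall(data)
    return readable
```

A real server would also watch its listening socket and writable sets; the chapter's point is that even this API has measurable costs (rebuilding and scanning the descriptor set on every call), which is what motivates the speedups it examines.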
There are three shorter chapters. "Maintaining Timers" explores hashed and hierarchical timing wheels, the BSD implementation, and fine granularity. "Demultiplexing" looks at the development of packet filters: CMU/Stanford, Berkeley, Pathfinder, hardware, and Dynamic (generating filter code in real time). And "Protocol Processing" looks at some miscellaneous tasks that can become bottlenecks: buffer management, CRC checks and checksums, TCP and UDP protocol processing, and packet reassembly.
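A hashed timing wheel, the centerpiece of the timers chapter, can be sketched briefly (the simplification and names here are mine): timers hash into one of N slots by expiry tick, each slot keeping a count of remaining full revolutions, so a tick touches only one slot.

```python
class TimingWheel:
    """A toy hashed timing wheel: O(1) insertion, one slot per tick."""

    def __init__(self, slots=8):
        self.slots = [[] for _ in range(slots)]
        self.cursor = 0

    def add(self, ticks_from_now, callback):
        assert ticks_from_now >= 1
        n = len(self.slots)
        slot = (self.cursor + ticks_from_now) % n
        rounds = (ticks_from_now - 1) // n    # full revolutions to wait
        self.slots[slot].append([rounds, callback])

    def tick(self):
        """Advance one tick: fire due timers in the current slot,
        decrement the revolution count of the rest."""
        self.cursor = (self.cursor + 1) % len(self.slots)
        bucket = self.slots[self.cursor]
        due = [cb for rounds, cb in bucket if rounds == 0]
        bucket[:] = [[rounds - 1, cb] for rounds, cb in bucket if rounds > 0]
        for cb in due:
            cb()
```

Hierarchical wheels replace the per-slot revolution counts with cascading wheels of coarser granularity, which is one of the variations the chapter compares.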
So far I've only skimmed Part III, "Playing with Routers". This has chapters on "Exact-Match Lookups", "Prefix-Match Lookups", "Packet Classification", "Switching", "Scheduling Packets", and "Routers as Distributed Systems". And Part IV offers chapters on "Measuring Network Traffic" and "Network Security", along with a summary and overview.
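Even the chapter titles hint at classic problems. Longest-prefix match, the subject of "Prefix-Match Lookups", can be sketched as a naive binary trie (my baseline sketch, not the book's; the chapter's substance is how much better one can do):

```python
class PrefixTrie:
    """Toy longest-prefix match over bit strings like '1011'."""

    def __init__(self):
        self.root = {}   # node: optional '0'/'1' children plus 'hop'

    def insert(self, prefix_bits, next_hop):
        node = self.root
        for bit in prefix_bits:
            node = node.setdefault(bit, {})
        node["hop"] = next_hop

    def lookup(self, addr_bits):
        """Walk the address bits, remembering the longest prefix seen."""
        node, best = self.root, None
        for bit in addr_bits:
            if "hop" in node:
                best = node["hop"]
            node = node.get(bit)
            if node is None:
                return best
        if "hop" in node:
            best = node["hop"]
        return best
```

One memory access per bit is exactly the cost structure that techniques like multibit tries and compressed lookup tables are designed to beat.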
Varghese gets right down into the details in some places, but he sets the material he covers into its broader context. He often takes a historical approach, looking at how implementations have been driven by changing requirements. And he stresses that only with a broad view can the overall costs and possible optimizations be seen: with web server performance, for example, it is necessary to consider "the whole system, from HTTP and its headers, to the file system, and down to the instruction caches". Where many computing disciplines emphasize the isolation of components, network algorithmics stresses links across layers and boundaries. (It's not overdone, but approaches and principles are often illustrated with analogies to ordinary life, serving tables in a restaurant, for example, or with parallels to other areas of computing.)
Purists might find the practical approach of network algorithmics distressing (Varghese admits that it "may seem drab and shallow" compared to "the beauty of theoretical techniques"), but it has an attraction all its own. I've often felt that "computer science" as commonly constructed lacks any coherence, spanning everything from nearly pure mathematics to hardware and engineering, but Network Algorithmics has made me rethink that.
Little in Network Algorithmics is relevant to my job as a system and network administrator (I just plug switches in and configure them, and don't have to worry about their internals), but I found it fascinating. Apart from curious general readers with a computer science background, or hackers who enjoy stretching their minds, the obvious audience is anyone building high-performance network devices or looking at optimization of networking code, say in the Linux kernel.
Danny Yee has written over 900 other book reviews.
You can purchase Network Algorithmics: An Interdisciplinary Approach to Designing Fast Networked Devices from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.