Stanford University held a workshop last Friday, The Policy Implications of End-to-End, covering some of the emerging policy questions that threaten the end-to-end paradigm which has served today's Internet so well. It was attended by representatives from the FCC, along with technologists, economists, lawyers and others. Here are my notes from the workshop. I'm going to try to skip describing each individual's background and resume, instead substituting a link to a biography page whenever I can. (Part one of two.)
The summary provided by the conference organizers has a brief description of end-to-end: "The 'end-to-end argument' was proposed by network architects Jerome Saltzer, David Reed and David Clark in 1981 as a principle for allocating intelligence within a large-scale computer network. It has since become a central principle of the Internet's design. End-to-end [e2e] counsels that 'intelligence' in a network should be placed at its ends -- in applications -- while the network itself should remain as simple as is feasible, given the broad range of applications that the network might support."
Another way to view end-to-end might be as a sort of network non-interference policy: all bits are created equal. The problem is that there are substantial economic incentives to treat bits differently, and these incentives are changing the architecture of the Internet in ways which may be detrimental to public values.
The workshop covered a number of areas:
- Voice over IP
- Network Security
- Quality of Service
- Content Caching
Jerome Saltzer started off with a technical overview of the end-to-end argument. In summary: digital technology builds systems of stunning complexity, and the way to manage this complexity is to modularize. For networking, this resulted in the layered model that many Slashdot readers are familiar with. He suggested that designers should be wary of putting specific functions in lower layers, since all layers above must live with that design decision. For a longer explanation, one can always read the original paper. If you've never heard of end-to-end before, I do suggest reading this paper before continuing. It's short.
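The paper's canonical example is careful file transfer: only a check performed at the two endpoints catches every possible failure along the way, so reliability has to live at the ends no matter what the network does. Here's a minimal sketch of that idea in Python -- the `transport` object and the choice of MD5 are my own illustrations, not details from the paper:

```python
import hashlib

def send_file(path, transport):
    """Sender: transmit the file's bytes, then an end-to-end digest."""
    data = open(path, "rb").read()
    transport.send(data)
    transport.send(hashlib.md5(data).digest())  # computed at the source

def receive_file(transport, length):
    """Receiver: verify the entire transfer at the destination."""
    data = transport.recv(length)
    digest = transport.recv(16)  # MD5 digests are 16 bytes
    # Per-hop checks inside the network can't catch corruption in a
    # router's memory or a host's disk, so the ends must check anyway;
    # duplicating the check in the middle adds cost, not correctness.
    if hashlib.md5(data).digest() != digest:
        raise IOError("end-to-end check failed; retransfer the file")
    return data
```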
First, Scott Bradner described two competing architectures for voice-over-IP protocols: one which employs central servers to direct and manage calls (the Media Gateway Control model, or Megaco), and one which puts most of the intelligence in the end-points, with the phones/computers originating the calls (the Session Initiation Protocol, or SIP). One important difference: SIP phones can use a central server to direct calls, but Megaco phones have no capability to act independently. Building a great deal of intelligence into the central servers is less end-to-end-compliant than building it into phones at the edges of the network.
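To give a flavor of the endpoint-heavy approach, here's a rough sketch of the kind of thing a SIP phone does: compose a plain-text request and send it straight to the callee. The addresses here are hypothetical, and the message is pared down from what a real SIP implementation would send:

```python
import socket

# Hypothetical endpoints: alice's phone calls bob's directly.
invite = (
    "INVITE sip:bob@192.0.2.20 SIP/2.0\r\n"
    "Via: SIP/2.0/UDP 192.0.2.10:5060\r\n"
    "From: <sip:alice@192.0.2.10>\r\n"
    "To: <sip:bob@192.0.2.20>\r\n"
    "Call-ID: 1234@192.0.2.10\r\n"
    "CSeq: 1 INVITE\r\n"
    "Content-Length: 0\r\n"
    "\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The caller originates the session itself; a central SIP server is
# optional, useful for locating the callee but not required to talk.
sock.sendto(invite.encode("ascii"), ("192.0.2.20", 5060))
```

A Megaco phone, by contrast, waits to be told what to do by its gateway controller, so there is no equivalent "do it yourself" message for it to send.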
One member of the audience pointed out that Federal law requires companies to build wiretapping capabilities into phone switches and wireless network equipment, and wondered how that would be implemented if the phones initiated the connections themselves (SIP). Traditional wiretapping is predicated upon the idea that there is a central server which all communications pass through. The panel candidly replied that when no central server is used and encryption is employed, wiretapping is difficult. One audience member pointed out that wiretapping at centralized switches is not the most effective way to do it, anyway -- since switches can be routed around and communications can be encrypted, the only truly effective way to wiretap would be to build tapping capabilities all the way at the edge of the network -- the phone itself. While some of the audience laughed, I think most of the participants also realized the dark undertones of this suggestion.
Next the discussion turned to innovation. In one model, the central servers would be controlled by companies with a vested interest in managing them conservatively, suppressing competition, etc. In the other, individuals would be able to create/control their own phones on the perimeter of the network, and the only barrier to innovation would be finding someone else to adopt your improvement as well so that the two of you could communicate. In the first model, innovations which benefited the company would be the only ones permitted. In the second one, any innovation which benefited the end-user would be possible.
Finally the discussion moved to a rarely thought about side effect of voice over IP. Universal service -- phone service to (nearly) every resident of the United States -- is funded through access charges on your phone bill. In effect, people in cheap-to-service areas are subsidizing those in expensive-to-service areas, ranging from the badlands of Nevada to wilderness areas of Alaska. From a societal point of view, ubiquitous access to telephones has been a great boon, but providing it requires a societal commitment -- otherwise people living outside of major population centers might never have phone service. Suppose now that traditional telephony is replaced by voice over IP, and no central servers are involved -- there would be no easy way to collect the access charges which subsidize outlying areas. While lowering such taxes may have widespread appeal, completely abandoning the commitment to universal service would be a great loss to society.
The next focus was network security. Firewalls are probably the most obvious breaks in the end-to-end paradigm -- after all, these devices' sole purpose is to stand in the way of network connections and decide which are permitted and which are not. Participants brought up (but thankfully, quickly moved past) the true-but-useless point that if all operating systems were secured properly, there would be no need for firewalls. Hans Kruse pointed out that if security must be implemented at the end anyway -- as it must if any incoming traffic is permitted through the firewall -- then there's no reason to do it at the center as well. David Clark put forth the useful distinction between mandatory and discretionary access controls -- mandatory controls being ones put into place by someone else, discretionary ones put into place by you. Discretionary controls do not violate end-to-end, but mandatory ones generally do. Michael Kleeman noted that firewalls are put into place as often to control the actions of users inside the firewall as to control access from outside.
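To make Clark's distinction concrete: a discretionary control can be as simple as a filter that the end host's owner configures locally. A toy sketch, with an invented rule set:

```python
# Discretionary access control: policy chosen by the machine's owner,
# not imposed by a box in the middle. These rules are made-up examples.
ALLOW = {
    ("tcp", 22),   # ssh
    ("tcp", 80),   # this host's web server
}

def admit(proto, dst_port):
    """Return True if this host chooses to accept the connection."""
    return (proto, dst_port) in ALLOW

assert admit("tcp", 80)
assert not admit("tcp", 23)   # telnet refused by local policy
```

The same rules enforced by a corporate firewall on everyone behind it would be a mandatory control -- the mechanism is similar, but someone else holds the policy.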
Doug Van Houweling spoke regarding Network Address Translation (NAT). NAT allows two networks to be joined together, and is typically used to join a network of machines with non-routable IP addresses to the global Internet. NAT is an outgrowth of the limited availability of IPv4 addresses, but is also employed in some cases as a poor man's security measure. Van Houweling described NAT as an affront to end-to-end: any application which requires address transparency breaks, and because NAT rewrites TCP/IP headers in transit, end-to-end encryption schemes that protect those headers break as well. Some applications also embed addresses in the data they transmit, which a NAT box cannot always rewrite. The group noted that NAT could be eliminated simply by putting more addresses into circulation. Later in the workshop, Andrew McLaughlin talked about the address allocation process for IPv6 and said that it is shaping up to be much better than that for IPv4.
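For readers who haven't run into NAT, a toy sketch of the translation table may help; the addresses and ports are illustrative:

```python
# A toy NAT box: one public address fronting a private network.
PUBLIC_IP = "203.0.113.1"
table = {}       # (private_ip, private_port) -> public_port
reverse = {}     # public_port -> (private_ip, private_port)
next_port = 40000

def outbound(src_ip, src_port):
    """Rewrite an outgoing packet's source address and port."""
    global next_port
    key = (src_ip, src_port)
    if key not in table:
        table[key] = next_port
        reverse[next_port] = key
        next_port += 1
    return PUBLIC_IP, table[key]

def inbound(dst_port):
    """Map a reply back to the private host, if a mapping exists.
    Unsolicited inbound packets have no entry and get dropped -- the
    'poor man's security' above. And because only headers are
    rewritten, an application that announces '10.0.0.5:6000' inside
    its data stream hands its peer an unreachable address."""
    return reverse.get(dst_port)
```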
The workshop moved on next to Quality of Service. QoS in this case covers a wide range of proposals (and a few working implementations) for selectively speeding up or slowing down network traffic -- a sort of Unix nice for network data flows. The "benign" use of QoS is to ensure that strongly time-sensitive traffic like videoconferencing or telephony gets priority over the download of NT Service Pack 16. There are less-benign uses. The red flag in the QoS realm was Cisco's 1999 white paper, which encouraged cable Internet operators to use Cisco's QoS features to speed up access to proprietary (read: profitable) content while slowing down content from competitors -- raising concerns about the role of ISPs in traffic delivery and about abuses by telecom carriers which are also content providers.
This segment started with an overview of QoS. There are several ways to implement QoS on a network. The simplest is to build a network with a capacity great enough that it is never maxed out; if the network has sufficient bandwidth, there's no need to worry about QoS in the first place. There are costs, though, to maintaining sufficient excess capacity on the network. This approach is called "adequate provisioning" if it is your preferred method of managing traffic, or "over-provisioning" if you prefer one of the other QoS approaches. The other approaches under consideration are an integrated services architecture (IntServ) and a differentiated services architecture (DiffServ). The former would monitor and track each individual data flow -- the call you place to your mother in Singapore could be treated differently from the call you place to your grandmother in Krakow. The latter would only differentiate between classes of service -- all videoconferencing would be treated similarly, for example. Of the three, adequate provisioning is fully end-to-end, DiffServ is less so, and IntServ is highly non-compliant.
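As a rough illustration of why DiffServ keeps more intelligence at the edges than IntServ: under DiffServ the sender (or an edge router) simply marks each packet with a class, and routers in the middle schedule by class without tracking individual flows. A sketch assuming a Unix-style socket API -- the IP_TOS option is platform-dependent, and the peer address is made up:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Mark packets "Expedited Forwarding" (DSCP 46; 46 << 2 == 0xB8 as a
# TOS byte), the class commonly proposed for voice and video traffic.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)
sock.sendto(b"voice frame", ("192.0.2.20", 5004))
```

IntServ, by contrast, would have every router along the path reserve resources for this particular flow -- far more state in the middle of the network, which is what makes it the least end-to-end of the three.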
Jerome Saltzer (from the audience) made the point that no QoS technique provides real guarantees of service, and any technique except having plenty of excess bandwidth available violates the principles of end-to-end. He emphasized that people should be aware of the trade-offs.
Jamie Love not only mentioned the Cisco white paper but pointed out that this situation lent itself to the kind of behavior that has landed Microsoft in hot water -- using one's control of a particular system to speed up one's own content and impede competitors' content from flowing. A member of the audience countered that QoS would allow companies to create different levels of service -- pay more for fast access, less for slow access -- and that this was a good thing.
There were two distinct classes of problems identified. The first is similar to the distinction among methods for carrying voice over IP: the companies that control the QoS-enabled servers get to control who gets to innovate in QoS-related areas. The second, related problem is that of carriers using QoS features to promote their own content. The second problem has traditionally been solved by requiring a separation of carriage and content -- keeping the owner of the lines and the provider of content over those lines separate. The current FCC and FTC are not enforcing that traditional check against monopolization of content in telecommunications; thus it's likely that unless governmental policies change, AOL/Time Warner will be in a position to promote its own content through control of the cable Internet services it owns.
Doug Van Houweling then noted that the Internet2 project is taking a very strong stance in favor of QoS, because QoS support is seen as necessary to justify investment in Internet2 architecture.
An audience member spoke up and suggested that the best regulatory course would be regulation with a light touch -- providing the minimum controls necessary for genuinely useful QoS while disallowing abusive uses. At this point Deborah Lathen asked the $64,000 question: how would the FCC make this fine regulatory distinction? No one had a good answer to that question.
In Part two tomorrow: transparent caching, broadband and wireless access, and capitalism.