Slashdot: News for Nerds

ARPANET Co-Founder Calls for Flow Management

ScuttleMonkey posted more than 6 years ago | from the so-i-rewired-it dept.

The Internet

An anonymous reader writes "Lawrence Roberts, co-founder of ARPANET and inventor of packet switching, today published an article in which he claims to solve the congestion control problem on the Internet. Roberts says, contrary to popular belief, the problem with congestion is the networks, not Transmission Control Protocol (TCP). Rather than overhaul TCP, he says, we need to deploy flow management, and selectively discard no more than one packet per TCP cycle. Flow management is the only alternative to peering into everyone's network, he says, and it's the only way to fairly distribute Internet capacity."


163 comments

RMS on the same subject. (4, Interesting)

gnutoo (1154137) | more than 6 years ago | (#22968900)

He seems to agree [stallman.org]. This surprised me, but it seems that equipment can do this fairly.

Re:RMS on the same subject. (1)

gnutoo (1154137) | more than 6 years ago | (#22968946)

The right link [stallman.org]. Sorry about that.

I don't think it is unreasonable to give lower priority to large data transfers, when the net is loaded, as long as that is done fairly for all large data transfers.

Re:RMS on the same subject. (-1, Troll)

shaitand (626655) | more than 6 years ago | (#22969134)

I fail to see how Stallman's opinion on the topic is relevant to anyone but Stallman.

Re:RMS on the same subject. (1)

The Clockwork Troll (655321) | more than 6 years ago | (#22969176)

Agreed.

If they want an expert opinion on flow management, they really need to seek out Q-Tip of A Tribe Called Quest.

Re:RMS on the same subject. (4, Insightful)

Idiomatick (976696) | more than 6 years ago | (#22969302)

It immediately kills any worries that this could be used for "evil", as Stallman wouldn't stand up for anything that could be used for censorship. Big names can be useful sometimes, even in nerd circles.

Re:RMS on the same subject. (0, Offtopic)

timmarhy (659436) | more than 6 years ago | (#22969564)

When did Stallman become an expert on everything?

Re:RMS on the same subject. (5, Funny)

Culture20 (968837) | more than 6 years ago | (#22969644)

When did Stallman become an expert on everything?

Your question implies that he - at some point - was not.

Re:RMS on the same subject. (4, Funny)

TooMuchToDo (882796) | more than 6 years ago | (#22969938)

Kind of like saying, "When did Chuck Norris become a badass?" He's always been a badass, just like the universe has always existed.

Re:RMS on the same subject. (1, Interesting)

Anonymous Coward | more than 6 years ago | (#22969784)

In this case he's not necessarily wrong, but certainly misleading. What he describes ("lower priority to large data transfers [...] as long as that is done fairly for all large data transfers") can't be done. It would require some form of trustworthy tagging (while we're dreaming, let's have world peace). Otherwise a clogged router in the middle would throttle the traffic of a business that uses one external IP address (a web proxy, for example) on an expensive and fast uplink harder than it would throttle the traffic of a measly DSL line. The information "large data transfer" is not available, plain and simple. Any attempt to use "number of TCP streams", "traffic per IP", "packet size" or other metrics as a substitute can easily be worked around, and ultimately doesn't solve the problem, which is that users regularly can't use the network for the intended application because some higher-ups at the ISPs gave themselves a bonus or invested in technology for managing congestion instead of building a faster network. The edge routers could perform some sort of fair queuing, but the edge routers are at a point where there shouldn't be a bandwidth shortage: all customers pay for a defined uplink, and if network congestion is a regular problem at that point, then the ISP oversold its capacity.

You fail to see? (-1, Troll)

Anonymous Coward | more than 6 years ago | (#22969312)

No. You simply fail.

Re:RMS on the same subject. (1, Insightful)

Anonymous Coward | more than 6 years ago | (#22969448)

Maybe because as far as your rights go in relation to privacy, copyright, and computing in general, RMS is one of the few people who have dedicated their lives to defending those rights. If there is a problem that might infringe upon fair, equal access to public information, who do you want to comment? Should those who make a buck off of charging you to get to said information be the only ones with a right to comment, or someone who is dedicated to preserving his and your freedom? I personally think both have equal standing to speak on the subject.

Re:RMS on the same subject. (0, Flamebait)

shaitand (626655) | more than 6 years ago | (#22969548)

Everyone has the right to comment. But unless they have some sort of special knowledge on the topic at hand their comment deserves no more weight than that of an AC on Slashdot.

I welcome comments from RMS on topics where he is an expert and will happily grant them weight. On other issues he is just another individual with a pulpit who somehow thinks I need him to tell me what to think.

I fail to see how Shiatland's opinion is relevant (-1, Flamebait)

Anonymous Coward | more than 6 years ago | (#22969530)

to anyone but Shiatland.

Accessible, knowledgeable and fair (4, Interesting)

gnutoo (1154137) | more than 6 years ago | (#22969552)

Everyone's got their favorite experts, and they are often a shortcut to lots of research you don't have time for. He's an independent expert who cares about your rights above other things, happens to be an expert in OS design who's been working since the early '70s, and knows something about networking as well. Finally, he likes to answer email.

Opps (-1, Flamebait)

Anonymous Coward | more than 6 years ago | (#22969608)

Looks like you're not getting enough moderation on this thread, twitter. Why don't you invite a couple of your sockpuppets [slashdot.org] to come on over and start agreeing with you so you can impress the moderators?

Re:Opps (-1, Offtopic)

gnutoo (1154137) | more than 6 years ago | (#22969930)

Why don't you nut cases take your insane twitter hate and moderation abuse someplace else? I'm not very interested in modpoints, but the blatant knockdown of this thread [slashdot.org] is clearly abusive. People were interested in what I had to say, but you people buried it because you think I'm someone you hate. That's not helping the discussion here.

SLASHDOT SUX0RZ (-1, Troll)

Anonymous Coward | more than 6 years ago | (#22968906)

_0_
\''\
'=o='
.|!|
.| |
peering into everyone's network [goatse.ch]

Re:SLASHDOT SUX0RZ (1)

McGiraf (196030) | more than 6 years ago | (#22969204)

Could you elaborate on this?

Re:SLASHDOT SUX0RZ (1)

Sancho (17056) | more than 6 years ago | (#22969232)

Why bother feeding the trolls?

And where can I buy this flow management? (4, Insightful)

Wesley Felter (138342) | more than 6 years ago | (#22968956)

Oh, from his company of course.

Re:And where can I buy this flow management? (5, Informative)

langelgjm (860756) | more than 6 years ago | (#22969028)

I was about to chastise you for being overly cynical, but then I visited the website [anagran.com] of the author:

Anagran eliminates congestion in the world's busiest networks with Fast Flow Technology™, developed from the ground up to specifically eliminate and resolve congestion created by the proliferation of today's broadband applications such as video, P2P, voice, gaming, YouTube etc. - anywhere in the network.

Should be able to get it anywhere. (2, Informative)

gnutoo (1154137) | more than 6 years ago | (#22969108)

I have been told that the ability to do this has been around since the 1970s. Don't all equipment makers have some version?

Re:And where can I buy this flow management? (1)

StonyUK (173886) | more than 6 years ago | (#22970036)

Damn, that's a funny sig!

Re:And where can I buy this flow management? (1)

SEAL (88488) | more than 6 years ago | (#22969046)

Funny, but true. The first thing that popped into my head reading that article was... it'll never gain acceptance because everyone will be trying to figure out "how can I monetize this?". Then I clicked the link at the bottom to his company and sure enough, there it is. :)

He's even got the *look.* (1)

zippthorne (748122) | more than 6 years ago | (#22969118)

Doesn't the picture kind of look like the juice-man? Or more generally, "Active old guy, excited about stuff, selling nearly worthless Ronco trinkets"

Re:And where can I buy this flow management? (3, Informative)

interiot (50685) | more than 6 years ago | (#22969200)

That's an ad hominem, and an unnecessary one at that. A proposal to change something so important as TCP is bound to fail unless it has significant technical merit. To make things simple, let's just assume that the proposer openly admits they were motivated by self-interest to make the proposal. And the result is: nothing changes. Heck, Al Gore could make this proposal, and it wouldn't change the fact that proposals that are deeply technical will be evaluated on a technical basis.

Re:And where can I buy this flow management? (2, Insightful)

Wesley Felter (138342) | more than 6 years ago | (#22969332)

I was being somewhat sarcastic. In reality I believe that Roberts decided that flow routing is a good idea and then started Anagran to implement it, so he's not a total opportunist. But even on a technical level, I'm having trouble finding people who like flow routing. So we have one expert with an idea that most of the other experts reject. So I don't quite trust this idea on a technical level and I don't entirely trust the guy who's selling it either.

Re:And where can I buy this flow management? (2, Informative)

zolf13 (941799) | more than 6 years ago | (#22969568)

There is no flow routing in Anagran's solution - it is just a per-flow (TCP) shaper/policer that you put at the ingress/egress of your network.
Anyway, in 99% of cases you could achieve the same thing with Linux and SFQ queueing.
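For readers who want to play with that: the idea behind SFQ (hash each flow into a bucket, serve the buckets round-robin) fits in a few lines of Python. This is a toy sketch with invented names, not the kernel's qdisc:

```python
import hashlib
from collections import deque

class ToySFQ:
    """Toy stochastic fair queueing: hash each flow into a bucket,
    then serve the buckets round-robin so no single flow can hog
    the link. Real SFQ also perturbs the hash periodically."""
    def __init__(self, n_buckets=16):
        self.buckets = [deque() for _ in range(n_buckets)]
        self.n = n_buckets
        self.cursor = 0

    def _bucket(self, flow):
        # flow is a (src, dst, sport, dport) tuple
        digest = hashlib.md5(repr(flow).encode()).digest()
        return digest[0] % self.n

    def enqueue(self, flow, packet):
        self.buckets[self._bucket(flow)].append(packet)

    def dequeue(self):
        # one pass over the buckets, round-robin
        for _ in range(self.n):
            q = self.buckets[self.cursor]
            self.cursor = (self.cursor + 1) % self.n
            if q:
                return q.popleft()
        return None
```

A heavy flow fills only its own bucket, so light flows in other buckets still get served once per round.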

inventor of packet switching (5, Informative)

Anonymous Coward | more than 6 years ago | (#22968966)

Larry Roberts was co-founder of the ARPAnet, but he did NOT invent packet switching. That invention goes to Donald Davies of the National Physical Laboratory in the UK. His work was well-credited by the ARPAnet designers.

Re:inventor of packet switching (0)

Anonymous Coward | more than 6 years ago | (#22969722)

Larry Roberts was co-founder of the ARPAnet, but he did NOT invent packet switching.

Oh really. Well then how come you never see any saccharin on the table? Now it's always either Sweet-n-Low or Equal. Explain that one. And don't even get me started about Splenda.

Re:inventor of packet switching (2, Informative)

the eric conspiracy (20178) | more than 6 years ago | (#22970018)

It is not quite that simple. There were multiple researchers working in that area at the same time, including Kleinrock and Baran. Kleinrock has a good claim based on a 1961 publication and his PhD dissertation. Baran clearly developed the concept in conjunction with his ideas about secure networks. Davies has a good case because he built the first working example.

If you ask Larry Roberts he would say that the honor belongs to Kleinrock.

Personally I don't think that you can say that there is a sole inventor because several people contributed to the seminal idea.

That's all fine... (4, Interesting)

Em Adespoton (792954) | more than 6 years ago | (#22968976)

The problem is, some people will start throwing away 2 packets instead of 1 so that they can get more "throughput" on more limited hardware. Someone else will compete by tossing 3, and the arms race for data degradation will begin.

Will this method really offset the retransmits it triggers? Only if not everyone does it, unless I'm missing something.

What might work better is scaled drops: if a router and its immediate peers are nearing capacity, they start to drop a packet per cycle, automatically causing the routers at their perimeter to route around the problem, easing up on their traffic.

It still seems like a system where an untrusted party could take advantage to drop packets in this manner from non-preferred sources or to non-preferred destinations however.

Re:That's all fine... (1)

CodeBuster (516420) | more than 6 years ago | (#22969152)

It still seems like a system where an untrusted party could take advantage to drop packets in this manner from non-preferred sources or to non-preferred destinations however.
Sounds like something that Comcast might do...

Re:That's all fine... (3, Interesting)

markov_chain (202465) | more than 6 years ago | (#22969170)

Routing does not change based on traffic on that short a timescale; it changes when a link goes down, a policy agreement changes, an engineer changes some link allocation, etc. Doing traffic-sensitive routing is hard because of oscillations: in your example, would the perimeter nodes switch back to the now congestion-free router?

Re:That's all fine... (5, Interesting)

jd (1658) | more than 6 years ago | (#22969318)

Ok, here's the theory. Two packets have travelled some distance along two distinct paths p1 and p2. If nothing is done, then at least one packet is guaranteed lost, and quite likely both will be. Thus you will need to retransmit both packets, and every node along p1 and p2 will see its total traffic over a given time period increase. When traffic levels are low enough, the extra traffic is absorbed into the flows and there's no impact beyond a slight fluctuation in latency.

If the total traffic is above a certain threshold, but below a critical value, then a significant number of packets will be retransmitted. This causes the load to increase the next cycle around, causing further packet loss and further retransmits. There will be a time - starting with a fall in fresh network demand - in which observed network demand actually rises, due to the accumulation of errors.

There will then be a third critical value, close to but still below the rated throughput of the switch or router. Provided no errors occur, the traffic will flow smoothly and packet loss should not occur. This isn't entirely unlike superheating - particularly on collapse. Only a handful of retransmits would be required - and they could occur anywhere in the system for which this is merely one hop of many - to cause the traffic to suddenly exceed maximum throughput. Since the retransmitted packets will add to the existing flows, and since the increase in traffic will increase superlinearly, that node is effectively dead. If there's a way to redirect the traffic for dead nodes, there is then a high risk of cascading errors, where the failure will ripple out through the network, taking out router/switch after router/switch.

Does flow management work? Linux has a range of RED and BLUE implementations. Hold a contest at your local LUG or LAN gamers' meet, to see who can set it up the best. Flow management also includes ECN. Have you switched that on yet? There are MTUs and window sizes to consider - the defaults work fine most times, but do you understand those controls and when they should be used?

None of this stuff needs to be end-to-end unless it's endpoint-active (and only a handful of such protocols exist). It can all be done usefully anywhere in the network. I'll leave it as an exercise to the readership to identify any three specific methods and the specific places on the network they'd be useful on. Clues: Two, possibly all three, are described in detail in the Linux kernel help files. All of them have been covered by Slashdot. At least one is covered by the TCP/IP Drinking Game.

Re:That's all fine... (2, Informative)

klapaucjusz (1167407) | more than 6 years ago | (#22969482)

Linux has a range of RED and BLUE implementations. Hold a contest at your local LUG or LAN gamers' meet, to see who can set it up the best.

RED is tricky to set up, but neither Blue nor PI require much tuning, if any. (I'm running Blue on all of my 2.6 Linux routers, and RED on all the 2.4 ones.)

Flow management also includes ECN. Have you switched that on yet?

Yes, I have. On all of my hosts and routers. It's a big win for interactive connections, but doesn't matter that much for bulk throughput.

There's MTUs and window sizes to consider - default works fine most times, but do you understand those controls and when they should be used?

Unless you're running some fancy link technology, you don't get to tune your MTU. If, like most of us, you're running Ethernet and WiFi only, you're stuck with 1500 bytes.

As for window sizes, they're pretty much tuned automatically nowadays, at least if you're running a recent Linux or Windows Vista.

toss one packet?! (2, Insightful)

ILuvRamen (1026668) | more than 6 years ago | (#22968988)

I'm not big on networking but if I'm sending data to someone and some "flow management" dumps one of the packets, won't my computer or modem just resend it? Seems like not such a good idea to me.

Re:toss one packet?! (2, Informative)

denormaleyes (36953) | more than 6 years ago | (#22969090)

Yes, your computer will resend it. But it does even more! TCP interprets dropped packets as congestion and will reduce its load on the network, generally by 50%. Dropping more than one packet per round trip just serves to confuse TCP a bit more but the net effect is the same: reduce the load by 50%.

Re:toss one packet?! (1)

Nibbler999 (1101055) | more than 6 years ago | (#22969098)

It will get resent via a different path, hopefully one that is less congested.

Re:toss one packet?! (1)

mapleneckblues (1145545) | more than 6 years ago | (#22969306)

Ummm no. TCP would reduce its window size and slow down.

Re:toss one packet?! (4, Interesting)

shaitand (626655) | more than 6 years ago | (#22969114)

'I'm not big on networking but if I'm sending data to someone and some "flow management" dumps one of the packets, won't my computer or modem just resend it?'

Yes and when the retransmission occurs the router may be able to handle your packet. The router won't be overloaded forever after all.

The bigger part of the equation is that with TCP the more packets are dropped the slower you transmit packets. With this solution the heaviest transmissions would have more packets dropped and therefore be slowed down the most.

I admit, I'd have to check the details of the protocol to see if this is open to abuse by those with a modified TCP stack. The problem is that the packets are dropped in a predictable manner and a modified TCP stack could be designed to 'filter' the noise and yet still degrade when other packets are lost and provide a reliable connection.

Re:toss one packet?! (2, Informative)

markov_chain (202465) | more than 6 years ago | (#22969220)

Yes, TCP congestion control relies on everyone following the protocol. If you hack your TCP stack to send each packet twice, not cut down the congestion window, etc., you can get better performance. In practice, anyone doing this on a scale large enough to be noticed (think Apple) would get yelled at by the ISPs. Big players wouldn't do it because if the majority of users tried to cheat their performance would get worse.

IMHO hacks like this don't help enough to be worth the trouble of installing, and if they do help, they likely need both endpoints to cooperate, in which case you might as well use a custom UDP protocol.

Re:toss one packet?! (0)

Anonymous Coward | more than 6 years ago | (#22969352)

This all seems like an extraordinarily bad idea. I admit a certain degree of ignorance, but flow control at the IP layer is flawed for several reasons:

1. There is little available information and context for smarter decisions that can only be made intelligently at higher levels of the stack (for example, dynamically changing codecs). What constitutes a 'flow' is easy for HTTP but quite difficult to pin down for many other protocols, including P2P clients.

2. Implementing flow control algorithms on top of other flow control algorithms is typically the very definition of disaster.

Re:toss one packet?! (1)

shaitand (626655) | more than 6 years ago | (#22969442)

Big players wouldn't do something like use a hacked TCP stack, but a P2P application might. Just as there are P2P applications that use a hacked version of their own protocol to thwart fairness efforts.

20 million P2P users with hacked stacks in this scenario would probably result in poorer performance and greater congestion than we have now.

Re:toss one packet?! (1)

markov_chain (202465) | more than 6 years ago | (#22969500)

Ultimately, cheating doesn't help if enough people do it, whether it's by using hacked TCP stacks, or roll-your-own UDP protocols. The only real way to improve performance is to grow the network capacity.

Creates incentive to remove retransmit delay (2, Interesting)

Burz (138833) | more than 6 years ago | (#22969484)

...from one's own TCP stack.

I think this proposal is a bit reckless and naive at the same time. Not a good combination. Add to that he is trying to set a precedent for data degradation when none is needed.

If networks want to reduce traffic in a civil manner, they will price their service similar to the way hosting providers do: Offer a flat rate up to a set cap measured in Gb/month, with overages priced at a different rate. People would then pay for their excesses, allowing the ISP to spend more on adding capacity.

End-users like this arrangement for cellphone service. They would understand and appreciate such a thing coming to their Internet service, especially if it meant that most of them ended up paying $10 less on their monthly bill.

I think we are not seeing a transition to a cap-and-overage pricing structure because the ISPs are more interested in becoming monopolies than in competing the way wireless services naturally do. Verizon is turning its back on many lower-middle-income areas while it tears out common-carrier POTS lines from the neighborhoods it does serve. Comcast winks, nods and accepts its role for the lower-end neighborhoods, degrading throughput and behaving as a paternalistic manipulator of data in the process. They are carving up the market, so don't expect rational and time-tested solutions that benefit the customer.

Re:Creates incentive to remove retransmit delay (1)

shaitand (626655) | more than 6 years ago | (#22969626)

'End-users like this arrangement for cellphone service.'

I am not sure what world you live in, but I don't know many people who are happy with the pricing schemes of cell phones.

All the same problems would be shared with the scheme you propose. First, you would be charged for incoming bandwidth. Second, the rates are never lower than unlimited service; people pay the higher rates because cell phones are more convenient. Third, you have to constantly track your usage and would have to refrain from using your connection at times.

The only people who like the cell phone schemes and would like this scheme are those who do not fully utilize their connection.

Re:toss one packet?! (2, Informative)

dynchaw (1188279) | more than 6 years ago | (#22969342)

Yes, your computer will resend it but due to the sliding window protocol http://en.wikipedia.org/wiki/Sliding_window [wikipedia.org] it will also reduce the speed at which it is sending. By dropping a packet TCP will detect congestion and reduce the size of the window. For every dropped frame it will reduce the window by a factor of 2. Each time an entire window is sent without a drop, it increases the size of the window by one.
So it quickly drops down to below the available bandwidth then slowly grows the speed up to it.

This normally happens auto-magically between the two ends of a TCP connection to grow the connection to the capacity of the smallest link in the chain as a result of random drop or FIFO queues. By tracking each flow and their window management, the window size and thus speed of the flow can be controlled by any hop in the chain.
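The grow-by-one, halve-on-loss behavior described above is the classic AIMD (additive increase, multiplicative decrease) rule, and the resulting sawtooth is easy to reproduce in a toy simulation (the function and parameter names here are invented for illustration):

```python
def aimd_sawtooth(capacity, rounds, cwnd=1):
    """Simulate one TCP sender's congestion window against a fixed
    bottleneck: grow by 1 each loss-free round trip, halve on a drop."""
    history = []
    for _ in range(rounds):
        if cwnd > capacity:           # queue overflow: a packet is dropped
            cwnd = max(1, cwnd // 2)  # multiplicative decrease
        else:
            cwnd += 1                 # additive increase
        history.append(cwnd)
    return history

window = aimd_sawtooth(capacity=10, rounds=30)
```

Plotting `window` gives the familiar TCP sawtooth: the window ramps up past the bottleneck, halves, and ramps up again, oscillating around the link capacity.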

Re:toss one packet?! (1)

timmarhy (659436) | more than 6 years ago | (#22969602)

It works, kind of, but it's murder on things like games, which need a good ping time.

In short, I'd drop any ISP that did this.

Hmmm (1, Funny)

Anonymous Coward | more than 6 years ago | (#22968992)

Why didn't he think of that 40 years ago?

Why not now? (3, Interesting)

eldorel (828471) | more than 6 years ago | (#22969012)

Can someone explain why this hasn't already been implemented?
Seems like there would have to be a good reason, otherwise this would just make more sense, right?

Re:Why not now? (1)

shaitand (626655) | more than 6 years ago | (#22969124)

I dunno but he said the flow management idea was just presented recently.

Re:Why not now? (4, Informative)

Burdell (228580) | more than 6 years ago | (#22969140)

Overhead. Right now, routers just track individual packets: receive a packet, look up the next-hop IP in the forwarding table (which might have 250,000 entries), and send it on its merry way. To do anything based on flows, routers would have to keep track of all the active flows, which amounts to all open TCP connections going through that router. For an active router, there would be millions of active flows at any one time, so the overhead would be huge. This would be like a NAT or stateful firewall device that could do line-rate forwarding at gigabit, 10G, or 100G port speeds.

You also have problems tracking flows; routes change, so while a router may be tracking an active flow, the flow may choose another path. The router has no way of knowing this, so it has to keep track of the flow until it times out (and the timeout would have to be more than just a few seconds).

There are flow-based router architectures, but they are not generally used for ISP core/edge routers because there are too many ways they can break.
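The bookkeeping burden described above can be made concrete with a toy flow table: every tracked connection needs at least a key, a counter, and a timestamp for the idle timeout, and a core router would need millions of such entries (the names here are illustrative, not any vendor's API):

```python
import time

class FlowTable:
    """Toy per-flow state table with an idle timeout, the minimum a
    flow-aware router would need. Real routers also keep byte counts,
    window estimates, etc., per flow."""
    def __init__(self, idle_timeout=60.0):
        self.flows = {}            # 5-tuple -> (last_seen, pkt_count)
        self.idle_timeout = idle_timeout

    def see_packet(self, five_tuple, now=None):
        now = time.monotonic() if now is None else now
        _, count = self.flows.get(five_tuple, (now, 0))
        self.flows[five_tuple] = (now, count + 1)

    def expire(self, now=None):
        """Drop flows idle longer than the timeout; returns how many."""
        now = time.monotonic() if now is None else now
        dead = [k for k, (seen, _) in self.flows.items()
                if now - seen > self.idle_timeout]
        for k in dead:
            del self.flows[k]
        return len(dead)
```

Even at a few dozen bytes of state per entry, millions of concurrent flows plus an expiry scan is exactly the overhead the parent describes, and a flow that silently reroutes away still occupies its entry until the timeout fires.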

Re:Why not now? (2, Informative)

liquidpele (663430) | more than 6 years ago | (#22969192)

Routers already track flows if you want them to. Look up "netflow" (Cisco), "jflow" (Juniper), and "sflow" (standard). There are others, but those are the main two.

Re:Why not now? (2, Informative)

Burdell (228580) | more than 6 years ago | (#22969326)

Netflow/Jflow are for statistics, and on the larger routers, they are just sampled (not every packet/flow is monitored, just one out of every N packets). Originally netflow could be used as a packet switching method on Cisco, but it is just for statistics now.

Re:Why not now? (1)

the eric conspiracy (20178) | more than 6 years ago | (#22970054)

Unfiltered netflow data can add up to about 10% of the total throughput of the router. That's like getting a drink of water through a fire hose.

Re:Why not now? (1)

liquidpele (663430) | more than 6 years ago | (#22970060)

Yes... but the statistics include things like bytes/second (among many others). How else were you expecting them to track flows and decide which flows needed packets to be dropped to slow them down?

Also, only sflow (by its own definition) takes samples and can't track all packet data, and future versions are supposed to remove this limitation, from what I hear (maybe just rumors?). Netflow and jflow are usually configured for total analysis, if they are turned on at all.

Re:Why not now? (1)

markov_chain (202465) | more than 6 years ago | (#22969236)

Good post. Another issue is that many flows are way too short for flow-tracking to help.

Re:Why not now? (2, Interesting)

klapaucjusz (1167407) | more than 6 years ago | (#22969610)

To do anything based on flows, routers would have to keep track of all the active flows, which amounts to all open TCP connections going through that router.

Only if you want to be fair.

In practice, however, you only want to be approximately fair: to ensure that the grandmother can get her e-mail through even though there's a bunch of people on her network running Bittorrent. So in practice it is enough to keep track of just enough flow history to make sure that you're fair enough often enough, and no more.

A number of techniques have been developed to do that with very little memory; my favourite happens to be Stochastic Fair Blue [wikipedia.org] .
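A rough sketch of the idea, with invented names and constants, and omitting the real algorithm's queue accounting and hash perturbation: each flow maps to one bin per hash level, each bin remembers a drop probability that rises on overflow and falls when idle, and a packet is dropped with the *minimum* probability over its flow's bins, so only flows that overflow every bin they touch (the unresponsive ones) are penalized:

```python
class ToySFB:
    """Sketch of Stochastic Fair Blue's accounting. Each of `levels`
    independent hashes maps a flow to one bin; a bin's drop probability
    is bumped up when its queue overflows and decayed when it drains."""
    def __init__(self, levels=2, bins=16, step=0.05):
        self.levels, self.bins, self.step = levels, bins, step
        self.p = [[0.0] * bins for _ in range(levels)]

    def _bins_for(self, flow):
        return [hash((lvl, flow)) % self.bins for lvl in range(self.levels)]

    def on_overflow(self, flow):   # the bin's queue exceeded its limit
        for lvl, b in enumerate(self._bins_for(flow)):
            self.p[lvl][b] = min(1.0, self.p[lvl][b] + self.step)

    def on_idle(self, flow):       # the bin's queue drained
        for lvl, b in enumerate(self._bins_for(flow)):
            self.p[lvl][b] = max(0.0, self.p[lvl][b] - self.step)

    def drop_probability(self, flow):
        # min over the flow's bins: a well-behaved flow sharing one bin
        # with a hog is rescued by its other, uncongested bin
        return min(self.p[lvl][b]
                   for lvl, b in enumerate(self._bins_for(flow)))
```

The memory cost is `levels * bins` probabilities, independent of the number of flows, which is the point the parent makes about approximate fairness with very little state.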

Re:Why not now? (1)

roblaird (633935) | more than 6 years ago | (#22969728)

Cisco CEF does pretty much what you describe. It avoids the route table lookup by keeping an adjacency table of recent connections, and switches the packet to the correct port. CEF is one of three route caching technologies on Cisco routers, and is default on modern versions of IOS. Route table lookups (i.e. process switching) are only done when there is no cache entry, either because it's the first flow or the entry has aged out of the adjacency table. Cisco CEF [cisco.com]

Re:Why not now? (1)

Burdell (228580) | more than 6 years ago | (#22970142)

Ahh, the Customer Enragement Feature. It isn't a table of "recent connections" or really any kind of cache. CEF builds a forwarding table from the routing table; the forwarding table has all routes resolved to the next-hop interface. When there is a routing table update, a CEF forwarding table update is also made; the CEF forwarding table has just as many entries as the routing table. CEF has nothing to do with flows or connections.

Re:Why not now? (0)

Anonymous Coward | more than 6 years ago | (#22969362)

Why are you claiming that it hasn't been? What is your agenda?

Random early detection (RED) has been used for over a decade, even in low-end Cisco routers and with Linux. It was first introduced in a 1993 paper by Sally Floyd and Van Jacobson. While it doesn't guarantee that a single stream won't lose more than a single packet, that outcome is likely because it picks packets to drop at random.
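The gist of RED is small enough to sketch; the thresholds below are made up for illustration, not recommended settings:

```python
import random

def red_drop(avg_q, min_th=5, max_th=15, max_p=0.1):
    """Random Early Detection: as the *average* queue length moves from
    min_th to max_th, drop each arriving packet with a probability that
    rises linearly from 0 to max_p; above max_th, drop everything."""
    if avg_q < min_th:
        return False
    if avg_q >= max_th:
        return True
    p = max_p * (avg_q - min_th) / (max_th - min_th)
    return random.random() < p
```

Because drops are random and the probability stays small until the queue is genuinely building, any single TCP stream is unlikely to lose more than one packet per window, which is close to what the article asks for.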

Re:Why not now? (4, Insightful)

Anonymous Coward | more than 6 years ago | (#22969440)

Can someone explain why this hasn't already been implemented?


It has been implemented and abandoned already because it doesn't scale. Serious routers today use the concept of interface adjacency: for a given inbound packet there are only a few possible destinations: they are each of the interfaces on the router.


When a route is installed into the FIB, you can recursively follow that route until you find the egress interface and the layer 2 address of the next hop - those will typically never change! So long as the router always keeps this adjacency information up to date, individual packets never need to have a route lookup performed - the destination prefix is checked in the adjacency table, the layer 2 header is rewritten, and the packet is queued for egress on the appropriate interface.


This allows for substantially higher throughput (in packets per second) than other methods because the adjacency table can be cleverly stored in content-addressable memory that provides constant time answers. A prefix will be installed in a content-addressable memory circuit as a lookup key. The value associated with that key is a pointer into the adjacency table that holds the interface and layer 2 information for that prefix.


By reconsidering the routing problem, and by using some smart circuits, the route lookup for a single packet has been reduced from O(k) to O(1), where k is the length of the longest prefix. For IPv4, that's up to 32-bits - so that means you do a single fetch and lookup instead of 32 or so comparisons for each packet. At a million packets per second, that's a huge difference.


Traditional flow-based routing requires creating in-memory structures for each flow, collectively called the flow cache. Each packet requires an initial full route lookup, which builds the structures for that flow. Then, subsequent packets in that flow can be matched against the cache and switched directly to the egress interface. This operation is much closer to that of a contemporary firewall. The good thing about this method is that it gives you a lot of visibility into the traffic. The bad side is that it requires a very large amount of memory for all of these structures. When that memory is exhausted, you can't route any more flows!
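A toy version of the flow cache described above, keyed on the 5-tuple, with the memory-exhaustion failure mode made explicit (the class name, capacity, and slow-path callback are invented for illustration; real implementations expire idle flows rather than fail):

```python
class FlowCache:
    def __init__(self, capacity, full_lookup):
        self.capacity = capacity        # finite memory for flow entries
        self.cache = {}                 # 5-tuple -> egress interface
        self.full_lookup = full_lookup  # slow path: full route lookup

    def switch(self, flow):
        """flow is a 5-tuple (src, dst, proto, sport, dport)."""
        if flow in self.cache:
            return self.cache[flow]     # fast path: cached flow
        if len(self.cache) >= self.capacity:
            # The failure mode the comment describes: table full,
            # so new flows cannot be routed at all.
            raise MemoryError("flow cache full: can't route new flows")
        egress = self.full_lookup(flow) # first packet pays the full cost
        self.cache[flow] = egress
        return egress
```

The first packet of each flow is expensive and every concurrent flow holds memory, which is why this architecture loses to the adjacency-table approach at core-router packet rates.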


This comparison is a bit apples to oranges - the adjacency table described above is pretty much state-of-the-art for off the shelf gear, while the flow cache architecture is highly dated. But without some substantial advances in the ways flows are created, tracked, and expired, no flow router is going to reach the number of packets per second that are required for very large installations in the Internet.

Re:Why not now? (2, Interesting)

skiingyac (262641) | more than 6 years ago | (#22969638)

I have to think that tracking/throttling the rate per IP is already done by ISPs where bandwidth is a problem. Otherwise all the P2P sessions, as well as UDP traffic (which has no congestion control and so doesn't respond to a loss by reducing its rate), would clobber most TCP sessions. Fixing TCP will just lead people to use UDP; Skype, worms, and P2P applications already exploit UDP for exactly this reason. So cut straight to the chase: don't bother making TCP fairer or whatever, just do per-IP rate control at the ISP level.
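The per-IP rate control suggested here could be sketched as a token bucket per source address (the rates, bucket depth, and class name are arbitrary illustrations). Because accounting is per IP rather than per flow, a host with 200 UDP or TCP flows is limited exactly like a host with one:

```python
class PerIPLimiter:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0   # token refill rate, bytes/second
        self.burst = burst_bytes     # bucket depth, bytes
        self.tokens = {}             # src_ip -> available bytes
        self.last = {}               # src_ip -> time of last packet

    def allow(self, src_ip, pkt_len, now):
        """Return True to forward the packet, False to drop/queue it."""
        # Refill this IP's bucket for the time elapsed since its last packet.
        elapsed = now - self.last.get(src_ip, now)
        self.last[src_ip] = now
        tokens = self.tokens.get(src_ip, self.burst)
        tokens = min(self.burst, tokens + elapsed * self.rate)
        if tokens >= pkt_len:
            self.tokens[src_ip] = tokens - pkt_len
            return True              # within the per-IP rate
        self.tokens[src_ip] = tokens
        return False                 # over the per-IP rate
```

Note this is agnostic to the transport protocol, which is the point: UDP traffic that ignores congestion signals still can't exceed its source's share.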

It isn't clear why this needs to be done in the core routers at all instead of just at the endpoints, if the goal is to make P2P traffic more manageable. P2P using so many flows (and different routes) will probably just get around any core-based solution anyway.

Also "loss unfairness" is already solved by ECN, but ECN (which is implemented) isn't really used as much as it should/could be, because some routers drop your packets if you use it (I believe either chase.com or chaseonline.com still does this?), and nobody really cares. Why add something else to the client stacks that nobody will end up using either? That is basically why this hasn't been implemented.

Flow control??? (3, Funny)

3-State Bit (225583) | more than 6 years ago | (#22969016)

So now we actually DO need to make the Internet more like a series of tubes??? brain asplode

Reduce hop count. (2, Interesting)

suck_burners_rice (1258684) | more than 6 years ago | (#22969128)

This does not sound like a correct solution. Rather, emphasis should be placed on installing more links, both in parallel to existing links, and "bypass" links that will shorten the number of hops from one given location to another. Whether based on copper, fiber, satellite, or other technology, the sheer number of separate paths and additional routing points will make a huge difference. Special emphasis should be placed on shortening the hop count between any two given areas.

Re:Reduce hop count. (1)

mapleneckblues (1145545) | more than 6 years ago | (#22970162)

and while we are at it, why not make the Internet a fully connected mesh eh?

Beyond flow fairness, user fairness... (2, Insightful)

nweaver (113078) | more than 6 years ago | (#22969146)

Any device sophisticated enough to do the flow fairness described can also do "user" fairness by averaging behavior across multiple flows from the same source, and the behavior of the source over time.

This solves the P2P problem, and has a bunch of other advantages.

Note, also, you only need to do this at the edges, as the core is pretty overprovisioned currently.

Re:Beyond flow fairness, user fairness... (3, Interesting)

shaitand (626655) | more than 6 years ago | (#22969174)

That brings up a question of entitlement. It suggests that there are users who should be punished.

Those who engage in low-bandwidth activities are not entitled to more bandwidth, nor are those engaging in high-bandwidth activities entitled to less. Both are entitled to equal bandwidth and have the right to utilize it, or not, accordingly.

Re:Beyond flow fairness, user fairness... (1)

Digi-John (692918) | more than 6 years ago | (#22969338)

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and as much Bandwidth as they can take?
People act like implementing bandwidth limits and such is some sort of human rights violation--call the UN, my bandwidth has been capped!

Re:Beyond flow fairness, user fairness... (1)

shaitand (626655) | more than 6 years ago | (#22969388)

Bandwidth caps aren't the problem; pretty much everyone implements bandwidth caps. We are talking about punishing people who actually use their connections.

I have an eight megabit connection. But not really, the connection is actually much faster than that, my bandwidth is capped at eight megabit. I have no problem with that. After all, I pay for unlimited use of an eight megabit connection. The problems come when you can't actually deliver that eight megabits on an unlimited basis despite me paying for it.

Now, if you can't deliver what you promised AND you are giving priority to someone else who doesn't use the service they pay for I am going to be really pissed.

We are entitled with rights not because some magic man in the sky grants them but because we paid for a service. By not providing that service and taking our money the ISPs are stealing from us.

Sadly, no... (1)

nweaver (113078) | more than 6 years ago | (#22969418)

The problem is, without user fairness, the heavy users get MORE bandwidth. This is the multiflow problem.

Your neighbor and you share a common bottleneck. You're websurfing; he's got 6 torrents downloading. He is going to have at least 24 active flows, running full bore, and you will have 1 or 2 (which are bursty, even). Thanks to how TCP works, without traffic shaping you will receive 1 packet for every 24 he gets.

User fairness needs to be implemented in the network to keep his traffic from walking all over yours.
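The 24-to-1 split falls straight out of per-flow fairness arithmetic; a toy calculation using the flow counts from the example above:

```python
def per_flow_share(my_flows, neighbor_flows):
    """Under per-flow fairness, each flow gets an equal slice of the
    bottleneck link, so a user's share is proportional to flow count."""
    total = my_flows + neighbor_flows
    return my_flows / total, neighbor_flows / total

mine, his = per_flow_share(1, 24)
print(f"me: {mine:.0%}, torrent neighbor: {his:.0%}")  # me: 4%, torrent neighbor: 96%
```

Per-user fairness changes the denominator from flows to users, giving each neighbor 50% of the bottleneck regardless of how many flows they open.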

Re:Sadly, no... (2, Insightful)

shaitand (626655) | more than 6 years ago | (#22969516)

The multi-flow problem is already solved at the level of management he has proposed: it equalizes the total bandwidth coming from one IP, not just a specific flow.

'Your websurfing, he's got 6 torrents downloading. He is going to have at least 24 active flows, running full bore, and you will have 1 or 2 (which are bursty even). Thanks to how TCP works, without traffic shaping, you will receive 1 packet for every 24 he gets.'

As things stand now, yes. But under the scheme he is suggesting my flows would be slowed down while yours would not.

This slows the rate at which the P2P user transmits packets from a given IP (regardless of how many flows are used) so that whether I am using one or 200 connections, you and I will receive packets at the same rate while we are both requesting them.

At the end of the day the P2P user used more bandwidth, and received more data. But that is well and good, there is nothing wrong with utilizing your connection. The surfer (or any other type of user) never had to wait longer because of how the P2P user used their connection.

Too many people want to punish the people who use their connection regularly because light users experience congestion. Network admins in particular begin seeing high bandwidth users as evil.

Re:Sadly, no... (1)

nweaver (113078) | more than 6 years ago | (#22969782)

Oops, me bad. Missed that last bit.

per-user fairness and Nat (1)

j1m+5n0w (749199) | more than 6 years ago | (#22969890)

From the article:

Control each flow so that the total traffic to each IP address (home) is equally and fairly distributed no matter how many flows they use.

This sounds like a pretty good idea until you start thinking about NAT'ed networks. Is it really fair to treat an entire office or dorm (or even a small country) the same as a single user who happens to have a unique IP from their ISP? And what about the transition to IPV6, when presumably IP addresses are no longer going to be scarce?

Re:per-user fairness and Nat (1)

nobody/incognito (63469) | more than 6 years ago | (#22970086)

Is it really fair to treat an entire office or dorm (or even a small country) the same as a single user who happens to have a unique IP from their ISP?

in a word: yes.

in several words: if the office/dorm/whatever is paying for one network attachment, the "fair" thing to do is to provide them with the same level of service provided to anyone else paying for a single hook up.

Privacy issues? (2, Interesting)

jmac880n (659699) | more than 6 years ago | (#22969150)

It seems to me that by moving knowledge of flows into the routers, you make it easier to tap into these flows from a centralized place - i.e., the router.

Not that tapping connections can't be done now by spying on packets, of course, but it would make it much cheaper to implement. High-overhead packet matching, reassembly, and interpretation are replaced by a simple table lookup in the router.

Donning my tinfoil hat, I can foresee a time when all routers 'must' implement this as a backdoor...

Re:Privacy issues? (1)

Creepy Crawler (680178) | more than 6 years ago | (#22969414)

Then I can run a Linux router, with honeyd where the backdoor must be.

Weird solution (3, Insightful)

Percy_Blakeney (542178) | more than 6 years ago | (#22969180)

I couldn't help but laugh a bit at his solution. He talks about "flow management" being put into the core of the network to solve TCP's unfairness problem, but at the end of the article he says:

Although the multi-flow unfairness that P2P uses remains, flow management gives us a simple solution to this: Control each flow so that the total traffic to each IP address (home) is equally and fairly distributed no matter how many flows they use.

So, in other words, his solution to the "P2P problem" is just a fancy version of a token bucket.

Re:Weird solution (1)

markov_chain (202465) | more than 6 years ago | (#22969256)

Good luck doing that on a core router passing millions of flows, like another comment above said.

Another point is that TFA seems to imply that fixing TCP, whatever that means, will somehow solve the network congestion problems. However, the only real way to fix congestion is to grow capacity, which seems to have worked thus far.

Re:Weird solution (2, Funny)

Bill Dog (726542) | more than 6 years ago | (#22969542)

However, the only real way to fix congestion is to grow capacity, which seems to have worked thus far.
To use an obligatory car analogy, think of congestion on the roadways. What you're talking about is widening the highways. But that just encourages more packets to be placed onto the network. We need to stop letting people use the Internet for what they want, when they want (freedom is bad). What we really need is the equivalent of, you guessed it, public transportation. Here's how it would work: As we all know your data is broken up and encapsulated into packets. Your packets would arrive at your edge router, where the data would disembark and then sit around and wait for the equivalent of the TCP/IP bus. This mega-packet that carries multiple ordinary packets worth of data around would then stop at each hop on its fixed route. Your data would have to check the fucking mega-packet schedule to see where to get off and then wait for another mega-packet to come along for the next leg of its journey.

Re:Weird solution (1)

markov_chain (202465) | more than 6 years ago | (#22969700)

I get it, anyone has the right to board the bus, so it's fair.

Re:Weird solution (1)

TubeSteak (669689) | more than 6 years ago | (#22969390)

So, in other words, his solution to the "P2P problem" is just a fancy version of a token bucket.
His (server side) solution is cheaper than requiring everyone, everywhere to upgrade/change their TCP stack.

Hell, the idea of asking everyone to change their stack practically invites a "Your solution advocates a" response. You'd either have to lock out people running the 'old' stack or... wait for it... throttle them server side.

Re:Weird solution (0)

Anonymous Coward | more than 6 years ago | (#22969536)

If the solution mentions TCP, it is broken. TCP is a payload of IP. The only information about a packet that a network operator can rely on is the IP header (mostly just the target IP address, if he's not near the leaf node.) Everything else can look like random data and only make sense to the endpoints which are sending/receiving these packets. Any investment in TCP shaping technology becomes obsolete when opportunistic IPSec enters the mainstream.

Linux already has per-flow fairness (2, Informative)

Anonymous Coward | more than 6 years ago | (#22969238)

When using Linux as a router there are already several ways to get per-flow fairness. The simplest and most obvious one is Stochastic Fairness Queueing (SFQ). The problem is that commercial routers can't do that in hardware.
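The idea behind SFQ can be sketched in a few lines: hash each flow into one of a fixed number of queues and serve the queues round-robin (the bucket count and dequeue loop are simplified relative to what Linux actually implements, and integer flow IDs stand in for hashed 5-tuples):

```python
from collections import deque

class SFQ:
    def __init__(self, n_buckets=16):
        self.buckets = [deque() for _ in range(n_buckets)]
        self.cursor = 0   # round-robin position

    def enqueue(self, flow_id, packet):
        # Hash the flow to a bucket; distinct flows usually land in
        # distinct buckets ("stochastic" fairness: collisions happen).
        self.buckets[hash(flow_id) % len(self.buckets)].append(packet)

    def dequeue(self):
        # One packet per non-empty bucket per turn, so a flow with 24
        # queued packets and a flow with 1 still alternate on the wire.
        for _ in range(len(self.buckets)):
            bucket = self.buckets[self.cursor]
            self.cursor = (self.cursor + 1) % len(self.buckets)
            if bucket:
                return bucket.popleft()
        return None       # nothing queued
```

Linux's real qdisc additionally perturbs the hash periodically so colliding flows don't stay stuck sharing a bucket.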

No need for hard state (1)

klapaucjusz (1167407) | more than 6 years ago | (#22969406)

There exist quite a few techniques that allow for approximately fair traffic engineering without the need for ``hard'' state in the network core.

While approximately fair is not quite as good as fair, avoiding hard state makes for a cheaper and more reliable network, which allows you to more easily over-provision your link capacity. Unless, of course, you're in the business of selling routers that implement hard state...

no need for a better small hose (1, Interesting)

Anonymous Coward | more than 6 years ago | (#22969424)

when a big hose will do the job better

just build the capacity, then they will come

research on root causes of congestion? (2, Insightful)

False Data (153793) | more than 6 years ago | (#22969508)

I wonder if routing algorithms themselves aren't contributing to the problem.

BGP and the intra-domain routing protocols assume there is at most one correct route from a given source address to a given destination address. That assumption could give rise to unnecessary congestion. For example, suppose the source wants to use bandwidth of 100 units and the destination is capable of keeping up. But between them there are two routers, in parallel, each of which can supply only 50 units. If there's exactly one path, source and dest can't talk any faster than 50 units because everything has to go through one of the two routers. (There are mechanisms to share bandwidth in some situations, like the simple parallel-routers one I described, but they don't work for arbitrarily complex routing topologies across multiple BGP domains.)

It's possible, though, to imagine a network that routes in such a way that data could use both routers. For instance, in circuit switched networks the preestablished path tends to hang around even as the current "best" route changes, so in the earlier example two 50 unit connections between source and dest might end up being spread across both routers.

Rather than taking the congestion as a given, and figuring out work-arounds, I wonder if someone's done some research into why it exists and whether it's due to hot spots forming in the traffic flow.

Re:research on root causes of congestion? (1)

klapaucjusz (1167407) | more than 6 years ago | (#22969650)

Rather than taking the congestion as a given [...] I wonder if someone's done some research into why it exists and whether it's due to hot spots forming in the traffic flow.

Modern networking protocols are designed to use as much throughput as is available. Think of unused capacity as wasted capacity.

Let me make this clear with an example. Suppose that there's 10Mb/s free, and you're transferring a large file. Then you want your file transfer to go at a rate as close to the available 10Mb/s as possible. If it goes at 9.9Mb/s, then all is fine; but if it goes at 1Mb/s, then you're wasting 9Mb/s.

So while congestion is to be avoided, the network is designed to always stay at the very edge of congestion; a slight instability, and some routers will get congested.

Re:research on root causes of congestion? (1)

klapaucjusz (1167407) | more than 6 years ago | (#22969664)

It's possible, though, to imagine a network that routes in such a way that data could use both [parallel routes to a given destination].

It's been done, and it's called Equal Cost Multi-Path [wikipedia.org] .

Re:research on root causes of congestion? (0)

Anonymous Coward | more than 6 years ago | (#22969764)

"network coding theory" (just google it...) is a way to get the effect you're probably imagining. If the whole internet was a network-coding network, then incredible bandwidth could be had.

routing via multiple paths (1)

j1m+5n0w (749199) | more than 6 years ago | (#22970052)

One reason that might be problematic in practice is that, IIRC, TCP doesn't like getting packets out of order, and tends to respond to out-of-order packets much as it does to dropped packets. If you have packets taking multiple paths, they are very likely to arrive out of order.

One could mitigate this, I suppose, by making sure all packets that are part of the same flow take the same path.
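That mitigation is exactly what ECMP implementations do: pick the path by hashing the flow's 5-tuple, so every packet of one flow follows the same path while different flows spread across the parallel paths. A sketch (real routers hash header fields in hardware; the path names here are invented):

```python
import hashlib

def pick_path(paths, src, dst, proto, sport, dport):
    """Deterministically map a 5-tuple onto one of several
    equal-cost paths, so one flow never gets reordered."""
    key = f"{src}|{dst}|{proto}|{sport}|{dport}".encode()
    digest = hashlib.sha256(key).digest()
    return paths[int.from_bytes(digest[:4], "big") % len(paths)]

paths = ["via-router-A", "via-router-B"]
# Same flow -> same path every time, so TCP sees in-order delivery,
# while other flows may hash onto the other router.
assert pick_path(paths, "10.0.0.1", "10.0.0.2", 6, 1234, 80) == \
       pick_path(paths, "10.0.0.1", "10.0.0.2", 6, 1234, 80)
```

The trade-off is that a single elephant flow can't use more than one path's capacity, which is why per-flow ECMP balances aggregates well but not individual transfers.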

No, TCP does not work by losing packets (1)

Animats (122034) | more than 6 years ago | (#22969556)

TCP is mostly controlled by round trip time measurement and window size. Response to packet loss is a backup mechanism. If packet loss were the primary control mechanism, TCP would never work.

It's much better to throttle back before packet loss occurs, since any lost packet has to be resent and uses up resources from the sender to the drop point. Since the main bandwidth bottleneck is at the last mile to the consumer, the drop point tends to be close to the destination.

Don't trust the "Clean Slate Internet [stanford.edu] " project too much. It started as a telco-supported scheme to move more functionality to the telcos so they can add charges for specific services, like cell phone providers do.

Re:No, TCP does not work by losing packets (1)

klapaucjusz (1167407) | more than 6 years ago | (#22969678)

TCP is mostly controlled by round trip time measurement and window size. Response to packet loss is a backup mechanism.

That's not true. The currently deployed variants of TCP -- Reno, NewReno and SACK-TCP -- only use packet loss as a measure of congestion. ECN-enhanced variants only use packet loss and ECN marking.

There do exist experimental variants of TCP that use delay (TCP-Vegas comes to mind), but they are not widely deployed at this time.

Re:No, TCP does not work by losing packets (1)

m.dillon (147925) | more than 6 years ago | (#22969872)

Yah. TCP-Vegas is what I modeled net.inet.tcp.inflight_enable on FreeBSD/DragonFly after. I didn't quite agree with Vegas's algorithm so I implemented a somewhat different one, but the result is basically the same.

These protocols work on the transmit side (trying to do the same thing on the receive side is a lot harder). They use round-trip times to figure out whether excessive packet backlog is occurring on intermediate routers, and reduce the TCP window size accordingly in an attempt to reduce that backlog.

The feature is very good at dealing with fixed bandwidth constrictions... for example if someone is on a slow-speed line (on either side). It is not really designed to handle congestion in the middle of the network. Personally speaking, every server I've ever run turns that feature on. It can be tuned to not be overly conservative and still cut considerably the packet backlog routers have to deal with; e.g., if a router's backlog due to your servers is, say, 40 packets, the feature will cut it down to 10 packets.

-Matt
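The RTT-based idea Matt describes (and that TCP Vegas formalizes) can be sketched as follows: estimate how many packets the connection has sitting in router queues from the gap between the base RTT and the current RTT, and shrink the window when that estimate grows. The alpha/beta thresholds are Vegas-style placeholders, not the actual FreeBSD inflight algorithm:

```python
def vegas_adjust(cwnd, base_rtt, cur_rtt, alpha=2, beta=4):
    """Return the new congestion window, in packets.

    expected = cwnd / base_rtt  (rate if router queues were empty)
    actual   = cwnd / cur_rtt   (rate actually being achieved)
    (expected - actual) * base_rtt estimates packets queued in routers.
    """
    expected = cwnd / base_rtt
    actual = cwnd / cur_rtt
    backlog = (expected - actual) * base_rtt
    if backlog < alpha:
        return cwnd + 1   # queues look empty: probe for more bandwidth
    if backlog > beta:
        return cwnd - 1   # we're building a router backlog: back off
    return cwnd           # in the sweet spot: hold steady
```

Unlike loss-based TCP, this throttles back before any packet is dropped, which is why it reduces the backlog routers have to absorb.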

depends a lot on business models (1)

drDugan (219551) | more than 6 years ago | (#22969582)

Like all technologies, the effects depend on how the technology is used. While there are issues of unfairness with random drops, one can *imagine* ways that (from TFA), "What is really necessary is to detect just the flows that need to slow down" - however, it would seem just as easily networks could "detect just the flows that need to slow down" based on who is paying more for that flow (the sender or the receiver) - leading to even more "unfairness" (read: non-neutral network a la net neutrality) than we currently have.

Depends where the packet is in the flow, too. (3, Interesting)

m.dillon (147925) | more than 6 years ago | (#22969806)

What I've noticed the most, particularly since I'm running about a dozen machines over a DSL line (just now switched from the T1 I had for many years), is that packet management depends heavily on how close the packet is to the end points. Packet management also very heavily depends on whether the size of your pipe near the end point is large relative to available cross country bandwidth, or small (like a DSL uplink).

When the packet is close to an end point it is possible to use far more sophisticated queueing algorithms to make the flow do precisely what you want it to do. It's important for me because my outgoing bandwidth is pegged 24x7. Packet loss is not acceptable that close to the end point, so I don't use RED or any early-drop mechanism (and frankly they don't work that close to the end point anyway... they do not prevent bulk traffic from seriously interfering with interactive traffic), and it is equally unacceptable to allow a hundred packets to build up on the router where the pipe constricts down to T1/DSL speeds (which completely destroys interactive responsiveness).

For my egress point I've found that running a fair share scheduler works wonderfully. My little cisco had that feature and it works particularly well in newer IOS's. With the DSL line I couldn't get things working smoothly with PF/ALTQ until I sat down and wrote an ALTQ module to implement the same sort of thing.

Fair share scheduling basically associates the packets with 'connections' (in this case using PF's state table) and is thus able to identify those TCP connections with large backlogs and act on them appropriately. Being near the end point I don't have to drop any of the packets, but neither do I have to push out 50 TCP packets for a single connection and starve everything else that is going on. Fair share scheduling on its own isn't perfect, but when combined with PF/ALTQ and some prioritization rules to assign minimum bandwidths the result is quite good.

Another feature that couples very nicely with queueing in the egress router is turning on (for FreeBSD or DragonFly) the net.inet.tcp.inflight_enable sysctl. This feature is designed to specifically reduce packet backlogs in routers (particularly at any nearby bandwidth constriction point). While it can result in some unfair bandwidth allocation it can also be tuned to not be quite so conservative and simply give the egress router a lot more runway in its packet queues to better manage multiple flows.

The combination of the two is astoundingly good. Routers do much better when their packet queues aren't overstressed in the first place, only dropping packets in truly exceptional situations and not as a matter of course.

The real problem lies in what to do at the CENTER of the network, when your TCP packet has gone over 5 hops and has another 5 to go. Has anyone tried tracking the hundreds of thousands (or more) of active streams that run through those routers? RED seems to be the only real solution at that point, but I really think dropping packets in general is something to be avoided at all costs, and I keep hoping something better will be developed for the center of the network.

-Matt

PLEASE PLEASE PLEASE STOP POSTING HIS RANTS (1)

jsailor (255868) | more than 6 years ago | (#22970268)

Roberts has been harping on the same thing since 2000, probably earlier. Guess what: he has built several failed companies around the concept. It seems that Slashdot forgets this every few months and posts another one of his rants.