New Router Manages Flows, Not Packets

ScuttleMonkey posted about 5 years ago | from the let-the-injections-begin dept.

An anonymous reader writes "A new router, designed by one of the creators of ARPANET, manages flows of packets instead of only managing individual packets. The router recognizes packets that are following the first and sends them along faster than if it had to route them as individuals. When overloaded, the router can make better choices of which packets to drop. 'Indeed, during most of my career as a network engineer, I never guessed that the queuing and discarding of packets in routers would create serious problems. More recently, though, as my Anagran colleagues and I scrutinized routers during peak workloads, we spotted two serious problems. First, routers discard packets somewhat randomly, causing some transmissions to stall. Second, the packets that are queued because of momentary overloads experience substantial and nonuniform delays, significantly reducing throughput (TCP throughput is inversely proportional to delay). These two effects hinder traffic for all applications, and some transmissions can take 10 times as long as others to complete.'"

122 comments

Well duh (5, Funny)

Em Emalb (452530) | about 5 years ago | (#28653845)

Damn right, they manage flows. It keeps the tubes from clogging.

Duuuurrrrrr.

Re:Well duh (2, Funny)

nine-times (778537) | about 5 years ago | (#28654079)

I don't know if I trust this guy with my interweb tubes, though. Did you notice the mess of cables [ieee.org] behind him?

If we can't trust him to keep his wiring closet organized, how can we trust him to clean the tubes?

Re:Well duh (0)

Anonymous Coward | about 5 years ago | (#28654353)

On the other hand, if he had time to organise his wiring closet he wouldn't have time to get any of the real work done!

on the flip side it also can do the nasty (0)

Anonymous Coward | about 5 years ago | (#28654889)

it can do the exact reverse and stop flows it recognizes, and it sounds like a type of DRM they will slip by you in the name of "think of the children".

P.S. Interesting news is that the Pirate Party of Canada now states in chat and on the site that they are not supporting non-commercial P2P / fair use / file sharing. Wonder how that's gonna work fer em?

Been tried, and they saw it was *not* good (5, Interesting)

OeLeWaPpErKe (412765) | about 5 years ago | (#28654893)

All older Cisco equipment worked this way. This was nice, and it worked very well for the first router(s) closest to the end customer. However, for routers meant to route for large numbers of users this turned out to be a disaster.

Just to give you an idea, this was EOS (end of support) before I turned 10 [cisco.com] (look for "netflow routing")

There are a number of very problematic properties:
-> trivial to DDoS (just generate too many flows to fit in memory, or generally increase the per-packet lookup time)
-> not P2P compatible (P2P will cause flow-based routers to perform at a snail's pace, because they open so many connections)
-> possible triple penalty for every new flow (first a failed flow lookup, followed by a failed route lookup, going to the default route)
-> very hard to have a good QoS policy this way. A pipe has a fixed bandwidth, and you almost always oversubscribe. Therefore useful policies are very hard to formulate per-flow.
-> if you divide bandwidth per-flow over TCP then a large overload will "synchronize" everything. So let's explain what happens if 3 users are happily surfing about and another user starts BitTorrent. Bandwidth gets divided over all the flows, and *every* connection closes, due to timeouts.

There are a number of advantages:
-> very extensive QoS is trivial to implement
-> stateful firewalling is almost laughably easy to implement, and very advanced firewalling can be done (e.g. easy to block ssh but not https, just filter on the string "openssh" anywhere in the connection. Added bonus: hilarity ensues if you email someone the text "openssh" and his POP3 connection keeps getting closed)

Here's the deal: a router doing per-packet switching has to look up a table of about 300,000 entries (excepting MPLS P routers). My PC is, at this moment, opening 331 flows to various destinations, each sending an average of 5 packets (probably a lot of DNS requests are dragging this number down), but you have to keep in mind that a flow-based router has to look up first in the "flow table" AND in the route table (which still has 300,000 entries).

As soon as a flow-based router services more than 1000 machines (in either direction, i.e. 100 clients communicating with 900 internet hosts = 1000 machines serviced), its performance will fail to keep up with a packet-based router. That's not a lot. If a single client torrents or uses P2P you will hit this limit easily, resulting in slower performance. At 2000 machines, packet-based switching is twice as efficient.

So: flow-based routing ... for your wireless access point ... perhaps. For anything more serious than that? No way in hell.
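
To make the double-lookup cost concrete, here is a rough Python sketch of a flow cache sitting in front of a route table. Everything in it (the flow key, the two-entry "FIB", the toy prefix match) is a simplified stand-in for illustration, not any vendor's implementation:

    flow_table = {}            # (src_ip, dst_ip, proto, sport, dport) -> next hop
    route_table = {            # stand-in for a ~300,000-entry FIB
        "203.0.113.0/24": "if0",
        "0.0.0.0/0": "if1",    # default route
    }

    def longest_prefix_match(dst_ip):
        # Toy stand-in for a real longest-prefix match over the full table.
        return route_table.get(dst_ip + "/32", route_table["0.0.0.0/0"])

    def forward(pkt):
        key = (pkt["src"], pkt["dst"], pkt["proto"], pkt["sport"], pkt["dport"])
        hop = flow_table.get(key)                    # lookup #1: the flow cache
        if hop is None:                              # every new flow misses...
            hop = longest_prefix_match(pkt["dst"])   # ...lookup #2: the route table
            flow_table[key] = hop                    # state an attacker can exhaust
        return hop

Every new flow pays both lookups, and the flow_table itself is exactly the per-flow state that a flood of bogus flows can blow out of memory.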

Re:Been tried, and they saw it was *not* good (1)

TapeCutter (624760) | about 5 years ago | (#28657559)

Thanks for the excellent debunking. I have very little to do with the mechanics of networks but I do remember "netflow routing"; IIRC there was quite a debate back in the '90s about packet vs. flow management, and flow lost. From a naive point of view, I could never see how adding an extra lookup table would make things more efficient.

Re:Well duh (0)

Anonymous Coward | about 5 years ago | (#28656921)

Who cares... Come get me when they make one that doesn't drop sync with my cable modem.

Net neutrality anyone? (1, Interesting)

girlintraining (1395911) | about 5 years ago | (#28653897)

So we have a router that does stateful packet inspection and prioritizes traffic based on internal rules. Aren't we supposed to be against this? Because it sounds a lot to me like encrypted packets, UDP, and peer-to-peer, three things that certain well-funded groups have been trying to kill or restrict for awhile, would seem to be the worst-affected here.

Re:Net neutrality anyone? (0)

Anonymous Coward | about 5 years ago | (#28654203)

It might be useful for a corporate router where network neutrality is secondary to prioritization of network traffic (say the top brass does a lot of online meetings) so the A/V streaming would get priority over someone's large downloads in Finance.

Outside of that, it would create an arms race if used by ISPs. If a router throttled encrypted packets, then people would tunnel the connections via bogus movie streams or DNS queries. The only way ISPs would be able to win is create a whitelist of sites that have a higher priority of traffic than everyone else, and that brings up another legal can of worms.

Yep (0)

Anonymous Coward | about 5 years ago | (#28654213)

If you're an ISP, you route IP packets. TCP is a payload like any other.

Re:Net neutrality anyone? (4, Informative)

0racle (667029) | about 5 years ago | (#28654235)

What you describe (packet inspection and prioritizing traffic based on internal rules) is QoS. No one in their right mind is against that. The net neutrality debate is about ISPs throttling some traffic in order to extort money from both their customers and content providers that otherwise have no other relationship with the ISP. The debate is that ISPs should just be the tubes the content is delivered over, not gatekeepers of content.

That an ISP may prioritize services like VOIP over http or bittorrent is not what net neutrality is about and quite frankly is something that a good network engineer would look into and would probably implement.

Re:Net neutrality anyone? (1, Insightful)

girlintraining (1395911) | about 5 years ago | (#28654365)

That an ISP may prioritize services like VOIP over http or bittorrent is not what net neutrality is about and quite frankly is something that a good network engineer would look into and would probably implement.

QoS isn't a bad thing, but the user should be in control of it, not the ISP. Who's to say that an encrypted packet doesn't need a low-latency link more than the unencrypted VoIP connection? The ISP doesn't know -- it has to guess based on protocol data that may or may not be accurate. But that's a lot more work to implement and so most ISPs won't do it...

Re:Net neutrality anyone? (1)

Em Emalb (452530) | about 5 years ago | (#28654817)

The problem is, there are WAY too many people out there that think QOS stands for Queen of the Stone-Age and not Quality of Service.

The ISP is in a no win situation, IMO. On the one hand, they have potentially hundreds of thousands of users using VOIP services who don't know the first thing about QOS, but do know the effects of jitter or packet loss, so they complain.

What's the ISP supposed to say? "Turn on QOS?" Not that simple. On the other hand, if they do prioritize packets, then they get people who don't want them to. So it's a damned if you do, damned if you don't situation.

Re:Net neutrality anyone? (0)

Anonymous Coward | about 5 years ago | (#28654971)

What's the ISP supposed to say?

Without network neutrality, their support will be trained to say "Gee, I have no idea whatsoever why [insert voip company here] sucks so hard. Why don't you ditch their lousy service and upgrade to ours?"

Re:Net neutrality anyone? (1)

shentino (1139071) | about 5 years ago | (#28655917)

For starters, hardware that handles VoIP should be taking advantage of the TOS bits, and setting their packets for minimum latency. Secondly, QoS implementations should be taking advantage of this "opt-in priority request." This is exactly the sort of situation that TOS, traffic class, and so on were designed for.
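
For example, marking the packets is a one-liner for the application. Here is a minimal Python sketch of a VoIP-style sender setting the TOS byte to DSCP EF; the DSCP value and the address/port are just illustrative, and of course the network is free to ignore or re-mark it:

    import socket

    EF_DSCP = 46                 # "Expedited Forwarding", commonly used for voice
    tos_byte = EF_DSCP << 2      # DSCP sits in the top six bits of the old TOS field

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)
    sock.sendto(b"rtp payload", ("192.0.2.10", 5004))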

Re:Net neutrality anyone? (4, Funny)

babyrat (314371) | about 5 years ago | (#28655033)

QoS isn't a bad thing, but the user should be in control of it

Exactly! That way MY packets (not some of them, ALL OF THEM) need to be prioritized.

Kind of reminds me of the good old days when I had access to print queue priorities. No-one ever understood why my printouts always came out first...I maintained I was just lucky.

Re:Net neutrality anyone? (1)

hairyfeet (841228) | about 5 years ago | (#28655281)

And this is why we can't have nice things.

Re:Net neutrality anyone? (1)

raddan (519638) | about 5 years ago | (#28655647)

Well, what the user (where 'user' is more likely the application) should be in control over is the QoS parameters. Like, "I want low jitter, occasional packet loss is OK". Leave the mechanism up to the intermediaries. If someone asks for "all of the above", treat it as invalid input and ignore it, or even better, treat their traffic like shit, since they don't know how to play nice. "Priority" is only one QoS parameter, and one that the end-user should have no control over.

Re:Net neutrality anyone? (1)

pyite (140350) | about 5 years ago | (#28655453)

QoS isn't a bad thing, but the user should be in control of it, not the ISP;

The problem is that I'd be afraid of other people prioritizing all traffic rather than just some. So now I'm gonna prioritize all of my traffic. So now everything is in a gold queue and nothing gets prioritized. It is somewhat of a prisoner's dilemma. Contrast this with a network with end-to-end control where you can trust DSCP or COS values along the way. A possible solution might be to allow end users to mark their packets, but make the queue they go into pretty small, so they can only prioritize a bit of voice or video and not much more.

Re:Net neutrality anyone? (1)

Apocros (6119) | about 5 years ago | (#28655903)

I was thinking this same thing... Say 50% of packets have to go into a normal/bulk queue, 35% into a medium queue, and the rest into high. Adjust the thresholds as you like. For users that don't know better, default to normal, and maybe promote to higher levels based on the destination port.

Then, if the rolling average (over a few days or so) for packets marked "high" exceeds the threshold, all subsequent "high" packets get demoted until the average gets back to the required level. If high and medium queues both exceed the threshold, demote everything one more level.

This way, anyone that marked all their packets as high would quickly find all their connections operating at the lowest QoS level. That would either motivate them to fix their settings, or they'd just have to live with it and not be penalizing anyone else by hogging the available bandwidth.
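Roughly like this, in Python; the window length and thresholds are made-up numbers, and the three classes are collapsed to two just to show the demotion mechanism:

    from collections import deque

    HIGH_SHARE = 0.15                    # at most 15% of recent packets may ride "high"
    WINDOW = 10_000                      # rolling window, in packets

    recent_high = deque(maxlen=WINDOW)   # True for packets the user marked "high"

    def classify(marked_high):
        recent_high.append(marked_high)
        share = sum(recent_high) / len(recent_high)
        if marked_high and share > HIGH_SHARE:
            return "normal"              # demote: over the high-priority budget
        return "high" if marked_high else "normal"

A user who marks everything "high" blows through the budget immediately and ends up with all traffic demoted, which is exactly the incentive described above.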

Re:Net neutrality anyone? (0)

Anonymous Coward | about 5 years ago | (#28656619)

Ideally the end user would be in control of it, up to a point.

E.g. 10Mbps best effort Internet connection, with an allowance for 500kbps of user-prioritized traffic.

Re:Net neutrality anyone? (1)

vertinox (846076) | about 5 years ago | (#28654273)

So we have a router that does stateful packet inspection and prioritizes traffic based on internal rules. Aren't we supposed to be against this?

I dunno. If the router is designed to look at packet flow rather than the contents of said packets or its source and destination, then you can still have net neutrality.

Re:Net neutrality anyone? (4, Insightful)

B'Trey (111263) | about 5 years ago | (#28654291)

Exactly how is this different from what we currently have?

Consider a conventional router receiving two packets that are part of the same video. The router looks at the first packet's destination address and consults a routing table. It then holds the packet in a queue until it can be dispatched. When the router receives the second packet, it repeats those same steps, not "remembering" that it has just processed an earlier piece of the same video.

Uh, no. This is called process switching. It hasn't been used in anything but the most low-end routers for quite some time. CEF (Cisco Express Forwarding) and MPLS [wikipedia.org] (Multiprotocol Label Switching) use flow control. They perform a lookup on the first packet, cache the information in a forwarding table, and all further packets which are part of the same flow are switched, not routed, at effectively wire speeds. MPLS adds a label to the packet which identifies the flow, so it isn't even necessary to check the packet for the five components which define the flow. Just look at the label and send it on its way.

QoS (Quality of Service) has multiple modes of operation and multiple queue types which address the issue of which packets to drop. It may or may not include deep packet inspection to attempt to determine the type of packet.

Perhaps they've come up with some new innovations that aren't obvious in the write-up because it's written at a relatively high level, but there's nothing here that isn't already implemented and that I don't already work with on a daily basis in production networks.

Re:Net neutrality anyone? (0)

Anonymous Coward | about 5 years ago | (#28654811)

In principle I agree completely with the parent. However, it looks like his company is working on provider edge equipment, which doesn't get the benefit of MPLS since it's at the edge of the MPLS cloud. I think this is more analogous to the Cisco Express Forwarding mentioned above since that can operate on IP packets in general. But really, it's just caching, a solution any programmer worth his paycheck would think of immediately when faced with this sort of performance issue.

I know I'm oversimplifying the issue, but this isn't exactly something that requires the brains of the architect of the internet to come up with.

Re:Net neutrality anyone? (1)

BitZtream (692029) | about 5 years ago | (#28655039)

My cable modem connects to a Cisco 7200, which most certainly supports CEF and has for at least 10 years, which was when I first started playing with 7200s.

How much closer to the edge do you want?

It's been a few years since I was a router flunky so if I get the exact model wrong don't castrate me, but as I recall the Cisco 12k came out screaming about how it did this for many Gb/s of data without even breathing heavily. I realize that's not the highest of high end by any means, and that model is years old, but this isn't new in any way. It's hard for me to think that the quality and performance of routers has declined since I stopped doing it.

Sounds like the author just hasn't actually used any real routing equipment in years and thinks he's inventing something new.

Re:Net neutrality anyone? (0)

Anonymous Coward | about 5 years ago | (#28655273)

CEF (Cisco Express Forwarding) and MPLS [wikipedia.org] (Multiprotocol Label Switching) use flow control. They perform a lookup on the first packet, cache the information in a forwarding table, and all further packets which are part of the same flow are switched, not routed, at effectively wire speeds.

Small correction regarding CEF and MPLS :

The CEF table is built in advance of any traffic flows, based on the contents of the IP routing table.

"Fast Switching" is the switching method that builds an on-demand forwarding table or cache entry for each flow, performing a lookup on the first packet.

The MPLS forwarding table (LFIB) is also pre-constructed rather than built on demand. It uses 1) local routing table entries and 2) label information advertised by downstream neighbors.

Re:Net neutrality anyone? (1)

bogd (912084) | about 5 years ago | (#28655419)

CEF (Cisco Express Forwarding) and MPLS [wikipedia.org] (Multiprotocol Label Switching) use flow control. They perform a lookup on the first packet, cache the information in a forwarding table, and all further packets which are part of the same flow are switched, not routed, at effectively wire speeds.

It's more than that. The older technologies ("fast switching" in the Cisco world) used to do this - route the first packet, then switch the other packets in the flow. However, CEF goes one step further and allows all the packets to be switched by the hardware (not even the first packet in the flow hits the router processor). Which means that what the author seems to be suggesting would actually mean moving backwards.
Either there is more to the router than the article says, or the author hasn't been keeping track of developments in this field...

Re:Net neutrality anyone? (1)

AdamBv1 (1382569) | about 5 years ago | (#28654311)

I believe he's talking about managing flows to keep streams from getting interrupted: if it sees data transfers that have been going on between two points consistently, it's going to be less likely to drop packets from that stream than from some other random ping or small packet. Basically, the idea is to keep things that are streaming working, instead of dropping a packet and stalling them or sending packets out of order due to queuing. The benefit is that downloads and streams of data are more likely to stay working, while one-off communications and handshakes would be more likely to get dropped.

Re:Net neutrality anyone? (3, Insightful)

jd (1658) | about 5 years ago | (#28654357)

No, it doesn't break net neutrality in and of itself, any more than a traffic light or a roundabout breaks road neutrality. The idea of routing flows, rather than packets, permits more packets to get through for the same bandwidth.

So long as all flows are treated fairly, this will actually BOOST network neutrality as network companies will have less justification to throttle back protocols which take disproportionate bandwidth - as they will no longer do so. Users will also have less cause to complain, as the effective bandwidth will move closer to the theoretical bandwidth.

The only concern is if corporations and ISPs use this sort of router to discriminate against flows (ie: ensure unfair usage) rather than to improve the quality of the service (ie: ensure fair usage).

The belief by ISPs that you cannot have high throughput unless you block legitimate users is nothing more than FUD. It has no basis in reality. It is possible, by moving away from best-effort and towards fair-effort, to get higher throughput for everyone.

Congested networks can be modeled as turbulent flow in a river. Blocking streams is like damming up some of the tributary streams. It causes a lot of grief and isn't really that effective.

On the other hand, smoothing out the turbulence will improve the throughput without having to dam up anything. QoS services are intended as smoothing mechanisms, not dams. For the most part, at least.

Most "net neutrality" advocates would be advised to focus only on the efforts to build gigantic dams, rather than to be unkind or unfair on those merely smoothing the way, with no bias or discrimination intended.

Re:Net neutrality anyone? (0)

Anonymous Coward | about 5 years ago | (#28656213)

You don't need DPI to do that. You can use IP destination, source, and destination TCP port. It's called stateful inspection.

Re:Net neutrality anyone? (1)

mysidia (191772) | about 5 years ago | (#28656383)

Flow-based QoS, in the form of Flow-based WRED [cisco.com] is not a new concept.

Furthermore, Flow-based routing is not a new concept, it's a very old one.

Perhaps what has happened is that general-purpose computing hardware, CPUs, and memory have gotten a lot cheaper, and a lot faster and higher-capacity, than in years past.

It may now be possible to build routers that have the capacity to do it. Flow-based routing is extremely expensive, especially in terms of the CPU and memory needed for bookkeeping all those flows.

Think about it: every single open TCP connection is going to be using memory slots in a flow-based router. If more distinct flows come in, start, or continue within recent history than the available memory can record, the device will be in trouble and have to reboot, or fall back to some other routing strategy that it wasn't optimized for.

I would fully expect the core router of a sufficiently large ISP to have billions if not trillions of flows in memory under normal load.

Keep in mind a DNS request is a flow, even if it's UDP. Oh yeah, and there are some UDP-based protocols that involve data exchange at wide intervals.

A client may transmit a UDP message and expect a response sequence 5 minutes later.

This does not solve the problem (4, Insightful)

raddan (519638) | about 5 years ago | (#28653907)

It just makes the packet switching faster. But really, we're talking about the same idea here: datagram networks. Congestion avoidance has been known to be a difficult problem in datagram networks for a long time [wikipedia.org] .

TCP's congestion control algorithm, which causes congestion and then backs off, is the real culprit here, and this router does nothing to fix that. The way to fix that is to dump TCP's congestion control and replace it with real flow control in the network layer. That requires lots of memory on intermediaries, because you need all the hosts along the data path to cooperate with each other to communicate about flow control, and that means keeping state. At which point, we're not talking about datagram networks anymore. And that means dumping the other desirable thing about datagram networks: fault tolerance. Packets are path-independent.

Anyway, getting back to TCP's congestion control: his article even says that "During congestion, it adjusts each flow rate at its input instead." Wait, what? "If an incoming flow has a rate deemed too high, the equipment discards a single packet to signal the transmission to slow down." That's how it works right now! The only difference that I can see is that he's being a little smarter about which packets to discard, unlike RED, which is what he's comparing this to. If so, that's an improvement, but it doesn't solve the problem. It will still take a while for TCP to notice the problem, because the host has to wait for a missed ACK. TCP can only "see" the other host -- it does not know (or care) about flow control along the path. Solving the problem requires flow control along that path, i.e., in the network layer, but IP lacks such a mechanism.

Re:This does not solve the problem (1, Informative)

Anonymous Coward | about 5 years ago | (#28654179)

Routing doesn't even concern itself with TCP. TCP issues such as windowing and ACKs are a host-to-host concern.

There are some advanced QoS features on routers that will send explicit congestion notifications to hosts if their TCP flow is in a class that's exceeding its bandwidth. The end hosts are supposed to back off their transmission rate without all the missed ACKs, window shortening, and retransmissions that would otherwise be involved if traffic was dropped. That's about as gracefully as a router can deal with congestion for a TCP flow. Of course the end hosts need to support ECN for this to work.

The thing is, though, QoS policies are the antithesis of the net neutral / traditional best effort internet.

Re:This does not solve the problem (1)

Cyberax (705495) | about 5 years ago | (#28654207)

Flow control can be greatly improved by adding NACKs to the protocol. I.e. a router will (try to) send a NACK packet after it drops your packet.

This NACK might get lost, sure, so a timeout mechanism is still required. But in general NACKs give much better flow control. Another variant is heartbeat ACKs (used in SCTP); they allow a range of other optimizations.

It's possible to do better than TCP. Though of course, circuit-switched networks are still superior in flow control.

Re:This does not solve the problem (1)

raddan (519638) | about 5 years ago | (#28655735)

Of course, when you have the router inserting NACKs into the transport stream, you're violating your layers (although it could be argued that they're already violated because congestion avoidance got put into TCP). You're fixing lame congestion control in TCP by breaking IP. Wouldn't it be better to remove congestion control from TCP and put real flow control in IP?

I think the real interesting thing to see happen would be LLC [wikipedia.org] , because then you could run datagrams and virtual circuits on the same wire. But if I understand correctly Ethernet is not capable of doing this.

Re:This does not solve the problem (1)

Cyberax (705495) | about 5 years ago | (#28656019)

NACKs in the IP layer won't be a violation of the layered model if we move NACKs down to the IP level (as an optional feature, of course) :)

In fact, we already have half-assed attempts like ICMP Source Quench. They are not sufficient, though.

Virtual circuits (I mean real circuits with guaranteed bandwidth) are cool, but they are inefficient - you have to reserve bandwidth even if the circuit is not being used right now. That makes sense for telecom apps, but not for HTTP and other 'bursty' protocols.

Re:This does not solve the problem (1, Informative)

Anonymous Coward | about 5 years ago | (#28654217)

Doesn't WRED (Weighted RED) already do this?

Re:This does not solve the problem (2, Interesting)

Wesley Felter (138342) | about 5 years ago | (#28655223)

Anagran has a paper on just this topic; they claim to do better than WRED because they track the rate of every TCP connection.

http://www.packet.cc/files/IFD2c.pdf [packet.cc]

Re:This does not solve the problem (3, Interesting)

RichiH (749257) | about 5 years ago | (#28654299)

> TCP's congestion control algorithm, which causes congestion and then backs off is the real culprit here

In a dumb network with intelligence on the edges, you can:

1) cause congestion and then back off (TCP)
2) hammer away at whatever rate you think you need (UDP)
3) use a pre-set limit (which might be too high as well so no one does that on public networks)

Stateful packet switching is literally impossible, fixed-path routing is not desirable for the reason you stated above, and I would not want anyone to inspect my traffic _by design_ anyway.

TCP may not be perfect, but I fail to see an alternative.

Re:This does not solve the problem (3, Interesting)

John.P.Jones (601028) | about 5 years ago | (#28654335)

TCP's congestion control backs off exponentially because it has to. There is a stability property: if the network is undergoing increased congestion (this is how TCP learns the available throughput and utilizes it) and the senders do not back off exponentially, then their backing off will not be fast enough to relieve congestion and therefore stabilize the system. If this router is selectively stalling individual flows, I do not believe that will be fast enough to deal with growing congestion from many greedy clients.

Basically, eventually the buffer space of the router will become exhausted and it will be forced to drop packets non-selectively, hence initiating TCP backoffs from randomly selected flows, resulting in the current behavior. So, of course, in that gray area between the first dropped flow and when we need to revert back to normal behavior, we may see improved network performance for some flows, but they will just take advantage of this by opening up their TCP windows more until the inevitable collapse comes.

The end result will be delaying backing off many TCP flows (which will speed them up, creating more congestion) at the expense of completely trashing a few flows (which will stall anyway due to packet reordering), and so the resulting system will be less stable.

Re:This does not solve the problem (2, Interesting)

raddan (519638) | about 5 years ago | (#28655059)

TCP's congestion control backs off exponentially because it has to.

Sure, but it's looking at the problem from the wrong end. IP has no feedback mechanism to allow for flow control (i.e., to prevent the sender from overrunning the receiver), so TCP has congestion control instead to stop it from happening when it does. Since TCP has no way of knowing what the available bandwidth is, it goes looking for it by causing the problem and then backing off. And since packet-switched traffic is "bursty", it resumes increasing the rate until it hits the ceiling again (because maybe you just so happened to have an abnormally low ceiling when you checked before), and backs off, ad infinitum.

This is analogous to saying "I have this problem where cars keep crashing into my house!" and so, designing your house so it can dodge cars.
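
Here's a toy sketch of that probe-until-loss sawtooth in Python - standard additive-increase/multiplicative-decrease, nothing to do with this particular router, and the capacity and starting window are arbitrary made-up numbers:

    capacity = 100      # path capacity, in segments per RTT (arbitrary)
    cwnd = 1.0

    for rtt in range(40):
        if cwnd > capacity:     # a drop is eventually noticed via a missed ACK
            cwnd /= 2           # multiplicative decrease: back off
        else:
            cwnd += 1           # additive increase: keep probing for bandwidth
        print(f"RTT {rtt:2d}: cwnd = {cwnd:5.1f}")

The window ramps up until it exceeds capacity, halves, and ramps up again, forever: congestion is created on purpose because it's the only signal the sender gets.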

Re:This does not solve the problem (0)

Anonymous Coward | about 5 years ago | (#28654377)

See, now this is the sort of post that keeps me reading /. Thanks!

Re:This does not solve the problem (3, Informative)

B'Trey (111263) | about 5 years ago | (#28654387)

This has already been addressed in the IP specs: ECN [wikipedia.org]

One of the big problems with getting ECN adopted has been that Windows hasn't supported it. Vista does, and I haven't seen anything specific but I'm reasonably certain that Windows 7 does as well. Mac OS X 10.5 supports it, and Linux has supported it for quite a while. It's usually disabled by default, so that may be an issue in getting it widely supported. But the issue isn't that we don't know how to do it better. It's just overcoming the inertia.

Re:This does not solve the problem (1)

religious freak (1005821) | about 5 years ago | (#28654437)

I defer to your knowledge relative to mine, but I do wonder why we work on switching pieces of transport protocols around and changing the tiny things when we could just move to something entirely different. I recall reading that IPv6 has a host of new mechanisms built directly into the protocol which address these types of concerns. IPv6 is far more than just NAT avoidance and long IP addresses - with its built-in packet priority values and other bells and whistles, I think IPv6 could help solve this type of problem.

Re:This does not solve the problem (1)

lorenzo.boccaccia (1263310) | about 5 years ago | (#28655089)

Only if nobody is cheating - the current mechanism doesn't assume trusted third parties to provide useful congestion information, which is why it is currently so difficult for ISPs to avoid getting busted for violating the net neutrality principle.

Re:This does not solve the problem (1)

snaz555 (903274) | about 5 years ago | (#28654983)

TCP's congestion control algorithm, which causes congestion and then backs off is the real culprit here, and this router does nothing to fix that. The way to fix that is to dump TCP's congestion control and replace it with real flow control in the network layer.

Just remove the excess forwarding buffers; there's no point buffering more than what's required for the internal forwarding jitter, which should really be no more than a few datagrams at most. TCP is based on a model where congestion = loss, not congestion = pileup. Other UDP-based protocols - DNS, etc. - all have their own retransmission mechanisms, also based on the same model of congestion = loss.

What happens when routers have ridiculous quantities of buffer - several seconds' worth - is that entire TCP windows' worth of data get piled up, and _then_ TCP fast retransmit piles it up _again_. When the congestion eases and the router is draining its massive buffer, including all the piled-up retransmits, the source TCPs are still backing off. The culprit really isn't the congestion control, but the excess buffering.

Some experimental congestion control mechanisms attempt to get around this by continuously measuring one-way latency to detect intermediate buffer pileups - and they stop piling up more until the buffer is drained - but it's really silly to add complexity to work around something that shouldn't be in the datagram path in the first place. These methods of congestion control, however, tend not to work as well where congestion is actually caused by loss, but that tends to be pretty rare these days, when loss is usually indicative of buffer overflow or traffic shaping. (E.g. the common Ethernet collision domain is gone due to switched full-duplex infrastructure.)
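
The arithmetic on "several seconds' worth" of buffer is simple enough; the buffer size and link rate below are made up, just to show the scale:

    buffer_megabytes = 256
    link_mbps = 1000        # a 1 Gb/s egress link

    # A full buffer adds (buffer size) / (drain rate) of standing delay.
    delay_s = (buffer_megabytes * 8) / link_mbps
    print(f"{buffer_megabytes} MB of queue at {link_mbps} Mb/s = {delay_s:.2f} s of added delay")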

This looks like an Anagran ad (0)

e9th (652576) | about 5 years ago | (#28653933)

Is it just me, or does the article read like an Anagran ad for the FR-1000?

Re:This looks like an Anagran ad (1)

edittard (805475) | about 5 years ago | (#28653977)

You won't get many answers, since that would require somebody else to be:
a) foolish enough to read it
b) even more foolish to admit it

Re:This looks like an Anagran ad (0)

Anonymous Coward | about 5 years ago | (#28653985)

Yes it does.

so... (0)

Anonymous Coward | about 5 years ago | (#28653965)

Cut out the bullshit, and you get a router that prioritizes packets for already-established connections, amirite? Are stateful routers actually a new thing, or can I start mocking the word "flow" now?

so... (2, Funny)

Anonymous Coward | about 5 years ago | (#28653975)

a router tampon?

This isn't new (1)

khafre (140356) | about 5 years ago | (#28653987)

Ah yes, Larry Roberts. He seems to poke his head up every once in a while. From Caspian Networks, and now Anagran. He certainly likes to push flow routing, although it's been shown not to scale in practice.

Re:This isn't new (2, Insightful)

BrotherBeal (1100283) | about 5 years ago | (#28654053)

Can you explain a little more? I just RTFA and I'm not convinced this is revolutionary either, but it's hard to say because this seems more like marketing than actual research. However, I'm hesitant to say he's full of shit without hearing a bit more of the debate around his ideas.

One question I was hoping would be answered is what this flow routing buys you that something like SCTP wouldn't?

Re:This isn't new (5, Informative)

Spazmania (174582) | about 5 years ago | (#28654641)

I'm hesitant to say he's full of shit without hearing a bit more of the debate around his ideas.

There really isn't a debate around his ideas, at least not any more.

The hitch is management overhead. Managing a flow requires remembering the flow. That means data structures and stateful processing. It's expensive and no one has demonstrated hardware accelerators that do a good job of it. On the other hand, devices like a TCAM can accelerate stateless packet switching a couple orders of magnitude past what's possible with a generic PC.

At low data rates where DRAM latency is not an issue (presently around the 500 Mb/s range), flows can work and accomplish much of what he claims. At higher data rates (like the 10-100 Gb/s links on the backbone) we simply can't build hardware capable of managing flows for any kind of reasonable price.

Beyond that, Larry has really missed the boat. The next routing challenge isn't raw bits per second. That's pretty much in hand. Rather, the next challenge is the number of routes in the system. If you want two ISPs for reliability (instead of one), you currently have to announce a route into the backbone that is processed by every single router in the backbone even if it never sees your packets. That currently costs about $8k per route per year; the cost is falling a lot more slowly than the route count is climbing, and the lack of filtering and accounting systems means that each one of those $8k's is an overhead cost to the backbone networks rather than a cost directly recoverable from the user who announced the route.

Flow based routing doesn't help us solve that challenge in the least. If anything, it makes it worse.

If you're interested in routing theory and research, I recommend the Internet Research Task Force Routing Research Group (IRTF RRG). They're chartered by the IETF to perform basic research into Internet routing architectures and anyone interested can participate.

Re:This isn't new (3, Informative)

Anonymous Coward | about 5 years ago | (#28654061)

Definitely not new.

"The router recognizes packets that are following the first and sends them along faster than if it had to route them as individuals."

Where have I heard this before...oh hay...

http://en.wikipedia.org/wiki/Cisco_Express_Forwarding [wikipedia.org]

Re:This isn't new (1)

cgori (11130) | about 5 years ago | (#28655771)

Thank you, he is the very same one whose ideas evaporated $200M+ in VC money at Caspian, right? They were across the highway from me for years when I was in the valley. Plus ça change...

Pretty girls make things go faster (1, Funny)

Anonymous Coward | about 5 years ago | (#28654075)

Why can't we just put a pretty girl on top of it and make the packets go faster?

Seems to work with car advertising and on animals.

Re:Pretty girls make things go faster (4, Funny)

NotBornYesterday (1093817) | about 5 years ago | (#28654721)

What do you mean? 99% of the internet's packets are pretty girls.

Re:Pretty girls make things go faster (1)

T Murphy (1054674) | about 5 years ago | (#28655155)

Damn. I better read more of my spam then.

I don't actually know what I'm talking about (0)

Anonymous Coward | about 5 years ago | (#28654107)

Finally! It's about time! I mean, jeez, could it have been more obvious? I've been saying this for years!

But in all seriousness, it would be pretty sweet if it helps streamed media; be it audio, video, games, or some super google chrome plot to launch skynet.

This sounds like a cracker's dream (2, Insightful)

Bandman (86149) | about 5 years ago | (#28654111)

It manages flow of traffic, recognizing when one packet belongs with the others. This sounds wonderful, at least for people trying to inject packets.

I hope these things recognize the evil bit [faqs.org] .

Puffery by a startup (5, Informative)

Ungrounded Lightning (62228) | about 5 years ago | (#28654129)

The main players in the routing industry have been working on flow-aware routing for years.

(I'm in the hardware side of our company so I'm not sure how many and which of the features built on the flow-based architecture are already in the field. But I'm willing to bet a significant chunk of change that the full-bore version will be deployed on more than one name-brand company's product line and be the dominant paradigm in routing long before these guys can convince the telecoms and ISPs to adopt their product. No matter how many big names they have on staff - or how good their box is. Breaking into networking is HARD.)

Re:Puffery by a startup (1)

Sycraft-fu (314770) | about 5 years ago | (#28654559)

Ya I'm failing to see what is special here. Now the article was kind of light on the details, so maybe there's more to it, but to me it sounds like what the Cisco 6000s and such already do. When you start a flow the first packet hits the router and it decides where it is going, if it is allowed and all that jazz. After that, the subsequent packets are switched which makes it much faster. Routing is essentially done on flows, not packets.

Maybe this is somehow way more amazing, but it doesn't look like it.

Re:Puffery by a startup (1)

BitZtream (692029) | about 5 years ago | (#28655065)

Especially trying to break into a market by telling everyone about your awesome super cool new way of doing things ... that everyone else has been doing for 10 years already.

Didn't Ipsilon try this a long time back? (1)

nokiator (781573) | about 5 years ago | (#28654149)

But seriously, flow management/queuing may be useful at the very edge of the network, like a BRAS. But most provider edge products (Juniper, Ericsson/Redback, ...) already have similar capabilities. Flow management past the edge of a network is pointless, especially for TCP/IP traffic.

Little help? (1)

xZgf6xHx2uhoAj9D (1160707) | about 5 years ago | (#28654195)

I read the article and can't exactly distinguish this from IntServ [wikipedia.org] . What's the difference?

Re:Little help? (1)

Steve Blake (13873) | about 5 years ago | (#28655257)

IntServ assumed that flows are signalled (with RSVP). The Anagran box (and the Caspian box before it) detects the first packet in a flow (by a miss in the flow cache), and then creates a flow cache entry.

The utility of this feature is very questionable, especially since routers have been able to IP forward and apply ACLs at line rate for years.

Some thoughts (3, Interesting)

intx13 (808988) | about 5 years ago | (#28654263)

First, routers discard packets somewhat randomly, causing some transmissions to stall.

While it is true that whether or not a particular packet will be discarded is the result of a probabilistic process, it is unfair to call it "random". Based on a model of the queue within the router and estimation of the input parameters the probability of a packet being discarded can be calculated. In fact, that's how they design routers. You pick a bunch of different situations and decide how often you can afford to drop packets, then design a queueing system to meet those requirements. Queueing theory is a well-established field (the de-facto standard textbook was written in 1970!) and networking is one of the biggest applications.

Second, the packets that are queued because of momentary overloads experience substantial and nonuniform delays

You wouldn't expect uniform delays. A queueing system with a uniform distribution on expected number of customers in the queue is a very strange system indeed. Those sorts of systems are usually related to renewal processes and don't often show up in networking applications. That's actually a good thing, because systems with uniform distributions on just about anything are much more difficult to solve or approximate than most other systems.

"Substantial" is the key word here. Effectively the concept of managing "flows" just means that the router is caching destinations based on fields like source port, source IP address, etc. By using the cache rather than recomputing the destination the latencies can be reduced, thus reducing the number of times you need to use the queue. In queueing theory terms you are decreasing mean service time to increase total service rate. Note however that this can backfire: if you increase the variance in the service time distribution too much (some delays will be much higher when you eventually do need to use the queue) you will actually decrease performance. Of course assumedly they've done all of this work. In essence "flow management" seems to be the replacement of a FIFO queue with a priority queue in a queueing system, with priority based on caching.

Personally, I'm not sure how much of a benefit this can provide. Does it work with NAT? How often do you drop packets based on incorrect routing as compared to those you would have dropped if you had put them in the queue? If this was a truly novel queueing theory application I would have expected to see it in a IEEE journal, not Spectrum.

And of course, any time someone opens with "The Internet is broken" you have to be a little skeptical. Routing is a well-studied and complex subject; saying that you've replaced "packets" with "flows" ain't gunna cut it in my book.

Re:Some thoughts (0)

Anonymous Coward | about 5 years ago | (#28655661)

the de-facto standard textbook was written in 1970!

Hmm... I'm not sure which book you mean. I've always taken the classic text to be Kleinrock's (1975).

Incidentally, many routers out there implement Random Early Discard -- so while you are right to say that the probability of a packet being dropped can be calculated it is also correct to say that the actual packet which is dropped is random.

No big thang (1, Funny)

Anonymous Coward | about 5 years ago | (#28654307)

This sounds fancy, but the only real improvement is the hash-table lookup; everything else is already implemented in current-generation routers.

And it starts at $30,000 a model, ROFLMAO. Thanks, umm, but NO thanks!

So, they've reimplemented CEF (3, Interesting)

elbuddha (148737) | about 5 years ago | (#28654315)

Yippee.

Cisco (and probably several others) have done this by default for many many moons now. By way of practical demonstration, notice that equal weight routes load balance per flow, not per packet. What it allows is subsequent routing decisions to be offloaded from a route processor down to the asics on the card level. And don't try to turn CEF off on a layer 3 switch - even a lightly loaded one - unless you want your throughput to resemble 56k.

Re:So, they've reimplemented CEF (0)

Anonymous Coward | about 5 years ago | (#28654405)

Yippee.

Cisco (and probably several others) have done this by default for many many moons now. By way of practical demonstration, notice that equal weight routes load balance per flow, not per packet. What it allows is subsequent routing decisions to be offloaded from a route processor down to the asics on the card level. And don't try to turn CEF off on a layer 3 switch - even a lightly loaded one - unless you want your throughput to resemble 56k.

Beat me to it. I was going to say the same thing. Welcome to the '90s!

It looks like horrible technology (2, Funny)

Anonymous Coward | about 5 years ago | (#28654339)

Among the innovations:

no RAM for buffering flows to cope with any temporary overcommitment. Instead it does this:

"Even more significant, the FR-1000 does away entirely with the queuing chips. During congestion, it adjusts each flow rate at its input instead. If an incoming flow has a rate deemed too high, the equipment discards a single packet to signal the transmission to slow down."

Um, discarding a random packet in the middle of my session will indeed slow the flow down, much in the same way that shooting me in the knee will slow me down.

i don't get it, isn't that what is done now? (1)

tukia (1375091) | about 5 years ago | (#28654359)

Revolutionary? The concept isn't new. On a software-based router, we cache route information after the first lookup in the routing table for a certain period of time, based on parameters like destination IP address, next hop, and interface. So instead of looking up the route table again, we just look up the cached route. It's called IP Flows and it's way old.

What's the date on this, 1998? (1, Informative)

Anonymous Coward | about 5 years ago | (#28654385)

Someone put the spades back. Flow routing is SUCH old news... How the heck did this make Slashdot?

Re:What's the date on this, 1998? (1)

swmike (139450) | about 5 years ago | (#28654961)

I agree. Flow routing with 4M flows didn't work in devices engineered in the late 1990s, and it won't work now. Any script kiddie can create new "flows" by sending random port/destination IP packets through the device, and it'll fall over and die just like the devices 10 years back did.

We stopped doing flow routing for a reason, it didn't work. Routers need to care about IP addresses and perhaps take into account port numbers to do load sharing between equal cost links, but nothing else. Looking into flows does NOT scale.

Re:What's the date on this, 1998? (1)

SuiteSisterMary (123932) | about 5 years ago | (#28655469)

Well, by that logic, packet-based routers don't work either. Any script kiddie can create new 'packets' at random, flood them at the router, and the router'll fall over and die.

This Design is Flawed (3, Informative)

neelsheyal (1595659) | about 5 years ago | (#28654399)

Routing/switching based on flows is highly flawed. The article claims that the benefit comes from reduced table lookups on individual packet content: instead, the 5-tuple is hashed to a flow ID, and the presence of the flow ID indicates that the flow is already active and will be treated preferentially during congestion. First of all, if the number of flow IDs is large, there is no way to store all the different flow IDs in a scalable and cost-effective manner. That means you have to add an eviction policy, which can hurt you more with all these complexities. Secondly, there is the concept of hardware caching, which works better than hashing flow IDs. Finally, all the classes of flows which are really important can be protected with class-based queuing.
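
To illustrate the eviction problem in a few lines of Python - the table size and LRU policy here are deliberately toy-sized, not a claim about how the real box behaves:

    from collections import OrderedDict

    MAX_FLOWS = 4            # absurdly small, so eviction is visible
    flows = OrderedDict()    # flow_id -> per-flow state

    def touch(five_tuple):
        flow_id = hash(five_tuple)
        if flow_id in flows:
            flows.move_to_end(flow_id)      # hit: refresh LRU position
            return "hit"
        if len(flows) >= MAX_FLOWS:
            flows.popitem(last=False)       # evict the least recently used flow
        flows[flow_id] = {"packets": 0}     # miss: pay the full classification again
        return "miss"

    # Six distinct flows overflow the four-entry table; flow 0 has been evicted
    # by the time it sends its next packet, so it pays the full lookup again.
    for i in list(range(6)) + [0]:
        print(i, touch(("10.0.0.1", "198.51.100.7", 6, 40000 + i, 443)))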

x25 (0)

Anonymous Coward | about 5 years ago | (#28654427)

This is the X.25 protocol, which was the European equivalent of TCP/IP. Since the EU never had an IT industry, the US had no problem enforcing it. Interestingly, when the ATM standards war took place, the US and Japan made a compromise between two round numbers, midway: 53.

Don't Cross The Streams (3, Insightful)

BigBlueOx (1201587) | about 5 years ago | (#28654429)

Why?
It would be bad.
I'm fuzzy on the whole good/bad thing. What do you mean, "bad"?
Try to imagine all the packets on your network stopping instantaneously and every router on the Internet exploding at the speed of light.
Total TCP reversal!!
Right, that's bad. Important safety tip. Thanks, Egon.

p2p (1)

visible.frylock (965768) | about 5 years ago | (#28654455)

This capability is especially convenient for managing network overload due to P2P traffic. Conventionally, P2P is filtered out using a technique called deep packet inspection, or DPI, which looks at the data portion of all packets. With flow management, you can detect P2P because it relies on many long-duration flows per user. Then, without peeking into the packets' data, you can limit their transmission to rates you deem fair.

If routers started doing this, wouldn't torrent clients just start randomizing their port numbers? According to him, different port numbers get counted as different "flows". I'd think, if they wanted to do this, they'd at least have to look at IPs; port numbers are easy to change.

Re:p2p (1)

Gerald (9696) | about 5 years ago | (#28654727)

If you have dozens or hundreds of long-duration, active flows (BitTorrent) and your neighbor has a few intermittent, short-duration flows (Firefox), it's pretty obvious who to throttle. The port numbers in use are irrelevant in this case.
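
In rough Python, the heuristic is just counting long-lived concurrent flows per subscriber; the thresholds and the input format here are invented for illustration:

    import time

    FLOW_AGE_S = 60        # flows older than this count as "long-duration"
    MAX_LONG_FLOWS = 50    # more than this many per subscriber looks like P2P

    def heavy_users(flows, now=None):
        """flows: iterable of (subscriber_ip, flow_id, start_time) tuples."""
        now = now if now is not None else time.time()
        counts = {}
        for subscriber, _flow_id, started in flows:
            if now - started > FLOW_AGE_S:
                counts[subscriber] = counts.get(subscriber, 0) + 1
        return [ip for ip, n in counts.items() if n > MAX_LONG_FLOWS]

Randomizing ports changes none of the inputs here, which is the point: it's the count and age of flows per subscriber IP that gives the game away.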

Wrong (2, Informative)

slashnik (181800) | about 5 years ago | (#28654461)

"TCP throughput is inversely proportional to delay"

Absolutely wrong, 2Mb/s at 1ms delay gives the same throughput as 2Mb/s at 10ms delay
As long as the window is large enough

Re:Wrong (1)

TooMuchToDo (882796) | about 5 years ago | (#28654821)

You do run into throughput limitations per flow though based on the speed of light.

http://fasterdata.es.net/ [es.net]

Re:Wrong (1)

TooMuchToDo (882796) | about 5 years ago | (#28654851)

Argh. That was supposed to say "based on your latency, which is caused by the speed of light."

Re:Wrong (1)

sharpenyourteeth (827502) | about 5 years ago | (#28655723)

"TCP throughput is inversely proportional to delay"

Absolutely wrong: 2 Mb/s at 1 ms delay gives the same throughput as 2 Mb/s at 10 ms delay, as long as the window is large enough.

Actually it is correct. The throughput is equal to TCP's congestion window divided by the round trip time (end to end delay), or TP = CWND/RTT. What he means is that assuming the window size is the same, the throughput is inversely proportional to delay.
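
A quick numeric check of TP = CWND/RTT with a fixed window; the 64 KB window size is just an example:

    cwnd_bytes = 64 * 1024                  # a fixed 64 KB window

    for rtt_ms in (1, 10):
        throughput_mbps = (cwnd_bytes * 8) / (rtt_ms / 1000) / 1e6
        print(f"RTT {rtt_ms:2d} ms -> {throughput_mbps:7.2f} Mb/s")

Ten times the delay gives one tenth the throughput (roughly 524 Mb/s vs. 52 Mb/s here), unless the window grows to compensate.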

Re:Wrong (0)

Anonymous Coward | about 5 years ago | (#28655841)

"TCP throughput is inversely proportional to delay"

Absolutely wrong: 2 Mb/s at 1 ms delay gives the same throughput as 2 Mb/s at 10 ms delay, as long as the window is large enough.

But in reality a HUGE window is impractical. For real-world TCP communication he is correct.

Re:Wrong (1)

mevets (322601) | about 5 years ago | (#28657043)

Wouldn't it be cool to have a protocol where throughput was not inversely proportional to delay? The more {comcast,bell,...} throttled your pirate^Wtorrent, the faster it got. If you could combine this with a protocol where latency was inversely proportional to bandwidth, just by unplugging you could have an infinite-throughput, zero-latency network. How cool is that!

I see what you did there... (2, Interesting)

Anonymous Coward | about 5 years ago | (#28654531)

He has re-invented the layer 3 switch... now with less jitter and latency because:

The FR-1000 does away entirely with the queuing chips. During congestion, it adjusts each flow rate at its input instead. If an incoming flow has a rate deemed too high, the equipment discards a single packet to signal the transmission to slow down. And rather than just delaying or dropping packets as in regular routers, in the FR-1000 the output provides feedback to the input. If there's bandwidth available, the equipment increases the flow rates or accepts more flows at the input; if bandwidth is scarce, the router reduces flow rates or discards packets.

So we are going to implement WRED on a per-flow basis, get rid of the queuing, and force the TCP stream to scale back its window size when we run out of bandwidth by dropping a packet out of that conversation...

I mis-spoke, this is a layer 2 and a half switch!

Look what it *doesn't* have. (2, Insightful)

JakiChan (141719) | about 5 years ago | (#28654929)

He's doing it *without* custom ASICs and without TCAM. TCAM is very expensive. I'm not sure this is faster than CEF or the like, but it may very well be cheaper.

yaaaawwwwnnn (0)

Anonymous Coward | about 5 years ago | (#28654975)

Yeah, I don't see what's radical either... this tech has been out for some time now.

Caspian Networks Reloaded (1)

Eristone (146133) | about 5 years ago | (#28655531)

Fun watching people say this "doesn't work" -- back when I was at Caspian, the real-world runs were working quite well at gigabit speed and, if memory serves, they had a 10 gigabit line card (this was 2006). The cost was that they had to design ASICs to do this, and they were trying to get the same performance out of commodity hardware. It looks like this is the case - which means it's dropped the cost of the equipment significantly.

Where it does improve over current routing and QoS is that it does it on the fly and at wire speed, with an administrator putting in parameters that state the type of performance he wants for "detected flows". In addition, a lot of profiling was done on various traffic to figure out what type of traffic produces what. For instance, the stuff Caspian was selling could identify a VoIP connection on the fly without doing deep packet inspection, even if the traffic was encrypted. It did the same with torrent traffic, video traffic, web surfing traffic, IM traffic, IRC traffic, etc. So, instead of having a deep packet inspection box, a router, and a switch, you'd get the flow traffic, identify it based on the traffic profile, establish a QoS on the flow, and then maintain it. It would help against DDoS situations - maintain the current connections that are coming through while establishing new ones as needed. And this is all in the same box.

I don't know what the Anagran folks have managed to do, but if they're working off the same model (and they probably have a bunch of the same people working on things), the stuff I mentioned should definitely be part of the same equipment.

Traffic throttling long-lived connections? (0)

Anonymous Coward | about 5 years ago | (#28655623)

From the article, it sounds like it's trying to sell the idea of managing traffic (specifically P2P) fairly, which is admirable. I like the idea of a long-lived high-throughput P2P connection being treated the same as a long-lived high-throughput HTTP connection. That seems incredibly fair to me.

However, if they start treating long-lived low-throughput connections differently (more likely in a P2P setting, I'd imagine), then that seems a little unfair.

The very quick workaround would be for P2P clients to set a limit on the duration of a connection, and once that connection expires, to drop the connection and re-establish it. From what the article says, that would get around their "fairness" system. Of course, this assumes the port is part of the hash table key (which is how the article makes it sound). If it is not (and the "fairness" is based on one IP to another), then that's a different story.

OpenBSD anyone? (1)

Narcocide (102829) | about 5 years ago | (#28655757)

Isn't this something that you can accomplish with OpenBSD packet filter?

Re:OpenBSD anyone? (1)

Shatrat (855151) | about 5 years ago | (#28656117)

Software routing/switching isn't going to touch a hardware solution for speed and reliability.
Unix routers have their place, but they can't do big pipes.

Re:OpenBSD anyone? (0)

Anonymous Coward | about 5 years ago | (#28656161)

how is a packet filter related to routing?

didn't rtfa, but i assume it's about tampax. (0, Offtopic)

gadabyte (1228808) | about 5 years ago | (#28655869)

"managing flows" unburied a memory of a tampon commercial.

During your period, your flow level can change from one day to the next. That's why Tampax developed the Compak Multipax. You get 3 tampon absorbencies to meet your changing needs, in one convenient package.

Aunt Flow (0)

Anonymous Coward | about 5 years ago | (#28655929)

Will it manage how often she comes to visit? That would be a sure-fire hit with the married men, at least.

What A Great New Concept! (1)

DynaSoar (714234) | about 5 years ago | (#28657509)

Cool, a transfer protocol that adapts what's sent when according to traffic flow. It needs a catchy name.

I suggest Zmodem.
