
Rerouting the Networks

kdawson posted more than 7 years ago | from the coding-around-damage dept.

Networking 108

prostoalex writes "Scientific American looks at a new approach to clearing networking jams, in research funded by the US military. Instead of using routers to route the packets from point A to point B, thus making some hop in the sequence critical for delivering the message, researchers are exploring a new approach called 'network coding.' (Here is the illustration cited in the article.)" Quoting: "[Four researchers] then at the University of Hong Kong published groundbreaking work that introduced a new approach to distributing information across shared networks. In... network coding, routers are replaced by coders, which transmit evidence about messages instead of sending the messages themselves. When receivers collect the evidence, they deduce the original information from the assembled clues. Although this method may sound counterintuitive, network coding, which is still under study, has the potential to dramatically speed up and improve the reliability of all manner of communications systems and may well spark the next revolution in the field. Investigators are, of course, also exploring additional avenues for improving efficiency; as far as we know, though, those other approaches generally extend existing methods."


damn commies... (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#19215667)

are we going to war with china soon? if we are, we're fucked.. they blow the usa away in technology

Re:damn commies... (1)

wellingj (1030460) | more than 7 years ago | (#19215759)

Well it is a published paper. It's not like it's a big secret now, or even patentable. Thank god....

Waste of Taxpayer money (4, Funny)

SpaceLifeForm (228190) | more than 7 years ago | (#19215673)

All of that new NSA equipment would have to be re-worked.

I doubt that the taxpayers would approve.

don't worry (5, Funny)

geedra (1009933) | more than 7 years ago | (#19215717)

we'll cut costs by outsourcing to china ;)

Re:don't worry (0)

Anonymous Coward | more than 7 years ago | (#19216593)

Hell yeah, Hong Kong is part of China, so technically it has already been outsourced ;-p

Re:don't worry (1)

elrous0 (869638) | more than 7 years ago | (#19220169)

Well, who knows routers better than the Chinese? This system has the potential to let them block the word "democracy" at a MUCH faster rate.

Re:Waste of Taxpayer money (1)

Karl Cocknozzle (514413) | more than 7 years ago | (#19216055)

All of that new NSA equipment would have to be re-worked. I doubt that the taxpayers would approve.

I am posting in this thread because I'm fresh out of mod points... But even if I weren't, there isn't any "+1, Deftly Ironic" moderation or I'd set you up with one of those...

Re:Waste of Taxpayer money (3, Funny)

grcumb (781340) | more than 7 years ago | (#19216119)

All of that new NSA equipment would have to be re-worked. I doubt that the taxpayers would approve.
I am posting in this thread because I'm fresh out of mod points... But even if I weren't, there isn't any "+1, Deftly Ironic" moderation or I'd set you up with one of those...

It was removed because people kept using it when they should have chosen the "+1, Biting Satire" option instead.

(For reference, this post should be modded "+1, Deadpan".)

that's crap (0)

Anonymous Coward | more than 7 years ago | (#19215707)

just lube the tubes.


This sounds vaguely like (4, Interesting)

idontgno (624372) | more than 7 years ago | (#19215719)

the information transmission equivalent of holography. On one level, it seems sensible and obvious. On another level, it's rather novel (at least, the application of holographic distribution of data to a network architecture). At least, I've never heard of it, either in real-world technology research or even (to my recollection) SF technology.

Re:This sounds vaguely like (2, Informative)

beyondkaoru (1008447) | more than 7 years ago | (#19216239)

this already happens on really lossy transmissions -- especially for things like satellite tv or, say, deep-ish space missions that don't want to waste time retransmitting (in satellite tv, if you didn't get it, you can't ask for it again; your picture just looks terrible for a split second).

so, they use error correcting codes, like, for example, parity (like raid 4,5) or reed-solomon (i think that's used in raid 6 or 7 or something), or turbo-codes.

fyi, for the latter i don't actually know how it works.

anyway, this is a pretty good idea (even though it's certainly been thought before); i'd be happy to see tcp replaced with something a bit smarter and just as functional.
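The single-parity scheme mentioned above (RAID 4/5 style) can be sketched in a few lines; the block contents here are made up for illustration:

```python
# Minimal sketch of single-parity protection (RAID 4/5 style):
# XOR all data blocks into one parity block; any single lost block
# can then be rebuilt from the survivors.
from functools import reduce

def parity(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"net", b"cod", b"ing"]   # three equal-length data blocks
p = parity(data)                  # the parity block

# Lose block 1, then rebuild it from the remaining blocks plus parity.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```

Reed-Solomon and turbo codes generalize this to survive more than one loss, at the cost of heavier math.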

Re:This sounds vaguely like (0)

Anonymous Coward | more than 7 years ago | (#19219211)

To me it sounds like someone is reusing the BitTorrent idea to cash in on funding.

I must be missing something. (2, Insightful)

khasim (1285) | more than 7 years ago | (#19215747)

The numbers they give do not correspond to the diagram.

Suppose we additionally send Amy's missive to Carl along link 1 and Ben's to Dana along link 4. Sending these two messages uses network resources (links 1 and 4) that the routing system could not usefully employ for meeting Amy's and Ben's demands. Carl's node receives Amy's transmission and knows for each instant (from link 6) whether the number of 1s in the pair of messages issued by Amy and Ben is even or odd. If Carl's node is programmed to also "know" the rule used by the coders at the start of link 5 or if it can infer the rule from the evidence itself, the collected evidence will enable it to decipher the message sent by Ben. And Dana's node will similarly uncover Amy's message.
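A sketch of the deduction the quoted passage describes, assuming the coder's rule is XOR (the even/odd count of 1s):

```python
# Sketch of the article's example, assuming the coder's rule is XOR
# (one parity bit saying whether the pair of message bits has an even
# or odd number of 1s). Names follow the quoted passage.

def coder(bit_a, bit_b):
    """The coder forwards one bit of 'evidence' instead of both bits."""
    return bit_a ^ bit_b

amy, ben = 1, 0
evidence = coder(amy, ben)      # travels over the shared links 5/6/7

# Carl already received Amy's bit directly (link 1), so he can deduce
# Ben's bit; Dana holds Ben's bit (link 4) and deduces Amy's the same way.
assert evidence ^ amy == ben
assert evidence ^ ben == amy
```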

So they had originally started with one channel from A to D and that channel was also used by B sending to C.

They said that it was inefficient because A to D would have to wait at times for B to C.

Their "solution" seems to be to add a SECOND CHANNEL that will transmit a bit AS FAST AS THE PRIMARY CHANNEL and so both messages will get to their destination sooner.

Why not just use that secondary channel to transmit the data in the first place?

Did I miss something?

Re:I must be missing something. (1)

Volante3192 (953645) | more than 7 years ago | (#19215797)

That's what it was looking like to me...

It sounds like the article's original hypothesis is 'Frames go from Router A to Router B.' So what they do to solve it is...add more routers. The diagram simply looks like 1 router that can route to 6 further routers (A-F), which then route to a further 20 (G-Z).

I'm probably missing something too...that seems too easy.

And they added FASTER routers. :) (1)

khasim (1285) | more than 7 years ago | (#19216131)

The top router can only handle 6 outbound connections at 1 bit per second (total 6 bits per second).

The second rank of routers can handle TEN outbound connections at 1 bit per second (total 10 bits per second).

And they aren't adding any time delays for the merging of the data streams.

I think I can see what they're trying to do in the pictures. And it doesn't look right to me (but who am I anyway). Their choke point (the first router) needs a LOT of information about the second rank of routers.

In essence, they're turning 3 data streams into 4 data streams. The 3 original streams go out on 3 of the six outbound lines to 3 of the super fast routers which will send them to 30 of the end points. Given that there are only 20 end points, 10 of them will receive two of the original messages.

Now, those 3 data streams are also merged and that merged stream is sent out on the THREE LEFTOVER LINES. Which end up going to 30 end points.

The problem in my example is that it leaves 10 of the 20 end points with 2 copies of the merged messages and only 1 of the original streams to attempt to unlock them with.

Why not just save time and effort and send the original 3 messages out with 2 lines each? That's GOT to be easier than merging them and HOPING that each endpoint gets the correct unlocking keys.

Re:And they added FASTER routers. :) (1)

Sillygates (967271) | more than 7 years ago | (#19216251)

The top router can only handle 6 outbound connections at 1 bit per second (total 6 bits per second). The second rank of routers can handle TEN outbound connections at 1 bit per second (total 10 bits per second).
I think you are off by several orders of magnitude.

Re:And they added FASTER routers. :) (1)

somersault (912633) | more than 7 years ago | (#19221713)

I think you didn't RTFA.. though you aren't missing much. It vaguely states a method of compressing information together without having any reasonable way to uncompress it again. That's what they need to work out before this will be useful. And the fact that he says things like "the startling truth" before stating things that aren't at all startling makes him sound like either an idiot or a salesman..

Re:I must be missing something. (2, Informative)

LiquidCoooled (634315) | more than 7 years ago | (#19215803)

What happens when the second node goes down?
If you automatically broadcast the knowledge of the message across all available channels, then if it gets to the other end, it's won.

This is the "don't put all your eggs in one basket" scenario, which attempts to address bandwidth limitation issues.

It was only recently I was thinking about global communications on a wireless mesh without ISP involvement (reroute the backbone across home networks), I wonder if this could be the kickstart it needs?

Re:I must be missing something. (0)

Anonymous Coward | more than 7 years ago | (#19216299)

i'm a biology graduate, not a physics student so i'm not well-versed in wireless physics, but here's what I wonder about:

I was also thinking about a p2p internet (home network backbone as you describe) service that would send data using wireless networks (also possibly using some undiscovered "sub-quark/gluon" level wireless medium), along with encrypting all of it such that it would be currently undetected by the FCC's radio-sized wave probing. It would be good in the sense that people would get free internet supported by the people, and could compile multiple input signals to tell and verify if a data packet (which could attempt to "spread") is sending false information across networks; a "majority signal" could discredit questionable or unsubstantiated news alerts (not like a simple slandering meme (still bad), but one that is disinformation or "malware" for the system). It would be bad in the sense that terrorist factions could hide/transmit their conversations through such tinier "strings", if such a system is intrinsically larger in terms of the number of "wireless threads" it has, if they're present at 1/1000x, etc. the "diameter" of a conventional wireless frequency's wave medium, and is more obscuring of information during traces.

Re:I must be missing something. (1)

dosquatch (924618) | more than 7 years ago | (#19223281)

I see what you're saying, but wouldn't it be easier to just reverse the polarity of the main deflector to ward off the anti-gravitons?

Re:I must be missing something. (3, Informative)

markov_chain (202465) | more than 7 years ago | (#19215819)

You're right, their example is not very clear. It works best when there are extra resources available to the system that *can't* be used for other purposes; the best example of this is wireless communication, where all transmissions are broadcast. For example, if A sends a packet to B, but C also hears it, we can't possibly use the A-C transmission for some other data.

That still wouldn't work. (2, Informative)

khasim (1285) | more than 7 years ago | (#19215899)

For their system to work, it seems that A has to send to B and C

which blocks C from sending anything

but C must be sending to D at the same time so that the messages can be merged at the "coder" point.

And the coder would then use TWO transmissions to send the "hint" and the merged data.

It's still sending two data streams (one for merged stream and one for hints) when they were originally complaining about how it was a single channel.

Re:That still wouldn't work. (5, Informative)

markov_chain (202465) | more than 7 years ago | (#19216095)

The standard example in wireless networks goes like this. Suppose we have a 3-node chain A-B-C, where A has stuff to send to C, and vice versa, i.e. the communication is bidirectional. A and C are too far from each other, so they need to go through B.

Without any coding this is how time is spent:
1. A->B: pAB
2. B->C: pAB
3. C->B: pBA
4. B->A: pBA

Note that in steps 2 and 4, thanks to the nature of wireless channels, both A and C get the packet transmitted by B. One of these receptions is thus wasted. With coding, it can be used to send the same data in 25% less time:

1. A->B: pAB
2. C->B: pBA
3. B->A, B->C: pAB xor pBA

To decode, A xors the received data with pAB, C with pBA.
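A toy version of that schedule (the payload strings are stand-ins, equal in length so the XOR lines up):

```python
# Toy version of the coded relay exchange: B broadcasts one XOR-combined
# packet in slot 3, and each endpoint strips out the packet it already
# knows. Three slots instead of four -- the 25% saving described above.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

pAB = b"ping"               # what A wants delivered to C
pBA = b"pong"               # what C wants delivered to A

# Slot 1: A->B carries pAB.  Slot 2: C->B carries pBA.
# Slot 3: B broadcasts the combination; both A and C overhear it.
coded = xor_bytes(pAB, pBA)

assert xor_bytes(coded, pBA) == pAB   # C decodes using the packet it sent
assert xor_bytes(coded, pAB) == pBA   # A decodes likewise
```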

Re:That still wouldn't work. (3, Interesting)

SurturZ (54334) | more than 7 years ago | (#19216127)

What about bad packets? Wouldn't error correction stuff the whole thing up? i.e. one re-sent packet and you lose out?

No. (1)

jsn13 (734418) | more than 7 years ago | (#19216849)

[I didn't RTFA.]
Packet loss/retransmits don't matter much, IMHO. In the GP's example, host B could use some explicit out-of-band signaling to tell A and C which packets are used as the xor context. Like, via IP options / the IP ID field or something.

Re:No. (3, Informative)

SurturZ (54334) | more than 7 years ago | (#19217321)

I worked it out, coding is still better.

For the example before:
1. A->B: pAB
2. C->B: pBA
3. B->A, B->C: pAB xor pBA

Assume that at time=3 a bird flies between B & C. The sequence becomes:
1. A->B: pAB
2. C->B: pBA
3. B->A, B->C: pAB xor pBA [B->A SUCCESS, B->C ERROR]
4. B->C: pAB xor pBA [retry]

B resends the packet, but addresses it only to C. This is still quicker than routing, which would take 5 time slices.

Re:No. (1)

superpulpsicle (533373) | more than 7 years ago | (#19222589)

So is this the end of routers and the beginning of coders wtf?

Re:That still wouldn't work. (0)

Anonymous Coward | more than 7 years ago | (#19223431)

Bad packets are reasonably rare, and the packet would have to be bad in a way that it looks the same with respect to identification for decoding, but is actually different. E.g., has the same source and destination addresses as well as the same CRC, but its contents are corrupted. All in all, it makes them very unlikely, and these errors would be caught at higher layers (e.g., TCP).

Wireless is the place where this technique is especially powerful, due to the inherently broadcast medium. There was an excellent paper in SIGCOMM 2006 (from MIT), entitled "XORs in the Air," with some impressive results.

Where is "D"? (2, Interesting)

khasim (1285) | more than 7 years ago | (#19216243)

Their discussion isn't about
A to B to C
and
C to B to A

It's about A sending to D while at the same time B is sending to C.

You've left off "D".

And you failed to account for how B would know ahead of time that C would be sending a message. Which fails completely when you try to account for "D" in the equation. You need to account for the packets telling B which points wish to transmit.

In your wireless example, it would be easier to just skip B and have A broadcast its message to all and sundry, and then B can broadcast. C and D would pick up whatever was meant for them.

1. A broadcasts to everyone, D receives the message.
2. B broadcasts to everyone, C receives the message.

Only two steps required. A further 33% savings and it includes an additional recipient.

Re:Where is "D"? (1)

markov_chain (202465) | more than 7 years ago | (#19216917)

You've left off "D".
I wasn't explaining their example, I was giving you a different example where network coding can be put to good use with a clear reason for the existence of the additional unused resources. There are more good examples in wikipedia. [wikipedia.org]

And you failed to account for how B would know ahead of time that C would be sending a message.
  That's an implementation detail. It's been solved by buffering some packets at B. There are good papers on this out there, like xor in the air. [mit.edu]


In your wireless example, it would be easier to just skip B and have A broadcast its message to all and sundry and then C can broadcast.

What part of "A and C are too far from each other, so they need to go through B" did you not understand?

Fascinating. (0)

khasim (1285) | more than 7 years ago | (#19217313)

I wasn't explaining their example, I was giving you a different example where network coding can be put to good use with a clear reason for the existence of the additional unused resources.

So while the discussion was about their example (did you notice the part where I said that the numbers in the diagram didn't match up with their example?), you decide that you should be talking about something completely different.

Yes, I'm glad you managed to work your way through the simple A B C problem.

It's just a shame that that wasn't the topic of discussion. And the topic of discussion is the problem they're having getting A to D and B to C.

And it seems that they are talking about using additional channels, simultaneously. Which is why we were discussing their example and why I kept pointing out that you were leaving "D" off of your example.

Can we stay on the subject of their example now?

Re:Fascinating. (1)

markov_chain (202465) | more than 7 years ago | (#19217449)

Like the guy below said, check out the butterfly network on the wikipedia page, that explains it more clearly. The trick is that the transmissions are multicast, not unicast, and that the side links would go unused without coding.

Re:That still wouldn't work. (1)

phooka.de (302970) | more than 7 years ago | (#19218541)

Ahm... no.

In your example, C's response does not depend on what C receives from A. Also, B is magically able to simultaneously send two messages (no serial device can do that).

If the response (C -> A) is indeed independent from the first message (A -> C), then A and C can start sending in parallel, thus eliminating the gain in speed. Also, you assume that the message sent from A to C is identical to the message sent from C to A. Assuming that, I'd offer the even more efficient strategy of using A -> A, C -> C, eliminating the need for B entirely.

butterfly and quantum networks (5, Informative)

kidquantum (956549) | more than 7 years ago | (#19215959)

There is a good explanation in this reference [arxiv.org]. Although that paper mostly treats the case of quantum networks (which is uber cool), it gives the background. The standard example (not explained well in TFA) is the butterfly network ( http://en.wikipedia.org/wiki/Network_coding [wikipedia.org] ), and the wikipedia article does a reasonable job explaining it. Basically, even with the three different routes, there is still a bottleneck if you just reroute information. But by coding the information, you get it all through. The example of the butterfly network is very simple and worth learning -- just google it if the wikipedia entry or the arxiv paper doesn't help.
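A minimal sketch of that butterfly example, using the textbook node names (a source multicasting bits b1 and b2 to two sinks through a one-bit bottleneck):

```python
# Minimal butterfly-network sketch: the source multicasts bits b1 and
# b2 to two sinks, but the middle link fits only one bit per slot.
# Plain routing must pick b1 OR b2 for that link; coding sends
# b1 XOR b2, and both sinks recover both bits in a single round.

b1, b2 = 1, 0

t1_side = b1            # sink T1 hears b1 on its dedicated side path
t2_side = b2            # sink T2 hears b2 on its dedicated side path
coded = b1 ^ b2         # the bottleneck link carries this to both sinks

assert (t1_side, t1_side ^ coded) == (b1, b2)   # T1 recovers both bits
assert (t2_side ^ coded, t2_side) == (b1, b2)   # T2 recovers both bits
```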

Re:I must be missing something. (1)

maop (309499) | more than 7 years ago | (#19218713)

Did I miss something?

The original submitter missed something. The illustration in the summary has nothing to do with the example in the article. The illustration is a completely separate example illustrating multicasting using network coding.

The example in the article has 6 nodes (A,B,...,F) and 7 links (1,2,...,7). You can reconstruct the topology based on the textual information in the article. The links are 1:(A,C) 2:(A,E) 3:(B,E) 4:(B,D) 5:(E,F) 6:(F,C) 7:(F,D).
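For anyone following along, the topology reconstructed above is small enough to check mechanically:

```python
# The six-node, seven-link topology reconstructed in the parent
# comment, turned into an adjacency map from the numbered links.
from collections import defaultdict

links = {1: ("A", "C"), 2: ("A", "E"), 3: ("B", "E"),
         4: ("B", "D"), 5: ("E", "F"), 6: ("F", "C"), 7: ("F", "D")}

adj = defaultdict(set)
for a, b in links.values():
    adj[a].add(b)
    adj[b].add(a)

# E merges the two sources and feeds the shared E-F link;
# F fans the coded stream back out to both receivers.
assert adj["E"] == {"A", "B", "F"}
assert adj["F"] == {"E", "C", "D"}
```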

Leave it to the DoD (1)

markov_chain (202465) | more than 7 years ago | (#19215751)

to design communication systems that can route around nuclear attacks.

Re:Leave it to the DoD (0)

Anonymous Coward | more than 7 years ago | (#19215949)

Shoot, anyone can design it so it can route AROUND a nuclear attack. It takes a real man to design it to route THRU the nuclear attack.

I'm all for it... (0)

camperdave (969942) | more than 7 years ago | (#19215785)

Hey, as long as I get my torrents faster, I'm all for it.

3 years too late (1)

hardtofindanick (1105361) | more than 7 years ago | (#19215789)

Re:3 years too late (1)

shadowmatter (734276) | more than 7 years ago | (#19216103)

Avalanche was about applying network coding to an overlay network; this research is about applying network coding at the network core itself. So while they share the same technology, expect fundamentally different challenges.

- sm

Re:3 years too late (0)

Anonymous Coward | more than 7 years ago | (#19216685)

You mean 30 years too late. This all started with Reed-Solomon codes. And in any event, Avalanche is clearly a rip-off of Tornado codes, which were both used for filesharing and are also really just a better knock-off of traditional Reed-Solomon coding algorithms/equations.

realistic receiver connectivity and bandwidth (2, Interesting)

dltaylor (7510) | more than 7 years ago | (#19215805)

In the article, I need to transmit all three messages to all receivers, and the transmitter/intermediate node link has been fabricated as an obvious bottleneck. How often, in a real network, are the receivers going to have that much connectivity and bandwidth? Also, where is the knowledge that all receiver links are up? The loss of any receiver link means that it will get NO messages, since no complete message is routed over any of its links. If I add enough parity data to allow reconstruction of partial messages, I've just consumed a significant fraction of the supposedly free bandwidth and imposed a higher processing cost at the receiver, which is likely to be a lower-performance device.

Great way to create battlefield targets, though. Take out any one intermediate link and lots of the troops at the receiver end are cut off.

Re:realistic receiver connectivity and bandwidth (2, Insightful)

johndoe42 (179131) | more than 7 years ago | (#19215909)

Not at all. This idea is cooler than it sounds in the article.

Rather than using the silly parity schemes as mentioned in the article, you'd use a long code (i.e. a code that could extend beyond a single transmission). Then, if you lose part of the transmission, you could request enough additional symbols to reconstruct the message. Any decent (i.e. maximum distance separable) erasure code has the property that the amount of data you need does not exceed the size of the message, even if you lose a subset of it that is unknown to the sender. These codes exist -- Reed-Solomon is an example. (As a stupid analogy, you could send a bunch of data by mailing a wad of RAID-6 hard drives (although you'd use a code that could survive more than two erasures) using different carriers, and then, when some get lost, you just mail more of the RAID array.)

I imagine that the current research is on even better codes that wouldn't require as much feedback to work well.

(Note: I am not a network coding theorist, but I have some knowledge of information theory and non-network coding.)
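A toy (n=3, k=2) erasure code in the spirit of the parent's RAID-mailing analogy -- two data symbols plus their XOR, where any two of the three pieces reconstruct the pair. (A real system would use Reed-Solomon or similar to survive more erasures.)

```python
# Toy (n=3, k=2) MDS-style erasure code: two data symbols plus their
# XOR parity. Any two surviving pieces reconstruct the original pair,
# no matter which piece was lost.

def encode(d0, d1):
    return {0: d0, 1: d1, 2: d0 ^ d1}   # piece index -> symbol

def decode(pieces):
    if 0 in pieces and 1 in pieces:
        return pieces[0], pieces[1]
    if 0 in pieces:                       # lost d1: parity rebuilds it
        return pieces[0], pieces[0] ^ pieces[2]
    return pieces[1] ^ pieces[2], pieces[1]   # lost d0

msg = (0xCA, 0xFE)
shares = encode(*msg)
for lost in range(3):
    kept = {i: s for i, s in shares.items() if i != lost}
    assert decode(kept) == msg
```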

Re:realistic receiver connectivity and bandwidth (1)

mrogers (85392) | more than 7 years ago | (#19220715)

I imagine that the current research is on even better codes that wouldn't require as much feedback to work well.

Yup, rateless codes (e.g. LT, Raptor and online codes) produce an endless stream of output symbols, any sufficiently large subset of which can be used to reconstruct the input.
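A rough sketch of that rateless idea (LT-flavoured): the encoder emits an endless stream of random XOR combinations, and the decoder keeps collecting symbols until peeling resolves every block. A real LT code would draw degrees from a soliton distribution rather than uniformly, as here.

```python
# LT-flavoured rateless sketch: an endless stream of random XOR
# combinations of the input blocks; the decoder peels symbols that
# reduce to degree one. Degrees are uniform here for brevity -- a real
# LT code would use a (robust) soliton degree distribution.
import random

def lt_stream(blocks, rng):
    """Yield an endless stream of (index set, XOR value) symbols."""
    n = len(blocks)
    while True:
        idxs = set(rng.sample(range(n), rng.randint(1, n)))
        val = 0
        for i in idxs:
            val ^= blocks[i]
        yield idxs, val

def lt_decode(n, symbols):
    """Peel until all n blocks are known or no symbol makes progress."""
    known = {}
    pending = [[set(idxs), val] for idxs, val in symbols]
    changed = True
    while changed and len(known) < n:
        changed = False
        for sym in pending:
            idxs, val = sym
            for i in list(idxs & known.keys()):   # substitute knowns
                val ^= known[i]
                idxs.discard(i)
            sym[1] = val
            if len(idxs) == 1 and (i := next(iter(idxs))) not in known:
                known[i] = val                     # a degree-1 symbol
                changed = True
    return [known.get(i) for i in range(n)]

rng = random.Random(7)
blocks = [0x6E, 0x65, 0x74]               # the payload, three blocks
stream = lt_stream(blocks, rng)

symbols, decoded = [], [None] * 3
while None in decoded:                    # pull symbols until decodable
    symbols.append(next(stream))
    decoded = lt_decode(3, symbols)
assert decoded == blocks
```

The "sufficiently large subset" property shows up in the loop: it doesn't matter which symbols arrive, only that enough of them do.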

Re:realistic receiver connectivity and bandwidth (1)

packetmon (977047) | more than 7 years ago | (#19215979)

Not to mention anything encrypted via AH+ESP is actually useless. Oh and tunneling? How often would they drop at the expense of one router failing. There is no mention either about trust factors. Would these nodes+receivers be configurable to accept certifications, preshared keys, MD5 (or better) checksums. This has gotten to be the biggest *cough* rip off of stating "Hey I invented P2P!... But its for routers! And uh, its connectionless so don't expect error control or recovery! Isn't that cool!"

Re:realistic receiver connectivity and bandwidth (1)

Zombywuf (1064778) | more than 7 years ago | (#19226313)

Er, they seem to be talking about a replacement for IP. You know IP right? It's connectionless.


Security (0)

Anonymous Coward | more than 7 years ago | (#19215817)

Wouldn't this method mean that both D and C could potentially decode each other's messages? If everyone is sent the same clues, then you should be able to, right?

Could be really beneficial to multi-cast networks like they were saying.

Man in the Middle Attacks (3, Interesting)

jihadist (1088389) | more than 7 years ago | (#19215821)

The problem with metadata, as Microsoft found out [slashdot.org] (note: not an MS-bashing post, just an example), is that by making information about information a higher source of "truth" than the information itself, you create an opportunity for any who control the metadata source (mass media, religion, network controllers) to re-define what is true.

From that, they can stage any number of man-in-the-middle attacks -- the least potent but most widespread of which is convincing a clueless electorate^W userbase that they are certain sources of acclaimed truth, and manipulating them for their own narrow or evil ends.

Philosophers write about this in inspiring 5-page screeds like "On Truth and Lies in a Non-Moral Sense" [anus.com] by Friedrich Nietzsche. That theory in itself is the foundation of postmodernism and some more dangerous philosophies as well. Could it be that philosophy applies to computer science?

Re:Man in the Middle Attacks (1)

theraptor05 (908452) | more than 7 years ago | (#19216363)

Yes, because computer science relies heavily on logic, and logic is itself a philosophy (as well as the basis for most higher level philosophies).

Re:Man in the Middle Attacks (1)

jamesh (87723) | more than 7 years ago | (#19217357)

One has to wonder if a link to 'anus.com' is SFW...

Re:Man in the Middle Attacks (1)

LarsG (31008) | more than 7 years ago | (#19218211)

- This is /.
- The topic is MitM attacks
- Someone posts a link to anus.com

Any decent content filter would mark that one as suspect.

Re:Man in the Middle Attacks (0)

Anonymous Coward | more than 7 years ago | (#19219831)

Would not an equal only mirrored method also be to convince the electorate that certain sources, perhaps highly correlated with those you disagree with, are acclaimed sources of untruth? If someone listens to all messages except those of your opponents, by definition they are listening to yours.

Re:Man in the Middle Attacks (1)

Vo1t (1079521) | more than 7 years ago | (#19220257)

BTW: the site where Nietzsche's document is hosted has a self-explanatory domain name. They are nihilists indeed.

Or you know... (1)

Cadallin (863437) | more than 7 years ago | (#19215859)

They could just start dropping 2 out of every 3 packets. That would give lots more capacity!

Re:Or you know... (4, Funny)

the_greywolf (311406) | more than 7 years ago | (#19216121)

Comcast already does this.

It's called "cable Internet service."

This is for multicast (4, Informative)

Animats (122034) | more than 7 years ago | (#19215871)

Note that this is a distribution network for multicasting. Nothing wrong with that, but it doesn't do anything for ordinary point to point Internet communications. The cable TV people might find it useful, though.

Also wireless (1)

shadow_slicer (607649) | more than 7 years ago | (#19216415)

This is also a hot topic in wireless networks. In multi-hop wireless networks it can enable nodes to forward many packets in a single transmission. A neat paper about this is here [mit.edu] .

The coding people are going crazy about this too. There were a couple of papers showing that network coding is a simple extension of linear block codes.

Insecure routing why not... (0, Troll)

packetmon (977047) | more than 7 years ago | (#19215905)

Firstly, this might work for P2P, DHCP, home based (l)users, but it would never be functional in a real world business network. For one, lets take into consideration security. How would this network carry IPSec tunnel information. Those packet headers need to stay in tact not come from ranDumb address. Not only that, they're introducing n+r number of failures where n = number of nodes and r = number of receivers. Secondly sequencing... Would be a nightmare. How would each node know sequencing. What happens if one fails, the sender would have to resend to ALL routers since there is no mention of a mechanism to detect which sequence went where in this topology. Finally... Anything that has to do with governments and routers leads me to remember AT&T and the NSA's taps... First of all, I don't want/need anyone managing my traffic nor would I want to configure this nightmare. It reeks worse than IS-IS + OSPF + MOSPF + MCAST combined on steroids... (My CCIE R&S/Security lab [infiltrated.net] )

Re:Insecure routing why not... (1)

SpaceLifeForm (228190) | more than 7 years ago | (#19217649)

Parent should not be moderated 'troll'.

SSL would not work for example because it could not be trusted.

It would only be usable for Multicast where you don't care about the integrity of the data (ex: IPTV).

In... network coding, routers are replaced by coders, which transmit evidence about messages instead of sending the messages themselves. When receivers collect the evidence, they deduce the original information from the assembled clues.

It's a solution in search of a problem. We already have Multicast, and those that want to do IPTV should be doing it over IPv6 (over fibre) as it is efficient.

"deduce the original information..." (3, Funny)

NewbieV (568310) | more than 7 years ago | (#19215931)

... "from the assembled clues"?

Congratulations. You've just hardwired a rumor mill. Everyone knows how fast those things travel.

Re:"deduce the original information..." (1)

LarsG (31008) | more than 7 years ago | (#19218299)

I was thinking more in the direction of CSI - WAN

Umm most traffic is unicast (1, Informative)

Anonymous Coward | more than 7 years ago | (#19215947)

It's a pity that 99% of the internet's traffic is unicast, not multicast, so this won't actually help. Plus, this gets computationally hard really quickly.

Re:Umm most traffic is unicast (3, Interesting)

Spy Hunter (317220) | more than 7 years ago | (#19216197)

Actually, most Internet traffic (BitTorrent) is multicast mapped onto unicast in a terribly inefficient manner. We are forced to use BitTorrent because ISPs refuse to implement multicast (which makes it hardly surprising that there is no multicast traffic on the Internet).

Re:Umm most traffic is unicast (1)

Compholio (770966) | more than 7 years ago | (#19216337)

Switch to someone that provides IPv6 (or use a 4-to-6 connection), IPv6 mandates support for multicast.

Re:Umm most traffic is unicast (5, Funny)

glwtta (532858) | more than 7 years ago | (#19216555)

Switch to someone that provides IPv6 (or use a 4-to-6 connection), IPv6 mandates support for multicast.

Hmm, that's like telling someone who's complaining that horses are hard to find to get a unicorn instead.

IPv6 multicast (1)

thegameiam (671961) | more than 7 years ago | (#19222501)

Just to be clear, "mandates support" means neither "is implemented" nor "works."

Besides, if you're using a tunneling method, you'll by definition be creating a unicast stream to the v6 endpoint, which is probably topologically further away than the actual source of the content. Now, if you were able to buy a native IPv6 connection from some provider which didn't rely on IPv4 somewhere, you might be in luck. Then again, it would probably be easier to get most providers to implement IPv4 multicast than native IPv6 + v6 multicast...

Re:Umm most traffic is unicast (1)

aprilsound (412645) | more than 7 years ago | (#19224937)

We are forced to use BitTorrent because ISPs refuse to implement multicast
If by "implement" you mean "turn on", you are correct. Virtually all networking hardware still in use will have multicast support built in. The ISPs just don't want to turn it on because they don't know how to make it fit their existing billing model. Think about it, right now, they accept one packet from a peer, they know they're only going to have to deliver at most one packet to another peer. With a (typical) multicast packet, they could have to deliver thousands of packets, but there isn't an efficient way to determine that in a border router.

Revisiting IP Multicast [intel-research.net] provides an alternate multicast protocol whereby the source precomputes the AS-level distribution tree and encodes it in the packet, which makes determining the fanout (and thus the cost to the ISP) simple. If multicast ever gets enabled, it will look more like this than any of the original proposals (e.g. Deering's DVMRP or a rendezvous based method).

Already 'sort of' happens? (1)

Tribbin (565963) | more than 7 years ago | (#19216075)

Don't routers already 'efficiently divide' traffic amongst the intermediate nodes, if necessary, by monitoring used/free bandwidth? Isn't the effect the same?

I might be wrong; I just thought this already sort of happens.

Re:Already 'sort of' happens? (1)

zolf13 (941799) | more than 7 years ago | (#19217915)

Using link load as a routing metric is like shooting yourself in the foot.

Isn't comprehensible to me (0)

macraig (621737) | more than 7 years ago | (#19216077)

In spite of being a Mensa-smart guy, I couldn't comprehend exactly how this proposed process would work. I'm blaming it on a needlessly inept description by the author. The best I could visualize was something resembling a parity-bit reconstruction process.

I wish someone could translate this into a more comprehensible form.

Re:Isn't comprehensible to me (5, Interesting)

BridgeBum (11413) | more than 7 years ago | (#19216469)

Think of solving simultaneous equations. Something like:

2x + y - z = 1
x - y + z = 2
x + 2y - 3z = -4

Okay, now instead of x, y & z being numbers, imagine they are blocks of data. Bits. And instead of using addition and subtraction, make your operator XOR.

Next step, pretend that your original message is the variables strung together in order, xyz.

Easy as 1,2,3.
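In Python, with short byte strings standing in for the blocks (toy packet contents of my own choosing, obviously not from TFA), it might look like:

```python
# XOR plays the role of +/- in the equations above; byte strings play
# the role of x, y, z.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(a, b))

x, y, z = b"ATTACKAT", b"DAWN!!!!", b"OVERHILL"

# Three independent "equations" a coder might emit
# (coefficients are 0/1, so each equation is an XOR of a subset):
e1 = xor(x, y)          # x + y
e2 = xor(y, z)          # y + z
e3 = xor(xor(x, y), z)  # x + y + z

# The receiver back-substitutes, exactly like solving the system:
z2 = xor(e1, e3)        # (x+y) + (x+y+z) = z
y2 = xor(e2, z2)        # (y+z) + z       = y
x2 = xor(e1, y2)        # (x+y) + y       = x

assert (x2, y2, z2) == (x, y, z)
```

Note the three combinations have to be linearly independent for the solve to work, same as with ordinary simultaneous equations.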

Re:Isn't comprehensible to me (1)

sgtrock (191182) | more than 7 years ago | (#19221527)

Now watch your latency go up by orders of magnitude. :(

Don't get me wrong. I think that what the research team has come up with is remarkable. I just don't see it as an efficient use of resources. Demonstrate that the CPU cycles to do all this math won't impact throughput and latency, and I'll be the first to applaud them. Otherwise, it's just a nifty recovery trick that covers such a small set of corner cases that it's not worth engineering for.

Re:Isn't comprehensible to me (1)

BridgeBum (11413) | more than 7 years ago | (#19222323)

There certainly will be a small CPU latency hit on the server side for encode/decode. The number of raw packets that are being transmitted should be approximately equal, so no additional network latency.

Having said that, it certainly doesn't seem to me that the math involved here (Gaussian elimination) would be very processor-intensive, certainly compared to other common protocols such as TLS/SSL. There are SSL acceleration cards; I could see the Gaussian elimination being done in hardware as well.

20 Questions (4, Funny)

BinarySkies (920189) | more than 7 years ago | (#19216297)

Point A: (sends a picture of a ladybug)
Point B: 'Okay... is it an animal, vegetable, or mineral?'
Coder: 'It's an animal'
Point B: 'Is it... red?'

Sounds efficient!

Re:20 Questions (1)

grumling (94709) | more than 7 years ago | (#19217119)

It was Ms. Scarlett in the conservatory with the candlestick.

Re:20 Questions (1)

multipartmixed (163409) | more than 7 years ago | (#19222233)

OMG! Porny!

Re:20 Questions (1)

aj50 (789101) | more than 7 years ago | (#19225621)

We joked about using something similar in our Software Engineering project.

It was a flight booking system. People always complain that computers are hard to use so we'll link it to something they understand in the real world. Pretty much everyone knows the concept of higher or lower games so we considered a system which would tell you an available flight and you could decide to take it or ask for one earlier or later.

Routing protocols (1)

mrsteveman1 (1010381) | more than 7 years ago | (#19216453)

"route the packets from point A and point B, thus making some hop in the sequence critical for delivering the message"

In a well-designed network, nothing is dependent on one hop, no matter where it is. We call this a multi-homed network, and most ISPs utilize more than one path to a given area of the net.

Another thing to consider is that link-state routing protocols are capable of quickly seeing that a route is down, picking another route from the topology table, and inserting that route into the routing table; in most cases this is very, very quick. Inside an ISP's network this would be OSPF; between ISPs or other providers this would be BGP. Both are capable of managing outages as long as a backup route has been provisioned, and in the case of a large ISP, you can be sure it has.

So to sum things up: a given network node in an ISP typically has more than one connection to areas covering certain IP ranges. If one of the available paths goes down, the routing protocol in use will quickly see that the link is down and start routing over the second link.

I fail to see why people think something needs to change, and this article sounds like they are trying to fix something that isn't actually broken.

Re:Routing protocols (1)

darkpixel2k (623900) | more than 7 years ago | (#19217729)

"route the packets from point A and point B, thus making some hop in the sequence critical for delivering the message"

In a well-designed network, nothing is dependent on one hop, no matter where it is. We call this a multi-homed network, and most ISPs utilize more than one path to a given area of the net.


I seem to remember this coming up a few years back. ISP A has connections to backbone provider B using various paths C, D, E, and F--which while seeming redundant actually all pass through point G which is directly in the path of a backhoe blade...

Re:Routing protocols (1)

Comen (321331) | more than 7 years ago | (#19218271)

If all your connections go through G and a backhoe cuts that fiber, then you're down; I don't care how you code anything. The original poster seems right to me: you have to manage physical links no matter what.
Every area of a properly engineered network is multi-homed via at least 2 physical paths.
In our COs, each physical path comes into a separate fiber patch panel, leaves the CO by a different route, and should be riding on different sides of the street, usually in different directions.
Sometimes things are not this optimal, but they really should be; this is the most important kind of redundancy.
On MPLS networks, almost every area has tunnels and backup tunnels to every other area of the network. Backup tunnels are already provisioned so that failover happens in sub-second times; you won't notice a link going down.
Most higher-end routers don't really do a route lookup anymore either; they mostly fast-switch packets based on preloaded hardware tables, usually on each interface card, so I don't know how packets are going to get there any faster either.
People get too focused on virtual links; when it comes down to it, 1 physical path = bad, 2 or more = good, as long as you have set things up correctly and the physical paths do not share the same equipment along the way.
There has been a lot of bad press on how non-redundant the Internet is, but that is usually bad engineering coupled with some organization's need to save money; I don't see it as a technical issue anymore.
Maybe I am wrong.

Re:Routing protocols (1)

maop (309499) | more than 7 years ago | (#19218845)

The point is that network coding utilizes network capacity more fully, and it is just as reliable as, or more reliable than, traditional network routing.

Re:Routing protocols (0)

Anonymous Coward | more than 7 years ago | (#19219405)

BGP is not a link state protocol. It is a path vector protocol.

The algorithm looks good, but there are holes. (4, Informative)

postbigbang (761081) | more than 7 years ago | (#19216525)

Consider using the described method instead of routers, which use the ISO/OSI stack. The message uses an inference, in some ways like how striping works in a RAID 5 drive failure; the failed drive can be inferred from data on the surviving striped drives. Not a big deal. Chunks/nibbles of the messages have a low occupancy time within a cloud, but many receivers must evaluate many messages-- not all are intended for them and must be discarded or validated (imagine spoofing in such a concept as described in the diagrams!).

But in the message diagram, there's a talker and a receiver that gets messages through inference. The cloud between the two becomes unbelievably saturated with diffuse messages-- the same reason that ATM looks good on paper with its 53-byte cells, but in reality can't deal with traffic jams. This is why MPLS and other deterministic routing methods were invented-- to qualify transmission routes, not just shoot them like a shotgun into a cloud, hoping that the intended receiver gets the message and deciphers it through receive-side heuristics.

This has been done before, and it didn't work. I'd love to hear comments on why this should succeed.
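For reference, the gain being claimed rests on the textbook two-source "butterfly" topology, where a single bottleneck link carries the XOR of both packets instead of queueing them one after the other. A toy sketch (packet names and contents are mine, not from the diagrams):

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(a, b))

a, b = b"packet A", b"packet B"

# Source 1 multicasts a, source 2 multicasts b.  The single middle
# link can carry only one packet per time slot, so the coder on it
# sends a^b rather than a and then b.
bottleneck = xor(a, b)

# Each sink hears one original over its side link plus the coded
# packet, and recovers the other original by XOR:
sink1 = (a, xor(bottleneck, a))   # gets a directly, derives b
sink2 = (xor(bottleneck, b), b)   # gets b directly, derives a

assert sink1 == (a, b)
assert sink2 == (a, b)
```

Both sinks receive both packets with one use of the bottleneck, where plain store-and-forward routing would need two; whether that gain survives spoofing and real traffic patterns is exactly the open question.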

Mod parent up! (2, Informative)

khasim (1285) | more than 7 years ago | (#19217435)

Exactly. Their diagram is pretty and has unused resources in the overly simplified "before" case ... but it does NOT scale when you have multiple senders and not every receiver wants every message.

So each point in their diagram will either have to be aware of every other point (you think routing tables are bad right now) or they have to send MASSIVE amounts of extra data.

Just from their diagram, they went from 3 lines to 6 lines at the first router. Then they fed those into 6 routers each with TEN lines.

That is a total of 7 routers and 66 lines to deliver 3 messages to 20 recipients (60 transmissions). Or, 10% more than would be needed if you just ran 1 each.

Now add 2 more senders and 20 more recipients. But 5 of those only want 1 message, 10 want both the new messages and the other 5 want all 5 of the messages from all five of the senders.

You end up with congestion just from the packets telling the receivers which streams are merged and which streams they need to unlock them.

And the machines that only want 1 message have to decode 5 messages to get it.

Network coding (4, Informative)

gardarh (1105395) | more than 7 years ago | (#19216617)

Network coding is far from a brand-new idea. It was introduced in a paper by Ahlswede et al. http://pdos.csail.mit.edu/decouto/papers/ahlswede00.pdf [mit.edu] published in 2000 and has ever since been a very popular research topic in the networking world (http://www.ifp.uiuc.edu/~koetter/NWC/index.html [uiuc.edu] ). These "clues" are linear equations; the actual packets can be retrieved by applying Gaussian elimination to the equations. Its most obvious applications are in multicasting, where utilization of network links can indeed be increased from an information-theoretic perspective. The tradeoff is increased CPU load on the intermediate and end nodes. The research is still some way from the practical stage, but it has promise. As for "99% of internet traffic being unicast": even though that might be true, one needs to think outside the box. If multicasting becomes much cheaper than it is now (multicasting today is in most cases basically multiple unicasting), broadcasting TV over packet networks might become much more efficient, and this proportion could change. Finally, don't forget that the research is still in its early stages. I believe it will be years, even decades (if ever), until we see any of this in practice. Maybe it will turn out to be useless for computer networks, who knows. Even so, the basic principle might still prove useful for other applications, such as routing inside solid-state chips.
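A toy sketch of that linear-equation machinery, assuming random 0/1 coefficients over GF(2) and Gauss-Jordan elimination at the receiver (my own simplification; practical schemes typically work over larger fields like GF(2^8)):

```python
import random

def xor_bytes(a, b):
    return bytes(p ^ q for p, q in zip(a, b))

def encode(packets):
    """Emit one random linear combination over GF(2): (coeffs, payload)."""
    coeffs = [random.randint(0, 1) for _ in packets]
    if not any(coeffs):                      # skip the useless all-zero row
        coeffs[random.randrange(len(packets))] = 1
    payload = bytes(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            payload = xor_bytes(payload, p)
    return coeffs, payload

def decode(coded, k):
    """Gauss-Jordan elimination over GF(2); None until rank k is reached."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    for col in range(k):
        pivot = next((i for i in range(col, len(rows)) if rows[i][0][col]), None)
        if pivot is None:
            return None                      # not enough independent combos yet
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                rows[i][0][:] = [a ^ b for a, b in zip(rows[i][0], rows[col][0])]
                rows[i][1][:] = xor_bytes(rows[i][1], rows[col][1])
    return [bytes(rows[i][1]) for i in range(k)]

packets = [b"ALPHA!!!", b"BRAVO!!!", b"CHARLIE!"]
coded, recovered = [], None
while recovered is None:                     # collect until rank is full
    coded.append(encode(packets))
    recovered = decode(coded, len(packets))
assert recovered == packets
```

The receiver doesn't care which coded packets it got or in what order; any k independent combinations suffice, which is the property that makes the approach attractive for multicast.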

Re:Network coding (1)

Jah-Wren Ryel (80510) | more than 7 years ago | (#19217863)

A simple form of it is already in use today with great results.

It's Parchive [wikipedia.org], and it is crucial for the transmission of large data files over Usenet. Usenet is essentially a multicast network, like the one in the article, and the development of Parchive error-correction files made it significantly more efficient, bringing the need for retransmits down to nearly zero at the slight cost of 5-10% more data transmitted than would be necessary in a perfectly lossless network.
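A toy single-parity sketch of the general idea (real Parchive uses Reed-Solomon codes and can survive more than one lost block; this hypothetical XOR version tolerates exactly one):

```python
from functools import reduce

def parity(blocks):
    """XOR equal-length blocks column-wise into one recovery block."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

blocks = [b"part one", b"part two", b"part thr"]
p = parity(blocks)                  # posted alongside the data blocks

# Usenet eats block 1; XORing everything that DID arrive restores it:
recovered = parity([blocks[0], blocks[2], p])
assert recovered == b"part two"
```

The downloader never asks for a retransmit; any one missing block falls out of the parity block and the survivors.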

Re:Network coding (0)

Anonymous Coward | more than 7 years ago | (#19218331)

Parchive is an example of standard forward error correction, not network coding. In network coding the receiver does not necessarily receive the message being transmitted. Instead it receives some number of functions of the message, which it can then use to calculate the actual message. It's as if the only thing received was parity bits (no systematic bits), possibly from different sources, and from those you calculate the message.

Re:Network coding (1)

Jah-Wren Ryel (80510) | more than 7 years ago | (#19221855)

It's as if the only thing received was parity bits (no systematic bits), possibly from different sources, and from those you calculate the message.
Which Parchive is quite capable of doing; it's just that no one sends only parity files, because the overhead to recalculate the original message is quite high in that case.

Nothing to see here folks (3, Interesting)

Anonymous Coward | more than 7 years ago | (#19216691)

First of all, this particular algorithm is really only suited for multicasting the same data to many nodes at the same time. It does not help at all with point-to-point unicasting, which is what the majority of Internet traffic looks like. Secondly, the only real trick here is that you can XOR the data together to transmit more information. So if you have equal length messages A, B, and C, instead of transmitting all three to any particular node, you only transmit XORs of some subset of them (A^B and B^C, or A^C and B^C, etc.) and from those XOR'd versions you can then extract the original 3 messages. Magic! Transmit the equivalent of 2 messages but get 3 messages worth of data! Not groundbreaking; not revolutionary; it's actually a pretty old trick. I've thought for a long time about what real world situation this might be practical for, and have come up with pretty much nothing. It's cute, though.

Re:Nothing to see here folks (1)

ezrec (29765) | more than 7 years ago | (#19216981)

XOR would be a bad choice.

Assume you have A^B, B^C, and A^C

A^B ^ B^C = A^C

A^B ^ A^C = B^C

B^C ^ A^C = A^B

No matter what you do, without *one original message*, you cannot reconstruct the originals. (Now, this is cool for things like one time pad encryption, but it won't work for network coding).

However, if you *do* have an original message, they all fall out (assume A^B, B^C, and you have, say, 'A')

A^B ^ A = B (that's one!)

B^C ^ B = C (that's the other!)

So to reconstruct all three messages ('A', 'B', and 'C') we need three messages - two XORed, and one original.
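Spelling that out in Python (toy byte strings assumed):

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

A, B, C = b"11111111", b"22222222", b"33333333"
ab, bc, ac = xor(A, B), xor(B, C), xor(A, C)

# The three pairwise XORs are linearly dependent: any one of them is
# just the XOR of the other two, so together they carry only two
# packets' worth of information.
assert xor(ab, bc) == ac

# With one original in hand, though, everything falls out:
assert xor(ab, A) == B
assert xor(bc, B) == C
```

So the grandparent's "send two XORs, get three messages" only works when something systematic rides along; there's no free lunch in the counting.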

Re:Nothing to see here folks (0)

Anonymous Coward | more than 7 years ago | (#19219347)

Yep. Otherwise you'd just have invented perfect compression (reduce any message to 2/3 its size!) Which breaks reality.

Essentially Erasure Channels (1)

xquark (649804) | more than 7 years ago | (#19216903)

Essentially what they are using are erasure codes, such as tornado codes and digital fountains. The problem with these codes is that even though they approach Shannon's limit, they are not nice towards other forms of data transmission such as TCP (i.e., they are bandwidth-hungry).

By using such codes, the round-trip overheads evident in protocols such as TCP are eliminated.

These codes are mainly used for massive multicast and stateless lossy transmissions.

But using these codes doesn't mean you get rid of routers; it just means you get rid of the need to persist connections across multiple routers.

Essentially, most comms would be over something similar to UDP. But one must remember that there will always be an inherent need for reliable comms, since most of these schemes require an initial exchange of information; in the case of digital fountains, a seed and a distribution scheme must be communicated to the end user.

Such technology would be great for wireless connections.

Far-reaching implications (1)

Kanel (1105463) | more than 7 years ago | (#19218181)

If this works, it could have several implications that are not immediately visible:

When the data locally, between two nodes, looks like gibberish, does that make it harder to charge for traffic based on its content? Like how a provider may in the future, with IPv6, charge you more per megabyte for, say, streaming video than for web surfing. Unlike IPv6 encryption, even the sender and receiver identities would be obfuscated.

If network coding ends at my ISP, it could still charge me. Websites could also charge the ISP if network coding is used merely to transmit data, not store it, as the article also hints.
But network coding would end at my ISP simply because there's only one cable between it and me. With a wireless connection, network coding may offer a great advantage if the user is allowed to connect to several WLAN transmitters simultaneously. Not only would this put a spanner in the works for charging-by-content, it may also make our habit of subscribing to a single ISP seem archaic!

After all, it's only fair that end users should enjoy the same level of redundancy the Internet was originally built with :-)

The rise of the stupid network ... (1)

neutrino38 (1037806) | more than 7 years ago | (#19218745)

The concept is pretty interesting, although the article is disappointing (and illustrates the principle using another lame Slashdot-style car analogy).

However, even if we could design a workable technology from this idea, I would expect huge resistance from router vendors, and also from some Internet designers at the IETF.

Embedding such an advanced function within the network would violate the dogma that the network must be kept stupid, with most functions supported by the terminals.

Of course, the Internet is not the whole network, and we could imagine such a technology developing on military and banking networks. This would mean recognising, de facto, that the Internet is not fit for all usages and will never be the Network of networks, but one network among others.

isn't this just a trade-off? (1)

TheGratefulNet (143330) | more than 7 years ago | (#19221155)

It's the usual tradeoff of squashing data so that it takes up less space (and moves faster), but at the expense of compute complexity on the send and receive sides?

I do NOT want end nodes to have to work harder than they do today, and routers already do their thing very well. Adding MORE complexity to save line bandwidth seems silly to me unless you are still dial-up bound.

Missing diagrams (1)

MECC (8478) | more than 7 years ago | (#19221455)

The hypothetical six-node digital network depicted in the box on these two pages can help clarify those options.
I looked at the diagram link in the submission for the above-referenced diagram (the one correlating to the example in the sentences that follow), and didn't find it. Anyone else encounter that problem?

Rerouting? (1)

gggggggg (862650) | more than 7 years ago | (#19224351)

...you surely mean Repiping.

Same as ECIP (1)

John Sokol (109591) | more than 7 years ago | (#19225951)

With my ECIP protocol, we were doing that for video distribution in 1996 and 1997 with the largest network of adult video.
  www.ecip.com

I called it server-based routing, where a cluster of servers would keep tabs on each other's status and network communication quality:
  how much latency and loss there was between each node.
  Then, when a message was to be sent, they could try direct, or go around the blockage by reflecting packets off a series of servers.
  Packets would also be split up across multiple paths.

  Specifically, we had 3 T1 lines at our source site. The end was a user on a PC; we had 40 servers in 40 co-los on as varied and different backbones as possible.

  ECIP would carry the video from the main site, with live entertainment, to the 40 servers.
  End users would get directed via HTML to the server they had the best connection with, and then livecam video www.livecamserver.com
would be sent to them.

  This was also used for several live video streaming events with Arthur C. Clarke from Sri Lanka, and ECIP
  was able to get video out over a very bad and overcongested 64K line that provided all Internet to Sri Lanka in 1997.
  http://www.dnull.com/~sokol/clarke.html [dnull.com]