Slashdot: News for Nerds

How MIT and Caltech's Coding Breakthrough Could Accelerate Mobile Network Speeds

samzenpus posted about 2 months ago | from the greased-lightning dept.

Cellphones 129

colinneagle (2544914) writes "What if you could transmit data without link layer flow control bogging down throughput with retransmission requests, and also optimize the size of the transmission for network efficiency and application latency constraints? In a Network World post, blogger Steve Patterson breaks down a recent breakthrough in stateless transmission using Random Linear Network Coding, or RLNC, which led to a joint venture between researchers at MIT, Caltech, and the University of Aalborg in Denmark called Code On Technologies.

The RLNC-encoded transmission improved video quality because packet loss in the RLNC case did not require the retransmission of lost packets. The RLNC-encoded video downloaded five times faster than the native video stream, and it streamed fast enough to be rendered without interruption.

In over-simplified terms, each RLNC encoded packet sent is encoded using the immediately earlier sequenced packet and randomly generated coefficients, using a linear algebra function. The combined packet length is no longer than either of the two packets from which it is composed. When a packet is lost, the missing packet can be mathematically derived from a later-sequenced packet that includes earlier-sequenced packets and the coefficients used to encode the packet."
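In code, the idea the summary is gesturing at looks roughly like this. Below is a toy sketch, not Code On's implementation: packets in a small "generation" are combined with coefficients over GF(2), so a linear combination is just an XOR of a subset of packets (the paper's "sparse binary" case), and the receiver solves the resulting linear system by Gaussian elimination. The coefficient vectors are fixed here for reproducibility; real RLNC draws them at random per coded packet.

```python
def xor_bytes(a, b):
    """XOR two equal-length byte strings (addition over GF(2))."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets, coeff_vectors):
    """Each coded packet is a GF(2) linear combination (XOR) of the
    source packets selected by its coefficient vector; the vector
    travels with the payload, and the payload is no longer than the
    packets it combines."""
    coded = []
    for coeffs in coeff_vectors:
        payload = bytes(len(packets[0]))
        for c, p in zip(coeffs, packets):
            if c:
                payload = xor_bytes(payload, p)
        coded.append((list(coeffs), payload))
    return coded

def decode(received, k):
    """Gaussian elimination over GF(2): any k linearly independent
    coded packets recover the k originals, regardless of which
    packets were lost in transit."""
    rows = [(list(c), p) for c, p in received]
    for col in range(k):
        pivot = next(i for i in range(col, len(rows)) if rows[i][0][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[col][0])],
                           xor_bytes(rows[i][1], rows[col][1]))
    return [rows[i][1] for i in range(k)]

# Three source packets, four coded packets = one packet of overhead.
packets = [b"pkt-one!", b"pkt-two!", b"pkt-tre!"]
vectors = [[1, 0, 0], [0, 1, 1], [1, 1, 0], [1, 1, 1]]
coded = encode(packets, vectors)
assert decode(coded[1:], 3) == packets   # survives losing the first packet
```

Here, dropping the first coded packet still leaves three linearly independent combinations, enough to solve for the three originals with no retransmission request; with coefficients drawn at random from a large enough field, any three survivors are independent with high probability.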


129 comments

obviously (-1)

Anonymous Coward | about 2 months ago | (#47126453)

duh

shifts the reliance (1)

Anonymous Coward | about 2 months ago | (#47126469)

and what if the coefficients get lost or corrupted?

Re: shifts the reliance (1)

Anonymous Coward | about 2 months ago | (#47126553)

Either the entire packet arrives or it's lost.

Re:shifts the reliance (0)

Anonymous Coward | about 2 months ago | (#47126565)

and what if the coefficients get lost or corrupted?

How? The packet before or after will have the same coefficients and presumably some portion of the previous and next packet. Or, did your head explode trying to read the last paragraph of the summary like mine almost did?

Re:shifts the reliance (0)

Anonymous Coward | about 2 months ago | (#47126629)

" The packet before or after will have the same coefficients"
All the summary (and the article, for that matter) says is that it's encoded using randomly generated coefficients. There's nothing about these staying the same from one packet to the next, and nothing about how or when these coefficients are transmitted either. Are these coefficients sent with the packet?
Reading through again (for about the 5th time), it looks like it uses the coefficients from the later packet (which makes more sense), until of course you lose two packets in a row and cannot reconstruct anything; your transmission ceases, and since there is no acknowledgement needed (and I'm guessing none sent), you'll start losing packets completely, rather than just having to wait for them to be resent.

Re:shifts the reliance (1)

Immerman (2627577) | about 2 months ago | (#47128207)

Acknowledgment is conceptually a separate thing from retransmission requests. If the normal transmission stream is such that dropped packets can be detected, then the only return communication necessary is the occasional "please repeat packet X." Of course, you probably still want *some* acknowledgment to avoid the scenario where someone watches only the first 5 seconds of video before shutting off their computer while the server keeps streaming video for the next hour to a now non-existent client, but it need not be with every packet. An occasional "I'm still here, keep it coming" would be fine.

Re:shifts the reliance (1)

globaljustin (574257) | about 2 months ago | (#47126669)

I wondered why a system would use it...if I understand it correctly...

it's a new encoding that requires software on either end for TCP, that's in TFA...

Can we update the title please? (3, Interesting)

hamster_nz (656572) | about 2 months ago | (#47126485)

"A better coding for data error correction and redundancy than Reed-Solomon" - this is News for Nerds after all.

And why the "oooh - flappy birds on my phone might be faster" slant? I want a faster SAN!

Re:Can we update the title please? (4, Insightful)

Ajay Anand (2838487) | about 2 months ago | (#47126705)

Reed-Solomon is theoretically best. I hope this new encoding is practically better than Reed-Solomon.

Re:Can we update the title please? (1)

AK Marc (707885) | about 2 months ago | (#47126893)

How is R-S theoretically best, when it's been beaten by almost everything else in wireless FEC?

I saw this and was looking for the tech details. It looks like they are putting in FEC at the application layer. It's been done before. They are just hiding it better to not have to explain why 1/8th of the data is padding (or whatever ratio they are using).

Re:Can we update the title please? (1)

Anonymous Coward | about 2 months ago | (#47127027)

Probably from a misleading name. For example, Golay codes are perfect codes. Despite being perfect, there are better choices.

Re:Can we update the title please? (1)

AK Marc (707885) | about 2 months ago | (#47127121)

And it may be different applications. R-S may be "perfect" for the type of coding it does, but it isn't purely FEC, so it's both perfect, and beaten by lots of codes for wireless FEC. Or it's perfect for "random" drops/noise, but noise/drops aren't random. So many others can beat it because "perfect" is a mathematical truth, but doesn't work for the real world.

Re:Can we update the title please? (3, Informative)

Zironic (1112127) | about 2 months ago | (#47127213)

Because R-S assumes uniformly random error distribution which is usually not the case when it comes to wireless interference.

Re:Can we update the title please? (1)

TapeCutter (624760) | about 2 months ago | (#47127337)

You win the internet - quote TFA - "tests show it could deliver dramatic potential gains in many use cases" (my emph.)

Re:Can we update the title please? (0)

Anonymous Coward | about 2 months ago | (#47127669)

You win the internet - quote TFA - "tests show it could deliver dramatic potential gains in many use cases" (my emph.)

now that he's won the internet, maybe he can ask the nsa to get off of it.

Re:Can we update the title please? (1)

delt0r (999393) | about 2 months ago | (#47128185)

That's why we use turbo codes, or at least interleavers. The problem with RS codes is that decoding them optimally requires quite a bit of hardware/CPU cycles.

Re:Can we update the title please? (0)

Anonymous Coward | about 2 months ago | (#47127281)

Reed-Solomon is theoretically best. I hope this new encoding is practically better than Reed-Solomon.

Reed-Solomon is good for a stream. If packet-loss is more likely than bit-errors or bursts then it is less ideal.

Re:Can we update the title please? (0)

Anonymous Coward | about 2 months ago | (#47128227)

Well, no.

The best error-correction code (ECC) depends on your needs, and channel type and conditions.
But clearly, modern ECC like LDPC, Turbo and even Polar codes are very good as well.

e.g.
- DVB-S2: LDPC + RS
- LTE: Turbo
- WiMAX: Turbo or LDPC
- 10G Ethernet: LDPC
- Magnetic storage: mostly RS atm, companies are looking at LDPC and Polar
- Quantum Key Distribution: Polar

How about instead (0)

Anonymous Coward | about 2 months ago | (#47126491)

Code on 45

Ay? Ay?

you mean to say.. (-1)

Anonymous Coward | about 2 months ago | (#47126517)

It wasn't already this way?

Nothing new under the sun, just new uses (3, Insightful)

preaction (1526109) | about 2 months ago | (#47126525)

Sounds like Parchive from usenet, which is a really good idea in a lossy environment now that I think about it.

Re:Nothing new under the sun, just new uses (4, Informative)

NoNonAlphaCharsHere (2201864) | about 2 months ago | (#47126555)

Yup, sounds a lot like par2, except this system works inline, in a streaming context to repair mangled blocks/packets, where par2 uses out-of-band data (the par2 files) to do the repairs after all the data is transmitted.

Re:Nothing new under the sun, just new uses (5, Informative)

TubeSteak (669689) | about 2 months ago | (#47126657)

And like par2, it's going to require a healthy amount of processing from your CPU.

The trend toward higher-performance multicore processors and parallel operations everywhere in the network and on mobile devices lends itself to an encoding scheme using linear algebra and matrix equations that might not have been feasible in the past.

Notice they talk about multicore processors and not some hardware decoding embedded in the networking chip.
From their published paper

Abstract -- Random Linear Network Coding (RLNC) provides a theoretically efficient method for coding. The drawbacks associated with it are the complexity of the decoding and the overhead resulting from the encoding vector. Increasing the field size and generation size presents a fundamental trade-off between packet-based throughput and operational overhead. On the one hand, decreasing the probability of transmitting redundant packets is beneficial for throughput and, consequently, reduces transmission energy. On the other hand, the decoding complexity and amount of header overhead increase with field size and generation length, leading to higher energy consumption. Therefore, the optimal trade-off is system and topology dependent, as it depends on the cost in energy of performing coding operations versus transmitting data. We show that moderate field sizes are the correct choice when trade-offs are considered. The results show that sparse binary codes perform the best, unless the generation size is very low.

Processing power is going to be an issue in mobile devices which have the most to gain from this innovation.

Re: Nothing new under the sun, just new uses (0)

Anonymous Coward | about 2 months ago | (#47126905)

If this is successful it will be implemented in hardware.

Re: Nothing new under the sun, just new uses (1)

TapeCutter (624760) | about 2 months ago | (#47127397)

It probably will be if it turns out to be useful, but I doubt processing power will be a significant factor in the idea's commercial success or failure. The desktop video card is the supercomputer of a decade ago, albeit without the memory, disk space and air-conditioning bill. My $150 card maxes out at a teraflop. There are demos on the net of 3D modelling (with sound) running on a cell phone using Nvidia's APX2500 chip, android + java. Developers can develop stuff on their desktop video card and drop it straight into an android phone built around the chip.

Re:Nothing new under the sun, just new uses (1)

AK Marc (707885) | about 2 months ago | (#47126903)

If you named all the files the same, then the repairs are done in-band. All that's "new" here is calling all the packets *.PAR, rather than some RAR and some PAR. You need one extra block per block lost (though you can have multiple blocks per packet and other tricks to hide the actual redundancy).

what the FEC... (3, Funny)

bugs2squash (1132591) | about 2 months ago | (#47126531)

they've invented forward error correction, this could enable data communications and audio CDs.

Re:what the FEC... (2)

nyet (19118) | about 2 months ago | (#47126653)

FEC generally does not seek to recover lost data, only the proper state of flipped bits.

Re:what the FEC... (1)

Guspaz (556486) | about 2 months ago | (#47126689)

There's no difference between the two (what's the difference between a 1 flipped to a 0, or a 1 that was lost and so interpreted as a 0?), and FEC is frequently (typically?) used to recover lost data. Interleaving is often used such that losing an entire transmitted packet results only in losing small parts of multiple actual packets, which FEC can then compensate for.
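The interleaving trick can be sketched in a few lines (a toy byte-level interleaver; real systems interleave at the symbol level inside the radio framing):

```python
def interleave(packets):
    """Transmit 'columns': output packet i carries byte i of every
    source packet, so losing one transmitted packet costs each source
    packet only one byte, which per-packet FEC can then repair."""
    return [bytes(p[i] for p in packets) for i in range(len(packets[0]))]

def deinterleave(columns, n_packets):
    """Invert the interleaver at the receiver."""
    return [bytes(col[j] for col in columns) for j in range(n_packets)]

packets = [b"aaaa", b"bbbb", b"cccc"]        # three 4-byte source packets
cols = interleave(packets)                   # four packets on the wire
assert cols == [b"abc", b"abc", b"abc", b"abc"]
assert deinterleave(cols, 3) == packets
```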

Re:what the FEC... (1)

nyet (19118) | about 2 months ago | (#47126763)

FEC algorithms do not treat an unknown number of lost bits as a stream of 0 bits, since they can't know how many bits are lost.

Re:what the FEC... (1)

Guspaz (556486) | about 2 months ago | (#47127997)

They don't need to. The underlying layer knows how many bits are missing based on the timing.

Re:what the FEC... (1)

YoopDaDum (1998474) | about 2 months ago | (#47127271)

FEC is commonly used in streaming over IP applications to support lost packets, see for example the "above IP" FEC used in LTE multicast (eMBMS) or DVB-H, Raptor [wikipedia.org] . In those applications the L2 transport has its own error detection scheme, so IP packets will typically be either fully lost or ok (in other words, the probability of having a corrupt packet is very low).

Re:what the FEC... (2)

TapeCutter (624760) | about 2 months ago | (#47127449)

Wow, were you aiming for humour? Because it would be sad if you were serious.

What people are saying is they have implemented FEC at the packet level. I don't know much about wireless but I do remember the maths lectures I attended 25yrs ago on error correction techniques. At the time I could perform the FEC algorithm by hand, I consider myself "mathematically inclined" and have been awarded bits of paper attesting to that inclination but to this day I still have just enough understanding of the mathematical concepts of error correction to marvel at the geniuses who worked it all out in the first place.

Packet FEC has been tried before and found to be wanting, but I would not be as quick as some to dismiss a strong claim by MIT just because others have failed in the past.

Re:what the FEC... (1)

timeOday (582209) | about 2 months ago | (#47126793)

Do all of these ultimately result in a corrected transmission? How about a transmission that degrades gracefully instead, so you don't have to re-transmit OR send redundant information that isn't even used unless packets are lost. Each packet would contain only a little low-frequency information, since most of the bits are high-frequency information that will not be missed as much. Maybe your smartphone could receive 4k video broadcasts and just discard 80% of the packets before even decoding them. I'm sure there are encodings that use this technique, so what is it called?

Right. (4, Insightful)

Animats (122034) | about 2 months ago | (#47127007)

This is yet another form of forward error correction. The claim is that it doesn't take any extra data to add forward error correction. That seems doubtful, since it would violate Shannon's coding theorem.

This was tested against 3% random packet loss. If you have statistically well behaved packet loss due to noise, FEC works great. If you have bursty packet loss due to congestion, not so great.

Re:what the FEC... (1)

complete loony (663508) | about 2 months ago | (#47127049)

What if you could transmit data without link layer flow control bogging down throughput with retransmission requests

TFS makes it look like network coding can magically send data without any form of ACK & Retransmit. Network coding still requires feedback from a flow control protocol. You need to tweak the percentage of extra packets to send based on the number of useful / useless packets arriving, and you still need to control the overall rate of transmitted packets to avoid congestion. The goal is to make sure that every packet that arrives is useful for decoding the stream, regardless of which packets are lost. So yes, it's a kind of automatically adapting error correction protocol for a stream service.

Re:what the FEC... (1)

m00sh (2538182) | about 2 months ago | (#47127115)

they've invented forward error correction, this could enable data communications and audio CDs.

I think they combined FEC with TCP.

TCP sends a packet and then waits for an ACK. However, they eliminated the ACK part and just added extra parity to the packets. So even when a packet is lost, it can be reconstructed.

Of course there is a probability that multiple packets are lost so that it cannot be reconstructed. Of course, there is more redundant data in the stream instead of the small ACK packets. The packet loss characteristics of the channel probably makes this approach more efficient.
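A minimal sketch of that idea (one XOR parity packet per group, akin to the 7/8 FEC ratio mentioned elsewhere in the thread; this is generic packet-level FEC, not the actual RLNC scheme):

```python
def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(group):
    """Append one parity packet, the XOR of the group: 7 data + 1
    parity on the wire, so any single loss per group is recoverable
    without a retransmission or an ACK round-trip."""
    parity = group[0]
    for p in group[1:]:
        parity = xor_bytes(parity, p)
    return group + [parity]

def recover_missing(received):
    """If exactly one packet of the group is missing, the XOR of
    everything that did arrive equals the missing packet."""
    out = received[0]
    for p in received[1:]:
        out = xor_bytes(out, p)
    return out

group = [bytes([i]) * 4 for i in range(7)]   # seven 4-byte packets
sent = add_parity(group)                     # eight packets on the wire
received = sent[:2] + sent[3:]               # packet 2 lost in transit
assert recover_missing(received) == group[2]
```

Lose two packets in the same group, though, and nothing can be recovered; that is the trade-off the rest of the thread is arguing about.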

Re:what the FEC... (1)

wagnerrp (1305589) | about 2 months ago | (#47127823)

The ACK is crucial. Even if you pipeline your packets so you do not have to wait for each ACK to return, they are still vital to measuring network capacity. Too many people trying to send too much data will eventually overwhelm the transmit buffer at some congestion point, and loss will shoot past the ability of your error correction to compensate.

Re:what the FEC... (1)

Impy the Impiuos Imp (442658) | about 2 months ago | (#47127925)

So this is basically an efficient redundancy transmission scheme that out-performs (in spite of using extra data) retransmission.

At a 3% artificially-induced error rate. What about real-world conditions? Wouldn't this clog a network more, not less?

in simplified terms, it's forward error correction (1)

YesIAmAScript (886271) | about 2 months ago | (#47126547)

And why do they use TCP if they are trying to avoid retransmissions due to lost/corrupt packets?

This seems to say that it's mostly trying to avoid link-layer retransmission, not transport-layer. So somehow I need to figure out all the links my transmission is traversing and disable link-layer retransmission on all of them?

Re:in simplified terms, it's forward error correct (1)

globaljustin (574257) | about 2 months ago | (#47126659)

yeah TFA says it's link-layer flow control:

What if you could transmit data without link layer flow control bogging down throughput with retransmission requests, and also optimize the size of the transmission for network efficiency and application latency constraints? Researchers from MIT, Caltech and the University of Aalborg claimed to have accomplished this...

it can "piggyback" on TCP-IP but you have to use their "software" on either end

RLNC encoding can ride on top of the TCP-IP protocol, so implementation does not require the replacement of communications equipment. But it does require software incorporating RLNC-licensed technology to execute on both ends.

I'm not sure where they want this to be used; "mobile", WiFi, and LTE are mentioned, but I don't see this as anything other than a demonstration of a capability, not an application.

I come from a CCNA background but I've never worked with or seen something like this... seems like they would use it in data centers?

Re:in simplified terms, it's forward error correct (2)

AK Marc (707885) | about 2 months ago | (#47127001)

I think they are lying.

Many of the details fail under closer examination. It doesn't "use" TCP-IP. It could use a proprietary IP stack that is TCP/IP compatible. So it'll use IP addresses and port numbers like a TCP packet would so that switches and routers in the middle wouldn't know or care what it's doing. If they put it on an IPX/SPX core it'd fail to route across the Internet. But it isn't TCP. It doesn't use a TCP compliant stack. It just looks like one to the outside world. It will not re-transmit a lost packet via TCP mechanisms. But it'll work on the same hardware on both ends and software in the middle. You just have to replace the network stack on both ends, which isn't hard. Though they indicate that the only thing it needs is "software" on both ends. Maybe they'll be doing it over actual TCP/IP. That, and the way they keep saying TCP-IP, that includes UDP. They don't say TCP, or UDP. And I don't trust them when they keep saying TCP-IP, it should be a slash, not a dash. So is it running over TCP/IP (which could mean UDP)? Or does it run over TCP (which excludes UDP)?

Re:in simplified terms, it's forward error correct (1)

tlambert (566799) | about 2 months ago | (#47126731)

And why do they use TCP if they are trying to avoid retransmissions due to lost/corrupt packets?

This seems to say that it's mostly trying to avoid link-layer retransmission, not transport-layer. So somehow I need to figure out all the links my transmission is traversing and disable link-layer retransmission on all of them?

I believe the issue is that you can't sell it to the cable companies and the DSL providers that implement PPPoE in order to track your surfing, to make sure you are buying your television programming from them rather than file sharing; they can also intentionally make things like Tor not work. Not that PMTUD works on those links unless the modem proxies the ICMP messages (which are usually blocked by the cable companies), unless you explicitly ifconfig down to 1492 yourself or enable active probing for black holes (RFC 4821).

sounds like an ad for the future fast lane (5, Funny)

CaptainStumpy (1132145) | about 2 months ago | (#47126549)

Xfinity video in your face 4650% faster! Xfinity introduces the RLNC fast lane data transmission! It's like an over-caffeinated jaguar solving linear matrices while orbiting the earth in the space shuttle and doing coke. RAAAWRR! Don't like the jaguar? Tough floating jaguar shit, you don't have a choice! We own teh tubes! ©omcastic!

Re:sounds like an ad for the future fast lane (-1)

Anonymous Coward | about 2 months ago | (#47126603)

With net neutrality, the incentive for trying such ideas will be nonexistent.

Re:sounds like an ad for the future fast lane (1)

blue trane (110704) | about 2 months ago | (#47126641)

Engineers want to try it because it's a cool idea. Biz, focused on profit, doesn't. This is why AT&T rejected Kleinrock and others who came to them with ideas of the internet. The market delayed implementation out of a short-sighted, blinkered concentration on profit as opposed to the general welfare.

Re:sounds like an ad for the future fast lane (1)

rasmusbr (2186518) | about 2 months ago | (#47126943)

With net neutrality, the incentive for trying such ideas will be nonexistent.

As long as there is a significant bottleneck somewhere in the last mile or so (typically 4G/LTE or crowded WiFi) the incentive for efficient use of the available bandwidth will remain huge, regardless of politics.

The thing that might hamper development is if one company manages to get a monopoly on the creative content so that everyone has to use their service regardless of how shitty it performs.

Re:sounds like an ad for the future fast lane (1)

AK Marc (707885) | about 2 months ago | (#47127023)

No, it's the other way around. You get "pay" priority performance in the "free" queue under FCC net brutality.

GAAA! (1)

Anonymous Coward | about 2 months ago | (#47126551)

In over-simplified terms, each RLNC encoded packet sent is encoded using the immediately earlier sequenced packet and randomly generated coefficients, using a linear algebra function. The combined packet length is no longer than either of the two packets from which it is composed. When a packet is lost, the missing packet can be mathematically derived from a later-sequenced packet that includes earlier-sequenced packets and the coefficients used to encode the packet.

If that's "over-simplified" I am not sure I want to try to read the paper. That paragraph alone gave me a headache. Maybe it's the grapevine and the paper hurts less to read. Meh, tomorrow.

Re:GAAA! (1)

AK Marc (707885) | about 2 months ago | (#47127043)

"Application level FEC" Is that easier? They try to make it sound cooler, or harder, but it's been done before.

Word (5, Funny)

Dan East (318230) | about 2 months ago | (#47126567)

"the immediately earlier sequenced packet". There's a word for that. It's called "previous". As in "the previous packet".

Re:Word (1)

Anonymous Coward | about 2 months ago | (#47126655)

Head east, Dan. Head! East!

Nothing says the previous packet has the previous sequence number. Packets may arrive in any order, or not arrive at all (the point here). The stack makes it all work at the ends. This is all a layman should need to understand, and as you have shown, all they can be expected to understand. It was so wored for a reason, and if it's lost on the layman, that's all right.

Re:Word (1)

Your.Master (1088569) | about 2 months ago | (#47126749)

Nothing says the previous packet has the previous sequence number

The word "previous" says the previous packet has the previous sequence number. I that reasonable people skilled in the art would generally assume previous meant sequentially previous, not time-of-receipt previous.

However, I would say that nothing says the "immediately earlier" packet has the previous sequence number.

We could get pedantic and talk about sequentially previous and chronologically previous, further clarifying that chronologically previous is from the point of view of the receiver (since in the sender's point of view, the sequence and chronology are likely identical since we don't usually have to re-transmit lost packets).

As a result, I think "immediately earlier sequenced packet" is ambiguous. I could read "sequenced packet" as a compound noun, and thus it means the chronologically previous packet. Or I could read "immediately earlier sequenced" to mean previous, as in sequentially previous, as the GP did. From context, I think the latter is more likely, because what relevance could chronology possibly have? But that makes "immediately previous" an exceptionally poor choice of words, and thus I disagree that:

it was so worded [sic] for a reason

Re:Word (1)

Paradise Pete (33184) | about 2 months ago | (#47126963)

I that reasonable people skilled in the art would generally assume previous meant sequentially previous, not time-of-receipt previous.

Sure, just like readers are able to reconstruct the missing second word in your post. But the word "previous" has some ambiguity. The original wording is more precise and specific.

Re:Word (0)

Anonymous Coward | about 2 months ago | (#47126809)

'Previous' implies temporal sequencing

Re:Word (2)

Wraithlyn (133796) | about 2 months ago | (#47126887)

...so does "earlier"

Re:Word (1)

Paradise Pete (33184) | about 2 months ago | (#47126971)

..so does "earlier"

Thus the additional qualifiers "immediately" and "sequenced".

Ass Burgers (-1)

Anonymous Coward | about 2 months ago | (#47127839)

how's that aspergers working out for you?

Re:Word (1)

AK Marc (707885) | about 2 months ago | (#47127055)

Maybe they are trying to cover the case where they come out of order, then there's a drop?

But I agree in principle: the whole thing was poorly worded, like it was written by a writer who doesn't understand it, dictated to by an engineer who was ordered not to be specific about it.

Re:Word (1)

c (8461) | about 2 months ago | (#47127615)

There's a word for that. It's called "previous". As in "the previous packet".

When you're talking about packet protocols, where things get lost, duplicated, or reordered, "previous" is an incredibly imprecise word.

Neat (0)

Anonymous Coward | about 2 months ago | (#47126599)

Sounds like a principle similar to http://en.wikipedia.org/wiki/Parchive

I like.

Re:Neat (0)

Anonymous Coward | about 2 months ago | (#47127857)

Sounds like some people are too stupid, fat or lazy to type <a href="http://en.wikipedia.org/wiki/Parchive">Parchive</a> huh?

Re:Neat (1)

wagnerrp (1305589) | about 2 months ago | (#47127883)

Only in the most fundamental sense that they are both forms of error correction.

Network Coding (1)

Boycott BMG (1147385) | about 2 months ago | (#47126609)

I remember working on a project where network coding was proposed for microsatellite cluster communications. If I remember correctly, network coding requires that all the nodes in the network have complete knowledge of the state of the network at any given transmission window. This requires transmission of the network state, which used something like 7% overhead. The routing of a message from one end of the cluster to the other was difficult; I believe it might have been an NP-complete problem. Have they solved the routing issues?

Zmodem (0)

Anonymous Coward | about 2 months ago | (#47126635)

Resume.

Better than FEC (0)

Anonymous Coward | about 2 months ago | (#47126685)

It seems the coding sequence does some compression. Forward error correction uses *extra* bits to allow bit errors to be corrected. The redundant data means that the TV station I'm watching on the other monitor is streaming data into my TV tuner card at 32 Mb/s, while the effective data rate is 19.39 Mb/s (so 12.61 Mb/s is redundant data). They mentioned (at least twice) that the packet length for this scheme is exactly the same. So if it's like FEC, it's damned efficient.

Re:Better than FEC (1)

AK Marc (707885) | about 2 months ago | (#47127079)

There are 7/8 FEC codes (meaning 1/8 spent on FEC), and if you code the FEC into the stream, then you'll hide the "loss". They look to be hiding the loss, so people don't complain about the "waste."

Good Improvement (1)

Davidlogann2 (3670697) | about 2 months ago | (#47126691)

If RLNC is five times faster than the native stream, does that mean the entire transmission network may go on a roll?

More research needed (1)

ketomax (2859503) | about 2 months ago | (#47126703)

How MIT and Caltech's Coding Breakthrough Could Accelerate Mobile Network Speeds

I wish they also made some breakthrough to increase the data caps we are stuck with.

Re:More research needed (1)

citizenr (871508) | about 2 months ago | (#47127003)

caps are a Layer 8 problem

random? (1)

superwiz (655733) | about 2 months ago | (#47126737)

The random element seems to be there to avoid having to come up with an optimal distribution of information. Otherwise, it seems like a frame-level (or even finer) RAID.

Re:random? (1)

AK Marc (707885) | about 2 months ago | (#47127087)

Go look at Silver Peak. They do a WAN optimization that uses variable FEC to increase FEC as packet loss increases, to minimize waste as conditions change.

Packet loss models? (2)

antifoidulus (807088) | about 2 months ago | (#47126767)

TFA doesn't seem to state what their assumptions were on how packets get lost and how many packet losses the algorithm can deal with, and what their distribution is. There are a lot of ways you can drop k packets out of n packets sent.

If you assume that every packet has a k/n chance of being lost, then being able to reconstruct a single missing packet could be incredibly useful. However, cell phone packet losses tend to be incredibly bursty, i.e. they will have very high throughput for a while, but then all of a sudden (maybe you changed towers or went under a bridge, etc.) lose a whole ton of packets. Can this algorithm deal with bursty losses? I wish TFA was a bit more clear on that.

Re:Packet loss models? (1)

complete loony (663508) | about 2 months ago | (#47127101)

(I haven't read this paper yet, but I've read other Network Coding data and experimented with the idea myself)

With TCP, when a packet is lost you have to retransmit it. You could simply duplicate all packets, like using RAID1 for HDD redundancy. But this obviously wastes bandwidth.

Network coding combines packets in a more intelligent way. More like RAID5 / RAID6 with continuous adaptation based on network conditions. Any packet that arrives may allow you to deduce information that you haven't seen before. Basically each packet is the result of an equation like; f(p1, p2, p3) = a*p1 + b*p2 + c*p3. When each packet arrives, you attempt to solve the set of simultaneous equations you have received. When you have reduced each expression to just a single packet, you send it up the protocol stack.

You still need a TCP-like acknowledgement scheme so that you can: rate-limit the flow of packets based on measured congestion, tweak the percentage of redundant packets being sent based on measured packet loss, and advance the stream to include new data.

If you get the network coding parameters wrong, the connection might still stall, or you might send too much redundant data. If everything is going well, a link with 10% packet loss just means your stream transfers about 10% slower.
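That solve-as-you-go decoding can be sketched end to end. The toy below uses GF(2) coefficients (plain XOR) to keep the arithmetic readable; practical RLNC implementations usually work over GF(2^8), and everything here is illustrative rather than any shipping codec:

```python
import random

def rlnc_encode(sources, n_coded, rng):
    """Emit n_coded random linear combinations of the source packets over GF(2)."""
    k, size = len(sources), len(sources[0])
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        if not any(coeffs):
            coeffs[rng.randrange(k)] = 1  # avoid the useless all-zero combination
        payload = bytearray(size)
        for c, pkt in zip(coeffs, sources):
            if c:
                payload = bytearray(a ^ b for a, b in zip(payload, pkt))
        coded.append((coeffs, bytes(payload)))
    return coded

def rlnc_decode(coded, k):
    """Online Gaussian elimination over GF(2); returns the k sources, or None."""
    pivots = {}  # pivot column -> (coefficient row, payload)
    for coeffs, payload in coded:
        coeffs, payload = list(coeffs), bytearray(payload)
        for col in range(k):
            if not coeffs[col]:
                continue
            if col in pivots:            # reduce against the existing pivot row
                pc, pp = pivots[col]
                coeffs = [a ^ b for a, b in zip(coeffs, pc)]
                payload = bytearray(a ^ b for a, b in zip(payload, pp))
            else:                        # innovative packet: becomes a new pivot
                pivots[col] = (coeffs, payload)
                break
    if len(pivots) < k:
        return None                      # rank deficient: need more packets
    for col in range(k - 1, -1, -1):     # back-substitution down to unit rows
        pc, pp = pivots[col]
        for col2 in range(col):
            c2, p2 = pivots[col2]
            if c2[col]:
                pivots[col2] = ([a ^ b for a, b in zip(c2, pc)],
                                bytearray(a ^ b for a, b in zip(p2, pp)))
    return [bytes(pivots[i][1]) for i in range(k)]
```

Any k linearly independent coded packets recover the originals, no matter which specific transmissions were lost along the way; that is exactly why no individual packet needs retransmitting.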

Re:Packet loss models? (0)

Anonymous Coward | about 2 months ago | (#47127629)

This sounds like RAID for network data streams, with some fancy encoding to, I assume, compress the data (otherwise why not just send everything twice?). So what if you /can't/ reconstruct a lost packet because a subsequent packet is also lost? When you use something like UDP, you build recovery mechanisms into your app (if you care). Will anyone who uses RLNC do that?

Fountain/Raptor codes? (0)

Anonymous Coward | about 2 months ago | (#47126807)

Isn't this just raptor encoding? This idea has been around for a long time, I'd like to hear how this specific instance differs from what came before.

http://en.wikipedia.org/wiki/Raptor_code

Re: Fountain/Raptor codes? (0)

Anonymous Coward | about 2 months ago | (#47126871)

It is like Raptor codes, but with higher decoding complexity. Instead of carefully choosing the coefficients, they draw them at random. The good thing is that they are covered by different patents.

A better explanation (1)

grahamsz (150076) | about 2 months ago | (#47126849)

The story linked to seems to have an awful explanation of what's going on. This makes a lot more sense:

http://www.codeontechnologies.... [codeontechnologies.com]

Reminds me a little of a random project I started back in college where I'd transmit a file in a bunch of packets, each containing the original file modulo a specified prime number. That way, if the file was split into 10,000 packets, the transmitter could send out a constant stream of packets modulo different primes, and as soon as the receiver got any 10,000 of them they could use the extended Euclidean algorithm to reconstruct the original file.

I was hoping we'd someday be able to multicast UDP over the net to multiple random locations, and this would be a fast way to send files.
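A toy version of that residue scheme, shrunk to a few bytes and three small primes so that any two packets suffice (all values here are illustrative):

```python
def ext_gcd(a, b):
    """Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y == g."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def crt(residues, moduli):
    """Reconstruct the unique value mod prod(moduli) from its residues."""
    M = 1
    for m in moduli:
        M *= m
    total = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        _, inv, _ = ext_gcd(Mi, m)  # Mi * inv == 1 (mod m)
        total += r * Mi * inv
    return total % M

data = int.from_bytes(b"hi", "big")        # the "file" as one big integer
primes = [251, 241, 239]
packets = [(p, data % p) for p in primes]  # each packet: (modulus, residue)

# Any two packets are enough here, since 241 * 239 > data;
# simulate losing the first packet:
survivors = packets[1:]
recovered = crt([r for _, r in survivors], [p for p, _ in survivors])
```

Dropping any single packet changes nothing for the receiver, as long as the product of the surviving moduli still exceeds the file's value.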

Re:A better explanation (1)

Ingenium13 (162116) | about 2 months ago | (#47127005)

It might be possible now with IPv6.

Re:A better explanation (1)

gnasher719 (869701) | about 2 months ago | (#47127009)

That link shows their website, where they tell us that if the sender is about to send packet 83, and figures out it didn't get an acknowledgement for packet 22, it then has to resend packets 22 to 82. Which seems an entirely stupid thing to do, and an obvious improvement would be to resend only the packets that are actually lost.

Guess what: According to the Wikipedia article about TCP, that's what the "selective acknowledgment" (SACK) option defined in RFC 2018 does. So _poof_ goes the benefit of this scheme, which was based on an incorrect representation of TCP/IP.

On the other hand, my TV receives a purely digital signal with no way to ask for retransmission of lost data and works just fine with it. There is a variant of H.264 specifically for streaming connections without retransmission, and that together with UDP instead of TCP/IP would solve the problem just fine. If it was a problem.

Really? (1)

Viol8 (599362) | about 2 months ago | (#47127495)

"That way, if the file was split into 10,000 packets, then the transmitter could send out a constant stream of packets module different primes and as soon as the receiver got any 10,000 of them they could use the extended euclidean algorithm to reconstruct the original file."

So if the receiver got 10K copies of the 1st packet and nothing else it could still reconstruct the file? Thats impressive. Which college was this, Hogworts?

wtf? (0)

Anonymous Coward | about 2 months ago | (#47126851)

More importantly, RLNC encoding can ride on top of the TCP-IP protocol, so implementation does not require the replacement of communications equipment.

What? Why would you do that? The whole point of RLNC from the article is that it's to avoid retransmissions due to packet loss. Why would you put that on top of a streaming protocol like TCP/IP that's got all the flow control and retry logic happening under the covers? Surely you'd run RLNC over UDP!

The RLNC-encoded video was downloaded five times faster than the native video stream time, and the RLNC-encoded video streamed fast enough to be rendered without interruption.

That's probably because you ran that test with RLNC over UDP where there was no flow control and retransmissions. UDP streaming without RLNC is probably just as fast.

Re:wtf? (1)

AK Marc (707885) | about 2 months ago | (#47127135)

Why would you put that on top of a streaming protocol like TCP/IP that's got all the flow control and retry logic happening under the covers? Surely you'd run RLNC over UDP!

UDP is a subset of TCP/IP. They worded it horribly, but it's likely not using TCP for flow control. Note the large number of odd statements, "TCP-IP" rather than TCP/IP. And "the immediately earlier sequenced packet" rather than "the previously transmitted packet" or "the packet with the previous sequence number" or something else that isn't unclear. I doubt they had an engineer write it, and the tech writer didn't understand what they were writing about.

Re:wtf? (1)

Anonymous Coward | about 2 months ago | (#47127267)

UDP is a subset of TCP/IP. They worded it horribly, but it's likely not using TCP for flow control. Note the large number of odd statements, "TCP-IP" rather than TCP/IP. And "the immediately earlier sequenced packet" rather than "the previously transmitted packet" or "the packet with the previous sequence number" or something else that isn't unclear. I doubt they had an engineer write it, and the tech writer didn't understand what they were writing about.

UDP is not a subset of TCP. It's just that no one says "UDP/IP". It's a connectionless datagram protocol and has its own IP protocol number.

halve the packets? (0)

Anonymous Coward | about 2 months ago | (#47126939)

Does this effectively negate the need for every other packet in a lossless environment?

Re:halve the packets? (1)

citizenr (871508) | about 2 months ago | (#47127015)

Yes, by sending packets twice the size.

PAR2 for packets? (0)

Anonymous Coward | about 2 months ago | (#47127085)

Seems plausible. I hope the patent reviewers are knowledgeable about prior art on this one.

Excellent! (1)

davidc (91400) | about 2 months ago | (#47127263)

This means my mobile internet speed might soon be up to 10 bps instead of the 2 bps I seem to get at the moment!

Re:Excellent! (1)

Anonymous Coward | about 2 months ago | (#47127367)

It means that your ISP can oversell their bandwidth even more.

I've seen this before (1)

statemachine (840641) | about 2 months ago | (#47127273)

Except it's called MPEG video. And it's used for TV.

MPEG also has a mode to recover the errors, but it's expensive, and when you're streaming, who cares? If your link sucks, you don't blame the stream.

Turbo codes (0)

Anonymous Coward | about 2 months ago | (#47127283)

And I barely understand those, and they go back to 1993...

Good press release / slashvertisement (1)

Anonymous Coward | about 2 months ago | (#47127289)

"Code On" has done well at generating some buzz.
Unfortunately, the only details on their website are of the type "this is awesome", with no description of how the breakthrough works.

http://www.codeontechnologies.com/technology/white-papers/

I just wish they would post actual details on the encoding method. How does it compare to Fountain Codes such as RFC 5053?

only send the last packet? (0)

Anonymous Coward | about 2 months ago | (#47127309)

so... if all the packets include the last packet and the current one... then all you need to send is the last packet and the rest can be inferred? i.e.

packet 256 contains packet 255 which contains 254 which contains 253?

been there, done that... (Sqore:1,000,000) (0)

Anonymous Coward | about 2 months ago | (#47127339)

It's called U.D.P. (The periods help you read it slowly).

Geez! Really, this is an improvement?!

Nucleus (1)

zawarski (1381571) | about 2 months ago | (#47127565)

The greatness of human accomplishment has always been measured by size. The bigger, the better. Until now. Nanotech. Smart cars. Small is the new big. In the coming months, Hooli will deliver Nucleus, the most sophisticated compression software platform the world has ever seen. Because if we can make your audio and video files smaller, we can make cancer smaller. And hunger. And AIDS.

That's the over-simplified version? (1)

wonkey_monkey (2592601) | about 2 months ago | (#47127603)

In over-simplified terms, each RLNC encoded packet sent is encoded using the immediately earlier sequenced packet and randomly generated coefficients, using a linear algebra function. The combined packet length is no longer than either of the two packets from which it is composed. When a packet is lost, the missing packet can be mathematically derived from a later-sequenced packet that includes earlier-sequenced packets and the coefficients used to encode the packet.

Uh... could you simplify it just a little more?

How does a "later-sequenced packet [...] include earlier-sequenced packets"?

huffman error correction was in fax machines (0)

Anonymous Coward | about 2 months ago | (#47127675)

So this really doesn't seem like a 'breakthrough'. It's just a new application of existing technologies. http://www.fileformat.info/mir... [fileformat.info]

Time saver (1)

PsyMan (2702529) | about 2 months ago | (#47127707)

Now I can free up 4 of the 5 minutes it used to take burning through my monthly bandwidth to do something constructive.

IANANE (0)

Anonymous Coward | about 2 months ago | (#47128091)

What if you could actually understand the first sentence of the summary?

Or even the last sentence?

parses like a teaspoon of sugar (2)

epine (68316) | about 2 months ago | (#47128439)

I've been parsing this kind of press release for a long, long time now. I can pretty much tell what we're dealing with by how hard it is to state the advantage of a new approach in narrow and precise language.

That this blurb doesn't even disclose the error model class (error correction is undefined without doing so) suggests that the main advantage of this codec lies more in the targeting of a particular loss model than a general advance in mathematical power.

Any error correction method looks good when fed data that exactly corresponds to the loss model over which it directly optimises.

The innovators of this might have something valuable, but they are clearly trying to construe it as more than it is. This suggests that there are other, equally viable ways to skin this particular cat.
