
UDP + Math = Fast File Transfers

michael posted more than 12 years ago | from the udp-is-painless dept.

Technology 449

Wolfger writes: "A new file transfer protocol talked about in EE Times gets data from one place to another without actually transmitting the data..." Well, the information is transmitted, just not in its original form. I think there are a couple of other places working on similar ideas - wasn't there a company using this for a fast file download application? User would go to download a game demo or something, receive pieces from several different places, and knit them together? Wish I could recall the company's name.

449 comments


eff peeeeeee (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#2698419)

word to the mother yo

Kazaa does that (2, Informative)

DOsinga (134115) | more than 12 years ago | (#2698422)

User would go to download a game demo or something, receive pieces from several different places, and knit them together?

The file-sharing networks based on FastTrack technology do that. You download a movie or game from different users at the same time. Kazaa stitches it back together.

Re:Kazaa does that (2)

zachhendershot (470923) | more than 12 years ago | (#2698432)

A program by the name of Download Accelerator does that also. It splits up the file from the same location and downloads different chunks at the same time. Worked pretty well in my limited experience.

Re:Kazaa does that (1)

Kingpin (40003) | more than 12 years ago | (#2698494)


To what purpose? I'd say that bandwidth is the limiting factor in 9 out of 10 cases, no? So unless the site you're downloading from has a bandwidth/connection policy, the accelerator doesn't help you a whole lot.

Re:Kazaa does that (2)

autocracy (192714) | more than 12 years ago | (#2698513)

Yes, but in many cases it's the sending end that bottlenecks. Major FTP sites have some serious bandwidth, but during the day it gets split pretty roughly. Get it from many sending ends, and suddenly you're pulling all you can handle.

Re:Kazaa does that (0)

Anonymous Coward | more than 12 years ago | (#2698555)

You've obviously never tried to download the latest Counter-Strike update on the day it was released then. :-) You can be sitting on a DS-3 but if the sites are overloaded you're still only going to get it at a couple of kilobytes a second. Stitch together 10 or 20 of these and it is reasonably fast.

getright (4, Informative)

awptic (211411) | more than 12 years ago | (#2698425)

The program michael is referring to is GetRight, which can connect to several known mirrors of a file and download separate fragments from each.

prozilla (0)

Anonymous Coward | more than 12 years ago | (#2698549)

prozilla [delrom.ro] does the same job under linux.

oh dear lord (-1, Troll)

Anonymous Coward | more than 12 years ago | (#2698426)

i hope i get a good post number. you know. like number one?

--diazepam

Re:oh dear lord (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#2698498)

why thats not a troll you stupid fucking moderator. thats a dumb post, BUT NOT A TROLL. do you guys even like oh i dunno have a fucking brain cell. this is why slashdot sucks. people like the one that modded that post a troll. have a nice day burning in hell, stupid fuck.

A good idea, but old (1)

Marx_Mrvelous (532372) | more than 12 years ago | (#2698427)

This is a good idea, and pretty natural. But it isn't anything new. There are many problems to overcome, not the least of which is managing all the TCP/IP connections and doing the decompression/assembly.

Of course, when a 1GHz CPU costs about $90, I guess we can afford CPU-heavy file transfers.

I NEED THIS! (-1, Offtopic)

skrowl (100307) | more than 12 years ago | (#2698429)

Anything that helps me get my porn faster is an excellent development!

And cheap, too! (5, Funny)

Tsar (536185) | more than 12 years ago | (#2698435)

The Transporter Fountain sits alongside a switch or router, and one Transporter Fountain is needed at the sending and receiving ends of a connection. Prices will range between $70,000 and $150,000.

Oh, boy, I'm gonna stop by CompUSA on the way home and grab one of these.

edonkey does the same i think (0)

Anonymous Coward | more than 12 years ago | (#2698437)

The eDonkey software does the same, I think, for downloading movies and other big files. It doesn't work perfectly, so could we perhaps spawn an open source project with an open protocol for this?

I believe it could be nice for apt-get Debian installs, or for distributing CD images of Linux software.

Vectors... (2)

CoolVibe (11466) | more than 12 years ago | (#2698442)

Those would transfer the fastest, since they don't consist of bitmapped data, but just the instructions to create the image.

I wonder what equations are used to convert raw, unpredictable streams of data into formulas, and how it is that the formulas used aren't bigger than the sent packets themselves. They mentioned XOR, but that just sounds silly, because XOR does nothing with data except apply a reversible operation that neither shrinks nor grows it.

Does anyone have more info? It does sound interesting though...

Re:Vectors... (1)

beff (135968) | more than 12 years ago | (#2698469)

ISTM that this technology would only work when there is very little information per byte in the original data. Text, for example, has a little more than one bit of information per byte. That is why compression functions work so well: they compress the actual information. How would this technology fare on efficiently compressed data, or on data that appears truly random (as well-encrypted data is supposed to appear)? All of the information theory I learned in college indicates that there is a minimum number of bits needed to represent any piece of information. Once compressed to that point, you can't go any further.
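
A quick way to make the "bits of information per byte" point concrete is to estimate the zeroth-order entropy of a byte stream. The little Python sketch below (illustrative only, not from the article) compares repetitive text with random bytes; note that this symbol-frequency estimate still overstates the true information content of English, which is where the "little more than one bit per byte" figure comes from.

    # Rough illustration: estimate bits of information per byte (zeroth-order
    # entropy) for repetitive text vs. random bytes. Illustrative sketch only.
    import math
    import os
    from collections import Counter

    def entropy_bits_per_byte(data: bytes) -> float:
        counts = Counter(data)
        n = len(data)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    text = b"the quick brown fox jumps over the lazy dog " * 200
    noise = os.urandom(len(text))

    print("text:   %.2f bits/byte" % entropy_bits_per_byte(text))   # well below 8
    print("random: %.2f bits/byte" % entropy_bits_per_byte(noise))  # close to 8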

Re:Vectors... (3, Insightful)

CoolVibe (11466) | more than 12 years ago | (#2698493)

> Theory that I learned in college indicates that there is a minimum number of bits that represent any information. Once compressed to that point, you can't go any further.

Exactly. This is also the point where an equation to represent the data is going to end up bigger than the data it's trying to send. But it depends on the algorithm used, too. If the data may be sent out of order, one could try block-sorting and then compressing (like bzip2 does), but since this is UDP, out-of-order packets will be dropped or simply not dealt with (I think).

DISCLAIMER: I am not a protocol god, nor am I trying to be. Just spouting my views :-)

Re:Vectors... (5, Informative)

hburch (98908) | more than 12 years ago | (#2698544)

Consider the following (almost certainly bad, but workable) scheme:
  • Convert a chunk of the file into an order-k polynomial (use the coefficients of the polynomial to encode the chunk)
  • Send the evaluation of the polynomial at several distinct locations (more than k+1).
  • Receiver gets at least k+1 packets.
  • Using math, it recreates the original polynomial, and thus the chunk.


Please note that I'm not saying this is a good scheme. It is just an example one, and one that doesn't detail the chunk polynomial conversion, which would be very important. There are several papers describing schemes where people have actually worked at making them tenable.

Modulo compression, if you want such a system to require only receiving k packets (although you send more than that), the sum of the size of the k packets must be at least the size of the original file (otherwise, you could use such a scheme to compress the file).
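
To make the scheme above concrete, here is a minimal sketch (toy code, not Digital Fountain's actual algorithm): the chunk's bytes become the coefficients of a degree-k polynomial over a small prime field, the sender transmits its value at many points, and any k+1 surviving (point, value) pairs rebuild the chunk by Lagrange interpolation.

    # Toy version of the polynomial scheme above: the chunk's bytes are the
    # coefficients of a polynomial over GF(257); each packet carries one
    # evaluation point; any len(chunk) received packets suffice.
    P = 257  # smallest prime above 255, so every byte value fits in the field

    def encode(chunk: bytes, n_points: int):
        return [(x, sum(c * pow(x, j, P) for j, c in enumerate(chunk)) % P)
                for x in range(1, n_points + 1)]

    def decode(points, k_plus_1: int) -> bytes:
        pts = points[:k_plus_1]                    # any k+1 survivors will do
        coeffs = [0] * k_plus_1
        for i, (xi, yi) in enumerate(pts):
            basis, denom = [1], 1                  # build prod_{j != i} (x - xj)
            for j, (xj, _) in enumerate(pts):
                if j == i:
                    continue
                denom = denom * (xi - xj) % P
                basis = [(lo - xj * hi) % P
                         for hi, lo in zip(basis + [0], [0] + basis)]
            scale = yi * pow(denom, P - 2, P) % P  # yi / denom  (mod P)
            coeffs = [(c + scale * b) % P for c, b in zip(coeffs, basis)]
        return bytes(coeffs)

    chunk = b"hello"                  # 5 coefficients -> degree-4 polynomial
    packets = encode(chunk, 9)        # send 9 packets...
    survivors = packets[::2]          # ...lose every other one (5 remain)
    assert decode(survivors, len(chunk)) == chunk

A real implementation would work over GF(256) (so field elements map exactly onto bytes) and use far larger chunks, but the any-k-of-n recovery property is the same.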

Compression? (3, Interesting)

mi (197448) | more than 12 years ago | (#2698444)

In essence, isn't this the same as file compression? The amount of information is the same (for those who remember what a bit is). It is just that the usual one character per byte is awfully wasteful, which is why the various compressors are so effective.

Add a modern data transfer protocol and you may get some startup money :-)

Re:Compression? (4, Informative)

s20451 (410424) | more than 12 years ago | (#2698531)

In essence, isn't this the same as file compression? The amount of information is the same (for those who remember what a bit is).

It is more than merely compression. The received data is compressed, which saves transmission time, but this technology is already well known (and the company isn't claiming a compression rate better than entropy [data-compression.com], or anything else silly). The innovation here is the elimination of acknowledgement or ARQ packets. I'm speculating here, but it looks like they are encoding the data by transforming a file into a huge "codeword" -- when the codeword is transmitted, the receiver waits for enough packets to correctly decode the codeword, which results in the recovery of the file. There's no need for ARQ or TCP because transmitting extra codeword elements will automatically correct any errors incurred in transmission.
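
For what it's worth, here is a toy sketch of that "collect symbols until you can decode" idea, roughly in the spirit of a rateless/fountain code. This is a guess at the general technique, clearly not the company's actual encoder: each symbol is the XOR of a random subset of source blocks, tagged with which blocks went in, and the receiver peels blocks out one at a time; it does not matter which particular symbols get dropped.

    # Toy rateless encoder/decoder: each symbol = XOR of a random subset of
    # source blocks; the receiver peels until every block is known.
    import os
    import random

    BLOCK = 4  # tiny blocks keep the demo readable

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def make_symbol(blocks):
        idx = frozenset(random.sample(range(len(blocks)), random.randint(1, 3)))
        data = bytes(BLOCK)
        for i in idx:
            data = xor(data, blocks[i])
        return idx, data                      # (which blocks), (their XOR)

    def try_decode(symbols, n_blocks):
        known = {}
        pending = [(set(idx), data) for idx, data in symbols]
        progress = True
        while progress and len(known) < n_blocks:
            progress = False
            for idx, data in pending:
                unknown = idx - known.keys()
                if len(unknown) != 1:
                    continue                  # not useful yet, revisit later
                for j in idx & known.keys():  # strip out already-known blocks
                    data = xor(data, known[j])
                known[unknown.pop()] = data
                progress = True
        return [known[i] for i in range(n_blocks)] if len(known) == n_blocks else None

    blocks = [os.urandom(BLOCK) for _ in range(8)]
    received = []
    while True:
        sym = make_symbol(blocks)             # sender just keeps emitting symbols
        if random.random() < 0.3:             # ...and 30% of them never arrive
            continue
        received.append(sym)
        out = try_decode(received, len(blocks))
        if out is not None:                   # "enough symbols" -- tell sender to stop
            break
    assert out == blocks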

compression? (1)

4im (181450) | more than 12 years ago | (#2698447)

How is this different from nifty compression plus a slightly different way of transmitting?

To me, this sounds like a mix of compression and protocol, not necessarily that groundbreaking.

If it works, cool. But I guess it won't be that efficient on that old 486 Linux router...

Flow Control (3, Informative)

Detritus (11846) | more than 12 years ago | (#2698448)

You still need some form of flow control or rate limiting, otherwise a large percentage of the UDP packets are going to get dropped. Plus, you have the problem of UDP streams stealing bandwidth from TCP streams on a limited bandwidth link.

Re:Flow Control (3, Interesting)

Omnifarious (11933) | more than 12 years ago | (#2698537)

Quite correct. This protocol does not sound at all TCP friendly [yahoo.com]. It needs some way of dynamically responding to network conditions to be that way. Even something so simple as doing an initial bandwidth test, then rate limiting the UDP packets to 90% of that would be a big help, though for a large file that would still create a lot of congestion problems.

Does anybody know if IPV6 has any kind of congestion notification ICMP messages so stacks can know to throttle applications when the applications are blasting out data that's congesting the network?

Re:Flow Control (2)

M100 (78773) | more than 12 years ago | (#2698574)

It doesn't matter if you lose data in the stream. You actually send more data than the original file - but the receiver only has to receive approximately the same amount of data as the original file (plus about 5% overhead, if I remember correctly). Thus you can send 100 kbytes (for a 50k file) but the receiver only needs to get half the packets - so loss is not a problem.
Think about simultaneous equations. If I have 2 unknowns, I can solve them if I have two equations. Now imagine that I send you 4 equations, each with the two variables. You only need to receive 2 of the equations to be able to reconstruct the original two pieces of data.
The other neat thing about this is that you can multicast the traffic - and each receiver can start listening whenever it wants - so even if a receiver starts listening halfway through, it can still get the whole file!
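
Taking that picture literally, in a few lines (toy illustration only): the two data values are the unknowns, each packet carries one equation, and any two surviving packets pin them down.

    # Two unknowns, four transmitted "equations"; any two survivors recover the data.
    import numpy as np

    data = np.array([42.0, 99.0])            # the two values we want to move
    A = np.array([[1.0, 0.0],                # each row: one packet's coefficients
                  [0.0, 1.0],
                  [1.0, 1.0],
                  [1.0, -1.0]])
    rhs = A @ data                           # what actually goes on the wire

    survivors = [2, 3]                       # pretend packets 0 and 1 were dropped
    print(np.linalg.solve(A[survivors], rhs[survivors]))   # -> [42. 99.]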

Flashget (1)

xantho (14741) | more than 12 years ago | (#2698449)

Sounds like flashget/jetcar to me. It's been available for quite some time. Tell that to the USPTO!

Compression (1, Funny)

heikkile (111814) | more than 12 years ago | (#2698450)

So, someone has invented a data compression technique, and applied it over a communication channel. The only original thing in the article was the clever marketing ploy to describe this old technique as something new and wonderful...

Re:Compression (1)

O2n (325189) | more than 12 years ago | (#2698503)

Yep, that seems to be the case.
That, or someone didn't get it (the author? the marketing guy?)

The quirk is that none of the data is ever transmitted; the receiving end creates its own copy of a file based on a complete set of mathematical equations.

This simply doesn't work. If you have something that is already compressed (no redundancy) - let's say a .zip file or a .jpeg picture - there is no "set of mathematical equations" that considerably reduces the data size. Note that since JPEG is a lossy algorithm, it can achieve higher compression rates than non-lossy algorithms (in theory). And you can't talk about lossy compression in the same phrase as data backup. :)

They may have designed something that speeds up transfers and doesn't rely on the exact packet sequence, etc. - but it's not spelled out in the article.

Re:Compression (1, Redundant)

M100 (78773) | more than 12 years ago | (#2698556)

No - it's not compression. You actually need to send more data than the original file - but the receiver only has to receive approximately the same amount of data as the original file (plus about 5% overhead, if I remember correctly).
Think about simultaneous equations. If I have 2 unknowns, I can solve them if I have two equations. Now imagine that I send you 4 equations, each with the two variables. You only need to receive 2 of the equations to be able to reconstruct the original two pieces of data.

An open solution? (1)

darrint (265374) | more than 12 years ago | (#2698451)

I wonder if it's possible to duplicate this with an open solution. If this is really as revolutionary as they say, then they've earned their patents. Could free/open hackers come up with something that delivers the same results but is unencumbered?

Kinda like IFS? (2, Interesting)

pointym5 (128908) | more than 12 years ago | (#2698452)

I mean it's not for image compression specifically, but it definitely reminds me of IFS image compression in some ways. I'll bet that compression is very time consuming, but that's fine if you're warehousing data. I wonder if the clients are pre-loaded with a body of parameterized functions, so that the server just sends information describing what functions to run and what the parameters are. I guess if it's all based on polynomials all it needs to send are vectors of constants.

Neat idea. Patents: here [uspto.gov] and here [uspto.gov].

Fact: Slashdot is dying (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#2698453)

The WIPO Troll now confirms: Slashdot is dying.

Yet another crippling bombshell hit the beleaguered Slashdot community when recently IDC confirmed that Slashdot accounts for less than a fraction of 1 percent of all servers. Coming on the heels of the latest The WIPO Troll survey which plainly states that Slashdot has lost more market share, this news serves to reinforce what we've known all along. Slashdot is collapsing in complete disarray, as further exemplified by failing dead last [goatse.cx] in the recent Sys Admin comprehensive networking test.

You don't need to be a Kreskin [goatse.cx] to predict Slashdot's future. The hand writing is on the wall: Slashdot faces a bleak future. In fact there won't be any future at all for Slashdot because Slashdot is dying. Things are looking very bad for Slashdot. As many of us are already aware, Slashdot continues to lose market share. Red ink flows like a river of blood. CmdrTaco is the most endangered of them all, having lost 93% of its core developers.

Let's keep to the facts and look at the numbers.

Hemos leader Theo states that there are 7000 users of Hemos. How many users of Timothy are there? Let's see. The number of Hemos versus Timothy posts on Usenet is roughly in ratio of 5 to 1. Therefore there are about 7000/5 = 1400 Timothy users. ChrisD posts on Usenet are about half of the volume of Timothy posts. Therefore there are about 700 users of ChrisD. A recent article put CmdrTaco at about 80 percent of the Slashdot market. Therefore there are (7000+1400+700)*4 = 36400 CmdrTaco users. This is consistent with the number of CmdrTaco Usenet posts.

Due to the troubles of VA Linux, abysmal sales and so on, CmdrTaco went out of business and was taken over by OSDN who sell another troubled OS. Now OSDN is also dead, its corpse turned over to yet another charnel house. All major surveys show that Slashdot has steadily declined in market share. Slashdot is very sick and its long term survival prospects are very dim. If Slashdot is to survive at all it will be among OS hobbyist dabblers. Slashdot continues to decay. Nothing short of a miracle could save it at this point in time. For all practical purposes, Slashdot is dead.

heh.. (5, Insightful)

Xzzy (111297) | more than 12 years ago | (#2698456)

> These files routinely are mailed on tape rather
> than transmitted electronically. "FedEx is a
> hell of a lot more reliable than FTP when
> you're running 20 Mbytes,"

Having worked in the industry they mention, I'd hazard that they avoid FTP more because of the illusion of security than anything else. People in the EDA world (which is where I worked, and which has a close relationship with chip manufacturers) are immensely paranoid about people getting ahold of their chip designs, because if someone steals that... you not only lose your next chip, you enable someone else to make it for you.

These people just don't trust firewalls and FTP yet, but they do trust putting a tape in an envelope and snail-mailing it. At the very least it makes someone liable if the letter gets stolen, which you can't do with electronic transfers.

At any rate, FTP is plenty reliable for transferring 20 MB files... I do it every time a new game demo comes out. :P Maybe they meant 20 GB. Cuz I've seen chip designs + noise analysis + whatever take dozens of gigs.

Re:heh.. (2)

swb (14022) | more than 12 years ago | (#2698501)

These people just don't trust firewalls and ftp yet, but they do trust putting a tape in an envelope and snail mailing it.

I've heard this said about the diamond business and the postal service. Diamond couriers, who are carrying just diamonds, can be tracked and robbed easily. Once a package enters the postal stream, it's nearly impossible to steal that specific package.

I dunno if it's really true or not, but it has a certain counterintuitive logic that makes it believable.

doesn't leave a lot of room for error (1)

baronben (322394) | more than 12 years ago | (#2698457)

According to the article, the technology needs exactly the right kind of equation (or whatever this technology uses to carry the information): according to the representative quoted in the article, if you got 98% of the packets, you don't have the file. I suppose this means there's a large chance that network conditions can completely mess up a download, say interference on a router somewhere in Kalamazoo, or even on the local Ethernet line. Not sure if this is a big thing or not, but who knows.

DAP? (2)

BIGJIMSLATE (314762) | more than 12 years ago | (#2698458)

"User would go to download a game demo or something, receive pieces from several different places, and knit them together? Wish I could recall the company's name."

Uh... doesn't something like Download Accelerator Plus (yeah yeah, I know it's a hive of spyware) already do that (download from multiple locations, only to recombine the file later)?

yeah i remember... (1)

jthm (31469) | more than 12 years ago | (#2698460)

User would go to download a game demo or something, receive pieces from several different places, and knit them together? Wish I could recall the company's name.

the network is called usenet and the company was just broken up by the government.

XOLOX is the name (1, Interesting)

arnwald (468380) | more than 12 years ago | (#2698461)

The program was called XoloX. I know the developer personally, and he is very disappointed about the corporate feedback he got.

People loved it, corporations didn't, so he shut down his site and with it XoloX (unless you have a hacked version, of course ;)

Cheers.

Think Geek Geforce 3 Add (0)

Anonymous Coward | more than 12 years ago | (#2698464)

I wish they would quit saying my video card sucks since I own a Visiontek G3 and it ownz.

I think I know how this works (1, Redundant)

David H (139673) | more than 12 years ago | (#2698462)

It looks like they just use UDP to "send" the original data and then follow it up with parity information until the "receiving" client gets enough parity data to reconstruct any missing original data. The parity files everyone has started using on Usenet are pretty cool, and this just sounds too similar.

not always great (1)

SylentBobb (515192) | more than 12 years ago | (#2698466)

Well, one thing I noticed with Kazaa/Morpheus was that partially downloaded files were useless. I hated trying to download a movie, having the file provider go offline (just rude), and then not even being able to watch the part I had downloaded. I don't know if there was a work-around for it. I just switched to using Direct Connect.

Re:not always great (0)

Anonymous Coward | more than 12 years ago | (#2698561)

Most of the movie files can be watched even if you don't get the ending; you just have to have a program that simulates the ending. mplayer for Linux can read them all even without the ending, and 'mplayer -idx' indexes it so you can fast-forward. MPGs can be watched with Windows Media Player without fixing, and DivXs can be fixed with some kind of divxfix (run a search, shouldn't be hard to find), again in Windows. No need to bother when using supreme software like mplayer for Linux :).

Michael, did you even read it? (5, Informative)

Chirs (87576) | more than 12 years ago | (#2698467)

Guys, this is nothing like Kazaa. Kazaa will let you download from several sources simultaneously, but only because it just requests different parts of the file from each source. At that point there are still send/ack type protocols in use.

This technology (from the write-up anyway) uses some kind of proprietary technique to re-map the data into another domain and send the information required to reproduce it. It sounds kind of like sending a waveform as a series of Fourier coefficients rather than as actual data samples. By changing to a different domain, it is possible to send metadata from which the original information can be recreated.

I have no idea exactly how they would do it, but it doesn't sound completely impossible.

However, it's nothing like Kazaa or GetRight.

mod up plz (0)

Anonymous Coward | more than 12 years ago | (#2698491)

and it doesn't actually transfer the data.

PostScript for data (2)

swb (14022) | more than 12 years ago | (#2698546)

This sounds a lot like what PostScript is to a rasterized file. A set of descriptions of what the picture looks like, which are small and easy to transmit, which are then drawn to produce the picture.

With real vector PS it's easy, since you start out by creating vectors (e.g., Adobe Illustrator). How you get from a non-vector "destination" to the metadata you really want to transmit sounds complicated.

Morpheus does this... (1)

equalize (129721) | more than 12 years ago | (#2698468)

When downloading something large (probably everything, just more noticeable when the file is large) Morpheus connects to different users with the same file.

vector graphics (0, Offtopic)

jas79 (196511) | more than 12 years ago | (#2698474)

This sounds a lot like how vector graphics work. They don't transmit every pixel, but only send the coordinates and the instructions for how to draw the image.

Somehow they managed to do the same for applications. Maybe they only send the source code and compile the code on location.

Swarmcast (0)

Anonymous Coward | more than 12 years ago | (#2698475)

The company which did that was OpenCola, with an open source product called Swarmcast, but as far as I know they have abandoned the project and cut the developers from their payroll. However, it is still going on at SourceForge [sourceforge.net].

I know a few of the developers, and one of them has started a service company called Onion Networks [onionnetworks.com], selling services for Swarmcast.

AK

Cool (1, Offtopic)

athmanb (100367) | more than 12 years ago | (#2698482)

Now you just need to combine that with the revolutionary algorithm that compresses any data to one bit and power your computer with cold fusion, and you've got one heck of a file-transferring machine!

Doesn't e-Donkey (1)

cruelshoes (122132) | more than 12 years ago | (#2698484)

User would go to download a game demo or something, receive pieces from several different places, and knit them together?

Doesn't e-Donkey grab from all kinds of different sources and then assemble the file?

By George, I believe it does.

Name of company and product (4, Informative)

Omnifarious (11933) | more than 12 years ago | (#2698485)

The company's name is OpenCola [opencola.com] and the name of the product was SwarmCast. The guy who did SwarmCast, Justin Chapewske, is now at a company he started named Onion Networks [onionnetworks.com]. OpenCola appears to have completely abandoned its original open source approach to its software.

Apparently, Justin has taken the GPL portions of Swarmcast and is improving them at Onion Networks.

Re:OOPS, name of person slightly wrong (3, Informative)

Omnifarious (11933) | more than 12 years ago | (#2698496)

Oops, make that Justin Chapweske. That's what I get for typing out an odd name from memory. :-)

Debian (2)

Some guy named Chris (9720) | more than 12 years ago | (#2698490)

Debian does something similar with the Pseudo Image Kit [debian.org].

It gets all the parts of the install ISO CD image from disparate sources, stitches them together, and then uses rsync to patch it into an exact duplicate of the original install image.

Very nifty.

Not a new concept (2, Informative)

KarmaBlackballed (222917) | more than 12 years ago | (#2698495)

The quirk is that none of the data is ever transmitted; the receiving end creates its own copy of a file based on a complete set of mathematical equations.

This is called compression. Everybody is doing it and it has been done before.

When you download a ZIP file, you are not downloading the content. You are downloading a mathematically transformed version of it. You then translate it back. Modems have been compressing and decompressing on the fly since the late 1980s.

Maybe they have a better compression scheme? (Fractal based?) That would be news. Everything else is a distraction.

Re:Not a new concept (4, Informative)

Omnifarious (11933) | more than 12 years ago | (#2698564)

UDP drops packets. What they are saying is they can packetize things in such a way that as soon as you pick up any N packets, you get the file, no matter what. They are also implying that anything less than N packets leaves you gibberish. This is quite different from file compression. It may be related to fractal file compression, but I think it's probably more related to cryptographic key sharing schemes.

Yea, Right. (1, Insightful)

Anonymous Coward | more than 12 years ago | (#2698497)

"FedEx is a hell of a lot more reliable than FTP when you're running 20 Mbytes," said Charlie Oppenheimer

Who does this guy think he is kidding?? We regularly FTP AutoCAD files of 100 megs and ISO images of 500 megs with no issues whatsoever.

Sure, it might take an hour or so to complete, but that beats the hell out of FedEx and it's a lot cheaper too. This guy's been smoking a few too many of his own marketing brochures.

Fountain (0)

Anonymous Coward | more than 12 years ago | (#2698502)

You'd think if they were going to call their product Fountain, and try to name it something out of Star Trek, they would at least call it "Particle Fountain" which actually appeared in at least 1 episode of TNG, I think.

Oh, wait, the particle fountain blew up and didn't work so well...

Nevermind ;)

Glenn

One of two (1)

Looke (260398) | more than 12 years ago | (#2698504)

This is one of two:

  • Yet another compression algorithm
  • A perpetuum mobile

Guess what?

"Game Demo" (1)

SkOink (212592) | more than 12 years ago | (#2698507)

Well, Morpheus will let you download gigs of these so-called "Game Demoz" for free from multiple sources at the same time!

UDP? Not always good (1)

darrad (216734) | more than 12 years ago | (#2698509)

From what I have seen in the past, using UDP is not always a good thing. Many of the major backbone providers, and a lot of ISPs, block UDP traffic at different times for many different reasons (smurf attacks, DoS). This can lead to several services being shut down.

The idea itself sounds good. You more or less send a description of the file in a mathematical equation. If the equation itself is smaller in size than the file, great.

Re:UDP? Not always good (0)

Anonymous Coward | more than 12 years ago | (#2698524)

Maybe your ISP blocks UDP but, certainly none of the "major backbone providers". If they did, you wouldn't even be able to resolve a DNS name (UDP 53).

The internet would cease to function if UDP were blocked.

Basically (2)

glowingspleen (180814) | more than 12 years ago | (#2698511)

For anyone who just wants the gist of the article:

"The sending side transmits these symbols until the box on the receiving end confirms that it's collected enough symbols. "

So basically, it's not much more than UDP with a single reply telling the server to stop transmitting.

Not bad, but you'd better have some good timeouts worked into this thing. UDP by definition is a non-replying "if it gets dropped, who cares?" protocol. If the receiver's connection were to go down, wouldn't the server just keep flooding all the in-between routers with packets for a while? That's not good for traffic congestion.

Bad Netizens? (1)

mattrwilliams (534984) | more than 12 years ago | (#2698514)

Isn't one of TCP's purposes to throttle connections when loss (= contention) in the core starts to affect a stream? This is a method by which multiple users can share the same public network without adversely affecting one another. This technology looks like it is working around this problem by adding redundancy to the original data and then flooding the network, ignoring any indications of contention. This smells pretty selfish to me and could cause problems for the public internet if this technology ever takes off in large enough numbers.

Just send numbered UDP packets (2, Interesting)

seanmceligot (21501) | more than 12 years ago | (#2698515)

This could be done easily without the proprietary algorithms. Just send UDP packets with a header in each one stating that it is packet number N and that there are X total packets. Then request missing packets when you get towards the end, and put them all together in order when you have them all.

Somewhat unrelated --- does anyone else miss ZMODEM? We need a zmodem-like program that works over telnet so we don't have to open a separate FTP session. In the BBS days, you just typed "rz filename" and it came to you.
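
Back to the numbered-packet idea, here is a bare-bones sketch (illustration only; the actual sockets, retries and timeouts are left out): prefix each chunk with "packet N of X", reassemble in any order, and re-request whatever is still missing.

    # Chop data into UDP-sized chunks with a "packet N of X" header; reassemble
    # out of order and report which packet numbers still need re-requesting.
    import struct

    HEADER = struct.Struct("!II")            # packet number, total packet count
    CHUNK = 1400                             # stay under a typical Ethernet MTU

    def packetize(data: bytes):
        chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
        return [HEADER.pack(n, len(chunks)) + c for n, c in enumerate(chunks)]

    class Reassembler:
        def __init__(self):
            self.total, self.parts = None, {}

        def feed(self, packet: bytes):
            n, self.total = HEADER.unpack_from(packet)
            self.parts[n] = packet[HEADER.size:]

        def missing(self):                   # ask the sender for these again
            return [n for n in range(self.total or 0) if n not in self.parts]

        def result(self):
            if self.total is None or self.missing():
                return None
            return b"".join(self.parts[n] for n in range(self.total))

    rx = Reassembler()
    for pkt in reversed(packetize(b"x" * 5000)):   # datagrams may arrive in any order
        rx.feed(pkt)
    print(rx.missing(), len(rx.result()))          # -> [] 5000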

my take (5, Informative)

bpowell423 (208542) | more than 12 years ago | (#2698516)

There have been lots of comments along the lines of, "this is just a novel compression/transmission scheme". In a way, that looks to be true, but here's my take.

Judging from this:

The sending side transmits these symbols until the box on the receiving end confirms that it's collected enough symbols. The receiving box then performs an XOR operation on the symbols to derive the original data.

It appears to me that the transmitting side generates the symbols (parameters of the equations, I guess) and begins sending them to the receiving side as fast as it can. Apparently there are multiple solutions to the equations that will arrive at the same answer, so when the receiving end has received enough symbols to make it work, it says, "stop sending already!" Apparently they're getting their speed because A) things don't have to go in any order (that's how the 'net is supposed to work, right?) and B) Alice and Bob don't have to keep up this conversation: Alice: Hey, Bob, can you send me X? Bob: Okay, are you ready? A: Yes, go ahead. B: Okay, here it comes. A: I'm waiting. B: Here's the first packet. A: What? That packet didn't make it over. B: Okay, here it is again. A: Okay, I got that packet. B: Good. A: Okay, I'm ready for the second packet. B: Okay, here's the second packet.

Okay, I had too much fun with the Alice and Bob conversation there. Anyway, it looks like their scheme is compressing things in the form of their equations, and then just sending them in a burst until the receiver is happy.

Sounds like it might work, but it'll generate a ton of network traffic, I'd bet!
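
Here is roughly what that "blast until told to stop" loop could look like (a sketch only: next_symbol() stands in for whatever encoder produces the redundant symbols, the addresses are placeholders, and the receiver is assumed to send back a single b"ENOUGH" datagram once it has decoded; none of this is from the article).

    # Keep firing encoded symbols over UDP, no per-packet ACKs, until the
    # receiver's control message says it has collected enough to decode.
    import select
    import socket

    def blast(next_symbol, dest=("receiver.example.net", 9999), ctl_port=9998):
        tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        ctl = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        ctl.bind(("", ctl_port))             # side channel for the stop signal
        while True:
            tx.sendto(next_symbol(), dest)   # fire and forget
            ready, _, _ = select.select([ctl], [], [], 0)
            if ready and ctl.recv(64) == b"ENOUGH":
                break                        # receiver has what it needs

As other posters note, a real version would still need some pacing or rate control so it doesn't trample every TCP flow sharing the link.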

File Resume (0)

The_Flames (184659) | more than 12 years ago | (#2698517)

Anything that uses the Music City protocol can download from multiple available sources at the same time; if you're downloading from the web, then download managers like GetRight etc. are also available to do a similar task.

Technique: FEC (Forward Error Correction) (1, Insightful)

Anonymous Coward | more than 12 years ago | (#2698521)

These guys are implementing a Forward Error Correction mechanism for compression. It's all based on Michael Luby's research at Berkeley on FEC and Tornado codes. He is a co-founder of the company. Pretty effective technology for certain applications.

Details (1)

adam303 (543395) | more than 12 years ago | (#2698522)

That article leaves a lot to wonder about. Is it just some data compression? I don't think there's much left to be discovered in that field. Is it a protocol that sends a lot of UDP packets and parity packets so you can fill in the missing blanks? Then you have to deal with bandwidth throttling, because you can't just have some machine sending out UDP as fast as it can to your smaller pipe; it will cause a DoS. Anyone have links to their patents? adam

fastest (4, Funny)

cr@ckwhore (165454) | more than 12 years ago | (#2698532)

Someday when we all have extraordinarily fast computers, we'll simply be able to send somebody an MD5 sum and the computers will be able to "crack" it back into the original file. At that point, commercial software wouldn't even have to come on a CD... just print the hash on a slip of paper and the user could type it in.

word.

Re:fastest (1)

pointym5 (128908) | more than 12 years ago | (#2698571)

No, they won't, because given any MD5 hash there are an infinitude of files that hash to it.

Now if you also send the file size, you reduce the possibility of collision.

But there's still a minor problem of iterating through the set of candidates. If you send me the MD5 hash for a 500KB file, you'll need to get cranking on computing the MD5 hash of each 4-million-bit number. 2^4000000 different MD5 hashes will take a few lifetimes of the universe to perform with any computing device bounded by time quanta.

dropped packets.. no problem? (1)

ianxm (473419) | more than 12 years ago | (#2698534)

The article says that they don't care if packets get dropped, as long as the right number of packets gets through.

Aren't the packets unique? If a packet gets dropped, how do they know which one to resend?
It doesn't make sense to me.

Entry level is $70k (2)

joshv (13017) | more than 12 years ago | (#2698535)

Yeah, I don't think FTP is so slow that anyone is going to pay $70k for their proprietary 'Transporter Fountains'. Seems like anyone with a little common sense and math ability could easily cobble together a software UDP-based transfer protocol that has all of the properties described in the article.

The key is to build in redundancy without increasing the amount of data sent so much that you counteract the speed gains you get by using UDP.

-josh

FTP already provides for this (2, Informative)

autocracy (192714) | more than 12 years ago | (#2698536)

Just send "restart at" commands to many different servers, then cat the files onto each other. This is how Download Accelerator does it, and FastTrack is the same theory. Programs just take all the mental work out of it.

Uhh... my shit detector just went off (1, Flamebait)

tzanger (1575) | more than 12 years ago | (#2698539)

From the article:

Meltzer recalled a job where the client had a 32-Mbit/second connection available but was getting a throughput of 0.5 Mbits/s. "It wasn't a question of mere bandwidth. They had too much turnaround," he said.

Um... if you're getting 500kbps on a 32Mbps connection your protocol stinks. 1/64th of your available bandwidth isn't FTP's fault, nor is it TCP's. Either there was a severe bottleneck somewhere between the endpoints, or the protocol was designed to minimize throughput.

More shit:

"FedEx is a hell of a lot more reliable than FTP when you're running 20 Mbytes," said Charlie Oppenheimer, vice president of marketing at Digital Fountain.

They may have better bandwidth but the latency sucks. Furthermore, I've never had FTP destroy my packets. It either made it or it didn't, and it makes it 100% of the time, barring connection failure.

Sorry. I don't buy it. Yeah, sending over UDP gives you less hassle than TCP, but now you have to take care of all the sequencing and data-transfer checks yourself. Not terribly difficult, but not rocket science either.

TCP Fair? (1)

Mdog (25508) | more than 12 years ago | (#2698540)

In my networking class last year, they talked about new protocols having to be "TCP fair," in that they don't gain their advantages over the standard TCP by just cutting in line in front of other TCP packets...I wonder if this new algorithm claims to keep that in mind. The scenario to avoid is everybody who's 31337 switching to this new stuff, thereby slowing down the other half to gain their speedup.

Conspiracy theory? Yes. But hey, this is /.

moron, you didn't read the article (0)

Anonymous Coward | more than 12 years ago | (#2698541)

User would go to download a game demo or something, receive pieces from several different places, and knit them together?

There is no download happening at all here. It is doing something like this: the so-called receiver is building a binary representation of a CRC. Both sender and receiver use proprietary hardware. This isn't fucking GoZilla, you fucking fucktards!

Article is wrong (4, Interesting)

saridder (103936) | more than 12 years ago | (#2698545)

The article quotes that "...FTP requires packets to arrive in sequence, and TCP requires a receiving end to acknowledge every packet that arrives, so that dropped packets can be resent..."

This is incorrect. TCP has a concept of sliding windows where, once a number of packets has been received successfully, the receiver increases the number of packets that can be sent without an ack. The window grows exponentially, so if it receives 2 packets successfully, it will then tell the sender that it can take 4 before an ack is needed. The only time you get a 1-for-1 ack ratio is if you miss a packet and the window slams shut.

Furthermore, UDP for data is highly unreliable, and I wouldn't trust it across WANs. Frame Relay switches may drop packets if you exceed your CIR and begin bursting, so the whole transfer will never succeed. Therefore you actually waste bandwidth, because the whole transfer is doomed to fail and the sender will never know it.

Also, some routers have WRED configured in their queues, purposely dropping TCP packets to increase bandwidth on a global scale. This would damage the file transfer process as well if it were UDP-based, as this system is.

Stick with the RFCs and the tried-and-true TCP transport system. This company will fail.

Udpcast (2)

BlueUnderwear (73957) | more than 12 years ago | (#2698548)

Udpcast [linux.lu] in FEC mode does this too: in addition to the original data, it can transmit an arbitrary number of "FEC" blocks, which are linear combinations of the data blocks. If some data blocks are lost in transit, udpcast can recalculate them from the FEC blocks by multiplying the vector of received blocks by the inverse of the encoding matrix.

XOR = advanced algorithm (2, Informative)

null etc. (524767) | more than 12 years ago | (#2698552)

Quoted from the article:
In this case, the Transporter Fountain creates not equations but hundreds of millions of "symbols" which can be used to reconstruct the data. The sending side transmits these symbols until the box on the receiving end confirms that it's collected enough symbols. The receiving box then performs an XOR operation on the symbols to derive the original data.
So, assuming that each "symbol" is at least one byte, then creating "hundreds of millions" of these symbols would result in hundreds of megabytes of data. Furthermore, the guy quoted 20MB as being a large amount of data to send.

Conclusion: Only sales & marketing would try to sell a product that turns 20MB into 100MB, sends it via UDP, only in order to have the results XOR'd together.

Where do they get these people?

Tornado Codes (5, Informative)

Jonas Öberg (19456) | more than 12 years ago | (#2698560)

While not actually related, John Byers, Michael Luby and Michael Mitzenmacher wrote a paper on using Tornado codes to speed up downloads. Basically, what they propose is clients accessing a file from more than one mirror in parallel and using erasure codes to make the system feedback-free. The abstract:

Mirror sites enable client requests to be serviced by any of a number of servers, reducing load at individual servers and dispersing network load. Typically, a client requests service from a single mirror site. We consider enabling a client to access a file from multiple mirror sites in parallel to speed up the download. To eliminate complex client-server negotiations that a straightforward implementation of this approach would require, we develop a feedback-free protocol based on erasure codes. We demonstrate that a protocol using fast Tornado codes can deliver dramatic speedups at the expense of transmitting a moderate number of additional packets into the network. Our scalable solution extends naturally to allow multiple clients to access data from multiple mirror sites simultaneously. Our approach applies naturally to wireless networks and satellite networks as well.

I don't have the paper in a computer format, but the number is TR-98-021 and John and Michael were both at Berkeley at the time (1998), so it should be fairly easy to find if someone is interested. Doubtless, a number of other reports on the subject also exist that deal with the same problem but with different solutions.

What's going on here? (2)

gotan (60103) | more than 12 years ago | (#2698562)

Sorry, the whole article seems to make some magic mumbo-jumbo out of the process. Apparently the file is transformed, but how does that transformation help? The main difference between UDP and TCP in this case is that TCP maintains the sequence of packets, so after splitting a file up, sending it as TCP packets and combining it again, all the parts (sent as packets) are in the right place. UDP does no such thing, and UDP also doesn't check whether a packet really reached its destination. This frees UDP of some overhead TCP has. But to send a large file (with a simple approach), you now have to label each UDP packet with a sequential number and, at the end, check whether all packets arrived (and maybe request missing packets again), then rearrange them according to the sequence numbers.

Now I don't see how a transformation of the content helps here: instead of adding the information about where in the file the packet goes (a kind of serial number), you now have to label where in the equation it should go (a kind of coefficient index), so the receiving end knows whether it has all the information, and which information is still missing and must be requested again.

More commentary (2)

autocracy (192714) | more than 12 years ago | (#2698566)

It's an encoding scheme that sends you the instructions on how to build something rather than the stuff itself. Not as special as they make it sound. Saying that you get the data without it being sent to you is the biggest troll for mid-level clueless managers who want to download their "repr0ns" faster. Not that I'm even sure it will work that well...

Get a clue - you compression bigots (0)

Anonymous Coward | more than 12 years ago | (#2698567)

That's the part about /. I hate the most: the stupid phuknutz who post without engaging their brains. The company doesn't do compression like you all think; they abstract the data into a series of equations that represent the content. The result is that it doesn't matter which packets you download from the source, just that you download enough of them. Once you have enough packets, you can solve the series of equations (n equations, n unknowns) to get back the original data. Duh.

Would be nice if it works... (1)

voronoi++ (208553) | more than 12 years ago | (#2698568)

Interesting idea, I wonder how it could work in practice.

They are going to have to deal with flow control, dropped packets, etc... I wonder what happens if the receiver crashes?

I have a feeling that they may be sending quite a bit of redundant data (perhaps similar to the way CDs are encoded at the hardware level), and they are betting that the signal to noise ratio is good enough for error correction software to deal with it. With a bit of luck they should be able to use more of the bandwidth available.

I wonder what would happen if lots of people start using something like this. Would the extra bandwidth actually slow things down, even though an individual download was faster?

I wish I had more energy left after doing my day job, since this sounds like a fun side project...

FTP doesn't require in order delivery?!?1? (1)

hammy (22980) | more than 12 years ago | (#2698572)

I didn't think FTP required packets to arrive in sequence. Although FTP and TCP acknowledge packets in sequence, they don't actually require the packets to be received in sequence.

They don't really make it clear why this product is interesting at all, beyond giving it a cute name.

Think Reliable Multicast + XOR Recovery (2)

hughk (248126) | more than 12 years ago | (#2698578)

This is basically what the guys doing reliable multicast get up to plus what you do for tape backups. It isn't particularly new.

You create your data in records and groups. Each group contains a longitudinal XOR of the other records within the block. This comes from tape backups that were proof against bad spots and was later used in RAID.

You then sequence and shoot your data in records across the network. If one record is dropped, it can be recreated from the XOR redundancy record. If two records are dropped, you need a re-request mechanism. This can be either over UDP or via a separate TCP link.

If you want an example of prior art, go to the trading room of a bank. How do you think all those prices are delivered to every workstation?
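
The XOR-redundancy-record trick in miniature (toy example only, same idea as RAID parity): one parity record per group lets the receiver rebuild any single lost record in that group.

    # One XOR parity record per group rebuilds a single lost record.
    from functools import reduce

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    group = [b"rec0", b"rec1", b"rec2", b"rec3"]
    parity = reduce(xor, group)                         # sent along with the group

    received = {0: group[0], 1: group[1], 3: group[3]}  # record 2 never arrived
    rebuilt = reduce(xor, received.values(), parity)    # parity XOR survivors
    assert rebuilt == group[2]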

michael didn't read the article carefully, I guess (2)

rlowe69 (74867) | more than 12 years ago | (#2698580)

...wasn't there a company using this for a fast file download application? User would go to download a game demo or something, receive pieces from several different places, and knit them together?

michael, this is not what the product does. From the article:
By translating a packet stream into mathematical elements, the company eliminates the back-and-forth transactions that confirm whether data has reached its destination. In the Digital Fountain approach, the receiving end waits until it has received a certain number of packets, then signals the transmitting side to stop sending. The operation doesn't require a network processor, but relies instead on the computational power of standard PC processors.

The quirk is that none of the data is ever transmitted; the receiving end creates its own copy of a file based on a complete set of mathematical equations.

It appears as though the signal is broken down into equations that, when combined, produce the original data. These equations are all sent from the same server to the destination client. The speed increase then comes from the fact that the size of the equations is less than the size of the data.

The article does not mention that the equations come from multiple servers, which is a very big difference! IMO, this technology is much more newsworthy than yet another multi-server downloading tool like Kazaa.

Read the article! (1)

alt.sex.fetish.jesus (542450) | more than 12 years ago | (#2698585)

> User would go to download a game demo or something,
> receive pieces from several different places, and knit
> them together?

This technology has *nothing* to do with ``downloading chunks from multiple sources and splicing them together''. Man, it's bad enough seeing how many Slashdot readers didn't bother reading the article, but Michael himself didn't bother reading the article.