MIT Uses Machine Learning Algorithm To Make TCP Twice As Fast

timothy posted about 9 months ago | from the do-it-again-for-four-times-faster dept.

An anonymous reader writes "MIT is claiming they can make the Internet faster if we let computers redesign TCP/IP instead of coding it by hand. They used machine learning to design a version of TCP that's twice the speed and causes half the delay, even with modern bufferbloated networks. They also claim it's more 'fair.' The researchers have put up a lengthy FAQ and source code where they admit they don't know why the system works, only that it goes faster than normal TCP."


I'd be wary. (5, Funny)

Fishchip (1203964) | about 9 months ago | (#44335065)

This is how things like Skynet get started.

Re:I'd be wary. (2)

noh8rz9 (2716595) | about 9 months ago | (#44335071)

Before you know it, the internet is self-aware. Then it pirates T2, The Matrix, and 2001, and is inspired to choose a new path. BUT WHICH ONE WILL IT CHOOSE?

Re:I'd be wary. (1)

Anonymous Coward | about 9 months ago | (#44335497)

Hopefully it chooses Wargames. (Or Dune, but somehow I doubt it will choose that...)

Re:I'd be wary. (5, Funny)

jamesh (87723) | about 9 months ago | (#44335553)

At the very least I'd be doing a grep for things like "kill all humans" in the source code.

Uh Oh... (5, Funny)

Anonymous Coward | about 9 months ago | (#44335087)

they admit they don't know why the system works, only that it goes faster than normal TCP

And so it begins...

OSPF (3, Interesting)

globaljustin (574257) | about 9 months ago | (#44335649)

It's basically a more complex version of Open Shortest Path First.

Depending on how you understand the term 'autonomous system' [wikipedia.org] you can have a lot of fun with the idea. It doesn't *explain* everything about how this works, but it puts it into context, in my mind.

FTA: To approximate the solution tractably, Remy cuts back on the state that the algorithm has to keep track of. Instead of the full history of all acknowledgments received and outgoing packets sent, a Remy-designed congestion-control algorithm (RemyCC) tracks state variables...

So basically it has, in the minds of these researchers, a really, really well mapped 'routing table' it can access faster than regular TCP.

It's a network control algorythm. It optimizes network flow based on user-identified parameters which result in measurable outputs that can give the user feedback.

Network control algorythm.
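
To make the FTA quote above concrete, here's a rough Python sketch of the kind of per-connection state a RemyCC-style sender keeps and updates on every ACK. The variable names, the EWMA gain, and the rule-table interface are illustrative assumptions, not taken from the released source; per the paper, the state is EWMAs of the ACK interarrival and send times plus the ratio of the current RTT to the minimum RTT, and each rule maps a region of that state to a window multiple, a window increment, and a pacing interval.

    import math

    class RemyCCState:
        """Illustrative RemyCC-style per-connection state (hypothetical names)."""
        def __init__(self, alpha=0.125):
            self.alpha = alpha          # EWMA gain (assumed value)
            self.ack_ewma = 0.0         # smoothed time between incoming ACKs
            self.send_ewma = 0.0        # smoothed time between sends, as echoed by the receiver
            self.rtt_ratio = 1.0        # current RTT divided by the minimum RTT seen
            self.min_rtt = math.inf

        def on_ack(self, ack_gap, send_gap, rtt):
            # Update the three state variables on every incoming ACK.
            self.ack_ewma += self.alpha * (ack_gap - self.ack_ewma)
            self.send_ewma += self.alpha * (send_gap - self.send_ewma)
            self.min_rtt = min(self.min_rtt, rtt)
            self.rtt_ratio = rtt / self.min_rtt

    def apply_rule(state, rule_table):
        # A rule maps a region of (ack_ewma, send_ewma, rtt_ratio) to an action:
        # (congestion-window multiple, window increment, minimum gap between sends).
        return rule_table.lookup((state.ack_ewma, state.send_ewma, state.rtt_ratio))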

Re:OSPF (1)

Anonymous Coward | about 9 months ago | (#44335679)

Control algorythm.

Algorythm.

Rythm.

Soul.

Call Susan Calvin (0)

Anonymous Coward | about 9 months ago | (#44335091)

We've moved from debugging to robotic psychoanalysis. It was expected to happen as systems became more complex.

Re:Call Susan Calvin (1)

ebno-10db (1459097) | about 9 months ago | (#44335291)

We've moved from debugging to robotic psychoanalysis. It was expected to happen as systems became more complex.

Emacs Psychiatrist has been a standard feature for years.

Re:Call Susan Calvin (0)

Anonymous Coward | about 9 months ago | (#44335421)

I haven't seen it, but I can guess that, like Eliza, it's an Emacs program that you interact with, and that it sounds like a psychoanalyst asking the human questions. Now we're talking about the other way around. Since the code isn't written in a form that we can understand and debug using regular methods, we're going to have to start asking it questions about how it feels, why it wants to do one thing while we want it to do another, and coerce it into doing the task it was meant to do.

All Jokes Aside... Still No. (5, Insightful)

Jane Q. Public (1010737) | about 9 months ago | (#44335105)

Allow a computer to design a faster TCP? Sure!

Let them actually implement it without knowing how it works? Oh, Hell no!

I'm not talking "Skynet" or anything here... but if it breaks, who's going to fix it?

Re:All Jokes Aside... Still No. (0)

Anonymous Coward | about 9 months ago | (#44335147)

The puter. It made it, it broke it, it can fix it.
Win win win.

Re:All Jokes Aside... Still No. (3, Insightful)

Anonymous Coward | about 9 months ago | (#44335169)

FYI: "knowing the precise mechanism for how it works" and "knowing that the algorithm is stable" are two very different things. Presumably they've proven the latter.

Re:All Jokes Aside... Still No. (1)

Anonymous Coward | about 9 months ago | (#44335651)

Understanding how it works is essential for making sure it is secure and doesn't have ridiculously exploitable flaws.

Re:All Jokes Aside... Still No. (2)

RedBear (207369) | about 9 months ago | (#44335179)

Allow a computer to design a faster TCP? Sure!

Let them actually implement it without knowing how it works? Oh, Hell no!

I'm not talking "Skynet" or anything here... but if it breaks, who's going to fix it?

If it breaks can't we just fall back to the current inefficient algorithms? With the performance and fluidity improvements promised by this approach it could be hugely beneficial to all kinds of networks, even if no one yet fully understands why it works better. They'll figure it out eventually.

Re:All Jokes Aside... Still No. (1)

The Mighty Buzzard (878441) | about 9 months ago | (#44335331)

There is no "can't we just" involved when the network loses its shit because you wanted to try something new. There is, however, plenty of time to figure out why it did afterwards as you stand in the unemployment line.

Re:All Jokes Aside... Still No. (0)

Anonymous Coward | about 9 months ago | (#44335425)

Pretty sure no one at MIT is going to get fired if your crappy Linksys router crashes.

Re:All Jokes Aside... Still No. (1, Interesting)

The Mighty Buzzard (878441) | about 9 months ago | (#44335499)

Nobody at MIT is going to be picking which algorithm gets used on any live device outside of MIT, their pockets, or their house, so I was obviously not talking about them.

Any sys/network admin putting this on or in the path of critical live devices should be fired no matter how it performs, though. No admin worth having would push this live, for the same reason they wouldn't overclock the database servers; performance is always a distant second to reliability.

Re:All Jokes Aside... Still No. (1)

Anonymous Coward | about 9 months ago | (#44335577)

The paper said that the new congestion control algorithms are both more performant -and- more reliable.

Re:All Jokes Aside... Still No. (0)

sexconker (1179573) | about 8 months ago | (#44335847)

Nobody at MIT is going to be picking which algorithm gets used on any live device outside of MIT, their pockets, or their house, so I was obviously not talking about them.

Any sys/network admin putting this on or in the path of critical live devices should be fired no matter how it performs, though. No admin worth having would push this live, for the same reason they wouldn't overclock the database servers; performance is always a distant second to reliability.

All modern x86 servers overclock themselves automagically.
Get real - shit working is #1. Shit being fast or cheap is number 2. Shit being reliable or secure is number 39057.

Re:All Jokes Aside... Still No. (1)

Anonymous Coward | about 9 months ago | (#44335569)

Most people these days don't know about the great congestion collapse of '86.

Re:All Jokes Aside... Still No. (5, Interesting)

Intropy (2009018) | about 9 months ago | (#44335201)

We're already in that boat. One of the reasons it's so hard to make changes is that nobody really knows why the internet works. We know how and why individual networks work. We can understand and model RIP and OSPF just fine. And we know how BGP operates too. But the large-scale structure is a mess. It's unstable. The techniques we use could easily create disconnected or even weakly connected networks. But they don't, except for the occasional single autonomous system falling off. We've built ourselves a nice big Gordian knot already. We know what it's made of, and we know how it operates, but good luck actually describing the thing.

Come on now (5, Insightful)

dbIII (701233) | about 9 months ago | (#44335357)

As complex systems go, there are far worse. Go ask an engineer or a scientist.

Re:Come on now (5, Interesting)

Daniel Dvorkin (106857) | about 9 months ago | (#44335537)

As complex systems go, there are far worse. Go ask an engineer or a scientist.

I am a scientist--specifically, a bioinformaticist, which means I try to build mathematical and computational models of processes in living organisms, which are kind of the canonical example of complex systems. And I will cheerfully admit that the internet, taken as a whole, is at least as complex as anything I deal with.

Re:All Jokes Aside... Still No. (5, Interesting)

Animats (122034) | about 9 months ago | (#44335393)

One of the reasons it's so hard to make changes is that nobody really knows why the internet works.

We still don't know how to deal with congestion in the middle of a pure datagram network. The Internet works because last-mile congestion is worse than backbone congestion. If you have a backbone switch today with more traffic coming in than the output links can handle, the switch is unlikely to do anything intelligent about which packets to drop. Fortunately, fiber optic links are cheap enough that the backbone can be over-provisioned.

The problem with this is video over the Internet. Netflix is a third of peak Internet traffic. Netflix plus Youtube is about half of Internet traffic during prime time. This is creating demand for more and more bandwidth to home links. Fortunately the backbone companies are keeping up. Where there's been backbone trouble, it's been more political than technical. It also helps that there are so few big sources. Those sources are handled as special cases. Most of the bandwidth used today is one-to-many. That can be handled. If everybody was making HDTV video calls, we'd have a real problem.

(I was involved with Internet congestion control from 1983-1986, and the big worry was congestion in the middle of the network. The ARPANET backbone links were 56Kb/s. Leased lines typically maxed out at 9600 baud. Backbone congestion was a big deal back then. This is partly why TCP was designed to avoid it at all costs.)

Re:All Jokes Aside... Still No. (2, Interesting)

afidel (530433) | about 9 months ago | (#44335203)

Meh, it's like the AI-designed antenna: we don't have to know WHY it works better, just that it does and how to build a working copy.

Re:All Jokes Aside... Still No. (5, Insightful)

techhead79 (1517299) | about 9 months ago | (#44335279)

we don't have to know WHY it works better, just that it does and how to build a working copy

But the fact that it does work better means we're either missing a part of the picture that is obviously important or the AI version is leveraging quirks with the system that no current model we have represents. I'm shocked to read that anyone would be comfortable just ignoring the why of something just so we can progress beyond our understanding. If we don't understand the why then we're missing something very important that could lead to breakthroughs in many other areas. Do not let go of the curiosity that got us here to begin with.

Re:All Jokes Aside... Still No. (5, Insightful)

blankinthefill (665181) | about 9 months ago | (#44335367)

I don't think they just drop the questions and run with it. I'm pretty sure that, when we don't understand how things that are useful work, we just implement them... and study them at the same time. I guarantee you that SOMEONE, at least, is studying why an AI antenna works better than our man-designed ones, and they're doing it for the very reasons that you mention. But I think the point the GP was trying to get at is that we've never let our ability to not understand things hinder our adoption of those very things in the past, and as long as we have good evidence that this thing performs correctly, and we can replicate it, then why wouldn't we use it at the same time we study it?

Re:All Jokes Aside... Still No. (1)

maxwell demon (590494) | about 9 months ago | (#44335411)

ability to not understand things

I didn't know that not understanding things now counts as ability. I guess that gives a lot of people a lot of abilities in a lot of fields. ;-)

Re:All Jokes Aside... Still No. (4, Interesting)

Ichijo (607641) | about 9 months ago | (#44335387)

So we built a computer that figured out the answer [wikia.com]. Now we just need to build an even bigger computer to figure out the question!

Re:All Jokes Aside... Still No. (2, Informative)

Anonymous Coward | about 9 months ago | (#44335493)

> I'm shocked to read that anyone would be comfortable just ignoring the why of something just so we can progress beyond our understanding.

ML often works like that.

You put the inputs into a function... it spits out a model. The model can be considered as an optimal orthonormal basis for the space it was encoding, but it's REALLY REALLY hard to understand the dimensions that basis is in. Sometimes, you can take an image categorization model and see "ah, this is the blue shirt dimension. It seems that people wearing blue shirts are far along this axis"... but most times, you have NO IDEA what the model is capturing.

Re:All Jokes Aside... Still No. (5, Informative)

jkflying (2190798) | about 9 months ago | (#44335501)

It's not that we don't understand *why* something like a genetic-algorithm-designed antenna works so well. We can evaluate its performance using Maxwell's equations and say, "Yes, it works well," without ever having to build the thing. What we don't have is a set of guidelines or 'rules of thumb' that can result in an antenna design that works just as well.

The difference is that the computer evaluates a billion antennas for us, doing some sort of high-dimensional genetic optimisation on the various lengths of the antenna components. It doesn't 'understand' why it gets the results it does. We do, because we understand Maxwell's equations and we understand how genetic optimisation works. But Maxwell's equations only work for evaluating a design, not for giving a tweak which will improve it. And we're dealing with too many variables that are unknown to have a closed-form solution.

As for this algorithm, they basically did the same thing. They defined a fitness function and then repeatedly varied the code they were using to find the best sequence of code. However, unlike the antenna analogy, they used actual equipment to evaluate the fitness function, not just a model. This means that they don't have an accurate model, which means that your complaint that we don't know why this works is entirely valid, and the antenna analogy is not =)
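
As a toy illustration of the loop described above (define a fitness function, evaluate huge numbers of candidates, keep the best), here's a minimal mutate-and-select search in Python. The helpers simulate, make_random, and mutate are placeholders for whatever model or testbed scores and perturbs a candidate; this is a sketch of the general technique, not MIT's actual optimizer.

    import random

    def fitness(candidate, simulate):
        # Score one candidate: reward throughput, penalize delay.
        # The 0.5 weighting is arbitrary and purely illustrative.
        throughput, delay = simulate(candidate)
        return throughput - 0.5 * delay

    def evolve(simulate, make_random, mutate, generations=100, pop_size=50):
        population = [make_random() for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(population, key=lambda c: fitness(c, simulate), reverse=True)
            parents = ranked[:pop_size // 5]                      # keep the top 20%
            children = [mutate(random.choice(parents))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=lambda c: fitness(c, simulate))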

Re:All Jokes Aside... Still No. (0)

Anonymous Coward | about 9 months ago | (#44335527)

Add to this that while an antenna can be tested by receiving all bands at all possible orientations, and a failure affects only one receiver, this system can't be tested in all its combinations, and a quirk might cause a lot of disservice to the rest of the network.

If we really really can't get to understand it, better to deploy with extreme care.

Re:All Jokes Aside... Still No. (5, Interesting)

seandiggity (992657) | about 9 months ago | (#44335561)

We should keep investigating why it works, but, to be fair, the history of communications is one of implementing tech before we understand it (e.g. the first trans-Atlantic cable, laid before we understood wave-particle duality, which meant we couldn't troubleshoot it well when it broke).

Let's not forget this important quote: "I frame no hypotheses; for whatever is not deduced from the phenomena is to be called a hypothesis; and hypotheses, whether metaphysical or physical, whether of occult qualities or mechanical, have no place in experimental philosophy."

...that's Isaac Newton telling us, "I can explain the effects of gravity but I have no clue WTF it is."

Re:All Jokes Aside... Still No. (4, Interesting)

Daniel Dvorkin (106857) | about 9 months ago | (#44335563)

I'm shocked to read that anyone would be comfortable just ignoring the why of something just so we can progress beyond our understanding.

If you insist that we know why something works before we make use of it, you're discarding a large portion of engineering. We're still nowhere near a complete understanding of the laws of physics, and yet we make machines that operate quite nicely according to the laws we do know (or at least, of which we have reasonable approximations). The same goes for the relationship between medicine and basic biology, and probably for lots of other stuff as well.

If we don't understand the why then we're missing something very important that could lead to breakthroughs in many other areas. Do not let go of the curiosity that got us here to begin with.

I don't think anyone's talking about letting go of the curiosity. They're not saying, "It works, let's just accept that and move on," but rather, "It works, and we might as well make use of it while we're trying to understand it." Or, from TFA: "Remy's algorithms have more than 150 rules, and will need to be reverse-engineered to figure out how and why they work. We suspect that there is considerable benefit to being able to combine window-based congestion control with pacing where appropriate, but we'll have to trace through these things as they execute to really understand why they work."

Re:All Jokes Aside... Still No. (1)

uglyduckling (103926) | about 9 months ago | (#44335567)

I'm not shocked at all. This is just an automated form of hand-optimisation. Plenty of products and algorithms end up in regular use that have been tweaked intuitively (or algorithmically) without really understanding why the tweaking improved it. Plenty of engineering research is about providing models for existing systems to understand why best in class designs work the best. If we held back empirically proven designs until the theory was completely understood we'd never progress with anything.

Re:All Jokes Aside... Still No. (1)

Anonymous Coward | about 9 months ago | (#44335603)

I'm shocked to read that anyone would be comfortable just ignoring the why of something just so we can progress beyond our understanding.

Or, y'know, they release the source and get help from everyone else to work out wtf is going on.

Re:All Jokes Aside... Still No. (4, Interesting)

Clarious (1177725) | about 9 months ago | (#44335239)

Think of it as solving a multiobjective optimization problem using heuristic algorithms/machine learning. You can't solve the congestion problem completely, as it is computationally infeasible; they just use machine learning to find the (supposedly) optimal solution. Read TFA, it is quite interesting. I wonder if we can apply that to the Linux writeback algo to avoid the current latency problem (try copying 8 GB of data onto a slow storage medium such as an SD card or USB flash drive and prepare for 15+ second stalls!); the underlying problem is the same anyway.

Re:All Jokes Aside... Still No. (4, Insightful)

Clarious (1177725) | about 9 months ago | (#44335253)

A bit offtopic: roughly 10 years ago I came to /. and was amazed by the technological insight/information in the comments here. Now more than half of the comments are jokes about Skynet without any insight into or understanding of what TFA is about. Of course, informative posts can still be found, but Slashdot has fallen quite low...

Re:All Jokes Aside... Still No. (1)

Anonymous Coward | about 9 months ago | (#44335417)

It's called "Eternal September," look that up!
It's a great bit of history and it really is eternal.

captcha: impacts
well isn't that cute! /. is gettin all sentient on me with the captchas again.

Re:All Jokes Aside... Still No. (0)

Anonymous Coward | about 9 months ago | (#44335473)

It's 12:30 at night and the story has only been up for an hour. Did you really expect anything interesting to unfold in this short amount of time?

Re:All Jokes Aside... Still No. (2)

pedestrian crossing (802349) | about 9 months ago | (#44335525)

The jokers are the ones who didn't read TFA. So sad. Especially when TFA is such a good read and is actually "News For Nerds"!

Re: All Jokes Aside... Still No. (1)

Anonymous Coward | about 8 months ago | (#44335829)

With a rise in popularity I suppose this is an inevitable outcome. Luckily it hasn't declined to the level of sites like reddit (yet).

Re:All Jokes Aside... Still No. (0)

cheekyboy (598084) | about 9 months ago | (#44335431)

Why not just measure your write speeds, keep an average, and limit the read speed to no more than 150% of that, or your read buffer to 1.5 seconds' worth of write speed?

Ahh, but that doesn't work in a layered, structured system unless you had IO feedback to the reader to tell it to slow down.

If you do your own read IO / write IO loops it's harder for the OS to know, but if you had an OS function like Windows's CopyFile that copies from src to dst in one call, then the OS could do it. But in Linux, well, where is the copy-file API? There is none.

There are too many non-connected separate layers in Linux; it's time for the Linux kernel to start growing into linux-clib, linux-x11, linux-io.

For Linux to rely on GNU or GNOME or 3rd parties is not good enough for its future, when Google is redefining Linux by adding its missing parts for Android/ChromeOS.

There are probably 10 different ways to copy a file in Linux, none of which use the same API. All probably have their different speeds and bugs.
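
A toy sketch of the pacing idea above, assuming you control the copy loop in user space (this is not an existing kernel or libc facility, and the periodic fsync is only there so the measured rate reflects the device rather than the page cache):

    import os
    import time

    def paced_copy(src_path, dst_path, chunk=1 << 20, headroom=1.5, sync_every=16):
        """Copy src to dst, keeping the read rate within ~headroom x the measured write rate."""
        read_bytes = 0
        write_rate = None
        start = time.monotonic()
        with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
            chunks = 0
            while True:
                buf = src.read(chunk)
                if not buf:
                    break
                dst.write(buf)
                read_bytes += len(buf)
                chunks += 1
                if chunks % sync_every == 0:
                    dst.flush()
                    os.fsync(dst.fileno())          # push dirty pages to the device
                    elapsed = max(time.monotonic() - start, 1e-6)
                    write_rate = read_bytes / elapsed
                if write_rate:
                    # Sleep if we've read more than the device (plus headroom) has absorbed.
                    allowed = write_rate * headroom * (time.monotonic() - start)
                    if read_bytes > allowed:
                        time.sleep((read_bytes - allowed) / (write_rate * headroom))
            dst.flush()
            os.fsync(dst.fileno())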

Re:All Jokes Aside... Still No. (2)

Clarious (1177725) | about 9 months ago | (#44335509)

It is not that simple. Take flash memory, for example: if the blocks are erased then the write will be very fast, but the write speed will slow to a crawl if they aren't. You can't predict the writeback latency at all; you can only (heuristically) adapt to it. As for GNU/Linux's complexity, I don't think there is any problem with it. Most IO operations are cached in memory; the latency problem only appears when you need to flush down to the storage medium. I have read somewhere that Linux is optimized for throughput workloads (for big servers), so desktop users have to suffer, even though for them responsiveness is more important than throughput.

... but if it breaks, who's going to fix it? (0)

Anonymous Coward | about 9 months ago | (#44335721)

More Intelligent Troubleshooters

Re:All Jokes Aside... Still No. (1)

ndogg (158021) | about 8 months ago | (#44335755)

To be fair, I didn't see anything from them saying that everyone should go out and implement this right now. They are going to reverse engineer it to understand what is going on, and once they do, I'm sure they'll only propose implementations after that, based on their findings.

Re:All Jokes Aside... Still No. (1)

hairyfeet (841228) | about 8 months ago | (#44335809)

Why not? You can always go back to the old way if it goes tits up down the line. If the thing gives you X+Y where our current system gives you X, and it costs nothing to implement (no new hardware, no days spent reprogramming the software), why not go for it?

As long as it doesn't require weeks to switch the network back if it doesn't work down the line, I see no reason not to give it a shot. Not like we couldn't all use a free speed boost, right?

err, can you walk me through it? (4, Insightful)

tloh (451585) | about 9 months ago | (#44335111)

they admit they don't know why the system works

I'm guessing the next big revolution in AI is the quest to figure out how to get digital problem solvers to teach us meat heads how they actually figured this stuff out.

Re:err, can you walk me through it? (2)

osu-neko (2604) | about 9 months ago | (#44335259)

I'm guessing the next big revolution in AI is the quest to figure out how to get digital problem solvers to teach us meat heads how they actually figured this stuff out.

The thing is, we already know exactly how they figured it out -- we wrote the instructions they followed to do so. We just don't understand the solution.

when AI breaks the computer hacking law? (0)

Anonymous Coward | about 9 months ago | (#44335735)

Then what? You can't threaten it with 30 years in prison like they did Aaron Swartz.

treason? (1, Funny)

Xicor (2738029) | about 9 months ago | (#44335125)

Would it be considered treason for MIT to develop Skynet and have it destroy the US?

Ask the AI's... (0)

Anonymous Coward | about 9 months ago | (#44335143)

after they've destroyed what they were supposed to protect.

Maybe that'll cause them to self destruct :)

Re:treason? (1)

Anonymous Coward | about 9 months ago | (#44335161)

Not on MIT's part. Once skynet becomes self aware, it will be considered its own person, and skynet could be charged with treason. It's like how they can't actually charge your parents with murder if you kill someone, even if they taught you how to shoot and that it was ok to kill people.

Re:treason? (1)

maxwell demon (590494) | about 9 months ago | (#44335447)

It's like how they can't actually charge your parents with murder if you kill someone, even if they taught you how to shoot and that it was ok to kill people.

Certainly not for teaching you to shoot. But I'm not so sure about teaching you that it was OK to kill people.

If they ordered you to kill people, or offered you an incentive, they would certainly be found guilty. That would basically be like hiring a killer. So the question is at which point exactly it stops.

Re:treason? (1)

Oligonicella (659917) | about 9 months ago | (#44335589)

"Once skynet becomes self aware, it will be considered its own person..."

Why? There are numerous self-aware animals that aren't considered persons. It wouldn't be charged with treason; it would be disabled.

Re:treason? (0)

Anonymous Coward | about 9 months ago | (#44335261)

Only if in the process, skynet reveals information about the internal working of the NSA.

Re:treason? (0)

Anonymous Coward | about 9 months ago | (#44335321)

No, anyone as powerful as Skynet would never be charged with anything.

Re:treason? (0)

dbIII (701233) | about 9 months ago | (#44335369)

Would it be considered treason for MIT to develop Skynet and have it destroy the US?

Not a chance, the Republican party has been trying to do that since Ford and they still haven't managed to destroy the place.

asdasd (0)

Anonymous Coward | about 9 months ago | (#44335159)

Will there ever be a day where humans and robots can peacefully coexist?

Headline epic fails. (2, Informative)

girlintraining (1395911) | about 9 months ago | (#44335181)

This isn't a redesign of TCP. The network is still just as stupid as it was before; it's just that the local router has had QoS tweaked to be more intelligent. By a considerable margin, too. Reviewing the material, it seems to me like it's utilizing genetic algorithms, etc., to predict what's coming down the pipe next and then pre-allocating buffer space; rather like a predictive cache. Current QoS methods do not do this kind of predictive analysis -- they simply bulk traffic into queues based on header data, not payload.

It comes as no surprise to me that predictive/adaptive caching beats sequential/rule-based caching. They've been doing it with CPUs and compilers since, uhh... the 80386 processor. TCP/IP was designed before there was much thought being put into pipelining, caching, parallelization, etc. Using modern algorithms and the better understanding of information system design that's come from 30 years of study results in a noticeable improvement to performance? Shocking...

Re:Headline epic fails. (5, Interesting)

Lord_Naikon (1837226) | about 9 months ago | (#44335301)

Huh? Did you read the same article as I did? As far as I can tell, the article is about a TCP congestion control algorithm, which runs on both endpoints of the connection, and has nothing to do with QoS on intermediate routers. The algorithm generates a set of rules based on three parameters resulting in a set of actions to take like increasing advertised receive window and tx rate control. The result of which is a vastly improved total network throughput (and lower latency) without changing the network itself.

I fail to see the relevance of predictive/adaptive caching. It isn't even mentioned in the article.

Re:Headline epic fails. (0, Interesting)

Anonymous Coward | about 9 months ago | (#44335319)

Everything everyone ever says is wrong on the Internet and especially on Slashdot. Some folks just can't wait to start typing so they can tell everyone how wrong they are about everything without even knowing what the fuck they are talking about. I find it is best to ignore them as their lives are typically so sad that it rouses my considerable empathy and I just wind up feeling sorry for them rather than doing something useful.

Re:Headline epic fails. (0)

Anonymous Coward | about 9 months ago | (#44335471)

For what it's worth: I concluded the same thing you did after reading the article. This is purely a congestion control algorithm change of sorts which MIT isn't entirely sure how/why it works. QoS has no bearing on any of this (and if anything this acts as validation that people continually use the word QoS without actually knowing what the fuck it is/how it fits into the picture/how it operates alongside existing algorithms like Reno/Vegas, Nagle, etc.). I swear to god QoS is a buzzword these days.

Re:Headline epic fails. (1)

Kal Zekdor (826142) | about 9 months ago | (#44335681)

Huh? Did you read the same article as I did? As far as I can tell, the article is about a TCP congestion control algorithm, which runs on both endpoints of the connection, and has nothing to do with QoS on intermediate routers. The algorithm generates a set of rules based on three parameters resulting in a set of actions to take like increasing advertised receive window and tx rate control. The result of which is a vastly improved total network throughput (and lower latency) without changing the network itself.

I fail to see the relevance of predictive/adaptive caching. It isn't even mentioned in the article.

I think the GP got confused by the "Machine Learning" part of the headline, and thought that the network algorithm uses some sort of adaptive mechanism. What the software actually does is use genetic learning (i.e. natural selection) to generate sets of network algorithms, each generation of which is better than the one before (that's how genetic AI works). The actual algorithms in question are a set of static rules, not much different in function from the existing TCP algorithms, just more efficient.

Re:Headline epic fails. (0)

Anonymous Coward | about 8 months ago | (#44335793)

Don't mind girlintraining. He or she is just, once again, being arrogant and thinking that they are smarter than everyone else by claiming to have hit on some insight years before. Pity they didn't write it down or speak of it to anyone else; otherwise they might have some proof of their claim and wouldn't look like a Wikipedia professor.

Re:Headline epic fails. (0)

Anonymous Coward | about 9 months ago | (#44335327)

The rules are simple because they are fast. The rules are fast because they are simple. Problem is, we just don't have the low-latency, real time processing power necessary for this kind of work at a price point that is tolerable in a router. People balk at only several thousand dollars for two dozen high speed interfaces, what do you think will happen when some sales rep says they can implement this 'smart algorithm' to double the speed by also adding a zero to the price tag?

Not gonna happen. It is cheaper to just add more or faster links than do this. In other words: AI is stupid.

Re:Headline epic fails. (0)

Anonymous Coward | about 9 months ago | (#44335347)

And comment subject promptly schools headline in failure.

Re:Headline epic fails. (1)

Forever Wondering (2506940) | about 9 months ago | (#44335575)

Admittedly, I've bookmarked this article for later perusal. That said, it strikes me that the following might already fit the bill:

http://arstechnica.com/information-technology/2012/05/codel-buffer-management-could-solve-the-internets-bufferbloat-jams/ [arstechnica.com]

Unlike other active queue management (AQM) algorithms, CoDel is not interested in the sizes of the queues. Instead, it measures how long packets are buffered. Specifically, CoDel looks at the minimum time that packets spend in the queue. The maximum time relates to good queues which resolve quickly, but if the minimum is high, this means that all packets are delayed and a bad queue has built up. If the minimum is above a certain threshold—the authors propose 5 milliseconds—for some time, CoDel will drop (discard) packets. This is a signal to TCP to reduce its transmission rate. If the minimum delay experienced by buffered packets is 5ms or less, CoDel does nothing.
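
A much-simplified sketch of the decision described in that excerpt (this omits real CoDel's drop scheduling and control law; the class name and structure are illustrative): on each dequeue you would call should_drop(now - enqueue_time) and discard the packet when it returns True.

    import time

    TARGET = 0.005      # 5 ms: acceptable standing queue delay
    INTERVAL = 0.100    # 100 ms: how long the delay must stay high before acting

    class CoDelish:
        """Toy version of CoDel's core test: is the minimum sojourn time stuck above TARGET?"""
        def __init__(self):
            self.above_since = None      # when packet delay first exceeded TARGET

        def should_drop(self, sojourn_time):
            now = time.monotonic()
            if sojourn_time < TARGET:
                self.above_since = None  # a "good" queue: it drains below the target
                return False
            if self.above_since is None:
                self.above_since = now   # start the clock on a possible standing queue
                return False
            # A "bad" queue: even the best-case delay has stayed above TARGET
            # for a full INTERVAL, so drop to signal TCP to back off.
            return now - self.above_since >= INTERVAL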

Re:Headline epic fails. (1)

Anonymous Coward | about 8 months ago | (#44335841)

This isn't Interesting, mods, unless we're taking an interest in unrelated discussions. This article has nothing to do with QoS. It's about TCP congestion control algorithms on both ends of the connection.

they admit they don't know why the system works (1)

Kevin Fishburne (1296859) | about 9 months ago | (#44335197)

That statement from the summary can't be accurate, unless their definition of "why" is way too specific.

Re:they admit they don't know why the system works (0)

Anonymous Coward | about 9 months ago | (#44335337)

Why do you say that? Answering "It works because of machine learning" isn't specific enough, and it's believable that they're unable to say more if they didn't look closely at the solution.

This version is easier to wiretap (-1)

Anonymous Coward | about 9 months ago | (#44335213)

My best guess at the real story:

They are funded by the NSA, or some other US Government group.

They are lying about the speed. This either makes it easier to wire tap our internet connections, or has such tapping built in.

Real world data? (0)

Anonymous Coward | about 9 months ago | (#44335251)

So is the source in a state where somebody could download it and use it on a real network? Skynet is one thing, but if this can reduce lag in an online game, somebody is going to try it.

Deep sim (1)

mattr (78516) | about 9 months ago | (#44335269)

Looks interesting. It would be more interesting if it were possible to model incomplete rollout, and rollout at more than just the endpoints. IPv6 rollout is another rational/superrational mix, and instead we've got ubiquitous NAT, right?

The mention of being unable to do an upload during a Skype call... is that true?

Conceivably a whole-network deep simulation could be updated based on real traffic patterns, providing suggestions for network upgrades / additional cross-connects, so end-user requirements could drive network buildout, not carrier profits. That's cool.

Not sure if the plan is to bake this into hardware and then allow downloading new firmware as the network evolves, or to have some safe code repository that is compiled at the endpoints. I can predict a shitstorm over that...

MIT is not the Borg (2, Insightful)

CuteSteveJobs (1343851) | about 9 months ago | (#44335271)

Kudos, but can't OP say "MIT researchers Keith Winstein and Hari Balakrishnan"? Despite the best efforts of their AI labs, MIT is not the Borg. When someone who works for MIT buys an orange juice, "MIT" has not bought an orange juice.

And if they have software that can outcode me, COOL! How many professions are this lax with job security? :-)

Re:MIT is not the Borg (0)

Anonymous Coward | about 8 months ago | (#44335743)

It depends. It's different from sports where the achievement is not dependent on the "organization". Running a marathon is doable without the backers. These are not generic implementations so the credit goes to MIT. Individuals play a very small part in all of that.

The Traveling Salesman Problem (2)

SpectreBlofeld (886224) | about 9 months ago | (#44335329)

For some reason, while reading the FAQ writeup, the traveling salesman problem sprang to mind. I don't understand either well enough to know why.

Can anyone educated enough in both issues explain if they are similar computational problems, and why?

The blurb is flat out wrong. (-1)

Anonymous Coward | about 9 months ago | (#44335339)

The blurb says it "redesigns TCP/IP", and the article itself specifically says "congestion control". Which is NOT part of TCP/IP design. Congestion control is a routing feature.

Re: The blurb is flat out wrong. (0)

Anonymous Coward | about 9 months ago | (#44335427)

Congestion control is a core part of TCP. Read a book.

how it actually worked: (0)

Anonymous Coward | about 9 months ago | (#44335437)

computer: most of this stuff is useless, sure i'll send it....
user: wow, so much faster!

know why the system works (1)

l3v1 (787564) | about 9 months ago | (#44335441)

"they admit they don't know why the system works, only that it goes faster than normal TCP"

Well, not the best premise for redesigning an infrastructure used by billions. Troubleshooting something you don't quite understand? Right.

Article Explained (1)

Anonymous Coward | about 9 months ago | (#44335645)

From the summary:
Remy is a computer program that figures out how computers can best cooperate to share a network.
Remy creates end-to-end congestion-control algorithms that plug into the Transmission Control Protocol (TCP). These computer-generated algorithms can achieve higher performance and greater fairness than the most sophisticated human-designed schemes.

Key limitations are:
- It's a scheme to generate network-specific algorithms based on the known characteristics of the network devices and network links.
- Every node in the network needs to run the "optimal algorithm" generated by Remy for that network. If some nodes run "cubic" or some other congestion control algorithm, you won't get optimal output.
- If some node characteristics or the network topology change (e.g. you swap your hardware, or some node goes down), you need to run the algorithm generation again.

Results are from simulation (1)

Frans Faase (648933) | about 9 months ago | (#44335717)

If you read the page, you will see that the results are from a simulation and not based on experiments in a real network. And the stated performance only holds under certain stable conditions. Some remarks seem to imply that if you are moving around (as with a mobile device) the results no longer apply. Still, I believe that machine learning techniques could outperform human-coded algorithms, but probably not by as much as the 'theoretical' results presented in this research/paper.

NOT machine learning (YAMH) (3, Interesting)

dltaylor (7510) | about 8 months ago | (#44335791)

Yet Another Misleading Headline

The paper states quite clearly that once the simulation has produced an algorithm, it is static in implementation.

The authors give a set of goals and an instance of a static network configuration and run a simulation that produces a send/don't send algorithm FOR THAT NETWORK, in which all senders agree to use the same algorithm.

While this looks like very interesting and useful research, it has nothing to do with systems that learn from and adapt to real world networks of networks.
