
BitTorrent Calls UDP Report "Utter Nonsense"

kdawson posted more than 5 years ago | from the internet-still-alive-film-at-11 dept.

The Internet

Ian Lamont writes "BitTorrent has responded to a report in the Register that suggested uTorrent's switch to UDP could cause an Internet meltdown. Marketing manager Simon Morris described the Register report as 'utter nonsense,' and said that the switch to uTP — a UDP-based implementation of the BitTorrent protocol — was intended to reduce network congestion. The original Register report was discussed enthusiastically on Slashdot this morning."


238 comments


Here Here (-1, Offtopic)

alexborges (313924) | more than 5 years ago | (#25953379)

I call it nonsense too.

Where where? (5, Informative)

Emperor Zombie (1082033) | more than 5 years ago | (#25953703)

"Hear", not "here".

Re:Where where? (4, Funny)

binarylarry (1338699) | more than 5 years ago | (#25954115)

Their their now, let the poor unedjumicated bastard post in piece.

I mean, those in glass houses shouldn't through stones. ;)

Re:Where where? (4, Funny)

Cylix (55374) | more than 5 years ago | (#25954215)

You misspelled bastige.

If you're glass house is made of bullet proof glass then it's OK to throw stones.

Re:Where where? (5, Informative)

Zironic (1112127) | more than 5 years ago | (#25954239)

I'm not sure, bouncing stones could be pretty painful.

Re:Where where? (0)

Anonymous Coward | more than 5 years ago | (#25954463)

oh so now wear trowing bounsing stones in bullit proofe glase ??? whose the unedjumacatedest around hear now punk? Stones don't bounce!

Re:Where where? (-1, Offtopic)

hairyfeet (841228) | more than 5 years ago | (#25954565)

Ain't it just like you dam yankees to go throwin dem good rocks and then wastin it by makin the dang place shotgun proof. Y'all just suck the big wet titty when it comes to a good rock throwin. So y'all keep them dadburned crazy yankee ways to yousef,ya hear?

Re:Where where? (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#25954777)

Stones don't bounce!

I dunno, my stones seem to bounce real good off yo mama's chin.

Re:Where where? (1)

danomac (1032160) | more than 4 years ago | (#25955369)

I'm not sure, bouncing stones could be pretty painful.

Depends where it bounces off of. If it bounces off the head you may be unconscious for a while, not necessarily in pain. Lower down, not only will you be in pain, you'll be talking like the chipmunks too.

Re:Where where? (0)

Anonymous Coward | more than 5 years ago | (#25954401)

I agree, but only if you are glass house is made of bulletproof glass.

Re:Where where? (1)

JackieBrown (987087) | more than 5 years ago | (#25954631)

woosh

Re:Where where? (1)

ShieldW0lf (601553) | more than 4 years ago | (#25955169)

No hear here...

Re:Where where? (0)

Anonymous Coward | more than 4 years ago | (#25955393)

Woosh right back to you.

Cylix didn't mean to use that as his joke. Stop trying to downplay his inability to spell as though it was intentional.

Re:Where where? (0)

Anonymous Coward | more than 4 years ago | (#25954809)

If you're glass house is maid of bullet proof glass than its OK too through stones.

Fixed that for you.

Re:Where where? (5, Funny)

PJ The Womble (963477) | more than 5 years ago | (#25954461)

A friend of mine made a lot of money selling marijuana, but then got word that the heat was on him, so he absconded to his own island with the stash, and disguised it from the prying eyes of the DEA by building a mansion entirely from bricks of Acapulco Gold. He decided to launder his money by starting a collection of expensive gold and velvet chairs from the ancient royal houses of the world.

Of course, the whole enterprise ended in tears when he bought one too many 24-karat armchair and the building simply disintegrated. He was whining to me on the phone afterwards and I could only offer him the following simple advice: "People in grass houses shouldn't stow thrones".

Sorry.

Re:Where where? (0)

Anonymous Coward | more than 4 years ago | (#25955091)

There there now.....

Fixed it for ya

Re:Where where? (2, Funny)

julian67 (1022593) | more than 5 years ago | (#25954597)

Maybe he was calling his dog?

Re:Where where? (1)

ScrewMaster (602015) | more than 5 years ago | (#25954623)

"Hear", not "here".

I can't hear here because there's too damn much noise.

Best of intentions (5, Interesting)

seanadams.com (463190) | more than 5 years ago | (#25953385)

BT may have the best of intentions here in developing this experimental protocol, but this quote leads me to believe that their understanding of the problem is terribly naive:

It so happens that the congestion control mechanism inside TCP is quite crude and problematic. It only detects congestion on the internet once "packet loss" has occurred - i.e. once the user has lost data and (probably) noticed there is a problem.

Packet loss is a normal and deliberate mechanism by which TCP detects the maximum throughput of a path. Periodically it increases the number of packets in flight until the limit is reached, then it backs off. You have to test again from time to time, in order to increase throughput if more capacity becomes available. This in no way incurs "loss of data" or a noticeable problem. Packets lost due to congestion window growth are handled by the fast retransmit algorithm, which means that there is no timeout or drop in throughput (that would be pretty stupid if the whole purpose of growing the congestion window is to _maximize_ throughput).
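The probe-then-back-off cycle described here (additive increase, multiplicative decrease) can be sketched in a few lines. This is a toy model, not a real stack: real TCP tracks the window in bytes and adds slow start and several recovery variants.

```python
def aimd(capacity, rounds):
    """Toy AIMD loop: grow the congestion window by one segment per
    round trip; on a (simulated) drop, halve it, as fast recovery does."""
    cwnd, history = 1, []
    for _ in range(rounds):
        history.append(cwnd)
        if cwnd > capacity:            # queue overflowed: a segment was dropped
            cwnd = max(1, cwnd // 2)   # multiplicative decrease
        else:
            cwnd += 1                  # additive increase, one segment per RTT
    return history

# The window saws back and forth around the path capacity; throughput
# never collapses to zero the way the quoted description implies.
history = aimd(capacity=16, rounds=50)
print(max(history), min(history[17:]))
```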

I wonder if Simon Morris was merely oversimplifying for the benefit of the layman, but I still find that statement disturbing. As I suggested in the other thread, it really sounds like they're going to reinvent TCP (poorly). That's not to say you couldn't design a better protocol specifically for point-to-multipoint transfer, but I question whether they're on the right track here.

Re:Best of intentions (3, Informative)

AndresCP (979913) | more than 5 years ago | (#25953479)

Yeah, that is a ridiculous description. No one uses TCP for real-time work, which is the only time a lost packet would be noticeable to any end user.

Re:Best of intentions (3, Insightful)

644bd346996 (1012333) | more than 5 years ago | (#25953481)

Detecting maximum throughput via packet dropping is really bad in high-latency links and in applications that need low latency. It is also apparently easy to implement TCP in such a way that overall transfer speed takes a nosedive when latency gets high, as evinced by Microsoft having done just that.

Re:Best of intentions (4, Informative)

seanadams.com (463190) | more than 5 years ago | (#25953537)

Detecting maximum throughput via packet dropping is really bad in high-latency links and in applications that need low latency.

I disagree. Please be more specific. First, what exactly do you believe is the problem? Secondly, how else would you do it (on an IP network)?

It is also apparently easy to implement TCP in such a way that overall transfer speed takes a nosedive when latency gets high, as evinced by Microsoft having done just that.

So you're saying it's possible to implement it with a bug? I've recently found a heinous bug in a recent Redhat kernel which would result in _deadlocked_ TCP connections. It happens to the best of us.

Re:Best of intentions (4, Informative)

TheRaven64 (641858) | more than 5 years ago | (#25953613)

No it isn't. The size of the window can be adjusted to compensate for this. With a larger window, you get more throughput over high-latency links at the expense of having to wait longer for retransmits when they are needed. Modern TCP stacks dynamically adjust the window size based on the average RTT for any given link.
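The window-versus-latency trade-off mentioned here is just the bandwidth-delay product; a quick back-of-envelope calculation (the numbers below are illustrative, not taken from the thread):

```python
def window_for(bandwidth_bps, rtt_seconds):
    """Bytes that must be in flight to keep a link full: the
    bandwidth-delay product (bandwidth in bits/s, RTT in seconds)."""
    return int(bandwidth_bps / 8 * rtt_seconds)

# 100 Mbit/s at 100 ms RTT needs about 1.25 MB in flight, far beyond
# the classic 64 KB window unless window scaling is negotiated.
print(window_for(100e6, 0.100))  # 1250000
```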

Re:Best of intentions (1)

644bd346996 (1012333) | more than 4 years ago | (#25955391)

I haven't used Vista, but I'm pretty sure that as of XP SP2, the TCP stack in Windows didn't fit that definition of modern.

For the purposes of this discussion, it doesn't really matter whether that's a flaw inherent in the protocol, or just a bug in the implementation with 90% market share. A real network will still have very sub-optimal performance under load if some of the important links have high latency, even if the bandwidth is high.

TCP is a good all-around protocol, but there are a lot of situations where it ends up sucking, and not all of those situations are obscure corner cases.

Re:Best of intentions (5, Informative)

boyko.at.netqos (1024767) | more than 5 years ago | (#25953493)

I agree with you Seanadams, but I just finished an interview with Simon Morris [networkper...edaily.com] and it's not that he's saying that the way TCP handles packet loss is a particular problem, he just thinks he can do better.

BitTorrent essentially already has its own methods to deal with dropped packets of information - it gets the information from elsewhere. Moving to UDP eliminates the three-way handshake, and it eliminates throttling the congestion window down in response to a dropped packet.

The only problem is that it also eliminates the Layer 4 [transport] traffic congestion safeguards, which is why BitTorrent is looking to establish new and better ones at layer 7 [application].

Re:Best of intentions (2, Insightful)

morgan_greywolf (835522) | more than 5 years ago | (#25953641)

And why, exactly, does Simon Morris think that he, single-handedly, can do better than the folks at the IETF, most of whom have PhDs in computer science? Seems a bit presumptuous, if you ask me.

Re:Best of intentions (5, Insightful)

naasking (94116) | more than 5 years ago | (#25953683)

Because application-specific knowledge allows easier and often better optimizations than application-generic protocols, which have to be good enough for all applications at the expense of top end performance for specific applications. Isn't it obvious?

Re:Best of intentions (4, Interesting)

Xelios (822510) | more than 5 years ago | (#25953849)

On top of this TCP hasn't seen a major update since the 80's. Most of it was implemented to deal with a very different internet than the one we have today. If you can side step TCP's shortcomings by doing the congestion control more efficiently at the application level then why not give it a shot?

Re:Best of intentions (4, Interesting)

CTachyon (412849) | more than 4 years ago | (#25955453)

On top of this TCP hasn't seen a major update since the 80's. Most of it was implemented to deal with a very different internet than the one we have today. If you can side step TCP's shortcomings by doing the congestion control more efficiently at the application level then why not give it a shot?

Uhh, TCP Vegas [wikipedia.org], TCP New Reno [faqs.org], BIC [wikipedia.org] and CUBIC [wikipedia.org]? All of which have been implemented in the Linux kernel?

TCP has only been standing still since the 80's if you're using an OS from the 80's... or a Microsoft OS.

Re:Best of intentions (4, Insightful)

ushering05401 (1086795) | more than 5 years ago | (#25953713)

Maybe he has spent years working with the technology he is trying to one up?

This relates directly to a reply I just finished on another thread regarding whether a degree is required for success.

This Morris character may be right, he may be wrong, but you citing the education level of his rivals blows your point out of the water IMO.

Re:Best of intentions (1, Insightful)

Free the Cowards (1280296) | more than 5 years ago | (#25953741)

Whatever TCP replaced was probably designed by a bunch of PhDs too. I really don't see your point.

Re:Best of intentions (0)

Anonymous Coward | more than 5 years ago | (#25954291)

TCP replaced TCP

Re:Best of intentions (5, Funny)

FictionPimp (712802) | more than 5 years ago | (#25953753)

I know, how dare anyone try to improve things without being old and having a PhD.

Fucking punk ass kids. Next thing we know we'll have guys with only master's degrees trying to write operating systems.

Re:Best of intentions (2, Funny)

spartacus_prime (861925) | more than 5 years ago | (#25954305)

So THAT explains Windows ME...

Re:Best of intentions (0)

Anonymous Coward | more than 5 years ago | (#25954545)

Wait, so does that also mean that Vista was written by Copyright Lawyers?

Re:Best of intentions (2, Insightful)

ScrewMaster (602015) | more than 4 years ago | (#25954663)

Wait, so does that also mean that Vista was written by Copyright Lawyers?

As it happens, yes it was. And when you think about, that explains a lot.

Re:Best of intentions (3, Interesting)

Anonymous Coward | more than 4 years ago | (#25955523)

I just have to mention this real quick...a bit off topic but I never get to point this out...

Albert Einstein = High School Graduate, College Professor. Kinda speaks for himself. NOT College Student
Thomas Edison = High School Dropout. Inventor of so many overly complicated things it makes...well...pretty much everyone's head spin.
Nikola Tesla = Last formal schooling was in the Croatian equivalent of 5th grade. Inventor of Alternating Current & the Tesla Coil amongst many other things.
Bill Gates = Dropped out after 2 semesters at a local technical college. If you don't know who he is, leave Slashdot immediately.

George W. Bush = The single dumbest person to ever receive a vote. Harvard Graduate with Masters Degree.

A college degree means that someone thought it was a good idea to pay anywhere from $80,000 to $500,000+ and waste 4 otherwise useful years of their life to get a piece of paper. NOTHING can be learned in ANY college or university that cannot be learned by a combination of reading books and talking to yourself in a mirror (since every single major requires a damn public speaking class.) It does not mean you are smart or intelligent and it was not a good choice or a good opportunity. It means you are perfectly happy living within the norms of a society which says if you haven't spent 4 years and lots of money to let someone else stand up and yell knowledge at you, you must be dumb, because there is no other fathomable way that you could acquire an equal amount of knowledge any faster or cheaper. Wake up and smell the damn coffee!

Go to college if you want a valid excuse to spend 4 additional years of your life without a job and with constant hangovers. Go to college if you're too damn immature to grow up and join the real world just yet. Still think Paris Hilton or Brad Pitt has ever made a single shred of important news? Go to college. Still think Bush was a good president? Go to college. For everyone else, for those with common sense and the ability to look in the mirror and not see someone who looks just like the idiot closest to you, drop out, boycott, or even burn it to the ground.

You may now return to your previous job of flaming the other commenters during the lecture.

Re:Best of intentions (5, Insightful)

Anonymous Coward | more than 5 years ago | (#25954023)

I wonder if you know what you're talking about. TCP is a great general-use protocol, as is UDP. But for specific cases, like this one, developers will tend to roll their own as an extension to UDP to fit their needs.
 
Take any modern networked game. It will not use straight TCP, as that's stupid, nor will it simply use UDP because that doesn't work, either. TCP is fine if you're writing apps that require data to get to the other end, without regards to time taken, but if you need a happy middle-ground you need to do it yourself.

Re:Best of intentions (4, Insightful)

Vellmont (569020) | more than 5 years ago | (#25954145)


does Simon Morris think that he, single-handedly, can do better than the folks at the IETF, most of whom have PhDs in computer science?

You really have a very strange over-estimation of the value of a PhD. It doesn't mean you're a super-genius, you're smarter than everyone (or even anyone) else, or that you're always right. It simply means you've been willing to go through some schooling. That's it. Hopefully you've learned something from that education.

There are plenty of examples of non-PhDs making major contributions. The WWW was largely invented by someone with a degree in physics (undergrad, I believe), with no degree in computer science. Linus Torvalds attained a mere master's degree in computer science, yet his OS seems to have become a bit more successful than quite a few other OSes written by people with more education.

Re:Best of intentions (5, Insightful)

DiegoBravo (324012) | more than 5 years ago | (#25954581)

And don't forget that those IETF PhDs couldn't design a better upgrade path for IPv4 than the incompatible and essentially non-interoperable IPv6 (please don't argue about dual stacks or anything similar.)

Re:Best of intentions (1)

rbarreira (836272) | more than 5 years ago | (#25953689)

By user I hope he means the program which is using TCP, not the human user at the computer.

Re:Best of intentions (0)

Anonymous Coward | more than 5 years ago | (#25954395)

Back in my day we hand coded our TCP transmissions, in edlin!

Re:Best of intentions (4, Informative)

sudog (101964) | more than 5 years ago | (#25953795)

Please mod parent up. This place is so damn full of armchair wannabe network experts who've clearly no understanding of how TCP congestion avoidance works that it's bordering on the physically painful.

Re:Best of intentions (1)

argent (18001) | more than 5 years ago | (#25953799)

As I sugggested in the other thread, it really sounds like they're going to reinvent TCP (poorly).

Aw, Bullwinkle, that trick never works!

This time for sure, Rocky!

Re:Best of intentions (2, Insightful)

cheater512 (783349) | more than 5 years ago | (#25953871)

BitTorrent doesn't need reliable communications, and congestion control can be tuned for BitTorrent connections.

IMHO it's a good move.

Re:Best of intentions (5, Insightful)

zmooc (33175) | more than 5 years ago | (#25954183)

TCP does two things at the cost of some overhead: it ensures packets arrive in order and it ensures they arrive. While doing that, it also has to ensure that in the case of network congestion, packets are resent within reasonable time while being fair and allowing each TCP connection an equal share of the speed and bandwidth. The problems TCP solves are at the root of the success of bittorrent; bittorrent is extremely good at spreading traffic in such a way that links that have those problems are avoided. It can do this since the order in which the packets arrive does not matter and in fact it does not matter whether they arrive from a certain host at all; it can simply request a lost packet from another host minutes or hours later. In the case of TCP/IP there is no provision to handle such a case; it keeps trying to get the packets at their destination in the right order as quickly as possible.

So none of the problems that TCP solves affect bittorrent and all the overhead that TCP causes, however small, serves no purpose in this case. Instead of many small TCP ACKs, resends, negotiation, and what else TCP does, bittorrent will do just fine with one status update every now and then, which it can conveniently combine with the packets that are sent the other way anyway. Therefore, IMHO, using UDP for bittorrent is fine; it will help spread bittorrent traffic even better over the fastest links while using fewer bytes of network traffic and it might even end up making it easier on the Internet since now bittorrent traffic no longer has to fight with other TCP connections for a fair share of the bandwidth; it can be tuned to get just a little bit less.
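The "request it from another host later" property is easy to sketch (hypothetical names; real clients add piece picking, timeouts, and rarest-first heuristics):

```python
def next_source(piece, peers_with_piece, failed):
    """Pick any remaining peer for a piece: arrival order and sender
    identity don't matter, so a loss is just a later request elsewhere."""
    for peer in peers_with_piece.get(piece, []):
        if peer not in failed:
            return peer
    return None  # no known source right now; retry when peers appear

# peer-a timed out; the block is simply fetched from peer-b instead.
swarm = {7: ["peer-a", "peer-b", "peer-c"]}
print(next_source(7, swarm, failed={"peer-a"}))  # peer-b
```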

Of course they can fuck it up completely and fill the poor pipes of the Internet with loads of packets that never arrive, but they don't have to; it is not inherent to this solution and therefore such rumours should be classified as... FUD.

Disclaimer: I did not RTFA and know practically nothing about TCP/IP :-)

Re:Best of intentions (3, Informative)

complete loony (663508) | more than 4 years ago | (#25954843)

Another advantage UDP has over TCP: there is no maximum half-open connection limit on Windows XP. So having a BitTorrent client running in the background should not have any impact (other than the available bandwidth, obviously) on every other application that is trying to open a TCP connection.

Re:Best of intentions (5, Insightful)

Aloisius (1294796) | more than 5 years ago | (#25954191)

There are a lot of good reasons why BitTorrent should use a UDP file transfer protocol, but I probably wouldn't put TCP's congestion control mechanism on the top of the list.

If you're going to argue UDP, you might as well bring out the major benefits:

* Going to a NAK-based or hybrid NAK/ACK-based protocol which can significantly improve performance over high latency or poor connections

* Multicast - assuming anyone implements IPv6 or multicast over the internet :)

* NAT to NAT transfers (you can do it with TCP, but it is just harder and you generally have to build a user-space TCP stack anyway).

* Faster start time, since you no longer have to do a three-way handshake and all the annoying things Microsoft does to prevent people from starting too many connections per second

There are plenty of UDP-based protocols with TCP-friendly congestion control mechanisms out there and plenty of research into the subject.
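The NAK idea in the first bullet can be sketched minimally (a hypothetical function, not from any named protocol): the receiver stays silent while data flows and reports only the gaps.

```python
def missing_seqs(received, highest_seen):
    """Receiver side of a NAK scheme: report only the gaps below the
    highest sequence number seen, instead of ACKing every packet."""
    return [seq for seq in range(highest_seen + 1) if seq not in received]

# Packets 3 and 7 were lost in flight; only those two get re-requested,
# so a clean high-latency link carries almost no reverse traffic.
print(missing_seqs({0, 1, 2, 4, 5, 6, 8}, 8))  # [3, 7]
```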

The biggest problems I see happening here revolve around different BitTorrent clients all reimplementing uTP and doing a poor job at it. I'd like to see a spec for uTP and a public domain implementation to help minimize the problems that could pop up.

Re:Best of intentions (1)

STrinity (723872) | more than 5 years ago | (#25954329)

Don't some ISPs intentionally drop packets as a way of blocking/throttling bittorrent? I can see how, in such a situation, the statement that packet loss is noticeable to users would be true.

Re:Best of intentions (2, Informative)

Todd Knarr (15451) | more than 5 years ago | (#25954459)

He was simplifying, and you can see it by looking at what happens before packet loss starts. When traffic loads increase, routers don't immediately start dropping packets. First they start queueing packets up for handling, trying their best to avoid packet loss. Only when their queues fill up do they start discarding packets. TCP waits until that's happened before it starts backing off. A better way is what uTP sounds like it's designed to do: watch for the increasing latency that signals routers are starting to queue packets up, and start backing off before the routers have to start dropping packets. Ideally you avoid ever overflowing the routers' queues and causing packet loss in the first place.
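That latency-watching approach is roughly what LEDBAT-style controllers do; a minimal sketch, assuming a 100 ms queueing-delay target and a unit gain (both constants are assumptions here, not taken from any uTP spec):

```python
def ledbat_step(rate, base_delay, current_delay, target=0.100, gain=1.0):
    """One delay-based control step: estimate queueing delay as the rise
    over the base one-way delay, then scale the rate up when queues are
    empty and down as delay nears the target, before any loss occurs."""
    queueing = current_delay - base_delay
    off_target = (target - queueing) / target
    return max(1.0, rate + gain * off_target * rate)

# Empty queues: speed up.  Deep queues: back off before packets drop.
print(ledbat_step(100.0, base_delay=0.020, current_delay=0.020))
print(ledbat_step(100.0, base_delay=0.020, current_delay=0.180))
```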

Historical note: TCP uses packet loss because the early routers didn't queue. As soon as they hit 100% of capacity they started dropping packets. It's worked acceptably up until now because another aspect, the adjustable transmit window size (which varies based on round-trip time), that was designed to avoid flooding the receive queue on the destination host also compensates for queues in the intermediate routers. But with more knowledge of how modern infrastructure has evolved, it's possible to do better. Especially when you're designing for a single purpose (uTP won't have to handle small-packet-size latency-sensitive protocols like telnet).

Re:Best of intentions (0)

Anonymous Coward | more than 4 years ago | (#25954795)

One of the more creative IETF folk once made a banner out of a DARE drug abuse banner..

"Datagram Abuse Resistance Education"
Further touting the slogan:
DARE to be congestion avoidant!

Might sound a little geeky but that one was quite well done and had me on the floor laughing.

Some credit needs to be given to the general purpose vs specific application perspective where TCP is quite complex and difficult to get right/optimal from a general purpose standpoint. However if you know the high level parameters of your application and are willing to put the effort in you may very well be able to create a better protocol for your specific purpose that is both less complex and better performing than TCP.

At the end of the day, oversubscribing available data channels or otherwise summoning the god of congestive collapse decreases overall BT performance and data transfer rates and does not win you any users.

A good example of where TCP currently sucks is the typical cable modem scenario. Unlimited download, very limited upload. Without having your BT client programmatically detect your upload ceiling or setting it explicitly, both upload and download throughput tank due to upload oversubscription interfering with receive-side ACKs.

TCP is also unusable for multi-media applications due to the head-of-line blocking issue...

Come to think of it TCP *DOES SUCK* :)

It is The Register... (4, Insightful)

kcbanner (929309) | more than 5 years ago | (#25953399)

...we should have known they are full of shit.

The Register (5, Funny)

QuantumG (50515) | more than 5 years ago | (#25953439)

Nonsense? From The Register? Ya don't say.

Re:The Register (1)

bluefoxlucid (723572) | more than 5 years ago | (#25953829)

How is this troll, but the immediate preceding comment insightful?

Re:The Register (1)

QuantumG (50515) | more than 5 years ago | (#25953929)

The immediate preceding comment was also modded troll.. probably by the same moderator.

To some people: troll = I disagree.

Re:The Register (0, Troll)

mhall119 (1035984) | more than 5 years ago | (#25953939)

Larger UID = Instant karma

Because we all hate people with lower UIDs than ourselves.

Re:The Register (1)

NeoTron (6020) | more than 4 years ago | (#25954757)

Get off my lawn!

In other news... (5, Funny)

kwabbles (259554) | more than 5 years ago | (#25953503)

My ISP, Comcast, is already on top of this new bittorrent over UDP idea and has summarily blocked all UDP traffic.

So, I wonder if I'll be experNO CARRIER

unknown host slashdot.org

Re:In other news... (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#25954061)

My ISP, Comcast, is already on top of this new bittorrent over UDP idea and has summarily blocked all UDP traffic.

So, I wonder if I'll be experNO CARRIER

unknown host slashdot.org

Your your full of it - are you trying to say that you don't get internet radio anymore? ALL UDP won't be blocked as too many apps use it.

SMTP - udp
Internet Radio - UDP
VOIP - UDP

Re:In other news... (0)

Anonymous Coward | more than 5 years ago | (#25954381)

Internet radio is usually implemented as a TCP stream, because the order of arriving packets usually matters in a streaming radio setting.

Re:In other news... (1)

Bill, Shooter of Bul (629286) | more than 4 years ago | (#25955403)

Internet radio is usually implemented as a TCP stream, because the order of arriving packets usually matters in a streaming radio setting.

But, wouldn't it be cool if it didn't?

I might need to invent that.

Re:In other news... (2, Informative)

680x0 (467210) | more than 5 years ago | (#25954383)

Sorry, SMTP (e-mail) uses TCP (almost always port 25). Perhaps you mean SNMP (network management)? A typical home router doesn't normally use SNMP, but more expensive ones (Cisco, etc.) do.

The last mile is the real problem (3, Insightful)

Rayeth (1335201) | more than 5 years ago | (#25953541)

I don't really see how this is going to kill VoIP and online gaming. Those two services are big users of UDP, no doubt, but it's not like the explosion of UDP requests is going to squeeze VoIP traffic out all of a sudden. If anything it should encourage ISPs and providers to increase the rate of their roll-out of new tech.

This move should bring into focus the last mile problem that is the real source of most of the internet connection speed debate. I don't care how the solution ends up working, but I think there needs to be a plan given that most of the plans I have heard involve several years of lead time.

Re:The last mile is the real problem (2, Interesting)

mini me (132455) | more than 5 years ago | (#25953659)

This move should bring into focus the last mile problem that is the real source of most of the internet connection speed debate.

When someone says "the last mile problem," I think the last mile is short on bandwidth. The problem here is that the last mile has (and is using) more bandwidth than the upstream connections can handle.

Re:The last mile is the real problem (3, Funny)

onkelonkel (560274) | more than 5 years ago | (#25953839)

So should we call this the "First 4,999 Mile Problem"?

Re:The last mile is the real problem (1)

oasisbob (460665) | more than 4 years ago | (#25954939)

The problem here is that the last mile has (and is using) more bandwidth than the upstream connections can handle.

For some ISPs, that is the case. However, for many ISPs/MSOs the last mile is literally the problem.

Take Comcast for example. Upstream bandwidth on the last mile is their biggest limiting factor. Implementing torrent throttling and getting their wrist slapped by the FCC wasn't fun for them.

You think they like getting criticized for over compressing their HD streams and are doing it for the pure joy of pissing off people who can tell the difference?

They aren't developing a $30 digital to analog cable converter to free up many MHz of bandwidth because massive capital outlay is exciting.

DOCSIS shares the literal bandwidth on the last mile. When you're talking about doing a nodesplit, upstream bandwidth seems very cheap in comparison.

DUPE! (-1)

Anonymous Coward | more than 5 years ago | (#25953577)

nothing to see here, especially twice in one day.

The problem is... (5, Informative)

wolfbyte18 (1421601) | more than 5 years ago | (#25953759)

we have an opportunity to detect end-to-end congestion and implement a protocol that can detect problems very quickly and throttle back accordingly so that BitTorrent doesn't slow down the internet connection
----
The major problem I see is that UDP doesn't play as nicely as TCP. Not by a longshot.

As soon as TCP notices a single packet loss, the Jacobson Algorithm kicks in and it's throttled to maybe 50-60%, and raises the limit slowly. I highly doubt that uTorrent's reworked version of UDP will play this nicely.

As soon as TCP's throttling kicks in, space will be cleared in the tubes. uTorrent will be able to send more data through UDP without noticing any loss, so it'll quickly move to fill this space. Then, TCP gets hit with more data loss - and goes slower. It seems like a vicious cycle.
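The vicious cycle described here is easy to model: a toy simulation (illustrative only, not a claim about uTP's actual behavior) of one AIMD flow sharing a link with a fixed-rate flow that never backs off.

```python
def share_link(capacity, fixed_rate, rounds=200):
    """Toy model of the cycle above: an AIMD (TCP-like) flow shares a
    link with a fixed-rate flow that ignores loss entirely."""
    cwnd = 1
    for _ in range(rounds):
        if cwnd + fixed_rate > capacity:   # link overflows: only TCP reacts
            cwnd = max(1, cwnd // 2)
        else:
            cwnd += 1                      # only TCP ever probes upward
    return cwnd

# With 90% of the link taken by the unresponsive flow, the TCP-like
# flow is left oscillating in the remaining sliver.
print(share_link(capacity=100, fixed_rate=90))
```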

Re:The problem is... (1, Insightful)

Wesley Felter (138342) | more than 5 years ago | (#25953809)

As soon as TCP notices a single packet loss, the Jacobson Algorithm kicks in and it's throttled to maybe 50-60%, and raises the limit slowly. I highly doubt that uTorrent's reworked version of UDP will play this nicely.

You're right; uTP is actually nicer than TCP.

Re:The problem is... (1)

JackassJedi (1263412) | more than 5 years ago | (#25953915)

Well, when they say 'hit back' they really mean hit back. Of course there are probable benefits aside from just punching through the ISPs' "congestion management", but I think just about everybody has had it with ISPs throttling (AND not admitting it).

Re:The problem is... (4, Informative)

seanadams.com (463190) | more than 5 years ago | (#25954391)

The major problem I see is that UDP doesn't play as nicely as TCP. Not by a longshot.

That statement makes no sense whatsoever. It's the same as saying "IP doesn't play as nicely as TCP". They are not comparable.

You have to be talking about whatever transfer protocol might be implemented ON TOP OF UDP, because UDP is merely a datagram protocol. It's nothing more than an IP packet plus port numbers.

In other words, I could write a simple program which blindly fires UDP packets at the rate of 1GB/s. This would kill my internet connection. I could also write a program which transmits one UDP packet per hour. This would have no effect. See what I mean? It's entirely a function of the application.
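
To make that point concrete: the send rate is purely an application decision. Here's a hypothetical Python pacer (the names `schedule_time` and `paced_send` and all the numbers are invented for the sketch); the same UDP socket API serves both the one-packet-per-hour case and the flood case, and only the caller's schedule differs:

```python
# UDP imposes no rate: the application alone decides how fast to transmit.
import socket
import time

def schedule_time(npackets, pps):
    """Wall-clock seconds the pacing schedule takes; the gap between
    datagrams is chosen entirely by the caller, not by UDP."""
    return (npackets - 1) / pps if npackets > 1 else 0.0

def paced_send(payloads, pps, dest=("127.0.0.1", 9)):
    """Send each payload as one UDP datagram, at most `pps` packets/second
    (port 9 is the 'discard' service, so stray datagrams are harmless)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for i, data in enumerate(payloads):
            sock.sendto(data, dest)
            if i + 1 < len(payloads):
                time.sleep(1.0 / pps)
    finally:
        sock.close()

# Same socket API, wildly different network impact; the schedule is all:
polite = schedule_time(1, 1 / 3600)      # one packet per hour
flood = schedule_time(1_000_000, 1e9)    # a blind flood finishes "instantly"
```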

Re:The problem is... (1, Interesting)

Creepy Crawler (680178) | more than 5 years ago | (#25954501)

You're talking out your ass here.

TCP and UDP are sibling transport protocols that both run on top of IP.

TCP provides sessions, exponential back-off, and many other features.
UDP provides very little.

Re:The problem is... (1)

wolfbyte18 (1421601) | more than 5 years ago | (#25954521)

Right, sorry. Perhaps it would be better to say that I doubt uTorrent will be as strict about packet loss while using this new method. Sure, they're going to keep tabs on it. But unless they're as harsh on less-than-perfect connections as the Jacobson algorithm is for TCP connections, the torrents will start to take over.

Re:The problem is... (1)

complete loony (663508) | more than 4 years ago | (#25954919)

The major problem I see is that UDP doesn't play as nicely as TCP. Not by a longshot.

The main reason for swapping to UDP is that 600 TCP connections over a single congested link don't play very nice together anyway. If you let uTorrent flood your upload bandwidth with outgoing TCP connections you already have the exact problem you're talking about. Only it gets amplified by all the automatic retransmits. Swapping to UDP doesn't magically sweep away all the potential problems of having such a bandwidth-greedy application, but it gives the application more control over how the bandwidth of all its connections should be throttled.
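
That "more control" is the key: a UDP-based client can run one rate budget across all peers instead of 600 independent TCP control loops. A minimal token-bucket sketch in Python (the `TokenBucket` name and the numbers are illustrative, not uTP's actual algorithm):

```python
# One application-level token bucket shared by every peer connection,
# instead of letting N independent TCP stacks fight over the uplink.
class TokenBucket:
    def __init__(self, rate_bps, burst):
        self.rate = rate_bps        # refill rate, bytes per second
        self.capacity = burst       # maximum burst size in bytes
        self.tokens = burst

    def refill(self, elapsed):
        """Credit the bucket for `elapsed` seconds of wall-clock time."""
        self.tokens = min(self.capacity, self.tokens + self.rate * elapsed)

    def try_send(self, nbytes):
        """Debit the bucket; False means the uplink budget is spent."""
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

bucket = TokenBucket(rate_bps=50_000, burst=10_000)
sent = sum(1 for _ in range(20) if bucket.try_send(1_000))  # only 10 fit
bucket.refill(0.1)   # 100 ms later: 5 kB of fresh budget
```

Every peer connection draws from the same bucket, so the aggregate never exceeds the configured uplink rate no matter how many peers there are.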

UDP and DHT are a nightmare (4, Interesting)

crossmr (957846) | more than 5 years ago | (#25953827)

Everyone thought DHT was great too, but every time I used it, it caused massive headaches. I would jump on a popular torrent, and for days afterward I would have poor performance; checking logs would show several dozen connection attempts per second on the uTorrent port, even 2-3 days after I was done with the torrent, because the DHT tracker was still advertising my IP address. I'd have to do a DHCP release/renew to bring my performance back up. This was with a fairly standard Linksys router. Any situation where the other party might not get the message that I'm not there anymore is bound to lead to headaches on popular torrents.

Re:UDP and DHT are a nightmare (1)

Captain Splendid (673276) | more than 5 years ago | (#25954029)

Update your firmware and tweak your BT settings. I run the WRT54G and I get no problems whatsoever even with multiple torrents.

Re:UDP and DHT are a nightmare (2, Informative)

ScrewMaster (602015) | more than 4 years ago | (#25954691)

Update your firmware and tweak your BT settings. I run the WRT54G and I get no problems whatsoever even with multiple torrents.

I run a WRT54G V4 with the Tomato firmware. Likewise I have no problems.

Light on the details (1)

zizzo (86200) | more than 5 years ago | (#25953855)

I encourage the BT folks to work on new protocols and push the envelope. I'm a firm believer in TCP but I see no reason to not try and do better. This article however, doesn't tell me anything other than BT says "Will not," to the Register's "Will too!"

Why UDP? (1)

tomtom (23188) | more than 5 years ago | (#25953863)

It can be convenient to prioritize UDP over other traffic for simple QoS at a shared broadband gateway. It catches DNS queries, VoIP, gamers, and most anything else small and sensitive.

In an attempt to avoid ISP filters the P2P users sprawl across all 64k TCP ports. Now the UDP portspace will be covered in P2P crap too. There are lots of major IP protocol numbers left... at least in my copy of /etc/protocols. I wish they'd used a new protocol for this instead of UDP.

But then again, I think we all know the real motivation for this effort is simply to make it more difficult to segregate P2P at the ISP. I don't really care about that; my problem is the collateral damage.

I look forward to seeing the details... (1)

Aoet_325 (1396661) | more than 5 years ago | (#25953869)

but a new protocol for p2p traffic sounds good to me.

It'd be nice if it allows for faster transfers and less congestion, but what I'd really like to see is something that makes it harder for ISPs to detect/penalize the traffic and more difficult for people to track what other people are transferring. Right now it's simple for your average unlicensed investigator to gather lists of IPs by monitoring torrents or just sharing out the material themselves and seeing who bites.

I wonder if you could use something like this to flood those lists with forged IPs, requesting entire files to be sent to other machines which just silently drop the traffic, all without causing bandwidth problems for anyone else. I'd hesitate to do something like that even if I didn't have to worry about causing random people connection trouble or slower file transfers, but the thought did pop into my head that if the traffic used by this protocol were made insignificant enough others might look into that sort of thing.

Remove TCP-like requirements, and it's a WIN. (5, Informative)

sudog (101964) | more than 5 years ago | (#25953885)

TCP guarantees in-order, mildly error-corrected delivery of transmitted *DATA*. Not packets. It is a streaming protocol where the data being transmitted is of unknown, indeterminate, or open-ended length. Since BT already doesn't care whether file pieces arrive beginning, middle, or end first (well, it does a little, but not enough to matter), you can relax and basically eliminate one of the TCP guarantees right there: in-order delivery of data.

You can eliminate sliding windows, ACK-based retransmits, fast retransmission, and pretty much every other mechanism TCP uses to guarantee in-order delivery of data. You can simplify it and the application itself beautifully, and provided it correctly throttles back based on detected packet loss (it MUST be exponential back-off), you end up with a net win for those reasons. The application can set up its own optimised data structures that don't necessarily have (but likely will end up having anyway) the overhead of an OS-backed TCP stack.

I mean who cares if you miss pieces, until the end when you can re-request them?

Heck, there are already P2P apps that use a UDP-based transfer mechanism, and they are WAY less impactful on systems than a typical BitTorrent stream is. They're way the hell slower, too, but that's not the point.

I do think there is a point that bears repeating: BT *MUST* have exponential back-off. If it doesn't, there is logic already built into core routers, ISPs, and firewalls that WILL drop the connection more severely when the endpoints don't respond properly to an initial packet-drop attempt at slowing them down.

There are some really nice academic papers about it, and there are lots of algorithms and choices that companies have. They all assume TCP-like back-off on the endpoints, and they ALL uniformly punish greedy floods.
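
The back-off being insisted on here is the classic doubling schedule. A Python sketch of what that looks like for retransmission delays (the `backoff_delays` name and the base/cap parameters are illustrative, not from any BitTorrent spec):

```python
# Classic exponential back-off: the delay before each successive retry
# doubles, up to a cap, so a congested path sees rapidly decreasing load
# instead of a constant retransmit hammering.
def backoff_delays(attempts, base=0.5, cap=60.0):
    """Seconds to wait before retry i, for i in 0..attempts-1."""
    return [min(cap, base * (2 ** i)) for i in range(attempts)]

delays = backoff_delays(8)
# Middleboxes that punish greedy flows expect to see this shape: each
# delay double the previous until the cap kicks in.
```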

Re:Remove TCP-like requirements, and it's a WIN. (1)

bluefoxlucid (723572) | more than 5 years ago | (#25954117)

Since BT already doesn't care that files arrive in pieces at the beginning, middle or end (well, it does a little, but not enough to matter) then you can relax and basically eliminate one of the TCP guarantees right there: in-order delivery of data.

As long as the complete BitTorrent data unit fits inside ONE packet, yes. If however the thing that describes which file it belongs to and the position in the file it goes is relatively large, you're being quite wasteful. Assuming 1400 bytes of max packet data, 8 bytes of file position (i.e. files may be bigger than 4GiB), and 4 bytes for a file index, that's 1388 bytes of payload per packet... at least 3094357 packets for 4GiB, an extra ~35MB of addressing overhead.

If instead we can do 32KiB or 64KiB runs (reliably, i.e. sliding window makes sure of this before doing it), 131072 or 65536 packets, 1.5 megs or about 750k overhead, respectively. For a single user this difference is nothing; for the entire Internet, given the amount of bandwidth and possible traffic (hint: ISPs are complaining), taking it up even higher saves them a huge bundle on bandwidth.

TCP's in-order delivery comes from a 32-bit sequence number (TCP actually numbers bytes of the stream, not packets); the receiver puts segments back in order, and anything missing is retransmitted. That's not to say you don't have to negotiate over BitTorrent what you're sending; it just means you can leave the actual ordering to TCP. A re-implementation would benefit from using the same mechanism, just stating the starting point and the file and spewing data until it's time to stop (i.e. the client says it has up to a certain point, you fill in the gap); but it could handle missing packets by requesting them from another source. Or not; it's got a perfectly good source already...
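
Checking the arithmetic above in a few lines of Python (assuming, per the scheme described, a 12-byte address on each unit: 8-byte file offset plus 4-byte file index):

```python
# Per-unit addressing overhead for a 4 GiB transfer at various chunk
# sizes, assuming each addressed unit costs a 12-byte header (8-byte
# file offset + 4-byte file index).
def overhead(file_size, chunk, header=12):
    packets = -(-file_size // chunk)          # ceiling division
    return packets, packets * header          # (unit count, total bytes)

SIZE = 4 * 1024**3                            # 4 GiB
per_packet = overhead(SIZE, 1388)             # one addressed unit per packet
runs_32k = overhead(SIZE, 32 * 1024)          # 32 KiB runs
runs_64k = overhead(SIZE, 64 * 1024)          # 64 KiB runs
# per_packet costs ~35 MiB of addressing; the runs cost ~1.5 MiB / 768 KiB.
```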

But UDP headers are 12 bytes smaller. (1)

spaceturtle (687994) | more than 4 years ago | (#25955059)

Assuming 1400 bytes of max packet data, 8 bytes of file position (i.e. files may be bigger than 4GiB), and 4 bytes for a file index, that's 1388 bytes of payload per packet... at least 3094357 packets for 4GiB, an extra ~35MB of addressing overhead.

Note that UDP [wikipedia.org] headers are 12 bytes smaller than TCP [wikipedia.org] headers, so we are already breaking even. Since we control the protocol, we could e.g. download files in 4GB chunks so that a 4-byte file position suffices. In principle we could also have the source IP and port number uniquely identify the file, and thus eliminate the file index from the packet. Then we are saving 8 bytes per packet by using UDP.

Re:Remove TCP-like requirements, and it's a WIN. (1)

sudog (101964) | more than 4 years ago | (#25955069)

Also incorrect, unless I'm misunderstanding what you're saying.

The BT chunks themselves can become the discrete, and known-about, units that must be transferred across the UDP channel--however they decide to do it--and the chunks of the chunks become encapsulated in packets. You still don't have to care about in-order delivery. Add in a muxed, single, unique identifier, and it can even be reused. In fact, I challenge you to find a single instance, anywhere, of any BT user who has more than a single GB in transit at any one time. The endpoint socket identifiers can become the file handle as viewed by each endpoint, and you don't even need anything beyond a single bit to identify whether the packet is a control packet or a data packet; you can mux the two into a single socket. Or, skip that and open two UDP sockets per host-to-host channel.

You don't need a file position identifier that references the file in bytes. You reference the file in discrete units which are encapsulated in the UDP packets, whatever size that might be.

Still, this is all academic. The BT people I'm sure have solved this problem themselves. All I'm pointing out is that the ability to discard some of the assumptions about a TCP stream allows you more flexibility in designing a replacement because it can be *simpler* and almost certainly *more efficient.* That's the key, and so far everything you've said I don't think applies: you seem to be suggesting that your vision of doing it is the only way, and since your way is unworkable and unwieldy, therefore there is no way. Obviously, this is a logical fallacy.

But I could simply be misunderstanding your note. :)

Re:Remove TCP-like requirements, and it's a WIN. (0)

Anonymous Coward | more than 4 years ago | (#25955243)

Why do you think that every packet has to be completely independent?

If I request block X from some user, you only need enough addressing bits in the packet to tell me where in block X it belongs. You don't need to know the absolute location in the overall file. Simply knowing the source and the offset within the block is sufficient.

Once I have all of the packets or a certain time has passed I can either checksum the entire datablock and deem it good or ask for individual packet retransmission from any host serving the file.

Only retransmission requests would need the full address. If most of a block was lost, you can optionally choose to rerequest the entire thing from someone else.

Congestion or packet loss from any host causes much less harm to the network because you don't ask for retransmission from the same host that dropped the packets.

It is absolutely better than an exponential back-off because you simply stop talking to anyone who has network problems.
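
A rough Python sketch of the per-block bookkeeping described above (the `BlockReassembler` name, the sizes, and the SHA-1 choice are all invented for illustration):

```python
# Per-block reassembly: datagrams carry only an offset within the
# requested block, a table tracks arrivals, and only the still-missing
# pieces need a fully-addressed retransmission request (to any host).
import hashlib

class BlockReassembler:
    def __init__(self, block_size, piece_size):
        self.piece_size = piece_size
        self.pieces = [None] * -(-block_size // piece_size)  # ceil division

    def on_datagram(self, offset, payload):
        self.pieces[offset // self.piece_size] = payload

    def missing(self):
        """Indices of pieces not yet received; ask anyone for these."""
        return [i for i, p in enumerate(self.pieces) if p is None]

    def digest(self):
        """Checksum the whole block once it is complete."""
        assert not self.missing()
        return hashlib.sha1(b"".join(self.pieces)).hexdigest()

blk = BlockReassembler(block_size=4096, piece_size=1024)
blk.on_datagram(0, b"x" * 1024)
blk.on_datagram(2048, b"x" * 1024)
gaps = blk.missing()                  # pieces 1 and 3 still outstanding

full = BlockReassembler(block_size=2048, piece_size=1024)
full.on_datagram(0, b"a" * 1024)
full.on_datagram(1024, b"b" * 1024)
checksum = full.digest()
```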

bleh. (0)

Anonymous Coward | more than 5 years ago | (#25953985)

TCP, UDP who cares, I'll continue to leech off insecure wireless networks for all my piracy needs.
Oh how I love:
http://www.phenoelit-us.org/dpl/dpl.html

What BitTorrent REALLY needs (5, Interesting)

bertok (226922) | more than 5 years ago | (#25954207)

The problem with BitTorrent is not that it uses a large amount of bandwidth, but that it uses the wrong bandwidth. In every country other than the United States, international bandwidth is substantially more expensive than local bandwidth, and often in short supply. Local bandwidth is cheap, or even free. Even in the US, inter-ISP bandwidth has the same cost issues, but is plentiful.

What I've never understood is the excuse for not implementing a peer selection algorithm that prioritises nearby users. Even a naive algorithm is going to be vastly better than purely random selection. Simply selecting peers based on, say, the length of the common prefix of the IP address will often produce excellent results. Why in God's name should I transfer at 0.1 kbps from some guy in Peru, when a peer down the road could be uploading to me at 500 kbps?
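
That naive common-prefix heuristic really is only a few lines. A Python sketch (IPv4 only; a longer shared prefix is a crude proxy for "same ISP / same region", and the addresses here are made up):

```python
# Rank candidate peers by how many leading bits their IPv4 address
# shares with ours; more shared bits first.
import ipaddress

def common_prefix_len(a, b):
    """Number of leading bits two dotted-quad IPv4 addresses share."""
    xor = int(ipaddress.IPv4Address(a)) ^ int(ipaddress.IPv4Address(b))
    return 32 - xor.bit_length()

def rank_peers(my_ip, peers):
    """Most-local peers first."""
    return sorted(peers, key=lambda p: common_prefix_len(my_ip, p),
                  reverse=True)

ranked = rank_peers("192.168.1.10",
                    ["200.48.10.9", "192.168.200.4", "192.168.1.77"])
```

It's a blunt instrument (address allocation doesn't always follow geography), but as a first cut it beats picking the guy in Peru at random.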

The truth is that the BitTorrent folks are not playing ball with ISPs. In reality, I think most major ISPs couldn't care less about copyright violation, or excessive bandwidth - it makes people pay for more expensive monthly plans - but they DO care about international bandwidth costs.

If they just took 10 minutes to revamp the peer selection algorithm, they would reduce the impact on ISPs enormously, and then they wouldn't be vilified and throttled.

Re:What BitTorrent REALLY needs (1)

Zironic (1112127) | more than 5 years ago | (#25954325)

When I last used Azureus they were working on an algorithm that created a virtual map based on pings and were meant to prioritize low ping seeds.

The idea being that low ping usually means that the other computer is on the same or a close network.

I think you're making a bad assumption. (3, Informative)

Ungrounded Lightning (62228) | more than 5 years ago | (#25954369)

The truth is that the BitTorrent folks are not playing ball with ISPs. In reality, I think most major ISP could care less about copyright violation, or excessive bandwidth ...

Unfortunately, the major ISPs are components of conglomerates whose primary moneymaker is selling "content". As such they have a perverse incentive structure that can put "protecting against piracy" above the quality of the network's operation.

The networks also provided asymmetric transport and vastly oversold their bandwidth, assuming a central server / many small clients "broadcast media" model. The rise of peer-to-peer usage bit them mightily and Bit Torrent was the spearhead of that rise. So rather than spending the added billions to expand their backbones to meet their advertised service's requirements they chose to throttle it.

The ISPs were the ones to turn this into a war and fire the first shots. BitTorrent is just trying to engineer a solution on which to build peace - and is being vilified for the attempt.

Having said that, your suggestion for improving things by smarter selection of peers is good. Unfortunately the Internet doesn't have any easy mechanism to indicate which peers would be better. Good solutions would likely have to be built on additional knowledge - which implies a database to hold and serve it - which implies a new central infrastructure and queries of it - which both breaks the decentralized model and provides additional points of attack if the ISPs continue to treat this as a war and attempt to suppress "unauthorized"/"enemy" torrents.

Re:I think you're making a bad assumption. (2, Insightful)

ztransform (929641) | more than 5 years ago | (#25954473)

Having said that, your suggestion for improving things by smarter selection of peers is good. Unfortunately the Internet doesn't have any easy mechanism to indicate which peers would be better. Good solutions would likely have to be built on additional knowledge - which implies a database to hold and serve it - which implies a new central infrastructure and queries of it - which both breaks the decentralized model and provides additional points of attack if the ISPs continue to treat this as a war and attempt to suppress "unauthorized"/"enemy" torrents.

I posted a while ago an idea I'd like to see for a Request For Comment (RFC), a new protocol ISPs could easily run in-house. See http://slashdot.org/comments.pl?sid=590741&cid=23883635 [slashdot.org]

Actually, even Cisco could add such a protocol to their routers: it would merely look up the internal routing protocol to decide which IPs are local, and anything out a border gateway (routes advertised via BGP) could be regarded as "non-local". Anyone from Cisco here?

Re:I think you're making a bad assumption. (1)

Locklin (1074657) | more than 5 years ago | (#25954533)

Unfortunately the Internet doesn't have any easy mechanism to indicate which peers would be better. Good solutions would likely have to be built on additional knowledge - which implies a database to hold and serve it - which implies a new central infrastructure and queries of it - which both breaks the decentralized model and provides additional points of attack if the ISPs continue to treat this as a war and attempt to suppress "unauthorized"/"enemy" torrents.

Last time I used ktorrent, it put little flags beside each peer's IP address. It sounds like that "database" is already being used to some degree (although, ping times are probably a better metric).

Re:What BitTorrent REALLY needs (1)

click170 (1170151) | more than 4 years ago | (#25954705)

The exact same thing can be argued of ISPs.

If ISPs created a simple caching program that acted like a proxy for BitTorrent chunks then they could offload much of the traffic from their interconnects with other ISPs onto their own networks, saving them untold amounts of money.

Note also that downloading from your neighbor in most cases in North America will yield only a small upload speed, as most residential lines have pitifully small caps on upload speeds. I highly doubt you will see 500Kbps from your neighbor unless he's paying for a T1 line, in which case location is irrelevant.

BitTorrent doesn't care about ISP costs because they aren't impacted by them, nor IMO should they care without some kind of incentive. If the ISPs are going out of their way to get in my way about torrenting, then I don't think BitTorrent should go out of its way to save ISPs some cash that we all know isn't going back into the network but into somebody's pocket. If there were a tangible benefit, on the other hand - like if the speeds really ^did^ improve from downloading from local hosts, or if the savings were passed on to the consumer by not counting toward their bandwidth limit - then I could understand the reason, but this is not the case.

Besides all of that, it seems to me like it's better for the economy to have ISPs spending oodles of money on each other because I'm downloading from outside their network; spending money keeps the economy going.

Re:What BitTorrent REALLY needs (1)

bertok (226922) | more than 4 years ago | (#25955033)

Note also that downloading from your neighbor in most cases in North America will yield only a small upload speed, as most residential lines have pitifully small caps on upload speeds. I highly doubt you will see 500Kbps from your neighbor unless he's paying for a T1 line, in which case location is irrelevant.

This is exactly the arrogant attitude I'm speaking of. To the BitTorrent user the source is irrelevant. To the ISP it is NOT irrelevant, and this is what's angering them.

The greedy "it's faster for me, so I don't care about the impact on your business" attitude is why BitTorrent is going to be throttled into nonexistence, or declared illegal. Then it WILL matter to the BitTorrent users, but it'll be too late to go back and fix things.

It's time for the BitTorrent developers to grow up and cooperate.

Re:What BitTorrent REALLY needs (1)

click170 (1170151) | more than 4 years ago | (#25955171)

Note that this 'arrogant' attitude is what keeps companies in business. Do you really think the ISPs will do anything (like expanding their network capacity) unless they think there is profit in it for them?

Note also that declaring something illegal doesn't make it disappear; it pushes it underground. When you criminalize something legal you just create real criminals, and it would be a sad day indeed for any country that began imprisoning people because they 'used a certain protocol' to communicate.

Yeah, BT's going the way of Napster, alright. Oh wait, when Napster was shut down, people didn't change their ways; they invented a different protocol.
Seems to be a hole in your logic there, friend.

Re:What BitTorrent REALLY needs (0)

Anonymous Coward | more than 4 years ago | (#25955515)

ISPs just have to start charging $a per month + $b per domestic_GB + $c per international_GB, and locality-aware BT clients will show up the next day.

The original Register report... (2, Funny)

actionbastard (1206160) | more than 5 years ago | (#25954227)

was discussed this morning...which means all the funny comments were expended before the really funny article was posted.

Reducing network congestion (5, Interesting)

mtarnovan (1337149) | more than 5 years ago | (#25954315)

If they want to reduce network congestion, why not start by implementing a better peer selection algorithm? IIRC, currently peers are selected at random. A network-topology-aware peer selection algorithm might reduce congestion a great deal. Currently I see peers on another continent being 'preferred' (due to the randomness) over peers on my own ISP's network, with which I have a 50+ mbit connection.

Re:Reducing network congestion (2, Interesting)

AngelofDeath-02 (550129) | more than 4 years ago | (#25955513)

This would be cool, but could also be memory intensive. Routers have RAM dedicated to a routing table, and if you're planning to implement this the way I think you are, you're going to have the server run a bunch of traceroutes to determine how packets travel to their destinations and sort them appropriately.

That or you could make assumptions about an isp's range and provide a bias to any ip within the same /16 range - but I'm pretty sure this is nowhere near ideal either.

wow (0)

Anonymous Coward | more than 4 years ago | (#25954929)

You'd think the VP of ANYTHING would know to properly capitalize Internet.
