Slashdot: News for Nerds


uTorrent To Build In Transfer-Throttling Ability

timothy posted more than 4 years ago | from the sideways-or-upwards dept.

The Internet 187

vintagepc writes "TorrentFreak reports that a redesign of the popular BitTorrent client uTorrent allows clients to detect network congestion and automatically adjust the transfer rates, eliminating the interference with other Internet-enabled applications' traffic. In theory, the protocol senses congestion based on the time it takes for a packet to reach its destination, and by intelligent adjustments, should reduce network traffic without causing a major impact on download speeds and times. As said by Simon Morris (from TFA), 'The throttling that matters most is actually not so much the download but rather the upload – as bandwidth is normally much lower UP than DOWN, the up-link will almost always get congested before the down-link does.' Furthermore, the revision is designed to eliminate the need for ISPs to deal with problems caused by excessive BitTorrent traffic on their networks, thereby saving them money and support costs. Apparently, the v2.0b client using this protocol is already being used widely, and no major problems have been reported."
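
In rough terms, the delay-sensing idea can be sketched as a tiny rate controller (a toy model, not the actual uTP implementation; the 100 ms target and gain values are illustrative):

```python
def adjust_rate(rate, measured_delay_ms, base_delay_ms,
                target_ms=100.0, gain=0.05, min_rate=1.0):
    """Toy delay-based congestion controller.

    If queuing delay (measured minus base) exceeds the target,
    back the send rate off; if there is headroom, ramp it up.
    """
    queuing_delay = measured_delay_ms - base_delay_ms
    # Scale the adjustment by how far we are from the target delay.
    error = (target_ms - queuing_delay) / target_ms
    new_rate = rate * (1.0 + gain * error)
    return max(min_rate, new_rate)

rate = 100.0  # KB/s, arbitrary starting point
# Uncongested link: delay stays near the base, so the rate creeps up.
rate = adjust_rate(rate, measured_delay_ms=25.0, base_delay_ms=20.0)
assert rate > 100.0
# Congested link: queuing delay far above target, so the rate drops.
rate = adjust_rate(rate, measured_delay_ms=400.0, base_delay_ms=20.0)
assert rate < 100.0
```

Run repeatedly against live delay samples, a loop like this hovers around the point where queuing delay sits at the target, which is the behaviour the summary describes.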


187 comments

god-fucking-awful summary (0, Informative)

Anonymous Coward | more than 4 years ago | (#29944706)

Ugh. TFA is all about Torrent (or uTorrent if Slashdot can't print a mu).

Re:god-fucking-awful summary (1)

vintagepc (1388833) | more than 4 years ago | (#29944740)

No it can't. I definitely typed (mu)Torrent when submitting. I just typed another here: -->-- Slashdot UTF=broken, but we knew that already.

Re:god-fucking-awful summary (0)

Anonymous Coward | more than 4 years ago | (#29944822)

Well, it's a little confused, but the facts are about as correct as one would expect..... Presumably uTorrent is the first client to take advantage of the new protocol.

Sounds like a useful feature -- our upload bandwidth is limited to 128kb/s by our ISP, so any vaguely reasonable torrent upload rate will make web browsing basically impossible. I wonder if this function will work under Wine..... IIRC, the 'Auto upload speed' function in uTorrent does not...

Re:god-fucking-awful summary (0, Offtopic)

FlyingBishop (1293238) | more than 4 years ago | (#29945150)

uTorrent has always had very fine control over how much bandwidth you used, per-torrent and overall.

All this does is move the throttling the ISPs used to do for no reason to the user's computer to do for no reason. It's a myth that BitTorrent causes a lot of strain on networks - it's multicast streaming like Hulu and Pandora that really do a number on their networks.

But Hulu and Pandora aren't used to download illegal things (never mind MediaFire, Rapidshare, surfthechannel and their brethren handling illegal multicast quite well.)

Oh wait - that's how everyone pirates stuff since they started cracking down on p2p.

Re:god-fucking-awful summary (4, Informative)

norpy (1277318) | more than 4 years ago | (#29945522)

I don't think you quite understand that word you used there. Hulu/Pandora are the OPPOSITE of multicast: they unicast a separate stream to every single listener.

Re:god-fucking-awful summary (4, Informative)

Anonymous Coward | more than 4 years ago | (#29945528)

You have no clue what multicast is, do you? Please stop using that word until you get a fucking clue about what it is.

Hulu and Pandora are NOT multicast. If they were, it would put less of a strain on their networks.

I've owned an ISP and I can tell you, P2P applications like BT put a BIG strain on the network. You saying it's a myth doesn't make it so. It just shows that you're an idiot who talks out of his ass.

That being said, instead of bitching about network congestion like Comcast does, I would upgrade my network to keep up with the demand. I got a lot of customers that way. Long-term, it's the better strategy.

Re:god-fucking-awful summary (-1, Offtopic)

AHuxley (892839) | more than 4 years ago | (#29945710)

Set upload limits (e.g. 256 kb, 512 kb, 1 Mb, etc.) and your rented pipes will be fine.
You're selling pipe space to users, not running a massive best-effort, back-of-the-napkin "let's hope only 15% ever upload at max" 1990s alarm-system-based dot-com-boom startup.

Re:god-fucking-awful summary (0)

Anonymous Coward | more than 4 years ago | (#29945888)

You've never run an ISP before, have you?

Re:god-fucking-awful summary (1, Insightful)

Anonymous Coward | more than 4 years ago | (#29946192)

Since you are off topic, I'll jump off with you. Having worked for a few ISPs, I can tell you it is practically criminal how oversold the capacity was at every single one I've witnessed. I understand it is profitable to oversell your product, but the extent to which it is done has obviously caused more problems for some ISPs than for others.

Re:god-fucking-awful summary (0)

Anonymous Coward | more than 4 years ago | (#29946760)

That being said, instead of bitching about network congestion like Comcast does, I would upgrade my network to keep up with the demand.

And that's probably why you don't own an ISP anymore. Money doesn't grow on trees. Anyone who spends heavily on upgrades goes bankrupt, either directly or indirectly: you lose customers when your price strays too far above the rest of the pack, who bitch instead of upgrading and therefore carry lower expenses than you do.

reason 1 down. reason 2 in que. (4, Insightful)

pha7boy (1242512) | more than 4 years ago | (#29944712)

I'm sure ISPs such as Comcast will find another reason to suggest they need to interfere with network management. Just give them a little bit of time to put their heads together with the guys at the RIAA.

Re:reason 1 down. reason 2 in que. (0)

Anonymous Coward | more than 4 years ago | (#29944790)

Dude, the guys from ISPs such as Comcast _ARE_ the RIAA! (for all intents and purposes)

Re:reason 1 down. reason 2 in que. (5, Insightful)

nate11000 (1112021) | more than 4 years ago | (#29944814)

This probably isn't so much for avoiding the eye of your ISP as it is for personal network management. I know I don't want BitTorrent interfering with my internet usage, particularly when my wife is at the computer. Since I don't have a router that can prioritize my internet traffic, this is a welcome feature for avoiding either slow-downs or having someone else turn off my downloads so they can use the internet.

Re:reason 1 down. reason 2 in que. (2, Interesting)

Firehed (942385) | more than 4 years ago | (#29945494)

I don't think this protocol will replace QoS on your local network; more likely, it will intelligently select peers based on external network (Internet) factors.

Re:reason 1 down. reason 2 in que. (1)

sopssa (1498795) | more than 4 years ago | (#29946734)

It will improve the local internet connection, which is the parent's problem as well (since the torrent client is slowing down other internet usage). The torrent client will analyze how much latency grows and try to optimize for that.

But I'm less sure about how exactly this will improve the ISP's network. ISPs don't have global latency problems because of torrenting, only bandwidth capacity problems, and torrent clients have no way to know whether bandwidth usage at the ISP level is too high (or where it is).

Re:reason 1 down. reason 2 in que. (0)

Anonymous Coward | more than 4 years ago | (#29944866)

They won't. They'll just say the way it is implemented in v2.0 is still wrong and not many clients really use it. See? They don't even need to come up with anything new!

Re:reason 1 down. reason 2 in que. (1)

arbiter1 (1204146) | more than 4 years ago | (#29944892)

They will, it will give them another reason NOT to upgrade their networks like they should have 3 years ago.

Re:reason 1 down. reason 2 in que. (4, Insightful)

wizardforce (1005805) | more than 4 years ago | (#29945096)

I fear that you're right. With our luck, ACTA will probably kill net neutrality stone dead, with provisions allowing for, perhaps even mandating, throttling by ISPs to protect various corporate interests regarding copyright law. The FCC's position on net neutrality, allowing for exceptions where activity is deemed illegal, strongly supports this view.

Re:reason 1 down. reason 2 in que. (4, Insightful)

interkin3tic (1469267) | more than 4 years ago | (#29945690)

I'm sure ISPs such as Comcast will find another reason to suggest they need to interfere with network management. Just give them a little bit of time to put their heads together with the guys at the RIAA.

Really? I for one am certain that they will continue with the exact same rhetoric. It's a good scapegoat for them, and they don't have a problem with overlooking facts to avoid spending money.

Comcast: "No, we don't need to spend money to relieve congestion, the slowdown is all caused by bittorrent. We need to regulate it."
Us: "No it isn't, bittorrent isn't causing the problem, it's now self-regulating. The problem is on your end."
Comcast: "The slowdown is all caused by illegal bittorrent transfers! We need to regulate it!"
Us: "No, see, here's a breakdown of traffic..."
Comcast: "THE SLOWDOWN IS ALL CAUSED BY ILLEGAL BITTORRENT TERRORISM! WE NEED TO REGULATE IT!"

Yeah But... (1)

vintagepc (1388833) | more than 4 years ago | (#29944718)

How much do you want to bet ISPs will suddenly have numerous other non-bandwidth reasons to justify traffic shaping practices? :-)

Throttling ... (0)

Anonymous Coward | more than 4 years ago | (#29944726)

the first post! boooyaaa

Re:Throttling ... (0)

Anonymous Coward | more than 4 years ago | (#29944804)

If you had been using the traffic shaping version of BitTorrent your post might not have lagged.

TCP regulating congestion (1)

buchner.johannes (1139593) | more than 4 years ago | (#29944764)

shouldn't TCP do that by itself?

Anyway, I consider this a good thing; it'll probably increase goodput (fewer outdated or duplicate packets, preferring "closer" networks).

Re:TCP regulating congestion (5, Interesting)

Anonymous Coward | more than 4 years ago | (#29944818)

shouldn't TCP do that by itself?

Anyway, I consider this a good thing; it'll probably increase goodput (fewer outdated or duplicate packets, preferring "closer" networks).

This is probably aimed at average BitTorrent users, i.e. they're on Windows. I highly doubt Windows has the wide variety of TCP congestion management algorithms that are available in the Linux kernel. If I am wrong about that, please correct me, as I had a really hard time confirming this for certain. It's not exactly a "common support question" that you can easily Google for; or maybe your Google-fu is stronger than mine. I think Windows uses an implementation of Reno and that's it. Hence the need to build these features into the clients.

Then there's the issue that to a TCP congestion protocol, all traffic is likely to be equal in its eyes. It won't know that torrent traffic should receive lower priority whenever it conflicts with something else, VOIP apparently being the classic example. For that you need actual QoS. So the client itself will now measure latency to help determine this.

Also, I doubt this will eliminate an ISP's excuses for throttling traffic. In terms of bandwidth saturation and network capacity, I highly doubt your ISP really cares whether your BitTorrent client is fully saturating your upstream by itself, or whether it uses only the bandwidth that something else doesn't need. In either case, you'd be maxing out your upstream pipe which is what they might concern themselves about.

Re:TCP regulating congestion (0)

Anonymous Coward | more than 4 years ago | (#29945930)

The problem with Windows' congestion control and QoS is the same as with Linux: it's nigh impossible to set up for ordinary users.

Re:TCP regulating congestion (1)

Jarik_Tentsu (1065748) | more than 4 years ago | (#29946790)

Yeah, it is a big problem. Especially since we've got a very basic router with no type of throttling or priority features.

Generally when downloading a torrent from certain trackers and large amounts of peers, the whole internet pretty much goes down for every other person in the house. Or goes to dial-up rates. Drives my Dad nuts.

It wouldn't be a problem if I had a proper router, but with this feature, it should help if it works well. =)

~Jarik

Re:TCP regulating congestion (4, Informative)

timeOday (582209) | more than 4 years ago | (#29944826)

Bittorrent spawns a huge number of connections. If the OS (or ISP) gives equal bandwidth to each TCP stream, your connection to YouTube gets about as much as each one of your 25 bittorrent connections, which destroys streaming video, VoIP, or even normal web surfing. I would LOVE it if this provides a solution. (I would be even happier if ToS flags were widely honored, but that has never happened, so I don't know why it would happen now.)
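
The arithmetic behind this complaint is simple; with per-stream sharing, one video stream competing against 25 torrent connections gets 1/26 of the pipe (the link speed below is illustrative):

```python
link_kbps = 1000              # total upstream bandwidth, illustrative
bt_connections = 25
streams = bt_connections + 1  # 25 torrent streams + 1 video stream
per_stream = link_kbps / streams
print(f"video stream gets {per_stream:.1f} kbps "
      f"({100 / streams:.1f}% of the link)")
# → video stream gets 38.5 kbps (3.8% of the link)
```

Per-stream fairness is exactly why a delay-sensing protocol that voluntarily yields across all its streams makes a difference here.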

Re:TCP regulating congestion (2, Interesting)

causality (777677) | more than 4 years ago | (#29944886)

Bittorrent spawns a huge number of connections. If the OS (or ISP) gives equal bandwidth to each TCP stream, your connection to YouTube gets about as much as each one of your 25 bittorrent connections, which destroys streaming video, VoIP, or even normal web surfing. I would LOVE it if this provides a solution. (I would be even happier if ToS flags were widely honored, but that has never happened, so I don't know why it would happen now.)

I have heard the claim that the reason why ToS/QoS flags are not widely honored is that Windows, by default, sets the highest priority for ALL traffic with no regard for what kind of traffic it is. As I don't run Windows, I have to say I honestly don't know whether this is so. Can anyone affirm or deny this claim?

Re:TCP regulating congestion (0)

Anonymous Coward | more than 4 years ago | (#29944944)

Not Windows per se, but every app and its dog...

Re:TCP regulating congestion (1, Insightful)

Anonymous Coward | more than 4 years ago | (#29945208)

Nope, just some average Linux user bullshit as usual. Windows sets ToS as regular traffic by default, of course.

Re:TCP regulating congestion (5, Informative)

Don Negro (1069) | more than 4 years ago | (#29945030)

Short answer, No. TCP doesn't back off until packets are lost. uTP looks for latency increases which happen before packet loss (and therefore, before TCP congestion control kicks in) and throttles itself preemptively. Put another way, TCP treats all senders as having an equal right to bandwidth. uTP doesn't want to assert an equal right to bandwidth, it wants to send and receive in the unused portion of the available connection.
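
That contrast can be caricatured as two backoff rules (a deliberately crude sketch; real TCP and uTP are far more involved, and the constants here are made up):

```python
def tcp_aimd(cwnd, packet_lost):
    """Additive-increase/multiplicative-decrease: reacts only to loss."""
    return cwnd / 2 if packet_lost else cwnd + 1

def utp_style(rate, queuing_delay_ms, target_ms=100):
    """Backs off as soon as queuing delay rises, before any loss occurs."""
    return rate * 0.8 if queuing_delay_ms > target_ms else rate + 1

# Delay is climbing but nothing has been dropped yet:
assert tcp_aimd(10, packet_lost=False) == 11       # TCP keeps pushing
assert utp_style(10, queuing_delay_ms=250) == 8.0  # uTP already yields
```

Because the delay-based sender yields first, it ends up occupying only whatever capacity the loss-based senders leave unused, which is the "unused portion of the connection" behaviour described above.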

Re:TCP regulating congestion (1)

amiga500 (935789) | more than 4 years ago | (#29945478)

TCP does effectively limit throughput by means of the Sliding Window Protocol. Packet loss will decrease the size of the sliding window, but on a congested network, the window will be slow to increase. What TCP doesn't offer is different Quality of Service. uTP attempts to run TCP at a lower priority than existing TCP traffic. Allowing Skype or YouTube to run at a higher priority is advantageous to the users of those services.

Re:TCP regulating congestion (1)

BikeHelmet (1437881) | more than 4 years ago | (#29945636)

TCP Vegas? [wikipedia.org]

I remember reading how AT&T's iPhone "zero-packet-loss" was causing network congestion and 8-second ping times.

Re:TCP regulating congestion (1)

Wesley Felter (138342) | more than 4 years ago | (#29946046)

Yes, uTP is like Vegas but it actually works on Windows.

Re:TCP regulating congestion (2, Informative)

AHuxley (892839) | more than 4 years ago | (#29945754)

Think of it as Apple's Grand Central Dispatch for your network.
If you have the bandwidth and nothing else is requesting it, your torrents will fly.
Want to watch YouTube HD on your low-end consumer-grade ADSL? Your torrents will slow, and overall networking will still seem responsive.
When you're done viewing, BT will reclaim the bandwidth.
BT is no longer just aware of your hard-coded BT app max settings, but also of your OS's networking demands, and can adjust.

Re:TCP regulating congestion (1)

_KiTA_ (241027) | more than 4 years ago | (#29946340)

shouldn't TCP do that by itself?

Anyway, I consider this is a good thing, it'll probably increase goodput (less outdated, duplicate packets, preferring "closer" networks).

It would, if BitTorrent et al. weren't designed to break TCP's regulating. By default uTorrent starts up with something like 800 max connections at a time. TCP/IP was never really designed to handle this kind of shotgun flooding. The BitTorrent spec is designed not to care about fragmentation, QoS, et al. It is designed to break through college dorm QoS throttling, which is why this discussion is kind of amusing.

But is it working? (4, Insightful)

wealthychef (584778) | more than 4 years ago | (#29944774)

The summary says that the protocol is already out there, and "no major problems are reported." So how about "and congestion is being reduced, and here is how we know it?"

Re:But is it working? (2, Interesting)

angelbunny (1501333) | more than 4 years ago | (#29945532)

I've been using uTP for a couple of months now and I have to say it is excellent and is working for me quite well.

However, since uTorrent is backwards compatible with the original TCP BitTorrent protocol, the second I start sending to a client that doesn't support uTP my ping jumps from 20 to 200, or I have to go back to manually limiting my upload rate. Regardless, uTP works.

Re:But is it working? (1)

Klaus_1250 (987230) | more than 4 years ago | (#29945942)

Major problems HAVE been reported, especially by people already using their own traffic-shaping solutions. I've never gotten v2 to work properly. Uploading fluctuates and uses only half of my upstream on average, even though 100% of the upstream is available without congestion issues. eMule, OTOH, has absolutely no issues using 99% of my upstream bandwidth.

Re:But is it working? (0)

Anonymous Coward | more than 4 years ago | (#29946178)

Or better yet, will this get AT&T to stop disconnecting me like clockwork every ten minutes even when I limit my down/up speed both to half what they could be?

Why? (1)

Anonymous Coward | more than 4 years ago | (#29944786)

Why do articles always have to be referred to as "TFA," as in "The Fucking Article?" Why can't it just be "the article" most of the time?

Re:Why? (5, Funny)

vintagepc (1388833) | more than 4 years ago | (#29944816)

Let me explain:
a) That's "The Fine Article", you insensitive clod!
b) You must be new here.
c) In light of b) Articles=bad. Summaries=good.

Re:Why? (1, Funny)

Anonymous Coward | more than 4 years ago | (#29944936)

Grandparent must be new here.

He should Read The Fine Manual.

Re:Why? (1)

nate11000 (1112021) | more than 4 years ago | (#29944980)

Why do articles always have to be referred to as "TFA," as in "The Fucking Article?" Why can't it just be "the article" most of the time?

Let me explain:
a) That's "The Fine Article", you insensitive clod!
b) You must be new here.
c) In light of b) Articles=bad. Summaries=good.

or just
a) "The Full Article"
b) Like most acronyms, it's shorter than typing it out. How hard is that to understand?

Re:Why? (1)

toppings (1298207) | more than 4 years ago | (#29946116)

I have always read it as The Featured Article. Maybe I'm new here.

Re:Why? (1)

Mitchell314 (1576581) | more than 4 years ago | (#29946754)

?
I've always read it as Today's | The Featured Article.

We don't need more acronyms (0)

Anonymous Coward | more than 4 years ago | (#29945142)

The problem is that we already have a huge number of acronyms, and they are confusing enough by themselves. We shouldn't add more when we don't need to.

RTFM is an old, well-known acronym. RTFA and RTFS can be derived directly from it, and TFA from those. Changing the pattern to just TA wouldn't gain anything, but it would add a new acronym where there is no need for one. (A quick Google search shows that TA means, among other things, Tera Ampere, Technical Assistance, Terminal Adapter... TFA has far fewer other meanings.)

Re:Why? (1)

Dunbal (464142) | more than 4 years ago | (#29945146)

Why can't it just be "the article" most of the time?

      Someone forgot to RTFM...

Linux client? (1)

timeOday (582209) | more than 4 years ago | (#29944846)

In my experience, uTorrent only runs on Linux through Wine, and even then, only a few particular obsolete versions of uTorrent are Wine-compatible. Is there some way for me to run a uTorrent 2 client on Linux right now? I've wasted a lot of time trying to get BitTorrent to play nice on my home network, to little avail.

Re:Linux client? (2, Informative)

trendzetter (777091) | more than 4 years ago | (#29944906)

I use deluge as a utorrent replacement

Re:Linux client? (1)

broeman (638571) | more than 4 years ago | (#29945120)

I started using Deluge lately too, and it works great. I used to use Azureus (mainly for the plugins), but I always wished it handled memory better. Deluge is just as good as uTorrent IMHO.

Re:Linux client? (1)

mirix (1649853) | more than 4 years ago | (#29945386)

Me too. I've always thought of it as superior to utorrent. Never tried the windows port though, so I don't know how good it is.

Re:Linux client? (1)

asdf7890 (1518587) | more than 4 years ago | (#29945052)

Are there any particular features you want uTorrent for, or do you just want it because you are already familiar with it in a Windows environment?

There are a great many Linux-native clients you could choose from. While many are text-based (which might not be your cup of tea), such as the excellent rtorrent [rakshasa.no] which I tend to use, there are quite a few that are GUI-based, of which deluge [deluge-torrent.org] seems very popular, and there are GUI wrappers for working with text-based clients (several such wrappers exist for the basic clients, and for recent rtorrent versions too).

Some offer web-based interfaces too, which some find handy if they download to an external machine to reduce the impact on bandwidth quotas and traffic shaping that may be imposed by their ISP.

See this page [wikipedia.org] for a list of clients that you might want to look into.

Re:Linux client? (1)

timeOday (582209) | more than 4 years ago | (#29945110)

What is this story about? uTP, because it promises to reduce bittorrent interference with other apps on the network. From what I have gathered it is only offered by utorrent.

Re:Linux client? (1)

asdf7890 (1518587) | more than 4 years ago | (#29945144)

What is this story about? uTP, because it promises to reduce bittorrent interference with other apps on the network. From what I have gathered it is only offered by utorrent.

Ah sorry, I completely forgot the fact that rTorrent has become the "official" client since its purchase.

Re:Linux client? (0)

Anonymous Coward | more than 4 years ago | (#29945160)

DD-WRT router with QoS and KTorrent FTMFW

Re:Linux client? (0)

Anonymous Coward | more than 4 years ago | (#29946212)

Why should I fuck two midget Finnish women?

Re:Linux client? (1)

mister_playboy (1474163) | more than 4 years ago | (#29946488)

FTMFW = For The MotherFucking WIn

LAN performance also? (2, Interesting)

mleugh (973240) | more than 4 years ago | (#29944872)

Is this likely to improve LAN performance when using bittorrent on a shared internet connection also?

Re:LAN performance also? (1)

Torrance (1599681) | more than 4 years ago | (#29945920)

Based on my reading of the article, it will detect congestion at any point along the route. If you have several computers behind a NAT router all sharing the same internet connection, and one of those computers is using this new BT protocol, it'll detect if it's congesting the gateway and reduce its speed.

So, yes, it'll improve the network performance of any non-BT apps on any of the other computers in your local network.

Re:LAN performance also? (1)

wolrahnaes (632574) | more than 4 years ago | (#29946226)

Unless your LAN is slower than your WAN (remember that wireless never achieves its advertised rate) there should be no way BitTorrent is slowing down your LAN.

Basically unless you have FiOS or similar and are using 802.11 to access it, something is wrong with your LAN if torrents break it.

Sweet! (5, Funny)

i-like-burritos (1532531) | more than 4 years ago | (#29944966)

Now when I illegally download the newest DVD screeners, I can do it with a clear conscience knowing that I'm not congesting the network!

Seriously though, this is a good thing. I don't know why the story is tagged "your rights online"

Re:Sweet! (0)

Anonymous Coward | more than 4 years ago | (#29945118)

Now when I illegally download the newest DVD screeners, I can do it with a clear conscience knowing that I'm not congesting the network!

Seriously though, this is a good thing. I don't know why the story is tagged "your rights online"

Read some of the previous posts. It's related to the ISPs claiming that Bittorrent is a bandwidth hog so they need to do traffic shaping. This protocol change is the same as saying "no, you don't need QoS, we'll do it ourselves. We'll make things more efficient too!"

Re:Sweet! (1)

Thantik (1207112) | more than 4 years ago | (#29945512)

Actually, quite the opposite. The change just lowers bandwidth so uTorrent takes a more "extra bandwidth only" approach, so YOUR other activities aren't interrupted. They couldn't care less about the ISP.

Re:Sweet! (1)

AHuxley (892839) | more than 4 years ago | (#29945790)

Yes, your non-BT networking is really smooth; the up pipe is still maxed out the second your other non-BT use is over.
You paid for that up bandwidth, use it :)

Clients already do this (2, Interesting)

bug1 (96678) | more than 4 years ago | (#29945130)

AFAIK most BitTorrent clients throttle connections already; some automatically, like Vuze, others, like Transmission, only manually.

Or am I missing the point?

Re:Clients already do this (1)

Biogenesis (670772) | more than 4 years ago | (#29945256)

I didn't RTFA, but from the summary it would seem that each client has its own method for throttling. What they want to do is build a throttling algorithm into the BT protocol, hence standardizing the procedure. I guess this would make client coding easier, as throttling would be achieved with a call to a BT library rather than each client author having to write/find throttling code themselves.

Re:Clients already do this (4, Informative)

PhrostyMcByte (589271) | more than 4 years ago | (#29945364)

Most clients have you set a fixed upload speed. Some try to do this automatically, while most have you set it manually. This isn't perfect: if you set it to use 80% of your upload and your other traffic needs more than the remaining 20%, things will get slow; if it uses less than 20%, some of your bandwidth sits idle and wasted. Some clients instead monitor ping to some specific service: if ping is high, throttle back; if ping is low, increase speed. Again, this isn't perfect, because it relies on a single host and route to determine your speed.

uTorrent's new protocol requires no action from the user, no automatic bandwidth tests, and no outside service. It is designed to always use the optimal speed, while never interfering with foreground tasks.

It has been a while since I read it, and when I read it I was very, very tired, but my understanding is that it tags each packet with a high-precision send time. So if we have two packets, A and B, A will be sent at 100ms and B will be sent at 300ms, so you know they were sent 200ms apart. The _receiver_ then notices that he receives them 400ms apart, so there is 200ms of lag, which means the sender should throttle back. It tries to keep the amount of lag under 50ms. Again, I could be completely wrong :D

Since it is based on UDP and not TCP, it also solves the problem of Comcast sending fake RST packets to make each client think the other wanted to disconnect.
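
The timestamp scheme described above amounts to comparing the gap between sends with the gap between arrivals (a sketch of the idea only; the function and its inputs are illustrative, not the actual uTP wire format). Using the A/B example: sent 200 ms apart, received 400 ms apart, so 200 ms of added queuing delay:

```python
def queuing_delay_increase(send_times_ms, recv_times_ms):
    """Estimate added queuing delay from per-packet send timestamps.

    Each packet carries its send time; the receiver compares the gap
    between arrivals with the gap between sends. A larger arrival gap
    means packets are sitting in a queue somewhere along the path.
    """
    send_gap = send_times_ms[1] - send_times_ms[0]
    recv_gap = recv_times_ms[1] - recv_times_ms[0]
    return recv_gap - send_gap

# A sent at 100 ms, B at 300 ms; they arrive 400 ms apart.
lag = queuing_delay_increase([100, 300], [1000, 1400])
assert lag == 200  # 200 ms of queuing -> the sender should throttle back
```

Note this only needs timestamps and clock *differences*, not synchronized clocks, which is part of why the scheme is practical between arbitrary peers.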

Re:Clients already do this (1)

angelbunny (1501333) | more than 4 years ago | (#29945598)

Yes but what happens if your network speed is dynamic?

For example, say you've got a 10 megabit/s upload coming into your home, but the line is saturated, so at times it might bounce down to 6 megabit/s for a minute, or under heavy load even go below 5 megabit/s. How do you limit your upload speed so as not to kill your net when your upload speed is variable? Limit it to 5 megabit/s and you'll be fine, but you're only using half your max connection when it is available. Limit it to 8 megabit/s and you're fine 90% of the time, but still not utilizing everything properly. QoS doesn't work properly either, because the speed is being limited at the cable modem, so that isn't an option. And finally, the auto-upload-max features on most BT clients have a delay, so if your upload spikes down from 10 to 5 megabit/s for 5 seconds, your ping will jump up regardless.

uTP only utilizes what is available, and does an extremely good job of it: if your 10 megabit/s connection spikes down to 2 megabit/s for half a second, your ping won't even jump. Currently, using uTP, my upload rate is bouncing all over the place in a crazy fashion, yet my net is not being hit at all. It is kind of like those stop lights on freeway on-ramps that keep too many vehicles from entering the freeway at once. The effect works really well in my particular case.

Re:Clients already do this (1)

BikeHelmet (1437881) | more than 4 years ago | (#29945664)

You can throttle on your end, and your end only.

If you had, say, cable, and all your neighbours were active too, this would make your speed drop. Your torrents choke their webpage browsing and YouTube streaming, but with congestion control, it doesn't choke them as much. Is it perfect? Nope. Will it affect you negatively? Not really. I'd happily download 20% slower for 80ms ping instead of 2000ms (and yes, it can get that bad when networks opt for low or no packet loss).

When there is no congestion, it has no effect, so most of the time you won't even notice it.

Re:Clients already do this (1)

Bengie (1121981) | more than 4 years ago | (#29946616)

The difference is that most other clients throttle to give *your* connection low latency. uTorrent is facing the other end of the problem: if a hop between you and the seeder is getting congested, uTorrent will throttle down to help avoid overloading that hop, even if your own connection is fine. The problem with P2P is that it eats up a very large portion of available bandwidth; if an ISP as a whole is getting bogged down, the downloaders will back off and try not to overload that ISP/hop.

Next: ISPs develop automatic throttling... (1)

Interoperable (1651953) | more than 4 years ago | (#29945152)

Your router will throttle you, take your wallet and run.

not the bandwidth it's the number of connections (0)

Anonymous Coward | more than 4 years ago | (#29945218)

The problems are not with the bandwidth usage but with the sheer number of connections being opened; if enough connections are there, it can act like a DDoS.

Intra-ISP (0)

Anonymous Coward | more than 4 years ago | (#29945538)

Are any clients other than Azureus using tech which finds other people nearby, which tends to reduce traffic outside an ISP?

Excellent idea (1)

InsurrctionConsltant (1305287) | more than 4 years ago | (#29945614)

Assuming the summary isn't completely wrong, this is an excellent idea. In the UK we are under severe threat of a draconian three-strikes law. This is without question due to the behind-the-scenes lobbying of the record and movie industries. And also, of course, the general attitude of compliance of the government towards those interests at the expense of the original, liberal copyright law that benefits culture and the public.

Convincing the ISPs that the filtering/monitoring requirements of the draconian-copyright brigade are worse than having to deal with P2P traffic may be the only hope.

Reference: TalkTalk will resist net piracy plans [networkworld.com]

Much bigger issue with uTorrent still unsolved (3, Interesting)

bertok (226922) | more than 4 years ago | (#29945630)

There's a much bigger issue with uTorrent that the developers seem to refuse to solve, or even acknowledge.

In essence, uTorrent connects to peers randomly and makes no attempt to prioritize "nearby" ones. This may not be a huge issue for Americans, but everywhere else, you know, like the rest of the fucking planet, this is hugely inefficient for both end users and, most importantly, ISPs. This is why ISPs are throttling bittorrent: it tends to make connections to peers outside the ISP's internal network, which costs them money. In Australia, for example, international bandwidth is extremely limited and very expensive, but local bandwidth, even between ISPs, is essentially unlimited, high-speed, and often free or 'unmetered'.

What do you think is going to be faster: connecting to your neighbour through the same fucking router, or to some kid's home PC in Kazakhstan, 35 hops away? Even connections from here to America have to go through thousands of miles of fiber optic cable across an ocean.

Note that some other clients like Azureus have already implemented weighted peer choices, where peers with similar IP addresses are preferred over other peers. It's not hard. Heck, it's a trivial change to make, as no changes need to be made to the protocol itself. A reasonably competent programmer could implement this in an hour: simply take the user's own IP address, and then sort the IPs of potential peers by the number of prefix bits in common, then do a random selection from that list, weighted towards the best-matching end. How hard is that?
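
The scheme described above can be sketched in a few lines. This is a toy sketch of the idea, not uTorrent's or Azureus's actual code; the `2**bits` weighting and the example peer lists are illustrative assumptions:

```python
import ipaddress
import random

def common_prefix_bits(a: str, b: str) -> int:
    """Count the leading bits two IPv4 addresses share."""
    xa = int(ipaddress.IPv4Address(a))
    xb = int(ipaddress.IPv4Address(b))
    diff = xa ^ xb
    return 32 - diff.bit_length()

def pick_peers(my_ip, candidates, k, seed=None):
    """Randomly pick k peers, weighted toward longer shared prefixes.

    The +1 keeps totally unrelated peers selectable, so the swarm
    never partitions into isolated local cliques.
    """
    rng = random.Random(seed)
    pool = list(candidates)
    weights = [2 ** common_prefix_bits(my_ip, p) + 1 for p in pool]
    chosen = []
    for _ in range(min(k, len(pool))):
        i = rng.choices(range(len(pool)), weights=weights)[0]
        chosen.append(pool.pop(i))
        weights.pop(i)
    return chosen
```

With this weighting, a peer in your own /24 is astronomically more likely to be picked than one on the far side of the planet, yet distant peers are still reachable when no local ones exist.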

The arrogance of the uTorrent devs is simply staggering. They're a group of developers who could, with an hour's effort, reduce international bandwidth usage by double-digit percentages and improve torrent download speeds by an order of magnitude, but they just... don't.

Re:Much bigger issue with uTorrent still unsolved (0)

Anonymous Coward | more than 4 years ago | (#29945826)

I read your post and, at first, entirely agreed with it. However, what if the devs deliberately want to keep this problem around, to preserve a reason for ISPs to finally upgrade their infrastructure, before further optimizing how the protocol works?

If you take away reasons for them to upgrade, you're hurting their incentive to be pushed into doing it. Just a thought, despite being slightly paranoid...

Re:Much bigger issue with uTorrent still unsolved (1)

dkf (304284) | more than 4 years ago | (#29946020)

I read your post and, at first, entirely agreed with it. However, what if the devs deliberately want to keep this problem around, to preserve a reason for ISPs to finally upgrade their infrastructure, before further optimizing how the protocol works?

That's quite a retarded suggestion. Upgrading the link with the bottleneck requires a lot of investment (putting in a new intercontinental fiber-optic line isn't the same as digging up a few streets), so the ISPs are quite right to try to put it off as long as they can. It's not underinvestment; it's trying to make the existing investment work for its living properly. And the thing is... it does work for most, since most people's network access is sporadic. It's the bulk downloaders that are the problem from the ISP's perspective, and the bulk uploaders too (since most people have asymmetric connections). Now, if the bittorrent users would switch to business-grade connections (i.e., ones with balanced bandwidth at faster than modem speeds), they'd be much less of a problem for everyone. But they won't, because they're cheap scum. (Well, a few might be non-scum, but they are definitely still cheap.)

Re:Much bigger issue with uTorrent still unsolved (1, Interesting)

Anonymous Coward | more than 4 years ago | (#29946404)

This has been a problem since day one, back to dialing up BBSes. However, it is also the same reason we have 8Mb/s-100Mb/s connections today. I was considered a heavy user back in the day when I wanted to send some digital pictures to a friend many states away. Dialup wasn't always 64k, and was never 64k up; I had 300bps initially. It took a very long time to send digital pictures, though not as long as the post.

That was once considered abuse. Now no one cares that I have a lot of digital pictures I send. The ISPs don't care, and MySpace and Facebook make it free to share these with family.

Once it was floppy disks, then CDs; today it is the sharing of DVDs. It will not end, and it will drive the increase in total bandwidth. ISPs should be able to prioritize this traffic. The current encryption and obfuscation used by many P2P clients means the only way to detect them is by spotting SSL on ports other than 443 with invalid certificates, which makes them difficult to control unless you have quality equipment. Bittorrent is much easier to control. The developers are helping with prioritization, which is a good thing, but more needs to be done. I do the QoS for a "free" campus-type hot spot with 100Mb/s of Internet connection and lots of users. We pay based on usage. When someone kicks off a big P2P session, it is very noticeable. Should I kick him off, or QoS him, or pay thousands of dollars a month extra to let him do "free" P2P? QoSing that P2P Ubuntu up/download seems to me to be the right thing to do.

At home, using DD-WRT I'm able to prioritize things. I can have a Mozy backup going full speed now without it affecting my Netflix or Hulu. Before I did QoS, the Mozy upload would cause major problems with these services due to up link congestion.
 

Re:Much bigger issue with uTorrent still unsolved (2, Interesting)

Anonymous Coward | more than 4 years ago | (#29945938)

Prefix bits do not indicate location. Two Class C's can be a long way from each other geographically. Even if the entire Internet were broken down into Class C spaces, and you prioritised addresses in your own Class C, I don't think you would see many hits. I mean, there may be 50k people on the torrent, but how many of them are in the same neighbourhood as you?

That's why the Vuze plugin uses an IP->location mapping database.

Re:Much bigger issue with uTorrent still unsolved (2, Insightful)

bertok (226922) | more than 4 years ago | (#29946230)

Prefix bits do not indicate location. Two Class C's can be a long way from each other geographically. Even if the entire Internet were broken down into Class C spaces, and you prioritised addresses in your own Class C, I don't think you would see many hits. I mean, there may be 50k people on the torrent, but how many of them are in the same neighbourhood as you?

That's why the Vuze plugin uses an IP->location mapping database.

True, but it's still better than random. Many countries were allocated IP blocks from large contiguous ranges; most of Australia's IP addresses start with prefixes around 200-something, for example. Similarly, most ISPs have large blocks, like /8 ranges, allocated to them. Some ISPs are big enough that torrent users could have 10 or more connections to peers in the same ISP for reasonably common files like TV shows, and only need 1 or 2 to the outside world.

Still, you're correct: adding even a simple country database would help a lot. There are easily obtainable databases of "AS" numbers that map IP ranges to organizations and/or countries, and embedding one into the client would be a fairly simple exercise.
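
A lookup against such a database boils down to a sorted range table and a binary search. The two ranges below are made-up examples, not real registry allocations; a real client would load delegation data published by the regional registries:

```python
import bisect
import ipaddress

# Toy prefix table: (range_start, range_end, label), sorted by start.
# Real entries would come from RIR delegation files; these are examples only.
_ranges = sorted([
    (int(ipaddress.IPv4Address("1.0.0.0")),
     int(ipaddress.IPv4Address("1.255.255.255")), "APNIC-region"),
    (int(ipaddress.IPv4Address("203.0.0.0")),
     int(ipaddress.IPv4Address("203.255.255.255")), "AU-example"),
])
_starts = [r[0] for r in _ranges]

def lookup_region(ip: str):
    """Binary-search the sorted range table for the region owning `ip`."""
    x = int(ipaddress.IPv4Address(ip))
    i = bisect.bisect_right(_starts, x) - 1
    if i >= 0 and _ranges[i][0] <= x <= _ranges[i][1]:
        return _ranges[i][2]
    return None  # not in any known range
```

The whole table for every allocated block on the planet fits in a few megabytes, so shipping it inside a client is entirely practical.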

Proximity Favored Connections (2, Interesting)

JakFrost (139885) | more than 4 years ago | (#29945986)

Ono Plug-In

You're absolutely right about how badly implemented the random peer connection behaviour is in BitTorrent clients. There is a project and a plug-in called Ono [northwestern.edu] for the Vuze (formerly Azureus) BitTorrent client. I used it before to resolve this problem, but I found that the non-stop creation of many ping.exe threads to analyze latency was causing slow-downs on my own system and additional congestion on my upstream-limited broadband pipe.

I am still surprised that a better protocol for proximity favored peer connections wasn't developed for BitTorrent and other P2P systems to maximize performance by connecting to peers on the same or close-by networks. I have a feeling that with the huge increases in demand for content there will be a need for optimized connection protocols once we start demanding more than the capacity that we have.

Netmask Flaws

One solution that is simple to implement is the netmask calculation you mentioned, but I fear this solution won't work reliably, since the way network ranges are created and managed internally by large broadband ISPs is unpredictable, and neighboring ranges may be owned by different ISPs or be in other countries. Plus, netmask information doesn't tell you anything about the closest neighbors to connect to once you exhaust the connections in your own netmask.

Routing Table Solution

I think the best solution would be one based on the information in the routing protocols that the routers run, but since this information is not available to individual clients, applications have no way of looking at the overall routing structure to determine exactly who the closest and best neighbors are, based on latency, bandwidth, cost, and hop count.

If there were a way for the application to query the router for a partial view of the routing table (e.g. 5 or 10 hops out) and then prioritize the peer addresses from the tracker with an algorithm that takes up and down bandwidth, latency, cost, and hop count into account, we would have an optimal ordering of peer connections.

Latency and Hop Count Not Enough

The problem is that routers won't share routing table information with clients. The solution then becomes one like the Ono plug-in: the client has to ping and/or traceroute to the peer addresses to determine optimal choices based only on latency and hop count, without knowing anything about bi-directional bandwidth availability or the costs involved. Without the bandwidth info the whole thing falls apart, because latency isn't enough to determine maximum throughput, and there is no practical way of doing a meaningful bi-directional bandwidth check between peers without taking up a lot of time and bandwidth in the process.
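
The latency half is at least measurable from the edge. A rough sketch of what an Ono-style client might do, where the TCP-connect RTT proxy and port 6881 are assumptions of mine, and, as noted above, none of this says anything about bandwidth:

```python
import socket
import time

def measure_rtt(host, port=6881, timeout=1.0):
    """Estimate RTT to a peer with a single TCP connect.

    A crude proxy for ping: connect time roughly equals one
    round trip. Unreachable peers come back as infinity.
    """
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")

def rank_by_latency(rtts):
    """Order a {peer: rtt} mapping lowest-latency first; dead peers last."""
    return sorted(rtts, key=rtts.get)
```

A client would probe candidates in the background and prefer the front of the ranked list, which is roughly what the Ono plug-in's ping threads are doing, at the cost the grandparent complains about.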

Upstream Throttling (Not Choking)

Hopefully, this new uTP protocol will at least give us an improvement on the upstream side by auto-throttling the upload to prevent choking the connection.

If only the clients could peek at the routing tables of our routers...

Re:Proximity Favored Connections (1)

bertok (226922) | more than 4 years ago | (#29946240)

It doesn't have to be reliable, it just has to be better than "totally random". Even a very bad peer selection policy would be a HUGE improvement over what they have now.

Re:Much bigger issue with uTorrent still unsolved (4, Insightful)

Aladrin (926209) | more than 4 years ago | (#29946036)

THEIR arrogance is astounding? How about yours? They are working FOR FREE. You are merely complaining. Get your hands dirty and start doing some work yourself.

You can suggest things all you want, but once you start insulting someone for their free work, you've crossed a line. Nobody is forced to use their client. There are dozens of decent clients and probably hundreds of open source ones.

As for their choices, they will work on what's more important to them, I'm sure. Since they don't need this 'local' feature, they haven't got much incentive to actually work on it.

Re:Much bigger issue with uTorrent still unsolved (4, Insightful)

bertok (226922) | more than 4 years ago | (#29946198)

THEIR arrogance is astounding? How about yours? They are working FOR FREE. You are merely complaining. Get your hands dirty and start doing some work yourself.

You can suggest things all you want, but once you start insulting someone for their free work, you've crossed a line. Nobody is forced to use their client. There are dozens of decent clients and probably hundreds of open source ones.

As for their choices, they will work on what's more important to them, I'm sure. Since they don't need this 'local' feature, they haven't got much incentive to actually work on it.

First of all, they're not working for 'free': uTorrent is owned by BitTorrent Inc, a for-profit company. Initially it was a free-time project, but it's now developed by a corporation, and those devs are salaried employees.

More importantly, uTorrent depends on and uses infrastructure that is not free by any stretch of the imagination. International links cost billions.

So by your logic, just because a user can download the client for free, BitTorrent Inc has carte blanche to do anything at all they want, including shit all over the internet infrastructure?

How the fuck does it make sense for a company whose product uses something like 30% of total internet bandwidth not to put in an hour's worth of effort to minimize its impact on said infrastructure? Their product in its present state is so harmful that ISPs are buying millions of dollars worth of equipment to throttle it, and with good reason.

Read up on the Tragedy of the Commons [wikipedia.org] and get a clue.

Compare their behavior to the largely free, open, and volunteer efforts of the dedicated people who worked on the early Internet protocols like DNS and NNTP. These were systems designed to scale, use bandwidth efficiently, and 'play nice'.

What happened since then? Why is it acceptable now to design a protocol that is maximally inefficient? Why would anyone support this kind of behavior?

Re:Much bigger issue with uTorrent still unsolved (0)

Anonymous Coward | more than 4 years ago | (#29946324)

Why would anyone support this kind of behavior?

Because I do not want to make the MAFIAA's job any easier.

Re:Much bigger issue with uTorrent still unsolved (0, Troll)

bertok (226922) | more than 4 years ago | (#29946436)

Why would anyone support this kind of behavior?

Because I do not want to make the MAFIAA's job any easier.

You do realise that the RIAA and the MPAA represent Recording Artists and Movie Producers, respectively, right?

Neither group are ISPs. Neither invest billions into internet infrastructure.

If peer-to-peer users would just play nice and use the ISP infrastructure efficiently, then maybe the ISPs wouldn't be so inclined to side with the content producers.

You might even find that if digital content distribution is done right, then the ISPs might start to push for laws similar to the "copyright tax" on writeable media to allow their users to legally download content.

Re:Much bigger issue with uTorrent still unsolved (1)

JackSpratts (660957) | more than 4 years ago | (#29946516)

holey moley. the percentage of bits devoted to file sharing is dropping fast. urgent media company/isp press releases notwithstanding, total bandwidth consumed by peer-to-peer file sharing is now under 20%. this includes all protocols. bittorrent will of course be less. the precipitous share decline has caused at least one observer (sandvine's dave caputo) to comment that "peer-to-peer is yesterday's internet story." all the more startling coming from that outfit, a company whose controversial history suggests a vested interest in obscuring such trends.

btw, you're right that utorrent doesn't pay for the bandwidth but wrong about who does. those billions in infrastructure mods are underwritten by me, and every other consumer employing the app, and these days the isps we subscribe to aren't terribly worried about bt, if they ever really were. they're much too concerned about streaming video.

- js.

Re:Much bigger issue with uTorrent still unsolved (1)

c0d3g33k (102699) | more than 4 years ago | (#29946058)

Keeping traffic completely local would make it much easier to snag a bunch of file sharers in a massive "three strikes and you're out" campaign, don't you think? Since mere use of torrent software seems to be associated with illicit activity in the minds of the ignorant (i.e. the authoRIAAties), I'm not sure that "I was just downloading the latest Ubuntu ISO" would be enough to avoid being threatened by the ISP. Lots of local inter-ISP torrent traffic might also cause them to alert local law enforcement to take a closer look. This could increase one's risk significantly, particularly if any 'infringing' content is ever shared (by an occasional, less enlightened, user of the connection, for example). Seems safer to not have to worry about local/non-local bandwidth, to be honest. Might be smarter to prefer connections that are as non-local and non-concentrated as possible. It's not always just about data transfer speed and bandwidth saving; there are other factors to consider.

Re:Much bigger issue with uTorrent still unsolved (2, Insightful)

bertok (226922) | more than 4 years ago | (#29946276)

Keeping traffic completely local would make it much easier to snag a bunch of file sharers in a massive "three strikes and you're out" campaign, don't you think? Since mere use of torrent software seems to be associated with illicit activity in the minds of the ignorant (i.e. the authoRIAAties), I'm not sure that "I was just downloading the latest Ubuntu ISO" would be enough to avoid being threatened by the ISP. Lots of local inter-ISP torrent traffic might also cause them to alert local law enforcement to take a closer look. This could increase one's risk significantly, particularly if any 'infringing' content is ever shared (by an occasional, less enlightened, user of the connection, for example). Seems safer to not have to worry about local/non-local bandwidth, to be honest. Might be smarter to prefer connections that are as non-local and non-concentrated as possible. It's not always just about data transfer speed and bandwidth saving; there are other factors to consider.

[citation needed]

Keep in mind that in large part, the motivation of ISPs for monitoring or throttling bittorrent is not concerns over copyright violations, but the impact to their bottom line. All ISPs have three classes of links: Internal, peered, and external. They have a strong preference to maximize the utilization of the former over the latter, as internal links are effectively free and often underutilized, while external links are often very expensive and overloaded.

If torrent traffic utilized internal connections much more than external connections, ISPs wouldn't be financially motivated to monitor at all, because monitoring equipment is expensive. Right now, monitoring and throttling is worth it, because bittorrent tends to use external links the majority of the time.

In effect, improving efficiency would improve security.

Re:Much bigger issue with uTorrent still unsolved (5, Insightful)

evilviper (135110) | more than 4 years ago | (#29946194)

In Australia for example, international bandwidth is extremely limited and very expensive, but local bandwidth, even between ISPs, is essentially unlimited, high-speed, and often free or 'unmetered'.

No bittorrent client picks one peer, and downloads everything from them... Instead, it connects to a large number of peers, and downloads from all of them.

If you can download from your neighbor 100X faster than you can download from someone across the planet... good. You'll get 100 chunks from your neighbor, for every 1 you get from the foreign country. No programming required.

What do you think is going to be faster: connecting to your neighbour through at the same fucking router, or some kid's home PC in Kazakhstan over 35 hops away?

There's ample opportunity for either to be equally fast. Crossing an ocean increases latency, but if the link isn't horribly oversubscribed, it can provide speeds faster than you can handle. So your neighbor might have 100 other people requesting the same torrent as you, for the same reasons, while the kid in Kazakhstan may have a great internet connection that is barely being utilized, especially while international traffic is down. This is not international calling... you don't save money by not fully utilizing that transoceanic link.

Also, ISPs brought this on themselves. I've long advocated ISPs allowing unlimited speeds between subscribers, and only limiting the uplink speeds to whatever you've subscribed, but they almost never do. If they did, see above... any peer-to-peer protocol would naturally download almost everything from local sources, without any added intelligence on its part. You wouldn't have to write it in to every single app.

A reasonably competent programmer could implement this in an hour

You could implement it easily, if you're willing to prefer neighboring network addresses to the exclusion of all else. If you want some fancy weighting to decide how important locality is versus absolute speed, completeness, etc., then you're talking about a major project.

Besides that... A good network admin could do the job in an hour as well, with no need to rewrite any of the applications.

They're a group of developers who could, with an hours effort, reduce international bandwidth usage by double-digit percentages and improve torrent download speeds by an order of magnitude, but they just... don't.

That's baseless and utterly ridiculous.

Re:Much bigger issue with uTorrent still unsolved (1, Insightful)

bertok (226922) | more than 4 years ago | (#29946560)

Your post is precisely what is wrong: It's all about what you get out of an individual download.

Not everything in the world is about directly maximizing YOUR OWN PERSONAL BENEFIT. More importantly, you can actually improve your own personal speeds by cooperating. Read up on the Tragedy of the Commons [wikipedia.org]. If many players all blindly try to maximize their personal utilization of a common resource, they all suffer in aggregate.

This is particularly true of peer-to-peer protocols, which are ideally placed to utilize otherwise wasted local bandwidth. I've read papers that show that an efficiently designed P2P protocol can actually maximize the return on investment of a switched network, a feat that essentially no other type of protocol can achieve, largely because a well designed P2P protocol can minimize the amount of data flowing through inter-network or international links.

Re:Much bigger issue with uTorrent still unsolved (0)

Anonymous Coward | more than 4 years ago | (#29946280)

I agree. I see peers on the same ISP as me, and others on another ISP based in the same city, yet my client doesn't connect to them.

Coding a way so you can manually prioritise that peer or domain would be easy to do.

Re:Much bigger issue with uTorrent still unsolved (2, Insightful)

bertok (226922) | more than 4 years ago | (#29946572)

I agree. I see peers on the same ISP as me, and others on another ISP based in the same city, yet my client doesn't connect to them.

Coding a way so you can manually prioritise that peer or domain would be easy to do.

I see this all the time too. It shits me to no end that I could be connecting to users with 10Mbit uplinks in the same city, but uTorrent blindly connects to peers in places like Hungary which is almost precisely the furthest possible distance from me.

ISPs don't give a crap (2, Interesting)

7-Vodka (195504) | more than 4 years ago | (#29945840)

Furthermore, the revision is designed to eliminate the need for ISPs to deal with problems caused by excessive BitTorrent traffic on their networks

How wrong this is. ISPs don't give a crap about this and it's never going to work.

1. They don't give a crap, because the real reason they throttle is that they don't want you using your bandwidth. You know, the bandwidth you actually paid for. Whether you are supposedly clogging up their pipe or not is not the point. The point is that you are using more bandwidth than another user, and they could kick your ass and sell their internets to 1000 old ladies instead.

2. It's never going to work because of (1), and because the problem it's trying to solve was never a problem for the ISP; it was always a problem for the end user anyway. You think the ISPs have big download pipes and small upload limits like you do? They don't. Their bandwidth is symmetric. You can stop clogging your tiny upload allocation as much as you want; it's never going to affect the ISP. They never had an UP shortage, because they have equal up/down bandwidth and hand you tiny up limits. It may help the end user, but only if it's already better than existing solutions, and if you already know what your ISP castrates your up bandwidth to, it's not.

route around the problem (2, Insightful)

Tumbleweed (3706) | more than 4 years ago | (#29946412)

Get a seedbox. :)

Re:route around the problem (1)

mister_playboy (1474163) | more than 4 years ago | (#29946556)

You're still gonna need a torrent client to run on that seedbox.

TcpAckFrequency allows more traffic (1)

u64 (1450711) | more than 4 years ago | (#29946510)

Slightly offtopic, but here's how to get smoother and faster traffic, so that upload disrupts download far less:

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\parameters]
"TcpAckFrequency"=dword:9

dword:d also works, but above that things probably become too smooth.

Compare the 'before' and 'after' real-world download speeds, especially how online gaming and interactive things behave.

Linux doesn't seem to have this tweak. Right?

This is not new (0)

Anonymous Coward | more than 4 years ago | (#29946706)

Azureus has been doing this for years. There are actually three built-in methods (only one of which I find effective).
