
Enhancement To P2P Cuts Network Costs

kdawson posted more than 6 years ago | from the not-the-enemy dept.

The Internet

psycho12345 sends in an article in News.com on a study, sponsored by Verizon and Yale, finding that if P2P software is written more 'intelligently' (by localizing requests), the effect of bandwidth hogging is vastly reduced. According to the study, redoing the P2P into what they call P4P can reduce the number of 'hops' by an average of 400%. With localized P4P, less of the sharing occurs over large distances, instead making requests of nearby clients (geographically). The NYTimes covers the development from the practical standpoint of Verizon's agreement with P2P company Pando Networks, which will be involved in distributing NBC television shows next month. So the network efficiencies will accrue to legal P2P content, not to downloads from The Pirate Bay.

190 comments

P2P - P4P? (1, Funny)

thousandinone (918319) | more than 6 years ago | (#22750754)

Yeah! Let's increment a number that isn't actually carrying a numeric value! We no longer transfer peer 2 peer. Information is now provided by peers, 4 peers.

Re:P2P - P4P? (5, Insightful)

TripMaster Monkey (862126) | more than 6 years ago | (#22750788)

Well, strictly speaking, incrementing the number would result in P3P, not P4P. Just as P2P means "Peer to Peer", P4P could be interpreted as "Peer for Peer", justifying the numeral.

Re:P2P - P4P? (5, Funny)

ePhil_One (634771) | more than 6 years ago | (#22750890)

Well, strictly speaking, incrementing the number would result in P3P, not P4P. Just as P2P means "Peer to Peer", P4P could be interpreted as "Peer for Peer", justifying the numeral.

Personally I'm waiting for the next binary progression, Peer Ate Peer, or P8P. I'm not sure what it will do, but I'll bring popcorn to watch...

Re:P2P - P4P? (1)

thousandinone (918319) | more than 6 years ago | (#22750910)

Yeah, I said as much. I still think the idea is silly though, mainly because peer to peer isn't a protocol in and of itself to begin with, just a description of what it is; different protocols handle it in different ways. A new, more efficient protocol for a peer to peer transfer is still a peer to peer transfer.

Re:P2P - P4P? (1)

MBGMorden (803437) | more than 6 years ago | (#22751496)

When has that stopped the buzzword guys? "Web 2.0" still comes over the same protocols that the old web did but they still found an excuse to increment the (nonexistent) version :).

Re:P2P - P4P? (1)

mgblst (80109) | more than 6 years ago | (#22751332)

Surely this has been patented at some time? This just seems too obvious for it not to be.

Re:P2P - P4P? (1)

sm62704 (957197) | more than 6 years ago | (#22751526)

If you count in binary on your fingers with your pinkie as "1" and ring finger as "2" (10), then 4 is flipping someone the bird.

The RIAA perhaps?

;)

400%? (5, Insightful)

Sam H (3979) | more than 6 years ago | (#22750764)

How do you reduce the number of 'hops' by an average of 400%? Negative number of hops? Also, FP.

Re:400%? (3, Informative)

IndustrialComplex (975015) | more than 6 years ago | (#22750880)

They probably discussed the number so many times that they lost track of how it was referenced. Let's say they cut it down to 25 from 100. If they went from their method to the old method, it would be a 400% increase in the hop count.

Sloppy, but we can understand what they were trying to say.

Re:400%? (2, Interesting)

LandKurt (901298) | more than 6 years ago | (#22751260)

Well, technically going from 25 to 100 is a 300% increase, since the increase is 75. But I realize that whenever the ratio between numbers is four to one it's going to be commonly referenced as 400%, regardless of whether it should actually be a 300% increase or 75% decrease. The mind fixates on the factor of four and wants to use 400 as the percentage. The correct numbers just feel wrong.

Interestingly, this mistake doesn't happen with small changes like 10 or 20 percent. But as soon as something doubles, it's called a 200% increase rather than the mathematically correct 100% increase.
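The distinction the parent is drawing can be made concrete with a few lines of arithmetic (a standalone illustration, not anything from the article):

```python
def percent_change(old: float, new: float) -> float:
    """Relative change from old to new, expressed as a percent of old."""
    return (new - old) / old * 100.0

# A four-to-one ratio is a +300% increase one way, or a -75% decrease the other:
print(percent_change(25, 100))   # 300.0
print(percent_change(100, 25))   # -75.0

# The article's 5.5 -> 0.89 hops figure works out to roughly an 84% reduction:
print(round(percent_change(5.5, 0.89), 1))   # -83.8
```

Neither direction ever yields 400%; that number only appears if you quote the raw ratio as a percentage.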

Re:400%? (3, Informative)

MightyYar (622222) | more than 6 years ago | (#22750920)

The number 400% appears nowhere in the article.

Re:400%? (-1, Redundant)

Anonymous Coward | more than 6 years ago | (#22751006)

And how, exactly, would you know that?

Re:400%? (4, Insightful)

SatanicPuppy (611928) | more than 6 years ago | (#22750924)

Just typical market speak. 400% sounds sexier than "a factor of four".

The problem that leaps to my mind is that either you're going to have to collect a huge chunk of routing information so your client can figure out which peers are "close" to you, or a third party is going to have to manage the peering...Neither one of those thrills me, especially since an ISP is pushing the technology, which would make them the obvious third party.

Re:400%? (1)

orclevegam (940336) | more than 6 years ago | (#22751602)

Maybe use the data from the DNS records to correlate blocks of IPs that all belong to the same organization? Apply a weight first to IPs coming from the same organization you're a member of, and then a second weight to those that are geographically close (using one of the many services out there that correlate IP to physical location [poor granularity though]). Might even be able to apply some logic that says something like "if getting high latency from IP in block X, weight other IPs from block X lower". Might help eliminate slow connections that are all traveling over the same wonky backbone.
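The weighting scheme described above could be sketched as a simple scoring function. Everything here is hypothetical (the field names, the weights, and the per-block latency penalty are made up for illustration):

```python
def score_peer(peer: dict, my_org: str, block_latency_ms: dict) -> float:
    """Higher score = more preferred peer. Weights are arbitrary, for illustration."""
    score = 0.0
    if peer["org"] == my_org:              # same organization: strong preference
        score += 10.0
    score -= peer["distance_km"] / 500.0   # mild geographic-distance penalty
    # Penalize every peer in an IP block that has shown high latency ("wonky backbone")
    score -= block_latency_ms.get(peer["block"], 0) / 50.0
    return score

peers = [
    {"ip": "10.0.0.5",  "org": "MyISP",  "block": "10.0.0.0/24",  "distance_km": 20},
    {"ip": "192.0.2.9", "org": "FarISP", "block": "192.0.2.0/24", "distance_km": 6000},
]
latency = {"192.0.2.0/24": 250}   # a block observed to be slow
best = max(peers, key=lambda p: score_peer(p, "MyISP", latency))
print(best["ip"])   # 10.0.0.5
```

The per-block penalty is what implements the "if block X is slow, deprioritize all of block X" logic.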

Re:400%? (5, Insightful)

ThreeGigs (239452) | more than 6 years ago | (#22751074)

It gets worse. From RTFA:

"Using the P4P protocol, those same files took an average of 0.89 hops"

How do you possibly get an average of LESS than one hop, unless you're getting the file from yourself?

Re:400%? (4, Funny)

mcrbids (148650) | more than 6 years ago | (#22751206)

How do you possibly get an average of LESS than one hop, unless you're getting the file from yourself?

Easy! They ran it in simulation, using VMware. Have you ever used VMware? It's an amazing tool that makes an excellent platform for simulations and prototypes, especially when you need to know exactly how applications will perform in the real world.

Game developers, for example, routinely use VMware sessions. Especially the hard-core, 3D FPS developers.

No, really!

Re:400%? (1)

Nullav (1053766) | more than 6 years ago | (#22751278)

With those results, I'm going to assume VMWare isn't made for testing transfer protocols. You know, unless anti-routers exist.

Re:400%? (0)

Anonymous Coward | more than 6 years ago | (#22751636)

Whoooooooooooooooooooooogh

Re:400%? (0)

Anonymous Coward | more than 6 years ago | (#22751912)

Looks like he finally got the joke and killed you halfway through.

Re:400%? (2, Funny)

kaizokuace (1082079) | more than 6 years ago | (#22751616)

You can maximize the bitterness of the hops by adding more of them to the beginning of the boiling wort. A small amount of hops at the end of the boi...oh what are we talking about again?...

What information are we talking about? (3, Interesting)

TubeSteak (669689) | more than 6 years ago | (#22750776)

For other ISPs to reap the benefits Verizon did in the test, they too would have to share information about their networks with file-sharing companies, and that they normally keep that information close to their chests.
Excuse my ignorance, but what about their network is secret, other than the prices they're paying?
Network topology isn't & can't be a secret...

Re:What information are we talking about? (2, Informative)

Anonymous Coward | more than 6 years ago | (#22750834)

The answer is "a lot."

How much capacity a device has, how many links it has, how much it might cost a carrier to use those links. How much capacity the switching devices in that network have, what firewall/filtering might be in place. Where the devices are physically located.

There's a lot more to a network than just IP addresses.

Re:What information are we talking about? (1)

morgan_greywolf (835522) | more than 6 years ago | (#22751162)

How much capacity a device has, how many links it has, how much it might cost a carrier to use those links. How much capacity the switching devices in that network have, what firewall/filtering might be in place. Where the devices are physically located.
Other than 'real world' stuff like costs and physical location, the rest of the information is basically discoverable by various network and network security testing tools for someone with the know-how and motivation.

Re:What information are we talking about? (2, Informative)

truthsearch (249536) | more than 6 years ago | (#22750988)

My guess is the geographic location of IPs, since they're not just talking hops, but distance. If the hops are all geographically local, the data likely transfers between fewer ISPs and backbones. I don't know much about the details, so this is just my interpretation of the claims.

But wouldn't a protocol that learns and adjusts to the number of hops be nearly as efficient? If preferential treatment were given to connections with fewer hops and the same subnet I bet they'd see similar improvements.
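A client could approximate that kind of learning without any topology data at all, for example by keeping an exponentially weighted moving average of observed throughput per peer and preferring the fastest ones. This is a sketch of the general idea, not anything from the study:

```python
class PeerStats:
    """Track a smoothed estimate of per-peer throughput; prefer the fastest peers."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha
        self.rate = {}   # peer id -> EWMA of observed KB/s

    def observe(self, peer: str, kbps: float) -> None:
        old = self.rate.get(peer, kbps)   # first observation seeds the average
        self.rate[peer] = (1 - self.alpha) * old + self.alpha * kbps

    def ranked(self) -> list:
        """Peers ordered best-first by smoothed throughput."""
        return sorted(self.rate, key=self.rate.get, reverse=True)

stats = PeerStats()
for peer, rate in [("near", 900), ("far", 120), ("near", 950), ("far", 100)]:
    stats.observe(peer, rate)
print(stats.ranked())   # ['near', 'far']
```

Geographically or topologically close peers tend to deliver better throughput, so ranking on observed speed indirectly rewards low hop counts without the ISP sharing anything.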

Re:What information are we talking about? (1)

brunes69 (86786) | more than 6 years ago | (#22751666)

Geographic location of IPs is not secret.

www.maxmind.com

If your project is open source their database is free.

Re:What information are we talking about? (5, Informative)

mr_mischief (456295) | more than 6 years ago | (#22751112)

You seem so certain.

Your traceroute program doesn't tell you when your traffic is being routed four hops through a tunnel to cut down on visible hops and to save space in the ISP's main routing table. Without the routing tables at hand, you don't know the chances of being routed through your usual preferred route versus through a backup route kept in case of congestion. Nothing from the customer end shows where companies like Level 3 and Internap have three or four layers of physical switches with VLANs piled on top between any two routers. Nothing tells you when you're in a star build-out of ten mid-sized cities that all go to the same NOC versus when you're being mesh routed over lowest latency-weight round robin, although you might guess by statistical analysis; mesh routing of commercial ISP traffic outside the main NAPs is getting more and more rare.

There's a lot you can easily deduce, especially if your ISP uses honest and informative PTR records. There's still much that an ISP can do that you'll never, ever know about.

I worked for one ISP where we had 5 Internet connections in four cities to three carriers, but we served 25 cities with them. We had point-to-point lines from our dial-in equipment back to our public-facing NOCs. We had a further 18 or so cities served by having the lines back-hauled from those towns to our dial-in equipment. We had about 12k dialup customers and a few hundred DS1, fractional DS1, frame relay, and DSL customers. Everyone's traffic went through one of two main NOCs on a good day, and their mail, DNS, AAA, and the company's web site traffic never touched the public Internet unless we were routing around trouble. In a couple of places we even put RADIUS slaves and DNS caching servers right in the POP.

I worked for another that served over 40k dial-up and wireless customers by the time they sold. We had what we called "island POPs". Each local calling area we served had dial-in equipment and a public-facing 'Net connection. Authentication, Authorization, and Accounting, DNS, Mail, and the ISP's website traffic all flowed over the public Internet except in the two towns we had actual NOCs. There were tunnels set up between routers that made traffic from the remote sites to the NOCs look like local traffic on traceroute, but that was mainly for our ease of routing and to be able to redirect people to the internal notification site when they needed to pay their late bills. We (I, actually) also set up L2TP so that we could use dial-up pools from companies like CISP who would encapsulate a dial-in session over IP, authenticate it against our RADIUS, and then allow the user to surf from their network. We paid per average used port per month to let someone else handle the customer's net connection while we handled marketing, billing, and support.

The first ISP I worked for had lines to four different carriers in four different NAPs in four different states, lots of point-to-point lines for POPs, and a high-speed wireless (4-7 MBps, depending on weather, flocks of birds, and such) link across a major river to tie together two NOCs in two states. Either NOC could route all of the traffic for all the dozens of small towns in both states as long as one of our four main connections and that wireless stayed up (and all the point-to-point ones did, too). If the wireless went down, the two halves of the network could still talk, but over the public Internet. That one got to about 10k customers before it was sold.

At any of those ISPs, I couldn't tell you exactly who was going to be able to get online or where they were going to be able to get to without my status monitoring systems. On one, all the customers could get online even without the ISP having access to the Internet, but they could only see resources hosted at the ISP. Yet that one might drop five towns from a single cable break. Another one might keep 10k people offline due to a routing issue at a tier-1 NAP, but everyone else was okay. However, if that one's NOC went offline, anyone surfing in other towns could continue to do so, but no new connections could be authenticated and nobody could check their mail. At the third, at least three connections had to fail to knock out all the customers we had in a particular state, but two or three smaller towns might go down due to a phone company issue.

You really don't know what's going on inside a network you're not administering. You might have a pretty good clue, but you just never can be sure.

Re:What information are we talking about? (1)

Xelios (822510) | more than 6 years ago | (#22751618)

We had about 12k dialup customers and a few hundred DS1, fractional DS1, frame relay, and DSL customers. Everyone's traffic went through one of two main NOCs on a good day, and their mail, DNS, AAA, and the company's web site traffic never touched the public Internet unless we were routing around trouble. In a couple of places we even put RADIUS slaves and DNS caching servers right in the POP.
My HED hurts...

Re:What information are we talking about? (2, Interesting)

leuk_he (194174) | more than 6 years ago | (#22751600)

ISPs are always very reluctant to admit that they do not have any redundancy in their outside links to the rest of the internet. That information just is not available. And how peering agreements work is mostly hidden.

They simply do not tell, and there is no established protocol to get that information reliably. This P4P would give this information in a way usable to P2P applications.

One disadvantage of P4P is that not everyone will be equal according to P4P. It might reason that all Americans can be served at a lower cost than people in Europe. To Europeans who have as good a connection to the US as to neighboring states, it might look like the American community is leeching off them: they prefer to serve each other and leave the scraps to foreigners. As a result, trackers in Europe will ban US leeches, making P4P less useful. (This is an example, but assumed is that

This P4P is only useful to users if it serves ADDITIONAL bandwidth that was not available before. Currently, however, most connections are asymmetrical; it is very easy to use the full upload while there is plenty of room left on the download side.

With the connection I have now, I have very little trouble using up all available upload BW.

"legal" content? (1)

CSMatt (1175471) | more than 6 years ago | (#22750782)

So suddenly the BitTorrent protocol is illegal now?

Re:"legal" content? (0)

Anonymous Coward | more than 6 years ago | (#22751100)

P2P = Pirate 2 Pirate according to Comcast Wiki

Is it just that I'm naive ... (4, Insightful)

Dr.Merkwurdigeliebe (1055918) | more than 6 years ago | (#22750796)

... or is it encouraging to see network providers taking a stance other than p2p is bad? This looks good - kind of like "p2p isn't going away, so as long as we have to live with it, let's try to make the best of it"

Re:Is it just that I'm naive ... (1)

Tridus (79566) | more than 6 years ago | (#22750810)

That's how I hope they take it. If it works as well as they claim, though, this isn't good just for ISPs. It's good for people using it to download stuff too. I mean, getting data from the other side of the city usually has lower latency than getting it across a trans-Atlantic cable.

Re:Is it just that I'm naive ... (1)

stiggle (649614) | more than 6 years ago | (#22750870)

They're using it to distribute their own content - they can still be draconian against other P2P content coming into their network infrastructure. Plus they've said they're not looking at putting the technology back into the community for other P2P clients to use. So basically they've done this to save themselves money.

Re:Is it just that I'm naive ... (1)

IndustrialComplex (975015) | more than 6 years ago | (#22750914)

But it does show that there is apparently a lot of room for improvement over what is in the wild now. It demonstrates that the money a lot of companies declared was wasted by torrent traffic was indeed waste, and not an insurmountable obstacle whose only solution was to throw more bandwidth at the problem.

Re:Is it just that I'm naive ... (1)

thtrgremlin (1158085) | more than 6 years ago | (#22751518)

Well, they say they aren't going to share the RESULTS, but they explain exactly what they are doing. Localization data for IP addresses is public. Running a trace on a swarm isn't exactly difficult. Latency has been used for a very long time by "advanced" (using that term loosely) tools for picking web mirrors (like Ubuntu's 'Software Sources' tool to 'Choose Best Server').

Considering the best thing their crackerjack legal team could come up with in allegations of treason was a Nuremberg Defense, I think the Open Source Community can figure out how to "intelligently" reduce hops using that type of data to pick peers/seeds, if it was going to improve overall (P2P) network performance.

P4P, hahahahahaha! Why not just call it P2P4PR.

P4P - Pay for Performance (1, Funny)

Anonymous Coward | more than 6 years ago | (#22750808)

Hmmm, looks like the acronym is already used... Just the way they want it...

So.... (1)

mdm-adph (1030332) | more than 6 years ago | (#22750818)

So the network efficiencies will accrue to legal P2P content, not to downloads from The Pirate Bay
...and they're going to differ between the two how, exactly? (Excuse my ignorance if I'm missing something.)

Re:So.... (1)

guruevi (827432) | more than 6 years ago | (#22751138)

The network will force you to

a) view it WITH commercial breaks every 5 minutes (or worse, since it's now on the interwebs, it might also contain a lot of Cialis and Viagra ads)
b) use it only on the computer you downloaded it to
c) be unable to fast forward (or backward) without restarting a commercial

This will of course add to the revenue and, on the other hand, turn people off the format so they'll go back to getting it from TPB.

Re:So.... (2, Insightful)

tech_guru5182 (577981) | more than 6 years ago | (#22751204)

Protocol. Pirate Bay will be a torrent; their "P4P" client will use a different protocol. Now, I don't see why someone couldn't write a BitTorrent client that would do the same thing (seek relatively local IPs from a tracker). It is public knowledge (or at least readily available) what ISP an IP belongs to, and what country it is in. In some cases, it can be localized even further. (Large ISPs typically have local identifiers in the hostnames of their routers; for example, they may use something like Springfield1.state.bigisp.com.) I don't see why this must be in the protocol to be implemented; it should be able to be done in the client as well. Perhaps it would be best if a client would look to stay first within the same IP block, then the same domain. It won't be quite as effective without knowing all link bandwidths, but it would drastically improve the current situation.
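The "stay within the same IP block" preference could be approximated client-side with nothing but the standard library's ipaddress module. A crude but cheap locality score is the length of the shared leading bit prefix between two IPv4 addresses (purely illustrative; real clients would combine this with measured speed):

```python
import ipaddress

def locality(my_ip: str, peer_ip: str) -> int:
    """Crude locality score: number of shared leading bits (32 = same /32 address)."""
    a = int(ipaddress.ip_address(my_ip))
    b = int(ipaddress.ip_address(peer_ip))
    diff = a ^ b
    return 32 - diff.bit_length()   # bit_length() of 0 is 0, so identical IPs score 32

me = "203.0.113.10"
peers = ["203.0.113.77", "198.51.100.4", "203.0.120.9"]
peers.sort(key=lambda p: locality(me, p), reverse=True)
print(peers)   # ['203.0.113.77', '203.0.120.9', '198.51.100.4']
```

A peer in the same /24 shares at least 24 leading bits and so sorts ahead of one in a distant block, which is exactly the "same IP block first" ordering suggested above.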

New math (4, Interesting)

ZorbaTHut (126196) | more than 6 years ago | (#22750844)

Reducing hops by 400%, eh? That's a nice trick. Can we reduce bandwidth usage by the same amount? I wouldn't mind some free bandwidth.

I honestly can't figure out where "reduce by 400%" came from. They say the average hops were reduced from 5.5 hops to 0.89 hops, which is either 84% if you're not an idiot or 616% if you are. So I'm really quite confused here. Go figure.

Re:New math (3, Funny)

L4t3r4lu5 (1216702) | more than 6 years ago | (#22750984)

Isn't a mean (assumed from "average") of 0.89 hops the same as saying that the median value is less than 1?

Is it possible that pixies and angel farts are carrying packets between peers in your model?

Re:New math (5, Informative)

MightyYar (622222) | more than 6 years ago | (#22751034)

I think I figured out their math, and you aren't going to like it:

5.5 * 0.89 - 0.89 = 4.0050 or 400%

As opposed to:

( 5.5 - 0.89 ) / 5.5 = 84%

Re:New math (0)

Anonymous Coward | more than 6 years ago | (#22751200)

Can you work your magic and explain what fucked up math they use to get .89 hops?

Re:New math (1)

MightyYar (622222) | more than 6 years ago | (#22751308)

I suppose that 0 hops would mean a direct connection... in this case it probably means connecting to another Verizon subscriber.

Though as an end-user, I don't care about "hops", I care about download speed. I'd prefer my client connect to the fastest sources, not the closest.

Re:New math (2, Funny)

unbug (1188963) | more than 6 years ago | (#22751036)

I honestly can't figure out where "reduce by 400%" came from. They say the average hops were reduced from 5.5 hops to 0.89 hops, which is either 84% if you're not an idiot or 616% if you are.
That's easy. It came from the 4 in P4P. The more accurate P6P had been vetoed by marketing as too nasty.

Localizing means less anonymity (3, Insightful)

n3tcat (664243) | more than 6 years ago | (#22750872)

While I understand what they're saying here, and I understand the surface intent of the message, I get this feeling that there is some sort of devious underlying motive here. Or it could just be that I have my Slashd^H^H^H^Htinfoil hat on a bit too tight.

Re:Localizing means less anonymity (1)

evanbd (210358) | more than 6 years ago | (#22751118)

Your computer is broadcasting an IP address!

Seriously, if your tinfoil hat is on that tight, I have some "security" software to sell you. P2P isn't anonymous, not the way it's normally implemented. If you actually want anonymous P2P, you need to go to something like Freenet [freenetproject.org].

Re:Localizing means less anonymity (0)

Anonymous Coward | more than 6 years ago | (#22751222)

Anonymity is only important if you are using P2P for something that you shouldn't be using it for, i.e. piracy. If it's a legal download, why does it matter?

Re:Localizing means less anonymity (0)

Anonymous Coward | more than 6 years ago | (#22751358)

How?
A) Anyone who wants your IP address will have it.
B) This is a P2P program written so NBC can push out its TV shows to you for "free"

Re:Localizing means less anonymity (1)

Jugalator (259273) | more than 6 years ago | (#22751382)

Localizing would also normally mean higher speed. I get much, much higher speeds domestically than across the Atlantic, for example.

So it would be a double edged sword...

innumeracy (3, Informative)

MyNymWasTaken (879908) | more than 6 years ago | (#22750874)

reduce the number of 'hops' by an average of 400%
This glaring example of innumeracy is from the submitter, as it is nowhere in the article.

On average, Pasko said that regular P2P traffic makes 5.5 hops to get its destination. Using the P4P protocol, those same files took an average of 0.89 hops.
That works out to an average 84% reduction.

Re:innumeracy (4, Funny)

pushing-robot (1037830) | more than 6 years ago | (#22750978)

On average, Pasko said that regular P2P traffic makes 5.5 hops to get its destination. Using the P4P protocol, those same files took an average of 0.89 hops.

Less than one hop on average? Wow, they must use patented "You downloaded that three months ago, you wanker! Look on your damn file server!" technology.

Good idea (1)

sleeponthemic (1253494) | more than 6 years ago | (#22750876)

But basically, if you're a pirate, this might make you nervous.

Re:Good idea (1)

sm62704 (957197) | more than 6 years ago | (#22751628)

But basically, if you're a pirate, this might make you nervous.

Arr, matey, it ain't be making ME nervous! Only thing that be makin' ME nervous is when me blunderbuss is empty and me sword breaks and I drop me knife 'caus I'm full o' rum and they make me walk the plank and keel haul me! Nothin' else makes me nervous.

What's all this bloody "P2P" nonsense anyway, ye damned landlubbers? AAAARRR!!!!

Fixed (1)

HalAtWork (926717) | more than 6 years ago | (#22750882)

The NYTimes covers the development from the practical standpoint of Verizon's agreement with P2P company Pando Networks, which will be involved in distributing NBC television shows next month. So the network efficiencies will accrue to NBC's content, not to non-sanctioned P2P such as distributing open source software, free software, music, videos, and art in the public domain and licensed under creative commons, or to help distribute software updates for packages such as Azureus [sourceforge.net].

There, fixed that for you.

Not a bad idea actually (1)

scubamage (727538) | more than 6 years ago | (#22750904)

Honestly, I think it's kind of a cool idea, but the sad part is I don't really see how this could be done on a software level... I think that's why they're citing legal content only... it will take some modifications to routing equipment, won't it?

Re:Not a bad idea actually (1)

GreyyGuy (91753) | more than 6 years ago | (#22751068)

I haven't read the article and I'm far from an P2P or IP routing expert, but wouldn't it be possible to make a best guess on proximity based on pinging the peers available, counting the hops to each one and the time to each one to estimate which ones are closest, and then focus on sharing with those?

Re:Not a bad idea actually (1)

vrmlguy (120854) | more than 6 years ago | (#22751070)

Honestly I think its kind of a cool idea, but the sad part is I don't really see how this could be done on a software level...
Why not just do a 'traceroute' to all of the seeds as you discover them, and penalize the ones that are more hops away?
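That traceroute-and-penalize idea reduces to weighting each seed down by its measured hop count. The sketch below assumes the hop counts have already been gathered by some traceroute-style probe (the peer names and numbers are made up):

```python
def prefer_fewer_hops(seeds: list, hops: dict, penalty_per_hop: float = 0.1) -> list:
    """Order seeds best-first, scaling each seed's weight down per measured hop.

    Seeds with no measurement are treated as far away (assumed 30 hops) so that
    unprobed peers don't accidentally outrank known-close ones.
    """
    return sorted(
        seeds,
        key=lambda s: 1.0 - penalty_per_hop * hops.get(s, 30),
        reverse=True,
    )

hops = {"peer-a": 3, "peer-b": 14}
print(prefer_fewer_hops(["peer-b", "peer-a", "peer-c"], hops))
# peer-a (3 hops) ranks first; unmeasured peer-c is assumed distant and ranks last
```

As posts elsewhere in the thread note, hop count is only a proxy; a real client would want to blend this with observed download speed.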

Re:Not a bad idea actually (1)

pixr99 (560799) | more than 6 years ago | (#22751634)

I don't really see how this could be done on a software level

I imagine that if the software has built in knowledge of the network topology (and the article mentions such knowledge) that it could make some determination about which peers to prefer. Another possibility is a more central index that a client could contact and ask for directions about which peers are "closest."

The first time I used a P2P system, it appeared to me, as I watched the packets, that peer selection was more or less random. It did its job, though, and made sure to saturate my link by using as many peers as necessary. It hit me that a great way to do this would be to integrate something like BGP into the client. Then the client could cross-reference the list of peers hosting needed bits with AS paths to decide which peers are the most efficient matches. The key element would be access to a set of BGP looking glasses. Ideally, large ISPs would have BGP servers making this information available via HTTP, RPC, SOAP, etc. specifically for this sort of work.

Re:Not a bad idea actually (1)

brunes69 (86786) | more than 6 years ago | (#22751714)

Like I posted above, you can use www.maxmind.com's downloadable database to find the geographic location of any IP with a quite high granularity. The database is free to use for open source projects as well.

Re:Not a bad idea actually (1)

scubamage (727538) | more than 6 years ago | (#22751794)

The biggest issue here, though, is that not all switches and COs have accurate location data, which is where MaxMind's database comes from, and some have no data at all (to my knowledge, anyway). This would help for the most part, but it won't work perfectly. I'm also curious how they define 'localized.' Like, local to a single CO? Local to a single switch? Local to a town... city... state... province... country?

Verizon actually doesn't suck (4, Interesting)

FredFredrickson (1177871) | more than 6 years ago | (#22750922)

For this reason, Verizon doesn't suck for broadband use. In my area, I have Verizon DSL (they haven't given us FiOS yet, but they ran the fiber cables a few years back), and I don't have any port blocking (that's right, folks, I can send email to ANY server), and they don't limit P2P or BitTorrent (my downloads are fast and fresh). And they haven't turned records over to the government (or at least not reportedly, yet). So far, in the category of big ISPs, Comcast vs. Verizon, Verizon is being the underdog. Which is funny, because start arguing cell phone policies and prices, and watch the argument change completely.

Re:Verizon actually doesn't suck (1)

ptbarnett (159784) | more than 6 years ago | (#22751110)

For residential FIOS, Verizon blocks incoming port 80 and 25. But, I haven't found any outgoing port blocks.

Even a residential subscriber can get business FIOS, for about double the monthly fee. It has a static IP, and multiple IPs are available. However, for some obscure reason business FIOS doesn't play well with FIOS TV (which uses the 'Net connection to download video-on-demand and program guide info).

Re:Verizon actually doesn't suck (0)

Anonymous Coward | more than 6 years ago | (#22751696)

A whistleblower recently (this last week) reported that he worked on a direct link from Quantico (either Navy or FBI... take a guess?) directly into his un-named "major wireless provider" servers. Complete access to data packets, billing records, full read/write access.

The employee was turned down flat when he wanted to install access controls on the connection. He was turned down when he wanted to install loggers.

An eerily similar lawsuit names Verizon as the defendant, with pretty much identical allegations.

lol... top google news for quantico and verizon is slashdot 9 days ago. Wired broke it, I think.

Slashdot... http://it.slashdot.org/it/08/03/05/234203.shtml?tid=172 [slashdot.org]
Wired... http://blog.wired.com/27bstroke6/2008/03/whistleblower-f.html [wired.com]

Geographically? (1)

davidwr (791652) | more than 6 years ago | (#22750936)

Just because I appear to the network as pop-123.ny.isp.com doesn't mean I'm in New York. I could be halfway around the world.

Re:Geographically? (1)

ch_rob (655367) | more than 6 years ago | (#22750970)

On that same point, even being geographically 'close' to another node wouldn't be as good as actually being able to count hops / bandwidth between nodes.

Re:Geographically? (1)

IndustrialComplex (975015) | more than 6 years ago | (#22751026)

While that is true, it only matters if a significantly large portion of the population is in a similar situation when trying to plan for the best average case.

Re:Geographically? (1)

davidwr (791652) | more than 6 years ago | (#22751232)

I'm not sure if it's true anymore, but at one time some major players including the then-major WebTV funneled all web traffic through a handful of firewalls or proxies.

what p2p protocol? (1)

esocid (946821) | more than 6 years ago | (#22750950)

They keep touting this "P2P protocol," but never actually say which one it is. I'll assume it's BitTorrent, unless they really mean "network" where they say "protocol." I'm guessing they just like the buzzwords.

Generally, what matters is acceptance (3, Insightful)

Opportunist (166417) | more than 6 years ago | (#22750964)

And let's face it, people, the next protocol will have to have a few features to be accepted, and having "local peers" isn't on the top of the list.

What does the list include? Easy:

1. Encryption
2. Onion routing

For very obvious reasons. And neither of them decreases bandwidth used. Quite the opposite.

Re:Generally, what matters is acceptance (1)

CubeRootOf (849787) | more than 6 years ago | (#22751106)

Neither of these is a requirement for 'legal' downloading. The only requirements for 'legal' downloading are authentication and speed, the second of which this apparently gets them, more than likely through the use of the first.

This will be accepted and used by large numbers of people who care about speed primarily, and about selection and privacy secondarily. In fact, if it lets me watch Chuck and Heroes on my HD TV through my computer hookup without having to do any "work", I might even use it. And if it comes with a way to talk to the people you are downloading from (authentication now presumably being active), it may start a brand new kind of community; the jury is still out on whether that is a good thing or not (leaning toward bad).

Geographically isn't what's needed (4, Informative)

ThreeGigs (239452) | more than 6 years ago | (#22750966)

less of the sharing occurs over large distances, instead making requests of nearby clients (geographically).

How about a BitTorrent client that gives preference to peers on the *same ISP*?

Yeah, fewer hops and all is great, but if an ISP can keep from having to hand off packets to a backbone, they'll save money, and perhaps all the hue and cry over P2P will die down some. I'm sure Comcast would rather pay UUnet to carry half the traffic currently destined for other ISPs than what they pay now.

Sort of a 'be nice to the ISPs and they'll be nicer to the users' scenario.

Re:Geographically isn't what's needed (1)

foniksonik (573572) | more than 6 years ago | (#22751046)

I was also thinking that if it's all above board and coordinated via ISPs, there should be some good data available on bandwidth utilization. They could positively shape the traffic: point downloaders at peers who are not currently uploading and use their spare bandwidth, rather than at someone who is already uploading a different file. A sort of P2P load-balancing routine.

Re:Geographically isn't what's needed (2, Interesting)

darthflo (1095225) | more than 6 years ago | (#22751244)

ISPs could easily achieve this without changing a single bit in most BitTorrent implementations: jack up the bandwidth within their backbone to whatever's possible. Instead of limiting that ADSL2+ line to 5 mbps, run it at 25 and throttle traffic to/from it to 5 mbps at the edge of their network. Connections within the ISP's network would tend to max out those 25 mbps; given some fiber connectivity and recent hardware, users could seed at gigabit throughputs within the provider's network.
Going back to the 25 mbps example, this could cut the outside traffic for, say, a 1.4 GByte average movie down to roughly 280 MB without any software optimisations: at a combined 25 mbps the download takes about 7.5 minutes, during which the 5 mbps edge link carries only a fifth of the data. If the industry started doing something like this, most P2P clients would probably use it. If they used it, ISPs would save even more bandwidth (== money).
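A quick back-of-the-envelope check of the parent's figures (all numbers here are illustrative assumptions, not measurements from any real ISP):

```python
# Back-of-the-envelope check of the throttle-at-the-edge idea.
# All figures are illustrative assumptions, not measured values.

MOVIE_MB = 1400          # ~1.4 GB movie
INSIDE_MBPS = 20.0       # bandwidth reachable from peers inside the ISP
OUTSIDE_MBPS = 5.0       # bandwidth throttled at the network edge

def outside_traffic_mb(movie_mb, inside_mbps, outside_mbps):
    """MB fetched from outside the ISP when both links run in parallel."""
    total_mbps = inside_mbps + outside_mbps
    seconds = movie_mb * 8 / total_mbps      # MB -> Mbit, then / Mbps
    return outside_mbps * seconds / 8        # Mbit back to MB

mb = outside_traffic_mb(MOVIE_MB, INSIDE_MBPS, OUTSIDE_MBPS)
print(round(mb))  # outside share = 5/25 of the movie = 280 MB
```

The outside link carries 5/25 of the bytes, so the savings scale directly with how far the in-network speed is raised above the edge cap.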

Freenet? (1)

inertialFrame (259221) | more than 6 years ago | (#22751392)

Freenet [freenetproject.org] seems to be designed primarily for anonymity, and I have read that it does not have the best performance. However, it does try to become efficient over time by moving frequently requested data around automatically on the various nodes in order to reduce overall bandwidth use and improve performance. That is, the network adapts itself to optimize for something.

I wonder if, in principle, using something like freenet would accidentally be beneficial for providers like Verizon, at least with respect to the issue at hand.

Re:Geographically isn't what's needed (1)

mcrbids (148650) | more than 6 years ago | (#22751474)

The problem of distributing large amounts of content *efficiently* was solved in 1985. No, I'm not kidding. [wikipedia.org]

Newsgroup servers routinely distribute and cache content locally to minimize overall network traffic. They can distribute only the headers of the news feed, and then cache the content after it's been requested and downloaded.

This is a *very* efficient content distribution system, and ironically, it's a system more resistant to takedown notices and the like than BT. (It's virtually impossible to entirely remove content from NNTP once it's been posted there)

Brings to mind the saying:

Those who cannot learn from history are doomed to repeat it.
--George Santayana

Re:Geographically isn't what's needed (1)

s2r (461076) | more than 6 years ago | (#22751604)

I agree with the parent post.
It's not a matter of where the host is but the speed you can get from it.
I usually download/upload at higher speeds with peers in Sweden than with peers where I live (Argentina).

Hey, I've got a study too... (4, Informative)

br00tus (528477) | more than 6 years ago | (#22751052)

it's called Mbone [wikipedia.org]. It was created 15 years ago by a bunch of people including Van Jacobson, who had already helped create TCP/IP, wrote traceroute, tcpdump and so forth.


It would have made Internet broadcasting much more efficient, but it never took off. Why? Because providers never wanted to turn it on, fearing their tubes would get filled with video. So what happened? People broadcast videos anyhow, they just don't use the more efficient Mbone multicasting method.

Furthermore, when I download a video via BitTorrent, there are usually only a few people, whether they have a complete seed or not, who are sending out data, so how local they are doesn't matter. If there are more people connected, most are usually sending data out at less than 10K, while one (or maybe two) are sending at anywhere from 10K to 200K. So I usually want to be hooked to them, no matter where they are: I'm getting data from them at many multiples of the average person's rate.

I care about speed, not locality. The whole point of the Internet and World Wide Web is that locality doesn't matter; speed is what matters to me. Verizon, however, would prefer most traffic to stay on their own network, so they don't have to worry about exchanging traffic with other providers and so forth.

Another thing: there is tons of fiber crisscrossing the country and the world, so we have plenty of inter-LATA bandwidth. The whole problem is bandwidth from the home to the local Central Office. In a lot of countries natural monopolies are controlled by the government; I always hear about how inefficient and backwards that would be, yet here we have the "last mile" controlled by monopolies that have been giving us decades-old technology for decades. In fact, the little attacks by the government have been rolled back: in a reversal of the Bell breakup, AT&T now owns a lot of the last mile in this country. It's a safe monopoly that the capitalists, I mean, shareholders, I mean, investors can get nice fat dividends from instead of re-investing in bleeding-edge capital equipment, so why give people a fast connection to their homes? Better to spend money on lawyers fighting public wifi and the like, or on commissars and think tanks to brag about how efficient capitalism is in the US of A in 2008.

Re:Hey, I've got a study too... (1)

Maxo-Texas (864189) | more than 6 years ago | (#22751322)

http://finance.yahoo.com/q/bc?s=T&t=1y [yahoo.com]

The investors in AT&T have lost about 10% of their value in the last year.
They never recovered from 2001 and are still at about 60% of their value from back then.

This is true for many large corporations today.

The executive class is looting and pillaging corporations at the expense of
a) the workers (1 executive pay == 6000 $40k workers)
b) the investors (see stock performance above-- think about adding $155 mill in profits that went to one man who took Home depot into the toilet)
c) the country (you want them to open in your area- give them no taxes for 10 years-- so they destroy your roads and you pay to fix them-- in many cases the instant the tax breaks end, they leave)

the truly wealthy investors are right now taking huge baths in muni bonds and hedge funds.

The executive class in America is a source of many of our problems today. And they are getting away with it.

Re:Hey, I've got a study too... (1)

zappepcs (820751) | more than 6 years ago | (#22751704)

Multicasting to clients in your own LAN/WAN infrastructure is not a big idea, it's common sense when you can expect 15% or more of subscribers to want the same streams. There is a reason multicast is not used: the provider would not have complete control over who subscribes to the multicast. Even if they build the set-top box that receives the multicast stream and reports back, interception anywhere in the middle is possible. Multicast streaming of current cable content would mean 'giving' it away unless all the data is encrypted. And if the encryption is strong enough, there is no need to serve the content from a central point at all, so use P2P: the central network no longer has to carry the streaming data, and the customers whose boxes are used end up paying for the P2P bandwidth.

The math goes something like this: 57 movies in the on-demand lineup, say 100 subscribers per neighborhood on average, and each box stores about 1/10 of the chunks of every movie. So while your box only holds 1/10 of the chunks, the other 90% are close by, and none of the P2P traffic goes past the local router. Using P2P, the cable company can put an on-demand video store in every neighborhood without ever paying for huge centralized servers, or for the bandwidth to haul the data in from outside the local router segment where it will be used. Locate the tracker on that local router segment... voila!

Once implemented, they hobble all other P2P and all is handily taken care of.
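As a rough sanity check on the parent's scenario (the 57 movies, 100 subscribers, and one-tenth-of-chunks figures are from the comment; the average movie size is an assumption):

```python
# Rough numbers for the neighborhood on-demand cache described above.
# Movie size is an assumption; the other figures come from the comment.

MOVIES = 57
SUBSCRIBERS = 100
MOVIE_GB = 1.4            # assumed average movie size
CHUNK_FRACTION = 0.1      # each box stores ~1/10 of every movie

per_box_gb = MOVIES * MOVIE_GB * CHUNK_FRACTION
replication = SUBSCRIBERS * CHUNK_FRACTION   # nearby copies of each chunk

print(round(per_box_gb, 1), replication)  # ~8 GB per box, ~10x redundancy
```

Roughly 8 GB of disk per set-top box buys the whole neighborhood the full 57-movie catalog, with every chunk held by about ten nearby boxes.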

Yeah, sure, right (1)

poetmatt (793785) | more than 6 years ago | (#22751084)

What if a torrent has no seeds or leeches anywhere remotely local?

This is why any "massive improvement" on this front makes me skeptical. We all know the reason they want to tie it to local peers is to save bandwidth costs by basically using only their own uploaders, which would slow speeds down astronomically. An overseas host that can upload at 300KB/s or more vs. a local one capped at 40KB/s. You decide.

ASN matching (1)

c_g_hills (110430) | more than 6 years ago | (#22751168)

I conjectured a couple years ago that this could be done simply by matching up IP addresses to autonomous system numbers and picking peers that are in the same AS number in preference to other peers.
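A minimal sketch of that idea, assuming a hypothetical IP-to-ASN lookup table (in practice this would come from a BGP table dump or a whois-style mapping service; the addresses and AS numbers below are invented):

```python
# Sketch of AS-number-based peer preference. IP_TO_ASN is a hypothetical
# stand-in for a real IP -> ASN mapping such as a BGP table dump.

IP_TO_ASN = {
    "203.0.113.7": 64500,   # us
    "203.0.113.99": 64500,  # same AS
    "198.51.100.4": 64501,  # different AS
    "192.0.2.33": 64502,    # different AS
}

def rank_peers(my_ip, peer_ips, ip_to_asn):
    """Sort peers so that those sharing our AS number come first."""
    my_asn = ip_to_asn.get(my_ip)
    return sorted(peer_ips,
                  key=lambda ip: 0 if ip_to_asn.get(ip) == my_asn else 1)

peers = ["198.51.100.4", "203.0.113.99", "192.0.2.33"]
print(rank_peers("203.0.113.7", peers, IP_TO_ASN)[0])  # 203.0.113.99
```

Same-AS peers bubble to the front of the connection list; everything else keeps its original relative order, so the client degrades gracefully when no local peers exist.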

Ono (1)

Cocodude (693069) | more than 6 years ago | (#22751258)

Azureus (and possibly other bittorrent clients) has a plugin called Ono which can find peers close to you (from a networking perspective). The website states:

The main goal of this plugin is simple -- to improve download speeds for your BitTorrent client. For most P2P applications, the decision regarding which peer to download from is generally arbitrary. When most peers offer good download performance, the random solution works well. However, if most peers are in a different part of the world from you, your downloads can really suffer.

The Ono plugin avoids this by proactively finding peers that are close to you (in a networking sense). These peers generally offer better response time, which can lead to significantly improved performance. We identify those peers that are near you by reusing network measurements from content distribution networks (CDNs), i.e. without performing extensive path measurement or probing.

It's tricky to see how much this helps me, as a bittorrent user, but as others have stated here, it must be good if major internet backbones aren't being used as much.
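A toy illustration of the general approach (this is a simplification of what Ono actually does, and the CDN cluster names and ratios are invented): each peer keeps a "ratio map" of how often the CDN directs it to each edge cluster, and peers whose maps are similar are assumed to be topologically close.

```python
import math

# Simplified sketch of Ono-style proximity: peers whose CDN "ratio maps"
# point mostly at the same edge clusters are likely close to each other.
# Cluster names and ratios are invented for illustration.

def cosine(a, b):
    """Cosine similarity between two sparse ratio maps (dicts)."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

me        = {"cdn-east": 0.8, "cdn-central": 0.2}
near_peer = {"cdn-east": 0.7, "cdn-central": 0.3}
far_peer  = {"cdn-west": 0.9, "cdn-central": 0.1}

print(cosine(me, near_peer) > cosine(me, far_peer))  # True: prefer near_peer
```

The appeal of the trick is that the measurement work is offloaded: the CDN has already mapped the network, and the client just reuses its routing decisions.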

itsatrap (1)

Gewalt (1200451) | more than 6 years ago | (#22751272)

So let's see here. We have a serverless infrastructure (peer to peer), but it requires a server (tracker). The powers that be don't like it, because they can't take control of the servers.

So along comes an ISP with a new way to control P2P by turning the ISP's gateways into the server.

Does this not give the ISP absolute control over the traffic? When they have this absolute control, don't you think they might use it to their benefit? Don't you think it just might give them a bit of "intimate" knowledge of the client activities?

I see no need for this "technology". It is not an advance. It's a trap.

Only work if they open the topology data... (3, Informative)

kbonin (58917) | more than 6 years ago | (#22751296)

Some of us working in the bleeding edge of p2p have been playing with these ideas for years to improve performance (I'm building open VR/MMO over P2P), here's the basics...

Most true p2p systems use something called a Distributed Hash Table (DHT) [wikipedia.org] to store and search for metadata such as file location and file metadata. Examples are Pastry [wikipedia.org], Chord [wikipedia.org], and (my favorite) Kademlia [wikipedia.org]. These systems index data by ids which are generally a hash (MD5 or SHA1) of the data.

Without going into the details of the algorithms, the search process exploits the topology of the DHT, which becomes something called an "overlay network" [wikipedia.org]. This lets you efficiently search millions of nodes for the IDs you're interested in in seconds, but it doesn't guarantee the nodes you find will be anywhere near you in physical or network topology space.

The trick some of us are playing with is including topology data in our DHT structure and/or search, to weigh the search to nodes which happen to be close in network topology space.

What they are likely doing is something along these lines, since they have the real topology instead of what we can map using tools like tracert.

If they really want to help p2p, then they would expose this topology information to us p2p developers, and let us use it to make all our applications better. What they're likely planning is pushing their own p2p, which will be faster and less stressful on their internal network (by avoiding peering point traversal at all costs, which is when bandwidth actually costs THEM). The problem is their p2p will likely include other less desired features, like RIAA/MPAA friendly logging and DRM, and then they'll have a plausible reason to start degrading other p2p systems which aren't as friendly by their metrics, such as distributing content they don't control or can't monetize... Then again, maybe I'm just a cynic...
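A minimal sketch of that topology-weighting trick, using round-trip time as the proximity signal (only the XOR metric here is Kademlia's; the RTT penalty and its weight are invented for illustration):

```python
# Sketch: bias Kademlia-style candidate selection toward topologically
# close nodes. The XOR metric is Kademlia's real distance function; the
# RTT penalty and its weight are invented for illustration.

def xor_distance(a, b):
    return a ^ b

def score(target_id, node_id, rtt_ms, rtt_weight=1 << 8):
    # Lower is better: pure XOR distance plus a penalty per ms of RTT.
    return xor_distance(target_id, node_id) + rtt_weight * rtt_ms

target = 0b1011_0000
nodes = [
    (0b1011_0001, 120),   # XOR-closest, but far away on the network
    (0b1011_0100, 5),     # slightly farther in ID space, nearby peer
]
best = min(nodes, key=lambda n: score(target, n[0], n[1]))
print(bin(best[0]))  # the nearby node wins once RTT is counted
```

The weight controls the trade-off: set it to zero and you get plain Kademlia lookups; set it too high and lookups stop converging in ID space, so real implementations would have to tune it carefully.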

Biased neighbor selection (0)

Anonymous Coward | more than 6 years ago | (#22751302)

This was already proposed by Bindal et al in ICDCS 2006 [acm.org] and evaluated in simulation by Aggarwal et al in SIGCOMM CCR [sigcomm.org] (July 2007). Besides, there is already software out there for the Azureus BitTorrent client (called Ono [sourceforge.net]) that does similar things without relying on the ISPs and without restricting what you download.

Uh huh... (0)

Anonymous Coward | more than 6 years ago | (#22751462)

The NYTimes covers the development from the practical standpoint of Verizon's agreement with P2P company Pando Networks, which will be involved in distributing NBC television shows next month. So the network efficiencies will accrue to legal P2P content, not to downloads from The Pirate Bay.
Yeah because we all know all downloads from TPB are illegal... (rolls eyes) And anyways, how long do you think it will take before this tech is written or back engineered for other bittorrent programs (which will be used with TPB)? I'd say about a week tops.

One Solution (1)

jlebrech (810586) | more than 6 years ago | (#22751490)

Is for ISPs to seed the most popular torrents themselves, since BitTorrent clients give download priority to the fastest peers.