
P2P Remains Dominant Protocol

CowboyNeal posted more than 7 years ago | from the way-the-data-crumbles dept.

The Internet

An anonymous reader writes "Last week, a press release was issued by Ellacoya that suggested something quite startling — HTTP (Hyper Text Transfer Protocol, aka Web traffic) had for the first time in four years overtaken P2P traffic. However, a new article from Slyck disputes this, and contends that P2P remains the bandwidth heavyweight."


88 comments


Protocol? (5, Insightful)

dreamchaser (49529) | more than 7 years ago | (#19606563)

Here I thought P2P was a class of applications, you know, ones that communicate peer to peer.

WTF. We can't even blame editors for this crap anymore, because they gave us the Firehose.

Re:Protocol? (1)

akzeac (862521) | more than 7 years ago | (#19606585)

It's the title of the article. Blame Mr. Thomas Mennecke.

So true (5, Insightful)

vivaoporto (1064484) | more than 7 years ago | (#19606607)

A lot of P2P applications even use HTTP in one phase or another of their execution, as is the case with BitTorrent clients communicating with trackers, which is done using HTTP requests.

What they might be implying is that the so-called "legitimate" traffic (casual WWW surfing) is outpacing filesharing. Ironically, this growth is due to the popularization of tools that let users share files via the web, tools like YouTube and Flickr (and pornotube, *cough*), instead of via P2P applications like Kazaa, Napster or iMesh.

Bottom line: people don't care about the tools, they care about what they use the tools for. Nothing to see here, move along.
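The tracker exchange mentioned above really is plain HTTP. As a rough, hedged sketch (not tied to any particular client, and assuming a runtime with fetch such as Node 18+ or a browser), this is roughly what a BitTorrent announce request looks like; the tracker URL and the hash/peer values are made-up placeholders:

```typescript
// Minimal sketch of a BitTorrent tracker announce: it is just an HTTP GET.
// The tracker URL, info_hash and peer_id below are placeholders, not a real torrent.
async function announce(): Promise<void> {
  const trackerUrl = "http://tracker.example.org:6969/announce"; // hypothetical tracker
  const params = new URLSearchParams({
    info_hash: "aaaaaaaaaaaaaaaaaaaa",   // real clients send the 20-byte SHA-1 of the info dict
    peer_id: "-XX0001-abcdefghijkl",     // 20-byte client identifier (placeholder)
    port: "6881",                        // port this peer listens on
    uploaded: "0",
    downloaded: "0",
    left: "1048576",                     // bytes still needed
    compact: "1",                        // ask for a compact peer list
  });

  // The response is a bencoded dictionary containing a peer list; parsing is omitted here.
  const res = await fetch(`${trackerUrl}?${params.toString()}`);
  const body = new Uint8Array(await res.arrayBuffer());
  console.log(`tracker answered with ${body.length} bencoded bytes over plain HTTP`);
}

announce().catch(console.error);
```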

Re:So true (1)

jZnat (793348) | more than 7 years ago | (#19608561)

share via P2P applications like Kazaa, Napster or iMesh.
It's 2007 now, so the applications you were looking for are LimeWire (Gnutella), eMule (ed2k), and BitTorrent. IRC/XDCC still exists, of course.

Re:So true (1)

moderatorrater (1095745) | more than 7 years ago | (#19609153)

Pornotube? It's all about spankwire my friend.

Re:Protocol? (5, Funny)

Nephrite (82592) | more than 7 years ago | (#19606823)

Don't you know that P2P stands for "Protocol to Pirate"? Shame on you!

Huh? (0)

Anonymous Coward | more than 7 years ago | (#19609597)

I thought it was Pirate 2 Pirate?

At least, that's what the MAFIAA seem to think... :-)

Re:Protocol? (0)

Anonymous Coward | more than 7 years ago | (#19608831)

That's not surprising, because you can now find and download whatever is available on P2P from file-sharing servers such as rapidshare.com and many like it. Go to Google and search for "rapidshare.com". Moreover, you can download files from these servers very fast, hence the traffic boom. Mike

Re:Protocol? (0)

Anonymous Coward | more than 7 years ago | (#19609913)

It's a partial simplification of terms. The 2 is actually an exponent. They could just as easily have called it P3. Here's the RFC [faqs.org], not to be confused with P4 [w3.org].

Re:Protocol? (1)

rdradar (1110795) | more than 7 years ago | (#19612467)

And this is the reason many people got this article wrong. It wasn't talking about P2P as a protocol, but as a group of protocols versus HTTP. YouTube, which uses HTTP for the video transfer, counts as web traffic and eats 10% of it. All the P2P traffic mentioned belongs to P2P programs and protocols, not a single protocol. This should be clear to you, but maybe it's cool to always argue against things, like the kids ;)

You're joking, right? (2, Interesting)

Celt (125318) | more than 7 years ago | (#19606575)

As much as everyone loves HTTP traffic, it's not going to overtake the likes of BitTorrent traffic anytime soon (unless of course ISPs start blocking all P2P related traffic).

Re:You're joking, right? (0)

Anonymous Coward | more than 7 years ago | (#19606879)

Correct, although I would imagine HTTP traffic has gone up noticeably since rapidshare / megaupload etc became popular.

Re:You're joking, right? (1)

Ephemeriis (315124) | more than 7 years ago | (#19609185)

I didn't read the article (I'm lazy and at work) but I really have to wonder what they consider P2P traffic... What protocols/clients are they looking at? Is it just BitTorrent, or are they looking at things like Kazaa and LimeWire as well? What about private BitTorrent clients like the one Blizzard uses to update World of Warcraft? I guess I'm not surprised that the various P2P systems are transferring more data than HTTP does... HTTP is generally just text and small pictures, maybe the occasional streaming video, while the assorted P2P clients are transferring huge piles of video, music, and data.

Re:You're joking, right? (1)

superanonman (1116871) | more than 7 years ago | (#19609421)

...and they can't do that because, in the US, it would make them non-neutral and they would lose their common carrier status.

Re:You're joking, right? (0)

Anonymous Coward | more than 7 years ago | (#19612661)

Jesus Fucking Christ! ISPs are not goddamn Common Carriers. People say this shit all the time and are constantly corrected. When are you fucking Morans going to get it?

Re:You're joking, right? (1)

superanonman (1116871) | more than 7 years ago | (#19617943)

As soon as people like you display intelligence.

Re:You're joking, right? (1)

frostband (970712) | more than 7 years ago | (#19609477)

HTTP taking over P2P? Pff...I knew that was false because I haven't heard of any great new pr0n websites that could overtake my torrent (pun intended) of P2P pr0n.

That'll be AJAX (4, Interesting)

Anonymous Coward | more than 7 years ago | (#19606577)

HTTP (Hyper Text Transfer Protocol, aka Web traffic) had for the first time in four years overtaken P2P traffic

That'll be because AJAX has led to a massive increase in HTTP traffic. How much traffic do the Web 2.0 "applications" from Google alone generate, do you think?

Many people have been saying that Web 2.0 is an utterly wasteful way to do things. There's the proof. Now can we stop building Web 2.0 "applications", please?

Re:That'll be AJAX (1)

Poromenos1 (830658) | more than 7 years ago | (#19606627)

How does loading part of a page consume more bandwidth than loading the entire page again with different content? I have to read my mail somehow, you know, it's not like I see the login page and leave satisfied.

Re:That'll be AJAX (-1, Flamebait)

Anonymous Coward | more than 7 years ago | (#19606717)

I have to read my mail somehow, you know, it's not like I see the login page and leave satisfied.

Hmm, yeah, that's a good point. It's a tricky problem, I admit.

Oh wait, I have an idea! Why not use an email client instead of shoving everything onto the web, where it was never intended to be?

Re:That'll be AJAX (0)

Anonymous Coward | more than 7 years ago | (#19606845)

Why not run your own UUCP mail server with point-to-point dial-up links like it was intended to be?

Idiot. Usage changes, get over it.

Re:That'll be AJAX (0)

Anonymous Coward | more than 7 years ago | (#19606951)

The change from UUCP to SMTP/POP3/IMAP was an improvement in functionality. The change from real email to Web 2.0 "applications" is not. That's why.

Shoving everything over HTTP is a fucking stupid idea for many reasons. People who encourage it are idiots. The idea of the internet is that we can have multiple protocols each fit for purpose running alongside each other. Apparently that's too hard, so now we have a bunch of kids who fell out of the dotcom bubble sticking to the one thing they know and pretending that they've read an RFC.

Fuck it, I'm clearly fighting a losing battle against a horde of retards. It would clearly make everyone's lives easier if we dropped IP & TCP entirely and just shoved HTTP requests directly into Ethernet frames. Protocol layering just makes things complicated. The college IT-me-too-ers can drop network classes entirely and spend their valuable time learning PHP instead.

Re:That'll be AJAX (1)

minus9 (106327) | more than 7 years ago | (#19608493)

Have you ever considered a career in the Diplomatic Corps?

Re:That'll be AJAX (0)

Anonymous Coward | more than 7 years ago | (#19608753)

Internet - Serious business.

Re:That'll be AJAX (2, Funny)

dintech (998802) | more than 7 years ago | (#19606775)

I have to read my mail somehow, you know, it's not like I see the login page and leave satisfied.

With all the spam I have to deal with, I think I'd leave more satisfied just with the login page.

Re:That'll be AJAX (1)

TheRaven64 (641858) | more than 7 years ago | (#19607001)

For a lot of AJAX applications, the HTTP overhead of each request is a significant fraction of the total data transferred. On top of this, AJAX apps typically use XML for data transport, which is not exactly lightweight. This gives a lot of total bloat when compared to a protocol that's actually designed for the purpose.

Re:That'll be AJAX (1)

curunir (98273) | more than 7 years ago | (#19610291)

As long as the app doesn't do stupid stuff like make requests to the server on each key press, a well-designed AJAX application will result in significantly less traffic. XML might not be a lightweight representation of data, but neither is (X)HTML. If you're talking about simply encoding data, XML will be far more efficient than XHTML, even when marked up semantically so that it can be styled with CSS. Regardless, both formats compress down extremely well with gzip compression. And JSON (which I believe GWT uses) is more efficient still.

The one thing that is true about AJAX applications is that they often make many small HTTP requests rather than fewer, larger requests. But so long as the client and server both support keepalives and HTTP pipelining, this difference is irrelevant. It's a shame that pipelining isn't enabled by default on modern browsers, since I've yet to encounter a server that doesn't handle it correctly.

AJAX isn't the panacea or the bogeyman that proponents and detractors portray it as. It has its uses and weaknesses. It just requires that developers are smart enough to realize it is just one of many tools made available to them. The adage remains the same: use the right tool for the job.

Your comment makes me believe that you've never had to think about these issues when designing a real-world application. You've no doubt done zero real-world tests to see what the difference in traffic comes out to (our logs show AJAX saving us considerable bandwidth...we've basically halved our bandwidth per user since AJAXifying our site). Hell, you're probably one of those tinfoil-hat-browse-the-web-with-javascript-turned-off types and are just upset that the web is becoming less and less accessible to you.
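Since the thread argues about XML bloat, JSON, and gzip, here is a quick, hedged sketch (Node-style TypeScript, record contents invented) comparing the raw and gzipped size of the same tiny record serialized as JSON and as XML; the numbers are illustrative only:

```typescript
import { gzipSync } from "node:zlib";

// The same tiny record serialized two ways; sizes are illustrative only.
const record = { id: 42, from: "alice@example.org", subject: "lunch?", unread: true };

const asJson = JSON.stringify(record);
const asXml =
  `<message><id>42</id><from>alice@example.org</from>` +
  `<subject>lunch?</subject><unread>true</unread></message>`;

for (const [label, text] of [["json", asJson], ["xml", asXml]] as const) {
  const raw = Buffer.byteLength(text);
  const gz = gzipSync(text).length;
  console.log(`${label}: ${raw} bytes raw, ${gz} bytes gzipped`);
}
// Both shrink a lot under gzip; for payloads this small, the few hundred bytes of
// HTTP headers on each request often dwarf the body either way.
```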

Re:That'll be AJAX (2, Insightful)

TheRaven64 (641858) | more than 7 years ago | (#19610543)

Your comment makes me believe that you've never had to think about these issues when designing a real-world application. You've no doubt done zero real-world tests to see what the difference in traffic comes out to (our logs show AJAX saving us considerable bandwidth...we've basically halved our bandwidth per user since AJAXifying our site)
We recently investigated moving from an old BBS-style application that users used as a talker (accessed via SSH) to an AJAX web-app. For a single day, the traffic was around 300MB; more than the total SSH (not counting SCP) traffic of the machine for an entire month, with fewer users on the AJAX version. It was also far more than the XMPP server that runs on the same machine and has an order of magnitude more users manages to get through.

Of course AJAX is an improvement over reloading the entire page. I don't dispute that at all. The discussion, however, was an AJAX webmail app, versus a local mail app and an IMAP server. If you honestly think that the AJAX app is a better choice here for bandwidth, then I hope I never have to use an app you've designed on a low bandwidth connection.

Hell, you're probably one of those tinfoil-hat-browse-the-web-with-javascript-turned-off types and are just upset that the web is becoming less and less accessible to you

Ah, argumentum ad hominem. A perfect way to make your point. No, I don't browse with JavaScript disabled (although I do turn off plugins most of the time). Yes, I have evaluated AJAX apps in real world usage. They do better than a lot of non-AJAX web sites, but as I said, they don't come close to the same efficiency as a local app in a lot of cases. Try running a web-based IM client and compare that to an XMPP client, for example. Even though XMPP is a fairly bloated protocol to start with, you get a lot less bandwidth used by the local app, for the same functionality.

In some cases, such as a corporate Intranet where bandwidth isn't an issue and ease of maintenance is, an AJAX web-app might be the best choice. Similarly, they make good fall-backs for when you are using someone else's computer. It's all about choosing the right tool for the job. AJAX is not the right tool for every job, and there seems to be a growing belief that it is.

Re:That'll be AJAX (1)

curunir (98273) | more than 7 years ago | (#19613835)

The comment you were originally replying to was specifically talking about web applications. Only a fool would believe that a traditional client-server application wouldn't outperform a web application in nearly every metric except for two, the ease of client installation and the development time. The comment you were replying to was assuming that one of those two metrics was required.

GMail is a web-based email system. That is its purpose. Sure, it allows traditional mail protocols, but for most users the web-based nature is the reason they use it. Having access to your mail from any internet-connected computer is a powerful feature that no client-server application has. Nowhere in my comment did I make any assertion that AJAX would out-perform POP or IMAP...that's just silly. As is the fact that you'd even bring up that subject.

You might have had a point if the original post had been railing about web applications in general, as they have increased the amount of information passed over HTTP, but he was specifically railing against AJAX applications. In that context, I still stand by my earlier comments that your post appeared grossly uninformed. Given your thorough explanation, I now realize that you just didn't read the comments closely enough to realize the context in which you were posting.

Re:That'll be AJAX (1)

Sancho (17056) | more than 7 years ago | (#19615317)

I didn't see anyone say anything about IMAP until you did. The original post that started off this thread just complained about all those "Web 2.0 apps". Webmail was around long before Web 2.0. AJAX Webmail ought to be less bandwidth-intensive than traditional Webmail because you don't load the entire page each time.

For your SSH vs AJAX situation, it should have been obvious that the AJAX version would be heavier on the bandwidth. Assuming that the application reads lines up until a newline, every line that the user enters would be accompanied by a new request to the server, compared to the lines being processed directly on the server with the SSH client.

Re:That'll be AJAX (3, Interesting)

Intron (870560) | more than 7 years ago | (#19607623)

Loading the whole page gets twenty "item unchanged, already in cache" and one new piece. So pressing a button may create a load on your browser to redraw the whole page, but not that much bandwidth.

Web 2.0 applications seem to like maintaining a connection and continuously downloading some piece of meaningless crap. One travel site I was on recently was refreshing so much that my PC was practically unusable. The page wasn't actually changing, just being continuously "updated".

Re:That'll be AJAX (4, Interesting)

Phil John (576633) | more than 7 years ago | (#19606633)

Sure,

when the public decides that they'd like to go back to waiting for a page-refresh to be able to do anything. When I first got a Gmail account I re-activated a long-dormant HoTMaiL account to compare it with and the difference in speed was like day and night.

Web 2.0 may be quite wasteful in the amount of traffic being sent, but in these days of streaming video sites like YouTube we're talking about a drop in the ocean.

IMHO the benefits far outweigh the drawbacks. To all the naysayers that opine about what to do when you don't have any net access, we're also moving into an era where you can, with a few caveats, be always on the net wherever you are. I live in the UK and with HSDPA, 3G and GPRS coverage I have a link to the internet about 98-99% of the time as I move about throughout the day. Accessing Web 2.0 apps via Opera Mobile on my Vario II is more than bearable (esp. with the new "grab and scroll" feature in 8.65). With the new crop of mobile AJAX apps being developed for the iPhone things could start getting very interesting.

Re:That'll be AJAX (4, Interesting)

arivanov (12034) | more than 7 years ago | (#19606803)

You are painting a very entertaining rosy picture as far as the UK is concerned.

So let's look at one day when I actually need mobile access, and the reality of mobile data in the UK, not seen through rose-tinted mobile operator marketing glasses. Shall we?

1. Get up, sync the laptop, leave the house - so far nothing mobile, do not need it.
2. Get on the Cambridge to London train. Try to connect to the net. Available GPRS timeslots at the Cambridge railway station - around 2 (Vodafone and O2 are roughly the same here). Available capacity before 9am - 0 bytes per second. The cretinous f***heads at the operator end QoS up the Blackberry traffic, so if you have a train full of business people the capacity for the other data users is 0. Slightly better after 9, but still abysmal. 3G is a tad bit better, but this is temporary due to the low penetration of the 3G BB.
3. Train Cambridge to London - no 3G coverage half of the time, GPRS coverage around 1 timeslot when available. 6+ tunnels, most of them long enough to cause a VPN timeout and a reconnect (3G is slightly better due to soft handover here, but it is not available). Overall - just about useful for replying to a couple of emails. Browse? You gotta be kidding. In the morning - totally impossible due to BB eating all capacity. After that - about as bad as browsing on a 14400 modem.
4. London - tube. No coverage. Whatsoever. The sole reason is that our best beloved Mayor is a greedy c***. London tube refuses to put in DAS or picocells because they want to give it exclusively to a single operator and shave the profits. There is a ruling by the competition commission that this is not acceptable, so the tube simply does not put any access in. Result - no access. 3G or no 3G.
5. Arrive wherever - no need for 3G or GPRS as there is wired network and/or wireless.

So overall - out of the 4h a day when I needed GPRS/3G coverage I got on average around 10Kbit per second and it was unavailable half of the time. That is not service you can rely on. That is sh*te.

Re:That'll be AJAX (1, Funny)

Anonymous Coward | more than 7 years ago | (#19607161)

It'd be wasted on you anyway - you can't spell.

Re:That'll be AJAX (0, Offtopic)

divisionbyzero (300681) | more than 7 years ago | (#19609065)

"4. London - tube. No coverage. Whatsoever. The sole reason that our best beloved Mayor is a greedy c***. London tube refuses to put DAS or picocells because they want to give it exlcusively to a single operator and shave the profits. There is a ruling by the competition comission that this is not acceptable so the tube simply does not put any access in. Result - no access. 3G or no 3G."

Yeah, well, there is that and a cell phone is a great way to set off a bomb remotely. That's what happened in Spain and continues to happen in Iraq. Although being a cynic I think your reason is the real reason.

Re:That'll be AJAX (0)

Anonymous Coward | more than 7 years ago | (#19610823)

Yeah, well, there is that and a cell phone is a great way to set off a bomb remotely.

So it's better NOT to be able to call emergency if an old-fashioned clock-controlled bomb goes off? I guess...

Re:That'll be AJAX (1)

DaleGlass (1068434) | more than 7 years ago | (#19611157)

In Spain it wouldn't have helped, as IIRC, the train wasn't underground at the time, so getting reception there was quite possible.

As far as terrorism goes, this argument is bullshit. There are many train lines in Spain that aren't underground, and covering them would cost insane amounts of money. The other alternative would be shielding the train, but the doors have to open eventually.

Re:That'll be AJAX (0)

Anonymous Coward | more than 7 years ago | (#19610847)

It's not like there's enough room on the tube to whack out a laptop and start browsing during commuter hours anyway. There isn't during the 'quiet' times half the time.

Re:That'll be AJAX (0)

Anonymous Coward | more than 7 years ago | (#19610247)

Oh, I don't know. I don't like "We have bandwidth to waste, so let's waste it" arguments. Here in Mexico, I, like the majority of people, do not have internet at home. Yes, there is broadband for $30 (US) a month, but, first of all, that's a lot of money here (5% of what I earn a month, and I'm doing better than a lot of people), and second, there are problems with contacting high-traffic websites like Myspace with it. Most people access the internet by going to cyber cafes and paying by the hour--between 50 cents and a dollar (again, US). Anything that needs you to be constantly online--forget it! People get their pirated content not from P2P sites, but from buying it on the street (plenty of vendors, and I can't remember the law ever doing anything about it).

You can forget about Linux--it is only used by the occasional server. Everyone I know uses a pirated copy of Windows XP; I only know one person who has a computer that runs Vista.

The computers here range from Pentium 2s to Pentium 4 Celerons; I think 2 GHz is the fastest I have seen. The computer I'm on right now is a 1 GHz Celeron with 256 MB of RAM.

Video streaming with YouTube is very iffy; most cybercafes don't yet have the bandwidth to do that.

So, yes, not everyone lives in a world of non-stop broadband, and clueless webmasters should remember that not everyone has a T1-speed connection to the internet 24 hours a day.
 

Re:That'll be AJAX (1)

morgan_greywolf (835522) | more than 7 years ago | (#19606805)

Many people have been saying that Web 2.0 is an utterly wasteful way to do things. There's the proof. Now can we stop building Web 2.0 "applications", please?


That's ridiculous. Compare Google Maps to the old Mapquest (the current Mapquest uses AJAX). When you move in the map, you load only part of the page. The reason it's faster is that it doesn't reload the whole thing every time you move -- hence it uses less bandwidth (on average) than the old way of doing it. Sure, AJAX allows for preloading of content and some of that content won't be used, but how is this different from Web accelerators or other means of fancy proxying? (Hint: it's not, and it still uses less bandwidth because only parts of pages are being preloaded).

Still, if you want to prove it out, put a network performance monitoring tool up and compare AJAX-vs-non-AJAX versions of sites like Google Maps. I'm guessing you'll find AJAX to still consume less bandwidth on average.

Re:That'll be AJAX (1)

Aeiri (713218) | more than 7 years ago | (#19614007)

Google Maps, Mapquest, and many other tools do not use AJAX. They use Javascript, but not AJAX.

AJAX is "Asynchronous Javascript and XML". For Mapquest and Google Maps, they set "src" attributes through Javascript. No XML is used, and no asynchronicity is used. Even if you search, it submits a form through POST.

It's DHTML, and not an accurate representation whatsoever of AJAX. The data isn't stored in Javascript, but in the browser's cache.
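For illustration only (this is not a claim about what Google Maps actually does internally), here is the distinction the parent is drawing, sketched as browser-side TypeScript: (a) plain DHTML-style src swapping, versus (b) an asynchronous XMLHttpRequest whose response is handled in script. The URLs are placeholders:

```typescript
// Two different ways a page can fetch data without a full reload.
// URLs are placeholders; this is browser-side code.

// (a) "DHTML" style: point an <img> at a new tile URL and let the browser fetch it.
function loadTile(img: HTMLImageElement, x: number, y: number, zoom: number): void {
  img.src = `/tiles/${zoom}/${x}/${y}.png`; // plain HTTP GET issued by the browser
}

// (b) AJAX proper: an asynchronous XMLHttpRequest whose response lands in script.
function searchPlaces(query: string, onResult: (names: string[]) => void): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", `/search?q=${encodeURIComponent(query)}`, true); // async request
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4 && xhr.status === 200) {
      onResult(JSON.parse(xhr.responseText)); // data handled in script, not the DOM
    }
  };
  xhr.send();
}
```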

Re:That'll be AJAX (1)

myspys (204685) | more than 7 years ago | (#19606857)

OR it could be this new thing called "video", you know.. youtube

Re:That'll be AJAX (0)

Anonymous Coward | more than 7 years ago | (#19607957)

Now can we stop building Web 2.0 "applications", please?

it must suck to have dial up.

Re:That'll be AJAX (1)

moderatorrater (1095745) | more than 7 years ago | (#19609195)

I love the "+5 interesting, expresses an unsubstantiated opinion with little or no hard evidence but hell, I agree with him" mod there.

Re:That'll be AJAX (2, Insightful)

Ephemeriis (315124) | more than 7 years ago | (#19609393)

AJAX actually allows you to, if you want, transfer less data. Gmail, for example, does not need to transmit an entire new page every time I open up a new email message...it just displays the contents of that message. Sure, caches and proxies and all that good stuff can reduce the actual amount of traffic generated by a full-page refresh...but it's still a full-page refresh, you're still requesting a redraw of every single picture and every bit of text - rather than just asking to redraw a small portion.

The reason AJAX is indirectly responsible for an increase in HTTP traffic is because the web is becoming more useful. Gmail and other AJAXy webmail systems are just about as responsive and feature-rich as their traditional desktop counterparts. Message boards and chat systems are becoming more useful and responsive, replacing Usenet and IM clients in places. We've got web-based word processors, spreadsheets, and paint programs. All sorts of stuff is going online now, just because we can. It isn't that AJAX is wasteful, it's that AJAX lets us do so much useful/fun stuff.

Seems more along the lines of what one would think (1)

CaptainPatent (1087643) | more than 7 years ago | (#19606583)

P2P (while actually a mix of several types of protocols) is by default 1000 - 1000000 times as bulky as most HTTP transfers are (unless you're downloading files off an HTTP file server). Most of the time, though, HTTP is just text and pics. I think the article is just reaffirming what /. users already knew.

Nitpicking (4, Informative)

TorKlingberg (599697) | more than 7 years ago | (#19606587)

P2P is not one protocol, but many. Some P2P systems, such as Gnutella, even use HTTP for file transfers.

Re:Nitpicking (1, Troll)

Goaway (82658) | more than 7 years ago | (#19607431)

Thank you. We are all delighted to see that you are so smart.

That means NOTHING (1, Informative)

Anonymous Coward | more than 7 years ago | (#19606655)

P2P (which is a class of applications, not a specific protocol) was created to deal with huge files. Of course it will generate a lot of traffic. Duh!

Apple remains dominant Orange (1)

MosesJones (55544) | more than 7 years ago | (#19606685)

HTTP is a protocol; P2P is a classification of applications, some of which use the HTTP protocol as a transport layer.

Comparing the two is as pointless as comparing RealPlayer with TCP/IP. P2P is used to shift big binary files around, HTTP to shift small TEXT files.

Firehose has actually made the quality of stories go down!

Re:Apple remains dominant Orange (1)

rockandrolldoctor (978909) | more than 7 years ago | (#19606763)

Having used Ellacoya's products I can offer first-hand knowledge of what they are talking about. Their hardware uses DPI (Deep Packet Inspection) to look for application signatures, not TCP/IP port assignments.

Packet shaping (or whatever the current buzzword is today) is accomplished by DPI looking at the application signature and rate-limiting on that criterion; it does not care what TCP or UDP port the application is using.

Their product has very fine-grained reporting functionality and reports on groups of applications, not (just) port traffic.

I would tend to agree with Ellacoya's statement; they have no agenda towards the type of traffic moved, unlike Oversi, which manufactures a P2P caching product.

Just my $ .02
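To make the DPI idea concrete, here is a toy sketch of classifying a flow by its first payload bytes rather than by port number. The patterns are simplified illustrations, not any vendor's real signature set:

```typescript
// Toy illustration of signature-based (DPI) classification versus port-based.
// The patterns are simplified illustrations, not a real signature database.
function classifyByPayload(firstBytes: Buffer): string {
  const text = firstBytes.toString("latin1");
  if (/^(GET|POST|HEAD) .* HTTP\/1\.[01]\r\n/.test(text)) return "http";
  if (text.startsWith("\x13BitTorrent protocol")) return "bittorrent"; // BT handshake prefix
  if (/^GNUTELLA CONNECT\//.test(text)) return "gnutella";
  return "unknown";
}

function classifyByPort(dstPort: number): string {
  return dstPort === 80 ? "http" : "unknown"; // naive: trusts the port number
}

// A BitTorrent handshake sent to port 80 fools the port check but not the payload check.
const handshake = Buffer.from("\x13BitTorrent protocol" + "\0".repeat(8), "latin1");
console.log(classifyByPort(80), classifyByPayload(handshake)); // "http" "bittorrent"
```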
   

Re:Apple remains dominant Orange (0)

Anonymous Coward | more than 7 years ago | (#19606797)

this is just plain stupid.

Re:Apple remains dominant Orange (1)

Ziwcam (766621) | more than 7 years ago | (#19607047)

I thought Apple was a computer company, not a mobile phone company [orange.co.uk] ...

Re:Apple remains dominant Orange (1)

Goaway (82658) | more than 7 years ago | (#19607461)

Meanwhile, everybody who is not an anal-retentive nerd understood the meaning perfectly.

So..... (-1)

Anonymous Coward | more than 7 years ago | (#19606723)

Wouldn't that be more like P2PP? Which is only a step away from PP2PP. And that's just got to be illegal in Kansas.

When TOR and Freenet unite in p2p... (2, Interesting)

barwasp (1116567) | more than 7 years ago | (#19606731)

An encrypted and anonymous distributed P2P protocol will dominate forever, and anti-pirates will be assimilated.

Re:When TOR and Freenet unite in p2p... (1)

giorgiofr (887762) | more than 7 years ago | (#19606791)

You might want to start fixing that pesky abysmal latency and its friend, horrendously slow transfer rate; then we can talk.

Re:When TOR and Freenet unite in p2p... (1)

barwasp (1116567) | more than 7 years ago | (#19607043)

You might want to start fixing that pesky abysmal latency and its friend, horrendously slow transfer rate; then we can talk.
Fibre optics and hard disk makers will take care of the speed and volume. Think: you probably now have 6 times more HD space and 3X the connection speed compared to what you had 5 years ago. In 2012 your system will again have 6X more space and 3X the network speed compared to today.

The number of anonymizing hops used in Tor / Freenet does not need to be increased. So, time is clearly on the pirates' side.

Re:When TOR and Freenet unite in p2p... (2, Insightful)

graphicsguy (710710) | more than 7 years ago | (#19609469)

That seems totally incorrect to me. If anonymizing makes things k times slower with current disk/network speeds, it will still make things k times slower when disks/networks are faster.

Re:When TOR and Freenet unite in p2p... (1)

beyondkaoru (1008447) | more than 7 years ago | (#19611337)

yeah... i think mr. barwasp is off his rocker. "tor and freenet unite..." just doesn't really make sense. unless he means he wants to design a new protocol where you onion route to a friend-to-friend network...which makes little sense anyway. i'll throw in my two cents:

issues with tor:

there are only a few hundred servers donating time, many of which are desktops, not real servers, and they have to accommodate a lot of load.

when your tor daemon sets up a route (selects three tor servers to hop through), it selects them (almost) at random--which means that you will have to deal with connections that bounce from continent to continent, which are relatively slow. you might bounce off the uk, to china, then to germany; it's fun to get google in different languages, but it is slow.

other p2p thingies will try to maintain connections with people who are fast with respect to you. tor doesn't, partly to maintain security/anonymity. it's just kind of a problem with onion routing in general.

tor is going to get faster if more people donate server usage and people build faster inter-continental lines. but, considering people right now mostly use tor when they want to do something that requires anonymity i think the wait would be worth it.

i don't know as much about freenet, but in general overlay networks like freenet, which are distributed among peers, improve with scale.

i'm starting to write my own kind of anon network, but i can't imagine either it or freenet getting serious speed until everyone and his dog are using it--in which case it'd be reasonably fast, though by nature still slower than the internet.

Re:When TOR and Freenet unite in p2p... (1)

barwasp (1116567) | more than 7 years ago | (#19616031)

Yes, I expect a new protocol that puts together the best parts of the past 10 years of P2P technologies. Especially the best parts from Kazaa, Freenet, Tor and BitTorrent.

+ Kazaa had an efficient algorithm for getting the file-chunks from various locations; it also had a decentralized packet quality voting system and an integrated search engine with specialized super-nodes.
- Kazaa had no anonymity or encryption. Sharing files with other Kazaa users was voluntary and traceable, thus the fear among users limited the willingness to share.

+ Freenet is a well-anonymized and encrypted file storage. It also maintains the hosted files and protects their existence by duplicating the file-chunks when needed and until needed. Nobody knows exactly what files are on any single user's computer; those encrypted file-chunks just came from somewhere, and the user never knows, and doesn't need to worry, about those files.
- Freenet's internal search engine is underdeveloped. Freenet's inability to search, rate and exchange comments about stored files is preventing it from reaching true popularity among file sharers.

+ The Tor network is good at protecting the identity of surfers.
- The Tor onion router system doesn't have file storage. Tor's potential to search, rate and comment on files within P2P systems goes to waste, because it is not part of any P2P file search engine.

+ BitTorrent's torrent files allow dividing the media files into natural parts, e.g. into individual songs. Tiny torrent files can also be semi-legally distributed, searched, commented on and rated. BitTorrent also promotes file sharing among users by limiting user bandwidth according to past up- and downloads.
- The BitTorrent network is neither encrypted nor onion routed, so those who share can be traced and even the downloaders can be located.

What I expect the future P2P system (SYSTEM) to be like:
(This is for the guy in patent office)

1) A P2P file storage and distribution SYSTEM that contains an encrypted file storage much like the one used in the Freenet project

2) SYSTEM contains a search engine, which can be used so that the search queries are transmitted to and from the SYSTEM's distributed search engine, through an encrypting onion router network.

3) The SYSTEM's onion router network can also be used for publishing comments and ratings about the files that are maintained within the SYSTEM. The SYSTEM can be configured to store the ratings and comments about the files within the SYSTEM.

4) An individual user's influence in rating the files maintained by the SYSTEM can depend on, for example:
    4a) The amount of the SYSTEM's data that this user has allowed others to download from his computer unit, in comparison to the amount of the SYSTEM's data that this user has downloaded from other SYSTEM users' computer units
    4b) The difference between the user's rating of a certain file and the ratings that other SYSTEM users give to that file. Thus each user's influence in rating the files can be influenced by that user's past actions within the SYSTEM (compare to Slashdot's karma factor)
    4c) The ratings that other users give to the files that this user has been inserting into the SYSTEM

5) An individual user's allowed download quality and/or quantity from the SYSTEM can be regulated by the user's 'karma' factor, which can depend on, for example:
    5a) The amount of the SYSTEM's data that this user has allowed others to download from his computer unit, in comparison to the amount of the SYSTEM's data that this user has downloaded from other SYSTEM users' computer units
    5b) The difference between the user's rating of a certain file and the ratings that other SYSTEM users give to that file. Thus each user's influence in rating the files can be influenced by that user's past actions within the SYSTEM (compare to Slashdot's karma factor)
    5c) The ratings that other users give to the files that this user has been inserting into the SYSTEM

6) The SYSTEM may be configured to store each user's current 'karma-rating', and possibly also the 'karma-ratings' influenced by ratings or measurements of this user's actions as a SYSTEM user. For example, suppose this user uploads into the SYSTEM a file that contains e.g. 10 gigabytes of random data, and later other users rate that 10 gigabyte file as bad or useless. Then, when the original uploader logs into the SYSTEM, the SYSTEM observes the bad ratings that have been given to that 10 GB file and penalises the original uploader, by reducing his 'karma-points' and e.g. limiting his further ability to upload data into the SYSTEM.

7) When individual users log into the SYSTEM, the users can be recognized using various cryptographic methods and algorithms, e.g. by using PKI and hash algorithms

8) The files that are inserted into the SYSTEM can also be encrypted and signed using cryptographic algorithms

9) The quality of files within the SYSTEM can be controlled by the SYSTEM using various hash algorithms and possibly so-called hash-trees (see the hash-tree sketch after this comment)

10) The SYSTEM can try to optimise the number of file copies that exist within the SYSTEM
    10a) The SYSTEM may be configured to register the number of times that a certain file is downloaded from the SYSTEM; thus the more demand there is for a certain file, the more copies of the file the SYSTEM produces
    10b) The SYSTEM may be configured to observe the number of file-chunks that are available for each file; thus if a certain file-entity became endangered due to a low number of copies of certain file-chunks, the SYSTEM could prioritise duplicating such bottleneck packets within the SYSTEM.
    10c) The SYSTEM may be configured to make the need to renew files and file-fragments dependent on the files' (download and/or search topic) popularity and their age. For example, if a certain file has not been downloaded, searched for, or rated in a month, the SYSTEM may down-prioritise the need to make and/or maintain copies of that file.

11) The files within the SYSTEM can be accessed as a whole or divided into their natural parts. For example, if a certain uploaded file consists of 10 individual files and a SYSTEM user only wants the fifth of those individual files, then the user can obtain the information required for fetching that fifth file using a 'descriptor file', much like the so-called '.torrent' files within the BitTorrent system.

So, that is what I mean by uniting the Tor (search, rating, commenting) access with Freenet (anonymous, encrypted, distributed, auto-duplication and file maintenance) in P2P (user and file quality ratings, internal search engines, dividing the media files). Hopefully the guy at the patent office also understands this description of this system, method and device arrangement.
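Point 9 above mentions hash algorithms and hash-trees for guarding file quality. A minimal Merkle-tree sketch in TypeScript (names and chunking invented for illustration; real systems also fix chunk sizes, padding rules and tree layout), showing how a node that only knows a published root hash can detect a tampered chunk set:

```typescript
import { createHash } from "node:crypto";

const sha256 = (data: Buffer): Buffer => createHash("sha256").update(data).digest();

// Build the Merkle root over a file's chunk hashes (a minimal sketch).
function merkleRoot(chunkHashes: Buffer[]): Buffer {
  let level = chunkHashes;
  while (level.length > 1) {
    const next: Buffer[] = [];
    for (let i = 0; i < level.length; i += 2) {
      const right = level[i + 1] ?? level[i]; // duplicate the odd node out
      next.push(sha256(Buffer.concat([level[i], right])));
    }
    level = next;
  }
  return level[0];
}

// A node that only knows the published root can detect that a set of chunks
// has been tampered with, without trusting whoever sent them.
const chunks = ["chunk-0", "chunk-1", "chunk-2"].map((s) => Buffer.from(s));
const root = merkleRoot(chunks.map((c) => sha256(c)));

const tampered = [...chunks];
tampered[1] = Buffer.from("random noise in the middle of a song");
console.log(root.equals(merkleRoot(tampered.map((c) => sha256(c))))); // false: tampering detected
```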

Re:When TOR and Freenet unite in p2p... (1)

beyondkaoru (1008447) | more than 7 years ago | (#19616201)

ah, yes, the thing i am planning is similar to what you describe, in that it involves many ideas that anonymizing networks like tor or freenet only implement a few of. it's called banana (there was a project on sourceforge i and a friend started with the same name, but it sat dormant for almost a year, and i've started from scratch in my free time about a month ago, and _lots_ to do, and i'm basically abandoning the piece of crap i left on sf.)

however, i disagree on some points you had; personally, i think it should be entirely pseudonymous -- in other words, there would be no 'logging in', unless one is logging into a server reached over the network (which, incidentally, would be anonymous).

other things that would be cool: clouds, using kademlia (dht) with 512 bits to find nodes adjacent to a server followed by distance-vector routing to actually get to it, lots of hash verification... lots of ideas here.

one thing about it though, is that keeping it decentralized is important; no one place to shut it down. the system, therefore, can't reliably do a karma system on the quality of 'uploads'; this would have to be decentralized. quite frankly, i've got enough on my inchoate coding plate to do before attempting karma. karma can be done by servers on the overlay network. similar issue with a search engine. let things be indexed by hashes, not human-readable names. if it's human readable, then it's not as easy to do and isn't indexable in the same way.

the purpose of my overlay network would be to supply all the usefulness of a (second) internet (tcp-like connections) as well as freenet / bittorrent swarms. freenet is a big dht, which is fine for small files but not so good for larger ones. i think that onion routing isn't useful in banana, but similar ideas there.

so, take 2,3,4,5,6,10 out (and the 'log in' part of 7), and you're close to my idea, except that i add in the useful feature of tor's hidden services; let people run servers that are accessible over the overlay network which are literally just like having a host on the usual internet.

basically, i think your desires are too p2p oriented; you want something like napster, while i want something like the internet in general + arbitrarily hosted data. someone could build a napster on top of banana, but that's too focused. incidentally, if you want to learn more about distributed search engines, you should check out yacy:

http://www.yacy.net/yacy/ [yacy.net]

Re:When TOR and Freenet unite in p2p... (1)

barwasp (1116567) | more than 7 years ago | (#19619915)

Very good points,
Actually, I got a feeling that we are both describing the same system. Yes, from slightly different angles, but it is still the same system.

Why do I think logging in and karma-points are needed?
1) All P2P- systems have quickly found enemies, who look to sabotage the system and ruin the user experience of the system. For example vandals have been
  • Feeding the P2P systems with bogus-files, for example music files with random noise in the middle of a song
  • Rating their bogus files to make them appear top-quality
  • Vandals have used hired contractors for inserting faulty packets in some P2P systems with weaker checksums


2) If the system allowed uploads without karma-points, the obvious method of choice for the vandals would be filling the system with bogus files
  • Think, their goal is just to eliminate the usability of that system.
  • They would also be likely to build scripts/ programs that would rate their bogus-files as being top quality

other things that would be cool: clouds, using kademlia (dht) with 512 bits to find nodes adjacent to a server followed by distance-vector routing to actually get to it, lots of hash verification... lots of ideas here.

True, for the purpose of making a speedy 'distributed internet', a kademlia-like system could be required.

one thing about it though, is that keeping it decentralized is important; no one place to shut it down.
Definitely true,

the system, therefore, can't reliably do a karma system on the quality of 'uploads'; this would have to be decentralized.

I disagree; karma-points are needed (see 1.) and they can be implemented using registry-keeping 'Super-nodes'. Basically the system would select some of the nodes (with good uptime and upload records) to become 'Super-nodes' that would no longer be distributing the data, but focus on answering registry queries.
The Super-node issue both simplifies and complicates the construction, but it certainly makes it more efficient and reliable. Importantly, it inserts a mechanism that is like the Internet's "root nameserver" system. So if kademlia provides a service like basic DNS routing for the data, the 'Super-nodes' hold (at least) the registry of the karma quality of the users. Naturally, the 'Super-nodes' would need to compare and duplicate their registry entries so that the system would survive. Another point about 'Super-node' survivability is that queries to them should be routed via selected, best, most trusted nodes ('Mid-nodes') among the 'Common-nodes'.
I believe this kind of a (three-step) hierarchical node system would be more efficient than a system using less differentiated nodes. So the tasks for each node-level would be
Super-nodes
  • user-karma-registry
  • maintain registry over Mid-nodes, and (hash-name range) of files they host
  • update, duplicate and exchange registries among similar Super-nodes
  • answer user queries mediated by Mid-nodes
Mid-nodes
  • maintain quality ratings (a file's own karma) over files
  • maintain comments about files
  • form together a search engine for files, that 'Common-nodes' can use
  • mediate info about user's karma and the level of service that that user can have
  • mediate information to Common-nodes about the whereabouts of Mid-nodes that know the computers having the file chunks that that Common-node wants
  • update, duplicate and exchange these registries among similar Mid-nodes that deal within the same (hash-name range) of files they host
  • agree with other Mid-nodes on an onion-routed route for up- and downloading the file-chunks
Common-nodes
  • act as file storage; they report their hosted file-chunks only to their parent 'Mid-node'. The receiving Mid-node then announces these chunks to the Mid-nodes that host files within that hash-name range
  • Before a 'Common-node' is allowed to upload material to the system, the user's quality must be evaluated by 'Super-nodes'. So, the 'Common-node' sends a log-in request to (its) 'Mid-node', which sends it to its 'Super-node', which gives back a random string to be hashed by that 'Common-node'. The logging-in 'Common-node' hashes the string, which is then returned to the 'Super-node'. The 'Super-node' then sends the local 'Mid-node' info about that user's karma quality. Naturally the authentication would be better with PKI signing. (A minimal sketch of this challenge-response log-in step follows after the node hierarchy below.) Thus the 'Common-node' knows that it must communicate with the System only via its 'Mid-node'. For privacy reasons, it is wise to onion-route queries that a certain 'Common-node' sends to the system. Onion routing should probably take place among 'Mid-nodes'
The number of nodes at each hierarchical level could be determined by the workload experienced by the Mid- and Super-nodes. If the workload requires it, a node at a higher hierarchical level could promote a lower-level node to share the workload. Thus...
  • number of 'common-nodes' could be 20-1000X number of 'Mid-nodes'
  • number of 'Mid-nodes' could be 20-1000X number of 'Super-nodes'
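A minimal sketch of the log-in step described in the Common-nodes bullet above. The original only says "hash the random string"; mixing in a per-user secret is an assumption added here to make the response hard to forge (the post itself suggests PKI signing as the better option):

```typescript
import { createHash, randomBytes } from "node:crypto";

// Minimal challenge-response sketch of the Super-node log-in step described above.
// The shared per-user secret is an assumption; the original post leaves it unspecified.
const hash = (s: string): string => createHash("sha256").update(s).digest("hex");

// Super-node side: issue a fresh random challenge for each pending login.
function issueChallenge(): string {
  return randomBytes(16).toString("hex");
}

// Common-node side: prove knowledge of its secret without ever sending it.
function answerChallenge(challenge: string, userSecret: string): string {
  return hash(`${challenge}:${userSecret}`);
}

// Super-node side: recompute and compare, then look up the user's karma record.
function verify(challenge: string, response: string, storedSecret: string): boolean {
  return response === hash(`${challenge}:${storedSecret}`);
}

const secret = "per-user-secret";   // hypothetical shared secret
const challenge = issueChallenge();
console.log(verify(challenge, answerChallenge(challenge, secret), secret)); // true
```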

Purpose of such P2P distributed Internet
basically, i think your desires are too p2p oriented; you want something like napster, while i want something like the internet in general + arbitrarily hosted data.
Surprise, surprise... I am also looking towards a general 'alternative Internet', not a Napster. Music and filmmakers need to get paid too, and paedophiles, mobsters, defamers, terrorists and other criminals don't need absolute sanctuaries. In fact, in this system that I have been sketching there is the possibility of putting one's own karma-points toward increasing or decreasing the karma-points that individual files have. In case a certain file's karma-points go low enough, the system would no longer find/duplicate/maintain/distribute it. In addition, a certain file could be rated 'illegal' or a 'copyright violation', in addition to being just bad. The system could also store the hash-values of those previously deleted 'illegal' files, and the next time someone tried to insert that same 'illegal' file into the system, the system...
  • might transmit the illegal file (e.g. child pornography) to Interpol together with real (non-Torred) IP-details.
  • might give the user a 'first-time warning' - for copyright violations
  • (might also give the user a 'virus warning' - if user is inserting viruses into the system)
  • And next time might transmit copyright violating file to Interpol together with user's real IP-details.

So, what I am looking for is a free anonymous publishing platform, where
  • Content providers can upload their (TV, video, events, conference, God's service, lecture) data
  • anyone can back up their websites in case their normal servers fail / crash or are DDOSed / .ed etc.
  • Political opposition can reach their audience and broadcast, even multicast, anonymously
  • A system that circumvents the local firewalls and censorship
  • access is always available, for example to, human-rights and environmental workers
  • suppressed opinions can be published anonymously online
  • Normal non-encrypted internet web-services can be accessed anonymously (like using Tor)
  • through which anonymous communication could be facilitated; these karma-points could also be used as tokens for e-mailing - obviously spammers would run out of karma pretty soon

So, what do you think, are we barking up the same tree?


(Hopefully the patent office guy did read this one too.)

Re:When TOR and Freenet unite in p2p... (1)

beyondkaoru (1008447) | more than 7 years ago | (#19622577)

i disagree; i believe we are describing very different things.

you describe something like kazaa, where people can search for files by their human-readable name on some search mechanism built into the system. this system would have problems, as you describe, with vandals "Feeding the P2P systems with bogus-files, for example music files with random noise in the middle of a song". so they'll inaccurately give something a name it shouldn't have. this is a problem with all things where we have to translate from human readable names (such as domain names) to machine routable names (such as ip addresses). personally i believe that this doesn't belong in it.

having a rating system is difficult on a global scale since, as you said, "They would also be likely to build scripts/ programs that would rate their bogus-files as being top quality". having centralized control or centralized rating is potentially bad, since the people rating stuff aren't necessarily good at doing so, even if they are not malicious. if you want to look at, say, torrent sites, we realize that we are more likely to get quality from the more focused sites; you want anime? go to boxtorrents. stuff like that. the people at box are determined to maintain quality and will generally not link to stuff that isn't. a global rating system will be automated (it'd have to be distributed), so it can't do the same thing as easily; we will have similar problems as occasionally happen with /.'s rating of posts. they're usually good, but often incorrectly moderated.

you mention a method of having supernodes, where nodes with good uptime and quality become like dns root servers.... alas, this is not reasonable. it's not hard to have good uptime or appear decent, and thus the root servers can collude against the rest. recall that the people who would be attacking the system could have MUCH more money and resources than us. so, they could pour money into building big iron servers, and the network would be so happy to have them there as supernodes, they would be able to manipulate things. here is an example of what might have been an attack on tor a while ago:

http://jadeserpent.i2p.tin0.de/tor-dc-nodes-2.txt [tin0.de]

so no, i don't think that that is a feasible system. you can look at mute as an example of something that addresses by human name and thus has lots of mistreatment. letting people run their own little systems of finding things addressed by hash (machine readable and not tamperable) is enough; let the web (which we'd also be running over this) handle the issues of people finding stuff. it works, and people can use their judgment. there would be karma in quality of files or sites, but it would be simply in human terms. freenet (theoretically) should get rid of useless files that nobody wants because people simply don't request them and thus they are not cached.

as far as reaching the ordinary internet goes, one could simply set up a proxy and people would route connections to it as usual.

incidentally, if you wish to continue the discussion, my contact info is on my little webpage (including pgp public key, i think). email is a bit more easily used for me than /. postings :)

This sounds like a poll waiting to happen (0)

Anonymous Coward | more than 7 years ago | (#19606735)

What do use your internet connection for the most?

- Surf the web
- Use P2P software
- E-mail
- Telecommuting
- Gaming ...
- CowboyNealing on the latest CowboyNeal of my CowboyNeal application using the CowboyNeal protocol

HTTP (1)

Peyna (14792) | more than 7 years ago | (#19606783)

Last week, a press release was issued by Ellacoya that suggested something quite startling -- HTTP (Hyper Text Transfer Protocol, aka Web traffic) had for the first time in four years overtaken P2P traffic.

Okay, so the very young Slashdotter that just popped out of his mother might not know what HTTP actually stands for, but I can't believe there are any Slashdotters who don't know what HTTP is.

Re:HTTP (1)

morgan_greywolf (835522) | more than 7 years ago | (#19606865)

Okay, so the very young Slashdotter that just popped out of his mother might not know what HTTP actually stands for, but I can't believe there are any Slashdotters who don't know what HTTP is.


Uhhhh...doesn't that have to do with this INTARWEB thingie? I think I've seen things like 'http:\\' before but I'm not sure where....

Re:HTTP (1)

PhxBlue (562201) | more than 7 years ago | (#19607721)

I'm not complaining. For once, the editors are following an established writing style [apstylebook.com] .

Re:HTTP (1)

Torodung (31985) | more than 7 years ago | (#19609035)

Okay, so the very young Slashdotter that just popped out of his mother might not know what HTTP actually stands for, but I can't believe there are any Slashdotters who don't know what HTTP is.
IMHO, the author of this article doesn't. I really doubt all that traffic is the result of HTTP 1.1 commands. HTTP, for instance, doesn't really support streaming video. The best you can do is grab 15 different animated gifs on a pipelined request.

I believe he's referring here to "port 80/TCP" traffic, which is a good deal different than "HTTP traffic." Port 80 is the most abused "well known port" in the business. It is assigned to be used as HTTP, but on the average client system it's used for just about anything.

--
Toro

If I was designing a P2P network today (4, Insightful)

Colin Smith (2679) | more than 7 years ago | (#19606787)

It'd be http based. Not for efficiency or any technical reason, but because it's the best camouflage.

 

Re:If I was designing a P2P network today (0)

Anonymous Coward | more than 7 years ago | (#19607453)

Well, thankfully for us, you won't be doing it, seeing how clueless you are.

Re:If I was designing a P2P network today (2, Funny)

Selfbain (624722) | more than 7 years ago | (#19607875)

Is that you Yoda?

Re:If I was designing a P2P network today (2, Informative)

diamondsw (685967) | more than 7 years ago | (#19609171)

It'd be http based. Not for efficiency or any technical reason, but because it's the best camouflage.

Welcome to layer 5-7 packet inspection on modern firewalls. You're screwed.

Re:If I was designing a P2P network today (1)

superanonman (1116871) | more than 7 years ago | (#19609527)

What if you Base-64 encode the data?

Re:If I was designing a P2P network today (1)

nephridium (928664) | more than 7 years ago | (#19610229)

What if you Base-64 encode the data?

Then your data would bloat up by about 35%. More if you were to add white spaces for further camouflage.
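The figure is easy to check: Base64 turns every 3 input bytes into 4 output characters, so the overhead is about 33% plus padding, close to the ~35% quoted above. A quick Node-style TypeScript check:

```typescript
// Base64 maps every 3 input bytes to 4 output characters: ~33% overhead plus padding.
const payload = Buffer.alloc(300_000, 0xaa);   // 300 kB of arbitrary bytes
const encoded = payload.toString("base64");
console.log(payload.length, encoded.length);    // 300000 400000
console.log(((encoded.length / payload.length) - 1) * 100, "% bigger"); // ~33.3
```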

Re:If I was designing a P2P network today (1, Interesting)

Anonymous Coward | more than 7 years ago | (#19610209)

Welcome to HTTPS. Your firewall's screwed.

Re:If I was designing a P2P network today (1)

psymastr (684406) | more than 7 years ago | (#19619021)

Duh, many P2P apps transfer data using HTTP.

conflict of interest (3, Informative)

Anonymous Coward | more than 7 years ago | (#19606799)

Ellacoya are well-known for selling routers optimised (and I use that word with the kind of looseness only Goatse man can convey) for bandwidth shaping, in particular for throttling P2P. PlusNet [plus.net] were one of the first ISPs in the UK to be hated for widespread deployment [atlasventure.com] of their kit.

Remember, a press release is almost always marketing; and this form of marketing is about getting people to purchase solutions for problems that don't quite exist as described. (Microsoft are good at this; Google are first rate.)

2 reasons (3, Insightful)

Opportunist (166417) | more than 7 years ago | (#19606945)

Youtube (and similar services) and trojans.

Both rely heavily on HTTP for data transfer. But then again, how do you measure that? By port? By header? What keeps me from running an HTTP server on port 21? Who dictates that I must not wrap a packet in an HTTP header so the corporate firewall doesn't get irate?

Generally, I doubt that you can reliably measure it, especially with P2P services soon implementing wrappers to get around anti-net-neutrality measures and the traffic shaping that the various ISPs either will implement soon or employ already.
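On the "HTTP server on port 21" point: nothing in the protocol ties HTTP to port 80, which is exactly why port-based accounting is unreliable. A minimal Node-style sketch (listening on 2121 here, since binding the real port 21 usually needs elevated privileges):

```typescript
import { createServer } from "node:http";

// Nothing stops an HTTP server from listening on a "wrong" port.
createServer((_req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("perfectly ordinary HTTP, on a port nobody expects\n");
}).listen(2121, () => console.log("HTTP listening on port 2121"));
```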

Re:2 reasons (2, Interesting)

garcia (6573) | more than 7 years ago | (#19607891)

Youtube (and similar services) and trojans.

Botnets [slashdot.org] mostly. They are continually hammering my site with 100s of hits in a few minutes and because they are from across the globe (mostly residential cable connections) I can't ban them fast enough.

I keep them mostly out with the Apache rules linked to above but they are still hammering me.

overlap? (1)

roedeer (127491) | more than 7 years ago | (#19607397)

So... If I set up a web server, and tell my friends to download my new web page, is that P2P or HTTP? By the way, as long as HTTP isn't multicast, wouldn't it classify as a peer-to-peer protocol?

Re:overlap? (1)

fnj (64210) | more than 7 years ago | (#19609021)

So... If i set up a web server, and tell my friends to download my new web page, is that p2p or http?
If it's a WEB server, it's HTTP. There is no "or".

By the way, as long as HTTP isn't multicast, wouldn't it classify as a peer-to-peer protocol?
No, it's a client-server architecture as opposed to peer-to-peer. The fundamental point here, as approximately one million commenters have already pointed out, is that http is a PROTOCOL; peer-to-peer is an ARCHITECTURE or CLASS OF APPLICATIONS.

It's P2P over HTTP, stupid (0)

Anonymous Coward | more than 7 years ago | (#19608133)

due to the large amount of traffic shaping slowing downloads to slower than dialup speeds.

A lot of people are starting to use P2P over HTTP to avoid it.

This is only going to get more common.

Is it really HTTP traffic? I don't think so. (1)

Torodung (31985) | more than 7 years ago | (#19608629)

I think we need to start differentiating between all the different kinds of apps that run over port 80, not because it's the right choice of port but because they're badly written, and whether or not a streaming movie (or application update) can any longer be properly described as "HTTP."

I am going to say "no." Many of these are apps in their own right that aren't really using HTTP for anything other than a handshake/init and should be doing their business over their own ports, especially all the streaming Flash video. 554/TCP anyone? What gives?

Anyone know the nuts and bolts of streaming Flash? That can't possibly be just a GET request, can it? If it's anything more, the traffic isn't really HTTP in my book. It's misused port 80. I know Windows Update isn't proper HTTP. That goes waaay beyond GET.

--
Toro

Re:Is it really HTTP traffic? I don't think so. (1)

netik (141046) | more than 7 years ago | (#19612097)

I got news for you, it's done over a simple GET request with buffering, and yes, it's simple HTTP.

When you scrub the video and move the pointer around, it just reissues the GET request with an offset, which is perfectly valid HTTP (and one way that HTTP supports resuming of downloads.)
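A minimal sketch of the "GET with an offset" idea (the URL is a placeholder; some Flash players pass the offset as a query parameter instead, but byte ranges are the standard HTTP way to resume or seek). Assumes a runtime with fetch, e.g. Node 18+ or a browser:

```typescript
// Minimal sketch of resuming/seeking over plain HTTP with a byte-range GET.
async function fetchFrom(url: string, offsetBytes: number): Promise<Uint8Array> {
  const res = await fetch(url, { headers: { Range: `bytes=${offsetBytes}-` } });
  // A range-aware server answers 206 Partial Content; one that ignores Range sends 200.
  if (res.status !== 206 && res.status !== 200) {
    throw new Error(`unexpected status ${res.status}`);
  }
  return new Uint8Array(await res.arrayBuffer());
}

// e.g. jump two megabytes into the stream after the user scrubs the player
fetchFrom("http://video.example.org/clip.flv", 2 * 1024 * 1024)
  .then((bytes) => console.log(`got ${bytes.length} bytes starting at the new offset`));
```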

Re:Is it really HTTP traffic? I don't think so. (1)

Torodung (31985) | more than 7 years ago | (#19612505)

I got news for you, it's done over a simple GET request with buffering...
Well, I'll be darned. I'll be salting my hat now for later consumption.

--
Toro

This is a new challenge for ISPs (1)

Distan (122159) | more than 7 years ago | (#19609427)

As most slashdotters know, there is often a mistaken impression that the "World Wide Web" and HTML equal the internet. Web browsers processing HTML are just one application that rides on the internet, and it wasn't even the first application that did so.

The importance of HTML was that it was the "killer app" that drove internet connections to people's homes. Naturally, the initial implementation of connectivity was tuned to HTML; particularly the standard implementation where bandwidth into the home far exceeds bandwidth out of the home, but also with regards to some ISPs blocking ports or shaping traffic.

As internet applications continue to evolve, it is only natural that the web will someday become only a tiny fraction of internet traffic.

The challenge for ISPs is to react to this, and provide the services that their customers need before their competitors do. Who knows, perhaps someday soon we will see a standard home connection where outbound bandwidth is 10x inbound bandwidth.

This is suspect too (1)

AJH16 (940784) | more than 7 years ago | (#19610411)

While it is true that the one research company may have had flaws in their study and/or other motivating factors, it can't be overlooked that the main source used in the article is a company that needs P2P to be the main part of Internet traffic for their business to work. So what they say would also be highly suspect. The last source referenced, which was more in line with the original work, is probably the most accurate, as they don't seem to have any bias.