
New Peer-to-Peer Designs

michael posted more than 13 years ago | from the pie-in-the-sky dept.

The Internet

We've received a lot of peer-to-peer submissions, including the one that follows and this one. Perhaps people will post links to those systems which they think have a decent chance of solving the known problems of p2p networks? PureFiction writes "Given the recent ruling against Napster and the various happenings at the O'Reilly P2P conference, this is a good time to mention a new type of decentralized searching network that can scale linearly and adapt to network conditions. This network is called the ALPINE Network and might be a future alternative to searching networks like Napster and Gnutella while remaining completely decentralized and responsive. You can find an overview of the system as well as a Frequently Asked Questions document on the site."


Re:Banner ad coincidence? (1)

jellicle (29746) | more than 13 years ago | (#430162)

We have no control over the ads. O'Reilly's ad probably is timed to coincide with their peer-to-peer conference, and the reason all these companies are unveiling their systems is also due to the conference, so there is some sort of correlation here, but it's not as if we chose to run a particular ad on a particular story.

Re:Not there yet.. (1)

Tiroth (95112) | more than 13 years ago | (#430163)

I think you are taking generalizations too far. Each maintained connection uses a measurable amount of bandwidth, say c. If your total capacity is B, then you will be able to maintain roughly B/c connections. Of course, you need to have enough bandwidth left over to actually do useful work, so your actual bandwidth will be decreased by some arbitrary constant k of your choosing: B' = B - k, leaving roughly B'/c connections.

Now, perhaps a T1 is a wide enough pipe for, say, 100,000 users. Maybe at some point the network will scale beyond this, and you'll need a T3, etc etc. The point is not to search the entire network, but to search a large enough segment of it to find what you are looking for.

If you are searching for something extremely rare (or nonexistent) and your bandwidth is small with respect to the scope of the network, you may be required to cycle your connections many times until you achieve hits. As intended, the network allows you to search at the maximum speed allowed by your bandwidth--but gives you the option of doing a long (but exhaustive) search regardless of whether you have a 14.4 or a gigabit connection.
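
A back-of-the-envelope version of that estimate in Python; the B, k, and c values below are illustrative assumptions, not measurements:

    # Rough connection-count estimate from the comment above.
    # B = total capacity, k = bandwidth reserved for useful work,
    # c = per-connection maintenance cost. All values are made up.
    B = 192_000   # bytes/sec, roughly a T1
    k = 64_000    # bytes/sec held back for actual transfers
    c = 1.2       # bytes/sec of keepalive traffic per connection

    B_prime = B - k                               # B' = B - k
    print(int(B_prime // c), "maintainable connections, roughly")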

You just described the Napster protocol. (1)

yerricde (125198) | more than 13 years ago | (#430164)

split the search tasks up into hierarchies. you search for N inside a given range. if the result can't be found within that range, you propagate the request up the tree

Which is exactly how Napster works. And look what happened to the company that hosts the biggest nap network [napster.com].


All your hallucinogen [pineight.com] are belong to us.

Re:Taking P2P Too Far (3)

PureFiction (10256) | more than 13 years ago | (#430165)

We meet again, ;)

Sending the same data to 10K hosts in separate packets not only doesn't scale, but it's an extremely antisocial abuse of the network

Funny, I thought web servers acted this way...

Even at 60 bytes per packet, if you're trying to send to 10000 nodes that's 600K. Then the replies start coming in - in clumps - further clogging that pipe.


If you find the reply you're looking for, then there is no need to query the remaining peers. Also, you will not clog the incoming pipe; I've covered this quite a bit: you control how many queries you send out and when, and also to which peers they are sent. The adaptive nature of the protocol ensures that successive queries will be more likely to find what they are looking for sooner.

You would only query 10,000 in a worst case scenario.

The traffic patterns ALPINE will generate are like nothing so much as a DDoS attack, with the query originator and their ISP as the victims.

No, each of these 'victims' would only receive a single 60-byte packet. This is the opposite of a DoS attack: you are sending a large number of packets, but each peer is only receiving one of them.

Those ideas, as stated e.g. by Omnifarious, are a little naive, but well-known technology in mesh routing and distributed broadcast can easily enough be applied to create and maintain self-organizing adaptive distributed broadcast trees (phew, that was a mouthful) for this purpose. Read the literature.


I understand what you're getting at, but you're missing the main purpose of this network. If you need to search a large number of peers for dynamic content in real time, you need to reach all of them to do it. Whether you do this using a tree/routing/forward approach, or a single peer using multiple unicast packets, you have to reach them to do it.

The design of this network is so that the resources you use are your own and that you can tailor the bandwidth, peers, and effectiveness of the search to your own preferences.

This is a highly specific network architecture with a very specific purpose using very small packets. This is why alpine can bend the common conceptions about scalability and performance and still remain efficient and scalable.
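
A minimal sketch, in Python, of the sort of iterative, stoppable unicast query loop being described in this comment; the peer-list shape, packet contents, and stop condition are my assumptions, not ALPINE's actual wire protocol:

    import socket, time

    def query(peers, payload, want=1, batch=50, wait=0.2):
        # peers: quality-ordered list of (ip, port); payload: a small (~60 byte)
        # query datagram. Send in small batches and stop as soon as we have
        # enough hits, so the worst case (querying everyone) is rarely reached.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(wait)
        hits = []
        for i in range(0, len(peers), batch):
            for addr in peers[i:i + batch]:
                sock.sendto(payload, addr)       # one packet per peer
            deadline = time.time() + wait
            while time.time() < deadline:
                try:
                    reply, addr = sock.recvfrom(1500)
                except socket.timeout:
                    break
                hits.append((addr, reply))
            if len(hits) >= want:                # found it: stop querying
                break
        return hits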

Re:Actual file transfer and anonymous usage though (1)

m1sd1rX (307121) | more than 13 years ago | (#430166)

It said on the site that it would be using UDP. I thought UDP connections were relatively unreliable, providing no error checking and other such services? Would that really make a good p2p network?

Actual file transfer and anonymous usage thoughts. (2)

Anemophilous Coward (312040) | more than 13 years ago | (#430167)

Hrm, from the FAQ:

What about actually getting the data? Is it transferred over DTCP as well?

ALPINE will be heavily dependent on alternate delivery systems to actually *transfer* the data located within the network. The entire ALPINE network is primarily used for information location. This is the big hole in most peer based networks, as it is probably one of the more complex tasks. Once a resource has been located, you may use OceanStore, Freenet, Swarmcast, FTP, etc., to actually retrieve the data. Trying to transfer anything of decent size over DTCP would be insane.

So it sounds like, initially, they require the user to utilize two different programs to achieve their goal: 1) ALPINE to *find* the data, 2) something else to get it. I think in order for this system to reach widespread use (especially in the Windoze community), these two functions need to be combined into one interface. Isn't that part of what made Napster proliferate: people who barely know how to turn on a computer can quickly find and download stuff from one program? Perhaps they will incorporate both 'features' into a final product... or did I miss that in the FAQ?

Secondly, doesn't this facilitate finding an end user's location? After finding the information, now I get to manually enter the IP address into FTP to connect and download. Does this not make it easier for a program to simply track down 'file X', log IP addresses to a file, and then resolve these IPs and hunt down the users? It would seem that in the early stages of the network's growth, it could be easily quashed by the corporate forces, as the number of users would be small and easy to track down/handle. OTOH, if it scales as easily as it says it does into the billions of connections... at that point it might become futile trying to track down and wrangle up everyone. Still, industries could start going after random individuals and will probably enact a new law dispensing severe penalties for those caught (probably from precedents set nowadays in the Napster case).

This is where having the same program find the files and transfer them could come in handy. Instead of ever presenting the final address, perhaps it could transfer this data amongst the network in an encrypted fashion. Then when the user sees a match has been found for the data/file being searched, he/she tells the program to get it. Keeping the addressing route encrypted within itself should help with the issue of anonymous usage (I think this was mentioned earlier already as well).

Interesting system concept anyhow (what with the multiplexing schema).

Not your normal AC.

Re:Taking P2P Too Far (2)

Rader (40041) | more than 13 years ago | (#430168)

No, you only keep track of this information for the peers you are currently connected to..

Oh, OK. But that means I start all over again with the "adaptive" process each time I 'log on'. Probably OK, since statistically I'd be getting a different group each time. (Ever notice how the people in your Napster Hotlist were never on again?)

I don't see your point. Each of the 10,000 MEs would have their own ISP, and would use their own bandwidth.

How do the 10,000 others query him without getting to him? They have to get to his computer somehow. Thus...10,000 searches (at a time) going through the 1 client's bandwidth. (replace 10,000 with whatever number we're working with here).

Yes, I've seen the lights on when I was on Napster. But all the searching was one-directional --> to Napster's server. You're bypassing that now. So that means more bandwidth coming to me.

Rader

Re:Not there yet.. (2)

roman_mir (125474) | more than 13 years ago | (#430170)

See my post #130. I talk about a distributed index database that allows you to do searches with only one single search query. My example there is simplified, but that is what I am talking about: revise the entire searching strategy. No matter how good your protocol is, it still inherits all the bad points of the worst-scaling existing distributed protocol: Gnutella. You see, no matter how well you narrow down your query paths, you are still left with exponential growth problems. Don't compare Alpine to Napster; these are very different. Napster is centralized - one query per search. Alpine is distributed - send as many search queries as it takes until you find something. This does not solve the scalability problems, it just postpones the moment at which all networks go down under Gnutella or Alpine pressure.
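
One way to read the single-query idea, sketched in Python: partition an inverted index across nodes by hashing the search term, so publisher and searcher both contact the same node directly instead of flooding the network. The partitioning scheme here is my illustration, not the contents of the referenced post:

    import hashlib

    NODES = [f"node{i}" for i in range(64)]   # hypothetical index-holding nodes

    def index_node(term):
        # Everyone agrees on this mapping, so whoever shares "freebird.mp3"
        # registers with the same node the searcher later asks -- one query.
        h = int(hashlib.sha1(term.lower().encode()).hexdigest(), 16)
        return NODES[h % len(NODES)]

    # publish: tell index_node("freebird") which host has the file
    # search:  ask  index_node("freebird") for its host list
    print(index_node("freebird"))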

Amazing new P2P Idea, our problems are solved (3)

smutt (35184) | more than 13 years ago | (#430173)

I came across this amazing P2P system the other day that completely blew my mind. It scales well and can handle any kind of file type. It has mature clients for all major platforms, including Linux/Solaris/IRIX/SCO/AIX/BSD/Windows, even Amiga. It's so powerful it even includes a meta-search tool for searching for P2P servers.
It's called Archie, and the meta-search tool is called Veronica. You should try it out, it's amazing.

P2P = Push (1)

bmarklein (24314) | more than 13 years ago | (#430175)

Well said. Here's my pet peeve about the P2P hype - companies that don't really fit the definition of P2P riding the bandwagon in order to get coverage in the industry press, for example distributed computing companies like Popular Power riding the P2P bandwagon [oreilly.com]. What the hell is "P2P" about distributed.net/SETI@home-style distributed computing? Peers never communicate; the only communication is with a central server.

I think these companies are soon going to experience "Marimba Syndrome" - if you recall, Marimba rode the push wave of '97 until it became painfully apparent that the push emperor had no clothes, then they tried to distance themselves from it as quickly as possible and never really recovered.

Re:Not there yet.. (2)

BeBoxer (14448) | more than 13 years ago | (#430176)

The problem isn't how many queries you send out. It's how many you get. If people can query everybody in the network, it isn't going to scale. People with high-bandwidth connections will send out tons of queries because they can, and kill the clients on low-bandwidth connections. You can claim that eventually the folks on low-bandwidth connections will stop getting hits, but in the meantime they are DOA. And if new clients are coming up all the time and searching them, they'll probably never have a usable connection.

Not to mention the problem of having the searcher send out an individual query to every client it wants to search. If I understand this correctly, if I want to search 3,000 hosts I have to send out 3,000 otherwise identical packets. This is not what is known as a scalable protocol. In fact, from a network point of view, it's a worst-case scenario.

Problems ahead for a Windows client (3)

Jim McCoy (3961) | more than 13 years ago | (#430177)

While it is probably not very important to the people reading this, there be dragons ahead for this project that I do not think the implementor is aware of. We implemented a system very much like this for Mojo Nation [mojonation.net] to achieve the swarm distribution (parallel downloads) which is one of the key features of our technology. Windows does not like to hold lots of open connections; you quickly eat up local resources and run out of file descriptors. It works like a charm under Linux and other "real" operating systems, but backporting this to make it available to the un-enlightened will be a very, very unpleasant task for whoever tries to actually implement this. jim

peer to peer will always survive (3)

deft (253558) | more than 13 years ago | (#430178)

and it will do so because this community just will NOT take no for an answer... there are too many bright minds out there. I'm personally interested in the guys over at www.musiccity.com in league with Napigator.

The main problem legally with Napster is that there is a central server. That problem is being solved by having multiple and/or moving servers. This makes it much harder to levy a lawsuit against anyone.

We all know napster works, but it's illegal (or will be soon). Warez is illegal, but it will never go away because you just can't prosecute.

I'm blind (1)

luqin (3559) | more than 13 years ago | (#430179)

ignore that last post please.

---

P2P - not so great, but.... (1)

inditek (150002) | more than 13 years ago | (#430183)

obviously most of us are smart enough to realize that "p2p" and the "2-way web" are just a rephrasing of the ideals, and a regrouping of the technology, that the internet is based on. for those of us that already know what to do, we just see a regrouping and proprietizing of services and a deviation from standards.

but then again, the web browser (and things spawned from it) is the interface Joe Blow knows well. *sure* he could run Apache, use an FTP client, use Gopher or WAIS, fiddle through IRC and newsgroups... but all that came and went, arguably, when Netscape made the web browser big.

my dad, a 48 year old man, doesn't like to juggle a different app for every service. however, my dad could easily be a non-technophile entrepreneur or a small business owner or an engineer of some sort running some sort of over-net collaboration...

p2p is "amazing" to these people because it funnels all these other "mysterious" services into one window that they're willing to pay attention to.

p2p == buzzwords. crap. silly. etc. but, then again... lots of ideas are recycled. very few things are "revolutionary" or "insanely great."

Re:Color me stupid, but (1)

cmat (152027) | more than 13 years ago | (#430186)

Good idea, and it would work... per se, as the one weak link in your idea is the fact that the "index" or search engine is still centralized (i.e. Google). =)

But you're on the right track. The difficulty is building a client that acts as a server too, while also being able to perform a distributed search of other clients.

Cheers,
Chris

Re:P2P - not so great, but.... (1)

inditek (150002) | more than 13 years ago | (#430188)

and my grammar, as well as my proofreading, sucks when I'm in a rush, sorry.

Re:Actual file transfer and anonymous usage though (2)

PureFiction (10256) | more than 13 years ago | (#430190)

I think in order for this system to reach widespread use (especially in the Windoze community), these two functions need to be combined into one interface.

You are correct, and they are combined. Right now a simple TCP transfer ala FTP/HTTP will be used, with additional transfer types provided using pluggable modules.

Secondly, doesn't this facilitate in finding an end users location? After finding the information, now I get to manually enter the IP address into FTP to connect and download. Does this not make it easier for a program to simply track down 'file X', log IP addresses to file and then resolve these IP's and hunt down the users?

Only if the reference you provide for the content is on your machine. You may simply provide a Freenet key, and the user can then obtain the file anonymously using Freenet. You may provide an FTP location on some offshore server that is outside the bounds of US jurisdiction. It could be anywhere. The majority may be on your machine, but this isn't a requirement.

Instead of ever presenting the final address, perhaps it could transfer this data amongst the network in an encrypted fashion


The final address is only used during a reply. Where you actually get the data is another issue. So, for the paranoid: they may always upload their music into Freenet, but locate it using Alpine.

This would be the best of both worlds for fast searching and anonymous downloading.

The concept of peer to peer (2)

bliss (21836) | more than 13 years ago | (#430192)

I really wish the Freenet [sourceforge.net] project would get up a little more steam, start creating, say, a nice Freenet-to-web interface, and start having a community. Uncrackable, indestructible, and totally anonymous!

Re:Taking P2P Too Far (3)

Salamander (33735) | more than 13 years ago | (#430193)

  • Sending the same data to 10K hosts in separate packets not only doesn't scale, but it's an extremely antisocial abuse of the network

Funny, I thought web servers acted this way...

Increasingly, in the era of second-gen content distribution networks, they don't. Where they do, they pay dearly for the privilege of sucking up so much bandwidth. I don't think you do yourself any favors by pushing a first-gen "solution" when the second gen is already out there and some people - such as myself - are already working on gen three.

If you find the reply you're looking for, then there is no need to query the remaining peers

You won't get the answer until you've already sent queries to the next batch. Net result: not only are you consuming all this bandwidth and creating all this congestion, then you turn around and drop those packets on the floor. That's just adding insult to injury, as far as your upstream is concerned.

The adaptive nature of the protocol

Please describe how this adaptation occurs. The details are not on your website, it's a complex problem, and I think you're just handwaving about something you don't understand.

you are sending a large number of packets, but each peer is only receiving one of them

But the intervening routers are receiving them - and the replies - in huge clumps. That's just like a DDoS.

ALPINE, yet flat? (2)

ToastyKen (10169) | more than 13 years ago | (#430195)

If it's a "truly flat network", wouldn't it be more appropriate to call it "The NORDIC Network"? :P

P2P vs. Client/Server - Everybody's a Server (2)

billstewart (78916) | more than 13 years ago | (#430197)

The issue isn't whether some parts of the software are performing server functions and some are performing client functions - it's that everybody's a server, and there aren't any centralized resources - it's all decentralized. One big difference between a Gnutella-like P2P and your hypothetical Mozpache is that everything your client-side downloads in Gnutella or Napster is advertised for uploading by your server-side, and the object naming convention is something that supports this. By contrast, Mozpache could symlink the Mozilla cache directory and the apache exportable-files directories together, with a bit of work, but Apache doesn't have a useful way to extract information from fat.db, and the cache directory file naming convention isn't very exportable (random-looking names, but better than taking all the files and naming them "index.html".)

Re:Peer to peer file sharing == piracy. You THIEFS (2)

Rader (40041) | more than 13 years ago | (#430198)

I don't see how. If a computer can connect to the internet (and thus other computers) there's really no stopping it.

Sure, some apps might get stopped, but that just means we move back to something more crude (until the next version of something nice comes out again).

Rader

Latency issues (4)

Alien54 (180860) | more than 13 years ago | (#430199)

From the FAQ -

What about latency? I don't want to wait 2 minutes for a reply!

Get a DSL line! ;) Also, this is assuming you query the *entire* group. Part of the purpose of the ALPINE protocol is to adapt to the responses you receive during queries. The first query you make may take 2.5 seconds. On the next, you may query the responsive peers first, and you may find what you are looking for in 1 second. The next query may be further refined, and your peers organized so that you find what you're looking for in a fraction of a second.

You can only do this type of adaptive configuration, tailored to *each* peer and their use of the network, if you allow them to do the querying themselves, and order the queries themselves. This implies a direct connection to the people they are querying.

You cannot perform this type of custom adaptive configuration without an extreme amount of overhead in a routed architecture, thus the need for DTCP.

I do not know about you, but an awful lot of users out there do not have high speed access yet. And I can think of many folks whose first action would be to search everything.

Remember, half the population is below average.

The ALPINE Network (2)

Pemdas (33265) | more than 13 years ago | (#430200)

Wow... PureFiction is doing a heck of a marketing job for this, if nothing else. I can remember at least 4 comments (threshold 2) from that account on the "Gnutella will never scale" discussion which promoted this system.

I don't know what the technical merits are, but the marketing is solid! :)

One problem... (3)

Wraithlyn (133796) | more than 13 years ago | (#430201)

This looks really cool; however, I foresee a lot of problems for users that don't have a direct internet connection. Namely, you cannot transmit a UDP packet to someone behind a proxy/firewall/NAT unless they have sent a packet out to you first. Still, they do mention NAT in the overview, so at least they are thinking of this.

Anonymous? (1)

karot (26201) | more than 13 years ago | (#430202)

If I remember correctly, one of the touted benefits of Gnutella (but not Napster?) was that the transfer was (or could be) anonymous.

I'm not sure how important this is, but will the described "flat" structure of this system allow both source and destination to choose anonymity (assuming both ends agree to it)? If so, how can the end requesting anonymity guarantee that they really can't be traced?

Or perhaps this is just the imaginings of a madman?
--

Re:Problems ahead for a Windows client (4)

dinky (58716) | more than 13 years ago | (#430203)

From the FAQ (perhaps you should read it):
How do you support 100,000 connections? Won't the host system crash long before then?

These connections are all multiplexed over a single UDP socket. This is one of the functions of DTCP: to provide a multiplexing protocol for use over UDP. The multiplexing value is 4 bytes, which allows for a theoretical maximum of over 4 billion connections.

This thing uses one single UDP socket, so I don't think porting it to Windows would be too hard, now would it?
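
A toy Python illustration of multiplexing many logical connections over one UDP socket via a 4-byte connection ID, as the FAQ excerpt describes; the framing below is invented for illustration and is not DTCP's actual format:

    import socket, struct

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 4000))   # one OS socket carries every logical connection

    def send(conn_id, data, addr):
        # A 4-byte big-endian connection ID allows 2**32 (~4 billion) connections.
        sock.sendto(struct.pack("!I", conn_id) + data, addr)

    def recv():
        datagram, addr = sock.recvfrom(2048)
        conn_id = struct.unpack("!I", datagram[:4])[0]
        return conn_id, datagram[4:], addr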

Re:Taking P2P Too Far (2)

f5426 (144654) | more than 13 years ago | (#430207)

> No, each of these 'victims' would only receive a single 60-byte packet. This is the opposite of a DoS attack: you are sending a large number of packets, but each peer is only receiving one of them.

Well, *if* I am the only one querying. In the general case, they would receive 60-byte packets for every query done on the network.

This is the major flaw of all gnutella-like systems. If only the client knows what is on its disks, then you can kiss scalability good bye, no matter how hard you try.

Cheers,

--fred

Re:Latency issues (2)

f5426 (144654) | more than 13 years ago | (#430209)

> Remember, half the population is below average.

Below *median*

Cheers,

--fred

Re:Taking P2P Too Far (2)

PureFiction (10256) | more than 13 years ago | (#430210)

You won't get the answer until you've already sent queries to the next batch. Net result: not only are you consuming all this bandwidth and creating all this congestion, then you turn around and drop those packets on the floor. That's just adding insult to injury, as far as your upstream is concerned.


No, there is no batch. The query process is iterative, and can be halted or slowed at any point in time. While there may be a dozen to a few hundred packets in transit before you start receiving replies, you can slow or stop the process once you see that you have enough replies, or that you have found what you're looking for, or you just want to cancel.

Please describe how this adaptation occurs. The details are not on your website, it's a complex problem, and I think you're just handwaving about something you don't understand.


Sure, there are various criteria that indicate a bad or good peer. These include, among other things:

- Did the peer respond to your query?
- Did the peer misrepresent the response?
- Is the file or resource valid?
- Is the peer sending you too many queries?

Etc. These properties, and others, control where in the list of peers to search an individual peer is located. A high-quality peer, who often responds and has quality files, will be queried long before a peer that never responds.

For negative behavior there are even ban lists and so forth to prevent such peers from bothering you further.
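
A sketch of the sort of per-peer bookkeeping the criteria above imply; the fields and weights are made up for illustration, since ALPINE's actual scoring rules aren't spelled out here:

    # Hypothetical per-peer quality record; higher score = queried earlier.
    class PeerStats:
        def __init__(self):
            self.responses = 0     # times the peer answered our queries
            self.bad_results = 0   # misrepresented or invalid content
            self.queries_in = 0    # load the peer puts on us
            self.banned = False    # negative behavior -> never queried

        def score(self):
            if self.banned:
                return float("-inf")
            return self.responses - 5 * self.bad_results - 0.1 * self.queries_in

    def query_order(peers):        # peers: dict of addr -> PeerStats
        return sorted(peers, key=lambda a: peers[a].score(), reverse=True)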

But the intervening routers are receiving them - and the replies - in huge clumps. That's just like a DDoS.

Only your initial upstream router is receiving them, and from there the packets fan out to their respective destinations. And any ISP that cannot handle the bandwidth generated by a customer has much bigger problems.

Re:Banner ad coincidence? (2)

Rader (40041) | more than 13 years ago | (#430212)

Talk about target marketing!! My rights have been violated by /.!

Rader

Re:So... (1)

Rader (40041) | more than 13 years ago | (#430214)

Makes sense to me. I'm going to write a book titled "Whores Through Time" about an organization of hot babes that travel through time, and in the minutes before a violent act, they pop in and seduce the perp.

No way they'd resist... kill someone, or have hot sex... hmmm...

Problem solved. Till 10 minutes later.

Rader

Ever heard of CGI? (2)

OlympicSponsor (236309) | more than 13 years ago | (#430217)

Just create a well-known URL that runs a CGI to list the contents of your HD.
--

Peer to Peer networks are illegal. (2)

TheFlu (213162) | more than 13 years ago | (#430218)

Unfortunately, peer to peer networks that have the ability to allow persons to trade copyrighted material without compensating the owner of the work should be banned...according to this article [mp3newswire.net] about the European Parliament.

Freedom of depress. The Linux Pimp [thelinuxpimp.com]

Re:Not there yet.. (2)

PureFiction (10256) | more than 13 years ago | (#430219)

If people can query everybody in the network, it isn't going to scale.

They cannot query everybody on the network. They can only query everyone they are connected to. So, modem users would obviously have a smaller connection pool compared to a DSL user.

If a peer they are connected to is causing too much load, they can have them slow down, or drop them entirely.

Someone on a T1 connection may indeed be able to connect to just about everyone, but they would also have the bandwidth and memory to do so.

Not to mention the problem of having the searcher send out an individual query to every client it wants to search. If I understand this correctly, if I want to search 3,000 hosts I have to send out 3,000 otherwise identical packets. This is not what is known as a scalable protocol. In fact, from a network point of view, it's a worst-case scenario.

Worst case scenario is a forwarded broadcast. And at any rate, 3,000 queries to find what you're looking for is indeed a worst-case search.

Part of the ALPINE protocol is the adaptive configuration of the query list so that quality peers are queried first, thus greatly increasing the chances that you don't need to query more than a few hundred to find what you're looking for.

Napster is the New Internet (1)

vodoolady (234335) | more than 13 years ago | (#430221)

Napster wasn't just a way to get illegal music, it was a highly available, high capacity filesystem. Commercial products like that usually cost a bundle and require a team of specialists to configure and maintain. I think we're gonna see applications stop using databases and start using big p2p networks.

Re:peer to peer will always survive (2)

Rader (40041) | more than 13 years ago | (#430224)

Someone earlier mentioned that the RIAA/MPAA would close down all the P2P programs next. Well, there are lots of technical arguments against that, so I figure the only method left for the RIAA to take it to the next step is to start setting up sting operations.

They pose as undercover traders (heheh) and they trade with you. Under Policy Act 5.4.11.c they log your illegal activity, turn it in to a judge, and then prosecute you for the $25,000 - $100,000 fine for each copyright violation.

However, I think the mild trading done over the internet would be small fries compared to the assholes like me who trade 100's of albums at a time through the mail.

Rader

You trade bandwidth use for decentralization... (1)

Entropius (188861) | more than 13 years ago | (#430226)

...and thus resistance to attack. There are two kinds of centralization here: legal centralization and network centralization. Napster has a single point of "legal failure"--you only have to sue one entity to bring it down. It also has relatively few network points of failure. The opposite is Gnutella, which is as unusable as it is unsueable. We've already demonstrated that a network as centralized as Napster won't work for legal reasons. However, there are existing networks out there that have enough decentralization to be highly resistant against lawyers, but are centralized enough to take the bandwidth burden away from the ordinary user.

There are already two systems in place that would work. The first is the oldest form of P2P on the net: IRC. IRC fileswapping channels have been around for a while; the problem is that they don't have the "critical mass" of users to make them really useful.

Someone, however, should write a script that reflects searches from one channel to the other. For instance, if someone sends out a search on channel 1, the bot will send the same search to channel 2. If it is sent any results, it will echo them to the original searcher. This doesn't put any bandwidth burden on the original searcher, but extends his search radius considerably, especially if the reflectors are configured to examine multiple servers (for instance, #mp3 on DALnet reflecting to #mp3 on EFnet. I don't know how you'd write it as a mIRC script, but it could be done in C...).

The bots would probably be configured to keep a record of all the fileswapping channels that they have heard of. (Reflector bots should mention all the channels that they are reflecting to/for every 10 minutes or so.) When a bot logs on, it should join each channel in its list, and then determine which channel links are already being maintained. This is easy to do: send a query and see if someone reflects that query to the other channels. If any channels do not already have a reflector linking them, the bot starts bouncing queries between them. It then leaves all channels in which it's not serving a function, to cut down on its bandwidth use. Anytime a reflector hears another reflector give its periodic status report, including a line like "I am a member of channels #foo, #bar, #fnord, etc...", it should add any new channels to its list.

A system like this is halfway between Gnutella and Napster in terms of bandwidth use. (The regular users in the channels who aren't running servers could safely squelch reflectors to cut down on bandwidth.) It has no single legal point of failure: the IRC network is protected as a common carrier with a significant non-infringing use, and there are too many people running servers and reflectors to sue them.
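
A rough Python sketch of the reflector logic described above, with the IRC I/O stubbed out (a real bot would speak NICK/JOIN/PRIVMSG over a socket); the channel names, message format, and loop suppression between multiple reflectors are all left as assumptions:

    # Toy reflector: relay "!search" lines between channels and route results
    # back to the original asker. 'say(channel, text)' is a stub for a real
    # IRC send, e.g. sock.send(f"PRIVMSG {channel} :{text}\r\n".encode()).
    CHANNELS = ["#mp3-a", "#mp3-b"]       # made-up channel names
    pending = {}                          # query id -> (asker, origin channel)

    def on_message(channel, sender, text, say):
        if text.startswith("!search "):
            qid = f"{sender}-{abs(hash(text)) % 10_000}"
            pending[qid] = (sender, channel)
            for other in CHANNELS:
                if other != channel:                 # reflect the query outward
                    say(other, f"!search {qid} {text[8:]}")
        elif text.startswith("!result "):            # responders: "!result <qid> <hit>"
            qid, result = text[8:].split(" ", 1)
            if qid in pending:
                asker, origin = pending[qid]
                say(origin, f"{asker}: {result}")    # echo the hit to the asker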

note: I'm merely talking about technology, not endorsing or condemning its use for any specific purpose. -Entropius

Re:Taking P2P Too Far (2)

Salamander (33735) | more than 13 years ago | (#430229)

The query process is iterative, and can be halted, slowed, at any point in time.

The more you throttle it down, the longer it takes to get past the overwhelming majority of negative responses to the few positive ones, so you can have slow response because you didn't throttle your traffic, or slow response because you did. Yippee.

Sure, there are various criteria that indicate a bad or good peer. These include, among other things:

Is a framework for collecting, collating, and using this information already thought out, or did you make up this list only in response to my query?

Only your initial upstream router is receiving them

You need to look further than that. If you have a Napster-like number of users there will be thousands of routers out there connected to thousands of ALPINE users each generating queries. When you multiply things out to get total traffic, as was just done for Gnutella, you do get a level of traffic that will make the router owners sit up and take notice.

Piggyback IRC (2)

Rader (40041) | more than 13 years ago | (#430230)

Why don't we write a P2P program that just piggybacks on the power of IRC servers? This is a protocol that can't be shut down, and has decent scaling properties.

A front end could be written so that no one even has to know that the info is being sent through IRC on the back end.

Rader

Re:exactly. (2)

Rader (40041) | more than 13 years ago | (#430232)

Because then you'd have iMesh: a listing of what is online and offline - mostly offline, due to statistics.

How would you know what was available NOW? Not only that, but posting UP info isn't going to happen. Look at all the leeches that were on Napster. And let's say people did try... they're still not going to reasonably post their changes all the time. Maybe we could automate it. But then you're never going to find free anon web pages that can handle that.

Rader

Re:Actual file transfer and anonymous usage though (1)

Anemophilous Coward (312040) | more than 13 years ago | (#430233)

Only if the reference you provide for the content is on your machine. You may simply provide a Freenet key, and the user can then obtain the file anonymously using Freenet. You may provide an FTP location on some offshore server that is outside the bounds of US jurisdiction. It could be anywhere. The majority may be on your machine, but this isn't a requirement.

You mentioned right before this that the searching and transferring will be combined into a single client, which is a good first step towards making the system easy to use for most everyone.

However, with the above sentence, I still don't see this network becoming the widespread choice for Joe Q. User out there. Sure, the end user may be able to find the material and retrieve it easily now, but those wanting to present the material (may) have to go to greater lengths to protect themselves (if there should be a need to). If Joe Q. User wants to present his files anonymously, but has to tap into other resources (he/she will probably be unaware of Freenet) to do this, the amount of data available on the network may not reach the 'mainstream popularity' levels needed to transform this into the next killer app.

OTOH, I do suppose nowadays Joe Q. User is easily trackable through Napster as well (apologies if this is not the case, I don't use the program). Even though workarounds allowed those banned users to reconnect, I presume that incoming/outgoing connection traces could still be produced, resulting in end user addresses. If this is the case, then the millions of Joe Q's out there don't really care about anonymity on the Internet (which, oddly enough, is mostly true). Thus, they probably won't care about hosting the data (legal or illegal) on their own computer.

Perhaps an integration with Freenet somehow within one interface might be possible (sorry, not fully up on the workings of Freenet today... gonna go study up on it now though) to make this an easy to use Ma & Pa app.

Not your normal AC.

Color me stupid, but (5)

OlympicSponsor (236309) | more than 13 years ago | (#430234)

Isn't "p2p" the same as "client/server" in the special case where client==server? So, for instance, HTTP is P2P if I'm running netscape and apache and so are you and we connection to each other? Or does it only count as P2P if it's a single piece of sofware? If so, then I'm announcing Mozpache, a web browser AND server.

"But how do you search," I hear you cry. How do you search NOW? Google, right? Same deal here, just use DynDNS (or whatever) to get the link to stay stable.

"P2P," sheesh--it's amazing what some people think is amazing.
--

Not there yet.. (4)

roman_mir (125474) | more than 13 years ago | (#430235)

This system does not fully eliminate the Gnutella problem of having too many search queries on the network. With Gnutella, your queries will be propagated from your node to all the nodes you are connected to, and then to all the nodes that your neighbours are connected to, which creates search clashes (the same node gets the same query from neighbours over and over). With Alpine the overlap is eliminated, but the point is, you will still have to search every node every time you want to find something. I do not see Alpine as a huge step forward in terms of scalability; what they achieved is basically the elimination of repeated search queries, but not the real problem - sending as many queries as there are users. I am not sure whether they will eliminate Ping Pong; I don't think so.

It is necessary to revise the entire searching strategy, not simply linearly reduce the number of queries.

P2P can it be stopped? (2)

Dissenter (16782) | more than 13 years ago | (#430236)

I know, Napster this and Napster that, but we are talking about something that is much bigger. P2P sharing will always be around. Before Napster there were DCC bots on IRC and ratio FTP servers that were basically the predecessors to P2P. People upload, people download. There's just a layer between. People are getting more and more used to this type of sharing anyway.

There are hundreds of ftp server applications for Windows 98, or whatever. When a large group of people learn to put up their own ftp servers, there's nothing sponsoring this other than the end users. It's at their own risk. There may not be pretty interfaces and chat rooms anymore, but seriously, did any of you ever use that?

In the future, I see listserves with people sharing today's port and password to a community of millions.


Dissenter

Re:One problem... (3)

PureFiction (10256) | more than 13 years ago | (#430237)

Yes, you are correct. And you will always send a packet first. If you are behind a NAT firewall this will be a NAT discovery packet.

A reply is then returned which has your masqueraded IP and port which the NAT router is using. From this point on, this masqueraded address is what you use to identify yourself.

Some systems may need to turn on loose UDP masquerade or the equivalent to allow reply packets from sources other than the initial destination to which you sent the discovery packet.

There are additional details, but the end result is NAT users are supported.
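
A minimal sketch of that discovery exchange in Python; the rendezvous host and packet contents are invented for illustration:

    import socket

    RENDEZVOUS = ("alpine.example.net", 4000)   # hypothetical well-known peer

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"NAT-DISCOVER", RENDEZVOUS)    # outbound packet opens the NAT mapping
    reply, _ = sock.recvfrom(64)                # peer echoes back e.g. b"203.0.113.7:31902"
    public_addr = reply.decode()                # the masqueraded ip:port the NAT assigned
    print("identify ourselves to other peers as", public_addr)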

Re:Not there yet.. (1)

albamuth (166801) | more than 13 years ago | (#430238)

I know I may be just mixing terms, but what if search queries were smarter, in that nodes retained knowledge of the same or similar queries: i.e. remembering what "direction" they came from, thus slowly strengthening certain paths associated with certain queries. What this would amount to is a neural-net type of development.

Not sure if this would speed up searches or compromise privacy, though. Freenet is designed to be totally anonymous, with encryption. Maybe I'll just read the ALPINE FAQ...

You must be a prejudging dope (1)

Bjimba (31636) | more than 13 years ago | (#430239)

Well, *this* US resident easily understood the poster's allusion to the two types of skiing - Alpine, where you need a slope, and Nordic, where you can use flat terrain. In your blinding rush to paint us as geographically challenged, perhaps you missed this.

FLIPR (2)

HairyBN (252481) | more than 13 years ago | (#430240)

This one [flipr.com] is a free application that provides people with a way to share music and other media while keeping track of all the transfers to be able to pay the artists.

Check it out. The server is on Linux too...

since this is the OT thread... (1)

popular (301484) | more than 13 years ago | (#430241)

Says girlfriend: "New peer to peer designs? How were people wearing them last year?"

--

Yet Another Reimplementation Of TCP Over UDP ? (1)

GTM (4337) | more than 13 years ago | (#430242)

I've had a quick look at their protocol; looks like they want to maintain connections over UDP... So will they do it better than the designers of TCP over IP? Not sure. :-P

Re:Taking P2P Too Far (2)

Rader (40041) | more than 13 years ago | (#430246)

Even at 60 bytes per packet, if you're trying to send to 10000 nodes that's 600K. Then the replies start coming in - in clumps - further clogging that pipe.

Funny, I thought web servers acted this way...

A web server only sends out to its 10,000 users. Those users aren't also web servers sending out 10,000 packets each. Web servers are getting away with murder compared to 10,000 searching ALPINE users.

Rader

Re:Taking P2P Too Far (2)

Rader (40041) | more than 13 years ago | (#430247)

... No, each of these 'victims' would only receive a single 60-byte packet. This is the opposite of a DoS attack: you are sending a large number of packets, but each peer is only receiving one of them.

Yea, each victim only gets ONE single 60-byte packet. FROM ME. But we're talking about 10,000 users doing the same; then ALL of them will be getting 10,000 packets.

There is only one thing in the back of my mind that would support where you're going with this: that your research shows that 90% of the people connected are just connected to be nice (went to bed, etc.) and are not active, leaving a rotating 10% of active users (active = searching).

Rader

Re:P2P Anonymity? (1)

codewolf (239827) | more than 13 years ago | (#430249)

I agree. It would be a waste of their time, and I was surprised to hear that they may even be considering it. I think that they may instead attempt to attack the larger ISPs for this. They may use an approach like "You know what your users are doing, and since you have the ability to stop it, you are responsible." I don't know how far they will go with that approach, but they really screwed up on the Napster attack (in the sense that they had one central place to find the source of the pirate MP3s and could have made a monetary deal that would control it).

Ho, hum (2)

gmhowell (26755) | more than 13 years ago | (#430252)

Archie plus apache plus *ftpd plus Linux/*BSD

BTW, I've got this great idea for a round device. You put a stick thru the middle of it and you can easily move things around. Any ideas on how to improve it?

Re:Taking P2P Too Far (3)

Rader (40041) | more than 13 years ago | (#430254)

.... No, there is no batch. The query process is iterative, and can be halted or slowed at any point in time.

What is an appropriately sized batch? 200 queries at once? 100? Seems like searches will take forever if you keep stalling a query.

Sure, there are various criteria that indicate a bad or good peer. These include, among other things:

Wow, this seems like a lot of information to keep track of on the client side. Not only am I keeping track of every IP-node user out there, but I have to keep track of it over time. In a Napster-success scenario, I'd have 2 million entries to keep track of. Not only that, but it seems like a lot of wasted overhead? Even if a user doesn't have what I want, I have to compute statistics into his/her record each time.

... And any ISP that cannot handle the bandwidth generated by a customer has much bigger problems.

Um... look, I'm just one user. Any searching done by me, yes, is only one person's activity. But I'm logging into a group of 10,000 active users? The ISP will have to handle 10,000 user requests of ME. And you can't reiterate the B.S. about throttling search requests. That's like saying there'll be less pee in the world if we all just peed slower. (Yes, the only analogy I could think of. I'll brb, I gotta go P.)

Rader

What about hotline? (1)

//violentmac (186176) | more than 13 years ago | (#430256)

Why is it that Hotline never gets mentioned??? Surely more bytes have been transferred over Hotline servers than ANY other file (not just mp3) sharing peer-to-peer system!!!

I know nobody will notice this post, cause it's at 1. Oh well. It just had to be said. At least I feel better now.

I dare you defy me.

-gb

Re:Taking P2P Too Far (1)

Blitter (15795) | more than 13 years ago | (#430257)

If you need to search a large number of peers for dynamic content in real time, you need to reach all of them to do it.

This isn't really true. You seem to be describing a search algorithm with O(N) run time (your benchmark page [cubicmetercrystal.com] even states this). Linear search algorithms *suck*! Real-world search algorithms run in O(log N) time, or even O(1) time. Admittedly this is not an easy problem, but don't bother going to the implementation phase if you can't do better than linear search time in the design phase -- it won't scale.

The only way to effectively search for something is to be able to avoid searching the majority of your storage locations. This is essentially what things like sorted trees/arrays and hash functions buy you. The fact that your storage locations are distributed, come and go from the network, and have changing content does *nothing* to change this simple fact. You must impose some kind of order on the set of peers to facilitate searching. Otherwise you are doomed.
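
A toy Python contrast of the two complexity classes being argued about - a linear scan (O(N)) versus a lookup over data with imposed order (here a locally sorted list and binary search, O(log N)); this is a generic illustration, not ALPINE code:

    import bisect

    catalog = sorted(f"song{i:07}.mp3" for i in range(1_000_000))

    def linear_find(name):      # O(N): 'in' on a list scans every entry, worst case
        return name in catalog

    def sorted_find(name):      # O(log N): ~20 probes on a million entries
        i = bisect.bisect_left(catalog, name)
        return i < len(catalog) and catalog[i] == name

    print(linear_find("song0123456.mp3"), sorted_find("song0123456.mp3"))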

Re:Absolutely right (2)

Rader (40041) | more than 13 years ago | (#430258)

This is an idea friends and I have talked about: allowing anyone with the resources to become more of a server, while other users connect to these various levels of servers. Reminds me a lot of IRC.

However, once you start doing this, the popular servers will get pressured from the RIAA and be forced to shut down.

So what sized machine/bandwidth are we talking about being able to handle being on the wide "backbone" you spoke of? I'm curious to see how many people out there would be able to be part of the backbone of the system. From what I've seen, the bandwidth is more important than the speed of the computer (the ratio of computation vs. bandwidth being pretty small, so any decent computer could handle the computations). If only T1s were a requirement, then I'd see quite an inexhaustible supply of volunteers, but if it required more than a T3, then I see an easy target for the RIAA.

Rader

Re:The ALPINE Network (1)

jellicle (29746) | more than 13 years ago | (#430259)

I was sort of hoping people would chime in with other systems as well. :) We got a bunch of submissions over the past few days - O'Reilly's P2P conference - and I just picked some that looked interesting.

Re:The ALPINE Network (1)

albamuth (166801) | more than 13 years ago | (#430260)

I noticed that, too. Sounds like the new, post-.com business strategy: get mentioned on Slashdot, IPO, laugh all the way to Barbados.

This doesn't look too convincing to me. (1)

AFCArchvile (221494) | more than 13 years ago | (#430261)

Just look at Gnutella. Sure, it's versatile, but bandwidth utilization is at 75% with Gnutella idling.

If these new designs perform like that, this might be more like "Pie in the eye."

Re:Not there yet.. (2)

PureFiction (10256) | more than 13 years ago | (#430262)

you still will have to search every node every time you want to find something

This is not the case. You only have to search until you *find* what you're looking for. This is a big difference, and part of the ALPINE protocol is adapting to the responses and the peers you're communicating with, to ensure that you search fewer peers each time you're looking for something.

This is covered in the documents, and is a major benefit. The network adapts to your preferences and optimizes accordingly.

Anybody want to help start a project? (2)

Omnifarious (11933) | more than 13 years ago | (#430263)

I would like to start building a P2P system based on the ideas here [slashdot.org] and The StreamModule System [omnifarious.org]. I expect that it can be built fully decentralized and completely scalable. I also want a lot of careful protocol documentation along the way, so people can easily see how it works and holes can be poked before it gets too big.

Taking P2P Too Far (5)

Salamander (33735) | more than 13 years ago | (#430264)

I have to admit that it's a little bit strange posting something with such a subject line from the conference hall at the O'Reilly P2P conference in SF, but I can't help myself.

Implementing a pseudo-broadcast by sending separately to all destinations is stupid. Real network designers have known this for years. First off, to send to N destinations you have to shove N packets down your local pipe, which may be narrow. Even at 60 bytes per packet, if you're trying to send to 10000 nodes that's 600K. Then the replies start coming in - in clumps - further clogging that pipe. That single UDP socket you're using does have a finite queue depth, so it will start dropping replies left and right after the first few. Well, maybe not, but only because your ISP's routers will have dropped them first because they overflowed their own queue depths.

Sending the same data to 10K hosts in separate packets not only doesn't scale, but it's an extremely antisocial abuse of the network. The traffic patterns ALPINE will generate are like nothing so much as a DDoS attack, with the query originator and their ISP as the victims. In the same Gnutella thread in which you started hyping ALPINE, some slightly clueful people were suggesting tree-based approaches. Those ideas, as stated e.g. by Omnifarious, are a little naive, but well-known technology in mesh routing and distributed broadcast can easily enough be applied to create and maintain self-organizing adaptive distributed broadcast trees (phew, that was a mouthful) for this purpose. Read the literature. The pitfalls in what you're suggesting are already so well known that they should be part of any computer-networking curriculum, and much more reasonable solutions to the same problems are only scarcely less well known. There is no need to reinvent the wheel, especially if your wheel is square.

As Clay Shirky mentioned in his talk here yesterday, "peer to peer" can be considered a little bit of a misnomer. It's a lot more about addressing and identity issues, and even more about scalability, and having N^2 connections in a network of N nodes is no route to scalability. ALPINE's scaling characteristics will be worse than Gnutella's. Pemdas made a good point [slashdot.org] that you seem to have a talent for marketing. Stick to it. Unlike Pemdas I can evaluate the technical merits of what you're proposing, and you are headed 180 degrees away from a solution.

Absolutely right (3)

BeBoxer (14448) | more than 13 years ago | (#430265)

This is absolutely correct. I talked about this in the Gnutella scalability thread yesterday. Even if you ignore the overhead of your "backbone", the process of even trying to send every query to every client is fundamentally broken. If you want to support people on less than 100bT dorm networks, this is not going to scale.

Just figure out how big a query is, then figure out how many queries per second have to be on the network before all of a client's bandwidth is consumed. If you estimate a query packet to be 1000 bits, your modem users max out at 56 queries per second on the network. And that's an absolute best case which will never ever be achieved in practice.
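
The arithmetic above, spelled out in Python (the 1000-bit query size is the comment's own estimate):

    query_bits = 1000    # estimated size of one query on the wire
    modem_bps = 56_000   # 56k modem, absolute best case
    print(modem_bps // query_bits, "queries/sec saturate the link")   # -> 56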

Until this problem is addressed, these networks will never scale. You have to have some hierarchy of high-bandwidth servers which get the queries and low-bandwidth clients which don't. This can still be a truly distributed network, but you have to distinguish between the machines that have the resources to handle lots and lots of queries and those that don't.

Imagine a two-level network where you have a Gnutella-style network of OpenNap servers which the Napster-style clients connect to. The servers distribute the queries amongst themselves to perform the searches. Each server knows what files its clients are sharing, Napster-style, and can answer for them. With this architecture, the well-connected hosts on cable networks and dorm subnets do the heavy lifting of the searches, while the dial-up clients get good performance because they aren't being clogged with a bunch of queries. The network scales better because you aren't trying to do lots of work on really slow links. Your network is also more stable because you don't have the clients (which come and go quickly) changing the topology of your "backbone".

BearShare & LimeWire (1)

Jagasian (129329) | more than 13 years ago | (#430266)

Sure, everyone knows about the scaling problems of GNUtella and clones, but the latest version of BearShare is an easy-to-use, no, idiot-proof GNUtella clone for M$-windoze. I have been using it for a couple of days now.

One of the main sources of problems for GNUtella is the type of content traded on the network. With 5MB songs, trading is quick and easy, but with 700MB mp4 DVD rips, trades take half a day, causing would-be sharers to be locked into a small number of leechers for about half a day. GNUtella and clones tend to trade larger files more often than Napster. This causes you to be queued far more often when requesting a file from a source who is sharing a DVD or two.

Still, if you have 10 minutes, check out either:
BearShare (for Windblows) [bearshare.com]
LimeWire (for Linux, etc.) [limewire.com]

While it is important to look towards future technologies such as Freenet or Alpine, the here-and-now matters the most. The current status of both Freenet and Alpine is not good enough for widespread use as a P2P network. The best thing for now is to try one of the new GNUtella servants (far better than the original in terms of ease-of-use and performance)... or try a hacked Napster. However, sticking with the Napster tech is a bad idea... tech should move towards fully distributed networks for robustness reasons.

Even though GNUtella can't scale in its current protocol version, I still see it as being the next generation after napster. Soon, the GNUtella protocol will be revised to greatly improve performance, and the GNUtella generation will hit prime time.

After that, people will want even better performance, anonymity, security, etc... Those forces will bring about the following generation. Who will fill it? Well, that's the generation in which we will see Freenet, Alpine, and other more ideal networks fight for power.

All the while, in between generations, "duct tape" proxies will be used to mend the gaps.

Re:Problems ahead for a Windows client (1)

LowneWulf (210110) | more than 13 years ago | (#430267)

Why would it matter? UDP is connectionless, so you only need one socket, and then just keep track of who your nearest peers are in your own data structures.
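A sketch of that in plain sockets (generic code, nothing ALPINE-specific): one unconnected UDP socket serves every peer, and the "connections" are just entries in an application-level table.

    import socket, time

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 4000))          # one socket for all peers

    peers = {}                            # (ip, port) -> last time heard from

    while True:
        data, addr = sock.recvfrom(2048)  # datagrams may arrive from anyone
        peers[addr] = time.time()         # peer tracking is our own bookkeeping
        sock.sendto(b"ack", addr)         # reply without any per-peer socket

So the OS never sees thousands of open connections; it sees one socket and a lot of datagrams.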

Re:Not there yet.. (2)

PureFiction (10256) | more than 13 years ago | (#430268)

The point you have to remember is that you control exactly how much bandwidth you use for queries and how many peers you query. Also, the ALPINE protocol adapts to the responses you receive, so that you tend towards a more efficient search.

Similar peers that have similar content and quality of service will gravitate towards the top of each other's query lists. Thus, these higher-quality peers will be queried before the others (if the others are queried at all).

The net result is that each successful query enhances the probability and speed with which the next query will be answered.

For example, Napster has grown to millions of users, but whenever you execute a Napster query, you are only searching among a group of 3,000-10,000! And these are randomly selected.

ALPINE will allow you to search 3,000 to 100,000+ *selected* peers, which you have tuned for optimal results.
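The adaptive ordering could look something like this (my illustration, not ALPINE's actual algorithm): score each peer on past answers and always query the best-scored peers first.

    # Illustration only: rank peers by past usefulness, query best-first.
    from collections import defaultdict

    scores = defaultdict(float)           # peer -> running quality score

    def record(peer, answered):
        # good answers pull a peer up the list; silence pushes it down
        scores[peer] += 1.0 if answered else -0.1

    def query_order(peers):
        return sorted(peers, key=lambda p: scores[p], reverse=True)

Over time the peers that share your tastes cluster at the head of the list, which is exactly the "tuning" described above.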

Freenet *is* anonymous (2)

troyboy (9890) | more than 13 years ago | (#430270)

It is not a simple matter to trace the requestor of a file on Freenet, unless the attacker can do some good traffic analysis. Read this [sourceforge.net] and dive into the documentation if you have doubts.

Re:Not there yet.. (2)

Rader (40041) | more than 13 years ago | (#430271)

They cannot query everybody on the network. They can only query everyone they are connected to. So, modem users would obviously have a smaller connection pool compared to a DSL user.

Someone on a T1 connection may indeed be able to connect to just about everyone, but they would also have the bandwidth and memory to do so.

You seem to be contradicting yourself. If a modem user can limit (or has to limit) the number of connections in his/her group, then how is it possible for a T1 user to have everyone in their group? Both cannot happen.

Rader

Re:Taking P2P Too Far (2)

PureFiction (10256) | more than 13 years ago | (#430272)

Not only am I keeping track of every IP-node user out there, but I have to keep track of it over time. In a napster-success scenario, I'd have 2 million entries to keep track of. Not only that, but it seems like a lot of wasted overhead?

No, you only keep track of this information for the peers you are currently connected to. This may be 3,000 to 10,000 for a Napster-sized group (not all one million Napster users are on the same server!) or more if you have a beefy machine that can handle it.

It is entirely up to each user how many connections and how much bandwidth they wish to use.

The ISP will have to handle 10,000 user requests of ME. And you can't reiterate the B.S. about throttling search requests

I don't see your point. Each of the 10,000 MEs would have their own ISP, and would use their own bandwidth.

Ever watch your modem/DSL lights when you're on Napster? This is no different, and the throttling does work, unlike TCP streaming, where the bandwidth is always wide open (unless you explicitly throttle sending in your application).
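Throttling sends at the application level is straightforward with UDP; a minimal token-bucket sketch (my own, not ALPINE's code):

    import time

    class Throttle:
        # Token bucket: allow at most `rate` bytes per second out.
        def __init__(self, rate):
            self.rate, self.tokens, self.last = rate, float(rate), time.time()

        def wait(self, nbytes):           # call before each sendto()
            while True:
                now = time.time()
                self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= nbytes:
                    self.tokens -= nbytes
                    return
                time.sleep((nbytes - self.tokens) / self.rate)

Because each query is a single datagram, delaying a send never stalls anything the way backing off an open TCP stream does.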

Re:The concept of per to per (1)

hardburn (141468) | more than 13 years ago | (#430273)

1) Not searchable

Being worked on.

2) Sharing a directory of files is cumbersome, in that it requires insertion. Sharing files on Freenet requires you to duplicate your MP3s... or insert them all and delete the native filesystem versions... that would make most people nervous.

A definite problem, but a necessary one to maintain plausible deniability. Actually, you need to keep your native filesystem version too, unless you don't want them anymore. However, I don't think this will be a problem with 20GB hard drives for

------

Flat trails (1)

KjetilK (186133) | more than 13 years ago | (#430274)

Yeah, but, being a Nordic skier, I can tell you, Nordic skiing trails are not "truly flat".... :-) But on the other hand, Nordic skiing gives you more freedom than Alpine skiing, so why not....? :-)

Re:P2P Anonymity? (1)

hardburn (141468) | more than 13 years ago | (#430275)

Freenet does not use automatic splitting. Many of the more high-level developers advocate automatic splitting, but the coders of the ugly internals believe it may do more harm than good. It may never be implemented.

In any case, Freenet uses encryption to protect anonymity, along with passing your request from proxy to proxy to proxy to get your information (and does so without too much of a performance hit, though it's not exactly HTTP).
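The flavor of that chaining, much simplified (a toy illustration, NOT Freenet's real routing; `store` and `route` are made-up stand-ins): each hop only ever talks to a neighbor, so no single hop sees both endpoints of the request.

    # Toy illustration of request chaining (not Freenet's protocol).
    def fetch(key, node, ttl=10):
        if key in node.store:             # this hop can answer; the reply
            return node.store[key]        # flows back hop by hop
        if ttl == 0:
            return None
        neighbor = node.route(key)        # made-up: choose next hop for key
        return fetch(key, neighbor, ttl - 1)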


------

P2P Anonymity? (1)

codewolf (239827) | more than 13 years ago | (#430276)

I was listening to a show on NPR this morning about Napster and the other available P2P networks that could replace the services Napster delivers. It seems that the music industry is seriously considering going after individual users of the other services as well (such as Freenet, etc.). I think that some people incorrectly assume that these networks provide a high degree of anonymity. The requests for files on these P2P networks still originate from your computer, and can still be traced. I believe that in Freenet's case the "provider" of the file you request is protected by the splitting up of the file in question, but the requestor can still be tracked. Now, if these other P2P networks combined the existing services with encrypted file requests, the music industry would have no way of chasing down anyone looking to download pirated MP3s. However, I don't think the music industry has any chance in hell of going after individuals anyway...

Re:Not there yet.. (3)

roman_mir (125474) | more than 13 years ago | (#430277)

Yes, I read that too. Note that statistically less than 30% of users have what you need, and out of those 30% not everyone will let you download what you want. Let's say that in the best-case scenario Alpine has a network that can run 70% faster than Gnutella on networks with large node counts. This is good, but only linearly good; exponential growth of the network will cause the same problems for Alpine that exist with Gnutella, since infinity/2 is still infinity :)

thinking of Freenet? (1)

biftek (145375) | more than 13 years ago | (#430278)

AFAIK, Gnutella doesn't have (currently, at least) anonymous transfers; it is a straight point-to-point connection.

Freenet [freenetproject.org] does allow for anonymity of the provider, certainly, and possibly of the person downloading as well. The main problem with Freenet is searching for files, which is nearly nonexistent. Basically, you need to know the URL to find out whether a file exists. This leads to problems like those of DeCSS and 2600, where sites are prosecuted simply for providing the URLs.

Perhaps if ALPINE could be combined with Freenet to provide improved searching, it could be interesting.

Re:Banner ad coincidence? (2)

spif (4749) | more than 13 years ago | (#430279)

I'm not implying that you intentionally tied the ad to the story, but I am implying that perhaps you were more inclined to run the story because of the ad. In the future you might want to consider checking to see if you have an ad running which is directly related to a story you're planning to post, and adding a brief disclosure statement ("Slashdot is currently running an ad for the book 'Peer-to-Peer' which is published by O'Reilly") if there is such an ad.


fnord.

propagate up the hierarchy (1)

johnrpenner (40054) | more than 13 years ago | (#430280)


search in small pools, and propagate up the hierarchy.

the solution to p2p 'bog-down' is to split the search tasks up into hierarchies. you search for N inside a given range. if the result can't be found within that range, you propagate the request up the tree. the tree is made of 'clusters'. the problem then is how to dynamically allocate *which* tree and cluster you are part of as users continuously move on and offline.
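a sketch of the idea (my own illustration, with made-up names): search your own cluster first, and hand the query to the parent cluster only on a miss.

    # illustration only: clusters form a tree; queries escalate on a miss.
    class Cluster:
        def __init__(self, parent=None):
            self.parent = parent
            self.members = []             # each member exposes a .files list

        def search(self, term):
            hits = [f for m in self.members for f in m.files if term in f]
            if hits:
                return hits               # found within this range
            if self.parent:
                return self.parent.search(term)   # propagate up the tree
            return []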

Re:peer to peer will always survive (2)

TheTomcat (53158) | more than 13 years ago | (#430281)

Peer to peer will always survive and it will do so because this community just will NOT take no for an answer.

I hate to do this, because it paints P2P technologies in an unethical light, but if there is ever an official P2P war, it will have the same results as the war on drugs, or the prohibition of alcohol, or trying to keep marijuana illegal.

I truly believe that pot will (eventually) become legal to grow, and smoke, and the governments will tax it heavily (as they do tobacco) and profit from it. I'm not HOPEFUL that this will happen, nor am I opposed to it. I just believe it will happen.

The "war-on-drugs" is mildly successful, but, if I wanted to go our and get a shot of heroin, or a cap of Mescaline tonight, I wouldn't have a whole lot of trouble finding someone to sell it to me.

And we all know how prohibition of alcohol turned out.

Warez is illegal, but it will never go away because you just can't prosecute.

You _can_ prosecute, it's just difficult. It's a losing battle. Prosecuting one person in one town isn't going to solve anything, and prosecuting too many people just becomes ultimately more expensive than the projected "loss" by individuals 'pirating' your software.

It's like arresting one junkie for possession. It doesn't solve the problem. Our prisons just aren't big enough to hold everyone who violates the law, which is why we have varying levels of prosecution.

P2P vulnerability... (1)

gozie (153475) | more than 13 years ago | (#430282)

P2P makes things like viruses, pseudo-DoS attacks, etc. so much worse, right? One lone person could potentially flood the whole network with nothing but search queries. We wouldn't be much different than a network of Borgs. Malicious acts, even stupid errors, could bring so many problems to some unwitting people.

Re:Not there yet.. (2)

PureFiction (10256) | more than 13 years ago | (#430283)

You seem to be contradicting yourself. If a modem user can limit (or has to limit) the number of connections in his/her group, then how is it possible for a T1 user to have everyone in their group? Both cannot happen.

It would be very unlikely, but all that would need to occur is that one of the 10,000 connections that every peer has would be to the T1 server. The rest of the connections may be to random peers, but the T1 user would still be connected to everyone, while everyone else maintains only 10,000 connections.

Re:This is a nice idea, but.... (1)

gozie (153475) | more than 13 years ago | (#430284)

I like that idea, but it's still pseudo-centralized.

Re:Not there yet.. (2)

roman_mir (125474) | more than 13 years ago | (#430285)

Problems:
1. There are no standard packet sizes: query packets grow as they travel, leading to real delivery failures and more.
2. Packets cannot interact with each other and cancel each other out, so only one packet can be sent to query the entire network. This is not bad in itself, but it drastically reduces searching speed, since the packet has to traverse the entire network and return to you. The packet also has to keep a trace of the entire route, listing every traversed node (imagine the size of the packet by the 100th node), and it will probably be lost if a receiving node drops offline before the packet is redirected... how long are you willing to wait for a response to your search query?
3. The worst part is that there is no heuristic for the search: if your packet is on node A, and nodes B and C are connected to node A, there is no way to predict which direction to go; there is no preference between B and C.

But there is still hope: it should be possible to build a network where the search is done on a number of self-proclaimed servers that index the rest of the network. These servers must have a number of clones, so that no information is lost once a server goes offline, and the distributed index should be able to update and redistribute itself. This would reduce the total number of search packets sent within the network. A primitive example: imagine 26 nodes on the network, each holding info on all files stored on the net that start with a particular letter of the English alphabet. The servers are cloned a few times, and your queries go to the closest server that indexes the letter your query starts with.
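That primitive example is easy to make concrete (hypothetical code, just to show the partitioning):

    # 26 index servers, one per starting letter; a query goes straight to
    # the right partition instead of flooding the whole network.
    # (A real version would hash names and handle non-letter characters.)
    import string

    servers = {c: {} for c in string.ascii_lowercase}

    def publish(filename, owner):
        servers[filename[0].lower()].setdefault(filename, []).append(owner)

    def lookup(filename):
        return servers[filename[0].lower()].get(filename, [])

Replication would sit on top of this: each letter's index gets copied to a few clone nodes, so the partition survives any single node going offline.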

Re:BearShare & Limeware (1)

hardburn (141468) | more than 13 years ago | (#430286)

The current status of both Freenet and Alpine is not good enough for widespread use as a P2P network.

And why not? Ignoring usability issues (which are going away very fast), Freenet scales up much better than anything else I've seen. It's only when scaling down (where it's at now, unfortunately) that it has problems.


------

Re:The concept of per to per (1)

Jagasian (129329) | more than 13 years ago | (#430287)

Freenet can be used through your web browser. You can effectively surf websites that exist within Freenet. There are two main drawbacks to Freenet:
1) Not searchable
2) Sharing a directory of files is cumbersome, in that it requires insertion. Sharing files on Freenet requires you to duplicate your MP3s... or insert them all and delete the native filesystem versions... that would make most people nervous.

Boink!!! (2)

segmond (34052) | more than 13 years ago | (#430288)

I can't find any detailed technical spec on Espra; at least there are some for Alpine. Until it faces the real world, Espra is at the bottom of the list of P2P apps I will listen to. Alpine is already getting criticism for a questionably "flawed" design... Let's not get burned by the P2P hype; take it slow, guys...

Simple enough solution (1)

Julian Morrison (5575) | more than 13 years ago | (#430289)

..at least for outbound messages: use multicast UDP.
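Joining and sending to a multicast group takes only a few lines of standard socket code (the group address here is just an example in the administratively scoped range):

    import socket, struct

    GROUP, PORT = "239.1.2.3", 4000

    # sender: a single sendto() reaches every listener in the group
    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", 4))
    send.sendto(b"query: offspring", (GROUP, PORT))

    # receiver: join the group to start getting the queries
    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(("", PORT))
    mreq = socket.inet_aton(GROUP) + socket.inet_aton("0.0.0.0")
    recv.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    print(recv.recvfrom(2048))

The catch is that multicast is rarely routed across the public Internet, so in practice this helps mostly on a LAN or within a cooperating ISP.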

Re:The concept of per to per (1)

jameshowison (162886) | more than 13 years ago | (#430290)

Check the first link in the article! Espra.net makes freenet as easy to use as Napster.

Espra [espra.net] is the GUI interface to Freenet. Now get on the network and help it work! Cheers, James

Helping the community? (1)

khyron664 (311649) | more than 13 years ago | (#430291)

Is finding another P2P solution really going to help the hacker community not look like pirates? I've heard of some legal uses for P2P networking, but all anyone ever talks about is the next Napster, one that can't be shut down so easily. Some even go so far as to admit (odd that they'd do that) that Napster is illegal. With discussions like these, it's no wonder many people view us as thieves. Or am I just way off the boat here, and the hacker community really doesn't care how it's thought of?

As a side note, I really don't care what happens to Napster or P2P networking. Shutting down Napster isn't going to stop MP3s, and Napster was basically an illegal service, although there were some legal MP3s traded on it. Everyone cries about losing Napster, but in reality it doesn't change anything.

Anyway, back to my previous question. Does this community care how it is perceived?

Khyron

Re:Color me stupid, but (1)

ivan256 (17499) | more than 13 years ago | (#430292)

So, you've introduced a third party, Google, into your system, and it's no longer dynamic or P2P. The point of P2P is that people can drop off or connect at any time and you can search them, not an archive of what was there...

If you can make that scale then it is "Amazing"

Re:propogate up the hierarchy (1)

gozie (153475) | more than 13 years ago | (#430293)

Searches would take forever....

Vague searches and definition of "a reply" (2)

yerricde (125198) | more than 13 years ago | (#430294)

Funny, I thought web servers acted this way

And they're on high-speed T3 or OCx connections to the Internet, connections that are designed to handle such a load.

If you find the reply you're looking for, then there is no need to query the remaining peers

What if your query isn't an exact match to one file? For instance, I'm looking for "songs by The Offspring, in .ogg [xiph.org] or .mp3 format, at bitrate >= 160 kilobit/s," in whatever query language the system uses. (I picked a random P2P-friendly band.) I'm not "Feeling Lucky [google.com] "; I know my query is vague, but I want to survey the net around me and see what Offspring tracks are on hosts close to mine. The reply is the set of results I get back, not just the chronologically first element.

If, on the other hand, I typed in "artist contains Offspring, title contains Pretty Fly, length within +/- 3 s of [whatever the real length is], Ogg Vorbis format, bitrate 160-192 kbps, on a persistent connection," I would accept a "first reply" response.

No, each of these 'victims' would only receive a single 60 byte packet

From every single user who's searching. Say a user searches a 20,000-user network once every 10 minutes (this takes into account inactive users). You'll have to handle (on average) 2,000 queries a minute, over 30 a second. That's not even counting peak use. Can your hardware and network connection keep up?

But whenever I think of the obvious solution to this problem (proxies that cache search requests for a group of users), I realize that such a topology would be equivalent to that of the existing OpenNap network.


All your hallucinogen [pineight.com] are belong to us.

How about pier to pier (2)

InterGuru (50986) | more than 13 years ago | (#430295)

The packet steamers of the 19th century were the first example of packet pier-to-pier communication.

exactly. (1)

gagganator (223646) | more than 13 years ago | (#430296)

why can't napster users obtain anonymous free web pages and post mp3s there?

google is the search engine

we'd even get detailed artist info, scanned album covers, photoshopped artist heads on porn bodies, ygti

ELF dominated by Anime (2)

Cytotoxic (245301) | more than 13 years ago | (#430297)

There is a new peered sharing network, Project ELF [projectelf.com], which allows truly anonymous sharing of any file type. The point of this system is privacy rather than speed, but some features will actually make it faster as the network grows, including downloading pieces of the same file from multiple sites simultaneously. Pretty cool!
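The multi-source trick looks roughly like this (an illustration with a made-up fetch_range() helper, not Project ELF's actual code): split the file into byte ranges and pull different ranges from different hosts in parallel.

    # Fetch different byte ranges of one file from several peers at once.
    # fetch_range(host, lo, hi) is hypothetical and must return bytes.
    from concurrent.futures import ThreadPoolExecutor

    CHUNK = 1 << 20                       # 1 MB pieces

    def download(hosts, size):
        ranges = list(enumerate((lo, min(lo + CHUNK, size))
                                for lo in range(0, size, CHUNK)))
        def grab(item):
            i, (lo, hi) = item
            return fetch_range(hosts[i % len(hosts)], lo, hi)
        with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
            return b"".join(pool.map(grab, ranges))

Since pool.map preserves order, the pieces concatenate back into the original file, and throughput grows with the number of hosts holding a copy.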

Fantastic (1)

luqin (3559) | more than 13 years ago | (#430298)

Now where's the DTCP info that I need to start coding up a client?

---

Finally, a P2P protocol that considers DSL users (1)

Sheepdot (211478) | more than 13 years ago | (#430299)

Looked at the specs for the protocol they will be using, and what did I find? NAT support. Looks like those of us with DSL won't have to configure any more incoming ports with this system, unlike what we've had to do with Napster, Gnutella, and the like.