The Internet

Towards an Internet-Scale Operating System 305

gschoder writes: "Two Berkeley computer scientists (including David P. Anderson of SETI@home) envision an Internet-scale operating system to harness the processing power, networking efficiency, and storage capacity of everyone's computers. Scientific American has their proposal."
This discussion has been archived. No new comments can be posted.

  • by pacc ( 163090 ) on Tuesday February 12, 2002 @01:23PM (#2994875) Homepage
    There are still no simple ways to use a pair
    of computers on the same desk efficiently; why not start there?
  • ...
  • Seti At Home (Score:2, Insightful)

    by JohnHegarty ( 453016 )
    This is basically SetiAtHome on a massive scale. I wonder how many work units this cluster could do an hour ;-)
  • Scary... (Score:3, Insightful)

    by Em Emalb ( 452530 ) <ememalb AT gmail DOT com> on Tuesday February 12, 2002 @01:25PM (#2994896) Homepage Journal
    "When Mary gets home from work and goes to her PC to check e-mail, the PC isn't just sitting there. It's working for a biotech company, matching gene sequences to a library of protein molecules. Its DSL connection is busy downloading a block of radio telescope data to be analyzed later. Its disk contains, in addition to Mary's own files, encrypted fragments of thousands of other files. Occasionally one of these fragments is read and transmitted; it's part of a movie that someone is watching in Helsinki. Then Mary moves the mouse, and this activity abruptly stops. Now the PC and its network connection are all hers."

    Nope. Cause some l33t h4x0r will have own3d her already.

    This is scary as hell. I hope it doesn't get implemented. This is far different from Seti...
    • by Pac ( 9516 )
      "I am sorry Mary, but 15% of this file's backup were lost due to last week "You are really an idiot if you click this attachment" Outlook 2010 virus, 20% are unavailable at this moment due to orbital problems with the Earth-Moon Internet backbone and other 5% were in computers seized by the government in the on-going war on spammers. Should I guess the missing 40% from the available 60%?"
      • Should I guess the missing 40% from the available 60%?

        Yes! Error-correcting codes will make it possible to guess the whole file from fragments that add up to 50%. Mojo Nation [mojonation.net] already does this.
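
        As a rough illustration of the erasure-coding idea, here is a toy sketch in C. It uses a single XOR parity fragment, which can only rebuild one missing data fragment; real codes (Reed-Solomon, Tornado codes, or whatever Mojo Nation actually uses) generalize this so that any sufficiently large subset of fragments reconstructs the file.

        /* Toy erasure coding: one XOR parity fragment rebuilds any single
         * missing data fragment.  Illustration only, not Mojo Nation's scheme. */
        #include <stdio.h>
        #include <string.h>

        #define FRAGS 4      /* data fragments */
        #define FSIZE 8      /* bytes per fragment */

        int main(void) {
            /* four 8-byte "fragments" of a file */
            const char *data[FRAGS] = { "frag-one", "frag-two", "frag-3..", "frag-4.." };
            char parity[FSIZE] = {0};
            char rebuilt[FSIZE];

            /* parity fragment = XOR of all data fragments */
            for (int i = 0; i < FRAGS; i++)
                for (int j = 0; j < FSIZE; j++)
                    parity[j] ^= data[i][j];

            /* pretend fragment 2 was lost: XOR the parity with the survivors */
            memcpy(rebuilt, parity, FSIZE);
            for (int i = 0; i < FRAGS; i++)
                if (i != 2)
                    for (int j = 0; j < FSIZE; j++)
                        rebuilt[j] ^= data[i][j];

            printf("recovered: %.8s\n", rebuilt);  /* prints "frag-3.." */
            return 0;
        }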

    • Re:Scary... (Score:4, Interesting)

      by tonywong ( 96839 ) on Tuesday February 12, 2002 @03:13PM (#2995579) Homepage
      What about the computer doing things that you are philosophically opposed to? Like nuclear simulations (for China?), or genetic database searching for profiling individuals?

      It can be a lot more scary than you think.
    • Re:Scary... (Score:3, Informative)

      by ddstreet ( 49825 )
      Nope. Cause some l33t h4x0r will have own3d her already.


      Micro$oft Press Release #10520

      We are happy to announce the immediate availability of our new distributed computing service! For a low fee, you can harness the power of EVERY computer installed with Windoze XP in the world! Yes, that's right, all their base are belong to us, and you can buy CPU time on 'em!

      What's scary is that (except for renting out time) the above is TRUE. M$ does 0wn all Windoze XP systems. And people PAY them for it!!! Inconceivable!

      • And if the resources are used for "security upgrades" they can even do it based on the EULA.

        Suppose M$ sells CPU time to companies (or uses it to run nuclear simulations to take over the world), legitimizing it by saying they're protecting the financial security of Bill Gates. Well, is that a security upgrade or what?
  • by SamBeckett ( 96685 ) on Tuesday February 12, 2002 @01:26PM (#2994905)
    cat > test.c
    #include <unistd.h>

    int main(void) {
        while (1)
            fork();   /* classic fork bomb: spawn children until the box chokes */
        return 0;
    }
  • i don't know.. (Score:4, Insightful)

    by lowtekneq ( 469145 ) <lowtekneq@hotmailTWAIN.com minus author> on Tuesday February 12, 2002 @01:26PM (#2994906) Homepage
    I'm not so sure how I feel about something I own being used for something I don't. I use SETI, but I downloaded it myself and agree with its purpose. But who's to say what my computer will be used for, who's to say what files will fill up my HD, etc. Luckily we still have a choice of the OS we want to run.
    • Re:i don't know.. (Score:3, Interesting)

      by spoonyfork ( 23307 )
      I'm not so sure how I feel about something I own being used for something I don't.

      What if the computer you bought for US$2000 was largely subsidized by the coalition of entities that wanted to use your CPU and mass storage when you weren't using them, so that it only cost you like US$1000 or even US$500? Would you participate then? Even if you wouldn't, could you see how someone else might?

    • Already happening. (Score:3, Informative)

      by Tenebrious1 ( 530949 )
      A couple years ago, a friend sent me a link to a distributed computing (DC) website for cancer research (IIRC). When I looked at the fine print, the DC company was a for-profit service. The cancer research, non-profit, couldn't afford and did not have the technology to run its own DC setup, so signed on with the DC service. The fine print said that 1/5th of the work packets would be for the cancer research, while 4/5ths would be for "paying" customers, who subsidized the other 1/5th share. It did not say who the paying customers were.

      After thinking about it, I decided against it. I had no idea who was paying for the other 4 work packets: big tobacco, Iraqi agents doing bio-weapons research, Chinese nuclear weapons development. If they had said right out who it was for, I might still have signed up, but I really didn't like the way I had to poke through the fine print to figure this out.

  • In Scientific American, the writer gives the example of Mary's computer being utilized by a biotech company while it's idle. Another example is a movie that is stored on several hundred people's computers. Why should I let my computer be utilized for someone else's for-profit work or entertainment when they can do it for themselves?

    It's another thing when a person volunteers to participate (I run SETI@home), but this proposal sounds like a standard forced upon the consumer.
    • by jxqvg ( 472961 )
      You would want your computer doing other people's work in this article's description because you would be paid for it.

      These guys seem to envision this happening through some sort of micropayment system, though, which is still an overall iffy proposition considering the current cost of performing a transaction.

      There are several other significant issues with using presumably anonymous internet connected machines, and their use of the term "microkernel" only clues you in that it's a NotSoBrandNew concept, but it's a fun read to get PHBs and Venture Capitalists interested.

  • by Bandito ( 134369 ) on Tuesday February 12, 2002 @01:27PM (#2994916)
    This is all great, but let's face it. People don't leave their computers on all of the time. In fact, here in California, they run ads on television telling you to turn _off_ your computer when you're "out of the room."

    Liquid cooling for PCs is still out of reach for many, so noise is a factor. And I can only assume that this work will require your computer to be awake, so power management goes out the window.

    Even if these were overcome, there's still the obstacle of just getting people to go along with this. It doesn't sound to me like these "pennies trickling into a virtual bank account" are going to pay for that broadband connection or the increased electricity bill.

    Like most other things, it sounds great on paper...
    • Not entirely true. You don't have to have things like your monitor and speakers on, and I believe that they take up much more energy than a dinky processor running at 1.5 V. And you should be able to save power by turning off things like the video card, sound card, a significant portion of the RAM, all but one hard disk, and the CD-ROM, lowering the power consumption of the processor, and slowing it down so your fan can turn off. The core of a computer (CPU, RAM, and HD) doesn't take up much power. Otherwise how could you have things like those 10 GB MP3 players that run for hours and hours on batteries? It is the human interface part of a computer that takes up all the juice, and you can turn that off if there is no human to interface with :D
      • Any modern processor (e.g. a P4 or Athlon) uses an amazing amount of power. Athlons can put out over 50 watts of heat at full utilization. I'd say if you were using your computer at 100% CPU, with the HD on, then you are using at least 100 watts of power. About half of the power a computer needs does come from the monitor, so if you shut that off, you will save plenty.

        Those 10 GB MP3 players are designed to be 'on' as little as possible. Their HDs only spin when accessing data, and they also spin slower than PC hard drives. The CPU in those things is just powerful enough to decode the MP3 and process user input. On the average PC, it only takes 1-3% CPU time to decode an MP3 these days, and probably half of that is because the GUI is pretty. (Does anyone remember WinPlay, the first MP3 player for Windows? It could decode MP3s quickly even on 486es. It couldn't seek through an MP3 back in '96, though.)

  • High latency? (Score:4, Interesting)

    by Telastyn ( 206146 ) on Tuesday February 12, 2002 @01:28PM (#2994927)
    The only thing I could imagine these things being used for is very high storage, very heavily parallelized problems. Factoring, travelling salesman (otherwise known as airport scheduling), SETI@home and such.

    The OS will never be fully "functional" as OSes are considered today, because people will lie and cheat and steal. IMO (read: opinion removed from ass) the only practical use of this would be the equivalent of a kernel patch that could take a slice of disk, a slice of memory usage, and a slice of bandwidth, and then run SETI@home, or whatever code it was instructed to run from the "master" (a rough sketch of such a slice is at the end of this comment).

    If it was not run on public machines I could imagine something akin to Beowulf from the ground up: an OS designed for premeditated clustering. That's not Internet-sized, though...
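
    A rough sketch of what such a slice could look like using stock Unix knobs rather than a kernel patch: fork a worker, drop its priority, and cap its CPU time, memory, and output file size before running the donated job. The ./donated_job path is just a placeholder, and bandwidth is the one resource with no rlimit here, which is roughly the part that would actually need kernel (or traffic-shaping) support.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/resource.h>
    #include <sys/wait.h>

    static void cap(int resource, rlim_t limit) {
        struct rlimit rl;
        rl.rlim_cur = limit;
        rl.rlim_max = limit;
        if (setrlimit(resource, &rl) != 0)
            perror("setrlimit");
    }

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {                          /* worker process */
            nice(19);                            /* lowest scheduling priority */
            cap(RLIMIT_CPU,   60 * 60);          /* at most one hour of CPU */
            cap(RLIMIT_AS,    256UL << 20);      /* 256 MB of address space */
            cap(RLIMIT_FSIZE, 512UL << 20);      /* 512 MB per output file */
            execl("./donated_job", "donated_job", (char *)NULL);
            perror("execl");                     /* only reached on failure */
            _exit(1);
        }
        waitpid(pid, NULL, 0);                   /* parent just waits for the worker */
        return 0;
    }
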
    • Part of what makes this kind of research interesting is learning how to parallelize operations that we think of as serial with current technology.

      The article mentions streaming a movie, which we typically think of as a server-to-client operation. However, companies like KonTiki [kontiki.com] are already using techniques (their buzzword is Adaptive Rate Multiserving, wah!) involving peer-to-peer parallel operation to solve these kinds of problems.

      As far as people lying, cheating, and stealing, you may as well suggest that checks and credit cards will never be "functional."

  • by 2Flower ( 216318 ) on Tuesday February 12, 2002 @01:29PM (#2994930) Homepage

    Five years ago, I'd have said no way, this is unfeasible, people would not contribute their storage space and CPU cycles to someone else.

    But now, with server-obfuscated peer-to-peer systems like AudioGalaxy, it could be possible. Imagine selling people on the idea of a 'universal public hard drive', where all you do is search for a file, then copy it over locally without actually knowing where/who it came from. I doubt there'd be any objections, given how convenient and 'anonymous' it would be. Sacrificing a share of your own hard drive space for caching files you might not be interested in would be a small price to pay for that. That's one resource down; do the same thing for CPU cycles (provided we have a killer-app reason for people to need more cycles, given the high-speed processors of today) and other computing resources, and the rest will fall into place.

    I doubt it'll go as far as this proposal, at least not for a LONG time, but the unthinkable is already becoming the thinkable in some areas.

    • This is exactly what freenet is all about.

      Freenet is a P2P system whereby you join the collective and, as you use the network, download parts of files. As you request documents, your peers do searches for you and download the files for you as well. This way, as more and more people request a file, it travels closer to those people.
      So if you put something into Freenet, it will be there until everyone who has a copy dies.

    • FreeNet does everything you're talking about. It seems that the only thing that is keeping FreeNet from really being usable is a good key/searching mechanism. No way to really crawl the thing is there?

      • It seems that the only thing that is keeping FreeNet from really being usable is a good key/searching mechanism. No way to really crawl the thing is there?

        If somebody develops a way to publish web pages within Freenet, using URLs that link to other Freenet pages, you'll eventually see Google spider Freenet.

  • by Sanity ( 1431 ) on Tuesday February 12, 2002 @01:29PM (#2994931) Homepage Journal
    In the 1999 paper [freenetproject.org] "A Distributed Decentralized Information Storage and Retrieval System" which formed the basis for the Freenet [freenetproject.org] project, the following future direction is suggested:
    Generalisation of Adaptive Network for data processing
    A longer term and more ambitious goal would be to determine whether a distributed decentralised data processing system could be constructed using the information distribution Adaptive Network [Freenet] as a starting point. Such a development would allow the creation of a complete distributed decentralised computer

    Guess there is nothing new under the sun.

  • by b.foster ( 543648 ) on Tuesday February 12, 2002 @01:30PM (#2994942)
    Let me preface this by saying that work related to SETI@home, the Human Genome Project, and politically motivated cypher cracking is a Good Thing(tm) and should be preserved.

    However, the proposed ISOS is big, powerful, and likely to be sought after by the most powerful corporations and institutions on the planet. How much lobbying would a large drug company need to do to get more than its share of distributed processing power? How much money would the U.S. Government need to give to them to use the system for cracking "terrorist" messages from the "evil ones" like Kevin Mitnick and Bernie G? How much money would the Government need to give to them to use the system for spying on individual users? Remember, this is the same government who pays Hollywood to put anti-drug themes in their sit-coms, so what would they not be willing to try?

    The end result of this, then, is that ordinary computer users will be forced to subsidize (through the use of CPU cycles, electricity, wear and tear on hardware, and memory use) the efforts of large companies and governments who are working against their best interests. So, tell me again... what would we gain from this?

    Bill

  • by Blue23 ( 197186 ) on Tuesday February 12, 2002 @01:31PM (#2994943) Homepage
    The article mentions:

    "As her PC works, pennies trickle into her virtual bank account."

    However, it doesn't mention the other side: as her files are backed up elsewhere, pennies trickle out. In addition, assuming an equal amount of "work", the outflow needs to be greater than the inflow. Take, for example, the pay-per-view movie. It has a set cost to purchase. Everyone storing the movie gets a bite. But a single copy of it won't work - a single system off (or back under control of the user) means that part of the real-time delivery of the movie is delayed. So the movie has to be stored in such a way that dozens of systems can be inaccessible and it still plays in real time. As such, you need to have a large number of copies.

    Now think about this for data backup. If Mary gets paid "X" to hold some data, she can't be the sole recipient of it. Say she's one of 3 people with a copy of it (a rather low number). So the total cost is 3X. Now, she's also going to have her own data backed up, which is the same size. She's paying out 3X to back up the same amount of storage she's only getting paid X to provide - it's much more economical to back it up herself, say a copy on her laptop and her home computer, or at work and at home so they never share geographical space.

    Same goes for processing power - you can't assume that a node will finish the task given to it, so you need to run a job multiple times if it is time-sensitive, leading to the same inflation of what you pay out over what you are paid for your unused resources.

    =Blue(23)
    • But those same files are already copied many, many more times than are necessary. Say (to exaggerate horribly) that 1000 copies of a movie file are needed to guarantee that you can play it whenever you want. But how many 100s of thousands of people have copies of the file now?

      And, as regards CPU time, it doesn't matter if it takes twice as much CPU time to get anything done, if you've made 50x as much CPU time available since all of your idle cycles become useful cycles.

      This, or a system like this, could lead to you never having to buy disk space again. You just put files 'on your system', and periodically you may need to pay another $5 to your disk-farm provider (probably part of your ISP) since you've gone over your previously allotted space. And you end up with backups & redundancy.

      Assuming we can overcome some basic hurdles like overzealous copyright law, and get ubiquitous broadband and automatic encryption of your files, I don't see how disk-space sharing can fail to become the direction of the future.
  • by Frank Sullivan ( 2391 ) on Tuesday February 12, 2002 @01:32PM (#2994952) Homepage
    Massively distributed operating systems have been around for years... check out Tanenbaum's work on Amoeba. Does anyone use Amoeba? No.

    This is two days in a row now that Slashdot has posted articles on the great new idea of distributed operating systems that CS theorists solved and have largely ignored for the last ten years. Besides Amoeba, there was the Connection Machine, VMS clusters, and others.

    The fact is, massive distribution is of VERY limited use, and doesn't require OS-level hooks - Napster and distributed.net are both prime examples of useful massive distribution without involving the OS at all.
    • by Salamander ( 33735 ) <jeff AT pl DOT atyp DOT us> on Tuesday February 12, 2002 @01:51PM (#2995075) Homepage Journal
      This is two days in a row now that Slashdot has posted articles on the great new idea of distributed operating systems that CS theorists solved and have largely ignored for the last ten years. Besides Amoeba, there was the Connection Machine, VMS clusters, and others.

      ...none of which were designed to tolerate the high latencies and frequent failures that a truly Internet-scale OS would face. Legion [virginia.edu] and similar projects are much nearer the mark, but this is still nowhere near being the sort of "solved problem" you claim it is.

      • Frankly, "high latencies and frequent failures" are why such an idea is impractical, regardless of whether or not the theoretical problems can be solved (and i argue that they already have been solved).

        Massive distribution should not and will not be done just because it's techno-cool... it has to produce real value. What sort of real value can it produce? That depends on what sort of problems it can solve.

        First, let's look at constraints. The three obvious ones are CPU power, disk space, and network bandwidth. All three of these have been growing relatively in proportion to Moore's Law for the last couple of decades. Their relative proportions have not shifted much... the CPU is by far the fastest, followed by local disk, and then network bandwidth.

        Now, let's look at the problems we want to solve. How about data storage ("Jane's computer has an encrypted fragment of someone else's movie")? Local disk space is far, far cheaper and more robust than network storage! Bandwidth is the most expensive part of the equation. I can buy another few dozen gigs of disk space for $100. How long will it take to transmit a few dozen gigs via DSL? Sure, network speed will scale up, but so will disk space. Unless something changes, the balance of the equation remains the same... local storage is cheaper than network storage, as well as more reliable.

        Of course, not all files you want will be on your computer, hence peer-to-peer file sharing, which is what Microsoft is trying to solve. But in this case, local disk storage is far slower than CPU, and far faster than network... in other words, there is no reason to not use a user-level process to manage the data exchange. No OS support is necessary beyond TCP/IP and disk I/O, right? This problem has already been solved in numerous real-world ways.

        Now let's look at CPU-bound problems. There are computations we may want to make that can't be done in a fraction of a second locally. These are generally math problems, sometimes with large datasets. Some of these problems can be parallelized, and some cannot. Of those that can be parallelized, some have coarse granularity, and some have fine granularity. Coarse problems, like keyspace searches for brute-force encryption cracking or SETI pattern searches, don't need OS-level support - data is most efficiently shared at the process level, which is what distributed.net and SETI do already. Others optimize at finer granularity. In those cases, data sharing and communication requirements between threads are so intense that using a slow, unreliable network is impractical! That's what big parallel supercomputers are for. So there's no need for OS-level support for parallelized number crunching that is practical in the current CPU/bandwidth ratio.

        So what problem are we trying to solve that is distributed (or distributable) efficiently across multiple computers, and requires OS-level support for optimum efficiency? I don't see it.

        Now, I should revise my previous statement that no one uses OS-level distributed computing. Fault-tolerant databases, clusters, and massively parallel supercomputers all use it - at the local level. And even those are butting up against the network bandwidth problem. If it can't be done with gigabit connections on the backplane, how will it be done over a modem?
        • Frankly, "high latencies and frequent failures" are why such an idea is impractical, regardless of whether or not the theoretical problems can be solved (and i argue that they already have been solved).

          Hm. So we have a set of "theoretical" problems, for which it's doubtful that solutions exist. Except that you say they've already been solved...and apparently they're not just theoretical either. Truly, you have a dizzying intellect.

          Local disk space is far, far cheaper and more robust than network storage!

          Cheaper, yes. More robust? For what value of "robust"? Are we talking about data that only exists in one place, or in multiple places? Which one's more resistant to the type of failure that takes out a whole site? Please provide a definition by which something that exists only on your machine (whose mere existence is only known locally) is more robust than something that exists in multiple places.

          How long will it take to transmit a few dozen gig via DSL?

          Irrelevant. In any but the most stupidly designed distributed data stores, most data would be served out of a local cache under most conditions. In many, the next step would be to serve it out of another geographically-local machine over a fast LAN connection. Just because you personally can't think of a distributed-storage architecture any better than traversing the globe for every datum doesn't mean that better architectures don't exist.

          there is no reason to not use a user-level process to manage the data exchange

          Really? Ever try to do mmap-style I/O over Napster? How about plain old open/read/write over Gnutella? Byte-range locking within a Freenet file? Hmmm. If you want to talk about solved problems, how about ideas like VFS layers and network-protocol abstractions? To provide generalized, transparent access to data, on a par semantically with the sort of access that you get with a local filesystem, your "user-level process" isn't going to cut it. Not by a long shot. That's like going back to the days when every application needed its own library just to get keyboard input or draw stuff on the screen. This kind of thing belongs, at least partially, inside the operating system so that all applications can use all equivalent protocols without special linkage; see my file-sharing manifesto [platypus.ro] for a fuller explanation.

    • Some people use it. For instance I use MOSIX, which transparently migrates Linux processes around.
      I've also spent a truly inordinate amount of time thinking about installing Amoeba, Plan 9 and others. The reason I haven't is that MOSIX does a lot of what I want in a cluster, and I don't have to limit my set of apps to those that come with, or that I can manage to compile on, one of those odd OSes.
      But with the OSkit [sourceforge.net] and the growing prevalence of platform-independent languages (Java, Python) I can see a time not too distant when the fireball amoeba distro [sourceforge.net] and the linux single system image [sourceforge.net] projects are competing for the average user.
      Or maybe we'll get lucky and a project to put together the best features of Plan 9, QNX, EROS [eros-os.org] and Amoeba will take off with a leader like Linus.
  • Data security? (Score:4, Insightful)

    by gillbates ( 106458 ) on Tuesday February 12, 2002 @01:34PM (#2994963) Homepage Journal
    Given the fact that most companies don't want the possibility of anyone outside the company viewing their information, I don't think this will take off. I don't think that many businesses will be able to offload their processing, if only from a purely legal standpoint. What happens if Jim's payroll data is accidentally disclosed to Mary by a core dump? The legal implications of this alone would keep most businesses from using it. Consider also the following things:
    • Yes, it could render the special effects for the next LOTR movie in record time, but the MPAA would never endorse this, for fear of 'piracy concerns'
    • Biotech could make revolutionary advances, except that they run the risk of divulging a proprietary secret gene before it can be patented. A distributed network like this is practically begging for industrial espionage.
    • It's not likely that banks will use it, as an accidental disclosure, or worse, alteration of the data could result in the corruption of account information and costly litigation.
    Yes, scientists could very well use a general-purpose, distributed network. But with all the concern about privacy and IP rights, I doubt that any largely profitable business would be able to utilize such a system.
    • Besides cryptography (or do you expect files to be exchanged as plaintext?), no computer will have more than a tiny portion of any given dataset. Even a large farm of eavesdropping servers would represent no more than a small drop in this processing and storage ocean.

      Very Large Governments, of course, would probably have the power to successfully mine information, but even they would be given a good run for their money. And then again, Very Large Governments already have access to almost anything they care to want.
  • by LinuxParanoid ( 64467 ) on Tuesday February 12, 2002 @01:35PM (#2994973) Homepage Journal
    For technical computing jobs, this makes great sense.

    For commercial computing jobs, as a business with economic incentives for participation, a distributed operating system unfortunately makes little or no sense due to the types of applications that are currently server-limited.

    Commercial computing jobs which need "big servers" are typically very database-dependent. You can't distribute the application very well unless you can distribute the database. (And hopefully you aren't crunching terabyte data warehouses, right? That takes a while to send down the pipes...) Besides the inherent difficulty of distributing your database across many nodes, you have the typical basket of problems the ISOS must overcome with a very high degree of assurance: security of your highly proprietary information, reliability, backup, etc.

    Most of the P2P plays a year or two ago discovered this the hard way. The most promising sales approaches ended up being things like distributed caching for search engine companies, which is a niche, not a mainstream business.

    --LP
  • Obviously, distributed resource aggregation isn't a new concept and has been discussed many times before. There have been a couple of attempts at a generalized resource-aggregation system, but they all seem to have two major problems: no one wants to donate their resources to commercial entities without getting something back in return, and the number of problems that can be distributed over high-latency, low-speed connections is limited.

    SETI@home works well because the problem space can be split up and the amount of time it takes for a client to process a unit far exceeds the time it takes to transfer the data. There are also a good number of users out there who just like the idea of searching for ET.

    Distributed.net works well for the same reasons as SETI@home, but instead of users wanting to look for ET... users adopted it originally for a chance at cash and later for the ego boost.

    If you build a generalized infrastructure to handle arbitrary requests for resources, the end users lose touch with what they are working on, eliminating any type of ego boost. Plus, I can't imagine many people are going to want to donate their spare cycles to a pharmaceutical company that will then go and patent a drug developed from information you give them, and sell it at highly inflated prices in the name of R&D costs, while you get nothing in return except a higher power bill and constant noise coming from your computer.

    That's not to say there aren't still good causes out there that people would be willing to donate resources to, but these causes are attractive because they give the users a direct connection to them.

    Of course, that's just my opinion, I could be wrong.
  • It's been done. See MULTICS.
  • hmmm (Score:4, Interesting)

    by ekephart ( 256467 ) on Tuesday February 12, 2002 @01:41PM (#2995003) Homepage
    Don't get me wrong, the marvels of distributed computing are endless, but why don't we make ourselves more efficient on a smaller scale first? Besides, there are some questions to work out.

    "Consider Mary's movie, being uploaded in fragments from perhaps 200 hosts. Each host may be a PC connected to the Internet by an antiquated 56k modem--far too slow to show a high-quality video--but combined they could deliver 10 megabits a second, better than a cable modem."

    OK, that's nice, but how do they propose Mary receive 10 Mbps? Get 12 DSL lines? What about the people on dial-up? While people gain access to the Internet around the world, those of us with the uber-connections will just leech on them? Now, they talk about the "digital divide", but that is just plain vicious. I'd rather be sticking it to The Man than to Uncle Sven in Stockholm. So then what, everyone gets a fast connection -> backbone upgrade -> AT&T, MCI, Earthlink, Sprint, etc. spend the money that Amgen would save.

    Also: how would individuals choose who can use their computers' resources, given their ethical or moral convictions? While I would surely donate my CPU and disks to cancer research or finding larger prime numbers, I don't want the DoD using them to think up new ways to kill people.
  • ...for my processor time. It's one thing to be able to do SETI@HOME. But if some biotech company wants to use my PC remotely for DNA analysis, it had better pay me well for my generosity.

    Damn I'm antisocial.

    nahtanoj

  • I could really see technically minded people eating this stuff up, but the real problem lies with non-techies. Yes, the SETI@home screensaver for Windows looks cool, so non-techies seem to have no problem installing that, but will Mary really be willing to have a distributed backup system on her computer? What about gamers, who need every available bit of bandwidth? These technologies are really promising, but they need widespread adoption to become a success. That's what made Napster so successful: it wasn't bleeding-edge technology, but it had widespread acceptance.
  • Idle CPU time, especially if the computer is just left on at night without any other use, is not free. The computer requires electricity, and in some cases Internet connection bills. Any system needs to be able to pay owners at least enough to make up for these costs, or they will be losing money. In addition, if users can make money leaving their computers on 24/7, how will it affect our nation's power systems? Many grids are already pushed to the limit, and if every person with a computer began to have it on all the time, it might push things over the edge in some places. That might create obstacles to the system's deployment.
  • Half a picture (Score:4, Informative)

    by Salamander ( 33735 ) <jeff AT pl DOT atyp DOT us> on Tuesday February 12, 2002 @01:46PM (#2995038) Homepage Journal

    As happens too often, this proposal concentrates entirely too much on distributed computation, and pretty much ignores the problem of distributed storage. They're quite different problems, each requiring its own solution, even though it's intuitively obvious that any true "Internet Scale Operating System" would have to deal with both.

    If you're interested in this "other half of the problem" here are some links:

    • Farsite [microsoft.com] (Microsoft; focus on many nodes, not long distances, but still relevant)
    • OceanStore [berkeley.edu] (UC Berkeley)
    • CFS [mit.edu] (MIT)
    • Publius [nyu.edu] (ATT/NYU)
    • Intermezzo [inter-mezzo.org]

    There are many more. The bibliographies for the above will mention many earlier systems, while a quick Google search for these project names will show more recent ones.

  • Hmm, we can harness the unrealized potential of millions of desktop PCs. Ummm, why would we - the users and owners of the computers - want to do that?

    How does it benefit me as a user, aside from #1 increasing my energy bill by encouraging me to leave my PC on, #2 increasing wear and tear on my PC as my hard drive is accessed repeatedly, and #3 increasing my vulnerability to hackers? Oh, and #4 - sucking up the bandwidth of my ISP because of all of these always-on computers, thus trashing any hope of decent pings for my first-person shooters.

    Gee, where do I sign up?
  • ... which aired January 1984...

    .
  • Just wait.... (Score:3, Interesting)

    by st0rmshad0w ( 412661 ) on Tuesday February 12, 2002 @01:49PM (#2995056)
    Until your system and damn near everyone else's is seized for evidence in some computer crime or some move in the war on terrorism.
  • ILOVEYOU (Score:3, Funny)

    by sporty ( 27564 ) on Tuesday February 12, 2002 @01:49PM (#2995060) Homepage
    Doesn't the "I Love You"/SirCam/Nimbda virus already do this? :)
  • This won't work for the same reason that communism doesn't work. There are too many people who are greedy, manipulative jerks, and more often than not they will take advantage of the rest of us.

    Perhaps if you set up your computer service like a secret society this would work. Then you'd have to know all the users, and would be able to track everything. It would be like the Masons, only with computers.
  • by emin ( 149044 ) on Tuesday February 12, 2002 @01:54PM (#2995092) Homepage
    The article mentions distributed backup as a possible application, but in my mind distributed backup is the killer application.

    Consider a distributed backup program which works roughly as follows.

    • You install the program and give it a certain amount of space on your hard drive.
    • You tell the agent which files or directories you want backed up (e.g. /home).
    • The distributed backup program periodically contacts other computers and swaps encrypted versions of your data for their data.
    • If your machine crashes or you lose data or your city gets nuked, you can easily recover your data from the computers you shared with.

    This type of application would provide at least 3 important benefits for backup. First, it's relatively cheap. If you want to back up more data, just buy more local disk space and trade files with more computers. This seems much easier (at least for a home user) than setting up a tape backup system, making sure the tapes get replaced, making sure the tapes get put someplace safe, etc. Second, it's much safer than pretty much any backup system you could buy commercially today, since your data is literally spread all over the world. Finally, the backup system isn't controlled by any large corporation.

    Obviously there are still some details left to be worked out, such as how to let computers that want to trade files find each other (both centralized and distributed options exist, analogous to Napster and Gnutella), how to prevent cheating (having your computer periodically ask its partners for hashes of the data they are backing up should work; a sketch of that spot-check is at the end of this comment), and how to control redundancy most efficiently (error-correcting codes like Reed-Solomon or Tornado codes would probably be smarter than just repeating data).

    If you're looking for a great distributed open source project that will make the world a better place, I encourage you to develop prototypes for distributed backup. I plan to develop my own prototype one day, but currently I'm pretty busy with graduate school.

    -Emin
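
    A minimal sketch of that hash spot-check, assuming both sides run the same routine. FNV-1a stands in for a real cryptographic hash (e.g. SHA-256) only to keep the example dependency-free; the fresh random nonce is what stops a cheating partner from discarding the data and replaying a precomputed answer.

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <time.h>

    /* FNV-1a, chained so the nonce and the data are hashed together */
    static uint64_t fnv1a(const void *buf, size_t len, uint64_t h) {
        const unsigned char *p = buf;
        for (size_t i = 0; i < len; i++) {
            h ^= p[i];
            h *= 1099511628211ULL;
        }
        return h;
    }

    /* response to a challenge = hash over (nonce || stored data) */
    static uint64_t respond(uint64_t nonce, const void *data, size_t len) {
        uint64_t h = fnv1a(&nonce, sizeof nonce, 14695981039346656037ULL);
        return fnv1a(data, len, h);
    }

    int main(void) {
        const char owner_copy[]   = "encrypted backup fragment";
        const char partner_copy[] = "encrypted backup fragment";   /* honest partner */

        srand((unsigned)time(NULL));
        uint64_t nonce = ((uint64_t)rand() << 32) ^ (uint64_t)rand();

        uint64_t expected = respond(nonce, owner_copy,   sizeof owner_copy);
        uint64_t answer   = respond(nonce, partner_copy, sizeof partner_copy);

        printf(answer == expected ? "partner still has the data\n"
                                  : "partner is cheating or lost the data\n");
        return 0;
    }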

    • by Jim McCoy ( 3961 )
      The article mentions distributed backup as a possible application, but in my mind distributed backup is the killer application.

      While this is not directly mentioned by David Anderson in his article I know for a fact that this is something that United Devices is interested in because late last year Mojo Nation was in discussion with UD to provide just this sort of service to its users.

      This sort of distributed backup is what the current private branch of the Mojo Nation codebase does, with a little taskbar app that sits in the background and distributes backed-up files to peers within the enterprise. One major benefit that your post missed is that the majority of the data stored on hard drives within an enterprise is redundant (e.g. multiple copies of MS Word, etc.), and with a distributed backup system you only need to keep a few copies of such files around for restores. You can back up 99% of your data while only needing 10-15% of the available space on individual PCs.

      In what is turning out to be one of life's interesting ironies, the company that was most interested in this UD/MojoNation pairing was Enron's bandwidth trading group (mostly for storing medical imaging data and distributed corporate backups). When Skilling left Enron just before the whole accounting scandal started to blow up, the Enron guys became "unavailable", so things never moved forward, but you can be certain that this sort of distributed data storage and backup system will appear again.

      Jim
  • *sniff* *sniff* what's that I smell? A bigger security threat than Windows? It can't be!
  • by cavemanf16 ( 303184 ) on Tuesday February 12, 2002 @01:58PM (#2995116) Homepage Journal
    Has everyone forgotten what runs the good ol' USA? Money.

    The utopian future that dreamers always look forward to will never happen. It hasn't happened before, and it won't happen in the future. However, this type of desktop computer that shares its 'computing' power with the entire network makes LOTS of sense for businesses. I go to lunch, take a break, and then go home for the day. All the while, my computer could be donating its computing power to handling webserver requests, processing internal jobs for the mainframe, or even helping run massive load and regression tests on the system to anticipate 'kinks' in the armor of the system from a scalability standpoint.

    Sure, it would just be "so neato!" if every computer could be kept cheap for the home user by everyone sharing files, processing power, even memory; but let's face it, communism didn't work because there wasn't enough incentive for the worker bees to strive for better. There's always a fine balance between greed and sharing. Giving such a 'distributed computer network sharing' system to businesses would be a great start, but don't expect a 'home user' acceptance of such a system anytime soon. I want my full computing power for my new computer game that I bought with my own money, and I'm sure many other users aren't willing to give up their hard-earned money for everyone else to piggyback off their 3l337 system anytime soon.

  • Copyright. Kills innovation dead.©
  • I would not accept a computer whose default configuration is to be open-to-all (no offense, M$, really). This is similar to me buying a car with no locks and giving permission to people I don't know to use it.

    Anonymous driver says, "I'll just leave the gas money in the ash tray." Why should I believe him?

    Also, it is pretty easy to write

    while( true )
    {
    ...add a few bytes...send a few bytes...
    }

    What is to stop me from doing this on a thousand computers drawing from a false bank account (if I had the knowledge and were so inclined)?
  • Trusted data (Score:3, Interesting)

    by SirSlud ( 67381 ) on Tuesday February 12, 2002 @02:08PM (#2995174) Homepage
    What's to stop people from throwing noise out the back of their box upstream? I mean, in how many of these tasks do those organizing and aggregating the calc'd data implicitly trust the data that the nodes of their Internet OS are throwing back?

    The more stock and importance you put in something, the more likely people will use it as a means of abuse. I can envision a world where people who are against a particular scientific task (for whatever reason: ethical, on principle, or whatever) use this Internet OS and join particular distributed apps simply to throw noise into the upstream...
  • All your CPU are belong to us!

  • One of the nice things about SETI is that at the lowest scheduling priority (nice 19), it is never noticeable in terms of CPU utilization, but it is always using the full idle power of the CPU. Could we do that with disks?

    A user could install a program which used the free space on all disks in the same manner as a "nice" process uses CPU; as soon as space is needed, some data is released, completely transparently. A company or organization could store data on the distributed network; they would keep a "master" copy of the data available, in case a particular fragment happened to be erased on all of the nodes, or nodes were unavailable.

    The question I'm pondering is how to keep track of where data is stored, and how to route data from the nodes to the host where it will be read - in the article's example, the fragments of a movie sent to a particular client. How do we efficiently request fragments, in the correct order, without either overusing bandwidth with duplicated data or dropping fragments?
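
    One rough answer, sketched below with made-up peer bitmaps: the reader keeps a per-peer "have" map and assigns each still-missing fragment, in playback order, to exactly one peer that claims to hold it (picking the least-loaded one), falling back to the master copy when nobody has it. That avoids duplicated requests; a real system would add timeouts and reassignment for peers that fail to deliver.

    #include <stdio.h>

    #define FRAGMENTS 10
    #define PEERS     3

    int main(void) {
        /* have[p][f] == 1 means peer p claims to hold fragment f (invented data) */
        int have[PEERS][FRAGMENTS] = {
            {1,0,1,0,1,0,1,0,1,0},
            {0,1,0,1,0,1,0,1,0,1},
            {1,1,0,0,1,1,0,0,1,1},
        };
        int assigned[FRAGMENTS];     /* which peer we will ask, or -1 */
        int load[PEERS] = {0};       /* how many fragments each peer was given */

        for (int f = 0; f < FRAGMENTS; f++) {
            assigned[f] = -1;
            /* pick the least-loaded peer that has this fragment */
            for (int p = 0; p < PEERS; p++)
                if (have[p][f] && (assigned[f] < 0 || load[p] < load[assigned[f]]))
                    assigned[f] = p;
            if (assigned[f] >= 0)
                load[assigned[f]]++;
        }

        for (int f = 0; f < FRAGMENTS; f++)
            if (assigned[f] >= 0)
                printf("fragment %d <- peer %d\n", f, assigned[f]);
            else
                printf("fragment %d missing everywhere, fetch from master copy\n", f);
        return 0;
    }
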
  • Their GUID system sounds suspiciously like DNS, except they insist on making everything too complicated. Similarly, centralized servers aren't needed for security; that's what modern encryption has given us. They might be desired for performance until a good peer-to-peer system evolves, but not necessarily for reliability. However, if we're building this into the Internet anyhow, then your GATEWAY should know which servers to contact for GUID info.

    Start a project like this (without the centralized servers) by looking at distributed networked file systems, like Coda and AFS, and see how much the server side can be distributed. The same goes for authentication systems, like Kerberos. Obviously the security would come from encryption and redundancy, but this is a very complicated scenario when the servers are distributed.

    In fact, distributing even as much as has been outlined in the article onto the clients would be difficult, and would likely kill network throughput if not done very carefully. If distributed as suggested in the article, it would place a massive load on the Internet, by making thousands of requests for bits and pieces of files where there should be one request.

    However, with a centralized system, the problem is already solved, essentially. Any large-scale university (like MIT) has already developed the kinds of network file sharing and authentication technologies required herein. The distributed applications have already been written, and would merely contact these central servers for information instead of their own central servers. The economic framework is interesting, but already done, and the payment services exist as well.
  • A lot of concerns voiced in this discussion are dealt with adequately in the article.

    That being said, "Sign me up!". The security, privacy, availability issues are going to be solved. As in the article, you get to determine when, how, etc your computer is used, and you get to set the price.

    What this means in reality, though, is that there will be people who will set up farms of computers and underbid their processing power/storage space/bandwidth, and you will get very little, if any, money. Imagine a few cents a month, maybe.

    This system would be of great use to big business (who will really make savings) but will have little effect on the consumer except, perhaps, faster access to products and services sold by big business.

    The problem is that the only resource the average user may possibly use from such a system is backup. Your network connection isn't going to be fast enough to buy a cheap computer and buy processing power online for your game. MMORPGs, however, may take on a whole new meaning when they start being able to handle millions of simultaneously connected players, and a fully interactive virtual 3D world may come to fruition through such a distributed system.

    So, as many research products go, this will enable businesses to lower their costs and compete more effectively with each other, which, surprise, surprise, will (eventually) mean a cost reduction for our services and products.

    I'll start building my slow storage rack now. Shouldn't cost more than a few hundred for a terabyte of near-line and on-line data.

    -Adam
  • by Animats ( 122034 ) on Tuesday February 12, 2002 @02:26PM (#2995305) Homepage
    This sort of thing works for SETI@HOME and for cryptanalysis because there are very few, or no, hits. There's no need for much coordination between the processing nodes. In fact, you could do brute-force cryptanalysis with each node just trying random keys, and it would only be about 2x slower on average than systems where each client checks out some portion of the keyspace (the arithmetic is at the end of this comment). Only a few problems have that property.

    The article looks more like an excuse for implementing a micropayment system (Creates a direct connection between your wallet and our bank account!). Enthusiasm for micropayment systems seems to come from people who want to collect the payments, not from the people expected to pay them. It's very clear that what consumers want are flat-rate services; competitively, flat-rate wins over pay-per-use as soon as the prices get close.

    If you want vast amounts of CPU time and are willing to pay, you'd probably be better off cutting a deal for off-peak time on hosting server farms. You get a uniform environment, good interconnect bandwidth, and a single organization to deal with.
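
    The factor of two is easy to check, assuming a keyspace of N equally likely keys: a coordinated search finds the key after about N/2 trials on average, while independent random guessing with replacement is a geometric trial with success probability 1/N, so it needs about N trials on average.

    \[
    \mathbb{E}[\text{coordinated}] = \sum_{i=1}^{N} \frac{i}{N} = \frac{N+1}{2} \approx \frac{N}{2},
    \qquad
    \mathbb{E}[\text{random}] = \frac{1}{1/N} = N,
    \qquad
    \text{ratio} \approx 2 .
    \]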

  • by Seth Finkelstein ( 90154 ) on Tuesday February 12, 2002 @02:41PM (#2995401) Homepage Journal
    Date: Sun, 18 Nov 2001 22:30:56 -0800
    From: Greg Broiles
    Subject: Re: Pricing spare resources and options?

    At 01:44 PM 11/18/2001 -0500, dmolnar wrote:

    >The recent comments on Mojo Nation prompted me to look at their site
    >again. I don't see much guidance on how to set prices for network
    >services. There's a mention someplace that business customers will build
    >pricing schemes on top of Mojo Nation, but not much indication of what
    >these schemes might be.
    >
    >So what is the "right" way to price resources? (Preferably beyond the
    >obvious "supply and demand.")

    Unfortunately, one of the evolutionary steps in Mojo Nation's development has been their abandonment, for the most part, of user-visible and user-configurable economics; they deliberately made it difficult to see how many Mojo are held by the local broker, and relatively unlikely that a broker will be able to earn significant Mojo by careful pricing - recent clients are configured such that the economic brakes on resource usage are sharply curtailed or removed entirely.

    It's my impression that, given the changes in the venture capital and software markets, they've refocused their efforts away from P2P filesharing and towards speedy realtime content delivery, whereby people with limited net connections can maximize their incoming bandwidth by pulling (or getting pushes) from multiple other parties simultaneously, somewhat similar to what Morpheus/Kazaa are doing, or what Bram Cohen (a Mojo Nation alumnus) is doing with BitTorrent.

    The economics seemed to attract people who wanted to experiment with pricing, etc., but that wasn't necessarily a market or constituency which is interesting to investors or businesspeople.

    >A related question - I ran into a friend of mine who had just finished an
    >internship in options trading. He suggested it might be worth looking at
    >options on spare disk space or other resources, as a means of figuring out
    >how to make Mojo-type systems eventually profitable in the real world. Now
    >I have a copy of Natenberg's _Option Volatility and Pricing_ to look at...

    It seems like there ought to be an interesting market here, but I know and worked with several people (with good financial backgrounds) who flogged this for awhile and never got anywhere. I guess a big part of the problem is that there's such a big difference in the perceived value of a megabyte/month of online storage .. if you're on the provider side, you think that's pretty expensive, as you've got the investment & etc required in building a data center, providing bandwidth to reach customers, paying staff, etc - but if you're on the customer side, you look at an 80 Gb drive at Fry's in the Sunday newspaper for $160 and think about a $500 1.5mb/s frame relay connection, and wonder why the service guys want $3 per Mb/month ..

    and then the Mojo guys come along and make it sound like the people with the cheap frame relay connections and commodity PC hardware ought to be able to set up data centers in their back bedrooms or on their old laptops, but so far all of the business models proposed involve paying those guys up front for an indefinite period of storage, so there's no strong incentive to actually store the data for long, especially not if you can resell that same disk space 3 or 4 or 50 times.

    Seems like the guys who really have hard data about options for bandwidth and disk usage are the disaster recovery guys. And that market hasn't been so great lately either; Comdisco declared bankruptcy and their disaster recovery unit is getting swallowed up by SunGard, I think.

    Anyway, yeah, the Enron guys thought there was something interesting to be done in bandwidth futures, too, but I don't know if they ever really got anything done before their demise beyond some demonstration projects.

    --
    Greg Broiles -- gbroiles@parrhesia.com -- PGP 0x26E4488c or 0x94245961
    5000 dead in NYC? National tragedy.
    1000 detained incommunicado without trial, expanded surveillance? National disgrace.

    • Unfortunately, one of the evolutionary steps in Mojo Nation's development has been their abandonment, for the most part, of user-visible and user-configurable economics; they deliberately made it difficult to see how many Mojo are held by the local broker, and relatively unlikely that a broker will be able to earn significant Mojo by careful pricing - recent clients are configured such that the economic brakes on resource usage are sharply curtailed or removed entirely.


      This is because strict pricing really does not work. I could point you to some good work by Andrew Odlyzko regarding incremental pricing for computational resources, but the best paper to find that outlines the hard part is "Price-War Dynamics in a Free-Market Economy of Software Agents" by Kephart et al. Computational resources are like electricity: they can't really be stored for future resale, so it is relatively easy for suppliers to play games with the market by withholding resources during periods of peak demand. The resources are very time-dependent and they are effectively a zero-cost good, so there is a race to the bottom in pricing. Additionally, these resources are difficult for users to price -- users expect a constant price for resources contributed, and most users have both an inflated expectation of what their resources are worth and little understanding of things like options pricing (e.g. to them Black-Scholes is a vacation destination).

      For Mojo Nation we opted to move to a pricing model closer to Odlyzko's "Paris Metro Pricing", in which resources donated to the system were exchanged for a sort of network karma. If you donated resources during periods of peak demand you could redeem them for enhanced quality of service at a later point (a toy sketch of that accounting follows at the end of this comment). Not as fancy as the "disk space for dollars" model that the cypherpunk dreamers seem to want, but a scheme a little more grounded in reality.

      Jim
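
      A toy sketch of that kind of karma accounting (the demand multiplier and prices below are invented for illustration; this is not Mojo Nation's actual scheme): credit for a donated unit is weighted by how scarce the resource is at donation time, and redeeming priority service costs more when demand is high.

      #include <stdio.h>

      /* demand multiplier: how much one donated unit is worth right now */
      static double demand_weight(double utilization) {   /* 0.0 .. 1.0 */
          return 1.0 + 3.0 * utilization;                 /* 1x idle .. 4x at peak */
      }

      int main(void) {
          double karma = 0.0;

          /* donate 10 units while the network is 90% utilized (peak) */
          karma += 10 * demand_weight(0.90);

          /* donate 10 units while it is 10% utilized (off-peak) */
          karma += 10 * demand_weight(0.10);

          printf("karma earned: %.1f\n", karma);          /* 37.0 + 13.0 = 50.0 */

          /* later, redeem priority service during another peak period */
          double price = 5 * demand_weight(0.95);         /* 5 units at peak rates */
          if (karma >= price) {
              karma -= price;
              printf("priority request granted, karma left: %.1f\n", karma);
          }
          return 0;
      }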

  • can anyone say... (Score:4, Interesting)

    by gh05t ( 558297 ) on Tuesday February 12, 2002 @02:44PM (#2995420)
    security as we know it no longer exists?
    How many people do you know that are too scared to purchase anything online because they're afraid that some crazy cracker will intercept vital financial information? I know quite a few. We have to keep in mind that a relatively small portion of the overall population will actually see the benefit of this technology; and even fewer will trust it.
    Things that should be considered:
    • security of personal computers
    • security of bank account
    • additional power consumption from computer being left on
    • cost to companies that use the technology
    • cost, if any, for a person's file backups
    • value of the differences in speed/storage of individuals' computers
    First of all, can the encryption be cracked? With massive distributed computing available, your computer's CPU cycles may very well be used to crack the very encryption scheme that was used to back up your own files securely. What kind of bank account access will be given to allow pennies to trickle in? Without proper supervision, how would you know that the pennies trickling out are really legitimately earned? I believe that there was a case not too many years ago where a programmer created 'bugs' in a bank's software that allowed money to trickle into his own bank account unsolicited. Also, can the companies using your PC really pay enough to compensate for the additional power consumption costs of leaving your computer on more frequently? Wouldn't people be more inclined to leave their computers on more often so as to allow more pennies to trickle in? And last of all, how would the value of individuals' computers be judged? Would it truly be fair to allow someone with a Pentium 233 MHz and a 3 GB hard drive to get paid the same rate as someone with an Athlon XP 1900+ and an 80 GB hard drive? I think that it's a cool idea, but too difficult to implement any time soon, if ever.
  • Storage (Score:3, Interesting)

    by esme ( 17526 ) on Tuesday February 12, 2002 @03:05PM (#2995527) Homepage
    The greatest possibility that I see for using this sort of system is storage. I don't know about the rest of you, but I would gladly sell my spare processor cycles to get a robust, secure, frequently-updated backup of my files. I have a backup system (CD-R for my home machines), but I don't keep it updated very well, and certainly not as updated as the system they're talking about could keep it.

    Add to that the fact that when you start dealing with serious amounts of data (~1TB), making backups to tape or any other media starts to get really difficult. If the free disk space on people's computers (I've got around 30 or 40GB free on my home machines) could be put to use to store backups, I'm sure businesses would be willing to pay a significant amount of money for it.

    -Esme

  • by Mr. Neutron ( 3115 ) on Tuesday February 12, 2002 @03:13PM (#2995578) Homepage Journal
    I'm all for sharing unused CPU power and DSL bandwidth, but what if I think SETI@Home is a waste of time, or have moral objections to my box being used as a repeater to broadcast R-rated movies? Is there going to be a way to itemize every flop and byte, and opt-out of the ones I don't want?

    Probably not.

  • I/O Bound (Score:3, Interesting)

    by Waffle Iron ( 339739 ) on Tuesday February 12, 2002 @03:23PM (#2995639)
    It seems to me that most computational tasks are more I/O bound than processor bound anyway. This scheme would just make the problem worse by moving the computations farther away from the ultimate source and destination of the data being processed.

    Processors faster than 2GHz are dirt cheap today. High-bandwidth connections aren't cheap, and connections to home users are 3 orders of magnitude slower than an internal disk drive channel.

    This kind of thing only seems to make sense for the most geek-oriented scientific types of calculations, and of those only the jobs that are trivially parallelized, like SETI. I don't see everyone changing their OS to support it.

  • ...eerily remind me of "Skynet" from the Terminator movies?

    How long before it becomes self-aware, realizes humans are the single biggest threat to its continued existence, and begins scheming to eradicate us?
  • a couple of issues (Score:3, Interesting)

    by dutky ( 20510 ) on Tuesday February 12, 2002 @03:58PM (#2995825) Homepage Journal
    It sounds nice, but I see two big problems:
    1. even if we have lots of unused processor time (which I'm sure we do), pumping the data into and out of a remote procedure call can consume a lot of bandwidth and result in a huge lag time. Many problems don't distribute well, even when you have relatively high-bandwidth connections to send the data over (like multi-GB memory busses), so the problem only gets worse when you use a measly network pipe or modem line. (Processor memory bus bandwidth tends to be in the 5-10 gigabit range; even the best home Internet access is only 10-100 megabits.)

    2. the steady state of a hard drive is full. There just isn't going to be enough spare, on-line storage space on folks' desktops to give any appreciable amount out to share. If you have to deal with the bloat of a self-healing encoding, the problem only gets worse.

      Consider the case of N users, each with one hard drive of size X. They share out half of their hard drive space, but a file takes three times as much space to store on the distributed system than it does purely locally (for the self-healing encoding). The total hard drive space available to the group is now N*X/2 + 1/3*N*X/2 = N*X*4/6, or just over half the actual total space on the network. The average space available to any single user is the total available space on the network divided by the number of users, or just over half the actual space on the individual user's local hard drive.

      That doesn't sound like too good a deal to me. Admittedly, I will be getting some extra reliability, but given how many home users back up their data on a regular basis, I don't think reliability is worth much (at least to home users).


    At first blush, it sounds like a nice idea, but I don't think the economics are going to support it. It will always be easier and cheaper for the folks who actually need more storage or processing power to just go out and buy it, especially while Moore's Law is in effect. For anyone else, it just doesn't matter.
    • Problem #3) What about security issues? The biotech info that is on your computer is on someone else's and traveling through the Internet, which is inherently insecure. Wouldn't that open things up to hackers using your computer to crack the biotech info that is on your computer, in the same kind of distributed manner? This may work better in a work environment where the company has a LAN and at night all employees log out of their computers but leave them on for the big jobs to run, but not at people's homes. Gee, just think, your rival company's employees are helping to solve your problems.....
  • Intended use... (Score:2, Interesting)

    by nologin ( 256407 )

    I'm not so worried about the technical side of things, but more along the lines of intended use...

    Could someone queue a job to crack an encrypted password file, or a document stolen from the government? I imagine that with 150 million computers using their spare cycles, this job could be done with relative ease. This is definitely an issue that the authors have failed to address in their proposal.

    The legal ramifications alone make this prohibitive. Is a person whose computer did 0.1% of an illegal activity just as liable as someone who did 10%, 25%, or 50%, or as liable as the person who submitted the job? Can you even fully control what kinds of jobs your system is doing using this proposed infrastructure?

    It may be a great idea for, say, X machines inside a large corporation, but there are already some alternatives to fill that need. I just don't see how they can work out the logistics of issues such as the one I present above, when they also have to worry about the technical and financial issues that such a system would bring with it.
