
With Linux Clusters, Seeing Is Believing

Hemos posted about 10 years ago | from the believing-is-truth dept.

Technology 208

Roland Piquepaille writes "As the release of the latest Top500 list reminded us last month, the most powerful computers are now reaching speeds of dozens of teraflops. When these machines run a nuclear simulation or a global climate model for days or weeks, they produce datasets of tens of terabytes. How to visualize, analyze and understand such massive amounts of data? The answer is now obvious: using Linux clusters. In this very long article, "From Seeing to Understanding," Science & Technology Review looks at the technologies used at Lawrence Livermore National Laboratory (LLNL), which will host IBM's BlueGene/L next year. Visualization will be handled by a 128- or 256-node Linux cluster, with each node containing two processors sharing one graphics card. Meanwhile, EVEREST, built by Oak Ridge National Laboratory (ORNL), has a 35-million-pixel screen driven by a 14-node dual-Opteron cluster sending images to 27 projectors. Now that Linux superclusters have all but swallowed the high-end scientific computing market, they're building momentum in high-end visualization as well. The article linked above is 9-page long when printed and contains tons of information. This overview focuses more on the hardware deployed at these two labs."


First posting? (-1)

Anonymous Coward | about 10 years ago | (#11072998)

I'm not quite sure why people are so attracted to having the first post

Realisation about this procedure (5, Funny)

Vvornth (828734) | about 10 years ago | (#11073002)

This is how we nerds measure our penises. ;)

Re:Realisation about this procedure (2, Funny)

Anonymous Coward | about 10 years ago | (#11073075)

What? You have more than one!?!

Re:Realisation about this procedure (2, Funny)

Anonymous Coward | about 10 years ago | (#11073375)

What? You don't!?!

/points and laughs at "One Penis Guy" over there...

Re:Realisation about this procedure (1)

Rellik66 (596729) | about 10 years ago | (#11073132)

My beowulf cluster is bigger than yours, so nyah!

Many monitors are Good Thing (tm)!!! (0)

Anonymous Coward | about 10 years ago | (#11073474)

Imagine a beowulf cluster of 21'' german monitors of resolution 1,600x1,200 ...

Seeing 8 x 8 monitors are 12,800x11,200 pixels!!!

14.33 MegaPixels, wow!!!

open4free ©

Re:Many monitors are Good Thing (tm)!!! Bugfixed!! (0)

Anonymous Coward | about 10 years ago | (#11073485)

143.36 MegaPixels, wow!!!

open4free ©

Re:Many monitors are Good Thing (tm)!!! Bugfixed!! (0)

Anonymous Coward | about 10 years ago | (#11073562)

Seeing 8 x 8 monitors are 12,800x9,600 pixels!!!

122.88 MegaPixels, wow!!!

open4free ©

Re:Realisation about this procedure (3, Funny)

Rellik66 (596729) | about 10 years ago | (#11073164)

uh-oh, more bad pick-up lines for Linux Geeks:

"you don't need to imagine how big my beowulf cluster is"

Re:Realisation about this procedure (0)

Anonymous Coward | about 10 years ago | (#11073212)

Yeah, you need at least 200 teraflops to measure mine.

Re:Realisation about this procedure (0)

Anonymous Coward | about 10 years ago | (#11073302)

Indeed, my penis contains so much data I need one of those clusters to visualize its contents.

Hammer (0, Offtopic)

hammer revolution (836067) | about 10 years ago | (#11073007)

--;

The hammer revolution has begun

--;

Mac OS X has similar benefits (4, Interesting)

daveschroeder (516195) | about 10 years ago | (#11073008)

Virginia Tech's "System X" cluster cost a total of $6M for the asset alone (i.e., not including buildings, infrastructure, etc.), for performance of 12.25 Tflops.

By contrast, NCSA's surprise entry in November 2003's list, Tungsten, achieved 9.82 Tflops for $12M asset cost.

Double the cost, for a Top 100 supercomputer's-worth lower performance.

And it wasn't because Virginia Tech had "free student labor": it doesn't take $6M in labor to assemble a cluster. Even if we give it an extremely, horrendously liberal $1M for systems integration and installation, System X is still ridiculously cheaper.

I know there will be a dozen predictable responses to this, deriding System X, Virginia Tech, Apple, Mac OS X, linpack, Top 500, and coming up with one excuse after another. But won't anyone consider the possibility that these Mac OS X clusters are worth something?

Re:Mac OS X has similar benefits (1, Insightful)

Anonymous Coward | about 10 years ago | (#11073032)

I know this is a stupid question, but what exactly is a Teraflop? The first thing that comes to mind is someone doing a belly flop and hitting concrete...

Re:Mac OS X has similar benefits (0)

Anonymous Coward | about 10 years ago | (#11073095)

No, that would be a Terraflop

Re:Mac OS X has similar benefits (1)

utexaspunk (527541) | about 10 years ago | (#11073129)

Maybe you're just trolling, but have you ever heard of a dictionary [reference.com]? "A measure of computing speed equal to one trillion floating-point operations per second [flops]"

Re:Mac OS X has similar benefits (1)

91degrees (207121) | about 10 years ago | (#11073187)

It's actually a teraFLOPS. Tera meaning 1 000 000 000 000, and FLOPS meaning "FLOating-Point Operations per Second".
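So converting a machine's rating to raw operations per second is just a matter of applying the SI prefix. A trivial Python sketch, using System X's Linpack figure quoted elsewhere in this discussion:

```python
TERA = 10**12  # SI prefix "tera"

tflops = 12.25               # System X's Linpack result
flops = tflops * TERA        # raw floating-point operations per second
print(f"{flops:.3e} floating-point operations per second")
```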

Re:Mac OS X has similar benefits (3, Informative)

Spy Hunter (317220) | about 10 years ago | (#11073293)

I think you missed something here in your rush to defend Apple. The article is not about building high-teraflop supercomputers; it is about using small-to-medium sized clusters of commodity hardware to run high-end visualization systems (with Linux's help, of course). Since they specifically want top-of-the-line graphics cards in these machines, Macs would not be the best choice. PCs have PCI Express now (important for nontraditional uses of programmable graphics cards, as these guys are probably doing), and the latest from ATI/NVidia always comes out first, and cheaper, on PCs.

Re:Mac OS X has similar benefits (5, Insightful)

zapp (201236) | about 10 years ago | (#11073296)

G5 nodes do have excellent performance, but don't assume OSX is all they can run.

We at Terra Soft have just released Y-HPC, our version of Yellow Dog Linux, with a full 64-bit development environment, and a bunch of cluster tools built in.

I'm not much of a marketing drone, but being as I am part of the Y-HPC team, I had to put a shameless plug in. Bottom line is, it kicks OSX's ass any 2 ways you look at it.

Y-HPC [terrasoftsolutions.com]

Re:Mac OS X has similar benefits (2, Insightful)

59Bassman (749855) | about 10 years ago | (#11073514)

Truly no offense intended, but...

I've tried installing YDL on a small G5 cluster. It was a PITA to get running (3 installs before I was able to get the X server running right). And still I can't find any fan control. After 5 minutes the fans spool up to "ludicrous speed" and stick there.

I really want to like YDL. I've been talking to the folks who do OSCAR about trying to get OSCAR to support YDL. But I'm not sure how it will work out yet, at least until I can figure out how to turn down the fans!

Re:Mac OS X has similar benefits (-1, Flamebait)

Jameth (664111) | about 10 years ago | (#11073320)

Similar benefits? What are you talking about? The article is about visualization!

Is there some reason you feel the need to make off-topic evangelizations left and right? Even more importantly, who the hell modded this informative instead of off-topic? Clearly, this doesn't relate to an article about doing visualizations of supercomputer output.

Re:Mac OS X has similar benefits (3, Insightful)

RazzleFrog (537054) | about 10 years ago | (#11073333)

Besides the fact that you are (please forgive me) comparing Apples and Oranges, your sample size is way too small to use as conclusive evidence. Until we start seeing Xserve clusters in a few more places we can't be sure of the cost benefit.

Re:Mac OS X has similar benefits (4, Informative)

RealAlaskan (576404) | about 10 years ago | (#11073334)

Virginia Tech's "System X" cluster cost a total of $6M for the asset alone (i.e., not including buildings, infrastructure, etc.), for performance of 12.25 Tflops.

By contrast, NCSA's surprise entry in November 2003's list, Tungsten, achieved 9.82 Tflops for $12M asset cost.

When I looked here [uiuc.edu], I found this: ``Tungsten entered production mode in November 2003 and has a peak performance of 15.36 teraflops (15.36 trillion calculations per second).''

To me, that looks faster than System X, not slower.

Let's see: NCSA stands for ``National Center for Supercomputing Applications''. ``NCSA [uiuc.edu] is a key partner in the National Science Foundation's TeraGrid project, a $100-million effort to offer researchers remote access ...''

Looks as if the NCSA has a huge budget. I'd guess that ``gold-plated everything'' and ``leave no dollars unspent'' are basic specs for everything they buy.

What can we learn about Virginia Tech? How about this [vt.edu] :

System X was conceived in February 2003 by a team of Virginia Tech faculty and administrators and represents what can happen when the academic and IT organizations collaborate.

Working closely with vendor partners, the Terascale Core Team went from drawing board to reality in little more than 90 days! Building renovations, custom racks, and a lot of volunteer labor had to be organized and managed in a very tight timeline.

In addition to the volunteer labor, I'd guess that Virginia Tech had very different design goals, in which price was a factor. NCSA's bureaucracy probably accounted for a lot of those extra $6M they spent. Different designs and goals probably had a lot to do with the rest of the price, but I suspect that a bureaucratic procurement process was the main cause for the higher price of the Xeon system.

Yes, System X and the Apple hardware is pretty neat, but don't use the price/performance ratio of these two systems as a metric for the relative worth of Linux and OSX clusters.

It's unfair and meaningless to compare volunteer labor and academic pricing and scrounging on a limited budget to bureaucratic design, bureaucratic procurement and an unlimited budget.

Rpeak, not Rmax (4, Insightful)

daveschroeder (516195) | about 10 years ago | (#11073427)

Look here [top500.org] .

The speed you quoted is the theoretical peak, not the actual maximum achieved in a real world calculation (like the Top 500 organization's use of Linpack).

System X's equivalent theoretical peak is 20.24 TFlops.

I'm also not indicting Linux clusters in the least; they've clearly shown they can outperform traditionally architected and constructed supercomputers for many tasks, with the benefit of using commodity parts - at commodity pricing. All I'm saying is that there's a new player here, and it's a real contender, and has done a lot for very little money...which was the whole goal of Linux clusters in this realm in the first place.

(Also, as I said, the volunteer labor model is irrelevant - let's just pretend it was professionally installed for an additional $1M, or even $2M if that would satisfy you. It's still several million dollars cheaper, with 3 TFlops greater performance. These are BOTH rackmount clusters with similar numbers of nodes and processors, running a commodity OS with fast interconnects. There are differences, yes, and perhaps even differences in goals. But looking past that, price/performance for something like this is still an important metric.)
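For reference, a machine's theoretical peak (Rpeak) is usually just processor count x clock rate x flops per cycle. A quick Python sketch, assuming System X's 1,100 dual-processor nodes at 2.3 GHz and the conventional count of four flops per cycle for the PowerPC 970's two fused multiply-add units, reproduces the 20.24 TFlops figure quoted above:

```python
# Rpeak = processors x clock x flops-per-cycle.
# Assumed figures: 1,100 dual-processor nodes at 2.3 GHz, with the
# PowerPC 970 counted at 4 flops/cycle (two fused multiply-add units).
processors = 1100 * 2
clock_hz = 2.3e9
flops_per_cycle = 4

rpeak_tflops = processors * clock_hz * flops_per_cycle / 1e12
print(f"Rpeak = {rpeak_tflops:.2f} TFlops")  # 20.24
```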

Re:Mac OS X has similar benefits (2, Insightful)

vsack (558342) | about 10 years ago | (#11073374)

You have to take the costs with a grain of salt. They built the original machine for $5.2M. They then upgraded all the nodes from PowerMac G5s to Xserve G5s for $600K. Even if you assume that $5.2M was a fair price for their original system, the upgrade price was an absolute gift from Apple. The cost per node to upgrade was about $550. Since they moved from non-ECC RAM to ECC RAM (4GB/node), the memory upgrade should have cost more than that alone.

Vendors will often give away hardware in order to break into a new market. This is incredible marketing for Apple. Who cares if they eat a few million for the press they've gotten?

Apple gave away nothing (1)

daveschroeder (516195) | about 10 years ago | (#11073453)

The only special thing they did for VT was *take back* the original G5 towers, and provide 2.3GHz G5 Xserves before they were otherwise available. The $600K upgrade did not reflect any significant discount or gift. A similar cluster could be built by anyone, now, for around that same total price of $6M.

Re:Apple gave away nothing (0)

Anonymous Coward | about 10 years ago | (#11073591)

give us a breakdown please.

Re:Mac OS X has similar benefits (1, Insightful)

Anonymous Coward | about 10 years ago | (#11073418)

I bet that NCSA actually ran something though. That's something that the VT machine never really appeared to do...

Yeah, VT really didn't do anything... (3, Interesting)

daveschroeder (516195) | about 10 years ago | (#11073504)

...except get untold amounts of recognition, publicity, free advertising, news articles, and the capability to catapult themselves to the forefront of the supercomputing community overnight for a paltry sum of money, thus attracting millions of dollars of additional funding and grants to build clusters that WILL be doing real work, such as the one we're talking about now (which is more than capable now that it has ECC memory), and the several additional clusters they plan to build in the future, not to mention the benefit of proving that a new architecture, interconnect, and OS will perform well as a supercomputer, allowing more choice, competition, and innovation to enter the scene, which ultimately results in more and better choices for everyone.

Re:Mac OS X has similar benefits (1)

chill (34294) | about 10 years ago | (#11073423)

Virginia Tech used G5 Tower units. I wonder how much difference there would be in power, heat and space had they used Xserve 1Us? Like what Apple is installing for the Army. (http://www.apple.com/science/profiles/colsa/)

VT already switched to Xserve G5s (1)

daveschroeder (516195) | about 10 years ago | (#11073472)

And yes, they saved a lot of space, and heat/power (130nm chips to 90nm) and increased the performance by 2Tflops (by going from 2.0GHz processors to 2.3GHz). The major gain, though, was ECC memory.

Re:Mac OS X has similar benefits (0)

Anonymous Coward | about 10 years ago | (#11073563)

But won't anyone consider the possibility that these Mac OS X clusters are worth something?

That sounds like a sales pitch to me. We don't want salesmen here. We want balanced unbiased discussion of facts, preferably with links to back your facts up.

When System X came out, I took a trip to Apple's site and multiplied out the number of computers times price per computer and realized that there was NO WAY IN HELL that the cost numbers we were being fed were real. Not even close. And my personal experience as an April 2004 revision AluBook owner is that while OS X is very nicely crafted, Linux is MUCH faster on the same hardware.

Re:Mac OS X has similar benefits (-1, Troll)

Anonymous Coward | about 10 years ago | (#11073592)

Classic troll?

If not, you really need to take a step back and realize you've become a bit of a fanatic. At no point was the viability of Apple or G5s questioned, yet you felt the need to take the discussion on a tangent and defend Apple. Why?

You're talking about an operating system as if it was your religion.

You dropped something (2, Funny)

Anonymous Coward | about 10 years ago | (#11073009)

How to visualize, analyze and understand such massive amounts of data?

How to write complete sentences?

Re:You dropped something (1)

madaxe42 (690151) | about 10 years ago | (#11073134)

It looks like you're writing a sentence. Would you like me to fuck it up for you?

Re:You dropped something (1)

jdray (645332) | about 10 years ago | (#11073498)

Ah-ha! I always expected that Clippy was a Slashdot editor. Now we have more evidence! Sort of.

not suitable for the Slashdot demographics (2, Funny)

BuddieFox (771947) | about 10 years ago | (#11073010)

The article linked above is 9-page long when printed and contains tons of information.

I hope the poster doesn't actually expect any of us to post any meaningful comments based on having read that article, it's a lost cause.. At least on me.

Re:not suitable for the Slashdot demographics (1)

Interrupt18 (839674) | about 10 years ago | (#11073109)

The poster doesn't expect anything other than to generate traffic for his blog in hopes of getting a few adsense dollars. Have a look [slashdot.org] at all of the stories he posts and see if you can find a trend.

Yes, but does it run Linux? (-1, Offtopic)

Anonymous Coward | about 10 years ago | (#11073018)

This post brought to you by the word of the day: "Jew"!

I wish... (1)

simon hughes (826043) | about 10 years ago | (#11073020)

... my computer could do that.

Is that US or metric tons? (4, Funny)

HarveyBirdman (627248) | about 10 years ago | (#11073025)

The article linked above is 9-page long when printed and contains tons of information.

Damn! What kind of paper stock are you printing on?

Re:Is that US or metric tons? (2, Funny)

bhima (46039) | about 10 years ago | (#11073052)

That's that new Depleted Uranium paper the military has been using!

Re:Is that US or metric tons? (2, Funny)

Tassach (137772) | about 10 years ago | (#11073083)

What kind of paper stock are you printing on?
Paper has bad archival properties. Real men use granite slabs for hardcopy.

Re:Is that US or metric tons? (0)

Anonymous Coward | about 10 years ago | (#11073486)

Ahh, but if he provides an accurate measurement of exactly how many tons, then we can come up with a page:weight ratio, and find out how heavy the information stored in the Library of Congress is!

Roland Piquepaille and Slashdot (5, Interesting)

Anonymous Coward | about 10 years ago | (#11073033)

Roland Piquepaille and Slashdot: Is there a connection?

I think most of you are aware of the controversy surrounding regular Slashdot article submitter Roland Piquepaille. For those of you who don't know, please allow me to bring forth all the facts. Roland Piquepaille has an online journal (I refuse to use the word "blog") located at www.primidi.com [primidi.com] . It is titled "Roland Piquepaille's Technology Trends". It consists almost entirely of content, both text and pictures, taken from reputable news websites and online technical journals. He does give credit to the other websites, but it wasn't always so. Only after many complaints were raised by the Slashdot readership did he start giving credit where credit was due. However, this is not what the controversy is about.

Roland Piquepaille's Technology Trends serves online advertisements through a service called Blogads, located at www.blogads.com. Blogads is not your traditional online advertiser; rather than base payments on click-throughs, Blogads pays a flat fee based on the level of traffic your online journal generates. This way Blogads can guarantee that an advertisement on a particular online journal will reach a particular number of users. So advertisements on high traffic online journals are appropriately more expensive to buy, but the advertisement is guaranteed to be seen by a large amount of people. This, in turn, encourages people like Roland Piquepaille to try their best to increase traffic to their journals in order to increase the going rates for advertisements on their web pages. But advertisers do have some flexibility. Blogads serves two classes of advertisements. The premium ad space that is seen at the top of the web page by all viewers is reserved for "Special Advertisers"; it holds only one advertisement. The secondary ad space is located near the bottom half of the page, so that the user must scroll down the window to see it. This space can contain up to four advertisements and is reserved for regular advertisers, or just "Advertisers". Visit Roland Piquepaille's Technology Trends (www.primidi.com [primidi.com] ) to see it for yourself.

Before we talk about money, let's talk about the service that Roland Piquepaille provides in his journal. He goes out and looks for interesting articles about new and emerging technologies. He provides a very brief overview of the articles, then copies a few choice paragraphs and the occasional picture from each article and puts them up on his web page. Finally, he adds a minimal amount of original content between the copied-and-pasted text in an effort to make the journal entry coherent and appear to add value to the original articles. Nothing more, nothing less.

Now let's talk about money. Visit http://www.blogads.com/order_html?adstrip_category=tech&politics= [blogads.com] to check the following facts for yourself. As of today, December XX 2004, the going rate for the premium advertisement space on Roland Piquepaille's Technology Trends is $375 for one month. One of the four standard advertisements costs $150 for one month. So, the maximum advertising space brings in $375 x 1 + $150 x 4 = $975 for one month. Obviously not all $975 will go directly to Roland Piquepaille, as Blogads gets a portion of that as a service fee, but he will receive the majority of it. According to the FAQ [blogads.com], Blogads takes 20%. So Roland Piquepaille gets 80% of $975, a maximum of $780 each month. www.primidi.com is hosted by clara.net (look it up at http://www.networksolutions.com/en_US/whois/index.jhtml [networksolutions.com]). Browsing clara.net's hosting solutions, the most expensive hosting service is their Clarahost Advanced (http://www.uk.clara.net/clarahost/advanced.php [clara.net]) priced at £69.99 GBP. This is roughly, at the time of this writing, $130 USD. Assuming Roland Piquepaille pays for the Clarahost Advanced hosting service, he is out $130, leaving him with a maximum net profit of $650 each month. Keeping your website registered with Network Solutions costs $34.99 per year, or about $3 per month. This leaves Roland Piquepaille with $647 each month. He may pay for additional services related to his online journal, but I was unable to find any evidence of this.
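The arithmetic above can be sketched in a few lines of Python (the hosting and registration figures are the parent post's assumptions, not verified costs):

```python
# Ad rates and fees as quoted in the post above; the hosting figure assumes
# the most expensive clara.net plan, as the post does.
premium_ad = 375          # one premium slot, per month
standard_ad = 150         # each of four standard slots, per month
blogads_cut = 0.20        # Blogads' service fee
hosting = 130             # ~GBP 69.99/month in USD
registration = 3          # ~$34.99/year with Network Solutions

gross = premium_ad + 4 * standard_ad          # $975
after_fee = gross * (1 - blogads_cut)         # $780
net = after_fee - hosting - registration      # $647
print(f"maximum net: ${net:.0f}/month")
```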

All of the above are cold, hard, verifiable facts, except where stated otherwise. Now I will give you my personal opinion.

It appears that every single article submitted to Slashdot by Roland Piquepaille is accepted, and he submits multiple articles each month. As of today, it is clear that ten articles were accepted in October, six in November, and four in December (so far). See http://slashdot.org/~rpiquepa [slashdot.org] for yourself. Some generate lots of discussion; others very little. What is clear is that, on the whole, this generates a lot of traffic for Roland Piquepaille. Just over 150,000 hits each month according to Blogads. And the higher the traffic, the higher the advertisement rates Roland Piquepaille can charge. So, why do the Slashdot editors accept every single story from Roland Piquepaille? Is the content of his journal interesting and insightful? Of course it is, but not by Roland Piquepaille's doing. The actual content of his journal is ripped from the real articles, but at least he gives them credit now. Does the content of his journal bring about energetic discussion from the Slashdot readership? Yes, because the original articles from which he got his content are well written and researched and full of details.

So you may be asking, "What is so controversial about this?" Well, in almost every single article submitted by Roland Piquepaille, Slashdot readers complain that Roland Piquepaille is simply plagiarizing the original articles and that rather than linking to Roland Piquepaille's Technology Trends on the front page of Slashdot (guaranteeing a large amount of traffic for him), Slashdot should instead link to the original articles. In essence, avoid going through the middle man (and making money for him!). The Slashdot readership that can see through Roland Piquepaille's farce objects on the basis that he stands to make a generous amount of money by doing very little work and instead piggy-backing on the hard work of other professional writers. Others argue that he is providing us with a service and should not be ashamed to want to get paid for it. But exactly what service is he providing us with? He copies-and-pastes the meat of his journal entries from professional and academic journals and news magazines and submits about seven or eight of these "articles" to Slashdot each month. Is this "service" worth up to $647 a month? Or, does each "article" represent up to $80 of work?

The real question is, why does Slashdot continue to accept every single one of his submissions when many of the readers see through the scam and whole-heartedly object to what he is doing? Maybe the Slashdot editors don't have much journalistic integrity. Haha, just kidding. We all know they wouldn't know integrity if it bitch-slapped a disobedient user talking about Slashcode internals or shut down www.censorware.org [google.com] in a temper tantrum. Anyway, what incentive would Slashdot editors have to link to lame rehashes of original and insightful technology articles? What incentive would Roland Piquepaille have to constantly seek these tech articles and rehash them into lame journal entries and submit them to Slashdot? I submit to you, the Slashdot reader, that the incentive for each is one and the same. Now that you have been informed of the facts of the situation, you can make your own decision.

Re:Roland Piquepaille and Slashdot (1, Offtopic)

lucabrasi999 (585141) | about 10 years ago | (#11073078)

Finally, he adds a minimal amount of original content between the copied-and-pasted text in an effort to make the journal entry coherent and appear to add value to the original articles.

Oh, please, you give Roland WAY too much credit. He doesn't add any original content. He just copies and pastes.

shit (0)

Anonymous Coward | about 10 years ago | (#11073130)

That should be As of today, December 13 2004....

Also, it seems Roland had scaled back the number of ads on his page this month. Or maybe nobody has bought the ads. Last month, when I was researching this he had 1 premium ad and 4 regular ads available. I would have released this report back then, but I've been banned from posting for some time now. I'm only posting this now because I am not at my regular location.

MOD PARENT UP, UP, UP! (0)

Anonymous Coward | about 10 years ago | (#11073146)

n/t

Re:Roland Piquepaille and Slashdot (1, Funny)

LithiumX (717017) | about 10 years ago | (#11073247)

You, my friend, must be exceptionally bored. Either that, or this Roland guy must have shunned your romantic advances sometime recently. Can't you just stalk in silence like everybody else?

Re:Roland Piquepaille and Slashdot (2, Informative)

maxwell demon (590494) | about 10 years ago | (#11073438)

Roland Piquepaille's Technology Trends serves online advertisements through a service called Blogads, located at www.blogads.com. [...] Blogads pays a flat fee based on the level of traffic your online journal generates. [...] Visit Roland Piquepaille's Technology Trends (www.primidi.com) to see it for yourself.

Are you actually Roland Piquepaille? If so, that's a really neat trick to move traffic to that site. If not, then he may be thankful for your comment, after all :-)

Wow! (3, Funny)

Anonymous Coward | about 10 years ago | (#11073034)

Supercomputers have become so advanced we need more supercomputers just to understand them.

Obligatory (2, Funny)

Epistax (544591) | about 10 years ago | (#11073079)

42

Re:Wow! (0)

Anonymous Coward | about 10 years ago | (#11073217)

It makes me drool to think of having a 128-processor cluster for a graphics card. I might even be able to run Doo -- no! Must ... resist ... tired ... old .. meme.

Re:Wow! (1)

AltGrendel (175092) | about 10 years ago | (#11073232)

At the risk of being called off topic.

When Harlie was one [amazon.com] was the first book that I recall about a computer that designed another more complex computer that only it could understand.

Maybe Harlie was a Linux cluster.

Collection of interesting visualization samples (1)

nickh01uk (749576) | about 10 years ago | (#11073248)

There are a bunch of different viz techniques listed on http://www.tauceti.org/research.html#v [tauceti.org] here.

Re:Wow! (1)

maxwell demon (590494) | about 10 years ago | (#11073350)

Wait until supercomputers become so complex that we need supercomputers to design the supercomputers which we need to understand the output of the supercomputer. Problem is, to understand the supercomputer-designing supercomputer's output we need a supercomputer to be designed by a supercomputer ... ok, there's a way out: Let the supercomputer build the supercomputer it designed.
Ok, now we just need another supercomputer to test the supercomputer the supercomputer built us to interpret the output of the supercomputer ...

pictures (1)

Mach5 (3371) | about 10 years ago | (#11073035)

oh, great, tell us about what the machines can do. i want pictures dammit!

Big Screen! (2)

TychoCelchuuu (835690) | about 10 years ago | (#11073036)

A 35 million pixel screen would rock for Half-Life 2. Where can I get me one? Looking at the picture, it's kind of like 3 monitors stuck together, so maybe I'll save some money and only get 1/3rd of the setup. How much can that cost? I mean, really.

Re:Big Screen! (0)

Anonymous Coward | about 10 years ago | (#11073087)

"How much can that cost? I mean, really."

I don't know... how much ya got?

Re:Big Screen! (2, Informative)

dsouth (241949) | about 10 years ago | (#11073521)

A 35 million pixel screen would rock for Half-Life 2. Where can I get me one? Looking at the picture, it's kind of like 3 monitors stuck together, so maybe I'll save some money and only get 1/3rd of the setup. How much can that cost? I mean, really.
I know you're joking, but since I'm the hardware architect for the LLNL viz effort, I'll bite anyway. :-)

Here's what you'll need at minimum:

  • A lot of display devices (monitors, projectors, whatever)
  • Sufficient video cards to drive the above (with new cards, you could do 2 devices per card if you have the appropriate cards, X configs, and the like).
  • A sufficient number of nodes to run the cards.
  • The fastest interconnect you can afford.
Once you've assembled the above, you connect everything up, install your favorite Linux or BSD distro on each node, then install DMX [sourceforge.net] . DMX works as an X11 proxy. It dispatches the X calls to other X11 servers on the appropriate nodes, giving the illusion that they are all one big X11 server. It also proxies for glX, so openGL stuff should run correctly.

If you've built a large setup (where "large" means "more than eight screens"), the openGL performance will suffer. In that case you can also install Chromium [sourceforge.net] which can work with DMX to provide a more efficient path for the openGL commands. [The DMX glx proxy broadcasts the gl commands to all nodes, Chromium can provide a tile sort that only sends the gl calls to the appropriate nodes.]

Assuming you can get all the above running, there's still plenty of work. Just keeping eight projectors color balanced will eat up a few hours of your week. If you want to do frame-locked stereo on your power wall, things get even more complex (and expensive -- nvidia 3000G/4400G cards aren't typically in the discount bin at Fry's).

Have fun, openGL stuff looks really cool on powerwalls... :-)
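The DMX front end described above is just one `Xdmx` process pointed at each node's back-end X server. A minimal sketch of composing that invocation in Python (the hostnames, display numbers, and front-end display here are hypothetical examples, not LLNL's actual configuration):

```python
def dmx_command(nodes, back_end_display=":0", front_end=":1"):
    """Build the argument list for an Xdmx front-end proxy server.

    Each node contributes one back-end X server; +xinerama presents
    them to clients as a single large logical screen.
    """
    cmd = ["Xdmx", front_end, "+xinerama"]
    for host in nodes:
        cmd += ["-display", f"{host}{back_end_display}"]
    return cmd

# Four render nodes driving four tiles of a small power wall:
print(" ".join(dmx_command(["node01", "node02", "node03", "node04"])))
```

You would hand the resulting command to the shell (or `subprocess`) on the head node; clients then connect to `:1` as if it were one big display.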

Re:Big Screen! (1)

swordboy (472941) | about 10 years ago | (#11073637)

A 35 million pixel screen would rock for Half-Life 2. Where can I get me one?

Well, you could use projectors to get a seamless screen from XP's built-in multi-monitor capability. I believe the limit is 10 simultaneous screens, which provides for a 3x3 matrix plus an extra for controlling the damn thing. But you'll probably only get your hands on 1024x768 projectors (786k pixels each), so 9 would amount to about 7 Mpixel.

You'll probably have to wait on that 35Mpixel screen if you want borderless. Otherwise, go get yourself a bunch of high-res CRTs or LCDs and piece them together.

Make sure that you take a picture.
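The pixel math in the comment above is easy to check (the resolutions and tile counts are the poster's examples):

```python
def wall_pixels(cols, rows, width, height):
    """Total pixels on a tiled display wall of cols x rows screens."""
    return cols * rows * width * height

print(wall_pixels(1, 1, 1024, 768))  # one 1024x768 projector: 786,432 pixels
print(wall_pixels(3, 3, 1024, 768))  # 3x3 wall: 7,077,888, i.e. about 7 Mpixel
```

Still a factor of five short of EVEREST's 35 million pixels, which is why ORNL needed 27 projectors.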

Uh huh ... (-1, Flamebait)

Anonymous Coward | about 10 years ago | (#11073040)

Yeah, Linux is the *only* answer when it comes to visualizing these massive amounts of data. I'm sure.

Linux is garbage. I can't believe these companies/organizations are willing to spend so much money implementing a patchwork-quilt/hackjob of an operating system.

Re:Uh huh ... (2, Insightful)

superpulpsicle (533373) | about 10 years ago | (#11073136)

Sigh... another jealous M$ fanboy who hates linux cause his career relies on running windows and clusterpatchupdate.exe.

Re:Uh huh ... (-1, Troll)

Anonymous Coward | about 10 years ago | (#11073471)

Yup. Because the only two operating systems in the world are Linux and Windows.

Try FreeBSD? Try Solaris? Try OS X? Try OpenBSD? Try NetBSD? The list goes on ...

Ironically enough, each and every one of those is professionally developed. Linux? Pcha ... keep on wishing.

Re:Uh huh ... (0)

Anonymous Coward | about 10 years ago | (#11073311)

Darl! Welcome back. We've not heard from you for a while.

Funny how these things never run Winblows (0)

Anonymous Coward | about 10 years ago | (#11073050)

eh?

Slashdot's obligatory RP "article"... (-1, Offtopic)

Anonymous Coward | about 10 years ago | (#11073056)

Aha! Another post by Roland Piquepaille to drive traffic to his blog and get his advertising referral cash. Well done!

Regarding the story title (3, Funny)

weeboo0104 (644849) | about 10 years ago | (#11073102)

With Linux Clusters, Seeing Is Believing

Does this mean that we don't have to just imagine a Beowulf cluster anymore?

You are correct, sir (4, Funny)

Gzip Christ (683175) | about 10 years ago | (#11073203)

Does this mean that we don't have to just imagine a Beowulf cluster anymore?
That's right - now Beowulf cluster visualizes you!

Finally.... (3, Funny)

ElvenMonkey (789317) | about 10 years ago | (#11073111)

A machine that can compile a Stage1 Gentoo install in a reasonable amount of time.

You would think so (2, Informative)

jellomizer (103300) | about 10 years ago | (#11073479)

Unless you change the settings so it compiles multiple applications at the same time, a Stage 1 Gentoo install won't be much faster than on a 2- or maybe 4-CPU system. These supercomputers and clusters use a concept called parallel processing: a task is broken up and the pieces are handled by many processors in parallel. Most applications are not designed to run in parallel, so unless you have a compiler designed for parallel processing, the OS will hand the whole compile job to one processor. You may get a slight speed advantage because OS resources are handled by another processor, but you are not guaranteed 2x performance with 2 processors. Especially since most make scripts compile one program and only start the next when it's done. Some algorithms parallelize very well (running orders of magnitude faster) and other algorithms just cannot be parallelized. Having two 1 GHz processors is not the same as having one 2 GHz processor: the two 1 GHz chips will probably handle load better, but the one 2 GHz chip will probably run your game better.
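The scaling limit described above is usually quantified by Amdahl's law: if only a fraction p of the work can run in parallel, n processors give a speedup of 1/((1-p) + p/n). A minimal sketch (the fractions are illustrative, not measurements of an actual Gentoo build):

```python
def amdahl_speedup(parallel_fraction, n_procs):
    """Speedup predicted by Amdahl's law for n processors."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_procs)

# If only half the build parallelizes, a whole cluster barely helps --
# exactly the poster's point about compiling on a supercomputer.
print(amdahl_speedup(0.5, 2))    # roughly 1.33x on 2 CPUs
print(amdahl_speedup(0.5, 128))  # stays below 2x no matter how many nodes
```

Scientific simulations land on the other end of the curve: with p close to 1, thousands of nodes pay off, which is why the Top500 machines are clusters and your Gentoo box is not.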

Beowulf cluster (-1, Redundant)

bredk (838817) | about 10 years ago | (#11073141)

Imagine a Beowulf cluster of these things!

Fuck Roland Piquepaille (5, Insightful)

Anonymous Coward | about 10 years ago | (#11073152)

So, if I've got this straight, Slashdot drives the banner ad traffic, real journalists write the content, and all Roland has to do is rip off a few articles, then sit in the middle and collect the checks. How do I get a sweet gig like that?

Re:Fuck Roland Piquepaille (0)

Anonymous Coward | about 10 years ago | (#11073279)

Funny, my friends and I were thinking the same thing. Does seem like a bit of a scam, I mean, create your own content instead of just summarizing others'.

And to think .. (1)

macaulay805 (823467) | about 10 years ago | (#11073189)

10 years or so from now, you'll have this much power in a little 1" x 1" box (probably priced around $100, too).

Re:And to think .. (1)

adeydas (837049) | about 10 years ago | (#11073503)

Though a bit of an exaggeration, I guess it's close to a real possibility, especially if we get DNA chips...

ridiculously overpriced (1)

ChreexLe (838547) | about 10 years ago | (#11073515)

10 years or so from now, you'll have this much power in a little 1" x 1" box (probably priced around $100, too)

yes...refer to subject line

Re:And to think .. (1)

jdray (645332) | about 10 years ago | (#11073533)

So just imagine the Beowulf clusters we'll be imagining at that point. It blows my contemporarily-themed mind.

Imagine... (-1, Redundant)

Anonymous Coward | about 10 years ago | (#11073219)

a Beowulf Cluster of these things!

Building clusters with linux is easy. (4, Interesting)

roxtar (795844) | about 10 years ago | (#11073233)

To reaffirm what the article said, building Linux clusters is very simple. In fact, certain distributions such as BCCD [uni.edu] and ClusterKnoppix [bofh.be] exist specifically for that. Although configuring clustering software such as PVM, MPI, LAM, or MOSIX wouldn't be a problem, I prefer something that has almost everything built into one package, which is why I like the above distros. In fact, I built a cluster at home (using BCCD) and used it to render images with POV-Ray [povray.org], using pvmpov [sourceforge.net] for the render-on-a-cluster part. Although there were only four machines, the speed difference was evident. And above all, making clusters is extremely cool and shows the paradigm shift toward parallel computing.

Re:Building clusters with linux is easy. (4, Interesting)

LithiumX (717017) | about 10 years ago | (#11073344)

I do think clusters are going to be a dominant architecture for the next few decades, but I also think the current ultra-heavy emphasis on clusters is as much a function of asymptotic limitations as of the natural evolution of the technology. It's currently cheaper to build a cluster out of a whole mess of weaker processors than it is to develop a single ubercore. I doubt that situation will last more than a decade, though, going by previous history.

Computers were initially monolithic machines that effectively had a single core. By the 70's, the processing on many mainframes had branched out so that a single mainframe was often a number of separate systems integrated into a whole (though nothing on the level we see today). By the 80's it seemed to swing back to monolithic designs (standalone PCs, ubercomputer Crays) and it wasn't until the 90's that dual and quad processing became commonplace (though the technology had existed before).

Eventually, someone will hit on a revolutionary new technology (sort of like how transistors, ICs, and microprocessors were revolutionary) that renders current VLSI systems obsolete (optical? quantum?), and the cost/power ratio will shift dramatically, making it more economical to go back to singular (and more expensive) powerful cores rather than cheap (but weaker) distributed cores.

Re:Building clusters with linux is easy. (2, Insightful)

roxtar (795844) | about 10 years ago | (#11073412)

But on the other hand, problems that require immense amounts of calculation will always exist, and I don't see how advances in VLSI or some other technology will eliminate them. So what I actually believe is that, to some extent, yes, we may go back to singular cores, but imagine the power of those single cores working together. In my opinion, even if new technology does arrive, clusters are here to stay.

Re:Building clusters with linux is easy. (2, Informative)

LithiumX (717017) | about 10 years ago | (#11073538)

It all depends on what form an advance takes.

When VLSI hit the market, it became cheaper to have one ultrapowerful machine, compared to having a cluster of older IC-based hardware. You got more firepower for the money. That's not to say it wouldn't still pay to combine multiple Nth Generation machines, but a great deal of the cost advantage would be lost.

Clusters exist in their current diversity because they are simply the cheapest and most effective way to create powerful supercomputers. If you have a new technology orders of magnitude more powerful (which is how it usually goes), but also considerably more expensive, it becomes cheaper to build a single powerful specimen (or a small number of them) than it does to build legions of older technology (like current processors: they aren't that powerful compared to higher-end chips, but they're much, much cheaper).

You could always network a whole mess of next-generation processors, but while it's a newer technology it will be obscenely expensive (not counting cost, there's nothing to stop people from creating arrays of supercomputer clusters right now).

Re:Building clusters with linux is easy. (1)

Bilzmoude (811717) | about 10 years ago | (#11073469)

Agreement: Although configuring clustering software such as pvm mpi lam mosix ...I prefer something which has almost everything built into one package

There is a similar distro, based off of ClusterKnoppix, called ParallelKnoppix [pareto.uab.es], which includes LAM/MPI. In addition, ClusterKnoppix includes OpenMosix, so the tools are there. You already have it built in. :)

Plus, if you are really looking for a HA system, it may be worth a remaster of either Cluster or ParallelKnoppix to add exactly the tools you want.

very long article... (3, Interesting)

veg_all (22581) | about 10 years ago | (#11073238)

So now Monsieur Piquepaille has been shamed by scornful posters [tinyurl.com] into including a link to the actual article (instead of harvesting page views), but he'd still really, really like you to click through to his page....

Really... (4, Insightful)

grahamsz (150076) | about 10 years ago | (#11073249)

Now that Linux superclusters have almost swallowed the high-end scientific computing market...

While some simulations parallelize very well to cluster environments, there are still plenty of tasks that don't split up like that.

The reason clusters make up a lot of the Top 500 list is that they are relatively cheap and you can make them faster by adding more nodes, whereas traditional supercomputers need to be designed from the ground up.

mail address (1)

hey (83763) | about 10 years ago | (#11073259)

Maybe they are building cool Linux clusters but they can't be that smart. They have their mail addresses just sitting here on the site for spammers to harvest!

Re:mail address (1)

maxwell demon (590494) | about 10 years ago | (#11073542)

Maybe they are building cool Linux clusters but they can't be that smart. They have their mail addresses just sitting here on the site for spammers to harvest!

They are running a secret project about the use of supercomputers to analyze spam. :-)

Not fair, Linux! (2, Funny)

RandoX (828285) | about 10 years ago | (#11073331)

Leave some market share for the big guys.

Nice pictures, but... (0)

Anonymous Coward | about 10 years ago | (#11073353)

So these guys have some fancy computers displaying pretty pictures. Lots of computers = more detailed pics. But it's still viewed by humans in the end. I can't take in that much information. The art of simulation is to extract the bit you're interested in, and leave the rest. Aren't these systems just generating lots of trees and no wood?

A nine page article!!! (0)

Anonymous Coward | about 10 years ago | (#11073372)


Geez, I hope the youngsters can get through that without a break...

GridEngine (0)

Anonymous Coward | about 10 years ago | (#11073398)

GridEngine is also a main component of Linux clusters. Without a batch system, you can only have one user using the cluster at a time...

Open source + free:
http://gridengine.sunsource.net

Engrish? (-1, Offtopic)

Anonymous Coward | about 10 years ago | (#11073407)

"This overview is more focusing on the hardware deployed at these two labs."


I'm more focusing on lunch.

WTF?! Learn English, Roland, it's a good pastime.

Paraview (1)

4of12 (97621) | about 10 years ago | (#11073443)

Once you have your visualization cluster, decided on the CPU, the interconnect, the OS, etc., you might ask what kind of application [paraview.org] you can run on it.

Single image shared vs distributed memory in Linux (3, Insightful)

saha (615847) | about 10 years ago | (#11073539)

Clusters are proven to be cost-effective, but they do require more labor to optimize code to get it to work in that environment. It's easier to have the system and the compiler do the work for you in a single-image system. This article addresses those issues and concerns: single image shared vs distributed memory in large Linux systems [newsforge.com]

Damn.... (1)

boodaman (791877) | about 10 years ago | (#11073546)

I keep forgetting about Roland Piquepaille, and I click on his damn "overview" link.

Why does /. post these damn things from him? The guy is a shameless shill.

There should be a highly visible disclaimer on every one of his posts: "This link goes to an external site that is NOT the article's original site, and this external site is unendorsed by Slashdot. This external site profits from traffic generated by clicking on this link."

Someone needs to write a Firefox extension that filters any mention of his "overviews". Hmmmm....

Yeah, Roland the Plogger again. (1)

Animats (122034) | about 10 years ago | (#11073609)

Yeah, and he even changed his URL. Maybe he was in too many spam blocklists. Does he spam other places too, or just Slashdot?