
Virginia Tech Supercomputer Up To 12.25 Teraflops

timothy posted more than 9 years ago | from the changing-blacksburg's-climate dept.

Apple 215

gonknet writes "According to CNET news and various other news outlets, the 1150-node Hokie supercomputer rebuilt with new 2.3 GHz Xserves now runs at 12.25 Teraflops. The computer, the fastest computer owned by an academic institution, should still be in the top 5 when the new rankings come out in November."


215 comments


shit (-1, Troll)

Anonymous Coward | more than 9 years ago | (#10629533)

shit shit shit

This is a fp (-1, Troll)

Anonymous Coward | more than 9 years ago | (#10629534)

FP for Stefi and Bombonel.

Re:This is a failure (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#10629769)

YOU FUCKING FAIL IT!



qwertyuiopasdfghjklzxcvbnmqwertyuiopasdfghjklzxc vb nm

FP (-1, Troll)

Anonymous Coward | more than 9 years ago | (#10629535)

Woo!

Re:Failed Post (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#10629786)

YOU FUCKING FAIL IT LOSER! YOU AREN'T EVEN FIRST FAILURE, YOU FAIL TO BE THE FIRST OF THE FAILURES! YOU FAIL IT!


too many caps too many caps too many caps too many caps
too many caps too many caps too many caps too many caps too many caps too many caps too many caps too many caps
This is Junis Afghanistan, you cock-smoking teabaggers. Don't forget to upgrade to tiger while being anally penetrated by a GNAA faggot.

hrm (5, Funny)

gutterandthestars (782754) | more than 9 years ago | (#10629540)

6.40tflops should be enough for anybody

Re:hrm (5, Interesting)

tmj0001 (704407) | more than 9 years ago | (#10629612)

Hans Moravec's book "Robot" suggests that 100 teraflops is about the level required for human intelligence. So we are at a bit over 10% of his target. But human intelligence still seems very far away, so either he has badly underestimated, or our collective programming skills need significant improvement.

Re:hrm (5, Interesting)

TimothyTimothyTimoth (805771) | more than 9 years ago | (#10629673)

I think Moravec's method of simulating human intelligence involves modelling a scanned copy of the human brain, in real time at a neuronal level. It would be similar to modelling the global weather system, a software capability we already have. Current neuroscience would expect this model to be functionally equivalent to a human mind in terms of matching inputs and outputs. As an aside, I know that Ray Kurzweil has a much higher estimate of a 20 petaflop (20,000 teraflop) computer, based on more conservative assumptions. 20 petaflops is due around 2009/10 under Moore's law. (And I for one offer an early welcome to our expected new AI overlords ...)
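
As a rough sanity check on that kind of extrapolation, the arrival year depends heavily on the assumed starting point and doubling period; a minimal sketch (every number below is an illustrative assumption, not a figure from this thread):

    # Moore's-law style extrapolation: years until a target performance level,
    # given a starting performance and an assumed doubling period.
    import math

    def years_to_target(start_tflops, target_tflops, doubling_years):
        return math.log2(target_tflops / start_tflops) * doubling_years

    start = 70.0        # assumed baseline (TFlops) for the fastest machine of 2004
    target = 20_000.0   # Kurzweil's 20 petaflops
    for doubling in (1.0, 1.5):
        print(f"doubling every {doubling} years: "
              f"~{years_to_target(start, target, doubling):.0f} years to 20 PFlops")

Either way, the answer is quite sensitive to the assumed doubling time.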

Re:hrm (4, Interesting)

TimothyTimothyTimoth (805771) | more than 9 years ago | (#10629703)

By the way, IBM BlueGene/L is going to produce 360 teraflops by end 2004, so if the report of Moravec's estimate is correct, and he is correct, that AI Overlord welcome could be pretty soon.

(Although I don't believe brain scanning quite hits the resolution mark required yet.)

Re:hrm (5, Funny)

Randy Wang (700248) | more than 9 years ago | (#10629888)

I, for one, welcome our new Beowulf overlord...

Re:hrm (3, Insightful)

Chatsubo (807023) | more than 9 years ago | (#10629841)

What is really interesting is that when we get these human-brain-equivalent machines, the technology does not stop there.

So the intelligence level of this thing would probably double in accordance with Moore's law, and in a year outclass its master twofold. In about another year it would be four times as intelligent as any human being. And, of course, it doesn't stop there....

The implications this would have for society would be very interesting. Would we believe everything it told us, or claim that we know better? Would we like all the answers it gave us? Would it start deceiving us for our own good? Etc.

Re:hrm (2, Informative)

TimothyTimothyTimoth (805771) | more than 9 years ago | (#10629901)

If you are thinking along these lines you might already be aware of this link, but if not, might I recommend:

http://singinst.org/index.html [singinst.org]

Re:hrm (4, Interesting)

diersing (679767) | more than 9 years ago | (#10629953)

I have a question from a casual observer who comes across this Hokie machine and the top 500 list every now and then. What is it these computers do?

Hearing it referenced in terms of AI helps, but is that the only purpose for a research facility to build one of these mammoths? Are there practical applications for the business world (other than the readily available (read: commercial) clustered data warehousing)?

I'm not trolling, just curious.

Re:hrm (1)

Wudbaer (48473) | more than 9 years ago | (#10629972)

It would be similar to modelling the global weather system, a software capability we already have.

Where do we have this amazing capability?

I mean not just a rough model that ignores a lot of important influences on weather (like water temperatures in the oceans) and runs on a very coarse grid, like we have now, but a really accurate weather model.

A recent article I read about NEC's Earth Simulator stated that even though this amazing machine was supposed to deliver, among other things, climate calculations with unprecedented accuracy and comprehensiveness, it still fell short of this ambitious goal. Quite short.

Re:hrm (0)

Anonymous Coward | more than 9 years ago | (#10629973)

It would be similar to modelling the global weather system, a software capability we already have.

Perhaps you've seen a weather report that's remotely accurate? Every one that I've seen can't get 24 hours ahead more right than wrong...

Re:hrm (3, Insightful)

Quobobo (709437) | more than 9 years ago | (#10629687)

I think the reason lies in the latter.

Think about it; how is throwing more and more hardware at it going to solve the problem? What we're lacking is the software itself needed to do this, and it's obviously not going to be an easy task to write. I see no reason why an AI as intelligent as a human couldn't be implemented on a slower system, unless "thinks as fast as a human" is among the requirements.

(disclaimer, I've never read the book, these are just my opinions)

Re:hrm (1)

jimicus (737525) | more than 9 years ago | (#10629714)

In theory, the software required is easy. All you need is enough inputs, outputs (doesn't have to be speech) and enough neurones (either real or simulated) to connect it all together.

After that, the complicated bit (training the neural network) is much the same as it is with a baby - talk to it, show it simple things, put liquidised food in one end, keep the other end as clean as possible.

The only minor snag with current technology is the limit on how much it can learn and how long it takes to do so.
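
The "wire up enough neurons and train it" recipe is at least easy to sketch at toy scale, even if it says nothing about brain-sized networks. A minimal illustration (the network shape, learning rate, and XOR task are all invented for the example, and the result should approach the right answers but isn't guaranteed to):

    # Toy feed-forward network trained by gradient descent -- the "show it
    # simple things" loop in miniature. Learns XOR from four examples.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

    W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # a small hidden layer of "neurons"
    W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for step in range(5000):                 # the (slow) training phase
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        err = out - y                        # how wrong the outputs are
        d_out = err * out * (1 - out)        # backpropagate the error...
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)   # ...and nudge
        W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)     # every weight

    print(out.round(2).ravel())              # should approach [0, 1, 1, 0]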

much the same as a baby (2, Funny)

nounderscores (246517) | more than 9 years ago | (#10629774)

2:14am EDT August 29, 1997...

Researcher: "Go to your machine room! And no Command and Conquer until you do your homework!"

Joshua:"Oh yeah? Would you LIKE TO PLAY A GAME?"

Re:hrm (1)

segmond (34052) | more than 9 years ago | (#10629844)

Everything works in theory, but not in practice.

Re:hrm (4, Interesting)

benhocking (724439) | more than 9 years ago | (#10630075)

Actually, it's not quite that simple. As someone whose research is in modeling the hippocampal region CA3 (about 2.5 million neurons in humans, 250k neurons in rats), I can tell you that the connectivity of the system is a very important variable. And there is still much we don't know about the connectivity of the human brain. Furthermore, there are hundreds of different types of neurons in the human brain. Why so many different types if only 2 or 3 would do? Seems evolution took an inefficient path - unless, as is probably the case, the differences in the neuron types are crucial for the human computer to work the way it does. Granted, some differences might be due to speed or energy efficiencies which are not absolutely critical for early stages, but I suspect that many differences have to do with the software (or wetware in this case) that makes us intelligent.

After we've solved that minor problem, I think teaching the system will be relatively trivial. I.e., if we understand the wetware enough to reconstruct it, we most likely understand how its inputs relate to our inputs, etc., and we could teach it much the same as we teach a human child. Of course, we might also figure out a better way to teach it, and in so doing we might even find a better way to teach human children. (Some of our research has recreated certain known best learning strategies; it is probably only a matter of time before simulators discover a better one!)

Re:hrm (0)

Anonymous Coward | more than 9 years ago | (#10629695)

You can't simulate this with ORDINARY programming techniques; you have to use AI, GPs and neural nets for this kind of stuff.

Re:hrm (4, Interesting)

SnowZero (92219) | more than 9 years ago | (#10629697)

I actually asked Hans a similar question at a talk he gave a while back, and he didn't really answer it, to my disappointment. My question was: "In nature the algorithm and the computer evolved together, so we'd expect them to be at a similar level of advancement. So even if we get a computer as fast as a human, might it not be nowhere near as smart, since our programs do not use it efficiently enough?" In other words, Moore's law isn't helping us write better software (in some ways quite the contrary).

I'm a robotics software researcher, so this notion really affects me. IMO, software will lag well behind hardware, since it doesn't scale out nearly as well. Representation is of course a huge problem I won't even try to touch... But rest assured, lots of people are working on all these things. BTW, it also doesn't help that CPU designs aren't even trying to make AI-style algorithms fast, but we can't blame manufacturers for that until there is demonstrable money to be made.

Re:hrm (1)

Short Circuit (52384) | more than 9 years ago | (#10629753)

What kind of advancements to CPU design would improve their use in AI? Shorter pipelines? Greater emphasis on bus speed vs cache?

Re:hrm (2, Interesting)

segmond (34052) | more than 9 years ago | (#10629831)

I don't think CPUs should be designed for AI-style algorithms when said algorithms have not been proven. Assume we finally succeed in implementing the Holy Grail of AI right; then we can seek out ways to optimize and make it fast, and that is where custom CPUs will come in. Right now, most of the algorithms are a joke.

There were AI CPUs (2, Informative)

scattol (577179) | more than 9 years ago | (#10630017)

For a while there were CPUs specifically designed to run LISP [andromeda.com], a.k.a. AI. Symbolics was one of the better known ones.

It ended in bankruptcy. My vague understanding was that designing dedicated LISP processors was hard and slow, and with little in the way of resources they could not keep up. Essentially, the Symbolics computers ran LISP pretty quickly given the MHz, but Sun and Intel kept pushing their clock speeds up faster than Symbolics could keep up. In the end there was no speed advantage to a dedicated LISP machine, just an increase in price. Economics might change eventually. Who knows.

Re:hrm (1, Insightful)

beders (245558) | more than 9 years ago | (#10629704)

In an object-oriented system it should be a case of modelling individual neurons and their interactions; the hard part might be getting these tied into the inputs/outputs.
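
In the roughest possible outline, that object-oriented view might look like the sketch below (the Neuron class, its threshold rule, and the wiring are invented for illustration; real neuron models are far richer than a weighted sum):

    # Minimal object-oriented neuron: each Neuron sums weighted inputs from
    # its upstream connections and fires when a threshold is crossed.
    class Neuron:
        def __init__(self, threshold=1.0):
            self.threshold = threshold
            self.inputs = []          # list of (upstream_neuron, weight) pairs
            self.output = 0.0

        def connect(self, upstream, weight):
            self.inputs.append((upstream, weight))

        def update(self):
            total = sum(n.output * w for n, w in self.inputs)
            self.output = 1.0 if total >= self.threshold else 0.0

    # Tying inputs to outputs: two sensor neurons driving one downstream neuron.
    a, b, out = Neuron(), Neuron(), Neuron(threshold=1.5)
    out.connect(a, 1.0)
    out.connect(b, 1.0)
    a.output, b.output = 1.0, 1.0    # stimulate both inputs
    out.update()
    print(out.output)                # fires (1.0) only when both inputs are active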

Re:hrm (3, Interesting)

RKBA (622932) | more than 9 years ago | (#10629748)

His estimate was probably based on the common, and incorrect, belief that neurons are purely digital.

Re:hrm (2, Funny)

dr_d_19 (206418) | more than 9 years ago | (#10629756)

...or perhaps Hans Moravec was just plain wrong :)

Re:hrm (2, Interesting)

Deorus (811828) | more than 9 years ago | (#10629781)

I think the difference between human and computer intelligence is that our software (the conscious) is able to hard-wire the hardware (the unconscious). We may not be able to consciously perform certain tasks such as floating-point calculations, because our software lacks low-level access, but we can hard-wire our hardware for those tasks; this is why our unconscious is so quick and accurate when trained to recognize and respond to specific patterns, regardless of their complexity.

Re:hrm (3, Insightful)

segmond (34052) | more than 9 years ago | (#10629814)

He is wrong. Intelligence is not about speed. I have met people who are very, very smart but think very slowly. You ask questions, and the "I too know" types (ITKs) will blurt out an answer so damn fast, but Mr. Smarty Pants will think and think, and you would think they were clueless; but when they finally answer, you can't tear their answer apart.

We could build a machine that has human intelligence and run it on a 2 GHz processor. The only issue is that instead of answering a question in a second, it might take 1 or 2 hours to deliver an intelligent reply. But it should be able to pass a Turing test with the time limit thrown out the window.

Go read what 3D researchers said about graphics in the '70s. I bet they believed 10 GHz would be good enough for real-life 3D graphics.

What is hindering us is not speed, but our approach to AI research.

Re:hrm (1)

Zenmonkeycat (749580) | more than 9 years ago | (#10629862)

We'll know when we've hit that mark when every output to the console is accompanied by either "pathetic hacker" or "insect."

Re:hrm (3, Funny)

hackstraw (262471) | more than 9 years ago | (#10629864)

Hans Moravec's book "Robot" suggests that 100 teraflops is about the level required for human intelligence.

Yeah. I've been waiting for years for those dumbasses to make a computer that can outperform my ability to perform 100 trillion double precision floating point operations a second flawlessly.

Re:hrm (1)

Gentlewhisper (759800) | more than 9 years ago | (#10630127)

Hans Moravec's book "Robot" suggests that 100 teraflops is about the level required for human intelligence. So we are up to 10% of his target. But human intelligence still seems very far away, so either he has badly underestimated, or our collective programming skills need significant improvement.

Judging from the fact that Australians voted for John Howard as Prime Minister.. Nah! Not gonna need that much!

Wow! (0, Offtopic)

Big Nothing (229456) | more than 9 years ago | (#10629543)

Imagine a beow...

Never mind.

Re:Wow! (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#10629561)

Come on, that was funny.

Re:Wow! (0)

Anonymous Coward | more than 9 years ago | (#10629644)

Nah... not when it's the exact same joke as the frosty piss in the previous /. article.

Re:Wow! (1, Informative)

Knx (743893) | more than 9 years ago | (#10629661)

Correct me if I'm wrong, but it actually looks an awful lot like a Beowulf cluster by nature.

Oh, and btw: here [vt.edu] are some pictures.

Re:Wow! (0)

Anonymous Coward | more than 9 years ago | (#10629671)

You really didn't get it, did you? 7xxxxx users have no sense of humour...

Re:Wow! (1)

Quobobo (709437) | more than 9 years ago | (#10630128)

What, compared to the people who post the Beowulf/Soviet Russia/SCO jokes a million times over? Hard to get a worse sense of humour than them, as even this "new" 7xxxxx user is sick of their lame jokes.

MOD PARENT DOWN -1 (0)

Anonymous Coward | more than 9 years ago | (#10629765)

-1 Clueless

Speed at top (4, Interesting)

luvirini (753157) | more than 9 years ago | (#10629548)

Reflecting on the comment "should still be in the top 5 when the new rankings come out in November": there seems to be a serious push for multiprocessor systems. Currently the rankings seem to consist of a couple of stars, a few big ones (this computer among them), a huge third category, and then the "used to be great" computers. But my reading of the trends is that there will be more and more crowding near the top, so I expect the second category to be much larger, with much smaller differences.

Re:Speed at top (4, Insightful)

TAGmclaren (820485) | more than 9 years ago | (#10629709)

Currently the rankings seem to consist of a couple of stars, a few big ones (this computer among them), a huge third category, and then the "used to be great" computers


That's an interesting way of looking at it, but I think so far most of the commentators have failed to pick up what makes this system so incredible. Srinidhi Varadarajan, the designer of the system:
Varadarajan said competing systems cost $20 million and up, compared to System X's approximately $5.8 million price tag ($5.2 million for the initial machines, and $600,000 for the Xserve upgrade).

"We will keep the price-performance crown," he said. "We don't know anyone who's within a factor of two even of our system. We'll probably keep the price-performance lead until someone else shows up with another Mac-based system."


Think about that for a second. The system isn't just in the top 5 (or at least top 10), but it's the cheapest by a factor of at least 2. What's even funnier from a tech standpoint is that the creator doesn't expect it to be beaten until another Apple system is built - which puts a very interesting spin on the old "Apple's more expensive".

Anyway, as to being in or out of the top 5, Varadarajan reckons there's another 10-20% of optimisation left in the tank...

Data taken from the recent Wired Article [wired.com] on the subject.
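
The price-performance claim is easy to sanity-check from the figures quoted above (a back-of-the-envelope calculation only; the $20 million figure is Varadarajan's characterisation of competing systems, whose own Linpack numbers aren't given here):

    # Rough price-performance check using the quoted numbers.
    system_x_cost = 5.2e6 + 0.6e6   # initial build plus the Xserve upgrade
    competitor_cost = 20e6          # "competing systems cost $20 million and up"
    tflops = 12.25                  # System X's reported Linpack result

    print(f"System X: ${system_x_cost / tflops / 1e3:.0f}k per sustained teraflop")
    print(f"Price ratio vs. a $20M system: {competitor_cost / system_x_cost:.1f}x")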

Re:Speed at top (3, Informative)

Anonymous Coward | more than 9 years ago | (#10629986)

The system isn't just in the top 5 (or at least top 10), but it's the cheapest by a factor of at least 2.

The $5.8M number is how much the computers (and maybe racks) cost, not the whole system. AFAICT, that number appears to leave out US$2-3M worth of InfiniBand hardware that somebody (probably Apple) must've "donated" so it wouldn't show up as part of the purchase price. IB gear costs ~US$2k/node in bulk, on top of the cost of the node itself. It's highly unlikely someone else could build this exact configuration for US$5.8M without serious underwriting or hardware donations. Heck, I can't even get the Apple online store to give me a price on a G5 Xserve that includes an education discount, and I work for a fairly large public university.
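
That estimate is easy to reproduce (using the parent's own ~$2k/node figure, which is an assumption rather than a published price):

    # Back-of-the-envelope InfiniBand cost for an 1150-node cluster.
    nodes = 1150
    ib_cost_per_node = 2_000   # poster's estimate: HCA, cabling, switch-port share
    print(f"~${nodes * ib_cost_per_node / 1e6:.1f}M of interconnect hardware")
    # ~$2.3M, a sizable chunk on top of the quoted $5.8M system price.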

don't forget... (2, Interesting)

Geek_3.3 (768699) | more than 9 years ago | (#10630027)

(those that go to despair.com will recognize this) that "You can do anything you set your mind to when you have vision, determination, and an endless supply of expendable labor." Point being, I'm sure having essentially free labor (sans pizza, of course... ;-) might have cut the price down just a little bit too...

Not to pooh-pooh their efforts, but the whole system was essentially a "loss leader" for future supercomputer projects using the G5 and Xserve...

So he's saying that... (0)

Anonymous Coward | more than 9 years ago | (#10630021)

For multimillion dollar massively parallel systems, he thinks the Mac is the sweet spot for price/performance reasons. That's nice.

Down here on planet Earth, where most folks have a single or possibly dual processor system, the best computing bang for the buck seems to be Athlon64 or dual Opterons.

Also, anyone spending $5-10 million on their MPP system is probably going to be writing most of their own code. The rest of us have to rely heavily on available code or store bought applications. On both of those fronts, you're fighting with one arm tied behind your back with a Mac.

I'll be curious to see how the new Cray system, which scales to 30,000+ Opterons, will compare on a bang-for-the-buck basis.

And lest I be accused of bashing the Mac, I personally like them. They're sleek and sexy, but also very expensive compared to similar X86 hardware.

Cheers,

Re:Speed at top (1)

carnivore302 (708545) | more than 9 years ago | (#10629772)

Actually, a similar article [wired.com] states that "Released Tuesday, the 12.25-teraflops benchmark would put System X in fourth place in the world ratings, but it will probably be surpassed by new supercomputers from NASA, IBM and others."

Srinidhi Varadarajan, System X's lead architect, said, "We expect to be in the top 10. Where, we don't know. Top five is not possible, probably."

So at least that's different from what was stated in the Slashdot story.

Density (5, Interesting)

GerbilSocks (713781) | more than 9 years ago | (#10629549)

VT could theoretically pack 4x the number of nodes into the same space that the original System X occupied. Could we be looking at at least a 50 TFlop (minus 10% overhead) supercomputer with 8,800 cluster nodes?

If that were feasible, you could be looking at toppling the Earth Simulator at a fraction of the cost.

Re:Density (3, Insightful)

Anonymous Coward | more than 9 years ago | (#10629563)

At Linpack. Of course, the Earth Simulator wasn't built (just) to run Linpack.

Also, the Earth Simulator has been around for how many years? 2? 3? Quite frankly, it would be downright embarrassing if it couldn't be toppled at a fraction of its cost by now.

Re:Density (2, Funny)

koi88 (640490) | more than 9 years ago | (#10629682)


Of course, the Earth Simulator wasn't built (just) to run Linpack.

I think most supercomputers weren't built just to run benchmark tests.
Well, at least I hope so.

Re:Density (0)

Anonymous Coward | more than 9 years ago | (#10629834)

Quite frankly, it would be downright embarrassing if it couldn't be toppled at a fraction of its cost by now.

What makes you say that? Last time I checked, prices have been rising in the industry, as in most.

Re:Density (2, Funny)

Ingolfke (515826) | more than 9 years ago | (#10629578)

And if we could harness the heat from this machine we could probably power most of the North Eastern United States.

Re:Density (5, Informative)

UnknowingFool (672806) | more than 9 years ago | (#10629639)

Not necessarily. Processing power doesn't really scale linearly like that. Adding 4 times as many processors doesn't mean the speed will increase 4x.

First, as they try to increase the speed of the system, the bottlenecks start becoming more of a factor. Interconnects are one big obstacle. While the new System X may use the latest and greatest interconnects between the nodes, they still run at a fraction of the speed of the processors.

Also, the computing problems they are trying to solve may not scale with more processors either. For example, clusters like this can be used to predict and simulate weather. To do so, the target area (Europe, for example) is divided into small parts called cells. Each node takes a cell and handles the computations of that cell.

In this case, adding more processors does not necessarily mean that each cell is processed faster. Getting 4 processors to do one task may hurt performance, as they may interfere with each other. More likely, the cell is further subdivided into 4 smaller cells, and it is the detail of the information that increases, not the speed. So adding 4x the processors only increases the data 4x; it doesn't mean the problem is solved any faster.
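
One standard way to put numbers on "more processors doesn't mean proportionally faster" is Amdahl's law, which caps the speedup by whatever fraction of the work cannot be parallelized (a generic illustration, not a model of System X specifically):

    # Amdahl's law: overall speedup from n processors when only a fraction p
    # of the work can actually run in parallel.
    def speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for p in (0.90, 0.99):
        print(f"parallel fraction {p:.0%}: "
              f"4x CPUs -> {speedup(p, 4):.2f}x, "
              f"1000x CPUs -> {speedup(p, 1000):.1f}x")
    # Even 99%-parallel code tops out below 100x, no matter how many nodes you add.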

Re:Density (3, Informative)

luvirini (753157) | more than 9 years ago | (#10629691)

Indeed, breaking up computational tasks into smaller pieces that can be processed by these architectures is one of the biggest challenges in high-end computing.

Many processes are indeed easy to divide into parts. Take ray-tracing, for example: you can have one processor handle each ray if you want, getting huge benefits compared to single-processor designs. But many tasks are such that the normal way of calculating them requires you to know the previous result. Breaking up such tasks is one of the focuses of research around supercomputing.
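
The two situations described above look roughly like this in code: independent work fans out across processors trivially, while a calculation where each step needs the previous result has to run in order (a toy sketch; trace_ray and next_state are stand-ins, not real ray-tracing or simulation code):

    from multiprocessing import Pool

    def trace_ray(ray_id):
        # Stand-in for per-ray work: each ray is independent of every other ray.
        return sum(i * i for i in range(10_000))

    def next_state(state):
        # Stand-in for a time-stepped simulation: step N needs the result of N-1.
        return state * 1.000001 + 1.0

    if __name__ == "__main__":
        with Pool() as pool:                  # rays fan out across all CPUs
            image = pool.map(trace_ray, range(1000))

        state = 0.0
        for _ in range(1000):                 # time steps must run one after another
            state = next_state(state)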

Re:Density (1)

alfalfro (120490) | more than 9 years ago | (#10629667)

Other factors:
1) Cost, narf.
2) Power: we already have the biggest uninterruptible power supply in the state. If we added another, I think our small college town would experience "rolling blackouts."

2.3GHz? (0)

Reverant (581129) | more than 9 years ago | (#10629553)

But the Xserves come at 2.0 GHz, with the desktop Power Macs at 2.5 GHz. Is this a mistake?

Re:2.3GHz? (4, Interesting)

Ford Prefect (8777) | more than 9 years ago | (#10629566)

But the Xserves come at 2.0 GHz, with the desktop Power Macs at 2.5 GHz. Is this a mistake?

From the article:
Apple said last week that the 2.3GHz machines were a one-off deal for Virginia Tech and not something the company plans to announce for broader consumption anytime soon.
What I really want to know is what they do with the old machines. The article speaks of the cluster being 'upgraded' - are the older G5s replaced, or do they just become part of the new cluster?

Still, I suppose there are one or two unwanted G5s - anyone want to send me a couple? :-)

Re:2.3GHz? (5, Informative)

mmkkbb (816035) | more than 9 years ago | (#10629593)

They were sold off by MacMall at a slight discount around 6 months ago, along with a certificate of authenticity and a "Property of Virginia Tech" sticker.

Re:2.3GHz? (1)

jdwest (760759) | more than 9 years ago | (#10629602)

They were sold through retail channels with the addition of a metal nameplate stating each machine's node number.

Re:2.3GHz? (0)

Anonymous Coward | more than 9 years ago | (#10629569)

The 2.3GHz configuration is currently exclusive to VT.

"Dick factor" aside (3, Interesting)

ceeam (39911) | more than 9 years ago | (#10629555)

It would be interesting to know exactly what these machines do. Maybe they would even be able to share some code so that people can fiddle around with optimizing it (should be fun).

Re:"Dick factor" aside (2, Informative)

millahtime (710421) | more than 9 years ago | (#10629568)

Currently they aren't doing anything with them except getting them up and running. Status is listed as:
Assembly - Completed!
System Stabilization - In Progress
Benchmarking - In Progress

When up and going, the system will probably do some high-end scientific calculations.

Re:"Dick factor" aside (3, Informative)

TAGmclaren (820485) | more than 9 years ago | (#10629741)

Currently they aren't doing anything with them except getting them up and running


Their site is out of date then: http://www.wired.com/news/mac/0,2125,65476,00.html?tw=newsletter_topstories_html [wired.com]
Now that the upgrade is complete, System X is being used for scientific research. Varadarajan said Virginia Tech researchers and several outside groups are using it for research into weather and molecular modeling. Typically, System X runs several projects simultaneously, each tying up 400 to 500 processors.


If there's a Wired article and a CNET article, go with the Wired article every time. It's written by people who love tech.

...cough...ECHELON...cough.... (0)

Anonymous Coward | more than 9 years ago | (#10630058)

yeah...I wonder.

Re:"Dick factor" aside (3, Informative)

joib (70841) | more than 9 years ago | (#10629638)


It would be interesting to know exactly what these machines do. Maybe they would even be able to share some code so that people can fiddle around with optimizing it


I don't know about the VT cluster specifically, but here's a couple of typical supercomputer applications that happen to be open source:

ABINIT [abinit.org] , a DFT code.

CP2K [berlios.de] , another DFT code, focused more on Car-Parinello MD.

Gromacs [gromacs.org] , a molecular dynamics program.


(should be fun)


Well, if optimizing 200,000-line Fortran programs parallelized using MPI sounds like fun to you, jump right in! ;-)

Note: the above applies to ABINIT and CP2K only; I don't know anything about GROMACS except that it's written in C, not Fortran (though the inner loops are in Fortran for speed).

Oh, and then there's MM5 [ucar.edu] , a weather prediction code which I think is also open source. I don't know anything about it, though.
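
For anyone curious what "parallelized using MPI" looks like at the smallest scale, the model is a set of ranks that each do part of the work and then exchange messages. A minimal sketch (using the mpi4py Python bindings for brevity, rather than the Fortran these codes are actually written in):

    # Run with e.g.: mpirun -np 4 python hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()      # this process's id within the job
    size = comm.Get_size()      # total number of processes

    # Each rank works on its own slice of the problem...
    local_sum = sum(range(rank, 1_000_000, size))

    # ...and the partial results are combined with a collective operation.
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"{size} ranks computed total = {total}")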

What do they do with it? (1)

moberry (756963) | more than 9 years ago | (#10629764)

I have a friend who is finishing up his master's and starting his PhD in computer engineering at VT. I asked him about it and he simply said: "They haven't found anything to actually _do_ with it."

Re:"Dick factor" aside (0)

Anonymous Coward | more than 9 years ago | (#10630083)

Yeah, and I'll just test it on my "cluster" consisting of a dual Athlon MP, four Ultra 1s, two AMD laptops and two Thinkpad 755s (486DX4/100s!!!). Oh yeah, two SGI Indigos, an Octane, four IPCs, two IPXs, and a couple of Sparc10s.

Hmm. I think if I put *all* the rest together, they probably match up well with my dual Athlon MP, so that gives me a relative... um... 4.5Ghz!!! (with an ethernet interconnect, half wireless!!!)

Yeah. that will help them make their programs better.

-Jephthai-

The actual use is going to be (1)

hsmith (818216) | more than 9 years ago | (#10630087)

Highly concentrated in bio-systems/informatics. Tech just built a HUGE building for bioinformatics. They plan to be doing a lot of processing for that.

But they also plan to sell off processor time "cheap".

And I must say: GO HOKIES

So compare it to...... (3, Interesting)

ericdano (113424) | more than 9 years ago | (#10629557)

The school said it spent about $600,000 to rebuild the system and add the additional nodes. The original cost of System X was $5.2 million.

Compare it to this new Cray system [slashdot.org]. Bang for the buck would make the Apple system better.

Crays... (4, Insightful)

CaptainPinko (753849) | more than 9 years ago | (#10629576)

are not designed for the same type of work as clusters. If a problem is not efficiently parallelizable and requires shared memory, then a Cray is the only feasible option. A Cray is not a cluster. It's like comparing mph for a sports car and a truck: the car is faster, but they are meant for different types of loads.

Re:Crays... (4, Interesting)

Coryoth (254751) | more than 9 years ago | (#10629659)

are not designed for the same type of work as clusters. If a problem is not efficiently parallelizable and requires shared memory, then a Cray is the only feasible option. A Cray is not a cluster. It's like comparing mph for a sports car and a truck: the car is faster, but they are meant for different types of loads.

To be fair to the original poster, the Cray system he was referencing is a cluster system. Then again, it's a cluster system with very impressive interconnects to which System X just isn't comparable (i.e. the Cray system will scale far, far better), not to mention the Cray software (UNICOS, CRMS, SFW) and the fact that the Cray system is an "out of the box" solution. So you are right, there is no comparison.

Jedidiah.

Re:So compare it to...... (4, Insightful)

Coryoth (254751) | more than 9 years ago | (#10629647)

Compare it to this new Cray system. Bang for the buck would make the Apple system better.

Yup, except the Cray comes with far superior interconnect technology, a better range of hardware and software reliability features built in, software designed (by people who do nothing but supercomputers) specifically for monitoring, maintaining, and administering massively parallel systems, and, most importantly, it all works "out of the box". You buy a cabinet, you plug it in, it goes.

Why do these Apple fans, who justifiably claim that comparing a home-built PC to a "take it out of the box and plug it in" Apple system is silly, want to compare a build-it-yourself supercomputer to one that's just plug and go?

And yes, comparing MacOS X to UNICOS for supercomputers is like comparing Linux to OS X for desktops (in fact that's very flattering to OS X as a cluster OS).

Jedidiah.

Re:So compare it to...... (2, Funny)

capmilk (604826) | more than 9 years ago | (#10629672)

Bang for the buck would make the Apple system better.

Sure, but what would you rather say: "I just bought an Apple computer" or "I just bought a Cray computer"?

The list of Supercomputers (5, Informative)

ehmdjii (622451) | more than 9 years ago | (#10629590)

this is the official homepage of the listing:

http://www.top500.org/

Obligatory: (2, Funny)

Dorsai65 (804760) | more than 9 years ago | (#10629604)

but will it run Longhorn?

Re:Obligatory: (0, Offtopic)

Anonymous Coward | more than 9 years ago | (#10629629)

No.

Yes, but... (1)

midifarm (666278) | more than 9 years ago | (#10630042)

this only qualifies for the minimum requirements.

Peace

hey (0, Offtopic)

sla291 (757668) | more than 9 years ago | (#10629609)

Imagine a Beowulf clu... err, sorry, wrong humor!

Old stuff... (2, Insightful)

gustgr (695173) | more than 9 years ago | (#10629611)

Before you guys ask, I RTFA. I was wondering: what do they do with the old processors?

Re:Old stuff... (5, Interesting)

Anonymous Coward | more than 9 years ago | (#10629655)

If you're referring to the old G5 Power Macs used in the original System X... they were sold. I bought one!

and yet... (3, Funny)

BobWeiner (83404) | more than 9 years ago | (#10629630)

...it still doesn't come with a floppy disk drive.

/sarcasm

Re:and yet... (1)

Knx (743893) | more than 9 years ago | (#10629728)

Yeah, but ya know, you can plug in 2,300 USB keys! Woohoo! See here [apple.com].

Re:and yet... (2, Interesting)

Short Circuit (52384) | more than 9 years ago | (#10629777)

That's a big RAID array [arstechnica.com]...

Re:and yet... (0)

Anonymous Coward | more than 9 years ago | (#10629804)

... or a two-button mouse.

I'm surprised no one has said it yet... (-1, Redundant)

webgit (805155) | more than 9 years ago | (#10629648)

Imagine a Beowulf cluster...

School Funding (1)

BrianHursey (738430) | more than 9 years ago | (#10629649)

I wish my school had this kind of funding. We are going to build a cluster system out of about 20 G4s running Yellow Dog Linux. But this is my chance to actually do cluster programming.

The funny thing is that with the class we are actually trying to figure out things to compile, besides bootstrapping our Linux laptops. =P Man, we are geeks.

ever tried overclocking? (-1, Redundant)

Anonymous Coward | more than 9 years ago | (#10629653)

Imagine what would happen if they overclocked these mofos. *drools*

12.25 Teraflops ... (0, Funny)

Anonymous Coward | more than 9 years ago | (#10629679)

and still just one mouse button.

*SCNR*

Article Comparison... (0, Redundant)

jwilhelm (238084) | more than 9 years ago | (#10629692)

So... if I'm reading this all correctly...:


Cray: 11,000 Opterons = 40 Teraflops

Apple: 1,150 G5s = 12 Teraflops


Hm. I would have thought the Cray would have been more powerful, especially since it costs more and has that "specially designed interconnect." Interesting...
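
Dividing the headline numbers as quoted actually puts the per-processor figures closer together than the totals suggest (rough arithmetic only; it assumes the Apple figure covers 1,150 dual-processor nodes and ignores whether each total is peak or sustained):

    # Flops per CPU from the headline numbers above.
    cray = 40_000 / 11_000          # ~3.6 GFlops per Opteron
    apple = 12_250 / (1_150 * 2)    # ~5.3 GFlops per G5, assuming dual-CPU nodes
    print(f"Cray: {cray:.1f} GFlops/CPU, Apple: {apple:.1f} GFlops/CPU")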

Re:Article Comparison... (3, Informative)

erick99 (743982) | more than 9 years ago | (#10629702)

If power equated only to speed, then you would be correct. However, as other posters have pointed out, there are several reasons why a Cray is a more powerful system besides sheer speed.

Thank you VT (4, Funny)

Alcimedes (398213) | more than 9 years ago | (#10629693)

I have it on good insider knowledge, that this entire cluster is going to be put to the best possible usage.

Not disease solving, not genetic mapping, not calculating weather patterns.

No, what they're going to do is remaster the Original Star Wars series, right from the laser disc versions!!!!

Imagine, a digitally remastered bar scene where Han shoots first!!@$!@#!one!@

/kidding

russia (1)

Bombah (572185) | more than 9 years ago | (#10629759)

In Soviet Russia, the supercomputer flops!

they will not be in the top 5 (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#10629797)

you heard it here first, folks.

What is a supercomputer ? (3, Interesting)

Animaether (411575) | more than 9 years ago | (#10629869)

I'm curious as to the answer to the question (what is a supercomputer?).

The reason is this: more and more of these 'supercomputer' entries appear to be many machines hooked up together, possibly doing a distributed calculation.

However, would projects such as SETI, GRID, and UD qualify, with their many thousands of computers all hooked up and performing a distributed calculation?

If not, then what about the WETA/Pixar/ILM/Digital Domain/Blur/you-name-it renderfarms? Any one machine on those renderfarms could be put to use for only a single purpose: to render a movie sequence. Any one machine could be working on a single frame of that sequence. Does that count?

I seem to think more and more that the answer is 'no', from my perspective. They mostly appear to me as rather simple computers (very often not even the top of the line in their own class), with the only thing going for them being that there are many of them.

The definition of supercomputer (thanks Google, and by linkage dictionary.reference.com) is:
A mainframe computer that is among the largest, fastest, or most powerful of those available at a given time.


And for mainframe :
A large powerful computer, often serving many connected terminals and usually used by large complex organizations.

The central processing unit of a computer exclusive of peripheral and remote devices.


Doesn't the above imply that a supercomputer should really be just a single computer, and not a network or cluster of many computers?
(The mention of 'terminals' does not mean they're nodes. Terminals are, after all, chiefly CPU-less devices intended for data entry and display only. They are not part of the mainframe's computing capabilities.)

If the above holds true, then what is *really* the world's top 3 of supercomputers? I.e., which aren't 'simply' a cluster of nodes?

Any mistakes in the above write-up/thought process? Please do point them out :)

Re:What is a supercomputer ? (1)

joib (70841) | more than 9 years ago | (#10630158)

I don't think there exists any unambiguous way to define what a supercomputer is.

Anyway,

I think we can disqualify @HOME-style projects, since the individual nodes are not under the control of the manager. Similarly, you can't submit some small batch job to an @HOME system and expect to have results within a short time. Uh, that wasn't a very good description, but I hope you understand what I mean, i.e. that to qualify as a supercomputer all the nodes should be dedicated to the supercomputing stuff and be under the direct control of the administrator.

As for the one node vs. cluster of nodes, it gets trickier. How do you define one node? Shared memory? But then, what about NUMA systems such as the SGI Altix? It is entirely valid to view NUMA systems as consisting of multiple connected nodes, along with some kernel (and usually hardware) support to make it appear as shared memory. Hardware-wise there's no huge difference between such a system and a cluster, essentially the only major difference is that NUMA systems typically have some silicon to take care of cache coherency.

Or should we limit ourselves to shared memory systems where all the memory sits on the same bus? This limitation would seriously limit our ability to build really huge systems, simply because the speed of light would cause ever bigger latencies. Not to mention that this limitation would prohibit even a simple dual cpu AMD Opteron system, which is a NUMA system. So I don't think this limitation is good either.

Of course, we could say that a real supercomputer is distinguished by running a single kernel for the entire system. That would allow NUMA systems, but disallow clusters. Anyway, I think this limitation sounds a bit artificial.

In light of the above reasoning, I think we must accept clusters as legitimate supercomputers. As long as they have enough oomph to make the top500 or thereabouts, that is. Not that linpack is any perfect benchmark, far from it. Oh well, perhaps HPC Challenge or something like that will someday replace linpack as the "official" benchmark for top500.

Everyone is getting their rigs ready (1)

Jakhel (808204) | more than 9 years ago | (#10630022)

the 1150-node Hokie supercomputer rebuilt with new 2.3 GHz Xserves now runs at 12.25 Teraflops. The computer, the fastest computer owned by an academic institution, should still be in the top 5 when the new rankings come out in November."

Just in time for the release of Half Life 2. Hmmm...coincidence? I THINK NOT!!!

Actually, VT will be #8 this time around (4, Interesting)

daveschroeder (516195) | more than 9 years ago | (#10630061)

Prof. Jack Dongarra of UTK is the keeper of the official list in the interim between the twice yearly Top 500 lists:

http://www.netlib.org/benchmark/performance.pdf [netlib.org] (see page 54)

There have been some new entries, including IBM's BlueGene/L at 36 TFlops, finally displacing Japan's Earth Simulator, and a couple of other new entries in the top 5.

Here's just the top 16 as of 10/25/04:

http://das.doit.wisc.edu/misc/top500.jpg [wisc.edu]

No matter what anyone says, Virginia Tech pulled an absolute coup when they appeared on the list at the end of 2003: no one will likely EVER be able to be #3 on the Top 500 list for a mere US$5.2M...even if the original cluster didn't perform much, or any, "real" work, the publicity and recognition that came of it was absolutely more than worth it.

Also interesting: there is a non-Apple PowerPC 970 entry in the top 10, using IBM's JS20 blades...

What is the point? (1)

Bill, Shooter of Bul (629286) | more than 9 years ago | (#10630123)

They built the original and, as you say, it didn't perform any real work. So what's the point? It's like rich guys who buy Ferraris and never drive them.

What can I say, I got greedy! (1)

gonknet (594078) | more than 9 years ago | (#10630130)

It was late and I should have written top 10... Who cares about the #8 computer when you can have a top 5 computer!

Hm.. with this much compute power.. (4, Funny)

elemur (7613) | more than 9 years ago | (#10630148)

If you add in Virtual PC... presumably the clustered version... you should start to get to the level of compute power that was recommended by Microsoft for Longhorn... though it still wouldn't be the high end. Expect some sluggishness...