130 comments

Computers keep getting faster (1)

wallyhall (665610) | more than 3 years ago | (#32406350)

Computers still seem to be getting exponentially faster by the year ... when will Silicon give way? http://en.wikipedia.org/wiki/File:TOP500-2008.svg [wikipedia.org]

Re:Computers keep getting faster (4, Interesting)

somersault (912633) | more than 3 years ago | (#32406414)

I think power requirements are probably the main problem, rather than the hardware. It must be pretty trivial to add more cores to a system that's already using tens of thousands of them, but you're going to need a lot of power.

These systems are only really getting "faster" for parallel tasks too - if you gave them a sequential workload then I assume they would fare worse than a high end gaming machine!

Re:Computers keep getting faster (2, Insightful)

TheRaven64 (641858) | more than 3 years ago | (#32406450)

These systems are only really getting "faster" for parallel tasks too - if you gave them a sequential workload then I assume they would fare worse than a high end gaming machine!

I doubt it. A good fraction of them use POWER6 processors, which are still a lot faster than any x86 chip for most sequential workloads. On top of that, they typically have a lot more I/O bandwidth. They might only be a bit faster, but it would have to be a really high-end gaming rig to be faster.

Re:Computers keep getting faster (2, Informative)

Sique (173459) | more than 3 years ago | (#32406822)

"A good fraction" in this case means: Less than 10%. In fact, only 42 out of 500 use POWER.

Re:Computers keep getting faster (1)

cdpage (1172729) | more than 3 years ago | (#32407228)

...that's nothing to gripe at, dude.

What I wonder is what percentage of the total FLOPS they contribute versus all the others... if it weren't for the top 5 having close to a million cores, they might make up more than 10% of the computational contribution, no?

I would like to see the chart include that...

That, and perhaps a distribution chart too... I'd like to see how the PS3 is faring now in distributed computing

Re:Computers keep getting faster (4, Insightful)

Entropius (188861) | more than 3 years ago | (#32406580)

Parallel tasks are the whole point of using a supercomputer. The gains made in speed for sequential tasks really haven't been that great; Moore's Law for sequential tasks fell apart a while back.

Being able to parallelize a task is a prerequisite for putting it on a supercomputer.

Re:Computers keep getting faster (4, Informative)

wagnerrp (1305589) | more than 3 years ago | (#32408012)

Parallel tasks are the whole point of using a supercomputer.

Well it is now. The original supercomputers were based around a single very fast processor, and had a number of co-processors whose sole purpose was to offload IO and memory prefetch, so the CPU could churn away without interruption. Modern out-of-order CPUs are effectively an old style supercomputer on a chip. Heavy use of parallel processing didn't really take off until the late 80s. This paradigm shift is what caused the supercomputer market crash in the 90s, as development devolved from custom CPUs, to throwing as many generic cores at the problem as you can and using custom interconnects to mitigate parallel overhead.

Re:Computers keep getting faster (1)

alvinrod (889928) | more than 3 years ago | (#32408268)

Yes, but you could just give them multiple sequential workloads. It won't speed up any individual sequential workload very much, but you still get more work done overall.

I wouldn't worry too much. We've been pretty good at finding things to help us keep up with the effects of Moore's law.

June?! (5, Funny)

Anonymous Coward | more than 3 years ago | (#32406378)

Holy crap, the supercomputers are so fast they're in the future!

Re:June?! (1)

PingPongBoy (303994) | more than 3 years ago | (#32408914)

It's like an early Christmas ...

And speaking of that, a nice present would be an account on a supercomputer for running whatever I want. A Top 5000 would do, presumably.

Hopefully the universe won't mind if we call May June and thereby manipulate Moore's Law in our favor. Year over year if Moore's Law holds while the calendar grows shorter, the light speed barrier shall be overcome.

At any rate, this is the age of the Internet and global news updated by the minute. Supercomputers are expensive to upgrade, so a monthly update frequency may be reasonable, at least as far as publishing the theoretical speed is concerned, since running a benchmark every month could be aggravating. Technological improvements take time, but the notion that it takes a supercomputer to improve supercomputing could become more relevant as new breakthroughs arrive faster with more and more clock cycles existing every second, even as the law of diminishing returns pushes back.

It would be inspiring to see that technology is usable to advance the boundaries monthly. Indeed, businesses may feel a greater need to take more risks (not like BP, please) to achieve breakthroughs if computing is seen to be on a good pace for competitors to use in gaining an edge. A real stimulus package would be businesses having more incentive to get their own supercomputers. Personal computers have the speed of past supercomputers but businesses do not tend to think "Let's run this for a couple weeks on this great idea" - programmers, this is your cue to develop software that makes tomorrow's PCs yield answers that rival the $#|+ you can get from hired experts.

A 2nd "Chinese".... (1)

mtmra70 (964928) | more than 3 years ago | (#32406380)

Looks like a 2nd NSCS supercomputer located in China is in the top 10. Does that make it "Chinese"?

Re:A 2nd "Chinese".... (0)

Anonymous Coward | more than 3 years ago | (#32406412)

Depends upon the opinion of the Chinese government on any given day, if it is located on the mainland, wouldn't you suppose?

Re:A 2nd "Chinese".... (0)

Anonymous Coward | more than 3 years ago | (#32406446)

No. It is a supercomputer with Chinese characteristics.

Re:A 2nd "Chinese".... (-1, Flamebait)

Anonymous Coward | more than 3 years ago | (#32406554)

No. It is a supercomputer with Chinese characteristics.

You mean it wile make me nikes for 3 cents an hour. And eat dog for dinner?

Re:A 2nd "Chinese".... (3, Interesting)

TheRaven64 (641858) | more than 3 years ago | (#32406454)

Interestingly, the Chinese machines don't seem to be using Chinese CPUs yet. I was hoping to see at least one Loongson in the top 500.

As long as the niggers dont get one (1)

Mike Hock (249988) | more than 3 years ago | (#32406500)

But we should still be concerned that a cockroach race has this much computing power.

Re:As long as the niggers dont get one (0)

Anonymous Coward | more than 3 years ago | (#32409808)

Forget to post anonymously again?

Linux (5, Informative)

B5_geek (638928) | more than 3 years ago | (#32406398)

Ya for Linux!

Seriously, if this doesn't make every PHB take notice I can't imagine what would. (Hey boss, its free too!)

Re:Linux (1, Insightful)

Anonymous Coward | more than 3 years ago | (#32406512)

All our admins and all of our users only know Microsoft systems. Training isn't free.

Re:Linux (0)

Anonymous Coward | more than 3 years ago | (#32406544)

And MS training is free?

Re:Linux (5, Insightful)

Pharmboy (216950) | more than 3 years ago | (#32406550)

All our admins and all of our users only know Microsoft systems. Training isn't free.

So your users can't use Linux on the server? Or is it that all the users use supercomputers on the desktop? Our biz has all MS on the desktop and all Linux on the server. Obviously it is completely seamless. As for the admins, any admin worth their salt is always learning new things just to keep up with technology as it changes. Learning Linux by installing it on one system to start is trivial, and in certain situations it is much easier to set up than Windows: DNS servers, web servers, etc.

If your admins can only work on a server if it uses a mouse, you need new admins.

Re:Linux (1, Insightful)

Anonymous Coward | more than 3 years ago | (#32408196)


If your admins can only work on a server if it uses a mouse, you need new admins.

Agreed. Often you can't count on morons simply being canned or replaced, though. The fact is there are a lot of fools out there who think "system administration" simply means knowing which buttons to click in the right order. Any understanding beyond that simply doesn't exist, and is lost on them.

This limitation isn't simply one of "GUI vs CLI" or "Windows vs Linux". It's really one of wanting to understand something beyond the UI presented to you. We all know real systems, Windows or Linux, screw up in ways that pointy-clicky, or even "type in the magic command", knowledge won't help you with. People unwilling to learn the system beyond the basics are fools, and will remain fools until they expand beyond the basics.

Re:Linux (3, Insightful)

Black Art (3335) | more than 3 years ago | (#32409012)

In my experience Windows admins require *MUCH* more training than Linux admins. There is much more "black magic" that they need to know to be good at their jobs.

A Windows admin needs to know all the secret registry hacks to make things run well. They need to know all the non-intuitive places that Microsoft hides the settings for whatever services need to be configured. They also need to know how to recover things when it all goes horribly wrong.

Most Linux systems have text files to configure things. The files are in a predictable place. Updates are pretty easy and clear.

But Microsoft has scammed people into believing that leaving is harder than just putting up with the same old crap. In this case I just wish that people did get what they pay for...

Re:Linux (0)

Anonymous Coward | more than 3 years ago | (#32407296)

I hate that argument. The users don't know shit.

Most of the Win users I have come across think the H drive (network share) is on their computer. Users are mindless drones who need to be spoon-fed their world. I would love to see a site where the workstation was a locked-down OS where the user had a set of icons (office, mail, web browser, file manager, custom apps) to pick from, with no changing anything else. I believe that would cut down on help desk calls greatly.

Re:Linux (-1, Redundant)

Anonymous Coward | more than 3 years ago | (#32408018)

Hey Troll.

How them grapes taste?

Re:Linux (1)

burnin1965 (535071) | more than 3 years ago | (#32410176)

All our admins and all of our users only know Microsoft systems. Training isn't free.

I guess you are pretty well !#@%ed, but then again the world still needs ditch diggers. ;)

Re:Linux (0)

Anonymous Coward | more than 3 years ago | (#32406546)

Ya for Linux!

Seriously, if this doesn't make every PHB take notice I can't imagine what would. (Hey boss, its free too!)

Depends on the job you're doing.

Lots of small single- to quad-CPU OS images working separately in parallel doesn't say anything about the scalability of single large images with tens or hundreds of CPUs.

Re:Linux (1)

staalmannen (1705340) | more than 3 years ago | (#32407004)

The weird thing is that several entries on the statistics page http://www.top500.org/stats/list/35/os [top500.org] are ALSO Linux: not just the main entry of 405, but also the RedHat, CentOS, CNL, SLES (CellOS?) entries... Looked at that way, UNIX has already been outcompeted, with only a few entries of AIX and OpenSolaris left. I wonder what happened to Plan 9 on the Blue Gene...

Re:Linux (0)

Anonymous Coward | more than 3 years ago | (#32407326)

Plan 9 on Blue Gene is a research project, so it wouldn't show up in the Top500 production numbers.

Re:Linux (1, Insightful)

Anonymous Coward | more than 3 years ago | (#32407328)

Ya for Linux!

Seriously, if this doesn't make every PHB take notice I can't imagine what would. (Hey boss, its free too!)

How is this relevant to the environment most PHBs control? We're talking supercomputers here.. Ferraris.. Lamborghinis... not super reliable diesel trucks. Most PHBs want uptime, not go-fast-real-quick.

welcome to 1995 (2, Informative)

Colin Smith (2679) | more than 3 years ago | (#32408084)

um. you want a Beowulf with that?

Linux has been in the supercomputer lists for decades.

Google is a much better example of how you can use Linux to take over the world, which is what every self-respecting middle manager wants to do.

I.e. Shit loads of cheap compute power. Got any tasks which need that?

Re:welcome to 1995 (0)

Anonymous Coward | more than 3 years ago | (#32409450)

I.e. Shit loads of cheap compute power.

how much is a metric shit load of cheap compute power?

Re:Linux (0)

Anonymous Coward | more than 3 years ago | (#32409536)

Except Linux operating systems cannot run the software that people want and/or need.

By Processor (3, Interesting)

TheRaven64 (641858) | more than 3 years ago | (#32406432)

The view by processor is quite interesting. AMD has the top spot, but the majority of the top 500 have Intel chips. There are still two SPARC64 machines in the top 100, and a third one down at 383. All three SPARC64 machines are in Japan, which isn't entirely surprising. IBM makes a good showing, but it's interesting to see how far behind x86 they are, in a market that was traditionally owned by non-commodity hardware.

Re:By Processor (2, Insightful)

pwilli (1102893) | more than 3 years ago | (#32406458)

I would have expected more AMD-based systems in the top-100, because super computers are usually built with cheap and moderately fast Processors, the market segment where AMD gives lots of bang for the buck.

Re:By Processor (2, Insightful)

Entropius (188861) | more than 3 years ago | (#32406596)

If you're Intel you have more money to spend on marketing, which means "we'll give you a cut rate on a lot of 10000 processors just so we can have the bragging rights."

Re:By Processor (1)

maxume (22995) | more than 3 years ago | (#32406760)

It's quite likely that they can offer a hefty discount and still make a profit on the transaction.

Re:By Processor (3, Interesting)

stevel (64802) | more than 3 years ago | (#32408170)

System and component vendors don't make money on these "lighthouse account" supercomputer sales. My experience, having worked in the past for a vendor that did this a lot, is that they're a money-loser. The motivation is bragging rights, though that can be fleeting. I know of several times that my employer declined to bid on a supercomputer deal as it would just be too expensive.

Typically, these systems are actually sold by system vendors (Dell, HP, IBM) and not processor vendors, though the processor vendor will support the bid. That #1 "AMD" system is actually a Cray. Software also plays a large part in success or failure.

Re:By Processor (0)

Anonymous Coward | more than 3 years ago | (#32408006)

When you're building a cluster you have to worry about everything, which includes the chipset. At the moment AMD chipsets don't give the same performance as Intel chipsets do, which is a real problem if you're building a machine that's doing lots of I/O.

Re:By Processor (1, Interesting)

Anonymous Coward | more than 3 years ago | (#32406514)

What's more interesting is that the Chinese supercomputer is second overall with only 55,680 cores (Intel) and 1.271 petaFLOPS.
That's almost 170,000 cores fewer than the number 1 (AMD), and only 500 teraFLOPS less.
And it's 70,000 cores fewer than the number 3 (IBM), yet 200 teraFLOPS faster.
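For anyone who wants to check the parent's arithmetic, here is a quick sketch deriving per-core throughput from only the figures quoted above. The number 1 system's totals are inferred from the stated differences, so treat them as rough, not official Top500 numbers:

```python
# Per-core throughput comparison, using only the figures quoted in the
# parent comment. The #1 system's totals are inferred from the stated
# differences ("almost 170000 cores" more, "500 tera FLOPS" more).

nebulae_cores = 55_680
nebulae_tflops = 1271.0              # 1.271 petaFLOPS

top1_cores = nebulae_cores + 170_000
top1_tflops = nebulae_tflops + 500.0

def gflops_per_core(tflops, cores):
    return tflops * 1000 / cores

print(f"Nebulae: {gflops_per_core(nebulae_tflops, nebulae_cores):.1f} GFLOPS/core")
print(f"#1:      {gflops_per_core(top1_tflops, top1_cores):.1f} GFLOPS/core")
```

On these figures the Chinese machine delivers roughly three times the per-core throughput, which is what makes the core-count gap so striking.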

Re:By Processor (3, Informative)

TheRaven64 (641858) | more than 3 years ago | (#32406576)

It's especially interesting for two reasons. Firstly, because at that sort of scale interconnect throughput and latency can make a much bigger difference than processor speed. With HyperTransport, AMD has had a huge advantage over Intel here (IBM also uses HyperTransport). It looks like QPI might have eliminated that advantage. Beyond that, you have the supporting circuitry - you don't just plug a few thousand processors into a board and have them work, you need a lot of stuff to make them talk to each other without massive overhead.

The other interesting thing is that the Chinese are using Intel processors at all. I would have expected them to use Loongson 2F chips, or Loongson 3 if they were out in time. I'm not sure if Loongson wasn't up to the job, or if they had some other reason for using a foreign-designed chip.

Re:By Processor (1)

drinkypoo (153816) | more than 3 years ago | (#32406888)

The other interesting thing is that the Chinese are using Intel processors at all. I would have expected them to use Loongson 2F chips, or Loongson 3 if they were out in time. I'm not sure if Loongson wasn't up to the job, or if they had some other reason for using a foreign-designed chip.

Loongson has great TDP but isn't all that ballsy. If you're trying to do the job with less cores, it's not in the running. So what if something else takes twice the power? It's the people's money. Same as here.

Re:By Processor (2, Informative)

Pharmboy (216950) | more than 3 years ago | (#32406974)

Wikipedia [wikipedia.org] shows the highest-performing Loongson system before April scored a 1 teraflop peak, "about 350 GFLOPS measured by linpack in Hefei". From a read of the rest of the article, it sounds like they are focusing on performance per watt more than on being the fastest. Still pretty fast stuff, considering their newest system has 80 quads and a claimed peak of around 1 teraflop.

Re:By Processor (3, Informative)

Jeremy Erwin (2054) | more than 3 years ago | (#32407006)

What's even more interesting is that the nVidia chips that made Nebulae so fast seem to have escaped your notice.

SETI@HOME has 3 million or so nodes... (0, Insightful)

Anonymous Coward | more than 3 years ago | (#32406474)

Make the definition of "computer" just a bit looser and it probably could make the list.

The definition is already pretty damn loose.

Re:SETI@HOME has 3 million or so nodes... (-1, Troll)

Anonymous Coward | more than 3 years ago | (#32406540)

SETI, what a scam that is. Nothing more than a wheeze drummed up by power companies and Intel.

Re:SETI@HOME has 3 million or so nodes... (1)

dingen (958134) | more than 3 years ago | (#32406552)

Yeah, most "supercomputers" are distributed systems, just like SETI@Home. The only real difference between a traditional supercomputer and a network like SETI@Home is how spread out the nodes are and the amount of bandwidth between them.

I just can't stop thinking about a beowulf cluster of those!

Re:SETI@HOME has 3 million or so nodes... (4, Insightful)

TheRaven64 (641858) | more than 3 years ago | (#32406642)

Not even remotely true. The big difference is not the bandwidth between the nodes, it's the latency. Nodes in a supercomputer can exchange data in well under a millisecond. Nodes in SETI@Home can exchange information in a few hundred milliseconds. Don't think that's important? A single 2GHz core runs 200,000,000 cycles in the time that it takes to send a message between two relatively close SETI nodes. It executes closer to 200,000 instructions in the time that it takes to exchange data between two supercomputer nodes. This means that for things that are not embarrassingly parallel problems, a pair of supercomputer nodes will be up to 100 times faster than a pair of SETI nodes with identical processors. In practice, they won't spend all of their time communicating, so they'll probably only be ten times faster. Of course, when you scale this up to more than two nodes, the delays are increased a lot on a SETI-like system, so something using a few hundred nodes can be far more than only two orders of magnitude faster on a supercomputer.
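The cycle counts above follow directly from clock speed times latency; a minimal sketch, using the rough latencies named in the comment (~100 ms between internet nodes, well under 1 ms, here 0.1 ms, on a supercomputer interconnect):

```python
# Cycles a core spends idle during one message exchange, for the rough
# latencies given in the comment above. The 2 GHz clock is the comment's
# own example figure.

def cycles_lost(clock_hz, latency_s):
    return int(clock_hz * latency_s)

CLOCK = 2e9  # a 2 GHz core

print(cycles_lost(CLOCK, 0.1))    # internet hop: 200,000,000 cycles
print(cycles_lost(CLOCK, 1e-4))   # interconnect: 200,000 cycles
```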

LINPACK (2, Interesting)

ProdigyPuNk (614140) | more than 3 years ago | (#32406504)

I think this is the first benchmarking article I've read in years where the organizers actually know what their benchmark program does: http://www.top500.org/project/linpack [top500.org] . Refreshing to see real statistics (as good as they can make them), instead of the normal crap that is most hardware articles anymore.

I wonder what kind of score these beasts would get on 3DMark ?
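For the curious, LINPACK boils down to timing the solution of a dense system Ax = b and converting the elapsed time into FLOPS using the standard (2/3)n^3 + 2n^2 operation count. A toy single-node version in Python with NumPy, nothing like the tuned HPL code the Top500 actually runs:

```python
import time
import numpy as np

# Toy LINPACK-style measurement: time a dense solve of Ax = b and
# convert it to FLOPS using the (2/3)n^3 + 2n^2 operation count.
# Purely illustrative; the real benchmark is the tuned HPL code.

n = 1000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

t0 = time.perf_counter()
x = np.linalg.solve(A, b)
elapsed = time.perf_counter() - t0

flops = (2 / 3) * n**3 + 2 * n**2
print(f"{flops / elapsed / 1e9:.2f} GFLOPS (toy, single node)")

# Sanity-check the residual, as the real benchmark also does.
assert np.linalg.norm(A @ x - b) / np.linalg.norm(b) < 1e-8
```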

Re:LINPACK (0)

Anonymous Coward | more than 3 years ago | (#32406558)

When did "anymore" start to mean "these days"?

Re:LINPACK (0)

Anonymous Coward | more than 3 years ago | (#32408068)

The day you failed English?

Re:LINPACK (0)

Anonymous Coward | more than 3 years ago | (#32407110)

Just like any statistic, it only measures what it measures and it's open to abuse. There's been at least one top 10 supercomputer built around LINPACK dick-measuring to such an extent that it was nearly useless for getting any work done in the real world. Most of the time it's not taken to that much of an extreme, but there is obviously a stressing of LINPACK performance that wouldn't be there if it were not the ruler for performance.

Re:LINPACK (1)

asvravi (1236558) | more than 3 years ago | (#32408066)

Linpack is no benchmark. Let me know when any of them can begin to manage Adobe Flash.

- Steve J

Should Say "Top 500 Publicly-Acknowledged Supers" (5, Insightful)

cshbell (931989) | more than 3 years ago | (#32406518)

The list should more accurately be called, "Top 500 publicly-acknowledged supercomputers." You can go right on thinking that the US NSA, British MI6, and even some private industries (AT&T?) don't have vastly larger supers that are not publicly disclosed.

Re:Should Say "Top 500 Publicly-Acknowledged Super (0)

Anonymous Coward | more than 3 years ago | (#32406548)

The list should more accurately be called, "Top 500 publicly-acknowledged supercomputers." You can go right on thinking that the US NSA, British MI6, and even some private industries (AT&T?) don't have vastly larger supers that are not publicly disclosed.

Some of the Soho VFX houses in London have render farms that put the bottom half of this list to shame.

Re:Should Say "Top 500 Publicly-Acknowledged Super (1)

TheRaven64 (641858) | more than 3 years ago | (#32406670)

I doubt it. They may have more aggregate computing power, but they'd do badly on the benchmarks that the Top500 list runs, which depend on interconnect speed as well as raw processor throughput. Rendering is an intrinsically parallel problem. In the absolute worst case, you can render frames independently. If you are ray tracing, you can run each ray separately. Other image and object space partitioning schemes let you trivially parallelise other rendering strategies. This means that render farms typically buy fast computers, but connect them with cheap interconnect - often only GigE or similar. If you tried benchmarking them, the interconnect latency and throughput would be the bottleneck.
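The "render each frame independently" point is the definition of an embarrassingly parallel workload: workers never exchange data, so interconnect speed barely matters. A hypothetical sketch with Python's multiprocessing, where render_frame is just a stand-in for a real renderer:

```python
from multiprocessing import Pool

# Embarrassingly parallel rendering sketch: each frame is computed with
# no communication between workers, so interconnect latency is
# irrelevant. render_frame is a stand-in for a real renderer.

def render_frame(frame_no):
    # Pretend "rendering" is some self-contained per-frame computation.
    value = sum((frame_no * i) % 97 for i in range(10_000))
    return frame_no, value

if __name__ == "__main__":
    with Pool(4) as pool:
        frames = pool.map(render_frame, range(24))
    print(f"rendered {len(frames)} frames")
```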

Re:Should Say "Top 500 Publicly-Acknowledged Super (1)

Shinobi (19308) | more than 3 years ago | (#32407144)

Linpack doesn't stress interconnect by that much, however. But yes, there are quite a few systems not on that list.

Re:Should Say "Top 500 Publicly-Acknowledged Super (1)

Yaos (804128) | more than 3 years ago | (#32406702)

It depends on what you consider a supercomputer. If you have 100 systems running a single cluster for virtual machines, is that a supercomputer because all of the servers are working together? When you go to Google to search for something that goes to one of their datacenters, all of their systems are hooked together to allow very fast searching and serving of results. Is the system behind Google search a supercomputer?

Food? What food? (5, Interesting)

hcpxvi (773888) | more than 3 years ago | (#32406532)

Of the UK entries in this list, the first few are Hector (the national supercomputing facility), ECMWF, Universities, financial institutions etc. But there are also some labelled "Food industry". I wonder what I am eating that requires a supercomputer?

Re:Food? What food? (1)

sznupi (719324) | more than 3 years ago | (#32406644)

Simulations of chemical processes? Estimating future harvests and researching chemicals used in agriculture? I can't know if that's it, but there you go: some examples where it might be worthwhile.

Re:Food? What food? (3, Funny)

tivoKlr (659818) | more than 3 years ago | (#32406650)

Maybe they're using it to determine why anyone would eat Haggis [wikipedia.org] .

Re:Food? What food? (0)

Anonymous Coward | more than 3 years ago | (#32406674)

Because it's quite nice (like a meaty rice), and contains less unpleasant animal parts than an average beef burger.

Re:Food? What food? (1, Informative)

Anonymous Coward | more than 3 years ago | (#32407590)

Because it's delicious, seriously! Don't knock it till you've tried it. It's not conceptually much different from a big sausage, anyway.

Re:Food? What food? (0)

Anonymous Coward | more than 3 years ago | (#32407208)

Genetically modified food, of course!

Weather? (1)

ThrowAwaySociety (1351793) | more than 3 years ago | (#32410118)

Of the UK entries in this list, the first few are Hector (the national supercomputing facility), ECMWF, Universities, financial institutions etc. But there are also some labelled "Food industry". I wonder what I am eating that requires a supercomputer?

Weather simulation, perhaps? Weather has a huge impact on crop yields.

Or perhaps bioinformatics for genetic tinkering.

Re:Food? What food? (1)

CCarrot (1562079) | more than 3 years ago | (#32410124)

It's not what you are eating, but how they figure out how to sell their food to you. It takes some serious crunching to digest the enormous platter-fulls of data on consumer buying trends for pizza, based on age, geographical location, typical Google search histories, and reaction to percentage of red in existing pizza ads!

On the other hand, I must admit to being curious about what the 'perfect' pizza, matched exactly to me by one of the world's fastest computers, would actually taste like...mmmm...pizza...

Re:Food? What food? (1)

StormReaver (59959) | more than 3 years ago | (#32410556)

I wonder what I am eating that requires a supercomputer?

Doesn't the fast food industry use supercomputers to count the calories of its products, and to annually calculate the number of clogged arteries of its patrons?

Why do we keep giving China all these advantages? (0, Troll)

antifoidulus (807088) | more than 3 years ago | (#32406798)

Seriously, China is able to see a lot of the advancements made in the US through its army of grad students(the Chinese government essentially helps them cheat on all the tests they need to do well on in order to study in the US, they consider it to be in their national interests). Meanwhile China won't let a foreigner anywhere near their technology. Is it any surprise then that they are getting close to the top?

Re:Why do we keep giving China all these advantage (1, Insightful)

Anonymous Coward | more than 3 years ago | (#32406918)

Do you actually think that everything was and is invented in US? A man that doesn't know the history will lose the future.

Re:Why do we keep giving China all these advantage (0, Troll)

antifoidulus (807088) | more than 3 years ago | (#32407052)

Nope, but nothing innovative has come out of China since the communists took over. I don't even have problems with pretty much any other nation on earth. It's just China that steals technology en masse and then calls it its own. It's China that is trying to take over the world. It's China that is destroying the world economy.

That's why I boycott Chinese goods. I don't boycott any other nation's stuff, and actually I am better for it. Chinese goods are insanely shoddy. I tend to get much better quality when I buy Thai clothes, for instance, and the tiny bit more it costs me at purchase winds up costing much less in replacements in the end.

I paid a bit more for glasses that were made in Japan, and despite years of abuse and neglect they still work perfectly; I probably would have gone through about four $80 Chinese-made pairs for the one $120 Japanese pair. China makes garbage, and I would be willing to bet that their "entry" here is a forgery as well.

Largest Privately Owned Supercomputer? (2, Interesting)

Plekto (1018050) | more than 3 years ago | (#32406814)

I was curious if any privately owned(non-corporate or government) machines made the list, and where they placed.

Re:Largest Privately Owned Supercomputer? (1)

lrrosa (1424977) | more than 3 years ago | (#32407354)

I was curious if any privately owned(non-corporate or government) machines made the list, and where they placed.

No, botnets are not part of the list.

actual purpose (2, Interesting)

Iamthecheese (1264298) | more than 3 years ago | (#32406902)

In years past as many as 7 out of 10 officially listed computers were for security research. Now, contrary to the article, that's down to 2.

Jaguar -- general research (http://www.nccs.gov/computing-resources/jaguar/)
Roadrunner -- security research (http://www.lanl.gov/)
Kraken XT5 -- general research (National Institute for Computational Sciences/University of Tennessee)
Tianhe-1 -- unstated
Pleiades -- security research (nukes)

"Recently expanded to accommodate growing demand for high-performance systems able to run the most complex nuclear weapons science calculations, BGL now has a peak speed of 596 teraFLOPS. In partnership with IBM, the machine was scaled up from 65,536 to 106,496 nodes in five rows of racks; the 40,960 new nodes have double the memory of those installed in the original machine"

Intrepid -- General research
Ranger -- General research
Red Sky -- General research

It makes me wonder whether the machines for nuclear research went underground, or maybe it just doesn't take a top-ranking supercomputer to calculate a nuclear explosion anymore.

Re:actual purpose (1)

Vectormatic (1759674) | more than 3 years ago | (#32407142)

Perhaps nuke simulations have indeed reached a level where more crunching power isn't worth it anymore: why build a completely new system to do a blast sim if your existing machine does it in two days? Perhaps there isn't a market for more than X blast simulations per year.

Anyway, WOW, 40,960 NEW nodes... If every BGL node is a single U of rackspace, then even ignoring network/UPS/etc requirements, that means adding 1,000 racks to the already existing ~1,500...
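The rack estimate checks out under its stated assumption; a back-of-envelope sketch (the 1U-per-node premise is the comment's own, and real BlueGene racks actually pack nodes far more densely):

```python
# Back-of-envelope rack count under the comment's assumption that each
# node occupies 1U of rack space. Real BlueGene hardware packs many
# nodes per rack, so this is an upper bound, not the actual layout.

new_nodes = 40_960
units_per_rack = 42          # a common full-height rack

racks = -(-new_nodes // units_per_rack)  # ceiling division
print(racks)  # 976 racks, in line with the "~1,000 racks" estimate
```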

Re:actual purpose (2, Informative)

zeldor (180716) | more than 3 years ago | (#32407498)

Pleiades isn't nukes, it's NASA: airplanes and weather. Of the others, some are nukes and some are open unclassified uses (NOAA/NSF/etc.).

Re:actual purpose (3, Interesting)

rdebath (884132) | more than 3 years ago | (#32408450)

As I understand it, most of the nuclear research simulations that it would be nice to run simply cannot be done on any modern machine. A few particles can be simulated on a laptop, but the interesting interactions require simulating millions or billions of points, with every single one of them influencing every other one in the simulation.

As a simple example, a genetic algorithm was once used to program a reconfigurable FPGA chip. A layout was grown on the chip that did the job but broke just about every rule of FPGA design. There were parts of the layout that were not connected to any circuit, yet removing them made the device fail to work, and transferring the layout to a different chip got you a non-working circuit. It would be great to be able to simulate this... not a chance; it's too big, by so very many orders of magnitude.

http://www.netscrap.com/netscrap_detail.cfm?scrap_id=73 [netscrap.com]
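The evolutionary loop behind that FPGA experiment is easy to sketch. Here is a toy genetic algorithm evolving a bit string toward all-ones; in the real experiment the fitness function was measured on live hardware, which is exactly why the evolved layouts exploited analog quirks no simulator captured. All names and parameters here are illustrative:

```python
import random

# Toy genetic algorithm: evolve a bit string toward all-ones.
# The FPGA experiment used the same select/crossover/mutate loop,
# but scored fitness on a physical chip instead of this trivial
# scoring function.

random.seed(42)
GENOME, POP, GENS = 64, 50, 200

def fitness(g):
    return sum(g)

def mutate(g, rate=0.02):
    # Flip each bit independently with probability `rate`.
    return [b ^ (random.random() < rate) for b in g]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                      # truncation selection
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(POP - len(parents))]

best = max(pop, key=fitness)
print(fitness(best), "/", GENOME)
```

Because the parents are carried over unmutated, the best fitness never decreases, the same elitism trick most practical GA runs rely on.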

I don't understand your numbers (1)

pavon (30274) | more than 3 years ago | (#32408956)

Are you counting the entire list of computers or just the top 10? Is the first list supposed to be ones used for security research and the second for general research? If so, Red Sky and possibly others are used for security research.

The change is that most super computers at the national laboratories are not single-use, and are thus listed as general research even if they spend a large proportion of their cycles on security research.

Sure they're fast.. (-1, Redundant)

Anonymous Coward | more than 3 years ago | (#32407424)

but can it run Crysis?

Treemap (1)

ISoldat53 (977164) | more than 3 years ago | (#32407638)

The sidebar about treemaps is as interesting as the main article. An interesting way to display complex data in a compact form.

Interesting... (2, Funny)

CCarrot (1562079) | more than 3 years ago | (#32408550)

"It's measured against a theoretical benchmark - if you ran a real-world application you might get a very different answer".

Next bulletin:

"Vista-based benchmark testing complete - converts Jaguar to big pussycat"

;o)

And Linux passes the 90% mark (1, Informative)

Anonymous Coward | more than 3 years ago | (#32408842)

"Linux family" operating systems went from 89% in the previous list to 91% of this one [top500.org] .

Not that the field wasn't already dominated, but it's an interesting milestone. (FWIW, Linux passed 75% in 2006-11, 50% in 2004-06, and 25% in 2003-06.)

chinks... (0)

Anonymous Coward | more than 3 years ago | (#32409456)

well that puts a chink in the argument that the west owns the super computer market.
