
100 Million-Core Supercomputers Coming By 2018

timothy posted more than 4 years ago | from the super-commuters-the-year-after dept.

Supercomputing 286

CWmike writes "As amazing as today's supercomputing systems are, they remain primitive and current designs soak up too much power, space and money. And as big as they are today, supercomputers aren't big enough — a key topic for some of the estimated 11,000 people now gathering in Portland, Ore. for the 22nd annual supercomputing conference, SC09, will be the next performance goal: an exascale system. Today, supercomputers are well short of an exascale. The world's fastest system at Oak Ridge National Laboratory, according to the just released Top500 list, is a Cray XT5 system, which has 224,256 processing cores from six-core Opteron chips made by Advanced Micro Devices Inc. (AMD). The Jaguar is capable of a peak performance of 2.3 petaflops. But Jaguar's record is just a blip, a fleeting benchmark. The US Department of Energy has already begun holding workshops on building a system that's 1,000 times more powerful — an exascale system, said Buddy Bland, project director at the Oak Ridge Leadership Computing Facility that includes Jaguar. The exascale systems will be needed for high-resolution climate models, bio energy products and smart grid development as well as fusion energy design. The latter project is now under way in France: the International Thermonuclear Experimental Reactor, which the US is co-developing. They're expected to arrive in 2018 — in line with Moore's Law — which helps to explain the roughly 10-year development period. But the problems involved in reaching exaflop scale go well beyond Moore's Law."


FIRST POST FOR THE ALL NEW GNAA!!! (-1, Troll)

Anonymous Coward | more than 4 years ago | (#30119364)

VISIT US AT IRC.HARDCHATS.COM #GNAA

fcjioedfj

Filter error: Don't use so many caps. It's like YELLING.

100 Million? (3, Funny)

Itninja (937614) | more than 4 years ago | (#30119420)

Can't we just start calling this a 'supercore' or something? When the numbers get that high it kind of goes beyond what most people can visualize. Like describing how hot the Sun is....let's just say it's "exactly 1 Sun hot".

Re:100 Million? (2, Insightful)

MozeeToby (1163751) | more than 4 years ago | (#30119550)

How about 1 million cores being a mega-core. So the proposed supercomputer would be a 100 mega-core computer.

Re:100 Million? (4, Funny)

Yvan256 (722131) | more than 4 years ago | (#30119674)

Let's just make sure it's 1 000 000 cores and not 1 048 576 cores... let's not make that mistake again.

Re:100 Million? (3, Funny)

_KiTA_ (241027) | more than 4 years ago | (#30119786)

Let's just make sure it's 1 048 576 cores and not 1 000 000 cores... let's not make that mistake again.

Re:100 Million? (4, Insightful)

Yvan256 (722131) | more than 4 years ago | (#30119828)

Just because CS has been abusing a system for over four decades doesn't make it right.

Re:100 Million? (0)

Anonymous Coward | more than 4 years ago | (#30120056)

Loggers don't measure in yards, they measure in board-feet.

Why can't computer science use a measurement that's based on powers of 2, which is exactly what they work in?

Re:100 Million? (1)

oldspewey (1303305) | more than 4 years ago | (#30120214)

When loggers measure in board feet, they call them board feet not yards. When drive makers (or whoever else) measure in kibibytes [wikipedia.org], they call them kilobytes not kibibytes.

Re:100 Million? (2)

Mikkeles (698461) | more than 4 years ago | (#30120622)

'When loggers measure in board feet, they call them board feet not yards....'

Board-feet measure volume; yards measure length.

Re:100 Million? (1)

Yvan256 (722131) | more than 4 years ago | (#30120244)

It's not about using a power of 10 vs power of 2, it's about using the SI units for the larger units. A kilo means 1000, not 1024.

We're at a point where we have hard drive manufacturers getting sued by users who are confused by all this damn mess.

Here's some information about the subject [wikipedia.org] .
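The gap the thread is arguing about is easy to see in a few lines of Python; a quick sketch (the "500 GB" drive is an illustrative figure, not from the article):

```python
# Decimal (SI) vs binary (IEC) prefixes for the same byte count.
# A drive marketed as "500 GB" (giga = 1000**3) shrinks when the OS
# reports it in binary gibibytes (gibi = 1024**3) -- the mismatch
# behind the lawsuits mentioned above.
marketed = 500 * 1000**3          # 500 GB as the manufacturer counts it
in_gib = marketed / 1024**3       # the same bytes in binary units

print(f"{in_gib:.2f}")  # 465.66 -- what the OS shows for a "500 GB" drive
```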

Re:100 Million? (2, Interesting)

aztracker1 (702135) | more than 4 years ago | (#30120356)

Far more computer science types wind up working with money (base-10) than anything base-2 or base 16.

Re:100 Million? (0)

Anonymous Coward | more than 4 years ago | (#30120486)

Loggers don't measure in yards, they measure in board-feet.

Oh, that's where log4j is going wrong then ... it doesn't even log distance :-(

Re:100 Million? (0)

Anonymous Coward | more than 4 years ago | (#30120504)

+1, Agreed.

Having multiple conflicting definitions of a term renders that term meaningless.

address lines (1)

conspirator57 (1123519) | more than 4 years ago | (#30120762)

if speed is the goal, then it needs to be a power of two number of cores so that you don't have to implement logic checking for a valid core address. That logic would eat performance from every action performed by the machine. So, until you develop affordable decimal logic hardware implementations that can scale in size the way the binary logic does, we're gonna keep making computers that work fast the way we do now and it's gonna involve powers of 2. And get off my lawn.
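The parent's point can be sketched in a few lines (the core counts here are illustrative): with a power-of-two count, mapping an address onto a valid core is a single bitwise AND, while any other count needs a full modulo or validity check.

```python
# Routing to one of N cores: if N is a power of two, "addr mod N"
# reduces to a one-AND bit mask; for N = 1,000,000 it needs real
# divide/modulo logic on every routed message.
N_BINARY = 1 << 20            # 1,048,576 cores (power of two)
MASK = N_BINARY - 1

def route_binary(addr):
    return addr & MASK        # cheap in hardware: a single AND

def route_decimal(addr, n=1_000_000):
    return addr % n           # expensive: full modulo

addr = 0xDEADBEEF
assert route_binary(addr) == addr % N_BINARY   # AND and mod agree for powers of 2
```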

Re:100 Million? (1)

Jeremy Erwin (2054) | more than 4 years ago | (#30119600)

The core? The surface? The corona?

Re:100 Million? (2, Insightful)

Anonymous Coward | more than 4 years ago | (#30119604)

Just because the number's "effect" on you diminishes as it goes up doesn't mean it isn't still significant. There's a reason engineers use quantitative instead of qualitative.
How do you tell the difference between hot and really hot or really really hot?

Really.

How about the difference between 10, 20 and 30?

10

Which gives you more information?

Re:100 Million? (1)

Yvan256 (722131) | more than 4 years ago | (#30119694)

The definition of "supercomputer" changes as time goes by. Today's cellphones are yesterday's supercomputers.

Re:100 Million? (0)

Anonymous Coward | more than 4 years ago | (#30120108)

The definition of "supercomputer" changes as time goes by. Today's cellphones are yesterday's supercomputers.

Weird. My cell phone today is the same as my cell phone yesterday. And don't even get me talking about my so-called "supercomputer" tomorrow.

Re:100 Million? (1)

Stargoat (658863) | more than 4 years ago | (#30119944)

I suspect that we will end up calling this a heuristically designed processor. Or something similar....

Speaking of heat (5, Funny)

ArbitraryDescriptor (1257752) | more than 4 years ago | (#30119960)

I am currently accepting investors to help build a one billion core supercomputer to create high resolution climate models that take into account the waste heat from a 100 million core supercomputer making a high resolution climate model.

(Seriously, how much heat is that thing going to put out?)

Re:Speaking of heat (0)

Anonymous Coward | more than 4 years ago | (#30120524)

I was reading on wikipedia that MSoft sold 400 million copies of windows XP.
This doesn't incude the pirated ones. Thus this means that there are 400 million PCs in the world. If everyone had the newest PC with 6 core athlons, there would be 2400 million cpus in the world. So we've already met the 100 mil core limit time 24!!! Thus, exaflop has already been reached. What's the next one one? Hmmm, TSA can decode all the emails everyone sends out, including the millions upon millions of spam all the faster..... Thus the govs supercomputer is sure one glorified spam checker.

Re:100 Million? (1)

Stupid McStupidson (1660141) | more than 4 years ago | (#30119986)

Fuck everything, we're going 200 million cores

Re:100 Million? (1, Informative)

Archangel Michael (180766) | more than 4 years ago | (#30120122)

One Million Cores and one Sun hot mentioned in the same post, coincidence? I think not!

Re:100 Million? (1, Funny)

Anonymous Coward | more than 4 years ago | (#30120182)

Can you translate that in "Library of Congress's"?

Re:100 Million? (1)

TheKidWho (705796) | more than 4 years ago | (#30120742)

Well, what was missing from the article summary is that this computer is going to be built using nVidia GPUs, not CPUs for the majority of computing...

Although really, with the way Fermi is shaping up, it is turning into a very specialized CPU.

Who's President, Future-boy? (4, Insightful)

pete-classic (75983) | more than 4 years ago | (#30119430)

As amazing as today's supercomputing systems are, they remain primitive

Wait, what? You lost me. Are you from the future? How can you describe the state of the art as "primitive"?

-Peter

Re:Who's President, Future-boy? (1)

NoYob (1630681) | more than 4 years ago | (#30119616)

As amazing as today's supercomputing systems are, they remain primitive

Wait, what? You lost me. Are you from the future? How can you describe the state of the art as "primitive"?

-Peter

Oh, that's silly! He's psychic, of course. He can SEE into the future!

Re:Who's President, Future-boy? (2, Funny)

Yvan256 (722131) | more than 4 years ago | (#30119658)

Forget the president, ask for the winning lottery numbers for the next 20 years!

Re:Who's President, Future-boy? (5, Interesting)

mcgrew (92797) | more than 4 years ago | (#30119870)

My cell phone is a supercomputer. At least, it would have been if I'd had it in 1972. Rather than being from the future, he, like me, is from the past, living in this science fiction future full of fantasy stuff like doors that open by themselves, rockets to space, phones that need no wires and fit in your pocket, computers on your desk, ovens that bake a potato in three minutes without the oven getting hot, flat screen TVs that aren't round at the corners, eye implants that cure nearsightedness, farsightedness, astigmatism and cataracts all at once, etc.

Back when I was young it didn't seem primitive at all. Looking back, GEES. When you went to the hospital they knocked you out with automotive starting fluid and left scars eight inches wide. These days they say "you're going to sleep now" and you blink and find yourself in the recovery room, feeling no pain or nausea with a tiny scar.

We are indeed living in primitive times. Back in the 1870s a man quit the Patent office on the grounds that everything useful had already been invented. If you're young enough you're going to see things that you couldn't imagine, or at least couldn't believe possible.

Sickness, pain, and death. And Star Trek. [slashdot.org]

Re:Who's President, Future-boy? (1)

aztracker1 (702135) | more than 4 years ago | (#30120412)

No nausea? WTF, I've gone through two surgeries where they put me out in the past 5 years, I was nauseous after both of them.

Re:Who's President, Future-boy? (1)

Z00L00K (682162) | more than 4 years ago | (#30119978)

You can still predict that some tech is primitive.

When a computer develops a mind of its own in a logical manner, it's starting to reach the human level and we can start to discuss whether it's primitive or not. If it starts to reproduce on its own, it's time to be careful.

Re:Who's President, Future-boy? (1)

aztracker1 (702135) | more than 4 years ago | (#30120436)

Sarah Conner?

Re:Who's President, Future-boy? (5, Insightful)

David Greene (463) | more than 4 years ago | (#30120076)

Wait, what? You lost me. Are you from the future? How can you describe the state of the art as "primitive"?

Pretty easily, actually. There are lots of problems to solve, not the least of which is programming model. We're still basically using MPI to drive these machines. That will not cut it on a 100-million core machine where each socket has on the order of 100 cores. MPI can very easily be described as "primitive," as well as "clunky," "tedious" and "a pain in the ***."

How do we checkpoint a million-core program? How do we debug a million-core program? We are in the infancy of computing.
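For readers who haven't touched it: the MPI style boils down to ranks that share nothing and communicate only by explicit messages. A toy scatter/reduce in that spirit, sketched with Python's stdlib multiprocessing standing in for real MPI (an analogy, not actual MPI code):

```python
# Each "rank" owns its own slice of the data and communicates only
# through explicit channels (here, the map/return values) -- the
# shared-nothing model that gets clunky at millions of ranks.
from multiprocessing import Pool

def rank_work(rank):
    # local computation on this rank's chunk of 0..399
    return sum(range(rank * 100, (rank + 1) * 100))

if __name__ == "__main__":
    with Pool(4) as pool:
        partials = pool.map(rank_work, range(4))   # scatter across ranks
    print(sum(partials))                           # reduce: 79800
```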

Re:Who's President, Future-boy? (2, Funny)

vtcodger (957785) | more than 4 years ago | (#30120388)

***How do we debug a million-core program?***

What is this "debugging" thing you speak of? If you are asking how we will test software for a million core system, we'll do it the same way we always have. We'll get a trivial test case to run once, then we'll ship.

Re:Who's President, Future-boy? (0)

Anonymous Coward | more than 4 years ago | (#30120286)

-Peter

Peter! Hey how's it going! We have been looking for you to help out this poor unfortunate guy Paul over here. Ok truth is.. there are millions of Pauls I hope you don't mind spreading your wealth around.

You sign your posts manually? (0)

Anonymous Coward | more than 4 years ago | (#30120350)

What's it like to be a primitive caveape in the modern world?

Re:Who's President, Future-boy? (1)

pete-classic (75983) | more than 4 years ago | (#30120438)

Wow, a bunch of people didn't get what I thought was a simple point.

I understand that the current state of supercomputing will seem primitive at some point in the future. In fact, my post is predicated on that notion.

But words mean things. At some point in the future everything about our current state of culture and technology will seem primitive. Describing the current state-of-the-art as primitive is meaningless. That approach can be applied to any topic equally.

Let me illustrate by counter-example. "The practice of medicine in parts of sub-Saharan Africa remains primitive." See how I'm creating a contrast that conveys meaning? Such a contrast only exists in the summary if the author has some frame of reference extending into the future.

Also, I was making a Back to the Future reference.

*shrug*

-Peter

Are all your cores used on your desktop? (1)

Ilgaz (86384) | more than 4 years ago | (#30120616)

Check Wikipedia for "thinking machines" and "transputer", and if you have more than one CPU/core, launch a game and see whether all cores are used effectively without massive additional work from the game publisher.

Technology is primitive; even a billion-processor machine doesn't save it from being primitive. It's the software, at least.

Re:Who's President, Future-boy? (1)

Jekler (626699) | more than 4 years ago | (#30120642)

I think of our supercomputing systems as primitive in the same way that cavemen wouldn't end up with a rocket thruster just by throwing enough logs on a fire.

Without more advanced software designs and some type of revolutionary system architecture, adding more cores ends up being only slightly better than linear progression. They're primitive in that our supercomputers are seldom more than the sum of their parts.

yeah but yeah but (0)

Anonymous Coward | more than 4 years ago | (#30119496)

my old trusty VIC20 ftw

Sorry - I can't help myself (0)

RPGonAS400 (956583) | more than 4 years ago | (#30119596)

Can You Imagine a Beowulf Cluster of These?

Re:Sorry - I can't help myself (3, Funny)

sherpajohn (113531) | more than 4 years ago | (#30119730)

Only if they run Linux and can render Natalie Portman covered in hot grits faster than my imagination already does....woohoo!

Re:Sorry - I can't help myself (1)

Narpak (961733) | more than 4 years ago | (#30120002)

Only if they run Linux and can render Natalie Portman covered in hot grits faster than my imagination already does....woohoo!

Your imagination must be quite primitive. Mine did that instantly upon reading your post.

Re:Sorry - I can't help myself (0)

Anonymous Coward | more than 4 years ago | (#30119750)

Can You Imagine a Beowulf Cluster of These?

/me wanks furiously

Limits on simulation. (4, Interesting)

140Mandak262Jamuna (970587) | more than 4 years ago | (#30119614)

The programming techniques and mathematical formulations needed to take advantage of such a very large number of processors continue to be the main stumbling blocks. Some kinds of simulations parallelize naturally. Time-accurate fluid flow simulation, for example, is very easy to parallelize, and technically you can devote a processor to each element and do time marching nicely. But not all physics problems are amenable to parallelization. Further, even in the nice cases like fluid flow, if one tries to do solution-adaptive meshing, non-uniform grids, etc., the time step slows down so much that the simulation takes too long even on a 100 million processor machine.

The CFL condition that limits the maximum time step one can take shows no sign of relenting. Score has been Courant (the C in CFL) 1, Moore 0 for the last three decades.
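The CFL condition the parent mentions is simple to state: the time step must satisfy dt <= C * dx / u, so refining the mesh shrinks the allowed step. A quick illustration (the numbers are arbitrary):

```python
# CFL stability limit: dt <= courant * dx / u.  Halving the cell size
# halves the permitted time step, so adaptive refinement multiplies the
# number of time steps as well as the number of cells.
def max_timestep(dx, u, courant=1.0):
    return courant * dx / u

coarse = max_timestep(dx=1.0, u=340.0)   # e.g. u = speed of sound, m/s
fine = max_timestep(dx=0.5, u=340.0)
assert abs(fine - coarse / 2) < 1e-15    # half the cell, half the step
```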

How many problems can these systems really solve? (3, Insightful)

wondi (1679728) | more than 4 years ago | (#30119624)

All this effort at creating parallel computing ends up solving very few problems. HPC has been struggling with parallelism for decades, and no easy solutions have been found yet. Note that these computers are aimed at solving a particular problem (e.g. modeling weather) and not at being a vehicle to quickly solve any problem. When comparable multi-processing capacity is in your cell phone, what are you going to do with it?

Re:How many problems can these systems really solv (2, Funny)

Dgtl_+_Phoenix (1256140) | more than 4 years ago | (#30119822)

When the comparable multi-processing capacity is in your cell phone, what are you going to do with it?

Stream high definition porn... duh.

Re:How many problems can these systems really solv (2, Informative)

David Greene (463) | more than 4 years ago | (#30120006)

Note that these computers are aimed at solving a particular problem (e.g. modeling weather) and not at being a vehicle to quickly solve any problem.

That's not entirely accurate. HPC systems are designed to solve a class of problems. That's not the same thing as a "particular" problem. Jaguar has, in fact, solved many different problems, including fluid flow, weather, nuclear fusion and supernova modeling. It's not going to run Word any faster than your PC but that's not what you buy a supercomputer to do.

Re:How many problems can these systems really solv (4, Funny)

Again (1351325) | more than 4 years ago | (#30120204)

That's not entirely accurate. HPC systems are designed to solve a class of problems. That's not the same thing as a "particular" problem. Jaguar has, in fact, solved many different problems, including fluid flow, weather, nuclear fusion and supernova modeling. It's not going to run Word any faster than your PC but that's not what you buy a supercomputer to do.

So you're saying that OpenOffice would still take forever to start.

Re:How many problems can these systems really solv (0)

Anonymous Coward | more than 4 years ago | (#30120106)

These computers will be good for solving problems involving lots of independent operations. A processor can process one operation at a time, but since these operations do not depend on each other, the operations can be sent to several processors at once. Imagine a big foreach loop, for example.

Or maybe someone will want to have 100 million Google Chrome tabs open.

I'm personally imagining putting dnetc on one of these things.

Re:How many problems can these systems really solv (1)

2obvious4u (871996) | more than 4 years ago | (#30120176)

Parallel computing is great for solving NP-Complete problems. If you have enough cores for every possible solution you can have all possible paths process at the same time and compare the results.
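One core per candidate does parallelize brute-force search perfectly, though it's worth noting the candidate count (and thus the core count) grows exponentially with problem size. A toy subset-sum search over all 2**6 candidates (the numbers are made up for illustration):

```python
# Subset-sum by exhaustive search: every candidate subset is checked
# independently, so each check could run on its own core -- but n items
# means 2**n candidates, which outruns any machine quickly.
from itertools import product

NUMS = [3, 34, 4, 12, 5, 2]
TARGET = 9

def check(mask):
    # the work one core would do: test a single candidate subset
    return sum(n for bit, n in zip(mask, NUMS) if bit) == TARGET

candidates = product([0, 1], repeat=len(NUMS))  # 2**6 = 64 of them
print(any(map(check, candidates)))  # True: e.g. 3 + 4 + 2 == 9
```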

Re:How many problems can these systems really solv (1)

T Murphy (1054674) | more than 4 years ago | (#30120644)

I'm always wary of making an infamous "50 MB of memory is all you'll ever need" type of claim, so I like to believe that we'll figure out how to use greater processing power by the time it gets here. We haven't had too much trouble with that so far. As far as actual use, if we ever get products like Morph (http://www.youtube.com/watch?v=IX-gTobCJHs [youtube.com] ), there might be a need for massively parallel processing. At the very least, such computing power would likely be needed to make such products.

Why 100 million processors? (4, Funny)

140Mandak262Jamuna (970587) | more than 4 years ago | (#30119638)

Technically, shouldn't 640K processors be enough for every one?

Re:Why 100 million processors? (3, Funny)

Yvan256 (722131) | more than 4 years ago | (#30119798)

It is, if we're talking about cloud processors for running vaporware.

Re:Why 100 million processors? (1)

aztracker1 (702135) | more than 4 years ago | (#30120500)

VMWare is now renaming its company to "CloudWare", which will be optimised for use with Intel's new Vapor® processor line.

Portland, Ore? (0, Flamebait)

Anonymous Coward | more than 4 years ago | (#30119642)

What's "Portland, Ore"?

Oh, you mean "Portland, Oregon"? This is a website. It isn't fucking Twitter or SMS, how hard is it to write three more letters?

We know that Slashdot is U.S.A.-centric so we'll forgive the missing "Portland, Oregon, U.S.A." part, but for crying out loud, at least write the whole state name.

Re:Portland, Ore? (0)

Anonymous Coward | more than 4 years ago | (#30120148)

Why 3 letters? the standard abbreviation for a state is 2 letters, in this case OR

Ok, Ok, we'll be more specific... (1)

raftpeople (844215) | more than 4 years ago | (#30120624)

"Portland, Oregon, U.S.A., Earth, Milky Way, Cluster TXH-170718, Universe 01 (we think)"

Oink, oink (1, Insightful)

Animats (122034) | more than 4 years ago | (#30119698)

The exascale systems will be needed for high-resolution climate models, bio energy products and smart grid development as well as fusion energy design.

Sounds like a pork program. What are "bio energy products", anyway? Ethanol? Supercomputer proposals seem to come with whatever buzzword is hot this year.

It's striking how few supercomputers are sold to commercial companies. Even the military doesn't use them much any more.

Re:Oink, oink (0)

Anonymous Coward | more than 4 years ago | (#30120088)

Sounds like you're full of bullshit.

Re:Oink, oink (2, Insightful)

David Greene (463) | more than 4 years ago | (#30120206)

Sounds like a pork program. What are "bio energy products", anyway. Ethanol?

I'm no expert on this, but I would guess the idea is to use the processing power to model different kinds of molecular manipulation to see what kind of energy density we can get out of manufactured biological goo. Combustion modeling is a common problem solved by HPC systems. Or maybe we can explore how to use bacteria created to process waste and give off energy as a byproduct. I don't know, the possibilities are endless.

It's striking how few supercomputers are sold to commercial companies. Even the military doesn't use them much any more.

Define "supercomputer." Sony uses them. So does Boeing. The auto industry uses clusters to model crashes, but I believe that's more limited by the design of the off-the-shelf software than anything. They could certainly run on supercomputer-class machines if the vendors ported them.

And the military uses them a lot. Much of the DOE research done on these machines is probably defense-driven.

Re:Oink, oink (1)

vtcodger (957785) | more than 4 years ago | (#30120452)

Isn't quantum computing supposed to solve all these problems without need for a zillion cores? Or have I latched onto the wrong panacea here?

Re:Oink, oink (1)

T Murphy (1054674) | more than 4 years ago | (#30120714)

These systems cost a lot- it might take buzzwords to get politicians to buy into them and fund these sorts of projects. Even so, many energy projects are important to pour more research into, even if such projects often get watered down to a single misleading buzzword.

AMD vs Intel (1)

teko_teko (653164) | more than 4 years ago | (#30119704)

It's interesting that 4 of the top 5 supercomputers are running AMD, while 402 of the Top500 are running Intel.

What's the cause of this? Value? Energy-saving? Performance?

Re:AMD vs Intel (4, Informative)

Eharley (214725) | more than 4 years ago | (#30119854)

I believe AMD was the first mass market CPU to include an on-board memory controller.

Re:AMD vs Intel (1)

jamzo (116612) | more than 4 years ago | (#30120168)

It's probably because of HyperTransport and floating point computation performance. I think HyperTransport made it easier for supercomputer vendors like Cray to build better interconnects, and traditionally Opterons were a bit better at floating point ops. Also, Opterons have 64KB L1 caches where I think comparable Intel processors had 32KB L1 caches. But this was all a couple of years ago ... the next generation of the fastest supercomputers will probably be Intel based.

Re:AMD vs Intel (0)

Anonymous Coward | more than 4 years ago | (#30120354)

Not energy saving, since the Cell-based ones are clearly on top in performance/Watt. Note that number 2 is actually an AMD/Cell hybrid where the majority of flops are provided by Cell. It has 2/3 the performance of number 1 but 1/3 the power consumption. And the x86-based one with the same power consumption has half the flops.

I'm curious to see what it will be in 1 year, since Power7 might be a serious contender. (Power6 has been IBM's Pentium IV, very high clock speed for limited performance and performance/Watt, but first Power7 early benchmarks look much better).

Power7 blades will be more expensive but the Power7 also has in theory more memory bandwidth, scales better to a larger number of threads per memory coherency domain, and may have much better performance/Watt. In this case, the larger upfront costs may not be decisive (especially when the difference is counted in MW, evacuating them has an impact on the infrastructure).

Re:AMD vs Intel (3, Informative)

confused one (671304) | more than 4 years ago | (#30120464)

I'd be guessing but here are three possible reasons AMD might be in that place:
1.) Value, ie. lower cost per processor
2.) Opteron has built in straight forward 4-way and 8-way multiprocessor connectivity, Xeon was limited to 2-way connectivity without extra bridge hardware, until recently.
3.) Opteron has higher memory bandwidth than P4 or Core 2 arch.

Re:AMD vs Intel (2, Informative)

hattig (47930) | more than 4 years ago | (#30120636)

Easy CPU upgrades, because the socket interface stays the same.

Some of those supercomputers might have gone from dual-core 2GHz Opteron K8s through quad-core Opteron K10s to these new six-core Opteron K10.5s with only the need to change the CPUs and the memory.

Or possibly if the upgrades were done at a board level, HyperTransport has remained compatible, so your new board of 24 cores just slots into your expensive, custom, HyperTransport-based back-end. To switch to Intel would require designing a QPI-based back-end.

Of course Magny-Cours and Bulldozer will use the G34 socket, so that's not a plug-in and go upgrade when they come out in 2010 and 2011 respectively. But it will be a stable platform for several years itself, and thus be attractive.

Re:AMD vs Intel (0)

Anonymous Coward | more than 4 years ago | (#30120648)

As much as I think AMD are a bunch of crybabies, they made a wise decision early on with Hypertransport and with the IMC. They had a solution that worked well from the desktop on up to HPC, while Intel mainly targeted the desktop up through 2P systems. They certainly had solutions for 4P and above, but none were as elegant as the Opterons were for that space.

Note that with Nehalem (and the EX version) this will change and Opteron is no longer compelling in, well, any space other than on a price/performance basis possibly.

And Sarah Palin Will Be Elected (-1, Troll)

Anonymous Coward | more than 4 years ago | (#30119710)

President in 2012.

I guess Slashdot has become Limbaughized.

Yours In Petrograd,
Kilgore T.

Why build this monstrosity? (4, Funny)

140Mandak262Jamuna (970587) | more than 4 years ago | (#30119716)

We know what answer it is going to give. 42. Save the money.

Re:Why build this monstrosity? (2, Funny)

thewils (463314) | more than 4 years ago | (#30120228)

That's the answer though. They're building this thing to find out what the question was.

The Jaguar? (2, Interesting)

Yvan256 (722131) | more than 4 years ago | (#30119748)

The Jaguar is capable of a peak performance of 2.3 petaflops.

The first Jaguar [wikipedia.org] was a single megaflop.

Partly a software problem. Erlang? (1, Interesting)

mcrbids (148650) | more than 4 years ago | (#30119780)

We're still at the point where unthreaded languages (like PHP) are still viable. For example, we use PHP in a complex, multi-server, multi-core cluster, and its "share nothing" approach scales quite nicely, in that having more and more users hitting the system on separate servers doesn't really cause a problem, since there's virtually no cross-communication going on.

But there's a scalability limit in what you can do "PER PROCESS". There are some very processor-intensive functions that simply take a while (such as rendering a 100 page report, then converting to PDF), and there's currently no way to spread the load in PHP beyond a single core.

At the other extreme, we have almost the same problem: with such a large number of cores, sharing resources among threads and processes is really no longer feasible.

Languages like Erlang take a "share nothing" approach not at the process/thread level but at the function level. Individual functions within a process are themselves "share nothing" and thus can easily scale across multiple cores, processors, and servers in a networked cluster. (At least, this is the theory.)

So how 'bout it, folks? Where are the benchmarks showing how languages DESIGNED to take advantage of parallel processors and clusters actually scale up in the real world? Is Erlang the cat's meow when discussing systems of this scale?

I'm not expecting to see my example process (100 page PDF reports) scale up smoothly to 250,000 cores, but I sure would like to see it scale up smoothly to a dozen or two!

Re:Partly a software problem. Erlang? (2, Informative)

Anonymous Coward | more than 4 years ago | (#30120038)

What on Earth? You're bringing PHP and "rendering PDF reports" into a discussion about HPC? And you propose Erlang as some kind of solution? Nobody doing HPC is using Erlang. As usual for Slashdot, you have absolutely no idea what you're talking about.

Re:Partly a software problem. Erlang? (1)

mcrbids (148650) | more than 4 years ago | (#30120552)

You're bringing PHP and "rendering PDF reports" into a discussion about HPC?

Yes. Because in a few years, hardware comparable to what is now "HPC" will be routine. We've already jumped from unicore, uniprocessor servers to almost a hundred cores. Just a decade or so ago, a 100-core computing cluster was HPC, even if not near the "top 500".

And you propose Erlang as some kind of solution? Nobody doing HPC is using Erlang.

But... Erlang was designed for scalability! It was DESIGNED to smoothly scale from a unicore to multicore to LOBOS style computing. If Erlang isn't a "player" in the HPC space, why the !@# not? And if it's not a player, what is, and what do I need to do to transition to it over the next decade or so?

As usual for Slashdot, you have absolutely no idea what you're talking about.

Since you posted this on Slashdot... (I'll put the mirror away)

Re:Partly a software problem. Erlang? (1)

vlm (69642) | more than 4 years ago | (#30120444)

I'm not expecting to see my example process (100 page PDF reports) scale up smoothly to 250,000 cores, but I sure would like to see it scale up smoothly to a dozen or two!

Well, that's not very hard. Split the job like the ray tracers do, into 250K little parts of the 100 page report, have each core individually render its little bit, then mush all the rendered outputs together.

You could do this now, more or less off the shelf, by separating your raw data into 100 raw input files, one for each page, then have 100 machines or cores or whatever render each separate page, then a big run of pdfjoin to turn 100 single page pdfs into one 100 page pdf.
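That split/render/join pattern can be sketched in a few lines of Python with the standard library's multiprocessing module. This is just an illustrative sketch: `render_page` here is a hypothetical stand-in for whatever actually rasterizes one page's worth of raw data (the real step would shell out to a PDF engine, and the final join would be your `pdfjoin` run).

```python
from multiprocessing import Pool

def render_page(page_data):
    # Hypothetical per-page renderer. In the real pipeline this would
    # invoke a PDF engine on one page's raw input file; here it just
    # tags the data so the fan-out/fan-in structure is visible.
    return "rendered:%s" % page_data

def render_report(pages, workers=4):
    # Fan the pages out across worker processes, then stitch the
    # rendered fragments back together in their original order --
    # the same split/render/join trick the ray tracers use.
    with Pool(workers) as pool:
        rendered = pool.map(render_page, pages)
    return rendered

if __name__ == "__main__":
    pages = ["page-%03d" % i for i in range(1, 101)]
    out = render_report(pages)
    print(len(out), "pages rendered")
```

Because each page renders independently, this scales to as many workers as you have pages; `Pool.map` preserves input order, so no extra bookkeeping is needed before the join step.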

creators' big flash coming way before 2018 (0)

Anonymous Coward | more than 4 years ago | (#30119800)

that will settle who 'owns' what forever.

it's newclear powered, user friendly, & completely bug free, as well as free to use, forever.

Windows 2018 (2, Funny)

smitty777 (1612557) | more than 4 years ago | (#30119820)

Maybe this thing will have enough power to run Windows by 2018??

Re:Windows 2018 (0)

Anonymous Coward | more than 4 years ago | (#30120026)

Windows has the requirement of SOTA +1, so that it can be used for 5 years into the future per its upgrade path. This also means that it isn't worth buying the new Windows until you have a new machine and SP1 has been released.

human brain (4, Interesting)

simoncpu was here (1601629) | more than 4 years ago | (#30119830)

How many cores do we need to simulate a human brain?

Re:human brain (1)

Dgtl_+_Phoenix (1256140) | more than 4 years ago | (#30120070)

Not sure about the number of cores, but a number of experts say that around 20 petaflops should do it. We should see computers capable of that by the end of the decade. Of course, creating the AI or the brain scans necessary to accomplish this is going to be the more challenging problem. What will be fantastic about simulated brains is that their neurons will be significantly faster than standard human neurons. This means that your simulated brain can produce orders of magnitude more work despite being no smarter.

Re:human brain (0)

Anonymous Coward | more than 4 years ago | (#30120078)

Only 1, so long as the simulation segfaults.

Re:human brain (1)

CannonballHead (842625) | more than 4 years ago | (#30120776)

Define "simulate" in this context. Processing power? Creativity? Originality? Ingenuity? I didn't think any number of cores could "cause" creativity... aside from a "brute force" method. Try-every-possibility-and-see-if-one-works.

Windows Vista (0)

Anonymous Coward | more than 4 years ago | (#30119860)

The question is: will it be enough to run Aero?

Re:Windows Vista (1)

MickyTheIdiot (1032226) | more than 4 years ago | (#30120066)

no

So far to go! (0)

Anonymous Coward | more than 4 years ago | (#30119862)

The fastest system only has 224k Cores? Oh, dear. We definitely need bigger systems, then.
And I suppose Deep Thought has nothing to worry about yet, either. Yet. :D
(The fictional version, that is. The "real" one has already been outdone by Rybka.)

I can visualize a third life (1)

ub3r n3u7r4l1st (1388939) | more than 4 years ago | (#30119918)

Yup. 100-million core driven Second-Life server, like the Matrix.

That's a lot (0)

Anonymous Coward | more than 4 years ago | (#30119928)

Wow, not just one million-core supercomputer, but 100 of them?

synergy (0)

Anonymous Coward | more than 4 years ago | (#30120014)

They should obviously start working with the Mandelbulb people..

Processing power (1)

Wowsers (1151731) | more than 4 years ago | (#30120046)

Is this going to be the new processor requirement for running Flash in a web browser?

Sure (1)

Stan Vassilev (939229) | more than 4 years ago | (#30120134)

I'm still waiting for that 10GHz Pentium Intel promised for 2004.

Coming By 2012 (0)

Anonymous Coward | more than 4 years ago | (#30120212)

Republican reign for the next 1000 years after Obama fails.

Yours In Moscow,
Kilgore T.

Sir, all 1 million cores have failed.. (0)

Anonymous Coward | more than 4 years ago | (#30120296)

if only they had built 1,000,001!

How about reconfigurable computing instead? (0)

Anonymous Coward | more than 4 years ago | (#30120304)

...such as, say, FPGAs (www.maxeler.com) or GPUs (www.ati.com).
One such accelerator card can replace ~100 cores for common applications such as finite differences or Monte Carlo...
Hence you would need "only" a million blades.

One Hundred Billion Cores (0)

Anonymous Coward | more than 4 years ago | (#30120316)

Made by sharks with frikkin lasers on their heads.

Is it just me or does this news message just shout Dr. Evil?

20 Megawatt power supply... (0, Offtopic)

tomhath (637240) | more than 4 years ago | (#30120378)

IBM's design goal for an exascale system is to limit it to 20 megawatts of power...

Just keeping that sucker cooled will contribute to global warming. I hope they're going to use all that waste heat for something.

Colon Blow (0)

Beelzebud (1361137) | more than 4 years ago | (#30120416)

To put this into perspective, it would take over 4 and a half million bowls of Super Colon Blow to equal the computation power of just 1 of these things!

stupid (0)

Anonymous Coward | more than 4 years ago | (#30120484)

"But the problems involved in reaching exaflop scale go well beyond Moore's Law."

The above quote shows quite well that the writer doesn't understand what Moore's Law is about.
