
Breaking Supercomputers' Exaflops Barrier

Soulskill posted about a year ago | from the have-you-tried-hitting-the-turbo-button dept.


Nerval's Lobster writes "Breaking the exaflops barrier remains a development goal for many who research high-performance computing. Some developers predicted that China's new Tianhe-2 supercomputer would be the first to break through. Indeed, Tianhe-2 did pretty well when it was finally revealed — knocking the U.S.-based Titan off the top of the Top500 list of the world's fastest supercomputers. Yet despite sustained performance of 33 petaflops to 35 petaflops and peaks ranging as high as 55 petaflops, even the world's fastest supercomputer couldn't make it past (or even close to) the big barrier. Now, the HPC market is back to chattering over who'll first build an exascale computer, and how long it might take to bring such a platform online. Bottom line: It will take a really long time, combined with major breakthroughs in chip design, power utilization and programming, according to Nvidia chief scientist Bill Dally, who gave the keynote speech at the 2013 International Supercomputing Conference last week in Leipzig, Germany. In a speech he called 'Future Challenges of Large-scale Computing' (and in a blog post covering similar ground), Dally described some of the incredible performance hurdles that need to be overcome in pursuit of the exaflops barrier."



Has this been turned into another pissing contest? (0, Redundant)

Taco Cowboy (5327) | about a year ago | (#44108445)

All the talk about who has the fastest / most awesome computer in the world used to make sense --- there were a lot of problems that needed huge computational power to solve

They went from mere gigaflops to petaflops, and now they are aiming all the way to break the exaflops barrier

Now, let me ask this --- is there really a case that justifies all the juice?

From giga to peta, it's already a difference of 1,000 times

From peta to exa, another 1,000

Which means, when they finally break the exa-barrier, they will have attained 1,000,000 times (one million) the crunching power of what they used to get in the giga era

Do they really need the 1,000,000-fold of crunch to solve their problems, or has this been turning into another "pene" contest?

Mea Culpa (1)

Taco Cowboy (5327) | about a year ago | (#44108457)

Oops, sorry,

Should have used "tera" in place of "giga" ...
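For concreteness, the corrected prefix arithmetic written out:

    $10^{12}\ \text{(tera)} \xrightarrow{\times 1000} 10^{15}\ \text{(peta)} \xrightarrow{\times 1000} 10^{18}\ \text{(exa)}, \qquad \frac{10^{18}}{10^{12}} = 10^{6}$

so breaking the exa-barrier really does mean a million times the tera-era crunching power.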

Re:Mea Culpa (4, Interesting)

ebno-10db (1459097) | about a year ago | (#44108543)

Should have used "tera" in place of "giga"

I'm getting tired of all the prefixes, couldn't we just use scientific notation? 1e18 flops means a lot more to me than exaflop.

Re:Mea Culpa (0)

Anonymous Coward | about a year ago | (#44109127)

actually 1e18 and exa mean the exact same thing.

i don't think your inability to grasp the prefixes is a sufficiently strong argument for abandoning the use of metric prefixes.

Re:Mea Culpa (0)

Anonymous Coward | about a year ago | (#44109189)

The 1e18 notation is better for doing actual math in. It's worse when spoken. I think you can debate its merits when writing in a non-mathematical context.

Re:Mea Culpa (2)

Blaskowicz (634489) | about a year ago | (#44112221)

1 exponentiated to the 18th power is still 1.

Re:Mea Culpa (0)

Anonymous Coward | about a year ago | (#44118043)

Sorry, we'll dumb down math to produce high school grads. But if you want to be an engineer, you're gonna have to suck it up a little.
Actually I've witnessed engineers and physics types in several sparring matches at certain meetings. It's funnier when you know the internal battle going on in their own heads!


Re:Has this been turned into another pissing conte (0)

Anonymous Coward | about a year ago | (#44108701)

It's pretty simple: Accuracy and detail of physics simulations.

Science is a constant striving for more of both accuracy and detail. Both of these are constrained by processing power and programming ability.
With more processing power, we are able to develop more accurate simulations which will help further understanding about the physical world, which will in turn give us cause to revise the programs that run on them to get more and more complex and accurate... requiring more processing power, yet again.

The only possible end to this struggle is when we have enough knowledge and processing power to simulate with *perfect* accuracy every single sub-sub-sub-atomic particle in the universe in realtime.

Re:Has this been turned into another pissing conte (4, Interesting)

CODiNE (27417) | about a year ago | (#44108735)

Well I don't know anything at all about nuclear simulations and fluid dynamics modeling...

But for pure benefit to mankind I'd say folding@home is a pretty worthy project. It's been running for years and has helped make actual discoveries and raised understanding of protein folding's effects.

According to Wikipedia it was running at 14 Petaflops when last updated. Would taking that up to an exaflop be a huge benefit? You bet!

How about being able to simulate an entire life cycle of a human body at atomic scale? That would gain us tremendous understanding of well... EVERYTHING.

Most definitely there are worthy projects that have a real need for exaflop computing and it's not a waste of time.

You remind me of my friend who years ago said that his 802.11b wireless network was as fast as he'd ever need. Guess he didn't plan on people watching multiple HDTV streams throughout the house.

Information != benefit (1)

Ottibus (753944) | about a year ago | (#44109349)

But for pure benefit to mankind I'd say folding@home is a pretty worthy project. It's been running for years and has helped make actual discoveries and raised understanding of protein folding's effects.

According to Wikipedia it was running at 14 Petaflops when last updated. Would taking that up to an exaflop be a huge benefit? You bet!

While not wishing to criticise folding@home specifically, we should be careful not to assume that there is an automatic progression from data to knowledge to understanding and hence to benefit. And with rising costs (both financial and environmental) we should not blindly assume that building huge supercomputers or running millions of inefficient home computers 24/7 is an inherently good idea.

Re:Information != benefit (2)

alexandre_ganso (1227152) | about a year ago | (#44110203)

Huge supercomputers have the advantage that they are efficient compared to "@home"-style projects, and their interconnects allow them to solve problems that need strong communication between the computing elements. Such problems cannot be solved efficiently by the "@home" model, where a machine receives a work unit, computes it and returns the result for final aggregation (see the sketch after this post).

Those interconnects can amount to as much as half the price of building a supercomputer.

When you mention rising environmental costs, I suspect you mean the carbon footprint caused by the energy consumed in manufacturing and operating those machines. The costs are not negligible, granted, but they are probably not as big as those caused by the cars of the thousands of scientists who use such machines :-) This is especially true in the US, where cars are horribly inefficient, public transport from the suburbs to research centers is spotty and distances are large.

My understanding is that these environmental costs are much smaller than the benefits given by the use of such machines. Remember that supercomputers are used to simulate things such as nuclear explosions, ballistics and radiation decay. The cost to the environment is certainly lower than that of setting off actual atomic bombs! Not to mention the gains in health research, for example.

So, yes, there is a HUGE demand for such behemoths, and they are much better than the alternative.
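To make the interconnect point concrete, here is a minimal C/MPI halo-exchange sketch (the grid size, tags and stencil are illustrative assumptions, not drawn from any real code): every rank must trade boundary values with its neighbours on every step before it can compute, which is exactly the communication pattern a work-unit model cannot express.

    #include <mpi.h>

    #define N 1024  /* local grid cells per rank (illustrative size) */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double u[N + 2] = {0};             /* local data plus two ghost cells */
        int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
        int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

        for (int step = 0; step < 1000; step++) {
            /* Each step, every rank exchanges its edge cells with both
               neighbours before computing; interconnect latency therefore
               gates progress on every single iteration. */
            MPI_Sendrecv(&u[1],     1, MPI_DOUBLE, left,  0,
                         &u[N + 1], 1, MPI_DOUBLE, right, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&u[N],     1, MPI_DOUBLE, right, 1,
                         &u[0],     1, MPI_DOUBLE, left,  1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            for (int i = 1; i <= N; i++)   /* simple in-place 3-point sweep */
                u[i] = 0.5 * (u[i - 1] + u[i + 1]);
        }

        MPI_Finalize();
        return 0;
    }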

Re:Has this been turned into another pissing conte (0)

Anonymous Coward | about a year ago | (#44111707)

I keep saying Seti@Home is the most potentially beneficial. Just an old Galactic National Geographic talking about "science of the ancients" will advance the human condition by more than all this other stuff put together.

That'd be quite a piss! (1)

EETech1 (1179269) | about a year ago | (#44108767)

So if we JUST put roughly 30 of the Tianhe-2s or 500,000 nodes with 100,000,000 computing cores in one big system, we'd have our exascale computer!

Anyone want to venture a guess how long it'd take Intel to make 1,000,000 Xeons and 1,500,000 Phis?

I can't wait to see the day, but me thinks we have a long way to go!

I can't believe some folks thought the Tianhe-2 was going to be the one to break the exaflop barrier! OOPS, only made it 3% of the way there...

Cheers!

Re:That'd be quite a piss! (1)

davester666 (731373) | about a year ago | (#44108969)

So, we need a beowulf cluster of Tianhe-2's?

Re: That'd be quite a piss! (1)

Metahominid (1368691) | about a year ago | (#44109073)

Hahah, quite nice. You sir should do standup.

Re:That'd be quite a piss! (2)

timeOday (582209) | about a year ago | (#44109139)

So if we JUST put roughly 30 of the Tianhe-2s or 500,000 nodes with 100,000,000 computing cores in one big system, we'd have our exascale computer!

Actually, no, that's the problem/challenge... linking 30 Tianhe-2s would make a supercomputer that is only slightly faster than a single Tianhe-2, because the cores would mainly be sitting idle due to communication latency. Granted this is not true for computations that are completely parallel (e.g. cracking passwords) but that is NOT what "exaflop" means; it means an exaflop on a scientific computing benchmark.
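This is Amdahl's law in action: if only a fraction $p$ of the work can proceed in parallel, the speedup on $n$ machines is

    $S(n) = \frac{1}{(1 - p) + p/n}, \qquad \lim_{n \to \infty} S(n) = \frac{1}{1 - p}$

so even at $p = 0.99$ the ceiling is 100x no matter how much hardware you add, and communication latency effectively inflates the serial fraction $1 - p$.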

Re:That'd be quite a piss! (1)

amaurea (2900163) | about a year ago | (#44109697)

Exactly. Also, if you just link lots of these together naively, the computation would be crashing all the time and never finishing: the currently standard ways of communicating (such as MPI) make it difficult to handle the loss of a single process, and when you're talking about many millions of nodes, the chance that not a single one of them crashes in the space of the minutes to hours a computation takes is pretty minuscule.
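To put numbers on that pessimism: if each node independently fails during a run with probability $p$, the chance the whole job finishes cleanly is

    $P(\text{no failure}) = (1 - p)^{N} \approx e^{-Np}$

so with $N = 10^{6}$ nodes, even a per-node failure probability of only $p = 10^{-5}$ per run leaves roughly an $e^{-10} \approx 0.005\%$ chance of completing without a fault (the numbers here are illustrative, not measured).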

Re:Has this been turned into another pissing conte (0)

Anonymous Coward | about a year ago | (#44108781)

Hey at least when a "n-word" troll snags first post, you may as well reply to it and thereby VALIDATE IT because that way, you get a post very close to the top! At the very top for those who don't browse at -1, which I think is most users seeing how that is default.

So you got yours, that's all that matters, right? Who cares if you had to obtain the help of a total jackass to get it?

It's all in the name of science (0)

Anonymous Coward | about a year ago | (#44108871)

This is where the Science part of "computer Science" comes in. Sometime you just have to see if you can, and assume the practical application will come later.

It would be foolish to assume that we will *never* need an exaflop of processing power :).

Re: singularity (0)

Anonymous Coward | about a year ago | (#44109313)

The human brain can perform 10^16 operations per second. A machine that can perform 10^18 operations per second might be able to simulate a human brain and become the first sentient machine.

This has been predicted for decades, and it will be known as the singularity. It is a very big thing. Your life will change in unimaginable ways in the decade following the singularity. All for the better.

Re: singularity (1)

crutchy (1949900) | about a year ago | (#44109997)

artificial intelligence will never match natural stupidity

Re: singularity (1)

NemoinSpace (1118137) | about a year ago | (#44118195)

This comment is more insightful than funny. I doubt a machine could ever reproduce this condition. When is the last time you met a stupid person? Sure, at the time I'm sure it was very frustrating. But even to this day, you remember a lot of them, and it makes you feel good. Doesn't it?

Re: singularity (1)

crutchy (1949900) | about a year ago | (#44120415)

i meet a lot of stupid people... the world is full of them

the post i was replying to was funny because even though a machine may be approaching the operational speed of the human brain, the software running on it will never be comparable to the synaptic pathways of the human brain (which aren't even very well understood to begin with).

you may be able to calculate pi to the millionth decimal place in a fraction of a second with one of these exaflop machines when they come out, but how long will it take for it to have an idea?

Re: singularity (1)

ImdatS (958642) | about a year ago | (#44114357)

The human brain can perform around 200-2,000 Petaflops (0.2 - 2 Exaflops) - when we compare it to computers.

The problem is that we can only access the conscious part, which is probably in the range of 100-200 Flops (Note, no mega, giga, or tera).

The subconscious part is where the real processing power lies. If we could simulate that in a computer, it would be tremendous for understanding how it works.

For example: the human brain has the ability to "foresee" the future within a timeframe of around 0.2 seconds (or less). How does it do this? How does that work? Is it only a result of huge processing power, or something else? Do two exaflops result in consciousness? Questions upon questions...

Re:Has this been turned into another pissing conte (0)

Anonymous Coward | about a year ago | (#44109501)

You know, it's comments like this that are why I rarely bother coming back to Slashdot anymore. I used to enjoy reading the comments, but now it is populated by whiny assholes who seem uninterested in exploration, science and just doing something because it's cool. Let's just turn them all into server farms and not build big computers, eh?

What could we do with an exaflop computer? What couldn't we do!

Then there is some prick below spouting nuggets of wisdom 'knowledge != benefit' as if that actually fucking means something.

Re:Has this been turned into another pissing conte (1)

crutchy (1949900) | about a year ago | (#44110019)

You know, it's comments like this that are why I rarely bother coming back to Slashdot anymore.

You know, it's comments like this that are why I think you're a total dweeb, and everyone knows you really can't get enough of /. while you're sitting there cooped up in your "command center" in your mom's basement stuffing your pizza face with McDonald's fries.

And no doubt behind Slashdot you have a bunch of tabs with Google image searches for "boobies".

1000x buys 6x 4D grid size (1)

peter303 (12292) | about a year ago | (#44112683)

Many important science problems are at least 4D in nature: three space dimensions plus time. These include weather prediction, seismic prospecting, fluid dynamics, etc. Since 5.6 to the fourth power is about one thousand, a 1,000x increase in computing power buys you a grid only about 5.6x finer (or 5.6x larger) in each dimension.
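The figure is just the fourth root of the speedup:

    $1000^{1/4} \approx 5.6, \qquad 5.6^{4} \approx 983 \approx 1000$

so a thousandfold faster machine refines a 4D grid by only about a factor of 5.6 per dimension.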

I heard a NOAA talk in Boulder about the erroneous Hurricane Sandy prediction. The "European" weather model correctly predicted the rare westward turn of the northeastern hurricane while the US models did not. The Europeans used a 20 km worldwide grid and the US a 32 km grid cell. The Europeans had more powerful computers. The US runs more frequent "incremental" model updates, while the Europeans run fewer full calculations.



Department of Energy secret supercomputer (0)

Anonymous Coward | about a year ago | (#44108297)

That reminds me, what exactly IS that top secret program they use the Department of Energy's supercomputer for? Certainly not securing stockpiles of nuclear weapons as claimed.

http://energy.gov/articles/department-energy-supercomputer-helps-design-more-efficient-big-rigs

The other claim was some top secret military thing. Another NSA thing?

Re:Department of Energy secret supercomputer (1)

Shavano (2541114) | about a year ago | (#44108315)

They're building a god.

Re:Department of Energy secret supercomputer (1)

iggymanz (596061) | about a year ago | (#44108917)

humans will even worship a rock or lump of baked clay, for something to be a "god" only requires worshipers.

Re:Department of Energy secret supercomputer (-1)

Anonymous Coward | about a year ago | (#44108973)

humans will even worship a rock or lump of baked clay

Or consumerism, or the monetary system, or the news, or American Idol, or the calendar (you buy when Hallmark Holiday says you will!!), or an addiction, or a desire, etc.

Those are called "sheeple" by those who truly understand the distinction. Those who don't are offended or indifferent to this word "sheeple".

Re:Department of Energy secret supercomputer (1)

iggymanz (596061) | about a year ago | (#44108983)

you'll be glad to know I use money and consumerism only to worship myself.

Re:Department of Energy secret supercomputer (0)

Anonymous Coward | about a year ago | (#44109203)

you'll be glad to know I use money and consumerism only to worship myself.

So how does that work out when you do the slightest research on outer (or inner) space and realize there are many things greater than yourself?

Re:Department of Energy secret supercomputer (0)

Anonymous Coward | about a year ago | (#44112359)

Burn the heretic !

Re:Department of Energy secret supercomputer (1)

iggymanz (596061) | about a year ago | (#44118885)

nonsense, man is the measure.

Re:Department of Energy secret supercomputer (1)

crutchy (1949900) | about a year ago | (#44110061)

Those are called "sheeple" by those who truly understand the distinction

no, they're called apple customers

Re:Department of Energy secret supercomputer (1)

davester666 (731373) | about a year ago | (#44109621)

So, you are claiming that Obama is really a puppet animated by a DoE supercomputer, running software written by the NSA?

Re:Department of Energy secret supercomputer (1)

crutchy (1949900) | about a year ago | (#44110065)

So, you are claiming that Obama is really a puppet animated by a DoE supercomputer, running software written by the NSA?

fuck no... obama isn't that smart... he's more like a commodore 64 with a virus

Re:Department of Energy secret supercomputer (1)

Shavano (2541114) | about a year ago | (#44119565)

And yet when it comes to getting elected, he seems pretty clever, or at least more clever than the competition.

Re:Department of Energy secret supercomputer (1)

crutchy (1949900) | about a year ago | (#44120395)

you don't honestly think obama got himself elected do you? BWAHAHAHA!!!!

Re:Department of Energy secret supercomputer (1)

wonkey_monkey (2592601) | about a year ago | (#44109643)

It won't work. Captain Kirk will just ask it what love is and it'll blow itself up.

Re:Department of Energy secret supercomputer (1)

Shavano (2541114) | about a year ago | (#44119561)

Damn you Kirrrrrk!

Re:Department of Energy secret supercomputer (0)

Anonymous Coward | about a year ago | (#44108339)

That reminds me, what exactly IS that top secret program they use the Department of Energy's supercomputer for? Certainly not securing stockpiles of nuclear weapons as claimed.

http://energy.gov/articles/department-energy-supercomputer-helps-design-more-efficient-big-rigs

The other claim was some top secret military thing. Another NSA thing?

Probably for breaking encryption to make it easier to spy on American citizens.

I guess when you're fucked in the head you get some kind of sexual orgasmic pleasure from voyeurism. There is certainly no actual national security reason to do it. I mean, if the real concern were really terrorism, then the first thing they'd do would be to lock down the wide-open Mexican border. It's kind of pointless to read everybody's e-mail when anyone who wants to can get into the country undetected with god-knows-what. Gov't people are evil, but they are not stupid.

Re:Department of Energy secret supercomputer (2)

elfprince13 (1521333) | about a year ago | (#44108393)

Dunno about "top secret", but the DoE puts a huge amount of computing resources into physical simulation. Check out some of the NERSC projects (GTC, for example [nersc.gov] ).

Re:Department of Energy secret supercomputer (1)

crutchy (1949900) | about a year ago | (#44110069)

That reminds me, what exactly IS that top secret program they use the Department of Energy's supercomputer for?

virtual porn... the government felt it would be a little unethical to use pixar's infrastructure

Re:Department of Energy secret supercomputer (1)

Ambitwistor (1041236) | about a year ago | (#44113467)

The Jaguar/Titan system mentioned in your link is used for unclassified scientific computing. The NSA is building a computer facility at ORNL, but that's a different system (and was never claimed to be for stockpile stewardship). They don't put classified jobs onto unclassified systems.

Barrier? (1)

holmstar (1388267) | about a year ago | (#44108327)

How is exaflop a barrier? Is there some atypical difficulty in exceeding an exaflop?

Re:Barrier? (0)

ebno-10db (1459097) | about a year ago | (#44108337)

RTFA

Re:Barrier? (3, Insightful)

holmstar (1388267) | about a year ago | (#44108377)

I'm sure the same sort of things were said about a petaflop machine, back in the day. Doesn't make exaflop a barrier. Just an engineering challenge, like every other bleeding edge supercomputer has been.

Re:Barrier? (1)

Anonymous Coward | about a year ago | (#44108923)

Yes, it's the quantitative carrot... when I was learning parallel computing, teraflops were the fantasy milestone we'd reach some day and terabytes was the crazy storage you imagined existed in some NSA datacenter, rather than in your cousin's USB drive. People feel like brilliant strategists every time they point out the next 500-1000x milestone and declare that as the thing that matters to differentiate themselves from all the myopic folk working on today's problem.

Re:Barrier? (1)

PingPongBoy (303994) | about a year ago | (#44109785)

It is a barrier, but that being said it just means no one has done it yet. It doesn't mean it's impossible. A barrier is something to strive to overcome and in spite of all the striving, it feels like a fully blown case of Zeno's paradox, for a while. Only now that we're so much closer to the day that an exaflops will be reached, it seems that we must all chatter about it lest no one will have enough motivation to actually make it happen.

Re:Barrier? (3, Interesting)

Zargg (1596625) | about a year ago | (#44108489)

I'm pretty sure the parent is questioning why the word "barrier" is used instead of something like "milestone", which I would have chosen. A barrier implies there is something special stopping you there that you need to work around or resolve, but milestone is just a convenient number to stop at, as in this case. I see no difference between passing exaflop and say 0.9 exaflop, since both require "a really long time, combined with major breakthroughs in chip design, power utilization and programming", so it isn't a barrier, just a convenient number.

Re:Barrier? (2)

Tastecicles (1153671) | about a year ago | (#44108349)

yeah, strange harmonics and shit, and word around the cooler is that it would require an infinite amount of energy as well... that or set the atmosphere on fire or some shit.

Re:Barrier? (1)

alexandre_ganso (1227152) | about a year ago | (#44110261)

It's just a figure of speech. It's a milestone. It's not difficult to exceed one exaflops (the 's' in 'flops' stands for 'per second', it's not a plural) once you've got to, say, 0.99 exaflops. Scientists like to talk in orders of magnitude. Right now we are in the tens of petaflops, but haven't yet got to hundreds. Tianhe-2 peaks at 55 pflops, but its sustained speed is a bit more than half of that.

The problem is much more about how to get there. It's not just machinery; it's how to actually write and debug programs at that scale. Since we cannot make cores much faster than today's, the solution is to add more of them.

The added cores increase the stress on the network and make programming such a thing much more difficult. Good luck debugging a race condition across one million processes.

Other problems arise from things as mundane as equipment breaking. Consider that a single broken memory chip during the execution of a program means the whole computation is either compromised or just lost. And with millions of cores come millions of motherboards, power supplies, I/O systems and storage devices, all kinds of electronic components that are subject to failure.

So while it's technically not a barrier per se, this huge number of variables, which makes things exponentially more complex than what we have today, is indeed a barrier. As someone noted here, we cannot just make a cluster of Tianhe-2s. The thing would be breaking all the time, consuming so much electricity and maintenance manpower that its uptime would be shorter than that of an unpatched Windows 98 machine connected to an open network.

Re:Barrier? (1)

AmiMoJo (196126) | about a year ago | (#44110359)

There is nothing special about reaching the exaflop level, unlike say the sound barrier, where there are real physical forces that make it difficult to pass.

Scaling is a challenge of course, but the difference between say 0.9 exaflops and 1.1 exaflops is basically just money.

NVIDIA's bread and butter long term (2)

storkus (179708) | about a year ago | (#44108359)

My takeaway from reading this and the blog post is that, while NVIDIA may consider graphics their bread and butter, they are looking at the HPC space very seriously in the long term; perhaps they even think they can dominate it. This is a big difference from the other players: IBM isn't bothering to throw POWER at it, and AMD/ATI is only present on older machines; ATI in particular seems more interested in going after the mobile space than HPC. I don't know what to make of Intel, other than that they know they're the choice for the non-GPU side and are at the top of their game.

One problem I see is that NVIDIA is still a fabless house and has performance limitations tied to whatever fab they partner with; perhaps this is why they downplay process gains in the blog post.

Of course, if the conspiracy theorists are to be believed, NSA and friends already have this 10-years-into-the-future technology...

Re:NVIDIA's bread and butter long term (2)

ebno-10db (1459097) | about a year ago | (#44108379)

Of course, if the conspiracy theorists are to be believed, NSA and friends already have this 10-years-into-the-future technology...

I heard 20 years - they're still learning stuff from the Roswell crash.

Re:NVIDIA's bread and butter long term (0)

Anonymous Coward | about a year ago | (#44108511)

Na. It was humans from the future taking a trip back in time just to see how fucked up their ancestors were. That was a mistake. Nice to know that we will still make major mistakes in the future...er...past...whatever.

Re:NVIDIA's bread and butter long term (1)

causality (777677) | about a year ago | (#44108429)

Of course, if the conspiracy theorists are to be believed, NSA and friends already have this 10-years-into-the-future technology...

With a nearly unlimited budget, no need to sell a product or make a profit, some of the best and brightest talent in the world (they especially like math majors), and the ability to spy on and thus learn from nearly anyone ... well, they'd be pretty damned incompetent if they somehow aren't ahead of the mainstream. Make no mistake, "national security" is a very high-stakes game, these are people who play to win, and "winning" means superiority.

That is a conspiracy theory? Usually those involve aliens or globalist bankers and such. This? This is two-plus-two type material.

Re:NVIDIA's bread and butter long term (0)

Anonymous Coward | about a year ago | (#44108499)

There are entire NSA-owned and -operated data centers filled with supercomputing clustered servers. The whole point is to turn VoIP and telco calls (spoken language) into text and then data-mine that in real time. You think Siri from Apple was amazing? Apple has nothing in comparison to what the NSA has. They have an unlimited black-ops budget bought and paid for by the US taxpayer. Period!

Black helicopters? Fuck no. Try thermal imaging, optical (laser) acoustic pickup, stealth drones baby!

Re:NVIDIA's bread and butter long term (0)

Anonymous Coward | about a year ago | (#44108523)

amd i think is looking at the HPC space pretty closely. their gpus are better for some work, but overall they trail. but they also trail intel at cpu. their "goal" is kind of tied to consoles, etc.: unified memory, single-die APUs. if they fabbed as well as intel they'd be pretty sweet for hpc-type work with their new apu-type setup. general-purpose scalar stuff will still trail, but for integration with highly parallel tasks they destroy intel on gpu while trailing nvidia... and destroy any arm competition. very good middle ground. if only they had 22nm already on the way out, and 14nm next.

Re:NVIDIA's bread and butter long term (1)

fpabd.tk (2815503) | about a year ago | (#44108679)

Nvidia's performance is good enough for me.

Re:NVIDIA's bread and butter long term (0)

Anonymous Coward | about a year ago | (#44109065)

Actually, AMD's GPUs are certainly in the same class as Nvidia's for compute purposes. In certain cases, they're actually far faster (bitcoin is but one example of many). They just don't seem to target getting their GPUs into supercomputers.

The real reason why Nvidia downplays process gains is that they have a difficult time transitioning to new processes. They've suffered serious, if not crippling, yield issues with their last two process jumps at least, and the only reason that they survived the Fermi architecture at all is that they had a per-die pricing agreement, rather than per-wafer. Other companies had little or no trouble, so it's definitely something that Nvidia is doing wrong.

Re:NVIDIA's bread and butter long term (0)

Anonymous Coward | about a year ago | (#44109649)

I don't think the supercomputing future lies in GPU computing. If you look at Tianhe-2, you see that it does not use GPUs. Instead it uses Xeon Phi accelerator cards. The advantage of Xeon Phi cards is that parallelization on them works similarly to classical parallelization on supercomputers: you just use MPI. For GPUs, on the other hand, you have to adapt a lot of code.

Xeon Phi vs GPU (2)

Ottibus (753944) | about a year ago | (#44130295)

The advantage of Xeon Phi cards is that parallelization on them works similarly to classical parallelization on supercomputers.

Not really, no. Classic supercomputers were vector machines whereas Xeon Phi is wide SIMD.

You just use MPI

MPI is equally applicable to GPU or Xeon Phi; it operates at a level above the raw computation. In both cases you have controlling CPUs with accelerators attached (GPU in one case, Xeon Phi in the other). MPI is used to manage the data flow between these units but has little to do with the architecture of those units themselves.

For GPUs, on the other hand, you have to adapt a lot of code.

You have to adapt code either way:

For GPU you express the problem as a scalar kernel that is executed in parallel. You have to make sure that the work doesn't overlap but you only have to consider one element at a time.

For SIMD you break your problem in to SIMD-width chunks that are computed in parallel. It is easier to synchronise operations but you have to fit the problem into chunks of the right size.

Xeon Phi has an advantage where you have existing SIMD code (e.g. SSE), but if you are starting from scratch then there is no clear winner. And HPC code is increasingly being written in languages like OpenCL and CUDA which are designed for GPU rather than SIMD.
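A minimal C sketch of the two styles described above (the function names and the 4-wide SSE width are illustrative assumptions, not tied to any particular machine): the first function is written the GPU way, one scalar element per invocation; the second processes the same array in SIMD-width chunks.

    #include <immintrin.h>  /* SSE intrinsics, 4 floats per register */

    /* GPU style: a scalar kernel, conceptually launched once per element.
       On a real GPU this body would be a CUDA/OpenCL kernel. */
    void saxpy_kernel(int i, float a, const float *x, float *y)
    {
        y[i] = a * x[i] + y[i];
    }

    /* SIMD style: the loop is blocked into 4-wide chunks explicitly. */
    void saxpy_simd(int n, float a, const float *x, float *y)
    {
        __m128 va = _mm_set1_ps(a);
        int i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 vx = _mm_loadu_ps(&x[i]);
            __m128 vy = _mm_loadu_ps(&y[i]);
            _mm_storeu_ps(&y[i], _mm_add_ps(_mm_mul_ps(va, vx), vy));
        }
        for (; i < n; i++)              /* scalar tail for the leftover chunk */
            y[i] = a * x[i] + y[i];
    }

The kernel version never worries about chunk boundaries; the SIMD version must handle the tail, which is the "fit the problem into chunks of the right size" cost mentioned above.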

Re:NVIDIA's bread and butter long term (2)

HuguesT (84078) | about a year ago | (#44109743)

Actually Intel is pretty much the king of the hill at the moment for HPC. They don't have a "GPU" solution, but they do have a massively parallel CPU + PCIe compute card called the "Xeon Phi". Extremely confusing, yet this is what the current fastest supercomputer uses:

http://www.datacenterdynamics.com/focus/archive/2013/06/xeon-phi-powered-supercomputer-tops-top500

Xeon Phi is easier to deal with than Nvidia's GPU solution, essentially because it is currently much easier to program.

http://goparallel.sourceforge.net/independent-test-xeon-phi-shocks-tesla-gpu/

Re:NVIDIA's bread and butter long term (0)

Ottibus (753944) | about a year ago | (#44110145)

Xeon phi is easier to deal with than Nvidia's solution for GPU, essentially because it is currently much easier to program.

Citation needed.

2 gigawatts... (1)

fustakrakich (1673220) | about a year ago | (#44108395)

Hmm, Mr. Fusion is due in a couple of years...

Re:2 gigawatts... (0)

Anonymous Coward | about a year ago | (#44108477)

TFBP says that's the entire output of Hoover Dam.

So why not build it at Hoover Dam?

chaining cellphone CPUs (1)

peter303 (12292) | about a year ago | (#44112713)

They are much lower power and somewhat lower speed than desktop CPUs. You'd have to use many more of them. Some projects are trying this.

Now government and media quiet about SuperComputer (0)

Anonymous Coward | about a year ago | (#44108465)

It is so quiet I can hear a mosquito flying. Not so long ago, when the US had the fastest supercomputer, I was bombarded by government propaganda about its success. TV stations, newspapers and radio were singing the same song: we are the greatest, we rule, you filthy rest of the people.
Why the silence now?

Imagine (1)

Penumbra (175042) | about a year ago | (#44108473)

Imagine a beowulf cluster of.... What? All supercomputers are basically beowulf clusters now? Umm... OK, is Natalie Portman still topical?

Re:Imagine (0)

Anonymous Coward | about a year ago | (#44108631)

I'm not sure. Could you give a car analogy?
Anyway... I'm waiting 'til Netcraft confirms it.

Re:Imagine (0)

Anonymous Coward | about a year ago | (#44109273)

depends on what she isnt wearing


Why (1)

ShooterNeo (555040) | about a year ago | (#44109071)

Does anyone have an idea of what these extremely expensive systems are even for? And don't say password cracking/NSA, because both of those tasks are "embarrassingly parallel", so that you can use a cloud of separate computers rather than a tightly interlinked network like a supercomputer.

Are there real world problems right now where another 100x more CPU power would make real, practical differences? (versus making the algorithm more efficient, etc)

Re:Why (1)

gkndivebum (664421) | about a year ago | (#44109255)

CFD simulation. Lattice Boltzmann [wikipedia.org] simulations of fluid dynamics are one such application. Folks at the various DOE national laboratories have a pretty keen interest in this kind of simulation.

Re:Why (1)

Ambitwistor (1041236) | about a year ago | (#44113343)

Exascale computers would be helpful for climate modeling. Right now climate models don't have the same resolution as weather models, because they need to be run for much longer periods of time. This means that they don't have the resolution to simulate clouds directly, and resort to average statistical approximations of cloud behavior. This is a big bottleneck in improving the accuracy of climate models. They're just now moving from 100 km to 10 km resolution for short simulations. With exascale they could move to 1 km resolution and build a true cloud-resolving model that can be run on century timescales.
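The cost of that resolution jump is steep: refining the grid in 3 spatial dimensions also shrinks the allowable timestep roughly in proportion (the CFL stability condition), so cost scales about as the fourth power of the resolution improvement:

    $\text{cost} \propto \left(\frac{\Delta x_{\text{old}}}{\Delta x_{\text{new}}}\right)^{4}, \qquad \left(\frac{10\ \text{km}}{1\ \text{km}}\right)^{4} = 10^{4}$

a roughly ten-thousand-fold increase, which is why kilometer-scale cloud-resolving century runs are pegged to exascale machines.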

yea but (1)

Osgeld (1900440) | about a year ago | (#44109113)

we all know Chinese numbers represent a value exactly 14% less than what the rest of the world agrees on.

Wow. An exaflop would be amazing. (0)

Anonymous Coward | about a year ago | (#44109317)

That would almost be enough to run Vista!

Why LINPACK? (0)

Anonymous Coward | about a year ago | (#44109813)

Does it make sense to rate supercomputers with the speed of solving a dense linear system?

Do we really have such huge and unstructured linear systems that need to be solved directly (LU factorization with partial pivoting)?

Can somebody list such applications?
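For reference, HPL (the LINPACK benchmark) solves a dense linear system $Ax = b$ by LU factorization with partial pivoting, with an operation count of about

    $\tfrac{2}{3}n^{3} + O(n^{2})\ \text{flops}$

so the reported rate is that count divided by wall-clock time, typically at the largest $n$ that fits in memory.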

Re:Why LINPACK? (0)

Anonymous Coward | about a year ago | (#44115145)

It's a benchmark, and as we all know, benchmarks are meaningless for evaluating real-world performance. The advantages of some of these systems, especially ones with custom interconnects, are in optimizing shared-memory latency, storage latency, and programming environments to speed up certain operations. And I assure you that, aside from China, which is trying to win a mine-is-longer-than-yours contest, the customers plopping down millions for these computers would rather those optimizations benefit their applications instead of a very artificial test.

Moore's law. (1)

rew (6140) | about a year ago | (#44109987)

Moore's law predicts that the "factor-of-33" will be bridged in about 10 years. There is only a factor of 20 to the "peak performance", so about a year before that, peak performance might topple the exaflops "barrier".
(Some people plug different constants into Moore's law. I use a factor of 1,000 every 20 years. That's about 30 every 10 years, 2 every 2, and about 5 every 5. This has never failed me: it always works out.)
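Those constants are self-consistent because $1000 \approx 2^{10}$:

    $1000^{t/20} = \left(2^{10}\right)^{t/20} = 2^{t/2}$

i.e. a doubling every 2 years, so bridging a factor of 33 takes about $2\log_2 33 \approx 10$ years, as claimed.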

Re:Moore's law. (0)

Anonymous Coward | about a year ago | (#44110287)

The NVIDIA guy referenced in the article seems to be predicting the end of Moore's law. He claims that manufacturing process advances will only give us a 2.2x improvement in gflops per watt in the next 7 years (and the rest of the way towards the exaflop must come from unspecified sources).

It's unclear why that is going to happen. Naively I'd expect roughly a 2x improvement per generation (in the last 5 years, NVIDIA went from ~4 gflops/watt (SP) on 65 nm to ~18 gflops/watt on 28 nm, while at the same time adding new instructions and expanding the range of functionality of their chips - while their 65 nm chips were narrowly tailored to do lots of massively parallel single-precision floating point calculations, their latest offerings are much more general-purpose.) Maybe they are expecting the industry as a whole to hit a brick wall around 10 nm.
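Sanity-checking those figures: 65 nm to 28 nm is roughly two full process generations, and

    $\frac{18}{4} = 4.5 \approx 2.1^{2}$

about 2.1x per generation historically, which makes a projected 2.2x total over the next 7 years look like a dramatic flattening rather than business as usual.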

It has slowed down (1)

Blaskowicz (634489) | about a year ago | (#44112405)

A more pessimistic estimate would say Moore's law only gets you a doubling every 3 years nowadays, so a factor of 32 would take 15 years to work out. See the troubles there were for e.g. TSMC moving to 28nm, and now 20nm.
An exaflops supercomputer would still be possible, with a 10x boost from Moore's law over 10 years and building a 3x bigger supercomputer.

Re:Moore's law. (1)

Ambitwistor (1041236) | about a year ago | (#44113479)

I think the DOE was predicting last year that their first exascale system will come online in 7 to 9 years.

This isn't going to happen for awhile. (1)

Dputiger (561114) | about a year ago | (#44110251)

We're at 5.4% of exaflop scale. Somehow I don't think this is a 2013 / 2014 goal ;)

"Some developers" make ridiculous predictions (1)

tgeller (10260) | about a year ago | (#44111293)

Some developers predicted that China's new Tianhe-2 supercomputer would be the first to break through.

Wait... *what* uninformed developer(s) predicted that? The previous record (six months ago) was set by Titan, at 17.59 Petaflop/s. So to pass the exaflop barrier this time around would require over a fifty-fold improvement -- something never before seen in the history of the Top500 list. Did someone *really* make this prediction, or is author Kevin Fogarty just making shit up?
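The required jump, in numbers:

    $\frac{1000\ \text{Pflop/s}}{17.59\ \text{Pflop/s}} \approx 57$

a fifty-seven-fold improvement in a single six-month Top500 cycle.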

Oracle already did (1)

gtirloni (1531285) | about a year ago | (#44112233)

Unless the expensive Exadata box we just bought isn't capable of the exa-stuff they promised.