
Japan Aims To Win Exascale Race

Unknown Lamer posted about a year ago | from the set-goals dept.

Japan 51

dcblogs writes "In the global race to build the next generation of supercomputers — exascale — there is no guarantee the U.S. will finish first. But the stakes are high for the U.S. tech industry. Today, U.S. firms — Hewlett-Packard, IBM and Intel, in particular — dominate the global high performance computing (HPC) market. On the Top 500 list, the worldwide ranking of the most powerful supercomputers, HP now has 39% of the systems, IBM, 33%, and Cray, nearly 10%. That lopsided U.S. market share does not sit well with other countries, which are busy building their own chips, interconnects, and their own high-tech industries in the push for exascale. Europe and China are deep into efforts to build exascale machines, and now so is Japan. Kimihiko Hirao, director of the RIKEN Advanced Institute for Computational Science of Japan, said Japan is prepping a system for 2020. Asked whether he sees the push to exascale as a race between nations, Hirao said yes. Will Japan try to win that race? 'I hope so,' he said. 'We are rather confident,' said Hirao, arguing that Japan has the technology and the people to achieve the goal. Jack Dongarra, a professor of computer science at the University of Tennessee and one of the academic leaders of the Top 500 supercomputing list, said Japan is serious and on target to deliver a system by 2020."


Japanese workers must take power! (3)

For a Free Internet (1594621) | about a year ago | (#45501977)

Down with the capitalist emperors!

i remember something about India (1)

etash (1907284) | about a year ago | (#45501983)

setting a goal for 2018 or 2019?

Such races are very good for the overall development and progress of computing, as the new technologies that will be developed will eventually be used in desktops and mobile computing.

There are still challenges like the interconnects and the power draw, but IMHO these are problems that eventually will be solved.

So what? (0)

Anonymous Coward | about a year ago | (#45502015)

This is nothing more than dick waving for nations.

Not news.

Re:So what? (0)

etash (1907284) | about a year ago | (#45502029)

Expect a number of science supporters who will bite and answer (with examples such as climate modelling) the troll's flamebait in 3... 2... 1...

Re:So what? (0)

rubycodez (864176) | about a year ago | (#45502079)

We're already producing useless and fictitious climate models; we don't need more advanced hardware to continue that money-wasting farce.

No big deal if Japan makes the first exascale machines; the US can just buy them or lease time on them. The USA already has the most advanced HPC software.

Re:So what? (0)

Anonymous Coward | about a year ago | (#45503053)

I don't think I found a single truth in your entire post. How sad.

Fyfe et al., Nature Climate Change, September 2013 (0)

Anonymous Coward | about a year ago | (#45505113)

compare measured temperature to models and find... that the models fail.
So the latest word (and these guys are big in the climate community) is that models are in fact fictitious... contra recent data in PNAS claiming that missing temp data from the poles allows models to agree... stay tuned.

Point is, the science is not settled just 'cause a bunch of people agree. You may not know this, but from about 1900 to 1950 there was a consensus that human CO2 global forcing couldn't occur 'cause the atmosphere was already saturated for absorbance in the IR band... a consensus, I say, that
turned out to be wrong 'cause they forgot that it is the optically thin outer layer that is crucial... stay tuned, but don't be so effin' arrogant.

Re:So what? (0)

Anonymous Coward | about a year ago | (#45507627)

The United States is not south america!

Re:So what? (2)

blackiner (2787381) | about a year ago | (#45502231)

This is precisely the type of dick waving we should have between nations. It is pretty much harmless, unlike war, and at the end of the day everyone, not just the one nation that "wins", will benefit from the technology that comes out of it.

Re:So what? (1)

WWJohnBrowningDo (2792397) | about a year ago | (#45502559)

Everyone wins, except for the tax payers.

Re:So what? (1)

DarkOx (621550) | about a year ago | (#45502805)

Even the taxpayers win, in the sense that it's probably way cheaper than transcontinental warfare.

Re:So what? (3, Informative)

ItsJustAPseudonym (1259172) | about a year ago | (#45503051)

You DO realize that a lot of the technology that the public currently uses is derived from academic and government research that was funded by tax dollars, right? Heck, even military research has resulted in spread-spectrum communications for cell phones. So the tax payers have benefited from the use of their tax dollars for this. A claim to the contrary is both misinformed about the past and short-sighted about the future.

Re:So what? (2)

pitchpipe (708843) | about a year ago | (#45505707)

Everyone wins, except for the tax payers.

Do you anti-tax types ever think about anything else? Money is not the point of all of this.

Re: So what? (0)

Anonymous Coward | about a year ago | (#45502365)

What?? You don't think that reaching a goal like that will benefit the way we use computers? Yeah, I agree it's a lot of dick waving but I feel like competitive attitude between nations is what pushes the advancements in medicine, technology and so forth...
But yeah one solution might create 10 more problems.

Re:So what? (1)

K. S. Kyosuke (729550) | about a year ago | (#45502667)

This is nothing more than dick waving for nations.

Except that this is the fifth generation of Japanese computer dick waving.

Sorry, NSA already won, contest over (0)

JoeyRox (2711699) | about a year ago | (#45502067)

But thanks Japan and others for your participation.

Re:Sorry, NSA already won, contest over (0)

Anonymous Coward | about a year ago | (#45502803)

What use does the NSA have with floating point operations?

U.S.A. is the only contestant of this "race"... (-1)

Anonymous Coward | about a year ago | (#45502137)

Neither Japan nor Europe builds such systems from scratch, and they are quite happy to let the U.S.A. do it for them; both Japan and Europe can claim victory points from the U.S.A. in other "races"... e.g., both make cars!
So there is no race between them; if there is an "exascale" race, it's between U.S.A. companies.

Re:U.S.A. is the only contestant of this "race"... (1)

fisted (2295862) | about a year ago | (#45502905)

Where is that 'rest of the world' anyway?

Re:U.S.A. is the only contestant of this "race"... (0)

Anonymous Coward | about a year ago | (#45503119)

I was not aware that Fujitsu (hint: RIKEN's K Computer is from Fujitsu, powered by Fujitsu's SPARC designs and Fujitsu's interconnect) is an American company. Thanks for the heads-up.

Re:U.S.A. is the only contestant of this "race"... (1)

Sique (173459) | about a year ago | (#45503131)

You forget the Earth Simulator [wikipedia.org], based on NEC's SX-6 processor architecture and the fastest supercomputer in the world from 2002 to 2004.

Japan surely is able to build those systems from scratch.

Re:U.S.A. is the only contestant of this "race"... (1)

serviscope_minor (664417) | about a year ago | (#45506365)

You forget the Earth Simulator, based on NEC's SX-6 processor architecture and the fastest supercomputer in the world from 2002 to 2004.

True, but that's long been decommissioned. The K computer (current #4) was the #1 for a while and the first to beat 10 PFlops. It uses home-grown SPARC chips.

While the original SPARC wasn't a Japanese invention, at this point it's just an instruction set that they have a lot of experience with, since Fujitsu supplied all the highest-performing SPARCs to Sun.

They were fabbed by Fujitsu and used Fujitsu's own homegrown interconnect, and with that reached almost unprecedented efficiency.

Any technical prowess better spent on Fukushima... (0)

Glasswire (302197) | about a year ago | (#45502187)

...from becoming a hemispheric disaster. [huffingtonpost.ca] .
Even the laughable freeze-the-ground-around-it plan seems to have been hatched to mollify Olympic committee voters [cbsnews.com] who still gave Japan the 2020 games as the 'safe' choice over Istanbul and Madrid.

Re:Any technical prowess better spent on Fukushima (1)

khallow (566160) | about a year ago | (#45502357)

The work to prevent Fukushima from being a "hemispheric disaster" already happened.

Re:Any technical prowess better spent on Fukushima (1)

Glasswire (302197) | about a year ago | (#45502569)

The hemispheric disaster has not happened yet. But until they finish unloading reactor 4, which won't be until the end of 2014, any serious earthquake (a high probability in that area) could cause the precariously elevated rod bundles to crash down, and even the best-case scenarios, if that happens, are ugly.
How bad things are after that is still up for debate, but reactor 4 is a clear and present danger.

Re:Any technical prowess better spent on Fukushima (1)

khallow (566160) | about a year ago | (#45504937)

But until they finish unloading reactor 4 - which won't be until end of 2014, any serious earthquake (a high probability in that area) could cause the precarious elevated rod bundles to crash down and even the best case scenarios, if that happens, are ugly.

Uh huh. You do realize that these fuel rods have already experienced a magnitude 9 earthquake and the "crash down" didn't happen? The "precarious elevation" is not that precarious.

Yes but... (1)

CheezburgerBrown . (3417019) | about a year ago | (#45502351)

How does it compare to the total computing power of the combined Bitcoin miners?

Re:Yes but... (1)

doublebackslash (702979) | about a year ago | (#45503227)

Fairly small. 60,663.20 petaFLOPS [bitcoinwatch.com] (about 60 exaFLOPS) at my time of clicking, if those numbers can be trusted (likely, since the network hashrate can be derived from the average speed of blocks being found). Not that Bitcoin mining uses floating point units, since it is brute forcing a hash... but I digress.
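(For the curious, the derivation hinted at above can be sketched in a few lines of Python. The difficulty figure below is illustrative, not the value from the time of the comment; each unit of difficulty corresponds to roughly 2^32 hashes per block, and blocks target a 600-second average interval.)

```python
# Rough sketch: estimating network hashrate from mining difficulty.
# Each unit of difficulty represents about 2**32 expected hashes per
# block, and the protocol targets one block every 600 seconds.
def network_hashrate(difficulty, block_interval_s=600):
    """Estimated hashes per second for the whole network."""
    return difficulty * 2**32 / block_interval_s

# Illustrative (hypothetical) difficulty value:
print(network_hashrate(7.0e8))  # hashes per second, on the order of 5e15
```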

China is not fooling around (1)

symbolset (646467) | about a year ago | (#45502385)

China is doing some amazing stuff in HPC, and with homegrown IP.

Re:China is not fooling around (1)

Anonymous Coward | about a year ago | (#45502683)

Homegrown as in stolen?

Re:China is not fooling around (0)

Anonymous Coward | about a year ago | (#45502801)

Good engineers copy -- the best engineers steal? Wait...

Re:China is not fooling around (0)

Anonymous Coward | about a year ago | (#45503499)

Well, the U.S.A. spends dozens of billions of dollars on industrial espionage and still does not manage to sell enough of their crap to pay their bills.

Re:China is not fooling around (1)

symbolset (646467) | about a year ago | (#45503653)

They are doing their own silicon designs using a properly licensed MIPS instruction set.

Waste of money (1, Insightful)

SoftwareArtist (1472499) | about a year ago | (#45502409)

I think the exascale race will turn out to be a dead end. Tightly coupled calculations simply don't scale. To effectively use even current generation supercomputers you need to scale to thousands of cores, and there just aren't very many codes that can do that. Exascale computers will require scaling to millions of cores, and I don't see that happening. For all but a handful of (mostly contrived) problems, that won't be possible.

So like it or not, we need to settle for loosely coupled codes that run mostly independent calculations on lots of nodes with only limited communication between them. And for that, you don't need these specially designed systems with super expensive interconnects. Any ordinary data center works just as well for a fraction of the cost.
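(The strong-scaling wall described above is the familiar Amdahl's law; a quick sketch, with illustrative serial fractions, shows why tightly coupled codes hit a ceiling long before a million cores.)

```python
# Amdahl's law: speedup on n cores when a fraction s of the work is serial.
def amdahl_speedup(s, n):
    return 1.0 / (s + (1.0 - s) / n)

# Even a 0.1% serial fraction caps speedup near 1000x on a million cores,
# far short of the million-fold parallelism an exascale machine offers.
for s in (0.01, 0.001):
    print(f"serial={s}: speedup on 1M cores = {amdahl_speedup(s, 10**6):.0f}")
```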

Re:Waste of money (1)

Glasswire (302197) | about a year ago | (#45502611)

"I'm too busy to research this and form an educated opinion, but I do have time to tell everyone my uninformed opinion."

Well you can't argue with that, but certainly a whole industry would argue with your assertion.

Re:Waste of money (1)

SoftwareArtist (1472499) | about a year ago | (#45504769)

I gather you're new to Slashdot? Most people on here have signature quotes like that. They get added automatically to every post. It's not part of the message.

Re:Waste of money (0)

Anonymous Coward | about a year ago | (#45502943)

Gustafson's law, man. With bigger machines, you run bigger problems, and these scale better. In other words, you only need weak scaling, not necessarily strong scaling.
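(For reference, Gustafson's law says the scaled speedup grows with the machine when the problem grows too; a minimal sketch, again with an illustrative serial fraction, contrasts it with the fixed-size Amdahl view.)

```python
# Gustafson's law: scaled speedup when the problem grows with the machine.
# s is the serial fraction of the scaled workload, n is the core count.
def gustafson_speedup(s, n):
    return n - s * (n - 1)

# With a 0.1% serial fraction, weak scaling on a million cores still yields
# roughly 999,000x, versus ~1000x under fixed-size (strong) scaling.
print(f"{gustafson_speedup(0.001, 10**6):,.0f}")
```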

Re:Waste of money (1)

SoftwareArtist (1472499) | about a year ago | (#45504805)

Even weak scaling to millions of processors is incredibly hard. It also isn't always useful. If the problems you care about take too long to be practical, then trying to solve even larger problems isn't an option. And if your calculation scales nonlinearly in problem size, those larger problems would take even longer to solve, even on a bigger computer.

Re:Waste of money (0)

Anonymous Coward | about a year ago | (#45505817)

Same AC here. I understand your concern about O(N^2) and worse problems. My work involves rewriting algorithms to make them linear-scaling or reduced-scaling.

Sorry for moving the goalposts a little, but I insist we don't yet have to scale to millions of cores. I have worked on 5 or 6 machines from the Top500, and all of them became terribly congested after the users flowed in. We will still make use of bigger machines even if codes only scale to 1000 cores: queueing times go down, max job sizes and walltimes go up, and so on, essentially reducing congestion.

Some of the machines from the top 20 will not give you an account if you cannot show your typical problems scale decently to 2048 cores. I appreciate this is sometimes hard. But being able to have 100 users, each running 10 such jobs is still a good bargain.

Re:Waste of money (1)

Pinky's Brain (1158667) | about a year ago | (#45506843)

How would it be a good bargain? The interconnect would be over-engineered for such a use. 1000 petascale machines will be cheaper than one exascale machine and can service those same users.

You need problems requiring time-sensitive solutions which can efficiently run on the complete system at least some percentage of the time; otherwise there is no value there.

Re:Waste of money (0)

Anonymous Coward | about a year ago | (#45509153)

In practice you never run on the complete system. You share the machine with hundreds of users. On one of the machines from the second top ten, for instance, the maximum job size is 1536 cores, while the machine has over 70000 cores. The first and last time they were all used for a "time-sensitive solution run on a complete system" was the linpack benchmark used to secure the position in top500. The interconnect over-engineered? From first hand experience, the interconnect could still use more bandwidth. While for most applications in my domain it's the RAM bandwidth that is the bottleneck, at about a thousand cores the MPI comms latencies start to kill efficiency, unless you use hybrid parallelism. YMMV.

Re:Waste of money (0)

Anonymous Coward | about a year ago | (#45505839)

To effectively use even current generation supercomputers you need to scale to thousands of cores, and there just aren't very many codes that can do that. Exascale computers will require scaling to millions of cores, and I don't see that happening.

Tianhe-2, which is currently the #1, alone has more than 3 million cores. It's still far from reaching exascale.

Any ordinary data center works just as well for a fraction of the cost.

Data centers are a solved problem; exascale is not, and it's going to be damn difficult to pull off. It has many challenges, as simply pushing hardware and software to the current limits won't do it. And it's going to be damn costly too. But guess what? Science often isn't cheap, and someone has to push humanity forward; if nobody ever made the necessary effort we'd still be jumping from tree to tree like the other apes.

good news everyone! (1)

Gravis Zero (934156) | about a year ago | (#45502495)

year-round heating will be free in japan ssstarting in 2018! lizard people, rejoisss!


AWS anyone? (0)

Anonymous Coward | about a year ago | (#45502853)

Makes me wonder how much processing power AWS has, if one could use it all at once.

What about an AFRICAN country? (0)

Anonymous Coward | about a year ago | (#45503959)

Surely, what with race being 'a social construct', and us all being 'the same', one of the MANY countries in Africa, full of people 'just like us', should be able to win this?

LOL...

Bitcoin = 60 exaFLOPS (2)

Reliable Windmill (2932227) | about a year ago | (#45504277)

As an interesting observation, the Bitcoin network has peaked at over 60 exaFLOPS of computational power.

Re:Bitcoin = 60 exaFLOPS (0)

Anonymous Coward | about a year ago | (#45504681)

I very much doubt it. The "FL" in FLOPS stands for "Floating point". Most of the computational power in the Bitcoin network is specialised hardware incapable of doing any floating-point arithmetic.

In terms of computational complexity (and silicon area), hash functions are a couple orders of magnitude easier than general floating point operations.

Back To The 5Th (0)

Anonymous Coward | about a year ago | (#45504625)

This all happened in Japan in the late 1970s and ended up as the government-ministry-endorsed Fifth Generation Computer Systems project in the early 1980s.

I remember this well.

Here is a write-up at Wikipedia: http://en.wikipedia.org/wiki/Fifth_generation_computer

Yes, 1982. Now thirty years ago! Then, Japan (Government and Nation) was rolling high numbers and looked to "Break The Bank" in the US.

But the financial and land-price speculation bubble busted in 1989. The leveraged-finances bubble busted in 1997. All hope was lost then. Fukushima is a sad tale on the coattails of the others mentioned, and has yet to play out to its end.

No. The assessment by the University of Tennessee (UT) Professor is patently wrong by facts of history and simple logic; both in apparent short supply at UT.

QED

Re:Back To The 5Th (0)

Anonymous Coward | about a year ago | (#45505131)

Thank you - great post.
It seems like every few years, Japan or China or someone announces some giant program to be world leader in something... usually, by the time they get to their goal, that "something" is no longer of interest... hasn't anyone here heard of "the cloud"?
Maybe by the time the Japanese start selling exaflop machines, the cloud will be at zettaflops you rent by the millisecond as needed.

Power Requirements (1)

Dialecticus (1433989) | about a year ago | (#45506349)

Building an exascale computer is all well and good, but we still have to find a way to power the damn thing. How will we generate the necessary 1.21 jiggawatts?

Computers are ESD sensitive, after all, so lightning is right out. Perhaps a stainless steel frame would help with the flux dispersal...
