
Warning At SC13 That Supercomputing Will Plateau Without a Disruptive Technology

Unknown Lamer posted about 9 months ago | from the series-of-nanotubes dept.


dcblogs writes "At this year's supercomputing conference, SC13, there is worry that supercomputing faces a performance plateau unless a disruptive processing tech emerges. 'We have reached the end of the technological era' of CMOS, said William Gropp, chairman of the SC13 conference and a computer science professor at the University of Illinois at Urbana-Champaign. Gropp likened the supercomputer development terrain today to the advent of CMOS, the foundation of today's standard semiconductor technology. The arrival of CMOS was disruptive, but it fostered an expansive age of computing. The problem is 'we don't have a technology that is ready to be adopted as a replacement for CMOS,' said Gropp. 'We don't have anything at the level of maturity that allows you to bet your company on.' Peter Beckman, a top computer scientist at the Department of Energy's Argonne National Laboratory, and head of an international exascale software effort, said large supercomputer system prices have topped off at about $100 million 'so performance gains are not going to come from getting more expensive machines, because these are already incredibly expensive and powerful. So unless the technology really has some breakthroughs, we are imagining a slowing down.'" Although carbon nanotube based processors are showing promise (Stanford project page; the group is at SC13 giving a talk about their MIPS CNT processor).


118 comments


EPA Offers Funding to Reduce Pollution from Diesel (-1)

Anonymous Coward | about 9 months ago | (#45474053)

WASHINGTON – The U.S. Environmental Protection Agency (EPA) has made available $2 million in funding for rebates to help public and private construction equipment owners replace or retrofit older diesel construction engines. The rebates will reduce harmful pollution and improve air quality in local areas.

“Exhaust from diesel construction equipment affects children, senior citizens and others in neighborhoods across the country,” said Janet McCabe, acting assistant administrator for EPA’s Office of Air and Radiation. “These rebates will help equipment owners protect public health and improve air quality near construction sites while updating their fleets.”

Rebates will be offered as part of the Diesel Emission Reduction Act, also known as DERA. This is the second rebate program offered since Congress reauthorized DERA in 2010 to allow rebates in addition to grants and revolving loans. The rebates will support the program’s effort to replace and update existing diesel vehicles, and will target where people are exposed to unhealthy air.

Since 2008, DERA has awarded more than $500 million to grantees across the country to retrofit, replace, or repower more than 50,000 vehicles. By cutting air pollution and preventing thousands of asthma attacks, emergency room visits and premature deaths, these clean diesel projects are projected to generate health benefits worth up to $8.2 billion.

Public and private construction equipment owners in eligible counties that are facing air quality challenges are encouraged to apply for rebates for the replacement or retrofit of construction equipment engines. EPA will accept applications from November 20, 2013, to January 15, 2014 and anticipates awarding the rebates in February 2014.

Construction equipment engines are very durable and can operate for decades. EPA has implemented standards to make diesel engines cleaner, but many older pieces of construction equipment remain in operation and predate these standards. Older diesel engines emit large amounts of pollutants such as nitrogen oxides (NOx) and particulate matter (PM). These pollutants are linked to health problems, including asthma, lung and heart disease, and even premature death. Equipment is readily available that can reduce emissions from these engines.

To learn more about the rebate program, the list of eligible counties, applicant eligibility and selection process, please visit http://www.epa.gov/cleandiesel/dera-rebate-construction.htm [epa.gov]

SOLUTION for CMOS "band gap" (5, Interesting)

kdawson (3715) (1344097) | about 9 months ago | (#45474309)

I suggest we go back a few levels, to the 1970s, when TTL was being replaced because of its higher voltages. Remember back when core memory was replaced but before CMOS? That was the TTL era, made by similar but NOT THE SAME transistors.

1. Silicon bandgap of CMOS is higher than TTL
2. Gate length is more fabricable. (Fabricate the gates in Mexico; say they were made in USA)
3. Drain has "quantum clogging" problems in TTL but not CMOS
4. Dopant levels make GaAs less commercially feasible.
5. Wafer sizes still dominated by "silicon" technology. It is not cheaper to go to more e-toxic and alien technologies. Far cheaper to stick with the wafers commercially produced today. GaAs and Indium Phosphide are like communion wafers!!!
6. Investors. We need to keep money at the forefront. Global depression is imminent. Must make cheap and available components with "WHAT WE HAVE" already!!

TTL. I think it's a good idea.

-KD

MOD PARENT UP! (0)

Anonymous Coward | about 9 months ago | (#45474403)

As a silicon engineer, I beckon you to mod parent up.

Re:SOLUTION for CMOS "band gap" (0)

Anonymous Coward | about 9 months ago | (#45474545)

TTL: Because CPUs need 10x or more power consumption and massive amounts of switching noise.

Re:SOLUTION for CMOS "band gap" (2)

ebno-10db (1459097) | about 9 months ago | (#45478049)

Bah. You want to burn power? Try ECL. The lights dimmed when you turned it on, but on the bright side you could cook your breakfast on a chip. ECL people were also using decent transmission line layout techniques for PCBs back in the '60s, a few decades before other digital designers had to worry about it. For many years the MECL handbook was the standard reference for high-speed digital PCB design.

OS is the Tesla of the software world (-1)

Anonymous Coward | about 9 months ago | (#45474085)

Do we need some failed (paid) hypnosis technician to remind us?

Work smarter, not harder. (1)

Anonymous Coward | about 9 months ago | (#45474087)

Coding to make the best use of resources.

Moving to clockless.

Minimal use processors (custom ASIC).

Live with it. Sometimes you may have to wait Seven. And a Half (what? not till next week?) Million Years for your answer. It may be a tricky problem.

Re:Work smarter, not harder. (2)

K. S. Kyosuke (729550) | about 9 months ago | (#45474133)

Moving to clockless.

Chuck Moore-style? [greenarraychips.com]

Minimal use processors

That doesn't make sense. Or rather, it makes multiple possible senses at once. Could you elaborate on what in particular you have in mind?

Re:Work smarter, not harder. (1)

Anonymous Coward | about 9 months ago | (#45474299)

That doesn't make sense. Or rather, it makes multiple possible senses at once. Could you elaborate on what in particular you have in mind?

I believe he was referring to building the processor for the task: getting rid of unnecessary gates, prioritizing certain operations over others, etc., based on the research being done. An example is the custom machines available for mining bitcoins now: prioritize integer hashing and get rid of all the junk you don't need (lots of memory and floating-point units, to name a couple).
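To make the point concrete, here's a minimal sketch of the kind of work those mining ASICs are built around (the header bytes and the difficulty target below are made-up placeholder values, purely for illustration): the whole inner loop is integer and bitwise work inside SHA-256, with nothing for a floating-point unit or a big cache to do.

import hashlib

def mine(header_without_nonce: bytes, target: int, max_nonce: int = 1_000_000):
    """Brute-force nonces until the double SHA-256 of the header is below the
    target. Every operation here is integer/bit work; no floats anywhere."""
    for nonce in range(max_nonce):
        header = header_without_nonce + nonce.to_bytes(4, "little")
        digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
    return None

# Placeholder 76-byte header and an artificially easy target so this finishes fast.
print(mine(b"\x00" * 76, target=1 << 240))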

Re:Work smarter, not harder. (0)

Anonymous Coward | about 9 months ago | (#45474235)

My guess is that the next tech breakthrough will be in the interconnect. A lot of these supercomputers use hyper-dimensional topologies to put resources into groups where message passing doesn't have to deal with long distances. But a lot of that has to do with making the software work with these topologies. Before a computer gets ranked, much time is spent optimizing Linpack for its particular setup.

Re:Work smarter, not harder. (3, Interesting)

mlts (1038732) | about 9 months ago | (#45474375)

I wonder if the next breakthrough would be using FPGAs and reconfiguring the instruction set for the task at hand. For example, a core gets a large AES encryption task, so it gets set to an instruction set optimized for array shifting. Another core gets another job, so it shifts to a set optimized for handling trig functions. Still another core deals with large amounts of I/O, so it ends up having a lot of registers to help with transforms, and so on.

Of course, fiber from chip to chip may be the next thing. This isn't new tech (the PPC 603 had this), but it might be what is needed to allow CPUs to communicate while closely coupled, without signal path lengths being as big an engineering issue. Similarly with the CPU and RAM.

Then there are other bottlenecks. We have a lot of technologies that are slower than RAM but faster than disk. Those can be used for virtual memory or a cache to speed things up, or at least get data in the pipeline to the HDD so the machine can go onto other tasks, especially if a subsequent read can fetch data no matter where it lies in that I/O pipeline.

Long term, photonics will be the next breakthrough that propels things forward. That and the Holy Grail of storage -- holographic storage, which promises a lot, but has left many a company (Tamarak, InPhase) on the side of the road, broken and mutilated without mercy.

Re:Work smarter, not harder. (3, Interesting)

Jane Q. Public (1010737) | about 9 months ago | (#45474681)

"Of course, fiber from chip to chip may be the next thing. This isn't new tech (the PPC 603 had this), but it might be what is needed to allow for CPUs to communicate closely coupled, but have signal path lengths be not as big an engineering issue. Similar with the CPU and RAM."

Fiber from chip to chip is probably a dead end, unless you're just primarily taking advantage of the speed of serial over parallel buses.

The problem is that you have to convert the light back to electricity anyway. So while fiber is speedier than wires, the delays (and expense) introduced at both ends limit its utility. Unless you go to actual light-based (rather than electrical) processing on the chips, any advantage to be gained there is strictly limited.

Probably more practical would be to migrate from massively parallel to faster serial communication. Like the difference between old parallel printer cables and USB. Granted, these inter-chip lines would have to be carefully designed and shielded (high freq.), but so do light fibers.

Re:Work smarter, not harder. (1)

Blaskowicz (634489) | about 9 months ago | (#45478519)

Probably more practical would be to migrate from massively parallel to faster serial communication. Like the difference between old parallel printer cables and USB. Granted, these inter-chip lines would have to be carefully designed and shielded (high freq.), but so do light fibers.

Did that happen already? HyperTransport looks like a serial bus, and Intel's QPI is much the same thing. Likewise PCIe replaced PCI, like your printer cable example. All those buses are "serial, but you use multiple lanes anyway" though.

Re:Work smarter, not harder. (1)

mikael (484) | about 9 months ago | (#45474731)

FPGAs are slower than ASICs. And an ASIC processor can always be made to be programmable. Beyond ASICs are systems-on-a-chip: memory, CPU, vector processing, internetworking all on one chip. Perhaps even thousands of cores and blocks of shared memory everywhere.

Moving to optical computing seems to be the most likely move, unless something completely different comes in - maybe processors could store bits in electromagnetic fields between electrodes rather than actually moving electrons. There was some research going on into magnetic storage that used individual magnetic vortices to store bits rather than millions of atoms. So it would seem logical that they could extend that to logic gates.

Re:Work smarter, not harder. (1)

glueball (232492) | about 9 months ago | (#45475615)

FPGAs are slower than ASICs

Not to market they aren't. And post-production reprogramming is a problem for ASICs.

Re:Work smarter, not harder. (3, Funny)

symbolset (646467) | about 9 months ago | (#45474775)

Assume a spherical data center...

Re:Work smarter, not harder. (2)

Blaskowicz (634489) | about 9 months ago | (#45478541)

This makes me think of the Cray, a nice-looking cylinder shape with a big mess of small wires inside. Or that time-lapse video a while back of people wiring a cluster with lots of colored cables, standing in the center of it.

Re:Work smarter, not harder. (1)

solidraven (1633185) | about 9 months ago | (#45474567)

Clockless isn't a very good idea; designing such a large asynchronous system with current CMOS technology is going to end in a big disaster.

It may not be necessary to be so large if clockles (0)

Anonymous Coward | about 9 months ago | (#45474709)

It may not be necessary to be so large if clockless.

Re:Work smarter, not harder. (0)

Anonymous Coward | about 9 months ago | (#45474761)

The idea of going clockless is also known as asynchronous computing. Each logic block would go at its own speed, ramping up when demand required it, otherwise just slowing down or switching off completely. It was an idea modeled after the way the brain works.

Re:Work smarter, not harder. (1)

marcosdumay (620877) | about 9 months ago | (#45474921)

Yes, and that's a great idea for mobile computers (maybe it'll be the next big thing there some time). It's just not that useful for supercomputing... still useful, but not revolutionary.

The problem is that supercomputers are made with the best of mass-produced chips. You are already discarding the processors that can't run at top speed, and you have already designed their pipelines so that variation in instruction times won't reduce your throughput. This way, all you can gain from asynchronous chips is the variation that was too small to matter in those optimizations.

Re:Work smarter, not harder. (1)

buswolley (591500) | about 9 months ago | (#45475023)

Considering how disruptive computer-powered AI is becoming, a temporary slowdown could be good for us... to give us time to adapt, put controls in place, and decide on a future without the need for human labor.

Re:Work smarter, not harder. (1)

geekoid (135745) | about 9 months ago | (#45475189)

Yeah, you try to create some laws to govern a speculative future.
Could you imagine writing regulation for the internet in 1950?
You can't regulate until after something is in the wild, otherwise it will fail horribly.

Re:Work smarter, not harder. (1)

buswolley (591500) | about 9 months ago | (#45478521)

I meant a cultural adaptation. Things are changing very, very, very quickly now, even within a single generation.

Re:Work smarter, not harder. (1)

strack (1051390) | about 9 months ago | (#45478841)

i like how you used the word 'cultural' in place of 'all this newfangled tech is scaring the olds, slow down a bit dagnammit'

MIPS CNT... (5, Funny)

motd2k (1675286) | about 9 months ago | (#45474121)

MIPS CNT... how do you pronounce that?

Re:MIPS CNT... (1)

Flere Imsaho (786612) | about 9 months ago | (#45476817)

MIPS CNT... how do you pronounce that?

MIPS CNT - do you even beowulf? SPECint me IRL!

Breakthroughs are there. (0)

Anonymous Coward | about 9 months ago | (#45474135)

Quantum/optical computing is going from impossible to possible within 1 decade. Is it really a "slowdown" or just a few years of lag time? We'll see.

The GOP’s scary-movie strategy (-1)

Anonymous Coward | about 9 months ago | (#45474143)

http://www.washingtonpost.com/opinions/dana-milbank-the-gops-scary-movie-strategy/2013/11/19/41da73a4-5163-11e3-9e2c-e1d01116fd98_story.html

"the ones who join exchanges are likely to be older and sicker, making the insurance pool costlier to insurers. As Larry Levitt, a senior vice president of the Kaiser Family Foundation, explained to me, if costs are more than 3percent higher than anticipated in the first few years of Obamacare, the federal government will have to pick up at least half of the additional expense."

GOOD GRIEF!

This is news? This is not a "Republican Strategy", this is what we call the TRUTH!

And those of us with brains who have looked at this have known this all along!

What a freaking farce.

Re:The GOP’s scary-movie strategy (-1)

Anonymous Coward | about 9 months ago | (#45474479)

This is also why so many of the "administration's" excuses for why the Obammycare rollout has been so bad are focused at republicans.

"We didn't want to publicize the security problems we were having prior to the website's launch, because the Republicans would ahve used that against us."

"We didn't want to talk about the millions of policies we knew would be cancelled when Obamacare went live, because the Republicans would have used that against us."

And so on and so forth.

In every such example the implication is clear: the Republicans would have used the facts to counter the "administration's" lies, and to Obammy and his minions that was a bad thing; a much worse thing than launching the site and setting Obammycare in motion without security and with the promise of overwhelming sticker shock.

They decided to "play politics" with people's lives, literally, rather than have to defend their monstrous "law."

How is this not already a GOPe ad campaign?

So what? (2, Interesting)

Animats (122034) | about 9 months ago | (#45474179)

So what? Much of supercomputing is a tax-supported boondoggle. There are few supercomputers in the private sector. Many things that used to require supercomputers, from rocket flight planning to mould design, can now be done on desktops. Most US nuclear weapons were designed on machines with less than 1 MIPS.

Supercomputers have higher cost/MIPS than larger desktop machines. If you need a cluster, Amazon and others will rent you time on theirs. If you're sharing a supercomputer, and not using hours or days of time on single problems, you don't need one.

Re:So what? (5, Insightful)

Anonymous Coward | about 9 months ago | (#45474377)

There are actually a half-decent number of 'supercomputers' -depending on how you define that term- in the private sector. From 'simple' ones that do rendering for animation companies to ones that model airflow for vehicles to ones that crunch financial numbers to... well, lots of things, really. Are they as large as the biggest national facilities? Of course not - that's where the next generation of business-focused systems gets designed and tested and models and methods get developed and tested.

It is indeed the case that far simpler systems ran early nuclear weapon design, yes, but that's like saying far simpler desktops had 'car racing games' -- when, in reality, the quality of those applications has changed incredibly. Try playing an old racing game on a C64 vs. a new one now and you'd probably not get that much out of the old one. Try doing useful, region-specific climate models with an old system and you're not going to get much out of it. Put a newer model with much higher resolution, better subgrid models and physics options, and the ability to accurately and quickly do ensemble runs for a sensitivity analysis and, well, you're in much better territory scientifically.

So, in answer to "So what?", I say: "Without improvements in our tools (supercomputers), our progress in multiple scientific -and business- endeavors slows down. That's a pretty big thing."

Re:So what? (1)

interkin3tic (1469267) | about 9 months ago | (#45475107)

I'd argue that most scientific progress doesn't depend on supercomputers, and anything we know we can use supercomputers for, we can do with current computers; it will just take longer. Aside from the science of making more powerful computers, I suppose. Protein folding, for example, could go faster, but it's already going.

This is not to say I think we should just be content with the computers we have now; I'm just saying it doesn't seem too catastrophic to science. And business seems to make money no matter what. They'll be able to sell people new computers one way or another. "THIS version of the MacBook... uh... IS YELLOW!!!!"

Re:So what? (0)

Anonymous Coward | about 9 months ago | (#45478291)

I would say you don't know what you're talking about.

Re:So what? (1)

fuzzyfuzzyfungus (1223518) | about 9 months ago | (#45474581)

"If you need a cluster, Amazon and others will rent you time on theirs."

You come from the planet where all algorithms parallelize neatly, eh? I've heard that they've cured the common cold and the second law of thermodynamics there, too...

Re:So what? (0)

Anonymous Coward | about 9 months ago | (#45474733)

You come from the planet where all algorithms parallelize neatly, eh? I've heard that they've cured the common cold and the second law of thermodynamics there, too...

Yes, it's really wonderful here. Unfortunately we have decided not to share our science and technology with Earthlings, it's for your own good. But we do give you this. [youtube.com]

Re:So what? (1)

whoever57 (658626) | about 9 months ago | (#45475411)

You come from the planet where all algorithms parallelize neatly, eh? I've heard that they've cured the common cold and the second law of thermodynamics there, too...

Because supercomputers are not massively parallel computers ... Oh wait....

Re:So what? (3, Informative)

fuzzyfuzzyfungus (1223518) | about 9 months ago | (#45475473)

They have no choice in the matter, since nobody makes 500GHz CPUs; but there is a reason why (many, not all) 'supercomputers' lay out a considerable amount of their budget for very fast, very low-latency interconnects (Myrinet, InfiniBand, sometimes proprietary fabrics for single-system-image stuff), rather than just going GigE or 10GigE and calling it a day, like your generic datacenter-of-whitebox-1Us does.

There are problems where chatter between nodes is low, and separate system images are acceptable, and blessed are they, for they shall be cheap; but people don't buy the super fancy interconnects just for the prestige value.

Re:So what? (4, Interesting)

Kjella (173770) | about 9 months ago | (#45474585)

Of course these people are talking about supercomputers and the relevance to supercomputers, but you have to be pretty daft not to see the implications for everything else. In the last few years almost all the improvement has been in power states and frequency/voltage scaling; if you're doing something at 100% CPU load (and it isn't a corner case that benefits from a new instruction), the power efficiency has been almost unchanged. Top-of-the-line graphics cards have gone constantly upwards and are pushing 250-300W, and even Intel's got Xeons pushing 150W, not to mention AMD's 220W beast, though that's a special oddity. The point is that we need more power to do more, and for hardware running 24x7 that's a non-trivial part of the cost that's not going down.

We know CMOS scaling is coming to an end, maybe not at 14nm or 10nm but at the end of this decade we're approaching the size of silicon atoms and lattices. There's no way we can sustain the current rate of scaling in the 2020s. And it wouldn't be the end of the world; computers would go roughly the same speed they did ten or twenty years earlier, the way cars and jet planes do. Your phone would never become as fast as your computer, which would never become as fast as a supercomputer, again. We could get smarter at using that power, of course, but fundamentally hard problems that require a lot of processing power would go nowhere, and it won't be terahertz processors, terabytes of RAM and petabytes of storage for the average man. It was a good run while it lasted.

Re:So what? (1)

whoever57 (658626) | about 9 months ago | (#45474917)

We know CMOS scaling is coming to an end, maybe not at 14nm or 10nm but at the end of this decade we're approaching the size of silicon atoms and lattices.

I have heard that statement made many times since about the mid-80s or at the very latest, early '90s -- not the exact size, but the prediction of the imminent end to CMOS scaling. Perhaps it is true now, as we approach single molecule transistors.

Re:So what? (2)

lgw (121541) | about 9 months ago | (#45477031)

Yes, the difference now is that we're reaching the limits of physics, and even with something better than CMOS there's not much headroom. There's only so much state you can represent with one atom, and we're not that far off.

I think the progress we'll see in the coming decades will be very minor in speed of traditional computers, significant in power consumption, and huge in areas like quantum computing, which are not incremental refinements of what we're so good at today.

Our tools are nearly as fast as they reasonably can be, but that's not to say there aren't important gains to be had from different kinds of tools.

Re:So what? (1)

Miamicanes (730264) | about 9 months ago | (#45477487)

Exactly. Thanks to atomic uncertainty, we're rapidly approaching the point where CPUs are going to need 3 or more pipelines executing the same instructions in parallel, just so we can compare the results and decide which result is the most likely to be the RIGHT one.

We're ALREADY at that point with flash memory. Unlike SRAM, which is unambiguously 0 or 1, SLC flash is like a leaky bucket that starts out full (1), gets instantly drained to represent 0, and otherwise leaks over time, but still counts as '1' as long as it's not empty. MLC flash is even worse... one bucket represents 2 or more bits, so the amount that can leak away without corrupting the value is even less. Twenty years from now, CPUs will be the same way... 16 times the transistors, but maybe 4x the performance of today if we're lucky, because the transistors will be so small, they'll occasionally get "stuck" or "leak", and CPUs will need additional logic to determine when it happens and transparently fix it when it does (we might even be at that point already to some extent).
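A crude way to picture that extra logic is plain majority voting across redundant executions. Here's a toy sketch (the 'units' are just Python callables standing in for redundant pipelines, and the injected bit flip is simulated; none of this reflects how any real CPU implements it):

from collections import Counter

def vote(results):
    """Return the value a majority of the redundant units agree on."""
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: uncorrectable fault")
    return value

def redundant_execute(op, a, b, units=3, fault_injector=None):
    """Run the same operation on several 'pipelines' and vote on the outputs."""
    results = []
    for i in range(units):
        r = op(a, b)
        if fault_injector:
            r = fault_injector(i, r)  # simulate a transient fault in unit i
        results.append(r)
    return vote(results)

# Unit 1 suffers a bit flip; the voter still recovers the right answer (42).
flip_bit_in_unit_1 = lambda i, r: r ^ 0x4 if i == 1 else r
print(redundant_execute(lambda a, b: a + b, 20, 22, fault_injector=flip_bit_in_unit_1))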

Re:So what? (0)

Anonymous Coward | about 9 months ago | (#45477943)

With respect, I think you're stuck on the current mode of thinking. Everything you say is certainly true so long as you apply the standard CPU design ideas.

What am I talking about? OK, by way of analogy. In the field of wireless transmission, multi-path signals were considered 'distortion' or worse, for decades. The multiple paths were typically the result of signal reflections and those were clearly noise to be vigorously tuned out, toned down, suppressed or ignored. Clearly.

Then the WiFi alliance realized that multi-path signals could be exploited to increase data transmission rates. This was a major change of perspective. Cue the entrance of MIMO signalling.

Many of the effects you mention are the result of quantum effects becoming apparent as transistor feature sizes shrink. Burying quantum effects under statistically significant numbers of electrons is becoming, or is at least on the horizon to become, impractical.

However we already have a nascent field of quantum computing. It's early days yet but the promise is exciting. What if those quantum effects, rather than being viewed as a 'problem', are in fact a huge opportunity? We'll need to learn how to harness quantum logic in the real world. We're talking about mass market products, small and power efficient, cheap enough to produce in the millions.

Suddenly Moore's law kicks in again. Smaller transistor sizes would promote quantum effects, whereas larger features suppress them. Quantum superpositions allow quantum transistors to represent many logic states simultaneously. You can break the relationship where 1 transistor = 1 bit (a simplification of course). Enter the qubit.

Re:So what? (2)

geekoid (135745) | about 9 months ago | (#45475217)

"... current rate of scaling in the 1980s err 1990 err 2000, definitely 2000 err 2010.. I know; definitely 2020.

Re:So what? (0)

Anonymous Coward | about 9 months ago | (#45474625)

There are a lot of supercomputers in the private sector, although most of them don't bother submitting rankings to something like the top 500 list. Oil companies in particular have some really large machines, along with various simulation software used by some large aircraft and computer chip designing companies. Some companies are now trying to get into engineered materials and that requires substantial resources to simulate atomic structures of materials and derive macroscopic properties.

Supercomputers have higher cost/MIPS than larger desktop machines.

Well duh, the same as semi-trucks and sports cars have a higher cost per mile than a typical family car. They do different things and are optimized for different problems, though. If you have a problem that needs a cluster, then you build a cluster. A lot of universities have clusters of varying size for problems that don't need as much communication between processes as in supercomputers. They just don't end up in the news as much because they are relatively boring, but they are still important for both research and education.

Re:So what? (1)

unixisc (2429386) | about 9 months ago | (#45474637)

I somewhat agree w/ this. For the applications that do need supercomputers, they should really work on escalating the levels of parallelism within them. After that, just throw more CPUs at the problem. Indeed, that's the way Intel managed to wipe RISC out of the market.

Also, as others pointed out, improve the other bottlenecks that exist there - the interconnects and that sort of thing. We don't need to move out of CMOS to solve a problem facing a fringe section of the market.

Re:So what? (2, Interesting)

Anonymous Coward | about 9 months ago | (#45474641)

Actually, the sort-of sad reality is that, outside the top few supercomputers in the world, the "top500" type lists are completely bogus because they don't include commercial efforts who don't care to register. Those public top-cluster lists are basically where tax-supported-boondoggles show off, but outside the top 5-10 entries (which are usually uniquely powerful in the world), the rest of the list is bullshit. There are *lots* (I'd guess thousands) of clusters out there that would easily make the top-20 or top-50 list of the public clusters, that are just undocumented publicly. So yes, "supercomputing"-level clusters are in wide commercial use. I know for a fact I've worked at two different companies in the past in this situation. One had a bit over 10K Opterons in a single datacenter wired up with Infiniband doing MPI-style parallelism, and this was back in like ... I want to say about 2005? They were using it to analyze seismic data to find oil. Never showed up on any list of supercomputers anywhere, like almost all commercial efforts.

Does disruptive mean affordable? (4, Interesting)

UnknownSoldier (67820) | about 9 months ago | (#45474185)

We've had Silicon Germanium cpus that can scale to 1000+ GHz for years. Graphene is also another interesting possibility.

The question is: "At what price can you make the power affordable?"

For 99% of people, computers are good enough. For the other 1% they never will be.

Re:Does disruptive mean affordable? (1)

bluefoxlucid (723572) | about 9 months ago | (#45474297)

Yeah, SOS-CMOS like SOG-CMOS or SOD-CMOS. You can't have a data core without SOD-CMOS.

Re:Does disruptive mean affordable? (1)

Anonymous Coward | about 9 months ago | (#45474331)

We've had Silicon Germanium cpus that can scale to 1000+ GHz for years.

Not really. We've had transistors that can get almost that fast... no one builds a CPU with those, for good reasons. It's not a question of cost.

Re:Does disruptive mean affordable? (2)

green is the enemy (3021751) | about 9 months ago | (#45474371)

The problem is heat. Simple as that. Currently there are no technologies more power efficient than CMOS. Therefore there are no technologies that can produce more powerful computers than CMOS. If a significantly more power-efficient technology is found, the semiconductor manufacturers will absolutely attempt to use it.

Re:Does disruptive mean affordable? (2)

K. S. Kyosuke (729550) | about 9 months ago | (#45474437)

Do they also scale thermally? It is ultimately a problem of computations per joule, not a problem of computations per second. Supercomputers already have to use parallel algorithms, so building faster ones is about how much computing power you can squeeze into a cubic meter without the machine catching fire. That's actually the other reason why CMOS is being used, and not, e.g., ECL. ;-)
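To put rough numbers on the computations-per-joule point: an exaflop machine on the ~20 MW power budget people were talking about around this time works out to about 50 GFLOPS per watt, well above what 2013-era systems delivered. A quick back-of-the-envelope check (the "today" efficiency below is a ballpark assumption, not a measured figure):

# Back-of-the-envelope: how efficient would an exascale machine have to be?
target_flops = 1e18            # 1 exaflop/s
power_budget_w = 20e6          # ~20 MW, the commonly cited exascale power cap
required_gflops_per_watt = target_flops / power_budget_w / 1e9
print(f"required: {required_gflops_per_watt:.0f} GFLOPS/W")             # -> 50

today_gflops_per_watt = 3.0    # assumed ballpark for an efficient 2013 system
print(f"gap: {required_gflops_per_watt / today_gflops_per_watt:.0f}x")  # -> ~17x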

Re:Does disruptive mean affordable? (4, Informative)

fuzzyfuzzyfungus (1223518) | about 9 months ago | (#45474747)

Even if you are willing to burn nigh unlimited power, thermals can still be a problem (barring some genuinely exotic approaches to cooling), because ye olde speed of light says that density is the only way to beat latency. There are, of course, ways to suck at latency even more than the speed of light demands; but there are no ways to suck less.

If your problem is absolutely beautifully parallel (and, while we're dreaming, doesn't even cache-miss very often), horrible thermals would be a problem that could be solved by money: build a bigger datacenter and buy more power. If there's a lot of chatter between CPUs, or between CPUs and RAM, distance starts to hurt. If memory serves, 850nm light over 62.5 micrometer fiber is almost 5 nanoseconds/meter. That won't hurt your BattleField4 multiplayer performance; but when even a cheap, nasty, consumer grade CPU is 3GHz, there go 15 clocks for every meter, even assuming everything else is perfect. Copper is worse, some fiber might be better.
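The clocks-per-meter arithmetic is easy to reproduce; a quick sketch using the same figures (the 5 ns/m and 3 GHz numbers are just the ones quoted above):

fiber_ns_per_m = 5.0   # ~5 ns per meter for 850nm light in multimode fiber (as above)
clock_ghz = 3.0        # garden-variety 3 GHz CPU

cycles_per_meter = fiber_ns_per_m * clock_ghz   # (ns/m) * (cycles/ns)
print(f"{cycles_per_meter:.0f} cycles of latency per meter")      # -> 15

# A round trip across a 10 m machine room burns ~300 cycles before any
# switching or serialization overhead is even counted.
print(f"10 m round trip: {2 * 10 * cycles_per_meter:.0f} cycles")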

Obviously, problems that can be solved by money are still problems, so they are a concern; but problems that physics tells us are insoluble are even less fun.

Re:Does disruptive mean affordable? (1, Interesting)

lgw (121541) | about 9 months ago | (#45477097)

Light-carrying fiber is slower than copper (5ns/m vs 4 for copper) - it sort of has to be, as the higher refractive index goes hand in hand with the need for total internal reflection at the boundary of the clear plastic. Optical helps with bandwidth per strand, not with latency.

I think the next decade of advances will be very much about power efficiency, and very little about clock rate on high-end CPUs. That will benefit both mobile and supercomputers, as both are power-constrained (supercomputers by the heat rather than the raw power, but it works out to the same thing).

Re:Does disruptive mean affordable? (1)

triffid_98 (899609) | about 9 months ago | (#45474457)

No, they can't. We've known that for some time...and this is why [wikipedia.org] .

Re:Does disruptive mean affordable? (2)

mlts (1038732) | about 9 months ago | (#45474459)

I'd say computers are good enough for today's tasks... but what about tomorrow's?

With the advent of harder-hitting ransomware, we might need to move to better snapshotting/backup systems to preserve documents against malicious overwrites, which are made worse with SSDs (TRIM zeroes out stuff; no recovery, no way).

Network bandwidth is also changing. LANs are gaining bandwidth, while WANs are stagnant. So caching, CDN services, and such will need to improve. WAN bandwidth isn't gaining anything but more fees here in the US.

Right now, the basic computer is sort of stagnant, but if fast WAN links become usable, this can easily change.

Re:Does disruptive mean affordable? (2)

fuzzyfuzzyfungus (1223518) | about 9 months ago | (#45474791)

Arguably, WAN bandwidth (except wireless, where the physics are genuinely nasty) is mostly a political problem with a few technical standards committees grafted on, rather than a technical problem.

Even without much infrastructure improvement, merely scaring a cable company can, like magic, suddenly cause speeds to increase to whatever DOCSIS level the local hardware has been upgraded to, even as fees drop. Really scaring them can achieve yet better results, again without even driving them into insolvency, however much they might deserve it...

Re:Does disruptive mean affordable? (0)

Anonymous Coward | about 9 months ago | (#45474741)

Bullshit. We have fancy-material TRANSISTORS that SWITCH at terahertz speeds, and even then only when cooled to around absolute zero. This doesn't mean you can just connect it all together and expect a CPU to run two orders of magnitude faster than current CPUs. Money doesn't trump physics.

Re:Does disruptive mean affordable? (0)

Anonymous Coward | about 9 months ago | (#45475957)

"We've had Silicon Germanium cpus that can scale to 1000+ GHz for years."

We do? Can you provide a source for this astounding claim? I have the feeling that Slashdot has a number of "technology conspiracy theorists" that truly have no clue whatsoever what they're talking about.

Re:Does disruptive mean affordable? (1)

Meditato (1613545) | about 9 months ago | (#45476615)

Terahertz circuitry exists; it's just stupidly expensive to produce and cool, and besides, performance isn't just about clock speed. Pipelining and memory access speed play into it a great deal.

If only (0)

Anonymous Coward | about 9 months ago | (#45474187)

If only there was some newer technology to save us! Maybe something using light and quantum super states... idk... crazy talk...

My inner grammar nazi says (1)

MikeTheGreat (34142) | about 9 months ago | (#45474205)

that this is not a complete sentence:
"Although carbon nanotube based processors are showing promise [...]."

Go, speed-editor, go! :)

Re:My inner grammar nazi says (1)

K. S. Kyosuke (729550) | about 9 months ago | (#45474335)

That sentence was just a lab sample.

on the nature of disruptive... (4, Insightful)

schlachter (862210) | about 9 months ago | (#45474275)

my intuition tells me that disruptive technologies are precisely that because people don't anticipate them coming along nor do they anticipate the changes that will follow their introduction. not that people can't see disruptive tech ramping up, but often they don't.

Re:on the nature of disruptive... (0)

BringsApples (3418089) | about 9 months ago | (#45474555)

It seems to me that any technological advancement that humans have "invented" is merely a fabrication of context, in order to do what Nature is already doing. Perhaps the next "supercomputers" will not be what we think of as computers, but more like biological structures that are able to process things without using mathematics, or bits at all. As if all aspects of mathematics were inherently built into the structure itself.

just a thought.

Re:on the nature of disruptive... (1)

bluefoxlucid (723572) | about 9 months ago | (#45474717)

No, they're disruptive because they change what is technically possible. The ability to directly manipulate ambient energy would greatly change ... everything. I've got piles and piles of things we can do with quantum tunneling junctions, when they're refined enough--currently you get a slab with 1% of the area functional (it works, but it's overly expensive to manufacture and too large).

Anticipating a new advance to produce multi-thousand-GHz processors for 15 years won't make them disruptive. We'll see sudden explosive CPU clock speed growth when they come into existence, and sudden new efforts to take advantage of it all.

Re:on the nature of disruptive... (4, Interesting)

fuzzyfuzzyfungus (1223518) | about 9 months ago | (#45474973)

my intuition tells me that disruptive technologies are precisely that because people don't anticipate them coming along nor do they anticipate the changes that will follow their introduction. not that people can't see disruptive tech ramping up, but often they don't.

Arguably, there are at least two senses of 'disruptive' at play when people talk about 'disruptive technology'.

There's the business sense, where a technology is 'disruptive' because it turns a (usually pre-existing, even considered banal or cheap and inferior) technology into a viable, then superior, competitor to a nicer but far more expensive product put out by the fat, lazy incumbent. This comment, and probably yours, was typed on one of those (or, really, a collection of those).

Then there's the engineering/applied science sense, where it is quite clear to everybody that "If we could only fabricate silicon photonics/achieve stable entanglement of N QBits/grow a single-walled carbon nanotube as long as we want/synthesize a non-precious-metal substitute for platinum catalysts/whatever, we could change the world!"; but nobody knows how to do that yet.

Unlike the business case (where the implications of 'surprisingly adequate computers get unbelievably fucking crazy cheap' were largely unexplored, and before that happened people would have looked at you like you were nuts if you told them that, in the year 2013, we have no space colonies, people still live in mud huts and fight bush wars with slightly-post-WWII small arms; but people who have inadequate food and no electricity have cell phones), the technology case is generally fairly well planned out (practically every vendor in the silicon compute or interconnect space has a plan for, say, what the silicon-photonics-interconnect architecture of the future would look like; but no silicon photonics interconnects, and we have no quantum computers of useful size; but computer scientists have already studied the algorithms that we might run on them, if we had them); but application awaits some breakthrough in the lab that hasn't come yet.

(Optical fiber is probably a decent example of a tech/engineering 'disruptive technology' that has already happened. Microwave waveguides, because those can be tacked together with sheet metal and a bit of effort, were old news, and the logic and desirability of applying the same approach to smaller wavelengths was clear; but until somebody hit on a way to make cheap, high-purity glass fiber, that was irrelevant. Once they did, the microwave-based infrastructure fell apart pretty quickly; but until they did, no amount of knowing that 'if we had optical fiber, we could shove 1000 links into that one damn waveguide!' made much difference.)

Didn't that boat sail with the Cray Y-MP? (2, Insightful)

tlambert (566799) | about 9 months ago | (#45474517)

Didn't that boat sail with the Cray Y-MP?

All our really big supercomputers today are adding a bunch of individual not-even-Krypto-the-wonderdog CPUs together, and then calling it a supercomputer. Have we reached the limits in that scaling? No.

We have reached the limits in the ability to solve big problems that aren't parallelizable, due to the inability to produce individual CPU machines in the supercomputer range, but like I said, that boat sailed years ago.

This looks like a funding fishing expedition for the carbon nanotube processor research that was highlighted at the conference.

Re:Didn't that boat sail with the Cray Y-MP? (3, Insightful)

timeOday (582209) | about 9 months ago | (#45474677)

All our really big supercomputers today are adding a bunch of individual not-even-Krypto-the-wonderdog CPUs together, and then calling it a supercomputer. Have we reached the limits in that scaling? No.

This is wrong on both counts. First, the CPUs built into supercomputers today are as good as anybody knows how to make one. True, they're not exotic, in that you can also buy one yourself for $700 on newegg. But they represent billions of dollars in design and are produced only on multi-billion dollar fabs. There is no respect in which they are not lightyears more advanced than any custom silicon cray ever put out.

Second, you are wrong that we are not reaching the limits of scaling these types of machines. Performance does not scale infinitely on realistic workloads. And budgets and power supply certainly do not scale infinitely.

Re:Didn't that boat sail with the Cray Y-MP? (1)

tlambert (566799) | about 9 months ago | (#45477019)

First, the CPUs built into supercomputers today are as good as anybody knows how to make one.

Well, that's wrong... we just aren't commercially manufacturing the ones we know how to make already.

There is no respect in which they are not lightyears more advanced than any custom silicon cray ever put out.

That's true... but only because you intentionally limited us to Si as the substrate. GaAs transistors have a switching speed around 250GHz, which is about 60 times what we get with absurdly cooled and over-clocked silicon.

Re:Didn't that boat sail with the Cray Y-MP? (1)

timeOday (582209) | about 9 months ago | (#45477441)

Well, that's wrong... we just aren't commercially manufacturing the ones we know how to make already.

What is missing?

Re:Didn't that boat sail with the Cray Y-MP? (1)

rubycodez (864176) | about 9 months ago | (#45478109)

The attempts to make large GaAs chips and supercomputers failed spectacularly, for good reason: even at the slow speeds of the early 1990s the stuff had to sit in a bucket of coolant. That bad choice of GaAs made the Cray-3 fail.

Re:Didn't that boat sail with the Cray Y-MP? (1)

antifoidulus (807088) | about 9 months ago | (#45478089)

Actually, the cost to fab custom chips is a huge impediment to getting faster (at least faster on Linpack) supercomputers. Both of the Japanese entries that have grabbed the top spot in the past 10 years (the Earth Simulator and the K computer) were actually custom jobs that added extra vector CPUs. These machines were very fast but also very expensive to make because they had such small runs of CPUs. The K computer was slightly better in this regard as it uses a bunch of SPARC CPUs with basically an extra vector unit bolted on, but it is still a custom CPU that needed to be custom fabbed.

It would be great for supercomputing if there were more commodity CPUs that had multiple vector units per core, but unlike GPUs, where gamers subsidize a lot of the research, development, and production of high-performance hardware, there is just no demand outside supercomputing for more than one vector unit per core on a CPU. So at least for the time being we may see the current pattern continue: someone will come up with the funding for a custom CPU that will have multiple vector units per core, leapfrog everyone else for a while, then eventually fall behind commodity hardware as they do not have the resources to continue developing their hardware designs for their very small customer base. Rinse and repeat.

Re:Didn't that boat sail with the Cray Y-MP? (1)

timeOday (582209) | about 9 months ago | (#45478785)

The Xeon Phi [intel.com] has vastly greater SIMD capability than any Cray or SPARC architecture. In stock now [sabrepc.com].

Re:Didn't that boat sail with the Cray Y-MP? (4, Informative)

Anonymous Coward | about 9 months ago | (#45474695)

The problem is that there are many interesting problems which don't parallelize *well*. I emphasize *well* because many of these problems do parallelize; it's just that the scaling falls off by an amount that matters more and more as you add thousands of processors. For these sorts of problems (of which there are many important ones), you can take Latest_Processor_X and use it efficiently in a cluster of, say, 1,000 nodes, but probably not 100,000. At some point the latency and communication and whatnot just take over the equation. Maybe for a given problem of this sort you can solve it in 10 days on 10,000 nodes, but the runtime only drops to 8 days on 100,000 nodes. It just doesn't make fiscal sense to scale beyond a certain limit in these cases. For these sorts of problems, single-processor speed still matters, because they can't be infinitely scaled by throwing more processors at the problem, but they can be infinitely scaled (well, within information-theoretic bounds dealing with entropy and heat density) by faster single CPUs (which are still clustered to the degree it makes sense).
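Using those (made-up, but realistic-feeling) figures, the collapse in parallel efficiency is easy to quantify:

def strong_scaling_efficiency(base_nodes, base_days, nodes, days):
    """Fraction of the ideal speedup actually achieved when scaling up."""
    ideal_speedup = nodes / base_nodes
    actual_speedup = base_days / days
    return actual_speedup / ideal_speedup

# 10 days on 10,000 nodes -> 8 days on 100,000 nodes (the numbers above)
print(f"{strong_scaling_efficiency(10_000, 10, 100_000, 8):.1%}")   # -> 12.5%
# i.e. 10x the hardware buys a 1.25x speedup; most of the added machine is idle overhead.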

CMOS basically ran out of real steam on this front several years ago. It's just been taking a while for everyone to soak up the "easy" optimizations that were lying around elsewhere to keep making gains. Now we're really starting to feel the brick wall...

Re:Didn't that boat sail with the Cray Y-MP? (0)

Anonymous Coward | about 9 months ago | (#45475951)

The problem is that there are many interesting problems which don't parallelize *well*.

Do you have an actual example?

Serious question here... I work in HPC. Not saying there aren't problems like that, just that too often people making comments like that just don't know anything about how to parallelize anything.

Maybe for a given problem of this sort you can solve it in 10 days on 10,000 nodes, but the runtime only drops to 8 days on 100,000 nodes.

To nitpick, not only are your numbers unrealistic (though your point is well-taken), I believe that you probably mean 100,000 cores instead of 100,000 nodes... I don't know for sure, but I don't think that I've heard of any cluster with 100,000 nodes in existence, or even planned. Also, those large clusters that make the news are typically run mainframe-like, in that you don't use them all yourself, but there are many users, and you take a much smaller group of nodes.

When you talk about running problems on the complete system, you actually get into hardware problems long before you hit software problems -- hardware failures are common enough that some clusters have trouble running long enough to finish the benchmarks that prove they're as fast as they are.

Complete garbage (-1)

Anonymous Coward | about 9 months ago | (#45474579)

By definition, computers are scalable. Need more performance? Add more processing units/memory.

So what is this moronic dribbler dribbling about? Moore's Law allows for the COST of a given unit of computing to fall, taking into account improvements in the efficiency of mass production- so computing is boosted by more than just engineering improvements in performance per mm2 of 'chip'.

The truth is that too many pointless, and useless 'academics' need to work their narrow, and frequently irrelevant world view in order to continue receiving their pay-cheque. They therefore become sources of pure noise, and zero illumination.

With the growth of GPU processing, and ARM based SoC computers, the field of computing is taking one of its rare significant lurches forward. There is the famous saying that "those that can, do" and this applies just as much to the field of computing. Computer facilities with levels of processing unthinkable even a few years back are being built, many sadly for the use of the 'intelligence' community.

Powerful computing needs powerful infrastructure. The bad old days of near-useless 'super computers' that focused purely on the fastest single processing engine itself are long since over. Now you want loads of processing engines, connected to masses of RAM and HDD storage - you know, like the facilities Google builds. But there are idiots like William Gropp, who haven't updated their knowledge of computing in decades (and we've all met such idiots with high-ranking positions at universities). They will bang on at conferences and seminars like it's still 1975.

Computing is about many factors, of which absolute speed in a given CPU core is about the LEAST important. Performance per Watt and performance per Dollar is VASTLY more important. The general programmability of a given super computer system is also very important- powerful hardware is USELESS if using that hardware requires very difficult software.

Saying that existing chip engineering will eventually hit the physical limits of the current materials involved is self-evident, and completely pointless at this time. Suggesting that the existence of this limit means we must MAGIC up some future replacement is like saying we must magically create flying car technology to deal with the limitations of conventional car design. It is the kind of outburst only a truly third-rate, puffed-up academic could make.

IF we find a new way to make chips, we will use it. However, throwing temper tantrums will NOT improve the likelihood of this happening. There is every reason to believe that only incremental improvements to current chip engineering will be usefully possible, with similar technology using better materials for conductors, insulators, and semiconductors giving improvements at any given process size. At this time, the industry is TOO focused on the need to 'shrink'. FD-SOI technology proves there are ways to squeeze far more performance from conventional chip production without shrinking.

As for loony SF fantasies of 'quantum' and 'optical' computers- well these pie-in-the-sky concepts will keep third rate academics employed for centuries without ever returning anything useful to the real world.

Re:Complete garbage (4, Informative)

The Master Control P (655590) | about 9 months ago | (#45474985)

By definition, computers are scalable. Need more performance? Add more processing units/memory.

BZZZT, WRONG.

This is where you can stop reading, folks.

Re:Complete garbage (0)

geekoid (135745) | about 9 months ago | (#45475261)

So adding more processing units and memory doesn't mean more performance on a specialized machine designed to have those added?
Interesting... no wait, stupid, not interesting. My mistake.

subtlety (1)

green is the enemy (3021751) | about 9 months ago | (#45474753)

A bit of humor in one of the linked articles?

To eliminate the wire-like or metallic nanotubes, the Stanford team switched off all the good CNTs. Then they pumped the semiconductor circuit full of electricity. All of that electricity concentrated in the metallic nanotubes, which grew so hot that they burned up and literally vaporized into tiny puffs of carbon dioxide. This sophisticated technique was able to eliminate virtually all of the metallic CNTs in the circuit at once.

Bypassing the misaligned nanotubes required even greater subtlety.

......

Re:subtlety (1)

Meneth (872868) | about 9 months ago | (#45475015)

Yeah. "Sophisticated". :)

Power and legacy codes (2)

Orp (6583) | about 9 months ago | (#45474803)

... are the biggest problems from where I'm sitting here in the convention center in Denver.

In short, there will need to be a serious collaborative effort between vendors and the scientists (most of whom are not computer scientists) in taking advantage of new technologies. GPUs, Intel MIC, etc. are all great only if you can write code that can exploit these accelerators. When you consider that the vast majority of parallel science codes are MPI only, this is a real problem. It is very much a nontrivial (if even possible) problem to tweak these legacy codes effectively.
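For anyone who hasn't stared at one of these codes, the shape of the problem looks something like the halo exchange below (a minimal mpi4py sketch, not taken from any real application; the real codes are Fortran, but the pattern is the same). Every array is laid out around this kind of neighbor traffic, and an accelerator port has to rethink where all of that data lives.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns a 1-D slab of the domain plus one ghost cell on each side.
n_local = 1000
u = np.zeros(n_local + 2)
u[1:-1] = rank

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Classic MPI-only halo exchange: trade boundary cells with the neighbors.
comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)

# A simple stencil sweep over the interior; in a GPU port, this loop and the
# ghost-cell traffic above are exactly what has to move on and off the device.
u[1:-1] = 0.25 * u[:-2] + 0.5 * u[1:-1] + 0.25 * u[2:]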

Cray holds workshops where scientists can learn about these new topologies and some of the programming tricks to use them. But that is only a tiny step towards effectively utilizing them. I'm not picking on Cray; they're doing what they can do. But I would posit that the next supercomputer should be designed with input from the scientists who will be using it. There are a scarce few people with both the deep physics background and the computer science background to do the heavy lifting.

In my opinion we may need to start from the ground up with many codes. But it is a Herculean effort. Why would I want to discard my two million lines of MPI-only F95 code that only ten years ago was serial F77? The current code works "well enough" to get science done.

The power problem - that is outside of my domain. I wish the hardware manufacturers all the luck in the world. It is a very real problem. There will be a limit to the amount of power any future supercomputer is allowed to consume.

Finally, compilers will not save us. They can only do so much. They can't write better code or redesign it. Code translators hold promise, but those are very complex.

Re:Power and legacy codes (2)

fuzzyfuzzyfungus (1223518) | about 9 months ago | (#45475055)

"Why would I want to discard my two million lines of MPI-only F95 code that only ten years ago was serial F77? The current code works "well enough" to get science done."

Out of genuine curiosity (I'm not nearly familiar enough with either the economics or the cultural factors involved), would the hardware vendors, rather than the scientists (who are scientists, not computer scientists, and just want to get their jobs done rather than become programmers, so aren't strongly motivated to change), be in a position to attack the legacy code problem?

While a lot of academic code isn't FOSS in the techie sense (it may be in some way encumbered, it may just never have been formally released at all, it may be a total wreck, etc.), I'd assume that most of it isn't so secret that a deal couldn't be arranged to get specific people a look at it, even if under NDA.

If I were somebody like Intel or Nvidia, would it ever be worth my time to attempt to juice hardware sales (especially for someone like Nvidia, whose strongest product is rather unlike a standard-issue general-purpose CPU cluster) by selling the customer on a combined offer: "We'll sell you 10,000 Tesla boards, and provide software engineers to rebuild 'cranky-oldschool-geophysics-sim' as an equivalent, but CUDA-aware, application"?

Doable? Far too expensive? Faces serious pushback from the old-timer who knows how to make that ol' Fortran dance? Code considered too valuable to risk disclosure? Would people rather be locked into a ghastly mess that is at least old enough to be widely supported than into some new and possibly proprietary ghastly-mess-in-10-years?

Re:Power and legacy codes (1)

jabuzz (182671) | about 9 months ago | (#45477461)

Interesting thought. I guess the answer is that the small percentage of HPC users who write their own code need to keep updating it as time goes by, so they might not want to learn CUDA/OpenCL etc.

On the other hand, in my experience most HPC users are running a preexisting application for something like CFD or molecular dynamics. For these there are open source applications like OpenFOAM and NAMD that it would make sense for Nvidia to throw engineering effort at to improve their GPU acceleration.

The 1,000- and 3,500-core HPC systems I look after both have a GPU component. The GPUs are not widely used at this point in time, though the few users who do use them use them heavily.

Hmm, I think I will suggest this to Nvidia at MEW24 next week.

Re:Power and legacy codes (1)

jd (1658) | about 9 months ago | (#45478065)

A surprising amount is FOSS. I routinely get screamed at by irate scientists for listing their stuff on Freshm...freecode.

The Cray 4 (1)

stox (131684) | about 9 months ago | (#45474825)

was going to be gallium arsenide, but it never made it to market.

Re:The Cray 4 (0)

Anonymous Coward | about 9 months ago | (#45476003)

The Cray 3 actually was GaAs. Years later, my MacBook Air is faster than it was. Life moves on.

A lot of supercomputing motivated by bad science! (3, Interesting)

Theovon (109752) | about 9 months ago | (#45474905)

There are plenty of algorithms that benefit from supercomputers. But it turns out that a lot of the justification for funding super computer research has been based on bad math. Check out this paper:

http://www.cs.binghamton.edu/~pmadden/pubs/dispelling-ieeedt-2013.pdf

It turns out that a lot of money has been spent to fund supercomputing research, but the researchers receiving that money were demonstrating the need for this research based on the wrong algorithms. This paper points out several highly parallelizable O(n^2) algorithms that researchers have used. It seems that these people lack an understanding of basic computational complexity, because there are O(n log n) approaches to the same problems that can run much more quickly, using a lot less energy, on a single-processor desktop computer. But they’re not sexy because they’re not parallelizable.

Perhaps some honest mistakes have been made, but it trends towards dishonesty as long as these researchers continue to use provably wrong methods.
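
The cited paper is about specific physical-design algorithms; as a generic stand-in for its point, the sketch below solves the same toy problem two ways: finding the closest pair of values in an array with the embarrassingly parallel O(n^2) method and with the O(n log n) sort-then-scan method. The quadratic version is the one that "needs" a big parallel machine; the sort-based one simply finishes sooner on a single core.

    /* Illustrative stand-in, not an algorithm from the cited paper: closest
     * pair of values in an array, the O(n^2) way and the O(n log n) way. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    static int cmp_double(const void *a, const void *b)
    {
        double x = *(const double *)a, y = *(const double *)b;
        return (x > y) - (x < y);
    }

    /* O(n^2): compare every pair; trivially parallelizable, but wasteful. */
    static double min_gap_quadratic(const double *v, size_t n)
    {
        double best = INFINITY;
        for (size_t i = 0; i < n; i++)
            for (size_t j = i + 1; j < n; j++)
                if (fabs(v[i] - v[j]) < best)
                    best = fabs(v[i] - v[j]);
        return best;
    }

    /* O(n log n): after sorting, the closest pair must be adjacent. */
    static double min_gap_sorted(double *v, size_t n)
    {
        qsort(v, n, sizeof v[0], cmp_double);
        double best = INFINITY;
        for (size_t i = 1; i < n; i++)
            if (v[i] - v[i - 1] < best)
                best = v[i] - v[i - 1];
        return best;
    }

    int main(void)
    {
        enum { N = 2000 };
        static double v[N];
        for (size_t i = 0; i < N; i++)
            v[i] = (double)rand() / RAND_MAX;

        printf("quadratic: %g\n", min_gap_quadratic(v, N));
        printf("sorted:    %g\n", min_gap_sorted(v, N));   /* same answer, far less work */
        return 0;
    }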

Micronize (2)

Tim12s (209786) | about 9 months ago | (#45475271)

The next step has already started.

Micronizing truly massive supercomputers is the next step for "applied sciences". We've gotten used to measuring data centres in power; I reckon it will become computing power per cubic foot or something like that. It'll start with drones, then it will move to shipping and long-haul robotics. After that it'll move to mining applications. I'm not talking about automation but rather truly autonomous applications that require massive computation for collision avoidance and programmed execution.

At this point it'll be a race to redo the industrial age, albeit with micronized robotics. Again, that has already started, with 3D printing.

Hopefully by then someone figures out how to get off this rock.

The plateau of supercomputing is a good thing. (1)

Anonymous Coward | about 9 months ago | (#45475281)

That means there are hard limits to the technology the NSA is using against us.

Re:The plateau of supercomputing is a good thing. (1)

rubycodez (864176) | about 9 months ago | (#45478117)

They mostly just need storage, for later when they need dirt on any particular person.

Tianhe-2? (3, Informative)

therealobsideus (1610557) | about 9 months ago | (#45475863)

Totally off topic, but I ended up getting drunk last night with a bunch of people who are in town for SC13. Those boys can drink. But I'm surprised that there wasn't more talk about Tianhe-2 there, and how China is going to kick the US off the top 25 in international supercomputing.

Re:Tianhe-2? (0)

Anonymous Coward | about 9 months ago | (#45477341)

The Tianhe-2 is a Linpack monster. I would be very surprised if it is as capable at running applications as several other supercomputers on the list. The Xeon Phi accelerator processors aren't easy to program and likely get very little actual use. Same goes for many systems beefed up with GPUs. Lots of macho-flops with little additional real work.

Argh Moore's Law is Over! (1)

Anonymous Coward | about 9 months ago | (#45477297)

Yes, Moore's Law is just about over. Fortunately, all signs point towards graphene transistors actually being workable within a decade. We can give graphene a bandgap, we can produce ever larger crystals of pure graphene, and we can isolate it from the environment to avoid contamination. We can already, in labs, do everything needed to make graphene transistors. Combining everything effectively and commercially may take a while, but it'll happen, and by 2023 you'll be running your Google Glass v10's CPU at several hundred gigahertz with optical interconnects (which can already transition to graphene), with little heat and tons of battery life.

Re:Argh Moore's Law is Over! (1)

ebno-10db (1459097) | about 9 months ago | (#45478189)

Meanwhile in the software world we'll still be arguing Java vs. C++.

Re:Argh Moore's Law is Over! (1)

therealobsideus (1610557) | about 9 months ago | (#45479123)

BASIC is where it's at, you insensitive clod!


Doesn't seem hard (1)

jd (1658) | about 9 months ago | (#45478043)

Intel's latest creations are basically x86-themed Transputers, which everyone (other than Intel) has long known was inevitable. The only possible rival was processor-in-memory, but the research there has been dead for too long.

Interconnects are the challenge, but InfiniBand is fast, and the only reason Lightfleet never got their system working is that they hired managers. I could match InfiniBand speeds inside of a month, using a wireless optical interconnect, with a budget similar to the one they started with.

Hard drives - you're doing it wrong. The drive should be battery-backed and have significant amounts of RAM on the controller. By significant, I mean enough to ensure that typical users will not be capable of distinguishing the hard drive from a RAM disk AND to ensure physical writes are delayed long enough that you maximize the number of writes that can be done as a sustained burst. (This reduces drive head movement, simultaneously making writes faster and drives longer-lasting.) Since this requires smart drives, you might as well have an OS on there. No sense wasting your main CPUs on filesystem work that can sensibly be offloaded.
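
A toy sketch of the write-coalescing idea above, in C; this is a hypothetical illustration of the concept, not real drive firmware or any vendor's API. Writes land in (notionally battery-backed) controller RAM, repeat writes to the same block are merged, and the backlog is flushed to the platters as one sorted burst.

    /* Hypothetical write-back cache: coalesce writes in controller RAM and
     * flush them as a single sorted burst to minimize head movement. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define BLOCK_SIZE   512
    #define CACHE_BLOCKS 1024            /* pretend battery-backed RAM */

    struct pending {
        long          lba;               /* logical block address */
        unsigned char data[BLOCK_SIZE];
    };

    static struct pending cache[CACHE_BLOCKS];
    static size_t n_pending;

    static int cmp_lba(const void *a, const void *b)
    {
        long x = ((const struct pending *)a)->lba;
        long y = ((const struct pending *)b)->lba;
        return (x > y) - (x < y);
    }

    /* Flush everything in LBA order: one sequential burst, minimal seeking. */
    static void flush_cache(void)
    {
        qsort(cache, n_pending, sizeof cache[0], cmp_lba);
        for (size_t i = 0; i < n_pending; i++)
            printf("platter write: LBA %ld\n", cache[i].lba);  /* stand-in for real I/O */
        n_pending = 0;
    }

    /* A write hits RAM immediately; rewriting the same block costs nothing extra. */
    static void cached_write(long lba, const unsigned char *data)
    {
        for (size_t i = 0; i < n_pending; i++)
            if (cache[i].lba == lba) {               /* coalesce repeat writes */
                memcpy(cache[i].data, data, BLOCK_SIZE);
                return;
            }
        if (n_pending == CACHE_BLOCKS)
            flush_cache();                           /* RAM full: burst it out */
        cache[n_pending].lba = lba;
        memcpy(cache[n_pending].data, data, BLOCK_SIZE);
        n_pending++;
    }

    int main(void)
    {
        unsigned char block[BLOCK_SIZE] = { 0 };
        cached_write(900, block);
        cached_write(17, block);
        cached_write(900, block);   /* coalesced: still only two pending writes */
        flush_cache();              /* emits LBA 17, then LBA 900 */
        return 0;
    }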

For chips, for chrissakes get with the picture! Wafer-scale integration is the only way to go! There was also recently talk of some new gallium compound-on-silicon approach, with the gallium compound providing the transistors, silicon the connections. Interesting, but the US has outsourced almost all chip manufacture. If they want the new materials, they'll need to train a new generation of engineers and build new plants. If they're going to do that anyway, go wafer-scale. The initial costs will be high when using the new materials, so you might as well go for broke and make the new stuff just too damn good not to get.

What's left? Ah yes, software. Easy. Pass a law stating that in order to get government funding, schools should teach Occam-pi and not Java. Big government? Too bad. Code quality out there is shite. The only way to fix that is to break bad code. Well, that or put the worst 10% of students likely to turn professional against the wall. However, that would produce a backlash. You could put bags of skittles in their pockets, I suppose.

There. That's the supercomputer industry fixed for another couple of decades. Do I get paid for this?

SC Centers have gotten cheap! (1)

johnwerneken (74428) | about 9 months ago | (#45479097)

50 years ago, state of the art was a billion dollars, equivalent to about $25 billion USD today. At roughly $100 million now, that is 0.004, or 1/250, of what they formerly cost. WTG, techies!
