
IBM Discovery May Lead To Exascale Supercomputers

samzenpus posted more than 2 years ago | from the greased-lightning dept.


alphadogg writes "IBM researchers have made a breakthrough in using pulses of light to accelerate data transfer between chips, something they say could boost the performance of supercomputers by more than a thousand times. The new technology, called CMOS Integrated Silicon Nanophotonics, integrates electrical and optical modules on a single piece of silicon, allowing electrical signals created at the transistor level to be converted into pulses of light that allow chips to communicate at faster speeds, said Will Green, silicon photonics research scientist at IBM. The technology could lead to massive advances in the power of supercomputers, according to IBM."


GPU = supercomputer? (1)

Twinbee (767046) | more than 2 years ago | (#34410578)

I hope they're also talking about standard GPU supercomputers (which, as we know, are pretty cheap at less than £100 for the low end!)

Re:GPU = supercomputer? (1)

camperslo (704715) | more than 2 years ago | (#34410850)

It seems the definition of a supercomputer keeps changing

http://www.youtube.com/watch?v=gzxz3k2zQJI [youtube.com]

Re:GPU = supercomputer? (1)

RulerOf (975607) | more than 2 years ago | (#34411228)

Power Mac G4 as a weapon? It must have had on-chip 128 bit encryption. [wikipedia.org]

Re:GPU = supercomputer? (3, Funny)

Locke2005 (849178) | more than 2 years ago | (#34411392)

Of course it's a weapon! Have you ever been hit over the head with a Power Mac G4?

Re:GPU = supercomputer? (1)

camperslo (704715) | more than 2 years ago | (#34412010)

Power Mac G4 as a weapon? It must have had on-chip 128 bit encryption.

It wasn't on-chip encryption; it was reaching a gigaflop that was the threshold for export restrictions. Of course, what the industry considered supercomputers had already progressed far beyond that level.

http://findarticles.com/p/articles/mi_qn4182/is_19990907/ai_n10131702/ [findarticles.com]

But how does it all compare to a cloud/botnet of smartphones?

More and more computing power everywhere, but the Earth still has plenty of problems left to solve.

Re:GPU = supercomputer? (1)

thethibs (882667) | more than 3 years ago | (#34412574)

Sometime around 1975 I decided I wasn't going to play with low-powered computers anymore. I went looking for a job on a Cray or one of the big Control Data supercomputers.

I never got the job, but I did get the compute power; it's in my pocket.

Re:GPU = supercomputer? (1)

hitmark (640295) | more than 3 years ago | (#34412738)

As long as it crunches massive numbers quickly, who cares how it's defined?

Re:GPU = supercomputer? (5, Informative)

JanneM (7445) | more than 2 years ago | (#34410960)

GPUs are indeed an inexpensive way to boost speed in some cases. But they have been rather oversold; while some specific types of problems benefit a lot from them, many problems do not. If you need to frequently share data with other computing nodes (neural network simulations come to mind), then the communications latency between card and main node eats up much of the speed increase. And as much of the software you run on this kind of system is customized or one-off stuff, the added development time in using GPUs is a real factor in determining the relative value. If you gain two weeks of simulation time but spend an extra month on the programming, you're losing time, not gaining it.

Think about it this way: GPUs are really the same thing as the specialized vector processors long used in supercomputing. And they have fallen in and out of favour over the years depending on the kind of problem you're trying to solve, the relative speed boost, the cost, and the difficulty of using them. The GPU resource at the computing center is used much less than the general clusters themselves, indicating most users do not find it worth the extra time and trouble.

It is a good idea, but it's not some magic bullet.
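To put that tradeoff in numbers, here is a minimal back-of-envelope sketch (Python, with purely illustrative figures for development and runtime, not anything from the post): a GPU port only pays off if the simulation time it saves over the project's lifetime exceeds the extra development time.

# Back-of-envelope for the GPU-port tradeoff described above.
# All numbers are illustrative assumptions, not measurements.

def net_days_saved(extra_dev_days, days_saved_per_run, runs):
    """Positive means the GPU port was worth it for this project."""
    return days_saved_per_run * runs - extra_dev_days

# "Gain two weeks of simulation time but spend an extra month programming":
print(net_days_saved(extra_dev_days=30, days_saved_per_run=14, runs=1))  # -16: a net loss
# The port only starts paying off if the same code is rerun enough times:
print(net_days_saved(extra_dev_days=30, days_saved_per_run=14, runs=3))  # 12: a net win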

Re:GPU = supercomputer? (1)

wealthychef (584778) | more than 2 years ago | (#34411172)

The big problem is, once you solve an exascale problem, how do you write the answer to disk fast enough? Petabytes of data are hard to handle/visualize/analyze/copy around.

Re:GPU = supercomputer? (1)

TooMuchToDo (882796) | more than 2 years ago | (#34411408)

You use RAM for disk.

Re:GPU = supercomputer? (1)

wealthychef (584778) | more than 3 years ago | (#34412296)

No, that won't work. I'm talking about persistent storage. What do you do when you want to view the results of timestep 45 in a 1000 timestep simulation, each timestep of which generates a few petabytes of data?
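For a sense of scale, a rough I/O estimate (the snapshot size and bandwidth below are assumed round numbers, not figures from the thread):

# Rough time to write one simulation snapshot to persistent storage.
# Both numbers are assumptions for illustration only.

timestep_bytes = 2e15      # ~2 PB written per timestep (assumption)
fs_bandwidth = 1e12        # 1 TB/s aggregate parallel-filesystem bandwidth (assumption)

seconds_per_dump = timestep_bytes / fs_bandwidth
print(f"{seconds_per_dump:.0f} s, about {seconds_per_dump / 60:.0f} minutes per snapshot")
# -> 2000 s (~33 min) of pure I/O per timestep you keep, which is why
#    in-situ analysis and heavy subsampling get so much attention at this scale.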

Re:GPU = supercomputer? (1)

dan_linder (84060) | more than 2 years ago | (#34411808)

Since we're talking about discoveries that may lead to faster computers, these are the solutions it may use:
  * Texas A&M Research Brings Racetrack Memory a Bit Closer -> http://hardware.slashdot.org/story/10/12/01/0552254/Texas-AampM-Research-Brings-Racetrack-Memory-a-Bit-Closer [slashdot.org]
  * SanDisk, Nikon and Sony Develop 500MB/sec 2TB Flash Card -> http://hardware.slashdot.org/story/10/12/01/1322255/SanDisk-Nikon-and-Sony-Develop-500MBsec-2TB-Flash-Card [slashdot.org]

My great-grandchildren will have screaming fast cell phones!

Dan

Re:GPU = supercomputer? (1)

sexconker (1179573) | more than 2 years ago | (#34411912)

"Texas A&M Research Brings Racetrack Memory a Bit Closer"

I groaned audibly at the terrible pun.

Re:GPU = supercomputer? (2)

afidel (530433) | more than 2 years ago | (#34411208)

The major problem with adoption is probably that most of the people running jobs on SCs are scientists, not computer scientists. They use large piles of ancient, well-tested libraries and only tweak the small parts of the code that are specific to their problem. This means that most of those libraries will need to be ported to OpenCL and CUDA before adoption really picks up.

Re:GPU = supercomputer? (0)

Anonymous Coward | more than 2 years ago | (#34411270)

GPU = Graphics Processing Unit. I find that this is the second most-upgraded piece of hardware next to the RAM; I am sure others would agree. In my opinion this is where the advancements need to occur. You stated that "the added development time in using GPUs is a real factor in determining the relative value. If you gain two weeks of simulation time but spend an extra month on the programming, you're losing time, not gaining it." Well, yes, but even if you spent two months programming it to gain two weeks of simulation time, those two weeks gained are going to be used over and over in the future by countless users for years to come.

Re:GPU = supercomputer? (1)

JanneM (7445) | more than 3 years ago | (#34413242)

...even if you spent two months programming it, to gain two weeks of simulation time, those two weeks gained, are going to be used over and over and over in the future by countless users for years to come.

We're talking supercomputers, not PCs. A lot of the software used there really is written for one single task or one single project. Once the project is over and the original user is done with it, it's never used again. While you may want to do something similar at a later date, your new task is different enough - and the new machine you use is different enough - that you'll need to redesign the application. And if you were not part of the previous team, understanding and adapting their code (never mind finding the code in the first place) to your machine may well be more work than simply developing your own software from the start.

So the relevant measure is the total time spent during the lifetime of that project, and that means development time really does become a major factor to take into consideration.

relentless progress oversold (1)

epine (68316) | more than 3 years ago | (#34412964)

GPUs are indeed an inexpensive way to boost speed in some cases. But they have been rather oversold; while some specific types of problems benefit a lot from them, many problems do not.

Where do you get the idea that GPUs have been oversold? Is the loudest mouth breather in the room representative of the general consensus? One vain, overreaching guy from 1960 who had spent too many hours hunched over a keyboard predicts human level AI within the decade, and the entire endeavour is tainted forever? All to alleviate one slow news day?

2000 BC called, and wants their sampling procedure back.

Sixteen lanes of PCIe 3.0 have an architectural bandwidth of 16 GB/s, and we're looking at about 4 GB of local memory with very high bandwidth. The set of computational problems you can parcel up within these constraints is not small. And this is before integrated GPUs become commonplace, fabricated on the same die, with or without the IBM pixie dust.
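As a quick sanity check on that 16 GB/s figure (derived from the PCIe 3.0 per-lane rate, not from the comment itself):

# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding.
per_lane_bytes_per_s = 8e9 * (128 / 130) / 8   # ~0.985 GB/s usable per lane
x16_bytes_per_s = 16 * per_lane_bytes_per_s
print(f"{x16_bytes_per_s / 1e9:.1f} GB/s per direction")   # ~15.8 GB/s, i.e. the ~16 GB/s quoted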

For thirty years we enjoyed the regime that a rising tide lifts all boats. Everyone worked within a single dominant programming paradigm and (nearly) every program benefited from clock speed. Memory latency was drifting to the event horizon, but we mostly dealt with that. Until we hit the thermal bend.

If that is the standard of reference, *everything* from the last ten years was oversold.

There's a similar problem measuring the inflation rate. If you keep a fixed basket of goods, you get a consistent measure of inflation that becomes increasingly irrelevant. Back in 1970, they gave you an organic apple for the price of a regular apple. But aside from that, who wants the 1970 basket of goods, even at half its original price? In that basket, an iPhone would cost you a trillion dollars, and take thirty years to deliver.

If you update the basket to account for the change in the kinds of goods available, you end up with a lower measure of inflation. When a Cray is small enough to fall into the toilet, and you don't even hear the splash, that's serious deflation. And what value do you assign to gene sequencing bird flu over the weekend? In your standard basket?

Exploiting the GPU is not a quick fix. There are immense transition costs, and it only applies to suitable computations. Tragic.

But think about the evolving basket of goods. Here's a good way to do it. Take every package in the R programming language and ask what level of acceleration is available from GPU computing, once the transition costs are paid in full, and then decide if the GPU was oversold or not. R covers econometrics, data mining, and computational biology to name just a few.

That basket works for me. Or do you think that statistical inference over massive data sets has no relevance to the next twenty years? Are you saying that massive stockpiles of data are oversold? Really? That's a brave position.

I recall serious observations circa 1990 from prominent economists that the PC showed up everywhere in the economic data except for productivity. Relative to the 1980 basket of productivity, there was merit in the argument. In 1980, you changed your font once every two years. To successfully fritter, you had to play Pac-Man on CGA.

I get miffed when the standard of "oversold" is denominated in News at Eleven squeaky-wheel gratification reflex, while reeducating the entire white collar work force to a whole new way of doing things is taken for granted.

One man's oversold is another man's relentless progress.

Obligatory Stargate reference (0)

Anonymous Coward | more than 2 years ago | (#34410596)

Ah, just like the Ancients in Stargate. Now we only need to figure out the Zero-Point-Module.

Re:Obligatory Stargate reference (0)

Anonymous Coward | more than 2 years ago | (#34410716)

For all the good it did them.

Could you (-1)

Anonymous Coward | more than 2 years ago | (#34410604)

Imagine a Beowulf Cluster of these?

First! (0, Insightful)

Anonymous Coward | more than 2 years ago | (#34410606)

Interesting, perhaps in our lifetime we'll see it make its way to the desktop... It'll be sorely needed by then to run Windows 2030 comfortably, until of course, the malware takes hold.

Huh (1)

martas (1439879) | more than 2 years ago | (#34410644)

Just yesterday I was talking with someone about how IBM have devolved into patent trolls that do no worthwhile research. Have I been proven wrong, or is this just vaporware? Anyone with the knowledge to do so, please enlighten me!

Re:Huh (1)

Anonymous Coward | more than 2 years ago | (#34411148)

From the article

"In an exascale system, interconnects have to be able to push exabytes per second across the network,"

"Newer supercomputers already use optical technology for chips to communicate, but mostly at the rack level and mostly over a single wavelength. IBM's breakthrough will enable optical communication simultaneously at multiple wavelengths, he said."

The sad part:

"IBM hopes to eventually use optics for on-chip communication between transistors as well. "There is a vision for the chip level, but that is not what we are claiming today," Green said."

Looks like we won't see it on GPUs/CPUs in our desktops for some time. I don't think desktops have hit that bottleneck yet anyway (maybe GPUs?), at least not in the same way supercomputers have.

As for your remarks about IBM.

I don't think IBM is much of a patent troll, or at least it hasn't developed into one. IBM is a very old corporate business that has business in mind and has been picking up patents on all sorts of things for decades now. I also have never seen any "trolling", only provoked countersuits and lawsuits aimed at protecting technologies they actually sell on to clients. A troll simply sues with no intention of ever using the technology and is not actively producing a product.

Now I'm sure they have many patents they don't need or don't intend to use but this is not any different to MS, Google, Oracle (or Sun), Apple or any of the other biggies. Future planning is something you need to do as a business.

Re:Huh (1)

Nerdfest (867930) | more than 2 years ago | (#34411220)

They do file more patents than anyone else, and have done a couple of nasty things against open source software, but they do do a lot of research, especially it seems at the low level on hardware, and they do a lot of good in the open source world (Eclipse, etc.). I'd be OK with them as long as I didn't work at places that are stuck using their software. I really wish people would stop buying it; it just encourages them.

Re:Huh (5, Insightful)

Amorymeltzer (1213818) | more than 2 years ago | (#34411248)

IBM may be patent-happy, but it's only reasonable to protect their "inventions". There's a huge difference between a patent troll who buys patents solely for litigation purposes, and IBM, who has been among the leading tech innovators for decades, defending their investments using the legal system. We may not love the current state of affairs for patents, but it's important to distinguish between bottom feeders out for a dirty buck and successful entities making use of their R&D department.

Re:Huh (0)

Anonymous Coward | more than 2 years ago | (#34411632)

The problem is that you don't know what you're talking about. Pretty simple really, and it explains about 95% of the opinions on patents around here.

Here's a clue: really, really wanting something but not wanting to pay for it is a pipe dream. This is the real world. Things don't come to you for free because you strongly desire them. Sorry.

Re:Huh (1)

geekoid (135745) | more than 2 years ago | (#34411698)

While not like it was, IBM does do a lot of R&D.

Re:Huh (1)

Blakey Rat (99501) | more than 2 years ago | (#34411848)

Any good they do in research is negated by the sheer amount of frustration, inefficiency, and anger they produce by inflicting Lotus Notes on millions of unfortunate customers.

Karma Whoring AC (1)

Anonymous Coward | more than 2 years ago | (#34410650)

IBM's press release is http://www-03.ibm.com/press/us/en/pressrelease/33115.wss [ibm.com]

One interesting bit is that the new IBM technology can be produced on the front-end of a standard CMOS manufacturing line and requires no new or special tooling. With this approach, silicon transistors can share the same silicon layer with silicon nanophotonics devices. To make this approach possible, IBM researchers have developed a suite of integrated ultra-compact active and passive silicon nanophotonics devices that are all scaled down to the diffraction limit - the smallest size that dielectric optics can afford.

Exascale is not a word. (1, Funny)

clone52431 (1805862) | more than 2 years ago | (#34410678)

A whole dictionary full of perfectly good words and they have to make one up to mean “very large”...

Re:Exascale is not a word. (0)

Anonymous Coward | more than 2 years ago | (#34410788)

I didn't read the article, so I assumed they meant computers with performance measured in exaflops.

Re:Exascale is not a word. (1)

clone52431 (1805862) | more than 2 years ago | (#34410892)

Great, exactly why making up words is dumb. Now I’m not even sure whose interpretation of it was correct, mine or yours.

Re:Exascale is not a word. (1)

Surt (22457) | more than 2 years ago | (#34411288)

His, I'm pretty sure. Exa flop scale computers (short: exascale) are a big area of research right now.

Re:Exascale is not a word. (1)

clone52431 (1805862) | more than 2 years ago | (#34411692)

Exa flop scale computers (short: exascale)

Great, take the one thing out that actually tells people what you’re talking about (flop, floating-point operations per second).

Why not just say “exa-flop scale”? It’s an additional, what, 5 characters? Well, depending on whether or not you hyphenate “exa-scale”, which you probably should.

Re:Exascale is not a word. (1)

Jah-Wren Ryel (80510) | more than 2 years ago | (#34411300)

Great, exactly why making up words is dumb. Now I'm not even sure whose interpretation of it was correct, mine or yours.

Yours is wrong. It's an industry-specific term and there is precedent going back a couple of decades. We are currently at petascale levels (i.e. computers able to hit over 1 petaflop on the Linpack benchmark); before that, terascale. I don't think gigascale was ever a common term, even before the high end was able to do a gigaflop.
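For anyone keeping score on the prefixes, a quick sketch of the FLOPS ladder behind these terms (standard SI prefixes; the milestone notes are general background, not from the thread):

# The FLOPS ladder behind "terascale", "petascale", "exascale".
scales = {
    "gigascale": 1e9,   # 10**9  FLOPS
    "terascale": 1e12,  # 10**12 FLOPS (first crossed on Linpack in the late 1990s)
    "petascale": 1e15,  # 10**15 FLOPS (first crossed on Linpack in 2008)
    "exascale":  1e18,  # 10**18 FLOPS -- the target in TFA
}
print(scales["exascale"] / scales["petascale"])   # 1000.0: exascale = 1000x petascale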

Fuck I feel old... (1)

crovira (10242) | more than 3 years ago | (#34413082)

I remember reading an article in IEEE Spectrum called "Reaching for the megaflop" in the nineteen-seventies.

I was working for CDC building power supplies (at their facility in Dorval, PQ, Canada) and keeping up with technology.

Exaflop computing is just blowing me away...

Re:Exascale is not a word. (0)

Anonymous Coward | more than 2 years ago | (#34410898)

exaflop - it's going to be a really big flop when it hits the market.

Re:Exascale is not a word. (1)

Anonymous Coward | more than 2 years ago | (#34410852)

Exascale is not a word. ... A whole dictionary full of perfectly good words and they have to make one up to mean "very large"...

Finally someone who might agree with my proposal to replace the overcomplicated SI system with a much simpler 'big'/'small' size classification system! I'm still not sold on the need for adjectives, though I'm open to debate.

Re:Exascale is not a word. (1)

Artifakt (700173) | more than 2 years ago | (#34411702)

It's not worth debating. As long as the most common adjectives used are also sexual terms,* the public WILL have its adjectives.

*and they are, as in "really f**king big!", "Muthaf**kin gynormous", and "awesomer than F**kzilla stompin the s**t outa Tokif**kio"

Re:Exascale is not a word. (2)

Monkeedude1212 (1560403) | more than 2 years ago | (#34410858)

It's a portmanteau!

Exa (obviously being a step above Peta which is above Tera which is above Giga and so on and so forth)

and scale. Which is self-explanatory.

The best part about English is silly quirks like portmanteaus. Don't try to be pedantic.

Re:Exascale is not a word. (1)

Hatta (162192) | more than 2 years ago | (#34411020)

Obviously what they meant was "Exo-scale" computing. Which is, of course, computing technology received from extraterrestrial reptilians.

Re:Exascale is not a word. (1)

wealthychef (584778) | more than 2 years ago | (#34411150)

Yes, exascale is a word. No, exascale does not mean "very large." It means "1000 times as big as petascale." You might want to check Google before you post. I know, this is /.

Re:Exascale is not a word. (1)

clone52431 (1805862) | more than 2 years ago | (#34411708)

You might want to check Google before you post.

I checked Webster’s. And if it means “exa-flop scale”, people should just say exa-flop scale.

Re:Exascale is not a word. (4, Funny)

WrongSizeGlass (838941) | more than 2 years ago | (#34411246)

Exascale is not a word

A whole dictionary full of perfectly good words and they have to make one up to mean “very large”...

Exascale is a perfectly cromulent word.

Re:Exascale is not a word. (1)

Artifakt (700173) | more than 2 years ago | (#34411722)

Personally, I'm trasmodic with eucompipulation about Exascale.

Re:Exascale is not a word. (0)

Anonymous Coward | more than 2 years ago | (#34411870)

Using words like cromulent [urbandictionary.com] embiggens [urbandictionary.com] a vocabulary.

Re:Exascale is not a word. (1)

InlawBiker (1124825) | more than 2 years ago | (#34411412)

Whatever, so long as it brings back the TURBO button I'm buying one!

Re:Exascale is not a word. (1)

geekoid (135745) | more than 2 years ago | (#34411714)

haha, the turbo button. Man, that was awesome. As if anyone would not run in turbo.

Re:Exascale is not a word. (2)

clone52431 (1805862) | more than 2 years ago | (#34411804)

As if anyone would not run in turbo.

Um, that was actually exactly what anyone would need to do, sometimes. The turbo button existed to slow the computer down. Necessary for running some really-old games that implemented hardware-sensitive timers and ran much too fast on “fast” computers (such as the 16 MHz box I cut my teeth on).

Is there a -1 for computing history fail?

Re:Exascale is not a word. (1)

timeOday (582209) | more than 3 years ago | (#34413506)

As if anyone would not run in turbo.

Not a tetris fan, I see?

Re:Exascale is not a word. (1)

oldhack (1037484) | more than 2 years ago | (#34411428)

And still nowhere near hellascale. Lame.

Re:Exascale is not a word. (2)

Zero__Kelvin (151819) | more than 2 years ago | (#34411598)

For some reason, when people invent new and innovative technologies that have never existed before, they feel this inexplicable need to come up with a new name to describe it. I for one don't see why they couldn't just call it a really-really-fast-scale computer, but alas the English language evolves along with the new developments, and we wind up with exascale [wikipedia.org] , which - though it is not a word - has it's own Wikipedia page for some unknown reason.

Re:Exascale is not a word. (1)

clone52431 (1805862) | more than 2 years ago | (#34411764)

which - though it is not a word - has it's own Wikipedia page for some unknown reason

Funny, there’s a Wiki project page for that [wikipedia.org] .

Re:Exascale is not a word. (1)

Zero__Kelvin (151819) | more than 3 years ago | (#34412750)

"which - though it is not a word - has it's own Wikipedia page for some unknown reason

Funny, there’s a Wiki project page for that [wikipedia.org] ."

No, there isn't. You seem to think that the only place you will find real words is in a dictionary. It turns out there are also real words in encyclopedias.

Re:Exascale is not a word. (1)

clone52431 (1805862) | more than 3 years ago | (#34412872)

It turns out there are also real words in encyclopedias.

And there are also things that aren’t real words.

Re:Exascale is not a word. (1)

Zero__Kelvin (151819) | more than 3 years ago | (#34413072)

You should consult a dictionary and look up the word encyclopedia ;-)

Finally! (0)

assemblerex (1275164) | more than 2 years ago | (#34410830)

I can play Crysis.

Re:Finally! (0)

blai (1380673) | more than 2 years ago | (#34410932)

Sadly, no.

Re:Finally! (1)

Monkeedude1212 (1560403) | more than 2 years ago | (#34411596)

But still not on its maximum settings.

Who would've thought... (3, Insightful)

chemicaldave (1776600) | more than 2 years ago | (#34410884)

...that the metal connections between individual components would not be fast enough.

I only wonder how long before this sort of technology makes its way to the consumer market, if only for show. Of course I can't see a use for an exascale databus on the mobo anytime soon.

Re:Who would've thought... (3, Informative)

olsmeister (1488789) | more than 2 years ago | (#34410972)

It's obviously not the same, but in some ways it sounds similar to Intel's Lightpeak. [wikipedia.org] I guess it is the next logical step once you get to that point.

Re:Who would've thought... (1)

powerlord (28156) | more than 3 years ago | (#34413360)

It does. It also sounds similar to the idea behind the Quantum Connected CPUs in Travis S. Taylor's book "The Quantum Connection" (from Baen; the first few chapters are available free online here: http://www.webscription.net/chapters/0743498968/0743498968.htm?blurb [webscription.net] ). The idea of the Quantum Connected CPUs is built up in chapters 4, 5 and 6, which are included in the free sample.

( now all we need is the AI, and a few other things, and the Galaxy is our oyster :D )

Re:Who would've thought... (2)

TeknoHog (164938) | more than 2 years ago | (#34410988)

Of course I can't see a use for an exascale databus on the mobo anytime soon.

An exascale databus ought to be enough for everyone, at least for the five who comprise the world market for computers.

Re:Who would've thought... (0)

Anonymous Coward | more than 2 years ago | (#34411128)

You may not need all that speed, but the reduction in power and heat generation would be enough of a reason for me to include this technology in consumer PCs.

Re:Who would've thought... (1)

cez (539085) | more than 3 years ago | (#34412658)

You bring up an excellent point! I wonder when we'll reach a break even point for how much energy and environmental cost it takes to produce these per year as compared to how much energy they can save per year... probably closer if they use cinnamon [slashdot.org] . Granted, the per year comparison is arbitrary.

Re:Who would've thought... (3, Interesting)

John Whitley (6067) | more than 2 years ago | (#34411680)

...that the metal connections between individual components would not be fast enough.

If you bothered to RTFA (emphasis mine):

Multiple photonics modules could be integrated onto a single substrate or on a motherboard, Green said.

I.e. they're not talking about hooking up individual gates or even basic logic units with optical communications. Anyone who's actually dealt with chip design in the past several decades realizes that off-chip communications is a sucky, slow, power-hungry, and die-space-hungry affair. Most of the die area and a huge amount (30%-50% or more) of power consumption of modern CPU's is gobbled up by the pad drivers -- i.e. off-chip communications. Even "long distance" on-chip communications runs into a lot of engineering challenges, which impacts larger die-area chips and multi-chip modules.

Re:Who would've thought... (1)

timeOday (582209) | more than 3 years ago | (#34413490)

I wonder if optical interconnects couldn't be a great boon to volumetric (3d) chip design. Unlike wires, lasers going different directions can pass right through each other (can't they?) Think about a bunch of people on different levels of a big atrium in a tall hotel, all signalling to each other with lasers. You make a CPU with a hole in the middle which is ringed with optical ports aimed up and down at different angles. Now stack them up (like a roll of Life Savers candy) alternating with ring-shaped heat sinks, and make a vacuum in the center chamber (to eliminate dust and refraction from heated air). With specialized cores following a standardized format, you could easily make a multiprocessor with the right number of cores and different mixes of cores (encryption, GPU, h264 codec, RAM, ...) without any changes to silicon or wiring.

Obligatory shot at M$... (0)

Anonymous Coward | more than 2 years ago | (#34411822)

...Of course I can't see a use for an exascale databus on the mobo anytime soon.

You'll need it to run Windows (insert release N+1 here).

Re:Who would've thought... (1)

MattskEE (925706) | more than 3 years ago | (#34412826)

The metal connections are certainly fast enough; after all, signals on the metal lines travel at roughly the speed of light divided by the refractive index of the surrounding dielectric medium, the same as in an optical waveguide.

But there are two important problems which this does not address: loss and crosstalk.

Because the conductor loss is very significant for metal interconnects, much more power is consumed in long interconnects. This power consumption only increases with transistor density as wires get narrower, and minimizing power dissipation is one of the major challenges in continuing to scale chips smaller and smaller. The resistance also introduces an RC charging time constant when driving the gate capacitance of the receiving transistor.

Each wire/transmission line on a chip also has an associated electric/magnetic field around it, and closely spaced lines suffer from crosstalk. Take for example the LGA1366 socket that the i7 uses: there are 1366 pins very tightly packed on the chip package, and even more tightly packed on the chip itself. Eventually we cannot pack them any tighter, because the crosstalk will kill the signal-to-noise ratio.

Optical systems are better because the loss is much smaller, so less power is required to transmit each bit a given distance; much more bandwidth is available per optical channel, and you can multiplex multiple colour channels; and crosstalk is less of a problem for optical waveguides (both because the fields are better confined and because you need fewer waveguides).

Most of this I got from a talk by John Bowers [ucsb.edu] who leads one of the many research groups working in this area.
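A toy illustration of the RC scaling mentioned above (the per-micrometre resistance and capacitance are assumed order-of-magnitude values, not numbers from the post): because both R and C grow with wire length, the RC delay of a metal interconnect grows roughly with the square of its length, which is one reason long electrical runs need power-hungry repeaters while optical waveguides do not.

# Toy RC-delay scaling for an on-chip metal interconnect.
# r and c are assumed order-of-magnitude values for a narrow wire.

r = 1.0        # ohms per micrometre (assumption)
c = 0.2e-15    # farads per micrometre (assumption)

for length_um in (100, 1_000, 10_000):          # 0.1 mm up to 1 cm
    tau = (r * length_um) * (c * length_um)     # RC grows with length squared
    print(f"{length_um:>6} um wire: RC ~ {tau * 1e12:.0f} ps")
# Delay grows ~100x for every 10x in length; an optical link's propagation
# delay grows only linearly with distance.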

I'm trying to figure out (1)

Aerorae (1941752) | more than 2 years ago | (#34410984)

the difference between this and Intel's technology, other than the obvious chip-to-chip vs machine-to-peripheral difference.
It's all variations on silicon (nano)photonics, right? The article says "Intel is also researching silicon nanophotonics at the silicon level, but has not yet demonstrated the integration of photonics with electronics"...but that makes me wonder what the big deal about Light Peak is, then... is the only difference the "nano"?

Re:I'm trying to figure out (0)

Anonymous Coward | more than 2 years ago | (#34411122)

It's not chip-to-chip, they're talking about between transistors on the same chip.

Re:I'm trying to figure out (1)

markov_chain (202465) | more than 2 years ago | (#34411484)

No they're not, this is between chips.

Re:I'm trying to figure out (1)

markov_chain (202465) | more than 2 years ago | (#34411586)

Forget Light Peak, Intel has already demonstrated an on-chip CMOS laser and used it for optical links: press release here [intel.com] . I really don't know what the IBM guy meant with his claim.

Re:I'm trying to figure out (2)

Bengie (1121981) | more than 2 years ago | (#34411984)

Light Peak is meant for distances up to ~100 m, scaling up to 100 Gbit/s in the future, and is intended to replace USB/SATA/HDMI/etc. I somehow doubt on-chip CMOS lasers are meant for anything beyond a metre, as they're intended for chipset-to-chipset links.

Finally, Optical Computers! (3, Funny)

nickersonm (1646933) | more than 2 years ago | (#34411000)

We have reached an informational threshold which can only be crossed by harnessing the speed of light directly. The quickest computations require the fastest possible particles moving along the shortest paths. Since the capability now exists to take our information directly from photons travelling molecular distances, the final act of the information revolution will soon be upon us.
-- Academician Prokhor Zakharov, "For I Have Tasted The Fruit"

Now I just need room temperature superconductors to build my gatling laser speeders.

Re:Finally, Optical Computers! (1)

Buelldozer (713671) | more than 3 years ago | (#34412636)

I love that game, still play it. Wish they'd make a sequel.

Great News For Helping To Analyze All Of The (-1)

Anonymous Coward | more than 2 years ago | (#34411060)

Wikileaks [wikileaks.org] data !!!

I'm waiting for the Hillary Clinton CABLE requesting a hold on Koreagate for her new hairdo !

( It's working !!!)

Yours In Novosibirsk,
K. Trout, C.I.O.

Wake me... (1)

MonsterTrimble (1205334) | more than 2 years ago | (#34411094)

When we get a Positronic Brain [wikipedia.org] from this.

Re:Wake me... (2)

Surt (22457) | more than 2 years ago | (#34411320)

Well, besides the fact that moving you there will be inconvenient for us, there won't be any such location, because positronic would be a step backward from photonic in terms of performance - assuming you're more interested in calculation power than explosive power.

Re:Wake me... (1)

geekoid (135745) | more than 2 years ago | (#34411726)

he's got 88 petabytes in his brain!

imagine... (0)

Anonymous Coward | more than 2 years ago | (#34411110)

... a Beowulf cluster of these!!!!

Just part of the problem (1)

PaladinAlpha (645879) | more than 2 years ago | (#34411218)

The interconnects are not the entire problem. Faster transmit helps, of course. But the information still has to come in from storage; it's still held in slow memory banks; it still has to propagate across the swarm. Software still has to be able to access that data in a way that makes sense and can scale to half a million nodes. Connectionless distributed computation is nontrivial, and while lower-latency intranode communication might get us the last 5%, it won't get us the first 95%.
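That 95%/5% framing is essentially Amdahl's law; a quick sketch (with illustrative fractions only) shows why a faster physical interconnect alone can't deliver a 1000x overall gain:

# Amdahl's-law sketch: overall speedup when part of the work doesn't scale.
# The fractions are illustrative, not measurements.

def amdahl_speedup(parallel_fraction, n_nodes):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_nodes)

# Even with half a million nodes, a 5% unscalable portion caps the speedup:
print(round(amdahl_speedup(0.95, 500_000)))    # ~20x, nowhere near 1000x
print(round(amdahl_speedup(0.999, 500_000)))   # ~998x -- the whole stack has to scale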

Re:Just part of the problem (1)

LostOne (51301) | more than 2 years ago | (#34411424)

There is a factor you may have neglected to consider: with a faster transmit, the distance between components can be longer with the same speed components. That is, the communication latency introduced by path length is lower.

Where is the Paper? (0)

Anonymous Coward | more than 2 years ago | (#34411294)

Did they publish a paper? The marketing talk is useless. I want to read what they really did. News websites should always link the Paper even if it's behind a paywall...

The holy grail (1)

Anonymous Coward | more than 2 years ago | (#34411330)

Optoelectronics really is the holy grail of computing. There are no crosstalk problems, no magnetic fields to worry about, and you can multiplex the hell out of a communication link. The current record [wikipedia.org] is 155 channels of 100 Gbit/s each. (!)
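Simple arithmetic on that record gives the aggregate throughput of a single fibre:

# Aggregate throughput of the WDM record cited above.
channels = 155
per_channel_bps = 100e9                 # 100 Gbit/s per wavelength

total_bps = channels * per_channel_bps
print(f"{total_bps / 1e12:.1f} Tbit/s, about {total_bps / 8 / 1e12:.2f} TB/s on one fibre")
# -> 15.5 Tbit/s, roughly 1.94 TB/s.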

Re:The holy grail (0)

Anonymous Coward | more than 2 years ago | (#34411506)

Don't worry, I suspect at a small enough scale quantum effects will somehow cause problems.

Cool. (0)

Anonymous Coward | more than 2 years ago | (#34411332)

Now we can keep track of the deficit in real time.

Not sure this guy understands the problem. (3, Informative)

blair1q (305137) | more than 2 years ago | (#34411348)

He's sped up links between chips from something like one-third c to c.

Architecturally that reduces inter-chip latency by 66%, which does indeed open up a new overall speed range for applications that are bandwidth-limited by interconnects. But in no sense does it imply a 1000-fold increase in overall performance. It's only a 3X improvement in bandwidth of the physical layer of the interconnect to which the speedup applies.

It may allow architectures that pack in more computing units, since light beams don't interfere physically or electrically the way wires do. And light can carry multiple channels in the same beam if multiple frequency or phase or polarization accesses can be added. Those will further improve bandwidth and possibly allow a further increase in the number of computing units, which could help get to the 1000X number.

BTW, didn't Intel have an announcement on optical interconnects just a while ago? Yes. [intel.com] They did [bit-tech.net] .
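To put the one-third-c-to-c point in numbers, taking the post's ~c/3 figure at face value (the 30 cm path length is an assumed board-level distance, purely for illustration):

# Propagation-delay comparison for the parent's point.
c = 3e8                        # speed of light in vacuum, m/s
path_m = 0.30                  # assumed 30 cm chip-to-chip route

t_electrical = path_m / (c / 3)    # signal at ~c/3, per the parent post
t_optical = path_m / c             # light, ignoring the waveguide's refractive index

print(f"electrical: {t_electrical * 1e9:.1f} ns, optical: {t_optical * 1e9:.1f} ns")
# -> 3.0 ns vs 1.0 ns: a ~3x physical-layer win, useful but nowhere near
#    a 1000x system-level speedup on its own.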

Re:Not sure this guy understands the problem. (1)

drerwk (695572) | more than 2 years ago | (#34411638)

Current switching speed is not limited by the signal propagation speed in metal (~1/3 c); it's more likely limited by the capacitance in the line.

Why is this faster? (1)

Locke2005 (849178) | more than 2 years ago | (#34411368)

Are they implying that signals travel faster through a fiber optic cable than a copper cable? Or just that there is less interference between the lines?

Re:Why is this faster? (3, Insightful)

wurp (51446) | more than 2 years ago | (#34411696)

Well, electricity does travel slightly slower than light (physical electrons, which have mass, do move, although not from one end of the wire to the other). However, I suspect what they're after is improved switching speed. High frequency photons can switch on & off more sharply (i.e. in less time) than electrons in a typical electrical flow.

Re:Why is this faster? (1)

wurp (51446) | more than 2 years ago | (#34411712)

Goddamit, I hate the way "No karma bonus" sticks until I turn it off again now.

Re:Why is this faster? (1)

geekoid (135745) | more than 2 years ago | (#34411756)

not faster, more.

SUper 1337 hax0r machines (1)

FPoe (1935312) | more than 2 years ago | (#34411602)

Well, at least they are using their smarts to actually invent the things they claim instead of sitting on patents like some other companies. Now to remember the new password standard, minimum 90 characters.

But does this mean... (0)

gestalt_n_pepper (991155) | more than 2 years ago | (#34411872)

Windows will finally be usable?

Surly now... (1)

FridayBob (619244) | more than 2 years ago | (#34412090)

... the Singularity must almost be upon us. I, for one, welcome our new supercomputing overlords!



(No it isn't, Ray Kurzweil is an idiot, and don't call me Shirley!)

ITRS roadmap (1)

DavMz (1652411) | more than 3 years ago | (#34412202)

FTA:

The photonics technology [will] help IBM to achieve its goal of building an exascale computer by 2020

So I guess IBM is in line with the International Technology Roadmap for Semiconductors [itrs.net] .
There has been a lot of research done by the major players in the industry, individual components have been developed (light sources, couplers, photodetectors, optical waveguides, etc.), and IBM just showed they can produce them on-die with standard semiconductor production methods.
That's not the kind of breakthrough the article claims; it is the usual incremental progress. And I am quite happy with that.

So then, (1)

wondergeek (220755) | more than 3 years ago | (#34412608)

Kurzweil's not looking quite as crazy right now.

And right behind those photons... (0)

Anonymous Coward | more than 3 years ago | (#34413038)

Massively bloated, obscure, abstract, incomprehensible software races to bring your computer to about the same performance levels as your present one...

When are we going to start rating software in cycles/keystroke or any kind of metric? Why is it always the job of hardware designers to rescue your sorry, undisciplined asses?
