
Rethinking Computer Design For an Optical World

timothy posted more than 4 years ago | from the optical-floptical dept.

Intel 187

holy_calamity writes "Technology Review looks at how some traditions of computer architecture are up for grabs with the arrival of optical interconnects like Intel's 50Gbps link unveiled last week. The extra speed makes it possible to consider moving a server's RAM a few feet from its CPUs to aid cooling and moving memory and computational power to peripherals like laptop docks and monitors."


187 comments


LightPeak (1)

Yvan256 (722131) | more than 4 years ago | (#33141496)

For GPUs? Finally an easy upgrade path for all future Macs?

Re:LightPeak (1)

TheKidWho (705796) | more than 4 years ago | (#33141540)

No, the lag would be stupid. You want your GPU as close as possible to the CPU...

Re:LightPeak (1)

Yvan256 (722131) | more than 4 years ago | (#33141570)

My Mac mini uses an nVidia 320M which shares RAM with the CPU. According to the summary it's fast enough for the RAM, so why can't it be fast enough for a GPU?

Re:LightPeak (3, Interesting)

somersault (912633) | more than 4 years ago | (#33142164)

CPUs have high speed cache that is faster than the mainboard RAM for high speed processing on a set of data, and swap the cache to/from RAM as necessary (kind of like how you page RAM to your hard drive when you run out of RAM).

Such a small cache would be useless for GPUs though, so they need faster RAM to read the massive amounts of texture/vertex/shader/whatever data they have as quickly as possible. They also benefit more from stuff like RAM that is optimised for high sequential read speeds, so it does make sense to use RAM that has been specially designed for GPUs if you actually care about graphics performance (I doubt most Mac Mini users do).

Re:LightPeak (1)

Yvan256 (722131) | more than 4 years ago | (#33142250)

But wouldn't the GPU and its own RAM be in the same box, away from the main CPU? Modular computers. Buy the CPU, RAM, GPU and storage modules you need and build your own computer accordingly.

Re:LightPeak (1)

lgw (121541) | more than 4 years ago | (#33142320)

But wouldn't the GPU and its own RAM be in the same box, away from the main CPU? Modular computers. Buy the CPU, RAM, GPU and storage modules you need and build your own computer accordingly.

Isn't that what I did to build the computer I'm typing this on right now? I barely needed a screwdriver, and that was just to secure the motherboard to the case.

Re:LightPeak (3, Interesting)

Yvan256 (722131) | more than 4 years ago | (#33142536)

Most people don't want to mess around inside a computer case, just like most people don't want to mess with the engine of their car or truck, or with the insides of their televisions, etc.

Such a modular system would be similar to huge LEGO bricks, nothing to open up, just connect the bricks together. Hopefully they would make the modules in standard sizes and allow multiples of that standard size. A CPU module could be 2x2x2 units, optical drives could be 2x1x2, etc.

The system could allow connections on at least four faces, so we don't end up with very tall or very wide stacks. Proper ventilation would be part of the standard unit size (you need more heatsinking than the aluminium casing allows? Make your product one unit bigger and put ventilation holes in the empty space). A standard material such as aluminium could be used so that the modules could be machined or extruded cheaply and could dissipate heat.

Re:LightPeak (1)

jedidiah (1196) | more than 4 years ago | (#33142626)

It doesn't matter if it's simple and easy lego bricks. If people aren't interested in rolling their own then they aren't interested in rolling their own regardless of how easy or hard it is.

Such large bulky systems will likely seem at best quaint.

Re:LightPeak (4, Informative)

The Master Control P (655590) | more than 4 years ago | (#33142652)

I recommend reading the programmer's guide [nvidia.com] to a modern graphics architecture; caching is essential to them.

Modern GPU architectures face the same clock speed/bus speed disparity and memory latency problems as CPUs and have taken their response much farther. They have several thousand registers per core and an L1 size & speed cache per processor group. Cache misses carry a typical penalty of several hundred cycles.
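To make the latency-hiding point concrete, here is a rough Python sketch. It is purely illustrative: the 400-cycle miss penalty is the figure quoted above, but the 20 cycles of useful work per miss is an assumed number, not something from the NVIDIA guide.

# Back-of-the-envelope estimate of how many resident warps a GPU core needs
# to hide a cache-miss penalty, given how much work each warp can do between
# misses. Both numbers are assumptions for illustration only.
MISS_LATENCY_CYCLES = 400      # typical miss penalty quoted in the comment above
CYCLES_OF_WORK_PER_MISS = 20   # assumed arithmetic a warp can do between misses

warps_needed = MISS_LATENCY_CYCLES / CYCLES_OF_WORK_PER_MISS + 1
print(f"~{warps_needed:.0f} resident warps needed to keep the core busy")
# With only a handful of warps the core would stall on every miss, which is why
# GPUs carry thousands of registers per core: cheap context for many warps.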

Re:LightPeak (1)

Beardydog (716221) | more than 4 years ago | (#33141918)

I thought GPU operations were one-way enough that separation issues were much more about bandwidth than latency.

Re:LightPeak (1)

hitmark (640295) | more than 4 years ago | (#33142618)

With today's direct attachment of screens, it probably is. But if the rendering is happening in a central location and then routed back over a network, it may be something else.

Something like using the render farm to power the workstations during office hours, and then to render scenes after hours.

Re:LightPeak (1)

644bd346996 (1012333) | more than 4 years ago | (#33141956)

Don't you think that GPUs are smart enough today that they could just take in updated geometry data, etc. and render, without any performance-critical need to send data back up the pipeline to the CPU? Sure, our current software stack isn't well-suited for that kind of use, but lightpeak could provide the impetus for that relatively small re-architecting.

Re:LightPeak (1)

Nadaka (224565) | more than 4 years ago | (#33142400)

The problem with that is the current trend of using GPUs to process physics simulations in games as well as the graphics. Modern usage gets a lot of value from graphics cards' ability to return data that isn't purely graphics-related.

Re:LightPeak (2, Informative)

Anonymous Coward | more than 4 years ago | (#33142546)

No, the lag would be stupid.

No the lag would not be stupid, just imperceptible. No, really. A ten meter cable will delay data sent to a Remote GPU (tm) by fifty nanoseconds. Not milliseconds. Not microseconds. Nanoseconds. You can't perceive that. Not in your wildest, most fevered gamer dreams.

Contemporary GPUs couldn't accomplish this because they frequently interact with the host CPU in a synchronous manner. I'm guessing that is the point of the "rethinking computer design" topic.

Interesting, but... (1)

Pojut (1027544) | more than 4 years ago | (#33141522)

moving memory and computational power to peripherals like laptop docks and monitors.

I would think that this would make upgrading more complicated, not less so. Thoughts?

Re:Interesting, but... (3, Interesting)

derGoldstein (1494129) | more than 4 years ago | (#33141754)

It would allow you to use components in a more modular way, especially around an office. If you're not big enough (of a company) to have dedicated rendering/encoding servers, you could move the GPU around depending on who's currently doing the work that requires it. Even on a more casual basis, you could have a bunch of laptops with mid-range GPUs, and have an external GPU for whoever is gaming at the moment. Just like people take turns in a household with the home-theater rig in the living room -- you don't need to install a huge LCD + amp + speaker system in every room, you just need to take turns.

Re:Interesting, but... (2, Insightful)

mhajicek (1582795) | more than 4 years ago | (#33141940)

I like the mention of putting memory and such in a dock. So you have 8GB RAM in your laptop on the go, but when you get home or to the office and dock you have 32GB. You could also have your hot and power-hungry CAD / gaming GPU in the dock and a lesser one built in.

Re:Interesting, but... (2, Insightful)

0100010001010011 (652467) | more than 4 years ago | (#33142172)

Or made like LEGO Blocks. Need a quad core CPU? Go buy one and snap it onto your others.

Re:Interesting, but... (2, Interesting)

Nadaka (224565) | more than 4 years ago | (#33142550)

Not exactly what you had in mind, but I've already seen a lego like modular computer in the embedded hobbyist market.

It is mostly networking and user interface elements that can be stacked, not GPUs or CPUs.

http://www.buglabs.net/products

Re:Interesting, but... (1)

hitmark (640295) | more than 4 years ago | (#33142574)

hmm, motherboard interconnect, NUMA for the home.

a few extra feet (0)

Anonymous Coward | more than 4 years ago | (#33141524)

that should help reduce latencies

Re:a few extra feet (1)

hedwards (940851) | more than 4 years ago | (#33141622)

That's what I'm curious about. I don't think that light travels that much more quickly than electrons do. On top of which, you can usually run multiple lanes of data to and from the destination, which, while possible with optics, strikes me as a pain.

Re:a few extra feet (3, Interesting)

Sarten-X (1102295) | more than 4 years ago | (#33141818)

By my understanding, it's not so much the travel time as the decoding/switching/other electronic time. As one example, consider the switching time of a transistor/photodetector. The gate must collect enough energy to switch from "off" to "on". Increased speed means having fewer electrons enter the gate. Higher energy per electron means raising the voltage. That's why overclocking often involves fiddling with voltages. Unfortunately, with more voltage comes more induction, breakdown, and other headaches I don't know enough about to list.

In contrast, light is much simpler to work with. You can make a light beam brighter without affecting other beams much. There's little chance of a beam breaking through its cable. We can send higher energies to gates with ease. Higher energy means less time to switch, and faster operation.

Note that I am not a physicist, and not much of an electrical engineer. I may be entirely wrong.

Re:a few extra feet (1)

derGoldstein (1494129) | more than 4 years ago | (#33141946)

EMI/RFI [wikipedia.org]

The higher the frequency, the bigger the problem.

Re:a few extra feet (0)

Anonymous Coward | more than 4 years ago | (#33141980)

Yeah. Doing it too much can make you BLIND!

Re:a few extra feet (4, Insightful)

Mordok-DestroyerOfWo (1000167) | more than 4 years ago | (#33141958)

Note that I am not a physicist, and not much of an electrical engineer. I may be entirely wrong.

I'm not qualified enough to say whether you're right or wrong, but you stated your case eloquently, and if there's one thing that Hollywood, politics, and Star Trek have taught me, it's that sounding right is more important than being right.

Re:a few extra feet (3, Funny)

bennomatic (691188) | more than 4 years ago | (#33142028)

Huzzah for the Internet-age realist and/or snarker. Nice compliment, back-handed or otherwise.

Re:a few extra feet (4, Funny)

smooth wombat (796938) | more than 4 years ago | (#33142182)

Or, as our esteemed Professor Farnsworth remarked:

Yes, yes, anything with that many big words could easily be the solution.

Speed of whatever (4, Insightful)

overshoot (39700) | more than 4 years ago | (#33141866)

I don't think that light travels that much more quickly than electrons do.

Yes and no. In a vacuum, electrons aren't terribly useful unless you're driving them with a particle accelerator. In wires, electrons aren't really doing the work anyway: electrical signals effectively travel as waves in the dielectric surrounding the wires and in particular between signal pairs. In that case, the signal travels at around half the speed of light in a vacuum (faster if you use expensive insulation like Teflon, slower for other plastics.)

Light in optical fiber is also slowed by the refractive index of the material and by path-length extension in multimode fiber. However, on balance it's a bit faster.

The real gotcha is that electrical signals at outrageous bandwidths suffer from some really horrible losses due to both skin effects on the wires and dielectric losses in the insulation. At 50 Gb/s and 30 cm, you're doing well to detect the resulting signal, never mind decode it. Worse, the losses are highly frequency-dependent, so you have to do all sorts of ugly things to pre- and post-condition the signal to make it usable. Some of this can be overcome by cranking up the transmit power, but then you get into that property of wires known as "antenna." All of that processing at both ends takes time, too.

Just not worth doing, generally.

Likewise, putting a bunch of streams out in parallel requires all sorts of cleverness to put the separate lanes together again after transmission skew. A single optical stream is much easier to use, sort of like the communications equivalent of Amdahl's Law.

Re:a few extra feet (1)

Locke2005 (849178) | more than 4 years ago | (#33142874)

My understanding was that signal propagation in glass fiber was actually slightly slower than signal propagation in copper coax. Add to that the delays of modulating the signal from electric to light and demodulating it back into electric. Using optical interconnects can only increase your latency.

dumb monitor (2, Insightful)

demonbug (309515) | more than 4 years ago | (#33141578)

The extra speed makes it possible to consider moving a server's RAM a few feet from its CPUs to aid cooling and moving memory and computational power to peripherals like laptop docks and monitors

Why would I want to pay for computational power in my monitor? When I buy a monitor I want it to do its job - show the best quality images for the cheapest cost possible. A good monitor should last much longer than the associated computer driving it (unless we suddenly have a huge increase in the rate of development of display technology). Why would I want added cost in my monitor that will only make it out of date more quickly?

Re:dumb monitor (3, Insightful)

jack2000 (1178961) | more than 4 years ago | (#33141618)

So you can buy a new monitor again, and again and again. I bet this is what went through Steve Jobs' head when they made Macs hard to upgrade, that and a huge thunder of Ka-ching Ka-ching Ka-ching Ka-ching Ka-ching Ka-ching ...

Re:dumb monitor (1)

derGoldstein (1494129) | more than 4 years ago | (#33141884)

that and a huge thunder of Ka-ching Ka-ching Ka-ching Ka-ching Ka-ching Ka-ching ...

I'm almost certain that he was born with the "Ka-ching Ka-ching" sound looping in his brain.

Re:dumb monitor (1, Funny)

Anonymous Coward | more than 4 years ago | (#33142302)

Hey it's not his fault he was born while Pink Floyd's 'Money' was playing.
Blame it on the nurse who had the radio blaring!

Re:dumb monitor (2, Insightful)

ceoyoyo (59147) | more than 4 years ago | (#33142014)

For ages I avoided Macs and built my own machines with upgrades specifically in mind. Turns out I rarely ever actually upgraded any of them anyway, except occasionally the video card and, more often, hard drives and memory. It was usually more economical to sell the old machine to someone and buy or build another.

When I started grad school the lab used all Macs. I've never missed the ability to upgrade.

Re:dumb monitor (2, Insightful)

derGoldstein (1494129) | more than 4 years ago | (#33142204)

What about the ability to re-use a good power supply and case? I've had my PSU/Case combo for 3 computers now. When I say that I've "upgraded my computer", I often mean that I've replaced the motherboard, CPU, and RAM to a new architecture. Many/most of the other components remain the same -- I often have no reason to upgrade the storage, video card, optical drives, and, as mentioned above, the PSU/case. It's more flexible and modular, even if it does take some more work.

Re:dumb monitor (1)

ceoyoyo (59147) | more than 4 years ago | (#33142538)

Yeah, I kept one case for ages. A big steel monster that weighed a tonne but was far superior to the paper thin sheet metal deals they started making. It was a pain though, because it's tough to sell a computer without a case, so I usually ended up buying a new case whenever I "upgraded" anyway.

It always was far easier, and frequently cheaper, just to sell the whole thing and buy another. Macs doubly so because they seem to hold their resale value better than a generic PC.

Re:dumb monitor (1)

hitmark (640295) | more than 4 years ago | (#33142666)

Not surprising, as until Apple went x86 they were something of a collector's item. The last holdout of the microcomputer era, building their own internals from the ground up.

Re:dumb monitor (1, Insightful)

Anonymous Coward | more than 4 years ago | (#33142314)

I, on the other hand, bought a really nice 21-inch LCD in the year 2000. I still have the LCD, but where is the 350MHz K6/2? Or the other 7 machines I have owned since then?

Monitors do not need to be smart, and they do not need to be tied to the computer, unless you're in a situation where an all-in-one appliance (not computer) makes sense, such as a university, where you have some for students to type papers, do research and whatnot.

Re:dumb monitor (1)

jedidiah (1196) | more than 4 years ago | (#33142730)

Plugged a new machine into an old monitor?

Then you've "upgraded your machine" by Apple standards.

Storage would be one key thing to make easy to upgrade. Stuff is always getting bigger and bigger and we're always finding new ways of filling up disks. Plus, one might go bad and you would want to replace it.

The idea that you would never need to repair or upgrade storage is silly.

It would be nice if Macs allowed for easy standardized hot (or cold) swapping of internal drives.

Re:dumb monitor (1)

bennomatic (691188) | more than 4 years ago | (#33142060)

Man, there should be a SJobs version of the Godwin rule.

Re:dumb monitor (1)

0100010001010011 (652467) | more than 4 years ago | (#33142262)

Care to point out which ones are "hard to upgrade"? My Macbook Pro couldn't be easier to upgrade a HD or RAM in. The G5s up through the Mac Pros seem to be as simple an upgrade path as you can get. Everything more or less slides out, no screws, nothing. [apple.com]

The original Minis were difficult, but that probably came from cramming that amount of material into the form factor. Newer iMacs and Minis are just a twist off cover to upgrade RAM.

Re:dumb monitor (1, Interesting)

jedidiah (1196) | more than 4 years ago | (#33142774)

> Care to point out which ones are "hard to upgrade"?

All the ones that don't cost an arm and a leg.

I can easily upgrade a $300 PC. On a Mac, that's a privilege that requires a minimum $2400 buy-in.

Re:dumb monitor (0)

Anonymous Coward | more than 4 years ago | (#33142410)

So you can buy a new monitor again, and again and again.

WARNING! Anecdotal evidence ahead.

With the family/friends that turn to me for computer help (n = ~30), those that buy PCs tend to replace them somewhere between two and three times as often as those with Macs. With the exception of memory, no upgrades are ever performed on a given machine; they are simply retired and replaced. Macs are usually resold on eBay / craigslist, whereas PCs are dropped off at the Home Hazardous Waste center.

Re:dumb monitor (4, Funny)

bsDaemon (87307) | more than 4 years ago | (#33141644)

you mean like an imac? /ducks (disclaimer: typed from a 24" imac while at work)

Re:dumb monitor (1)

Grishnakh (216268) | more than 4 years ago | (#33142788)

A good monitor should last much longer than the associated computer driving it (unless we suddenly have a huge increase in the rate of development of display technology).

Not likely. With today's LCDs (esp. the LED backlit ones), displays are already very good, and there's little reason to upgrade unless you want a bigger one. That trend is only going to go so far.

Displays seem to make quantum leaps, so to speak. For a long time, we were all using CRT monitors. After VGA and SVGA came out, many of us were happy with those for a long time. I got a 19" CRT back in 1998 that I used for about 10 years before I finally replaced it with an LCD monitor, for instance, and before that I had a 14" SVGA CRT that I had for around 7 years.

CRTs peaked out at a certain point, when the resolution got really good and the size reached around 19-21". Some were bigger, but they were so large and heavy and expensive that not many people had them. I had a 24" Sony CRT at my last job that was very nice, but weighed about 100 lbs so it was awful to move.

Then LCDs came along, and followed the same path: they started out small (15"), but kept increasing in size (and resolution) to their current size, 22-24". LCDs were a giant improvement over CRTs in many ways, namely size (very little depth) and weight, and of course power consumption, while a bit of a step back in image quality and color (though their images are perfectly rectangular unlike CRTs, no weird distortions and adjustments to overcome them). There are larger LCD monitors, but 24" seems to be near the limit for practicality on a desktop, with the user seated only 1-2 feet away. LED backlighting has helped the color problem too, as well as the power consumption. So LCD monitor technology seems to me to be peaking like CRT tech did in the late 90s/early 2000s.

I'm not sure what could come out that would be a worthwhile improvement over LCD technology. LCDs have pretty much fixed all the problems CRTs ever had: lack of flatness of the screen, power consumption, etc. Very expensive models with non-TN panels and LED backlighting have excellent image quality for high-end work requiring it, and lower-end TN panels with LED backlighting are cheap and good enough for most people. So other than making even bigger (~30") panels, which would only appeal to some people (like programmers who want to see lots of code at once), I don't see what else people might desire in a display located within arm's reach. For greater screen real estate, dual monitors are already very common, and allow you to angle each monitor for best viewing, something you couldn't do with a single monitor twice as wide. And at 25W for a modern 24" LED-backlit monitor, there's not too much you could do to improve power consumption.

I think people are going to hold onto their current monitors for quite some time.

And yes, I agree that integrating almost anything into the monitor is stupid. Maybe USB ports, but that's about it. Certainly not any kind of CPUs.

DRM (3, Interesting)

vlm (69642) | more than 4 years ago | (#33141602)

moving memory and computational power to peripherals like ... monitors.

They mean ever more complicated DRM. Like sending the raw stream to the monitor to be decoded there.

Re:DRM (1)

mlts (1038732) | more than 4 years ago | (#33141674)

DRM comes to mind, as well as forcing/offloading various graphic rendering commands to the monitor. So when DirectX changes or gets upgraded, you have to buy not just a new card, but another monitor. I'm just waiting for HDCP to start having versions so someone with HDCP 2010a won't be able to watch Blu-Ray movies, nor HD TV unless they pitch the monitor and buy themselves a TV with HDCP 2010b or something along those goofy lines.

Half of the computer could be left on the table... (1)

derGoldstein (1494129) | more than 4 years ago | (#33141648)

I mean for laptops. Right now I can leave storage and a larger monitor behind when I take it with me, and of course anything that can be networked. I'd like to be able to "dock the laptop into" more RAM, a more powerful GPU, and (while I realize this is wholly unlikely) maybe a second CPU (4 cores on the laptop, 4 more on the table).

Adding a GPU as an external peripheral has already been done, just not in a commercially viable way. Hopefully this will change.

Re:Half of the computer could be left on the table (1)

icebraining (1313345) | more than 4 years ago | (#33142278)

Adding a second CPU is not that unlikely - motherboards with two sockets have existed for a long time. If you can "push out" the RAM with this tech, why not a second CPU?

Here we go again (4, Informative)

overshoot (39700) | more than 4 years ago | (#33141672)

This is eerily reminiscent of Intel's flirtation with Rambus: they were so focused on bandwidth that they sacrificed latency to get it. Yeah, the Pentium4 series racked up impressive GHz numbers but the actual performance lagged because the insanely deep Rambus-optimized pipeline stalled all the time waiting for the first byte of a cache miss to arrive.

Same goes for optical interconnect to memory: the flood may be Biblical when it arrives, but while waiting for it to arrive the processor isn't doing anything useful.

Now, peripherals are another matter. But if bandwidth were all it took, we'd be using 10 Gb/s PCI Express for memory right now.

Re:Here we go again (1)

Shikaku (1129753) | more than 4 years ago | (#33141794)

BUT BUT BUT....

50Gbps!!!!!!1

Re:Here we go again (4, Informative)

demonbug (309515) | more than 4 years ago | (#33141802)

This is eerily reminiscent of Intel's flirtation with Rambus: they were so focused on bandwidth that they sacrificed latency to get it. Yeah, the Pentium4 series racked up impressive GHz numbers but the actual performance lagged because the insanely deep Rambus-optimized pipeline stalled all the time waiting for the first byte of a cache miss to arrive.

Same goes for optical interconnect to memory: the flood may be Biblical when it arrives, but while waiting for it to arrive the processor isn't doing anything useful.

Now, peripherals are another matter. But if bandwidth were all it took, we'd be using 10 Gb/s PCI Express for memory right now.

I was thinking the same thing regarding latency and remote memory. If you've got your memory 1 physical meter away, you're already looking at something like 6.6 ns round-trip latency (in a vacuum) just for light traveling that physical distance; seems like once you include switching plus getting to/from the optical interconnect you're looking at some pretty serious latency issues compared to onboard RAM (I think DDR3 SDRAM is on the order of 7-9 ns).
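As a sanity check on those numbers, a quick Python sketch; the 8 ns DRAM figure is just the midpoint of the 7-9 ns range mentioned above, not a measured value.

# Round-trip flight time for light over 1 m in vacuum, compared to a ballpark
# DRAM access time (assumed 8 ns, mid-range of the 7-9 ns quoted above).
C = 299_792_458            # speed of light in vacuum, m/s

distance_m = 1.0
round_trip_ns = 2 * distance_m / C * 1e9
print(f"1 m round trip (vacuum): {round_trip_ns:.1f} ns")       # ~6.7 ns

dram_latency_ns = 8
print(f"Flight time alone is ~{round_trip_ns / dram_latency_ns:.0%} of DRAM latency")
# And that's before encoding, switching, and the slower-than-c speed in fiber.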

Re:Here we go again (2, Insightful)

tantrum (261762) | more than 4 years ago | (#33142554)

Might split things up into something resembling onboard RAM and external swap, though.

I don't need my 24GB swap space much at the moment, but it would be sweet to have it respond in something like 20 ns anyway :)

Re:Here we go again (2, Funny)

Grishnakh (216268) | more than 4 years ago | (#33142820)

So they just need to figure out how to make FTL optical cables...

Re:Here we go again (2, Insightful)

feepness (543479) | more than 4 years ago | (#33141906)

Same goes for optical interconnect to memory: the flood may be Biblical when it arrives, but while waiting for it to arrive the processor isn't doing anything useful.

That's the thing though, isn't it? There isn't a "the processor", there's 8, 16, 32, 128 processors. So stalling one may not be that great a loss.

Re:Here we go again (3, Insightful)

chrb (1083577) | more than 4 years ago | (#33142020)

Same goes for optical interconnect to memory: the flood may be Biblical when it arrives

But it won't be - the system is fundamentally limited by all of the rest of the components. A top-end front-side bus can already push 80Gb; scaling that up to the 400Gbit that this optical link promises will probably be practical within a few years, but the latency of encoding and decoding a laser signal and pushing it over several meters is going to be a killer for computational applications. It will be great for USBX, and for high-end networking it will challenge Infiniband (which currently tops out at around 300Gb). Infiniband is already used for networking high-performance computational clusters, but nobody is using it for the CPU-to-memory bus because of the high latency. Even with high bandwidth, computation still has to be carried out on the data, and so it still makes sense to put the data and processor as close together as possible.

In the last decade there were many research papers proposing that co-processors would be placed on DRAM cards, or that embedded DRAM would allow memory and processor to be fabricated on a single die (e.g. 1 [psu.edu] , 2 [stanford.edu] ). But if you have a processor and DRAM connected to similar units via an optical interconnect, guess what - the architecture begins to look awfully similar to a regular network with optical Ethernet. So, it looks likely that this will be just another incremental improvement in architecture rather than the radical shift that TFA envisions.

Re:Here we go again (1)

Animats (122034) | more than 4 years ago | (#33142134)

Yes. Not only do you have speed of light latency, you have marshaling latency, as the bits have to go into a register in parallel, then be clocked out serially for transmission, then converted to parallel at the other end. For memory access, that overhead matters.

Optical interconnects do have faster propagation than electrical ones. Radio in vacuum achieves the speed of light, but in cables and on PC boards, capacitance and inductance slow down propagation [hightech12.com] well below the speed of light. Coax is 60-75% of light speed. Traces on FR4 board are around 50%. Inner traces on multilayer PC boards are below 30% of light speed. Interconnects on chip are sometimes even worse. Optical interconnects don't reach the speed of light in vacuum either, but they're usually above 60% of light speed.
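For what it's worth, here is a small Python sketch converting those fraction-of-c figures (taken as rough values from the comment above) into delay per 30 cm of trace or cable.

# Convert rough propagation speeds (as fractions of c) into ns per 30 cm.
C = 299_792_458  # m/s

media = {
    "coax (0.60-0.75 c)":         0.66,
    "FR4 surface trace (~0.5 c)":  0.50,
    "inner PCB trace (<0.3 c)":    0.30,
    "optical fiber (>0.6 c)":      0.67,
}
for name, fraction in media.items():
    delay_ns = 0.30 / (fraction * C) * 1e9
    print(f"{name:28s} ~{delay_ns:.2f} ns per 30 cm")
# The spread is roughly a factor of two, so the win from optics here is mostly
# about loss and distance, not raw propagation speed.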

There's slow and then there's slow (1)

overshoot (39700) | more than 4 years ago | (#33142226)

Coax is 60-75% of light speed. Traces on FR4 board are around 50%. Inner traces on multilayer PC boards are below 30% of light speed. Interconnects on chip are sometimes even worse.

Well, I'm not aware of anyone using epoxy glass for cable insulation. You can get pretty quick (0.8 C0 or so) with foamed Teflon insulation, but you have to be seriously wanting to pay for it. Easy to damage, too.

Re:Here we go again (1)

PipsqueakOnAP133 (761720) | more than 4 years ago | (#33142374)

So... uh... The 2nd Coming of FB-DIMMs?

If that happens, I'm not thinking performance, I'm thinking short Intel stock.

Tell me (1)

overshoot (39700) | more than 4 years ago | (#33142478)

So... uh... The 2nd Coming of FB-DIMMs?

Without disclosing my Super Secret Identity, let's just say that I was there at the beginning of the FBDIMM fiasco, told my management to run, not walk, away from getting sucked into it, and proceeded to watch the train wreck from very close up. As in, on the field instead of front-row in the stands.

I've made a lot of bad calls in my life but I totally nailed that one.

Re:Here we go again (3, Interesting)

hackerjoe (159094) | more than 4 years ago | (#33142482)

You people are not thinking nearly creatively enough. The article doesn't make it clear why you'd want to move your memory farther away -- it would increase latency, yeah, but moreover, what are you going to put that close to the CPU? There isn't anything else competing for the space.

Here's a more interesting idea than just "outboard RAM": what if you replaced the RAM on a blade with a smaller but faster bank of cache memory, and for bulk memory had a giant federated memory bank that was shared by all the blades in an enclosure?

Think multi-hundred-CPU, modular, commodity servers instead of clusters.

Think taking two commodity servers, plugging their optical buses together, and getting something that behaves like a single machine with twice the resources. Seamless clustering handled at the hardware level, like SLI for computing instead of video if you want to make that analogy.

Minor complaint, the summary is a little misleading with units: they're advertising not 50 gigabits/s, but 50 gigabytes/s. Current i7 architectures already have substantially more memory bandwidth than this to local RAM, so the advantage is definitely communication distance here, not speed.

Sounds like NUMA is going mainstream (1)

Ant P. (974313) | more than 4 years ago | (#33141714)

The question is how many years it'll take before Windows supports it.

Re:Sounds like NUMA is going mainstream (0)

Anonymous Coward | more than 4 years ago | (#33141962)

The question is how many years it'll take before Windows supports it.

NUMA has been supported by Windows since Windows Server 2003 and at least XP Service Pack 2. So at least 7 years ago.

light speed lag leads to higher latency (4, Interesting)

Chirs (87576) | more than 4 years ago | (#33141730)

Without factoring in speed of light drops due to index of refraction changes, at a distance of 1 meter you're looking at latencies of 7 nanoseconds just for travel time. The bandwidth may be decent but the latency is going to be an issue for any significant distance.

Re:light speed lag leads to higher latency (0)

Anonymous Coward | more than 4 years ago | (#33141842)

You don't need to be an entire meter away for applications like having a laptop docking station with extra memory or a different graphics card. That would be close enough to add less than 1ns of latency.

Getting Entangled (1)

peterofoz (1038508) | more than 4 years ago | (#33142066)

I bet this is going to get all tangled up in the near future.

http://arstechnica.com/old/content/2006/01/5971.ars [arstechnica.com]

Potential applications: Subspace radio, wide area networks on a solar system scale. Just think, no more 3 minute wait for a radio signal from Mars or beyond.

Re:Getting Entangled (2, Informative)

Rakishi (759894) | more than 4 years ago | (#33142128)

No known process allows for information transfer at speeds faster than light. Including quantum entanglement. Stop watching so much science fiction and go read up on what it actually does instead.

Re:Getting Entangled (0)

Anonymous Coward | more than 4 years ago | (#33142880)

So if you generate an entangled pair of photons and then separate them by any distance—from a few nanometers to thousands of light-years—you can collapse the wave function of one by detecting its spin direction and you'll know instantaneously the spin of its entangled partner. In such a scenario, the information about the spin of the entangled particle travels faster than light, which is a problem for quantum mechanics and is why Einstein didn't like entanglement.

I don't know enough to verify the article. That said, the article linked states that information can travel faster than the speed of light.

Re:light speed lag leads to higher latency (1)

flaming-opus (8186) | more than 4 years ago | (#33142210)

Absolutely. I think the more likely case is that we're going to see RAM on the compute device, or at least on-package. In the world of cache, even traversing the processor die is a latency worth worrying about.

That said, how about optical NUMA? With HT or QPI the latency is already up above 100 ns, so adding an optical hop may be reasonable. How about using an optical cable to string together 2 single-socket motherboards into a dual-socket SMP? Not that you need optics to do this, but they make it possible to have nodes 3 meters apart instead of half a meter.

The 1990s called. (4, Funny)

PPH (736903) | more than 4 years ago | (#33141732)

They want their rat's nest of cables back.

The extra speed makes it possible to consider moving a server's RAM a few feet from its CPUs to aid cooling and moving memory and computational power to peripherals like laptop docks and monitors.

Re:The 1990s called. (1)

thethibs (882667) | more than 4 years ago | (#33141882)

Actually, that's the 1960s.

Re:The 1990s called. (1)

derGoldstein (1494129) | more than 4 years ago | (#33142016)

Depending on what you're working on, it could be right now. Have you seen a graphic designer on his/her own "turf"? I didn't know a laptop could dock into so many things at the same time. Monitor, keyboard, mouse, Wacom tablet, storage, network, scanner, printer, and a partridge in a pear tree. Many of us have never left the rat's nest...

Is that a USB (1)

DRAGONWEEZEL (125809) | more than 4 years ago | (#33142368)

Powered Partridge?

Re:The 1990s called. (1)

mhajicek (1582795) | more than 4 years ago | (#33142000)

But without a nest of cables you can't do Serial Experiments Lain!

Re:The 1990s called. (1)

derGoldstein (1494129) | more than 4 years ago | (#33142258)

Also, Ghost in the Shell teaches us that if you want a really good connection to someone's brain, it needs to be a physical one.

Re:The 1990s called. (1)

HeckRuler (1369601) | more than 4 years ago | (#33142902)

And Shadowrun shows that even if you do upgrade to wireless, everyone will live in Faraday cages.

Computer architecture must have the Bhudda-nature (4, Insightful)

idontgno (624372) | more than 4 years ago | (#33141740)

because this appears to be another aspect of Wheel of Reincarnation [catb.org] .

I'm old enough to remember a time where a computer was a series of bitty boxes tied together with cables. Then someone decided to integrate a lot of the stuff onto a motherboard, with just loosely-related stuff connected by cables to the motherboard. Then the loosely-related stuff got put into cards that plugged into the motherboard. Then that stuff just got integrated into the motherboard.

And now it's being reborn as stuff in bitty boxes connected together with cables.

I wonder what enlightenment will be like, because karma appears to have been a bitch.

You're going to make me look like a genius. (2, Funny)

Anonymous Coward | more than 4 years ago | (#33141936)

In 30 years I'll suggest integrated optical motherboards.

Re:Computer architecture must have the Bhudda-natu (1)

confused one (671304) | more than 4 years ago | (#33142436)

That's what I was thinking: This is going back to the way it was in the mini-computer era. CPU in one box. Additional memory in another. Framebuffer in a third. Disk in a fourth...

What's old is new again.

Re:Computer architecture must have the Bhudda-natu (1)

Grishnakh (216268) | more than 4 years ago | (#33142878)

Except the whole thing has terrible power consumption, because each unit has its own crappy wall-wart power supply, and you have to have 3 power strips wired in series to have places for them all to plug in.

Re:Computer architecture must have the Bhudda-natu (2, Insightful)

Jah-Wren Ryel (80510) | more than 4 years ago | (#33142578)

Uh yeah, this isn't the first time around. The computer industry is constantly rediscovering previous designs. Timesharing, batch jobs, client-server, integrated/distributed processing, etc, etc. Nothing new under the sun, just smaller and faster is all.

I wonder what enlightenment will be like, because karma appears to have been a bitch.

It's called retirement - you get out of the loop and eventually you go out like the flame of a candle.

Re:Computer architecture must have the Bhudda-natu (1)

timeOday (582209) | more than 4 years ago | (#33142876)

I wouldn't confuse "what might be enabled by this new technology" with what is actually going to happen.

The vast majority of computers (even if known by other names such as "smartphone") will only become more and more integrated. I doubt we'll be buying standalone graphics cards for PC's in 10 years, and not even standalone RAM modules in many cases.

Maybe for high performance computing there will be a big shared memory hooked up to tens of thousands of cores by optical interconnects, but not for 99% of the market.

Speed limit (1)

Megane (129182) | more than 4 years ago | (#33141810)

The extra speed makes it possible to consider moving a server's RAM a few feet from its CPUs

Sure it has the bandwidth, but have you tried factoring the speed of light into that? Long ago I saw part of an interview with Grace Hopper, and she held up a six-inch piece of wire. She explained that the piece of wire represented a nanosecond delay. Now admittedly electricity usually only travels at about 0.5c, IIRC, but I think she was giving the speed-of-light delay, not the speed-of-electrons delay. I'm also not including any propagation delays in the optic transmitter and receiver. Also, the delays are doubled because the CPU has to request what data needs to be sent, and that request has to arrive at the memory before the memory can send the data.

"A few feet"? Let's say 3 feet. That means 3 feet times 2 directions times 2 nanoseconds per foot, for a total of 12 nanoseconds, maybe a little better if you can make page requests. I remember back in the early '90s, RAM speeds were in the range of 60-80ns for plain old fast-page DRAM.

You can deal with relativistic propagation delays for secondary storage, but not for primary storage.
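The arithmetic above, spelled out as a tiny Python sketch. It assumes the quoted figure of roughly 2 ns per foot for an electrical signal at about 0.5c; the DRAM comparisons are the ballpark numbers mentioned elsewhere in this thread, not measurements.

# Round-trip flight time over "a few feet" at an assumed 2 ns per foot (~0.5 c).
ns_per_foot = 2          # assumed electrical signal speed, ~0.5 c
distance_ft = 3          # "a few feet"
round_trip_ns = distance_ft * 2 * ns_per_foot   # request out, data back
print(f"Round trip over {distance_ft} ft: {round_trip_ns} ns")  # 12 ns
# Against early-90s fast-page DRAM at 60-80 ns that was noise; against modern
# DRAM in the 7-9 ns range, 12 ns of pure flight time dominates the access.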

Re:Speed limit (1)

0123456 (636235) | more than 4 years ago | (#33141848)

You can deal with relativistic propagation delays for secondary storage, but not for primary storage.

You could have a few gigabytes of cache on the motherboard :).

BTW, a foot is almost exactly one nanosecond at the speed of light, just another example of why it's superior to metric :).

Re:Speed limit (2, Funny)

Hijacked Public (999535) | more than 4 years ago | (#33142062)

almost exactly

These English units sound great, where do I sign up?

Re:Speed limit (3, Insightful)

vlm (69642) | more than 4 years ago | (#33142328)

Now admittedly electricity usually only travels at about 0.5c, IIRC, but I think she was giving the speed-of-light delay, not the speed-of-electrons delay.

Don't confuse propagation velocity of electromagnetic waves, which depends on dielectric constant and is around 0.8c in normal conductors, with drift velocity of electrons which is maybe a meter per hour.

http://en.wikipedia.org/wiki/Speed_of_electricity [wikipedia.org]

http://en.wikipedia.org/wiki/Drift_velocity [wikipedia.org]

http://en.wikipedia.org/wiki/Velocity_of_propagation [wikipedia.org]

Electrons really move slowly in metal. In a vacuum tube like a CRT, pretty quick.

Re:Speed limit (1)

hcdejong (561314) | more than 4 years ago | (#33142860)

Electrons really move slowly in metal. In a vacuum tube like a CRT, pretty quick.

Clearly, what we need is to harness this speed by building electronic elements that work by firing electrons across a vacuum.

Latency? (2, Interesting)

Diantre (1791892) | more than 4 years ago | (#33141824)

IANAEE (I Am Not An Electrical Engineer) Pardon my possible stupidity, but what was keeping us from putting the RAM a few feet from the CPU? The way I understand it, electrons don't move much slower than light. Of course you might lose current.

Re:Latency? (1)

BZ (40346) | more than 4 years ago | (#33142102)

> The way I understand it, electrons don't move much slower than light.

Electrons move slowly. ;)

Electrical signals (aka electromagnetic waves) in wires move at speeds that depend on the wire and the insulation around (and within, for coax) the wire. Speeds can be as high as 0.95c and as low as 0.4c with pretty typical wiring setups.

Re:Latency? (1)

bgt421 (1006945) | more than 4 years ago | (#33142454)

At GHz speeds, wire delay is pretty significant. Another part of it is electrical noise -- longer wires tend to act as transmission lines. I didn't RTFA, but I think the advantage of optical interconnects is that the throughput you get beats the loss of waiting for data. You can afford to wait 10 nsec if afterwards you can fill your whole 1 kbyte cache (not read it 64 bits per 4 nsec or whatever). Additionally, optical lines are immune to electrical noise (RF).
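A quick Python sketch of that throughput-versus-latency trade-off, using the illustrative numbers from the comment above (10 ns wait, a 1 kbyte block) and the 50 Gb/s figure from the summary; these are assumptions for illustration, not benchmark data.

# Pay a fixed latency up front, then stream a whole block over the link.
latency_ns = 10            # assumed wait before the first bit arrives
block_bytes = 1024         # fill a whole 1 kbyte block in one shot
link_gbps = 50             # 50 Gb/s = 50 bits per nanosecond

transfer_ns = block_bytes * 8 / link_gbps
total_ns = latency_ns + transfer_ns
effective_gbps = block_bytes * 8 / total_ns
print(f"Transfer: {transfer_ns:.0f} ns, total: {total_ns:.0f} ns, "
      f"effective: {effective_gbps:.0f} Gb/s")
# Large sequential blocks amortize the latency almost completely; tiny random
# reads (a single 64-bit word) would be dominated by the 10 ns wait instead.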

My Dream Computer (1)

Beardydog (716221) | more than 4 years ago | (#33141870)

My dream computer has always been a completely modular system, with every component accessible and hot-swappable. I always imagined it being about the size and shape of a normal computer, but covered in slots, with video cards, RAM, drives, etc. in the form of cartridges... pin lengths designed to make sure the right things contact in the right order...

While lamenting the poor graphical performance of my laptop, I investigated external graphics cards. While they aren't currently suitable for... well... anything, a nice 50 Gbps optical cable might make them a plausible option.

I would even prefer an external video card for my desktop computer (if performance matched the internal version). It could have its own case, cooling, and power brick, instead of murdering my internal power supply, heating my computer up, screaming like a jet engine, and possibly bursting into flames when my haphazard system design blocks vital airflow.

Re:My Dream Computer (1)

MarcQuadra (129430) | more than 4 years ago | (#33142308)

"I would even prefer an external video card for my desktop computer (if performance matched the internal version). It could have its own case, cooling, and powerbrick, instead of murdering my internal power supply, heating my computer up, screaming like a jet engine, and possible bursting into flames when my haphazard system design blocks vital airflow."

You're too much in the minority for a market to be built up for you. Haven't you realized that these days people want to buy -one box- with -as few cables as possible- and just replace it every four years? Noby except the nerdiest 5% want to go to the store to pick out a storage array, memory array, GPU, and 'interface'. People want iMacs.

That said, this technology could be indispensable for doing to CPUs and RAM what we've already done to storage in the datacenter (read: commoditize the crap out of it).

Re:My Dream Computer (1)

LWATCDR (28044) | more than 4 years ago | (#33142756)

"My dream computer has always been a completely modular system, with every component accessible and hot-swappable." it is called a mainframe.
Actually some of IBMs none mainframe big iron can do the same thing.
Some of their machines can even call for support on their own. They will contact IBM and a tech will show up and inform you that the RAM or drive is failing and swap the part. Mainframes even have hot swappable CPUs.

Finally! (1)

boristdog (133725) | more than 4 years ago | (#33141904)

Bigger computers!
What we've been working toward all these decades!

Re:Finally! (1)

derGoldstein (1494129) | more than 4 years ago | (#33142056)

Modular computers. Easier upgrade paths. More re-use/re-sell value for external components. If you want to buy an iMac in which every component is epoxied together, that's your choice.

Re:Finally! (1)

H0p313ss (811249) | more than 4 years ago | (#33142252)

Modular computers. Easier upgrade paths.

This hadn't occurred to me, but now that you mention it I'm reminded of a friend's failure to install SIMMs correctly on an old 486-era desktop. He actually managed to damage the motherboard since he didn't notice the retaining clips and just mashed them in.

A plug-and-play architecture that is so modular and simple that even the noobiest of noobs can upgrade might have some legs. Right now upgrading is such a bitch that I don't even bother anymore; I just get kick-ass machines, replace them biannually, and ask relatives if they want my cast-offs.

Re:Finally! (1)

jedidiah (1196) | more than 4 years ago | (#33142870)

Resale value is always going to be inherently limited because most people don't want stuff that's old or has been abused by someone else.

Computer components are less reusable or resell-able not so much because of shifting connector formats but because stuff gets obsolete very quickly.

Sub-500G 3.5" drives seem positively quaint when Target is selling 750G 2.5" USB drives.

The fact that some GPU doesn't support some feature released in the last 3 years is going to be FAR more of an issue than what kind of card it is.

The trailing edge new stuff is always going to be much more desirable than the moldy oldies and is going to drive its value down to just about zero.

Two things... (2, Interesting)

MarcQuadra (129430) | more than 4 years ago | (#33141920)

1. The Internet already does that. How much of the experience today is processed partly in a faraway datacenter? I know that even users like me use the Internet as a method to pull things away from each other so each part lives where it makes sense. I have a powerful desktop at home that I RDP into from whatever portable device I happen to be toting. I don't worry about my laptop getting stolen, the experience is pretty fast (faster than a netbook's local CPU, for sure), and I get to mix-and-match my portable hardware.

2. This is going to have much more use at a datacenter than it will in a server closet or a home. I can already fit more RAM, CPU, and storage than I need in a typical desktop. Most small businesses run fine on one or two servers. Datacenters, on the other hand, could really take advantage of commoditizing RAM and CPU, like they have with SANs in storage. No more 'host box/VM', it's time to take the next step and pool RAM and CPUs, and provision them to VMs through some sort of software/hardware control fabric. I think Cisco already knows this, which is why they're moving to building servers.

Imagine the datacenter of the future:

Instead of discrete PC servers with multiple VM guests each and CAT-6 LAN plugs, you have a pool of RAM, a pool of storage, and a pool of CPUs controlled by some sort of control interface. Instead of plugging the NIC on the back of it into your network equipment, the control interface is -built into- the network core, wired right into the backplane of your LAN. Extra CPU power that's not actually being used will be put to work by the control fabric compressing and deduplicating stuff in storage and RAM. The control interface will 'learn' that some types of data are better served off of the faster set of drives, or in unused RAM allocated as storage. 'Cold' data would slowly migrate to cheap, redundant arrays.

Guest systems will change, too. No longer will VMs do their own disk caching. It makes sense for a regular server to put all its own RAM to use, but on a system like this, it makes sense to let the 'host fabric' handle the intelligent stuff. Guest operating systems will likely evolve to speak directly to the 'host' VFS to avoid I/O penalties, and to communicate needs for more or less resources (why should a VM that never uses more than 1GB RAM and averages two threads always be allocated 4GB and eight threads?).

Latency, in length (1)

by (1706743) (1706744) | more than 4 years ago | (#33142528)

For what it's worth:

c / (3 GHz) ≈ 9.993 cm

Perhaps half of this is really the characteristic length (two-way communication and all). I don't really know how RAM works, so with DDR it may even be half of that length, which puts it at about 2.5 cm / 1 in. (roughly). I leave it for someone else to tell me why these numbers mean absolutely nothing (seriously, I'm not too proud to learn something here).
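A quick Python check of those figures (assuming vacuum light speed; the refractive index of fiber or a PCB would shorten the distances further):

# How far light travels in one cycle at 3 GHz, plus the halved distances for a
# round trip and for a DDR half-cycle, as estimated in the comment above.
C = 299_792_458  # m/s
f_hz = 3e9

cycle_cm = C / f_hz * 100
print(f"One 3 GHz cycle:                 {cycle_cm:.3f} cm")   # ~9.993 cm
print(f"Round trip within one cycle:     {cycle_cm / 2:.2f} cm")
print(f"Within half a (DDR) cycle:       {cycle_cm / 4:.2f} cm")  # ~2.5 cm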