
New Material Sets Stage For All-Optical Computing

timothy posted more than 4 years ago | from the good-ping-times dept.

Science 53

An anonymous reader writes with this excerpt from the International Business Times: "Researchers have made a new material that can be used to guide waves of light, a breakthrough that could lead to ultra-fast computing. Georgia Tech scientists are using specially designed organic dyes that can process and redirect light without the need to be converted to electricity first. ... 'For this class of molecules, we can with a high degree of reliability predict where the molecules will have both large optical nonlinearities and low two-photon absorption,' said [Georgia Tech School of Chemistry professor Seth] Marder." According to the article, using an optical router could lead to transmission speeds as high as 2,000 gigabits per second, five times faster than current technology.


Yeah ... (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#31369402)

Leading to faster "Frist Psot" messages !!

Re:Yeah ... (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#31369434)

Zing!

Re:Yeah ... (0, Offtopic)

garg0yle (208225) | more than 4 years ago | (#31369868)

You mean "Bazinga!"

Didn't see that coming (1)

nospam007 (722110) | more than 4 years ago | (#31369406)

"2000 gigabits per second"

GigaBITs? Wow!

Re:Didn't see that coming (2, Insightful)

Serilleous (1400333) | more than 4 years ago | (#31369474)

Yes, it's true: transmission speeds and routing capacity are usually measured in bits rather than megs or KB. (This probably has something to do with the whole KB/MB-doesn't-follow-powers-of-ten thing.)
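
For a rough sense of the conversion, here's a minimal C sketch using the 2,000 Gb/s figure from the summary (the usual 8 bits per byte assumed):

    #include <stdio.h>

    int main(void) {
        double gigabits_per_sec = 2000.0;                  /* figure quoted in the summary */
        double gigabytes_per_sec = gigabits_per_sec / 8.0; /* 8 bits per byte */
        printf("%.0f Gb/s = %.0f GB/s\n", gigabits_per_sec, gigabytes_per_sec);
        return 0;
    }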

Re:Didn't see that coming (1)

Lord Bitman (95493) | more than 4 years ago | (#31369510)

so you're saying not only are they bits, but they're power-of-ten giga-?

Re:Didn't see that coming (0)

Anonymous Coward | more than 4 years ago | (#31369526)

so you're saying not only are they bits, but they're power-of-ten giga-?

Much like the turtles, it's bits all the way down.

Re:Didn't see that coming (1)

Mashdar (876825) | more than 4 years ago | (#31369956)

over niNE ZEROS! Unbelievable!

Re:Didn't see that coming (2, Insightful)

Darth Sdlavrot (1614139) | more than 4 years ago | (#31369688)

Probably has more to do with the fact that historically some hardware had byte and word sizes that weren't multiples of 8.

E.g. see http://en.wikipedia.org/wiki/36-bit [wikipedia.org]

Re:Didn't see that coming (1)

aquila.solo (1231830) | more than 4 years ago | (#31374852)

No, it has more to do with rates of serial transmission only making sense in bits/sec. The size of the shift register on the other end (bytes, words, whatever) doesn't affect the speed of the transmission system, just how often you have to load/unload it. The reason bytes are used when referring to memory is that a byte is usually the smallest addressable unit (there are exceptions).
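
As a toy illustration of why the line itself only sees bits, here's a hypothetical C sketch of a transmit shift register: one bit goes out per clock tick, no matter how wide the word that was loaded into it:

    #include <stdint.h>
    #include <stdio.h>

    /* Toy serializer: shift one byte out MSB-first, one bit per "clock tick". */
    static void shift_out(uint8_t byte) {
        for (int bit = 7; bit >= 0; bit--)
            printf("%d", (byte >> bit) & 1);   /* the wire only ever sees bits */
        printf("\n");
    }

    int main(void) {
        shift_out(0xA5);   /* prints 10100101 */
        return 0;
    }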

of course (1)

oxide7 (1013325) | more than 4 years ago | (#31369568)

Sure -- it's for routers, a basic component of data transmission, and a bit is the most fundamental piece of data. We typically think in "bytes", but that's really an OS abstraction, right? At the lowest level of the network stack it makes sense to talk in bits -- anything above that can interpret them as it will (as a byte, word, long, etc.)

Re:OS abstraction wtf (1, Informative)

Anonymous Coward | more than 4 years ago | (#31370172)

Where did you learn it was an OS abstraction? That's just ... sigh.

Bytes are the smallest addressable unit of memory the CPU can handle. It doesn't matter if the memory controller only does cache line fills or whatever, memory addresses are in units of bytes.

Re:OS abstraction wtf (0)

Anonymous Coward | more than 4 years ago | (#31371828)

Bytes are the smallest addressable unit of memory the CPU can handle. It doesn't matter if the memory controller only does cache line fills or whatever, memory addresses are in units of bytes.

But here we are not talking about a normal x86 CPU architecture, but about routers that transport various amounts of bits...

Re:OS abstraction wtf (1)

tibman (623933) | more than 4 years ago | (#31373590)

Just a guess, but I'd imagine packets are measured not in bits but in bytes. That's why hex is often used, right? Unless the router is using an exotic protocol, everything will be measured in bytes. Data structures are almost all multiples of bytes: sizeof(bool) returns 1, and malloc works with bytes, not bits. I'm pretty sure you'll have a hard time finding anything that works directly in bits and not bytes, routers included.
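
(That claim is easy to check in C: sizeof is defined in whole bytes and can never go below 1, and malloc's argument is a byte count. A quick sketch, with results typical of mainstream platforms:)

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        printf("sizeof(bool) = %zu\n", sizeof(bool));  /* 1 on typical platforms */
        printf("sizeof(char) = %zu\n", sizeof(char));  /* always 1, by definition */
        char *buf = malloc(64);                        /* argument is a count of bytes */
        free(buf);
        return 0;
    }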

Re:Didn't see that coming (1)

Mashdar (876825) | more than 4 years ago | (#31369946)

Oddly, in the future (when this technology is available):
2000 / 1000 = 5

Not to mention that if 1000Gb/s connections are widely commercially available today, I have to assume that faster connections are available for specialized purposes.

Re:Didn't see that coming (1)

aquila.solo (1231830) | more than 4 years ago | (#31374936)

Commercially available connections are measured in the tens of Gb/s. It's called gigabit ethernet, not terabit ethernet.

Re:Didn't see that coming (2, Funny)

belthize (990217) | more than 4 years ago | (#31370054)

Gah, I hate these lame random units... gigabits/second.

Could somebody translate that to the more standard Libraries of Congress per fortnight, please?

Re:Didn't see that coming (1)

dakameleon (1126377) | more than 4 years ago | (#31370846)

I'd say... approximately 27503 LoC/Fortnight
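
For anyone checking the math, a quick C sketch (the ~10 TB per Library of Congress is an assumed ballpark; the exact size is anyone's guess, which is why the figure above comes out slightly different):

    #include <stdio.h>

    int main(void) {
        double bits_per_sec = 2000e9;           /* 2,000 Gb/s from the summary */
        double secs_per_fortnight = 14.0 * 24 * 3600;
        double bytes_per_loc = 10e12;           /* assumed: ~10 TB per Library of Congress */
        double loc_per_fortnight =
            bits_per_sec / 8.0 * secs_per_fortnight / bytes_per_loc;
        printf("~%.0f LoC/fortnight\n", loc_per_fortnight);  /* ~30240 */
        return 0;
    }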

Re:Didn't see that coming (1)

WindBourne (631190) | more than 4 years ago | (#31380628)

I am guessing that you did not see it coming, or going.

But (0)

Schiphol (1168667) | more than 4 years ago | (#31369410)

If the best expected performance of the new technology is just 5 times better than current technology, is it really worth pursuing? Current technology is current, as in real. Best expected performance needs to be divided by a correcting factor, which is unlikely to be much lower than 5.

Re:But (1)

derGoldstein (1494129) | more than 4 years ago | (#31369556)

Yes. This keeps the data in "optical form", which has additional ramifications. Once (if?) this becomes practical, additional uses will crop up.

Re:But (2, Insightful)

Anonymous Coward | more than 4 years ago | (#31369614)

Exactly, like lower latency. The conversion into an electrical signal, and then back to optical probably adds a bit of latency. I'm no expert, but I'd imagine that data going to and from a typical destination on the internet goes through several of these conversions adding (in most cases negligible) latency. If most of the routers on the net were all optical, I'd imagine we'd have an internet with imperceptible latency most of the time. That could lead to things as simple as lag-free gaming, real-time video conferencing, and maybe in the future a very (sur)real shared virtual reality, all done across large physical distances.

Re:But (1, Insightful)

Anonymous Coward | more than 4 years ago | (#31371450)

It takes light about 64 ms to go from NYC to Beijing, as the crow flies. So a round trip of 128 ms is borderline unplayable in first-person shooters.
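
Back-of-the-envelope, with light slowed to roughly c/1.47 in silica fiber (the distance is a rough great-circle figure; real routes are longer, which is where numbers like the 64 ms above come from):

    #include <stdio.h>

    int main(void) {
        double distance_km = 11000.0;   /* rough NYC-Beijing great-circle distance */
        double c_km_per_s  = 299792.0;  /* speed of light in vacuum */
        double n_fiber     = 1.47;      /* typical refractive index of silica fiber */
        double one_way_ms  = distance_km / (c_km_per_s / n_fiber) * 1000.0;
        printf("one way: ~%.0f ms, round trip: ~%.0f ms\n",
               one_way_ms, 2.0 * one_way_ms);   /* ~54 ms / ~108 ms */
        return 0;
    }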

Re:But (0)

Anonymous Coward | more than 4 years ago | (#31374296)

I was thinking of typical usage scenarios, where people would mostly stay within the same continent. Coast to coast in the US looks like it'd be about 32 ms round trip. Not imperceptible, but still very tolerable. That's a best-case scenario, I suppose, but I'd imagine the technology could cut the ping times we have now in half.

Re:But (0)

Anonymous Coward | more than 4 years ago | (#31369654)

If it uses considerably less power and leaks less of it, for about the same manufacturing price, then yes.

Re:But (4, Insightful)

Hurricane78 (562437) | more than 4 years ago | (#31369716)

Of course, because the new technology is also getting better, and usually at a much quicker rate than the existing one, since the existing one is already near its limits.

Often a new technology is still worse than the old one because of its experimental state, but it's worth pursuing anyway because of the huge potential.

The same is true for optical circuits.

Re:But (1)

Darth Sdlavrot (1614139) | more than 4 years ago | (#31369722)

If the best expected performance of the new technology is just 5 times better than current technology, is it really worth pursuing it?

Moore's law says 2× in two years, but some people think we're running up against the limits of silicon.

This gives us 5× in one shot.

You ask is it worth it? Are you suggesting that we not engage in pure science any more because it might not pan out in the long run? Only time will tell if it was worth it.

Sure (1)

oxide7 (1013325) | more than 4 years ago | (#31369802)

I would consider 3 factors: "need", "current capacity", and "future capacity". "Need" is always increasing, at a rate that overtakes "current capacity", so "future capacity" always needs to be developed. If they are predicting 5x faster performance at this early stage, it surely holds more. When Blu-ray was in its infancy, many people said "why would I ever need 50 GB, that's bigger than my HD" -- but lo and behold, now 50 GB doesn't seem like enough.

Re:But (4, Insightful)

cowscows (103644) | more than 4 years ago | (#31370228)

The first automobiles could easily be outrun by a horse. I guess we're fortunate that no one noticed that or else they would've all agreed that automobile technology was a waste of time and should be abandoned.

Re:But (1)

aquila.solo (1231830) | more than 4 years ago | (#31374964)

Nice car analogy!

Faster FAster FASTER! (4, Funny)

Anonymous Coward | more than 4 years ago | (#31369418)

"five times faster than current technology." Reminds me of being a teenager and discovering lotion...

Have you seen the pic? (2, Funny)

Anonymous Coward | more than 4 years ago | (#31369494)

I could not take my eyes off that ad-like pic [ibtimes.com].

Must...read...article.

Similar stuff from IBM (2, Interesting)

distantbody (852269) | more than 4 years ago | (#31369496)

EETimes has "IBM Research claimed a keystone achievement in on-chip optical communications Wednesday (March 3), saying its 40-gigabit-per-second (Gbps) germanium avalanche photodetector completes what it calls the nanophotonic toolkit." (link) [eetimes.com] (A few days before announcing 2,500 layoffs, hmmm...)

...And the same news from Semiconductor Intl [semiconductor.net] .

Re:Similar stuff from IBM (1)

game kid (805301) | more than 4 years ago | (#31369636)

(A few days before announcing 2,500 layoffs, hmmm...)

Simple. IBM no longer needed help because it invented awesome.

Optocouplers (3, Interesting)

derGoldstein (1494129) | more than 4 years ago | (#31369538)

I'd like to benchmark this against graphene [technologyreview.com]. Since optical signals would have to be converted to electrical form first, (I think) the bottleneck would be the optoelectronics.

Re:Optocouplers (3, Informative)

Hurricane78 (562437) | more than 4 years ago | (#31369730)

optoelectronics

If they don’t have to be converted to electricity first, then where are the electronics in this?

A better name is “photonics”. :)

Re:Optocouplers (1)

derGoldstein (1494129) | more than 4 years ago | (#31369852)

More verbose description --
Given that the signals arrive in optical form, you (will) have two choices:
1) Convert them into electrical signals, using optoelectronics, process the data, and then (sometimes) convert the signal back to optical.
2) Keep the signals in optical form and process them using these new materials.

Just because the first option has more steps doesn't mean it's slower. If you have very fast converters, and then very fast transistors (like the graphene-based ones linked above), you may get an overall system that's faster than the purely optical one.
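
As a toy latency budget under made-up numbers (every figure below is an illustrative placeholder, not a measured value):

    #include <stdio.h>

    int main(void) {
        /* All numbers are illustrative placeholders, in nanoseconds. */
        double oe_conversion    = 5.0;   /* optical -> electrical converter */
        double electronic_stage = 20.0;  /* very fast electronic processing */
        double eo_conversion    = 5.0;   /* electrical -> optical converter */
        double optical_stage    = 40.0;  /* hypothetical slower all-optical processing */

        double oeo_path = oe_conversion + electronic_stage + eo_conversion;
        printf("O-E-O path: %.0f ns vs all-optical path: %.0f ns\n",
               oeo_path, optical_stage);   /* more steps can still win */
        return 0;
    }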

Re:Optocouplers (1)

sam0737 (648914) | more than 4 years ago | (#31369970)

optoelectronics

If they don’t have to be converted to electricity first, then where are the electronics in this?

A better name is “photonics”. :)

Well, that's until your computer processes everything in light, including the CPU, motherboard, display card, and network interface. I could see the future AC transformer being a giant LED light bulb, with every component inside being a light path... to the extent that the light is processed and redirected straight to the monitor panel as a lit pixel. That would be very awesome, but I don't see it coming very soon :P. Hence we will be stuck with electronics somewhere for a while.

Re:Optocouplers (1)

jlebrech (810586) | more than 4 years ago | (#31369830)

Even better, make the optoelectronics in graphene too.

I would love a quick turnaround on technologies such as this, even if just for the novelty factor at first.

Even something as powerful as a 486 would suffice, although the MHz would probably not be an issue.

The problem isn't backbone, it's the last mile (-1, Offtopic)

Jaydee23 (1741316) | more than 4 years ago | (#31369798)

There is lots of bandwidth around; the problem is getting it to people. Until the last-mile problem is tackled in a more robust manner, the backbone could have infinite speed and I'd hardly be able to tell the difference. (Northern Scotland: office 0.5 Mbit/s, home 4 Mbit/s.)

Go Jackets! (1)

gtarget (1360439) | more than 4 years ago | (#31370120)

+1 for Georgia Tech, twice in one week (Spanish botnet taken down)

Re:Go Jackets! (0)

Anonymous Coward | more than 4 years ago | (#31373280)

Bust their ass

Is it just me? (1)

dorpus (636554) | more than 4 years ago | (#31370612)

I've been reading headlines for the past 20 years that claim "breakthroughs" in all-photonic computing. Where are the all-photonic routers?

Re:Is it just me? (1)

Hazelfield (1557317) | more than 4 years ago | (#31373244)

Maybe you just need a lot of breakthroughs because there are lots of problems to break through?

Power & Heat (2, Interesting)

CraigoFL (201165) | more than 4 years ago | (#31370924)

It's funny... when the tech industry first started talking about switching to light instead of electricity for chip internals, the biggest motivating factor was speed: how much faster (usually measured in "clock" speed, even) can we make a chip if we use photons instead of electrons? These days, I'm more interested in other factors:
  • How much electricity (per unit of performance) does it use?
  • How much heat does it put out?
  • How much smaller can we make the chip and its supporting components?

This is a result of the highly-clustered, highly-mobile computing age we live in today. A single fast chip isn't as applicable any more. Give us tiny and low-power.

The lights SHOULD dim when you switch it on! (1)

mac1235 (962716) | more than 4 years ago | (#31373992)

I'm a gamer you insensitive clod!

Probably not much to see here, at least yet (3, Interesting)

Curmudgeon420 (1092149) | more than 4 years ago | (#31371078)

The big issues in designing optical switches are their switching time and minimum switch pulse width. My group and I built what is probably the first all-optical computer, in the early '90s. We used lithium niobate switches, which limited the machine's clock frequency to 100 MHz. It's hard to find the original article, which is in the Feb. 18 issue of Science Express; subscription required, unfortunately. In that article the authors say nothing about switching time or minimum switch pulse. It looks like a good piece of research, but eons away from anything practical.
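
The relationship being pointed at is roughly f_max = 1 / t_switch. A minimal C sketch, with the 10 ns switching time inferred from the 100 MHz clock mentioned above rather than taken from any datasheet:

    #include <stdio.h>

    int main(void) {
        double switch_time_ns = 10.0;                    /* inferred from the 100 MHz clock above */
        double max_clock_mhz  = 1000.0 / switch_time_ns; /* f = 1/t, converting ns to MHz */
        printf("switch time %.0f ns -> max clock ~%.0f MHz\n",
               switch_time_ns, max_clock_mhz);
        return 0;
    }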

Heat and tempest security (1)

bkr1_2k (237627) | more than 4 years ago | (#31371656)

Obviously the ramifications for emissions security (TEMPEST, though that's a simplification) are huge, but what is this likely to do for heat and component size? I can see this being a great opportunity for a lot of military applications, even if the speed is only a few times better than what we have now.

Fiber (1)

The Wild Norseman (1404891) | more than 4 years ago | (#31372156)

It's my understanding that fiber-optic cable has speeds that are limited by the electronic conversions on either end. Is that the issue here as well? How well could this (for lack of a better term) internal light mesh with the external (fiber-optic) light?

More Academia Garbage (0)

Anonymous Coward | more than 4 years ago | (#31372168)

Hmmm, more garbage from academia. Let's see: organic dyes are simply not photostable enough to withstand the high peak-power pulses needed to cause these nonlinear effects. I should know; I spent 4 years of grad school working on a similar optical switching system.

Cloaking Device? (1)

BJ_Covert_Action (1499847) | more than 4 years ago | (#31372658)

So when I first read the headline and article, my initial question was, "Could similar dyes be used to route light around an object, hence creating a cloaking device?"

Unfortunately, the article didn't hint at this possibility at all. However, I did pick this up:

The research was funded by the National Science Foundation (NSF), the Defense Advanced Research Projects Agency (DARPA) and the Office of Naval Research (ONR).

So DARPA's helping fund it eh? In answer to my own question then, "Yes!"

Leave it to DARPA to fund the development of a cloaking device and play it off as a computer breakthrough. I, for one, am stoked.

Smoke and mirrors. (1)

Katchu (1036242) | more than 4 years ago | (#31375134)

Smoke and mirrors: A new material that can be used to guide waves of light