
Squeezing More Bandwidth Out of Fiber

timothy posted more than 3 years ago | from the wring-wring dept.

Networking 185

EigenHombre writes "The New York Times reports on efforts underway to squeeze more bandwidth out of the fiber-optic connections that form the backbone of the Internet. With traffic doubling every two years, current networks are getting close to saturation. The new technology from Alcatel-Lucent uses the polarization and phase of light (in addition to intensity) to double or quadruple current speeds."


185 comments


Hmmm... (1)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#33853150)

Well, we'll just have to hope that their competitors will implement the technology; because the odds of Alcatel doing a proper job are pretty much zero....

Re:Hmmm... (3, Insightful)

hedwards (940851) | more than 3 years ago | (#33853170)

Or figure out a way of getting cyber criminals off the net. The problem for quite some time has been that they'll suck up as much bandwidth as they can get, and since they don't pay for it, there's little reason to actually throttle back their operations.

Re:Hmmm... (4, Funny)

nacturation (646836) | more than 3 years ago | (#33853184)

Or figure out a way of getting cyber criminals off the net. The problem for quite some time has been that they'll suck up as much bandwidth as they can get, and since they don't pay for it, there's little reason to actually throttle back their operations.

Shut down The Pirate Bay? :)

Re:Hmmm... (5, Interesting)

garnser (1592501) | more than 3 years ago | (#33853538)

Actually, talking with one of the major Tier 1 providers: they only saw a 30% drop in total throughput over the first 24 hours after shutting down TPB, and it took about a month for it to recover. YouTube is probably a better candidate if we want to save some bandwidth http://www.wired.com/magazine/2010/08/ff_webrip/all/1 [wired.com] ;)

Re:Hmmm... (3, Insightful)

morgan_greywolf (835522) | more than 3 years ago | (#33853668)

Hey! That's a good idea! Let's just shut down the main reasons people are using high-speed internet technologies: streaming audio and video. And shutting down BitTorrent obviously wouldn't hurt.

Then we'll party like it's 1997!

Re:Hmmm... (0)

bsDaemon (87307) | more than 3 years ago | (#33853704)

I'm sure you're being sarcastic, but I don't actually see the problem with that.

Re:Hmmm... (2, Insightful)

morgan_greywolf (835522) | more than 3 years ago | (#33853776)

Aside from a total economic disaster, that is.

Re:Hmmm... (4, Insightful)

xda (1171531) | more than 3 years ago | (#33853832)

Instead of letting usage drive innovation we should just use less? That is the stupidest thing I have heard in a while, sorry.

Let's stop and think what people are downloading via TPB... music, movies, media in general. If your gripe is that the legality of these file transfers is in question, let's assume that in the near future everyone is downloading content legitimately. What then?

You dumbasses are taking an interesting article about cutting-edge network technology and ruining it with your stupid opinions about things that don't matter. The music and video are going to keep coming, legal or not.

Re:Hmmm... (0, Troll)

bsDaemon (87307) | more than 3 years ago | (#33853944)

I don't care about the legality. I just don't think that the content has added much of anything actually useful. Combine that with all the additional JavaScript libraries, cross-site scripting, remote includes and whatnot, and the web just keeps getting more cluttered and slower. Increasing the bandwidth would just encourage more stupid shit to be put onto "the cloud" and cause the problem to persist.

Not to mention the fact that there is so much clutter now that finding actual information has become a chore. I'm about halfway ready to just suck it up and buy my own JSTOR account to cut out the bullshit.

Re:Hmmm... (0)

Anonymous Coward | more than 3 years ago | (#33854420)

I don't care about the legality. I just don't think that the content has added much of anything actually useful.

Why does it need to be useful? Maybe we should ban country drives with the family on Sunday because they aren't useful. Or stop selling ice cream because it has no nutritional value and therefore isn't useful. The problem lies in the mentality of the old dial-up companies that would install one modem for every 10 users. Then you ended up with days where you couldn't connect at all because the modem center couldn't handle the calls. The obvious solution would have been to add modems, but that would cost the ISPs money, causing them to have less of a profit (although they still made a profit), and we wouldn't want that, would we?

Re:Hmmm... (0)

Anonymous Coward | more than 3 years ago | (#33854542)

B.S. Sites like TPB have added greatly to the spread of a world culture and allowed people to share cultural works (yes, for as sorry as some modern movies and music are, they are still part of our culture) that otherwise might be missed. When Internet music piracy hit the tubes, all of a sudden artists had a greater world audience; the same thing happened with movies and games. In fact, I'll argue that piracy was one of the chief influences that brought a lot of people to the Internet. Like the gold rush, people flocked to the Internet in the early 2000s to get free music and movies. It's funny how quickly we forget that now that commercialization and the social-network pastime are taking over.

Funny how no one includes the thousands of commercial ads or the scripts used by Akamai, etc. to monitor our web usage. Ever look at how many bytes of ads and monitoring stuff you download on every site visit? If anything, the commercialization of the web is what's killing it and our bandwidth. Companies want to lessen competition and create infrastructure that will support their businesses, not the public interest. That's alright, that is what a company is supposed to do, but this is why it shouldn't be up to companies to decide the rules for the web.

Re:Hmmm... (1)

Pseudonym Authority (1591027) | more than 3 years ago | (#33854646)

I just don't think that the content has added much of anything actually useful.

But you see, no one cares about your opinion. Data usage will continue to go up, and you can't do shit about it.

But suppose you could, how would you enforce it? Only allow certain people on the internet? Require prior approval for posting? Detail your plan please.

Re:Hmmm... (0)

Anonymous Coward | more than 3 years ago | (#33854654)

Finding actual information on the internet has always been a chore... there was a minute there where there weren't too many BS pages to sift through in a typical Google search... but that was just a minute, after you still had to check thirty-seven different crawlers and bots to find anything you did not know the address to.

Re:Hmmm... (0)

Anonymous Coward | more than 3 years ago | (#33854874)

Yeah, while you're at it, let's cut HTTP off. All we need is NNTP! Back to basics!

Re:Hmmm... (2, Funny)

espiesp (1251084) | more than 3 years ago | (#33854924)

I just don't think that the content has added much of anything actually useful.

Riiiiight. And this post is SOOO useful. Time to get off your high horse and shovel some manure.

Re:Hmmm... (2, Insightful)

flappinbooger (574405) | more than 3 years ago | (#33854018)

Video on demand is exhibited nicely now by the likes of Roku, Netflix and Hulu, as well as, in a lesser way, the major networks who stream some programming online.

Anyone who doesn't see VOD as the future is daft; the bandwidth must increase, and broadband internet must get to the rural areas of the US.

Re:Hmmm... (0)

Anonymous Coward | more than 3 years ago | (#33853976)

I also think shutting down YouTube and TPB would be a good thing for the Internet. But where would it stop? Netflix, because people stream 12 hours of content per day? Bandwidth-heavy MMOs/multi-player video games? It's basically ban all or do nothing.

Re:Hmmm... (0)

Anonymous Coward | more than 3 years ago | (#33853590)

ok...lets shut down the things that are making the internet and technology grow...that is a smart idea. While we are at it why don't we go back to using horses because there are too many cars in the world.

Re:Hmmm... (-1, Troll)

Anonymous Coward | more than 3 years ago | (#33853946)

ok...lets shut down the things that are making the internet and technology grow...that is a smart idea. While we are at it why don't we go back to using horses because there are too many cars in the world.

Right, because The Pirate Bay makes the internet grow. Wow, such insight. Have you considered suicide?

Re:Hmmm... (1)

nacturation (646836) | more than 3 years ago | (#33854182)

ok...lets shut down the things that are making the internet and technology grow...that is a smart idea. While we are at it why don't we go back to using horses because there are too many cars in the world.

Dear obvious troll: I never suggested any such thing. My flippant comment about The Pirate Bay was a joke (I even included a smiley so dullards like you wouldn't miss it) and I'm not about to explain the joke for slow minds such as yours. I can just picture you rolling your eyes and foaming at the mouth as you typed that. Try not to take life so seriously next time, okay?

There are many legitimate, fully legal uses of bandwidth that cause various internet technologies to grow; we don't need the internet equivalent of broken-windows economics to spur growth. I know it must be terribly difficult for you to think reasonably about such matters, but do try. And put down the crack pipe. I know you're doing it only because of your desire to spur economic growth in the anti-narcotics industry, but quitting is the best thing for your health and sanity.

Re:Hmmm... (0)

Anonymous Coward | more than 3 years ago | (#33854508)

tl;dr: WHOOSH.

Re:Hmmm... (0, Troll)

John Hasler (414242) | more than 3 years ago | (#33853238)

> Or figure out a way of getting cyber criminals off the net.

Be serious. There's no hope of getting rid of Microsoft Windows in the foreseeable future.

Re:Hmmm... (2, Funny)

morgan_greywolf (835522) | more than 3 years ago | (#33853696)

A couple of GBU-31s and an F-22 Raptor ought to fix that...

Re:Hmmm... (0, Redundant)

Steeltoe (98226) | more than 3 years ago | (#33853814)

You may be right.. I just installed Windows 7 on a server of mine..

Re:Hmmm... (3, Funny)

T Murphy (1054674) | more than 3 years ago | (#33853252)

So when people go online, the ISP should pop up a EULA saying you can only use the internet for legal activity. Problem solved.

Re:Hmmm... (4, Funny)

John Hasler (414242) | more than 3 years ago | (#33853294)

Or just get rid of network neutrality so that ISPs can filter packets with the evil bit set.

Re:Hmmm... (1)

peragrin (659227) | more than 3 years ago | (#33854098)

With the evil bit set, does that mean no Windows machines can use the internet?

that would shut down 99.99999% of the botnets out there over night.

Re:Hmmm... (1)

eugene2k (1213062) | more than 3 years ago | (#33854940)

That would make microsoft.com unreachable.

Dual Pol Coherent Systems have already been done.. (4, Interesting)

Myrv (305480) | more than 3 years ago | (#33855044)

Well, we'll just have to hope that their competitors will implement the technology

Already have. Actually Alcatel is pretty much playing catchup with all this. Nortel introduced a 40Gb/s dual polarization coherent terminal 4 years ago (despite many people, including Alcatel, saying it wasn't possible). Furthermore Nortel Optical (now Ciena) already has a 100Gb/s version available. Alcatel is pretty late to this game.

Ahh, the old Star Trek maneuver (1, Funny)

Anonymous Coward | more than 3 years ago | (#33853156)

Reverse the polarity! Phase variance!

what about color (0)

Anonymous Coward | more than 3 years ago | (#33853174)

add color, use 256 identifiable colors then send those, send bytes instead of bits.

Re:what about color (1, Funny)

Anonymous Coward | more than 3 years ago | (#33853300)

Stop the presses! An anonymous poster on the internet came up with an idea that has baffled teams of physicists for decades!

Re:what about color (4, Informative)

JDeane (1402533) | more than 3 years ago | (#33853556)

In a way they already do this: the different wavelengths are used in something called multiplexing; they can cram a lot of completely different signals down the same pipe at the same time with this technique.

http://en.wikipedia.org/wiki/Multiplexing [wikipedia.org]

That link probably explains it much better than I can.

Also, they do not send bits, and they send more than bytes: they send packets, or entire frames.

Re:what about color (5, Informative)

Soft (266615) | more than 3 years ago | (#33853636)

add color, use 256 identifiable colors then send those, send bytes instead of bits.

Already being done; TFA mentions this (for "wavelength" read "color", as the light that is being used is in the infrared).

What limits the number of wavelengths in a single fiber is the bandwidth of the amplifiers: optical fibers slightly absorb light, and current long-haul links require reamplification ca. every 100km. This is done using EDFAs (erbium-doped fiber amplifiers), which work for wavelengths in the 1530-1560nm range (the "C band"; visible light is in the 400-800nm range). Adding wavelengths outside this band would require redeploying new amplifiers along the fiber, which would be expensive; besides, other types of amplifiers aren't quite as mature as EDFAs, and you would need more of them because the fiber attenuates more outside this window.

You could also try to squeeze these wavelengths tighter, to put more of them within the C band, but they are already packed at 0.4-nm intervals, corresponding to a 50-GHz frequency interval, which holds a 10- or 12.5-Gbit/s signal with little margin, as long as conventional optical techniques are used--that is, switching the light on or off for each bit.
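As a quick sanity check on the spacing quoted above, the 0.4-nm grid maps to 50 GHz via the small-interval relation Δf ≈ c·Δλ/λ². A minimal sketch (the constants are standard physics, not taken from TFA):

```python
# Convert the 0.4 nm channel spacing into its frequency equivalent
# around a 1550 nm carrier: delta_f = c * delta_lambda / lambda^2.
c = 2.998e8        # speed of light, m/s
lam = 1550e-9      # carrier wavelength, m (middle of the C band)
dlam = 0.4e-9      # channel spacing, m
df = c * dlam / lam**2
print(round(df / 1e9, 1))  # ~49.9 GHz, i.e. the standard 50 GHz grid
```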

There remains the possibility of using smarter ways of modulating the light, using its phase and polarization, to pack e.g. 100Gbit/s in a 50-GHz bandwidth; and that's what Alcatel are doing. They are not the only ones, of course; the field of "coherent optical transmissions" has been a hot topic in the past couple of years. Now commercial solutions are getting into the field.

Note that these techniques are already widely used in radio and DSL systems, and had been proposed for optical systems back in the 1980s, before EDFAs essentially solved the attenuation problem. Now, however, we have again reached a bandwidth limit and have to turn back to coherent transmission. In the 1980s, that meant complicated hardware at the receivers, impossible to deploy outside the labs; now all the complicated stuff can be done with DSP in software. Radio and DSL already do this, but only at a few tens of Gbit/s; doing it at 100Gbit/s for optics is more challenging, and is just now becoming possible.
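To make the 100 Gbit/s-in-50-GHz point concrete, here is a back-of-the-envelope sketch of dual-polarization QPSK; the 28-Gbaud symbol rate is an assumed typical figure, not a number from TFA:

```python
# Dual-polarization QPSK: 2 bits/symbol per polarization, 2 polarizations.
baud = 28e9            # assumed symbol rate (covers FEC/framing overhead)
bits_per_symbol = 2    # QPSK
polarizations = 2
raw_rate = baud * bits_per_symbol * polarizations
print(raw_rate / 1e9)  # 112.0 Gbit/s raw, i.e. ~100 Gbit/s of payload
```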

tens of Mbit/s not Gbit/s (was:what about color) (2, Informative)

Soft (266615) | more than 3 years ago | (#33853674)

Radio and DSL already do this, but only at a few tens of Gbit/s

... Mbit/s, I meant. The challenge is to gain 3+ orders of magnitude to the 100-1000Gbit/s range.

If you squeeze glass it flows (5, Funny)

Anonymous Coward | more than 3 years ago | (#33853178)

If you pump too much data into a fiber optic glass, it will begin to flow under the photonic pressure similar to glass in old church windows. If you look at them, the bottoms are much thicker than at the top. The reason? All that knowledge from God in Heaven up in the sky exerts a downward pressure on churches in particular warping their window glass... the same thing will happen to fiber optics. If you put too many libraries of congress through it, it will start to flow like toothpaste and your computer rooms will have a sticky floor and all your network switches will be gooey.

Thanks
Signed,
Mr KnowItAll.
(Happy Thanksgiving by the way)

Mod parent up. Re:If you squeeze glass it flows (1)

thomasdz (178114) | more than 3 years ago | (#33853190)

Mod parent up! it's so unlikely, it MUST be true.

Re:If you squeeze glass it flows (0)

Anonymous Coward | more than 3 years ago | (#33853768)

My Materials class in 1972 was very clear, Glass at normal temperatures can be classified as a liquid.
ergo, over time it moves. This Medieval glass is considerably thicker at the bottom than the top.

Talk about the bleeding obvious........... sigh.

Re:If you squeeze glass it flows (2, Insightful)

chtit_draco (1850522) | more than 3 years ago | (#33853942)

My Materials class in 1972 was very clear, Glass at normal temperatures can be classified as a liquid. ergo, over time it moves. This Medieval glass is considerably thicker at the bottom than the top.

Talk about the bleeding obvious........... sigh.

This Medieval glass is thicker at the bottom because of its fabrication process. Theoretically it should indeed "flow", but the relaxation time is just way too long for it to become noticeable in a matter of centuries...

Source : 2008 polymer class / http://en.wikipedia.org/wiki/Glass#Behavior_of_antique_glass [wikipedia.org]

Re:If you squeeze glass it flows (1)

Steeltoe (98226) | more than 3 years ago | (#33853826)

It's Thanksgiving?

Agggh, smartass!

Wtf are constants? (0)

Anonymous Coward | more than 3 years ago | (#33853188)

Why don't we just make the speed of light faster?

Re:Wtf are constants? (2, Funny)

Mitchell314 (1576581) | more than 3 years ago | (#33853246)

We could if we had the source code files.

Re:Wtf are constants? (0)

Anonymous Coward | more than 3 years ago | (#33853546)

But recompiling the universe takes forever...

Re:Wtf are constants? (3, Interesting)

ls671 (1122017) | more than 3 years ago | (#33853478)

FTFS: > to double or quadruple current speeds.

Of course, they must have been talking about capacity instead of speed. Sending more information concurrently using the same pipe. Every bit of information would still travel at pretty much the same speed obviously.

Re:Wtf are constants? (1)

srussia (884021) | more than 3 years ago | (#33853664)

FTFS: > to double or quadruple current speeds.

Of course, they must have been talking about capacity instead of speed. Sending more information concurrently using the same pipe. Every bit of information would still travel at pretty much the same speed obviously.

"Capacity" is still a polysemous term. I could be a static magnitude (a "stock") or a a dynamic one (a "flow"),. For example "1 LoC" can be a unit of capacity. Maybe "throughput" is what they mean, whose unit could be "1 LoC per second".

Re:Wtf are constants? (1)

Idbar (1034346) | more than 3 years ago | (#33853584)

Well, if all they want is to squeeze the fibers, why not use bigger and stronger wire ties?

Dark Fiber (2, Interesting)

mehrotra.akash (1539473) | more than 3 years ago | (#33853218)

Isn't some large percentage of the fiber not being used anyway? Rather than change the equipment on the current fiber, why not use more of the current equipment and light up more fiber?

Re:Dark Fiber (4, Insightful)

John Hasler (414242) | more than 3 years ago | (#33853260)

Because the dark fiber is where it is, not where it is needed. One of the fibers that crosses my land runs from Spring Valley, Wisconsin to Elmwood, Wisconsin. Is that going to help with a bandwidth shortage between New York and Chicago?

Re:Dark Fiber (1)

oldspewey (1303305) | more than 3 years ago | (#33853324)

You know I've read that about Spring Valley Wisconsin: Blink and you'll miss the packet coming through.

Re:Dark Fiber (4, Informative)

phantomcircuit (938963) | more than 3 years ago | (#33853448)

The shortage is almost entirely in the transcontinental links.

Re:Dark Fiber (1)

davester666 (731373) | more than 3 years ago | (#33853588)

Yeah, I just read that somebody laid some more fibre across the Atlantic. Oh, wait, that is reserved for insider stock market trading, as it has slightly less delay than existing fibre.

Re:Dark Fiber (1)

TooMuchToDo (882796) | more than 3 years ago | (#33854574)

Correction: High Frequency Trading, not insider stock market trading. Just as nefarious, but with a prettier name.

Re:Dark Fiber (5, Interesting)

Anonymous Coward | more than 3 years ago | (#33853522)

Speaking as someone involved in all of this, the days of dark fiber have, by and large, gone away. Back in 2002/2003 there had been massive amounts of overbuilding due to the machinations of MCI and Enron, no joke. Telcos worldwide had been looking at MCI's and Enron's sales numbers, vouched for by Arthur Andersen, and had launched big builds of their own, figuring that they were going to get a piece of that pie as well. When the ball dropped and it turned out that MCI and Enron had been lying with the collusion of Arthur Andersen, the big telcos realized that the purported traffic wasn't there. They dried up capex (which killed off a number of telecom equipment suppliers and gave the rest a near-death experience) and hawked dark fiber to anybody who would bite.

Those days have come and gone. Think back to what it was like in 2002: lots of us now have 10 Mb/s connections to the ISP, 3G phones, 4G phones, iPods, iPads, iPhones, Androids, IPTV, and it goes on and on. The core networks aren't crunched, quite, yet, but the growth this time is for real and has used up most if not all of the old dark fiber.

Now telcos are really going for more capacity, and it's a lot cheaper to put 88 channels of 100 Gb/s light on an existing single fiber than to fire up the ditch diggers. If you're working in the area, it's becoming fun again!

Re:Dark Fiber (0)

Anonymous Coward | more than 3 years ago | (#33853616)

Lots of us here in the developed world actually have 100/100, 200 or even 1000/100 megabit home connectivity. The good part about these is that 90% of consumers who have gotten these don't get even close to fifty megabits, even for a percent of the time. Otherwise, both local and backhaul operators would be doomed...

Re:Dark Fiber (1)

xda (1171531) | more than 3 years ago | (#33854350)

Ciena's 100 Gb/s, 88-channel DWDM doesn't work on long haul, only MAN, last time I checked.

Re:Dark Fiber (0)

Anonymous Coward | more than 3 years ago | (#33853562)

Yeah but what happens when the in'ernets are attacked and we have to download all the countries files to one central backup location that nobody knows about?

Die Hard 4.0 would never have happened the way it did if half the USA was watching Stalking Cat [youtube.com] ...i'm not sure if that is a good or bad thing.



Close to Shannon limit (4, Interesting)

s52d (1049172) | more than 3 years ago | (#33853396)

Assuming we have 5 THz of usable bandwidth (limited by today's fiber and optical amplifiers), and applying some technology known from radio for quite some time:

Advanced modulation (1024-QAM): 10 bits/symbol
Polarization diversity (or 2x2 MIMO): x2

So, 100 Tbit/sec is an approximate reasonable limit for one fiber. There is some minor work to transfer the technology from experimental labs to the field, but this is just a matter of time.

Wavelength multiplexing just makes things a bit simpler: instead of one pair of A/D converters doing 100 Tbit/sec, we might use 1000 of them doing 100 Gbit/sec.

In 2010, speed above 60 Tbit/sec was already demonstrated in the lab.

Eh, will we say soon: "Life is too short to surf using 1 Gbit/sec"?
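The arithmetic behind the 100 Tbit/sec figure, spelled out (same assumptions as the comment above):

```python
import math

# 5 THz of usable amplifier bandwidth, 1024-QAM, two polarizations.
bandwidth_hz = 5e12
bits_per_symbol = math.log2(1024)   # = 10 bits/symbol
polarizations = 2
capacity = bandwidth_hz * bits_per_symbol * polarizations
print(capacity / 1e12)  # 100.0 Tbit/sec
```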

Re:Close to Shannon limit (1)

Rising Ape (1620461) | more than 3 years ago | (#33853582)

Won't increasing the number of bits per symbol as you suggest require a higher SNR, thus meaning amplifiers have to be more closely spaced? Given the desire is to get more out of the existing infrastructure, that might be a problem.

Re:Close to Shannon limit (1)

Soft (266615) | more than 3 years ago | (#33853890)

Won't increasing the number of bits per symbol as you suggest require a higher SNR, thus meaning amplifiers have to be more closely spaced?

Good point. And even having closely-spaced amplifiers may not work, as optical amplifiers have fundamental limitations in terms of noise added (OSNR actually decreases by at least 3dB for each high-gain amplifier).

At least, that's for classical on-off keying (1 bit per symbol, using light intensity only). Coherent transmission might not have the same limit; I'd have to check the calculation to be sure. And you might be able to do something with "distributed" amplification, where instead of having localized amplifiers, you pump energy into the fiber so that it attenuates less. (E.g. use the Raman effect: light at a wavelength of 1550nm can be amplified in a silica fiber by sending another, stronger, beam of light at about 1450nm. But there's still some noise added.)
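A rough illustration of why denser constellations demand a better SNR, as the thread above discusses: by Shannon's formula, carrying k bits per symbol needs an SNR of at least 2^k - 1. This is an idealized floor; real systems need implementation margin on top.

```python
import math

def required_snr_db(bits_per_symbol):
    # Shannon floor for k bits/symbol at 1 symbol/s/Hz: SNR >= 2^k - 1
    return 10 * math.log10(2**bits_per_symbol - 1)

print(round(required_snr_db(2), 1))   # QPSK: ~4.8 dB
print(round(required_snr_db(10), 1))  # 1024-QAM: ~30.1 dB
```

Going from QPSK to 1024-QAM costs roughly 25 dB of SNR, which is why the amplifier-spacing concern raised above is real.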

Re:Close to Shannon limit (2, Insightful)

DigitAl56K (805623) | more than 3 years ago | (#33853614)

Eh, will we say soon: "Life is too short to surf using 1 Gbit/sec"?

Seriously... damn Flash websites...

Probably not (5, Interesting)

Sycraft-fu (314770) | more than 3 years ago | (#33853628)

You find each time you go up an order of magnitude in bandwidth, the next order matters much less.

100 bps is just painfully slow. Even doing the simplest of text over that is painful. You want to minimize characters at all costs (hence UNIX's extremely pithy commands).

1 kbps is ok for straight text, but nothing else. When you start doing ANSI for formatting or something it quickly gets noticeably slow.

10 kbps is enough that on an old text display, everything is pretty zippy. Even with formatting, colour, all that, it is pretty much realtime for interactivity. Anything above that is slow though. Even simple markup slows it down a good bit. Hard to surf the modern Internet, just too much waiting.

100 kbps will let you browse even pretty complicated markup in a short amount of time. Images also aren't horrible here, if they are small. Modern web pages take time to load, but usually 10 seconds or less. Browsing is perfectly doable, just a little sluggish.

1 mbps is pretty good for browsing. You wait a bit on content heavy pages, but only maybe a second or two. Much of the web is sub second loading times. This is also enough to stream SD video, with a bit of buffering. Nothing high quality, but you can watch Youtube. Large downloads, like say a 10GB video game, are hard though; it can take a day or more.

10 mbps is the point at which currently you notice no real improvements. Web pages load effectively instantly, usually you are waiting on your browser to render them. You can stream video more or less instantly, and you've got enough to stream HD video (720p looks pretty good at 5mbps with H.264). Downloads aren't too big an issue. You can easily get even a massive game while you sleep.

100 mbps is enough that downloads are easy to do in the background while you do something else, and have them ready in minutes. A 10GB game can be had in about 15 minutes. You can stream any kind of video you want, even multiple streams. At that speed you could stream 1080p 4:2:2 professional video like you'd put in to a NLE if there were any available on the web.

1 gbps is such that the network doesn't really exist for most things. You are now approaching the speed of magnetic media. Latency is a more significant problem than speed. Latency (and CPU use) aside, things tend to run as fast off a network server as they do on your local system. Downloads aren't an issue, you'll spend as much time waiting on your HDD as the data off the network in most cases.

10 gbps is enough that you can do uncompressed video if you like. You could stream uncompressed 2560x1600 24-bit (no chroma subsampling) 60fps video and still have nearly half your connection left.

If we get gig to the house, I mean truly have that kind of bandwidth available, I don't think we'll see a need for much more for a long, long time. At that speed, things just come down at amazing rates. You could download an entire 50GB BD movie during the first 6 minutes of viewing it. Things stream so fast over a gig that you can have the data more or less immediately to start watching/playing/whatever and the rest will be there in minutes. The latency you'd face to a server would be more of a problem.

Even now going much past 10mbps shows strong diminishing returns. I've gone from 10 to 12 to 20 to 50 in the span of about 2 years. Other than downloading games off of Steam going faster, I don't notice much. 50mbps isn't any faster for surfing the web. I'm already getting the data as fast as I need it. Of course usages will grow, while I could stream a single 1080p blu-ray quality video (they are usually 30-40mbps streams video and audio together) I couldn't do 2.

However at a gbps, you are really looking at being able to do just about everything someone wants to for any foreseeable future in realtime. I mean you can find theoretical cases that could use more but ask yourself how practical they really are.
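A quick sanity check on the transfer times quoted in the comment above (idealized arithmetic, ignoring protocol overhead and latency):

```python
def transfer_seconds(size_gigabytes, link_megabits_per_s):
    """Idealized transfer time: ignores protocol overhead and latency."""
    bits = size_gigabytes * 8e9          # decimal GB -> bits
    return bits / (link_megabits_per_s * 1e6)

# The 10GB game, at each order of magnitude of link speed:
for mbps in (1, 10, 100, 1000):
    print(f"{mbps:>4} Mbit/s -> {transfer_seconds(10, mbps) / 3600:.2f} h")

# And the 50GB BD movie at 1 Gbit/s:
print(round(transfer_seconds(50, 1000) / 60, 1), "minutes")
```

That last line comes out to about 6.7 minutes, in line with the "first 6 minutes of viewing" figure above.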

Re:Probably not (1)

Soft (266615) | more than 3 years ago | (#33853796)

10 gbps is enough that you can do uncompressed video if you like. [...] If we get gig to the house, I mean truly have that kind of bandwidth available, I don't think we'll see a need for much more for a long, long time.

Yes and no; I'm sure we'll invent new ways of wast^Wusing bandwidth. (3DTV, telepresence, video editing on remote storage, cloud computing... What's next?)

But the problem no longer lies in the house. Not everybody has a 100Mbit/s Internet access yet, but that's coming in the next few years. Now think of 1 billion people simultaneously trying to access YouTube... The problem is the core network, where you must aggregate all these users' traffic. Current DWDM links in the Tbit/s range are not enough.

(Or you could try to be smart, cache some data, use multicasting... In practice, people can't be bothered not to waste bandwidth, and you can't cache everything.)

Re:Probably not (4, Interesting)

Sycraft-fu (314770) | more than 3 years ago | (#33854242)

I was never arguing against the need in the core. Already 10 gbps links are in use just on the campus I work at, never mind a real large ISP or tier-1 provider. I'm talking to the home. While many geeks get all starry-eyed about massive bandwidth I think they've just never played with it to realize the limits of what is useful. 100mbit net connections, and actually even a good deal less, are more than plenty for everything you could want today. That'll grow some with time, but 20-50mbps is plenty for realtime HD streaming, instant surfing, fast downloads of large games, etc. Be a good bit before we have any real amount of content that can use more than that (or really even use that well).

Gigabit is just amazingly fast. You discover that copying between two modern 7200rpm drives gets you in the 90-100MBytes/sec range if things are going well (like a sequential copy, no seeking). Doing the math you find gigabit net is 125MBytes/sec, which means even with overhead 100MBytes/sec is no problem. At work I see no difference in speed copying between my internal drives and copying to our storage servers. It could all be local for all I can tell speed wise.

That's why I think gig will not be "slow" for home surfing any time in the foreseeable future, and maybe ever. You are getting to the point that you can stream whatever you need, transfer things as fast as you need. Faster connections just wouldn't do anything for people.

A connection to the home only really matters to a certain point. Once you can do everything you want with really no waiting, any more is just for show. At this point, that is somewhere in the 10-20mbps range. You just don't gain much past that, and I say this as someone who has a 50mbps connection (and actually because of the way they do business class I get more like 70-100mbps). That is also part of the reason there isn't so much push for faster net connections. If you can get 20mbps (and you'll find most cable and FIOS customers can, even some DSL customers) you can get enough. More isn't so useful.

Re:Probably not (1)

Kjella (173770) | more than 3 years ago | (#33853970)

Yup... I used to be on around 2 Mbps ADSL, it was just painful. Now I'm at 25 Mbps cable, and I've realized I don't really need more. With uTorrent+RSS most things I want are downloaded before I even know it. With 5 Mbps upload I have no trouble uploading anything to friends or keeping my ratio. Now can I pretty please soon pay for a service that is half as good as the pirates give me?

Re:Probably not (1)

Sycraft-fu (314770) | more than 3 years ago | (#33854180)

For games Steam and Impulse will give you better service than the pirates run. Other than when their servers are heavily loaded due to a free weekend or something I get games from them at 5MBytes+/sec. Starts fast, stays fast. Of course it also has the advantage of always being what I asked for and all that, and always being available for redownload.

For movies and so on, sorry got nothing. There are some good streaming services, but they only do HD to a Blu-ray player, they can't trust your evil computer, and they are pay per view. Netflix works well for streaming, but limited content. No good (legit) movie download services out there.

Excellent comment! but... (1)

way2trivial (601132) | more than 3 years ago | (#33854056)

"I've gone from 10 to 12 to 20 to 50 in the span of about 2 years."

fuck you.

Re:Excellent comment! but... (1)

Sycraft-fu (314770) | more than 3 years ago | (#33854286)

So you are saying I probably shouldn't tell you that because of how they handle business class accounts, I actually get more than that most of the time? :D

http://www.speedtest.net/result/985434853.png [speedtest.net]

That is my actual result from a few minutes ago. It is fun for bragging rights, I'll say that. However truth be told other than Impulse and Steam downloads I notice no difference over 20mbps. Personally I'd take 20/20 if it were offered instead of 50/5. However currently they use 4 downstream channels with DOCSIS 3, but only one upstream channel. The equipment can handle 4 upstream channels, Cox just doesn't use more than one.

Re:Excellent comment! but... (0)

Anonymous Coward | more than 3 years ago | (#33854908)

... and therein lies the real issue. You're right, more or less, I certainly wouldn't mind more downstream bandwidth, but I probably wouldn't notice it that much. On the other hand, an increase in upstream bandwidth (even to your 5Mb/s) would be a large improvement in quality of service for me (that's fast enough to stream good quality 720p video).

Re:Probably not (0)

Anonymous Coward | more than 3 years ago | (#33854060)

Give them bandwidth and they'll find a way to use it.

look at netflix, useless unless you have decent internet speed.
Start having higher speeds and you'll start seeing HD netflix.

plenty of large infrastructure projects didn't have any immediate use, and some didn't even have foreseeable uses.
Look at electricity. When nobody had electricity there was no market for electronic devices.
Give people electricity to their home and suddenly there are all sorts of new gadgets which were completely unknowable a few years previous. Heck, a lot of people saw electricity as a fancy way to replace candles originally. Now look what easy access to electricity does.

Same goes for bandwidth. People will find a use for it. VOIP is only the beginning. In the next few years I expect video conferencing/videovoip to start finding its way into cheap gadgets and gain traction. Look at the popularity of skype.

Re:Probably not (1)

Skapare (16644) | more than 3 years ago | (#33854086)

All of that applies to "last mile" connections to home. For a business, you have to multiply many of those needs by as many people using them at one time. Then there may be services going in the reverse direction. For an ISP, multiply by the number of customers (divided by the oversell factor). For core infrastructure ISPs, more than 100 Tbps is still going to be needed to service a billion homes with 1gbps and a million businesses with 10gbps.
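A rough sketch of that aggregation arithmetic (the oversubscription ratio is an assumed, illustrative figure, not a measured one):

```python
def core_demand_tbps(n_homes, home_gbps, n_biz, biz_gbps, oversell):
    """Aggregate core capacity: access rates divided by the
    oversubscription ratio (statistical sharing)."""
    total_gbps = (n_homes * home_gbps + n_biz * biz_gbps) / oversell
    return total_gbps / 1000.0           # -> Tbit/s

# A billion 1 Gbit/s homes plus a million 10 Gbit/s businesses,
# oversold 50:1 (an assumed ballpark):
print(core_demand_tbps(1e9, 1, 1e6, 10, 50), "Tbit/s")  # 20200.0 Tbit/s
```

Even with heavy oversubscription, the core aggregate lands in the tens of thousands of Tbit/s, well past the 100 Tbps mentioned above.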

And I'll still need to compress my 5120x2160p120 videos.

The last time (1)

DJCouchyCouch (622482) | more than 3 years ago | (#33853424)

The last time I tried squeezing more bandwidth out of fiber, I was constipated for weeks!

Try the veal!

Squeeze? (0)

Anonymous Coward | more than 3 years ago | (#33853502)

I thought the point of getting fiber was so that you didn't HAVE to squeeze. My doctor and my network engineer told me so!

100MB cap (1)

gygy (1182865) | more than 3 years ago | (#33853564)

Cap everything -> Profit !!

fearless 'leaders' acting like spineless monkeys (-1, Offtopic)

Anonymous Coward | more than 3 years ago | (#33853632)

check that, even spineless monkeys would have MUCH more integrity/spirit/consideration for their fellow monkeys than the clowns/puppets we're watching. additional warranted apologies to to genuine clowns & puppets everywhere. thanks.

Zoom (1)

Jaktar (975138) | more than 3 years ago | (#33853708)

I'm sure this will make my 768kbps down and 512kbps up seem so much snappier.

New technology? (1)

homey of my owney (975234) | more than 3 years ago | (#33853770)

This was under development before the dot com bust because back then we were going to run out of bandwidth within 3 years.

Phase? (1)

Krahar (1655029) | more than 3 years ago | (#33853856)

I'm down with polarity, but what is a phase of light and what does it mean to use 4 of them? Google isn't giving me any love on that.

Re:Phase? (1)

Soft (266615) | more than 3 years ago | (#33853980)

Polarization, you mean? (As in the direction along which the electrical field vibrates?)

For phase modulation, try Wikipedia [wikipedia.org]. I like the diagram.

The problem in optical transmissions, unlike radio or electricity, is that you can't directly access the phase of the light. All you can do is to have two beams of light interfere together (just like with sound: if you hear two tones, very closely spaced, you will hear a low-frequency "beat" which pulses at a frequency equal to the difference between the frequencies of the original tones). That gives you access to the phase, but you need to have the vibration frequencies of your beams very close together, which is not simple. Recent advances (in DSP processors, paradoxically) are making it possible, especially for high-speed modulations.
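The beat effect described above is just the sum-to-product trig identity; this small sketch verifies it numerically (audio-range tones for readability; the optical case is the same math at much higher frequencies):

```python
import math

f1, f2 = 1000.0, 1010.0   # two closely spaced tones (Hz; illustrative)

for k in range(0, 100_000, 997):          # spot-check across one second
    t = k / 100_000
    superposed = math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)
    # Sum-to-product: a carrier at the mean frequency whose amplitude
    # envelope oscillates at half the difference frequency; the perceived
    # beat rate is |f2 - f1| because you hear the envelope's magnitude.
    carrier = math.sin(2 * math.pi * (f1 + f2) / 2 * t)
    envelope = 2 * math.cos(2 * math.pi * (f2 - f1) / 2 * t)
    assert abs(superposed - carrier * envelope) < 1e-9

print("beat frequency:", abs(f2 - f1), "Hz")  # 10.0 Hz
```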

Re:Phase? (1)

Krahar (1655029) | more than 3 years ago | (#33854334)

Thanks. Usually Google is excellent at showing relevant Wikipedia pages. Don't know why it fails in this instance. There must be some kind of reference compared to which phase is measured. I wonder if that is done by also sending a single reference on a different wavelength or if it is done by very fine-tuned clocks and knowledge of transmission time. Seems like a reference phase would be by far the easiest and most robust solution.

Re:Phase? (1)

Soft (266615) | more than 3 years ago | (#33854464)

Seems like a reference phase would be by far the easiest and most robust solution.

It has been proposed for some modulation types. However, this halves the efficiency: you use one wavelength for each reference beam, but you can't use the same reference for all the other wavelengths, due to the fact that these wavelengths travel down the fiber at different speeds. (This is called "chromatic dispersion" and can be a major pain in the neck at high bit rates.) So there would be a delay between reference and data beams, thus a phase shift, which would have to be measured and compensated for.

In practice, I believe that a known data sequence is transmitted at regular intervals so the receiver can detect it and synchronize to the emitter.
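To get a feel for the walk-off the parent describes, a tiny sketch using the usual dispersion figure for standard single-mode fiber near 1550 nm (treat D = 17 ps/(nm km) as an assumed typical value):

```python
def walkoff_ps(d_ps_per_nm_km, length_km, delta_lambda_nm):
    """Differential group delay between two wavelengths after a fiber span."""
    return d_ps_per_nm_km * length_km * delta_lambda_nm

# Reference and data channels 0.8 nm apart (one 100 GHz grid slot),
# over 1000 km of standard fiber with an assumed D = 17 ps/(nm km):
delay = walkoff_ps(17, 1000, 0.8)
print(delay, "ps")
```

That's about 13.6 ns of walk-off, i.e. hundreds of symbol periods at 40 Gbaud, which is why a shared phase reference drifts and must be measured and compensated.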

Re:Phase? (1)

Interoperable (1651953) | more than 3 years ago | (#33854278)

It's probably two of them. Two degrees of freedom from polarization, two from phase.

If your information is carried on an amplitude modulated sine wave, you recover it by demodulating with another sine wave. Demodulating with an orthogonal function, cosine, yields nothing. So you can pack a second carrier with a cosine phase in there and then demodulate each with the correct phase to extract the modulation signal. I don't know how much cross-talk there would be, but in theory, as long as the modulation frequency (on the order of GHz) is slow compared to the carrier frequency (hundreds of THz), it's probably not too bad.
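A minimal numerical sketch of that orthogonality argument, with audio-range frequencies and a crude averaging low-pass standing in for a real filter (all values illustrative):

```python
import math

fc, fs, n = 1000, 100_000, 100_000   # carrier (Hz), sample rate, samples

def demodulate(lo_phase):
    """Multiply the received signal by a local oscillator at the given
    phase, then average (a crude low-pass filter)."""
    acc = 0.0
    for k in range(n):
        t = k / fs
        message = 0.5 + 0.25 * math.cos(2 * math.pi * 2 * t)  # slow AM message
        received = message * math.sin(2 * math.pi * fc * t)
        acc += received * math.sin(2 * math.pi * fc * t + lo_phase)
    return acc / n

in_phase = demodulate(0.0)            # sin * sin: recovers mean(message)/2
quadrature = demodulate(math.pi / 2)  # sin * cos: averages to ~0
print(round(in_phase, 3), round(abs(quadrature), 3))  # 0.25 0.0
```

The near-zero quadrature product is exactly the orthogonality that lets a second, cosine-phased carrier occupy the same band without interfering.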

Why not install more cable as well? (0)

Anonymous Coward | more than 3 years ago | (#33853914)

The idea looks plausible but there is a way to upgrade bandwidth that ALWAYS works. INSTALL MORE CABLE!!!

Re:Why not install more cable as well? (1)

Skapare (16644) | more than 3 years ago | (#33854044)

Installing is very very expensive. Too bad they didn't realize that back when they did put in the dark fiber. They are doing some installs now, but the financial resources are being held back until the government gives them massive tax breaks and helps them cover the costs so they can keep their profits high and not have to cut CEO salaries and bonuses.

Interesting, but... (3, Insightful)

Bureaucromancer (1303477) | more than 3 years ago | (#33854000)

Nice work they're doing, but nothing is going to get us around the need for new infrastructure. As much as the telecos are trying to deny it, we are going to need another major round of long distance fiber installations before the global network anything like stabilizes. Actually, I would draw a comparison to the British railway system in the 19th century (flawed, but some interesting points in it), particularly with reference to the boom bust cycle (including apparent overconstruction early on that actually goes over capacity quite soon, and eventual REAL massive overinvestment and big collapses and consolidation). Not really sure if we want that outcome, but it seems like a reasonable parallel in some ways; on the one hand massive overbuilding would be nice for users, for a while at least, but as it is we need to be breaking telecom monopolies, not creating more through collapses and consolidation...

Why not get more efficient though? (1)

Sycraft-fu (314770) | more than 3 years ago | (#33854356)

It makes sense both in economic and practical terms to squeeze as much out of the fiber we have before laying new stuff. Not only does that get more bandwidth easier and cheaper now, but it means when new stuff is laid it'll last longer. It is real expensive to lay a transatlantic cable (and those are what are the most full). The more we can get out of one, the better. I'd much rather we research the technology to get, say, 100tbits per fiber out of the cable and need a couple thousand fibers spanning a few cables than be happy with 1tbit per fiber and need millions of fibers spanning thousands of cables.

New infrastructure should be the last resort, not the first one. Make the most efficient use of what you have, then build new stuff if there is a need. You can look at it from economic, reliability, environmental, or really any way and it comes out the same: Make better use of what you have, don't go get new things if you can avoid it.

Re:Why not get more efficient though? (1)

Bureaucromancer (1303477) | more than 3 years ago | (#33854710)

I'm all for efficiency gains, and the work being done in TFA is great; we just shouldn't try to convince ourselves that we don't need further infrastructure, and pretty soon at that. Bear in mind that the strategy of a lot of the telecos lately seems to have been declaring that increased usage of the network, rather than being a sign of technological maturity, is 'abusive' and needs to be stopped. For that matter, the fact they make these claims is a pretty good sign that there isn't enough competition, seeing as increased bandwidth usage, far from being a threat to network integrity, is a chance for the providers to sell more product.

Re:Why not get more efficient though? (1)

Sycraft-fu (314770) | more than 3 years ago | (#33854888)

I think maybe you read too much Slashdot and don't look around at what is happening overall with Internet providers. For one, bandwidth per dollar has gone way up in many cases. This is true for home connections, and for business. Only 10 years ago I paid about 20% more (not even counting inflation) for 640k/640k than I currently do for 50m/5m. That's some major increase in bandwidth without an increase in cost. This is true for high end business lines as well. I realize there are cases where it isn't, however in the vast majority of cases you get a lot more bandwidth for the same or less cost than only a short time ago.

Also the caps you are talking about are highly varied. Some providers have no caps at all. Many others have caps, but they are quite reasonable. Things like 100s of GB per month. This is because the reason the net is cheap, the reason it works as it does is we all have to share bandwidth. You can't have truly dedicated bandwidth, to do that would be expensive and wasteful. Well the flip side of that is not everyone can use their connection full blast all the time, and torrenting has made that a more common thing. I mean on campus where I work we get great speeds, 100mbit+ downloads. However if everyone tried to use all their bandwidth all the time, we'd get maybe 300-500kbits each. It's only fast because we share. It is relatively few ISPs that put a low, problematic, cap on their usage. Also you'll discover the vast majority of ISPs offer business class accounts for a bit more that have no cap. I have one of these.

Really when you look at it ISPs are doing a good job of providing more bandwidth at a reasonable cost. There's just a lot of whining on /. because many geeks use bandwidth as an ePenis number and so want a lot more, and because there are more than a few here that want to torrent 24/7. The overall situation isn't bad at all. Most people can get fast net for a good price, and can use it all they like with no trouble.

Where's the multicast? (1)

Skapare (16644) | more than 3 years ago | (#33854014)

People are still getting video feeds by HTTP. Multicast was supposed to save on bandwidth for things like IP TV. But it still isn't happening on any real scale. A lot of core infrastructure bandwidth could be reduced by making multicast fully functional and using it. And, of course, we need to do that in a way that precludes some intermediate business deciding what we can, or cannot, receive by multicast. Oh, and how many multicast groups are there? And how do I get one?

Re:Where's the multicast? (1)

TooMuchToDo (882796) | more than 3 years ago | (#33854602)

It's happening on the backend, and it's god damn huge. It's just hidden behind IPTV "cable boxes". If you're watching television on Comcast's cable plant, you're using multicast.

Just change the colour (0)

Anonymous Coward | more than 3 years ago | (#33854064)

Pure bull shit - just use another colour

Butter's Law (1)

SimonTheSoundMan (1012395) | more than 3 years ago | (#33854370)

With traffic doubling every two years, the limits of current networks are getting close to saturating.

That's ok, we have been following Butters' Law for quite some time, just like transistor density follows Moore's Law. The cost of sending data halves every 9 months as data networks double in speed, which comfortably outpaces traffic doubling every two years.
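The compound-growth arithmetic behind that claim, taking the two doubling periods at face value:

```python
def growth_factor(years, doubling_months):
    """Compound growth given a doubling period."""
    return 2 ** (years * 12 / doubling_months)

# Capacity doubling every 9 months vs. traffic doubling every 24 months,
# over a six-year horizon:
capacity = growth_factor(6, 9)    # 2**8 = 256x
traffic = growth_factor(6, 24)    # 2**3 = 8x
print(capacity / traffic)         # 32.0: capacity pulls ahead 32-fold
```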

Oh...of course! (4, Informative)

Interoperable (1651953) | more than 3 years ago | (#33854382)

The article implies that it's easy to do, there was simply never a need before. I seriously doubt that it's a trivial thing to accomplish a four-fold increase in bandwidth on existing infrastructure.

Polarization has a habit of wandering around in fiber. Temperature and physical movement of the fiber will change how the polarization is altered as it passes through the fiber. In a trans-oceanic fiber the effect could be dramatic; the polarization would likely wander around with quite a high frequency. This would need to be corrected for by periodically sending reference pulses though the fiber so that the receivers could be re-calibrated. Not too difficult, but any inaccessible repeaters would still need to be retrofitted. I also don't know if in-fiber amplifiers are polarization maintaining. They rely on a scattering process that might not be.

Phase-encoding has similar problems. Dispersion, the fact that different frequencies travel at different velocities (this leads to prisms separating white light into rainbows), will distort the pulse shape and shift the modulation envelope with respect to the phase. You either need very low dispersion fibers, and they already need to use the best available, or have some fancy processing at a receiver or repeater. Adding extra phase encoding simply implies that the current encoding method (probably straight-up, on-off encoding) is inefficient. That's not necessarily lack of foresight, that's because dense encoding is probably really hard to do in a dispersive medium like fiber. Again, it's not a trivial drop-in replacement.

The article downplays how hard these problems are. It implies that the engineers simply didn't think it through the first time around, but that's far from the case. A huge amount of money and effort goes into more efficiently encoding information in fiber. There probably is no drop in solution, but very clever design in new repeaters and amplifiers might squeeze some bonus bandwidth into existing cable.

Re:Oh...of course! (4, Interesting)

Soft (266615) | more than 3 years ago | (#33854718)

The article implies that it's easy to do, there was simply never a need before. I seriously doubt that it's a trivial thing to accomplish a four-fold increase in bandwidth on existing infrastructure.

It's not, as you have pointed out. My interpretation is that, on the contrary, phase and polarization diversity (which I'll lump into "coherent" optical transmissions) are hard enough to do that you'll try all the other possibilities first: DWDM, high symbol rates, differential-phase modulation... All these avenues have been exploited, now, so we have to bite the bullet and go coherent. However, on coherent systems, some problems actually become simpler.

Polarization has a habit of wandering around in fiber.

Quite so. Therefore, on a classical system, you use only polarization-independent devices. (Yes, erbium-doped amplifiers are essentially polarization-independent because you have many erbium ions in different configurations in the glass; Raman amplifiers are something else, but sending two pump beams along orthogonal polarizations should take care of it.)

For a coherent system, you want to separate polarizations whose axes have turned any which way. Have a look at Wikipedia's article on optical hybrids [wikipedia.org], especially figure 1. You need four photoreceivers (two for each balanced detector), and reconstruct the actual signal by digital signal processing. And that's just for a single polarization; double this for polarization diversity and use a 2x2 MIMO technique.

That's why it's so expensive compared to a classical system: the coherent receiver is much more complex. Additionally, you need DSP and especially ADCs working at tens of gigasamples per second. This is only just now becoming possible.

Phase-encoding has similar problems. Dispersion, the fact that different frequencies travel at different velocities (this leads to prisms separating white light into rainbows), will distort the pulse shape and shift the modulation envelope with respect to the phase. You either need very low dispersion fibers, and they already need to use the best available, or have some fancy processing at a receiver or repeater.

Indeed. We are at the limit of the "best available" fibers (which are not zero-dispersion, actually, to alleviate nonlinear effects, but that's another story). Now we need the "fancy processing". And lo, when we use it, the dispersion problem becomes much more tractable! Currently, you need all these dispersion-compensating fibers every 100km, and they're not precise enough beyond 40Gbaud (thus 40Gbit/s for conventional systems). With coherent, dispersion is a purely linear channel characteristic, which you can correct straightforwardly in the spectral domain using FFTs. Then the limit becomes how much processing power you have at the receiver.
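A toy sketch of that idea: to first order, chromatic dispersion is an all-pass quadratic phase in the frequency domain, so applying the conjugate phase at the receiver undoes it exactly. A naive DFT stands in for the FFT, and the `strength` parameter is an arbitrary illustrative number, not a real fiber's dispersion:

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) DFT; a real receiver would use an FFT."""
    n, sign = len(x), (1 if inverse else -1)
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def apply_dispersion(x, strength):
    """Multiply the spectrum by the all-pass quadratic phase
    exp(j * strength * f^2) -- the linear model of chromatic dispersion.
    The opposite strength undoes it exactly."""
    spec = dft(x)
    n = len(x)
    freqs = [j if j < n // 2 else j - n for j in range(n)]  # centered bins
    spec = [s * cmath.exp(1j * strength * f * f) for s, f in zip(spec, freqs)]
    return dft(spec, inverse=True)

pulse = [0.0] * 64
pulse[30:34] = [1.0, 1.0, 1.0, 1.0]              # a crisp symbol
smeared = apply_dispersion(pulse, 0.02)          # the "fiber" spreads it out
recovered = apply_dispersion(smeared, -0.02)     # receiver-side equalizer
error = max(abs(a - b) for a, b in zip(pulse, recovered))
print(error < 1e-9)   # True: a purely linear impairment inverts exactly
```

This is why moving the problem into DSP makes dispersion tractable: the limit becomes receiver processing power, not the fiber.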

The article downplays how hard these problems are. It implies that the engineers simply didn't think it through the first time around, but that's far from the case. A huge amount of money and effort goes into more efficiently encoding information in fiber. There probably is no drop in solution, but very clever design in new repeaters and amplifiers might squeeze some bonus bandwidth into existing cable.

Well, yes, much effort has been devoted to the problem. After all, how many laboratories are competing to break transmission speed records and be rewarded by the prestige of a postdeadline paper at conferences such as OFC and ECOC ;-)?

As for how much bandwidth can be squeezed into fibers, keep in mind that current systems have an efficiency around 0.2bit/s/Hz. There's at least an order of magnitude left for improvement; I don't have Essiambre's paper handy, but according to his simulations, I think the minimum bound for capacity is around 7-8bit/s/Hz.
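For reference, the Shannon efficiency of an ideal linear channel at a given SNR; this ignores fiber nonlinearity, which is exactly what makes the real per-fiber bound finite rather than growing without limit:

```python
import math

def shannon_bits_per_s_per_hz(snr_db):
    """Shannon limit of a linear AWGN channel, per polarization."""
    return math.log2(1 + 10 ** (snr_db / 10))

for snr_db in (0, 10, 20):
    print(snr_db, "dB ->", round(shannon_bits_per_s_per_hz(snr_db), 2), "bit/s/Hz")
```

On a linear channel, raising launch power keeps raising capacity; in fiber, more power also means more nonlinear distortion, which pins the practical bound near the 7-8 bit/s/Hz figure cited above.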

Re:Oh...of course! (1)

John Hasler (414242) | more than 3 years ago | (#33854728)

They would use circular polarization multiplexing. They already use phase shift modulation as well as wavelength division multiplexing.

Squeezes ma fiber (0)

Anonymous Coward | more than 3 years ago | (#33854426)

That is all.

Maybe if we ran the fiber downhill? (0)

Anonymous Coward | more than 3 years ago | (#33854456)

How about using gravity to help the light flow faster? It works for water!
