
Caltech and UVic Set 339Gbps Internet Speed Record

samzenpus posted about 2 years ago | from the greased-lightning dept.


MrSeb writes "Engineers at Caltech and the University of Victoria in Canada have smashed their own internet speed records, achieving a memory-to-memory transfer rate of 339 gigabits per second (53GB/s), 187Gbps (29GB/s) over a single duplex 100-gigabit connection, and a max disk-to-disk transfer speed of 96Gbps (15GB/s). At a sustained rate of 339Gbps, such a network could transfer four million gigabytes (4PB) of data per day — or around 200,000 Blu-ray movie rips. These speed records are all very impressive, but what's the point? Put simply, the scientific world deals with vast amounts of data — and that data needs to be moved around the world quickly. The most obvious example of this is CERN's Large Hadron Collider; in the past year, the high-speed academic networks connecting CERN to the outside world have transferred more than 100 petabytes of data. It is because of these networks that we can discover new particles, such as the Higgs boson. In essence, Caltech and the University of Victoria have taken it upon themselves to ride the bleeding edge of high-speed networks so that science can continue to prosper."
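
A quick back-of-the-envelope check of the summary's headline figures (a sketch only, in Python; it assumes decimal SI units and, as a commenter notes further down, a 20GB Blu-ray rip):

    # Rough sanity check of the figures quoted in the summary.
    # Assumptions (not from the article): decimal SI units, 20 GB per Blu-ray rip.
    SECONDS_PER_DAY = 86_400

    rate_bps = 339e9                        # sustained memory-to-memory rate, bits/s
    bytes_per_day = rate_bps * SECONDS_PER_DAY / 8

    petabytes_per_day = bytes_per_day / 1e15
    bluray_rips = bytes_per_day / 20e9      # assumed 20 GB per rip

    print(f"{petabytes_per_day:.1f} PB/day")    # ~3.7 PB/day, i.e. "around 4PB"
    print(f"{bluray_rips:,.0f} Blu-ray rips")   # ~183,000, i.e. "around 200,000"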


First! (-1)

Anonymous Coward | about 2 years ago | (#42125349)

I'm on their network. :p

Progress (5, Informative)

SternisheFan (2529412) | about 2 years ago | (#42125353)

The older neckbeard hugs his 300 baud modem, and softly sobs...

Re:Progress (1)

Osgeld (1900440) | about 2 years ago | (#42125801)

I remember being opposed to 28.8 modems because the ANSI art from BBSes would scroll by too fast; it just wasn't the same without watching a menu draw on screen.

Of course, now I use a machine routine to force my Apple II to do 115,200 and think it seems a bit sluggish.

Re:Progress (1)

harlows_monkeys (106428) | about 2 years ago | (#42127915)

No serious neckbeard used a 300 baud modem. The modem of choice was the Telebit Trailblazer.

old (1)

DarthVain (724186) | about 2 years ago | (#42129985)

Heh, I remember when I was younger connecting to 300 baud modems with my 2400 baud modem and thinking "man that's slow, it must be very old!" lol

My god (1, Funny)

Anonymous Coward | about 2 years ago | (#42125425)

It's full of porn!

Re:My god (1)

Nostromo21 (1947840) | about 2 years ago | (#42126779)

I was just about to ask if that's enough to stream hi-def 8K uncompressed, live-action uber-porn!? >;-)

Just a matter of Cost (3, Interesting)

nevermindme (912672) | about 2 years ago | (#42125447)

At this point it is just a matter of how much you are willing to spend, with 10Gig Ethernet becoming the standard method for handoff from the telco demarc. Just installed an L3 endpoint with well over 50Gig/sec of capacity, with each gig/sec of CIR costing less than it takes my company to write the check and split it out to the customers' invoices. Storage and power/cooling are the last expensive items in the datacenter.

Re:Just a matter of Cost (2)

Charliemopps (1157495) | about 2 years ago | (#42125781)

Don't forget equipment management, billing, HR, insurance, and everything else it takes to keep your employees that actually install, maintain and upgrade that equipment around. Last I checked, people were still required and they are, by far, the most expensive part of our data center.

The point (1)

symbolset (646467) | about 2 years ago | (#42125453)

Also, if you want to bring next-gen gigabit fiber networking to homes in a major metro area your backhaul network needs to push the limits of fiber. Otherwise you run out of backbone capacity. Even with this speed, CDN endpoints are needed to reduce backbone bandwidth requirements for things like streaming video and TV.

Re:The point (1)

Algae_94 (2017070) | about 2 years ago | (#42125529)

I don't think these two institutions are concerned with streaming video and TV, or connecting to people's homes at all. How many homes need to access 100PB of data from CERN?

Re:The point (0)

Anonymous Coward | about 2 years ago | (#42126355)

In an ideal world? Every home that wants access. This is Science.

Re:The point (3, Interesting)

symbolset (646467) | about 2 years ago | (#42127057)

I was taking a more general view of "what's the point?" The first connection to what would become the Internet was made between UCLA and SRI in Menlo Park, CA, after all. That was a big deal for them, but a bigger deal for us. What the point of that was is rather subjective.

100PB seems like a lot of data today - 3,000 times the 3TB storage available in a standard PC. But I am so old I wear an onion on my belt, as was the fashion in my day. 1/3000th of that 3TB is 1GB. I can remember when to have 1GB of storage in your PC was an undreamt of wealth of storage richness: a bottomless well that might never be filled. Hell, I can remember a day when 3TB of digital info storage was more storage than there was - everywhere on Earth. In fact in my early days there was talk of a terabyte being the sum of human knowledge (silly, I know). It's reasonable to expect that when that much more time has passed again, 100PB will not be a big deal either.

So now we carry around a 1TB 2.5" USB drive in our shirt pocket like it's no big deal. And when guys like this do things like this we talk about what it means to them - and that's fine. But there is a larger story, like there was a larger story at UCLA - and that is "what does this mean to the rest of us?"

Now 339Gbps isn't such a big deal. NEC wizards have already passed 101 Tbps - 300 times as much over a single fiber [wikipedia.org] - though not to this distance. That's enough bandwidth to pass your 100PB in a couple of hours, over a single strand of glass fiber.

The LA Metro area is about 15 million people, or 3 million homes. To deliver 1Gbps to a reasonable half of 3 million homes and mesh it out for global distribution is going to require a lot of these links. The aggregate demand would probably be under 1% of the peak potential of 3,000 Tbps, or about 30Tbps - 100 times the bandwidth of this link. Using CDNs(*) - particularly for YouTube, cable TV, the usual porn suspects and BitTorrent - you could diminish the need for wider bandwidth considerably, but you still need a wide pipe to the outside world. And all the Internet servers in the world would need to be updated to support the crushing demand with higher performance, SSD storage and the like. And that's great for the economy, and it's just LA.

These innovations are neat, but they're neater still when they come home to all of us.

TL;DR: Get off my lawn.

(*) A CDN, or Content Delivery Network [wikipedia.org], is a facility for moving high-bandwidth, high-demand or high-transaction content closer to a nexus of consumers. An example would be Netflix, which delivers streaming video content to 21 million subscribers, comprising by some estimates a full third of Internet traffic. Netflix provides Internet providers, for free, with BackBlaze boxes [gigaom.com] that move Netflix content closer to the end user, reducing backbone usage. Similar boxes are provided by advertising networks and other content providers.
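
A rough re-run of the parent comment's metro-area arithmetic (a sketch using the commenter's own assumptions: 3 million homes at 1Gbps each, aggregate demand around 1% of peak):

    # Back-of-the-envelope check of the LA-metro figures in the comment above.
    # All inputs are the commenter's assumptions, not measured values.
    homes = 3_000_000
    per_home_gbps = 1

    peak_tbps = homes * per_home_gbps / 1000    # 3,000 Tbps theoretical peak
    aggregate_tbps = 0.01 * peak_tbps           # ~30 Tbps assumed real demand

    record_gbps = 339
    links_needed = aggregate_tbps * 1000 / record_gbps

    print(f"peak {peak_tbps:.0f} Tbps, demand ~{aggregate_tbps:.0f} Tbps")
    print(f"~{links_needed:.0f} record-setting links")   # roughly 90, i.e. ~100x this link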

Re:The point (3, Informative)

Anonymous Coward | about 2 years ago | (#42127393)

Now 339Gbps isn't such a big deal. NEC wizards have already passed 101 Tbps - 300 times as much over a single fiber [wikipedia.org] - though not to this distance. That's enough bandwidth to pass your 100PB in a couple of hours, over a single strand of glass fiber.

Yes, but NEC sent raw bits (probably just pseudorandom data) over a single length of fiber; that's just the raw fiber throughput. This experiment ran over a live network, so: multiple lengths of fiber, optical amplifiers in between, add/drop multiplexers in between, then put Ethernet over it, IP routing between Ethernet segments, and _then_ terminate this on real machines transferring that data through memory to and from disk. The hardware they used is also commercially available today, while you'd probably be looking for a vendor that has a setup with 370 lasers that fire into a single fiber (as described in the article that Wikipedia used as a source for the NEC experimental results).

Re:The point (1)

symbolset (646467) | about 2 years ago | (#42127819)

Standards are for weenies. If you own both ends of the glass pipe, how you put data down it is up to you. It's not standard. It's not Ethernet. But I really don't think Google gives a damn because they own both ends of the pipe and what they care about is moving bits for least cost at minimum latency.

Re:The point (3, Interesting)

Anonymous Coward | about 2 years ago | (#42128049)

(same AC as above)

That's true, but that ideal will break sooner than you'd like unless you're going to custom-design and build all your equipment yourself, which at that level is pretty tough, even for a company like Google. That equipment is not comparable to what you can get in your local computer shop; only very few companies worldwide can produce all the devices you need from end to end.
  - After some 160km at most you're going to need amplification, which limits the wavelengths you can use (there are only a couple of bands commonly used in fiber communication).
  - After a few amplification points, you're going to need reconditioning equipment, matched to the signal you're sending.
  - Eventually you're going to have to convert back to Ethernet, POS or ATM to connect anything useful to it, and you'll probably want to run IP over it so you can talk to other systems on the Internet. You can of course make your own NICs, but in that case you'll still have to interface to some PCI standard, unless you're going to design your complete end node yourself too. In that case, you'll also have to write a custom OS to drive your custom hardware.

Existing systems that can do this on a large scale (i.e. high bandwidth) are _much_ more affordable (a rough guess is some tens of millions of dollars for a very simple small-diameter network with a couple of routers) than completely designing it just for yourself, so you'll have to live with the standards everyone has agreed on unless you really have significant disposable income as a company.


Re:The point (1)

jbo5112 (154963) | about 2 years ago | (#42146715)

Kansas City is already being wired with 1Gbps dedicated connections in people's homes. A fiber line runs from the home to a "fiber hut" (a room-sized switch or router) and the fiber hut is placed on the Internet backbone, with no aggregation in between. I saw symmetrical 900Mbit(ish) bandwidth test results on a screen at Google's Fiber Space. Even Verizon's and AT&T's 100Gbps network backbones are going to fill up pretty fast once this rolls out to more customers and Google starts installing for their second rally.

The nice thing about streaming video/TV on a 1Gbps network is that an HD movie only uses bandwidth for 30 seconds or a few minutes tops; then the customer will be mostly idle while watching. It's not like current broadband, where a single higher-bitrate video service (e.g. VUDU) could saturate a customer connection for an hour or two. Eventually, it would be smart to employ some sort of consumer-side caching and P2P sharing for data-intense services like movies. There's no sense in wasting backbone bandwidth when neighbors already have the data on the local network. The biggest obstacle would be cramming the storage requirements into a small, cheap set-top player.
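
Rough numbers behind the "30 seconds or a few minutes" claim (a sketch; the 4-8GB per HD movie figure is an assumption of mine, not from the comment):

    # At 1 Gbps, fetching a whole HD movie takes well under a couple of minutes,
    # after which the link sits idle while the customer watches.
    link_bps = 1e9                      # 1 Gbps residential fiber connection
    for movie_gb in (4, 8):             # assumed size range of an HD encode
        seconds = movie_gb * 8e9 / link_bps
        print(f"{movie_gb} GB movie: ~{seconds:.0f} s at line rate")
    # 4 GB -> ~32 s, 8 GB -> ~64 s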

Re:The point (1)

symbolset (646467) | about 2 years ago | (#42152365)

Google has the backbone fiber to do this, and the tech to make it work. Nobody else does. And they do, in fact, leverage non-standard tech to move their bits over their own fiber. They design their own switches, switch links and so on and they leverage emerging technologies like this. Just like they design their own servers, and go direct to China for motherboards and Intel for processors, they're not paying $1300 for a 10Gbps SFP+ LR GBIC.

They bought this fiber for pennies on the dollar it cost to lay, back in the .bomb era. There is quite a lot of it. Quite a lot more than they could ever need, even if they wired every home in America with 10Gig.

Nobody moves bits cheaper than Google does. And this is part of why.

YAY !! BLOW MY MONTHLY GB in 0.666 SECONDS !! (-1)

Anonymous Coward | about 2 years ago | (#42125493)

I live to tell you a story about a man named Jed !! A poor mountaineer - barely kept his familly fed !! Then one day, when he was shootin' at some food !! Up from the ground came a bubblin' crude !! OIL !! it was !! Black Gold !! Texas Tea !! Fish heads !! Fish heads !! Roly-poly fish heads !! DANGER !! DANGER !! UVic ?? FikU !!

So at home... (2, Informative)

nurb432 (527695) | about 2 years ago | (#42125509)

I could blow thru my bandwidth cap in just under a second.

Re:So at home... (1)

Anonymous Coward | about 2 years ago | (#42127113)

meanwhile comcast offers me 15mbit at 60 bucks a month

Inter-networking (2, Informative)

fufufang (2603203) | about 2 years ago | (#42125519)

For those who can't be bothered to read the whole article: the packets actually went over the Internet. It wasn't a simple case of a direct optic-fibre connection. It is impressive that the backbone can now achieve such bandwidth.

Re:Inter-networking (0)

Anonymous Coward | about 2 years ago | (#42126801)

Not the big-I Internet, but over R&E networks. And at least the Internet2 portion was using a new network that had no other traffic riding on the 100GE links. So maybe not a dedicated fiber connection, but pretty close. But the RoCE transfer did seem to go pretty well for them over the multiple 100GE links they were using...

Units (1)

Anonymous Coward | about 2 years ago | (#42125535)

What's the equivalent speed measured in Library of Congresses?

Re:Units (1)

mic0e (2740501) | about 2 years ago | (#42128713)

You could transfer about one football field of library of congress area in the time it takes to watch one sitcom.

339 Gb/s? (1)

KrazyDave (2559307) | about 2 years ago | (#42125537)

isn't that like the average home Internet speed in Iceland?

Re:339 Gb/s? (1)

Anonymous Coward | about 2 years ago | (#42127111)

Meh, no, we only have 100 Mbit/s standard, with experiments being done on 1 Gbit/s.

339G over a 100G would be news (1)

dtdmrr (1136777) | about 2 years ago | (#42125563)

Read the damn press release. It does say single link, but not single 100G link. If they were demonstrating in-line compression at those speeds, or actually sending already-compressed data at 4x the line rate, that would be interesting news. Sloppy re-reporting.

Re:339G over a 100G would be news (1)

Alien Being (18488) | about 2 years ago | (#42126063)

It's good to know that 339 gigabits per second (53GB/s) is equal to 187Gbps and that's hows much iz gonna need next time I downloads 200000 Bluray films fore sundown. But how godamn many rods of horsepower am I'm gonna need to run this bastard. Them hamster gots to eat.

Yeehaw, Iz ciphering real good now.

Of course, even a dumb ole country boy knows that if the packets aren't routed, it's not fair to call it internet. I'll just chalk that up to sensationalist reporting by slashdot morons.

Re:339G over a 100G would be news (0)

Anonymous Coward | about 2 years ago | (#42126411)

Have a look at Michigan's page on this: http://linat05.grid.umich.edu/aglt2-merged/supercomputing.php [umich.edu]

I think those are routed links. It really looks like that 339G number isn't about end-to-end speed, but rather aggregate bandwidth for multiple sites and in both directions (on each link). Or possibly it's the bandwidth at the Caltech link with bidirectional flow with the other sites (I guess one of those is another node at Caltech).

So Caltech can spew hundreds of gigabits/s across the country and to Canada. I wonder if the grad student housing across the street is still stuck with cable modems.

Re:339G over a 100G would be news (0)

Anonymous Coward | about 2 years ago | (#42130535)

That is not the correct link; that was a draft sent around before we published it here on the ATLAS Great Lakes Tier2 website: http://www.aglt2.org/supercomputing.php

AC, please do not spread the link you are using.

Re:339G over a 100G would be news (0)

Anonymous Coward | about 2 years ago | (#42131135)

Linked correctly: www.aglt2.org [aglt2.org]

Invalid unit of measure... (0)

skelly33 (891182) | about 2 years ago | (#42125571)

"... or around 200,000 Blu-ray movie rips."

Does this score points with delinquent juveniles? It's a counterproductive addition that makes the whole posting appear specious with respect to academic and scientific relevance.

Re:Invalid unit of measure... (1)

sconeu (64226) | about 2 years ago | (#42125675)

And the MPAA sued both universities for ONE BILLION DOLLARS, claiming the "potential to pirate 11.25 DVDs per second".

Re:Invalid unit of measure... (1)

Anonymous Coward | about 2 years ago | (#42125747)

Do you always stink that bad or is it only because you pooped your pants? Sorry to stoop to your level of trolling, but the summary at least gives both the value in a reproducible measurement and an analogy that conveys how impressive the feat is. Why did you add your first sentence? At least I can blame you for mine.

Re:Invalid unit of measure... (0)

Anonymous Coward | about 2 years ago | (#42125795)

Everyone knows the correct unit is Libraries of Congress.

1.0 BURPS (0)

Anonymous Coward | about 2 years ago | (#42133801)

Highest Internet speed stands at 53 GB/s; that equals 1 BlUe-Ray disk Per Second.

Comcast and CenturyLink (1)

Black Art (3335) | about 2 years ago | (#42125649)

Comcast and CenturyLink would still block you after using more than 250 gigs of bandwidth.

In the future "unlimited" will mean "just a second".

Re:Comcast and CenturyLink (1)

citizenr (871508) | about 2 years ago | (#42126497)

Comcast and CenturyLink would still block you after using more than 250 gigs of bandwidth.

In the future "unlimited" will mean "just a second".

The Australian NBN, so much loved around here, is 100Mbit with a 40GB quota :D That's less than an hour at full throttle.

Scientific computing (2)

bragr (1612015) | about 2 years ago | (#42125665)

>>Put simply, the scientific world deals with vast amounts of data

This is so true. I work for a bioinformatics compute cluster and our users have no problem maxing out our InfiniBand infrastructure. Who needs load simulators when your users sometimes work with >1TB datasets?

Re:Scientific computing (0)

Anonymous Coward | about 2 years ago | (#42141419)

>>Put simply, the scientific world deals with vast amounts of data

This is so true. I work for a bioinformatics compute cluster and our users have no problem maxing out our InfiniBand infrastructure. Who needs load simulators when your users sometimes work with >1TB datasets?

Oh yeah, sorry about last week. The lab just got a new dataset.

It is because (1)

Osgeld (1900440) | about 2 years ago | (#42125689)

"It is because of these networks that we can discover new particles"

holy shit batman, I thought it was because of the "Large Hadron Collider", ok then +1 to you internet

That math is wonky (2, Interesting)

Anonymous Coward | about 2 years ago | (#42125719)

How is 339 gigabits per second equal to 53 gigabytes per second?
How is 187 gigabits per second equal to 29 gigabytes per second?
How is 96 gigabits per second equal to 15 gigabytes per second?

Glad to hear they're using 6.4 bits per byte. Next week they'll drop down to 2 bits per byte and push out another press release.
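
For reference, the straight 8-bits-per-byte conversions the parent is alluding to (a sketch; the 6.4 bits/byte figure is just the ratio of the summary's own numbers):

    # Convert the summary's Gbps figures at 8 bits per byte and compare with
    # the GB/s figures quoted alongside them.
    for gbps, claimed_GBps in [(339, 53), (187, 29), (96, 15)]:
        actual_GBps = gbps / 8
        implied_bits_per_byte = gbps / claimed_GBps
        print(f"{gbps} Gbps = {actual_GBps:.1f} GB/s "
              f"(summary says {claimed_GBps} GB/s, i.e. {implied_bits_per_byte:.1f} bits/byte)")
    # 339 Gbps = 42.4 GB/s, 187 Gbps = 23.4 GB/s, 96 Gbps = 12.0 GB/s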

Re:That math is wonky (0)

Anonymous Coward | about 2 years ago | (#42130861)

They didn't say how they did the "translation". I'm thinking they may have a raw speed in gigabits that won't match the actual payload, due to overhead from packet headers of other protocols and their own. Since they are not specific about it, we cannot say.

rule 34 (0)

Anonymous Coward | about 2 years ago | (#42125767)

so what's 4 PB of porn a day going to give us?

Re:rule 34 (2, Funny)

Anonymous Coward | about 2 years ago | (#42125807)

A severe case of chafing.

LHC data sets, eat your heart out (1)

damn_registrars (1103043) | about 2 years ago | (#42125831)

Sure, the LHC generates tons of data. But that will soon be dwarfed by some of the data sets coming from proteomics research, especially when you take into account how many proteomics labs around the world are (or soon will be) capable of generating 1TB/day.

Re:LHC data sets, eat your heart out (2)

The Master Control P (655590) | about 2 years ago | (#42125967)

To then be overtaken by the next generation radio telescopes, which will store TB/h as they test technologies for the SKA, which will store TB/second [arxiv.org] .

Seriously, no one has any idea how the fuck we're going to analyze all the data quite yet. A completely untrained n00b beats the pants off of any image classifying algorithm hands down, but how do you classify billions of objects that way?

Re:LHC data sets, eat your heart out (1)

KingMotley (944240) | about 2 years ago | (#42126981)

foxconn. Duh

Re:LHC data sets, eat your heart out (1)

jbo5112 (154963) | about 2 years ago | (#42147219)

Facebook analyzes and stores roughly 500 TB a day (Apache web logs), just to know how their service is being used. I know it's an order of magnitude easier to analyze, but efficient cluster and distributed computing does wonders. Telescope data would fit the paradigm quite well, probably even playing nicely with the uber-simple MapReduce framework.

Google figured out how to get untrained n00bs to classify images. They invented the Google Image Labeler game. IIRC, you would be paired up with someone to describe an image for 2 minutes. For each keyword that both people used, both people would get a point. Google would run leader boards for things like all time high score, highest score per day, highest score per game, etc. It was surprisingly successful, and the fruits of the game are quite evident when comparing Google's image search to Bing's.

Re:LHC data sets, eat your heart out (1)

Zondar (32904) | about 2 years ago | (#42126083)

Earlier this year, Gizmodo reported that the LHC generates a petabyte of data every second. Actually, that's just one of the two primary experiments.

http://us.gizmodo.com/5914548/the-large-hadron-collider-throws-away-more-data-than-it-stores [gizmodo.com]
http://www.youtube.com/watch?v=0mgXNgD3JFU [youtube.com]

They have to preprocess the raw data through a filter that determines whether the collision is interesting or not. Then they further pare that down and only store a tiny fraction of the raw data generated every day. If they were able to store and distribute that raw data, who knows what kinds of information are being thrown away simply because we can't store it and can't transmit it offsite fast enough.

Re:LHC data sets, eat your heart out (2, Interesting)

Anonymous Coward | about 2 years ago | (#42129383)

That limit doesn't really have to do with offsite transfer speed, but with the local speed of getting it onto a disk a few meters away from the machine. Faster computers and storage technology are needed for that rather than faster long-distance communication (unless a particular advance in the latter helps with dedicated lines over, relatively speaking, very short distances). Fast long-distance operation instead improves the rate at which researchers can query and run tests on the large data set.

Also, they do have some idea of what they are throwing away. The same filter that decides whether something is interesting or not will, with a predetermined frequency, let some collisions' data through without any filtering so they can check the effectiveness of the filtering.

speed of light (0)

Anonymous Coward | about 2 years ago | (#42125835)

On average they transfer 1 bit over a great distance in about the time it takes light to travel 1 millimeter.

the big slurp (1)

epine (68316) | about 2 years ago | (#42125981)

Surprises me they chose an island as the northern terminus. I'm suspecting the undersea observation network has something to do with it.

OS, application? (2)

manu0601 (2221348) | about 2 years ago | (#42126285)

Do we know anything about the OS and application setup? Achieving such speed is not obvious, and the OS may kill performance by copying data between user and kernel space.

Re:OS, application? (1)

AmiMoJo (196126) | about 2 years ago | (#42128549)

I'd imagine they are counting the raw network to RAM via DMA speed, not the speed at which the CPU can interpret the data. The disk transfer rate speed is a better indication of that, although presumably the disk DMAs the data into RAM and then the NIC DMAs it out again so the CPU is relatively unloaded.

Re:OS, application? (1)

manu0601 (2221348) | about 2 years ago | (#42138091)

DMA will not solve all your problems. If you use the POSIX API, for instance, you will call read(2) or recv(2) to get data into a buffer you provide. You do not know in advance the size of the data chunk, and if your buffer is too small, data has to wait in the kernel for you to retrieve it. Same problem if you call read(2) after the data has arrived.

In both cases you must have a copy between kernel and userland, regardless of how smart the implementation is with DMA.
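
To illustrate the parent's point in code (a minimal sketch, not from the article; Python's socket API stands in for the C read(2)/recv(2) calls being described):

    # recv(2)-style reads hand the kernel a caller-supplied buffer, so every
    # chunk is copied from kernel space into userland no matter how cleverly
    # the NIC DMA'd the packets into kernel memory in the first place.
    import socket

    def drain(sock: socket.socket, chunk_size: int = 1 << 20) -> int:
        """Read until EOF, reusing one preallocated buffer; return bytes received."""
        buf = bytearray(chunk_size)        # userland buffer the kernel copies into
        view = memoryview(buf)
        total = 0
        while True:
            n = sock.recv_into(view)       # one kernel-to-user copy per call
            if n == 0:                     # peer closed the connection
                break
            total += n                     # buf[:n] now holds this chunk
        return total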

Re:OS, application? (1)

Alarash (746254) | about 2 years ago | (#42178317)

You are correct; it's not really viable to set up a whole bunch of PCs for this, if only because you'd have to aggregate the results. You need to use dedicated load generators for that kind of testing. The aforementioned CERN uses this card (http://www.spirent.com/Solutions-Directory/Spirent-TestCenter/HyperMetrics_mx_40-100 [spirent.com] ) for this. I know because we sold it to them.

In South Korea (1)

Jah-Wren Ryel (80510) | about 2 years ago | (#42126361)

Why is this news?

In South Korea anyone can get 350Gbps fibre to their home for less than $50/month!

Re:In South Korea (0)

Anonymous Coward | about 2 years ago | (#42126467)

why do Koreans lie so much?

Re:In South Korea (0)

Anonymous Coward | about 2 years ago | (#42126909)

why do Koreans lie so much

Why are bigots so humor handicapped?

Re:In South Korea (0)

Anonymous Coward | about 2 years ago | (#42127063)

What do they get in North Korea?

To be a speed record (1)

rossdee (243626) | about 2 years ago | (#42126477)

Doesn't it have to be both ways?

(Like with land, water and air speed records...)

powerball dot com could use this trick (0)

Anonymous Coward | about 2 years ago | (#42126507)

site went down in a ball of flames at 10:59 est
what's worse is i didn't win

339Gb/sec !!! (0)

Anonymous Coward | about 2 years ago | (#42126709)

339Gb/sec, where do they find enough PRON to upload and download?

New 339 gbps/sec accounts available (1)

CHRONOSS2008 (1226498) | about 2 years ago | (#42126753)

1 GB data cap and a cost of 399.99/month, and you have to sign up for a trillion years ....
Who cares? It does none of us any good with the restrictive long-term copyright crap...
It's not like anyone on the net is gonna be needing bandwidth unless you're pirating, right?

Caution please (1)

U8MyData (1281010) | about 2 years ago | (#42126803)

How fast we can move data is in some way connected to how fast life moves in every respect. There is a point when "cool" is truly a four-letter word. Not that I am arguing against it; it's just that there are similar arguments within the AI industry. In more and more ways our achievements are outstripping our ability to deal with the consequences.

And just today (1)

greatgreygreengreasy (706454) | about 2 years ago | (#42126819)

I switched a guy over from copper DSL, max 3Mb/s, to our new fiber network, max TEN Mb/s!

UTMA [utma.com]

Massive data (0)

Anonymous Coward | about 2 years ago | (#42126957)

When I was working at the VLA it was said to never underestimate the bandwidth of a station wagon full of magnetic tapes. I guess a more up-to-date version would talk about hard drives instead.

Re:Massive data (1)

Eugene (6671) | about 2 years ago | (#42127643)

I think tape drives still win (LTO 5 is 1.5TB uncompressed, the new LTO 6 is 2.5TB).

Internet vs Wagon-net (1)

jbo5112 (154963) | about 2 years ago | (#42150673)

With a quick search, I found a Ford Flex listed with 83.2 cubic feet of space and the dimensions of an LTO cartridge. 83.2 ft^3/((102 x 105.4 x 21.5) mm^3)=10192 tape cartridges, nearly 25 PB using LTO 6 w/o compression. Google says the drive between the colleges is 19 hours and 48 minutes. Neglecting copy times, it works out to about 366 GB/s, more than 8x the speed.

In reality, you can stack tapes in the passenger seat, but if you want to have any hope of sorting the tapes back out, you'll need to pack them much less efficiently. Using the most compact media cases I could find, I figure you can carry maybe 90 cases, each holding 36 tapes, for a total of 3240 tapes. That drops the data to 8.1 PB per car. Best case scenario, you have 3240 drives at each location, reducing write times to the 4.55 hours for a single tape. Worst case scenario is that you only have 270 tape drives or less. Then the tape drive can't keep up with the network speed. Add in some time to load/unload tapes in the car, stop for gas and read the tapes back in, and you're at 29 hours easily. Now your speed is down to 81 GB/s or 650 Gbps. It would be a lot of work (and well over $1,000 each for the tape drives), but the station wagon wins, being almost twice as fast. The break-even time for this trip carrying 8.1 PB is 55 hours and 40.6 minutes (including read/write times).

For 3.5" hard drives, you'll get 60% more data (4TB vs 2.5TB) in a 64.7% larger package (147mm x 102mm x 25.4mm). I didn't research similar hdd cases, but I would think you would want bulkier packing for cushioning. LTO 6 isn't on the market quite yet though. LTO 5 only gives you 60% of the storage or 33.4 hrs to beat this Internet speed. Until we have helium filled drives, hard drives would probably be in the middle somewhere closer to LTO 6.

And we all know how they tested it. (0)

Anonymous Coward | about 2 years ago | (#42128975)

They just let every Caltech student download all the porn they want at once.

Oooh, to be logged in instead of an anonymous poster.

Data (1)

DarthVain (724186) | about 2 years ago | (#42130045)

My big question, which the article seems a bit light on: other than the "size" of the data, they make no mention of what kind of data was used.

It's all very well and good if they used the randomized contents of a hard drive and then just replicated it. It is something else entirely if they used very specialized data for this particular test.

i.e. that speed works very well, but only if the data consists of all 1's.

Also, there are some pretty limited applications for this technology at present unless computers get faster and storage gets massive. I mean, you would fill just about anything in no time, and there are limits to how much data can be processed by simply streaming it in.

Michigan was involved too! (0)

Anonymous Coward | about 2 years ago | (#42130669)

ATLAS Great Lakes Tier2 at the University of Michigan also contributed to this demonstration. We are mentioned in the article text and our site is in the diagrams and network throughput graphs, but we are not in the first-paragraph summary used by the article poster.

Here is our page on the event:
http://www.aglt2.org/supercomputing.php

Re:Michigan was involved too! (0)

Anonymous Coward | about 2 years ago | (#42131473)

With html link this time: www.aglt2.org [aglt2.org]

There is also Next Generation Sequencing (1)

quarkie68 (1018634) | about 2 years ago | (#42131215)

You should also mention the multi-petabyte world of life scientists and Next Generation Sequencing applications. This is just another frontier in data-intensive computing.

"Or around 200,000 Blu-ray movie rips..." (1)

It's the tripnaut! (687402) | about 2 years ago | (#42135529)

This assumes a Blu-ray movie is 20GB, which it is not. From Wikipedia [wikipedia.org]:

"Conventional (pre-BD-XL) Blu-ray Discs contain 25 GB per layer, with dual layer discs (50 GB) being the industry standard for feature-length video discs."

Hyperbole is good but should not be at the expense of the truth.
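
The difference the parent is pointing at, in numbers (a sketch; 4PB/day is the summary's figure, and the per-movie sizes are the 20GB the summary implies and the 50GB dual-layer disc from the Wikipedia quote):

    # 4 PB per day divided by two different "Blu-ray movie" sizes.
    daily_bytes = 4e15
    for per_movie_gb, label in [(20, "summary's implicit 20 GB rip"),
                                (50, "dual-layer 50 GB disc")]:
        movies = daily_bytes / (per_movie_gb * 1e9)
        print(f"{movies:,.0f} movies/day ({label})")
    # 200,000 vs 80,000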