
Bandwidth Challenge Results

samzenpus posted more than 8 years ago | from the greased-lightning dept.

Networking 111

the 1st sandman writes "SC2005 published the results of several challenges, including bandwidth utilization. The winner (a Caltech-led team from several institutes) was measured at 130 Gbps. On their site you can find some more information on their measurements and the equipment they used. They claimed a throughput of several DVD movies per second. How is that for video on demand!"


home use (2, Interesting)

Chubby_C (874060) | more than 8 years ago | (#14105766)

how long before these ultra-high speed networks are rolled out to home users?

Re:home use (1)

freewaybear (906222) | more than 8 years ago | (#14105775)

I wish my ISP would give me those kinda pipes...

Re:home use (1)

stox (131684) | more than 8 years ago | (#14106138)

Well, if past is prologue, you're looking at about a decade. Kinda sorta. How many home users are actually using GigE now?

Never ! (1)

alshehab (834707) | more than 8 years ago | (#14106400)

Only when there exists a business model that profits from rolling these out to home users.

Re:home use (1)

timeOday (582209) | more than 8 years ago | (#14106628)

I'd be happy if they just got these big pipes close enough to my house that everybody could simultaneously pull 10 mbps. That's about enough for 2 simultaneous video streams, with very good quality assuming a decent codec.

My Comcast service already hits 4 mbps whenever I ask it to, so it feels within reach but I guess we'll see.

Re:home use (1)

wwwillem (253720) | more than 8 years ago | (#14107449)

Oh, you can have it now, but first make some space in your basement for this [caltech.edu] equipment. :) Better ask your wife first.....

LOC'ed in. (2, Funny)

Anonymous Coward | more than 8 years ago | (#14105770)

"They claimed they had a throughput of several DVD movies per second. How is that for video on demand!"

How many Library Of Congress'es is that?

Re:LOC'ed in. (4, Informative)

phatslug (878736) | more than 8 years ago | (#14105996)

130 Gbps = 0.0158691406 terabytes per second (using google) [google.com.au]
1 Library of congress is 20TB
1 Fortnight is 1209600s
0.0158691406 x 1209600 = 19195.31247
At 130 Gbps, after 1 fortnight 19195.31247 TB would be transferred.
19195.31247 / 20 = 959.77 Libraries of Congress per fortnight.
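The arithmetic above can be sanity-checked in a few lines. This is a rough sketch using the same conventions as the comment: 1 LoC = 20 TB and 1 TB = 1024 GB.

```python
# Reproduce the Libraries-of-Congress-per-fortnight figure above.
# Conventions from the comment: 1 LoC = 20 TB, 1 TB = 1024 GB.

GBPS = 130                       # link rate, gigabits per second
TB_PER_SEC = GBPS / 8 / 1024     # gigabits -> gigabytes -> terabytes
FORTNIGHT_S = 14 * 24 * 3600     # 1,209,600 seconds
LOC_TB = 20                      # one Library of Congress, in terabytes

loc_per_fortnight = TB_PER_SEC * FORTNIGHT_S / LOC_TB
print(round(loc_per_fortnight, 2))  # 959.77
```

The `units` reply below lands on 982.8 instead because it uses decimal rather than binary multipliers.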

Re:LOC'ed in. (0)

Anonymous Coward | more than 8 years ago | (#14107103)

$ units

You have: 130 Gbps
You want: 20TB/fortnight
                * 982.8

Re:LOC'ed in. (0)

Anonymous Coward | more than 8 years ago | (#14107432)

Ah yes, using fortnights as a time unit. On VMS systems, one parameter uses fortnights (or small fractions of them) as a time unit. This parameter controls how long the system will wait for you to enter the correct time after a reboot, generally only on the first boot, or if the system has been down for a rather long time and can't trust the hardware clock to really be useful.


$ mcr sysgen
SYSGEN> SHOW TIMEPROMPTWAIT
Parameter Name    Current  Default  Min.  Max.  Unit        Dynamic
TIMEPROMPTWAIT      65535       -1     0    -1  uFortnight
SYSGEN>

Re:LOC'ed in. (4, Funny)

Doppler00 (534739) | more than 8 years ago | (#14106599)

More importantly, how many letters from MPAA lawyers does that equal per second?

Sponsors? (4, Funny)

simpleguy (5686) | more than 8 years ago | (#14105772)


The Bandwidth Challenge, sponsored by the good fellows at the MPAA and RIAA. I think they forgot to put their logos on the sponsor page.

Re:Sponsors? (1)

corcoranp (892008) | more than 8 years ago | (#14105799)

Also brought to you by Microsoft (in preparation for XBox720)

Microsoft - What virus do you want today?

Re:Sponsors? (0)

Anonymous Coward | more than 8 years ago | (#14106073)

I'D BUY THAT FOR A DOLLAR!

Probably not enough DVDs/sec (5, Insightful)

xtal (49134) | more than 8 years ago | (#14105791)

I love arbitrary metrics..


They claimed they had a throughput of several DVD movies per second. How is that for video on demand!"


Given you might need to serve a few thousand people an hour (or more?), I'd say it's still got awhile to go. Kinda sobering, when you think about it. Shiny discs and station wagons are going to be around for awhile.

Re:Probably not enough DVDs/sec (2, Insightful)

Lord Maud'Dib (611577) | more than 8 years ago | (#14105823)

That's whole movies per second they are talking about. So even at only 2 movies per second, that's 7,200 complete movie downloads per hour.

Re:Probably not enough DVDs/sec (1)

rtaylor (70602) | more than 8 years ago | (#14106173)

7200 movies per hour won't even cover a small city during peak usage. How many cable companies do you know of that have at most 8k clients? Assume for every household with the TV off there is another with 2 sets on different channels.

Re:Probably not enough DVDs/sec (1)

timeOday (582209) | more than 8 years ago | (#14106653)

7200 movies per hour won't even cover a small city during peak usage.
I have a crazy idea, maybe the cable company could use more than 1 single network cable.

Besides, for the foreseeable future video on demand will be pay per view, so the number of simultaneous users will be far fewer than the number of households.

Re:Probably not enough DVDs/sec (1, Informative)

Anonymous Coward | more than 8 years ago | (#14106839)

You retards.

You don't have them download the entire fucking DVD in one second or in one hour. Who, other than nerds, wants to fill up their hard drives with movies they could simply watch at any time over the internet with a small subscription?

You stream it to them.

On a DVD movie the HIGHEST bitrate you're going to see is around 10Mbps.

If you had a 130Gbps pipe, that would allow you to serve 13 thousand customers on one connection, and that is at the highest quality setting available on DVD movies nowadays. More likely you're dealing with movie data between 4 and 8 Mbps, and if you take into account that people aren't all going to be watching at the same time, you can probably comfortably service around 40 thousand people.

If you use MPEG-4 compression and accept 'TV' quality formats you could probably serve closer to 100 thousand people, with no hiccups no matter how many people wanted to watch a movie.

Then take into account that you can use multicast (technology to stream the same data stream to multiple people), perhaps with a delay of up to 5 minutes of play time to sync up customer requests and make multicasting more effective.

At each fork in the network the router would repeat the same output on both ports, and so on until all the customers receive their own stream; then you're dealing with the ability to literally take care of MILLIONS of people over a 130Gbps link.

People have been doing multicasting for a while now.

If you want to have people download rather than use a subscription service, then stuff like BitTorrent will accomplish the same thing.

The way to get rid of the threat of piracy, then, is not DRM; it's to bundle subscription services at the ISP level.

If you had a deal with an ISP for a high-speed internet link and you could watch any movie you liked, as much as you liked, for only an extra 7-10 dollars (or maybe a dozen movies for 5 bucks a month), would you take it? I know I would. It would eliminate any desire to pirate for all but the most depraved people, whom the companies wouldn't make any money off of anyway.
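The capacity claims in the comment above are simple division; here is a sketch. The 10 Mbps peak figure is from the comment; the 1.5 Mbps rate for MPEG-4 "TV quality" is my assumption, not something the comment states.

```python
# Raw unicast stream counts for a 130 Gbps pipe at various bitrates.
# 10 Mbps is the DVD peak cited above; 1.5 Mbps for MPEG-4 "TV quality"
# is an assumed figure for illustration only.

LINK_GBPS = 130

def streams(mbps: float) -> int:
    """Dedicated per-viewer streams that fit in the link."""
    return int(LINK_GBPS * 1000 // mbps)

for label, mbps in [("DVD peak", 10), ("DVD typical", 6), ("MPEG-4 TV", 1.5)]:
    print(f"{label}: ~{streams(mbps):,} simultaneous viewers at {mbps} Mbps")
```

The 10 Mbps row reproduces the "13 thousand customers" figure exactly; the comment's 40 and 100 thousand numbers additionally assume not everyone watches at once.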

re: multicast (1)

DrSkwid (118965) | more than 8 years ago | (#14107024)

Our cable co.'s VoD service has Pause & Rewind etc.

You get random access to the whole movie for 24 hours for about $4.

A good deal, I think, and I regularly watch a movie.

Re:Probably not enough DVDs/sec (1)

askegg (599634) | more than 8 years ago | (#14106840)

Well that depends on the architecture. If every set top box has a bittorrent client it could help load popular shows onto others in the neighbourhood without leaving your local exchange. Plus, someone will crack how to do this with live video feeds at some stage.

Re:Probably not enough DVDs/sec (1)

baka_vic (769186) | more than 8 years ago | (#14106919)

Peercast [peercast.org] does have some live video feeds being p2p-broadcasted. On the peercast yellow pages [peercast.org] , you can find live anime feeds running for a few hours a day.

Re:Probably not enough DVDs/sec (1)

Tony Hoyle (11698) | more than 8 years ago | (#14107002)

Note that that's transmitting the *whole* dvd for download.

VOD doesn't do that - it streams it in realtime, so you're talking about being able to serve many tens of thousands of customers simultaneously.

Take into account multicast and align each 'broadcast' to a minute granularity (so you only need 90 simultaneous streams of the most popular movies to serve everyone) and there's more than enough bandwidth to scale to even the largest city.

Even if you were wanting to download the whole DVD to a hard disk (assuming for the moment you could build a hard disk that could store a DVD in half a second), you wouldn't transmit the same movie multiple times. The same economies of scale apply: the popular stuff actually ends up taking proportionately less bandwidth than the obscure stuff (which might only be transmitted per-customer).
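The minute-granularity scheme above reduces to a ceiling division; a quick sketch:

```python
# With a new multicast "broadcast" of a title starting every minute,
# the number of concurrent streams per title is bounded by its running
# time, not by the audience size.

def streams_per_title(runtime_min: int, granularity_min: int = 1) -> int:
    """Concurrent streams needed so no viewer waits more than one slot."""
    return -(-runtime_min // granularity_min)  # ceiling division

print(streams_per_title(90))      # 90, as in the comment above
print(streams_per_title(120, 5))  # 24 with 5-minute granularity
```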

Re:Probably not enough DVDs/sec (1)

TheRaven64 (641858) | more than 8 years ago | (#14107518)

Don't forget caching, particularly predictive caching (my very own research area, as it happens). You can get some nice clues from the UI before someone starts playing the DVD. For one thing, you could have the first minute of every film cached in a local (where local is up to two hops away) device, and stream that at the start. Also, as soon as a user starts reading the description of a film you could join the correct multicast group (use some kind of historical access prediction to tell whether this person will skip the credits, and start caching from after the credits as well if the probability is >20%).

With intelligent caching you should easily get the perceived latency down to under 20 seconds - probably under 10. You could even fudge it a bit by playing a trailer at the start of every movie...

Anyone want to pay me to design such a system?
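A toy sketch of the browse-triggered prefetch described above. All names and the probability model are hypothetical; only the 20% threshold comes from the comment.

```python
# Speculatively join a title's multicast group as soon as the viewer
# opens its description page, if historical data says playback is
# likely enough. Purely illustrative; the API names are made up.

JOIN_THRESHOLD = 0.20  # the ">20% probability" heuristic above

class PredictiveCache:
    def __init__(self, play_probability: dict):
        # title -> historical chance that browsing leads to playback
        self.play_probability = play_probability
        self.prefetching = set()

    def on_browse(self, title: str) -> bool:
        """Called when the viewer starts reading a film's description."""
        if self.play_probability.get(title, 0.0) > JOIN_THRESHOLD:
            self.prefetching.add(title)  # i.e. join the multicast group
            return True
        return False

cache = PredictiveCache({"popular": 0.60, "obscure": 0.05})
print(cache.on_browse("popular"))  # True: start caching early
print(cache.on_browse("obscure"))  # False: not worth the bandwidth
```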

Re:Probably not enough DVDs/sec (1)

Wasteofspace (777087) | more than 8 years ago | (#14105829)

Assuming 2 DVDs per second, that's around 7,200 DVDs in 1 hour. Assuming streaming, a movie takes around 1.5 hours, which is around 10,800 users per 1.5 hours. I'd say it's feasible.

Re:Probably not enough DVDs/sec (2, Interesting)

Varun Soundararajan (744929) | more than 8 years ago | (#14105875)

The math on the page is very approximate.

Let's take this scenario. There are around 10,000 users watching the movie (that's an average - we are not looking at Star Wars kind of popularity).

Each user needs at least 100 mbps or more for an average viewing (this too is very conservative - consider HDTV).

10,000 * 100 mbps= 1,000,000 mbps=100 gbps (take 1 gbps=1000 mbps )

now what are we looking at? serving 10,000 people? eh!

Re:Probably not enough DVDs/sec (1)

bommai (889284) | more than 8 years ago | (#14105941)

HDTV encoded with H.264 can be had for about 8Mbit/sec not 100. -Siva

Re:Probably not enough DVDs/sec (1)

doddi (881823) | more than 8 years ago | (#14106001)

Not quite, MPEG2 HDTV is usually at 13-15 Mbps (max 19 Mbps).
Plus, you made an error, your calculation results should have been 1000 Gbps.
So 10,000 users would require 10,000 * 13 Mbps = 130,000 Mbps = 130 Gbps.

Coincidentally, that is what Caltech achieved. :)

By using a better encoding (H264) they might even double that number.

Re:Probably not enough DVDs/sec (0)

Anonymous Coward | more than 8 years ago | (#14106438)

having a network that can serve 10000 people at the max is really a good start, but not a solution.

Re:Probably not enough DVDs/sec (1)

timeOday (582209) | more than 8 years ago | (#14106642)

100 mbps is ridiculous. DVD is 5 mbps.

Re:Probably not enough DVDs/sec (2, Insightful)

Absolut187 (816431) | more than 8 years ago | (#14106049)

It would make a lot of sense for video over internet providers to use a bit-torrent style protocol. Assuming that a lot of people would be watching the same movie at the same time, there would be a huge peer network, especially for new releases.

Re:Probably not enough DVDs/sec (1)

smallpaul (65919) | more than 8 years ago | (#14106834)

Given you might need to serve a few thousand people an hour (or more?), I'd say it's still got awhile to go.

"Several per second" is equivalent to "a few thousand an hour."

Re:Probably not enough DVDs/sec (1)

sysbot (238421) | more than 8 years ago | (#14106935)

This can actually be solved by using a multicast network, because the sender is only required to send out the same amount of traffic, but with potentially unlimited users/hosts. Much like how satellite works, the server sending the stream doesn't need to know who the receivers are.

shout out to my folks (1)

Raleel (30913) | more than 8 years ago | (#14105796)

I'm very happy to see that second place went to my coworkers at PNNL. I don't know about Caltech's, but PNNL's was to disk as well. Impressive feats all around.

I don't want to denigrate the Caltech crew either, as I know Stephen Low and find that he's one of the nicest guys I've ever gotten to work with.

"several" (0)

Anonymous Coward | more than 8 years ago | (#14105801)

in this case, several DVDs being equal to 2 (130 Gbps / 8 bits/byte / 8 GB/movie ~= 2)

They sure know how to make a ... (1)

arrrrg (902404) | more than 8 years ago | (#14105812)

figure [caltech.edu] .

Re:They sure know how to make a ... (0)

Anonymous Coward | more than 8 years ago | (#14105887)

Dude... oh man... dude... seriously....

Mr. Phelps, (3, Funny)

DPL (215366) | more than 8 years ago | (#14105820)

Your mission should you choose to accept - is to invoke the power of /.

This packet will self-destruct in 8..7..6..5..

where's all the (1)

soapdog (773638) | more than 8 years ago | (#14105828)

prOn?

Re:where's all the (0)

Anonymous Coward | more than 8 years ago | (#14105995)

You didn't think they were sending DVDs of "Blue's Clues the Movie", did you?

Whatever you do... (4, Funny)

Dark Paladin (116525) | more than 8 years ago | (#14105830)

Don't tell the MPAA - they already tell people you can download an entire DVD movie over a 56K phone link in 15 minutes. Imagine what they would tell people they lose per second with this new high-speed connection!

Re:Whatever you do... (1)

necromcr (836137) | more than 8 years ago | (#14106585)

Don't worry... They probably moved it to /dev/null... after making 'backup' copies.

In other news... (0, Redundant)

jleq (766550) | more than 8 years ago | (#14105840)

They claimed they had a throughput of several DVD movies per second.
In other news, the MPAA released a statement today saying...

Just wait until.. (-1, Redundant)

Anonymous Coward | more than 8 years ago | (#14105846)

..the MPAA gets whiff of this :P

Mandatory Spaceballs... (3, Funny)

Electr!c_B4rd_Qu!nn (933533) | more than 8 years ago | (#14105847)

"They've Gone Plaid!"

Re:Mandatory Spaceballs... (1)

scott_karana (841914) | more than 8 years ago | (#14106669)

"They've gone to plaid!"

Sorry, I couldn't help but fix that. :)

farthings per furlong (5, Interesting)

Anonymous Coward | more than 8 years ago | (#14105849)

Or Libraries of Congress per second. DVDs per second isn't a useful rate, unless you're transferring lots of DVDs in a series - which few people do. The much more interesting bandwidth unit is "simultaneous DVDs", multiples of 1.32MBps, 1x DVD speed [osta.org] (9x CD speed). 130GBps is something like 101KDVD:s, which means an audience could watch 101 thousand different DVDs on demand simultaneously over that pipe. That's probably enough for most American cities to have fully interactive TV.

Re:farthings per furlong (1)

adamgoossens (932066) | more than 8 years ago | (#14106320)

Providing my numbers aren't off, I'm pretty sure yours are (the pesky bits/bytes thing):

It's actually 131Gb/s (Gigabits per second), which works out to about 16.4 GB/s (Gigabytes/sec).

16.4 / 0.00132 = approx. 12,424 users/sec streaming 1x speed DVD data.

Or, using your metrics, about 12.4 KDVDs/sec ;)
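The bits-versus-bytes correction works out like this, using the comment's own figures of 131 Gb/s and 1.32 MB/s for 1x DVD speed:

```python
# Convert the link rate from gigabits to gigabytes, then divide by the
# 1x DVD data rate. The small difference from the ~12,424 above comes
# from that comment rounding 16.375 GB/s up to 16.4 first.

link_gbps = 131                 # gigabits per second
link_gBps = link_gbps / 8       # 16.375 gigabytes per second
dvd_1x_gBps = 1.32 / 1000       # 1.32 MB/s expressed in GB/s

streams = link_gBps / dvd_1x_gBps
print(f"~{streams:,.0f} simultaneous 1x DVD streams")  # ~12,405
```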

So that means.. (4, Funny)

craznar (710808) | more than 8 years ago | (#14105859)

... you could transfer the entire library of quality hollywood movies in 4 seconds.

What do we do next ?

Re:So that means.. (0)

Anonymous Coward | more than 8 years ago | (#14106019)

4 seconds? You give them much too much credit.

In other news, it's said you can transfer whole Xbox 360 games in nanoseconds. No one can play them, though; they're busy resetting the crashbox 360.

Re:So that means.. (0)

Anonymous Coward | more than 8 years ago | (#14106274)

... you could transfer the entire library of quality hollywood movies in 4 seconds.
What do we do next ?

Buy a shitload of DVD-R's

Re:So that means.. (1)

afidel (530433) | more than 8 years ago | (#14106680)

If you can't find more than a couple films on this list [imdb.com] then you just don't like film. Hollywood might put out a lot of crap, but there are enough diamonds in there to pull out a handful of really good movies a year, not to mention all the fun, crappy filler like action flicks =)

Bandwidth? (1, Interesting)

Anonymous Coward | more than 8 years ago | (#14105863)

I'm more interested in the media they'd be writing to at those speeds.

RAM (1)

imunfair (877689) | more than 8 years ago | (#14106563)

Probably one of those RAM "hard drives" I saw in a Slashdot article a while back. IIRC they have a 4GB capacity max right now (four 1 gig sticks of RAM).

While googling in an attempt to find what I was thinking about, I found this article from a year ago about a HUGE one of these bought by the US government for 'database crosschecking' (spying on people in real time, for those of you wearing your tinfoil hats):

http://www.techworld.com/storage/news/index.cfm?NewsID=1176 [techworld.com]

Enjoy.

Meanwhile... (1)

Rayin (901745) | more than 8 years ago | (#14105869)

I am downloading the SWG Trial at 80kb/s on "high-speed" internet. And for the record, my cable modem was faster 4 years ago than it is today, on average. If we are making progress, it sure as hell isn't in the commercial sector.

DVDs/sec? How about (2, Funny)

achurch (201270) | more than 8 years ago | (#14105871)

They claimed they had a throughput of several DVD movies per second.

That's nice, but what is it in Libraries of Congress per microfortnight?

Re:DVDs/sec? How about (1)

glowworm (880177) | more than 8 years ago | (#14106065)

Hmmm,

1 Library of Congress = 10 TB (the LoC figure is base 10) = 10^13 or 10,000,000,000,000 bytes
1 Fortnight = 1,209,600 seconds
1 Microfortnight = 1,209,600 * 10^-6 = 1.2096 seconds

So one LoC per microfortnight would need: 10^13 bytes / 1.2096 s = 8.26719577 * 10^12 bytes/s = (8.26719577 * 10^12) * 8 bps

Now, taking 130 Gbps as 130 * 1000 * 1000 * 1024 = 133,120,000,000 bps, we get: 133,120,000,000 / ((8.26719577 * 10^12) * 8)

Which is:

0.0020127744 LOC/mFtnght

The final line said it all folks! (1)

Almost-Retired (637760) | more than 8 years ago | (#14105886)

That was a credit to the Hudson Bay Fan Company for keeping all that smoking data cool.

But don't tell the RIAA or the MPAA; they'll have a press release out yet tonight about how much they lost to piracy. But I'll bet it's never crossed their minds that if they'd quit treating the customer like a thief, give him an honest hour's entertainment for an honest hour's wages, and let us see how much the talent got out of that, we'd be a hell of a lot happier when we do fork over.

We don't like the talent to starve when they've got a million seller, and nearly $20 for a friggin CD is outrageous.

--
Cheers, gene

The final line said it all folks!-Just say NO! (0)

Anonymous Coward | more than 8 years ago | (#14105942)

"But don't tell the RIAA or the MPAA; they'll have a press release out yet tonight about how much they lost to piracy. But I'll bet it's never crossed their minds that if they'd quit treating the customer like a thief, give him an honest hour's entertainment for an honest hour's wages, and let us see how much the talent got out of that, we'd be a hell of a lot happier when we do fork over."

Since we're doing another "I hate the RIAA/MPAA/some game producer" rant tonight: when exactly was the demarcation point between "like a thief / not like a thief", "price I like / price I don't like", and piracy? Or to put it in a chicken/egg context: which came first? As for what the talent makes, maybe the "talent" feels that it's none of your business. How much do YOU make?

"We don't like the talent to starve when they've got a million seller, and nearly $20 for a friggin CD is outrageous."

Not buying things, consumer, does amazing things for both situations. So why are you all still buying... and complaining?

Missing infrastructure (4, Interesting)

aktzin (882293) | more than 8 years ago | (#14105891)

"They claimed they had a throughput of several DVD movies per second. How is that for video on demand!"

This is nothing but an impressive statistic until ISPs provide this kind of bandwidth into homes (the infamous "last mile" connection). Not to mention that even the fastest hard drives available to consumers can't write data this fast.

Re:Missing infrastructure (1)

tomstdenis (446163) | more than 8 years ago | (#14106095)

It doesn't have to go that far. If all of the backbone peers have super uber fast connections that alone can speed things up.

100s of millions of people at 5Mbps == a heck of a lot of load.

Though yeah, Gbps to the home would be nice...

Tom

Re:Missing infrastructure (1)

NailedSaviour (765586) | more than 8 years ago | (#14107191)

While the lack of "last-mile" bandwidth is a concern, I don't think it's a killer.

You might not be able to say "I want to watch a movie right now" and get it on demand, but what's wrong with a Netflix-like queue of films you would like to see and a "trickle" download system?

You could list 5 films you liked, and the system could merrily go off and trickle-download 20 films that people who also liked the first 5 liked. You don't have to watch them all, or even pay for them.

HDD space is becoming less and less of a concern and you could use as little or as much of your available bandwidth as you like, depending on how often you want to watch.

Yes, but (0, Redundant)

anamexis (753041) | more than 8 years ago | (#14105896)

How many Libraries of Congress per second?

Re:Yes, but (1)

sound+vision (884283) | more than 8 years ago | (#14105956)

Or football fields per second.

Re:Yes, but (1)

Comatose51 (687974) | more than 8 years ago | (#14106103)

I think you meant how many LoC per fortnight. You don't want to confuse people with a standardized term like the second.

What you say? (0)

Anonymous Coward | more than 8 years ago | (#14105909)

Let's see how they handle a Slashdotting...

Measurements (0)

Anonymous Coward | more than 8 years ago | (#14105926)

"a throughput of several DVD movies per second"

How many trucks per hour, loaded with DVDs, speeding down the highway is that?

hey i want one of those (1)

detachment2702 (813035) | more than 8 years ago | (#14105947)

If only I had a HDD big enough to download the internet auuuuuuuugh downloading porn at the speed of liiiiiiiiight!

A new form of Denial Of Service attacks? (0)

Anonymous Coward | more than 8 years ago | (#14105955)

Instead of sending lots of packets filled with garbage or insults, you could slam your enemies with entire copies of Zoolander!

A chain of airplanes has more throughput (3, Insightful)

davidwr (791652) | more than 8 years ago | (#14105962)

Imagine you "owned" O'Hare Int'l and the Atlanta airport, two of the busiest airports in America.

Imagine you had as many big planes as possible taking off from each airport and landing at the other every day.

Imagine they were all filled with hard disks or DVDs.

Now THAT is a lot of bandwidth.

Latency sucks though.

The moral of the story:
Bandwidth isn't everything.

Re:A chain of airplanes has more throughput (1)

s-twig (775100) | more than 8 years ago | (#14106030)

If you want improved latency upgrade to an F-111.

They used Linux 2.6 kernel (3, Interesting)

RedBear (207369) | more than 8 years ago | (#14105980)

Here I was expecting to read about one of the BSDs again (like when they used NetBSD to break the Internet2 Land Speed Record), but it looks like this time they used an "optimized Linux (2.6.12 + FAST + NFSv4) kernel". I'm not well informed on speed records held by various versions of the Linux kernel, so maybe someone else can tell us whether this is something special for Linux or more run-of-the-mill. I had the impression that professional researchers usually prefer the BSDs for this kind of work. Will this put Linux on the map for more high-end research like this?

Impressive work, either way.

Re:They used Linux 2.6 kernel (0)

Anonymous Coward | more than 8 years ago | (#14106262)

This wasn't a single computer speed challenge, but an aggregate challenge. Each of the machines in this test was only doing ~3Gbit/sec.

Re:They used Linux 2.6 kernel (0)

Anonymous Coward | more than 8 years ago | (#14106293)

NFS??? Wouldn't it have been easier just to send data over a socket and measure throughput?

Re:They used Linux 2.6 kernel (4, Interesting)

jd (1658) | more than 8 years ago | (#14106385)

Already is [lightfleet.com] . From the looks of Lightfleet, and some of the other people at SC2005 who didn't have tables but DID have information, Linux is being taken very seriously in the bandwidth arena.


The problem with latency is that everyone lies about the figures. I talked to some of the NIC manufacturers and got quoted the latency of the internal logic, NOT the latency of the card as a whole, and certainly not the latency of the card when integrated. There was one excellent-sounding NIC - until you realized that the bus wasn't native but went through a whole set of layers to be converted into the native system, and that the latency of these intermediate steps, PLUS the latencies of the pseudo-busses it went through, never figured in anywhere. You then had to add in the latency of the system's bus as well. In the end, I reckoned that you'd probably get data out at the end of the week.


I also saw at SC2005 that the architectures sucked. The University of Utah was claiming that clusters of Opterons didn't scale much beyond 2 nodes. Whaaaa???? They were either sold some VERY bad interconnects, or used some seriously crappy messaging system. Mind you, the guys at the Los Alamos stand had to build their packet collation system themselves, as the COTS solution was at least two orders of magnitude too slow.


I was impressed with the diversity at SC2005 and the inroads Open Source had made there, but I was seriously disgusted by the level of sheer primitiveness of a lot of offerings, too. Archaic versions of MPICH do not impress me. LAM might, as would LAMPI. OpenMPI (which has a lot of heavy acceleration in it) definitely would. The use of OpenDX because (apparently) OpenGL is "just too damn slow" was interesting - but if OpenDX is so damn good, why hasn't anyone maintained the code in the past three years? (I'd love to see OpenGL being given some serious competition, but that won't happen if the code is left to rot.)


Microsoft - well, their servers handed out cookies. Literally.

Re:They used Linux 2.6 kernel (1)

Lehk228 (705449) | more than 8 years ago | (#14106584)

The use of OpenDX because (apparently) OpenGL is "just too damn slow" was interesting - but if OpenDX is so damn good, why hasn't anyone maintained the code in the past three years? (I'd love to see OpenGL being given some serious competition, but that won't happen if the code is left to rot.)

Perhaps OpenDX is significantly more complex to write for but executes more efficiently for the job at hand.

Re:They used Linux 2.6 kernel (0)

Anonymous Coward | more than 8 years ago | (#14106941)

Nothing that guy (grandparent post about his impressions of SC2005) said made any sense.. about anything.

He was talking out of his ass.

Re:They used Linux 2.6 kernel (0)

Anonymous Coward | more than 8 years ago | (#14106681)

NetBSD hasn't owned any of those internet speed records for about 18 months. Linux owns them all.

http://lsr.internet2.edu/history.html [internet2.edu]

Yes, Linux is on the map.

Re:They used Linux 2.6 kernel (1)

atomicdragon (619181) | more than 8 years ago | (#14106704)

Some BSDs were considered, but would probably not make too big of a difference directly. Most of this was more about getting individual nodes working together than raw bandwidth out of a single box. A lot of the tools used were intended for Linux and were otherwise kind of untested or not really configured yet for use on one of the BSDs (although I think that may be looked into soon). In the end, with each node pushing out about 940-950 Mbps on a 1 Gbps connection, there is not too much more to squeeze out of the connection to each node, so they are going to stick with where they have the best tools and familiarity.

I think a similar setup, using the FAST kernel was used when the same team from Caltech broke the Internet2 land speed record (at SC04 and again in early 2005) and with another record between Caltech and CERN. (The former has since been surpassed by another group, I do not know about the latter.)
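The per-node figure above implies roughly how many senders an aggregate like this needs. The exact node count isn't stated in the comment; the 945 Mbps midpoint is an assumption and this is just the arithmetic.

```python
# Each node sustains ~940-950 Mbps of its 1 Gbps link; how many such
# nodes does a 130 Gbps aggregate imply? The midpoint 945 Mbps is an
# assumed figure for illustration.

per_node_mbps = 945
aggregate_gbps = 130

nodes = aggregate_gbps * 1000 / per_node_mbps
print(f"~{round(nodes)} nodes")  # ~138
```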

hahah (3, Funny)

ikea5 (608732) | more than 8 years ago | (#14106009)

good luck slashdotting this one!

Sony Rootkit? (1)

slashbob22 (918040) | more than 8 years ago | (#14106034)

Rumor has it that Sony is planning to release the next version of their rootkit for this infrastructure. The new version will allow users to call a simple API. I for one thank Sony for inventing a disabler of any new threatening technology.

connectionListener();
sendSpam();
startDOS();

Gbps (2, Interesting)

Douglas Simmons (628988) | more than 8 years ago | (#14106043)

Yeah 130Gbps sounds super-duper fast, but seven dvds a second on a backbone spreads out pretty thin when everyone and their mother is bittorrenting their ass off.

I'm looking through these charts and I am not finding an important number, how far the signal can be sent at that rate before it starts dying. Repeaters could be responsible for keeping this in vaporworld.

Re:Gbps (1)

Detritus (11846) | more than 8 years ago | (#14106964)

I'm looking through these charts and I am not finding an important number, how far the signal can be sent at that rate before it starts dying. Repeaters could be responsible for keeping this in vaporworld.

How much money do you have? That's the limiting factor. The hardware is available, it's just very expensive. There are fiber optic amplifiers that boost the signal level without having to demodulate it and regenerate it.

Re:Gbps (1)

Booker C. Bense (71418) | more than 8 years ago | (#14107546)

They measured the bandwidth rates at the show-floor interconnect. This was not a toy demonstration; the endpoints of the connections were thousands of miles away and they used real data with real physics applications. The total bandwidth is not the number that impresses me, but the percentage of available bandwidth actually used is pretty impressive.

_ Booker C. Bense

Big deal (1)

tomstdenis (446163) | more than 8 years ago | (#14106079)

The L1 cache on a P4 has a latency of 2 [or 3] cycles, which yields one 16-byte read every 2 cycles, or about 1.6G reads/s * 128 bits = 204.8 Gbps :-)

j/k
Tom
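Tom's figure checks out if you assume a 3.2 GHz Pentium 4 (the clock speed isn't stated in the comment); a minimal sketch:

```python
# Peak L1 read bandwidth of a Pentium 4, assuming a 3.2 GHz clock
# and one 16-byte (128-bit) load completing every 2 cycles.
clock_ghz = 3.2
reads_giga_per_s = clock_ghz / 2     # 1.6 G reads per second
bits_per_read = 16 * 8               # 128 bits per load
bandwidth_gbps = reads_giga_per_s * bits_per_read
print(bandwidth_gbps)  # 204.8
```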

Only measured 17 of the 22 wavelengths (1)

gowdy (135717) | more than 8 years ago | (#14106214)

So the actual speed was faster.

A better standard instead of DVDs (1)

HD Webdev (247266) | more than 8 years ago | (#14106343)

Why not TeraGoatsesPerSecond to measure the hole bandwidth so that we can more easily visualize it?

And, at least the size is standard.

PNG? (3, Funny)

Avisto (933587) | more than 8 years ago | (#14106390)

130 Gbps and they still use JPEG to (badly) compress their graphs.

Re:PNG? (1)

dodobh (65811) | more than 8 years ago | (#14107337)

They still worry about the /. effect.

We're in luck ... (1)

hobbs (82453) | more than 8 years ago | (#14106418)

They claimed they had a throughput of several DVD movies per second.


That's almost as fast as the movie industry is generating crappy movies to download!

BW Challenge: less and less relevant (1)

grumpyman (849537) | more than 8 years ago | (#14106446)

This competition becomes less and less relevant. It was meant to drive better ways to utilize the network and to find applications for that bandwidth. For utilization, as long as you have enough CPU and parallelization you can fill the pipe; the last few years of this competition have just been doubling up CPUs, finding a big long pipe, pumping data through it, wash/rinse/repeat. And for applications, VOD and the like can't be done because of the last mile, not because of a lack of technology. I'm not sure the competition has achieved what it set out to these last few years.

Re:BW Challenge: less and less relevant (1)

chully (544050) | more than 8 years ago | (#14106725)

Actually, that is part of what they are saying. The pipe is no longer the bottleneck; you are right.
What's cool about that is that we have made a leap in technology, and we have new bottlenecks to fatten up for next year. It's like advancing an army one phalanx at a time.

Re:BW Challenge: less and less relevant (1)

Detritus (11846) | more than 8 years ago | (#14107009)

It's relevant if you are doing particle physics and have huge datasets that need to be transported from point A to point B.

I'd like to see a phased-array radio telescope that supplies raw data to each remote user for beam forming.

Re:BW Challenge: less and less relevant (1)

Indes (323481) | more than 8 years ago | (#14107106)

It will probably be beneficial to the electronics community to know how much bandwidth we can get out of specific data networks and pipelines, especially when running over a real protocol stack.

Need a bigger quota... (1)

pookemon (909195) | more than 8 years ago | (#14106515)

So how long would it take for me to use up my monthly quota of 10Gig with bandwidth like that... ;)

Re:Need a bigger quota... (0)

Anonymous Coward | more than 8 years ago | (#14106742)

0.61 s,
but they'd need more time to cap your speed or disconnect you from the network.

But if your ISP bills you for everything over 10 GB, you might not want to try this :c
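The 0.61 s figure is easy to reproduce (assuming decimal gigabytes and the full 130 Gbps reaching your house, which of course it wouldn't):

```python
# Seconds to exhaust a 10 GB monthly quota at 130 Gbps.
quota_gbits = 10 * 8     # 10 GB = 80 gigabits (decimal units)
link_gbps = 130
seconds = quota_gbits / link_gbps
print(f"{seconds:.2f} s")  # 0.62 s (0.615..., truncated above to 0.61)
```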

What does each component do? (2, Informative)

FunFactor100 (848822) | more than 8 years ago | (#14106683)

It would be nice to know what each group of hardware is doing in this setup. What purpose do all the different servers have on the system? Also, there is a lot of storage in this setup, but it's spread all over the place: 4x300 GB hard drives in each of the 30 dual Opterons, one 36.4 GB hard drive in each of the 40 HP servers, 24 hard drives on the Sun server, more hard drives on the IBMs, and even more on the Nexsan SATABeast. Any idea what each cluster of servers does?

not sure.... (1)

mr_z_beeblebrox (591077) | more than 8 years ago | (#14107342)

They claimed they had a throughput of several DVD movies per second. How is that for video on demand!

I can't answer the question until I know WHICH several movies.

I hope NOT! (1)

thoper (838719) | more than 8 years ago | (#14107503)

God, I really hope a spammer doesn't grab one of these....