
Another Internet2 Speed Record Broken

Hemos posted more than 9 years ago | from the busting-outta-school dept.

The Internet

rdwald writes "An international team of scientists led by Caltech has set a new Internet2 speed record of 101 gigabits per second. They even helpfully converted this into one LoC/15 minutes. Lots of technical details in this press release; in addition to the obviously better network infrastructure, new TCP protocols were used."
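As a sanity check, the rate and the LoC figure are mutually consistent under a common ballpark estimate of the Library's digitized size (the ~10 TB figure assumed below is not from the summary):

```python
# Rough sanity check of the "one Library of Congress per 15 minutes" figure.
# The ~10 TB size often quoted for the LoC's print collection is an assumption.
GBPS = 101                       # record rate, gigabits per second
SECONDS = 15 * 60                # fifteen minutes
bits_moved = GBPS * 1e9 * SECONDS
terabytes_moved = bits_moved / 8 / 1e12
print(f"{terabytes_moved:.2f} TB moved in 15 minutes")  # 11.36 TB
```

About 11 TB in fifteen minutes, which lines up with the ~10 TB estimate.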


251 comments


one LoC/15 minutes (5, Funny)

SpaceLifeForm (228190) | more than 9 years ago | (#10942316)

One Line of Code every 15 minutes? Seems slow to me.

My Car Gets Forty Rod to the Hogsgead (4, Funny)

The-Bus (138060) | more than 9 years ago | (#10942393)

Yeah, I'm not really sure what the Library of Congress unit does for me. I'm more used to the European metric measurement of Geburninged Volkswagen.

Nowhere in the article does it say how long they ran the test for. A second? A minute? An hour?

I mean that's a full terabyte almost every minute and a half. What has so much data?

Re:My Car Gets Forty Rod to the Hogsgead (5, Funny)

SlayerofGods (682938) | more than 9 years ago | (#10942425)

I mean that's a full terabyte almost every minute and a half. What has so much data?
The library of congress perhaps?

Re:My Car Gets Forty Rod to the Hogsgead (0)

MikeBabcock (65886) | more than 9 years ago | (#10942465)

tcpclient [cr.yp.to] www.slashdot.org 80 cat /dev/urandom

Jeez

Slashdot sucks ;-)

Re:My Car Gets Forty Rod to the Hogsgead (3, Funny)

pjf(at)gna.org (807061) | more than 9 years ago | (#10942481)

> What has so much data?

/dev/zero ;P ?

Re:My Car Gets Forty Rod to the Hogsgead (2, Funny)

Binestar (28861) | more than 9 years ago | (#10942511)

> What has so much data?

/dev/zero ;P ?


Yeah, but that compresses pretty easy.

Re:My Car Gets Forty Rod to the Hogsgead (1)

Fortran IV (737299) | more than 9 years ago | (#10942620)

from the this-is-Slashdot-so-why-remember-the-past dept.

Wal-mart [slashdot.org] , remember?

Re:one LoC/15 minutes (1)

hummassa (157160) | more than 9 years ago | (#10942426)

Yeah, it's my approximate productivity on days I keep posting things on /. ... :-)

Re:one LoC/15 minutes (1)

fredrikj (629833) | more than 9 years ago | (#10942560)

Seems accurate if these are lines of Perl, and they are converting them to Java before sending...

Re:one LoC/15 minutes (1)

macaulay805 (823467) | more than 9 years ago | (#10942576)

One Line of Code every 15 minutes? Seems slow to me.

Is this the current requirement to become an EA coder, too? (No wonder they're overworked)

Gigabits... (5, Informative)

wittj (733811) | more than 9 years ago | (#10942317)

The speed is 101 Gigabits per second (Gbps), not Gigabytes.

Re:Gigabits... (4, Funny)

shawn(at)fsu (447153) | more than 9 years ago | (#10942371)

Right now the MPAA is trying to figure out how many movies [slashdot.org] that converts to....

Re:Gigabits... (2, Informative)

LSD-OBS (183415) | more than 9 years ago | (#10942436)

Around 1.5 full-length DVDs, including extra features, per second. Yikes!

Re:Gigabits... (1)

relaxmax (686075) | more than 9 years ago | (#10942530)

And since 1 Gigabit = 0.125 Gigabytes;

101 Gbps = 12.625 GBps

LoC? (-1, Redundant)

slavemowgli (585321) | more than 9 years ago | (#10942319)

One line of code per 15 minutes? I'd sure like to know what language that'd be...

Re:LoC? (1)

RAMMS+EIN (578166) | more than 9 years ago | (#10942339)

Perl, obviously. Ever seen whole one-line programs in any other language?

Re:LoC? (2, Interesting)

slapout (93640) | more than 9 years ago | (#10942411)

Actually yes. Back in the day, Basic allowed multiple commands per line with the colon (:) separator. Oh wait, you meant useful programs... nevermind.

zoom zoom zoom (-1, Troll)

Anonymous Coward | more than 9 years ago | (#10942320)

zoom zoom

Oye (3, Funny)

NETHED (258016) | more than 9 years ago | (#10942321)

Bring on the Porn comments.

But remember, never underestimate the bandwidth of a 747 full of Blu-ray discs.

747 (2, Funny)

Fullaxx (657461) | more than 9 years ago | (#10942346)

or the cost ;)

Re:747 (2, Insightful)

kfg (145172) | more than 9 years ago | (#10942482)

or the cost ;)

Never overestimate the cost per bit of a 747 full of Blu-ray discs.

KFG

Re:Oye (-1)

Anonymous Coward | more than 9 years ago | (#10942359)

Why? There's porn on the internet??

Porn (1)

LifesizeKenDoll (783854) | more than 9 years ago | (#10942392)

You could send so much porn at 101 Gbps...*drool*

Re:Oye (0)

nadadogg (652178) | more than 9 years ago | (#10942485)

Or a nerd on a bike with a backpack full of CD-R's

Re:Oye (1)

KiloByte (825081) | more than 9 years ago | (#10942597)

I'm afraid that you're _overestimating_ it.

Let's see... it's a 16h trip vs. 12.6 GB/s. With 25 GB per disc, you'll need to take 29k discs on the plane. I would say that 30 discs weigh around 1 kg (it's an estimate by anal extraction; I don't have any scales at hand).

So... in order to match the bandwidth, you would need to haul around a ton of discs per plane. The cargo capacity of a 747 is around 30 tons (if I read the brochure I found correctly).

In other words, we already have 1/30th of that -- and Blu-ray is _not_ a deployed technology yet. If you use DVDs instead, the 747 still comes out ahead, but with ordinary CDs the discs alone would outweigh the plane's cargo capacity...
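As a check on the arithmetic above, the estimate can be reproduced directly; every input (16-hour flight, 12.6 GB/s link, 25 GB per disc, 30 discs per kilogram) is the poster's own assumption:

```python
# Reproduce the parent post's sneakernet estimate. All inputs are the
# poster's assumptions: 16 h flight, 12.6 GB/s link, 25 GB/disc, 30 discs/kg.
flight_seconds = 16 * 3600
link_gb_per_second = 12.6
payload_gb = flight_seconds * link_gb_per_second   # data the link moves per flight
discs = payload_gb / 25                            # Blu-ray discs to match it
mass_tonnes = discs / 30 / 1000
print(f"{discs:,.0f} discs weighing about {mass_tonnes:.1f} tonnes")
```

With these inputs the pile comes out to roughly 29,000 discs and just under a tonne, about 1/30th of the quoted 30-tonne cargo capacity.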

Writing speeds (5, Funny)

Norgus (770127) | more than 9 years ago | (#10942322)

If only my HDD would write that fast!

Now, (-1, Redundant)

Anonymous Coward | more than 9 years ago | (#10942325)

Cue 101 "How fast does it download porn?" jokes...

cool! (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#10942326)

Cool stuff!

101 Gbps not 101 GBps (-1, Redundant)

Anonymous Coward | more than 9 years ago | (#10942327)

3rd line in the article.

Gbps... (-1, Redundant)

BlurredOne (813043) | more than 9 years ago | (#10942330)

Gbps is GigaBITS not GigaBytes... Big difference between the two.

Gigabytes? (-1, Redundant)

aismail3 (735831) | more than 9 years ago | (#10942331)

TFA says 101 gigabits per second, not gigabytes.

Too Fast for its Own Good (3, Insightful)

omghi2u (808195) | more than 9 years ago | (#10942338)

Has anyone ever stopped to think this might be too fast for its own good?

Isn't there a point when we've reached a speed where rather than deciding what to send from one place to another, we become lazy and start sending everything?

And won't that just lead to massive researcher mp3 swaps? :P

Re:Too Fast for its Own Good (5, Insightful)

RAMMS+EIN (578166) | more than 9 years ago | (#10942356)

``Isn't there a point when we've reached a speed where rather than deciding what to send from one place to another, we become lazy and start sending everything?''

You mean like broadcasting radio and TV?

Re:Too Fast for its Own Good (5, Insightful)

oexeo (816786) | more than 9 years ago | (#10942370)

> Has anyone ever stopped to think this might be too fast for its own good?

Has the infamous Bill Gates quote not taught you anything?

Re:Too Fast for its Own Good (1)

grid geek (532440) | more than 9 years ago | (#10942454)

Has anyone ever stopped to think this might be too fast for its own good?

No, this isn't a car, it doesn't need human intelligence after the code has been developed to keep it from turning into a wreck.

Isn't there a point when we've reached a speed where rather than deciding what to send from one place to another, we become lazy and start sending everything?

This data transfer was part of the ramp-up for the start of the LHC, which will be taking data at 40 TB/sec and recording approx 750 MB/sec to disk -- which totals about 6 petabytes of data a year. Add to this the simulation data, which may produce 12 PB/year (based on really conservative estimates -- the BaBar experiment at SLAC produced up to 6x as much simulation data as recorded data each year). Since all this data needs to be copied off-site, then no, it's unlikely we could get the network to a point where everything could be sent -- processing still increases faster than network bandwidth.

And won't that just lead to massive researcher mp3 swaps?

Not if they want tenure ...
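The 750 MB/sec disk-recording rate quoted above can be roughly cross-checked against the yearly total; the 10^7 seconds of live running per year used below is a standard accelerator rule of thumb, assumed here rather than taken from the post:

```python
# Cross-check: 750 MB/s to disk over a nominal 1e7-second running year
# (an assumed, standard accelerator rule of thumb), in petabytes per year.
mb_per_second = 750
live_seconds_per_year = 1e7      # assumed effective beam time
pb_per_year = mb_per_second * live_seconds_per_year / 1e9   # MB -> PB
print(f"about {pb_per_year:.1f} PB/year")
```

That lands in the same ballpark as the quoted 6 PB/year of recorded data.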

Re:Too Fast for its Own Good (1)

LSD-OBS (183415) | more than 9 years ago | (#10942500)

which totals about 6 Petabytes of data a year

At 100 Gbps that's still about a day per petabyte. These guys sure are demanding!

Re:Too Fast for its Own Good (1)

LSD-OBS (183415) | more than 9 years ago | (#10942486)

The Swiss LHC (CERN) guys are going to need to be sending *petabytes* of collision data around the world for analysis over the course of their experiments. It's crazy to think it, but some people really do have a need for this. I suppose this is why the Internet2 is largely restricted to research and education purposes.

Re:Too Fast for its Own Good (0)

Anonymous Coward | more than 9 years ago | (#10942592)

This poster and others have asked if it is such a great idea to be able to send this much data at such speeds. This, however, is dealing with Internet2, which is used exclusively by universities and research institutions sending data back and forth. It might seem like a lot if you're talking e-mail, pr0n, and other normal Internet traffic, but not so much if you're sending real-time high-def images to be used by a remote robotic surgeon or data for weather modeling.

Re:Too Fast for its Own Good (1)

MyLongNickName (822545) | more than 9 years ago | (#10942633)

Does your surgeon really need these speeds? Or does he need a connection with near-zero latency, and very consistent near-zero latency?

Real-time video can be achieved with today's bandwidth (albeit with expensive solutions). What we can't deal with is a five-second hiccup (sorry about cutting all the way through you, ma'am... lag is horrible today).

Does Internet 2 solve the latency issue?

Re:Too Fast for its Own Good (1)

lisaparratt (752068) | more than 9 years ago | (#10942625)

Yes. Yes, it is. It's far too fast for its own good. Also, it's a good thing we don't travel above 30 mph, otherwise our skeletons would turn to jelly! Oh, wait a minute...

** Spoiler ** (2, Funny)

Stokey (751701) | more than 9 years ago | (#10942341)

Cue the gags about "Finally, I shall be able to download my pr0n collection".

Cue questions about whether it's gigabytes or gigabits.

Cue questions about "How can I get such gaping-a$$ bandwidth?"

One of these days I will write the ultimate FAQ to /. posts, including all the possible combinations of arguments started by SCO stories, how politics is treated here, and a whole chapter on non-funny memes.

Go! Pedal faster.

Re:** Spoiler ** (-1)

Anonymous Coward | more than 9 years ago | (#10942400)

Make sure to include your post in the chapter on non-funny memes.

Re:** Spoiler ** (-1)

Anonymous Coward | more than 9 years ago | (#10942409)

You forgot:

Cue posts listing possible posts

I think those posts warrant a chapter all of their own.

Re:** Spoiler ** (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#10942424)

One of these days I will write the ultimate FAQ to /. posts.

Imagine a beowulf cluster of those in Soviet Russia! ;)

byolinux

Re:** Spoiler ** (0)

Anonymous Coward | more than 9 years ago | (#10942715)

In Soviet Russia, those imagine a beowulf cluster of you!

Re:** Spoiler ** (2, Funny)

Anonymous Coward | more than 9 years ago | (#10942650)

I for one welcome our new FAQ-writing overlord.

karma whoring. (0, Redundant)

virtualone (768392) | more than 9 years ago | (#10942349)

Internet Speed Quadrupled by International Team During 2004 Bandwidth Challenge

PITTSBURGH, Pa.--For the second consecutive year, the "High Energy Physics" team of physicists, computer scientists, and network engineers has won the Supercomputing Bandwidth Challenge with a sustained data transfer of 101 gigabits per second (Gbps) between Pittsburgh and Los Angeles. This is more than four times faster than last year's record of 23.2 gigabits per second, which was set by the same team.

The team hopes this new demonstration will encourage scientists and engineers in many sectors of society to develop and deploy a new generation of revolutionary Internet applications.

The international team is led by the California Institute of Technology and includes as partners the Stanford Linear Accelerator Center (SLAC), Fermilab, CERN, the University of Florida, the University of Manchester, University College London (UCL) and the organization UKLight, Rio de Janeiro State University (UERJ), the state universities of São Paulo (USP and UNESP), the Kyungpook National University, and the Korea Institute of Science and Technology Information (KISTI). The group's "High-Speed TeraByte Transfers for Physics" record data transfer speed is equivalent to downloading three full DVD movies per second, or transmitting all of the content of the Library of Congress in 15 minutes, and it corresponds to approximately 5% of the rate that all forms of digital content were produced on Earth during the test.

The new mark, according to Bandwidth Challenge (BWC) sponsor Wesley Kaplow, vice president of engineering and operations for Qwest Government Services, exceeded the sum of all the throughput marks submitted in the present and previous years by other BWC entrants. The extraordinary bandwidth achieved was made possible in part through the use of the FAST TCP protocol developed by Professor Steven Low and his Caltech Netlab team. It was achieved through the use of seven 10 Gbps links to Cisco 7600 and 6500 series switch-routers provided by Cisco Systems at the Caltech Center for Advanced Computing (CACR) booth, and three 10 Gbps links to the SLAC/Fermilab booth. The external network connections included four dedicated wavelengths of National LambdaRail, between the SC2004 show floor in Pittsburgh and Los Angeles (two waves), Chicago, and Jacksonville, as well as three 10 Gbps connections across the Scinet network infrastructure at SC2004 with Qwest-provided wavelengths to the Internet2 Abilene Network (two 10 Gbps links), the TeraGrid (three 10 Gbps links) and ESnet. 10 Gigabit Ethernet (10 GbE) interfaces provided by S2io were used on servers running FAST at the Caltech/CACR booth, and interfaces from Chelsio equipped with transport offload engines (TOE) running standard TCP were used at the SLAC/FNAL booth. During the test, the network links over both the Abilene and National LambdaRail networks were shown to operate successfully at up to 99 percent of full capacity.

The Bandwidth Challenge allowed the scientists and engineers involved to preview the globally distributed grid system that is now being developed in the US and Europe in preparation for the next generation of high-energy physics experiments at CERN's Large Hadron Collider (LHC), scheduled to begin operation in 2007. Physicists at the LHC will search for the Higgs particles thought to be responsible for mass in the universe and for supersymmetry and other fundamentally new phenomena bearing on the nature of matter and spacetime, in an energy range made accessible by the LHC for the first time.

The largest physics collaborations at the LHC, the Compact Muon Solenoid (CMS), and the Toroidal Large Hadron Collider Apparatus (ATLAS), each encompass more than 2000 physicists and engineers from 160 universities and laboratories spread around the globe. In order to fully exploit the potential for scientific discoveries, many petabytes of data will have to be processed, distributed, and analyzed. The key to discovery is the analysis phase, where individual physicists and small groups repeatedly access, and sometimes extract and transport, terabyte-scale data samples on demand, in order to optimally select the rare "signals" of new physics from potentially overwhelming "backgrounds" from already-understood particle interactions. This data will be drawn from major facilities at CERN in Switzerland, at Fermilab and the Brookhaven lab in the U.S., and at other laboratories and computing centers around the world, where the accumulated stored data will amount to many tens of petabytes in the early years of LHC operation, rising to the exabyte range within the coming decade.

Future optical networks, incorporating multiple 10 Gbps links, are the foundation of the grid system that will drive the scientific discoveries. A "hybrid" network integrating both traditional switching and routing of packets, and dynamically constructed optical paths to support the largest data flows, is a central part of the near-term future vision that the scientific community has adopted to meet the challenges of data intensive science in many fields. By demonstrating that many 10 Gbps wavelengths can be used efficiently over continental and transoceanic distances (often in both directions simultaneously), the high-energy physics team showed that this vision of a worldwide dynamic grid supporting many-terabyte and larger data transactions is practical.

While the SC2004 100+ Gbps demonstration required a major effort by the teams involved and their sponsors, in partnership with major research and education network organizations in the United States, Europe, Latin America, and Asia Pacific, it is expected that networking on this scale in support of the largest science projects (such as the LHC) will be commonplace within the next three to five years.

The network has been deployed through exceptional support by Cisco Systems, Hewlett Packard, Newisys, S2io, Chelsio, Sun Microsystems, and Boston Ltd., as well as the staffs of National LambdaRail, Qwest, the Internet2 Abilene Network, the Consortium for Education Network Initiatives in California (CENIC), ESnet, the TeraGrid, the AmericasPATH network (AMPATH), the National Education and Research Network of Brazil (RNP) and the GIGA project, as well as ANSP/FAPESP in Brazil, KAIST in Korea, UKERNA in the UK, and the Starlight international peering point in Chicago. The international connections included the LHCNet OC-192 link between Chicago and CERN at Geneva, the CHEPREO OC-48 link between Abilene (Atlanta), Florida International University in Miami, and São Paulo, as well as an OC-12 link between Rio de Janeiro, Madrid, Géant, and Abilene (New York). The APII-TransPAC links to Korea also were used with good occupancy. The throughputs to and from Latin America and Korea represented a significant step up in scale, which the team members hope will be the beginning of a trend toward the widespread use of 10 Gbps-scale network links on DWDM optical networks interlinking different world regions in support of science by the time the LHC begins operation in 2007. The demonstration and the developments leading up to it were made possible through the strong support of the U.S. Department of Energy and the National Science Foundation, in cooperation with the agencies of the international partners.

As part of the demonstration, a distributed analysis of simulated LHC physics data was done using the Grid-enabled Analysis Environment (GAE), developed at Caltech for the LHC and many other major particle physics experiments, as part of the Particle Physics Data Grid, the Grid Physics Network and the International Virtual Data Grid Laboratory (GriPhyN/iVDGL), and Open Science Grid projects. This involved the transfer of data to CERN, Florida, Fermilab, Caltech, UC San Diego, and Brazil for processing by clusters of computers, and finally aggregating the results back to the show floor to create a dynamic visual display of quantities of interest to the physicists. In another part of the demonstration, file servers at the SLAC/FNAL booth and in London and Manchester also were used for disk-to-disk transfers from Pittsburgh to England. This gave physicists valuable experience in the use of large, distributed datasets and of the computational resources connected by fast networks, on the scale required at the start of the LHC physics program.

The team used the MonALISA (MONitoring Agents using a Large Integrated Services Architecture) system developed at Caltech to monitor and display the real-time data for all the network links used in the demonstration. MonALISA (http://monalisa.caltech.edu) is a highly scalable set of autonomous, self-describing, agent-based subsystems which are able to collaborate and cooperate in performing a wide range of monitoring tasks for networks and grid systems as well as the scientific applications themselves. Detailed results for the network traffic on all the links used are available at http://boson.cacr.caltech.edu:8888/.

Multi-gigabit/second end-to-end network performance will lead to new models for how research and business is performed. Scientists will be empowered to form virtual organizations on a planetary scale, sharing in a flexible way their collective computing and data resources. In particular, this is vital for projects on the frontiers of science and engineering, in "data intensive" fields such as particle physics, astronomy, bioinformatics, global climate modeling, geosciences, fusion, and neutron science.

Harvey Newman, professor of physics at Caltech and head of the team, said, "This is a breakthrough for the development of global networks and grids, as well as inter-regional cooperation in science projects at the high-energy frontier. We demonstrated that multiple links of various bandwidths, up to the 10 gigabit-per-second range, can be used effectively over long distances.

"This is a common theme that will drive many fields of data-intensive science, where the network needs are foreseen to rise from tens of gigabits per second to the terabit-per-second range within the next five to 10 years," Newman continued. "In a broader sense, this demonstration paves the way for more flexible, efficient sharing of data and collaborative work by scientists in many countries, which could be a key factor enabling the next round of physics discoveries at the high energy frontier. There are also profound implications for how we could integrate information sharing and on-demand audiovisual collaboration in our daily lives, with a scale and quality previously unimaginable."

Les Cottrell, assistant director of SLAC's computer services, said: "The smooth interworking of 10GE interfaces from multiple vendors, the ability to successfully fill 10 gigabit-per-second paths both on local area networks (LANs), cross-country and intercontinentally, the ability to transmit greater than 10Gbits/second from a single host, and the ability of TCP offload engines (TOE) to reduce CPU utilization, all illustrate the emerging maturity of the 10Gigabit/second Ethernet market. The current limitations are not in the network but rather in the servers at the ends of the links, and their buses."

Further technical information about the demonstration may be found at http://ultralight.caltech.edu/sc2004 and http://www-iepm.slac.stanford.edu/monitoring/bulk/sc2004/hiperf.html. A longer version of the release, including information on the participating organizations, may be found at http://ultralight.caltech.edu/sc2004/BandwidthRecord

Doesn't make sense (5, Insightful)

oexeo (816786) | more than 9 years ago | (#10942352)

new TCP protocols were used

TCP is a specific protocol, a "new" TCP protocol would suggest a different protocol, unless it means a revision of the current protocol.

Re:Doesn't make sense (3, Insightful)

TopShelf (92521) | more than 9 years ago | (#10942373)

Isn't "TCP protocol" redundant anyway?

Re:Doesn't make sense (1)

oexeo (816786) | more than 9 years ago | (#10942438)

> Isn't "TCP protocol" redundant anyway?

I think acrynums where invented for the hilarity of watching stupid people prepend/append the abbriviated words to them (yes, just like I did above).

Re:Doesn't make sense (0, Offtopic)

oexeo (816786) | more than 9 years ago | (#10942456)

where = were

Might as well correct all of your mistakes... (0)

Anonymous Coward | more than 9 years ago | (#10942514)

acrynums = acronyms
abbriviated = abbreviated

Re:Might as well correct all of your mistakes... (1)

oexeo (816786) | more than 9 years ago | (#10942542)

I was educated in a foreign language, so my spelling in English sucks. I only pointed out the misspellings I noticed.

Re:Might as well correct all of your mistakes... (1)

SlayerofGods (682938) | more than 9 years ago | (#10942615)

And we're here to help you find the rest. ;)

Re:Doesn't make sense (1)

RichN (12819) | more than 9 years ago | (#10942587)

acrynums = acronyms
abbriviated = abbreviated

Re:Doesn't make sense (1)

The-Bus (138060) | more than 9 years ago | (#10942453)

No, this is a TCP Protocol for Transfer.

DUH.

...Oh.

Re:Doesn't make sense (5, Informative)

PhrostyMcByte (589271) | more than 9 years ago | (#10942377)

They are talking about "Fast [caltech.edu] " TCP, which AFAIK just consists of a better routing algorithm and using multiple TCP streams at once.
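For the record, published descriptions of FAST TCP frame it as delay-based congestion control rather than a routing change: each RTT the sender rescales its window toward a point that keeps roughly a fixed number of its packets queued in the network. A much-simplified sketch of the window update (the alpha and gamma values here are illustrative assumptions, not Caltech's tuning):

```python
# Simplified sketch of the FAST TCP per-RTT window update (after Low's group):
#   w <- (1 - gamma) * w + gamma * (baseRTT / rtt * w + alpha)
# alpha (target queued packets) and gamma (smoothing) are assumed values.
def fast_window_update(w, base_rtt, rtt, alpha=200.0, gamma=0.5):
    """Return the next congestion window, in packets."""
    return (1 - gamma) * w + gamma * (base_rtt / rtt * w + alpha)

# With rtt == base_rtt (no queueing) the window grows by gamma * alpha per
# RTT; as queueing delay appears, growth slows and the window converges to
# alpha / (1 - base_rtt / rtt).
w = 1000.0
for _ in range(200):
    w = fast_window_update(w, base_rtt=0.050, rtt=0.055)
print(f"window converges to about {w:.0f} packets")  # equilibrium is 2200
```

Unlike standard TCP's loss-driven sawtooth, this converges smoothly to an equilibrium window, which is what lets it hold long 10 Gbps paths near full capacity.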

Re:Doesn't make sense (1)

ari_j (90255) | more than 9 years ago | (#10942416)

Aren't there other concerns, such as window size? When you're piping that much down the line, an ACK every 48 bytes just isn't right.
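The parent has a point: a TCP sender can keep only one bandwidth-delay product of unacknowledged data in flight, so a long fat pipe needs a window far beyond the classic 64 KB, which is why window scaling (RFC 1323) and tuned buffers matter at these speeds. A quick calculation (the 70 ms cross-country RTT is an assumed, illustrative figure):

```python
# Bandwidth-delay product: bytes that must be in flight to keep a link full.
# The 70 ms Pittsburgh-to-Los Angeles RTT is an assumed figure.
def bdp_bytes(gbps, rtt_seconds):
    return gbps * 1e9 / 8 * rtt_seconds

window = bdp_bytes(10, 0.070)     # one 10 Gbps wavelength
print(f"{window / 1e6:.1f} MB window needed")  # 87.5 MB, vs. 64 KB unscaled
```

Three orders of magnitude beyond an unscaled TCP window, for a single wavelength.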

Re:Doesn't make sense (1)

MikeBabcock (65886) | more than 9 years ago | (#10942449)

Insightful?

What is HTTP? Oh yes, a TCP protocol. For proper semantics, you might say it's a protocol that sits on top of TCP, or go into network layers and bore the reader to tears.

On the other hand, let's say we're talking about a new revision of the TCP protocol ... TCP version 42 ... wouldn't that be a new TCP protocol as well?

I'm surprised you can't figure out what that would mean.

Re:Doesn't make sense (1)

oexeo (816786) | more than 9 years ago | (#10942504)

What is HTTP? Oh yes, a TCP protocol.

It's an application protocol that functions over TCP; you can't really say it's a TCP protocol.

On the other hand, lets say we're talking about a new revision of the TCP protocol ... TCP version 42 ... wouldn't that be a new TCP protocol as well?

Yes, it could be, like I said in my post.

Re:Doesn't make sense (1)

meethade (603686) | more than 9 years ago | (#10942573)

I think what was meant by "new" was a more recent revision of TCP along with its implementation.

Gigabits! (0, Redundant)

mboverload (657893) | more than 9 years ago | (#10942357)

It's GIGABITS, not GIGABYTES

Re:Gigabits! (2, Informative)

compro01 (777531) | more than 9 years ago | (#10942398)

Still, that's 12.625 GBps. It's still plenty fast.

that's my entire hard drive moved in 10 seconds.

Re:Gigabits! (1)

3terrabyte (693824) | more than 9 years ago | (#10942552)

What I want to know is how to fill up a hard drive that size in 10 seconds. This speed is nice and all, but it means that our hard drives are now the bottleneck.

I mean what's the speed on a RAID-5 SCSI?

Re:Gigabits! (1)

RetroGeek (206522) | more than 9 years ago | (#10942659)

This speed is nice and all, but it means that our hard drives are now the bottleneck.

Moving hardware is always the bottleneck, if for no other reason than you have to wait for the data to reach the read/write head.

When we were setting up our first networks, it was faster to load everything off our NetWare servers than to load it off the local hard drive.

The NetWare server cached the files in RAM, then served them up over a 10BaseT LAN. This was noticeably faster than going via the local hard drive (pre-IDE days).

Re:Gigabits! (0)

Anonymous Coward | more than 9 years ago | (#10942555)

that's my entire hard drive moved in 10 seconds.

Ha, I bet I'm able to move your entire hard drive in less than a second; just hand it over to me and I'll show you... ;-)

Entire hard drive (2, Insightful)

dark-br (473115) | more than 9 years ago | (#10942647)


Yep, your entire hard drive moved in 10 seconds, but the question is: how did they get those read/write speeds?

Your HD would never reach that... hdparm gives me 40 MB/s if I am lucky.

Maybe they have a *LOT* of RAM :)

Re:Gigabits! (-1, Redundant)

compro01 (777531) | more than 9 years ago | (#10942423)

that's 12.625 GBps. lots fast still.

that's my entire hard drive moved in 10 seconds!

Library of Congress? (1)

NardofDoom (821951) | more than 9 years ago | (#10942364)

Is that with or without the pictures?

Best read this way.... (5, Funny)

Himring (646324) | more than 9 years ago | (#10942367)

Best read using Christopher Lloyd's voice from Back to the Future, e.g.:

"101 jigowatts per second!!!" --Professor Emmett Brown

Gigabits (1)

Macfox (50100) | more than 9 years ago | (#10942369)

Jeez... When you're talking about new world records, you'd think they'd stop to double-check those IMPORTANT facts which appear in the first paragraph. :)

But then again this is slashdot.

Bytes'n'Bits (2, Informative)

mx.2000 (788662) | more than 9 years ago | (#10942380)

speed record of 101 gigabytes per second.

Wait, isn't this supposed to be a nerdy tech magazine?

I mean, I except this kind of Bit/Byte confusion on CNN, but on slashdot...

Re:Bytes'n'Bits (0)

Anonymous Coward | more than 9 years ago | (#10942637)

Maybe you should accept the fact that people make mistakes occasionally.

Re:Bytes'n'Bits (0)

Anonymous Coward | more than 9 years ago | (#10942680)

like me accepting things like accept/except on /.

Sustained transfer? (4, Interesting)

Anonymous Coward | more than 9 years ago | (#10942381)

How did they sustain a transfer like that? Unless my math is wrong, that's 11GBps ... what has that kind of read/write speed?
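Nothing on a single spindle did. At the roughly 50 MB/s a then-current disk could stream sequentially (an assumed, era-typical figure), sourcing the flow from disk would have taken a large array:

```python
# How many disks at an assumed, era-typical ~50 MB/s streaming rate would be
# needed to source a 101 Gbps flow (ignoring RAID and bus overhead).
link_bytes_per_second = 101e9 / 8      # 12.625 GB/s
disk_bytes_per_second = 50e6
disks_needed = link_bytes_per_second / disk_bytes_per_second
print(f"about {disks_needed:.0f} disks striped together")
```

Around 250 disks' worth of sequential throughput before any filesystem or bus overhead, which is why memory-to-memory transfers sidestep the problem entirely.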

Re:Sustained transfer? (1)

Chatsubo (807023) | more than 9 years ago | (#10942526)

/dev/null

Re:Sustained transfer? (1)

RAMMS+EIN (578166) | more than 9 years ago | (#10942562)

``what has that kind of read/write speed?''

The network, obviously. And that's the only part that needs it - they don't need to be sending useful data.

Re:Sustained transfer? (0)

Anonymous Coward | more than 9 years ago | (#10942653)

They sent a 1-bit file and multiplied.

Re:Sustained transfer? (1)

noselasd (594905) | more than 9 years ago | (#10942686)

Uh, they are measuring network speed, not harddrive/etc. speed.

Not to knock their achievment but... (0, Troll)

Chineseyes (691744) | more than 9 years ago | (#10942391)

I swear I see this sort of thing twice a month on Slashdot. When does a record being broken stop being news and start becoming expected?

Porn (0, Redundant)

TheKidWho (705796) | more than 9 years ago | (#10942395)

Here comes new super high quality porn on the internet!

Re:Porn (0)

Anonymous Coward | more than 9 years ago | (#10942697)

hey! now we can just download the actresses instead of just their movie!!
omgah! I can't wait...
hubababababa dr000l.. where do I sign up for this new tech?.

Alternative High Bandwidth Solution (2, Funny)

isa-kuruption (317695) | more than 9 years ago | (#10942397)

A station wagon hauling backup tapes. Too bad the latency is so high!

Nah, who needs a station vagon (1, Funny)

Anonymous Coward | more than 9 years ago | (#10942661)

Just get that backup tape from place A to place B,
so that you write to it at place A, then it scrolls to place B, then read it at place B, then gets written to, then scrolls to place A. Of course, that would take some 10-1000 km of tape with some exotic routing, but that would be cool :)

--Coder

15 LoCs? (0, Redundant)

Chemisor (97276) | more than 9 years ago | (#10942414)

15 Libraries of Congress in 15 seconds? Great! Anybody got a copy?

I can beat that! (4, Funny)

Anonymous Coward | more than 9 years ago | (#10942419)

I can transfer one and a half terabits from one end of the room to the other in less than a second in two easy steps.

Step 1. Fill a 200MB hard drive with data
Step 2. Fling aforementioned hard drive in a frisbee-esque motion across the room.

Expect some data loss however.

Take that Caltech!

Re:I can beat that! (1)

tokenhillbilly (311564) | more than 9 years ago | (#10942554)

That's the problem with unicast protocols. What you need to do is recruit a friend and have them stand on the other side of the room. Their job is to catch the hard drive. This protocol addition to your network will greatly reduce the chance of data loss errors.

Also, you might consider a 300GB drive to increase network throughput.

Re:I can beat that! (1)

relaxmax (686075) | more than 9 years ago | (#10942564)

Data loss can be further reduced by having a 'catcher' wearing a customised outfit, so as to minimise the sudden impact on the disk. Something like a sumo wrestler wearing baseball gloves

Re:I can beat that! (0)

Anonymous Coward | more than 9 years ago | (#10942593)

1.5Tb != 200MB

A 200MB HD will only get you 1.6 gigabits (1)

PornMaster (749461) | more than 9 years ago | (#10942622)

Thanks for playing the home game. Unfortunately, due to a math error, we miscalculated the entry fee and have deducted $18000 from your bank account. Please come again.

Re:I can beat that! (1)

zx75 (304335) | more than 9 years ago | (#10942636)

Wow, you can fill a 200MB hard drive with data in less than a second?

People comment about the bandwidth of a car full of DVDs or tape drives and the like, but do they ever stop to think about exactly how LONG it takes to write information to the medium? Driving from one place to another with the data is trivial, but converting the raw data into the transportable message takes absolutely forever.
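The write-time point can be made concrete. A minimal sketch of sneakernet's effective throughput; every figure here (drive capacity, write speed, travel time) is an illustrative assumption, not a measurement:

```python
# Sneakernet throughput is limited by writing and re-reading the medium,
# not just the drive across town. All numbers below are made-up assumptions.
capacity_gb = 300          # drive capacity in GB (as suggested upthread)
write_mb_s = 50            # assumed sustained write speed, MB/s
transit_hours = 1          # assumed time to carry the drive to its destination

write_s = capacity_gb * 1000 / write_mb_s   # seconds to fill the drive
read_s = write_s                            # assume symmetric read speed
total_s = write_s + transit_hours * 3600 + read_s

effective_gbps = capacity_gb * 8 / total_s  # gigabits per second end-to-end
print(f"effective throughput: {effective_gbps:.2f} Gb/s")
# roughly 0.15 Gb/s -- nowhere near the 101 Gb/s record
```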

They could get better speed (4, Insightful)

CastrTroy (595695) | more than 9 years ago | (#10942428)

They could probably get much better speeds if they compressed it first. The Library of Congress is quite compressible, as there is a lot of redundant data. Text in general is known to be quite compressible.

Here's a question. Sure, you can send 101 Gigabits per second. But what kind of power do you need on either end to send or interpret that much data? I know my hard drive doesn't go that fast. I don't even think my RAM is that fast.
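The compressibility claim is easy to demonstrate with Python's standard zlib module; the sample string below is made up, chosen only to be repetitive like natural-language text:

```python
# Illustration of the parent's point: redundant text compresses well.
import zlib

text = ("The Library of Congress holds many copies of common English words, "
        "and English words repeat a great deal. ") * 100
raw = text.encode("utf-8")
packed = zlib.compress(raw, 9)       # level 9 = best compression
ratio = len(raw) / len(packed)
print(f"{len(raw)} -> {len(packed)} bytes (ratio {ratio:.1f}x)")
```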

Why still TCP , what about SCTP? (3, Interesting)

Viol8 (599362) | more than 9 years ago | (#10942458)

SCTP was specifically devised as a replacement for TCP: it can emulate TCP's 1 -> 1 connection but can also do connection-based 1 -> N. I thought it had been designed with high speed in mind too. Does anyone know whether this protocol is being used more and more, or has it just become another good-idea-at-the-time that got run over by the backward-compatibility steamroller?

Is it needed? (3, Insightful)

Kombat (93720) | more than 9 years ago | (#10942495)

This is great and all, but has anyone stopped to ask why we need such fast networks? The stock-frenzy driven surplus of unneeded bandwidth was a major contributing factor to the dot-com bust. I remember when I was working on a multi-gigabit, next-generation optical switch, and the project manager was assuring us that in just a few years, people would be downloading their movies from Blockbuster instead of actually traveling there to pick up a DVD. We were all supposed to be videoconferencing left and right by now, with holographic communications just around the corner. A massive growth in online gaming was supposed to cripple the existing legacy networks, forcing providers to upgrade or perish. All of this was supposed to generate a huge demand for bandwidth, which we were poised to deliver.

Well, as we all know, that demand never materialized. We had way more bandwidth than the market needed, and when the bandwidth finally became stressed, providers opted to cap bandwidth and push less-intensive services rather than pay for expensive upgrades to their infrastructures.

I think we should instead be focusing on technologies that can a) generate real new revenue to the providers that we're trying to sell these ultra-fast networks to, b) have obvious and legitimate research or quality of life improvements, and c) are sure-fire hits to attract consumer attention (and $$$).

Don't get me wrong, this is very cool and all, but until Netflix actually lives up to its moniker and sends me my rented movies through my phone/cable line rather than UPS, then it doesn't really matter to me if the network is capable of 5 Gbps or 500 Gbps. Slashdot will still load in a second or 2 either way. We need real products to take advantage of this massive bandwidth, and that revenue will drive research even further, faster. I fear we're going to stall out unless we find a way to embrace these faster networks and make money off of them.

Re:Is it needed? (0)

Anonymous Coward | more than 9 years ago | (#10942589)

We need such fast networks so we can parcel the total bandwidth out among many consumers.

Re:Is it needed? (1)

Kombat (93720) | more than 9 years ago | (#10942665)

We need such fast networks so we can parcel the total bandwidth out among many consumers.

The existing network is more than adequate for that goal. You missed my point entirely.

Better wait (2, Funny)

koi88 (640490) | more than 9 years ago | (#10942532)


I dunno, my internet still seems pretty fast.
I guess I'll skip this Internet2 thing and just wait for Internet3.

What I want to know is... (-1, Offtopic)

f4llenang3l (834942) | more than 9 years ago | (#10942600)

where is Al Gore, and when is he going to claim responsibility for it?

Possible uses? (5, Insightful)

yetanothermike (824215) | more than 9 years ago | (#10942611)

Instead of looking at the possibility of beefing up your catalog of Futurama episodes, think about the new uses for it.

Medical imaging produces very large files, and the need to transfer them over distances quickly to save lives is real.

The possibility for video is great as well. Imagine getting multiple feeds of the next WTO event from different sources on the ground. Or quality alternative broadcasting that isn't just some postage-stamp-sized, pixelated blobs. Torrents are nice, but there is something to be said for being jacked in live.

And for those who didn't RTFA, it's 3 DVDs a second.
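The "3 DVDs a second" figure checks out as a round-up; a quick sketch assuming a single-layer 4.7 GB DVD (decimal gigabytes) and the article's 101 Gb/s:

```python
# Verify the article's "3 DVDs a second" claim against the 101 Gb/s record.
link_bytes_per_s = 101e9 / 8     # 101 gigabits/s expressed in bytes/s
dvd_bytes = 4.7e9                # assumed single-layer DVD capacity
dvds_per_second = link_bytes_per_s / dvd_bytes
print(f"{dvds_per_second:.2f} DVDs/s")
# roughly 2.7 DVDs per second, i.e. "3 DVDs a second" rounded up
```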

Why TCP? (1)

ReeprFlame (745959) | more than 9 years ago | (#10942710)

Why is Internet2 still developing protocols based on TCP? I thought they were proven to be somewhat inefficient, with others proving better. I could also have sworn I read that I2 was developing a new protocol that was almost a blend of TCP and UDP. Maybe that is what this is? At least I am happy with the new speed records... Shows development!