The Internet

Another Internet2 Speed Record Broken

rdwald writes "An international team of scientists led by Caltech has set a new Internet2 speed record of 101 gigabits per second. They even helpfully converted this into one LoC per 15 minutes. Lots of technical details in this press release; in addition to the obviously better network infrastructure, new TCP protocols were used."

  • by SpaceLifeForm ( 228190 ) on Monday November 29, 2004 @10:15AM (#10942316)
    One Line of Code every 15 minutes? Seems slow to me.
  • Gigabits... (Score:5, Informative)

    by wittj ( 733811 ) on Monday November 29, 2004 @10:15AM (#10942317)
    The speed is 101 Gigabits per second (Gbps), not Gigabytes.
  • Oye (Score:3, Funny)

    by NETHED ( 258016 ) on Monday November 29, 2004 @10:15AM (#10942321) Homepage
    Bring on the Porn comments.

    But remember, never underestimate the bandwidth of a 747 full of Blu-ray discs.
    • 747 (Score:2, Funny)

      by Fullaxx ( 657461 )
      or the cost ;)
      • Re:747 (Score:2, Insightful)

        by kfg ( 145172 )
        or the cost ;)

        Never overestimate the cost per bit of a 747 full of Blu-ray discs.

        KFG
    • Re:Oye (Score:4, Interesting)

      by City Jim 3000 ( 726294 ) on Monday November 29, 2004 @11:28AM (#10942866)
      The cargo capacity of a 747-400 is 53,000 kg and 160 m^3.
      I assume a Blu-ray disc has about the same size and weight as a DVD: 1.2 mm thick, 12 cm in diameter, and at most 20 grams. Also consider a plastic sleeve, which adds maybe 0.2 mm and 3 grams.
      Space needed for a disc with sleeve is thus 120 x 120 x 1.4 mm = 20,160 mm^3 = 0.00002016 m^3.
      Weight is 23 grams = 0.023 kg.
      Thus:
      discs/plane (volume) = 160 / 0.00002016 ~ 7,936,500 pcs
      discs/plane (weight) = 53,000 / 0.023 ~ 2,304,300 pcs
      Maximum discs per plane is then about 2,300,000 pcs.
      A Blu-ray disc stores 50 GB = 400 Gbit.
      The plane stores 400 * 2,300,000 Gbit = 920,000,000 Gbit.

      Not counting the time to load, burn and read the discs, a non-stop flight from Pittsburgh to LA takes around 5 hours = 18,000 seconds.
      This amounts to 920,000,000 / 18,000 ~ 51,000 Gbit/sec.

      Considering a very approximate cost of $1/kg for the transport and $2 for each disc, it comes to around $4,653,000 total.
      That is about $0.04/GB, or around the same price per GB as a cheap 160 GB hard drive.
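
      A quick Python sanity check of the numbers above, using the same assumptions as the comment (disc and sleeve dimensions, weight and volume limits, $1/kg freight, $2 per disc), for anyone who wants to fiddle with them:

          # Sneakernet bandwidth of a 747-400 full of Blu-ray discs.
          # All figures are the rough assumptions from the comment above.
          CARGO_KG, CARGO_M3 = 53000, 160
          DISC_M3 = 0.120 * 0.120 * 0.0014        # disc + sleeve, boxed volume
          DISC_KG = 0.023
          DISC_GBIT = 50 * 8                      # 50 GB per disc
          FLIGHT_S = 5 * 3600                     # Pittsburgh to LA

          discs = min(CARGO_M3 / DISC_M3, CARGO_KG / DISC_KG)  # weight-limited
          gbit_per_s = discs * DISC_GBIT / FLIGHT_S
          cost = CARGO_KG * 1 + discs * 2                      # $1/kg + $2/disc
          print(f"{discs:,.0f} discs, {gbit_per_s:,.0f} Gbit/s, "
                f"${cost / (discs * 50):.2f}/GB")

      It prints essentially the same figures: about 2.3 million discs, roughly 51,000 Gbit/s, and about $0.04/GB.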
  • by Norgus ( 770127 ) on Monday November 29, 2004 @10:15AM (#10942322)
    If only my HDD could write that fast!
  • by omghi2u ( 808195 ) on Monday November 29, 2004 @10:18AM (#10942338) Journal
    Has anyone ever stopped to think this might be too fast for its own good?

    Isn't there a point when we've reached a speed where rather than deciding what to send from one place to another, we become lazy and start sending everything?

    And won't that just lead to massive researcher mp3 swaps? :P
    • by RAMMS+EIN ( 578166 ) on Monday November 29, 2004 @10:20AM (#10942356) Homepage Journal
      ``Isn't there a point when we've reached a speed where rather than deciding what to send from one place to another, we become lazy and start sending everything?''

      You mean like broadcasting radio and TV?
    • by oexeo ( 816786 ) on Monday November 29, 2004 @10:22AM (#10942370)
      > Has anyone ever stopped to think this might be too fast for its own good?

      Has the infamous Bill Gates quote not taught you anything?
    • Has anyone ever stopped to think this might be too fast for its own good?

      No, this isn't a car, it doesn't need human intelligence after the code has been developed to keep it from turning into a wreck.

      Isn't there a point when we've reached a speed where rather than deciding what to send from one place to another, we become lazy and start sending everything?

      This data transfer was part of the ramp-up for the start of the LHC, which will be taking data at 40TB/sec and recording approx 750MB/sec of data to disk.

    • The LHC guys at CERN are going to need to send *petabytes* of collision data around the world for analysis over the course of their experiments. It's crazy to think it, but some people really do have a need for this. I suppose this is why the Internet2 is largely restricted to research and education purposes.
    • For a couple of million users, this is around dialup. You'd need a whole bunch of these if you wanted to give a small country TV on demand.
    • by Anonymous Coward
      Canadian researchers on CA*net 3 did an interesting experiment around this very question.

      What do you do when your network is faster than your drives?
      You turn the network itself into a drive - a giant drive made of light and 1,000 miles in diameter.

      Basically, the idea is that instead of accessing data relatively slowly from a server's drive, you instead keep the data spinning around the fibre network at the speed of light. If anyone wants something - a DVD-quality movie, for example - they peel it off as it comes around.
    • It's not speed for its own sake.

      One of the major reasons why this is important (and higher bandwidth is good) is scientific data - things like data from CERN and other particle accelerators. These produce -huge- amounts of data every time they run, and this will allow researchers to access that data without actually having to go to places like CERN.
    • Duh! Until it's so fast that it gets there before I even decide to send it, it's not fast enough!

      Give me a terabit a second transfer rate and a year and I'll show you someone who is sick and tired of the damn wait for things to download/upload.

      It's all what you are used to.

    • by LordOfYourPants ( 145342 ) on Monday November 29, 2004 @02:05PM (#10944193)
      When people jumped from 56k to 1Mbps, the only thing that really changed for the *average joe* was grabbing mp3s and checking out more trailers.

      Contrary to popular belief, most people are not out there downloading a 9GB collection of Friends, season 1 or grabbing a 20GB MAME set with flyers and cabinets. Most people will just go buy the DVD or grab Midway Arcade Treasures and be happy.

      When people jumped from 1Mbps to 5Mbps, I've seen them take advantage of it by shopping on amazon 2ms faster than before.

      I think the real "danger" with higher speeds would lie in the realm of more annoying/higher-def advertising. When the day comes that it is trivial and technically possible on a large number of computers to download and display a 1920x1080 30-second interstitial ad before you can view a webpage, it *will* be done.

      You can already see this transition happening with lower res video as people try to pack a highly-compressed 30 second FMV ad into a flash box.

    • Just food for thought, this isn't so much about speed as it is about size (and you all thought that didn't matter).

      Think about it this way. If you have a 1" pipe and you send a little bit of water down it, the water reaches from one end to the other in a certain amount of time. Now take that up to a 4" pipe. Does the water travel any faster simply because it's a larger pipe? No. But the difference is that you can send MORE water in the same amount of time, not that you can send it from one end to the other any faster.
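
      A two-line sketch of the same point, with made-up numbers (4.7 GB payload, 50 ms one-way latency, 1 Mbit/s vs 101 Gbit/s): the latency term is identical on both links; only the size-over-bandwidth term shrinks.

          # Bigger pipe = more bytes per second, not faster arrival of the first drop.
          def transfer_time(size_bytes, bandwidth_bps, latency_s):
              return latency_s + size_bytes * 8 / bandwidth_bps

          for bw in (1e6, 101e9):                   # 1 Mbit/s vs 101 Gbit/s
              print(f"{bw/1e9:g} Gbit/s link: {transfer_time(4.7e9, bw, 0.05):,.2f} s")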
    • I remember watching a lecture on the research channel where a comparison was made of the growth rates of different technologies: CPU, storage and network bandwidth. The bottom line was that CPU performance growth follows Moore's Law (i.e. the perf increase is dominated by manufacturing issues) while network performance is increasing at 10X the CPU rate (disk is somewhere in between). The talk discussed the implications of this.

      The summary was that we'd need to revisit system tradeoffs. We currently compress data to trade CPU cycles for bandwidth; if the network keeps outpacing the CPU, that tradeoff starts to look backwards.

  • by Stokey ( 751701 )
    Cue the gags about "Finally, I shall be able to download my pr0n collection".

    Cue questions about whether it is gigabytes or gigabits.

    Cue questions about "How can I get such gaping-a$$ bandwidth?"

    One of these days I will write the ultimate FAQ to /. posts including all the possible combinations of arguments started by SCO stories, how politics is treated here and a whole chapter on non-funny memes.

    Go! Pedal faster.
  • Doesn't make sense (Score:5, Insightful)

    by oexeo ( 816786 ) on Monday November 29, 2004 @10:19AM (#10942352)
    new TCP protocols were used

    TCP is a specific protocol; a "new" TCP protocol would suggest a different protocol, unless it means a revision of the current one.

  • Is that with or without the pictures?
  • by Himring ( 646324 ) on Monday November 29, 2004 @10:22AM (#10942367) Homepage Journal
    Best read using Christopher Lloyd's voice from Back to The Future, e.g.:

    "101 jigowatts per second!!!" --Professor Emmett Brown
  • Bytes'n'Bits (Score:2, Informative)

    by mx.2000 ( 788662 )
    speed record of 101 gigabytes per second.

    Wait, isn't this supposed to be a nerdy tech magazine?

    I mean, I expect this kind of bit/byte confusion on CNN, but on Slashdot...
  • Sustained transfer? (Score:4, Interesting)

    by Anonymous Coward on Monday November 29, 2004 @10:23AM (#10942381)
    How did they sustain a transfer like that? Unless my math is wrong, that's 11GBps ... what has that kind of read/write speed?
    • ``what has that kind of read/write speed?''

      The network, obviously. And that's the only part that needs it - they don't need to be sending useful data.
    • Uh, they are measuring network speed, not harddrive/etc. speed.
        • The way to write 11GBps is to use a distributed array of disks. A parallel filesystem can easily handle it. Over 100 networked computers with a parallel filesystem like Lustre, GPFS or PVFS (1, 2 and 3... is there a 3?) can do it. I mean, there are disk arrays that have sustained throughput of over 55GBps. Also, the 11GBps that we see now may one day be used for having all sorts of communication going through it, so in a way it is a way of the future.
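
          A rough sizing sketch for that idea, with purely illustrative per-device figures (assume ~60 MB/s per 2004-era disk and ~1 GB/s usable per node; the real numbers depend entirely on the hardware):

              # How wide does a striped/parallel filesystem need to be
              # to feed an 11 GB/s pipe?
              import math

              target = 11e9          # bytes/s to sustain
              per_disk = 60e6        # assumed sequential throughput of one disk
              per_node = 1e9         # assumed usable throughput of one host

              print("disks needed:", math.ceil(target / per_disk))   # ~184
              print("nodes needed:", math.ceil(target / per_node))   # 11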
    • ... what has that kind of read/write speed?

      For the first time, a comment that starts "Imagine a Beowulf cluster..." might actually be on topic.

      More seriously, the Internet2 is designed for transferring massive scientific data sets between research institutions. The folks at CERN are planning to run experiments that generate terabytes of data per second. They're no doubt going to be using buckets of RAM and monster arrays of drives operating in parallel to keep on top of that. They wouldn't be developing links this fast if they had no way to feed them.

    • "How did they sustain a transfer like that? Unless my math is wrong, that's 11GBps ... what has that kind of read/write speed?"

      Good point, but that's the aggregate throughput of the data pipe and not necessarily generated or used by any two single end-point devices. They may test it this way as a proof of concept, but it's more likely that 1000 computers in a lab on one coast would send that total data through such a link to a lab on the other coast.

    • A 4-drive raid0 array on a PCI-X card.
    • As this was an experiment, it is likely that they merely sent pseudo-random data. Probably even just the same blocks of data repeatedly. You don't need a large data set to generate traffic for testing purposes, just something that is easily verifiable.
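
      That trick is simple to sketch: seed a PRNG identically on both ends so the receiver can regenerate and verify every block without any disk I/O at all. A minimal illustration (Python 3.9+ for randbytes):

          # Sender and receiver agree on a seed; the receiver recreates each
          # block locally and compares, so no payload ever touches a disk.
          import random

          SEED, BLOCK, COUNT = 42, 64 * 1024, 1000

          def blocks(seed, count=COUNT, size=BLOCK):
              rng = random.Random(seed)
              for _ in range(count):
                  yield rng.randbytes(size)

          # Stand-in for the network: the sender's blocks arrive unchanged.
          assert all(r == e for r, e in zip(blocks(SEED), blocks(SEED)))
          print("verified", COUNT * BLOCK, "bytes of reproducible test data")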
  • A station wagon hauling backup tapes. Too bad the latency is so high!
  • by Anonymous Coward on Monday November 29, 2004 @10:29AM (#10942419)
    I can transfer one and a half terabits from one end of the room to the other in less than a second in two easy steps.

    Step 1. Fill a 200MB hard drive with data.
    Step 2. Fling aforementioned hard drive in a frisbee-esque motion across the room.

    Expect some data loss, however.

    Take that Caltech!
    • Thanks for playing the home game. Unfortunately, due to a math error, we miscalculated the entry fee and have deducted $18000 from your bank account. Please come again.
    • Wow, you can fill a 200MB hard drive with data in less than a second?

      People comment about the bandwidth of a car full of DVDs or tape drives and the like, but do they ever stop to think about exactly how LONG it takes to write information to the medium? Driving from one place to another with the data is trivial, but converting the raw data into the transportable message takes absolutely forever.
    • This might be a bit derogatory, but I've heard that in England they toss midgets (some sort of bar game), and surely the information content of a midget is much more than 200MB. So the Brits have been transferring information in their drunken stupor for centuries, faster than these dudes.

      If anybody shorter than me (5'11") is offended by midget tossing, blame the Brits, not me.

      • Blame the Brits? Unfortunately, this bar game is around in the States, too. The problem is not just that most dwarfs (we call ourselves "dwarfs", or "short-statured") see this as degrading, but that it is dangerous. Not that we're particularly worried about the dwarfs that subject themselves to this - they are probably aware of the risks, even if they are ignoring them - but the fact that this is seen as "acceptable" creates a danger for Joe Dwarf walking down the street, in that some day, some drunk ass is going to try it on a dwarf who never volunteered.
  • by CastrTroy ( 595695 ) on Monday November 29, 2004 @10:30AM (#10942428)
    They could probably get much better speeds if they compressed it first. The Library of Congress is quite compressible, as there is a lot of redundant data. Text in general is known to be quite compressible.

    Here's a question. Sure, you can send 101 Gigabits per second. But what kind of power do you need on either end to send or interpret that much data? I know my hard drive doesn't go that fast. I don't even think my RAM is that fast.
    • >>The Library of Congress is quite compressible

      I do hope you mean that the data content within the library is compressible... The building itself is quite tough to compress!
    • Oftentimes it isn't worth the effort to compress data, especially when your network bandwidth greatly exceeds the rate at which your system can compress the data.
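
      A tiny benchmark makes that tradeoff concrete: measure how fast zlib actually compresses on your box and compare it to the link rate. The 101 Gbit/s figure and the toy input are the only assumptions here.

          # If the compressor is slower than the link, compressing first is a
          # net loss (ignoring the receiver and CPU cost entirely).
          import time, zlib

          data = b"The Library of Congress is quite compressible. " * 4096
          start = time.perf_counter()
          packed = zlib.compress(data, 6)
          secs = time.perf_counter() - start

          compress_rate = len(data) / secs            # bytes/s through zlib
          link_rate = 101e9 / 8                       # bytes/s on the wire
          print(f"zlib: {compress_rate/1e6:.0f} MB/s, ratio {len(data)/len(packed):.0f}x")
          print("compress before sending?", compress_rate > link_rate)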
  • by Viol8 ( 599362 ) on Monday November 29, 2004 @10:34AM (#10942458) Homepage
    SCTP was specifically devised as a replacement for TCP: it can emulate the 1 -> 1 connection of TCP but can do connection-based 1 -> N too. I thought it had been designed with high speed in mind as well. Does anyone know whether this protocol is being used more and more, or has it just become another good-idea-at-the-time that got run over by the backwards-compatibility steamroller?
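
    For the curious, here is roughly what the one-to-many ("1 -> N") style looks like through the plain sockets API on a Linux kernel built with SCTP support. This is an untested sketch; the port and peer addresses are placeholders.

        # One SCTP socket, many peers: SOCK_SEQPACKET gives the one-to-many
        # style, where each sendto()/recvfrom() names a different association.
        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_SEQPACKET,
                             socket.IPPROTO_SCTP)
        sock.bind(("0.0.0.0", 5000))
        sock.listen(5)                # also accept inbound associations here

        for peer in [("10.0.0.2", 5000), ("10.0.0.3", 5000)]:  # placeholders
            sock.sendto(b"hello over SCTP", peer)   # implicit association setup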
  • Is it needed? (Score:3, Insightful)

    by Kombat ( 93720 ) <kevin@swanweddingphotography.com> on Monday November 29, 2004 @10:39AM (#10942495)
    This is great and all, but has anyone stopped to ask why we need such fast networks? The stock-frenzy-driven surplus of unneeded bandwidth was a major contributing factor to the dot-com bust. I remember when I was working on a multi-gigabit, next-generation optical switch, and the project manager was assuring us that in just a few years, people would be downloading their movies from Blockbuster instead of actually traveling there to pick up a DVD. We were all supposed to be videoconferencing left and right by now, with holographic communications just around the corner. A massive growth in online gaming was supposed to cripple the existing legacy networks, forcing providers to upgrade or perish. All of this was supposed to generate a huge demand for bandwidth, which we were poised to deliver.

    Well, as we all know, that demand never materialized. We had way more bandwidth than the market needed, and when the bandwidth finally became stressed, providers opted to cap bandwidth and push less-intensive services rather than pay for expensive upgrades to their infrastructures.

    I think we should instead be focusing on technologies that a) generate real new revenue for the providers we're trying to sell these ultra-fast networks to, b) have obvious and legitimate research or quality-of-life benefits, and c) are sure-fire hits to attract consumer attention (and $$$).

    Don't get me wrong, this is very cool and all, but until Netflix actually lives up to its moniker and sends me my rented movies through my phone/cable line rather than via UPS, it doesn't really matter to me whether the network is capable of 5 Gbps or 500 Gbps. Slashdot will still load in a second or two either way. We need real products to take advantage of this massive bandwidth, and that revenue will drive research even further, faster. I fear we're going to stall out unless we find a way to embrace these faster networks and make money off of them.
    • We were all supposed to be videoconferencing left and right by now, with holographic communications just around the corner.

      These are the same snake-oil futurists who once told us we'd have flying cars, fully automated homes, vacations in space, sexbots and televised death sports.

      OK, maybe only Norman Jewison predicted televised death sports, but you get my point. They would righteously rock, though. Especially watching televised death sports while fucking my sexbot in my flying car.

    • Re:Is it needed? (Score:3, Insightful)

      by rsmith-mac ( 639075 )
      Just because "the future" isn't happening in N. America yet doesn't mean it isn't happening elsewhere. N. America is constrained by its last-mile problem, but Asian nations like S. Korea and Japan don't suffer from this, which is why they already have multi-megabit fiber drops to homes and businesses. Sure, we on our minuscule ADSL and cable hookups may not see the need for such massive bandwidth since we can't use it, but when you have a 1000-unit apartment complex with 100Mb fiber drops, this kind of interconnect starts to make a lot of sense.
  • Better wait (Score:2, Funny)

    by koi88 ( 640490 )

    I dunno, my internet still seems pretty fast.
    I guess I'll skip this Internet2 thing and just wait for Internet3.
  • Possible uses? (Score:5, Insightful)

    by yetanothermike ( 824215 ) on Monday November 29, 2004 @10:52AM (#10942611)
    Instead of looking at the possibility of beefing up your catalog of Futurama episodes, think about the new uses for it.

    Medical imaging produces very large files, and the need to transfer them over distances quickly to save lives is real.

    The possibility for video is great as well. Imagine getting multiple feeds of the next WTO event from different sources on the ground. Or quality alternative broadcasting that isn't just a postage-stamp-sized, pixelated blob. Torrents are nice, but there is something to be said for being jacked in live.

    And for those who didn't RTFA, it's 3 DVDs a second.

  • by daveschroeder ( 516195 ) * on Monday November 29, 2004 @11:08AM (#10942717)
    ...how fast this could transfer the sum of all data (DNA, memory, etc.) contained in a human.

    Yes, I'm kidding. But only half kidding. In some crazy future where we can reconstitute energy into matter, how much bandwidth would be needed to do this practically? Do we even have any ideas or estimates on how much storage would be needed to accurately represent the nature of the human body in terms of data? And no, I'm not talking about the "memory" of the brain - I'm talking about the physical manifestation of the body itself, of which the memory of the brain is a part.
    • by Anonymous Coward
      There are on the order of 10^27 atoms in a human (6.022*10^23 atoms per 12 grams of carbon, so roughly 10^27 for a 100kg carbon blob).

      Even at 100 gigabytes per second, and assuming say 100 bytes per atom, that's 10^29 bytes, or about 10^18 seconds - a couple of times the age of the universe.

      We need another 10^9 increase.
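
      Plugging the same assumptions (10^27 atoms, 100 bytes per atom, 100 GB/s) into a couple of lines:

          # Time to stream one human, under the parent comment's assumptions.
          atoms, bytes_per_atom, rate = 1e27, 100, 100e9     # rate in bytes/s
          seconds = atoms * bytes_per_atom / rate            # ~1e18 s
          universe_s = 13.7e9 * 365.25 * 24 * 3600           # age of the universe
          print(f"{seconds:.1e} s (~{seconds / universe_s:.1f}x the age of the universe)")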
    • ...how fast this could transfer the sum of all data (DNA, memory, etc.) contained in a human.

      Another poster has already provided an excellent summary of how long it would take to transfer a whole 'human', assuming 100 bytes per atom.

      I will note that DNA is actually easy. Since it's massively redundant--just about every cell has a copy of the same stuff--you only need to send it once. The entire human genome is three billion (3E9) base pairs. Each base is one of only four possibilities, so that's just two bits per base: about 750 megabytes for the whole thing.
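
      And the genome case for comparison (3e9 base pairs, 2 bits per base, the 101 Gbit/s link from the article):

          # The genome is tiny next to the "whole human" problem above.
          base_pairs, bits_per_base, link_bps = 3e9, 2, 101e9
          size_bytes = base_pairs * bits_per_base / 8        # ~750 MB
          print(f"{size_bytes/1e6:.0f} MB, sent in {size_bytes*8/link_bps:.2f} s")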

  • Such a blazingly fast connection is amazing, but how the hell do they get the data onto and off the pipe? Are the disk read and write speeds up to that? What about the RAM? How the hell do they do that?!
  • by F4Codec ( 619560 ) on Monday November 29, 2004 @11:55AM (#10943032) Journal
    A paper I wrote a while ago...

    Some Perceived Problems with the Introduction of Terabit Network Technologies.

    This short paper attempts to highlight some potential problems associated with the introduction of high speed networking - specifically at the Terabit per second level. These problems are still in the theoretical arena as practical Terabit networks are probably still several weeks away from fabrication.

    Introduction.

    The primary problem when considering Terabit networks must be the enormous speed at which the packets on such networks will be traveling. Naturally there are problems at the protocol level, with very large window sizes necessary for useful throughput and enormous quantities of data "in flight" at any one point (a quick window-size sketch follows this comment). However, these problems are encountered at the Gigabit level and are solvable in principle (by appropriate window and packet size negotiation, for instance).

    The major problem that is perceived at such high speeds is that data is now flowing at a significant fraction of the speed of light. This brings into play a number of relativistic effects that must be taken into account when designing such high speed networks.

    Physical Considerations.

    There are firstly a number of physical considerations that must be taken into account. These are problems associated with any body traveling close to the speed of light (c).

    1. A large amount of energy is required to accelerate the packets to the required velocity. However, the closer one approaches c, the more of that energy is transformed into mass. Thus packets will become heavier. A related problem is the slowing down of packets when they enter conventional lower-speed Megabit networks. The large amounts of energy that have gone into accelerating the packets and giving them extra mass will be lost. This will require large heat sinks. Cable fractures may also be explosive in these cases (which is in keeping with the abbreviation TNT - Terabit Network Technologies). Alternatively, a special large coil of cable could be used to allow the packets to slow down naturally.
    2. Networks often need to be laid to fit the physical shapes of buildings and other infrastructure. When any body traveling close to c undergoes acceleration it tends to emit "braking radiation", or bremsstrahlung. This is particularly noticeable when bodies have to undergo angular acceleration while turning corners. Thus any bends in the cabling will have to be heavily shielded with lead plates to stop the intense bursts of high-energy particles. At high enough speeds, the curvature of the Earth may also cause a significant loss of energy.
    3. Whilst traveling at high speeds, the packets will undergo time-dilation effects. Thus whilst two ends of a connection may agree on a round-trip time for a packet, this may well differ from the packet's perceived RTT. The packet's estimate of the RTT will be shorter than the measured delay. Therefore if times are kept in the packet this will lead to confusion.
    4. When a body is traveling at high speeds, it tends to shrink in the direction of travel. This means that a packet taking 1400 bytes might actually take up only 1300 bytes of space on the network. This leads to more capacity being available than might first be perceived. However, all routers must be able to handle packets at speed to stop them suddenly growing. This suggests that circuit switching may be a better base technology.

    A perhaps more serious problem is the case of collisions on a network technology such as Ethernet. The collision of two very high speed packets could give rise to many spectacular effects, equivalent to those seen in current particle accelerators.
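
    The window-size remark in the introduction is the one genuinely practical point: the data "in flight" is the bandwidth-delay product, and at these speeds it gets large. A quick illustration, assuming a made-up 100 ms coast-to-coast RTT:

        # Bandwidth-delay product: how much unacknowledged data must be in
        # flight to keep a fat, long pipe full.
        link_bps, rtt_s = 101e9, 0.100        # 101 Gbit/s, assumed 100 ms RTT
        bdp_bytes = link_bps * rtt_s / 8
        print(f"window needed: {bdp_bytes / 1e9:.2f} GB in flight")   # ~1.26 GB

    Classic TCP's un-scaled 16-bit window tops out at 64 KB, which would leave a link like this almost entirely idle - hence the interest in new windowing and congestion-control schemes for records like this one.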

  • by zijus ( 754409 ) on Monday November 29, 2004 @12:14PM (#10943134)

    I read a lot of: "Is this needed? Let's be clever and ask ourselves what we are doing..."

    Frankly, it is hilarious coming from folks who probably jumped on Gmail, iPods, stupid phones that do everything but work when needed, and other devices which are arguably the most unneeded stuff on the planet. (No, you won't get me to believe your 200MB of email is worth keeping...)

    Ciao

    As a reminder, the ALICE experiment at CERN will produce 1 PB (petabyte) of _raw_ data per year. This is only _one_ experiment out of _four_. Add DB overhead and you start to get the picture (there's a quick transfer-time check after this comment). And no, there won't be backups: too big. Particle physics is by nature statistical - the search is for slight deviations from what is predicted - so the amount of raw data is huge. On top of that, the rate of (raw) data produced per second will in some cases be orders of magnitude bigger.

    It is thought that some data will not be stored at CERN at all, but sent straight to remote storage farms. Too much data to store locally.

    The people analysing those data will be scattered across the planet, hence the need for big transfers.

    Ha ha ha: is this needed? Hee hee, let's think about it... Please dump all the crap data you pretend to need and then ask the question again.
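
    Rough transfer-time check on those numbers (1 PB per experiment per year, the 101 Gbit/s link from the article; the rest is straight division):

        # Shipping one experiment's yearly raw data over the record link.
        petabyte_bits = 1e15 * 8
        link_bps = 101e9
        hours = petabyte_bits / link_bps / 3600
        print(f"{hours:.0f} hours per PB; x4 experiments = {4 * hours:.0f} hours")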

  • http://boson.cacr.caltech.edu:8888/

    A Jini-based, self-discovering network monitoring tool. That's pretty damn cool too.

    And I thought that Jini was totally ignored after I bought "Jini in a Nutshell" for $0.50 at a church book sale!
  • Don't let the MPAA hear, they'll try to use it as evidence of piracy.
