Ethernet The Occasional Outsider 169

coondoggie writes to mention an article at NetworkWorld about the outsider status of Ethernet in some high-speed data centers. From the article: "The latency of store-and-forward Ethernet technology is imperceptible for most LAN users -- in the low 100-millisec range. But in data centers, where CPUs may be sharing data in memory across different connected machines, the smallest hiccups can fail a process or botch data results. 'When you get into application-layer clustering, milliseconds of latency can have an impact on performance,' Garrison says. This forced many data center network designers to look beyond Ethernet for connectivity options."
This discussion has been archived. No new comments can be posted.

  • Long Live! (Score:5, Funny)

    by Anonymous Coward on Thursday May 25, 2006 @03:20PM (#15404153)
    Long Live the Token Ring!

    One Ring to rule them all
    • Mod parent up as funny!

      (I was actually going to post something similar, but this one beat me to the punch).

      Do they even make Token Ring anymore? I know the MAUs were hella-expensive.

    • I saw students bring in computers with token-ring cards when I worked at a University Helpdesk. They would come in and say "My computer's broken, I plugged 'the internet' in but it won't connect" (we would troubleshoot over the phone and they would want us to come up to their room; after much repeating of our policies they would cave and bring it down, because they wanted to download their pr0n). I was baffled when it would turn out to be a token-ring card... I was like "Where the HELL did they get this?". I'
      • Well, my college was connected with token ring up until 2001, when they did a complete network overhaul. Maybe you got one of our transfer students :D

        Apparently, my college got a great deal on token ring from IBM in the early 90s, and at the time it was plenty fast. But by the mid 90s, it was showing its age, with no upgrade path. Back when my college still had no clue how to manage their network (read: 1997, pre-Napster), it consisted of a "turbo" (16Mbit) token ring backbone with various 4Mbit and 16Mb
    • ARCNET is the tank of network protocols. I was once working on an arcnet system and I tripped over the cable and yanked it out of the wall. Would you believe the token jumped out of the cable, ran across the floor and jumped into the wall.

      Nothing stops ARCNET!
      • I had a teacher once who ran ARCNET over a section of barbed-wire fence, just to prove it would work. It worked for about a week until he got bored of it and took it apart, even working through a rainstorm (that made it drop some packets though)

      • I had an arcnet setup one year in my dorm with the people across the hall. The campus ethernet would only allow one computer to be active per room (switch with 1 MAC address per port limit). We had the linux machines setup as routers, so we could get to the internet via our ethernet, or via the arcnet across the hall to another computer. We could get multiple computers that way, as long the ethernet switch only saw one MAC address, it didn't matter how many IPs you had behind it. We could have used a
  • by Anonymous Coward on Thursday May 25, 2006 @03:23PM (#15404181)
    In our Data Center, we have a great big vat of steaming salt water and we drop one end of the cat5 cables from each server into the vat....those packets that can't figure out where they're going just drop to the bottom and die ...we have to drain this packet-goo out once a month. (but we do recycle it...we press it into CDs and sell them on Ebay)

    (Seriously, haven't people heard of cut-through switches, which just look at the first part of the header and switch based on that... store-and-forward switches are so "1990s")

    TDz.
    • that was my thought exactly (not the salt vat.. although i like it)

      we have a small office, ~20 computers and 3 servers.. and i refuse to buy switches that can't do cut through.. store and forward is slow.. and very memory intensive for switches on high speed networks..
    • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Thursday May 25, 2006 @03:41PM (#15404350)
      There are only TWO reasons to use Store & Forward.

      #1. You're running different speeds on the same switch (why?).

      #2. You really want to cut down on broadcast storms (just fix the real problem, okay?)

      Other than that, go for the speed! Full duplex!
      • People run different speeds on the same switch all the time, and for not necessarily poor reasons: If you have an SMB (in this case, that's small or medium business) with maybe one big fileserver, you don't need to run gigabit to everyone... You can run 100Mbps to the clients, and run gig to the switch only. Of course, since just about everything but laptops is coming with gig now (and probably some of them) this is becoming less valuable.
        • There's plenty of hardware out there that doesn't come gig-e equipped. Hell, I still deploy RS232 terminal concentrators at 10 megs now and then.
          • While that's true, most of the time those kind of devices would be happiest on their own subnet for security and management reasons - or at least, I'd be happiest with them there. Therefore they can live on different router interfaces, whether the router's from cisco, or a PC from fry's with linux on it. The only time it's really necessary to mix speeds on the same switch is when you have multiple clients accessing a resource and their aggregate speeds make it useful.
        • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Thursday May 25, 2006 @04:34PM (#15404805)
          People run different speeds on the same switch all the time, and for not necessarily poor reasons: If you have a SMB (in this case, that's small or medium business) with maybe one big fileserver, you don't need to run gigabit to everyone...
          What's with the "need to"?

          I'm talking performance. Store & Forward hammers your performance. In my experience, you get better performance when you run the server at 100Mb full duplex (along with all the workstations) and use Cut Through than if you have the server on a Gb port, but run Store & Forward to your 100Mb workstations.
      • #1. You're running different speeds on the same switch (why?).


        AFAIK, wireless doesn't consistently support 100Mbps compared to local Ethernet. Usually, I get around 54Mbps, or possibly 10 Mbps on a weak signal.

        That's why you still see store-and-forward - Wireless and wired networks are different speeds.

      • #1. You're running different speeds on the same switch (why?).
        let's see:

        you have an older but still functional and economical to run printer with a 10base2/T combo card in it and for which a replacement card would be either expensive or unobtainable.

        you have 100mbit to most of the desktops because your wiring wasn't done well enough for gig-e to cope.

        you have gigabit to your servers

        you have a 10 gigabit backbone link.

        also, even if a switch is cutting through a lot of packets, it's still going to have to queue t
    • (Seriously, haven't people heard cut-through switches which just look at the first part of the header and switch based on that... store-and-forward switches are so "1990s")

      Even still - low 100 ms for store-and-forward ethernet switches? That seems really, really high. I would've said more like single milliseconds, which is still high, but it isn't 100 ms.

      I know from experience that I've used store-and-forward ethernet switches with much, much better latency than 100 ms.
  • The NSA's network sniffer, recently discovered at an AT&T broadband center, can only sniff up to 622MB [slashdot.org]. Sounds to me like if you use an InfiniBand switch, that would effectively make the output of the NSA's network sniffers complete gibberish.
  • by victim ( 30647 ) on Thursday May 25, 2006 @03:23PM (#15404183)
    I don't think I need to read any more; well, I did verify that the number really appears in the article.
    This author does not understand the subject material.

    (I suppose you could deliberately overload a switch enough to get this number, maybe, but that would be silly, and your switch would need 1.25Mbytes of packet cache.)
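    To put a number on that, here is a quick back-of-the-envelope check in Python (the 100Mbps port speed is an assumption, chosen only to match the 1.25Mbyte figure above):

        # How much a store-and-forward switch would have to be buffering to add
        # ~100 ms of delay on one port. The link speed is assumed, not from TFA.
        link_rate_bps = 100e6   # 100Mbps port
        delay_s = 0.100         # the article's "low 100-millisec" claim

        buffered_bytes = link_rate_bps * delay_s / 8
        print(f"{buffered_bytes / 1e6:.2f} MB of packets queued")   # -> 1.25 MB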
    • by merreborn ( 853723 ) on Thursday May 25, 2006 @03:27PM (#15404228) Journal
      Looks like the author fucked up the definition of millisecond too:

      "By comparison, latency in standard Ethernet gear is measured in milliseconds, or one-millionth of a second, rather than nanoseconds, which are one-billionth of a second"

      http://www.google.com/search?hl=en&q=define%3Amillisecond&btnG=Google+Search [google.com]
      "One thousandth of a second"

      Seriously. How the fuck does this idiot get published?
    • This author does not understand the subject material.

      I disagree. The author has simply misplaced his metric units. He used the word "milliseconds", where he should have used the word "microseconds". You can see an example of this where he refers to milliseconds as one millionth of a second, rather than the one thousandth that they actually are.

    • Problems also include the use of the term "store and forward Ethernet" (WTF does that mean?!) and the fact that Ethernet channel bonding has been around for about 10 years.
      • Store and forward is when the switch reads in the entire packet before making a routing decision. Most protocols, including Ethernet and TCP/IP, send the target address very early in the frame precisely so that store and forward isn't necessary. Instead they use a strategy called cut-through switching, where they read just enough of the frame to determine where to send it and then send the remainder to the destination port as it arrives on the source port. Most home or small office switches use store and
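        A rough sketch of what that difference means per hop, in Python (illustrative only: it ignores lookup time, queueing and the preamble, and the frame/header sizes are just typical values, not anything from the article):

            def store_and_forward_us(frame_bytes, link_bps):
                # the switch must clock in the whole frame before forwarding any of it
                return frame_bytes * 8 / link_bps * 1e6

            def cut_through_us(header_bytes, link_bps):
                # the switch only needs enough of the frame to read the destination MAC
                return header_bytes * 8 / link_bps * 1e6

            for link_bps, name in [(100e6, "100Mbps"), (1e9, "1Gbps")]:
                sf = store_and_forward_us(1500, link_bps)  # full-size Ethernet frame
                ct = cut_through_us(14, link_bps)          # dest MAC is in the first 14 bytes
                print(f"{name}: store-and-forward ~{sf:.0f}us/hop, cut-through ~{ct:.2f}us/hop")

        Either way the added delay comes out in microseconds per hop, not anywhere near the article's "low 100-millisec".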
        • I was not just being pedantic. I come from the old-school world where store-and-forward means something like UUCP [wikipedia.org], so the phrase "store-and-forward ethernet" sounded like an oxymoron to me.

          If store-and-forward is indeed used in two very different contexts, it might be helpful for someone to update the Wikipedia article on store and forward [wikipedia.org] with current, and accurate meanings.
  • Ultra-low latency networking is a minor interest of mine, but one I've never had the chance to really pursue. Can anyone familiar with the landscape recommend some low-cost options for experimenting with this stuff? Or maybe just let me down gently. "No, Sammy, there are no low-cost options. And there's no Santa Claus."
    • Re:Low-cost options? (Score:3, Informative)

      by dlapine ( 131282 )
      Define low cost? Myrinet [myrinet.com] with less than 10 microsecond latency is normally considered to be the least expensive option. You can check their price lists, but an 8 port solution [myrinet.com] (with 8 HBA's) will set you back over $8k, not including the fiber.

      For some people, that's cheap. If not, sorry.

  • by Anonymous Coward on Thursday May 25, 2006 @03:25PM (#15404203)
    From the article, three paragraphs in:
    "(By comparison, latency in standard Ethernet gear is measured in milliseconds, or one-millionth of a second, rather than nanoseconds, which are one-billionth of a second)"

    That would be one-thousandth, not one-millionth (one-millionth of a second is a microsecond). Not a good start...

    • Well, that, and Ethernet gear is measured in milliseconds? That doesn't seem useful. If I run a traceroute, the time is listed as "1ms" for all the internal hops. There are 5 internal hops, all Ethernet. In my experience, all modern Ethernet gear adds less than a millisecond of latency. Traceroute programs only report milliseconds because that's a useful measure for Internet traffic, and anything under 1ms can safely be called "really fast" for normal work.

      Seems to me you'd need to measure ethernet gear in mic
  • by with_him ( 815684 ) on Thursday May 25, 2006 @03:25PM (#15404209)
    I just blame it on the ether-bunny.
  • Software design (Score:3, Interesting)

    by nuggz ( 69912 ) on Thursday May 25, 2006 @03:26PM (#15404213) Homepage
    The original post makes some comments:
    sharing memory ... the smallest hiccups can fail a process or botch data results.
    Sounds like bad design, or a known design trade-off.
    Quite reasonable: when on a slow link, assume the data I have is correct until I know better; if it isn't, throw it out and start over. Not wildly different from branch prediction or other approaches to this type of information.

    'When you get into application-layer clustering, milliseconds of latency can have an impact on performance,'
    Faster is faster, not really a shocking concept.
    • what it looks like to me is.. ok so they set something up using normal 100/1000 ethernet and then realized something was slow, and that if they use gbic 30gb ports things run faster... can someone please send them a cookie?
  • by pla ( 258480 ) on Thursday May 25, 2006 @03:26PM (#15404218) Journal
    The latency of store-and-forward Ethernet technology is imperceptible for most LAN users -- in the low 100-millisec range.

    I don't know what sort of switches you use, but on my home LAN, with two hops (including one over a wireless bridge) through only slightly-above-lowest-end DLink hardware, I consistently get under 1ms.



    When you get into application-layer clustering, milliseconds of latency can have an impact on performance

    Again, I get less than 1ms, singular.



    Now, I can appreciate that any latency slows down clustering, but the ranges given just don't make sense. Change that to "microseconds", and it would make more sense. But Ethernet can handle single-digit-ms latencies without breaking a sweat.
    • I wonder if his messed up numbers come from his mistaken belief that a millisecond is three orders of magnitude smaller than it is.
    • Sure, for an 8 port switch, where all the computers have a direct connection. Consider the issues involved for a router with 128 machines all trying to cross-communicate. Or larger collections of computers that might need to use multiple sets of switches to span the entire system.

      On a Force10 switch, with 2 nodes on the same blade:
      tg-c844:~ # ping tg-c845
      PING tg-c845.ncsa.teragrid.org (141.142.57.161) from 141.142.57.160 : 56(84) bytes of data.
      64 bytes from tg-c845.ncsa.teragrid.org (141.142.57.161):

      • 1) Neat stuff in your cluster.

        2) A fair number of ethernet switches exist for ~500 nodes @ 1Gbps that will have predictable latency, like the force10 you are describing. 900 nodes would be tough, admittedly, at the moment. Also, I don't think you meant to say "router" -- you almost certainly are switching if it's all configured right.

        3) Myrinet is very specialized and uses cut-through switching. Ethernet is a generalized protocol that can be used on a WAN, and is almost always store-and-forward. Store-a
    • I don't know what sort of switches you use, but on my home LAN, with two hops (including one over a wireless bridge) through only slightly-above-lowest-end DLink hardware, I consistently get under 1ms.

      Wow, you must use some really old hardware. My packets arrive before I send them:

      Pinging 10.0.0.1 with 32 bytes of data:

      Reply from 10.0.0.1: bytes=32 time=-11ms TTL=63
      Reply from 10.0.0.1: bytes=32 time=1ms TTL=63
      Reply from 10.0.0.1: bytes=32 time=-11ms TTL=63
      Reply from 10.0.0.1: bytes=32 time=2ms TTL=63

  • On my planet, a millisecond is a full thousandth of a second, not just one millionth.

    Oh, well. People tell me I'm just slow.

  • That just sounds daft. Given what a bottleneck hard drives are for CPUs, it doesn't sound like a great shock that when you gotta wait for your data over Ethernet you're going to see problems.

    Maybe I should RTFA...
    • This is a NUMA (non-uniform memory access) cluster. Basically a bunch of computers working together that occasionally need to access the same data. If the last process to need that data happens to be on another computer, it needs to be transferred. The trick to these clusters is writing software so that the need to transfer is minimal, and so the same data set stays on the same processor, to the best of your ability.
    • Naturally, the article is riddled with errors, because (1) the author isn't a subject matter expert and (2) good sub-editing is no longer considered essential.

      However, there are applications that "share memory" over networks; Oracle RAC springs to mind, where the nodes in the cluster share database blocks as required. However, Oracle recommend gigabit point-to-point connections between nodes, rather than a general-purpose network. The latter tends to make the cluster unusable.
    • Maybe I should RTFA...
      Either that, or you should take the class [gatech.edu] that I took this past semester. There's a bunch of links to research papers and lecture slides about distributed shared memory (and other kinds of parallel/shared computing issues), if you care to read them.
  • Ethernet's strength is its flexibility, not its speed per se. It can handle changing network environments where hardware or software is added and removed continually, and you never know quite where the bandwidth is most needed. You just plug it all in, and Ethernet does a decent job of negotiating who gets to use the bandwidth.

    But it's never been a really high speed protocol. It's easy to beat, speed-wise, as long as you know what the network use looks like ahead of time.

    Which of course is a killer for m
  • No kidding (Score:5, Interesting)

    by ShakaUVM ( 157947 ) on Thursday May 25, 2006 @03:46PM (#15404389) Homepage Journal
    Er, yeah. No kidding.

    When I was writing applications at the San Diego Supercomputer Center, latency between nodes was the single greatest obstacle to getting your CPUs running at their full capacity. A CPU waiting to get its data is a useless CPU.

    Generally speaking, clusters that want high performance use something like Myrinet instead of Ethernet. It's like the difference between consumer, prosumer, and professional products you see in, oh, every industry across the board.

    As a side note, the way many parallel apps solve the latency issue is by overlapping their communication and computation phases instead of keeping them in discrete phases; this can greatly reduce the time a CPU sits idle.

    The KeLP kernel does overlapping automatically for you if you want: http://www-cse.ucsd.edu/groups/hpcl/scg/kelp.html [ucsd.edu]
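    For anyone who hasn't seen the overlap trick, here is a minimal sketch of the idea using mpi4py's nonblocking calls (this is not KeLP itself; it assumes mpi4py and NumPy are installed and shows a simple one-directional ring exchange):

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        size = comm.Get_size()
        right = (rank + 1) % size
        left = (rank - 1) % size

        local = np.random.rand(1_000_000)   # this rank's chunk of the problem
        halo = np.empty(1)                  # boundary value expected from the left

        # Start the communication, but don't wait for it yet.
        send_req = comm.Isend(local[-1:], dest=right, tag=0)
        recv_req = comm.Irecv(halo, source=left, tag=0)

        interior = local[:-1].sum()         # useful work while the network is busy

        MPI.Request.Waitall([send_req, recv_req])
        total = interior + halo[0]          # fold in the remote value once it arrives
        print(f"rank {rank}: partial result {total:.3f}")

    Run it with something like mpirun -np 4 python overlap.py; the point is simply that the interior work costs nothing extra, because it happens while the boundary exchange is in flight.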
    • Generally speaking, clusters that want high performance use something like Myrinet instead of Ethernet. It's like the difference between consumer, prosumer, and professional products you see in, oh, every industry across the board.

      That reminded me of the TOP500's statistics generator [top500.org], so I just had to look up the current list's (November 2005) statistics for "interconnect family" [top500.org]. For those that are curious:

      • Myrinet is the second most-used interconnect in the TOP500 at 14% (70 out of 500) followed by Hy
      • In the TOP500, it looks like ethernet is not yet an "outsider." Perhaps in the "top 100."

        It depends on what you're doing. If your job is highly parallel, Ethernet is fine. But what happens when every CPU needs access to every other CPU's results in "real time"? Well, low latency is then a must. 1 ms of latency is potentially millions of wasted cycles.

      • /shrug

        In practice, cheap is the reality. Just like how consumer goods dominate the market, with much less prosumer and professional equipment sold.

        Fast interconnects are way more expensive than ethernet. People that want the extra performance, though, pay for it.
  • The article's worth reading, if you're not already familiar with currently popular cluster interconnects, but the title of "Data center networks often exclude Ethernet" is totally bogus.

    I guess "Some Tiny Percentage of Data Centers use Something Faster than Ethernet in addition to Ethernet" didn't fit on the page.

    • ..or cheap enough to use Ethernet for processor interconnect.

      SGI had some kind of shared-memory-over-Ethernet protocol back in the day. Worked about as well as a steam-powered ornithopter. It was designed for customers too cheap or unconcerned about performance to use when they had to.

      And I dabbled in OpenMP or whateveritwas back at a contract with just one such cheap customer, and they got what they paid for. Here's a nickel, kid.

      Ethernet is Ethernet, and Infiniband et al. is Infiniband et al., dad-gummit.
  • --- malin.vidarlo.net ping statistics ---
    15 packets transmitted, 15 received, 0% packet loss, time 14003ms
    rtt min/avg/max/mdev = 0.310/0.347/0.375/0.019 ms

    2 hops, over 100Mb ethernet with a cheapass switch (8 port unmanaged hp). Seems like he got no grip on numbers...
  • Just had a quick ping to the beeb... via a wireless hop onto my ethernet network, two hops to my adsl router, then 6 hops around Nildram's network (ATM into their network then god knows, probably some form of gigabit ethernet) and a couple more hops to the bbc.

    Average latency is around 20ms.

    Now I know this isn't plain straight Ethernet, but I'd have guessed that, if anything, the latency on ATM plus the change from 802.11g to Ethernet to ATM to Ethernet to whatever would have been worse.

    So either someone is usin
  • The worst post! (Score:3, Informative)

    by Anonymous Coward on Thursday May 25, 2006 @04:22PM (#15404708)
    I wonder what's happening to slashdot. That's as bad as technical news can get. Ethernet latency -- 100ms?? Typical Ethernet latencies are around a few hundred microseconds. Even the ping round-trip time from my machine to google.com is about 20ms.

    $ ping google.com
    PING google.com (64.233.167.99) 56(84) bytes of data.
    64 bytes from 64.233.167.99: icmp_seq=1 ttl=241 time=20.1 ms
    64 bytes from 64.233.167.99: icmp_seq=2 ttl=241 time=19.6 ms
    64 bytes from 64.233.167.99: icmp_seq=3 ttl=241 time=19.5 ms

    What a shame that such a post is on the front page of slashdot! Someone please s/milli/micro.
  • by m.dillon ( 147925 ) on Thursday May 25, 2006 @04:45PM (#15404920) Homepage
    The slashdot summary is wrong. If you read the actual article the author has it mostly correct except for one comment near the end.

    Ethernet latency is about 100us through a gigE switch, round-trip. A full-sized packet takes about 200us (microseconds), round-trip. Single-ended latency is about half of that.

    There are proprietary technologies that have much faster interconnects, such as the infiniband technology described in the article. But the article also mentions the roadblock that a proprietary technology represents over a widely-vendored standard. The plain fact of the matter is that ethernet is so ridiculously cheap these days it makes more sense to solve the latency issue in software, for example by designing a better cache coherency management model and by designing better clustered applications, than it does with expensive proprietary hardware.

    -Matt
    • I think you have no clue about what you're saying.
      1) InfiniBand is an open standard hosted by the IBTA, which is a consortium of companies. The spec is available for anyone who wants to understand/build InfiniBand hardware. Not being IEEE does not make it proprietary.
      2) The major roadblock with 10Gbps is physics. You can only reach so far with copper without retiming the signal. And optics are expensive. 10 GbE has the same problem and it won't be cheap any time soon.
      3) InfiniBand has already reached a volume whe
      • Boy, you sure have a foul mouth. I suggest washing it out with soap.

        My comments stand. Start posting prices and let's see how your idea of an open standard stacks up against reality. Oh yah, and remember: for every $1000 you spend on your interconnect, that's $1000 less you have to spend on CPUs and programmers with a clue.

        The reality is that there is only one *correct* way to do a fast interconnect, and that is to build it into the CPU itself. Oh wait, AMD intends to do just that! That's what I'm waiting fo
      • I think you have no clue whom you were talking to.

        Open Standard says nothing about price.
        IB HBA's might be cheap, but the switching fabric sure as fuck aint.

        As for cache coherency, you were addressing the man trying to change the cache coherency game. Watch out, skinny.

        Lastly, there are some proprietary gigabit technologies (non IP based) that, while not 2.7 usec, are very close. Numerous MPI implementations are written with these technologies, although many also require hardware.

        I dont think anyone is wr
  • Note: I do have a dog in this fight.
    One thing that isn't mentioned in the article is the amount of CPU power required to send out Ethernet packets. The typical rule is that 1 GHz of processing power is required to send 1 Gbps of data on the wire. So, if you want to send 10 Gbps of data, you'd need 10 GHz of processor - a pretty steep price. Some companies have managed to get this down to 1 GHz per 3 Gbps, and one startup (NetEffect) is now claiming roughly ~0.1 GHz for ~8 Gbps on the wire, using iWARP. With t
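    Taking that rule of thumb at face value (it's folklore, not a measurement), the arithmetic behind those numbers is just:

        line_rate_gbps = 10           # target wire speed from the comment above
        ghz_per_gbps_classic = 1.0    # the old "1 GHz per 1 Gbps" rule
        ghz_per_gbps_offload = 1/3    # the "1 GHz per 3 Gbps" figure quoted above

        print(f"classic stack: {line_rate_gbps * ghz_per_gbps_classic:.1f} GHz of CPU")
        print(f"with offload:  {line_rate_gbps * ghz_per_gbps_offload:.1f} GHz of CPU")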
  • by Shabazz Rabbinowitz ( 103670 ) on Thursday May 25, 2006 @05:37PM (#15405394)
    I had recently considered using this Tolkien ring until I found out that deinstallation is very difficult. Something about having to take it to a smelter.
  • Well, except for the oblig. s/ms/us, but pretty much, yeah. With PathScale (now QLogic) InfiniPath HTX cards, you can get 1.5 us latency between nodes; Myrinet 10G PCI-E can get about 2.5 us. Note that there is now 10Gb Ethernet making inroads to compete in terms of throughput (which InfiniBand SDR and Myrinet roughly match at 10 Gbps), but latency is of course still problematic. One chief advantage of non-Ethernet is that those networks are source routed and every node has a full topology map of how to get to their desti
  • by bill_kress ( 99356 ) on Thursday May 25, 2006 @06:21PM (#15405696)
    Most (all?) Ethernet hardware reads in an entire packet, looks at it, then sends it on to a destination. This makes building routers and switching hardware fairly easy but extremely slow.

    If you go to a high-speed network, what you get is a packet being forwarded as it's being read. By the time the first few bits are through the switch, it should be able to figure out the next hop and have the packet moving in that direction. Phone companies have huge problems with the delays in Ethernet. This is why the ATM protocol was invented; it's hard to use, awkward, and not too graceful, but it can fly through a switching network like nobody's business.

    Ethernet is also extremely sloppy--Any switch along the way is allowed to throw a packet away and wait for the originator to resend, causing a HUGE hiccup in the communication stream (most if not all routers do this whenever an address is not in its forwarding table yet).

    IIRC the faster protocols see a "Routing" packet in the stream and set up forwarding hardware before getting the actual packet/stream, then wait until the end of the packet (or entire stream) to tear the route down again.

    Ethernet, however, due to its simplicity, is bridging the gaps. It's a pretty crappy protocol in general, but we keep throwing better, smarter hardware at it in an effort to brute-force it into the parameters we require. (I work for a company that makes Ethernet-over-fiber hardware, and have worked for companies based around ATM, SONET and other interesting solutions.)

    I guess the point of the article was to remind a world that is coming to believe that ethernet is the end-all be-all of networking that it was always just the simplest hack available and therefore the easiest to deal with.

    Just like SNMP.
    • Most (all?) Ethernet hardware reads in an entire packet, looks at it, then sends it on to a destination. This makes building routers and switching hardware fairly easy but extremely slow.

      First, Ethernet doesn't forward packets. It forwards frames.
      Most (all?) ethernet switches read just the destination MAC and start forwarding it, just as you've described in the next paragraph. If it can't, because there's no bridge table entry for the destination, it floods the frame.

      Ethernet is also extremely sloppy--Any s
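      A toy model of the learn-and-flood behaviour described above, in Python (illustrative only, not how any real switch is implemented):

          mac_table = {}  # MAC address -> port it was last seen on

          def handle_frame(in_port, src_mac, dst_mac, ports):
              mac_table[src_mac] = in_port               # learn where the sender lives
              if dst_mac in mac_table:
                  return [mac_table[dst_mac]]            # known destination: forward out one port
              return [p for p in ports if p != in_port]  # unknown destination: flood

          ports = [1, 2, 3, 4]
          print(handle_frame(1, "aa:aa", "bb:bb", ports))  # first frame from A is flooded -> [2, 3, 4]
          print(handle_frame(2, "bb:bb", "aa:aa", ports))  # reply to A uses the learned entry -> [1]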

      • He's talking about virtual circuits. Before ATM cells (frames) can be sent to their destination, a connection has to be opened to that destination. Each router in the path from the starting point to the end point has to be able to guarantee the level of service the connection needs (Wikipedia explains [wikipedia.org]). So each router knows exactly where each incoming cell is going. This speeds things up quite a bit, but the problem is that the connection is reserved whether it is used or not. Connections can allocate
  • But on our network we vlan'd everything out. All servers on one vlan, I.T. on another vlan, and then major groups on their own vlans. Keeps traffic nice and segregated which is why the I.T. shop has iTunes sharing turned on full blast.

    But here's where I notice some performance. We've got all the servers on a gigabit vlan. I can shift a 300MB file between servers in under 20 seconds. Transitioning a 5MB link takes five minutes.

    So we did what we could to eliminate latency and we see it in the performanc
  • The funny thing is that there is already a solution to their problem out there. Raptor Networks Technologies [raptor-networks.com] already has their ethernet switches in a bunch of places and have (so far) proven that their distributed network technology runs circles around Cisco's (and others) centralized architecture and costs even less. They could probably keep up with the needs of these data centers. I've spoken to guys who use their hardware and they all say 'wow.' This sounds like a perfect network for Raptor's hardware. A
