
Ethernet, the Occasional Outsider

Zonk posted more than 8 years ago | from the popular-kid-gets-snubbed dept.


coondoggie writes to mention an article at NetworkWorld about the outsider status of Ethernet in some high-speed data centers. From the article: "The latency of store-and-forward Ethernet technology is imperceptible for most LAN users -- in the low 100-millisec range. But in data centers, where CPUs may be sharing data in memory across different connected machines, the smallest hiccups can fail a process or botch data results. 'When you get into application-layer clustering, milliseconds of latency can have an impact on performance,' Garrison says. This forced many data center network designers to look beyond Ethernet for connectivity options."


169 comments

Long Live! (5, Funny)

Anonymous Coward | more than 8 years ago | (#15404153)

Long Live the Token Ring!

One Ring to rule them all

Re:Long Live! (1)

MImeKillEr (445828) | more than 8 years ago | (#15404177)

Mod parent up as funny!

(I was actually going to post something similar, but this one beat me to the punch).

Do they even make Token Ring anymore? I know the MAUs were hella-expensive.

Re:Long Live! (1)

silas_moeckel (234313) | more than 8 years ago | (#15404275)

Posted from a token ring connected computer. Yes, they still make it, and you can still tow a car with a station cable (until you get to the pigtail that goes into the laptop). It's still slow as well.

Re:Long Live! (2, Interesting)

EnderWiggnz (39214) | more than 8 years ago | (#15404298)

yes, people (mostly the government) do have token ring setups.

the funniest part is that i've done work for naval ships that required 10Base2. You know... CheaperNet!

Didn't RTFA? -Infiniband, FC and Myrinet beat Eth0 (4, Interesting)

hguorbray (967940) | more than 8 years ago | (#15404322)

Actually, even with Gigabit Ethernet available, HPTC and other network-intensive data center operations have moved to Fibre Channel and things like:

Infiniband http://en.wikipedia.org/wiki/Infiniband [wikipedia.org]

and Myrinet http://en.wikipedia.org/wiki/Myrinet [wikipedia.org]

http://h20311.www2.hp.com/HPC/cache/276360-0-0-0-121.html [hp.com]
HP HPTC site

-What's the speed of dark?

Re:Didn't RTFA? -Infiniband, FC and Myrinet beat E (1)

Anarke_Incarnate (733529) | more than 8 years ago | (#15404657)

I believe you mean just Fiber cable, as "Fibre Channel" is an interconnect for storage, etc.

http://en.wikipedia.org/wiki/Fibre_Channel [wikipedia.org]

Re:Long Live! (0, Offtopic)

Itninja (937614) | more than 8 years ago | (#15404179)

I lost my token ring. Now my Interweb is broken.

Re:Long Live! (1)

Jon Luckey (7563) | more than 8 years ago | (#15404414)

Don't you mean that you lost the token, so the ring does not work anymore?

(From an old Dilbert comic IIRC)

More Dilbert (1)

sconeu (64226) | more than 8 years ago | (#15404665)

No, your token is lost in the Ethernet

Re:Long Live! (4, Funny)

MrSquirrel (976630) | more than 8 years ago | (#15404502)

I saw students bring in computers with token-ring cards when I worked at a University Helpdesk. They would come in and say "My computer's broken, I plugged 'the internet' in but it won't connect" (we would troubleshoot over the phone and they would want us to come up to their room; after much repeating of our policies they would cave and bring it down, because they wanted to download their pr0n). I was baffled when it would turn out to be a token-ring card... I was like "Where the HELL did they get this?". I'm convinced it's part of the worldwide conspiracy to drive me insane.

Re:Long Live! (1)

default luser (529332) | more than 8 years ago | (#15405369)

Well, my college was connected with token ring up until 2001, when they did a complete network overhaul. Maybe you got one of our transfer students :D

Apparently, my college got a great deal on token ring from IBM in the early 90s, and at the time it was plenty fast. But by the mid 90s, it was showing its age, with no upgrade path. Back when my college still had no clue how to manage their network (read: 1997, pre-Napster), it consisted of a "turbo" (16Mbit) token ring backbone with various 4Mbit and 16Mbit rings. The bridge to the internet (single T1) was 10Base-T.

Since token ring cards were really fucking expensive, the college "loaned" out token ring cards to all students. Students could either shell out $300 for a token ring card and do it themselves, or drop their computers off for a few days and get a "loaner" installed. I say "loaner" because when I graduated in 2001, and the entire network was upgraded, the school sure as hell didn't want these old token ring cards back.

Then the students discovered computers, and then discovered Napster and IM. Within a year, the college upgraded to dual T1s, then a fractional T3, and finally got off their ass and designed a better network. Gigabit optical backbones, 100Base-T in the rooms, upgradable to gigabit ethernet. Too bad I wasn't around to enjoy it...

Re:Long Live! (2)

myth24601 (893486) | more than 8 years ago | (#15404815)

ARCNET is the tank of network protocols. I was once working on an ARCNET system and I tripped over the cable and yanked it out of the wall. Would you believe the token jumped out of the cable, ran across the floor and jumped into the wall?

Nothing stops ARCNET!

Overlords (0, Offtopic)

Zondar (32904) | more than 8 years ago | (#15404169)

I, for one, welcome our new non-ethernet overlords.

My idea: a vat of salt water & CAT5 (5, Funny)

Anonymous Coward | more than 8 years ago | (#15404181)

In our Data Center, we have a great big vat of steaming salt water and we drop one end of the cat5 cables from each server into the vat....those packets that can't figure out where they're going just drop to the bottom and die ...we have to drain this packet-goo out once a month. (but we do recycle it...we press it into CDs and sell them on Ebay)

(Seriously, haven't people heard of cut-through switches, which just look at the first part of the header and switch based on that... store-and-forward switches are so "1990s")

TDz.

Re:My idea: a vat of salt water & CAT5 (1)

Amouth (879122) | more than 8 years ago | (#15404281)

that was my thought exactly (not the salt vat.. although i like it)

we have a small office ~20 computers and 3 servers.. and i refuse to buy switches that can't do cut-through.. store and forward is slow.. and very memory intensive for switches on high speed networks..

Store & Forward ONLY for 10 to 100 to 1,000. (3, Informative)

khasim (1285) | more than 8 years ago | (#15404350)

There are only TWO reasons to use Store & Forward.

#1. You're running different speeds on the same switch (why?).

#2. You really want to cut down on broadcast storms (just fix the real problem, okay?)

Other than that, go for the speed! Full duplex!

Re:Store & Forward ONLY for 10 to 100 to 1,000 (1)

drinkypoo (153816) | more than 8 years ago | (#15404567)

People run different speeds on the same switch all the time, and for not necessarily poor reasons: If you have an SMB (in this case, that's small or medium business) with maybe one big fileserver, you don't need to run gigabit to everyone... You can run 100Mbps to the clients, and run gig to the switch only. Of course, since just about everything but laptops is coming with gig now (and probably some of them), this is becoming less valuable.

Re:Store & Forward ONLY for 10 to 100 to 1,000 (1)

multipartmixed (163409) | more than 8 years ago | (#15404578)

There's plenty of hardware out there that doesn't come gig-e equipped. Hell, I still deploy RS232 terminal concentrators at 10 megs now and then.

Re:Store & Forward ONLY for 10 to 100 to 1,000 (1)

drinkypoo (153816) | more than 8 years ago | (#15404608)

While that's true, most of the time those kinds of devices would be happiest on their own subnet for security and management reasons - or at least, I'd be happiest with them there. Therefore they can live on different router interfaces, whether the router's from cisco, or a PC from fry's with linux on it. The only time it's really necessary to mix speeds on the same switch is when you have multiple clients accessing a resource and their aggregate speeds make it useful.

For performance, run the same speed. (4, Interesting)

khasim (1285) | more than 8 years ago | (#15404805)

People run different speeds on the same switch all the time, and for not necessarily poor reasons: If you have a SMB (in this case, that's small or medium business) with maybe one big fileserver, you don't need to run gigabit to everyone...
What's with the "need to"?

I'm talking performance. Store & Forward hammers your performance. In my experience, you get better performance when you run the server at 100Mb full duplex (along with all the workstations) and use Cut Through than if you have the server on a Gb port, but run Store & Forward to your 100Mb workstations.

Re:For performance, run the same speed. (0)

Anonymous Coward | more than 8 years ago | (#15405465)

Which is true for 1 or 2 clients at a time, but not when you have more data streams. Let's see you get an aggregate bandwidth of 465 Mbps (actual performance on my network) to 50 clients through your 100 Mbps connection.

Re:My idea: a vat of salt water & CAT5 (1)

barawn (25691) | more than 8 years ago | (#15404643)

(Seriously, haven't people heard cut-through switches which just look at the first part of the header and switch based on that... store-and-forward switches are so "1990s")

Even still - low 100 ms for store-and-forward ethernet switches? That seems really, really high. I would've said more like single milliseconds, which is still high, but it isn't 100 ms.

I know from experience that I've used store-and-forward ethernet switches with much, much better latency than 100 ms.

Re:My idea: a vat of salt water & CAT5 (1)

rekoil (168689) | more than 8 years ago | (#15404876)

If you RTFA you'll see that it was a typo - it's microseconds, not milliseconds. You can ping from New York to Seattle in less than 100 milliseconds if you're on a decent pipe.

Re:My idea: a vat of salt water & CAT5 (1)

barawn (25691) | more than 8 years ago | (#15404901)

It says 100 milliseconds in the article.

I don't doubt that it's wrong (like I said, I know from experience that it's of order 1 ms, not 100 ms) but the article is the one that's wrong, not the story summary.

Re:My idea: a vat of salt water & CAT5 (1)

rekoil (168689) | more than 8 years ago | (#15404955)

We're both right - there's a reference to microseconds in the first page, but the 100-millisecond figure shows up later on. Mea culpa.

30 GB? Take that NSA and your outdated 622MB! (2, Interesting)

Marxist Hacker 42 (638312) | more than 8 years ago | (#15404182)

The NSA's network sniffer, recently discovered at an AT&T broadband center, can only sniff up to 622MB [slashdot.org] . Sounds to me like if you use an InfiniBand switch, that would effectively make the output of the NSA's network sniffers complete gibberish.

Re:30 GB? Take that NSA and your outdated 622MB! (0)

Anonymous Coward | more than 8 years ago | (#15404326)

622MB


Nice troll.

Re:30 GB? Take that NSA and your outdated 622MB! (1)

drinkypoo (153816) | more than 8 years ago | (#15404533)

MB is a measurement of data; in this case 10^6 bytes. (MiB would be 2^20.) I think you want a measurement of data, such as perhaps MBps. Too bad your comment is a) wrong and b) wrong. Specifically, it's a) just plain wrong and b) fails to take clusters into account. Were you trying to get fp or something? Anyway "this equipment was the Narus ST-6400, a machine that was capable of monitoring over 622 Mbits/second in real time in May, 2000... The latest generation is called NarusInsight, capable of monitoring 10 billion bits of data per second" - how do you know they're not using the current version today?

Re:30 GB? Take that NSA and your outdated 622MB! (1)

Captain Splendid (673276) | more than 8 years ago | (#15404648)

Dude, you've been really bitchy lately, what's up with that?

Re:30 GB? Take that NSA and your outdated 622MB! (1)

drinkypoo (153816) | more than 8 years ago | (#15404791)

Probably just trying to release job-related stress. I've been missing my primary outlet since I damaged my race-suspension '89 Nissan 240SX and went back to driving my sloppy-suspension '81 M-B 300SD. This is a lot safer, anyway...

Re:30 GB? Take that NSA and your outdated 622MB! (1)

Marxist Hacker 42 (638312) | more than 8 years ago | (#15404875)

Were you trying to get fp or something?

Yes, and you're right, I should have said Mbps and Gbps (30 Gigabit networking is going to create a packet flow that far outstrips the NSA's 622Megabit packet sniffing capability).

Re:30 GB? Take that NSA and your outdated 622MB! (0)

Anonymous Coward | more than 8 years ago | (#15404880)

"MB is a measurement of data; in this case 10^6 bytes. (MiB would be 2^10.) I think you want a measurement of data, such as perhaps MBps."
No offense, but, but I think what you wanted to say was "I think you want a measure of data transmission speed", such as perhaps MBps".

Generally, though, it is "Mbps" for "Megabits per second", and not "MBps", which would be "Megabytes per second".

Re:30 GB? Take that NSA and your outdated 622MB! (0)

Anonymous Coward | more than 8 years ago | (#15404893)

The latest generation is called NarusInsight, capable of monitoring 10 billion bits of data per second

Good pedantry, but it's worth pointing out that InfiniBand runs at 30 Gbps, which is in fact faster than the 10 Gbps that you claim the NarusInsight can do.

Re:30 GB? Take that NSA and your outdated 622MB! (1)

drinkypoo (153816) | more than 8 years ago | (#15404990)

Well, it's more than one of them can do. Is there any reason they can't use three Insight boxes? Or maybe four, just to have some slack :) It might require additional hardware to split up the traffic, but...

100ms ethernet latency? (5, Informative)

victim (30647) | more than 8 years ago | (#15404183)

I don't think I need to read any more -- well, I did verify that the number really appears in the article.
This author does not understand the subject material.

(I suppose you could deliberately overload a switch enough to get this number, maybe, but that would be silly, and your switch would need 1.25 Mbytes of packet cache.)
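
For what it's worth, that 1.25 Mbyte figure is just the bandwidth-delay product. A quick back-of-the-envelope check in C (assuming a single 100 Mbit/s port; purely illustrative):

#include <stdio.h>

int main(void)
{
    const double rate_bps = 100e6; /* assume one 100 Mbit/s port */
    const double delay_s  = 0.1;   /* the article's claimed 100 ms latency */

    /* bandwidth-delay product: bits in flight, converted to bytes */
    double buffer_bytes = rate_bps * delay_s / 8.0;
    printf("buffer needed: %.2f Mbytes\n", buffer_bytes / 1e6); /* 1.25 */
    return 0;
}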

Re:100ms ethernet latency? (5, Informative)

merreborn (853723) | more than 8 years ago | (#15404228)

Looks like the author fucked up the definition of millisecond too:

"By comparison, latency in standard Ethernet gear is measured in milliseconds, or one-millionth of a second, rather than nanoseconds, which are one-billionth of a second"

http://www.google.com/search?hl=en&q=define%3Amillisecond&btnG=Google+Search [google.com]
"One thousandth of a second"

Seriously. How the fuck does this idiot get published?

Re:100ms ethernet latency? (1)

dextromulous (627459) | more than 8 years ago | (#15404412)

A better question might be: how bad is the editor if this wasn't noticed?

Re:100ms ethernet latency? (1)

Sirfrummel (873953) | more than 8 years ago | (#15404684)

Alright... I looked it up,

millisecond = 1/1,000
microsecond = 1/1,000,000
nanosecond = 1/1,000,000,000

Re:100ms ethernet latency? (0)

Anonymous Coward | more than 8 years ago | (#15404870)

milli = 10^-3. I've always wondered why English has words with (I suppose) Saxon roots (thousand) but uses Latin prefixes (milli) for adjectives and other parts of speech.... Like "moon" --> "lunar". It's the only thing I like in my otherwise hard-as-hell-to-speak mother language (Italian, for the record). It helps avoid this kind of disaster...

Re:100ms ethernet latency? (0)

Anonymous Coward | more than 8 years ago | (#15404993)

What the hell is going on in schools these days? Those are standard SI prefixes. (If you have to look them up, that's what they're called, and shame on your teachers.)

Re:100ms ethernet latency? (0)

Anonymous Coward | more than 8 years ago | (#15405146)

In the words of Emily Litella [wikipedia.org] ...

Oh, that's quite different...
Never mind!

Re:100ms ethernet latency? (1)

Solra Bizna (716281) | more than 8 years ago | (#15404258)

Yeah. Even on my needlessly complex network setup (we have three routers all routing to the same LAN, long story) I get latencies of about 1 millisecond going from a wireless client to the modem (client -> wireless AP -> wired router 1 -> wired router 2 -> modem)...

-:sigma.SB

Re:100ms ethernet latency? (1, Informative)

Phreakiture (547094) | more than 8 years ago | (#15404451)

This author does not understand the subject material.

I disagree. The author has simply misplaced his metric units. He used the word "milliseconds", where he should have used the word "microseconds". You can see an example of this where he refers to milliseconds as one millionth of a second, rather than the one thousandth that they actually are.

Re:100ms ethernet latency? (1)

drinkypoo (153816) | more than 8 years ago | (#15404582)

So what you're saying is that the author may understand the source material, but he's an idiot too stupid to proofread, or even worse, too stupid to catch such a mistake if he does proofread? I don't think that's much of an improvement.

Re:100ms ethernet latency? (1)

Phreakiture (547094) | more than 8 years ago | (#15404646)

So what you're saying is that the author may understand the source material, but he's an idiot too stupid to proofread

Yeah, pretty much.

Re:100ms ethernet latency? (1)

jelle (14827) | more than 8 years ago | (#15404720)

The problem is that it's uncertain that every mention of 'millisecond' was meant to be 'microsecond'.

For example, I can agree with "When you get into application-layer clustering, milliseconds of latency can have an impact on performance."

But s/milliseconds/microseconds in that, and you're talking about a significantly reduced number of applications that need that kind of response time.

Microseconds are short for most processes. For example, Linux task-switches 100, 250 or 1000 times, depending on the processor. That means the timeslices are 1us, 4us, or 10us. Applications already sit still, not doing anything for multiples of the timeslices, so latency in the same order of magnitude often won't be noticed much if at all.

Re:100ms ethernet latency? (0)

Anonymous Coward | more than 8 years ago | (#15405004)

Microseconds are short for most processes. For example, Linux task-switches 100, 250 or 1000 times, depending on the processor. That means the timeslices are 1us, 4us, or 10us. Applications already sit still, not doing anything for multiples of the timeslices, so latency in the same order of magnitude often won't be noticed much if at all.
Linux does task-switch either ~100, 250, or 1000 times a sec, which means timeslices are really 10 ms, 4 ms, or 1 ms.

Re:100ms ethernet latency? (1)

jthill (303417) | more than 8 years ago | (#15405125)

That s/b "1ms, 4ms or 10ms". But your ordinary boxes aren't the kind of systems the article is talking about.

I see NetworkWorld fixed the article.

Re:100ms ethernet latency? (1)

KenSeymour (81018) | more than 8 years ago | (#15405315)

Indeed. When I clicked on the article, they had fixed it to read as the author intended.

That's not his only mistake. (1)

Short Circuit (52384) | more than 8 years ago | (#15405340)

Such a setup requires extremely low latency, as the processors are pulling Linux operating system images over the InfiniBand links, instead of through a local hard drive. Also, processes shared in RAM among the Linux nodes all run through the Voltaire switch.

Loading bulk data over the network (as in BOOTP) calls for high bandwidth, not low latency. And it doesn't even require it; high bandwidth for BOOTP is a convenience. My 10Mb/s Ethernet hub could do it.

The author really is clueless...

Re:100ms ethernet latency? (1)

kurtvs (782100) | more than 8 years ago | (#15405032)

I wish I had longer to write this article, but I've got to leave in 10 minutes, so here's what I can do in the time allotted.

Ping time is a very coarse measurement of link latency and does NOT give you an absolute number worth anything, because it doesn't distinguish between server latency and link latency, and for pings, server latency is not consistent from one machine/device to another. Ping time is mostly useful when you have a previous measurement from the same machine to compare to, as a relative indicator of link speed, and even then it is still a coarse measurement. For many/most devices, responding to a ping is a very low priority task.

A better measure is to look at the latency for a file service request. Just about any PC or Mac made in the last 5 years will turn around a file read request in much less than a millisecond (1/1000 second), so the latency you measure is the latency of the link. Downside: you need to do this with a protocol analyzer. A router will often have a ping turnaround time of 20 milliseconds or greater, and this number varies with load, so ping time can be a (very coarse) measurement of router load, and then only when you have a reference from when the router was not loaded.

The latency of Ethernet itself is 9.6 microseconds (millionths of a second) at 10 Mbps and 0.96 microseconds at 100 Mbps, although that's a theoretical limit -- it's the time that an Ethernet sender has to observe no signal before sending, and it only applies to half duplex. There's no wait time in full duplex. The actual limit is how fast you can get the data through the system, across the bus and out the NIC. In most cases, this is less than a millisecond.

Token ring has many drawbacks in typical modern network situations. It was designed with a strong and predictable client-server relationship in mind, where you would attach to and use a limited number of servers for long periods of time. Its actual link latency is comparable to Ethernet's -- so minute as to be insignificant.

Gotta go, Kurt VanderSluis

PS, I just dashed this off, so if there's a typo, sorry, but I didn't have time to do the fine tooth comb thing
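
Those 9.6/0.96 microsecond figures are just the 96-bit-time Ethernet interframe gap scaled by link speed; a quick sanity check in C (a minimal sketch, nothing more):

#include <stdio.h>

int main(void)
{
    const double gap_bits = 96.0; /* IEEE 802.3 interframe gap, in bit times */
    const double rates_bps[] = { 10e6, 100e6, 1000e6 };
    const char *names[] = { "10 Mbit/s", "100 Mbit/s", "1 Gbit/s" };

    for (int i = 0; i < 3; i++)
        printf("%-10s: %.3f us\n", names[i], gap_bits / rates_bps[i] * 1e6);
    /* prints 9.600 us, 0.960 us, and 0.096 us respectively */
    return 0;
}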

Re:100ms ethernet latency? (1)

ajs (35943) | more than 8 years ago | (#15405136)

Problems also include the use of the term "store and forward Ethernet" (WTF does that mean?!) and the fact that Ethernet channel bonding has been around for about 10 years.

Re:100ms ethernet latency? (1)

ctr2sprt (574731) | more than 8 years ago | (#15405404)

Store and forward is when the switch reads in the entire packet before making a routing decision. Most protocols, including Ethernet and TCP/IP, send the target address very early in the frame precisely so that store and forward isn't necessary. Instead they use a strategy called cut-through switching, where they read just enough of the frame to determine where to send it and then send the remainder to the destination port as it arrives on the source port. Most home or small office switches use store and forward switching.

Or maybe you were being pedantic and quibbling about calling it store and forward ethernet instead of store and forward switching.
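
To put rough numbers on that difference: a store-and-forward hop pays at least the serialization time of the whole frame, while a cut-through hop pays only for the handful of header bytes needed to pick an output port. A minimal C sketch (frame/header sizes are standard Ethernet; everything else is illustrative):

#include <stdio.h>

/* time to clock `bytes` onto a link of `rate_bps`, in microseconds */
static double serialization_us(double bytes, double rate_bps)
{
    return bytes * 8.0 / rate_bps * 1e6;
}

int main(void)
{
    const double frame_bytes  = 1518.0; /* max standard Ethernet frame */
    const double header_bytes = 14.0;   /* dst MAC + src MAC + EtherType */

    const double rates[] = { 100e6, 1000e6 };
    for (int i = 0; i < 2; i++)
        printf("%5.0f Mbit/s: store-and-forward >= %6.2f us, cut-through >= %4.2f us per hop\n",
               rates[i] / 1e6,
               serialization_us(frame_bytes, rates[i]),
               serialization_us(header_bytes, rates[i]));
    /* ~121 us vs ~1.1 us at 100 Mbit/s; ~12 us vs ~0.1 us at gigabit.
     * Either way, nowhere near the article's "100 ms". */
    return 0;
}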

Low-cost options? (1)

sammy baby (14909) | more than 8 years ago | (#15404191)

Ultra-low latency networking is a minor interest of mine, but one I've never had the chance to really pursue. Can anyone familiar with the landscape recommend some low-cost options for experimenting with this stuff? Or maybe just let me down gently. "No, Sammy, there are no low-cost options. And there's no Santa Claus."

TCP/IP over ultra SCSI do it for ya? (1)

absinthminded64 (883630) | more than 8 years ago | (#15404384)

I've never used it, but I know it can be done with Linux, a couple of SCSI controllers and other nifty things. Don't know the speed or latency of it, though.

Re:Low-cost options? (2, Informative)

dlapine (131282) | more than 8 years ago | (#15404447)

Define low cost? Myrinet [myrinet.com] with less than 10 microsecond latency is normally considered to be the least expensive option. You can check their price lists, but an 8 port solution [myrinet.com] (with 8 HBAs) will set you back over $8k, not including the fiber.

For some people, that's cheap. If not, sorry.

Re:Low-cost options? (1)

XyborX (632875) | more than 8 years ago | (#15405430)

I recently discovered Xdmx, something that seems capable of splitting my desktop across multiple machines. Along with it, I was also thinking along the lines of some kind of cluster system, like OpenMosix. My goal at the moment is to be able to run Quake 3 on three computers (two of which are laptops = limited expansion possibilities), giving me wider peripheral vision, or at least to be able to move windows around between the displays. Unfortunately, I'm quite certain that at least OpenMosix would require more speed than 100Mbit for Quake 3, so I started pondering the same question: What cheap and fast network solutions exist?

So far, the best idea I can come up with is a mesh of those USB-USB link cables. If USB 2.0 is in the ~400Mbit range, it should be faster than 100Mbit, right? But then again, speed and latency aren't the same.. Comments are welcome :)

Not an Auspicious Start (5, Informative)

Anonymous Coward | more than 8 years ago | (#15404203)

From the article, three paragraphs in:
"(By comparison, latency in standard Ethernet gear is measured in milliseconds, or one-millionth of a second, rather than nanoseconds, which are one-billionth of a second)"

That would be one-thousandth, not millionth (aka microsecond). Not a good start...

Metric System Still Not Clear To Some (1, Funny)

Anonymous Coward | more than 8 years ago | (#15404436)

Maybe the author meant "imperial milliseconds"?

Re:Not an Auspicious Start (1)

Sycraft-fu (314770) | more than 8 years ago | (#15404459)

Well, that and "ethernet gear is measured in milliseconds"? That doesn't seem useful. If I run a traceroute, the time is listed as "1ms" for all the internal hops. There are 5 internal hops, all ethernet. In my experience, all modern ethernet gear adds less than a millisecond of latency. Traceroute programs only report milliseconds because that's a useful measure for Internet traffic, and anything under 1ms can be safely called "really fast" for normal work.

Seems to me you'd need to measure ethernet gear in microseconds to get a useful number.

When you get to many hops (5, Funny)

with_him (815684) | more than 8 years ago | (#15404209)

I just blame it on the ether-bunny.

Re:When you get to many hops (1)

Teun (17872) | more than 8 years ago | (#15404581)

Stop using those Duracells...

Re:When you get to many hops (1)

c0d3h4x0r (604141) | more than 8 years ago | (#15405069)

Is that the gay easter bunny?

Re:When you get to many hops (1)

doublem (118724) | more than 8 years ago | (#15405084)

Thank you for that incredibly bad pun.

I needed a laugh after the work day I've had.

Software design (2, Interesting)

nuggz (69912) | more than 8 years ago | (#15404213)

The original post makes some comments that
sharing memory ... the smallest hiccups can fail a process or botch data results.
Sounds like bad design, or a known design trade-off.
Quite reasonable on a slow link: until I know better, assume the data I have is correct; if it isn't, throw it out and start over. Not wildly different from branch prediction or other approaches to this type of problem.

'When you get into application-layer clustering, milliseconds of latency can have an impact on performance,'
Faster is faster, not really a shocking concept.

Re:Software design (0)

Anonymous Coward | more than 8 years ago | (#15404330)

If you're relying on timing for your application to work, it's the wrong design

Re:Software design (2, Funny)

Amouth (879122) | more than 8 years ago | (#15404353)

what it looks like to me is.. ok, so they set something up using normal 100/1000 ethernet, then realized something was slow, and found that if they use gbic 30gb ports things run faster... can someone please send them a cookie?

Did you mean "microseconds"? (3, Interesting)

pla (258480) | more than 8 years ago | (#15404218)

The latency of store-and-forward Ethernet technology is imperceptible for most LAN users -- in the low 100-millisec range.

I don't know what sort of switches you use, but on my home LAN, with two hops (including one over a wireless bridge) through only slightly-above-lowest-end D-Link hardware, I consistently get under 1ms.



When you get into application-layer clustering, milliseconds of latency can have an impact on performance

Again, I get less than 1ms, singular.



Now, I can appreciate that any latency slows down clustering, but the ranges given just don't make sense. Change that to "microseconds", and it would make more sense. But Ethernet can handle single-digit-ms latencies without breaking a sweat.

Re:Did you mean "microseconds"? (1)

Eideewt (603267) | more than 8 years ago | (#15404507)

I wonder if his messed up numbers come from his mistaken belief that a millisecond is three orders of magnitude smaller than it is.

Re:Did you mean "microseconds"? (2, Informative)

dlapine (131282) | more than 8 years ago | (#15404599)

Sure, for an 8 port switch, where all the computers have a direct connection. Consider the issues involved for a router with 128 machines all trying to cross-communicate. Or larger collections of computers that might need multiple sets of switches to span the entire system.

On a Force10 switch, with 2 nodes on the same blade:
tg-c844:~ # ping tg-c845
PING tg-c845.ncsa.teragrid.org (141.142.57.161) from 141.142.57.160 : 56(84) bytes of data.
64 bytes from tg-c845.ncsa.teragrid.org (141.142.57.161): icmp_seq=1 ttl=64 time=0.148 ms
64 bytes from tg-c845.ncsa.teragrid.org (141.142.57.161): icmp_seq=2 ttl=64 time=0.146 ms
64 bytes from tg-c845.ncsa.teragrid.org (141.142.57.161): icmp_seq=3 ttl=64 time=0.145 ms
64 bytes from tg-c845.ncsa.teragrid.org (141.142.57.161): icmp_seq=4 ttl=64 time=0.144 ms

The same nodes using a myrinet connection:
tg-c844:~ # ping tg-c845-myri0
PING tg-c845-myri0.ncsa.teragrid.org (172.22.57.161) from 172.22.57.160 : 56(84) bytes of data.
64 bytes from tg-c845-myri0.ncsa.teragrid.org (172.22.57.161): icmp_seq=1 ttl=64 time=0.051 ms
64 bytes from tg-c845-myri0.ncsa.teragrid.org (172.22.57.161): icmp_seq=2 ttl=64 time=0.044 ms
64 bytes from tg-c845-myri0.ncsa.teragrid.org (172.22.57.161): icmp_seq=3 ttl=64 time=0.044 ms
64 bytes from tg-c845-myri0.ncsa.teragrid.org (172.22.57.161): icmp_seq=4 ttl=64 time=0.043 ms

The latency gets below 10 usec with the use of special drivers; this is just using the 2.4 Linux TCP stack. What's even scarier about the Myrinet is that I can have all 900+ machines talking at the same time with no drop in latency -- we have that network spec'd for full bisection bandwidth. Try that with 900 nodes on a gige network, let alone 100baseT.

As was mentioned here earlier, ethernet is nice for networks that change. Once you have a significant number of machines attached, and the number of switches and routers gets past 1, ethernet loses its equivalence in latency.

Re:Did you mean "microseconds"? (1)

cgori (11130) | more than 8 years ago | (#15405317)

1) Neat stuff in your cluster.

2) A fair number of ethernet switches exist for ~500 nodes @ 1Gbps that will have predictable latency, like the force10 you are describing. 900 nodes would be tough, admittedly, at the moment. Also, I don't think you meant to say "router" -- you almost certainly are switching if it's all configured right.

3) Myrinet is very specialized and uses cut-through switching. Ethernet is a generalized protocol that can be used on a WAN, and is almost always store-and-forward. Store-and-forward scales better to distance, and under massive load. If your input bandwidth is able to oversubscribe your switch fabric in a cut-through switch, the performance will decline horribly, and the distribution of latencies becomes almost random. Store-and-forward will decline gradually and (usually) have monotonically increasing latency under load.

I duhno . . . (1)

JazzLad (935151) | more than 8 years ago | (#15404987)

100/1,000,000th sounds ok. That's what, 1/10th a millisecond?


;)

Milliseconds? (2, Funny)

rubmytummy (677080) | more than 8 years ago | (#15404220)

On my planet, a millisecond is a full thousandth of a second, not just one millionth.

Oh, well. People tell me I'm just slow.

U send me (-1, Offtopic)

Anonymous Coward | more than 8 years ago | (#15404224)

U send me some information on this ethrnt immediately plz. I'm doing ur outsourced job and nedd ur hlp.

My id is slash@spambob.com

Thx.

Bluetooth! (-1)

Anonymous Coward | more than 8 years ago | (#15404226)

I suggest Bluetooth. It would also eliminate all those wires. I still don't understand why these people won't hire me, I've got great ideas!

Re:Bluetooth! (1)

MrSquirrel (976630) | more than 8 years ago | (#15404357)

What about a co-axial bus connection? VAMPIRE TAPS!!! WEEEEEE. Come on, someone else has to remember those. It was like playing "Operation" except if you "touched the sides" and screwed the tap too far in, you broke the cable. Fun to the max!

sharing memory over ethernet? (0, Flamebait)

jm91509 (161085) | more than 8 years ago | (#15404232)

That just sounds daft. Given the bottleneck hard drives are for CPUs, it's no great shock that when you have to wait for your data over ethernet you're going to see problems.

Maybe I should RTFA...

Re:sharing memory over ethernet? (1)

AuMatar (183847) | more than 8 years ago | (#15404299)

This is a NUMA (non-uniform memory access) cluster. Basically a bunch of computers working together that occasionally need to access the same data. If the last process to need that data happens to be on another computer, it needs to be transferred. The trick to these clusters is writing software so that the need to transfer is minimal, and the same data set stays on the same processor, to the best of your ability.
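
To see why that trick matters, here's a toy average-access-time model in C (all numbers invented for illustration; real local/remote costs vary widely by hardware):

#include <stdio.h>

int main(void)
{
    const double t_local_ns  = 100.0;    /* hypothetical local RAM access */
    const double t_remote_ns = 100000.0; /* hypothetical fetch over the interconnect */

    const double remote_fraction[] = { 0.0, 0.001, 0.01, 0.1 };
    for (int i = 0; i < 4; i++) {
        double f = remote_fraction[i];
        double avg_ns = (1.0 - f) * t_local_ns + f * t_remote_ns;
        printf("remote fraction %.3f -> avg access %.0f ns\n", f, avg_ns);
    }
    /* With these numbers, even 1% remote traffic makes the average ~11x
     * worse -- which is why the software tries to keep each data set on
     * one processor. */
    return 0;
}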

Lesson: Use appropriate Tech. (1)

Daniel_Staal (609844) | more than 8 years ago | (#15404267)

Ethernet's strength is its flexibility, not its speed per se. It can handle changing network environments where hardware or software is added and removed continually, and you never know quite where the bandwidth is most needed. You just plug it all in, and ethernet does a decent job of negotiating who gets to use the bandwidth.

But it's never been a really high speed protocol. It's easy to beat, speed-wise, as long as you know what the network use looks like ahead of time.

Which of course is a killer for most general use, but for specialty use that's not so much of a problem.

Re:Lesson: Use appropriate Tech. (1, Insightful)

Anonymous Coward | more than 8 years ago | (#15404379)

I assume you mean "It's never been an efficient protocol." This is somewhat true, but relative to what, exactly? TDM? SONET? ATM?

If you did truly intend to state that it's not high speed, you're mistaken. 10 gigabit Ethernet is common, and modern hardware latencies are not significant. The only way you're going to exceed that would be with WDM (that's cheating) or with an OC-768 (good luck finding one outside of a research lab).

Re:Lesson: Use appropriate Tech. (1)

Daniel_Staal (609844) | more than 8 years ago | (#15404428)

You're right; the terminology there is weak. I mean that it usually can't get the full potential out of the underlying physical network that others can. 'High speed' is relative, of course, to whatever you are comparing it to.

Basically, at a given level of tech, you should be able to build a network that is faster than an ethernet network at that level of tech. But it will be more complicated to set up and maintain.

Store & Forward Unnecessary? (1)

ljc86 (921909) | more than 8 years ago | (#15404269)

The article's a bit lacking on details, but... Isn't store and forward unnecessary? It's definitely possible to get it down to a much lower latency than is stated in the article if you don't use it.

Re:Store & Forward Unnecessary? (1)

Anonymous Struct (660658) | more than 8 years ago | (#15404483)

I think the article just has a typo. 'Imperceptible' is definitely not how I'd describe 100ms latency on a switched LAN. It's also true that switches do not necessarily have to store and forward, and cut-through switching used to be a lot more popular. I believe it's probably less popular now because store and forward performance is more than adequate, and because it offers a handful of advantages (verify FCS before forwarding, for example).

Re:Store & Forward Unnecessary? (1)

Nefarious Wheel (628136) | more than 8 years ago | (#15405461)

Might be talking about queueing systems like MQSeries. I know it's a bit of a store-and-forward buffer; not terribly fast, but it talks to a lot of hosts & you'll see it a lot in retail supply chains.

Hard to credit any article with so little context, though.

Seriously folks, we engine-hearing types had better learn to write, because it's a fair call that the journalists don't understand engineering.

nonic (-1)

dmindless (973977) | more than 8 years ago | (#15404325)

When I ping the servers in my server room, every server replies, except her, she has no nic [nonic.org] .

Channel Bonding (0, Offtopic)

Perl-Pusher (555592) | more than 8 years ago | (#15404388)

I have a cluster of 45 dual Xeon processing nodes. Latencies average about 210 usec, the same as could be expected on any 100Mbps connection, but using channel bonding my bandwidth is double that of a single ethernet connection. I don't need anything faster; all our processes are wholly independent and don't need to do message passing.

Re:Channel Bonding (1)

booch (4157) | more than 8 years ago | (#15404832)

I wouldn't expect channel bonding to significantly improve latency. I'd be surprised if you got more than 10% improvement, unless you are bandwidth-limited.

Re:Channel Bonding (4, Funny)

kjs3 (601225) | more than 8 years ago | (#15404943)

So you have an environment with requirements totally unlike the ones described in the article and needing none of the solutions illustrated in the article. Hey...thanks for letting us know. Maybe the other million Slashdot users with environments irrelevant to the post can let us know what they have as well.

No kidding (4, Interesting)

ShakaUVM (157947) | more than 8 years ago | (#15404389)

Er, yeah. No kidding.

When I was writing applications at the San Diego Supercomputer Center, latency between nodes was the single greatest obstacle to getting your CPUs to run at their full capacity. A CPU waiting for its data is a useless CPU.

Generally speaking, clusters that want high performance use something like Myrinet instead of ethernet. It's like the difference between consumer, prosumer, and professional products that you see in, oh, every industry across the board.

As a side note, the way many parallel apps address the latency issue is by overlapping their communication and computation phases instead of keeping them discrete; this can greatly reduce the time a CPU sits idle.

The KeLP kernel does overlapping automatically for you if you want: http://www-cse.ucsd.edu/groups/hpcl/scg/kelp.html [ucsd.edu]
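
The overlap idea looks roughly like this in plain MPI (a hedged sketch, not KeLP itself; the function and buffer names here are made up):

#include <mpi.h>

/* One timestep: exchange boundary data with a neighbor while computing
 * on the interior points that need no remote data. */
void step(double *halo_in, double *halo_out, int n, int peer,
          double *interior, int m)
{
    MPI_Request reqs[2];

    /* start the communication phase without blocking... */
    MPI_Irecv(halo_in,  n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(halo_out, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ...and overlap it with work that needs no remote data */
    for (int i = 0; i < m; i++)
        interior[i] *= 0.5; /* stand-in for the real computation */

    /* only now pay for whatever network latency is left */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    /* boundary computation that depends on halo_in would go here */
}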

OK article, bad title (1)

Medievalist (16032) | more than 8 years ago | (#15404466)

The article's worth reading, if you're not already familiar with currently popular cluster interconnects, but the title of "Data center networks often exclude Ethernet" is totally bogus.

I guess "Some Tiny Percentage of Data Centers use Something Faster than Ethernet in addition to Ethernet" didn't fit on the page.

The real title, "Most Data Centers aren't stupid.. (1)

wsanders (114993) | more than 8 years ago | (#15405094)

..or cheap enough to use Ethernet for processor interconnect.

SGI had some kind of shared-memory-over-Ethernet protocol back in the day. Worked about as well as a steam-powered ornithopter. It was designed for customers too cheap, or too unconcerned about performance, to use anything better.

And I dabbled in OpenMP or whateveritwas back at a contract with just one such cheap customer, and they got what they paid for. Here's a nickel, kid.

Ethernet is Ethernet, and Infiniband et.al. is Infiniband et.al., dad-gummit.

Metric system bites U.S in ass again (0)

Anonymous Coward | more than 8 years ago | (#15404480)

I'm thinking that the reason the article got the idea that milli = millionth is because the US doesn't use the metric system.

All 7th graders in Canada know that micro means millionth, and milli = thousandth...hopefully the doctors and nurses in the US know the same thing.

Crashed any rockets lately?

Milliseconds? 100's of them? (1)

vidarlo (134906) | more than 8 years ago | (#15404526)

--- malin.vidarlo.net ping statistics ---
15 packets transmitted, 15 received, 0% packet loss, time 14003ms
rtt min/avg/max/mdev = 0.310/0.347/0.375/0.019 ms

That's 2 hops, over 100Mb ethernet with a cheapass switch (8 port unmanaged hp). Seems like he's got no grip on numbers...

Someone needs to look at their network... (1)

Arimus (198136) | more than 8 years ago | (#15404536)

Just had a quick ping to the beeb... via a wireless hop onto my ethernet network, two hops to my adsl router, then 6 hops around Nildram's network (ATM into their network then god knows, probably some form of gigabit ethernet) and a couple more hops to the bbc.

Average latency is around 20ms.

Now I know this isn't as plain as straight ethernet, but I'd have guessed that, if anything, the latency of ATM plus the changes from 802.11g to ethernet to ATM to ethernet to whatever would have made things worse.

So either someone is using cheap hardware or has misconfigured their network.

Apart from that, if I were running a cluster each machine would probably have two NICs depending on their use - one using gigabit ethernet to provide the internal network between nodes on the cluster and the other for external use. The external network would be as normal; for the internal network I'd ensure there were minimal routers/switches between the nodes and that any switches/routers were a) good quality and b) correctly configured.

Real-World Experience (1)

FooHentai (624583) | more than 8 years ago | (#15404550)

All this time I've been playing Quake over LAN and I thought my ping was about 5ms. Silly me, it's clearly in the range of 100ms, even worse than when I take it online!

Whoops...

Re:Real-World Experience (0)

Anonymous Coward | more than 8 years ago | (#15404753)

DDR Infiniband is a great way to push large amounts of throughput with low latency. ~20Gbps with 2.7us latency (not counting switch ASICs)... each switch hop being about .3us latency... You can use the IP-over-IB driver... latency goes up a little, and throughput goes down a little because of the overhead... but for the most part it's quite nice.

Infinipath is even lower latency and has more native IP support. Infinipath is only SDR right now, so ~10Gbps with sub-1us latency on the card, and the same switch hop latency additions.

The worst post! (3, Informative)

Anonymous Coward | more than 8 years ago | (#15404708)

I wonder what's happening to slashdot. That's as bad as technical news can get. Ethernet latency -- 100ms?? Typical Ethernet latencies are around a few hundred microseconds. Even the ping round-trip time from my machine to google.com is about 20ms.

$ ping google.com
PING google.com (64.233.167.99) 56(84) bytes of data.
64 bytes from 64.233.167.99: icmp_seq=1 ttl=241 time=20.1 ms
64 bytes from 64.233.167.99: icmp_seq=2 ttl=241 time=19.6 ms
64 bytes from 64.233.167.99: icmp_seq=3 ttl=241 time=19.5 ms

What a shame that such a post is on the front page of slashdot! Someone please s/milli/micro.

Slashdot summary wrong, actual article is better (3, Interesting)

m.dillon (147925) | more than 8 years ago | (#15404920)

The slashdot summary is wrong. If you read the actual article the author has it mostly correct except for one comment near the end.

Ethernet latency is about 100uS (microseconds) through a gigE switch, round-trip. A full-sized packet takes about 200uS, round-trip. Single-ended latency is about half of that.

There are proprietary technologies with much faster interconnects, such as the Infiniband technology described in the article. But the article also mentions the roadblock that a proprietary technology represents compared to a widely-vendored standard. The plain fact of the matter is that ethernet is so ridiculously cheap these days that it makes more sense to solve the latency issue in software, for example by designing a better cache coherency management model and better clustered applications, than it does with expensive proprietary hardware.

-Matt

Ethernet Problems, IB problems, etc (2, Interesting)

mrjimorg (557309) | more than 8 years ago | (#15405245)

Note: I do have a dog in this fight.
One thing that isn't mentioned in the article is the amount of CPU power required to send out ethernet packets. The typical rule is that 1 GHz of processing power is required to send 1 Gb/s of data on the wire. So if you want to send 10 Gb/s of data, you'd need 10 GHz of processor - a pretty steep price. Some companies have managed to get this down to 1 GHz per 3 Gb/s, and one startup (NetEffect) is now claiming roughly ~0.1 GHz for ~8 Gb/s on the wire, using iWarp. With this, your system can be processing information rather than creating packets.
The problem with Infiniband, Myrinet, etc. is that they require another card in your system (with associated heat problems, size issues, etc.), special switches and equipment, and new training for your staff on how to get it up and going. However, iWarp, which is based on TCP/IP, can use your standard DHCP, ping, tracert, ipconfig, etc. and allows a single card to be used for networking to the outside world (TCP/IP), clustering in the datacenter (iWarp), and storage (iSCSI). 1 card, no special new software widgets, 10 Gb speeds.
However, you can't go and buy an iWarp card from Fry's today. Then again, you can't buy an Infiniband or Myrinet card there either.

Tolkien ring (4, Funny)

Shabazz Rabbinowitz (103670) | more than 8 years ago | (#15405394)

I had recently considered using this Tolkien ring until I found out that deinstallation is very difficult. Something about having to take it to a smelter.