
10Gb Ethernet Alliance is Formed

Zonk posted about 6 years ago | from the ethernet-alliance-powers-activate dept.


Lucas123 writes "Nine storage and networking vendors have created a consortium to promote the use of 10GbE. The group views it as the future of a combined LAN/SAN infrastructure. They highlight the spec's ability to pool and virtualize server I/O, storage and network resources and to manage them together to reduce complexity. By combining block and file storage on one network, they say, you can cut costs by 50% and simplify IT administration. 'Compared to 4Gbit/sec Fibre Channel, a 10Gbit/sec Ethernet-based storage infrastructure can cut storage network costs by 30% to 75% and increases bandwidth by 2.5 times.'"


Math on /. (5, Funny)

Idiomatick (976696) | about 6 years ago | (#23106684)

i'm worried they had to say 4 * 2.5 = 10 on /.

Re:Math on /. (0)

Anonymous Coward | about 6 years ago | (#23106988)

I'm concerned that they did say it.

I don't know that much about the Fibre Channel spec. Ethernet was designed so it would work okay over crappy cable and unreliable links; it wasn't designed for throughput. It can only get 80% saturation. I'm going to guess that Fibre Channel is significantly better than that, so it's probably more like 2x.

Re:Math on /. (2, Insightful)

Shakrai (717556) | about 6 years ago | (#23107618)

It can only get 80% saturation

Do you have a citation for that? I've seen Ethernet networks with decent switches approach 95% of the rated capacity.
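
For reference, a rough back-of-the-envelope sketch of where a ~95% figure can come from (assuming standard 1500-byte frames, the usual 38 bytes of per-frame Ethernet overhead, and 40 bytes of IP/TCP headers):

    # Rough Ethernet goodput ceiling for full-size (non-jumbo) frames.
    # Per-frame overhead: preamble+SFD (8) + MAC header (14) + FCS (4) + interframe gap (12).
    OVERHEAD = 8 + 14 + 4 + 12          # 38 bytes per frame
    MTU = 1500
    wire = MTU + OVERHEAD               # 1538 bytes actually on the wire per frame

    link_efficiency = MTU / wire        # ~97.5% of line rate carries Ethernet payload
    tcp_goodput = (MTU - 40) / wire     # minus IP+TCP headers: ~94.9%

    print(f"payload efficiency: {link_efficiency:.1%}")
    print(f"TCP goodput ceiling: {tcp_goodput:.1%}")

Those figures are per direction on a full-duplex switched link; a shared CSMA/CD segment behaves much worse under load, as the following comments discuss.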

Re:Math on /. (1)

K. S. Kyosuke (729550) | about 6 years ago | (#23108422)

At the time when Ethernet was being designed for much-less-than-100% saturation, Bob Metcalfe knew much less about network switches than he does now... The equations for CSMA/CD just do not apply today...

Re:Math on /. (3, Informative)

rdebath (884132) | about 6 years ago | (#23108628)

Modern switched Ethernet (100Base-T or 1000Base-T) can run at 100%. With a switched medium all the links run full duplex, and packets for busy links are stored in memory as in a router. With a good switch, packets for non-busy links get 'wormholed' to the output before they have completely arrived.

Normally this means that modern LANs won't lose any packets; if your LAN is losing packets you have a hardware problem. Perhaps you have an unswitched hub somewhere or a seriously overloaded switch that's running out of memory. But even a low-spec switch that can't keep up with the wire speed shouldn't lose anything; it should just hold off the senders until it can deal with the data.

In fact, 10Base-2 (cheapernet) was the last Ethernet standard on which you couldn't avoid congestion collapse.

Re:Math on /. (1)

petermgreen (876956) | about 6 years ago | (#23109408)

Modern Ethernet networks do not use CSMA/CD; they use switches and point-to-point full-duplex links. As such, with good quality equipment they can get very close to 100% saturation.

Re:Math on /. (2, Informative)

gallwapa (909389) | about 6 years ago | (#23109678)

CSMA/CD still applies, except that on a switched architecture your collision domain is only a single port on the switch. Therefore any problem will lie between the switch and the device itself.

CSMA/CD is still important in modern Ethernet networks because some devices do not properly auto-negotiate. Some devices don't obey the RFCs for interpacket spacing in an effort to improve their throughput, which can wreak havoc on networks.

In many cases, if a link fails auto-negotiation it will default to half duplex, where CSMA/CD is of vital importance.

Re:Math on /. (1)

haifastudent (1267488) | about 6 years ago | (#23107058)

i'm worried they had to say 4 * 2.5 = 10 on /.
I'm worried that they'll make a super-high-speed network and forget about latency again. Consortium engineers seem to favor headline-making numbers over real-world-benefit numbers.

Fibre-channel has significantly lower overhead (1)

LukeCrawford (918758) | about 6 years ago | (#23109074)

than Ethernet alone. To even get the quoted ratio, the new Ethernet standard will have to be much more efficient than the old Ethernet standard. And if they use iSCSI, then they are running storage over IP over Ethernet, adding further overhead.

I work a lot with 1G Fibre Channel; it is worlds faster than 1G Ethernet for storage applications, just 'cause Fibre Channel was designed, first and foremost, to handle storage. Sure, you can run IP over it, but that's not what it's *for*. The problem is compounded by the fact that Ethernet has a bunch of legacy baggage. So obviously, given equal speed, fibre is going to beat Ethernet when it comes to storage.

The big advantage of using bog-standard Ethernet is scale. If everyone uses this connectivity method, it's gonna get cheap. Potentially you could have Ethernet that is a lot faster than Fibre Channel for the same price, making up for the overhead. The idea is sound, I think. Reality has yet to catch up, though.

Good, managed, name-brand 1G fibre-channel switches are almost free, and good (name-brand, managed) 1G Ethernet switches still cost real money. (The used market is what I'm familiar with for both.)

Fibre only? (5, Interesting)

masonc (125950) | about 6 years ago | (#23106738)

From their white paper,
"The draft standard for 10 Gigabit Ethernet is significantly different in some respects from earlier Ethernet standards, primarily
in that it will only function over optical fiber, and only operate in full-duplex mode"

There are vendors, such as Tyco Electronics' AMP NetConnect line, that offer 10G over copper. Has this been discarded in the standard revision?

Re:Fibre only? (3, Informative)

sjhwilkes (202568) | about 6 years ago | (#23106878)

Not to mention 10 gig CX4 - which uses the same copper cables as Infiniband, and works for up to 15M - enough for many situations. I've used it extensively for server to top of rack switch, then fiber from the top of rack switch to a central pair of switches. 15M is enough for interlinking racks too provided the environment's not huge.

Re:Fibre only? (0)

Anonymous Coward | about 6 years ago | (#23108322)

15 mega what?

Re:Fibre only? (2, Interesting)

gmack (197796) | about 6 years ago | (#23107014)

If that's true I'm going to be a tad pissed. I paid extra when I wired my apartment so I could be future-proof with Cat6 instead of the usual Cat5e.

Re:Fibre only? (0)

Anonymous Coward | about 6 years ago | (#23107266)

If that's true I'm going to be a tad pissed. I paid extra when I wired my apartment so I could be future-proof with Cat6 instead of the usual Cat5e.
They'll reimburse you the $10 bucks. Just send them a S.A.S.E.

People get pissed over absolutely nothing.

Re:Fibre only? (3, Informative)

mrmagos (783752) | about 6 years ago | (#23107314)

Don't worry: according to the task force for 10GBASE-T (IEEE 802.3an), Cat6 can support 10Gbit up to 55m. The proposed Cat6a will support it out to the usual 100m.

Re:Fibre only? (4, Insightful)

Belial6 (794905) | about 6 years ago | (#23107348)

Unfortunately, you made a fundamental but common mistake. You cannot future-proof your home by running any kind of cable. You should have run conduit. That is the only way to future-proof a home for data. When I renovated my last home, I ran conduit to every room. It was pretty cool in that I didn't run any data cables at all until the house was finished. When the house was done, I just pulled the phone, coax, and Ethernet lines to the rooms I wanted. If and when fiber, or a higher quality copper, is needed, it will just be a matter of taping the new cable to the end of the old and pulling it through.

Re:Fibre only? (1)

Anonymous Coward | about 6 years ago | (#23107790)

Sorry if this sounds like a dumb question, but how do you pull the first cable since there's nothing to attach it to? And if there is something, how do you pass it into the conduits so that it reaches the exit you need?

Re:Fibre only? (1)

JLester (9518) | about 6 years ago | (#23107848)

You can: place a string in the conduit as you glue them together, use a fish tape (thin, stiff metal or fiberglass wrapped on a reel) to push through the conduit to the opposite end, use a vacuum with a small spongy ball slightly smaller than the conduit, etc.

Re:Fibre only? (1)

jimicus (737525) | about 6 years ago | (#23107884)

Conduits join at junctions, and when you put the conduits in you run a piece of string with a weight on it through them. Use the string to draw the cable to the first junction box, tie the cable to another string going in another direction off the box and pull again. Lather rinse and repeat until complete.

Re:Fibre only? (1)

LukeCrawford (918758) | about 6 years ago | (#23107914)

You use a conduit snake or a push-pull rod. For short distances, you use a push-pull rod: a flexible fibreglass rod you can push through to the other side, attach the wire (or a string) to, and pull back through. For longer runs, you need a 'conduit snake', which is essentially a big roll of similar (though usually more flexible) material.

Both these tools operate on the principle that you can force this semi-rigid thing through the conduit to the next entry point by pushing it. Depending on how sharp the turns in the conduit are, it can be difficult, but it's a whole lot easier than doing it without conduit. (And if you have ever run 14ga Romex through conduit, running a conduit snake or Cat5 is pie by comparison.)

Re:Fibre only? (1)

afidel (530433) | about 6 years ago | (#23108002)

62.5-micron fiber is pretty damn future-proof. Considering they run terabits per second over it today, I don't think any home network is going to outgrow it during my lifetime =)

Re:Fibre only? (2, Informative)

segfaultcoredump (226031) | about 6 years ago | (#23108814)

I guess that is why we run 50/125 multimode everywhere. The 62.5 just didn't cut it anymore for higher-bandwidth applications :-)

Maybe you are thinking of 9-micron single-mode fiber?

Re:Fibre only? (1)

afidel (530433) | about 6 years ago | (#23109134)

MM is ok for short runs with a single wavelength but single mode is what's used for the truly high speed stuff =)

Re:Fibre only? (1)

hosecoat (877680) | about 6 years ago | (#23109636)

>62.5-micron fiber is pretty damn future-proof.

So you're saying that 62.5-micron fiber ought to be enough for anyone.

Re:Fibre only? (1)

peragrin (659227) | about 6 years ago | (#23108540)

When I build my home I am going to do just this. It doesn't make sense to put Cat5, phone line, or even cable TV coax inside a wall without being able to pull it back out later.

Electrical wiring is good for decades; communication tech changes dramatically every 5-10 years.

Re:Fibre only? (0)

Anonymous Coward | about 6 years ago | (#23109226)

Details? What kind of Conduit did you use? Plz!!!

Re:Fibre only? (1)

oolon (43347) | about 6 years ago | (#23107640)

Yes, you have wasted your money. Unfortunately Cat6 only supports 2.5Gbps, and no one produced equipment to work at that speed, because the wiring people came up with the standard before asking whether anyone wanted to build equipment for it; Cat5e is certified for 1Gbps use. The newer standards like 10Gbps Ethernet have been designed with buy-in from equipment manufacturers; copper 10G over Cat6e was probably what you needed. However, as 10Gbps (copper) Ethernet currently uses 45 watts a port... you probably would not want it for some time yet (next apartment?).

Re:Fibre only? (1)

afidel (530433) | about 6 years ago | (#23109606)

Wrong, Cat6 supports 10Gbit (10GBase-T) over reasonable distances and Cat6A supports it to 100m. If you have a small number of runs you should be able to run to 100m over Cat6 so long as you can physically separate the runs so there is no alien crosstalk (inter-cable crosstalk).

How about hard drive speeds (0)

Anonymous Coward | about 6 years ago | (#23106756)

Networking is getting faster in leaps and bounds, yet hard drives are still uber slow.

Re:How about hard drive speeds (1)

malinha (1273344) | about 6 years ago | (#23106820)

Networking is getting faster in leaps and bounds, yet hard drives are still uber slow.

But getting bigger and bigger

Re:How about hard drive speeds (2, Insightful)

cshake (736412) | about 6 years ago | (#23107076)

That seems to be the idea behind this spec - a network interface that is as fast as a drive interface on a local machine, which would allow for nearly transparent remote drives, or even striped and mirrored raid across multiple machines to make it really fast. It really would be nice to see that.

Re:How about hard drive speeds (1)

jimicus (737525) | about 6 years ago | (#23107904)

By the time you account for seek time and latency, we've practically got that already with gigabit ethernet unless you're already running a reasonable RAID.

Re:How about hard drive speeds (1)

Em Ellel (523581) | about 6 years ago | (#23107534)

Networking is getting faster in leaps and bounds, yet hard drives are still uber slow.
Yeah, but they are getting larger and cheaper. You can have drives in parallel (RAID), which will increase your throughput significantly. If you have $$ to spend on a 10Gig link, you are unlikely to be hooking up a single drive.

Re:How about hard drive speeds (1)

arivanov (12034) | about 6 years ago | (#23108804)

Exactly, and I am not particularly sure about the benefits of 10GE vs Fiberchannel or Infiniband. Ethernet tends to have higher latency than these two and latency is what makes people go for SAN instead of NAS in the first place.

speed (0)

Anonymous Coward | about 6 years ago | (#23106796)

I bet in 8 years 1GB/sec will be normal. I'm already downloading at 2.5MB/sec from my ISP roadrunner.

Awesome! (0)

Anonymous Coward | about 6 years ago | (#23106838)

Can't wait to wire my house with 10Gbit ethernet while my cable modem is lucky to get 10Mbit during off-peak hours.

Re:Awesome! (2, Insightful)

moderatorrater (1095745) | about 6 years ago | (#23107142)

You're not using your home network like you should be then. I often find myself transferring multiple gigabytes of information from one computer to another.

Re:Awesome! (-1)

Anonymous Coward | about 6 years ago | (#23107350)

You're not using your home network like you should be then. You shouldn't HAVE to transfer gigabytes from one computer to another.

Misleading Title. (2, Insightful)

Anonymous Coward | about 6 years ago | (#23106842)

The 10GEA [wikipedia.org] is not the same as the storage alliance mentioned in TFA.

Block storage? (2, Interesting)

mosel-saar-ruwer (732341) | about 6 years ago | (#23106844)


By combining block and file storage on one network, they say, you can cut costs by 50% and simplify IT administration.

What is "block" storage?

Re:Block storage? (4, Informative)

spun (1352) | about 6 years ago | (#23107104)

SAN is block storage, NAS is file storage. Simply put, if you send packets requesting blocks of data, like you would send over your local bus to your local hard drive, it is block storage. If you send packets requesting whole files, it is file storage.

Re:Block storage? (2, Informative)

Guy Harris (3803) | about 6 years ago | (#23107472)

SAN is block storage, NAS is file storage. Simply put, if you send packets requesting blocks of data, like you would send over your local bus to your local hard drive, it is block storage. If you send packets requesting whole files, it is file storage.

No. If you send packets requesting blocks of data on a region of disk space, without any indication of a file to which they belong, that's block storage. If you send packets opening (or otherwise getting a handle for) a file, packets to read particular regions from a file, packets to write particular regions to a file, packets to create, remove, rename files, etc. that's file storage.

Most of the file access protocols out there (NFS, SMB/CIFS, AFP, NCP, etc.) permit you to read or write particular regions of a file (they don't even have to be aligned on block boundaries); they don't require whole-file access. That's NAS, not SAN.

There are protocols used on SANs that mix file and block access, e.g. the protocols used by Quantum's StorNext, where create, delete, rename, open, etc. operations go to a metadata server and involve files, but reads and writes are done directly to the disk blocks in question over the SAN (you ask the metadata server for information to let you know which blocks on the SAN correspond to particular data within a file).
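
To make the distinction concrete, here is a minimal sketch from the host's point of view (the device path and mount point are hypothetical; assume a Linux box where a SAN LUN shows up as /dev/sdb and an NFS share is mounted at /mnt/nas):

    import os

    # Block storage (SAN): the host addresses raw device offsets. There is no
    # notion of a file; the target just serves numbered blocks, as a local disk would.
    fd = os.open("/dev/sdb", os.O_RDONLY)      # hypothetical iSCSI/FC LUN
    block = os.pread(fd, 4096, 128 * 4096)     # read 4 KiB at block offset 128
    os.close(fd)

    # File storage (NAS): the client names a file and a region within it; the
    # server (NFS, SMB/CIFS, ...) owns the filesystem layout and block mapping.
    with open("/mnt/nas/report.doc", "rb") as f:   # hypothetical NFS mount
        f.seek(1024)
        data = f.read(4096)                        # read 4 KiB from inside the file

Either way the same bytes cross the wire; the difference is whether the client or the storage target owns the filesystem metadata.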

Re:Block storage? (1)

spun (1352) | about 6 years ago | (#23107714)

Thanks for clarifying that. But I've never heard anyone refer to random access on a given file as 'block' storage.

Re:Block storage? (1)

DamageLabs (980310) | about 6 years ago | (#23107902)

I am sitting here just now contemplating whether to go iSCSI over Ethernet or NFS (over the same gigabit Ethernet) for a small VMware Server (too cheap for ESX) deployment.

My brain tells me that iSCSI should be faster and simpler (one less filesystem layer to translate), but it seems that NFS is simpler and not much slower; actually faster when sharing a datastore with multiple VMware physical servers.

Has anybody got experience with how Linux NFS deals with large (10-100GB) files being mounted as virtual file systems on the remote end? And which local filesystem would be ideal for sharing vmdk files over NFS?

Re:Block storage? (1)

QuantumRiff (120817) | about 6 years ago | (#23108722)

I don't know about VMWare server, but I do know that MS recommends Block storage over file storage for Virtual Server 2005. Ask VMWare.

Re:Block storage? (1)

DamageLabs (980310) | about 6 years ago | (#23109078)

Their answer would be simple: buy ESX, buy FC storage, buy buy...
I am trying to skip some of those buy steps for my small deployment.

Re:Block storage? (1)

QuantumRiff (120817) | about 6 years ago | (#23107254)

The previous reply was good, just wanted to expand. File access is literally grabbing a file over the network, like opening a Word document: it pulls the entire file over the network, then opens it.

Your hard drive is a block device. A SAN just uses some protocols to make the OS treat remote storage as a local disk (think of it as SCSI going over the network instead of a local cable, which is almost exactly what iSCSI is). You can format, defrag, etc. The OS does not know that the device isn't inside the box. Very useful for things like databases, because you just modify the blocks (think of them as the clusters/sectors on the hard drive that need to be changed).

Also, block storage is usually dedicated to a particular server, so you don't need to worry about "locking" a file that is open for changes, as you do with NFS, to ensure integrity.

Re:Block storage? (0)

Anonymous Coward | about 6 years ago | (#23108052)

May as well mention that shared storage accessed by servers using clustered filesystems is also block level. The open source products for such filesystems lag the proprietary ones in capability and performance.

SCI, Infiniband (1)

Colin Smith (2679) | about 6 years ago | (#23106908)

etc.

I can do this already. Up to 90 odd Gbit.

Ethernet will have to be cheap.

 

Re:SCI, Infiniband (2, Insightful)

afidel (530433) | about 6 years ago | (#23108104)

There are literally several orders of magnitude more ports of Ethernet sold per year than Fibre Channel, and about an order of magnitude more Fibre Channel than Infiniband. Most of the speakers at Storage Networking World last week think it's inevitable that Ethernet will take over storage; the ability to spread R&D out over that many ports is just too great an advantage for it not to win in the long run.

Re:SCI, Infiniband (1)

jd (1658) | about 6 years ago | (#23108292)

I thought the current limit on Infiniband was 12 channels in any given direction with 5 Gb/s per channel (and even then that only applies if you're using PCI 2.x with the upgraded bus speeds), giving you a peak of 60 Gb/s. Regardless, 60 Gb/s is still well over the 10 Gb/s of Ethernet. More to the point, latencies on Infiniband are around 2.5-8 us, whereas they can be 100 times as much over Ethernet. Kernel bypass is another factor. It exists for Ethernet, but it's rare, whereas it's standard for Infiniband. Remember, Linux has something like a 20 ms context switch time and that's low, so you really want to keep switching to and from the kernel to a minimum.

SCI is definitely an interesting technology - I've seen several presentations from Dolphinics - and it would seem to be ideal for something like a storage system. Not sure what the current limitations are.

There are certainly other networking technologies out there, and some of those may also compete in this market. Part of the problem, I think, is that there is a lot of scattered information out there and very little independent, organized collection and dissemination of it. You wouldn't need a consortium to promote 10 Gb Ethernet if it was already clear to people what 10 Gb Ethernet actually offered on a practical, day-to-day basis. Not ping-pong tests, not dodgy benchmarks, not marketspeak but actual practical information and some form of real-world guide on how to map user requirements onto what each network technology would realistically deliver.

Re:SCI, Infiniband (0)

Anonymous Coward | about 6 years ago | (#23108852)

Yes, and the problem with 10Gb Ethernet is that it is _not_ cheap... Still, many people seem to buy it, which is rather weird.

Infiniband cards and switches are dirt cheap in comparison, AND have much lower latency and higher bandwidth.

bandwidth = performance ? (4, Interesting)

magarity (164372) | about 6 years ago | (#23106930)

So how will TCP/IP networking at this speed measure up to dedicated storage devices like a SAN over Fibre Channel? I have to suspect it won't; existing iSCSI over 1Gb TCP/IP is a lot less than 1/4 of 4Gb fibre to a decent SAN. Sigh, I'm afraid even more of my databases will get hooked up to cheap iSCSI over this instead of SAN space that costs more dollars per capacity but delivers the speed when needed :( Reports coming up fast enough? Remember the planning phase when the iSCSI sales rep promised better performance per $ than SAN? It wasn't better overall performance, just better per $. There's a BIG difference.

Re:bandwidth = performance ? (1)

RingDev (879105) | about 6 years ago | (#23107066)

If the practical bandwidth difference between bleeding-edge SCSI and bleeding-edge Ethernet over fiber (between the physical storage of your data, the database controller, and the requester of the data) is the limiting factor in how fast your reports come up, then there is either a fundamental design issue, or your clients are sitting at their terminals with stopwatches counting the milliseconds of difference.

-Rick

Re:bandwidth = performance ? (2)

Znork (31774) | about 6 years ago | (#23107754)

existing iSCSI over 1Gb TCP/IP is a lot less than 1/4 of 4Gb

I'd have to wonder what kind of config you're running, then. I've gotten 90MB per second over $15 RTL8169 cards and a $70 D-Link gigabit switch, between consumer-grade PCs running ietd on Linux and a Linux iSCSI initiator. I have no doubt that 10Gb Ethernet will wipe the floor with FC.

Remember the planning phase when the iSCSI sales rep promised better performance per $ than SAN?

Remember the planning phase when the SAN vendor promised cheaper storage than disks in every server? I saw an article the other day about a SAN consultant who had helped companies cut storage costs by $75000 per terabyte. That's impressive for something that costs around $200...

Your database servers may have some requirements (particularly if, as is so often the case, the application developers are using the database the wrong way), but the vast majority of servers can share a SAN and NAS connection without a problem, even on 1Gb networks.

So get the expensive option for your databases and let them carry the whole cost of the expensive infrastructure. Maybe it turns out you'd be better off distributing the databases and putting them on cheaper hardware too. Consolidation and expensive hardware aren't an end in themselves (well, except for the ones actually selling the expensive hardware).

Re:bandwidth = performance ? (1)

oolon (43347) | about 6 years ago | (#23107900)

It also means that when the networking team makes a bad firewall change, it will not only prevent user access but also mess up the storage, requiring a lot more work to get your databases and filesystems running again, or at least a forced shutdown and reboot. Personally I would not want to share block storage on my public interface in any case: iSCSI is not designed as a highly secure standard (that would impact performance), and public interfaces generally have more sophisticated firewalling rules on the switch gear, which slows performance. It also means you need yet another person available for storage changes. It might seem sensible at the low end to share switches; however, the public switches need to route everywhere, storage is generally limited to within a building, and all the special cases need to be considered to avoid overloading things.

Re:bandwidth = performance ? (2)

QuantumRiff (120817) | about 6 years ago | (#23108846)

It is usually recommended to run a separate network for the storage traffic. It is possible to run storage over the same NIC that the server uses for other network traffic, but it is not recommended (though it is often used as a "failover"). This also helps when you turn on jumbo frames; some servers just don't work correctly otherwise, and a separate network makes it better.

However, the best advantage of iSCSI over FC is replication. How much extra infrastructure and technology do you need to replicate to a site 1000 miles away? Compare that with FC and its special software and hardware to do replication over TCP/IP.

Re:bandwidth = performance ? (1)

afidel (530433) | about 6 years ago | (#23108156)

10Gb ethernet is 20% more bandwidth than bleeding edge 8Gb FC and it's a fraction of the cost per port. For new installations it's a no-brainer. For places with both infrastructures it's most likely to be evaluated on a per box basis.

Re:bandwidth = performance ? (1)

towelie-ban (1234530) | about 6 years ago | (#23108450)

25% more bandwidth, not 20%. And to think, the original poster was worried that the article had to explain how math works.

Re:bandwidth = performance ? (1)

Sentry21 (8183) | about 6 years ago | (#23108260)

One of the big things to consider is whether or not you are using jumbo frames. Some people claim a minimal performance increase, but jumbo frames can significantly reduce transmission/reception overhead on a gigabit network when doing block data transfers, with frame sizes going from 1500 to 9000 bytes.

For a database server, it depends on your read/write patterns, but especially when doing large blocks of data, it can make a difference in both CPU use and throughput. Might be worth a look, but the NIC, switch, and iSCSI SAN all need to support it (which might not be the case).
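
As a rough illustration of the per-frame overhead argument (a sketch only, assuming 38 bytes of Ethernet framing plus 40 bytes of IP/TCP headers per frame and ignoring acks and retransmits):

    # Compare per-frame overhead for standard vs jumbo frames when moving a
    # fixed amount of block data. Assumes 38 bytes of Ethernet framing overhead
    # and 40 bytes of IP+TCP headers per frame.
    PER_FRAME_OVERHEAD = 38 + 40

    def frames_needed(payload_bytes, mtu):
        per_frame_payload = mtu - 40                    # TCP payload per frame
        return -(-payload_bytes // per_frame_payload)   # ceiling division

    transfer = 1_000_000_000                            # a 1 GB block transfer
    for mtu in (1500, 9000):
        n = frames_needed(transfer, mtu)
        print(f"MTU {mtu}: {n:,} frames, {n * PER_FRAME_OVERHEAD / 1e6:.1f} MB of header overhead")

Fewer frames means fewer interrupts and less header processing per byte moved, which is where the CPU savings come from.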

Re:bandwidth = performance ? (1)

rathaven (1253420) | about 6 years ago | (#23109428)

8Gb FC vs 10Gb iSCSI? Well, that's 2Gb of potential bandwidth more... TCP/IP is not really a limiting factor. In tests I've seen boxes demoed using IOMeter and the like, showing 10Gb iSCSI boxes with throughputs of 800MBps or higher, whilst 2 x 4Gb FC-connected SANs of a higher price were only pushing 600MBps. As someone else here says, the disks are slow. The limitation in any well-designed SAN is disk spindles and disk seek times, and unfortunately most iSCSI boxes use SATA disks, not SAS or Fibre Channel disks. SATA is big, has higher seek times, and has approximately 1/4 the throughput of SAS or FC disk. If your disk is decent, the RAID cards in the iSCSI boxes are up to the task (some are just plain cheap, if they aren't simply cheap servers running Windows Storage Server), and you have the network bandwidth on the iSCSI boxes, then you should look elsewhere for your performance limit, not blame iSCSI or TCP/IP in general.

Q. Are your switches configured for jumbo frames?
Q. Is your bandwidth to the SAN going through the same network cards as the normal network data?
Q. Is your servers' performance unable to load the iSCSI boxes?

Any of these may be the other factors...

Bonding for Unlimited Bandwidth (1)

Doc Ruby (173196) | about 6 years ago | (#23107006)

If these new fast ethernet specs came with specs for plugging multiple parallel paths between machines all under the same host IP#s, so we just add extra HW ports and cables between them to multiply our bandwidth, ethernet would take over from most other interconnect protocols.

Is there even a way to do this now with 1Gb-e, or even 100Mbps-e? So all I have to do is add daughtercards to an ethernet card, or multiple cards to a host bus, and let the kernel do the rest, with minimal extra latency?

Re:Bonding for Unlimited Bandwidth (1)

Feyr (449684) | about 6 years ago | (#23107310)

yes, look up "etherchannel" or "bonding"

Re:Bonding for Unlimited Bandwidth (1)

Phishcast (673016) | about 6 years ago | (#23107572)

Kind of. This method of aggregating bandwidth by using multiple links does poor man's load balancing. The traffic between one source and one target will only traverse a single path until that path fails. If you have a lot of different sources on one side of an etherchannel going to a lot of different targets on the other side of the etherchannel, you get a relatively balanced workload. If you've got a smaller number of sources and targets it's easy to get uneven bandwidth utilization across those links. This is more common in networked storage than in everyday IP land.

Re:Bonding for Unlimited Bandwidth (2, Funny)

Em Ellel (523581) | about 6 years ago | (#23107802)

yes, look up "etherchannel" or "bonding"
Wow, that takes me back years. A little over 10 years ago, straight out of college and not knowing any better I purchased the "cisco etherchannel interconnect" kit for their catalyst switches. I had to work hard to track down a cisco reseller that actually had it (which should have been a clue). When I finally got it, the entire "kit" contents were, I kid you not, two cross connect cat5 cables. I learned an important lesson about marketing that day.

-Em

P.S. In all fairness to Cisco the cost of the kit was about same as you would expect to pay for two cross connect cables in a retail store. Not that I would have bought cat5 cables at a store.

Re:Bonding for Unlimited Bandwidth (1)

SpacePirate20X6 (935718) | about 6 years ago | (#23107338)

The two gigabit ethernet plugs on the Mac Pro can do this; they can be combined to serve as one 2Gb connection.

Re:Bonding for Unlimited Bandwidth (1)

witherstaff (713820) | about 6 years ago | (#23107676)

I don't believe anyone has done this, as you'd need new switches; current ones use the MAC to route traffic. Potentially I could see this done between two machines with custom code, directly wired together, with no need for custom switches.

That is a neat idea: a real use for the extra 10/100 NICs that everyone has lying around, and a way to get speed increases without rewiring.

Re:Bonding for Unlimited Bandwidth (1)

Doc Ruby (173196) | about 6 years ago | (#23108022)

No, it looks like what I asked about is standardized as IEEE 802.3ad [slashdot.org] , and even implemented in the Linux kernel. But does it actually work?

Re:Bonding for Unlimited Bandwidth (2, Informative)

imbaczek (690596) | about 6 years ago | (#23107684)

802.3ad [wikipedia.org]

Re:Bonding for Unlimited Bandwidth (1)

Doc Ruby (173196) | about 6 years ago | (#23108090)

It looks like that's even supported in the Linux kernel [linux-ip.net] . But does it really work?

Yes and no... (1)

Junta (36770) | about 6 years ago | (#23108364)

Yes, it works to the extent reasonable/feasible.

No, it isn't a robust, scalable solution. To play nice with various standards and keep a given communication stream coherent, it uses transmit hashes that pick the port based on packet criteria. If it used criteria that would actually make the most level use of all member links, it would violate that stream-coherence requirement. I have seen all kinds of interesting behavior depending on the hashing algorithm employed. I have seen a place buy 500 servers from one vendor and run a 2-port aggregation to somewhere. The problem was that the vendor had put all even MAC addresses on the first Ethernet port, so one of the aggregated ports went virtually unused, because the hash was the MAC address mod the number of member ports. Since there were two ports, it was mod two: all evens went on one port, and only odds would have used the other.
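
A minimal sketch of that failure mode (an illustrative source-MAC hash for a two-port aggregate, not any particular vendor's algorithm):

    # Illustrative layer-2 transmit hash: pick the egress port from the low bits
    # of the source MAC. A fleet whose first NIC always carries an even MAC then
    # lands entirely on port 0, leaving port 1 idle.

    def egress_port(src_mac: str, num_ports: int = 2) -> int:
        last_octet = int(src_mac.split(":")[-1], 16)
        return last_octet % num_ports

    # 500 hypothetical servers whose vendor assigned only even MACs to eth0
    macs = [f"00:1a:2b:3c:4d:{(i * 2) % 256:02x}" for i in range(500)]
    usage = [0, 0]
    for mac in macs:
        usage[egress_port(mac)] += 1

    print(usage)   # prints [500, 0]: everything hashes to port 0

A hash that also mixes in IP addresses or TCP ports spreads flows more evenly, at the cost of a little more per-packet work.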

Re:Yes and no... (1)

Doc Ruby (173196) | about 6 years ago | (#23109000)

That sounds like a problem with some rare edge cases. Can't the admins test the deployment to ensure the traffic is maximally using the multiple channels in the actual installation configuration? Is it that complicated to test and reconfig until it works? Maybe with a mostly automated tool?

Cost? (1)

segfaultcoredump (226031) | about 6 years ago | (#23107044)

Hope they fix the pricing issue, because the FC network I just put in cost less per Gb than a 1 gig-E network. Compared to the cost per Gb of a 10G-E network, the entire thing cost less than the optics on the 10G stuff, let alone the actual switch costs.

I'm also noticing that most if not all of my systems never even tap the bandwidth available on a pair of 4Gb FC ports, let alone 10Gb. I'm sure there are folks out there who need it, but it ain't us.

Of course, our corporate standard is Cisco, so I'm sure that had something to do with the high cost of the Ethernet equipment ;-)

Re:Cost? (1)

Phishcast (673016) | about 6 years ago | (#23107634)

Didn't you buy Cisco Fibre Channel gear then? I don't recall their FC gear being inexpensive either!

The 10Gb ports aren't really about the hosts (today, anyhow). They're generally more useful for the connections to large storage arrays, which can push that kind of bandwidth; you'd be able to fan in more hosts to each storage array port.

Re:Cost? (1)

segfaultcoredump (226031) | about 6 years ago | (#23108286)

Our original SAN consisted of four MDS 9216's with the 32-port blades, for a total of 48 ports per switch. Two at each site (stretched fabric, 3k of dark fiber between the two sites, dual fabric, bla, bla, bla). Every server has two ports and thus full redundancy in the event of a switch/cable/whatever failure. (Ever try to bond two Ethernet ports between two switches?)

To replace the aging MDS switches we went with four Brocade 4900's. We have what I would call a medium-sized environment, so the larger Brocade switches did not make sense, but then again neither did the little things like the Brocade 200E. We have some hosts that will push 500+MB/s of traffic through our SAN, so trying to bond a ton of Ethernet ports together did not make sense (think of the cables). Of course, bonding to a single switch is easy; ever try bonding between two physically different switches for redundancy? (To save money, our 6509's only have one sup, so we need switch diversity.) Storage multipathing is easy with a dual-port 4Gb HBA. One could argue that a dual-port 10G-E card allows for failover while removing the need for bonding. Of course, guess which one is cheaper :-)

The nice things about the 4900's vs the Cisco MDS switches are as follows:
1) Smaller (2RU for 64 ports)
2) No oversubscription (64 ports, 4Gb any to any)
3) Less power. (Not even close)
4) Hitless firmware upgrades (cisco sorta has this with the MDS 9500, at 10x the cost)
5) Ooooh, shiny box :-)

At the same time our network folks were purchasing some 10G line cards for their Cat 6509's. Let's just say that the 8-port card with optics cost more than my 64-port 4900 with optics. They got 160Gb of bandwidth (assuming you never leave the card; once you do, it quickly falls to 40Gb); I got 512Gb.

Just for kicks I also looked at their costs for the 48-port gig-E line cards. Looking at just the cost of the Cisco line cards (6748's) and ignoring the cost of the chassis and supervisor, my cost per Gb was just a touch under theirs (and mine included the whole chassis). Of course, 256 ports of gig-E takes up a lot more space and more cables, and I still have the bonding issue for redundancy.

I'm sure some day we will see it all merge together. Witness the Magnum switch from Sun: http://www.sun.com/products/networking/datacenter/ds3456/ [sun.com]

Until then, I'm sticking with ethernet for IP and FC for storage.

Re:Cost? (1)

Phishcast (673016) | about 6 years ago | (#23109318)

It looks to me like everything on your list about the 4900s could be achieved using the stackable MDS 9134 switches. You get a 64 port switch in 2RUs, 4Gb line rate ports (no oversub), hitless firmware upgrades and less power than your old 9216s. There aren't two supervisors like in the director class MDS switches, but I suspect the same is true for your Brocade 4900s (I've never looked into them).

Interesting that you point out a Sun Infiniband switch as a possible option to "merge it all together". Cisco's idea of a unified datacenter fabric is based on Ethernet; see Nexus [cisco.com]. I dunno... Infiniband is certainly cool stuff, but could it ever overtake Ethernet in the datacenter?

By the way, it may sound like I work for Cisco. I don't. I do, however, manage large Cisco MDS Fibre Channel SANs.

Re:Cost? (1)

segfaultcoredump (226031) | about 6 years ago | (#23109744)

The 9134's look a lot like the QLogic 5600's. They both have the same issue: stacking two will oversubscribe the 10Gb FC interconnects. I know that there are ways to avoid this in theory, but luck always has it that I never have a port available where I need it for optimal performance. Thus things like the 4900 are so nice, in that it is next to impossible to screw up :-)

That said, I'm happy to see that Cisco finally got their power consumption under control with the 9134's. The older models sucked so much power that it was not even funny. (Like a lot of folks, I'm limited on three things in my datacenter right now: space, power and cooling.)

I did entertain quotes for a 9506 from the Cisco folks. But when I compared the physical size, power requirements and cost (2x as much for 64 blocking ports), it did not make sense in my environment. The 9509 was just way too big to put anywhere after our network team dropped in a pair of 6509's at each site and took most of the remaining power and space that we had. The Cisco folks never even offered the 9134 as an option.

As for the Nexus vs the Sun Magnum, Cisco's bread and butter is Ethernet, so that made sense. Sun just went for whatever gave them the best performance, and the fact that the HPC world was already used to Infiniband made sense for them. That said, Sun moves 110Tb/s in their switch vs Cisco's "System designed for investment protection with a 15 Tbps highly scalable fabric". OK, so the Cisco one is a bit smaller, but still, it's not that much smaller :-)

Re:Cost? (1)

evilbessie (873633) | about 6 years ago | (#23109416)

This is only really for the datacentre for the moment, from servers to SAN. So yes, if you have already paid for plenty of bandwidth using current tech then maybe you're not going to go this route, but companies that are upgrading may go for this because of the possibility of lower TCO. That, and 40/100GbE is on the way, so there is an upgrade path for the future.

May I propose... (0)

Anonymous Coward | about 6 years ago | (#23107416)

...the 20Gb per sec ethernet alliance, it has all the benefits of the 10Gb alliance, and more.

Surely there'll be no way, that anyone can possibly think of, that'll somehow be better than that.

Channel bonding (3, Informative)

h.ross.perot (1050420) | about 6 years ago | (#23107654)

Sigh. Aggregating 2 or more 1-gig adapters does not give you 2 gig of throughput. It is a sliding scale; the more links you add, the less of the total bandwidth you actually see. The safest bonding scheme uses LACP, the Link Aggregation Control Protocol, which communicates member state and load-balancing requests to the link members. 10G over copper will be a good thing for VMs. Sadly, the current crop of 10G-over-copper adapters does not even approach 5 gig of raw throughput. Give the industry time; this is just like the introduction of 1 gig after Fast Ethernet. It took two generations of ASICs to get to what we consider a gig card today.

look forward to the new standard (1)

v1 (525388) | about 6 years ago | (#23107716)

in other news, ISO starts the process of ratifying the new MS10G(tm) specifications.

Further blurring the distinction (1)

FranTaylor (164577) | about 6 years ago | (#23107972)

Between the network cable and the drive cable. USB subsumed many old-technology interconnects; perhaps 10 Gb Ethernet can replace SATA and continue the trend of decreasing the number of interfaces required on a computer.

"Tolken-Ring Empire" (0)

Anonymous Coward | about 6 years ago | (#23108038)

hehehe, that makes marginally more sense with an "l" crammed in there ;)

Will there ever be "enough" bandwidth to a home? (3, Interesting)

Dr. Spork (142693) | about 6 years ago | (#23108864)

Stories like this always make me think of the following:

I can't think of anyone who's longing to get a fatter gas pipe connected to their house, or a fatter pipe to municipal water, or a cable of higher capacity to bring in more electricity.

But we're not like that with bandwidth. We always seem to want a fatter pipe of bandwidth! Will it forever be like that? Is the household bandwidth consumption ever going to plateau, like electric, gas and water consumption has in the US? (I know that global demand for these utilities is growing, but that's mainly because there are more people and a larger proportion are being hooked up to municipal utilities. The per-household numbers are not really changing very much, and in some cases decreasing.)

Will there be a plateau in bandwidth demand? If so, when and at what level? Thoughts?

Re:Will there ever be "enough" bandwidth to a home (1)

RiotingPacifist (1228016) | about 6 years ago | (#23108994)

Yes, but this isn't for homes; this is for offices. No (normal person's) home has a 100Mbit internet connection. I think the limit for broadband will simply be when you can download a film in 5 minutes over BitTorrent, which basically depends on how fast the average connection is, not just yours.

Re:Will there ever be "enough" bandwidth to a home (1)

Dr. Spork (142693) | about 6 years ago | (#23109184)

Yeah, I got that, but the fact that this is feasible for offices now means that homes could use the same technology in the near future. There are some "normal" homes in Sweden that already get 100Mbit internet, which is enough for streaming HD, but not enough for many other things we will eventually come up with. So I was wondering whether there will ever be an "enough" level for the home. For a business like Google, "enough" might only be the sum of all user bandwidths in the world. But for users, will there be a plateau?

Re:Will there ever be "enough" bandwidth to a home (0)

Anonymous Coward | about 6 years ago | (#23109114)

If we in the US had to pay per MB, I'm sure it would plateau pretty darn quick.

you fail It (-1, Offtopic)

Anonymous Coward | about 6 years ago | (#23109236)

A PRODUCTIVITY is dying. Fact: