Fibre Channel Storage?

Cliff posted more than 8 years ago

Data Storage

Dave Robertson asks: "Fibre channel storage has been filtering down from the rarefied heights of big business and is now beginning to be a sensible option for smaller enterprises and institutions. An illuminating example of this is Apple's Xserve RAID, which has set a new low price point for this type of storage - with some compromises, naturally. Fibre channel switches and host bus adapters have also fallen in price, but generally, storage arrays such as those from Infortrend or EMC are still aimed at the medium- to high-end enterprise market and are priced accordingly. These units are expensive in part because they aim to have very high availability and are therefore well-engineered and provide dual redundant everything." This brings us to the question: is it possible to build your own Fibre Channel storage array? "In some alternative markets - education, for example - I see a need for server storage systems with very high transaction rates (I/Os per second) and the flexibility of FC, but without the need for very high availability and without the ability to pay enterprise prices. The Xserve RAID comes close to meeting the need, but its major design compromise is to use ATA drives, thus losing the high I/O rate of FC drives.

I'm considering building my own experimental fibre channel storage unit. Disks are available from Seagate, and SCA to FC T-card adapters are also available. A hardware raid controller would also be nice.

Before launching into the project, I'd like to cast the net out and solicit the experiences and advice of anyone who has tried this. It should be relatively easy to create a single-drive unit similar to the Apcon TestDrive or a JBOD, but a RAID array may be more difficult. The design goals are to achieve a high I/O rate (we'll use Postmark to measure this) in a fibre channel environment at the lowest possible price. We're prepared to compromise on availability and 'enterprise management features'. We'd like to use off-the-shelf components as far as possible.

Seagate has a good fibre channel primer, if you need to refresh your memory."


119 comments


why "build" your own array? (5, Interesting)

heliocentric (74613) | more than 8 years ago | (#14595190)

Sun's A5200s are cheap on eBay, and you can pick up something like a 420r or a 250 to drive the thing. Put a qfe card in with Sun Trunking (now free for Solaris 10) and it'll serve up your files super speedily, all for a very reasonable price.

My friend, recursive green, has three A5200s in his basement right now, one stores his *ahem* photo collection and is web accessible.

I think new(er) fibre gear is getting cheaper, but what was often high-end, data-center-only, big-$$ stuff a few years ago hits the "at home" price point now.

Re:why "build" your own array? (2, Funny)

Anonymous Coward | more than 8 years ago | (#14595297)

My friend, recursive green, has three A5200s in his basement right now, one stores his *ahem* photo collection and is web accessible.

That's going a long way to protect one's pr0n stash.

Re:why "build" your own array? (1, Funny)

Anonymous Coward | more than 8 years ago | (#14595313)

He stores his "home collection" on an A5200 [sun.com] ?!

Now that's what I call hardcore porn.

Re:why "build" your own array? (3, Interesting)

Big Jason (1556) | more than 8 years ago | (#14595390)

Plus you get a free Veritas license, at least on Solaris (sparc). Don't know if it works on x86.

Re:why "build" your own array? (1)

DaedalusHKX (660194) | more than 8 years ago | (#14595457)

Bah, I can solve that issue with an Amanda daemon and won't have to worry about "free" Veritas licenses. Especially since they (Veritas) were bought out by Symantec last year, and we've all seen what Symantec did to L0pht Heavy Industries. (Remember, they became @Stake and were bought by Symantec, which now SELLS L0phtCrack under the moniker LC5 (L0phtCrack version 5); but as I understand it, the earlier versions did more back when they were free. I haven't had the guts to shell out the 600 bucks or whatnot it's gotten to now (I stopped following them) when there are still some pretty good password crackers out there.)

~D

Re:why "build" your own array? (1)

stric (1067) | more than 8 years ago | (#14595642)

Veritas Volume Manager (vxvm), not Veritas Netbackup or somesuch.

This produced by the SAME Veritas company?? (1)

DaedalusHKX (660194) | more than 8 years ago | (#14595734)

Meaning they bought the whole company not just a license to the backup system.

Symantec is a financially powerful corp. They DID buy Veritas... unless this is a different Veritas corp.

~D

Re:This produced by the SAME Veritas company?? (1)

itwerx (165526) | more than 8 years ago | (#14596061)

Symantec is a financially powerful corp. They DID buy Veritas...

Yep, and the minute that deal went through I stopped recommending any/all Veritas' products!

Re:why "build" your own array? (2, Interesting)

adam872 (652411) | more than 8 years ago | (#14596094)

Or, you get OpenSolaris and use ZFS on that same array. It's a filesystem and a volume manager in the same piece of software. Best of all, it's all free, and I think ZFS is open-sourced too.

Re:why "build" your own array? (4, Informative)

SWroclawski (95770) | more than 8 years ago | (#14596157)

ZFS != Veritas Volume Manager

The Veritas cluster filesystem (which is the reason I'd imagine someone would go through all the effort) has the ability for multiple systems to access a single volume at the same time: the moral equivalent of NFS, but without the NFS server or the speed problems associated with NFS due to the filesystem abstraction (i.e. it's good for databases).

The only Free competitor that I know of for this is GFS.

ZFS is a very powerful filesystem/volume manager, but it's more akin to LVM + very smart filesystem access.

Re:why "build" your own array? (1)

ianezz (31449) | more than 8 years ago | (#14596870)

The only Free competitor that I know of for this is GFS.

What about OCFS [oracle.com] and OCFS2 [oracle.com] ?

Re:why "build" your own array? (2, Informative)

adam872 (652411) | more than 8 years ago | (#14597487)

Hang on a sec, VxVM is not the same as VCFS. ZFS is analogous to VxVM, whereas VCFS is like the other Sun product: SAM-QFS. I don't think the original requester was looking for clustered FS, but if they were then you are right, VCFS would be a very good choice.

Re:why "build" your own array? (1)

SWroclawski (95770) | more than 8 years ago | (#14597845)

If you don't care about clustering, then I wonder about the price hit that you take for using Fiber. Fiber has wonderful advantages but for home use, without clustering, I don't see the advantages.

Re:why "build" your own array? (1)

adam872 (652411) | more than 8 years ago | (#14598373)

The main advantage I can think of is dual ported drives, where there is a completely redundant path, all the way to disk. But for home or small business use, SATA, SAS (Serial Attached SCSI) or plain old SCSI is probably enough.

Re:why "build" your own array? (1)

Big Jason (1556) | more than 8 years ago | (#14596242)

Hrmm, I can use VxVM which has been out for 10 years and is rock solid OR I can use ZFS which Sun just released in the latest Solaris Express; it's not even shipping in a supported product!

Re:why "build" your own array? (1)

adam872 (652411) | more than 8 years ago | (#14597511)

All of that is true, *but* if I wanted proper enterprise-class file serving, I wouldn't be using software RAID. I'd get an EMC, Sun, HDS, IBM, HP or NetApp array and do away with the volume manager part. Having said that, VxVM would work OK, or Solstice DiskSuite, or this fancy new ZFS stuff, which looks very promising but doesn't have the performance or features that a good quality array has.

Re:why "build" your own array? (5, Funny)

Monkeys!!! (831558) | more than 8 years ago | (#14595435)

My friend, recursive green, has three A5200s in his basement right now, one stores his *ahem* photo collection and is web accessible.

I call bs on this. I demand you post the IP address of the said server so I can verify your claims.

Re:why "build" your own array? (2, Funny)

NoMoreNicksLeft (516230) | more than 8 years ago | (#14595717)

You think that's impressive? Read my kuro5hin diaries [kuro5hin.org] where I talk about how to actually index the photos (start with Part I first). I'm actually turning it into a website, and have started beta testing, so if you want a free account....

Re:why "build" your own array? (1)

Monkeys!!! (831558) | more than 8 years ago | (#14595766)

*skims over details* Don't have much database experience but I'm willing to beta. Have emailed you with my contact details.

Re:why "build" your own array? (1)

Baddas (243852) | more than 8 years ago | (#14597343)

I dropped you an email as well. Fascinating stuff. Hope slashdot doesn't crush you in fanmail

Re:why "build" your own array? (2, Interesting)

Bert64 (520050) | more than 8 years ago | (#14595451)

Aren't the A5200 arrays JBODs? Or do they do hardware RAID like the A1000 does...

I need something that will do RAID 5 in hardware and show up to the OS as a single device, just like the A1000 does... I considered an A5200, but I was told I'd need to use software RAID on it.

Re:why "build" your own array? (1)

briansmith (316996) | more than 8 years ago | (#14596123)

Have you investigated doing software RAID using Solaris 10's ZFS file system? I thought hardware RAID was the best, but the Solaris engineering team's blog postings indicate that they believe that their ZFS RAID system is more reliable than hardware RAID.

Re:why "build" your own array? (1)

Bert64 (520050) | more than 8 years ago | (#14597121)

More reliable perhaps, but the cpu usage is likely to be a lot higher...
Also software raid makes it more difficult to put my root filesystem (and kernel, bootloader etc) on there.
Finally, software raid isn't likely to hotswap as nicely as hardware raid does.

Re:why "build" your own array? (1)

Octorian (14086) | more than 8 years ago | (#14597975)

Actually, software RAID works very well in Solaris for the root filesystem. In fact, Solaris is the only OS where I currently trust software RAID for the root filesystem.

Re:why "build" your own array? (1)

AKAImBatman (238306) | more than 8 years ago | (#14595985)

Sun's A5200s are cheap on eBay, and you can pick up something like a 420r or a 250 to drive the thing.

Even better is to check AnySystem.com [anysystem.com] for your needs. Their everyday prices are excellent, and their eBay prices are even better! (Often you can get an 8-way system loaded with fibre channel drives and gigs of RAM for $2000-$3000.) I don't have any affiliation with them other than trying to get my boss to replace our expensive Windows servers with AnySystem servers. :-)

Has anyone else used these guys?

Re:why "build" your own array? (0)

Anonymous Coward | more than 8 years ago | (#14596251)

A couple years back the software pirates* I worked for bought a system from AnySystem. It was a good price and showed up quickly. My only concern is/was licensing. While AnySystem installs Solaris on the machines for you, it's the buyer's responsibility to come up with an RTU for it.

Sun keeps dinking around with the definition of what machines qualify for a free RTU but at the time, the purchased machine did not.

*Okay, maybe they didn't wear eye patches, have hooks for hands, or say "Arrrr" but they were pirates. There was _one_ XP CD in the office, a nice little CD-R with the key written on the CD itself with a Sharpie -- that's how they come from Microsoft, right? Karma's a bitch though. The offshore office in Chennai had a CD just like it. Said CD came preinfected with a nasty worm and when they connected the onshore office to the Chennai office via VPN almost all the machines were infected. Lost about a week of time on a very tight development schedule

Re:why "build" your own array? (0)

Anonymous Coward | more than 8 years ago | (#14596885)

Hey! My copy of Windows looks like that. It came from my high school networking teacher. He told me to make a copy soon 'cause the disc was getting old and worn out.

Re:why "build" your own array? (2, Funny)

baadger (764884) | more than 8 years ago | (#14596887)

There was _one_ XP CD in the office, a nice little CD-R with the key written on the CD itself with a Sharpie -- that's how they come from Microsoft, right?

Yep, that's how the best ones come.

Re:why "build" your own array? (4, Interesting)

LordMyren (15499) | more than 8 years ago | (#14596231)

You're right and you're wrong. I myself started with T-cards and 36-gig Cheetahs. It was amazing after a life of cheap, low-performance IDE (I was a college student at the time). But shit kept breaking, the hacks kept getting worse and worse, the duct tape bill started getting too big, and I just got tired of it. Drives would go offline and there was no hotswap support... kiss your uptime goodbye.

So I did exactly that: went on eBay and bought a pair of Photons. Only 5100's, but 28 drives was pretty nice.

I was pretty underwhelmed. They were a steal when I got them (well, a "good" price when you factor in shipping), but the performance was never there, even with really good 10K.6 Cheetahs. RAID never helped, no matter how it was configured. It just didn't seem that useful.

Plus the A5200's weigh 125lbs and hauling them between dorm rooms proved less than fun.

And even locked in my basement closet, I could hear the roar of the two A5100's. I'd been "meaning" to get rid of them for a while, but now that I'm changing states... it was finally time. I sold 'em on Craigslist for $280 for both. Same as I bought 'em for, and that includes shipping.

I dunno. If I were anyone with a brain, I'd wait another year for SAS to go ape-shit on everyone. The enclosure/host-controller split is a smart breakdown that'll really help beat away the single-vendor solution... the reason everyone can charge so much for hardware now is that everything is one unit: the enclosures, the controller, it's a big package with a nice margin. When XYZ company can come along and sell you a 24-drive enclosure for pennies that you can plug into a retail SAS controller... it's a game changer. Just watch the ridiculous margins drop.

If you need something now, just get SATA RAID. Intel's new I/O processor is amazing; it'll give you really nice performance. But otherwise, I'd say wait for SAS. I suppose it's still more expensive than a pair of A5100's, but I'd wager the performance will be better.

As a side note, I sometimes wonder whether the fibre cabling I bought was bad. I really couldn't sustain more than 40 MB/s even doing XFS linear copies, even with 14 drives dedicated to the task. I'm not sure if bad cabling would've given me some kind of overt error, or might have just quietly degraded my performance.

Myren

Re:why "build" your own array? (1, Informative)

Anonymous Coward | more than 8 years ago | (#14596344)

I really couldnt sustain more than 40 MB/s even doing XFS linear copies

40MB/s seems awfully close to a saturated SCSI bus of that era. E.g., Sun Ultra workstations have 40MB/s SCSI busses in them. Probably the way to boost performance is to have multiple arrays, each using a dedicated controller in the workstation/server (ensure controllers are on separate PCI busses, too) and the mirror set up across the controllers (gee, three controllers, three arrays, say five disks per array...I'll let you cover the power bill this time ;)

Re:why "build" your own array? (1)

Achromatic1978 (916097) | more than 8 years ago | (#14596287)

I'm not sure $2000-$3000 a pop is 'at home' price point, just yet, for a RAID array of HDDs.

speedwise (0)

TheSHAD0W (258774) | more than 8 years ago | (#14595201)

It actually looks like SATA has a higher potential speed than FC, though it's not designed to act as a bus like FC is. I suspect a machine with a multiple SATA RAID controller will beat the equivalent FC solution, though perhaps with less failover capability.

Re:speedwise (2, Informative)

NoMoreNicksLeft (516230) | more than 8 years ago | (#14595217)

Really? You think this is true, when I can continue to just throw more host bus adapters into the machine, and more drives on the array? FC is meant to allow for up to 64k drives, you know. All striped if that's how I want it. And with many, many gigabit adapters attached to that same fabric.

Re:speedwise (0)

Anonymous Coward | more than 8 years ago | (#14595890)

Actually, the fabric address space is 24 bits (16M nodes). However, most drives are designed to work with FC-AL (arbitrated loop), which has an 8-bit address (you'd think that would allow 256 drives; however, for reasons I won't get into here, the real number is just less than half of that: 126 nodes on a single loop).
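
For anyone checking that arithmetic, here is a minimal Python sketch (the 127/126 figures follow from the FC-AL AL_PA encoding: only 127 of the 256 8-bit values survive the 8b/10b neutral-disparity rules, and one of those is reserved for the FL_Port):

fabric_address_bits = 24
fabric_nodes = 2 ** fabric_address_bits              # 16,777,216 possible fabric addresses

raw_alpa_values = 2 ** 8                             # AL_PA is an 8-bit field, so 256 in principle
valid_alpa_values = 127                              # only 127 survive the 8b/10b neutral-disparity rules
usable_loop_devices = valid_alpa_values - 1          # one AL_PA is reserved for the FL_Port

print(f"fabric addresses: {fabric_nodes:,}")         # 16,777,216
print(f"devices per loop: {usable_loop_devices}")    # 126, matching the comment above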

Re:speedwise (1)

t0rkm3 (666910) | more than 8 years ago | (#14596136)

And even in that limited address space performance can get a bit spotty after 30 or so devices...

YMMV and is highly dependent on the hardware vendor.

Re:speedwise (0)

Anonymous Coward | more than 8 years ago | (#14596452)

throw more host bus adapters into the machine

If the PCI bus can handle it. Good workstations and servers (not $399 Dell specials) come with several independent busses to cover high bandwidth setups.

Re:speedwise (1)

NoMoreNicksLeft (516230) | more than 8 years ago | (#14596500)

You only have one PCI bus? I think my machine has 4 (for 18 PCI slots). All are 32bit though, so it's not so impressive. I need a new backplane.

Re:speedwise (0)

Anonymous Coward | more than 8 years ago | (#14596689)

You should probably check again, most motherboards run everything through ONE channel, including IDE controllers, and AGP

Re:speedwise (1)

NoMoreNicksLeft (516230) | more than 8 years ago | (#14596955)

Really? That'd be sort of difficult, being that PCI busses max out at 6 slots or so. I have 18, so stfu.

Re:speedwise (0)

NoMoreNicksLeft (516230) | more than 8 years ago | (#14596959)

Oh, and notice I said backplane, ACtard. It's not a motherboard, the CPU and logic is all on a riser card. I'm hardcore, loser.

My memory recalls (4, Informative)

DaedalusHKX (660194) | more than 8 years ago | (#14595301)

The major thing about both SCSI and FC is that both designs imply greater redundancy. Serial ATA provides the one major thing Parallel ATA could almost do: high THROUGHPUT. But it offers no redundancy, and higher CPU usage than SCSI chipsets.

Transactions make significant use of CPU resources in ATA-based systems. The only cards I am aware of that move ATA transactions to hardware come from Advansys and Adaptec; both use SCSI chipsets with ATA translators. Available for about $190.00 each, but hard to find.

On a PII 450, CPU usage with a softraid was almost 40% when at full throttle with 3 P/ATA drives. The same system went down a bit using a PCI card (not sure why, actually; presumably some operations are offloaded by the PCI card, thus reducing the overhead, but don't quote me on it).

The main problem with all of this is cost vs. redundancy vs. speed... which brings us to the old issue we found with cars: the FAST, GOOD, CHEAP triangle.

        FAST
       /    \
   GOOD------CHEAP

CHEAP + FAST != GOOD
GOOD + CHEAP != FAST
FAST + GOOD != CHEAP

Since my own buying experience with FC is limited (read as NONEXISTENT), I can only tell you what I've learned on SCSI vs. PATA/SATA rigs.

SATA systems that do not use an Adaptec chipset are usually mere translators that make use of CPU resources to monitor drive activities. 3Ware cards seem to transcend this limitation rather well, and they provide a fine hardware RAID setup for ATA drives.
PATA systems that do not use an Adaptec chipset are likewise mere translators.
SCSI interfaces use a hardware chipset that monitors and controls each drive, thus relieving the CPU of being abused by drive-intensive operations (think of a high-profile FTP or CVS/Subversion server and you get the idea of what would happen to the CPU if it were forced to perform the duties of server processor AND software RAID monitor).

Onto bandwidth. Serial ATA setups suffer from an issue I've found with almost all setups: without separate controllers, SATA and PATA setups split the total bandwidth across the maximum number of active drives. More recent SATA controllers may have fixed this, but in general I've found that my systems slowed down in data transfer rate when using drives or arrays on the same card. SCSI seems to bypass this completely by providing each drive with a dedicated pipeline, though I am not sure what the set amount is; there is an issue, however, as some of the older chipsets DO have problems handling a full set of 15 drives.

The primer at the Seagate link in the article is on the money in their LVD vs. FC FAQ. You will most likely require a backplane and a full setup, and it probably won't be cheap. The big difference is that FC setups are usually bulletproof and can support MASSIVE numbers of drives (128 or 125, I think...), with the only thing faster being a VERY high-end solid state drive or array. Those also have a very small amount of latency, but as far as speed goes, set up a good RAM drive on a dedicated memory bank with a backup battery and EMP shielding... okay, that's way expensive :)

~D

PS - I understand that I've not answered your question, but hopefully I've simplified this for everyone else out there.

Re:speedwise (2, Informative)

sconeu (64226) | more than 8 years ago | (#14595393)

Say what?

When I worked at an FC startup, FC was 4Gbps full duplex, and 10Gbps was in the works.

eSATA (1, Interesting)

Anonymous Coward | more than 8 years ago | (#14595208)

Would the new eSATA external SATA interface be fast enough for your purposes?

Re:eSATA (1)

innosent (618233) | more than 8 years ago | (#14595497)

Yeah, I was thinking a good choice for his problem might be an ATA-over-Ethernet-type solution, like Coraid. [coraid.com]

Re:eSATA (1)

Octorian (14086) | more than 8 years ago | (#14598024)

Whenever I look at those CORAID boxes, compared to something like iSCSI or FC, I seriously wonder why they built them. (even heard from someone who uses one that he hasn't gotten very good performance)

Honestly, it seems like that company's products are only popular because they got a post on Slashdot. Not sure why I'd ever want one of their boxes, as the whole concept just feels "wrong". Couldn't they just have put that energy into something like low-cost iSCSI or FC boxes? (using SATA drives, of course)

Re:eSATA (3, Interesting)

TinyManCan (580322) | more than 8 years ago | (#14596098)

eSATA is getting closer, but I believe the real long term answer is going to be iSCSI.

I used to be really against iSCSI, as the native stacks on various OSes just did not deal with it well. By that I mean that a 50 MB/s file transfer would consume almost 100% of a 3 GHz CPU. Also, the hard limit on gig-E transfers of 85 MB/s (TCP/IP overhead + iSCSI overhead) was just too low.

Now, that has all changed. You can get TCP/IP Offload Engines for just about every OS (I don't work with Windows, so I don't know what the status of that is), and 10 gigabit Ethernet has become financially reasonable.

For instance, the T210-cx [chelsio.com] is around $800, and will deliver a sustained 600 MB/s (not peak or any other crap). Also, the latency on a 1500 MTU 10 Gb/s Ethernet fabric is something to behold.

I think by the end of this year, we will see iSCSI devices on 10GbE that out-perform traditional SAN equipment in the 2 Gb/s environment, in every respect (including price), by a large margin. 4 Gb/s SAN could come close, but I still think hardware-accelerated iSCSI has a _ton_ of potential.

If I were starting a storage company today, I would be focusing exclusively on the 10 Gb/s iSCSI market. It is going to explode this year.
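
A rough back-of-the-envelope sketch of the overhead math behind those numbers (standard Ethernet/IP/TCP header sizes; it deliberately ignores iSCSI PDU headers and CPU limits, which is why the 85 MB/s figure above comes in lower still):

MTU = 1500                        # bytes of IP payload per Ethernet frame
ETH_OVERHEAD = 14 + 4 + 8 + 12    # header + FCS + preamble + inter-frame gap
IP_HDR, TCP_HDR = 20, 20

payload_per_frame = MTU - IP_HDR - TCP_HDR       # 1460 bytes of useful data
wire_bytes_per_frame = MTU + ETH_OVERHEAD        # 1538 bytes on the wire
efficiency = payload_per_frame / wire_bytes_per_frame

for name, gbps in [("1 GbE", 1), ("10 GbE", 10)]:
    mb_per_s = gbps * 1e9 / 8 * efficiency / 1e6
    print(f"{name}: ~{mb_per_s:.0f} MB/s best-case TCP payload")

# 1 GbE  -> ~119 MB/s theoretical ceiling, before iSCSI PDU and stack overhead
# 10 GbE -> ~1187 MB/s, so a sustained 600 MB/s from a TOE card is plausible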

Re:eSATA (0)

Anonymous Coward | more than 8 years ago | (#14597899)

Apples and oranges.

eSATA is an extension of SATA from the motherboard to the backend. iSCSI is for accessing data over a network. Fibre channel has the benefit of being both a backend and a frontend protocol, but at the cost of higher price and complexity. Systems with iSCSI front ends still have SATA or FC backends. Nobody will ever use iSCSI on a backend, so your comment is kinda out of line.

Re:eSATA (1)

TinyManCan (580322) | more than 8 years ago | (#14598982)

I guess I feel that the method by which the drives are connected to the RAID controller doesn't really matter. Any RAID head with enough cache and bandwidth is going to perform well, no matter if it is using SCSI, FC or SATA drives. The important part is how that data is shared in the datacenter, and how the data makes its way to the clients. iSCSI is a revolution in this area.

Try areca (3, Interesting)

Anonymous Coward | more than 8 years ago | (#14595236)

You can get a SATA II to FC adapter from Areca. These are pretty expensive, but the nice thing is that you don't need a motherboard in your case. Combine it with a Chenbro 3U 16-bay case and you have a relatively affordable setup.

http://www.areca.com.tw/products/html/fibre-sata.htm [areca.com.tw]

Re:Try areca (2, Interesting)

Loualbano2 (98133) | more than 8 years ago | (#14595290)

Where can you go to order these? I did a froogle search and got nothing and the "where to buy" section doesn't seem to have any websites to look at prices.

-Fran

Re:Try areca (0)

Anonymous Coward | more than 8 years ago | (#14595838)

That is a motherboard for an Intel IOP, which is the IO-heavy next-gen StrongARM. The OS, etc. are all in flash rom, and it has a little extra hardware off-load for the ECC. The main savings versus a build-it-yourself x86 version would be in power/heat and management. Otherwise, it's just a single purpose PC.

alternative to FC (3, Informative)

Yonder Way (603108) | more than 8 years ago | (#14595293)

Aside from Fiber Channel, you could roll your own with iSCSI or ATAoE (ATA over Ethernet). This way you could take advantage of existing ethernet infrastructure and expertise, and partition off a storage VLAN for all of your DASD.

Re:alternative to FC (1)

lgw (121541) | more than 8 years ago | (#14595927)

If you don't have experience trouble-shooting fibre channel, and do have experience trouble-shooting ethernet, this is definitely the way to go.

Fibre channel is its own world, and there's no real carry-over from IP networking skills. No software analyzer. No 'ping' command. Of course, if it all works when you plug it together, then no worries!

Re:alternative to FC (1)

t0rkm3 (666910) | more than 8 years ago | (#14596165)

I found it very easy to transition from "Standard Network Guy" to "Standard Network Guy + SAN". It is quite painful not to have a sniffer (unless you have a Cisco MDS9000, on which Ethereal works just fine), but you can make do by having verbose logging turned on for the HBAs and keeping track of the FCIDs of your devices.

Also, the system tends to tell you sooner rather than later about issues on your fabric or loop.

XServe RAID not fast enough? (5, Informative)

Liquid-Gecka (319494) | more than 8 years ago | (#14595305)

Speaking as the owner of two XServe RAID devices (5TB and 7TB models), as well as several other Fibre Channel devices, I can say that the Apple Fibre Channel is by no means slow. Each SATA drive has pretty much equal performance to the SCSI drives we use in our Dell head node. Combined, there are times where we can pull several hundred megs a second off the XRAIDs. Plus, our XRAID has been fairly immune to failures thus far. I have yanked drives out of it and it just keeps right on going.

Another little hint: if you are really worried about speed, you can just install large, high-RPM SATA drives yourself. It's not that hard to do at all.

Check out Alien Raid [alienraid.org] for more information.

Re:XServe RAID not fast enough? (2, Informative)

DaedalusHKX (660194) | more than 8 years ago | (#14595377)

Actually, I don't know why he'd need 5TB or 7TB, but with SATA drives this would be relatively easy to achieve, since most of these are relatively huge. (SCSI drives, by comparison, get extremely expensive for even 36GB drives: $200.00 or so apiece, plus host controller, etc. Prices may have changed since I worked IT, but I recall even the refurbs being very pricey.) SATA drives at $200.00 can give you a pretty nice 400 GB or so. As you've said, this can easily solve your "cost" issues.

SCSI Drives
1000G (rounded off to 1TB) / 36G = 28 drives * $200.00 = $5600.00 USD (excluding shipping, RAID towers, etc). This is presuming a striped RAID 0... if you're doing this via RAID 5 or 0+1, all I can say is "YE GODS!!!"
** You will also have to price out extra controllers (you will need 3 ribbon cables, plus terminators, etc. to install all 28 drives, at 15 devices per chain). Don't forget the massive power requirements.

SATA Drives
1000G (rounded off to 1TB) / 400G = 3 drives * $200.00 = $600.00 USD, plus 3 SATA cables and ONE host adapter. If you make it a RAID 5 or 0+1 you will need at least 5 drives to make it solid, but you can see that if you were to reach 7 TB, the drive count would shoot up significantly, requiring several RAID towers for the SCSI setup.

Pricewatch currently has 400GB SATA drives at $205.00 for the lowest.
By comparison, the cheapest 36G SCSI drives are Hitachi Ultrastars at $138.00 per drive at Zip Zoom Fly. The next ones are $236.00 and $556.00, respectively. Don't forget cables and cooling, since those high-RPM Ultrastars really heat up (I owned several Deskstars and the latest ones died of heatstroke).

~D
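
The arithmetic above, as a small Python sketch (prices are the 2006 figures quoted in the comment; raw capacity only, no controllers, cables or RAID overhead):

import math

target_gb = 1000   # roughly 1 TB raw

options = {
    "36GB SCSI  @ $200": (36, 200),
    "400GB SATA @ $205": (400, 205),
}

for name, (size_gb, price) in options.items():
    count = math.ceil(target_gb / size_gb)
    print(f"{name}: {count:2d} drives, ${count * price:,} total")

# 36GB SCSI  @ $200: 28 drives, $5,600 total
# 400GB SATA @ $205:  3 drives, $615 total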

Re:XServe RAID not fast enough? (1)

undeadly (941339) | more than 8 years ago | (#14595512)

Why buy such small drives (36GB) when you need so many (28)?

Because they get... (1)

DaedalusHKX (660194) | more than 8 years ago | (#14595565)

Because they get FAR more expensive as you go up. Okay, so here are the next setups.

10K RPM (not 15,000, mind you) Seagate and Maxtor. The cheapest entries on Pricewatch are both $495, plus $5 or so for UPS ground shipping.

SCSI U320 (nice) and of course they're 300 gigs. GREAT!

Okay, now here's the kicker: ALL the other suppliers provide them at $640.00 USD or more. You save some, but the downside, I wager, is that they won't carry enough to build a large RAID 5 setup. Also, the smaller drives are faster in a large array because you spread the seek requests across many smaller drives. Same goes for independent write requests. I don't recall being able to find a SATA RAID solution with THAT many sockets that isn't a standalone device/tower.

Again, depending on how you do it, you'll end up paying a lot for power and cooling (air conditioning, so another power cost).

~D

Re:XServe RAID not fast enough? (0)

Anonymous Coward | more than 8 years ago | (#14595578)

Why didn't you look up SCSI drives on the same site?

SCSI drives are about $3/gig vs $0.50/gig for SATA. Makes the math so much easier.

Some corrections (1)

misleb (129952) | more than 8 years ago | (#14595600)

First of all, a good 400GB SATA drive [newegg.com] runs around $300. Note that it isn't even SATA II, which one would probably want for a high-performance RAID (NCQ et al.).

Second, 36GB is hardly the top end for SCSI drives. For a little more than $300 you can get 147GB SCSI drives [newegg.com] . Also keep in mind that more drives means more striped performance. So more drives isn't necessarily a bad thing.

Granted, it is still more expensive for the SCSI setup. I just think you should make a fair comparison.

-matthew

Re:Some corrections (1)

WuphonsReach (684551) | more than 8 years ago | (#14598320)

On the flip side, more drives = more power usage (and probably more heat?).

Compromises aplenty when building RAID arrays. Performance, heat, power usage, noise, cost... etc.

Re:XServe RAID not fast enough? (0)

Anonymous Coward | more than 8 years ago | (#14596108)

Speaking as an Xserve owner who read the question and didn't go off half-cocked, thanks for your Kool-Aid apology.

emphasis obviously needed.

"The Xserve Raid comes close to meeting the need but its major design compromise is to use ATA drives, thus losing the high I/O rate of FC drives."

Nowhere does it state or imply that the Xserve is no good for what it is. The "spec" needs FC drives for real sustainable throughput. Que.

Re:XServe RAID not fast enough? (1)

Liquid-Gecka (319494) | more than 8 years ago | (#14596406)

Yes, but what I was saying is that blanket-excluding drives because they use a SATA->FC converter limits the project without gaining anything. We benchmarked our drives against SCSI drives and other fibre channel solutions. The SATA drives in the XServe RAID kicked the 10K RPM SCSI drives around the block. The 15K SCSI drives did better... but not "drastically better" when compared 5-disk array vs. 5-disk array.

So my question is this: why limit yourself? Did you look at the "SATA" specifications on the XServe RAID and think "slow", or did you actually thoroughly look through the design documents on apple.com? I work with a guy who swears up and down that SCSI/Fibre Channel is the only fast interconnect solution, but our benchmarking clearly showed that SATA is oftentimes faster, and worst case is only a bit slower.

There is no real speed advantage to using FC drives vs. SATA drives. The only advantage is when you start chaining a bunch of them together. But the XServe RAID uses a Fibre Channel bus to connect to everything outside of the actual RAID device.

Don't confuse me with somebody who is a die-hard Apple fanboy. Right now our Macs are actually running Linux and account for less than 15% of our computing hardware. We just looked at the XServe RAID and realized that it was by far the best solution for us, being that we wanted a cheap and fast drive array... exactly what the poster seems to be looking for.

Re:XServe RAID not fast enough? (3, Informative)

dasdrewid (653176) | more than 8 years ago | (#14596372)

Just to point out...the XServe RAID uses Ultra ATA drives, *NOT* SATA drives. I spent the last month or two researching RAID arrays, and that was one of the most disappointing things I saw...

Xserve RAID features a breakthrough Apple-designed architecture that combines affordable, high-capacity Ultra ATA drive technology with an industry standard 2Gb Fibre Channel interface for reliable... from Apple [apple.com] .

Re:XServe RAID not fast enough? (2, Informative)

Liquid-Gecka (319494) | more than 8 years ago | (#14596563)

I would so love to prove you wrong right now. Turns out you are completely correct: both of our XServe RAID devices use 7200 RPM Ultra IDE Hitachi drives. This invalidates at least some of our benchmarking, as it was done single-drive on our XServe systems (what were supposed to be like drives but are Serial ATA drives). All of our single-drive benchmarks are invalid then (or rather, are meaningless to this discussion =) However, the XServe RAID still performed very well when doing a 5-disk vs. 5-disk RAID setup.

What really sucks is that we just found out that our nifty Cisco switch that was purchased before I got here has a severe bandwidth restriction making it nearly useless for MPI communication on our clusters... 6GB/s between 48-port gig switches. Who thought that was a good idea?

Re:XServe RAID not fast enough? (1)

univgeek (442857) | more than 8 years ago | (#14596924)

48Gigabits per second sounds pretty decent to me, unless you meant 6Gigabits per second?

Re:XServe RAID not fast enough? (0)

Anonymous Coward | more than 8 years ago | (#14599070)

Whoops! You're right! 6 gigaBITS per second for a 48-port gigabit Ethernet module.

Huh?! (-1, Troll)

Anonymous Coward | more than 8 years ago | (#14595339)

Why would I want to store something that came out of my fiber channel? That is just plain sick. Maybe if you were into composting..

Infortrend (1)

penguinboy (35085) | more than 8 years ago | (#14595394)

Infortrend is crap! Stay far, far away from anything produced by them. I'd also warn against purchasing from one of their US distributors - Zzyzx - but that's not an issue since the company just went out of business.

Re:Infortrend (1)

danpritts (54685) | more than 8 years ago | (#14595948)

I haven't had a lot of experience with their fibre channel stuff (one system only), but my experience with infortrend SCSI arrays has been very good.

When I worked at UUNET (1997-2000) we had hundreds of servers with infortrend-based arrays as their storage.

I've had reasonable service, great pricing, and bad support from CAEN Engineering, one of their resellers. I have heard good things about Western Scientific.

Apple Xraid is hardly a new price point... heh... (0)

Anonymous Coward | more than 8 years ago | (#14595460)

So, the Xraid has Apple's name attached and it's cheaper than an EMC Clariion. Great. Come on, people! Apple is not in any way, shape, or form making their own fibre-channel RAID gear. To think that is just downright naive and fanboyish. Apple has another OEM make their stuff to their spec. It's just that simple.

Apple has not brought anything to a new price point, either. That's a ridiculous notion!

Apple brings nothing to any price-point worth mentioning. The iPod line is grossly overpriced for what you get, their desktops and notebooks are insanely expensive, and their support is obscenely priced.

Now, to get to something meaningful...

You can find much better FC and iSCSI units for far less if you look around a little. I know for a fact that Promise Tech makes some extremely powerful and very affordable FC and iSCSI chassis. http://www.promise.com/ [promise.com]

And before anybody (fanboys) starts up with the "Apple is a tier one vendor and Promise is not!" line -- I urge you to do your homework on storage. You will find that what Promise is peddling is not only cheaper but far more flexible (check out the VTrak manuals).

Re:Apple Xraid is hardly a new price point... heh. (0)

Anonymous Coward | more than 8 years ago | (#14595604)

...actually, the hard drive iPods are quite reasonable.
Their competitors are often quite a bit more expensive in the same storage range.
Often the competitors' products do more than the iPod, but the iPod does what people want; they don't care about the extra stuff if they have to pay for it.

Why FC? (1)

statemachine (840641) | more than 8 years ago | (#14595531)

You say you want inexpensive. Yet you want fibre channel. I think you're looking in the wrong direction. FC is cute to experiment with, but not really feasible for your purposes.

In your question, you said you don't want a whole lot of redundancy or high availability. You can do nearly the same with an inexpensive computer with a large raid, gigabit ethernet, and NFS or Samba.

If there's money riding on this (i.e. you will lose big money each second the connection is down), then you need FC and service contracts (and if it's that important, a data center, etc.). Otherwise, FC's overkill.

Look at Storcase (1)

bpb213 (561569) | more than 8 years ago | (#14595569)

We personally use a StorCase InfoStation at work (http://www.storcase.com/infostation/ifs_ovrvw.asp [storcase.com] ). Now, we have the SCSI version, so I can't speak to the Fibre version, but a fully loaded StorCase is cheaper than an XServe, and more dense. Alas, it would not come with the instant-solution tech support that an XServe would.

All FC RAID is going to be high-availability (4, Informative)

sirwired (27582) | more than 8 years ago | (#14595670)

First, you haven't articulated your needs properly. "High I/O rates" means two separate things, both of which must be considered and engineered for:

1) High numbers of transactions per second. Your focus here is going to be on units that can hold a LOT of drives (not necessarily of high capacity). You want as many sets of drive heads as possible going here. In addition, SATA drives are not made to handle high duty-cycles of high transaction rates. The voice coils have insufficient heat dispersion. (They are just plain cheaper drives.) High transaction rates require a pretty expensive controller, and you won't be able to avoid redundancy, but that isn't a problem, since you are going to need both controllers to support the IOPS. (I/O's per second.)

2) High raw bandwidth. If you need raw bandwidth and your data load is non-cacheable, then really software RAID + a JBOD may be able to get the job done here, if you have a multi-CPU box so one CPU can do the RAID-ing. Again, two controllers are usually going to provide you with the best bandwidth. SATA striped across a sufficient number of drives can give you fairly decent I/O, but not as good as FC.

There are "low end" arrays available that will offer reduced redundancy. The IBM DS 400 is an example. This box is OEM'd from Adaptec, and pretty much uses a ServRAID adapter as the "guts" of the box. This unit uses FC on the host side, and SCSI drives on the disk side. It is available in single and dual controller models. (Obligatory note: I do work for IBM, but I am not a sales drone.) This setup has the distinct advantage of being fully supportable by your vendor. A homegrown box will not have that important advantage.

Don't be scared away by the list price, as nobody ever pays it. Contact your local IBM branch office (or IBM business partner/reseller), and they can hook you up.

This unit is also available as part of an "IBM SAN Starter Kit" which gives you a couple of bargain-barrel FC switches, four HBA's, and one of these arrays w/ .5TB of disk. (I am writing a book on configuring the thing in March, so you will have a full configuration guide (with pretty pictures and step-by-step instructions) by the beginning of April.)

SirWired
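
A crude spindle-count sketch for the transaction-rate case described above (the per-drive IOPS figures and the target workload are rule-of-thumb assumptions, not vendor specs; plug in your own Postmark or IOMeter numbers):

import math

# Rough rule-of-thumb random IOPS per spindle for 2006-era drives (assumed, not measured).
assumed_iops_per_drive = {
    "7200 RPM SATA": 80,
    "10K RPM SCSI/FC": 130,
    "15K RPM SCSI/FC": 180,
}

target_iops = 2000   # hypothetical workload requirement

for drive, iops in assumed_iops_per_drive.items():
    spindles = math.ceil(target_iops / iops)
    print(f"{drive:>15}: ~{spindles} spindles for {target_iops} IOPS (before any RAID write penalty)")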

Re:All FC RAID is going to be high-availability (1)

Jeffrey Baker (6191) | more than 8 years ago | (#14595825)

Well, he _did_ say that he would measure "high i/o rate" using Postmark, so I think the requirement is clearly stated: scores well in Postmark.

Obviously, Postmark is a transaction rate/metadata benchmark.

Re:All FC RAID is going to be high-availability (0)

Anonymous Coward | more than 8 years ago | (#14596097)

Interesting, I've got some questions about DS4000 Turbo and some other IBM SAN offerings. If you're up to it shoot me an email with your IBM addy. Rlittle AT Gmail.

They are available ... (0)

Anonymous Coward | more than 8 years ago | (#14595718)

Searching a bit on the net gives you a couple of possibilities. An example of an all-FC array available in Europe:
http://www.transtec.de/D/E/products/Storage/transtec_6600.html [transtec.de]

It's a question of values with many trade-offs (4, Insightful)

postbigbang (761081) | more than 8 years ago | (#14595739)

The answer has a lot to do with the previously mentioned I/O goals that you have. Let me try to answer this in a taxonomy. This rambles for a bit but bear with me.

Case One

Let's say this array is to be used for a single application that needs lots of pull and is populated initially from other sources, with a low delta of updates. In other words, largely reads vs. writes. Caching may help; and if so, then you can tune the app (and the OS) to get fairly good performance from SATA RAID or through FC JBODs in a RAID 0 or 5 configuration. (There is no RAID 0; it's just a striped array without redundancy/availability and is therefore a misnomer.)

Case Two

Maybe you need a more generalized SAN, as it will be hit by a number of machines with a number of apps. You'll need better controller logic. You'll likely initially need a SAN that has a single SCSI LUN appearance, where you can log on to the SAN via IP for external control of the can that stores the drives (and controls the RAID level, and so on). This is how the early Xserve RAID worked, and how many small SAN subsystems work. Here, the I/O blocks/problems come at different places-- mostly at the LUN when the drive is being hit by multiple requests from different apps connected via (hopefully) an FC non-blocking switch (Think an old eBay-purchased Brocade Silkworm, etc). SCSI won't necessarily help you much.... and a SATA array has the same LUN point block. Contention is the problem here; delivery is a secondary issue unless you're looking for superlative performance with calculated streams.

Case Three

Maybe you're streaming or rendering and need concurrent paths in an isochronous arrangement with low latency but fairly low data rates-- just many of them concurrently. Studio editing; rendering farms, etc. Here's where a fat server connecting a resilient array works well. Consider a server that uses a fast, cached, PCI-X controller connected to a fat line of JBOD arrays. The server costs a few bucks, as does the controller, but the JBOD cans and drives are fairly inexpensive and can be high-duration/streaming devices. You need to have a server whose PCI-X array isn't somehow trampled by a slow, non-PCI-X GBE controller, as non-PCI-X devices will slow down the bus. You also get the flexibility of hanging additional items off the FC buses, then adding FC switches as the need arises. At some point, the server's cache becomes less useful and the server becomes its own bottleneck-- but you'll have proven your point and will have what now amounts to a real SAN with real switches and real devices.

The SATA vs SCSI argument is somewhat moot. Unless you cache the SATA drives, they're simply 2/3rd the possible speed (at best) of a high-RPM SCSI/FC drive. It's that simple. uSATA will come one day, then uSATA/hi-RPM..... and they'll catch up until the 30Krpm SCSI drives appear.... with higher density platters....and the cost will shrink some more.

I've been doing this since a 5MB hard drive was a big deal. SCSI drives will continue to lead SATA for a while, but SATA will eventually catch up. In the mean time, watch the specs and don't be afraid of configuring your own JBOD. And if you want someone to yell at, the Xserve RAID is as good as the next one.... except that it has the Apple Sex Appeal that seems a bit much on a device that I want to hide in a rack in another building.

Re:It's a question of values with many trade-offs (1)

Tweekster (949766) | more than 8 years ago | (#14596440)

Yes, but 2/3 of the speed for like 1/10th of the cost is still way better for non-true-enterprise systems. And 30K RPM SCSI drives? I doubt that.

Try AoE instead (2, Insightful)

color of static (16129) | more than 8 years ago | (#14595767)

Fiber channel just seems to have too high a cost of entry these days (or maybe it always has :-). It's not bad today with SATA being used on the storage arrays, but it is hard to compete with the other emerging standards. I've been using AoE for a little while now and have been impressed with the bang for the buck.

A GigE switch is cheap, and a GigE port is easy to add, or you can use the existing one on a system. AoE sits below the IP stack, so there is little overhead for comms, and it looks like a SATA drive in most ways. The primary vendor's appliance (www.coraid.com) will take a rack full of SATA and make it look like one drive via various RAID configs.

Yeah, FC is faster, but how many drives are going to be talking at once? Are you really going to fill the GigE and need FC to alleviate the bottleneck? If you are, then FC is probably not the right solution for you anyway.

Your mileage may vary, but I expect anyone will get comparable results for the price, and many will get excellent results overall.

Advice from a SAN lab manager (5, Informative)

Animixer (134376) | more than 8 years ago | (#14595810)

I can toss in a bit of advice, as I've been working with fibre channel from the low end to the high end for several years now. Currently I'm managing a lab with equipment from EMC, HDS, McData, Brocade, Qlogic, Emulex, 3par, HP, Sun, etc., from the top to the bottom of the food chain. I'm personally running a small FC-AL setup at home for my fileserver.

0. Get everything off of ebay.
1. Stick with 1Gb-speed equipment. It's older, but an order of magnitude less expensive.
2. Avoid optical connections if you can -- for a small configuration, copper is just fine and often a lot less expensive. Fibre is good for long-distance hauls and >1Gb speeds.
3. Pick up a server board with 64bit pci slots, preferably at 66mhz.
4. Buy a couple of QLogic 2200/66s. These are solid cards, and are trivial to flash to Sun's native Leadville FCode if you desire to use a SPARC system and the native fibre tools. They also work well on linux/x86 and linux/sparc64. These should run about $25 each.
5. Don't buy a raid enclosure. Get a fibre jbod. You can always reconstruct a software raid set if your host explodes if you write down the information. If you blow a raid controller, you're screwed. Besides, you won't want to pay for a good hardware raid solution, and I have yet to see a card-based raid; they're always integrated into the array. I recommend a Clariion DAE/R. Make sure you get one with the sleds. These have db-9 copper connections and empty should run about $200. Buy 2 or 4 of these, depending on how many hbas you have. They'll often come with some old barracuda 9's. Trash those; they're pretty worthless.
6. Fill the enclosures with Seagate FC disks. If you're not after maximum size, the 9GB and 18GB Cheetahs are cheap, usually like $10 a pop on eBay, and are often offered in large lots. They are so inexpensive it's hard to pass them up. Try to get the ST3xxxFC series, but do NOT buy ST3xxxFCV. The V indicates a larger cache, but also a different block format for some EMC controllers. They are a bitch to get normal disk firmware onto.
7. Run a link from each enclosure to each HBA. Say you have 2 enclosures with 10 disks each. Simple enough; and 1Gb/s up and down on each link.
8. Use linux software raid to make a bigass stripe across all the disks in one enclosure, repeat on the second enclosure, and make a raid10 out of the two. Tuning the stripe size will depend on the application; 32k is a good starting point.

With that setup, you should pretty much max out the bandwidth of a single 1Gb link on each enclosure, enjoy both performance and redundancy with the software raid, and not have to worry about any raid controllers crapping out on you.

You should be able to get two enclosures, 20 disks, a couple of copper interconnects and some older hbas for about $750 to $1,000 depending on ebay and shipping costs.

This should net you some pretty damn reasonable disk performance for random-access type I/O. This is NOT the right approach if you're looking for large amounts of storage. You'll get the RAID 10 redundancy in my example, but if you want real redundancy (and I mean performance-wise, not just availability -- you can drive a fucking truck over a Symmetrix, then saw it in half, and you probably won't notice a blip in the PERFORMANCE of the array; something Fidelity and whatnot tend to like) you have to pay big money for it. The huge arrays are more about not ever quivering in their performance no matter what fails.

Hope this was of some use.
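
A quick sketch of what that build adds up to, using only the numbers given above (two 10-disk enclosures of 9GB drives, one 1Gb link each, stripe within an enclosure and mirror across them; the ~100 MB/s per link is an approximation for a 1Gb FC link):

disks_per_enclosure = 10
disk_gb = 9
enclosures = 2
link_mb_s = 100        # a 1Gb FC link moves roughly 100 MB/s of payload (assumed)

stripe_gb = disks_per_enclosure * disk_gb      # 90 GB RAID 0 stripe per enclosure
usable_gb = stripe_gb                          # mirroring the two stripes halves raw capacity
write_ceiling = link_mb_s                      # every write goes to both mirrors, one link each
read_ceiling = link_mb_s * enclosures          # reads can be spread across both links

print(f"usable capacity: {usable_gb} GB")      # the '90GB subsystem' the reply below refers to
print(f"write ceiling  : ~{write_ceiling} MB/s")
print(f"read ceiling   : ~{read_ceiling} MB/s")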

Re:Advice from a SAN lab manager (2, Funny)

Jeffrey Baker (6191) | more than 8 years ago | (#14595865)

You just spent $750 to $1000 on a 90GB storage subsystem with only 1Gb/s of write bandwidth. Do you really think that's such a swell deal?

Re:Advice from a SAN lab manager (1)

danpritts (54685) | more than 8 years ago | (#14595959)

Plus, you'll spend $25 a month on electricity for it.

Re:Advice from a SAN lab manager (0)

Anonymous Coward | more than 8 years ago | (#14596384)

8. Use linux software raid to make a bigass stripe across all the disks in one enclosure, repeat on the second enclosure, and make a raid10 out of the two. Tuning the stripe size will depend on the application; 32k is a good starting point.

This is the wrong way to do RAID-10. You want to mirror first and then stripe across the mirrors.

If you stripe first and then mirror, you have 2 problems:
1) If you lose a drive, you have to rebuild the data on every drive that was in the stripe ... not just the drive that died.
2) If you lose a 2nd drive before the 1st drive is replaced and the RAID rebuild completes, you're restoring from tape.

If you mirror first, you only have to rebuild 1 drive worth of data when you lose a drive. It's also possible to lose 2 or more drives and not lose data ... as long as the 2nd, 3rd, etc drives come out of different mirror sets.
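
A small sketch of the difference for a 20-disk set like the one above (pure combinatorics: uniform failure chance, no hot spares, rebuild time ignored):

total_disks = 20
stripe_width = 10            # disks per stripe in the 0+1 layout

# RAID 1+0 (mirror, then stripe): after one failure, only the dead drive's
# mirror partner is fatal if it fails next.
p_second_failure_fatal_raid10 = 1 / (total_disks - 1)

# RAID 0+1 (stripe, then mirror): after one failure the whole stripe on that
# side is degraded, so any drive in the surviving stripe is fatal.
p_second_failure_fatal_raid01 = stripe_width / (total_disks - 1)

print(f"RAID 1+0: {p_second_failure_fatal_raid10:.0%} chance a 2nd failure kills the set")  # ~5%
print(f"RAID 0+1: {p_second_failure_fatal_raid01:.0%} chance a 2nd failure kills the set")  # ~53%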

Re:Advice from a SAN lab manager (0)

Anonymous Coward | more than 8 years ago | (#14599356)

Also known as: There's a reason that the 1 comes first.

Re:Advice from a SAN lab manager (1)

Octorian (14086) | more than 8 years ago | (#14598279)

I had a setup using one of those CLARiiON FC boxes a while back. Used 20 x 36GB drives, 2 controllers, did a mix of hardware and software RAID, and managed to get >100MB/s sustained read performance (from the raw device, anyways). Only it had one major problem: power

My setup consumed about a continuous 800W, not to mention any increase in air conditioning usage that resulted. I've since moved to a 4-drive 3ware setup, which is slower on raw reads (marginally, but not by enough for me to care), has about the same capacity, and the WHOLE server uses about 230W of power. (a server which also has a separate RAID-1 for the system disk, and a redundant power supply)

If I didn't have to pay my electric bills, I'd love to get that CLARiiON up and running again. I do wish I could find another use for it, as it is a bit too big/heavy to conveniently resell or ship.

At least my experiences did prompt me to create this [hecomputing.org] website, which includes a page with a lot of information on these arrays. (official documentation is almost unseen, and googling rarely pulls up much useful info beyond my page and some usenet posts)

How about iSCSI? (2, Insightful)

MikeDawg (721537) | more than 8 years ago | (#14595821)

Depending on your company's needs, could an iSCSI [wikipedia.org] solution be more viable? There are some very good units out there with loads of different RAID setups. There are some trade-offs vs. Fibre Channel, such as speed vs. cost, etc. I've seen quite a bit of data being handled to/from iSCSI arrays quite nicely. However, the companies I worked for had no true need for the blinding speed (and extremely high cost) of FC arrays.

Re:How about iSCSI? (1)

madstork2000 (143169) | more than 8 years ago | (#14596140)

ATA over Ethernet seems like an even better choice for small biz than obsolete fibre channel. In some cases it may be a better choice than new fibre channel.

Here's a little write-up on it: http://linuxdevices.com/news/NS3189760067.html [linuxdevices.com]

It mentions how you can use ATA over Ethernet in combination with iSCSI. The ATAoE protocol has much less overhead than iSCSI, because ATAoE is not using TCP/IP; rather, it is its own non-routable protocol to be used for local storage over the Ethernet hardware. It is explained a lot better in other places, but it sounds like something the original poster would be interested in.

-MS2k

HP (1)

slazar (527381) | more than 8 years ago | (#14596051)

Take a look at what HP/Compaq has to offer. We have a few arrays from them, both SATA and SCSI. Not all Fibre Channel, though.

Use commodity hardware (1)

this great guy (922511) | more than 8 years ago | (#14596233)

My suggestion to the OP is that if he wants to achieve a high I/O rate at the lowest possible price, then the answer is of course a Google-like approach: use commodity hardware.

For example, buy a lot of ATA/SATA hard disks (in order to spread the load over them), use them inside ATA-over-Ethernet enclosures (www.coraid.com, Linux driver available in any 2.6 vanilla kernel), and connect them with multiple Gigabit Ethernet links to the storage server. And the best part in all of this: ATA/SATA is so cheap that you will be able to buy 3 or 4 times more disks than with a Fibre Channel solution, allowing you to get more storage space and a substantially better "I/O rate per dollar" ratio.

Of course ATA/SATA disks are less reliable than FC disks, but this shouldn't matter, because even with expensive high-end solutions you have to plan for disk failures. With a Fibre Channel solution you might have to replace a disk in your RAID arrays every 12 months; with an ATA/SATA solution it might happen every 2 months. In both cases you don't care: the RAID layer will protect you.

The question makes no sense (1)

egarland (120202) | more than 8 years ago | (#14596248)

but its major design compromise is to use ATA drives, thus losing the high I/O rate of FC drives

I'd recommend more SATA drives for the same price over fewer, more expensive FC drives. The choice of RAID controller and the number of drives have much more impact on array performance than the interface technology. Since FC controllers and drives are more expensive, they're a disadvantage when you are trying to get high speed on a budget.

Fibre channel storage has been filtering down from the rarefied heights of big business and is now beginning to be a sensible option for smaller enterprises and institutions.

Storage systems designed with Fibre Channel have almost no advantages over SATA based ones and cost much more. How is that sensible?

Re:The question makes no sense (0)

Anonymous Coward | more than 8 years ago | (#14596391)

"Storage systems designed with Fibre Channel have almost no advantages over SATA based ones and cost much more. How is that sensible?"

The major advantage of SCSI or FC over SATA is its performance under heavy multi-user load.
I refer you to test results at http://www.storagereview.com/articles/200601/WD1500ADFD_6.html [storagereview.com]

The OP

Re:The question makes no sense (1)

egarland (120202) | more than 8 years ago | (#14599522)

The major advantage of SCSI or FC over SATA is its performance under heavy multi-user load. I refer you to test results at http://www.storagereview.com/articles/200601/WD1500ADFD_6.html [storagereview.com]

True, the IOMeter performance of the drive reviewed, and of most SATA drives under deep queues, isn't as good as that of the more expensive SCSI/FC drives out there, but looking at this fact in isolation gives a skewed picture.

Most servers operate with a queue depth of 1 most of the time, and that's especially true in a small office. If you are looking to build an array that performs well under queue depths of 128, you are generally dealing with many thousands of users and therefore probably not designing a small, inexpensive system. If you are looking to see how fast a bunch of people can transfer medium/large files to/from a machine, don't look at I/Os per second in deep queues; it's not applicable. The High-End DriveMark is really more appropriate there.

Even so, even if you somehow have an environment where you are going to be pushing queue depths way up, my argument still holds. In an array of 8 of those Raptors reviewed, with a queue depth of 64, each drive can read about 190 I/Os per second for a total of roughly 1500 I/Os per second. If you have 4 of the very best 15,000 RPM Maxtor drives, you'll be pushing about 400 I/Os per second per drive at a depth of 128, for a total of 1600 I/Os per second. That appears to be about the same speed, but even that comparison isn't fair. Those are both 150GB drives and the access pattern is random across the whole disk. With 300 GB of mirrored data (the most that 4 150 GB FC disks could hold) you'd only half fill the 8 SATA disks. Random seeks across half the disk's surface are much faster than random seeks across the whole disk. The 8 SATA disks would far outperform the 4 FC disks in I/Os per second, and the speed increase would be even more dramatic when dealing with low queue depths and sequential transfers.
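
Spelling that arithmetic out (the per-drive numbers are the deep-queue figures quoted above from the StorageReview tests, not new measurements):

    # 8 SATA Raptors vs. 4 top-end 15K RPM drives, using the per-drive
    # deep-queue IOPS figures from the discussion above.
    sata_array_iops = 8 * 190   # ~1520, i.e. roughly 1500
    fc_array_iops   = 4 * 400   # 1600

    print("8-drive SATA array: ~%d IOPS" % sata_array_iops)
    print("4-drive FC array:    %d IOPS" % fc_array_iops)

    # And the short-stroking effect: 300 GB of mirrored data fills the
    # 4 x 150 GB FC drives completely, but only half of the 8 x 150 GB SATA
    # drives, so the SATA heads sweep half the platter and seek faster.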

IOMeter is not a typical light-to-medium-load file server benchmark, and its results should not be interpreted as such. For example, for a file server running Samba, the application is single threaded, so the queue depth for it will never exceed 1. IOMeter performance is really most useful when an entire set of disks is dedicated as storage for a database with a large number of concurrent users. That's what these SCSI/FC drives are designed for, and it's also why you don't see FC in small businesses and education. Where you do see it in use is when there are thousands of users hitting a system and you will lose millions of dollars in productivity if the array fails. That's when you buy FC, and that's why the prices are so insane. These high-end drives don't do anywhere near as well when used in single-threaded/file-server roles and are often even slower (look at the High-End DriveMark results in the article you linked).

The performance of drives under deep queues has little to do with the interface and much more to do with the speed of the internal queueing mechanisms and the drive's seek time. There's nothing magic about SCSI or Fibre Channel that makes a drive faster. It's simply high-end heads, seek arms, and powerful controller logic designed for heavy concurrent use.

For a light-duty, high-speed, multi-purpose array I'd suggest a nice 8-drive SATA array with a good 8-drive RAID controller. It will be bigger, faster, cheaper and able to handle more load than if you spent the same money on FC equipment. If that speed and size isn't required, go with a 4-drive SATA array. You would probably be amazed at how fast something like that is if you get a good controller and good drives.

But how do you create your own RAID controller... (1)

phungus (23049) | more than 8 years ago | (#14596410)

The coolest thing would be to turn a Linux box into a hardware RAID controller. Most of the arrays out there do not run specialized firmware as their OS; they run Linux, VxWorks, or Windows NT (*cough* EMC Clariion *cough*), in heavily customized versions of course, with some having specialized ASICs inside (e.g. Engenio). The thing is, they turn their HBAs into targets, so that the initiators (your client PCs) can use their disks.

I want to figure out how to do this with a Linux box. How could you stuff 4 HBAs in a box and present two of them to the back-end disks and two to the front-end switches/hosts? With this method you don't have to worry about software RAID speed, because you're not doing your application processing on this server; you're using it just for RAID calculations, which I'm sure a late-model AMD would be great at.
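
For what it's worth, the RAID math itself is the easy part: RAID-5 parity is just a byte-wise XOR across the blocks in a stripe, which is exactly the kind of work a modern CPU chews through. A toy Python 3 sketch (illustrative only; a real box would lean on the kernel's md layer or dedicated XOR/SIMD code):

    def raid5_parity(blocks):
        """XOR a list of equal-sized data blocks into one parity block."""
        parity = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                parity[i] ^= b
        return bytes(parity)

    # A three-disk stripe and its parity block.
    stripe = [b"\x01" * 8, b"\x02" * 8, b"\x04" * 8]
    parity = raid5_parity(stripe)

    # Losing one block is recoverable: XOR the parity with the survivors.
    recovered = raid5_parity([parity, stripe[0], stripe[2]])
    assert recovered == stripe[1]

The hard part is the target-mode piece, i.e. making the front-end HBAs answer as a SCSI/FC target instead of an initiator, not the parity math.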

Anyone have any ideas?

I run a SAN... (4, Informative)

jmaslak (39422) | more than 8 years ago | (#14596415)

I administer a decently sized storage subsystem connected to about 10 servers (half database servers, a quarter large-capacity but low-speed storage servers, a quarter backup/tape/etc. servers).

For a single server, an FC system seems like overkill to me. Buy a direct-attached SCSI enclosure and be done with it.

For 10 or more servers sharing disk space, a SAN (FC IMHO, although iSCSI is acceptable if your servers all share the same security requirements, i.e. are all on the same port of your firewall) is the way to go.

Here's what I see as the benefits of an FC SAN (if you don't need these benefits, you'll waste your money if you buy one):

1) High availability

2) Good fault monitoring capability (my vendor and I both get paged if anything goes down, even something as simple as a disk reporting errors)

3) Good reporting capability. I can tell you how many transactions a second I process on which spindles, find sources of contention, know my peak disk activity times, etc.

4) Typically good support by the vendor (when one of the two redundant storage processors simply *rebooted* unexpectedly, rather than my vendor saying, "Ah, that's a fluke, we're not going to do anything about it unless it happens again", they had a new storage processor to me within one hour)

5) Can be connected to a large number of servers

6) Good ones have good security systems (so I can allow servers 1 & 2 to access virtual disk 1, server 3 to access virtual disk 2, with no server seeing other servers' disks)

7) Ease of adding disks. I can easily add 10 times the current capacity to the array with no downtime.

8) LAN-free backups. You can block-copy data between the SAN and tape unit without ever touching the network.

9) Multi-site support. You can run fiber channel a very long way, between buildings, sites, etc.

10) Ability to snapshot and copy data. I can copy data from one storage system to another connected over the same FC fabric with a few mouse clicks. I can instantly take a snapshot of the data (for instance, prior to installing a Windows service pack or when forensic analysis is required) without the hosts even knowing I did it.

Note that "large amounts of space" and "speed" aren't in the 10 things I thought of above. Really, that's secondary for most of my apps, even large databases, as in real use I'm not running into speed issues (nor would I on direct attached disks, I suspect). It's about a whole lot more than speed and space.

I have built my own Fibre Channel array. (3, Interesting)

nuxx (10153) | more than 8 years ago | (#14596519)

I have done this using a Venus-brand 4-drive enclosure, some surplus Seagate FC drives from eBay, a custom-made backplane, a Mylex eXtremeRAID 3000 controller, and a 30m HSSDC-to-DB9 cable from eBay.

I located the array in the basement, and the computer was in my office. I had wonderful performance and no disk noise, which was quite nice...

If you want photos, take a look here [nuxx.net].

Also, while I sold off the rest of the kit, I've still got the HSSDC-to-DB9 cables left over. While they tend to go for quite a bit new (they are custom AMP cables), I'd be apt to sell them cheap if another Slashdotter wants to do the same thing.

iSCSI/ATA over Ethernet - how/more info? (1)

zardie (111478) | more than 8 years ago | (#14596999)

I've been considering the storage thing for a while now. My current configuration is a Broadcom RAIDcore 6x250GB RAID 5 in a dual Opteron system with PCI-X 64-bit/133MHz slots. Given that it's a workstation Tyan board, it cost me a mint, but I have oodles of bandwidth to play with. I've got a few other arrays in that machine on other controllers. The board also has U320, and I was all set to buy some 15K RPM drives from eBay until I saw the benchmarks of WD's new Raptor 150, which seems to kill all but the top-end 15K RPM drives; in some tests the Raptor even wins outright (all current 15K models are around 18 months behind).

Now, my disks are fast but my network is not. My primary use of the network is for data transfer, and while ttcp can top out PCI-based NICs at 65MByte/sec, I find it hard to see even half that when accessing files remotely over the network. I'm after a solution to mount these disks remotely and improve on the performance of Windows file sharing.
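
A quick sanity check on those numbers (round theoretical ceilings only; everything else is overhead):

    # Theoretical ceilings, before protocol or bus-arbitration overhead.
    pci_32_33_mb_s = 33.3e6 * 4 / 1e6   # plain 32-bit/33MHz PCI: ~133 MB/s,
                                        # shared by every device on that bus
    gige_mb_s      = 1e9 / 8 / 1e6      # Gigabit Ethernet: 125 MB/s on the wire

    print("PCI 32/33 bus:    ~%.0f MB/s theoretical" % pci_32_33_mb_s)
    print("Gigabit Ethernet:  %.0f MB/s theoretical" % gige_mb_s)

So ~65MByte/sec from ttcp on a plain PCI NIC is already a decent fraction of what that shared bus can deliver, and Windows file sharing adds its own protocol overhead on top of that.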

What would be even better is if I could have multiple mounts of the same filesystem. I run Windows and Mac OS X; in an ideal world, I'd like to be able to work off the disks on my PowerBook G4, as I know the GigE card on that thing will beat FW400 and some FW800 systems. You know, if I could get 50-55MB/sec disk reads across the network, I'd be happy.

I suppose I could lash out on a few FC cards, an FC switch and an FC enclosure that supports SATA drives. I have boxes of multimode fibre, which is handy, so I can put the disks in another room to cut down on heat. That doesn't solve the PowerBook issue, but my other laptop has a PCI docking station which could take an FC card easily.

Re:iSCSI/ATA over Ethernet - how/more info? (1)

silas_moeckel (234313) | more than 8 years ago | (#14599199)

You're looking for iSCSI. Google for iSCSI target drivers for Linux and you can export files or drives to any other system. That takes care of the block device. Multiple-reader/multiple-writer file systems can be had if you look around and feel like spending.