
Cross-Platform Company Storage Architecture?

Cliff posted more than 8 years ago | from the that's-a-lotta-bits dept.


Eric^2 asks: "My company is preparing to implement a major network storage upgrade, and I'd like to get some ideas from Slashdot about what devices should be considered, and hopefully some experiences with some of the offerings that are available. What types of storage are you using and what would you recommend?"

"We are currently using approximately 2TB of storage space, and will need to expand to over 10TB in the next two to three years. We have a mix of Windows, Mac OS X, and Linux clients and servers. All of our authentication is presently done through Active Directory. If possible, we would like to centralize all of the storage into a single namespace, such as OpenAFS or DFS. Anything we purchase will have to be under a maintenance contract covering hardware such as failed drives or controllers. Ideally, whatever system we choose would allow us to purchase both high-speed SCSI spindles for our transactional needs and lower-speed, high-capacity SATA drives for our archival storage needs."




NetApp (4, Informative)

ZESTA (18433) | more than 8 years ago | (#15355947)

Depends on what your budget is, but I would look into Network Appliance [] . Their systems are top notch and have some very cool software features. They support NFS, CIFS, iSCSI, and Fibre Channel as connection methods.


Re:NetApp (2, Informative)

weaselprince (933254) | more than 8 years ago | (#15356325)

The OP doesn't say much about the selection criteria - scalable? performant? manageable? cheap?

If it's cheap, then Netapp might not qualify... :)

What about technologies - NAS? Host-attached? Gateway/NAS? Grids?

Other companies/products to consider:

EMC [] (The Celerra is a nice product)

Onstor [] Bobcat

HP []

IBM []

Hitachi []

Panasas []

Exanet []

Yotta Yotta []

StoreAge []

If you want basic RAID devices, look at Infortrend [] / Transtec [] . Their SATA offerings now support RAID-6 and are dirt cheap.

Re:NetApp (3, Informative)

Jim_Maryland (718224) | more than 8 years ago | (#15356945)

Going cheap isn't always a good idea. A group that manages one of our labs decided to buy a device from Excel Meridian [] and, in my opinion, it's a piece of junk. We found limitations that affect both a mixed-OS environment and scalability (not in disk space, but in handling larger Active Directory structures). The Excel Meridian device we have has about 6TB of storage, so space isn't an issue, but executing files on it from an NFS mount fails. We also find that it can't join a domain that has a large number of user entries (I don't recall the exact number, but want to say it's around 1,000 users). For a small workgroup this might be fine, but not for a larger corporation.

NetApp is by far my choice, but if I need a cheaper device, Dell PowerVaults are generally adequate unless you are looking at highly transactional file activity (we've occasionally run into the file-lock problem on the device when we process files - in our case, image processing). To avoid the file lock, though, we process locally on our UNIX boxes and transfer the results to the PowerVault. One limitation we find on the PowerVault (and it likely affects all MS Win32-based file systems) is case sensitivity. I believe you "can" change it to allow differentiation of files based on case, but Microsoft doesn't recommend it.

Re:NetApp (1)

araven (71003) | more than 8 years ago | (#15356437)

I absolutely second that endorsement. We've been using NetApp Filers for about five years now for everything from direct FC storage for Oracle, to back-end storage for media streaming/serving, to a fileserver acting as a gateway between two LANs to give us a virus-checking point and cross-platform access for Win, Mac, Linux, and Solaris clients.

The Filers perform outstandingly and do everything they're touted to do (no vaporware yet!). The machines themselves, however, are nothing compared to the service from NetApp. The Admin (only half-jokingly) has said "when a package from NetApp shows up on my desk, I know I must have lost a drive in one of the Filers." I guess it's not the cheapest storage available, though their pricing is certainly competitive, but what you get for the money isn't available anywhere else I've found, and I spend a lot of time evaluating and buying storage.

Not that you asked, but their NetCache product is outstanding as well.

Re:NetApp (1)

Miniluv (165290) | more than 8 years ago | (#15357173)

I'd love to back this up as well, we use our NetApps for a mixture of CIFS, iSCSI, NFS and soon FC as well.

Everything just works, their boxes are incredibly stable. Ours are pushing 900 days of uptime, with zero service interruptions during that time.

We've had one hard disk fail, I got a call from NetApp support while at lunch and I literally had to argue with them to get them to just drop ship the drive and not send a tech along to replace it.

NetApp is a fantastic product, and really offers surprisingly good price performance. Especially now that the FAS line allows SATA attached as well as FCAL.

Re:NetApp (0)

Anonymous Coward | more than 8 years ago | (#15357608)

I've been using NetApp Filers for about 10 years. Things were a bit scary at the beginning, but they've made a lot of improvements. It's a solid product now.

We recently replaced a pair of F820s with a single F960. We also migrated a couple of TB of data from various Windows boxes (Dells w/ PowerVaults).

The problem came in with our Mac clients. We were running SFM on Windows to share some things over AFP. That had lots of limitations (like not being AFP 3.0 compliant, so no long filenames, etc), but it did work on most days.

The NetApp doesn't support AFP natively, so we purchased an Xserve to NFS-mount the NetApp and reshare it over AFP. This is an advertised feature from Apple. Unfortunately, the only 2 NFS servers they've tested are Solaris and Mac OS X. Every time I call Apple, they ask me to replicate the problem using another Xserve or Solaris and then call them back. It's as if they've never heard of NetApp. :(

Now, they've created something called "Mac OS X Server Software Support", which you, too, can get for the low, low price of $5995 per year. That apparently covers things like integration with heterogeneous devices ... which my "AppleCare Premium Service and Support for Xserve or Xserve RAID" that I purchased with the Xserve for $950 does not cover. And the Premium Service for Xserve only provides parts during business hours.

So, after a bunch of beating and basically no help from NetApp or Apple, I made this work. The only issue I have is that the Xserve uses automount, and sometimes when you hit a share from a Mac client, it doesn't trigger automount to pick up the NFS mount... so the Mac client ends up seeing an empty folder. If you ssh to the Xserve and do an 'ls', it triggers the automount and then everything works again.

Oh -- here's another gotcha with the NetApp. If you're doing NDMP backups, you can only restore them to another NetApp. This means you have to have a NetApp at your DR site. You can't just pick & choose files & restore them to a Windows box or whatever is available at the time.

Re:NetApp (1)

weaselprince (933254) | more than 8 years ago | (#15359603)

Your comments on NDMP restores aren't the whole picture - you can allegedly restore them to something other than a Netapp: Solaris. The dump format is compatible with ufsrestore in Solaris. Admittedly I've never tried it, and I'd imagine that Windows ACLs would be dropped during the restore.

Re:NetApp (0)

Anonymous Coward | more than 8 years ago | (#15363501)

Yes, I've heard it's compatible with ufsdump/ufsrestore, too. We use Veritas (aka Symantec :-/) NBU Enterprise. I tried to specify a Solaris 9 box as the destination for an NDMP restore once. It failed miserably. Perhaps if you sift through bpimmedia for a while, tpreq the tape, mt -t /tmp/tape fsf some amount, and then ufsrestore, it'll work...?? That's a big pain in the ass if your world has burned down and you're trying to bring up a DR site while 30 people stand around and ask when it's going to be up. Nevertheless, if anyone has actually managed to pull this off, I'm all ears.

Veritas had told me that this would all magically work in NetBackup 6. Now they're denying ever saying that, even though our VAR and several other people were in the room at the time. Worthless.

Re:NetApp (0)

Anonymous Coward | more than 8 years ago | (#15363930)

Another problem is the parallelisation of NetApp backups with NBU NDMP. If you have a large volume (>2TB) that you're trying to back up with Symantec NBU, the temptation is to define a policy that backs up /vol/bigvol. Bad idea - the backup will write as a single stream and only use one tape drive. The problem is that NBU NDMP doesn't support wildcards, as it does for file-level backups. More specifically, the NetApp dump command doesn't support wildcards or dump parallelisation (multiple targets per dump stream).

If you're trying to complete the backup within a time limit (say over a weekend) and you want to allow for retries due to media failures, etc. you're going to have a very hard time.

The workaround is apparently to manually define your backup streams:


So every weekend you have to rewrite your policy (either manually or with some home-grown script) to ensure the largest directories are individually named in your policy.

This is pretty poor for something that is supposed to be an enterprise backup solution.

Consider Apple's XServe RAID and XSan (3, Insightful)

Alowishus (34824) | more than 8 years ago | (#15355949)

I went through this decision process at a civil engineering firm with 3TB of data. We wound up building a SAN out of XServe RAID cabinets, QLogic SANbox switches, XSan software for the OS X machines and ADIC's StorNext FX software for the Linux and Windows. XSan/StorNext is a shared filesystem, which inherently gets you a combined namespace without any DFS-type machinations. All servers literally mount the very same volume and access it simultaneously.

XSan is really the deal of the century - you can build a full-blown StorNext system starting with ADIC's software, but that approach can be exceptionally expensive. Instead, start with XSan (which is a functional but slightly stripped version of StorNext) and then use ADIC's much less expensive StorNext FX client licenses for each non OS X server that needs to join.

Redundancy can be everywhere. Start with a pair of redundant XServes as metadata controllers. Add a pair of redundant SAN switches. Apple's Fibre Channel HBAs are all dual-channel, as are the XServe RAID cabinets. For any non-Apple hardware, buy dual-channel QLogic HBAs.

Apple provides a variety of maintenance contracts for all their hardware, as does QLogic. ADIC and Apple provide support maintenance agreements for the software. The only missing piece of your equation is SCSI-based storage. But since this whole system is entirely standards-based, all you need to do is find a favorite vendor of SCSI Fibre Channel cabinets, drop a few into your SAN, and partition them accordingly, right along with all the SATA storage.

It's a beautiful system, and a raging bargain compared to every other comparable solution I've investigated.

Re:Consider Apple's XServe RAID and XSan (1)

lmwang (975645) | more than 8 years ago | (#15356718)

SANTA CLARA, Calif., April 3 /PRNewswire/ -- At NAB 2006, Exavio Inc. (Booth # SL585) will unveil innovative new workflows that leverage the ExaMax 9000 I/O Accelerator to deliver increased performance and efficiency in existing complex storage environments. The presentation will include an uncompressed HD workflow for Apple(R) Power Mac(R) with Xsan(R) and a multi-stream PC-based 2K digital intermediate (DI) workflow demonstration running off an accelerated storage area network.

"In the post production world, a manager's dream is to work in uncompressed HD or DI format, reading and writing data from a SAN with a true collaborative workflow," said Mike Moone, President and CEO of Exavio Inc. "That just hasn't been possible before because the infrastructure itself is proving to be a significant bottleneck. Exavio is enabling new postproduction and DI workflows to leap off the drawing board and be put to practical use today so facilities can adapt to take on the most demanding projects."

The ExaMax 9000 I/O Accelerator increases the performance of Apple XServe(R) RAID and Xsan(R) systems by a factor of more than 4X, providing facilities with a practical approach to building large, multi-seat HD collaborative workflows with Apple Power Mac Workstations and Final Cut Pro(R) Studio.

About ExaMax 9000 I/O Accelerator

The ExaMax 9000 I/O Accelerator increases the efficiency and performance of existing SAN storage solutions. The ExaMax 9000 SAN Accelerator provides facilities with a practical and cost effective approach to building large-scale collaborative workflows.

The ExaMax 9000 I/O Accelerator touts a dynamic cache, scalable from 128 GB up to 1 TB, mitigating storage random-seek and concurrent access issues. The system provides stable performance over time, even with high storage usage. Optimized with proprietary algorithms tuned for video precision, the ExaMax 9000 I/O Accelerator assures immediate content availability with low latency. The platform can expand to 36 2-Gb Fibre Channel ports in one chassis to support various post production environments including uncompressed HD, and 2K and 4K digital film resolutions.

A powerful, Java(TM) based management application called ExaView(TM) monitors and manages the ExaMax platform storage virtualization and network environments through a simple GUI from a remote client.

About Exavio Inc.

Exavio Inc. is enabling better workflow for the post production, digital intermediate & HD, and storage area network markets. With the ExaMax 9000 I/O Accelerator, the company is improving SAN real-time performance in order to make HD and DI workflows more affordable, more manageable and more flexible. Exavio Inc. is a privately held company with engineering offices in Beijing and headquarters in Santa Clara, CA. For more information, please visit Exavio Inc. at [] .

Source: Exavio Inc.

Web site: []

Re:Consider Apple's XServe RAID and XSan (0)

Anonymous Coward | more than 8 years ago | (#15357241)

Apple's XServe RAIDs are not dual-channel. They have INDEPENDENT dual controllers. They will NOT fail over to each other or load balance in any way. If you lose a controller, you lose every LUN behind it. For full redundancy, check out [] .

From the site: Vicom Systems, an enterprise provider of transparent, wire-speed data services for systems and storage, has teamed up with Apple Computer to deliver the Vmirror(TM) Series, a new family of high-availability and data protection appliances customized exclusively for Apple's Xserve RAID storage systems. Each Vmirror appliance supports PowerMac G5, Xserve G5 host systems, Xsan consolidations, and up to four Xserve RAID systems.

Re:Consider Apple's XServe RAID and XSan (1)

Fishbulb (32296) | more than 8 years ago | (#15357969)

I can also recommend the Xserve RAIDs from Apple.

Mostly for the RAID Manager app, which, if you open it up and pull out the .jar file, can be run on any system with a Java VM. I had Xserve RAIDs attached to Xserves, Sun servers, Linux servers, and a Windows server.

See: []

Venti (3, Informative)

DrSkwid (118965) | more than 8 years ago | (#15355968)

Either through the plan9port or the real thing.

Venti is block-level and, as such, coalesces identical blocks, a bit like LZW, so backing up 100 Windows machines doesn't take up 100x the disk space of backing up one Windows machine. [] [] [] []
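As a rough illustration (not Venti's actual code), content-addressed block storage can be sketched like this: each block is stored under a hash of its contents (Venti calls this a "score"), so writing an identical block from a hundred machines stores it exactly once. The class and sample data below are made up for the sketch.

```python
import hashlib

# Toy sketch of Venti-style content-addressed block storage:
# a block's address is the hash of its contents, so duplicate
# blocks coalesce into a single stored copy.
class BlockStore:
    def __init__(self):
        self.blocks = {}  # score (hex hash) -> block data

    def write(self, block: bytes) -> str:
        score = hashlib.sha1(block).hexdigest()
        self.blocks[score] = block  # rewriting the same block is a no-op
        return score

    def read(self, score: str) -> bytes:
        return self.blocks[score]

store = BlockStore()
s1 = store.write(b"identical system DLL block")
s2 = store.write(b"identical system DLL block")  # "from another machine"
assert s1 == s2 and len(store.blocks) == 1
```

The dedup falls out of the addressing scheme itself: there is no comparison step, because two identical blocks can't occupy two slots.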

Sean Quinlan (one of the two Venti inventors) moved from Bell Labs to Google.


Application ? (2, Insightful)

pwet (975604) | more than 8 years ago | (#15356026)

You forgot the most important question: what are you using your storage for?

Re:Application ? (1)

eric2hill (33085) | more than 8 years ago | (#15357331)

Everything except OS boot partitions. I want all of our data to reside on this solution. We have a large Oracle database, a few smaller SQL databases, Lotus Notes databases, Maildir folders, web site folders, and a few terabytes of artwork, CAD drawings, and office documents. This means that I will have both clients and servers of all three stated platforms accessing the community storage.

I'm not looking for a *cheap* solution, per se, although price will be factored into the decision. What I'm looking for is a comprehensive storage solution that we can easily grow and maintain, with rock-solid uptime and performance.

I think... (-1, Troll)

Elitist_Phoenix (808424) | more than 8 years ago | (#15356042)

I think some sort of Beowulf cluster of mobile, pornography-distributing wireless access points would be in order.

create by yourself? (-1, Redundant)

WetCat (558132) | more than 8 years ago | (#15356074)

Install Fedora Core, install the iSCSI modules from [] , use LVM or EVMS for volume management, and share all that by all possible means?
Message me if you need more info.

Re:create by yourself? (3)

walt-sjc (145127) | more than 8 years ago | (#15356162)

I really don't understand why people keep pushing Fedora Core for production systems. It's not appropriate. Not that FC is bad, it's not, but in a production system you need a level of stability and consistency that FC by design does not provide. This is especially the case when it comes to things like SANs and such. CentOS is MUCH more appropriate. The Fedora Legacy project was supposed to help, but has proven to be ineffective.

Re:create by yourself? (1)

the eric conspiracy (20178) | more than 8 years ago | (#15357524)

God yes. FC is a PITA because the supported lifetime of a release is so short. CentOS is probably the best free solution available today.

Re:create by yourself? (0)

Anonymous Coward | more than 8 years ago | (#15362219)

FYI In the storage world FC = Fibre Channel not Fedora Core :)

NetApp (1, Informative)

Anonymous Coward | more than 8 years ago | (#15356125)

I would definitely recommend NetApp. We have both a Linux (Debian) and a Windows environment, and the NetApp works brilliantly with both. We initially went with the FAS270, which can scale to 6TB directly, or with an upgrade of the "head" I think you can go up to a few PB. It was the most cost-effective and scalable option we could find, and their support/response is much better than EMC's. The built-in technology is fantastic and flexible, and I know they have a tie-in module for cheaper archival; I think (not really sure on that part) it's called SnapVault.

Re:NetApp (0)

Anonymous Coward | more than 8 years ago | (#15357093)

What was wrong with the EMC Celerra?

LaCie Network Storage (0)

enigm4_ (975344) | more than 8 years ago | (#15356234)

LaCie have network storage available in 1TB and 2TB 1RU rack-mountable units that also have 3 or 4 USB ports on the front for adding more external storage (albeit at USB speeds). They are compatible with all OSes, but I'm unsure what format they use. You could always read their online gear at [] . Downside: powered by XP Embedded. As for service plans, etc., I guess that would be up to whoever you bought it through in the end.

AmazonS3 (0)

Anonymous Coward | more than 8 years ago | (#15356239)

Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to developers.

You might not have considered this option yet but you should. I can say that it works very well.

See their website for more info: []

it depends... (5, Informative)

therus121 (536202) | more than 8 years ago | (#15356262)

You need to consider how you want to access your storage, and what is going to be running on it, before deciding which way to go:

SAN - block-level data access to storage. Good for databases and low client counts (because SAN ports are expensive relative to Ethernet) with high IO demands. EMC are good, but pricey - a low-to-mid-end Clariion would probably be the right range to aim at.

NAS - file-level data access to storage. Good for situations where there are many clients connecting and their IO demands are not excessive. NetApp filers are very good at this (if you can find information on their new OS, 10GX, it's VERY interesting). ILM use them in their render farms.

iSCSI - a blend of the best of both, but it's still looked upon as an emerging technology. You get (or did) free iSCSI licenses with netapps filers.

O'Reilly have a good book on this, "Using SANs and NAS", which is vendor-agnostic. []

Re:it depends... (2, Informative)

TrueKonrads (580974) | more than 8 years ago | (#15356341)

Don't forget that there is ATA-over-Ethernet [] . You can buy the 10-disk arrays, make them RAID5, and offer them as SAN solutions to Linux machines with ease, without expensive fibre switches.

Re:it depends... (1)

crow (16139) | more than 8 years ago | (#15356515)

I would certainly consider an EMC Clariion. The Symmetrix is probably overkill for your needs and budget. (Disclaimer: I work for EMC on the Symmetrix side.) I suspect that Clariion also supports iSCSI. (You could check with Dell's or EMC's web sites.)

Also, don't forget about backup. Sure, you're protected by RAID, but with the more advanced systems, you can send a single command to the storage system and make a copy of everything within the array to allow you to recover from user errors (or virus destruction). As you grow, you may consider replication to a second array. So consider your growth needs, and get something that will fit into the long-term plan.

Samba, Dfs, and NFS (2, Interesting)

skroz (7870) | more than 8 years ago | (#15356288)

At my company we've used a number of Dell PowerEdge Linux servers running Samba. All of the servers are then tied together using Samba's Dfs implementation to "stitch" individual components together for Windows clients and NFS/AutoFS/symlinks for Linux clients. This is all accomplished with some very simple perl and shell scripts.
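A hedged sketch of what one such "stitch" looks like at the filesystem level, with made-up server and share names (the rebalancing scripts themselves aren't shown). With `msdfs root = yes` set on a share in smb.conf, Samba treats a symlink whose target starts with `msdfs:` as a Dfs referral that redirects Windows clients to the server currently holding that chunk:

```python
import os
import tempfile

# Hypothetical Dfs root directory; in smb.conf the share exporting it
# would carry "msdfs root = yes". Each data chunk is a symlink whose
# target names the server/share that currently holds it.
dfs_root = tempfile.mkdtemp()
os.symlink(r"msdfs:fileserver02\chunks", os.path.join(dfs_root, "project-a"))

print(os.readlink(os.path.join(dfs_root, "project-a")))
# -> msdfs:fileserver02\chunks
```

Rebalancing a chunk then amounts to copying the directory to another server and repointing one symlink, which is why simple perl and shell scripts are enough.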

This likely won't work in all environments, however. Our data is divided into thousands of discrete and manageable chunks stored in individual directories, so stitching it together via an automated process is relatively simple. Part of the job of the scripts mentioned above is to "rebalance" these chunks (move them from server to server) to prevent any one volume from becoming full. If your data "components" are large, or if your data is too active to move regularly, this won't work.

It's the poor man's cluster, and there are better solutions out there, but it works extremely well in our case.

If I were starting over from scratch (1)

Yonder Way (603108) | more than 8 years ago | (#15356415)

1) SAN, any SAN, pick a SAN. I'm not going to endorse any one brand here.
2) a pair of IBM pSeries boxes that can be DLPAR'd
3) Put all storage, including boot disks, on the SAN. All servers boot off of the SAN within the pSeries hardware. All servers have a failover DLPAR ready to go on the second pSeries box.
4) Run Linux on all of the DLPAR's. The storage servers would be running OpenAFS.

OpenAFS client is well supported on *NIX & Windows... it's a mature and actively developed platform. Very secure. Combined with the hardware mentioned above, performance and reliability will be outstanding unless you seriously screw up the configuration.

etherdrive (2, Interesting)

Apreche (239272) | more than 8 years ago | (#15356494)

Here are some guys my friend was looking at for a storage solution. Basically they just ethernet-ify as many hard drives as you want. How you configure them is up to you. It's a bit expensive, but it's incredibly simple and flexible. []

How we skin a cat... (5, Interesting)

gurutc (613652) | more than 8 years ago | (#15356499)

Here's a pretty OS-nonspecific example of cross-platform storage implementation. Some of it is about backups and may seem off-topic but is valuable as an example of how much you can mix platforms and OS to get what you need in network storage solutions.

We protect 3 terabytes per night from 250 remote servers with a backup strategy using RSYNC. These include both Windows and NetWare servers. Our centralized backup file server is a single Dell PowerEdge 2850 with dual Xeon CPUs, running openSUSE 10, with a combination of Dell PowerVault SCSI RAID enclosures and LaCie Big Disk USB external drives attached. Using a fast server with an OS that we can tune gives us incredible multistream-capable throughput for network storage. Think about the speed required, folks: 3 terabytes in 12 hours from 250 hosts at 75 sites. (Well, RSYNC means we don't send all the data, but still! ;-0 )

Then, each day, we back up the Linux box using a Windows server installed on a Dell Optiplex workstation box with a tape jukebox attached and running CA ArcServe. That way we get a daily snapshot to tape allowing us to do a scheduled rotation.

This means we are following the Golden Rule of Backups, which applies no matter how much data you back up: always have 2 separate backup copies of important data. It's better if they are on different types of media. And with SAN and NAS solutions, redundancy is critical. Those acronyms should really be AIOB, which stands for 'All In One Basket'.

RSYNC has done what no commercial software seemed to be able to do: give us a good working backup system for our enterprise. It uses very efficient synchronization and compression algorithms to move the changes from our distributed servers. If you want this rig to do backups too I recommend considering it. Here's a link to the RSYNC Project: []

Here's the Novell RSYNC forum: []

And here's a good resource for RSYNC on Windows: []

Here are two more good RSYNC Windows links: [] []

The NASBackup Project is a neat Open Source effort to make a gui-based RSYNC client for Windows. It works very well.

More info: RSYNC uses an algorithm that only sends the changes in the file systems. This algorithm is so efficient that it can even get down to sending only the changed blocks of an individual file without having to send the whole file. It works very well for us, even over DSL/cable-speed connections. You want to optimize your entire I/O schema, including all network layers as well as the way you read, write, and cache file and database operations on all connected hosts.
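A toy version of the rolling weak checksum idea behind that algorithm (a simplified sketch modeled on the checksum described in the rsync technical report, not rsync's actual code): the checksum of a block can be updated in O(1) as the block slides one byte, which is what lets the receiver test a block boundary at every byte offset cheaply.

```python
M = 1 << 16  # both checksum halves are kept mod 2^16

def weak_sum(block: bytes):
    # a: plain byte sum; b: position-weighted sum
    a = sum(block) % M
    b = sum((len(block) - i) * x for i, x in enumerate(block)) % M
    return a, b

def roll(a, b, out_byte, in_byte, block_len):
    # Slide the window one byte right: drop out_byte, take in_byte,
    # without re-summing the whole block.
    a = (a - out_byte + in_byte) % M
    b = (b - block_len * out_byte + a) % M
    return a, b

data = bytes(range(32))
n = 8
a, b = weak_sum(data[0:n])
a, b = roll(a, b, data[0], data[n], n)
assert (a, b) == weak_sum(data[1:n + 1])  # rolled == recomputed
```

In rsync proper, a weak match like this is then confirmed with a strong hash before the block is skipped over the wire.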

I hope this little bit of info helps you. (2, Informative)

gurutc (613652) | more than 8 years ago | (#15356530)

FreeNAS: wireless, secure, open source, multi-platform, easy to configure, etc., etc. For free! I've used it. Compared to the LaCie network devices (not the USB LaCies, they're great), it is FASTER! A dedicated Linux box you configure yourself, with a tuned IP stack, is quicker. However, for the effort of downloading a teensy ISO, burning a CD, and spending 5 minutes to install and configure, this solution is really astounding.

What about Bluearc? (0)

Anonymous Coward | more than 8 years ago | (#15356677)

We've had a couple of BlueArc [] boxes for about a year and I really like them. It sounds like they have all the features you want. The backend drive array is all Fibre Channel, and the demo unit we looked at had a mix of FC and SATA disks (in separate shelves, of course). I think we also looked at NetApp and a few others, but nobody else could come close in terms of price and features to what we wanted and what my boss was willing to spend. The service has been very good. Every time we've had a drive fail, I've had a message from their support people waiting for me when I came in, and I usually get the replacement drive within a couple of days (and that delay is more a matter of how our receiving department handles packages than anything else).

SAN (1)

C_Kode (102755) | more than 8 years ago | (#15356752)

Use a SAN. If you can't afford Fibre Channel, then use iSCSI. The prices are coming down, and you can mix SCSI and UltraATA storage on it. So you get local-disk performance on all devices (using FC, that is), and you can implement ATA where you require mass storage but not high performance, and SCSI 320s where you need the transaction processing.

We use SCSI 320s for our main file servers and databases and use UltraATA storage for disk to disk backups and other non-highspeed required storage.

iSCSI from LeftHand or EqualLogic (1)

Thundersnatch (671481) | more than 8 years ago | (#15356805)

We just did a similar project, and concluded that iSCSI was the way to go for complete cross-platform accessibility. We evaluated the contenders, and LeftHand came out on top. EqualLogic was a close second. Both vendors allow you to add iSCSI storage devices in smallish increments, which each add their cache, bandwidth, and processing power to the storage pool.

Nothing from EMC, HP, or the other big boys came close in terms of functionality and scalability at this low end of the market. EqualLogic and LeftHand let you start small and grow incrementally without dumping $50+K to get your foot in the door.

This is too important to rely on /. nonsense (1)

csoto (220540) | more than 8 years ago | (#15356822)

Talk to top vendors. We just decided on a Dell/EMC CX3-40 system, after comparing systems from HP (EVA 6000), IBM (DS4800) and StorageTek (FLX 380). They were so close to one another in support and features. Surprisingly, their engineers and sales teams were quite complimentary of each other, and each attested to interoperability (our mix of WS2K3EE, RHEL/SEL, VMware ESX and MOSXS). Quite honestly, we would be very happy with any one of them. Dell just went all out and gave us the best value.

Google's Appliance? Storage is cheap. (3, Interesting)

twitter (104583) | more than 8 years ago | (#15356895)

Google's Search Appliance [] has been on the market for years. They have a page of user stories [] , which includes National Semiconductor, Nextel, Universities, government agencies, large and small companies.

Given an effective search, you can store the information on anything. That means you can deploy many cheap and fast servers close to the source of information creation, and have that information available everywhere. With 250 GB drives going for $50, you could have all 10TB of storage taken care of twice for $4,000.

Re:Google's Appliance? Storage is cheap. (1) (782137) | more than 8 years ago | (#15360034)

That doesn't solve the problem of where the data is located, or how computers are supposed to access it. All Google's Appliance does is index it; it still needs somewhere to link to.

Isilon (1)

Evro (18923) | more than 8 years ago | (#15357731)

I heartily recommend Isilon's storage system. It's expensive but the features are incredible. The killer feature for us was the ability to add more space without having to muck around with repartitioning or rebuilding arrays. Our system has 3 nodes, each with 9 drives; we can lose a disk or an entire node and the system doesn't miss a beat - it begins restriping the data that was on the dead disk onto other free space. When you want to add another node to add more space, you basically plug it in and turn it on and your single NFS/CIFS/FTP mount point just grew 2 TB. The speed is awesome too; they use gig-e or InfiniBand on the backend.

Just went through this... (1)

eldub1999 (515146) | more than 8 years ago | (#15358424)

We tried a few of the "big company" combo SAN/NAS devices and found that they... well suck. They can't do all things well. They can either do Windows well, or UNIX well, or SAN well. But not all things well no matter what the marketing literature says. It is also very simple to end up paying a whole lot of money by the time you get the pieces and parts put together.

What we ended up doing is getting a SATA2 SAN that supports 5 simultaneous connections over gig copper. We connect to it over iSCSI. We have:
- Mail Server mounting via iSCSI
- DB server mounting via iSCSI
- Windows file server that acts as a NAS, but is saving to the SAN via... yup, iSCSI
- LINUX NFS server that acts as a NAS, but is also saving to the SAN via iSCSI

This ended up being way simpler and much more cost effective. And yes you can run SAMBA on LINUX, but this is way easier to manage and maintain.
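For the Linux boxes in a setup like this, the iSCSI side would typically be handled by the open-iscsi initiator; a hypothetical sketch, where the target IP, device name, and mount point are placeholders and not from the post:

```
# /etc/iscsi/iscsid.conf -- reconnect to known targets at boot:
node.startup = automatic

# One-time discovery and login (open-iscsi userland tools):
#   iscsiadm -m discovery -t sendtargets -p 192.168.10.5
#   iscsiadm -m node --login

# /etc/fstab -- _netdev delays the mount until the network (and thus
# the iSCSI session) is up:
/dev/sdb1  /export/nfs  ext3  _netdev,noatime  0 0
```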

What is with all the NAS? (1)

Informix (975583) | more than 8 years ago | (#15358839)

Looks like you are looking for mid-tier disk that is able to handle Fibre Channel (SCSI) and SATA disk behind a single controller. You have several choices including Sun, EMC, IBM, Xiotech, and HDS. I would include all of those in your research. Price and performance leaders in the space you are looking at are probably Sun and Xiotech, with Sun's new storage line (acquired from StorageTek) being some of the best bang for the buck storage around. This is assuming that you are looking for enterprise-quality product and service. If you just want cheap disk w/ little to no support- there are several small shops in Boulder, CO that want to help you. Oh, and HP- they would fall in that space as well. Little to no support and performance to match.

Inferno (0)

Anonymous Coward | more than 8 years ago | (#15358853)

We are solving this problem at the army with inferno [] . Hope it can help solve your problems.

great product (1)

sgt scrub (869860) | more than 8 years ago | (#15359427)

From an email I, ironically, received today.

I'm relocating to US from beginning of June, and I will be available under:

Krzysztof (Kristof) Franek
CEO & President
Open-E, Inc.
2694 Middlefield Rd, Suite A
Redwood City, CA 94063

Open-E develops innovative software products for cost-effective Network Attached Storage (NAS) and iSCSI solutions. It is with great excitement to announce that we are growing globally with our products and talented team members. []

ibrix (1)

phonics (312657) | more than 8 years ago | (#15359438)

I can't remember exactly where I heard of them, but a company called Ibrix is doing exactly what you're looking for. So much so, in fact, that I was suspicious that this was a spam question ;)

Their site seems to be down now, but google for them and you will see articles.

HTH, i'm no storage / SAN expert.

reliability, raid, updates, security, monitoring (1)

Khopesh (112447) | more than 8 years ago | (#15359514)

You mentioned needing a support contract. What happens if the system goes down? Does the company go out of business after a few hours of downtime? How about a day? Do-it-yourself solutions don't have full software support, even if your hardware support is above reproach and you subscribe to RHN or whatever. In 24/7/365 environments with big money on the line, you need to go with NetApp like everybody here is telling you. Always ask your sales reps how long it will take to get a support expert ON SITE after your system dies at 4PM on a Friday.

Looks like there is enough advice on vendors and hardware specs ... the only thing I'd add is that SATA is NOT reliable enough for this purpose unless you're comfortable replacing a drive or two every month or so (don't do SATA RAID 5; try RAID 1 or 10 ... see WikiPedia:RAID [] for more). ... Use SCSI despite the price hit. SATA in stripes (a la RAID 10) will partially compensate for the RPM hit. Oh, and get redundant power and a battery backup (UPS). If you can get an on-board battery for your hardware RAID card, do it.

The file server and ALL systems connected to it must have synchronized time. Also, be sure it's on a gigabit ethernet, hosts only a VERY minimal number of services, and is completely locked down from the internet ... not even SSH should be visible; force administration to go through a bastion machine first. Keep the thing updated, and set auto-updates to do dry-runs and email you what they could do. I have my Debian box set up to apt-get update; apt-get -y --download-only upgrade; apt-get -qq -s upgrade | mail -s "Updates for `hostname`" root every night (note, that's a hasty summarization; I actually have a nice shell script for that ... ask me and I'll post it online for you ... ideally, this should be a part of the daily logwatch output and only a separate email when there are security updates).
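A minimal sketch of that nightly dry-run report, assuming a Debian box with apt and a working local "mail" command; note it only simulates with -s and never installs anything:

```shell
#!/bin/sh
# Fetch package lists, pre-download pending upgrades, and report what
# WOULD be upgraded -- without applying anything.
report_updates() {
    apt-get -qq update
    apt-get -qq -y --download-only upgrade
    apt-get -qq -s upgrade     # -s: simulate only, print pending actions
}
# Wire it up from /etc/cron.daily/, e.g.:
#   report_updates | mail -s "Updates for $(hostname)" root
```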

Lock down the file server. NOBODY but an admin doing admin work should even have the ability to log into it, for any reason. If there is such a need, make a nice little dummy machine that mounts the network shares and give them access to that.

Monitor the system from afar. Intrusion detection (NIDS like Snort or LIDS) is nice, maybe even essential for you, but I'm referring to something more basic ... you need to be alerted the moment something on the server fails. There are a few solutions for this out there (I use a home-brew one), but the nicest I've seen is Big Brother [] , which is freeware unless you depend on them (in which case you would want to pay for support anyway). BB4 is open-source but non-free (a look-but-don't-touch "Better than Free" license [] ).

In over your head yet? Get a NetApp. They're the Apple of the NAS/SAN world; their products just work.

Re:reliability, raid, updates, security, monitorin (1)

eric2hill (33085) | more than 8 years ago | (#15361073)

Thanks for taking the time to respond!

You mentioned needing a support contract. What happens if the system goes down? Does the company go out of business after a few hours of downtime? How about a day?

No, but we are out some money. We could probably survive one day's outage, and our existing NetApp FAS250 has next-day service on it. I'm happy with the NetApp, but want to do my homework on this next storage device just like I did three years ago.

Looks like there is enough advice on vendors and hardware specs ... the only thing I'd add is that SATA is NOT reliable enough for this purpose unless you're comfortable replacing a drive or two every month or so (don't do SATA RAID 5; try RAID 1 or 10 ... see WikiPedia:RAID [] for more). ... Use SCSI despite the price hit. SATA in stripes (a la RAID 10) will partially compensate for the RPM hit.

I'm looking at SATA simply for the price per GB. Anything I get will have a minimum of RAID 5 with a hot spare or possibly RAID 6 if I can find such a critter. I'm not opposed to doing SCSI all the way around, but it does make for a more expensive storage device when about 75% of our data is archival storage that rarely changes.

Oh, and get redundant power and a battery backup (UPS). If you can get an on-board battery for your hardware RAID card, do it.

We have a Liebert UPS (dual-refrigerator) in the basement with a backup diesel generator. We're covered for around 24 hours on the fuel we keep on hand, and indefinitely with fuel deliveries.

The file server and ALL systems connected to it must have synchronized time.

NTP is configured on all servers, routers, printers, and any other device I can connect. Kerberos doesn't work well with time slips. :)
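For reference, a hypothetical /etc/ntp.conf for those machines (server hostnames are placeholders); MIT Kerberos rejects tickets by default once clocks drift more than about five minutes apart:

```
server ntp1.example.internal iburst
server ntp2.example.internal iburst
driftfile /var/lib/ntp/ntp.drift
```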

Also, be sure it's on a gigabit ethernet, hosts only a VERY minimal number of services, and is completely locked down from the internet ... not even SSH should be visible; force administration to go through a bastion machine first.

Definitely a good common-sense statement. All of our internal servers are completely isolated from the internet with the exception of a couple of web servers, an FTP server, and some EDI ports to our AS400. Beyond that, all access goes through a SonicWall Pro 330.

Keep the thing updated, and set auto-updates to do dry-runs and email you what they could do. I have my Debian box set up to apt-get update; apt-get -y --download-only upgrade; apt-get -qq -s upgrade | mail -s "Updates for `hostname`" root every night (note, that's a hasty summarization; I actually have a nice shell script for that ... ask me and I'll post it online for you ... ideally, this should be a part of the daily logwatch output and only a separate email when there are security updates).

Yes, please post it or email it to e-r-i-c-a-t-i-j-a-c-k-d-o-t-n-e-t. I've got a dozen or so Debian servers that could benefit from it regardless of the new storage box. Thanks in advance.

Lock down the file server. NOBODY but an admin doing admin work should even have the ability to log into it, for any reason. If there is such a need, make a nice little dummy machine that mounts the network shares and give them access to that.

There are only 3 admins (myself included) that will ever get into the box. All other users access through CIFS/NFS only, no shell access.

In over your head yet? Get a NetApp. They're the Apple of the NAS/SAN world; their products just work.

Not so much in over my head, just trying to cover all my bases. We have a NetApp FAS250 now and it's served us reliably over the last 3 years with a half-dozen or so failed drives and some locking issues. Our support contract has a new drive in my hands the following day, and the locking issue seems like a bug in the OnTap OS, but it's a relatively easy fix and I only run into it a couple of times per year. All in all, we might just end up buying another NetApp, but I can't justify spending another $75K without seeing some alternatives.

Debian updates checker, apt-update (1)

Khopesh (112447) | more than 8 years ago | (#15366986)

Keep the thing updated, and set auto-updates to do dry-runs and email you what they could do. ... I actually have a nice shell script for that ... ask me and I'll post it online for you.

Yes, please post it or email it ... I've got a dozen or so Debian servers that could benefit from it regardless of the new storage box. Thanks in advance.

My script in its current form will email security-related update notifications as they arrive, and other upgrades are only reported on Mondays. Some day, I'll write a logwatch plugin that shows available updates in the daily output (and emails directly on security updates, as the current script does).

I run this from a bash script /etc/cron.daily/apt-update which delays 30-60 minutes and then runs the main script. Note that $RANDOM and the hash function need bash and won't work in dash/sh. The cron script's code looks like this: sleep $(($RANDOM % 30 + 30))m && /usr/local/sbin/apt-update -m ... I'm not even going to try to put my apt-update script here as a slashdot comment.

This is my first public release of apt-update [] , released under the GPL. Also note there are other similar solutions, like apticron and cron-apt, both of which are in the Debian stable repository, but both of which seemed more code than is needed (and they are primarily for actually performing the upgrades, which is dangerous).

On RHEL/CentOS, Fedora, and other APT-capable distributions, this script will work fine, with one snag: the script searches for "security" in the dry-run install ... DAG [] /Dries [] /RPMForge, FreshRPMS [] , CentOS [] , and ATrpms [] don't have a specially reserved source for security updates the way Debian does, so that check won't work. Also of note, Axel Thrimm's atrpms package for most Fedora/RHEL derivatives includes a script called "check4updates" which was the inspiration for my script. ... it is a bit more basic, but it uses what it can find of up2date, yum, apt, and smart.

From a Sun Employee... (1)

trims (10010) | more than 8 years ago | (#15359529)

OK. Up front:

I work at Sun. I do not speak for Sun in any way, however. None of what I'm saying is privileged, or otherwise not publicly available.

A lot of what is right for you depends on your exact setup. Given your description, I'm assuming you are primarily concerned with file serving for clients, with the possibility of needing to centralize some primary storage for DB or similar app-specific servers. I'm also going to assume you are single-site (given the relatively small amount of storage, I think that's fair).

10TB is not much space (really). I'm going to assume the solution should scale to 3x that over its lifespan (3-5 years). That is, you should have the ability to add up to 20TB or so more rather simply, and without buying major upgrades.

NAS is the cheap way to go, but I'd recommend against it for now: it's rather hard to find one that supports DFS, and OpenAFS isn't well supported either in the NAS space. I'd also shy away from a true SAN solution, since they're going to be way over-engineered for your rather modest needs. By "true" SAN, I'm talking about a large controller head (usually a modified and upgraded FiberChannel switch mated with a small management controller) which front-ends a large number of disk arrays. Rather, I'd recommend a clustered FC solution.

For your problem, I'd look at 2-3 machines which would be your primary file server cluster. They should be hot-clustered together (using your favorite cluster software). I'm going to suggest you use FibreChannel as the back-end direct-attach-storage technology. Connect the head machines to redundant FC switches (you don't need anything really fancy here), and then use JBODs or HW arrays as your storage devices.

Here's a sample solution from Sun (which, if you look at it, is going to be very competitive with anyone, including Dell and build-it-yourself stuff):

  • (3) Sun x4200 w/ 8GB RAM & 2 dual Opteron 275
  • (6 total): two single-port 2gbps FC host adapters per server
  • two 8-port low-end FC switches
  • (1) 3510FC array w/ redundant HW RAID controllers & 12 x 146GB 10k FC disks
  • (1) 3511FC JBOD w/ redundant FC connections & 5 x 500GB 7.2k SATA disks
  • Solaris 10 Update 2 with ZFS (coming Summer 2006)
  • OpenAFS for Solaris
  • Samba on Solaris
  • SunCluster software
  • Possibly use Zones (i.e. Solaris' VM setup) for better server partitioning
  • 3-year SunSpectrum Gold support (24x7x365 telephone support, 8-8 onsite hardware replacement, 4-hour response time for hardware)

ZFS is the bee's knees. It's just so great. Check it out here: ZFS on OpenSolaris [] . It's currently available only in the preview Solaris 11 (codename: Nevada), but it will be included in Solaris 10 Update 2 as production-quality code. S10u2 should be available sometime this summer.
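To illustrate why ZFS fits the "add disk cheaply" requirement, here is a hypothetical pool setup (the Solaris device names are placeholders, not from the config above):

```
# Build a raidz pool from the SATA JBOD, then carve out filesystems:
#   zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0
#   zfs create tank/archive
#   zfs set compression=on tank/archive
# Growing later is one command -- add another tray as a second raidz vdev:
#   zpool add tank raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0
```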

I run a similar setup here at work, plus at my private ISP company. The above config is fully supported (all software stacks, hardware, and interaction) by Sun, so you've got 1-stop maintenance support. The above config is for 4.2TB (raw). Assume you RAID-5 the SATA drives, and RAID-10 (striped mirrors) the FC drives, that's 2.9TB usable. Adding additional FC JBODs (either FC or SATA drives) is relatively cheap, and VERY simple to configure.
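Those usable-capacity figures check out (integer GB, marketing units; this is just the layout the parent describes, not an official Sun sizing):

```shell
fc_usable=$(( 12 / 2 * 146 ))         # RAID-10: mirrors halve 12 x 146GB FC
sata_usable=$(( (5 - 1) * 500 ))      # RAID-5: lose one drive's worth to parity
raw=$(( 12 * 146 + 5 * 500 ))         # ~4.2TB raw
total=$(( fc_usable + sata_usable ))  # ~2.9TB usable
echo "${raw}GB raw, ${total}GB usable"
```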

The solution above is very flexible, and will allow you to add disks and servers to the mix easily and (relatively) cheaply. Performance is also quite good. It does NOT support iSCSI directly to the JBODs or FC switches - you can get an iSCSI HBA for the X4200s should you want to. The 3510FC has built-in administration, so you could attach additional app/DB servers directly to the FC switch for better performance, while still maintaining good overall maintenance/configuration control. The really nice thing about the above hardware config is that it will run Solaris, RedHat or SLES Linux, and Windows. It's all been tested and you can get SW support for Solaris and Linux DIRECTLY from Sun. It's all WHQL-certified, but you'll have to talk to MS for Windows support.

Talk to a Sun sales rep - the above config Lists (i.e. prices from SunStore) at under $100k, and I'd expect you should be able to get it for considerably less. Remember - that cost includes 3 years of full support for everything.


Re:From a Sun Employee... (1)

eric2hill (33085) | more than 8 years ago | (#15361098)

First, thank you for this information. I've been looking at a solution like this, but hadn't realized that the Sun support contract would cover *everything* in the solution, which is a huge plus in my book.

I've been following ZFS for the last few months (since the November slashdot post about the open source release) and am very impressed with its capabilities. That, combined with the binary-ready build of AFS on Solaris, makes this a very attractive solution that I'll look into.

Talk to a Sun sales rep - the above config Lists (i.e. prices from SunStore) at under $100k, and I'd expect you should be able to get it for considerably less. Remember - that cost includes 3 years of full support for everything.

Do you have someone in mind, or should I just call the Sun 800 number?

Re:From a Sun Employee... (1)

trims (10010) | more than 8 years ago | (#15363559)

Honestly, just go with the 800 number. I can't say who your Acct Rep would be these days.

Also, I'm not 100% sure that OpenAFS is covered by Sun. I'm pretty sure it is, but... Everything else definitely is covered, though, and at the minimum, you'll get help from a Sun SysEng if you're looking at OpenAFS. (With a Gold Contract, SysEngs respond quite fast. ;-)


SAN/NAS with multiple tiers of performance (Pillar (1)

cblack (4342) | more than 8 years ago | (#15359690)

I think what you should be looking at is a SAN that has the capability of a NAS frontend. For your transactional/database loads you will want block-level I/O, which usually means either Fibre Channel or iSCSI connections from the servers. For the file sharing you can either connect a regular server to the SAN and have it handle exporting the storage, or get NAS functionality from your SAN vendor. Most NAS functionality from SAN vendors only does SMB or NFS, but it is nicely integrated and manageable.
We were in a similar situation and evaluated about seven vendors, narrowing it down to EMC, NetApp, and Pillar Data for solutions in our price range that could handle different types of workloads all from one pool of disk. We decided on Pillar for a few reasons. One of the biggest things Pillar had that nobody else did at the time was the ability to get different levels of performance from a single type of disk by using a neat QoS I/O scheduler and putting the high-priority data on the outer tracks of the disk. They also have fully redundant paths to SATA drives, where each drive is dual-ported and can be addressed by two RAID cards. Supposedly they can get performance similar to 10k RPM FC drives from the SATA drives they use with their system. They are also adding the ability to add trays of FC disks as well.

I'm really pleased with the system; it is very flexible (it can scale by I/O as well, by adding just controllers rather than replacing them) and is one of the only solutions that lets you start at around 2TB but still have several levels of great performance all on one pool of disk. EMC is adding more similar QoS stuff to their new systems. Another thing that I found out is that it is really hard to get full redundancy from the controllers down for under $75k on most other systems (with NetApp you need to buy two separate frontends and can't go beyond two without another layer of software/management).

Re:SAN/NAS with multiple tiers of performance (Pil (0)

Anonymous Coward | more than 8 years ago | (#15390185)

FYI: Pillar doesn't use dual-ported SATA drives. AFAIK these don't exist. The systems do have the ability to automatically switch in a second disk controller to replace a failed one, but at the lowest level there is only one path to a SATA disk.

SATA vs. FC performance depends upon your workload.

Favorite type of storage (0)

Anonymous Coward | more than 8 years ago | (#15360438)

What types of storage are you using and what would you recommend?
I tried out floppies for a long time, and flash RAM too, but I've found that the best type of storage, overall (taking into account cost and performance, both speed and capacity), is HARD DISK. You'll be happy, trust me.

Recommendation for NetApp (1)

zorro6 (836387) | more than 8 years ago | (#15360600)

I can highly recommend NetApp. My installation supports about 100TB on NetApp FAS and R servers. The FAS servers use FC disks for very high performance at fairly high cost. The R servers use SATA disks for reasonable performance at reasonable cost. Support is excellent. We support Solaris (NFS), Windows (CIFS), Linux (NFS) and Mac OS (NFS) from the NetApps. The same file system can be exported through NFS and/or CIFS.

We tried Snap Servers, Sun SPARC boxes with FC disk arrays, Linux boxes, various RAID array vendors and none compared to the NetApps in performance, stability and service.

Consider this - there is lots being left out (1)

ejoe_mac (560743) | more than 8 years ago | (#15360633)

There are some specific items to consider here.

1) It doesn't matter if you use SCSI or FC - if it's connected via iSCSI, that will be your bottleneck. 2Gb/4Gb Fibre Channel will give you the performance that iSCSI won't. Either way, spend the money on good HBAs, and get a spare.

2) If it's not something you're already familiar with, you should spend the money and get a box from EMC and let them deal with it. Yes, sometimes it's worth spending the money. They show up to swap out any "issue" component, and things keep going. The CX300 is where you're looking to be - everything else on the market is going to be bigger (more disks, more features, more $).

3) If you're looking for ideas to make it a "cheap" solution, the XServe RAID is the cheapest way to get into a small SAN. It's $17,000 for 7TB raw disk, redundant components, and Applecare. It's all SATA / FC stuff, but it just works.

4) Server Virtualization - if you're not already using it, consider the impact of it with this storage array. Look at VMware ESX - VMotion is a beautiful thing, but costly. This is where your HCL should drive your spending.

5) How are you going to back up this thing? It's easy to have 2-5TB of spinning disk, but you still have to have a backup plan. Maybe buy one that's 2-5TB now and get the 5-20TB one in a few years, moving your existing SAN into a backup-type role.

Do you care about Unix-side security at all? (1)

buildboy (30079) | more than 8 years ago | (#15363825)

I'd say one of the first questions you need to ask yourself (and your management and legal people) is what level of security you require for your data. After that read up on NFSv3 security; a good article is at dfs/musings.pdf [] , which touches on most of the major problems. And yes, the situation really is that bad, and tools to exploit the numerous weaknesses are easily obtainable. NFSv3 "security" is a joke. Unless you use it purely as a back end system on a secured, private network between physically secure machines that only people who have access rights to all files on the server have access to, you will lose to any minimally skilled cracker or disgruntled employee (or if someone decides to write self-replicating malware that exploits NFSv3 weaknesses, which frankly I wish someone would do so management types could fully grok how exposed they are).

Once your company understands how unacceptable NFSv3 security is for any kind of situation involving company-confidential or legally-sensitive data, solutions like Network Appliance will start to look like they suck, because they do not support any decently secure protocol that the majority of Unix clients can use, nor will they unless the vendor feels like adding them (appliance model = big, useless / overpriced bricks if you change storage strategies). Only the very latest Unix versions support NFSv4 at all, and that support is universally not well documented, and in my experience, esp. on GNU/Linux, somewhat buggy. Managing the differing permissions models between CIFS, NFSv3 and NFSv4 is also insanely complex, with lots of subtle problems that can leave you wide open.

There is exactly one non-kludgey widely used solution to this problem, and that is OpenAFS ( Designed for security, proven over more than a decade in demanding environments (Morgan Stanley, MIT, CMU), same permissions model across platforms, etc. If you'd like to talk to a vendor, Sine Nomine Associates ( is one of several that sell support contracts (the software itself is Open Source). The best vendor backup solution for OpenAFS is TiBS (, although roll-your-own is pretty easy as well. Note that if you don't want to touch the Windows desktops with OpenAFS client installs, Samba has excellent support for using OpenAFS as a back end (i.e. Windows clients accessing AFS-space via their native CIFS clients via Samba). There is also an NFSv3 translator service for if you happen to have any extremely odd or old Unix operating systems that aren't supported by OpenAFS or ARLA clients. Another option in some cases would be to buy Sharity ( licenses and access AFS-space via CIFS/Samba. To use OpenAFS you also need a Kerberos 5 KDC; for this you can use Active Directory, or MIT or Heimdal Kerberos 5, which are both free. For a cross-platform single sign-on solution, you can combine Samba, OpenLDAP and Heimdal; this requires experienced unix-y sysadmins, but companies like Symas Corp. [] will do it for you.
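Since the site already runs Active Directory, the Kerberos 5 requirement can be met by pointing the Unix clients at the existing AD KDCs; a hypothetical /etc/krb5.conf, where the realm and hostnames are placeholders:

```
[libdefaults]
    default_realm = CORP.EXAMPLE.COM

[realms]
    CORP.EXAMPLE.COM = {
        kdc = dc1.corp.example.com
        admin_server = dc1.corp.example.com
    }

[domain_realm]
    .corp.example.com = CORP.EXAMPLE.COM
```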

You mentioned DCE/DFS (which I've noticed several people have misinterpreted as Microsoft dfs, which has almost nothing in common with DCE/DFS). DCE/DFS is dead. It had little vendor uptake; IBM supplied clients for most platforms, and IBM stopped development and ended support quite a while ago. Management was a complete nightmare. There is no open source implementation. It's dead, Jim! :-)

IBM has 2 major migration paths away from DCE/DFS (and IBM AFS, which is also end-of-lifed, although most of those customers just moved to OpenAFS). One is SANFS ( ml?Open), which is cool but appropriate to only a limited range of environments. The other is GSA, which is more generally useful. IBM also had a nifty offering called Stonehenge ( ), which was an OpenAFS + Samba + Virtualization + Admin tools solution, but as of late 2005 they don't seem to be actively marketing it any more, pushing other (more IBM-proprietary) solutions instead.

You should call IBM Global Services and get someone to give you a presentation on Global Storage Architecture (GSA). Unfortunately it's not something they advertise widely (IBM is good at having excellent filesystem products that get no press).

Here is a rather old presentation on GSA from a NFS Conference: []

Here is a bit of a IBM Redbook (chapter 3 of 247229.html?Open [] ) that describes how GSA makes NFSv3 security suck as little as humanly possible:

IBM GSA supports the NFS[v3] file-system protocol. [...] GSA maintains an LDAP directory of user IDs and passwords. Clients securely (via HTTPS) [this is secure out of band authentication, either integrated with user logins via PAM or Authenticate or after login via a command line or GUI tool] authenticate with their user ID and password directly to the GSA server. This in turn opens up NFS file sharing access to the client's current IP address and UID (numeric User ID) for a specified amount of time. With GSA's NFS authentication, the access controls are stored on the server, instead of storing a token locally on the client. More importantly, with GSA's authentication, the client not only can obtain credentials for its own IP address/UID but can also delegate credentials to a list of other IP address/UID's without having to contact those IP addresses directly.

It should be noted that GSA uses GPFS as a backend filesystem, so it, like OpenAFS, gives you a unified permissions model.

Re:Do you care about Unix-side security at all? (0)

Anonymous Coward | more than 8 years ago | (#15390662)

One correction:

Saying that NFS cannot be secure isn't right. NFS v2 or v3 do not implement security but depend upon the RPC layer to provide it instead. If you choose little or no security at the RPC level (e.g. Unix AUTH_SYS instead of AUTH_GSS) then you get what you ask for.

The author of the USENIX paper is speaking more about the state of NFS/RPC security on Linux, which is poor to non-existent. Security is mandatory for NFSv4 so it's being added there but there's no reason NFSv3 can't have it also, like on Solaris and some other Unixes.

Netapp (2, Interesting)

TopSpin (753) | more than 8 years ago | (#15369668)

Netapp has a new division called "StoreVault" that is about to release new products that might be ideal for your purposes. There isn't much information publicly available yet, but what is available is:

o Data OnTap OS
o NAS and iSCSI
o Optional FC interface (yes, NAS, iSCSI and FC in one device)
o "simplified" web interface
o Based on FAS250/270
o $5000 entry level price
o Scalable to 12TB

Presumably the products will launch some time in June.
