
Ask Slashdot: Best File System For Web Hosting?

timothy posted about 2 years ago | from the use-the-one-with-the-bits dept.

Software

An anonymous reader writes "I'm hoping for a discussion about the best file system for a web hosting server. The server would handle mail, database, and web hosting, running CPanel, likely on CentOS. I was thinking that most hosts use ext3, but with all of the reading/writing to log files, a constant flow of email in and out, not to mention all of the DB reads/writes, I'm wondering if there is a more effective FS. What do you fine folks think?"


ZFS (5, Informative)

Anonymous Coward | about 2 years ago | (#42136119)

Or maybe XFS.

Re:ZFS (2, Funny)

Anonymous Coward | about 2 years ago | (#42136265)

FAT

Re:ZFS (-1)

Anonymous Coward | about 2 years ago | (#42136725)

Ah hahahahahahah ahahahahahahahah ahhhhhh.... you kill me.

Re:ZFS (-1)

Anonymous Coward | about 2 years ago | (#42136765)

NFS

Re:ZFS (3, Informative)

Anonymous Coward | about 2 years ago | (#42136715)

"Maybe" XFS? XFS.

ZFS is funky and all, but you don't need the extra features, and the additional CPU overhead is just wasteful. The only real things to care about are avoiding a fsck on unclean reboot and fast reads. XFS+LVM2+mdraid (although a proper RAID controller is preferable) is perfect.
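
For anyone who wants to see what that stack looks like end to end, here is a rough provisioning sketch in Python. The device names, RAID level, volume size, and mount point are made up for illustration; the mdadm, LVM, and xfsprogs commands are the standard ones, but treat this as a sketch to adapt, not a script to paste onto a live box.

    # Rough provisioning sketch for the mdraid + LVM2 + XFS stack described above.
    # Device names and sizes are hypothetical -- adapt before use, and expect to
    # run this as root on a machine whose spare disks you are free to wipe.
    import os
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Software RAID10 across four spare disks.
    run(["mdadm", "--create", "/dev/md0", "--level=10",
         "--raid-devices=4", "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"])

    # 2. LVM on top, so volumes can be resized later.
    run(["pvcreate", "/dev/md0"])
    run(["vgcreate", "vg0", "/dev/md0"])
    run(["lvcreate", "-L", "200G", "-n", "www", "vg0"])

    # 3. XFS on the logical volume, mounted without atime updates.
    run(["mkfs.xfs", "/dev/vg0/www"])
    os.makedirs("/var/www", exist_ok=True)
    run(["mount", "-o", "noatime", "/dev/vg0/www", "/var/www"])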

Re:ZFS (2)

corychristison (951993) | about 2 years ago | (#42137401)

Agreed. XFS is solid and fast, with relatively low CPU overhead.

From what I have read, ext4 is very good as well, though some debate its stability.

Re:ZFS (5, Funny)

sjames (1099) | about 2 years ago | (#42137369)

Depending on the type of web content, XXXFS might be appropriate.

Just (-1, Offtopic)

M0j0_j0j0 (1250800) | about 2 years ago | (#42136135)

Who cares? Just get an SSD in the bay and be happy; you can get affordable systems with an SSD from $50 :)

Re:Just (0)

Anonymous Coward | about 2 years ago | (#42136259)

This is certainly one option, and it will, generally, make up for most perceived deficiencies in the file system.

That being said - you're not going to gain enough performance over ext3 or ext4 for a generalized use case like yours to warrant the time/energy spent implementing one of the other non-default filesystems.

Re:Just (3, Insightful)

Anonymous Coward | about 2 years ago | (#42136345)

Even with an SSD you still need a file system format for it to be usable.

I'm all for ZFS, very reliable over long periods of time.

Re:Just (5, Insightful)

gbjbaanb (229885) | about 2 years ago | (#42136353)

Yeah - it's especially good for your log files; after all, an SSD is just like a big RAM drive...

In most cases you're going to be better off forgetting SSDs and going with lots more RAM; if you have enough RAM to cache all your static files, then you have the best solution. If you're running a dynamic site that generates pages from a DB and that DB is continually written to, then putting the DB on an SSD is generally going to kill its performance just as quickly as if you had put /var/log on it.

RAID arrays of spinning drives are the fastest: striping data across 2 drives basically doubles your access speed, so stripe across an array of 4! The disadvantage is that a single drive failure kills all your data - so mirror the lot. Eight drives in a stripe+mirror (mirror each pair, then put the stripe across the pairs - not the other way round) will give you fabulous performance without worrying that your SSD will start garbage collecting all the time once it starts to fill up.
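
The parenthetical about mirroring the pairs first is worth spelling out. A quick Monte Carlo sketch (illustrative only, assuming eight identical drives and two simultaneous random failures) shows why a stripe over mirrored pairs survives a double failure far more often than a mirror of two four-drive stripes:

    # Toy simulation: with eight drives, losing two at random is survivable more
    # often if you mirror pairs and stripe across them (RAID10) than if you
    # stripe first and mirror the two stripes (RAID0+1).
    import random

    def raid10_survives(failed):
        # Pairs (0,1), (2,3), (4,5), (6,7): fatal only if both halves of a pair die.
        return all(not ({2*p, 2*p + 1} <= failed) for p in range(4))

    def raid01_survives(failed):
        # Stripes {0..3} and {4..7}: fatal once each stripe has lost any drive.
        return not (failed & {0, 1, 2, 3} and failed & {4, 5, 6, 7})

    trials = 100_000
    r10 = sum(raid10_survives(set(random.sample(range(8), 2))) for _ in range(trials))
    r01 = sum(raid01_survives(set(random.sample(range(8), 2))) for _ in range(trials))
    print(f"RAID10  survives ~{100*r10/trials:.0f}% of double failures")   # ~86%
    print(f"RAID0+1 survives ~{100*r01/trials:.0f}% of double failures")   # ~43%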

Re:Just (4, Informative)

M0j0_j0j0 (1250800) | about 2 years ago | (#42136863)

I don't know the budget, but 250GB of "RAM" for $500 looks like a good deal. And you just suggested an array of 4 drives to someone who wants the classic webserver with CPanel, all stuffed in one system; that would be like $3-4k just for the disks. SSD is the way to go in these cases, mainly because of the money you save. And the lifespan? I've replaced way more HDDs than SSDs in the 3 years since I started using them, they're deployed in about the same numbers, and the SSDs get way more I/O.

Re:Just (2)

corychristison (951993) | about 2 years ago | (#42137521)

You're still going to want redundancy. At the very least 2 identical drives mirrored with software RAID.

If redundancy is important, 500GB/1TB "Enterprise" drives are cheap. 4 drives in RAID10 would give the best cost:redundancy:performance ratio. You can probably get 4 HDDs for the cost of the one $500 240GB SSD you mentioned.

Re:Just (1)

rs79 (71822) | about 2 years ago | (#42136911)

2? 4? Fuq dat, use 12. Use another 12 if you need redundancy. And SCSI is still a better performer than SATA...

Re:Just (-1)

Zero__Kelvin (151819) | about 2 years ago | (#42137057)

"you're going to be better off forgetting SSDs and going with lots more RAM in most cases"

Right, because nothing says stability more than volatile RAM.

Re:Just (2)

yamum (893083) | about 2 years ago | (#42137655)

You forget the reason why adding RAM makes things faster: Linux caches a tonne of stuff in RAM so it doesn't have to constantly read from disk.

Re:Just (2, Insightful)

Synerg1y (2169962) | about 2 years ago | (#42136381)

Due to the amount of reads/writes and the life span of SSDs, they are some of the worst drives you can get for a high-availability web server. ext3 should work fine for you, especially if you're not too familiar with the different types of file systems. One thing I might recommend: if you're looking at really high traffic, you need to separate out your database, email, and web server into three different entities. If not... again, the file system is not really a concern for you. Last but not least, redundancy is what will save you a lot of time and headache; make sure you have some sort of mirroring going on, or if your server is at a datacenter, they probably take care of it for you.

Re:Just (4, Insightful)

Guspaz (556486) | about 2 years ago | (#42136843)

Due to the amount of reads/writes and the life span of SSDs, they are some of the worst drives you can get for a high-availability web server.

Only if you're completely ignorant about the difference between consumer and enterprise SSDs. The official rated endurance of a 200GB Intel 710 with random 4K writes (the worst-case scenario) and no over-provisioning is 1.0 PB. To put that in perspective: you could write 100GB of data in 4K chunks to this drive every day for nearly 30 years before you approached even the official endurance figure.
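
A quick back-of-the-envelope check of that claim, using only the figures quoted above (1.0 PB rated 4K random-write endurance, 100GB written per day):

    # Endurance arithmetic, with figures taken from the parent post as assumptions.
    rated_endurance_tb = 1000        # 1.0 PB expressed in TB
    daily_writes_tb = 0.1            # 100 GB of 4K random writes per day
    years = rated_endurance_tb / daily_writes_tb / 365
    print(f"{years:.1f} years to reach rated endurance")   # ~27.4 years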

If you use a consumer SSD in a high-load enterprise scenario, you're going to get bit. If you use an enterprise SSD in a high-load enterprise scenario, you'll have no problems whatsoever with endurance, regardless of what people spreading FUD like you would have you believe.

Re:Just (2)

mcrbids (148650) | about 2 years ago | (#42136891)

Came here to say this. Unfortunately I have no mod points. Enterprise drives are more expensive, but if you need the performance, they are an excellent option.

Re:Just (3, Insightful)

Zero__Kelvin (151819) | about 2 years ago | (#42137105)

"Due to the amount of read writes & the life span of SSD's they are some of the worst drives you can get for a high availability web server"

If only they had some kind of way of allowing drives to fail while still retaining data integrity. It's probably because I just dropped Acid, but I'd call the system RAID.

XFS (1)

Anonymous Coward | about 2 years ago | (#42136139)

Check it out here http://en.wikipedia.org/wiki/XFS

Re:XFS (1)

Anonymous Coward | about 2 years ago | (#42136243)

Check it out here http://en.wikipedia.org/wiki/XFS

I have to agree. I've been using XFS for years on several of my busiest systems with no data loss and terrific performance on massive amounts of data. It performs significantly better than ext3 and makes better use of I/O bandwidth.

Re:XFS (2)

steveg (55825) | about 2 years ago | (#42136709)

Has XFS gotten over its corruption problems when shut down dirty?

Back when I used it, I was always very careful to have *good* UPS support.

Re:XFS (1)

0123456 (636235) | about 2 years ago | (#42136813)

My MythTV box gets shut down a few times a year when we lose power at our house and so far there haven't been any problems. I'm not sure I'd trust it for valuable data, though.

Re:XFS (-1)

Anonymous Coward | about 2 years ago | (#42137317)

If you were having corruption problems then you had something configured wrong. Having used XFS for a commercial NAS targeted at NLE markets, I can say it was rock solid. We had more problems with LVM than we did with XFS.

Re:XFS (5, Informative)

greg1104 (461138) | about 2 years ago | (#42137539)

The biggest source of early XFS corruption issues was that at the time the filesystem was introduced, most drives on the market lied about write caching. XFS was the first Linux filesystem that depended on write barriers working properly. If something was declared written but not really on disk, filesystem corruption could easily result after a crash. But when XFS was released in 2001, all the cheap ATA disks in PC hardware lied about writes being complete, Linux didn't know how to work around that, and as such barriers were not reliable on them. SGI didn't realize how big a problem this was because their own hardware, the IRIX systems XFS was developed for, used better quality drives where this didn't happen. But take that same filesystem, run it on random PC hardware of the era, and it frequently didn't work.

ext4 will fail in the same way XFS used to if you run it on old hardware. That bug was only fixed in kernel 2.6.32 [phoronix.com], with an associated performance loss for software like PostgreSQL that depends on write barriers for its own reliable operation too. Nowadays write barriers on Linux are handled by flushing the drive's cache out, all SATA drives support that cache-flush call, and the filesystems built on barriers work fine.
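
For anyone wondering what "depends on write barriers" means in application terms, here is a minimal sketch of the write-then-flush pattern a database uses at commit time. The file name is made up; the point is that os.fsync() is the call whose durability guarantee evaporates if the drive lies about completed writes:

    # A commit is only durable once the data has been forced out of both the OS
    # page cache and the drive's volatile write cache.
    import os

    def durable_append(path, record):
        with open(path, "ab") as f:
            f.write(record)
            f.flush()             # push Python/libc buffers into the kernel
            os.fsync(f.fileno())  # ask the kernel to flush page cache and drive cache

    durable_append("wal.log", b"COMMIT txid=42\n")   # illustrative file name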

Many of the other obscure XFS bugs were flushed out when Red Hat did QA for RHEL6. In fact, XFS is their only good way to support volumes over 16TB in size, sold as part of the Scalable File System [redhat.com] package, a fairly expensive add-on to RHEL6. All of the largest Linux installs I deal with are on XFS, period.

I wouldn't use XFS on a kernel before RHEL6 / Debian Squeeze though. I know the software side of the write barrier implementation, the cache flushing code, works in the 2.6.32 derived kernels they run. The bug I pointed to as fixed in 2.6.32 was specific to ext4, but there were lots of other fixes to that kernel in this area. I don't trust any of the earlier kernels for ext4 or xfs.

ext3 (5, Insightful)

Anonymous Coward | about 2 years ago | (#42136141)

If you have to ask, you should stick with ext3.

Re:ext3 (2)

SuperQ (431) | about 2 years ago | (#42137129)

+1 to this.

Unless you have a business case where you know you need something different, stick to what's simple and what works.

ext4 is also a nice option over ext3. It uses extent-based allocation instead of per-block maps, which improves metadata efficiency with no real downside.
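
A toy illustration of that metadata difference (this is not the real on-disk format of either filesystem, just the shape of the bookkeeping): a fully contiguous 1 GiB file needs one extent record versus a quarter-million per-block entries.

    # Extent-based allocation versus a per-block map for one contiguous 1 GiB file.
    BLOCK = 4096
    file_bytes = 1 << 30                     # 1 GiB, allocated contiguously
    blocks = file_bytes // BLOCK

    block_map = list(range(100_000, 100_000 + blocks))   # one entry per block
    extents = [(100_000, blocks)]                        # one (start, length) run

    print(f"block map entries: {len(block_map):,}")      # 262,144
    print(f"extent entries:    {len(extents)}")          # 1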

Re:ext3 (0)

Anonymous Coward | about 2 years ago | (#42137329)

ext4 is also better than ext3 at block allocation in general, so concurrent writes to multiple files don't generate as much fragmentation.

There is really no reason to use ext3 on CentOS, even on the older 5.x versions where you had to install extra packages to get ext4-specific builds of tools instead of having native support in e2fsprogs.

ZFS (0, Flamebait)

Anonymous Coward | about 2 years ago | (#42136143)

ZFS. All of the Linux filesystems are junky and will eventually corrupt all your data. Choose a real OS like Solaris, AIX or HP-UX if you want real reliability and durability, not just a cobbled-together hobbyist OS. Hell, even OS X Server would be more reliable than Linux junk.

The best server? (3, Insightful)

Anonymous Coward | about 2 years ago | (#42136167)

The best file system would be one not running: mail, database, web hosting, and CPanel.

Tempfs (0)

Anonymous Coward | about 2 years ago | (#42136175)

Tempfs! RAM POWER!

ext4 unless there's a good reason not to. (3, Insightful)

MarcAuslander (517215) | about 2 years ago | (#42136179)

The obvious argument for ext4, the current ext version, is that it's been around a long time and is very solid. I'd only use something else if I knew the performance of ext4 would be an issue.

Re:ext4 unless there's a good reason not to. (1)

Desler (1608317) | about 2 years ago | (#42136343)

4 years is a "long time" now? Something like XFS must be considered ancient by your standards since it's 18 years old.

Re:ext4 unless there's a good reason not to. (2)

cheater512 (783349) | about 2 years ago | (#42136703)

ext2, which is what ext4 is based on, beats XFS by a year. 19 years old. :)

Re:ext4 unless there's a good reason not to. (0)

Zero__Kelvin (151819) | about 2 years ago | (#42137133)

Welcome to the world of high technology. Now go home.

Re:ext4 unless there's a good reason not to. (0)

Anonymous Coward | about 2 years ago | (#42137581)

What idiot modded this flamebait? If there was a +5 Funny, that is what it should have received.

Re:ext4 unless there's a good reason not to. (1)

Anonymous Coward | about 2 years ago | (#42136589)

2nd this. Ext4 is the default, and it's a good default.

For your average site it's not going to make a shred of difference either way. If your workload is so special that you need to tune the file system, you probably already know approximately what you need.

One potential pitfall: I would avoid putting a database on a copy-on-write filesystem like Btrfs or ZFS. Besides that, the database will just allocate some disk space and then ignore the file system completely - in fact, it would be even happier if you just gave it a raw partition to play with. (But then it's a pain to move or expand, so again, don't do that unless you know you need to.)

When Btrfs is five years more mature, maybe I will recommend it, with COW disabled on the database files for busy sites - not for any performance advantage over ext4, but because of the snapshot and checksum features.
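
If you do eventually put a database on Btrfs, the usual way to disable copy-on-write for its files is chattr +C on a still-empty directory, so that new files inherit the no-COW attribute. A minimal sketch, with a made-up data directory path:

    # Disable COW for a (still empty) database directory on Btrfs.
    # chattr +C must be applied before any data is written; new files inherit it.
    import os
    import subprocess

    datadir = "/srv/btrfs/mysql-data"        # hypothetical path on a Btrfs volume
    os.makedirs(datadir, exist_ok=True)
    subprocess.run(["chattr", "+C", datadir], check=True)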

Re:ext4 unless there's a good reason not to. (4, Informative)

Vekseid (1528215) | about 2 years ago | (#42136639)

By my benchmarks ext4 was about 25% faster than ext3 for my typical database loads, largely due to extents. This is on my twin RAID 1 10krpm drives.

I still use ext3 for my /boot partitions, but other than that there doesn't seem to be much reason to stick to ext3 at all.

Re:ext4 unless there's a good reason not to. (1)

Carewolf (581105) | about 2 years ago | (#42136783)

I use ext2 for /boot and /tmp: on /boot for compatibility and because it is rarely written to, on /tmp because it is faster and /tmp doesn't need to be able to recover after a crash.

Re:ext4 unless there's a good reason not to. (1)

MightyMartian (840721) | about 2 years ago | (#42136985)

I store all my most important files in /tmp, you insensitive clod!

Re:ext4 unless there's a good reason not to. (1)

Vekseid (1528215) | about 2 years ago | (#42137367)

I use tmpfs for /tmp, but then it is a webserver with a rather large amount of database throughput.

Re:ext4 unless there's a good reason not to. (3, Interesting)

Vanders (110092) | about 2 years ago | (#42136755)

The good reason not to is that I don't need to log in at 3am to type the root password and watch it fsck after an unclean reboot. Use XFS.

Re:ext4 unless there's a good reason not to. (1)

mister_playboy (1474163) | about 2 years ago | (#42137379)

Recommending ext3 over ext4 at this point is silly. It's like recommending Vista over Win7.

ReiserFS Sure (4, Funny)

Anonymous Coward | about 2 years ago | (#42136199)

It will kill your innocent files to save some space....

Re:ReiserFS Sure (3, Funny)

Anonymous Coward | about 2 years ago | (#42136299)

I heard it can murder your server's performance.....

Re:ReiserFS Sure (4, Funny)

Anonymous Coward | about 2 years ago | (#42136795)

I heard it only kills the wifi. And then makes it disappear completely.

CPanel will be your problem (5, Insightful)

MindCheese (592005) | about 2 years ago | (#42136215)

The inefficiencies and handicaps introduced by that bloated turd of a platform will far outweigh the sub-percentage-point gains you might see from using ReiserFS or any other alternative filesystem.

Re:CPanel will be your problem (1)

DNS-and-BIND (461968) | about 2 years ago | (#42136999)

Huh, I was thinking about putting it on the crappy Windows Server 2003 box that was just dumped on me. What else should I use for management of ordinary tasks over the web? No budget, of course.

Re:CPanel will be your problem (1)

Zero__Kelvin (151819) | about 2 years ago | (#42137163)

", I was thinking about putting it on the crappy Windows 2003 Server server that was just dumped on me.. What else should I use for management of ordinary tasks over the web"

Meditation is your best bet, since it involves not thinking.

Whatever is the best with the given OS (4, Interesting)

mikeken (907710) | about 2 years ago | (#42136237)

Typically, that is the default file system. That is how you will get the best support when there is an issue. It will also be the most stable with your OS, because the developers focus on that FS. So personally, I would use whatever the default FS is for whatever OS you decide to use. To get off topic a bit, IMHO that OS should be Debian because it is just too awesome and Debian-based OSes have the largest community. Also, it should be running on Linode.com ;)

separate your data (1)

Dan9999 (679463) | about 2 years ago | (#42136267)

Put the data on the best type of filesystem for it, whether that be ext3, ext4, some NAS box with tons of memory, or a ramdisk. If you have a complex web site, use multiple filesystem types. If you decide you want one size fits all, then you obviously aren't that serious about the question.

WinFS (2, Funny)

jfdavis668 (1414919) | about 2 years ago | (#42136273)

It will be released someday

Re:WinFS (0)

Anonymous Coward | about 2 years ago | (#42137227)

Yeah, maybe Apple will "invent" it.

Not quite ready for prime time but.... BTRFS (1)

philbreed (2784617) | about 2 years ago | (#42136279)

BTRFS will be quite interesting: https://btrfs.wiki.kernel.org/index.php/Main_Page [kernel.org] . And of course ZFS: http://en.wikipedia.org/wiki/ZFS [wikipedia.org]

Re:Not quite ready for prime time but.... BTRFS (1)

WhitePanther5000 (766529) | about 2 years ago | (#42137571)

Btrfs has not fully stabilized yet, making it a poor choice for a production system. And ZFS is only a viable option if you're running Solaris (sure, you can use the 2009 OpenSolaris version of ZFS on BSD or via FUSE... but again, not good production choices).

ext3/4 and XFS are good choices depending on your needs and distribution. But for a small standalone server, you will probably never notice the difference - use the default.

Turn off the last accessed time stamp (2, Interesting)

Anonymous Coward | about 2 years ago | (#42136285)

Especially if you decide to use an SSD. Even if there's not a lot of data being written, the constant rewriting of inode metadata to update the last-accessed timestamp will wear an SSD and slow a regular hard drive.

Re:Turn off the last accessed time stamp (1)

Lehk228 (705449) | about 2 years ago | (#42137307)

It won't significantly wear any modern SSD, but shutting it off will save you wasted I/O for a function that is not terribly important for a web server, especially one whose logs already store far more complete access information than a last-accessed timestamp.

NFS + ZFS (1)

Anonymous Coward | about 2 years ago | (#42136295)

An NFS export hosted on ZFS, but what do I know.

Re:NFS + ZFS (0)

TopSpin (753) | about 2 years ago | (#42136667)

As recently as Red Hat/CentOS 6.2 (11 months ago), the NFS kernel client choked on large NFS directories [redhat.com], breaking Maildir among other things. NFS, particularly on Linux, has always been a flaky POS. Please stop inflicting NFS on people. NFS is for /home and not much else.

Yeah, I know there aren't any good alternatives. That doesn't mean using NFS isn't a mistake.

XFS for huge mailqueues, otherwise EXT3 or EXT4 (2)

ZG-Rules (661531) | about 2 years ago | (#42136309)

From memory (I've been out of that business for 6 months) CPanel stores mail as maildirs. If you have gazillions of small files (that's a lot of email) then XFS handles it a lot better than ext3 - I've never benchmarked XFS against ext4. Back in the day, it also dealt with quotas more efficiently than ext2/3, but I really doubt that is a problem nowadays.

If you aren't handling gazillions of files, I'd be tempted to stick to ext3 or ext4 - just because it's more common and well known, not because it is necessarily the most efficient. When your server goes down, you'll quickly find advice on how to restore ext3 filesystems because gazillions of people have done it before. You will find less info about xfs (although it may be higher quality), just because it isn't as common.

Re:XFS for huge mailqueues, otherwise EXT3 or EXT4 (3, Interesting)

spacey (741) | about 2 years ago | (#42136373)

XFS is probably better for large maildirs, but ext3 has much better performance on large directories starting in the late 2.6 kernels. It doesn't allow an unlimited number of files per directory, but it doesn't take a huge hit listing e.g. 4k files in a directory anymore.

Re:XFS for huge mailqueues, otherwise EXT3 or EXT4 (1)

ivoras (455934) | about 2 years ago | (#42137211)

it doesn't take a huge hit listing e.g. 4k files in a directory anymore.

Umm, maildirs store each message in its own file. I clean up (archive) emails from each past year in a separate folder and still easily have 8k files in each... and that is not my busiest mailbox.

After a few thousand items of anything, the proper tool for the job is a database, not a file system. Though a file system can be described as a kind of database, and in any case there are problems common to both, such as fragmentation, specialized data storage always beats a generic one. Personally, I like what Dovecot does: it maintains an mbox-like structure ("old-fashioned", with all messages from a single mail folder in a single file, padded appropriately so fields can be updated without rewriting the file) and builds an index file on top of it to enable efficient random message access. In this way you get big, efficient, append-only data files and small, easily cacheable index files: win-win.
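
A toy sketch of that pattern - one big append-only data file plus a small index of (offset, length) pairs for random access. This is only the general idea, not Dovecot's actual mbox or index format, and the file names are invented:

    # Append-only message store with a tiny side index for random access.
    import json

    def _load(index_path):
        try:
            with open(index_path) as f:
                return json.load(f)
        except FileNotFoundError:
            return []

    def append_message(data_path, index_path, message: bytes):
        with open(data_path, "ab") as data:
            data.seek(0, 2)              # position at end-of-file before recording the offset
            offset = data.tell()
            data.write(message)
        index = _load(index_path)
        index.append([offset, len(message)])
        with open(index_path, "w") as f:
            json.dump(index, f)

    def read_message(data_path, index_path, n: int) -> bytes:
        offset, length = _load(index_path)[n]
        with open(data_path, "rb") as data:
            data.seek(offset)
            return data.read(length)

    append_message("INBOX.data", "INBOX.index", b"From: a@example.com\n\nhello\n")
    print(read_message("INBOX.data", "INBOX.index", 0))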

Is it for work? Don't roll a custom solution (5, Insightful)

93 Escort Wagon (326346) | about 2 years ago | (#42136321)

You're not going to be there forever, and all using a non-standard filesystem is going to accomplish is to cause headaches down the road for whoever is unfortunate enough to follow you. Use whatever comes with the OS you've decided to run - that'll make it a lot more likely the server will be kept patched and up to date.

Trust me - I've been the person who's had to follow a guy that decided he was going to do the sort of thing you're considering. Not just with filesystems - kernels too. It was quite annoying to run across grsec kernels that were two years out of date on some of our servers, because apparently he got bored with having to constantly do manual updates on the servers and so just stopped doing it...

Re:Is it for work? Don't roll a custom solution (0)

Anonymous Coward | about 2 years ago | (#42137495)

Absolutely. Avoid anything weird. FFS should do the job nicely.

XFS (1)

axehind (518047) | about 2 years ago | (#42136351)

If you need a large filesystem, go with XFS. RHEL only supports up to 16TB filesystems with ext4 and up to 100TB with XFS. I'm not sure at this point where the limitation comes from, as it applies even on x86-64.

Stick with the default (2)

dokebi (624663) | about 2 years ago | (#42136367)

Unless you want the special features of other file systems (say ZFS), the default (ext3 or ext4) should be fine. They are capable of handling high I/O loads.

If you want even more I/O performance, then use SSDs.

What is wrong with you? (5, Insightful)

glassware (195317) | about 2 years ago | (#42136389)

This isn't 1999. You have no reason to host your web server, email server, and database server on the same operating system.

You would be well advised to run your web server on one machine, your email server on another machine, and your database server on a third machine. In fact, this is pretty much mandatory. Many standards, such as PCI compliance, require that you separate all of your units.

Take advantage of the technology that has been created over the past 15 years and use a virtualized server environment. Run CentOS with Apache on one instance - and nothing else. Keep it completely pure, clean, and separate from all other features. Do not EVER be tempted to install any software on any server that is not directly required by its primary function.

Keep the database server similarly clean. Keep the email server similarly clean. Six months from now, when the email server dies, and you have to take the server offline to recover things, or when you decide to test an upgrade, you will suddenly be glad that you can tinker with your email server as much as you want without harming your web server.

Re:What is wrong with you? (2, Insightful)

Anonymous Coward | about 2 years ago | (#42136969)

After having worked for companies that do both, I honestly disagree. If you host your DBs and web servers on different machines, you wind up with a really heavy latency bottleneck which makes LAMP applications load even slower. It doesn't really make a difference in the "how many users can I fit onto a machine" category. CPanel in particular is a very one-machine-centric piece of software; while you could link it to a remote database, it's really a better idea to put everything on one machine.

Re:What is wrong with you? (2)

leandrod (17766) | about 2 years ago | (#42136975)

use a virtualized server environment

And ðere goes I/O thru ðe drain.

Re:What is wrong with you? (2)

Hatta (162192) | about 2 years ago | (#42137203)

Run CentOS with Apache on one instance - and nothing else. Keep it completely pure, clean, and separate from all other features. Do not EVER be tempted to install any software on any server that is not directly required by its primary function.

Why is this required? Shouldn't we expect our operating systems to multitask?

Re:What is wrong with you? (4, Insightful)

timeOday (582209) | about 2 years ago | (#42137221)

It's a shame, isn't it? We have all these layers and layers of security (such as user separation, private memory address space for processes, java virtual machine...) which we do not trust and are therefore essentially nothing but configuration and performance cruft. If we're really just running one application on each (virtual) machine, that machine might as well be running DOS.

ReiserFS. It kills the competition. (0, Funny)

Anonymous Coward | about 2 years ago | (#42136393)

EOM.

You are asking wrong question (2)

JoeSchmoe007 (1036128) | about 2 years ago | (#42136395)

If you are concerned about performance and expect a constant email stream, you should host mail, database, and web servers on separate computers. There is a reason any reputable host does it this way. Plus, increased load on one component doesn't affect the others.

I think file picking system should be the least of your worries.

Re:You are asking wrong question (2)

JoeSchmoe007 (1036128) | about 2 years ago | (#42136409)

* I think picking file system should be the least of your worries.

Whatever is standard for your hosting provider (1)

Anonymous Coward | about 2 years ago | (#42136407)

If you're going with Rackspace, it'll be ext3. If you're going with Amazon, well, there are more choices. But unless you have a really good reason to deviate from the standard (and it sounds like you don't), why make yourself a bunch of unnecessary trouble?

performance concerns? (2)

bertomatic (2743049) | about 2 years ago | (#42136485)

Based on the topology you have described, the last thing you need to worry about is what file system to choose, since you have decided to host ALL tasks on a single server. If performance were an issue, you would separate them all into dedicated "farms", and if security is a factor (which it should be), none of them would be in the DMZ; only your proxies would live there.

Use the standard and focus on what matters (3, Insightful)

e065c8515d206cb0e190 (1785896) | about 2 years ago | (#42136513)

Whether your focus is on performance, reliability or both, you have other areas that require much more attention than the FS.

Why ext3 in the first place? (1)

atari2600a (1892574) | about 2 years ago | (#42136571)

Everyone switched to ext4 years ago. Before that, I used ReiserFS, but then, you know....

Go old school (3, Informative)

RedLeg (22564) | about 2 years ago | (#42136629)

What do you fine folks think?"

I think you're not a very well trained sysadmin.

There is no reason not to have various parts of the filesystem hierarchy mounted from different disks, or from different partitions on the same disk. If you do this, you can run part of the system on one filesystem and other parts on others, as appropriate for their intended usage. This is commonly done for performance reasons on large servers, much like the one you are asking about. It's also why SCSI ruled in the server world for so long: it made it easy to have multiple disks in a system.

So run most of your system on something stable and reliable with good read performance, and put the portions that are going to take a read/write beating on a separate partition/disk with whichever filesystem has the better read or write performance, as needed. If you segregate your filesystems like this correctly, an added benefit is that you can mount security-critical portions of the tree read-only, making life more difficult for an attacker.
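
One way to make that concrete is a small audit script. The mount points and options below are examples only; the idea is that write-heavy trees get their own filesystems (with noatime, and noexec where sensible) while mostly-static trees can be mounted read-only. A matching /etc/fstab would carry the same options.

    # Quick check that the segregated mounts carry the intended options.
    # A matching /etc/fstab might contain lines like:
    #   /dev/vg0/log   /var/log    xfs   defaults,noatime,noexec  0 2
    #   /dev/vg0/mail  /var/spool  xfs   defaults,noatime         0 2
    #   /dev/vg0/usr   /usr        ext4  defaults,ro              0 2
    EXPECTED = {
        "/var/log": {"noatime", "noexec"},
        "/var/spool": {"noatime"},
        "/usr": {"ro"},
    }

    mounts = {}
    with open("/proc/mounts") as f:
        for line in f:
            _dev, mountpoint, _fstype, opts, *_ = line.split()
            mounts[mountpoint] = set(opts.split(","))

    for mountpoint, wanted in EXPECTED.items():
        have = mounts.get(mountpoint)
        if have is None:
            print(f"{mountpoint}: not a separate mount")
        elif not wanted <= have:
            print(f"{mountpoint}: missing options {wanted - have}")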

Red

You should use XFS ... avoid ext3 at all costs (5, Informative)

Anonymous Coward | about 2 years ago | (#42136757)

Contrary to the majority of the people replying to this post, I emphatically DO NOT recommend ext3. ext3 by default wants to fsck every 60 or 90 days; you can disable this, but if you forget to, it can be pure hell in a hosting environment when one of your servers reboots. Usually shared hosting web servers are not redundant, for cost reasons; if one of your shared hosting boxes reboots, you thus get to enjoy up to an hour of customers on the phone screaming at you while the fsck completes.
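
For what it's worth, that periodic forced check is controlled by two ext2/3/4 superblock fields (maximum mount count and check interval), and the usual way to disable both is tune2fs. Sketch only; the device name is made up:

    # Disable ext3's mount-count and time-based forced fsck on a hypothetical device.
    import subprocess

    subprocess.run(["tune2fs", "-c", "0", "-i", "0", "/dev/sda1"], check=True)
    # -c 0 : never force a check based on the number of mounts
    # -i 0 : never force a check based on elapsed time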

XFS is a very good filesystem for hosting operations. It has superior performance to ext3, which really helps, as it means your XFS-running server can host more websites and respond to higher volumes of requests than an ext3-running equivalent. It also has a feature called Project Quotas, which allows you to define quotas not linked to a specific user or group account; this can be extremely useful for hosting environments, both for single-homed customers and for multi-homed systems where individual customer websites are not tied to UNIX user accounts. The oft-circulated myth that XFS is prone to data loss is just that; there was a bug in its very early Linux releases that was fixed ages ago, and now it's no worse than ext4 in this respect.

Ext4 is also a good option, and a better option than ext3; it is faster and more modern than ext3 and is being more actively developed. Ext4 is also more widely used than XFS, and is less likely to get you into trouble in the unlikely event that you get bit by an unusual bug with either filesystem.

Btrfs will be a great option when it is officially declared stable, but that hasn't happened yet. The main advantages for btrfs will be for hosting virtual machines and VPSes, as Btrfs's excellent copy on write capabilities will facilitate rapid cloning of VMs.

This is already a reality in the world of FreeBSD, Solaris and the various Illumos/OpenSolaris clones, thanks to ZFS. ZFS is stable and reliable, and if you are on a platform that features it, you should avail yourself of it. I would advise you steer clear of ZFS on Linux.

Finally, for clustered applications, i.e. if you want to buck the trend and implement a high-availability system with multiple redundant webservers, the only Linux clustering filesystem I've found to be worth the trouble is Oracle's open source OCFS2 filesystem (avoid OCFS1; it's deprecated and non-POSIX-compliant). OCFS2 lets you have multiple Linux boxes share the same filesystem; if one of them goes offline, the others still have access to it. You can easily implement a redundant iSCSI backend for it using MPIO. It's somewhat easier to do this than to set up a high-availability NFS cluster, without buying a proprietary filer such as a NetApp.

Reiserfs was at one time popular for mail servers, in particular for maildirs, due to its competence at handling large numbers of small files and small I/O transactions, but in the wake of Hans Reiser's murder conviction, it is no longer being actively developed and should be avoided. JFS likewise is a very good filesystem, on a par with ext4 in terms of featureset, but for various reasons the Linux version of it has failed to become popular, and you should avoid it on a hosting box for that reason (unless your box is running AIX).

Speaking of older proprietary UNIX systems; on these you should have no qualms about using the standard UFS, which is a tried and true filesystem analogous to ext2 in terms of functionality. This is the standard on OpenBSD. NetBSD features a variant with journaling called WAPBL, developed by the now defunct Wasabi Systems. DragonFlyBSD features an innovative clustering FS called HammerFS, which has received some favorable reviews, but I haven't seen anyone using that platform in hosting yet. The main headache with hosting is the extreme cruelty you will experience in response to downtime, even when that downtime is short, scheduled or inevitable. Thus, it pays to avoid using unconventional systems that customers will use as a vector for claiming incompetence on your part.

If you are on Windows, of course, you will be running NTFS in any case, although Windows Server 2012 is introducing new systems designed to be competitive with ZFS and btrfs. I have not had a chance to play with them yet and cannot comment on their quality.

Re:You should use XFS ... avoid ext3 at all costs (1)

Anonymous Coward | about 2 years ago | (#42137589)

Since OP is using CentOS, there's really no competition - ext3 or ext4. You could use ext2 on /tmp if you really feel the need. Beware of the path less travelled when it comes to filesystems - it might sound pedestrian, and there might be flashier filesystems out there that promise to give you all sorts of interesting features, but what you want from a filesystem is the ability to hold files without crapping out. All of the ext* filesystems have reasonable performance compared to the alternatives, certainly not enough difference to make a compelling argument to change.

XFS, for example, is all fine and dandy until you experience filesystem corruption, at which point (backups aside) you'll be cursing yourself and wishing you had the same kind of recovery tools that are available for ext*. An oft-quoted myth? I guess I must be one of those unlucky mythical people who has experienced terminal, non-recoverable data loss on nearly every XFS-formatted system they've come across.

I've not had any experience with OCFS, but if it's anything like GFS (Red Hat, not Google) then I'd steer well clear - even if you know what you're doing, you'll find it incredibly fragile and liable to collapse in unexpected modes of failure.

BtrFS (0)

Anonymous Coward | about 2 years ago | (#42136797)

The developer of ext3/4, Theodore Ts'o, has said that Btrfs is the "way forward".
Although, putting a mail server alongside a DB server probably means your needs are not critical, so whatever is easiest is better than what is technically superior.

AWS all the way (0)

Anonymous Coward | about 2 years ago | (#42136825)

Dude, AWS S3 and don't worry about it.

Re:AWS all the way (1)

ZG-Rules (661531) | about 2 years ago | (#42137247)

I have a few websites solely in S3 and CloudFront. It works. Similarly RDS - it's a pretty uncomplicated MySQL service. Not sure about hosting mail on AWS - You can certainly send mail (SES) but I don't know about receiving it. But in general your presumed point is valid - if you can get away with cloudsourcing some of your infrastructure needs, it can be cost-effective and useful.

turn off the log files (0)

Anonymous Coward | about 2 years ago | (#42136855)

it will really speed things up. Nobody uses the damn things when it hits the fan anyway.

Insufficient data (0)

Anonymous Coward | about 2 years ago | (#42136861)

"What is the best FOO for BAR" is academic.

What you want to ask is what is the best overall solution to YOUR problem, or at least what solutions are "good enough."

As a general rule, if a cheap, major-vendor supported solution that meets your needs in its default configuration is available, go with it unless it's clearly cost-effective to change things.

Some things, like default passwords, are almost always not only cost-effective to change but downright stupid/dangerous to leave alone. File system selection on the other hand may be fine in the out-of-the-box default configuration.

Perhaps if you gave a complete description of your proposed system, its workload, and how you weigh robustness, support, and speed under your proposed load against each other, we could give better guidance.

It cracks me up when... (1)

dnaumov (453672) | about 2 years ago | (#42136879)

... in the year 2012, people are seriously suggesting others use filesystems that can (and eventually will) lose data on an unclean shutdown. C'mon people, this isn't the Stone Age anymore.

Re:It cracks me up when... (0)

Anonymous Coward | about 2 years ago | (#42137043)

Because smart admins use Lustre, so they can back up the filesystem metadata with mysqldump, right? ;)

Depends on your priority (0)

Anonymous Coward | about 2 years ago | (#42136899)

It depends a lot on your priority. ext3 is about as stable as you can get, a proven workhorse that is easy to recover in case of a crash. ZFS is quite a bit slower and uses a few more resources, but it allows for snapshots, great for rolling back when problems hit. ext4 is lightning fast, but is still in active development, which has caused a couple of bugs to pop up over the past four years. Usually nothing major, but I don't trust it with production data yet.

So the question is, do you want speed, stability or powerful features?

PostgreSQL (1)

leandrod (17766) | about 2 years ago | (#42136961)

Go for PostgreSQL-backed services whenever feasible. For example, ðere is a quite competent IMAP server called Archiveopteryx, you can run Mediawiki on PostgreSQL, as well as Zope and whatnot.

ext3 with -o noatime (0)

Anonymous Coward | about 2 years ago | (#42136993)

I would use ext3 with the no access time option set.

ReiserFS (0)

Anonymous Coward | about 2 years ago | (#42137161)

If you try to hack it, it will fucking kill you.

But at what scale? (0)

Anonymous Coward | about 2 years ago | (#42137259)

Mail can generate a lot of I/O. Your web-served files will likely end up in cache. Log files are sequential I/O and nearly irrelevant. But all of these things become problematic if you're suddenly, say, pushing several thousand emails a minute, or in any case if you've undersized the RAM. If you're running at scale, the filesystem is basically unimportant -- worry about the disk subsystem (IOPS, spindle count, cache, transport) first.
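
Rough spindle-count arithmetic along those lines, with every figure an assumption chosen purely for illustration (about 4 random IOs per delivered message, roughly 100 random IOPS per 7200rpm SATA drive, and a RAID10 write penalty of 2):

    # Back-of-the-envelope disk-subsystem sizing for a mail-heavy box.
    emails_per_minute = 3000     # assumed load
    ios_per_email = 4            # assumed: journal, message file, directory, quota
    iops_per_spindle = 100       # assumed for a 7200rpm SATA drive

    required_iops = emails_per_minute * ios_per_email / 60        # ~200 IOPS
    spindles = required_iops / iops_per_spindle * 2               # RAID10 write penalty
    print(f"need ~{required_iops:.0f} IOPS -> ~{spindles:.0f} spindles in RAID10")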

The real question is... (0)

Anonymous Coward | about 2 years ago | (#42137425)

What socks should I wear with my runners?

Take a look at BetterLinux (2)

hendersj (720767) | about 2 years ago | (#42137505)

I spent some time late last year and earlier this year working very closely with the developers of BetterLinux, and as part of that work I did stress testing (on a limited scale) to see how the product performed. It has some OSS components and some closed-source components, but the I/O leveling they do is pretty amazing.

http://www.betterlinux.com/ [betterlinux.com]

Definitely FAT, but which one? (3, Funny)

jonadab (583620) | about 2 years ago | (#42137565)

There are arguments to be made in favor of FAT16 or even FAT32, but I think I'd go with FAT12, just because it's simpler. You don't need LFNs for web hosting, do you?

Re:Definitely FAT, but which one? (2)

ultrasawblade (2105922) | about 2 years ago | (#42137667)

404ERR~1.HTM

Filesystem comparison (0)

Anonymous Coward | about 2 years ago | (#42137709)

Google is your best friend:

Filesystem comparison
http://www.phoronix.com/scan.php?page=article&item=linux_2638_large&num=1

ZFS, XFS, and EXT4 filesystems compared
http://www-958.ibm.com/software/data/cognos/manyeyes/datasets/89ade5ae14209c140114bcee8c082d35/versions/1

EXT3 vs EXT4 vs XFS vs BTRFS - filesystems comparison on Linux kernel 3.0.0
http://www.ilsistemista.net/index.php/linux-a-unix/21-ext3-ext4-xfs-and-btrfs-filesystems-comparison-on-linux-kernel-300.html?start=2

Use ext4 not XFS. (0)

Anonymous Coward | about 2 years ago | (#42137715)

I've used XFS and JFS on a number of servers over the years, and I would go ext4 myself (despite the apparently prevalent opinion here on Slashdot). I have inevitably had some kind of filesystem problem with most of my servers that ran something else. Not right away, but eventually. Often the problem botched an entire partition and it had to be reformatted. Fortunately I've always had backups. But since the other filesystems don't see the wide deployment of the ext* filesystems, I just don't have confidence in them any more.

I will say that I had a really nice ongoing relationship with the one guy at IBM still working on JFS when I was having bug problems. We traded bug reports and possible fixes for weeks on end until some race condition was finally eliminated and my server became stable at last. But I'd still go ext4.
