
What is the Best Remote Filesystem?

Cliff posted more than 10 years ago | from the sucking-your-files-through-a-straw dept.

Data Storage 72

GaelenBurns asks: "I've got a project that I'd like the Slashdot community's opinion of. We have two distant office buildings and a passel of windows users that need to be able to access the files on either office's Debian server from either location through Samba shares. We tend to think that AFS would be the best choice for mounting a remote file system and keeping data synchronized, but we're having trouble finding documentation that coherently explains installing AFS. Furthermore, NFS doesn't seem like a good option, since I've read that it doesn't fail gracefully should the net connection ever drop. Others, such as Coda and Intermezzo, seem to be stuck in development, and therefore aren't sufficiently stable. I know tools for this must exist, please enlighten me."


Easy (-1, Flamebait)

Anonymous Coward | more than 10 years ago | (#7753537)

NTFS with file and print sharing enabled, duh.

net use * \\servername\share

Simple, none of this perverted mounting garbage.

Samba (4, Interesting)

fluor2 (242824) | more than 10 years ago | (#7753542)

It looks to me that both AFS and NFS are kind'a outdated. Samba 3 supports NTLMv2 or Kerberos encrypted passwords. I like that.

Re:Samba (2, Insightful)

passthecrackpipe (598773) | more than 10 years ago | (#7754251)

Samba doesn't feature a disconnected mode - and given that the article discusses AFS, InterMezzo and Coda, all of which support disconnected mode natively, I'd guess that would be a requirement.

Re:Samba (4, Informative)

GaelenBurns (716462) | more than 10 years ago | (#7754367)

There is a T1 at each office, so they will be operating in connected mode the vast majority of the time. It's just that if the network connection breaks, I want the network shares to fail gracefully. No crashes, no 5-minute timeouts for the users. And it'd be nice to be able to script the restoration of those network shares when the connection between the two servers is reestablished.

I actually want AFS because it does local caching of files. Here is the comment [slashdot.org] where I describe that.
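For what it's worth, the "script the restoration" part can be approximated with a small cron-driven watchdog. This is only a sketch - the hostname, share, mount point, and credentials file are all invented, and you'd tune the timings for your site:

```shell
#!/bin/sh
# Hypothetical watchdog: remount the remote Samba share when the
# inter-office link comes back, and detach it when the link is down.
REMOTE=server2.example.com
SHARE=//server2/projects
MOUNTPOINT=/mnt/server2

# One lost ping isn't proof of an outage; -c 3 softens that a bit.
if ping -c 3 -q "$REMOTE" >/dev/null 2>&1; then
    # Link is up; remount if the share has dropped.
    if ! mount | grep -q " $MOUNTPOINT "; then
        smbmount "$SHARE" "$MOUNTPOINT" -o credentials=/etc/smbcred
    fi
else
    # Link is down; lazy-unmount so clients get errors
    # instead of hanging on a dead server.
    umount -l "$MOUNTPOINT" 2>/dev/null
fi
```

Run it from cron every minute or two. It's no substitute for a filesystem with a real disconnected mode, but it keeps user-visible hangs short.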

Re:Samba (1)

drinkypoo (153816) | more than 10 years ago | (#7755406)

Coda also does local caching of files, or at least, so claims its documentation. I've never tested it.

Re:Samba (3, Insightful)

Jahf (21968) | more than 10 years ago | (#7757509)

I thought that NFSv4 was also supposed to support local caching of files?

I REALLY wanted to use InterMezzo for my home setup, where I have a central server and some nomadic (as in all over the country, not just around the house) laptops, but after trying it and participating briefly in the mailing list, I agree with the poster: it is just too stuck in development. The version of InterMezzo that ships in most distros is an older one that isn't even compatible anymore.

AFS was too much for my personal needs. I think for now I'll just be doing manual syncs using one of the various non-filesystem sync tools, but I really would like to see something like Coda or Intermezzo fully mature to an end-user I-feel-safe-with-my-data level.

AFS (1)

GaelenBurns (716462) | more than 10 years ago | (#7754303)

(note: by master/slave terminology I only mean that the master server is used more. Only AFS has a hierarchy where master/slave really matters)

AFS would be awesome... you see, sometimes these two offices need to work on the same files from both locations... not simultaneously, but sometimes consecutively. In those cases, it'd be great to have a setup that locally caches the file on the slave server, but will automatically serve the most recent version of the file, even if it has since been edited on the master server. With AFS, all of that is taken care of by the server, I believe.

Now, of course we could set up Samba networked drives, but then there would be no caching... a file would either be stored on the master or slave server, and if someone from another location wanted to work with the file, they'd have to redownload it every time. That would be an *alright* solution, but pretty inelegant, as far as I'm concerned. Linux is supposed to be good at this advanced server stuff, damnit.

So, finally, a question in response to your post: what happens to remote Samba connections when the net connection goes down? Is it a graceful timeout, or does it start crashing things like NFS does?

Wrong problem? (4, Insightful)

fm6 (162816) | more than 10 years ago | (#7755965)

AFS would be awesome... you see, sometimes these two offices need to work on the same files from both locations... not simultaneously, but sometimes consecutively. In those cases, it'd be great to have a setup that locally caches the file on the slave server, but will automatically serve the most recent version of the file, even if it has since been edited on the master server. With AFS, all of that is taken care of by the server, I believe.
So far, you've said nothing about what's in these files and how they are being modified. That's not a secondary question. In fact, it may make your whole search for the right filesystem irrelevant.

You're assuming that a remote filesystem is the only way to share files. But it's only the most common and simplest. When you start talking about replication and version control (which you are, even though you don't use the terms), you need to consider a technology that directly supports those features: version control systems, databases, content management systems. Which is right for you? Without knowing more about the data you're dealing with, it's impossible to say.

But if it has master/slave ... (0)

Anonymous Coward | more than 10 years ago | (#7756940)

Then they can't use it in L.A.!

Re:AFS (2, Informative)

rufey (683902) | more than 10 years ago | (#7759335)

Unless AFS has changed significantly since the last time I used it (1998), I don't know if it would be the best solution.

AFS was a nice filesystem to work with, but it took more to maintain than our regular NFS mounts. The local (client-side) caching of files was nice, though. So was the concept of having a master read/write volume and being able to replicate it to read-only volumes, and only when we wanted to. So we could put new programs on the read/write volumes, test them out, and, when it all was tested, "release" the volumes.

Access permissions are definitely different from your Samba/CIFS/NFS filesystems, though. It's akin to Kerberos, where you have to have a "token", and your "token" has to have rights to the file in order to read it. And "tokens" used to not be an obtain-once-and-use-forever thing. They expired every 24 hours, so every 24 hours you'd have to re-authenticate.
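That daily cycle looks roughly like this from the shell. klog and tokens ship with the standard AFS client tools; the cell name and user here are made up:

```shell
# Authenticate to the cell and obtain an AFS token (example cell/user).
klog -cell example.edu alice
# Show current tokens and when they expire:
tokens
# Once a token expires (lifetime is on the order of a day by default),
# file access in AFS space fails until you run klog again.
```

Sites with daemons that need long-lived access usually end up scripting that re-authentication somehow, which is one of the gotchas mentioned elsewhere in this thread.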

One thing we found we didn't like (this was with AFS 3.3/3.4) was that the cache of files on the client machine was not encrypted. So if someone knew how the cache was structured, they could retrieve the files in the cache without having any AFS tokens (the cache exists on local disk, not in AFS space). This may have changed.

One other thing we had a problem with was when the AFS volume(s) would disappear from the client, and/or the client lost contact with the cell's AFS servers. The machine would become useless until they came back. This was all on UNIX (Sun, HP, SGI, BSD, Linux). Part of the problem was that /usr/local was in AFS space and contained most of the userland programs we used.

Re:AFS (2, Interesting)

wik (10258) | more than 10 years ago | (#7759429)

afsd now refuses to start unless the cache directory is owned by root and chmod 600. As far as I know, the cache is still not encrypted, but if you can't trust root on the system, then you have bigger problems.

AFS is still nasty if you lose contact with the servers. That definitely will be a problem if /usr/local is remote. I have yet to see a network file system that can gracefully handle this situation.

Re:AFS (1)

Professor Bluebird (529952) | more than 10 years ago | (#7770315)

Its akin to Kerberos where you have to have a "token"
It is Kerberos, although AFS ships with Kerberos v4. However, I've heard of people using Kerberos v5 with it, though that needs some extra effort.

Re:Samba (4, Informative)

nocomment (239368) | more than 10 years ago | (#7755134)

eh NFS is a fine way to do it. I might suggest that since you are trying to keep data synchronized, you could very easily make it filesystem agnostic by using rsync.

I have a cluster of 4 machines that is remotely sync'd over an ssh tunnel using rsync. It's pretty easy to do.
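A minimal version of that setup, with made-up hostnames and paths, might look like:

```shell
# One-way sync of /srv/files from this box to the remote office,
# carried over ssh. -a preserves permissions/times/links, -z compresses
# (worthwhile on a T1), --delete makes the mirror exact.
rsync -az --delete -e ssh /srv/files/ backup@office2.example.com:/srv/files/
```

Run from cron on whatever schedule fits. Note that rsync alone gives you periodic one-way mirroring, not a coherent shared filesystem - two people editing the same file at both ends between syncs will clobber each other.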

Re:Samba (0)

Anonymous Coward | more than 10 years ago | (#7761997)

You didn't address his reasoning at all!

Samba uses OK (but not great) NTLMv2 password hashing and user authentication, while NFS on Linux generally does NO password authentication at all.

It doesn't matter if you are using SSH when your users can spoof each other trivially.

Re:Samba (1)

dasunt (249686) | more than 10 years ago | (#7761553)

BSD (OpenBSD at least) doesn't mount SMB/CIFS shares easily. 'shlight' is supposed to be able to do it in some circumstances, but as a newbie user I've had problems with certain setups.

(OpenBSD + Samba will export SMB/CIFS shares just fine)

I'm in a similar situation (2, Interesting)

Hitch (1361) | more than 10 years ago | (#7753568)

I've got developers who need a consistent home directory across several Unix and Windows boxes - we're using Samba *and* NFS - an ugly system at best. I'm currently in a situation where I can start over, more or less, so I'm looking at better options. Any suggestions are appreciated.

Re:I'm in a similar situation (4, Insightful)

David McBride (183571) | more than 10 years ago | (#7753809)

The way we do it is that we have some underlying file store running on unix machines. At the moment we've got a couple Sun machines with large RAID arrays.

Then, to provide access to clients, we use Samba as a bridge to the Windows desktops and NFS for trusted linux clients; untrusted hosts can use SFTP or, if they just need read access, HTTP.

Having multiple storage nodes on multiple sites synchronized is a SAN, not client access, problem. NFS just doesn't provide multiple-node functionality. NFSv4 (link [nfsv4.org] , link [umich.edu] ) may have some interesting features that could help; AFS [openafs.org] was designed with multiple sites in mind and does intelligent caching and has other useful features over NFS but does have some limitations; and then there's things like IBM's Storage Tank [ibm.com] which I haven't had a chance to look at properly yet.

Bottom line: if you have a flexible SAN infrastructure, you can use bridging nodes to provide access to the SAN tailored to whatever your clients require. The infrastructure is the hard part; with commodity packages like Samba, client support is a much simpler, separate issue.

Re:I'm in a similar situation (2, Interesting)

GaelenBurns (716462) | more than 10 years ago | (#7754482)

The infrastructure is the hard part; with commodity packages like Samba, client support is a much simpler, separate issue.

Exactly right. The client connections will all be done via samba... it's the infrastructure I'm asking about.

That being said, NFSv4 seems to still be in development, and we need something that is finished and ready for use now. Storage Tank sounds nice... but something tells me it's not free software. Free is good. Finally, AFS is the glory... but the documentation is horrible. We can find a number of how-tos, but they're all either out of date or useless. Have I missed one?

drbd (5, Informative)

JimmyGulp (60100) | more than 10 years ago | (#7753643)

What about drbd? It's a mirroring thing, like RAID 1, over a network. This way the data is synchronised, and all you have to do is mount/share the data from the nearest server, by whichever means you want. Try http://drbd.cubit.at/ [cubit.at].

I think it can manage to re-sync everything when the network line comes back up, but I'm not sure.

Re:drbd (3, Informative)

kzanol (23904) | more than 10 years ago | (#7754433)

What about drbd? It's a mirroring thing, like RAID 1, over a network

Won't help in this situation:
A drbd setup will keep one (or several) partitions synchronized between two servers. The problem is, one and only one server may access the device at a time.
drbd is useful for high-availability configurations where you need a standby server with current data that can take over if something happens to the primary server. It's most often used together with a cluster manager like heartbeat.
In the scenario described above, where you need concurrent access to the same data on several servers, drbd isn't yet usable.
Still, keep watching: development is definitely moving in a direction that should make this possible. Steps needed to make that happen:

  • Make drbd writeable on both servers
  • add a distributed file system like GFS
  • add a distributed lock manager
It'll be some time before drbd will be able to do all that.
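For reference, bringing up a single-primary drbd pair looks roughly like this. The resource name, device, and mount point are illustrative, and the exact commands vary between drbd releases, so treat it as a sketch:

```shell
# On both nodes: attach the backing disk and start replication for
# resource r0 (defined in /etc/drbd.conf).
drbdadm up r0
# On ONE node only -- current drbd allows exactly one writer:
drbdadm primary r0
mkfs -t ext3 /dev/drbd0
mount /dev/drbd0 /data
# The peer stays Secondary; a cluster manager such as heartbeat
# promotes and mounts it only after the primary fails.
```

The "one writer" step is exactly the limitation described above - until both nodes can be primary and a distributed filesystem plus lock manager sit on top, drbd mirrors data but doesn't share it.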

More Questions, Options, No Answers (5, Insightful)

4of12 (97621) | more than 10 years ago | (#7753726)


I'm sorry I can't address your question about good remote filesystems in the face of an unreliable network. My network has been relatively reliable, so that's been a decreasing concern. Perhaps network reliability will be less of a concern for you, too, in the future.

Lately, what I've been looking for is a remote filesystem that provides performance, security, and flexibility - the latter in reference to being able to log into someone else's desktop machine and easily get my home directory mounted, whether from a big server that's up 24x7 or from my desktop.

Some have dabbled with DCE/DFS [lemson.com], but I've heard it's slowly dying: ponderous to set up, and performance suffers.

SFS [nec.com] looks intriguing, but I haven't heard pro or con about its performance. It appears to be secure and flexible.

NFS is an old friend and, yes, if the network or the server dies, a lot of local sessions will hang interminably with 'NFS server not responding'. But this doesn't happen as much as it did 5 years ago.

Right now we're running NFS v3, but the new NFSv4 [nfsv4.org] looks like it has a better security model.

Finally (and you shouldn't even think about this if network reliability is an issue), a simple block service like iSCSI [digit-life.com] looks promising as a way of interchangeably moving around from desktop to desktop and getting your same home directory no matter where you are. What's more, you could conceivably even get your own flavor of OS booting, be it Red Hat 9, Win2K, XP, Gentoo, etc. I don't know about its security; it's heavily dependent on a reliable, high-performance network, but it looks like a good way to get the most storage for your dollar (NAS instead of SAN).

Re:More Questions, Options, No Answers (1)

David McBride (183571) | more than 10 years ago | (#7753838)

"I'm sorry I can't address your question for good remote filesystems in the face of an unreliable network."

I suspect what he means is that the core network within each site is reliable -- just that the linkage between the two or more storage nodes he has to manage may *not* be, and he wants to be able to recover gracefully in the event of the inter-site link going down and back up again.

Re:More Questions, Options, No Answers (1)

GaelenBurns (716462) | more than 10 years ago | (#7754625)

Right on the head.

The LAN is as stable as any switched network. It's the T1s connecting the sites that could conceivably go down and back up.

Re:More Questions, Options, No Answers (1)

GaelenBurns (716462) | more than 10 years ago | (#7754591)

I only mention the unreliable network because technically, it is. Just like any net connection that I've ever heard of, a T1 does not guarantee 100% uptime. We should see an uptime of greater than 99.98%.

Re:More Questions, Options, No Answers (1)

earlytime (15364) | more than 10 years ago | (#7772991)

If this is the case, it sounds like your real problem is with the network. For all your time, effort, and money, you may be better served by tackling that first. A few extra links and a routing protocol can dramatically increase the reliability of a LAN/WAN environment.
Consider the Internet: how often do you hear about WAN links going down? Considering the number of links involved in a typical Internet path, very rarely. Note that 99% reliability means about 3.65 days/year of downtime. If you cross an average of 3 WAN links to get to Slashdot, it should be unreachable for roughly a week and a half each year, which is clearly not the case.
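The downtime arithmetic is easy to check with shell integer math:

```shell
# Downtime implied by 99% availability: 1% of a year's hours.
hours_per_year=$((365 * 24))              # 8760
downtime_hours=$((hours_per_year / 100))  # 87 hours (integer division)
echo "$downtime_hours hours, or about $((downtime_hours / 24)) days/year"
```

So a single 99%-reliable link already buys you several days of outage a year; chaining links multiplies that.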

Re:More Questions, Options, No Answers (1)

Great_Jehovah (3984) | more than 10 years ago | (#7757436)

SFS performs quite well. I recommend it but AFAICT it's only for linux and bsd ATM.

security (2, Informative)

1isp_hax0r (725178) | more than 10 years ago | (#7753800)

Since the office buildings are distant, chances are that there is an untrusted connection between them. Don't forget to send data through secure tunnels (e.g. an ssh [openssh.org] tunnel).

Re:security (1)

GaelenBurns (716462) | more than 10 years ago | (#7754678)

Already done. All communication between sites is piped through a secure tunnel.

three letters (1)

kipple (244681) | more than 10 years ago | (#7753831)

CVS.

Re:three letters (2, Funny)

OrangeSpyderMan (589635) | more than 10 years ago | (#7754485)

"CVS is not the answer, CVS is the question - the answer is no!"

Can't remember where I saw that quote first (LKML??) but I think it sums things up quite nicely... :-)

Re:three letters (1)

BigGerman (541312) | more than 10 years ago | (#7756088)

Yes, CVS is what I would use.
Especially with that Windows client, Tortoise(?), which is embedded into Windows Explorer, so there is no ugly client to learn. Nice color-coded folders and files: green - current, red - updated, ? - new.

Simple : 9p (2, Interesting)

DrSkwid (118965) | more than 10 years ago | (#7753837)


9p [bell-labs.com]

Re:Simple : 9p (1)

GaelenBurns (716462) | more than 10 years ago | (#7754728)

Oh yes... we're well aware of Plan 9. As a matter of fact, we lament it almost constantly. Correct me if I'm wrong, but there is no linux version of Plan 9. It's still an OS, right? I suppose that if I were convinced this was the only solution I could always throw another pair of servers into the mix.

Re:Simple : 9p (2, Interesting)

DrSkwid (118965) | more than 10 years ago | (#7754989)

9p is a protocol not an OS, it is OS agnostic.

I have a python 9p server daemon and clients.

The question was "what's the best remote file system?" - 9p is the answer.

So why not use what you have already (4, Interesting)

Halvard (102061) | more than 10 years ago | (#7753839)

Then you don't have to synchronize.

If you haven't already installed SSH on a machine in both locations, do so.

Follow the "Setting up Samba over SSH Tunnel mini-HOWTO" by Mark Williamson [ibiblio.org]. Then you can use the server on each side to share out the files from the other side without changing anything about how your users work. It's very simple to set up: 3 steps on each side, plus adding it to a login script or a drive mapping on the individual machines. You should be ready in 5 minutes.
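The core of that mini-HOWTO is a single ssh port forward; roughly like this, with placeholder hostnames and share names:

```shell
# Run on the local office's server (root, since it binds port 139).
# -f: background after auth; -N: no remote command; -C: compress --
# SMB is a chatty protocol, so compression helps over a T1.
ssh -f -N -C -L 139:localhost:139 root@office2.example.com

# Linux clients can then mount via the local end of the tunnel,
# forcing the connection to loopback:
smbmount //office2srv/share /mnt/office2 -o ip=127.0.0.1,username=alice
```

Whether this is a good idea over a flaky link is debated below, but that is the shape of the setup.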

If you still want to synchronize, there are tons of tools to do that, including Unison [upenn.edu].

Oh. My. God. (4, Informative)

schon (31600) | more than 10 years ago | (#7754315)

Setting up Samba over SSH Tunnel

For a quick-and-dirty solution for one or two users, over a reliable connection, this might be sufficient, but for the poster's problem, it would be a nightmare.

TCP over TCP is a bad idea because it amplifies the effect of lost packets.. two or three dropped packets in a short period of time will result in a cascade failure as each TCP stream attempts to compensate for the loss.

You can find all the gory details here [sites.inka.de] .

Not quite (4, Insightful)

wowbagger (69688) | more than 10 years ago | (#7754980)

SSH port forwarding isn't "TCP over TCP" - the SSH client isn't simply wrapping TCP packets and sending them over the wire; it is sending only their contents.

Suppose we have 2 computers, A and B, connected via SSH, and forwarding some service. A sends a block of data to B.

The sequence is NOT:
A packages data into TCP packet.
SSH encrypts packet and packages it into another TCP packet.
B receives SSH packet and acks it.
B decrypts packet.
B acks that packet.

The sequence IS:
A packages data into TCP packet
SSH receives and acks packet.
SSH encrypts PAYLOAD of TCP packet
SSH sends packet
B receives SSH packet and acks it
B extracts data.
B packages data into local TCP packet, sends it, acks it locally.

So you don't get into the cascade failure mode for TCP over TCP.

Now, if you use your SSH connection to forward PPP data over the wire - THEN you are getting into TCP over TCP because the SSH session is actually forwarding the PPP packets.
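Concretely, the two cases look like this (hostnames are placeholders, and the pppd line is only illustrative of the pattern):

```shell
# Plain port forwarding: ssh carries only the data stream's payload,
# so each hop keeps its own independent TCP retransmission logic.
# Not TCP-over-TCP.
ssh -L 8139:fileserver:139 user@gateway.example.com

# PPP over ssh: whole IP packets (with their TCP headers) ride inside
# the ssh TCP connection, stacking two retransmission layers.
# This IS the TCP-over-TCP case the linked article warns about.
pppd updetach noauth pty "ssh root@gateway.example.com pppd notty"
```

The difference matters exactly when packets get dropped: in the first case one TCP layer recovers; in the second, both try to at once.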

Re:Oh. My. God. (0)

Anonymous Coward | more than 10 years ago | (#7781243)

Nice job moron. You just showed the world how stupid you are.

Here's a tip:

If you don't understand something, don't go around telling others about it.

Re:So why not use what you have already (1)

phildog (650210) | more than 10 years ago | (#7754493)

Nice howto. One thing you may want to consider adding is the option to use SSH without any encryption. I think you may have to recompile sshd to get that support.

I set up what you are talking about with cygwin's ssh connecting to a linux box at home, and the connection was slow to the point of being almost unusable. That SMB is one chatty protocol. I did not try the SSH w/o encryption thing though.

Re:So why not use what you have already (1)

Cheeze (12756) | more than 10 years ago | (#7760650)

If you have the CPU to spare, you might try compressing the data with ssh's -C switch. I know that over dialup it makes ssh sessions slightly more responsive.

AFS is what you want (5, Informative)

LoneRanger (81227) | more than 10 years ago | (#7753954)

Frankly AFS is what you want and what you need. I used to work at a site with over 26,000 AFS users and it was a magical system. It is hard to setup, I'll grant you that, but only the first time. After you've got it down once it's old hat after that.

My biggest issue when I was setting it up was Kerberos integration - it can be tricky, but the guys on the OpenAFS mailing lists are incredibly nice and knowledgeable. Another issue: daemons that like to write to user home dirs won't work really well unless you find a way to have them get an AFS token or Kerberos ticket.

If I were you I would SERIOUSLY consider AFS, don't listen to those who would say it's old and outdated, because it's not. OpenAFS is being actively developed and new features are being added all the time.

Feel free to email me if you want and I'll discuss the advantages/disadvantages further or help you get resources to set up your AFS system.

Re:AFS is what you want (4, Informative)

TilJ (7607) | more than 10 years ago | (#7754319)

I agree, though from the other side of the fence: I have an existing Kerberos realm and am finding the AFS integration difficult ;-)

There are two current stumbling blocks for me that likely won't affect the original poster:

* OpenAFS doesn't run nicely (read: at all) on FreeBSD (tested with -STABLE on i386 and -CURRENT on sparc64). Doesn't matter if you're running it on Linux, of course.

* AFS uses its own filesystem rather than riding on top of the OS. That's fine, and better for security, but it sucks if you want to do something fancy like distribute the same filesystem via Samba, NFSv3 and AFS simultaneously.

To me, AFS is much more appealing than NFSv4. For one, NFSv4 is fairly rare - the implementations are basically for testing purposes and there's a limited set of operating systems supported. The extra features that AFS has (volume management, failover, ease of client maintenance, intelligent client-side caching, etc) make it a win for me.

Re:AFS is what you want (1)

Beowabbit (306889) | more than 10 years ago | (#7754885)

AFS uses its own filesystem rather than riding on top of the OS. That's fine, and better for security, but it sucks if you want to do something fancy like distribute the same filesystem via Samba, NFSv3 and AFS simultaneously.
Another side effect is that the semantics of AFS aren't the same as the semantics of traditional Unix filesystems. For instance, there are some permissions issues that can make building/installing software onto an AFS-served filesystem a hassle. I had to administer (commercial) AFS once, and it's not an experience I'd like to repeat.

Re:AFS is what you want (2, Informative)

LoneRanger (81227) | more than 10 years ago | (#7755171)

I agree there can be /some/ issues with installing software, but anything worth its salt shouldn't break too badly. Some quick permissions fixing can be done, and if you have the top-level directory permissioned right then it isn't an issue. I wouldn't ever suggest using AFS as your root fs. :)

Even then, the poster isn't asking for a software repository; they're asking for a networked filesystem that provides some sort of offline use. Which is exactly the niche AFS fills.

Re:AFS is what you want (3, Informative)

LoneRanger (81227) | more than 10 years ago | (#7755271)

* OpenAFS doesn't run nicely (read: at all) on FreeBSD (tested with -STABLE on i386 and -CURRENT on sparc64). Doesn't matter if you're running it on Linux, of course.

How long has it been since you tried this? I seem to remember the OpenAFS team fixing a lot of their FreeBSD issues. I know OpenBSD recommends OpenAFS as a network file store. Even then you could try ARLA (?). Should be able to Google for it. IIRC Arla fully supports FreeBSD as both a client and a server.

* AFS uses its own filesystem rather than riding on top of the OS. That's fine, and better for security, but it sucks if you want to do something fancy like distribute the same filesystem via Samba, NFSv3 and AFS simultaneously.

Samba is supported somehow IIRC, but I KNOW that AFS over NFS is supported because it's in the docco... Appendix A. Managing the NFS/AFS Translator [openafs.org]

Re:AFS is what you want (1)

TilJ (7607) | more than 10 years ago | (#7756701)

How long has it been since you tried this? I seem to remember the OpenAFS team fixing a lot of their FreeBSD issues. I know OpenBSD recommends OpenAFS as a network file store. Even then you could try ARLA (?). Should be able to Google for it. IIRC Arla fully supports FreeBSD as both a client and a server.

It's been over a month, I think. My only -CURRENT server is a sparc64 and that could be revealing problems that don't occur on i386. Arla has been marked as "broken - does not build" in the ports tree for some time now (months, at least). I've been struggling with getting AFS onto FreeBSD since last Spring: I don't want to migrate the Vinum file server to another O/S.

Samba is supported somehow IIRC, but I KNOW that AFS over NFS is supported because it's in the docco... Appendix A. Managing the NFS/AFS Translator

Ah! Thanks for the link. At first glance it looks like it needs a second server to act as the translator, but definitely sounds like it does the job.

Re:AFS is what you want (2, Interesting)

pjl5602 (150416) | more than 10 years ago | (#7754476)

After you've got it down once it's old hat after that.

I remember the first time that I did a 'vos move' on an AFS server and the volume moved from one server to the other without any downtime for the users. Talk about an admin's dream! :-)
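For anyone who hasn't seen it, that's part of AFS's vos volume utilities. The server names, partitions, and volume names below are made up:

```shell
# Move volume user.alice between file servers while clients stay
# connected -- the cache manager follows the volume transparently.
vos move user.alice fs1.example.edu /vicepa fs2.example.edu /vicepb

# Define a read-only replication site and push a snapshot to it:
vos addsite fs2.example.edu /vicepa proj.tools
vos release proj.tools
```

The release step is the "test on read/write, then publish to read-only replicas" workflow described earlier in the thread.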

Re:AFS is what you want (1)

GaelenBurns (716462) | more than 10 years ago | (#7754836)

This is the kind of story that warms my heart. Thank god for Linux.

Re:AFS is what you want (1)

LoneRanger (81227) | more than 10 years ago | (#7755294)

Or backup volumes? :)

Oh how wonderful to be able to snapshot and backup an entire volume without user interruption.

How about Lustre? (5, Informative)

Anonymous Coward | more than 10 years ago | (#7754103)

Lustre [lustre.org] is something we're looking at rolling out for user home directories, although a few labs already have 100TB+ filesystems using it. You get redundant servers at all levels (which deals with the synchronization problems), and best of all, you can stripe all your existing disks to create one logical disk. Think LVM for network-connected machines. It's pretty fast, too.

Yikes! (1)

fm6 (162816) | more than 10 years ago | (#7756391)

Part of Lustre appears to be a new local journalling filesystem called OBDFS. Pretty interesting in itself, though they say little about it.

Worth noting that ClusterFS is advertising Lustre as a pre-1.0 product. Probably not a current option for anybody who can't afford a big support contract.

Re:Yikes! (2, Informative)

Anonymous Coward | more than 10 years ago | (#7756700)

OBDs run on top of ext3 (well, sort of - it's a hacked ext3, but it basically doesn't add any really new features on the journalling side).

Lustre is a lot more stable than it used to be :)

The failover is an "in development" feature. I know people who claim to be using it, but I wouldn't count on it working when you need it. It's just using clumanager (or similar) and a service start on the "failover" machines. It really doesn't do all that much, and requires some heavy scripting and hand-holding to get it to work at all.

It's a pretty good "in cluster" solution; I wouldn't recommend it (today, at least) as a remote filesystem option.

Re:Yikes! (0)

Anonymous Coward | more than 10 years ago | (#7757686)

Lustre is now at 1.0.2, and I know several places using it without a support contract. As for failover, RAID 0+1 support for LOVs will be coming soon, which should eliminate the need for OST failover. However, MDS failover is still needed.

SMB (1)

gngulrajani (52431) | more than 10 years ago | (#7754254)

I was surprised to hear that where I work [uni-saarland.de], they are dropping NFS in favor of SMB.
The reason I was given was that SMB has better permissions/access rights across all platforms.
-greg

Re:SMB (1)

MarcQuadra (129430) | more than 10 years ago | (#7788051)

The problem is getting your *NIX machines to play nice with the SMB server's permissions. NFS permissions transfer smoothly (dangerously smoothly, if you ask me!). I couldn't get my Linux boxen to play nice with my Samba servers, though; they just didn't respect the permissions like they should have.

Maybe I did something wrong, but now that I've got the *NIX boxes using NFS and the Win32 boxes getting the same data via SMB, all is well.

Anyone else have this problem? Anyone out there mounting home folders from SAMBA servers on Linux clients? Any tips on how to get the linux SMB client to behave? Should I just wait for the totally revamped SMB client in 2.6?
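For comparison, here's roughly how a Samba-served home directory gets mounted on a 2.4-era Linux client. The server, share, and account names are placeholders, and the uid/gid/mask options are usually where the permission surprises hide, since smbfs maps ownership locally rather than honoring the server's:

```shell
# One-off mount, forcing local ownership and permission masks:
smbmount //fileserver/alice /home/alice \
    -o username=alice,uid=alice,gid=users,fmask=0644,dmask=0755

# Or the equivalent /etc/fstab entry:
#   //fileserver/alice /home/alice smbfs username=alice,uid=alice,gid=users 0 0
```

If the uid/gid options are omitted, everything tends to show up owned by the mounting user, which may be the "didn't respect the permissions" behavior described above.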

Beware of Linux Kernel-Samba (2, Informative)

CaraCalla (219718) | more than 10 years ago | (#7754286)

I advise against Linux kernel Samba, at least if you want your clients (be they workstations or servers) to have some uptime. After some days, possibly weeks, it randomly stops working, with all programs holding open file descriptors on the Samba share hanging. If you kill (-9) them, or the smbmount process, they go zombie. Any other program that tries to access the former mount point immediately goes zombie as well (your shell checking what's wrong, updatedb, ...). After several more days I have seen those zombie processes disappear again, but not always.

If you reboot daily anyway there shouldn't be any problem.

All in all not a satisfactory situation.

Tested with:
- Samba 2.2.3a (Debian Woody) as Server
- Kernel-Samba 2.4.* as Client

But perhaps I missed something...

Edgar

Re:Beware of Linux Kernel-Samba (0)

Anonymous Coward | more than 10 years ago | (#7754464)

upgrade to samba 3.x

Re:Beware of Linux Kernel-Samba (2, Informative)

lkaos (187507) | more than 10 years ago | (#7757707)

So you're using smbfs (it's not called Kernel-Samba) plus Samba 2.2.3a.

Both of those are very old and unmaintained. You should try out this setup with Samba 3.0 and cifsfs (available for 2.4 or 2.6).

If you still have this problem, submit a bug report to Samba.
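For what it's worth, the two clients are mounted differently; a rough sketch (server name, share, user, and mount point are all placeholders):

```shell
# Old smbfs client (the one with the hangs described above):
#   mount -t smbfs -o username=alice //server/share /mnt/share

# Newer cifs client, paired with Samba 3.0 on the server side
# (needs root and the cifs kernel module):
mount -t cifs //server/share /mnt/share -o username=alice
```

The cifs client also understands the same credentials=/path/to/file option style as smbmount, which keeps passwords out of /etc/fstab.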

Re:Beware of Linux Kernel-Samba (1)

SaDan (81097) | more than 10 years ago | (#7757725)

Yeah, you're probably missing something.

I use smbfs mounts all over the place where I work, for weeks/months on end. Have not seen what you describe at all.

This is with varying versions of 2.4 Linux kernels (on Red Hat and Slackware systems), varying versions of Samba, frame-relay, straight T1, and VPN connections between more than twenty sites.

A Samba mount will hang when a link goes down, although sometimes Samba will recover (if the outage is only for a short period of time).

Re:Beware of Linux Kernel-Samba (1)

treat (84622) | more than 10 years ago | (#7777286)

I use smbfs mounts all over the place where I work, for weeks/months on end. Have not seen what you describe at all.

I use smbfs a lot too, mostly without problems, but I still hit issues every once in a while. For example, if I copy a large amount of data (say, with cp -a) from an smbfs mount to a local filesystem, then run df in another window and Ctrl-C it when it hangs for a few seconds on the smbfs mount, a couple dozen files will fail to copy with an I/O error before the copy continues. When I ran into this (totally repeatable) problem, I decided that smbfs is clearly not stable enough for any important use.

First of all, you should be using a VPN ... (2, Insightful)

scum-o (3946) | more than 10 years ago | (#7754386)

You should be using a VPN if you have two offices and two firewalls, unless your Debian machines ARE your firewalls; then NFS or Samba would be fine. However, machines will still lock up or be slow if the internet gets slow or you drop the connection from one place to the other.

samba+openssh+putty for your win32 clients (1, Redundant)

blumpy (84889) | more than 10 years ago | (#7754709)

Simple. You have Samba already; set up OpenSSH on a machine with NICs on both the inside and outside of your network.

On your Win32 clients, set up PuTTY (use the latest dev version) with a tunnel from local port 139 to port 139 on your fileserver, then map the network drive in Windows as \\127.0.0.1\sharename

That's it! A free solution.
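The same tunnel can be sketched from the command line with OpenSSH (or PuTTY's plink with the same -L syntax); gateway.example.com and fileserver are hypothetical names:

```shell
# Forward local port 139 to the Samba server behind the SSH gateway,
# without running a remote command (-N):
ssh -N -L 139:fileserver:139 user@gateway.example.com &

# Then, on the Windows client:
#   net use Z: \\127.0.0.1\sharename
```

One caveat: on Windows the native SMB stack may already be bound to local port 139, in which case binding the tunnel to a loopback alias such as 127.0.0.2 is a common workaround.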

FAT16 (4, Funny)

turg (19864) | more than 10 years ago | (#7755866)

I think FAT16 is the best remote filesystem -- I like it best when FAT16 is as remote from myself as possible.

Re:FAT16 (0)

Anonymous Coward | more than 10 years ago | (#7793836)

So you like FAT12 better than FAT16?

It all sucks. (1)

eGabriel (5707) | more than 10 years ago | (#7755940)

Samba3 is an amazing piece of software, don't get me wrong. Yet it exists to play patty-cake with Windows, and neither the Windows nor the Linux side gets what it really wants. The NFS on the table doesn't look terrible, but what we have available now is pretty unusable. AFS, Coda, etc. probably aren't going to be a good solution either.

I am starting to get interested in whatever Novell has that can save us from this mess. Of course, something free would be best, some middle ground that any OS can implement without losing their own brand of authentication, roles, acls, file attributes, etc. Why this is still a problem for us creeping up on 2004 escapes me.

Unison + SAMBA (2, Informative)

obi (118631) | more than 10 years ago | (#7756969)

It sounds to me like you're trying to connect two servers on different locations, which then serve out the files out to the clients through samba. And the connection between those offices might drop.

Maybe it's worth considering Unison - it's built to run over SSH, and is like a two-way rsync. It keeps state on both sides, and you can set it up so it automatically/regularly updates each side with the changes from the other. There's a window for conflicting updates, that's true, but you'd also have that with InterMezzo or Coda when they're in disconnected mode. Additionally, Unison is completely userspace; it doesn't care what filesystem it's running on. And there's a Windows/MacOSX port too, IIRC.

And hey, it's only an apt-get away :) - it's in Debian.
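A minimal Unison profile for that two-server setup might look roughly like this (hostname and paths are made up; check the Unison manual for the full option list):

```
# ~/.unison/offices.prf  -- hypothetical profile
root = /srv/share
root = ssh://server2.example.com//srv/share
batch = true    # don't prompt; skip conflicts for manual review
times = true    # propagate modification times
```

Then a cron job on one server just runs `unison offices` every few minutes.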

what they all lack is clientside encryption (1)

graf0z (464763) | more than 10 years ago | (#7757635)

The only remote filesystem I found where the en-/decryption of files is done on the client side is TCFS [www.tcfs.it]. Unfortunately, it seems to be unmaintained (the last news is 13 months old, and the Linux version still requires kernel 2.2).

All other "encrypting remote filesystems" encrypt only the file transfer, not the file storage (AFS, or - if I understood the FAQ correctly - SFS [fs.net]). So the fileserver admin (or an intruder or trojan) is able to read the served files in cleartext.

What's required is a remote filesystem where the clients do not need to trust the server nodes for data integrity and privacy. If I did not miss something (please tell me!), the only option nowadays is stacking a local encrypting fs on top of a remote fs, e.g. NCryptfs [columbia.edu] on top of NFS or AFS.

/graf0z.

Re:what they all lack is clientside encryption (1)

toast0 (63707) | more than 10 years ago | (#7757828)

Judging from the way he wants to use the filesystem, I don't think encrypted storage would be necessary, and it probably wouldn't be convenient.

He's talking about using Samba to export the files to Windows clients anyhow, and I don't think Samba encrypts the file transfer, so I don't think the data is that sensitive. (He has mentioned that the link between the two sites is encrypted, though.)

If you have money.... (2, Interesting)

adam872 (652411) | more than 10 years ago | (#7758502)

...then I would consider building a SAN with replication. High-end storage solutions using HDS and/or EMC gear fix this problem by enabling remote block-for-block copying of data between identical arrays. Veritas also makes a product called Volume Replicator that does effectively the same thing. By the sounds of it, this would be out of your price range, but it would do the job (we have a 15TB data centre mirrored using EMC's SRDF and another one using Volume Replicator).

In terms of free ways to do it, it will really depend on how tightly synced the two offices need to be. If it's instantaneous, then you will need one master server with both sites pointing to it. Others have mentioned AFS, but that is also non-trivial. If the sync doesn't have to be instantaneous, then perhaps a regular rsync tunneled through SSH would do the trick. CVS may also help, depending on the data you have.

Re:If you have money.... (1)

La Fortezza (690838) | more than 10 years ago | (#7782326)

VVR doesn't allow for concurrent access at the secondary site(s). The replication is one-way, and while the volumes at the secondaries are started, Veritas strongly recommends that you do not access the filesystems on them.

sneaker.net (3, Funny)

turgid (580780) | more than 10 years ago | (#7762671)

Back in the day, we were forced to use sneaker.net (TM). It worked quite well, even on MS-DOS workstations with 512k RAM and an 80286 processor, and it still works to this day. Reliability is so-so and speed can be poor, but nowadays, with technological progress, transfer rates can be on the order of gigabytes per second, though latencies are large (tens of seconds up to several days). One downside was the propagation of viruses, but distribution of code across platforms as source, and proper protected-mode operating systems with selectable user privileges, make viruses less dangerous.

AFS documentation (5, Informative)

wik (10258) | more than 10 years ago | (#7767932)

As far as AFS documentation goes, I found the following documents useful when installing a new AFS cell/Kerberos realm earlier this month.

First, the AFS quick start guide on openafs.org (http://www.openafs.org/pages/doc/QuickStartUnix/auqbg000.htm) provided step-by-step installation instructions for the AFS server and client. Having been an AFS user for the past 7 years did help a bit.

Second, the quick start guide assumes you are using the kaserver included with OpenAFS. Everyone and their pet dog now recommends installing a real Kerberos 5 daemon instead. We chose Heimdal 0.6. The new O'Reilly book "Kerberos: The Definitive Guide" was invaluable for this. To put the two together, this impossible-to-find wiki page http://grand.central.org/twiki/bin/view/AFSLore/KerberosAFSInstall explains the changes to the quick start guide required to actually integrate Kerberos 5.

Finally, to get a PAM login that obtains both Kerberos 4 (for AFS) and Kerberos 5 tickets and tokens, we used pam-krb5afs (http://sourceforge.net/projects/pam-krb5/) as the login module.

Unfortunately, none of this is tied together in a single cohesive document, and I'm still trying to organize my notes. Overall, I was able to get the Kerberos realm and AFS up in about a day, while getting the PAM module and OpenSSH to play nicely took three to four days.