Red Hat announces GFS

PSUdaemon writes "Over at KernelTrap they have an announcement that Red Hat has released GFS under the GPL and offers it through RHN. This could potentially be a very substantial offering from Red Hat."

  • Compatibility? (Score:4, Interesting)

    by Grant29 ( 701796 ) * on Sunday June 27, 2004 @10:08AM (#9542175) Homepage
    Will it run on distros other than Red Hat? According to the linked page, it looks like it's only for Red Hat enterprise platforms.

    --
    11 Gmail invitations available [retailretreat.com]
    • Re:Compatibility? (Score:5, Informative)

      by Pros_n_Cons ( 535669 ) on Sunday June 27, 2004 @10:15AM (#9542252)
      "Will it run on distros other than Redhat?"

      Of course it will. It's GPL'd and headed for inclusion in the kernel, just like everything else from Red Hat. If you expect them to optimize it for SuSE, Mandrake, or Gentoo, you're mistaken, but sometimes they supply Debian packages for things they write. If it doesn't get accepted upstream for whatever reason, it's up to the vendors to supply the packages, not the writer of the software.
      • I don't think so (Score:3, Interesting)

        by Donny Smith ( 567043 )
        I don't think so.

        Red Hat's HA clustering software is also GPL but it doesn't run on other distros (and is not supported by Red Hat on other distros).

        The code itself is open source, that is true, but "Red Hat Enterprise Linux subscription [is] required" (http://www.redhat.com/software/rha/gfs/)

        • Re:I don't think so (Score:5, Informative)

          by Sunspire ( 784352 ) on Sunday June 27, 2004 @01:27PM (#9544042)
          Red Hat's HA clustering software is also GPL but it doesn't run on other distros (and is not supported by Red Hat on other distros).

          Of course Red Hat doesn't support other distros, but what makes you think the clustering software doesn't work on them? All the bits and pieces are available for download [redhat.com]. If you find any "if (distro != RH) exit()" code in the fully GPL'd cluster toolchain, feel free to remove it. There's no secret sauce to RHEL; it's all open source, and everyone is free to copy and modify the code.

          As of today there's already one distro that includes the newly GPL'ed GFS filesystem, Lineox [lineox.com]. And Red Hat will be working to get GFS up to spec for inclusion in the official Linux kernel, according to posts made to the kernel mailing list.

          The code itself is open source, that is true, but "Red Hat Enterprise Linux subscription [is] required"

          This just means that Red Hat is not interested in selling it to you unless you have a RHEL subscription. That $2,200 gets you GFS up and running on your RHEL cluster in a turnkey fashion, and gives you the option to purchase further 24/7 one-hour-response support contracts. You're free to assemble it all into a working system by yourself if you want.
          • Re:I don't think so (Score:5, Informative)

            by SuperQ ( 431 ) * on Sunday June 27, 2004 @02:11PM (#9544432) Homepage
            Actually, they do support other distros. Sistina Software, which was acquired by Red Hat, is down the street from my office. They still show SuSE as a supported distro.

            I am personally going to try installing GFS on some Debian systems for a U of M student group that recently got a donation of some used Fibre Channel disk.

            What I'm hoping for now is support for ia64 and other platforms. It would also be nice if GFS could now be ported to other OSes like AIX and Solaris.
  • executive summary? (Score:5, Insightful)

    by Speare ( 84249 ) on Sunday June 27, 2004 @10:08AM (#9542181) Homepage Journal
    Would it be too much to ask that the writeup blurb include a ten-word summary of what makes GFS any different from any other Linux-ready filesystem? Many sites get slashdotted, making most links unusable for 12 hours or more.
    • by Night Goat ( 18437 ) on Sunday June 27, 2004 @10:12AM (#9542223) Homepage Journal
      I'd be happy with just the mention that it IS a file system. I had no idea what GFS was until I read your post.
    • by Anonymous Coward on Sunday June 27, 2004 @10:14AM (#9542242)
      From http://sources.redhat.com/cluster/gfs/

      GFS (Global File System) is a cluster file system. It allows a cluster of computers to simultaneously use a block device that is shared between them (with FC, iSCSI, NBD, etc.). GFS reads and writes to the block device like a local filesystem, but also uses a lock module to allow the computers to coordinate their I/O so filesystem consistency is maintained. One of the nifty features of GFS is perfect consistency -- changes made to the filesystem on one machine show up immediately on all other machines in the cluster.

      and

      GFS has no single point of failure, is incrementally scalable from one to hundreds of Red Hat Enterprise Linux servers, and works with all standard Linux applications.

      Dunno if any other linux "file systems" have all that. :p
      • by elmegil ( 12001 )
        I want to see the numbers that prove the "high performance". This is a hard problem, and many others have tried to solve it, with pretty mixed results. I'm very skeptical that a newcomer to the project has solved it, but I'm willing to be convinced. But marketing speak claiming high performance is not convincing.
      • I'm confused. If it's a shared disk setup, how can there not be a single point of failure? If your FC/iSCSI disk box goes down, where's your storage gone? Obviously I've missed something, so if anyone would care to explain it to me I'm all ears...

        What I need is a simple mirroring system for two failover servers, without single point of failure. Nothing out there at the moment seems to be stable enough for this in production. It's very frustrating. DFS and FRS seem to work just fine under Windows, so why hasn't Linux got it?
        • by Anonymous Coward on Sunday June 27, 2004 @11:25AM (#9542896)
          I'm confused. If it's a shared disk setup, how can there not be a single point of failure? If your FC/iSCSI disk box goes down, where's your storage gone? Obviously I've missed something, so if anyone would care to explain it to me I'm all ears...

          Yes, your architecture can be designed with a single point of failure. In practice, however, you will want to connect this to a SAN. The SAN will be full of dually connected disks, have 2 main controllers and at least 2 power supplies, and be connected to two switch banks via 2 HBAs, with each server connected to each switch. For added safety, direct-connect another SAN to the first, and mirror all data between the SANs.

          But mainly, a good SAN is designed to be dually redundant from the ground up. Kind of like those (Fujitsu? Panasonic?) servers that have 2 standard mobo's in them and sync all data between cpu's, so if one dies the whole system is still alive.



          What I need is a simple mirroring system for two failover servers, without single point of failure.

          What kind of servers? The best method will depend on the type of server.



          It's very frustrating. DFS and FRS seem to work just fine under Windows, so why hasn't Linux got it?

          Because you haven't paid for it yet, be it in cash or time.

        • by Evo ( 37507 )
          It is presumably a shared _logical_ disk. Simply have failover on your FC node and it's not really a problem.

          For example, the boxes I used to work with were dual-host-adapter boxes with the RAID5 containers in a RAID1 setup. Each box+adapter has two NICs, going to different switches. Each box has three PSUs, going to different UPSs.

          Using a simple setup like this, there simply is no single point of failure. Apart from the room they are in, obviously.

          Cheap? No. Avoiding single points of failure completely
      • Cool. Faster-than-light communication. How are they doing this? Quantum entanglement?
      • OpenAFS provides the scalability, redundancy, and clustering that GFS does, but tends to assume that block devices are not shared. On one hand, OpenAFS is cross-platform, extremely secure, and very powerful. On the other, GFS allows the actual block devices to be shared.

        I still think that OpenAFS is generally a better solution in these areas, but who knows?
    • by Pros_n_Cons ( 535669 ) on Sunday June 27, 2004 @10:20AM (#9542307)
      Yes, here [com.com] is a news.com article on it.

      The GFS software lets files be stored in a single file system shared by numerous servers. The information can reside on servers themselves or on a storage area network.

      The software is used to speed data access and replicate information so it's still available even if individual machines fail. It's useful for the two conventional types of clusters: groups of machines linked so one can take over for another in case of a problem, and groups linked as part of a sprawling supercomputer.

      Red Hat GFS is tuned to work with Oracle's 9i RAC, database software that can spread across multiple clustered machines, and work with Red Hat's cluster software for ensuring services remain available despite computer problems.

      • Red Hat GFS is tuned to work with Oracle's 9i RAC, database software that can spread across multiple clustered machines, and work with Red Hat's cluster software for ensuring services remain available despite computer problems.

        Which makes it a direct competitor to Oracle's own GPLed Linux clustered filesystem, OCFS. Interesting.
        • by twenex ( 139462 )
          No, it's a shot across the bow of Veritas, not Oracle. Veritas offers the (rather good but very expensive) Storage Foundation Cluster File System. It is also used for running Oracle 9iRAC and other high availability applications.
    • by Alan Cox ( 27532 ) on Sunday June 27, 2004 @10:28AM (#9542396) Homepage
      I think the other people have covered the basics pretty well - plug lots of computers into one Fibre Channel (or possibly FireWire) disk or disk array.

      The second really interesting use is with virtualisation - imagine if you want all your S/390 virtual machines to share the same base file systems for efficiency (given the price IBM charge for mainframe disks ;)), or the same with UML, Xen, etc.

  • Newbie (Score:2, Interesting)

    by Wardini ( 608107 )
    What does GFS exactly do for you? Allow you to have your hard drive in another computer?
    • It looks to be a storage-area-network program, like Apple Xsan (and I think there's one called FibreShare (FiberShare?) or something)
    • by nounderscores ( 246517 ) on Sunday June 27, 2004 @10:17AM (#9542280)
      Say you want to create a webserver cluster that can host some big files and dynamic content and survive a slashdotting. No one machine can survive all of us hitting it for video and dynamic content at once, so you build your cluster so that the video is distributed over several machines, the webservers are distributed over some other machines, and the layers in between that decide which request goes to which physical hard drive holding a copy of the video are also made redundant.

      Now if, after running for some time, one of the machines gets coffee spilled on it and dies, GFS will automatically route around it. The result is that a slashdotter will not be aware of the failure, and still get the video.

      Meanwhile you can fix the problem and bring the downed machine back on-line again.
      • by cjsnell ( 5825 ) on Sunday June 27, 2004 @01:08PM (#9543875) Journal
        While you do have the basic idea down, your suggestion of a clustering FS isn't the best for your application. You are describing "vertical scaling", which GFS and clusters will be very good for. Web serving is not a good place for a cluster--"horizontal scaling" is how you scale most web sites and web applications. Typically, for web serving, you will have a block of content that can fit on the hard disk of the average web server.

        The best way to deliver this to the user (in this case, the slashdotter) would be to replicate this content onto a group of web servers using rsync(1). Each machine serves the content off of its local drive and can use its memory to cache/buffer the disk reads. In front of the web servers, you would put a wire-speed load balancer, such as a Nortel Alteon content switch [nortelnetworks.com] or a Foundry Networks ServerIron switch [foundrynet.com]. The load balancer, when configured properly, will take care of monitoring your web servers. It would take me too long to explain it here, but these switches are sophisticated enough that they can take failed webservers out of the load-balancing group for everything from a ping failure to a content failure.
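
        To make the replication step concrete, here is roughly what the push looks like in Python. A minimal sketch: the hostnames and paths are made up, it assumes rsync is installed everywhere, and a real deployment would add logging and retries.

            #!/usr/bin/env python
            # Push one content tree to every web server in the farm via rsync.
            # Hostnames and paths are hypothetical.
            import subprocess

            WEB_SERVERS = ["web1.example.com", "web2.example.com", "web3.example.com"]
            CONTENT_DIR = "/var/www/htdocs/"  # trailing slash: sync the contents, not the dir

            for host in WEB_SERVERS:
                # -a preserves permissions and times, --delete keeps the mirrors exact
                subprocess.run(
                    ["rsync", "-az", "--delete", CONTENT_DIR, host + ":" + CONTENT_DIR],
                    check=True,  # stop loudly if any push fails
                )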

        The key to designing web architectures is simplicity. Web serving does not need fancy clustering software or distributed filesystems. Very few web sites will not fit on the hard disk of your average 1U server. Keep it simple and put the intelligence up front in the switch.

        What is GFS good for? Many things! It would be great for a large computational cluster that had a very large (multi-terabyte) dataset and high disk I/O requirements. Anything that has a requirement to provide one or more very large files to a number of cluster nodes would be perfect for GFS.

        Chris
    • Re:Newbie (Score:4, Informative)

      by silas_moeckel ( 234313 ) <silas@@@dsminc-corp...com> on Sunday June 27, 2004 @10:20AM (#9542312) Homepage
      It allows you to really use SANs: it's a filesystem that allows multiple readers and writers on one real disk. No other standard Linux FS does this, but there are add-on ones with similar functionality that work better or worse on different hardware/OSes. Really, what it means is that you can have, say, 10 servers all serving up the same content by all looking at the same set of disks. This is extremely useful in tightly packed clusters that need to share large data sets (things that just don't all fit into RAM), or in HA/HP clusters serving up rich media like streaming video. (Most web data is too trivial to replicate for this to be worth it there, but when you're talking about TBs of streaming content it's non-trivial to use a lot of redundant disk or to replicate.)
  • Really? (Score:5, Funny)

    by cubicledrone ( 681598 ) on Sunday June 27, 2004 @10:17AM (#9542274)
    GFS on the GPL? From RHN? WTF?

    Normally I'd ask what's the BFD? but most people would just LOL. Then other people would probably want to know if it comes on DVD or FTP, but the FAQ will explain it JIT. Now what would be really cool would be a PDA that would run it with an RGB display, but it might need extra RAM.

    HTH.
    • OTE: BFD (Score:2, Funny)

      by tepples ( 727027 )

      Normally I'd ask what's the BFD?

      BFD is a library from the GNU project for manipulating ELF object code files, among other formats.

      (OTE: off-topic excursion, PWB)
      (PWB: posted without bonus)

    • Re:Really? (Score:3, Funny)

      by identity0 ( 77976 )
      IANAL, but AFAIK, GFS was formerly GPL, then became EULA'd proprietary SW, then was GPL'd by RH. If you'd RTFA, you'd know it's a NFS-like NAS FS used for HPC clusters.

      BTW, IIRC the CIA and NSA claim it was used by the AoE to make WMD to drop on GWB and the USA, and was only GPL'd after it was liberated in OIF. TGIF!
  • GFS is cool! (Score:5, Informative)

    by Anonymous Coward on Sunday June 27, 2004 @10:17AM (#9542277)
    GFS allows multiple redundant storage computers to serve a whole lot of other servers for data availability purposes. It isn't just another FS like EXT* or JFS. It's a transparent, networkable filesystem with failover and all of the other goodies needed to implement a hardcore enterprise-level solution for serving needs like million-hits-a-minute sites, or filesharing with 50,000 users...
    • what exactly does it give you that OpenAFS does not?

      Is it primarily that it is more useful for highly parallel computing systems so that the actual nodes can share the actual block devices?
      • Re:GFS is cool, but (Score:2, Interesting)

        by Anonymous Coward
        what exactly does it give you that OpenAFS does not?

        Performance. Simplicity. Adopting AFS is kind of an all-or-nothing proposition - we looked into it, but it would mean retraining a _lot_ of physicists, some of whom make computer geeks look like social geniuses.

        Is it primarily that it is more useful for highly parallel computing systems so that the actual nodes can share the actual block devices?

        Yes, and it does it relatively elegantly. PVFS, the main alternative, is fundamentally a kludge IMHO and co
        • However, until such a time as GFS supports MPI-IO "ROMIO", PVFS will be the cluster FS used on our cluster :-(.

          Sorry, ROMIO is only supported by the Microsoft Joliet extensions, which you will need to port to GFS....
  • GFS defined... (Score:5, Informative)

    by jarich ( 733129 ) on Sunday June 27, 2004 @10:20AM (#9542308) Homepage Journal
    From the website....

    Red Hat Global File System (GFS) is an open source, POSIX-compliant cluster file system and volume manager that executes on Red Hat Enterprise Linux servers attached to a storage area network (SAN). It works on all major server and storage platforms supported by Red Hat. The leading (and first) cluster file system for Linux, Red Hat GFS has the most complete feature set, widest industry adoption, broadest application support, and best price/performance of any Linux cluster file system today.

    Red Hat GFS allows Red Hat Enterprise Linux servers to simultaneously read and write to a single shared file system on the SAN, achieving high performance and reducing the complexity and overhead of managing redundant data copies. Red Hat GFS has no single point of failure, is incrementally scalable from one to hundreds of Red Hat Enterprise Linux servers, and works with all standard Linux applications.

    Red Hat GFS is tightly integrated with Red Hat Enterprise Linux and distributed through Red Hat Network. This simplifies software installation, updates, and management. Applications such as Oracle 9i RAC, and workloads in cluster computing, file, web, and email serving can become easier to manage and achieve higher throughput and availability with Red Hat GFS.

    Highlights

    Performance

    Red Hat GFS helps Red Hat Enterprise Linux servers achieve high IO throughput for demanding applications in database, file, and compute serving. Performance can be incrementally scaled for hundreds of Red Hat Enterprise Linux servers using Red Hat GFS and storage area networks constructed with iSCSI or Fibre Channel.

    Availability

    Red Hat GFS has no single-point-of-failure: any server, network, or storage component can be made redundant to allow continued operations despite failures. In addition, Red Hat GFS has features that allow reconfigurations such as file system and volume resizing to be made while the system remains on-line to increase system availability. Red Hat Cluster Suite can be used with GFS to move applications in the event of server failure or for routine server maintenance.

    Ease of Management

    Red Hat GFS allows fast, scalable, high-throughput access to a single shared file system, reducing management complexity by removing the need for data copying and maintaining multiple versions of data to ensure fast access. Integrated with Red Hat Enterprise Linux (AS, ES, and WS) and Cluster Suite, delivered via Red Hat Network, and supported by Red Hat's award-winning support team, Red Hat GFS is the world's leading cluster file system for Linux.

    Advanced features

    • Scalable to hundreds of Red Hat Enterprise Linux servers.
    • Integrated with Red Hat Enterprise Linux 3 and delivered via Red Hat Network; comprehensive service offerings, up to 24x7 with one-hour response.
    • Supports Intel x86, Intel Itanium 2, AMD AMD64, and Intel EM64T architectures.
    • Works with Red Hat Cluster Suite to provide high availability for mission-critical applications.
    • Quota system for cluster-wide storage capacity management.
    • Direct IO support allows databases to achieve high performance without traditional file system overheads.
    • Dynamic multi-pathing to route around switch or HBA failures in the storage area network.
    • Dynamic capacity growth while the file system remains on-line and available.
    • Can serve as a scalable alternative to NFS.

    Product Information

    • Supported on Red Hat Enterprise Linux AS, ES, and WS.
    • Red Hat Cluster Suite support available on Red Hat Enterprise Linux 3.
    • Support for a wide variety of Fibre Channel and iSCSI storage area network products from leading switch, HBA, and storage array vendors.
    • Mature, industry-leading, field-proven, open source cluster file system.
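
    The "Direct IO support" bullet above is, from an application's point of view, plain O_DIRECT; the main wrinkle is that buffers and write sizes must be block-aligned. A minimal sketch (the mount point is made up, and a real database manages alignment far more carefully):

        # O_DIRECT write: bypass the page cache, the way a database on GFS would.
        # O_DIRECT needs block-aligned buffers; an anonymous mmap is page-aligned.
        import mmap, os

        buf = mmap.mmap(-1, 4096)            # page-aligned, zero-filled 4 KB buffer
        buf[:11] = b"hello block"

        fd = os.open("/mnt/gfs/direct.dat",  # hypothetical path on a GFS mount
                     os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
        os.write(fd, buf)                    # length must stay a block-size multiple
        os.close(fd)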

  • `GFS' (Score:4, Interesting)

    by Anonymous Coward on Sunday June 27, 2004 @10:32AM (#9542432)
    I was reading only the other day about the Google File System. So there are now two acronyms, both GFS, both referring to a distributed file system. That's not going to get confusing. Nope, not at all.
    • GFS has existed for much longer than the Google FS. Red Hat just bought it from Sistina and made it GPL (again).

      There's also an "OpenGFS" project that was forked from GFS when it was open source. That project seems dead (last update in 2003).
    • I was reading only the other day about the Google File System. So there are now two acronymns which are both GFS

      Since the other GFS came first, I suggest Google rename their system 'GooFS'. The marketing guys'll love it!
  • by RAMMS+EIN ( 578166 ) on Sunday June 27, 2004 @10:32AM (#9542433) Homepage Journal
    Are there any distributed filesystems that don't have serious issues?

    I mean, NFS has issues with security (relying on numeric user IDs sent by the client is a nightmare). Locking is problematic. Different versions have severe compatibility issues.

    I forget the issues with AFS, but its successor, Coda, seems not very mature, although it is one of the more promising filesystems out there. InterMezzo is a more complete and robust implementation of the Coda feature set, but is Linux-only.

    SFS looks very promising (simple, but effective), but requires NFSv3 clients and servers to interact with the kernel.

    None of these filesystems allows regular users to access remote filesystems (superuser privileges are required for mounting) like with FTP.

    What's so hard about getting this stuff right? And can we please have kernels that support userspace filesystem drivers (or, better, any drivers)? (Yes, I know about LUFS and FUSE).

    Ok, rant over. Thoughtful comments, corrections and pointers appreciated.
    • This [sourceforge.net] is pretty nifty. It's more of an admin tool than a general use tool though. It only requires some scripts running on the server side. The client side needs a kernel module.
      • See, that's just what I mean.

        We already have SSH, and it can be used for accessing remote files (e.g. through the sftp command). All there is to making it a remote filesystem is to write a kernel module. Locking works. Authentication works. With a little extra effort, a generic system can be set up to allow for disconnected operation over any filesystem.
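
        The user-space half of that already exists. For instance, with the paramiko library you get SSH file access that behaves like local file objects; the kernel module would just translate VFS operations into calls like these (host, user, and path are made up):

            # Read a remote file over SFTP -- the user-space part of an "SSH filesystem".
            # Host, user, and path are hypothetical; requires the paramiko library.
            import paramiko

            client = paramiko.SSHClient()
            client.load_system_host_keys()
            client.connect("server.example.com", username="alice")  # key/agent auth

            sftp = client.open_sftp()
            with sftp.open("/home/alice/notes.txt") as f:  # acts like a local file object
                print(f.read().decode())

            sftp.close()
            client.close()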

        Of course, using SSH is heavy on resources, so it would still be better to have the encryption optional.

        • Yeah, something like smbfs for mounting samba shares. A lot of times, you don't need a lot of speed and you just want to allow mounting without adding a new service. Something running off ssh would be great.

          The fish:// IO slave in Konqueror does this, allowing you to browse a remote SSH account like it was a local folder, but I don't think you could mount it as a normal home directory or filesystem.

        • Of course, using SSH is heavy on resources, so it would still be better to have the encryption optional.

          No problem! From the FAQ:

          Is it possible to use another command instead of ssh?

          Yes. See --cmd shfsmount option.


          You can tell ssh to not use encryption or use some other command entirely as the transport.
    • Many people believe that the salient problem with AFS is that it violates unix semantics. AFS has a program called "fs" that facilitates dealing with metadata like permissions, etc. For example, chmod doesn't do anything in the AFS environment; you need "fs sa" instead.

      On the other hand, if GFS doesn't do something intelligent about security, then we're left with the same fundamental problem that NFS has. Namely, we need to presume that it operates within a local environment in which all users on the i

      • If the network block devices are shared between nodes, the only way you can prevent me from mounting them with my laptop is to have a completely separate network for the storage devices. Even so, physical security will be extremely important.

        Personally, I don't think that GFS *can* do much about security, nor do I think it is a replacement for AFS. GFS may ONLY be useful for redundant cluster applications, such as the mentioned Oracle 9i RAC or for PVM/MPI stuff.

        So for that sort of thing, GFS might be g
    • by Salamander ( 33735 ) <jeff AT pl DOT atyp DOT us> on Sunday June 27, 2004 @12:22PM (#9543423) Homepage Journal
      None of these filesystems allows regular users to access remote filesystems (superuser privileges are required for mounting) like with FTP

      No, and they don't cook your dinner for you either, but if that's what you're expecting then you're completely missing the point of what a cluster filesystem is for. Granted, the name "Global File System" is a misnomer, but it has been a misnomer for several years now and if you have anything more than a dilettante's interest in this you should know what GFS really does.

      What's so hard about getting this stuff right?

      Yeah, everything's easy when you're not the one doing it. Tell me what you do, and I'll tell you how wimpy that is. If you think that maintaining consistency across multiple machines in a cluster without compromising performance is easy, you're a fool. If you think that high availability of any form is easy, then you're an idiot. If you think putting those two together doesn't lead to an exponential increase in complexity and hence difficulty, you're a moron.

      If you want a filesystem stub (not really a complete filesystem) that lets you access files stored half-way around the world over a standard protocol, look into one of the many efforts based on WebDAV. If you want a true global filesystem, look into OceanStore so you can appreciate some of the problems that are involved. If you want to be able to change the filesystem namespace without being root, look into Plan 9. Do your own googling. None of those are what GFS is about.

      • Indeed, GFS is not a networked filesystem like those I have been mentioning. Call my post off-topic if you want, but this is the closest topic that has come up on /. in a long time, that's why I posted here.

        Also, I did not mean to suggest that cooking up a distributed filesystem with good consistency and performance is easy, just that, seeing how long people have been at it, I would expect the state of the art to be a lot better than it is now. It's not like distributed filesystems aren't useful, so there
        • The problems that face a distributed filesystem are well known, and solutions can be found in any good book on distributed (file)systems.

          Would be found in any good book on distributed filesystems, you mean. Good solutions are not known for the general wide-area case, and therefore the good books have not been written. Anybody who seriously studies this area - as I do - has to rely on tracking down the relevant papers, often by finding them in the bibliographies of subsequent papers. Sometimes the sear

          • As I am interested in this, would you mind dropping me a line using the contact form on my website [inglorion.net]?

            I would think that leases (the server will send you updates for some time; if you keep the file cached longer, you renew the lease) do a pretty good job at maintaining consistency without being prohibitively resource-intensive. Of course, it depends on the way the content is used, so the best solution is probably flexibility: specify the best behavior in case the system doesn't guess it.
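
            In code, a lease is little more than a timestamp guarding a cache entry. A toy sketch of the client side, simplified in that a real lease also obliges the server to push invalidations while it holds:

                # Toy lease-based cache: an entry is trusted only while its lease
                # holds; after expiry the client renews (here: refetches) it.
                import time

                LEASE_SECONDS = 30

                class LeasedCache:
                    def __init__(self, fetch):
                        self.fetch = fetch   # function that actually reads from the server
                        self.cache = {}      # path -> (data, lease_expiry)

                    def read(self, path):
                        entry = self.cache.get(path)
                        if entry and time.monotonic() < entry[1]:
                            return entry[0]                  # lease still valid
                        data = self.fetch(path)              # expired: refetch, new lease
                        self.cache[path] = (data, time.monotonic() + LEASE_SECONDS)
                        return data

                # usage sketch: cache = LeasedCache(lambda p: open(p, "rb").read())
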
  • by dekeji ( 784080 ) on Sunday June 27, 2004 @10:32AM (#9542441)
    GFS has a number of useful applications. But I think the times where you could design your enterprise around the idea of a globally consistent file storage system are over: enterprises are getting more flexible, more decentralized, and people would prefer not to have to deal with IT staff over issues such as file space and permissions. And they can avoid it--since many of them make the purchasing decisions.
    • Google relies on their own custom filesystem that provides similar features: massively distributed and scalable, supporting clusters 3 orders of magnitude higher (100,000 nodes) than Red Hat GFS. Further, many life sciences companies have very large computing problems requiring large amounts of storage and hundreds of nodes to solve--hence, GFS (as could XSan from Apple) can be useful in these classes of problems.

      I think you are likely correct--typical IT shops in your average enterprise will not find thi
  • What about security? (Score:4, Interesting)

    by ee96090 ( 56165 ) on Sunday June 27, 2004 @10:33AM (#9542444)
    I don't see security in the list of features. Calling this a Global file system is a bit presumptuous, considering the lack of security prevents it from being used outside of a closed LAN segment.
  • by techmuse ( 160085 ) on Sunday June 27, 2004 @10:51AM (#9542602)
    What is the difference between GFS, NFS and AFS? (Other than AFS's global file structure, kerberization and encryption)? Do they all do the same thing, or does GFS add something that the others don't have?
    • I don't know much about AFS, but two significant differences between NFS and GFS:

      GFS supports a global file locking interface; NFS does not. So for instance you can have a farm of web servers whose cgi scripts access/update shared files atomically, or multiple database servers which share the same database file, locking individual records to perform simultaneous INSERT/UPDATE transactions.
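
      On GFS that interface is just ordinary POSIX locking made cluster-wide by the lock manager, so the same few lines are safe whether one box runs them or the whole farm does. A sketch, with a made-up mount point:

          # Append a record to a shared file atomically. On a cluster filesystem
          # like GFS the fcntl lock is enforced across nodes, so every node can
          # safely run this same script.
          import fcntl

          with open("/mnt/gfs/hits.log", "a") as f:  # hypothetical GFS mount
              fcntl.lockf(f, fcntl.LOCK_EX)          # blocks until no one else holds it
              f.write("one more hit\n")
              f.flush()                              # push the write out before unlocking
              fcntl.lockf(f, fcntl.LOCK_UN)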

      GFS supports host-granularity redundancy and failover; NFS does not. So if your NFS server bursts into flame, the


  • I was hoping they'd do this. I think (IIRC) the original GFS for Linux was (or was intended to be?) open source, then Sistina changed their minds and made it proprietary and commercial. So then there was an OpenGFS project, which never got off the ground. Now Red Hat has bought Sistina and they're GPLing the code.
    • Yep. I've been a licensee of GFS for a couple years now at work. We use GFS across several nodes with Fibre Channel disk. The biggest problem I have run into is dealing with custom-built binary kernel modules, and having to wait for Sistina/Red Hat to release updates. Now that it's GPL, we can get the GFS support in the base Red Hat AS kernel, and not have to have custom kernels for the few nodes in the cluster that use GFS. I can now also use Fibre Channel disk for a student group at the U of MN with thei
  • A good test of a filesystem is how well it performs when updating millions of small files. We have this problem at work (application issue), and anyone who's run a news server is familiar with it (most news servers store messages in directories & separate files).

    Chip H.
  • by freelunch ( 258011 ) on Sunday June 27, 2004 @11:17AM (#9542831)
    GFS was well-liked at the supercomputing centers I have worked with, until Sistina dropped the GPL license in favor of a proprietary one. They did this very suddenly and without warning. It pissed off a lot of potential users and the open source community, and it has since fallen out of favor.

    This move by Red Hat gives new life (and resources) to GFS beyond the OpenGFS Project [sourceforge.net] that has also been continuing to work on the code.

    Another recent development in this area is HP's decision to productize Lustre [tmcnet.com]. Lustre [lustre.org] is perhaps the most prominent and promising HPC filesystem.

    SGI also announced [linuxelectrons.com] a major deal last week involving Lustre:

    The new file system is expected to sustain write rates in excess of 8GB/sec and demonstrate single-client write rates of more than 600MB/sec. To achieve this performance, the new file system will leverage Lustre, an open source, object-oriented file system with development led by Cluster File Systems Inc., with funding from DOE. Lustre currently is used on four of the top five supercomputers, including the PNNL cluster based on 1,900 Intel® Itanium® 2 processors.

    • This move by Red Hat gives new life (and resources) to GFS beyond the OpenGFS Project that has also been continuing to work on the code.

      It's been pretty quiet regarding OpenGFS lately. Now it is of course possible that they join forces and work on the GPL GFS.

      But Red Hat is hoping for inclusion of GFS into the Linux kernel. This would be the best solution, as it would give it a larger developer base, keep the filesystem evolving in sync with the kernel, and prevent bitrot even in the unlikely situation
    • I use GFS (the proprietary version) at work in our HPC cluster. I like it a lot; the support Sistina provided while they were selling GFS was excellent. It was really annoying to have to deal with patching/building special kernels with special names to match the Sistina binary versions. I'm very glad that will be over.

      SGI's CXFS is a pile of crap. I hope this thing with HP provides something that doesn't suck.

      We recently ran into a bug in CXFS that would cause group readable files to not be readable by the g
  • IBM has a product called GPFS (General Parallel File System) which has sold on AIX for several years and is offered for Linux as well. On Intel-based boxes it sells for about $1000 per CPU. I wonder how IBM will react to this open source competition? The IBM product has a very similar function - it is also used with Oracle RAC. It originated on the RS/6000-based SP clusters but has been ported out to be used on pretty much any AIX or Linux based cluster.
    • GFS is more like IBM's SAN Filesystem [ibm.com] (a.k.a. Storage Tank) or SGI's CXFS [sgi.com] than GPFS, which is more analogous to parallel filesystems like Lustre [luster.org] or PVFS2 [pvfs.org]. The difference is how the clients talk to the underlying storage devices; clients of GFS, SANFS, and CXFS talk directly to the storage devices via Fibre Channel or iSCSI, whereas clients of GPFS, Lustre, and PVFS2 go through some number of intermediate I/O servers.

      --Troy

  • How does this compare to other SAN hacks like Inter Mezzo [inter-mezzo.org], coda [cmu.edu] or the Open Mosix File System [x-tend.be] (find text: mfs)?
    • InterMezzo and Coda try to solve a different problem (the one AFS does): they replicate data as much as possible without violating coherency, and at a file level.

      GFS instead gives everyone access to the same disk at the same time, rather than replicating. Both methods work well for different data sets - so yes, GFS and oMFS are similar.
    • by thule ( 9041 ) on Sunday June 27, 2004 @06:36PM (#9546163) Homepage
      The difference is how it tries to solve the problem. NFS works over IP and accesses files at the inode level. This requires the server system or device to be running RPC and the NFS protocol. Most network filesystems work in a similar way: you have servers, and clients accessing those servers via some protocol.

      Now imagine a filesystem designed for servers that allows them to access the filesystem at a block level, directly via a shared bus - say a parallel SCSI bus, or any bus that allows more than one host (e.g. iSCSI, Fibre Channel, FireWire). Imagine how fast it would be to access a shared disk over Fibre Channel! The problem is that if two servers mount the filesystem at the same time, it would normally corrupt the filesystem. People with SANs (Storage Area Networks) solve this problem by making mini virtual hard drives and setting ACLs on them so only one host can access a given virtual hard drive at a time. This can lead to a waste of space.

      GFS solves the SAN problem by using a Distributed Lock Manager (DLM). No one host is the server of the filesystem, but writes/locks are coordinated via the DLM. Now multiple hosts *can* share a virtual hard drive or real block device and not corrupt the filesystem. If a host dies, no problem, there is no server for the filesystem!

      Let's give an example. Say you have a FireWire enclosure. Now plug that FireWire hard drive into two computers. (This, by the way, may still require a patch to sbp so that Linux will tell the enclosure to allow both hosts to talk to it at the same time.) Now that the hard drive is talking to both computers, you could run GFS on it and access the data at the block level from both systems. Now start serving email via IMAP (load balanced), *both hot*, no standby. Now kill a box. IMAP still works. No remounting, no resynchronization.
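
      The single-machine analogue of what the DLM buys you: many independent writers, one shared medium, and consistency coming entirely from a lock everyone agrees to take first. A toy sketch, with processes standing in for cluster nodes:

          # Toy analogue of a DLM: several writers share one "disk" (a file) and
          # stay consistent only because each takes the same lock before writing.
          from multiprocessing import Lock, Process

          def node(node_id, lock):
              for i in range(100):
                  with lock:  # the "DLM grant" in this toy version
                      with open("shared.dat", "a") as disk:
                          disk.write("node %d write %d\n" % (node_id, i))

          if __name__ == "__main__":
              dlm = Lock()
              nodes = [Process(target=node, args=(n, dlm)) for n in range(3)]
              for p in nodes:
                  p.start()
              for p in nodes:
                  p.join()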

      Pretty amazing if you ask me! This technology is pretty rare. IBM has GPFS. SGI has Clustered XFS. Both are pretty expensive. GFS? RedHat just re-GPL'd it! Microsoft? Ummm. I think they are just now getting logical volume management.

      GFS also has nice features like journaling (kinda required for this sorta thing), ACL's, quotas, and online resizing.

      Now tell me Linux isn't enterprise!
  • What's wrong with having a single point of failure in itself? IMO one should ask whether it is acceptable to have a particular thing as a single point of failure or not.

    People hype the benefits of centralized management all the time. Not much better than "single point of failure" if you really think about it. One system administrator = "single point of failure", many admins = multiple points of failure.

    We're all living in on Planet Earth. The odds of a catastrophic astronomical event in our lifetimes is n
    • Spoken like someone who has never needed to run a real high-end enterprise system. Most typical enterprise systems need to be running 24 hours a day, 7 days a week, 365 days a year, with little or no downtime AT ALL. Downtime tends to have awful consequences that range from losing significant amounts of money (a telecom billing database system I once helped set up would lose the telco the equivalent of about US$10,000 every minute it was down), to causing mayhem, death, and destruction (the canonical exam

  • A large majority of today's apps are limited by hard drive I/O. So what's the point in having multiple machines accessing the same drives at almost the same time? Yup, an even bigger bottleneck.

    Now if the shared storage is a rackful of RAM (flash or DRAM + batteries), that's something completely different. Then such a shared filesystem can really show its muscles. Of course, only if the locking and fencing system can keep up with the demands :)
  • by sneakerfish ( 89743 ) on Sunday June 27, 2004 @05:07PM (#9545667)
    There is also OpenGFS http://opengfs.sourceforge.net/ and Oracle Cluster File System http://oss.oracle.com/projects/ocfs/

    These may go away, since their major reason for existing was that Sistina had closed the source for GFS.

    Thanks RedHat. With LVM2, GFS, my EMC SAN and my cluster of Gentoo boxes (ya, sorry 'bout that part) I'm going to have lots of fun.
  • Too lazy to check (Score:3, Insightful)

    by Etyenne ( 4915 ) on Sunday June 27, 2004 @10:48PM (#9547582)
    I am too lazy to check myself, so I'll ask the collective: does GFS support locking and mmap()? I am asking because this is a sine qua non condition for running my favorite mail server, Cyrus imapd. Redundant high-availability servers are one of the most asked-for scenarios. And no, Cyrus Murder doesn't cut it (it solves a different problem, namely scalability).
    • Yes, it does. (Score:3, Informative)

      by Ayanami Rei ( 621112 ) *
      Both GFS and Lustre (since it was mentioned earlier) support mmap. Note that the syscall semantics might be slightly different (I don't know the details). So give it a shot.
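
      The quickest way to give it that shot is a little smoke test against a file on the mount in question (the path is made up):

          # mmap smoke test: map a file, write through the mapping, read it back.
          # Point the path at the filesystem you want to check.
          import mmap

          path = "/mnt/gfs/mmap-test"  # hypothetical GFS mount
          with open(path, "w+b") as f:
              f.write(b"\0" * 4096)    # give the file a size to map
              f.flush()
              with mmap.mmap(f.fileno(), 4096) as m:
                  m[:5] = b"hello"     # write through the mapping
          with open(path, "rb") as f:
              assert f.read(5) == b"hello"
          print("mmap works on this filesystem")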
