What High End Unix Features are Missing from Linux?

An anonymous reader asks: "Sun and other UNIX vendors are always claiming that Linux lacks features that their UNIX provides. I've seen many Slashdot readers claim the same thing. Can someone provide a list of these features and on what timeline they might be implemented in Linux?"
  • by hawkbug ( 94280 ) <psxNO@SPAMfimble.com> on Monday March 03, 2003 @06:19PM (#5427403) Homepage
    The price.
  • by Anonymous Coward on Monday March 03, 2003 @06:19PM (#5427412)
    so famous in *BSD.
  • Well of course (Score:4, Insightful)

    by creative_name ( 459764 ) <pauls@nospaM.ou.edu> on Monday March 03, 2003 @06:19PM (#5427413)
    There are some features missing; after all, GNU's Not Unix.

    Seriously though, just about any Unix feature could feasibly be added to Linux; it just takes time and man power.
    • by Anonymous Coward on Monday March 03, 2003 @06:34PM (#5427612)
      man power didn't seem to return ANY man pages. What's up?
      • Re:Well of course (Score:5, Insightful)

        by Dan Ost ( 415913 ) on Monday March 03, 2003 @06:49PM (#5427797)
        Funny, but sad in its truthfulness.

        The FSF has for some unfathomable reason decided that man pages are obsolete, and so man pages for GNU utilities are horribly incomplete. Many Linux developers seem to agree that man pages aren't worth the effort to make usable.

        BSD, on the other hand, goes to great lengths to make its man pages clear, helpful, and complete.

        Why can't Linux be more like BSD in that respect?
        • Re:Well of course (Score:5, Informative)

          by Xouba ( 456926 ) on Monday March 03, 2003 @07:20PM (#5428090) Homepage

          In Debian, every binary must have a man page explaining its use. It's part of Debian policy, and a package not honoring this rule is considered broken (i.e., it's reported as an error when building the package). So, again, Debian rocks :-)
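
          For the curious, this is roughly how that policy gets enforced at package build time; a minimal sketch, assuming lintian is installed and mypackage_1.0_i386.deb is your freshly built (hypothetical) package:

          lintian mypackage_1.0_i386.deb    # emits a binary-without-manpage warning for any undocumented executable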

          • by ffatTony ( 63354 ) on Monday March 03, 2003 @11:01PM (#5429916)
            That's all well and good (I love Debian, btw), but a large number of utilities seem to give this for a manpage. :) That's just silly.

            UNDOCUMENTED(7) Linux Programmer's Manual UNDOCUMENTED(7)

            NAME
            undocumented - No manpage for this program, utility or function.

            DESCRIPTION
            This program, utility or function does not have a useful manpage.
            Before opening a bug to report this, please check with the Debian Bug
            Tracking System (BTS) at if a bug has already
            been reported. If not, you can submit a wishlist bug if you want.

            If you are a competent and accurate writer and are willing to spend the
            time reading the source code and writing good manpages please write a
            better man page than this one. Please contact the package maintainer
            in order to avoid several people working on the same manpage.

            Try the following options if you want more information.

            foo --help, foo -h, foo -?

            info foo

            whatis foo, apropos foo

            check directories /usr/share/doc/foo, /usr/lib/foo

            dpkg --listfiles foo, dpkg --search foo

            locate '*foo*'

            find / -name '*foo*'

            The documentation might be in a package starting with the same name as
            the package the software belongs to, but ending with -doc or -docs.
        • Re:Well of course (Score:5, Interesting)

          by ddilling ( 82850 ) on Monday March 03, 2003 @07:22PM (#5428114) Homepage

          Amen.

          I hate info. An unnecessary tool poorly implemented (implemented in the 'practically unusable' sense -- for all I know, the code for info could be excellent).

          Okay, now that I've drawn my line in the sand, what differentiates me from everyone else on /.? Why do I hate info?

          I pretty much already answered that above. It's an unnecessary tool. There exists no gap between "man page" and "html/pdf/tool of choice for 'real' documentation" that needs to be addressed.

          'man' is exactly what you want for immediate access to practically everything (although I wish man -k worked better...) when you're in the middle of completing some task. All you need to know is a couple of simple things, like how your pager works, and whether you're getting the bash or the libc man page for something when there's an overlap -- and even that could probably be addressed by adding symbolic names so we could type 'man libc printf' instead of the arbitrary 'man 3 printf' to avoid getting the shell's printf. But anyway. They are all arranged the same, and you can zip to what you need in moments; plus, in my experience the man page holds what I need better than three quarters of the time. All that, and I didn't have to use a strange tree-structured pager that poorly identifies links and doesn't behave like lynx or any other similar text-mode document navigation tool I am familiar with.
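
          To make the section point concrete, a quick sketch (the section numbers and the -k flag are standard man behavior; nothing here is specific to any one distribution):

          man 1 printf     # the shell/coreutils printf, from section 1 (user commands)
          man 3 printf     # the C library printf, from section 3 (library functions)
          man -k printf    # keyword search across all sections (equivalent to apropos printf)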

          For any documentation need more heavyweight than that, I probably want to be looking at something like javadoc or the python library reference, in my web browser. A web browser is very well suited to navigating hierarchical documentation structures (especially if they use their link tags well!). I have all the tools of mozilla (phoenix, galeon, konqueror, etc.) at my disposal to locate the information I need, and for a serious documentation need, I would rather be reading for two hours on a browser with good (well, better) font support than in an xterm. And for the doc writer, there are lots of tools available (starting with LaTeX, which is totally free) to generate these docs not only as HTML, but as PostScript or PDFs for paper presentation as well.

          So for me, it's a one-two punch; I don't see a need in the space the tool addresses, and I find the tool itself to be unwieldy. I'd love to see better man pages; as far as I'm concerned, the format is far from having outlived its usefulness.

          • Comment removed (Score:5, Interesting)

            by account_deleted ( 4530225 ) on Monday March 03, 2003 @07:49PM (#5428391)
            Comment removed based on user account deletion
          • Re:Well of course (Score:5, Informative)

            by paladin_tom ( 533027 ) on Monday March 03, 2003 @07:58PM (#5428481) Homepage

            All that, and I didn't have to use a strange tree-structured pager that poorly identifies links and doesn't behave like lynx or any other similar text-mode document navigation tool I am familiar with.

            info, the program, is just one (default) info file viewer. There's also pinfo (which is Lynx-like), Emacs, and the KDE and GNOME help browsers, among other things that read info-format files.

          • Re:Well of course (Score:5, Insightful)

            by Drakonian ( 518722 ) on Monday March 03, 2003 @08:37PM (#5428835) Homepage
            OK, I have a humble request. Is it against the rules to put some examples in man pages? The language of man pages is sometimes so arcane. I think people learn best by example, so why can't man pages have a couple?
        • Re:Well of course (Score:5, Interesting)

          by Trogre ( 513942 ) on Monday March 03, 2003 @08:39PM (#5428845) Homepage
          Most man pages would be fine if they included just one more thing:

          EXAMPLES!

        • man vs info (Score:5, Insightful)

          by jefu ( 53450 ) on Monday March 03, 2003 @09:14PM (#5429103) Homepage Journal
          I want to add my voice to the throng.

          I like man. I like man -k. I don't like info much.

          A well written man page provides a minimal description of the program/system call/... and provides the information a user/programmer really needs quickly and easily. Do man in a terminal and use "/" to search quickly. Do man in emacs and get the ability to do more with the result. Do man: in konqueror and get more.

          Info tends to provide long-winded descriptions of this, that and t'other, usually completely unindexed, unsearchable and, for most of my purposes, unusable. On top of that, I now need man for some things, info for other things, HTML for other things and so on....

          Personally I'd rather like to see an XML format that would enable documentation writers to build both HTML pages (I personally think "info" is obsolete) and man pages at the same time. (That is, with tags like <synopsis>, <see-also> and the like, as well as with tags to mark indexable terms.) Ideally it should be possible to generate man pages, a howto, and a set of HTML pages for users all from the same input.
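
          Something in that spirit already exists in the DocBook toolchain; a rough sketch, assuming the docbook-utils tools are installed and foo.sgml is a hypothetical refentry-style document describing the foo command:

          docbook2man foo.sgml     # produces a troff man page from the single source
          docbook2html foo.sgml    # produces HTML pages from the same source
          docbook2pdf foo.sgml     # produces PDF for paper presentation

          Whether the resulting man pages are as tight as hand-written ones is another question, but the single-source idea is there.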

          But I'd rather have everything documented in "man" style than any of the rest.

  • Here is my list (Score:5, Insightful)

    by Billly Gates ( 198444 ) on Monday March 03, 2003 @06:21PM (#5427438) Journal
    Read my previous comment. [slashdot.org]

    • by sean23007 ( 143364 ) on Monday March 03, 2003 @06:45PM (#5427753) Homepage Journal
      But it's hard to match Sun's gold member.

      Did Austin Powers do it?
    • Re:Here is my list (Score:5, Insightful)

      by ivan256 ( 17499 ) on Monday March 03, 2003 @07:30PM (#5428167)
      1.) I have hot-swappable drive support. HP is working on this for W2K, but does Dell have this?

      This works fine in linux. If you're crazy you can even do it with IDE disks.

      2.) I can upgrade the hardware while the system is running!

      This is a hardware feature more than an OS feature. Linux supports hardware that supports hot-swapping. Hot-swap PCI, PCMCIA, USB, and SCSI are all great examples of this (a rough SCSI rescan sketch follows at the end of this comment).

      3.) I have 64-bit memory access and integers for workstation CAD apps as well as database access. Type double in C/C++ does not allow enough precision. Int64? I can use larger numbers with more decimal places.

      Again, hardware-related. Buy an Alpha, or an UltraSPARC, or an Itanium, or... you get the idea.

      4.) I have a scalable server that has superior clustering software that NT and Linux lack

      You need superior cluster software? I'll sell you superior cluster software. :)

      5.) With up to 128 processors I can have one fast mutha.

      Again with the hardware. Linux supports huge numbers of processors too. It's your i810 motherboard that's the problem here.

      6.) World-class stability. Linux has serious VM problems, and the filesystem has been known to corrupt under large disk loads. Ask any database admin who uses Oracle on Linux. Real servers need 24x7 support, and Linux is close and is very stable, but has some rough edges in heavy server use. A reboot could be disastrous and cost tens if not hundreds of thousands of dollars. May God help you if your warehouse database crashes or if your factory goes offline for a system reboot.

      Give me a kernel panic in Linux and I'll give you one in solaris. Better yet, I'll give you highly available clustering software so you don't have to worry about those pesky and rare panics. Can't have down time? You won't even notice it. Really.

      7.) World-class support. If a chip fails, you can have an engineer from Sun with a replacement part at your office within a matter of hours if you're a gold member!

      You're talking hardware again. There is plenty of world-class Linux software support out there. If you want hardware support, you simply have to pick the correct hardware vendor.

      I'm not saying linux has it all, but it's got everything on your list.
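
      For the hot-swap point above, a rough sketch of the 2.4-era way to tell the SCSI layer about a swapped disk (the host/channel/id/lun numbers here are made up; adjust them for the slot you actually touched):

      echo "scsi remove-single-device 0 0 3 0" > /proc/scsi/scsi    # before pulling the disk at host 0, channel 0, id 3, lun 0
      echo "scsi add-single-device 0 0 3 0" > /proc/scsi/scsi       # after inserting the replacement
      cat /proc/scsi/scsi                                           # confirm the kernel's view of attached devices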
    • Re:Here is my list (Score:4, Informative)

      by Fluffy the Cat ( 29157 ) on Monday March 03, 2003 @08:14PM (#5428618) Homepage
      1.) I have hot-swappable drive support. HP is working on this for W2K, but does Dell have this?

      Yes, as long as you're using SCSI.

      2.) I can upgrade the hardware while the system is running!

      If your hardware supports it. There's Linux support for hot-swap PCI, and I believe there is support for hot-pluggable CPUs.

      3.) I have 64-bit memory access and integers for workstation CAD apps as well as database access. Type double in C/C++ does not allow enough precision. Int64? I can use larger numbers with more decimal places.

      If the hardware supports it. Run on Sparc, Alpha, IA64 or HPPA. IA64 is probably your best bet - Intel are keen on Linux support, and more commercial Linux vendors support it.

      4.) I have a scalable server that has superior clustering software that NT and Linux lack

      This depends on precisely which sort of clustering you want.

      5.) With up to 128 processors I can have one fast mutha.

      Which seriously compromises performance on the single-CPU machines that are significantly more common. How much of the market consists of machines with more than 4 CPUs? How much time do you think the kernel spends dealing with the fine-grained locking needed for Solaris to scale to 128 CPUs? On equivalent single-CPU hardware, Linux will easily outperform Solaris.

      6.) World-class stability. Linux has serious VM problems, and the filesystem has been known to corrupt under large disk loads. Ask any database admin who uses Oracle on Linux. Real servers need 24x7 support, and Linux is close and is very stable, but has some rough edges in heavy server use. A reboot could be disastrous and cost tens if not hundreds of thousands of dollars. May God help you if your warehouse database crashes or if your factory goes offline for a system reboot.

      Shrug. I've never had issues with Oracle on Linux, but there you go.

      7.) World-class support. If a chip fails, you can have an engineer from Sun with a replacement part at your office within a matter of hours if you're a gold member!

      That's probably the biggest issue. On the other hand, my experience of commercial vendor hardware support has never been wonderful. Being told that repeated machine check exceptions are due to software issues despite the logging software clearly stating that they're ECC errors doesn't result in my mood improving that much. And Sun is still sitting on 6 nodes of our E10K because we keep hitting kernel bugs that they can't otherwise test because they don't have an installation as big as ours. It's been going through commissioning for 3 months now. We're almost at 48 hours between kernel panics, too.
    • by Morgaine ( 4316 ) on Monday March 03, 2003 @08:58PM (#5429009)
      Let me dispel a major misconception regarding the stability and lack of problems on big iron like large Suns and NetApps. The problems are every bit as prevalent as on little iron running Linux ... you just have to push the big systems harder.

      Loading up Unix/NFS systems from such vendors to meet the needs of multi-million-customer ISPs can produce no end of nastiness in the native software of their machinery, especially in networking and filestore kernel functions. A professional outfit doesn't push its systems to such extremes by design, but alas, multi-million-customer ISPs have nightmarish management structures that grind exceedingly slowly, and sometimes planned capacity is reached and exceeded before extra boxes become available. In the ensuing month or two of desperate firefighting to keep the systems up, eye-opening problems sometimes arise that don't help reduce the general air of panic ...

      ... like storage systems that start serving only a fraction of their rated NFS ops owing to internal filestore management bottlenecks, despite their disk, CPU and I/O resources not running hot.

      ... like total freezeups when internal, unadvertised limits on the sizes of directories are reached (yay, triggered by a really bad spam which you couldn't keep up with because you were overloaded before it even started).

      ... like the realization that not all parts of a vendor's kernel are equally optimized, and if you select something unusual to give yourself an extra couple of percent of performance you might find a truss of "ls -l" showing each syscall taking 30 seconds to complete despite the system allegedly working normally.

      ... like large hardware beasts that under pressure give up the ghost and die, despite passing all the vendor diagnostics, and despite all internal components being swapped out during days and nights of engineering visits, until in total despair you raise it to board level and the respective MDs (over a game of golf, no doubt) decide finally to replace the entire thing just to keep it out of the press.

      ... and plenty more. When things go wrong, it's not the most relaxing job in the world.

      Furthermore, don't think that having extortionately priced platinum maintenance contracts saves your bacon every time. Sometimes the response is extremely good if someone else has suffered the same problem and it's recorded in their support database and they have a fix. But on other occasions the big vendor's analysts just look in bafflement at the performance indicators and recommend extra boxes (well they would), and on a few rare occasions they simply refuse to admit that your very thorough measurements and timestamped traces indicate that there is an internal problem in their machinery. Now that's bad.

      And finally ... big vendor support helpdesks. If you've ever placed a desperate support call in the middle of the night, only to be greeted with a response of "What is telnet?", or if you've been requested to send in an urgent diagnostic system dump, seen it fail in the file transfer to the vendor site, and get told "Oh yes, we only have 10 meg free on the server" ... then you too may wonder why you're paying them all that money every year.

      Fortunately there's more good than bad coming from the big iron boys, but to think that all is roses in that area and in big-iron Unix would be a misconception.
  • by bryny ( 183816 ) on Monday March 03, 2003 @06:21PM (#5427440) Homepage
    The biggest missing feature would be the lunches that the vendor reps buy when they are in the sucking-up phase...
  • Who cares. (Score:5, Funny)

    by Anonymous Coward on Monday March 03, 2003 @06:22PM (#5427450)
    Does solaris or HP/UX have nvidia drivers and UT2003?

    Forget scalability and stability. Get a list of priorities. geez!
  • by lethalwp ( 583503 ) on Monday March 03, 2003 @06:22PM (#5427454)
    Linux is still missing some POSIX 1003.1b features (realtime extensions).

    I was especially thinking of message queues.

    Yeah, there are others implementing it, like RTLinux etc., but it's still not in the main Linux tree.

    It's all I can think of for now ;)
  • Tape stuff for one (Score:4, Informative)

    by svenqhj ( 558678 ) on Monday March 03, 2003 @06:23PM (#5427471)
    I used to do tape support for IRIX about 2 years ago. I remember looking at Linux to learn how it handles tape drives, and found very little information other than on floppy tape drives.

    I'd like to see some commands like:

    scsicontrol - send SCSI commands
    scsiha - reset and probe the SCSI bus
    stacker - jukebox control

    Plus, I'd like to see more about a Linux tape driver.

    • by churchr ( 24226 )
      Check out 'scsiadd' for resetting and probing the bus. I'm not sure about the other two.
    • by doorbot.com ( 184378 ) on Monday March 03, 2003 @06:59PM (#5427901) Journal
      I'd like to see some commands like:

      scsicontrol - send SCSI commands
      scsiha - reset and probe the SCSI bus
      stacker - jukebox control


      Well...

      apt-get install mt-st scsiadd scsitools sformat sg-utils sg3-utils smartsuite taper

      With some info:

      mt-st - Linux SCSI tape driver aware magnetic tape control (aka. mt)
      scsiadd - Add or remove SCSI devices by rescanning the bus.
      scsitools - Collection of tools for SCSI hardware management
      setcd - Control the behaviour of your cdrom device
      sformat - SCSI disk format and repair tool
      sg-utils - Utilities for working with generic SCSI devices.
      sg3-utils - Utilities for working with generic SCSI devices.
      smartsuite - SMART suite - SMART utility suite for Linux
      taper - Full-screen system backup utility.

      Thanks to: "apt-cache search scsi"
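
      And for basic tape control, a minimal sketch of driving a SCSI tape with mt from the mt-st package, assuming the drive shows up as /dev/st0 (the device paths and the backup directory are just examples):

      mt -f /dev/st0 status       # query drive and tape state
      mt -f /dev/st0 rewind       # rewind to the beginning of the tape
      tar cvf /dev/nst0 /data     # write an archive via the non-rewinding device node
      mt -f /dev/nst0 offline     # rewind and eject when done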
  • by MrFreshly ( 650369 ) on Monday March 03, 2003 @06:25PM (#5427493)
    A large following of people who resist change?
  • by Anonymous Coward on Monday March 03, 2003 @06:26PM (#5427509)
    Look at the high end UNIX systems and the features they have added for reliability. If the H/W supports a hot swap component, then Linux needs to as well or it will stay second rate on that platform.
    • by stratjakt ( 596332 ) on Monday March 03, 2003 @06:32PM (#5427577) Journal
      There are high-end PCs with the same features. We just got one of these bad boys [stratus.com] into the shop to test it as a machine to sell our clients for our critical applications. Pretty much everything is redundant. You can hotswap anything in 'em.

      Haven't done anything by way of checking linux compatibility with ours, but the drivers are all standard enough.
    • by Tailhook ( 98486 ) on Monday March 03, 2003 @09:22PM (#5429157)
      Stuff Linux "needs":

      Support for hotswap CPU/RAM etc. This is tough without hardware vendor support. Getting the info to write the driver (under NDA or whatever) is one thing. Proving the OS can actually cope with a CPU hotswap is another. Without high end hardware for testing, this ain't gonna be real. Solution: force the vendors to make Linux a priority on high end hardware.

      Mature LVM. Mature enough that you bet your career on it, like HP/Sun/IBM admins do every day while barely understanding what's really involved. Having multiple competing (diluting?) implementations doesn't help.

      >8 way scalability. If I had to pick from amongst my wish list, this would be one of the last. However, it does matter. For credibility, if nothing else. Solution? Hmm. Breakthrough in OS engineering, where the big boys get the scalability they want without compromising the low end. Ain't been done yet. But then, that's where the real opportunity is huh?

      Compatibility with some significant percentage of the bizarre third-party hardware in the world. Like EMC^2 arrays and the wild world of Fibre Channel. On one hand, Linux can/does thrive quite happily in the edge/cluster/small-database/terminal market. On the other, until you can manage a high-end drive array from Linux (no, NFS doesn't count), that is where it's gonna stay. Only market share will make this happen.

      Diagnostics that don't suck. Again, low level hardware vendor support required. So you paid extra for that nice ECC memory in your self-built machine. Do you know what would actually happen if a bit went bad? What would you get in the way of diag from the machine? Bet most of you don't know... Not good enough. Solution? See "hotswap" above.

      Time. Linux is competing with OSes that are 3 times as old in some cases. PHB instinct is going to shy away from something less mature. Truth is those instincts tend to keep planes in the air, whether it fits your agenda or not. Linux isn't exactly new, but it hasn't really met the test of time yet either. Solution? Patience.

      Software issues need fixing. GNU compilers suck. The native compiler on a *nix machine needs to not suck. This is basic. Linux has some real POSIX issues too. Threading only being the most obvious. Solution? Someone with the pragmatism and skill of Linus on the compiler/library side.

      Mature advocacy. The way to be an effective Linux geek is to not try to sell it. If it's worthy of your advocacy, it doesn't need it. When opportunities appear, out in the "real world", step up. Otherwise, keep your geek mouth shut. Solution? Look within.
  • Wrong Question (Score:5, Insightful)

    by turgid ( 580780 ) on Monday March 03, 2003 @06:27PM (#5427516) Journal
    Not to flame, but "Unix" is an API, ABI and set of conformant utilities and libraries. To ask "what high-end UNIX features are missing from Linux" is missing the point somewhat, since these features are necessarily non-standard, and therefore "not Unix". Of course, the immediate obvious answer is support for all the NUMA, ccNUMA and COMA hardware out there (which is specific to each machine, let alone vendor), things like domaining, partitioning, and hot-swappable CPUs (again specific to each individual machine). Perhaps a better question might be, "How could the Unix standards be extended to encompass these developments, and how could the Linux kernel implement them (or provide an infrastructure)?"
    Just my £0.02 worth.
  • Scale over 4 CPUs (Score:5, Informative)

    by McDiesel ( 447709 ) on Monday March 03, 2003 @06:27PM (#5427517)
    Supposedly Linux does not scale linearly over 4 CPUs with SMP, and from my own experience I have seen that Solaris does this nicely and has done so for years.
    Supposedly this is being addressed in the 2.5.x series.
    The response to this is that even high-end Unix does not scale well beyond 8 CPUs; every E10K or F15 that I have ever seen gets carved up into virtual domains of 8 or 12 CPUs...
  • by mentin ( 202456 ) on Monday March 03, 2003 @06:28PM (#5427524)
    The framework [slashdot.org] is a good start, but Linux will not be considered enterprise-ready until it actually appears in the TPC-C results list [tpc.org] (at least in the first hundred).
  • by Ami Ganguli ( 921 ) on Monday March 03, 2003 @06:28PM (#5427529) Homepage

    1) Linux doesn't scale to large SMP systems yet. I think 2.6 is supposed to make it nicely to 16 processors.

    2) Recently most (all?) of the big Unix vendors have included mainframe-style partitioning. You can do that with Linux on IBM zSeries and pSeries (and maybe iSeries), but you need another OS acting as the executive.

    I can't think of anything else off-hand. I'd say that for the vast majority of applications, Linux is as good as or better than commercial Unix.

    • by errorlevel ( 415281 ) on Monday March 03, 2003 @06:51PM (#5427807) Homepage
      1) Linux doesn't scale to large SMP systems yet. I think 2.6 is supposed to make it nicely to 16 processors.
      2.6 will be using the O(1) scheduler which SGI has successfully used to make Linux scale to 64 processors, and should be able to scale further in a linear fashion.
      2) Recently most (all?) of the big Unix vendors have included mainframe-style partitioning. You can do that with Linux on IBM zSeries and pSeries (and maybe iSeries), but you need another OS acting as the executive.
      I think one of the main advantages of having UML (User-Mode Linux) will be being able to run Linux on top of Linux, and to create environments similar to the partitions you mention.
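
      For a flavor of what that looks like in practice, a rough sketch of booting a UML instance (the filesystem image name and memory size are made up; the UML kernel builds as a normal user-space binary called linux):

      ./linux ubd0=root_fs.img mem=128M    # boot a guest Linux using root_fs.img as its root disk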
  • the price tag (Score:5, Interesting)

    by motorsabbath ( 243336 ) on Monday March 03, 2003 @06:29PM (#5427531) Homepage
    I use AIX, Linux and Solaris every day. The only thing Linux is missing is the enormous price tag. These stability concerns I just don't see; it seems pretty fscking solid to me.

    A better question would be "where is Linux kicking the crap out of Unix?". Now *there* would be a flame fest. Note that I'm a Unix fan, but Linux has surpassed it as a developer's workstation and basic desktop. From the standpoint of an ASIC developer, that is.

    JB
  • by puppetman ( 131489 ) on Monday March 03, 2003 @06:29PM (#5427536) Homepage
    The most common complaint I hear about Linux is that it can't replace Win2k on the desktop.

    Now we hear complaints that it can't replace Sun on the back end.

    Which one is it? A desktop OS, or a server OS? Granted, it does both well, but I think it's not the best in either category (no, not trying to troll).

    It doesn't have the games and apps on the desktop (though it's getting better all the time), and it's not as reliable on the back end. We have a bunch of app/web servers in our middle tier; some are Sun servers running the latest OS from Sun, and some are Intel PCs running Linux. The Linux machines crash far more often. Granted, hardware could be at least part of the problem.

    On the other hand, we have our database (Oracle) running on Win2k on dual P3 933 clones. One of our databases, with an average Oracle load of 10%, did not crash for over 300 days. That's pretty damned good. Our other machine (with a much higher load) crashes every month or two (or at least needs a database restart).

    Perhaps it's time for Linux to split into two separate camps: a version of Linux for servers, and a version for the desktop.
    • by sean23007 ( 143364 ) on Monday March 03, 2003 @06:52PM (#5427833) Homepage Journal
      I disagree that Linux should split into a server version and a desktop version. Many of the things that make a server work better would also improve the desktop, and vice versa. Another note: while Linux may not be better than the best on the desktop nor better than the best on the back end, it is improving faster than its opponents in both categories. The future is promising for Linux, and even more so for users (after all, if Linux fails, it will ultimately be because someone else has something better for the same price -- good for users).
  • Propaganda (Score:4, Insightful)

    by shtarker ( 621355 ) on Monday March 03, 2003 @06:30PM (#5427549)
    Perhaps you should start by having a look at the websites of a few people who actually sell Unix:
    http://www-1.ibm.com/servers/aix/ [ibm.com]
    http://docs.sun.com/db/prod/solaris.9u1202#hic [ibm.com]
  • by rkt ( 9943 ) on Monday March 03, 2003 @06:31PM (#5427569) Homepage
    The only reason why I would still go for Sun hardware instead of Intel with Linux on it is ease of maintenance, without investing a lot in getting non-standard third-party cards installed.

    This is not a high-end feature... but a feature critical enough for many corporate organizations to avoid Linux.

    The other thing I love about Sun is the ease of Jumpstart. I always have issues with kickstart on Linux. RH8 doesn't even boot up on Dell 1650s, let alone kickstart. Sun puts a lot of effort into testing. I can't promise anything to my management without first testing it out on Linux... on Sun, however, I believe them when they say XYZ version of software runs on Sun hardware :)

    It's not a big deal... and since both hardware and software belong to Sun, some would claim that I shouldn't even bring this issue up. But the fact is that these are two good reasons I don't enjoy Linux in my corporate network, even though I love and run Linux everywhere else possible.

    rkt
  • by AchilleTalon ( 540925 ) on Monday March 03, 2003 @06:33PM (#5427586) Homepage
    is missing. If you are dedicating a machine per application, per department, you don't need this.

    However, if you manage a single machine with more than one application running for more than one department, you may need to determine the amount of resources each application can use at minimum and/or maximum. If an application is almost idle, you may need to ensure it doesn't lock resources, and let other applications use them with a given priority pattern.

    Also, partitioning is not available as far as I know.

  • LVM (Score:3, Interesting)

    by Reality Master 101 ( 179095 ) <RealityMaster101@gmail. c o m> on Monday March 03, 2003 @06:34PM (#5427605) Homepage Journal

    I would love to see some standard Logical Volume Manager make it into Linux. I believe there are some kicking around, but I haven't seen anything getting standardized.

    LVMs, for the unaware, are disk managers that allow such things as filesystems spanning multiple physical devices, dynamic creation and destruction of filesystems, dynamic resizing of filesystems, and other such goodies. AIX's volume management rocks.
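
    For reference, a rough sketch of what the Linux LVM tools look like today (device names and sizes are made up, and exact syntax varies a bit between LVM releases):

    pvcreate /dev/sdb1               # mark a partition as an LVM physical volume
    vgcreate vg0 /dev/sdb1           # build a volume group on top of it
    lvcreate -L 20G -n data vg0      # carve out a 20 GB logical volume
    mke2fs /dev/vg0/data             # put a filesystem on it
    lvextend -L +10G /dev/vg0/data   # later: grow the volume...
    resize2fs /dev/vg0/data          # ...and then the (unmounted) filesystem to match

    It works, but as the parent says, it is nowhere near as polished or as standard as AIX's volume management.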

  • lots of stuff.... (Score:5, Interesting)

    by vvikram ( 260064 ) on Monday March 03, 2003 @06:34PM (#5427609)

    Considering Linux vs. any general *nix-based OS, I can think of quite a few places where Linux is deficient right now:

    * Scalability: Linux needs to scale to hundreds of machines, and scale well. The NUMA stuff has gotten into the mainstream 2.5.x kernel, so it should be a good step forward.
    * A kick-ass scheduler [yes, I know about Ingo's O(1) patch] is quite important. I still think Linux doesn't have the kind of scheduling Solaris seems to have [especially under high loads], but I will be glad to be proved wrong here.
    * VM subsystem: lots and lots of work to be done here. It's been an academic favourite for a long time, and imho the Linux VM sucks badly... lots of work is going into it though.

    Imho, not many people who read Slashdot know about the Linux kernel and OS-specific strengths in depth - they tend to jump on the Linux bandwagon just for the coolness. I think there are a LOT of issues other than the above where Linux is not yet high-end. True high-end is "big iron", not the mysql+apache+php webserver projects for which Linux seems to be a favourite.

    It's just that Linux is growing. It's a long way from mature, imho.

    vv
  • SCSI Support (Score:4, Informative)

    by Anonymous Coward on Monday March 03, 2003 @06:34PM (#5427611)
    SCSI support on Linux sucks. It's not that the devices don't work; it's that if you have specialized hardware, like MO (magneto-optical, for those of you who don't know what this is) drives, the SCSI implementation doesn't allow you to selectively disable devices to let your custom driver control them. In the case of MO drives, the cdrom driver automatically just assumes that an MO is a cdrom burner or the like. MOs are quite different from cdroms; you can't just blindly use them like a cdrom. Other flavors of Unix (UnixWare, Solaris, AIX, HP/UX, FreeBSD, etc.) do support this in their SCSI implementations. Until this is changed I will not be able to run Linux in a production environment, as much as I'd like to be able to.
  • by Usquebaugh ( 230216 ) on Monday March 03, 2003 @06:36PM (#5427635)
    I posted an AskSlashdot about this; it's only been waiting three weeks to be posted.

    2003-02-07 22:32:30 High Availability Desktop (askslashdot,linux)

    This is the one feature I'd like to see. Not your namby pamby heartbeats and redirection. But honest to goodness fault tolerance. I want a system that if I rip out a node will keep running and lose not a single thread. XSessions and DB handlers should not notice a glitch. Think VMS, Stratus etc. Add nodes, remove nodes, re-configure etc.

    There's a reason most systems do not offer this: it ain't trivial. In fact, most research I've read just says it's infeasible and then goes on to spout about how redirection is almost good enough.

    Mosix, High Availability Linux, etc do not offer this feature.
  • Soft Features... (Score:3, Insightful)

    by NOT-2-QUICK ( 114909 ) on Monday March 03, 2003 @06:38PM (#5427662) Homepage
    As another poster commented earlier, there are few, if any, features that the mainstream Unices can or do offer that cannot be ported in some fashion to Linux...it is simply a matter of time & effort!

    From my perspective, the primary components that Linux lacks compared to the commercial brands are more in the realm of "soft features"... aka non-technical features!

    The two biggest I would estimate as being (1) a unified product offering and (2) an active sales force...

    To address the first issue, I would submit that it is both a strength and a weakness (depending on your perspective) to have the Linux operating system splintered into so many unique distributions. An obvious technical strength is in the niche-filling capacity of the several flavors of Linux that can and do meet the needs of an extremely diverse market... Alternatively, a "soft" weakness exists in the sense that branding/commercialization of a product with so many various "names" is difficult if not impossible! Linux in and of itself is a generalization of the group of OSes that are built upon the Linux kernel... that is not an easily sold concept to a manager who wants someone to blame should things go south!

    As for the second concern, a non-existent sales force, that is a rather obvious (at least to me) obstacle to widespread corporate adoption of Linux! Sure... every IT department has a Linux zealot or two and can read positive write-ups on the benefits of Linux. However, this is not quite equivalent to the polished sales professionals (snake-oil salesmen?) who live, breathe and die with the sole purpose of peddling their specific flavor of Unix!

    Anyways...just some food for thought! As always, I could be completely off base and living in my own happy little world! :-)

    n2q

  • Linux != High End (Score:5, Interesting)

    by dhall ( 1252 ) on Monday March 03, 2003 @06:39PM (#5427670)
    The high end niche is marketed more by the hardware than the software. The technology of LPAR's on the Regatta (high end rs6k) is nearly on par with mainframe technology. It's also at a price point of over a quarter of a million (after discounts).

    Linux is not an Enterprise level Unix. That isn't its niche. It's an OS for low-mid range hardware.

    The argument for Unix versus Windows has been... Unix is expensive hardware with cheap (nearly free software). Windows is the exact opposite, cheap and redundant hardware with expensive software licensing. Trying to license Microsoft SQL can be as onerous as trying to negotiate an Oracle contract.

    Are there other things available in enterprise Linux? Sure, it's called licensed software. Enterprise-level companies are extremely leery of deploying software unless it's licensed. They don't want to hear the word "free". "Free" in their minds often means there is no one to sue.

    Also, with the corporate enterprise there is a sincere fear of employee empowerment. No company wants to be held hostage by its employees. With Linux, the power is within the administrator to have full control over the operating system. Most companies have no way of watching the watchers to this level, especially with knowledgeable, disgruntled employees. It's not a sound argument, but it's one that is often tossed out there.

    Other more obvious things include mature LVM (logical volume management). Being able to add and grow filesystems on the fly. Active and mature SAN access. The VMM has come a long way from the 2.x kernel, but still needs to play catch-up.

    You realize the ideal setup for an AIX 5.x server? You optimize the server (performance-wise) for ZERO percent paging space. There are certain tools that come with the operating system at the kernel level that you just won't find with Linux unless you're a kernel hacker... Companies don't have the luxury of hiring kernel hackers to administer their systems.
  • by Shane ( 3950 ) on Monday March 03, 2003 @06:41PM (#5427702) Homepage
    btw I am not a Win2k fan.

    Win2k DNS supports multi-master servers through Active Directory. What this means is that different servers serving the same domains can be updated, and the changes will be replicated to the other servers. Microsoft uses Active Directory to achieve this; Linux/Unix could use LDAP to serve the same function.

    I was reading about Win2k's file/print/Active Directory structure, and I must say I am impressed with how powerful the system is. We have LDAP, but it is not tied into all the rest of our applications and systems like AD is. If someone tied DNS, DHCP, printing, Samba, Mono, Apache etc. into LDAP and then provided a solid administrative interface, it would _begin_ to provide the level of management and flexibility that, I am sad to report, Win2k and AD provide.

    You might ask yourself why anyone would need this. Well, if your DNS is only static content then you most likely would not. But if your DNS server is acting as a dynamic name host for SRV or RR records, supporting this for 50,000 could very easily overload the server.

    Microsoft printing is much more flexible than LPR/LPD; as far as I know, Unix systems have no capabilities for advanced features like distributing new drivers and defining where the "closest" printers are.

    Some people might not see this as a feature, but a unified configuration interface (i.e. something like Webmin but more flexible, documented, and powerful) is VERY MUCH NEEDED to convert smaller IT shops over to Linux.
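
    As a small illustration of the LDAP direction suggested above, this is the sort of lookup a DNS or DHCP backend would issue against a directory (the server name and base DN are made up, and the attributes come from the standard RFC 2307/NIS schema):

    ldapsearch -x -h ldap.example.com -b "dc=example,dc=com" "(objectClass=ipHost)" cn ipHostNumber    # pull host name/address records from the directory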
    • by talon77 ( 410766 ) on Monday March 03, 2003 @07:12PM (#5428025) Homepage
      AD still is lightyears behind NDS. Look to Novell if you want to see impressive Directory services working with LDAP.
    • by mattACK ( 90482 ) on Monday March 03, 2003 @10:46PM (#5429823) Homepage
      While I agree that Active Directory 1.0 (Windows 2000/NT 5/pick one) is impressive in mid to large-ish environments, the design methodology absolutely falls apart in a true enterprise environment (inasmuch as AD-integrated DNS is concerned).

      Consider this: the AD DNS zone is required to be in your domain container. This means two major things: ALL DCs in your domain have this information replicated to them (whether they are DNS servers or not) and NONE of your DCs in other domains can host these zones.

      Stretch item one out, and you will see that when a user in Japan powers on his workstation, it replicates to my DC here in the States. Do I care to access that guy's data SO BADLY that his replication storm^H^H^H^H^H event hits my DC? Even though it isn't running DNS? Kinda silly, really.

      Taken the other way, if I want a multimaster DNS zone to cross a domain boundary even in the same forest I cannot do it. It simply cannot be done. You could set up a zone transfer and work some mojo, but you lose the benefits cited in your post. Active Directory DNS doesn't support stub zones, either.

      Active Directory 1.1 (Windows 2003/Windows .NET Server/Pick one) fixes these complaints with enlistable name spaces that can cross domain boundaries, but just try to get THAT pushed through in a large environment until 3 months after SP1. Not very fscking likely.

      I actually find the automagical functionality of AD fascinating, and I do not mean to troll. I just find that most folks who extol AD haven't seen it with over a couple of thousand clients.

  • by Doc Hopper ( 59070 ) on Monday March 03, 2003 @06:57PM (#5427882) Homepage Journal
    A few things that are very nice about some commercial UNIX variants you don't have on GNU/Linux systems:

    1. Integrated systems management, a la "SAM" in HP/UX. Although I'm first in line to say that systems administration should never be handed over to imbeciles, SAM is easy enough that non-professionals can use it, yet it covers all the bases of systems administration, from your hosts file through recompiling a kernel. It seems to be what Linuxconf wants to be, but isn't quite yet. It also does this without royally screwing up particularly hard-fought configuration files. Just use Linuxconf to configure network interfaces after you've set up a beautiful five-line config and see what it does to /etc/sysconfig/network-scripts/ifcfg-ethX. Red Hat's config tools are getting there, and YaST seems to have nailed it -- but it's not free software.

    2. Transparent X configuration w/3D support out of the box. When the installers get it right (about 75% of the time), Linux + X-windows is just fine. When they get it wrong, the iterations are ugly:
    XFree86 -configure
    (blah blah blah)
    XFree86 -xf86config /root/XF86Config.new
    (dumps out, some obscure error)
    vi /etc/XF86Config.new
    (ad nauseam)

    I miss how trivial it is to adjust X on my old Sun. Then again, there, instead of hacking a config file, you had to hack some obscure command options. And setting up dual monitors on XFree86 is much better than on Solaris (or was, back when Solaris 8 was the standard, haven't mucked with Sun equipment much since then).

    3. More on the X server: FAST X services. I've run XFree86 on really new, top-of-the-line Nvidia, ATI, and Matrox hardware, and not one of them can even touch the performance of X-windows on my old SGI O2. IRIX X is just amazingly faster. I'm not talking so much about 3D performance, but multi-head, full-window drag type stuff. Watching the ghosting as I wiggle this very screen I'm typing in back & forth on my RedHat 8.0 box at work right now on an Nvidia Geforce4 @ 1280x1024 is just painful. I know people are going to say "it's the configuration, stupid!" but if optimizing for decent X-windows performance isn't easy enough for a UNIX veteran of 7 years to do it without serious pain, it's not easy enough for an admin to want to deal with it.
    (NOTE: I optimized everything for 686 at home on Gentoo with Nvidia's drivers. It's considerably better, but still doesn't compare. Then again, I don't have an O2 anymore for a real head-to-head comparison, so maybe my memory is playing tricks on me. On the other hand, identical hardware in MS Windows gives immensely better 2D performance.)

    Then again, that's just a graphics professional feature, more than a server-type feature. Comparing any other UNIX to SGI's IRIX for graphics work is just no contest.

    4. Memory fault isolation. On Solaris, I'll actually get a message telling me which DIMM is bad, and which slot it is in. Admittedly, this is a failure not only of the operating system, but also of the hardware design. When you have 30-some-odd DIMMs in some E10K server, if you didn't have this kind of isolation, trying to find the bad stick of RAM would be beyond time-consuming. Ditto for HP/UX when replacing faulty RAM. Once again, though, IBM seems to be addressing this with their higher-end servers, and I look forward to about a year from now when it becomes more of a common feature on GNU/Linux servers.

    5. Something like "OpenBIOS" or Sun's OpenBoot (I think that's the name? Been a while, I forgot). This is great to work with, for instance, on Alpha systems. Fairly complete diagnostics before the OS even boots, and it all gets shucked out the serial port. You can compensate for this by installing some kind of lights-out management board in your PC, but if you ask any UNIX admin that has used the non-PC-BIOS stuff on pro UNIX systems, a PC BIOS just doesn't compare. For instance, on the Alpha I have at home, I can hook up fibre channel and enumerate all the available partitions, flag one as bootable, mount some filesystems and make changes, force boot to HALT temporarily rather than boot to full, stop the OS, do a memory dump, sync the filesystems and reboot... a whole lot.

    GNU/Linux on Alpha/Sparc inherits these benefits, and so it is a non-issue. GNU/Linux on X86 still really, really sucks in this dep't.

    That's about all I can think of for now. The difference between managing UNIX systems from Sun & HP, versus PC-based GNU/Linux systems, is still large but shrinking. As evidenced above, a BIG chunk of what still sucks about Linux is due to hardware & hardware integration, not the O/S itself, really. GNU/Linux is definitely getting there; I love running it on my Alpha at home, because I get many of those benefits mentioned and still use the operating system I love.
  • by rhfrommn ( 597446 ) on Monday March 03, 2003 @07:00PM (#5427916)
    I admin about 20 solaris servers and have been a Unix admin for about 5 years. The main reasons I won't switch to Linux can be summed up in a couple short points.

    1. Too small. It won't run on big enough boxes to do real datacenter work. My company runs data warehouses in the terabytes on servers with more processors and memory than Linux can handle. Before Linux can compete in the datacenter it needs to handle 16 procs at least, preferably as many as Solaris and the other commercial Unix implementations can. One other thing that is needed is a volume manager and filesystem product with the functionality I can get from Veritas on Solaris. When you're dealing with 100-900 GB filesystems like the ones our databases live on, the stuff built into Linux doesn't work.

    2. Too fragile. I've never tried running big Oracle databases on Linux but what I've heard from people that have is that it is too prone to crashing and corruption. Plus the stability of the hardware isn't there. You simply can't buy an Intel/Linux server that has the stability and reliability that a Sun/Solaris box has. Hot swappable hardware, the ability to route around failures without a panic or reboot, and so on just doesn't exist (or at least is extremely uncommon) for Linux yet.

    Both of these issues may well be fixed in the near future, but for now Linux misses the mark too badly for me to even think about recommending it.
    • by bruthasj ( 175228 ) <bruthasj@@@yahoo...com> on Monday March 03, 2003 @09:09PM (#5429069) Homepage Journal
      # You simply can't buy an Intel/Linux server that has the stability and reliability that a Sun/Solaris box has.

      Sure I can. It just depends on the software load that you put on the system and its overall architecture. Solaris has advantages on some things, but it's becoming more and more marginalized by Linux as it moves forward. As far as hardware on Sparc goes -- in my experience I have seen it crash just as much as Intel. Of course, you say Intel, but what vendor is producing the materials for the "Intel" system? It requires a little initial fact-finding prior to purchasing the hardware, as compared to Solaris/Sparc... since Sun is the only one that spits out those systems.

      The software that I'm involved in is manufacturing control systems. Our architectures vary quite a bit from factory to factory. We've run on Xenix, Interactive Unix, Solaris 7/8, SunOS and Linux. I wasn't with the company in the Xenix days, but Interactive is a dog and terrible at filesystem stuff. We used that until Sun bought it and integrated it into SunOS to become Solaris. Then we moved along with it and began using Solaris on Sparc machines. This worked quite well at one factory, except it was a bane trying to train the customer on how to set up the systems.

      Then we went to Linux. Linux brought us not only more bang for our buck, but -- on an OS level -- more stability for our buck. Yes, we did purchase copies of RedHat Linux ... not just download them. The first Linux/Intel system sucked entirely because of the Hardware. Well, to point the finger, it was the darned power supplies ... (does that count as Intel Hardware??) ... we purchased from a cheap distributor of Linux/Intel 1U servers. I won't name them here, but if you want to know, email me. It was a big mistake. So, this justifies a little bit of what you said about hardware.

      But our next system was Solaris/Sparc. This time we used Jumpstart and a bunch of nifty things to make it easier for the customer to get it set up. The integration on Solaris/Sparc for these kinds of things is quite cool, and I hope Linux/Intel can put something similar together. Anyway, we began using NFS in our last architecture and then used the same arch on Solaris/Sparc. Huge mistake. Don't ever run something worth over 1 billion dollars on NFS/RAID with Solaris. Sorry. The downtime/crashing that occurred with it is way above the norm. It crashed 2 or 3 times last year. Horrible on the network performance too, because the system may have scaled way beyond Solaris' capacity. (60 nodes communicating with 1 node grinds the CPU terribly on Solaris.) I know I don't have the numbers to back these up; my only benchmark is how loudly the customer yells.

      Our latest system uses IBM xSeries with dual hard drives in a RAID 1 configuration. Excellent systems. The per-computer cost is about the same as for Sparc, and I would venture to say the hardware stability is there. IBM HDs are extremely reliable, and the design of the systems is quite fault-tolerant. Maybe in six months, when /. dupes this story again, I'll give you an additional opinion on the matter.

      The use for Linux in the Enterprise is here and now. If you cannot envision that, then you'll be left behind, plain and simple. The next stable Linux kernel will make it even more so.

      Just my .02, thanks for reading this tome.
  • by maynard ( 3337 ) on Monday March 03, 2003 @07:42PM (#5428302) Journal
    AdvFS: One feature I'd really like to see implemented would be the old Veritas Advanced Filesystem, either as a commercial product ported over or as a free reimplementation. The ability to clone a filesystem volume and then append changes as deltas to the original is quite a nifty feature. Adding total versioning of all filesystem objects would be even better. A good logical volume manager would be nice too. It's coming along though.

    Display Postscript: Whatever happened to L. Peter Deutsch's old Display Ghostscript X Server extension? It seems like the last update to that was about three years or so back. Now that's a feature we would all love. DPS handles displaying fonts and complex shapes properly. We all know X isn't going to die any time soon, so a good Display Ghostscript server extension would be a Godsend. For that matter, with all the funding being dumped into KDE and Gnome, why did we all forget about GNUStep? But I digress.

    devfs: Please, when are we going to finally transition away from static device nodes to devfs? Solaris had it right, dynamically name the device on detection after its physical properties. This is really important and hasn't been implemented for anything more than testing.

    In kernel Framebuffer/DRM device drivers: The old GGI folks had it right. Physical devices like video cards should be initialized and managed in kernel space. Let the console and applications like an X Server talk to the device through a device node and/or ioctl calls and be done with it. No more video crashes when changing display modes, and real user space video security. Yes, there's framebuffer support in 2.4, but not for any decent, modern, cards. DRM hooks within XFree-4.x have come along nicely for GLX support though.

    NFS: is STILL a mess! Christ, five years after everyone in Linux land finally accepted that Linux needs a major NFS rewrite and we still have to run BSD or a commercial UNIX for a decent NFS server. What a clusterfuck.

    AFS support: OpenAFS is good, real good. But its licensing terms are unacceptable for inclusion into the main kernel tree. AFS is critical for enterprise quality network filesystem support. Notwithstanding, I still thank IBM for their initial code release and the OpenAFS team for the quality work they've done in porting the old IBM/Transarc codebase over to Linux.

    Journaled filesystems: are here, but they're still a bit shaky for heavy use. They're getting pretty damn good feature wise though. A year or two more of long uptimes in the real world and they'll be rock solid for the enterprise. Way to go!

    Raw I/O support: Primarily due to pushing from Oracle and IBM this has come a long way. But it still needs to be banged on for a couple years yet before enterprise folks will trust Linux for large scale database deployments. We also need a ubiquitous 64 bit platform to deploy upon. Alphas and Suns don't count because not enough folks run Linux on those systems to shake out enough bugs such that one would prefer Linux over DU or Solaris. I've seen Linux on an ES40 and it's not pretty. Which leads me to...

    Mainstream 64bit hardware: This is not a Linux fault, but the fault of Intel. When are they going to finally release a decent 64 bit platform suitable for the commodity market? Un-fucking-believable that over ten years after the release of the DEC Alpha we still don't have ubiquitous 64bit computing. And these days RAM is so cheap we're actually running up against the physical memory bus limit, never mind the virtual memory advantages to 64bit memory management. This is just stupid. Hope AMD eats Intel's lunch, they deserve it.

    I'm sure there's more... and JMO for what little that's worth.

    Cheers,
    --Maynard
  • by illumin8 ( 148082 ) on Monday March 03, 2003 @07:42PM (#5428304) Journal
    1. Binary Compatibility - Enterprise customers want to know that the app they wrote on Solaris 2.6 will still run on Solaris 10 when it's released next year, without recompiling it. Sun gives them that, with a binary compatibility promise. The Linux kernel and GNU libraries change too frequently for customers to have that guarantee.

    2. Scalability - Linux needs to be able to scale linearly beyond 4 processors before it will be a serious contender in the Enterprise space.

    3. 64-bit throughout - Sun has spent years removing all 32-bit bottlenecks from every piece of code that makes up the Solaris OE (operating environment). This takes a lot of effort and even when we see the AMD 64-bit processors this fall, it's still going to take at least a couple years of Linux development for all of the various 32-bit bottlenecks to be found and fixed. This is no trivial task, as developers at Sun found out during the move to the 64-bit product line in Solaris 7.

    4. Enterprise Volume Management - I heard Veritas was releasing Volume Manager for Linux, but I'm not sure if it's out yet. When you're carving up 10 terabytes of disk space on your EMC storage, you definitely don't want to be using fdisk... :-) Also, Solaris has Solstice DiskSuite built in, which is great for mirroring your root disks and comes free. In my experience it works much better than software RAID (md) on Linux.

    5. Journaling File System - Native in the kernel, and it runs on top of standard UFS. While we're at it, why doesn't Linux make UFS its standard filesystem? If Linux's goal is to be a Unix workalike, why not go that route? UFS has been good enough for Unix for years. To add journaling support to a UFS volume, all I have to do is add "logging" to the end of my mount entry in /etc/vfstab and remount the filesystem (there's an example entry after this list).

    6. Enterprise Level Support - This is something that unfortunately nobody has provided for Linux yet. Sure, you can get great support from Dell on the hardware, but if there's a problem with the OS, forget about it. If you have talented staff, Linux isn't that hard to support, but what if your staff leaves or the brilliant Linux geek who architected your system steps in front of a bus tomorrow? If you have a platinum contract with Sun, you can have 2-hour response time, 24/7, anywhere in the world. That means if I have a remote box in a closet out in Timbuktu and it goes down, all I have to do is call a 1-800 number and a Sun guy will be onsite, have the problem resolved, and have my box back up within two hours. They'll even re-install the OS for you if necessary, not to mention they have a really great backline support team that knows how to analyze core files and can trace back through the stack to help you find the root cause of the crash. I once had a backline Sun kernel engineer who was able to tell me that the box had crashed because an admin logged in as root had done a "kill -9" on a process he shouldn't have. How many Linux vendors have support departments that good?

    7. VOS (Veritas Oracle Sun) Alliance - This is an alliance between these three companies that lets them share support resources. For really high-end OLTP systems like banks, telcos, and financial markets, customers get a single group to contact for support on their high-end database cluster. These customers bleed millions if they are down for even a few hours, so it's important to have one group to point a finger at and get a solution as quickly as possible.
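
    Since point 5 above may sound too good to be true, here's roughly what that vfstab entry looks like (the device names are only an example):

        #device to mount   device to fsck      mount point    FS type  fsck pass  mount at boot  mount options
        /dev/dsk/c0t0d0s7  /dev/rdsk/c0t0d0s7  /export/home   ufs      2          yes            logging

    Remount the filesystem and logging is on; there's no conversion step and no new on-disk format.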

    Those are the only ones I could think of off the top of my head. I'd love to hear someone with more information than I have give dates by which these things could be implemented.
  • by Feign Ram ( 114284 ) on Monday March 03, 2003 @11:33PM (#5430068)
    How come Google's servers, which handle more hits than any Solaris/AIX box or cluster in the world, manage all that traffic with nothing but low-end Linux boxes?
  • by Anonymous Coward on Tuesday March 04, 2003 @12:36AM (#5430407)
    Missing:

    1. A robust journaled, clustered filesystem supporting multiple concurrent mounts by separate machines, with ACL and quota support, extensible over NFS v4 or another IP implementation without giving up ACLs and quotas in the process of networking it; one that doesn't cost the firstborns of all my staff, plus arms, legs, and other vital parts of our anatomy.

    GFS is close...but not there yet.
    GPFS is closer, but has its own API hooks that make it painful for some apps, and costs as described above.
    CXFS hasn't been ported to Linux yet.
    etc.

    2. A quota management suite that stores quota limits in a flexible SQL DB and applies them to the running system's quota files via a cron job (a rough sketch of what I mean follows this list). Right now, if you lose or corrupt the quota file you lose your settings... or you restore from backup and wait while quotas are rebuilt on a filesystem with 16M files.

    3. Microsoft Access database read/write support. Not strictly necessary, but it would make that last bit of selling SOOOOO much easier.

    4. Games - I never play them myself, but I can't count the number of people who tell me they would switch their home machine over in a heartbeat if only the games were there.
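
    To illustrate what I mean in point 2, here's the rough shape of the glue I'd like to stop writing by hand. This is only a sketch: the SQLite database, the quota_limits table and its columns, and the filesystem path are all invented for illustration, and a real version would need error handling. It just reads limits out of the DB and pushes them onto the running system with setquota(8), so you could run it from cron.

        #!/usr/bin/env python
        # Sketch only: pull quota limits from a SQL DB and apply them with setquota(8).
        import sqlite3
        import subprocess

        DB_PATH = "/var/lib/quota-limits.db"   # hypothetical database location
        FILESYSTEM = "/export/home"            # filesystem whose quotas we manage

        def main():
            conn = sqlite3.connect(DB_PATH)
            # Hypothetical schema: user, block_soft, block_hard, inode_soft, inode_hard
            rows = conn.execute(
                "SELECT user, block_soft, block_hard, inode_soft, inode_hard "
                "FROM quota_limits"
            ).fetchall()
            conn.close()

            for user, bsoft, bhard, isoft, ihard in rows:
                # setquota -u user block-soft block-hard inode-soft inode-hard filesystem
                subprocess.call([
                    "setquota", "-u", user,
                    str(bsoft), str(bhard), str(isoft), str(ihard),
                    FILESYSTEM,
                ])

        if __name__ == "__main__":
            main()

    Lose the quota file and you restore one small SQLite database; the quota files themselves become disposable.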

    NOT missing
    1. Large System Support
    Got one with 1.8TB of user data files, 36,000 user home directories, and 16M+ files.
    BTW, ext3 is quite stable, thank you. Thanks in part to one of my staff, who beats it to death finding problems with ACLs, quotas, ext3, etc. under heavy load and SMP.

    2. Performance - wind it up and watch it go...
    Our mainframe thinks it's a big day when it does 180,000 transactions. Our network servers think it's a holiday and everyone's at home.

    3. Full Commercial Support - GEEEZZZZ, I get sick of this one. Of course it has full vendor commercial support. Pay as much to Oracle, Red Hat, Dell, IBM, the consultant of your choice, etc. to support it as you do for a support contract for one of those overpriced OSes of yesteryear, and they'll happily support any damned thing you want!

    Or, take the view I cultivate: be your own support. It's cheaper to hire a couple of kick-ass programmers and 3-4 hot young sysadmins than it is to pay full support for scads of OSes and applications!

  • by Anonymous Coward on Tuesday March 04, 2003 @02:55AM (#5430977)

    OK, I have loads more Solaris experience than I do Linux experience, so here are some Solaris features that AFAIK Linux doesn't have:

    1. Filesystem snapshots: You type a command, and a few seconds later, your filesystem "forks" into two halves: one that never changes and one that behaves normally. You then back up the unchanging one, destroying it when you no longer need it. This all happens in a copy-on-write way, so a 10GB filesystem needs only a few hundred MB of free space to be forked; you don't have to copy the entire filesystem, just the parts that change. Note that this feature is REALLY nice if you want to back up a bunch of data that absolutely MUST be in a quiescent state. Instead of doing the whole backup in single-user mode (sometimes the only safe way), you reboot, take only the snapshots in single-user mode, then start up into multi-user mode and do the backups at your leisure, because not one bit of the snapshot will change from how it was in single-user mode. (There's a rough sketch of this workflow after the list.)
    2. A really damn modular kernel. Every major group of features is a module. EVERY SINGLE driver is a module. Scheduler policies are modules. Protocols are modules. Filesystems are modules. Architecture-specific stuff is a module. They all magically load on demand, always. You can work with Solaris for years without ever learning how to force a module to load. Each module has its own configuration file that is read at RUN TIME, NOT AT BUILD TIME. You can often change module configuration by changing the config file, then reloading the module. Hell, you can even patch the running kernel in some cases by unloading, then replacing the module, then loading it again. If you need to rebuild something from source, 99% of the time it's not the whole kernel, just a few modules. So, you can make the update quickly and without rebooting, if you want.
    3. Binary patches. On Solaris, the operating system comes in releases, and then specific patches are issued to fix specific problems. You don't just "upgrade everything to the latest" (like in, say, Debian). Instead, you can (if you want) apply only the patches that you need. You might not even upgrade a whole package: a patch just changes the files that need to change in order to implement a fix. A patch is generated from a SEPARATE source branch whose goal is stability and bug fixes, not new features. There is real value in this: differentiating between feature updates and bug fixes makes systems more predictable and stable. And if the patch does something undesirable, you use "patchrm" to roll back to EXACTLY the set of files you had before applying the patch. That doesn't mean you fetch v1.5 off the net, find it unstable, and go fetch 1.4 again to install it. It means the old files come out of the local patch database. Except for being compressed and then uncompressed, they are the same files you had on your machine before.
    4. Support for NFS failover from one server to the next, at least for read-only volumes.
    5. An intelligent automounter. One that lets you easily specify filesystems by where they are on the net, and if a filesystem happens to be local, it automatically knows that and does a loopback mount instead of an NFS mount. So, you can have one global, network-wide automounter config file that says "joebob's home dir is on the server 'brubeck'", and if you're on some other host, it NFS mounts it, but if joebob logs into brubeck itself, the automounter figures this out and mounts it as a local loopback mount (not NFS through 127.0.0.1). And this happens with no hostname-based "if" statements. Also, if the server "brubeck" has more than one interface on different TCP/IP networks, the automounter looks to see if any of them are on the local net and uses that IP address. So basically, you can use the same automount config file on every single host, without having to create special cases for different hosts (see the map example after the list).
    6. There's a database of devices. Probe order on the bus has NOTHING to do with which device matches which entry in /dev. If I have 2 SCSI cards on my PCI bus and then I add a third SCSI card in between the two existing ones, every single device on the original SCSI cards stays with the same entry in /dev. The two original controllers are "c0" and "c1"; the third (new) one, though it is probed before c1 and after c0, is called "c2", because it is important to keep everything the same for c0 and c1. This matters on servers that have multiple SCSI cards! (And this database of devices is stored in ASCII in case you need to fix it manually, unlike the equivalent thing in AIX...)
    7. On Solaris, there isn't a manual page that says "Swap over NFS may not work." How am I supposed to take "may not work"? Should I try it? This is a stupid thing for a manual page to say!
    8. On Solaris, there is cachefs, which is a filesystem that allows you to use local disk space to cache other filesystems (usually NFS). So, if you have 50 front-end web servers, you can set them all up to NFS mount the data from a few servers, but keep a giant on-disk cache of it locally for speed. Thus, when you need to make a minor change to one file under your tree, you don't have to push the changes to 50 systems and wait half an hour for the updates to show up everywhere. It happens more or less instantly. (Yes, you could probably set up AFS or CODA or something to do this under Linux, but Solaris can do it right out of the box.)
    9. Solaris has Jump Start, which means this: you can set up an install server. Then, when it comes time to install the software on a new desktop machine (or server, even!), you perform the following steps: (1) unbox the machine and hook it to the network, and (2) turn it on. And that's all. Without typing anything on the keyboard (unless you order it with Solaris already installed, in which case you have to type 11 keystrokes), it boots from the network out of the box. The network boot starts the install process. The install process figures out how to partition the disks. The software is all copied over. Then, your "finish" script (post-install site customizations) runs, and makes all the necessary changes for your site's configuration. The system does all this on its own and reboots, and is ready to use. If you can hook a machine to the network and plug in its monitor, keyboard, and mouse in 15 minutes, you can leave that machine after the 15 minutes and never have to come back to do another step.
    10. Live Upgrade. Solaris has this feature called Live Upgrade and basically what it amounts to is this: you designate a spare disk as your new root disk. The OS installer then does an "upgrade" install, which means the normal thing: take your existing OS version and upgrade it to the latest, preserving all your config files, data files, etc. The difference is that in this case, the original disk is not touched -- a copy is made onto the spare disk, and the upgrade proceeds as just a normal program in multi-user mode. So you can do your upgrade to the next major version of Solaris while your mission-critical server is running and performing its mission-critical service. Once you are satisfied you've got what you want, you do a quick reboot off the spare disk. If everything's peachy, you're done. Otherwise, you reboot off the original root disk. You still have to schedule downtime for your server's OS upgrade, but there's at least a chance of being able to go home 30 minutes into the time you've scheduled with the system back up and running instead of KNOWING you'll be there for many, many hours doing the entire OS upgrade.
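
    For anyone wondering what the workflow in point 1 looks like in practice, here's a rough sketch of a backup script driving fssnap(1M) and ufsdump(1M) from Python. The mount point, backing store, and dump file are made up for illustration, and a real script would want error handling and cleanup of the backing-store file:

        #!/usr/bin/env python
        # Sketch of the snapshot-then-backup workflow: freeze, dump, discard.
        import subprocess

        MOUNT_POINT = "/export/home"             # filesystem to back up
        BACKING_STORE = "/var/tmp/home-snap-bs"  # where copy-on-write blocks go
        DUMP_FILE = "/backup/home.ufsdump"       # where the level-0 dump lands

        def main():
            # Create the snapshot; fssnap prints the snapshot's block device,
            # e.g. /dev/fssnap/0.
            snap_dev = subprocess.check_output(
                ["fssnap", "-F", "ufs",
                 "-o", "backing-store=" + BACKING_STORE, MOUNT_POINT],
                text=True,
            ).strip()

            # Dump the raw snapshot device at leisure; the live filesystem keeps
            # changing, the snapshot does not.
            raw_snap = snap_dev.replace("/dev/fssnap/", "/dev/rfssnap/")
            subprocess.check_call(["ufsdump", "0uf", DUMP_FILE, raw_snap])

            # Throw the snapshot away once the dump is done.
            subprocess.check_call(["fssnap", "-d", MOUNT_POINT])

        if __name__ == "__main__":
            main()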
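
    And to make the automounter in point 5 concrete, the shared maps can be as small as this (the server name and paths are just the example from above). In /etc/auto_master:

        /home   auto_home

    and in the auto_home map, a single wildcard entry:

        *   brubeck:/export/home/&

    With that one map distributed everywhere, /home/joebob resolves to brubeck:/export/home/joebob over NFS on every other host, and to a loopback mount of the local disk on brubeck itself.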

E = MC ** 2 +- 3db
