
Build From Source vs. Packages?

mod_critical asks: "I am a student at the University of Minnesota and I work with a professor performing research and managing more than ten Linux-based servers. When it comes to installing services on these machines I am a die-hard build-from-source fanatic, while the professor I work with prefers to install and maintain everything from packages. I want to know what Slashdot readers tend to think is the best way to do things. How do you feel about the ease and simplicity of installing and maintaining packaged programs versus the optimization and control that can be achieved by building from source? What are your experiences?"
  • Personally (Score:5, Interesting)

    by jwthompson2 ( 749521 ) * on Tuesday March 30, 2004 @01:48PM (#8715987) Homepage
    I do a bit of both. I predominantly install items from packages, when available, for testing and review of something new that I am interested in. Once I establish whether what I have been playing with may be useful for some particular purpose I will research the source build options. If there are specific optimizations that can be made for my system's hardware or pre-installed software I will then look at installing from source in order to leverage those optimizations, but if there is no advantage to compiling the source due to lack of any worthy optimizations then I will install from packages any time I want that software.

    That is my way of handling things; do what fits your needs best. That's why we have this option.
    • Re:Personally (Score:5, Interesting)

      by allyourbasebelongtou ( 765748 ) on Tuesday March 30, 2004 @01:58PM (#8716163) Homepage
      In short: I have to agree--I do a bit of both, too.

      The main thing I encounter that keeps me from using them all the time is the need for specific add-ons that are not available as part of packages but are available when rolling my own.

      As an aside, there are certain bits that I just prefer to compile myself, for any number of reasons.

      That said, there are other bits of software that are pretty generic items that the packages make *trivially* easy to work with, and where compiling those same things from scratch--particularly on older hardware--makes you get a bit long-in-the-tooth waiting for the compile to return.

      To me, this is truly one of the ultimate beauties of open source: you're not stuck with pre-built, but you can leverage it when it makes sense.
    • by Roadkills-R-Us ( 122219 ) on Tuesday March 30, 2004 @01:59PM (#8716183) Homepage
      I agree. What the professor wants is a readily supportable, production environment, and that's what you should supply. That means packages wherever possible. IFF there is a clear need, build from source: a 5% speed optimization may not be worth it (that's the prof's call). A 50% speed improvement (unlikely, but possible) would probably be worth it (prof's call). Otherwise, I'd only build from source when there was not a trustworthy package available, or to add features, fix bugs, etc.

      I've been in both your and the prof's position, and this is generally the best bet. It'll make the prof's life a lot easier when you're gone, too.
      • by Shakrai ( 717556 ) on Tuesday March 30, 2004 @02:26PM (#8716606) Journal
        Otherwise, I'd only build from source when there was not a trustworthy package available, or to add features, fix bugs, etc.

        If you can't find a site with a trustworthy package what makes you think you can find a site with trustworthy source code? Or are you going to review every line of code to make sure it wasn't tampered with?

        The paranoia works both ways :(

        • by JAgostoni ( 685117 ) on Tuesday March 30, 2004 @02:35PM (#8716736) Homepage Journal
          In the parent post's defense, you can almost always get the source code from the "source" or author. However, sometimes you rely on some other guy to produce a .deb or .rpm or whatever which you might not trust as much as the author.

          I almost always trust packages from the vendor and the distro and only trust "3rd party" packages when there's been tons of anecdotal evidence that they work.

      • I've also been in both positions. As a graduate student and then during my 6 years in industry, I was extremely interested in building from source (custom kernels, custom libraries, web servers, the whole nine yards).

        However, now as a professor, I've become more interested in focusing on building the tools that are part of my research. These I publish (or will publish once they are ready) as open source. But for the other elements such as development libraries, servers, etc.: I just want them to work.

      • If you want to rebuild a package to get the optimizations out of it, you should probably learn how to build the packages.

        Build once, deploy everywhere makes it easy to maintain. Last time we had to do a massive openssh upgrade on our equipment, the rpm-based boxes were done in 15 minutes, while the source-based boxes took about 2 hours. The real kicker is that we had (at that point) about 3 times the number of rpm-based systems compared to source-based.
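
        For illustration, a mass RPM upgrade can be a single command per box (version and file names hypothetical):

            rpm -Fvh openssh-3.8p1-1.i386.rpm openssh-server-3.8p1-1.i386.rpm

        The -F (freshen) flag upgrades only packages that are already installed, so the same command line is safe to run across a mixed fleet.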

        Source is great for the hobbyist, but as a sysadmin...
    • by drdanny_orig ( 585847 ) * on Tuesday March 30, 2004 @02:06PM (#8716313)
      I use fedora, and most often I get the *.src.rpm versions, then tweak the SPEC files as required, build my own binary rpms, and use those. Best of both worlds, IMO.
      • by tjwhaynes ( 114792 ) on Tuesday March 30, 2004 @03:13PM (#8717206)

        I use fedora, and most often I get the *.src.rpm versions, then tweak the SPEC files as required, build my own binary rpms, and use those. Best of both worlds, IMO.

        And the tweaking need not be that tricky or time-consuming either. Decent defaults for building RPMs can be placed in your ~/.rpmrc file (or /etc/rpmrc, etc.). Once you have set your optimising settings, architectural preferences, and packager name and cryptographic signature (if you want to submit them to other people), that's done for all future packages.
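
        As a sketch, entries along these lines set per-architecture defaults (the flags here are illustrative, not a recommendation):

            optflags: i686 -O2 -march=i686 -fomit-frame-pointer
            optflags: athlon -O2 -march=athlon

        The build tool then picks the matching optflags line automatically whenever it builds for that architecture.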

        I used to run a mix of RPM packages and tarballs (./configure --prefix=/usr/local && make && su -c "make install") so I could tell what was under RPM control and what was not, but it became annoying when I wanted to build a Source RPM with dependencies on a package I had built from tarballs. These days I usually try and wrap any install up in an RPM - it's not difficult once you get hold of a skeleton spec file for your distro and it saves much hair pulling later on. Also, the dependency requirements of RPMs actually save time in the long run because you know when removing a package will hose your system (or part of it).
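
        For anyone who hasn't seen one, a minimal skeleton spec looks roughly like this (name, paths and file list hypothetical):

            Name: foo
            Version: 1.0
            Release: 1
            Summary: Locally built foo
            License: GPL
            Group: Applications/System
            Source0: foo-1.0.tar.gz
            BuildRoot: %{_tmppath}/%{name}-%{version}-root

            %description
            Local build of foo.

            %prep
            %setup -q

            %build
            ./configure --prefix=/usr
            make

            %install
            make install DESTDIR=$RPM_BUILD_ROOT

            %files
            /usr/bin/foo

        Running rpmbuild -ba foo.spec then produces both the binary and the source RPM.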

        Cheers,
        Toby Haynes

  • Support (Score:5, Insightful)

    by ackthpt ( 218170 ) * on Tuesday March 30, 2004 @01:48PM (#8715993) Homepage Journal
    After you've gone, it will be easier for the prof to get support on a package than on something custom. From experience, the less something you have resembles what tech support is expecting, the more finger-pointing there is and the less gets done.

    As often as I've lamented how much employers spend on PCs versus building them from parts, they would rather not have to rely on someone in-house to support hardware.

    • Re:Support (Score:5, Insightful)

      by vrTeach ( 37458 ) on Tuesday March 30, 2004 @02:05PM (#8716304)
      This is very much the case. I have managed 15-20 Linux machines for the past seven years, and have moved from largely building from source to largely depending on packages. The porting of apt to rpm systems has completely changed my work for the better, so if at all possible I use the packages and a small subset of apt repositories. My next step is probably to develop our own apt repository.

      In some cases, the packaged version won't play well with something that I need, or I particularly don't want upgrades to disturb something. In that case I put together a pseudo-script that gets and builds the source and dependencies, and mark the packages as "Ignore" in my apt configuration.
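
      As a sketch of the "Ignore" part, an entry in apt's /etc/apt/preferences does the trick (package name hypothetical):

          Package: somepkg
          Pin: version *
          Pin-Priority: -10

      A negative pin priority keeps apt from ever installing or upgrading that package, so a hand-built version is left undisturbed.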

      eks
    • Not always the case (Score:5, Interesting)

      by Anonymous Coward on Tuesday March 30, 2004 @02:21PM (#8716549)
      Sometimes the exact opposite is true, especially in terms of "community support". For instance, mod_perl, of which Red Hat for some reason decided to ship a very early version. The typical response on the mailing lists for mod_perl or any other alpha/beta package RH included usually goes "try it from source, then email us" (that's after someone submits a reasonably complete bug report).

      Let's not forget the GCC fiasco and probably dozens of other examples where RH decided to "lead the pack" in terms of version numbers but not stability.

      Of course, then there's Debian woody, living in circa-2001 land.
      • by jd142 ( 129673 )
        Ah, but you see you're asking for support from the mod_perl list. If you are using the package from Red Hat, you should try Red Hat support or Red Hat specific mailing lists.

    • by cbreaker ( 561297 ) on Tuesday March 30, 2004 @02:31PM (#8716663) Journal
      No way.

      Usually when one builds from source, they install it to wherever the original developer has it set to by default. Unless you did some heavy patching, the software will very likely be more "true" to the original software than many packages.

      RPMs for distributions such as Red Hat or Fedora often have to move configuration files all over the place to mesh with the OS properly.

      You're more likely to be able to sit down at a strange Linux box and troubleshoot whatever program when it's compiled from source tarballs versus an RPM. Unless of course, you know the RPM, or the RPM doesn't do anything funky.

      Considering the stuff is Open Source, and chances are the programs are not under a paid-for support contract, it's pretty safe to say that BOTH methods would have to be supported "In House." And if not, your support contract could very well support the source compiled versions anyways.

      I choose the Gentoo way. Everything is compiled from source; it's just nice and automated. Almost never have I run into something where the program had to be modified to fit the distribution.
      • by Sleepy ( 4551 ) on Tuesday March 30, 2004 @06:11PM (#8719280) Homepage
        Usually when one builds from source, they install it to wherever the original developer has it set to by default. Unless you did some heavy patching, the software will very likely be more "true" to the original software than many packages.

        Correct me if I am wrong, but are you contradicting yourself here? Gentoo DOES use developer source, but they ALSO do what you call "heavy patching".

        I interpret this "source vs package" debate to be something different: what is the NORM for your distribution, and are you using the OS in ways that were not tested by the vendor's SQA team?

        For example, ANY of these distros can get borked if you install Ximian on top of them and THEN go back to the vendor for updates. It wouldn't matter if you did it from source or packages.

        Same with Alien packages on Debian, or "Redhat centric" rpms on Mandrake or SuSE.

        Bottom line is don't mix oil and water. :-)

        I agree with your comments about what is good with Gentoo. I happen to like Gentoo and FreeBSD for the very reason that there's a BAZILLION source packages that all have cross-testing against each other. Same for Debian I suppose.

        Best thing RedHat ever did for their desktop distro was set it free. They NEVER wanted to be in the business of supporting user-borked desktops when they install random stuff from the net, and they never wanted to manage and QA a large repository. Now it looks like there's a Fedora community (two actually) addressing the package distribution issue. Good for them.
    • Re:Support (Score:4, Informative)

      by pointbeing ( 701902 ) on Tuesday March 30, 2004 @03:09PM (#8717169)
      As often as I've lamented how much employers spend on PCs versus building them from parts, they would rather not have to rely on someone in-house to support hardware.

      It's generally more expensive to build hardware than to buy it. I work for DoD and buy about a zillion computers a year. My organization has ~2000 employees and PCs are on a four-year replacement cycle. In order to build machines in-house I'd need at least one additional full-time employee (cost about $70K including benefits) and the space to build the machines.

      Right now I'm *buying* computers from a major manufacturer - 3.2GHz Intel, 768MB RAM, 40GB hard drives - perfect corporate machines - for $907 each. The major manufacturer guarantees hardware compatibility for 36 months so my existing sysprep loads will work, provides 36-month onsite warranty support and will inflict my image on these PCs for free. You can't build 'em that cheap.

      I just bought a bit more than 500 machines this year - the full-time employee alone would add at least $140 to the price of each PC you built and I'm a bit skeptical that you could build and support those machines with only one person.

      In short, you can't build the same PC, guarantee hardware compatibility, inflict a standard load on them and provide worldwide onsite warranty support for anywhere near the $907 for each unit I just bought.

  • One word for you... (Score:4, Informative)

    by TwistedSquare ( 650445 ) on Tuesday March 30, 2004 @01:49PM (#8715995) Homepage
    Gentoo! (Combines the best of both worlds)
  • by Novanix ( 656269 ) * on Tuesday March 30, 2004 @01:49PM (#8716000) Homepage
    Gentoo [gentoo.org] is a great OS: instead of shipping binary packages, it builds everything from source, efficiently and automatically. It can also just manage the source while you compile things yourself. If you were dealing with many systems you could set up your own Gentoo sync server and distribute custom copies of various packages exactly to your specs and compiling details. In addition it can easily determine dependencies, and even install them for you if needed. Gentoo is kind of like a bare-bones OS that simply makes it easy to install whatever you want, shortcutting the process of installing things by compiling them for you.
  • by bperkins ( 12056 ) * on Tuesday March 30, 2004 @01:49PM (#8716004) Homepage Journal

    While building from source can be fun, and sometimes necessary, I don't think it usually makes sense. You spend far too much time tweaking minor issues, and lose sight of major problems.

    One problem that I've noticed is that build-from-source people tend to install things in a way that's completely different from anyone else. This means that anyone who later tries to maintain the machine is hopelessly lost trying to figure out what the previous person did. OTOH, when (e.g.) Red Hat does something weird, the explanation and fix is usually just a few Google queries away.

    Most (all?) package formats have source packages that can be modified and rebuilt in case you need some really special feature.

    • by KenSeymour ( 81018 ) on Tuesday March 30, 2004 @01:55PM (#8716102)
      I would have to agree about using packages. One gripe I have about building from source is that most programs do not have a "make uninstall" target.

      With packages, you have a much better chance of removing all the files that were installed with the packages when you need to.
      • Which is why Portage is so handy. It builds from source and takes care of package removal. It also offers config file protection so a new version of a package doesn't stomp all over your carefully configured system.
      • by goon america ( 536413 ) on Tuesday March 30, 2004 @02:46PM (#8716887) Homepage Journal
        Which is why you should always save the output of "make install" somewhere. I keep mine in /usr/local/install_logs.
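
        For example (the log path is just a convention I assume here):

            make install 2>&1 | tee /usr/local/install_logs/foo-1.0.log

        Grepping that log later tells you exactly which files to delete.
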
    • by Anonymous Coward on Tuesday March 30, 2004 @01:56PM (#8716117)
      Speaking of RedHat doing something weird... RedHat managed to _rename_ p_pptr to parent in task_struct in the kernel. How did they manage to get away with something like that? If there are custom kernel modules that happen to want to use p_pptr, then everything breaks!
    • by adamjaskie ( 310474 ) on Tuesday March 30, 2004 @02:09PM (#8716350) Homepage
      I run Slackware. Most of the major stuff I need is available as official packages from Pat, and quite a bit of other stuff is available on LinuxPackages.net. I will usually look first to see if there is an official package, and if not, I will do a quick look on LinuxPackages.net, but those are usually a bit out of date, so I usually end up just downloading the source and compiling it. I see nothing wrong with compiling my own stuff, as it doesn't take much longer. With checkinstall, I can even enter it into the package management system to uninstall more easily in future.
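
      For reference, the checkinstall dance is roughly (package name and version hypothetical):

          ./configure && make
          checkinstall -S       # -S builds and installs a Slackware .tgz package
          removepkg foo-1.0     # later, removal goes through the normal package tools
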
    • by bwy ( 726112 ) on Tuesday March 30, 2004 @02:14PM (#8716435)
      You spend far too much time tweaking minor issues, and lose sight of major problems.

      Good point. There are probably very few cases where spending the extra hours of tweak time ever ends up being something that adds a significant amount of value to anybody, except yourself of course. I can think of a couple exceptions, but they are exactly that- exceptions to the rule. IMHO the ability to standardize installation packages is an important aspect of modern computing.

      If time didn't matter, I suppose we could all go so far as to write all our own software that would do exactly what we wanted.
    • While building from source can be fun, and sometimes necessary, I don't think it usually makes sense. You spend far too much time tweaking minor issues, and lose sight of major problems.

      I tend to agree, but I have found one case on Redhat where RPMs give me nothing but trouble: Perl.

      I have always had problems with Perl when I go to install a new module from CPAN if Perl was installed with an RPM file or came with the system (i.e. installed when the system was installed). Perl itself works great, but some CPAN packages...

    • by jc42 ( 318812 )
      One problem that I've noticed is the fact the build from source people tend to install things in a way that's completely different than anyone else.

      While I'd agree with you in general, I've found one curious case where I've learned to install from the source to make all my machines the same: apache.

      For some reason, every vendor (and sometimes every release ;-) seems to have apache installed in a clever way that's different from everyone else. They put the pieces in different directories; they munge the...
  • by untermensch ( 227534 ) * on Tuesday March 30, 2004 @01:50PM (#8716023)
    If you are working for someone else, maintaining servers that are intended for performing specific tasks, then I think the best solution is to do whatever is most efficient at performing those tasks. If you really don't need the performance gains brought by compiling from source (and you probably don't) and it's going to take you a long time to do the compiling, time that could be better spent actually doing the research, then it's not worth your effort. If, however, the compiling doesn't affect the users' ability to be productive and that is what you as sysadmin are most comfortable with, then it seems reasonable that you should be able to maintain the boxes however you like.
    • by ajs ( 35943 ) <ajs.ajs@com> on Tuesday March 30, 2004 @02:15PM (#8716449) Homepage Journal
      the best solution is to do whatever is most efficient at performing those tasks

      And if you've ever had to pick up and maintain a system from someone who left you will know that this is just about 100% wrong.

      The best solution is one that works and is maintainable. If you are willing to put in the extra work involved in making your from-source installations clearly maintainable and upgradable so that the next guy isn't going to have to spend 6 hours learning how everything works when he needs to upgrade foobnitz to version 2.0, then great. If not, think about letting someone else do that work for you.
  • Simply (Score:5, Insightful)

    by AchilleTalon ( 540925 ) on Tuesday March 30, 2004 @01:51PM (#8716038) Homepage
    build packages from source!

    Many sources include the SPEC file required to build the package.

  • OSX (Score:4, Interesting)

    by artlu ( 265391 ) <artlu@3.14artlu.net minus pi> on Tuesday March 30, 2004 @01:51PM (#8716041) Homepage Journal
    I used to be a huge debian fan because of apt-get and the direct install of packages, but I have migrated to OSX and find myself needing to build packages from scratch to work correctly. However, I will never hesitate to use Fink as much as possible. I think for 90% of what gets installed the packages should be fine, but if you know that there are certain optimizations that you can implement, why not build from scratch?
    • However, I will never hesitate to use Fink as much as possible. I think for 90% of what gets installed the packages should be fine

      90% of what gets installed when you use Fink has nothing to do with what you're installing.

      I've given Fink a shot on a couple of occasions over the last two years, and every time I've invoked it, it's come up with false dependencies. X11 is not necessary to install, say, the Python interpreter, and there've been dependencies far more ridiculous than that.

      I've had the same pro
  • My experience (Score:3, Interesting)

    by Jediman1138 ( 680354 ) on Tuesday March 30, 2004 @01:52PM (#8716048) Homepage Journal
    Disclaimer: I'm only 15 and am a semi-newbie to Linux.

    Anyways, I've found that by far the easiest and most time-saving method is to use rpms or debs. But of any distro, Lindows has it down to one or two clicks... though their software database subscription is a serious money leech.

    If it were up to me, source would always be an option to use, and the install process for rpms and debs would be one click and would automatically add programs to the menus and such.

    Just a few thoughts..


  • --No-Deps (Score:5, Insightful)

    by Doesn't_Comment_Code ( 692510 ) on Tuesday March 30, 2004 @01:52PM (#8716055)
    My biggest grievance against packages is the dependency fiasco. For instance, I have Red Hat at work, and the majority of the programs are .rpms. Well, there was a certain program that I could only get as source, so I compiled and installed it. It turns out that it was required as a basis for other packages I wanted to install. But when I tried to install those, it didn't recognize the prerequisite programs because they weren't installed via rpm.

    I don't care for the dependency model of packages, and I'd much rather install programs myself. That way I know I'm getting the program compiled most efficiently for my computer, and I don't have to worry about dependency databases.
    • Re:--No-Deps (Score:5, Insightful)

      by idontgno ( 624372 ) on Tuesday March 30, 2004 @02:01PM (#8716225) Journal
      I don't care for the dependency model of packages, and I'd much rather install programs myself. That way I know I'm getting the program compiled most efficiently for my computer, and I don't have to worry about dependency databases

      That just means that you'll have to store the dependency databases in your head. A release of a particular software package, whether it's a package or a tarball of source, depends on other software. Always. "configure" goes a long way towards working that out, but if it doesn't work automagically you're going to have to take it by the hand and lead it to wherever your copy of libfoobar.so.17 might happen to be.

      I've just started using yum for RPM management and I'm already liking it a lot. At least dependency management seems a bit cleaner and more automatic.

    • Re:--No-Deps (Score:3, Informative)

      For the record, the problem you describe actually is solvable within the Debian package system, although it comes under the heading of "advanced". You can build an actual package, of course, but short of that, you can make a "pseudo-package" that doesn't install anything, but has the required Debian package "Provides" header. Then the apt package database will know about the capability, and will be able to install things which depend on the functionality you've put in by hand.
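
      As a sketch, the equivs tool automates building such a pseudo-package; a minimal control file (names hypothetical) looks like:

          Section: misc
          Priority: optional
          Standards-Version: 3.6.1
          Package: libfoo-local
          Provides: libfoo
          Description: pseudo-package for a hand-built libfoo

      Running equivs-build on that file produces a .deb you can install with dpkg -i, after which apt's dependency checks are satisfied.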

      I mention Debian because I'...
  • My experience (Score:5, Informative)

    by TwistedSpring ( 594284 ) * on Tuesday March 30, 2004 @01:52PM (#8716057) Homepage
    is that compiling from source can sometimes even be slower executing depending on your compiler.

    Also, better to install from packages because:
    1. They WILL work
    2. They install fast
    3. They are easily de-installed
    4. They are painless
    5. Dependencies are installed automatically sometimes, and other times packages are the only way to resolve a dependency loop
    6. Most other OSes since the dawn of the home computer use pre-compiled binaries, and nobody has complained
    7. It is surely the developer's job to make sure it compiles properly and to do all the compiler-error headache solving

    Packages are just so much nicer. A lot of the time, I can get pentium-optimised versions of the ones I want, and if I can't then 386 optimised versions are OK by me. The difference in speed one sees is pretty much only for the anally retentive, it is so minimal.
  • Run Debian (Score:3, Insightful)

    by jchawk ( 127686 ) on Tuesday March 30, 2004 @01:53PM (#8716066) Homepage Journal
    Run Debian. If you absolutely must install from source you can use APT to grab the source that you need, compile it, and then build a .deb for it so you're still using the Debian tracking system. It really is the best of both worlds.

    For most packages though there really isn't a big need to compile from source.
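
    For the record, the whole round trip is about three commands (package name hypothetical):

        apt-get build-dep foo    # pull in the build dependencies
        apt-get -b source foo    # fetch the source and compile it into a .deb
        dpkg -i foo_*.deb        # install it under dpkg's tracking
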
  • by Evanrude ( 21624 ) <david AT fattyco DOT org> on Tuesday March 30, 2004 @01:53PM (#8716076) Homepage Journal
    I used to be a die-hard build from source person myself back when I ran slackware.
    Since that time I have gained more experience with production Linux systems.
    When it comes to managing production servers, I use Debian and typically only install programs that are in the stable tree.
    Every once in a while I have to build a deb from source, but only in rare circumstances.

    Now, when it comes to my development systems I am more likely to compile from source rather than rely on the packages to supply me with the latest and greatest.

    It really all just depends on what kind of stability vs. "new" features you need, as well as ease of management. Installing a package takes 30 seconds, while compiling and installing from source can take much longer and requires more hands-on work.
  • Depends (Score:5, Insightful)

    by Richard_at_work ( 517087 ) * on Tuesday March 30, 2004 @01:54PM (#8716081)
    I use OpenBSD, which like most of the BSDs has the ports tree, and also has packages. Most of the ports tree is built as packages available on the FTP sites, allowing you either to install third-party applications from source prepared for the job, or to install the package that has already been produced from that port. Best of both worlds, and indeed if you are after customisation and have a number of systems, you can make the changes on one system, and bingo - you have the package ready to roll out to the other systems.

    As for what I use? I used to use solely ports, but now I usually grab all the packages when I do a fresh install, and only use ports for what isn't available as a package, as the packages give me no disadvantage.
  • by mod_gurl ( 562617 ) on Tuesday March 30, 2004 @01:54PM (#8716095)
    If you're responsible for the machines you run how can you abdicate that responsibility by using whatever some package maintainer decides to give you? At the University of Michigan we use Linux from Scratch to manage hundreds of machines that provide everything from web servers to IMAP servers to user Desktops & Laptops. The trick is leveraging the work used to administer one machine well out to hundreds of machines. The tool for this is radmind [radmind.org]. Radmind doesn't require that you build your software from source, but it leverages the work you put into one machine to manage all of your machines. It also integrates a tripwire with your management software which means you can detect unwanted filesystem changes in addition to managing software.
    • by Kourino ( 206616 ) on Tuesday March 30, 2004 @02:17PM (#8716485) Homepage
      If you're responsible for the machines you run how can you abdicate that responsibility by using whatever some package maintainer decides to give you?

      While in principle I can agree with what you're saying, this is a pretty insulting view to take of all the people who work on GNU/Linux distributions. (Or put another way, how am I better than every Debian developer combined? (Substituting Debian for your distribution of choice, of course.))
      • how am I better than every Debian developer combined?
        Because you are more likely to know your exact hardware configuration than some nameless packager is, so you can optimize your compile flags accordingly.
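
        As a sketch with a typical autoconf build (the flags are illustrative only):

            CFLAGS="-O2 -march=athlon-xp" CXXFLAGS="-O2 -march=athlon-xp" ./configure
            make

        A stock i386 package can't assume those instructions exist; your own build can.
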
  • The answer is .... (Score:4, Insightful)

    by Archangel Michael ( 180766 ) on Tuesday March 30, 2004 @01:55PM (#8716100) Journal
    It depends.

    If you are advanced enough to compile source code in such a way that it performs better or in a tighter controlled manner, which suits the purposes you need better than off the shelf builds (packages), then by all means, build it from source.

    If on the other hand, you don't have a compelling reason to compile the source, then use the packaged product.

    I don't know about you, but for most of my servers, squeezing an extra few percentage points of performance out of extra configuration options isn't enough to bother running my own compile.

    Those that say they review ALL code for security problems (backdoors, holes, etc.) before compiling are probably lying. I am sure there are a couple of people who do.

    Basically if you do it just so you can be 1337, you are just vain, as I doubt that most people would see/feel the difference.
  • by gtrubetskoy ( 734033 ) * on Tuesday March 30, 2004 @01:55PM (#8716107)
    Any BSD user will swear by "build-from-source" and talk about how the ports [freebsd.org] are so great (and indeed they are).

    And any RedHat user won't really understand what the BSD user is talking about and will just keep on using binary rpms found from Google or rpmfind [slashdot.org]. In a desperate moment one will use any rpm that seems to do the trick - never mind security, PGP sigs, all that stuff...

    Seriously speaking, building from source is the UNIX way in my opinion. There is just something very heart warming and satisfying about seeing all the compiler messages scroll every time you install a package. (And try installing the native Java from BSD ports - several hours of pure joy!)

    • by idontgno ( 624372 ) on Tuesday March 30, 2004 @02:15PM (#8716456) Journal
      (And try installing the native Java from BSD ports - several hours of pure joy!)

      I'm not sure whether to mod you -2 BSDTroll or +1 BSDFunny. However, I'll comment instead. (Commented earlier downthread, so it's already a foregone decision, but what the hey, you only offtopic once.)

      The only joy I get watching compiler messages scroll by is laughing my butt off watching all the warnings. Don't these people use lint?

      And that's funny only if I'm already in a good mood. Otherwise, I hate having to actually watch the unavoidable visible indicators of the quality of the software I'm about to start using. Just like most people don't like watching sausage being made...from live pigs...

      Yeah, I know, if I know so much, why don't I fix it? Because I didn't sign up to indentured servitude, I just want to use the damn software. I realize that violates the canon of Open Source ethics in the minds of the extremists, but I have a job to do and it's not fixing your damn object cast mismatches.

      OK, ok, cooling down now.

      Thank you, in all sincerity, to the authors of those software packages. Please forgive me if watching 2423 warnings per compile cycle makes me a little crazy.


  • by Spacelord ( 27899 ) on Tuesday March 30, 2004 @01:57PM (#8716141)
    Most package systems allow you to "roll your own" packages from the software you build from source. I use Slackware myself, so I first install my apps into a "staging" directory and build my package from there using the makepkg command.

    It takes an extra minute of your time when you're installing software, but it really helps to keep track of what software is installed on the system, what files belong to it, what version it is, etc.
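
    The staging trick looks roughly like this (paths and package name hypothetical):

        ./configure --prefix=/usr && make
        make install DESTDIR=/tmp/staging
        cd /tmp/staging
        makepkg /tmp/foo-1.0-i486-1me.tgz     # roll the package from the staging tree
        installpkg /tmp/foo-1.0-i486-1me.tgz
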
  • Stow! (Score:5, Informative)

    by Abcd1234 ( 188840 ) on Tuesday March 30, 2004 @01:57PM (#8716150) Homepage
    Personally, I use both binary packages and source. Basically, if my distribution has binary packages, and they fit my needs (recent enough version, etc, etc), I'll just use the packages. Why not? However, if I do decide I need to build something from source, I like to use GNU Stow to manage my software. Basically, Stow allows you to install your from-source packages in a nice, sane hierarchy (eg: /usr/local/packages/this-program-1.0, /usr/local/packages/other-program-2.4), and then Stow does the job of setting up symlinks into the traditional Unix filesystem (typically /usr/local). So, by using Stow, you get the easy management features of packages (minus dependency resolution) for your from-source software. It's definitely saved my life... and it's especially useful in an NFS environment, as you can export your packages directory and then use Stow on the workstations to install individual packages as you see fit. Quite handy. :)
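
    The Stow workflow, as a sketch (program name hypothetical):

        ./configure --prefix=/usr/local/packages/this-program-1.0
        make && make install
        cd /usr/local/packages
        stow this-program-1.0       # symlink it into /usr/local
        stow -D this-program-1.0    # ...and this unlinks it again
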
  • by Theovon ( 109752 ) on Tuesday March 30, 2004 @01:58PM (#8716160)
    I've often had a lot of trouble building programs from downloaded tarballs. Besides mysterious dependencies that I can't track down, sometimes things just don't compile, or they crash, or they produce errors of other sorts. But in many of those cases, I could download, say, an RPM of supposedly the same package, and it would install just fine.

    On the other hand, with Gentoo I've never had any problems. Emerging new packages deals properly with all dependencies, and things always compile correctly. And there's a review process of sorts, where packages are first added to portage as "unstable" and then, once they have passed everyone's criticism, they are added to "stable". So far, the only "unstable" package I've decided to emerge was Linux kernel 2.6.4, and that all worked out brilliantly.

    Also, if you have a cluster of computers, you can do distributed compiles with, I think, distcc and/or some other package. Gentoo documents this VERY well. Plus, if your cluster is all identical machines, you can build binary packages once and then install them onto all other machines.
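
    As a sketch, the Gentoo side of a distcc setup is a couple of lines in /etc/make.conf plus a host list (host names hypothetical):

        # /etc/make.conf
        FEATURES="distcc"
        MAKEOPTS="-j8"

        # /etc/distcc/hosts
        localhost buildbox1 buildbox2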

    BTW, Gentoo isn't for everyone. The learning curve is STEEP. I had to start from scratch and do it all a second time before I got everything right. (Although I am a bit of a dolt.) Setting up is complex but VERY WELL documented. Only once you've finished building your base system does the extreme convenience of portage become evident.

    Also, there are still a few minor unresolved issues that no one seems to have a clue about.
  • by JM ( 18663 ) on Tuesday March 30, 2004 @02:02PM (#8716247) Homepage
    I used to run an ISP and built everything from source, but eventually it got to the point where it was unmanageable.

    You end up with different versions, different compile options, upgrades are a mess, and it's hard to support.

    Another problem is filesystem pollution. When you do your "make install", it's hard to track what files are installed, and when you upgrade to a new version, you can't be sure it's clean, since you might have configuration files or binaries anywhere on your system.

    So, one day, I started to make RPM packages of stuff I needed, and modified existing RPMS, and sent all the patches to the community.

    What happened is that Mandrake accepted all my packages, so all I had to do was to install the standard distro, and all I needed was there.

    And eventually, I made so many packages that they hired me ;-)

    But even if I wouldn't work for Mandrake, I'm still sold on RPMs. You have a clean SPEC file that contains the pristine source code, plus the patches, and basically all the instructions to build the stuff. You can specify the requirements, you can easily rebuild on another machine, uninstall the old stuff, or upgrade, with a single rpm command.

  • Ports or Portage (Score:5, Interesting)

    by iiioxx ( 610652 ) <iiioxx@gmail.com> on Tuesday March 30, 2004 @02:04PM (#8716290)
    As a FreeBSD user, I build almost everything from source using ports. I never install from packages. My reasons for this are many and varied, but basically, I prefer to build software myself, with the precise options I need. When you use packages, you are at the mercy of the packager and their preferences for options and optimizations. Several years ago when I used Linux, I often encountered problems with pre-built packages lacking a particular build option, sometimes installing to odd places, or other strangeness.

    And once you've started using packages and package management, it gets harder to introduce source-built software into the same environment without screwing up your dependency databases, or worse - breaking things. So if a package lacks a required option, you really have to build your own package with the option included in order to keep things orderly. That's a lot more work than just installing from source.

    I'm not a Linux user anymore (for several reasons) but if I were to go back to Linux, I would use Gentoo, specifically for its Portage system.

    So, in my opinion, building from source may be a little more time and CPU consuming, but it is the better option for a controlled, tailored environment.
  • by polyp2000 ( 444682 ) on Tuesday March 30, 2004 @02:05PM (#8716292) Homepage Journal
    I have a dual-processor Athlon MP machine; I use this machine for my desktop at home every day. I use Gentoo because I want the latest and greatest bleeding edge and I want it to run as fast as possible on my setup.

    Some distros (mentioning no names) still build for the 386, and I've come across distros that only utilise one processor at the kernel level, let alone build individual packages for multiprocessor support. I prefer to know that I'm using my hardware to the best of its ability.

    However, if I'm installing a server, I'd probably choose a tried-and-tested distro like Red Hat for a colocated machine which I may never even get to see with my own eyes, the reason being that a colo shop will have in-house support staff able to fix any run-of-the-mill problems that occur.

    For an in-house server I might choose Mandrake or SuSE (more likely SuSE) and maintain packages that way (the last thing you want is to spend several days at work getting a Gentoo box up and running!); however, stuff like Apache and PHP I often like to compile fresh and configure how I need them. Plus it makes patching that little bit easier if you have a specific setup.

    Generally speaking, for anything mission-critical I'd try to use packages that have had a fair crack at being tested well after build.

    For anything personal, where you might not care too much about uber-stability, like a desktop or research/hacking machine, it's generally fun to hack about with stuff and compile your own from source.

  • by multipartmixed ( 163409 ) * on Tuesday March 30, 2004 @02:08PM (#8716335) Homepage
    ..of time.

    It's like the programmer who spends six hours hand-optimizing the inside of a loop that gets called once a day and already executes in 10ms... but ignores the fact that the program takes 20 times longer to run than it should because of an inefficient algorithm. This programmer doesn't know *why* his program is slow, he's guessing, and he will almost always guess badly. This is why profiling was invented.

    Look at it this way. Installing from the packages you get the following benefits:
    - You save time compiling (multiply this by the number of patches you have to add over the box's life time)
    - You save time tracking down dependencies
    - You have a standard platform you can re-deploy at will
    - You have something that another administrator can work on without asking where you shoved shit.
    - You have a package database you can query for version information, dependencies, etc.
    - You have an easily available source of "known good" binaries if you have a suspected intrusion problem.
    - Depending on the package system you use, you might be able to stay on top of security vulnerabilities with very little (or no) work.

    Now, installing from source, you get the following benefits:
    - You can pick where the files go (whoopie)
    - You tune the performance for your platform
    - You can select specific features
    - You can de-select specific features to save disk space

    99% of the time, the only one which gains you a lot is being able to select specific features which are turned off in the standard package. If you need those options, you build it from source. If you're doing ten machines, though, you build it from source on *one* machine, package it up, burn it, and install it from YOUR package on all ten machines.

    Saving a few CPU cycles is never worth saving a man-hour. You can use the man-hour more productively on the macro-optimization level. Similarly, you can take the dollars that you would pay the man and buy a new CPU with them.

    The same argument goes for saving a kilobyte of disk space. If I found out that any of my guys spent *any* significant time trying to cut less than a gigabyte out of our application footprint, I would give him a footprint of my own, right in the middle of his colon. Disk is cheap. People are not.

    If you have an application which is CPU-bound and running too slow, find out why (profile the system or binary), and build from sources only what you need to make your application conform to the target specification. Or, if that will take too long, just buy more CPU.

    Long story short -- tuning of ANY kind should not be done at the micro-level across the board; that's just a waste of time. Tuning should be done by profiling the system as a whole, identifying the constraints, and relieving them. If that requires micro-tuning of a few things, that's fine... but squeezing every last little bit of performance out of absolutely everything is either impossible or incredibly time-prohibitive. And, of course, if you were going to spend that kind of time, you could either buy new hardware with the money (remember Moore's law), OR you could examine the system more closely at the macro level and come up with a better way to do things.
  • Context (Score:3, Insightful)

    by Second_Derivative ( 257815 ) on Tuesday March 30, 2004 @02:10PM (#8716384)
    For servers, go with something like Debian: good clean integrated system with timely and automatic security updates. Not bleeding edge, but if it's at all a serious server you really don't want it to be.

    Desktops, Ports based system all the way. Why? Because with something like Gentoo, it might take several days to compile but you can be assured you're not going to dependency hell anytime soon when you want to try the latest and greatest. Headers and such are installed by default, so you can usually compile something by hand and it will Just Work whereas if you're using three different unofficial package streams and you need to do some upgrade of a simple library somewhere which has an anal retentive versioning and dependency specification, attempting to apt-get that new version will cause your entire house of cards to come crashing down. I lived with Debian on a desktop like that for god knows how many years until I decided "No more". Yeah I have to wait a while with Gentoo but at least I only have to do it once.
  • by gosand ( 234100 ) on Tuesday March 30, 2004 @02:11PM (#8716391)
    Obviously, the absolute best way to maintain your systems is to install Windows Update. But since Linux is stuck in the dark ages, you'll have to manage your systems like the cavemen did.

    Now if you'll excuse me, I have to go reboot 100 systems.

  • by DrWhizBang ( 5333 ) on Tuesday March 30, 2004 @02:12PM (#8716407) Homepage Journal
    which is better, vi or emacs? ;-)
  • by thinkninja ( 606538 ) on Tuesday March 30, 2004 @02:15PM (#8716445) Homepage Journal
    Duck and cover, incoming Gentoo zealots :P

    Personally, I install from packages (apt) wherever possible. If something is unpackaged and looks new and shiny, then I'll install from source. I really can't imagine managing a large number of applications without a package manager, even if it's something you've written yourself.

    If installing everything from source is your thing, you're probably already using Gentoo with its package management. So the question is moot.
  • apt-build (Score:3, Informative)

    by _aa_ ( 63092 ) <j.uaau@ws> on Tuesday March 30, 2004 @02:20PM (#8716536) Homepage Journal
    apt-build [debian.org] provides automatic source based package installation in debian [debian.org]. Not every package offers a source package, however. This is something I'd like to see expanded in debian.

    Also note the aptly named, though apparently dead project www.debtoo.org (google cache [64.233.167.104]) which is based on apt-build. Don't let this stop you though, 'apt-get install apt-build' and give it a try.
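
    Usage is about as simple as you'd hope (package name hypothetical):

        apt-get install apt-build    # asks for your architecture and optimization level
        apt-build install foo        # fetch the source, compile with your flags, install the .deb
        apt-build upgrade            # rebuild upgradable packages from source
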
  • by Kourino ( 206616 ) on Tuesday March 30, 2004 @02:22PM (#8716560) Homepage
    Optimization? Control?

    Man, what is this, Gentoo?

    Any sane distributor these days builds binary packages with reasonable optimizations that won't break across architecture submodels, and occasionally releases binaries targeting submodels (e.g. PentiumPro-specific packages). On many machines, for many workloads, however, the model-specific optimizations just aren't that helpful. Obvious exceptions are floating-point math on most platforms (especially x86, where x87 math code is a dog and should be replaced with SSE code if possible) and - I'm told - really slow hardware. (I'll be able to test that once I get these Indys running GNU/Linux.) In my experience, Debian hasn't really felt any slower than my LFS systems for personal use.

    So, I'll say this: if you have enough time to build everything you're using, do some careful speed comparisons between your self-built packages and the vendor's binaries. If there's really a significant speed increase, and you need that increase, source is the only way to go for the packages that need the speed increase. Otherwise, it's probably not worth your time.

    Unless whatever you're doing is extremely security critical, you can probably deal with the fact that server app foo has features bar and baz installed that you won't use. If you can't, you're probably auditing the source of everything you use anyway, and that doesn't sound like the case, so "control" probably isn't a real issue here either. Control can be found in config files as well as in the configure script.

    People say, "but package dependencies suck!" Well, yes, rpm (the program) isn't built to deal with dependencies that gracefully. If it annoys you that much, go install apt-rpm or something, or even Debian (gods forbid). Package management isn't rocket science.
  • by bplipschitz ( 265300 ) on Tuesday March 30, 2004 @02:22PM (#8716562)
    this is coming entirely from a *BSD perspective [especially FreeBSD], but the older and slower your hardware, the more you might depend upon packages, just because they take less time to install.

    That said, I routinely build stuff from source on a Pentium Pro 200 MHz dual CPU machine at work. It's not our main server, so the performance hit is never noticed.

    Portupgrade is an absolute must on this machine, as we have all kinds of software running on it. Without portupgrade, I'm sure it would be a nightmare.

    In the end, it's whatever works best in your situation, and to have this as 'news' on Slashdot seems really freakin' ridiculous.
  • my $0.02... (Score:3, Informative)

    by Raxxon ( 6291 ) on Tuesday March 30, 2004 @02:23PM (#8716582)
    For quite a while I used RedHat and did enjoy the ease that package management gave to a system. For a workstation equivalent, I still agree with this solution in general. However, having run through Linux From Scratch (www.linuxfromscratch.org) I see that on a server-class machine there is a TON of unnecessary bloat. Why should it take a gig of space, or more, to host just a web server with MySQL and FTP access? With LFS I can build a specific-purpose system and get that footprint down to around 350 to 425MB, and that's including the kernel sources being left for recompile and a full compile environment. I've been told that some people can get the same functions stripped down to less than 200MB (this is all of course NOT counting your SQL databases).

    At this point there needs to be a big fork somewhere to divorce the Linux desktop from the Linux server. Linux will do both, but one should not cause issues for the other. If a desktop user wants to run an FTP server, they should be able to. If the server admin wants to have a mail client (pine) or an IRC client (BitchX) installed for accessing information, he should be able to. But these features should be implemented with that specifically in mind. Not installing half a million libs because *maybe* the server admin wants to install addon XYZ for pine and it needs this lib while pine itself doesn't...
  • I've been a UNIX sys-admin for about a decade.
    My advice is that for a workstation that is managed by an individual you can let the admin do whatever they want, but for any server that has to be stable and maintainable you want to stick with a well maintained package repository and try to avoid 3rd party packages and tarballs if possible.

    You have to understand that there is a software stack in most services, with the kernel and core libs (like glibc) and such at the bottom of the stack, and applications like Evolution at the top. In between you can have gdb and openssl and various perl modules (in AMAVIS for example), and you have sasl stuff which may be related to pam and openldap and cyrus or wu... etc.

    The thing is that even though all of those various pieces of the software stack may be linked against different libraries on the box, the maintainer of the library code may not have a QA group to coordinate regression testing and compatibility testing before the latest CVS commit is enacted to fix a bug referenced in a CERT alert.

    RedHat and Debian and SUSE and all the others have package repositories, the repository maintainers do an amazingly fantastic job of QA and testing to make sure that new patches don't break your software stack. As an individual you simply can't keep up with that.

    For example the Development team that takes care of OpenSSL doesn't backport their bug fixes and security patches to old versions of the code. They just maintain the latest release version and the current CVS version. If you have an old server running IMAPs and HTTPs and SSH and SMTP/TLS and such, and CERT announces a bug in openssl vX.Y, then the OpenSSL development team will certainly release a patch for the latest version which may be version Z!

    That might cause you to have to upgrade Apache or wu-IMAP or OpenSSH or Postfix, etc. Those things might then have divergent dependencies that would cause you to go and rebuild half a dozen other packages, and so on and so on. Also, do you remember all the magic flags you used for configure and make? Do you have the same environment variables set today that you did the last time you built Postfix? The possibilities for problems are endless. And if you do have a problem you are kind of on your own, since your system will be a unique box. Whereas if there is a problem with a standard RedHat or Debian package, then you can always go to the general newsgroups and chances are there are a dozen other "me too" posts with answers already.

    It is much easier to use apt or up2date.

    So, unless you have a very good reason for using a tarball on a production server that requires reliability and security and high availability, then you should stick with packages.

    If you want to build the packages from source, feel free! RedHat and Debian and SuSE make the SOURCE packages available so that you can dig in and read all about 'em. I'm sure the Debian team could use a new package maintainer; if you are addicted to compiling and testing things, check them out.
  • by tmoertel ( 38456 ) on Tuesday March 30, 2004 @02:26PM (#8716607) Homepage Journal
    Packages and package managers solve a real problem: Keeping track of software installations, their files, and their interdependencies is hard, hard work. By packaging software and using good, "higher-level", package managers (like yum [duke.edu] and apt-get) you can delegate most of this problem to the computer. That's a smart move.

    It's still a smart move if you're building from source. Just package your source. Then you can build the sources under the control of a package manager (like RPM), and install the resulting packages. You get the full benefits of build-from-scratch and the full benefits of using packages.

    This is exactly the approach I use. In fact, I'm a bit more strict about it: My policy is that I don't install any software that isn't packaged. If I need to install something that isn't packaged, I'll package it first. If I don't like the way a packager built an already existing package, I'll repackage it.

    The bottom line is that creating your own packages (or fixing packages you don't like) is much easier than maintaining a from-scratch, unpackaged installation. Or ten of them.
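
    As a rough sketch of how little this takes (names hypothetical):

        rpmbuild --rebuild foo-1.0-1.src.rpm    # binary RPMs from a source RPM
        rpmbuild -ta foo-1.0.tar.gz             # build straight from a tarball with an embedded spec file
        rpm -Uvh foo-1.0-1.i386.rpm             # install the result under RPM's control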

    To get you started, here are a couple of RPM-building references:

    Don't give up the benefits of source. Don't give up the benefits of packaging. Have them both.

  • by Dunkirk ( 238653 ) <david@@@davidkrider...com> on Tuesday March 30, 2004 @02:28PM (#8716623) Homepage
    I use SuSE (formerly RH), so I'm "into" using RPM. OTOH, I usually only like RPMs that have been built by the distro's creator. (Notable exception: PackMan RPMs for Xine.) Anything else I usually compile from source and stick in /usr/local. Checkinstall is what you need here. After configure and make, you ``checkinstall -R'', and it makes an RPM of whatever would be installed with ``make install''. That way you can take it back out very easily.
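
    For example (package name hypothetical):

        ./configure --prefix=/usr/local && make
        checkinstall -R    # -R rolls an RPM instead of doing a bare "make install"
        rpm -e foo         # and RPM can remove it cleanly later
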
  • The real issue (Score:5, Informative)

    by Starky ( 236203 ) on Tuesday March 30, 2004 @02:29PM (#8716632)
    The real issue is whether you feel the time savings you gain from installing packages outweigh the increased performance, and the increased knowledge of your computing environment, that you gain from building from source.


    You can have the best of both worlds with Gentoo [gentoo.org]. I began using it about a year ago, and I am sold.


    Building from source using Portage is almost as easy as installing a Red Hat package. The community is extremely proactive. (I have only had problems installing or updating a couple of times in the last year, and the problems were remedied within a day or two and the portage trees updated after I submitted a bug report.) And you don't give up variety. The number of ebuilds available in the Portage tree is simply astounding.


    I am even using it on my laptop [collinstarkweather.com] these days and am extremely pleased that it seems to work well as both a server and desktop distribution.


    Hope this helps :-)

  • by jellomizer ( 103300 ) on Tuesday March 30, 2004 @02:29PM (#8716639)
    If the professor has some sort of grant, he may prefer packages because they are quicker to set up and save time, so you can be more productive in other areas. If it is some sort of continuing income, you might as well encourage recompiling from source, because you get more out of it educationally.
  • by dmccarty ( 152630 ) on Tuesday March 30, 2004 @02:29PM (#8716640)
    I work with a professor performing research and managing more than ten Linux based servers.

    Is that, like, 11 Linux-based servers?

  • GCC optimizations (Score:5, Informative)

    by vlad_petric ( 94134 ) on Tuesday March 30, 2004 @02:31PM (#8716662) Homepage
    If you used Intel's C compiler, then yes, it would be worth it (for instance, icc does automatic loop vectorization, and different processors have different vector support); with gcc, however, the speedups you'd get are minimal, if not nonexistent.

    icc, btw, is free for non-commercial use on Linux.
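
    Roughly, the comparison is (icc flags are from the 8.x era; check your version's documentation):

        gcc -O2 -march=pentium4 prog.c -o prog   # per-CPU tuning, but no automatic vectorization
        icc -O3 -xN prog.c -o prog               # vectorizes loops for SSE2-capable Pentium 4s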

  • by MerlynEmrys67 ( 583469 ) on Tuesday March 30, 2004 @02:31PM (#8716673)
    I would choose a distribution based on whether you want source or binary packaging. Don't bother fighting your distribution; you'll just end up with the worst of both worlds.

    That said - for a work machine, I prefer binary packages. I just want the damned thing to work, work well, and not futz with it.

    For a hobby/play/research machine, I prefer source packages. I have found there are many compilers out there that will massively outperform GCC, especially when you turn on those crazy optimizations that most binary distributions won't use (plus optimizing for the EXACT processor I am running on, etc.)
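
    For example (CPU type illustrative):

        export CFLAGS="-O3 -march=athlon-xp -fomit-frame-pointer -funroll-loops"
        ./configure && make   # flags a stock i386 binary package will never ship with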

  • by strider( corinth ) ( 246023 ) on Tuesday March 30, 2004 @02:35PM (#8716729) Homepage
    My arguments on why to use a source-based distribution have been covered in other posts, so I won't repeat them here. I think Gentoo provides a solution that will satisfy both you and your professor: you can use a source-based, custom-built binary distribution.

    As you probably know, Gentoo is a source-based distribution, but it also allows binary packages. Many (such as Mozilla Firefox) are distributed by Gentoo as source and binary; you can choose to install either. The ability to build a binary package from a source .ebuild (the file that describes to the system where to find the source and how to build it) requires adding only a single flag to the package compile command, ebuild.

    Additionally, since (if I read you correctly) you're probably using similar hardware for each of your machines, it would be trivial to set up a compile box which would produce binary packages for your other boxen. Packages compiled for your architecture would be faster than most binary-only distributions (many are still compiled for the i386 architecture), and writing a new ebuild is trivial compared to writing a new spec file. (Trust me; I spent a quarter writing a paper on the topic while I was in school, not to mention having had to do it myself in the Real World.)

    Finally, Gentoo integrates and tests its packages. Ebuilds come with Gentoo-specific patches, so you don't have to spend the time to make each source package work with the rest. This is probably one reason why your professor likes binary distributions: they all work together, and enough people rely on them that if something breaks, it gets fixed. A package-based Gentoo distribution would allow you to leverage that, while keeping your machines unified in their versioning (as much as you want them to be, at least) and also provide all of the benefits of a source-based distribution.
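
    In practice that looks something like this (the apache atom and its category are circa-2004 guesses):

        emerge --buildpkg net-www/apache   # build box: compile and keep a binary .tbz2
        emerge --usepkg net-www/apache     # other boxen: install the prebuilt package,
                                           # assuming they share PKGDIR (e.g. over NFS)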
  • by plcurechax ( 247883 ) on Tuesday March 30, 2004 @02:41PM (#8716816) Homepage
    How you feel about the ease and simplicity of installing and maintaining packaged programs versus the optimization and control that can be achieved by building from source? What are your experiences?

    Humans do not scale well: they have very low bandwidth for sharing information and high latency (i.e., you can't always get hold of them). Humans are also expensive, wander off into different jobs, graduate or drop out of college, etc. So I tend to prefer reducing the human cost of system administration as my default position.

    So my gut feeling is that unless there is a major time or dollar saving in the optimization from building from source (e.g., it avoids buying 10+ new CPUs for the systems, or computation runs take a day less), go with reducing administration complexity by using a package management system, so that you can concentrate on your actual goals (research, profit, or whatever).

  • #portupgrade -a

    or, if you prefer packages

    #portupgrade -aP
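
    (That's FreeBSD's portupgrade: -a rebuilds every installed port from source, while adding -P prefers prebuilt packages where available.)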
  • Different stages (Score:5, Insightful)

    by matt-fu ( 96262 ) on Tuesday March 30, 2004 @02:58PM (#8717028)
    As far as I can tell, there are four stages of sysadmin as it relates to installed software:

    1) I am a newbie and have to use packages for *.
    2) I know my way around. I like the level of control I get with compiling/know how to code/read far too much Slashdot. I compile by default.
    3) I manage more than three boxes in my basement now. Having the ability to back out of system changes without a full OS reinstall is a necessity. I build my own packages from source that I've compiled.
    4) I manage more than just three boxes in a department now. Now I have to deal with politics, ordering hardware, and the freakin' network, and I generally have no time left for sysadmin work. On top of all that, I now have a family, so spending two or three extra hours per day on my Unix hobby is no longer feasible. Precompiled packages work just fine.

  • by Skapare ( 16644 ) on Tuesday March 30, 2004 @03:06PM (#8717122) Homepage

    I build the mission-critical programs from source code, and just let the rest be installed as binary packages. I build from source even when I don't need to, just to be sure I won't have unexpected issues should I ever need to actually modify the source and rebuild. I really don't have very many local modifications, but I'm prepared just in case.

    Additionally, I do this all on one master machine (with a backup of it kept live on another machine), build binary packages of my own from my source builds, and install those packages on the actual servers. That way I have even more consistency, though at the cost of ultimate optimization. But I think it is better to be able to quickly reinstall a machine, as well as use checksum verification to confirm there are no trojans.

    I use Slackware, but this could be done with most systems, including FreeBSD, Linux (most distributions, including Debian and the RPM based ones), NetBSD, OpenBSD, and even Solaris.
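
    On Slackware, that workflow is only a few commands (names illustrative; assumes the Makefile honors DESTDIR):

        ./configure --prefix=/usr && make
        make install DESTDIR=/tmp/pkg
        cd /tmp/pkg && makepkg -l y -c n /tmp/myapp-1.0-i486-1my.tgz
        installpkg myapp-1.0-i486-1my.tgz   # on each server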

  • The point? (Score:5, Insightful)

    by Anthony Boyd ( 242971 ) on Tuesday March 30, 2004 @03:21PM (#8717320) Homepage

    Wow. There sure are a lot of posts about which is better, but I don't see any comments that deal with the underlying problem. And that is this: don't get into a pissing match with your professor. Seriously, what are you hoping to accomplish here?

    If you were thinking that you'd get tons of pro-compiling comments, and then put that in front of the professor, stop right there. Coming to Slashdot for validation of your side of the argument is about as helpful as those wives who write to Dear Abby about their husbands. Because no husband on Earth is going to appreciate getting chastised by Dear Abby, and if Abby sides with him, he's going to gloat. It's lose-lose for the wife, just like it's lose-lose for you if you try to use Slashdot as leverage. Screw with the computers that the professor relies on, and he'll find a way to "thank" you for it. Don't sabotage yourself.

  • by Angst Badger ( 8636 ) on Tuesday March 30, 2004 @03:25PM (#8717400)
    I prefer to install everything from packages when I can. For stuff that I have to upgrade frequently -- usually server processes that need security patches -- I do it from source, partly because I prefer not to wait for a package to become available, but mostly because it saves me from the tangle of dependencies that come with packages. (The difference between RPM hell and DLL hell, as far as I'm concerned, is only that you don't have to pay for the privilege of RPM hell.)

    In general, I haven't found that there is any real optimization benefit in compiling from source in most cases -- the kernel itself and Apache being the primary exceptions. I'm sure it's there, but it's small enough to be unnoticeable in most cases, and therefore not worth my hourly wage to futz with when I could be doing something that actually generates revenues.

    Mind you, this is at work. At home, I tend to prefer compilation, but that's just because I like screwing around with the source.
  • Build vs use (Score:5, Informative)

    by po8 ( 187055 ) on Tuesday March 30, 2004 @04:46PM (#8718341)

    I've been building from source since the late 80s. What has happened is, I've gotten old, and tired of the same ol' repetition and screwups. These days, I always try the Deb package first. 95 times out of 100, that works fine. Even if it doesn't, the infrastructure to build is typically installable as Deb packages.

    It's not even the compile time that's so significant. It's the pain of figuring out somebody's config/build system, and the even greater pain of configuring the thing once it's installed. Deb packages make these problems mostly go away.

    Go ahead and build from source if you like. Someday you'll get old too.
