Slashdot: News for Nerds



Cross-Distro Remote Package Administration?

kdawson posted more than 5 years ago | from the i-patch dept.

Operating Systems 209

tobiasly writes "I administer several Ubuntu desktops and numerous CentOS servers. One of the biggest headaches is keeping them up-to-date with each distro's latest bugfix and security patches. I currently have to log in to each system, run the appropriate apt-get or yum command to list available updates, determine which ones I need, then run the appropriate install commands. I'd love to have a distro-independent equivalent of the Red Hat Network where I could do all of this remotely using a web-based interface. PackageKit seems to have solved some of the issues regarding cross-distro package maintenance, but their FAQ explicitly states that remote administration is not a goal of their project. Has anyone put together such a system?"



Remote admin of a UNIX box? (1, Interesting)

Nursie (632944) | more than 5 years ago | (#27727563)

No, nobody ever tried that before.

Hmmm... let's see. SSH ring any bells? Or are you actually going up and sitting at the box to do these updates?

Re:Remote admin of a UNIX box? (3, Insightful)

backwardMechanic (959818) | more than 5 years ago | (#27727583)

Maybe that works for your home network, but SSH'ing to 25 or (maybe a lot) more different boxes to repeat the same task is a bit tedious. Hey, doesn't this sound like the kind of automated task a computer might be good at?

Re:Remote admin of a UNIX box? (5, Insightful)

Nursie (632944) | more than 5 years ago | (#27727649)

Uh, right. Like putting ssh commands into a script?

ssh user@host aptitude update

Set up key based login and you don't even have to type passwords. By the sounds of it he needs to pay some attention to each individual machine anyway, as he has multiple distros and wants to determine which patches he needs for each box.
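The scripted-ssh idea above might be sketched like this. Hostnames are placeholders, and the command is echoed rather than executed so the sketch is inert; drop the echo to go live.

```shell
#!/bin/sh
# One key-authenticated command fanned out over a host list.
HOSTS="web1 web2 db1"            # placeholder hostnames
CMD="aptitude update"            # or "yum check-update" on the CentOS boxes

# Build (and, for safety here, just print) the ssh invocation for one host.
update_host() {
    echo ssh "root@$1" "$CMD"
}

for h in $HOSTS; do
    update_host "$h"
done
```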

Re:Remote admin of a UNIX box? (1)

supernova_hq (1014429) | more than 5 years ago | (#27727859)

Actually, I remember reading a year or so ago about a program that would allow you to run a specified command via ssh on a list of machines. You could do this with a shell-script (pass arguments), but I think the program also did it all in parallel and showed some output as well.

Damned if I can find it...

Re:Remote admin of a UNIX box? (4, Informative)

supernova_hq (1014429) | more than 5 years ago | (#27727883)

Sorry to reply to my own post, but circlingthesun actually posted the name of it below!


Re:Remote admin of a UNIX box? (0, Redundant)

Nursie (632944) | more than 5 years ago | (#27727895)

That does actually sound really cool and could be a huge timesaver for admins with a large suite of similar machines.

Re:Remote admin of a UNIX box? (4, Informative)

walt-sjc (145127) | more than 5 years ago | (#27727955)

It's called "dssh". Google is your "search" friend (we will ignore the evil side of Google at the moment... :-)

Re:Remote admin of a UNIX box? (5, Interesting)

BruceCage (882117) | more than 5 years ago | (#27727893)

Set up key based login and you don't even have to type passwords.

Since you basically need root access to do updates, this definitely poses a security hazard: when your client is compromised, there is direct access to the server. Then again, an attacker could always use a keylogger to capture the password anyway.

If you do attempt this, I'd set up a different user account specifically for the update process, limit its rights accordingly, and then restrict the commands that can be executed (you can do this per key).
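The per-key restriction mentioned above is done with a forced command in authorized_keys; a sketch (the account name, script path, and key material are placeholders):

```shell
# ~updater/.ssh/authorized_keys on each managed box (sketch).
# Whatever the client asks to run, sshd will only ever execute the forced
# command for this key, and the no-* options shut off tunnelling.
command="/usr/local/sbin/run-updates",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA... updater@adminhost
```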

There may actually be better ways but I'm not a very experienced sysadmin. Most experience I have is from managing a single web server and my local desktop obviously. Be sure to correct me (in a friendly manner) if I'm wrong.

Then again, if you do this from the same machine your normal account is on, you'll still have the same issues if that client is compromised. Probably best to limit every single account to just what is specifically needed, and to set up proper host-based intrusion detection (OSSEC?) to be notified when something goes wrong. This stuff is hard...

Re:Remote admin of a UNIX box? (1)

walt-sjc (145127) | more than 5 years ago | (#27728099)

You create a local script that runs on each server, "pulls" updates and installs them, and logs the results (alerting if something failed). If you need to do out-of-schedule updates, you can kick them off manually using a limited-privilege account that has explicit (restricted) sudo ability.

Local packages are easier if they are all the same style of package (I prefer dpkgs; apt is available for CentOS too). Running a mixed-distro system still means you have to build packages multiple times, which is a PITA. If you don't mind the size hit, however, you can link statically so you can run the same binaries on all systems...

Re:Remote admin of a UNIX box? (2, Informative)

dna_(c)(tm)(r) (618003) | more than 5 years ago | (#27728279)

Never use ssh+password. Always use ssh+PKI. I think you missed "key" and focused on "[no] password".

From the Ubuntu wiki SSHHowto:

If your SSH server is visible over the Internet, you should use public key authentication instead of passwords if at all possible.
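Concretely, that advice boils down to a couple of sshd_config directives; a sketch (adjust to taste):

```shell
# /etc/ssh/sshd_config (relevant directives)
PubkeyAuthentication yes
PasswordAuthentication no          # keys only
PermitRootLogin without-password   # or 'no' plus sudo, per taste
```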

Aptitude is THE DEVIL! (0, Flamebait)

Anonymous Coward | more than 5 years ago | (#27727921)

I used aptitude for a few months after reading that it installed recommended packages in addition to true dependencies. Well, bollocks to that. There are more times when I don't want "recommended" packages (installing exim sucks when postfix is desired, for example).

As to the script to maintain servers... below is a trivial example for Debian/Ubuntu-based systems. Set up ssh keys to make life easier from an internal, protected system. If you have more than 6 systems, you probably want to mirror the repositories locally. Also, set your TERM environment variable to make interactive installs/updates nicer to use.

===== Weekly Updates ======

SRVS="s1 s2 s3 s4 s5 s6 s7 s8 s9"
for D in $SRVS ; do

    ssh root@$D "apt-get update; apt-get upgrade;"

done


what's wrong with a cron job? (0)

Anonymous Coward | more than 5 years ago | (#27727567)

yum -y update

Re:what's wrong with a cron job? (1)

snugge (229110) | more than 5 years ago | (#27727587)

maybe this requirement?

"determine which ones I need"...

Exactly... (1)

bogaboga (793279) | more than 5 years ago | (#27728053)

You hit the nail on the head. I have a cron job to do exactly that, and for 3 days after an "auto update", my Mythbuntu box has been shutting down on its own after running for about 54 hours. I am now wondering whether an update I should have avoided is the culprit.

By the way, can someone point me to a resource that could help me determine what's going on on my box? Thanks.

Re:Exactly... (1)

amias (105819) | more than 5 years ago | (#27728093)

Your log files, maybe...

Tools exist (5, Informative)

PeterBrett (780946) | more than 5 years ago | (#27727577)

  1. Create a local package repository for each distro.
  2. Set apt/yum to point at only the local repository.
  3. Create a cron job on each box to automatically update daily.
  4. When you want to push a package update out to all boxes, copy it from the public repository to the local one.
  5. Profit!
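Step 4 on the yum side might look like this (a sketch; the repo path is a placeholder, and createrepo is the standard CentOS repo-metadata tool):

```shell
#!/bin/sh
# Stage a vetted package into the local repository and rebuild its metadata
# so clients pick it up on their next nightly run.
REPO=${REPO:-/var/www/repo/centos/5/updates}

publish_rpm() {
    cp "$1" "$REPO/" && createrepo --update "$REPO"
}

# usage, after reviewing the package:
#   publish_rpm httpd-2.2.3-43.el5.x86_64.rpm
```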

Re:Tools exist (1)

drsmithy (35869) | more than 5 years ago | (#27727619)

Exactly. This is basically your DIY RH Satellite server. It's the model we use, although we don't have the Ubuntu machines to deal with.

Re:Tools exist (4, Informative)

Jurily (900488) | more than 5 years ago | (#27727773)

When you want to push a package update out to all boxes, copy it from the public repository to the local one.

Assuming of course all boxes have the same version of the OS, the same packages installed, etc.

I suggest tentakel, and the OP could have found it in 2 minutes with Google. I did; the first hit mentions it.

Re:Tools exist (4, Interesting)

value_added (719364) | more than 5 years ago | (#27727929)

Assuming of course all boxes have the same version of the OS, the same packages installed, etc.

And segregating things on the system that hosts the public repository is impossible?

I don't think any of this is exactly rocket science. On my home LAN where I use FreeBSD, for example, I have a motley collection of hardware ranging from Soekris boxes to Opterons. Everything gets built on a central build server and distributed automagically from there using a setup similar to what's suggested by the OP. Not a single box has the same collection of userland software installed, while certain boxes do get their own custom world/kernel. None of this really requires more effort or involvement on my part than some careful thought beforehand.

One of the nice advantages of a centralised setup is that it accommodates a clean way of testing things beforehand. Rolling out the latest but broken version of "foo" to multiple systems is something to be avoided.

Re:Tools exist (2, Insightful)

xouumalperxe (815707) | more than 5 years ago | (#27727997)

Assuming of course all boxes have the same version of the OS, the same packages installed, etc.

Regarding having the same packages installed, you "only" need to make sure your local repos have all the packages that are used across your install base. The machines will then pull only their own updates, with no fuss. Regarding the heterogeneity... tough cookie. Either you have something more or less homogeneous and you can automate the process, or you're stuck doing things by hand. Especially once you enter the realm of "review each available update by hand and determine whether it's relevant", as the OP asked for. You can't have that and useful automation with 10 different distro/version setups.

Re:Tools exist (1)

RiotingPacifist (1228016) | more than 5 years ago | (#27728195)

Assuming of course all boxes have the same version of the OS, the same packages installed, etc.

How so? I'm fairly sure most repositories deal with multiple versions, and just because a repo has a package doesn't mean it's installed on every computer that connects to it. And a single box can serve as both a Debian and an RPM repo.

Re:Tools exist (0)

Anonymous Coward | more than 5 years ago | (#27728329)


    Let's intentionally make our lives more difficult: pick 3 versions of the same CentOS with varying versions of installed packages, CentOS 4.x, 5.0 & 5.2. If my Windows folks did that I'd hit them. One OS, with common installed parts and pieces. I have 3000 desktops and 200 Citrix servers, and by gosh they had better be close (100%) in configuration.

Are you doing this on purpose, or did you inherit different OS boxes?
Hint: managing/configuring 10 boxes that are the same is roughly akin to managing 1.
But I'll let you figure that out.

Re:Tools exist (2, Interesting)

JohnnyKlunk (568221) | more than 5 years ago | (#27727779)

Totally agree. I know this is /. and we hate Windows, but it's similar to the way WSUS works, and since the introduction of WSUS I haven't given this question a second thought. You can set up different boxes to get updates on different schedules, so the pilot boxes always get them first, then production boxes over a few days in a rolling pattern.

Re:Tools exist: we do it this way. (5, Interesting)

nick_urbanik (534101) | more than 5 years ago | (#27727801)

I work at a large ISP, and this is the way we manage updates for the various Linux platforms we use. Quite simple, really. You can build tools that help: diff between the downloaded updates and what you have in your own repository, and mail you the ones that you are not using. I find the security pages useful in keeping track of what security updates matter to us.

Re:Tools exist: we do it this way. (1)

Dunkirk (238653) | more than 5 years ago | (#27727917)

How about a sample? I use Gentoo. My servers do this every night:

/usr/bin/emerge --quiet --color n --sync && update-eix --quiet && glsa-check -l affected

I could just as well apply every fix automatically, but I like to see it before it goes in.

Re:Tools exist: we do it this way. (3, Insightful)

Bert64 (520050) | more than 5 years ago | (#27727963)

You could also use Nagios and check_apt/check_yum to alert you of necessary security updates, put a script for installing updates on every box (a different script for CentOS/Ubuntu, but with the same syntax), create a user who is added to sudoers for only that script, and create an ssh key for authentication...
Then you can feed the list of hosts that need updating into a script which will ssh to each one in sequence and execute the update script, followed if necessary by a reboot...
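That driver might be sketched as below. The 'updater' account and script path are assumed names, not a real convention, and the command is echoed so the sketch is inert; drop the echo to go live.

```shell
#!/bin/sh
# Feed this function the hosts the monitoring check flagged; it runs the
# per-box update script over ssh, then reboots if the box requests it.
do_host() {
    echo ssh "updater@$1" \
        "sudo /usr/local/sbin/run-updates && { [ -f /var/run/reboot-required ] && sudo reboot || true; }"
}

# usage: while read -r h; do do_host "$h"; done < hosts-needing-updates
```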

Re:Tools exist (2, Insightful)

supernova_hq (1014429) | more than 5 years ago | (#27727875)

A cron job? Just set the update-manager to run every morning and automatically download AND install all updates.

You sub 7-digit uid guys always do everything the hard way!

Re:Tools exist (3, Informative)

comcn (194756) | more than 5 years ago | (#27728085)

As another "sub 7-digit guy" - there is a reason for this... There is no way I'm going to let over 60 servers automatically install patches without me checking them first! Download, yes. Install, no.

At work we use cfengine to manage the servers, with a home-built script that allows servers to install packages (of a specified version). Package is checked and OK? Add it to the bottom of the text file, GPG sign it, and push it into the repository. cfengine takes care of the rest (our cf system is slightly non-standard, so everything has to be signed and go through subversion to actually work).

Re:Tools exist (1)

lamapper (1343009) | more than 5 years ago | (#27728237)

As another "sub 7-digit guy" - there is a reason for this... There is no way I'm going to let over 60 servers automatically install patches without me checking them first! Download, yes. Install, no

Great post and my sentiments exactly. Download yes, but install only after I have performed testing on at least one box and really used that box. The last thing I want is to load up junk on anyone else's box or a production server.

As for those who mentioned concern about someone hacking in and getting access to your desktops or servers: if your hardware firewall/router is doing its job and you are actively monitoring both outgoing and incoming packets for suspicious activity, you know whether the systems behind your firewall/router are secure or not.

Having Linux (probably same with Macs) instead of MS lets me sleep much better. But I still monitor and test before updating additional machines. Just the smart, safe way.

Re:Tools exist (1)

supernova_hq (1014429) | more than 5 years ago | (#27728253)

You have 60 servers and you don't have a controlled local repository?? I feel sorry for the shared repos that have to deal with all you guys updating your 60+ servers all at the same time...

Re:Tools exist (1)

IainCartwright (733397) | more than 5 years ago | (#27728095)

You sub 7-digit uid guys always do everything the hard way!

Like say, reading the summary before opening our yaps?

Re:Tools exist (2, Informative)

Antique Geekmeister (740220) | more than 5 years ago | (#27728103)

For RedHat? No. 'yum update' is fairly resource intensive. And changing applications in the middle of system operations is _BAD, BAD, BAD_ practice. I do _not_ want you silently updating MySQL or HTTPD in the midst of production services, because the update process may introduce an incompatibility with the existing configurations, especially if some fool has been doing things like installing CPAN or PHP modules without using RPM packages, or has manually stripped out and replaced Apache without RPM management.

And heaven forbid that you have a kernel with local modifications and special patches for special hardware whose version number is exceeded by the next Red Hat kernel, and it replaces yours, and the hardware fails to boot at the next reboot. That is very painful indeed to cope with if you haven't set up remote boot management or spent extra effort to lock down your packages.
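Locking down a locally patched kernel can be as simple as one yum directive; a sketch (the versionlock plugin is the finer-grained alternative):

```shell
# /etc/yum.conf on a box with a hand-patched kernel: keep yum's hands off
# kernel packages entirely, so a newer stock kernel never replaces yours.
exclude=kernel*
```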

We oldtimers, low uid or not, have been burned enough to know not to lick the frozen lightpole.

Re:Tools exist (1)

LinuxAndLube (1526389) | more than 5 years ago | (#27727943)

That seems too easy!

Re:Tools exist (1)

Antique Geekmeister (740220) | more than 5 years ago | (#27728063)

You missed the RedHat model. 5. Insert Oracle server for no good reason. 6. Profit!

Re:Tools exist (0)

Anonymous Coward | more than 5 years ago | (#27728333)

They are in the process of removing the dependency on Oracle. See the Spacewalk project.

Re:Tools exist (1)

SCHecklerX (229973) | more than 5 years ago | (#27728473)

What PeterBrett said. Where I work, we can't do it the easy way because of security requirements, but the methodology still works. We just have to rsync the various repos each day, and the cron job just points to the local repo on each box instead of central ones hanging out somewhere.

Webmin (5, Interesting)

trendzetter (777091) | more than 5 years ago | (#27727603)

I recommend Webmin, which is 100% FOSS. I have found it reliable, flexible and feature-rich.

Re:Webmin (3, Interesting)

ParanoidJanitor (959839) | more than 5 years ago | (#27728043)

I have to second this. Webmin has everything you ask for and then some. If you have an update script on each machine, you could easily update all of your machines at once with the cluster management tools. I know it works well with APT (having used it myself), but I can't speak for any of the other package managers. In the worst case, it's still easy to push an update command to the non-apt machines through the Webmin cluster tools.

Re:Webmin (1)

gbjbaanb (229885) | more than 5 years ago | (#27728405)

it works well with YUM too - in fact, Webmin is one of the best admin things around. I think every project should be mandated to create a webmin administrative module before being allowed into the wild :)

clusterssh (5, Interesting)

circlingthesun (1327623) | more than 5 years ago | (#27727613)

It allows you to ssh into multiple machines and execute the same command on all of them from one terminal window. So if you set up a shell script that detects a host's distro and then executes the relevant update command, you should be sorted.
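The distro-detecting wrapper might look like this. It only prints the chosen command, so the sketch is safe to push out and test; have the real script eval the result.

```shell
#!/bin/sh
# Pick the right update command for whatever distro the host runs.
pick_update_cmd() {
    if command -v apt-get >/dev/null 2>&1; then
        echo "apt-get -qq update && apt-get -qq -y upgrade"
    elif command -v yum >/dev/null 2>&1; then
        echo "yum -q -y update"
    else
        echo "no known package manager" >&2
        return 1
    fi
}

# usage on each box (e.g. pushed out via clusterssh):
#   eval "$(pick_update_cmd)"
```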

You don't want it (4, Interesting)

mcrbids (148650) | more than 5 years ago | (#27727617)

I admin several busy CentOS servers for my company. You probably don't want a fully web-based application:

1) What happens when some RPM goes awry and borks your server(s)? Yes, it's pretty rare, but it DOES happen. In my case, I WANT to do them one by one in ascending order of importance, so that if anything is borked, it's most likely to be my least important systems!

2) How secure is it? You are effectively granting root privs to a website - not always a good idea (read: rarely, if ever).

Me? I have a web doohickey to let me know when updates are available. A cron job runs yum nightly and a pattern match identifies whether or not updates are needed, to show on my homepage. So it doesn't DO the update, but it makes it easy to see what has been done.
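A sketch of that nightly check: 'yum check-update' exits 100 when updates are pending and 0 when the box is current, so no output parsing is strictly needed. The paths are placeholders for wherever the homepage doohickey reads its status from.

```shell
#!/bin/sh
# Write the pending-update list to a file and print a one-word status.
report_updates() {
    yum -q check-update > "$1" 2>/dev/null
    case $? in
        100) echo "updates pending" ;;   # yum's "updates available" exit code
        0)   echo "up to date" ;;
        *)   echo "check failed" ;;
    esac
}

# usage from cron:
#   report_updates /var/www/html/pending.txt > /var/www/html/status.txt
```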

Re:You don't want it (5, Informative)

galorin (837773) | more than 5 years ago | (#27727707)

Depending on how uniform your servers are, keep one version of CentOS and one version of Ubuntu running in a VM, and have these notify you when updates are available. When updates are available, test against these VMs, and do the local repository thing suggested by another person here. Do one system at a time to make sure something doesn't kill everything at once.

Web-based apps with admin privs are fine as long as they're only accessible via the intranet, strongly passworded, and no one else knows they're there. If you need to do it remotely, VPN in to the site and SSH into each box. You're an Administrator, start administratorizing. Some things just shouldn't be automated.

Re:You don't want it (1)

franki.macha (1444319) | more than 5 years ago | (#27728083)

You're an Administrator, start administratorizing


GWN had something similar a while ago (1)

Letharion (1052990) | more than 5 years ago | (#27727623)

Assuming that you want to do the remote thing with SSH as suggested above (and not actually sit down at each desk), you might find the "Tips and Tricks" section interesting.

Can You Script? (1)

AndGodSed (968378) | more than 5 years ago | (#27727625)

If you can script, it should be fairly easy. Here is how I would do it (we run mostly Gentoo servers and a mixture of Windows, Ubuntu (and Ubuntu-based) and RPM distros, but the guys using Linux customise so heavily, and are tech-savvy enough, that they keep themselves up to date).

Set up sshd on every desktop, with key authorization (we do this with the Gentoo servers).

With a script and a cron job you should be able to push them to run updates regularly. Or you can just use the normal update tools with a local repo on a server on the LAN to keep the packages updated.

Canonical also has a tool to manage Ubuntu boxen over a network... I cannot remember its name though.

Any slashdotters happen to remember the name of the utility?

Re:Can You Script? (4, Informative)

dns_server (696283) | more than 5 years ago | (#27727659)

The corporate product is Landscape.

Re:Can You Script? (2, Insightful)

AlterRNow (1215236) | more than 5 years ago | (#27727781)

I have a question about Landscape.

Is it possible to run your own server?
If not, isn't it just another piece of vendor lock-in?

I'm interested in using it but I don't want to depend on Canonical. For example, what if my internet connection goes down? I'd lose the ability to use Landscape at all, right?

Re:Can You Script? (1)

Bert64 (520050) | more than 5 years ago | (#27727969)

If your internet connection goes down, where will you get updates from?

Re:Can You Script? (1)

AlterRNow (1215236) | more than 5 years ago | (#27727999)

The mirror on my LAN that finished updating 5 minutes before the connection dropped.
I would probably use it more to monitor the machines than to do updates anyway, as I only own 4 of the 7 active ones on the network; the others belong to other members of the family.

Re:Can You Script? (1)

Antique Geekmeister (740220) | more than 5 years ago | (#27728135)

Mirroring CentOS or Fedora is fairly easy, although if you can afford it, please contribute back to the community by making your mirror available externally. (Rate limit it, but make it available: please don't be a freeloader.)

Mirroring RHEL, on which CentOS is based, is pretty awkward. Since the 'yum-rhn-plugin' knows to talk only to the authorized, secured RedHat repository or a RedHat Satellite Server, you pretty much have to find a way to build a mirror machine for _each RedHat Distribution_, whether i386 or x86_64, whether version 4.x or 5.x, whether 5Client or 5Server. This is a pain in the neck and sucks up licenses, or at least virtual instances on a server.

Re:Can You Script? (1)

AlterRNow (1215236) | more than 5 years ago | (#27728387)

I only mirror Ubuntu, as I don't have any Fedora systems.
And I do not have the connection to make my mirror public (75% throttle after 500 MB of upload); otherwise, I would.

Re:Can You Script? (2, Funny)

magarity (164372) | more than 5 years ago | (#27728153)

If your internet connection goes down, where will you get updates from?
Congrats, you just volunteered to mail him the floppies.

Puppet or CFEngine + Version Control (4, Informative)

hax0r_this (1073148) | more than 5 years ago | (#27727637)

Look into Puppet or CFEngine (we use CFEngine but I'm considering switching to Puppet eventually). They're both extremely flexible management tools that will trivially handle package management, but you can use them to accomplish almost any management task you can imagine, with the ability to manage or edit any file you want, run shell scripts, etc.

The work flow goes something like this:
1. Identify packages that need update (have a cron job run on every box to email you packages that need updating, or just security updates, however you want to do it)
2. Update the desired versions in your local checkout of your cfengine/puppet files (the syntax isn't easily described here, but it's very simple to learn).
3. Commit/push (note that this is the easy way to have multiple administrators) your changes. Optionally have a post commit hook to update a "master files" location, or just do the version control directly in that location.
4. Every box has an (hourly? Whatever you like) cron job to update against your master files location. At this time (with splay so you don't hammer your network) each cfengine/puppet client connects to the master server, updates any packages, configs, etc, runs any scripts you associated with those updates, then emails (or for extra credit build your own webapp) you the results.
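The client-side pull from step 4 might be a cron fragment like this (a sketch: the agent path is cfengine's usual location, and the random sleep provides the splay):

```shell
# /etc/cron.d/config-pull (sketch)
SHELL=/bin/bash
# 0-10 minute random splay so hundreds of clients don't hit the master at once
0 * * * * root sleep $((RANDOM % 600)); /usr/sbin/cfagent
```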

Re:Puppet or CFEngine + Version Control (0)

Anonymous Coward | more than 5 years ago | (#27727701)

+1 for puppet, works great. And what about Func? Does it run on debian?

Re:Puppet or CFEngine + Version Control (1)

Random Walk (252043) | more than 5 years ago | (#27727723)

We use cfengine with close to 100 machines and it works quite well. My only gripe is that on Ubuntu 8.04 it has a bug such that it can't determine which packages are already installed. And since the desktop is first priority for Ubuntu, their maintenance of software for larger environments is abysmal.

Re:Puppet or CFEngine + Version Control (1)

Dunkirk (238653) | more than 5 years ago | (#27727927)

My boss was going to have us use Puppet, but changed his mind before we got going; he now wants us to use Chef. We haven't gotten to the point of needing either one yet, so I haven't checked them out, but he definitely knows what he's doing around a computer, so I thought it worth throwing out.

Re:Puppet or CFEngine + Version Control (2, Funny)

walt-sjc (145127) | more than 5 years ago | (#27727973)

I don't want the Swedish chef Bork Bork Borking up the systems...

Re:Puppet or CFEngine + Version Control (1)

Dunkirk (238653) | more than 5 years ago | (#27728023)

[Citation needed] Ye ol' Swedish Chef is getting a little long in the tooth. For those with higher UIDs:

Re:Puppet or CFEngine + Version Control (0)

Anonymous Coward | more than 5 years ago | (#27728199)

There are concerns "out there" about Chef's longer term business intentions, so tread carefully.

Re:Puppet or CFEngine + Version Control (1)

the_g_cat (821331) | more than 5 years ago | (#27728415)

+1 for puppet too. Great community support, great support from the devs, very active IRC channel, with some guys from high profile companies actively using puppet and at least contributing to the ML and IRC.

In centos you could try (2, Informative)

shitzu (931108) | more than 5 years ago | (#27727661)

/etc/init.d/yum start

and what do you know - the updates are installed automagically without any manual intervention

Re:In centos you could try (1)

shitzu (931108) | more than 5 years ago | (#27727671)

/etc/init.d/yum-updatesd start

in centos 5.x

Re:In centos you could try (3, Interesting)

cerberusss (660701) | more than 5 years ago | (#27727819)

the updates are installed automagically without any manual intervention

I'm not sure that's a good idea on a server. Why would you mindlessly update packages on a server when there's no actual reason to do so?

Re:In centos you could try (3, Insightful)

supernova_hq (1014429) | more than 5 years ago | (#27727925)

Because as any decent linux-server-farm admin, you have a closely controlled local repository mirror that only has updates you specifically add.

Re:In centos you could try (0, Redundant)

Vanders (110092) | more than 5 years ago | (#27727933)

Yes, but that implies you are not "mindlessly updating packages": you're carefully controlling the packages that are available via your local YUM repository. The grandparent seems to be implying that you should just switch on YUM and cronjob it to pull every available update from the global YUM repo, on every server, which is clearly a bad idea.

Re:In centos you could try (2, Funny)

supernova_hq (1014429) | more than 5 years ago | (#27728263)

Thank you so much for responding to my post by simply taking what I said and re-wording it...

Re:In centos you could try (3, Informative)

BruceCage (882117) | more than 5 years ago | (#27728007)

I'd say that it depends on a lot of factors really.

First of all it depends on how mission critical the services that run on that system are considered and what kind of chances you're willing to take that a particular package might break something. The experience and available time of your system administrator also plays a significant role.

There's also the highly unlikely scenario that a certain update might include "something bad", for example when the update servers are compromised. See Debian's compromises in "Debian Investigation Report after Server Compromises" from 2003 and "Debian Server restored after Compromise" from 2006, and Fedora's in "Infrastructure report, 2008-08-22 UTC 1200".

I currently manage just a single box (combination of a public web server and internal supporting infrastructure) for the company I work at and have it automatically install both security and normal updates.

I personally trust the distro maintainers to properly QA everything that is packaged. Also, I don't think any single system administrator has the experience or knowledge to verify whether an update is going to install without problems. The best effort one can make is to determine whether or not an update is really needed, and then keep an eye on the server while the update is being applied.

In the case of security updates it's a no-brainer for me: they need to be applied ASAP. I haven't had the energy to set up a proper monitoring solution and I've never even seen Red Hat Network in action. So if I had to manually verify available updates (or even set up some shell scripts to help me here) it would be just too much effort considering the low mission-criticality of the server. If there does happen to be a problem with the server I'll find out about it fast enough, and then I'll take a peek at the APT log and take it from there.

Re:In centos you could try (1)

Vanders (110092) | more than 5 years ago | (#27727829)

Yes, but sometimes you want there to be manual intervention. For example, I probably don't want YUM (or up2date, or whatever) to upgrade the kernel, because I'll probably have a surprise the next time I have to reboot the server, especially if I need to install drivers that are not part of the stock kernel.

The same can sometimes be true of other packages: I may not want YUM to upgrade Apache2 on my web server, for example.

Func (2, Interesting)

foobat (954034) | more than 5 years ago | (#27727663)

I know it's got Fedora in its name, but it's been accepted as a package into Debian (and thus Ubuntu).

It's pretty cool: designed to control a lot of systems at once and avoid having to ssh into them all, it has a built-in certificate system, a bunch of modules already written for it, and is usable from the command line so you can easily add it into your scripts. It also has a Python API, so you could throw together some Django magic if you wanted a web front end. OpenSymbolic is a web front end for it already, although I haven't checked it out.

Not exactly what you wanted as there's a bunch of work you'd need to do to get it to do the things you want.

Re:Func (1)

foobat (954034) | more than 5 years ago | (#27727677)

Oh, I forgot to add: it's available for RHEL/CentOS systems through EPEL []

Up2date ? (1)

smoker2 (750216) | more than 5 years ago | (#27727687)

Why don't you determine which packages you DON'T want updated automatically and add them to an exclude list on each machine? Then you can run yum update from cron.daily to update the accepted packages, and set up a second cron job an hour or so later which checks for other available package updates. It's pretty simple to run yum check-update and pipe the output into an email.

I have no idea if you can do this with apt but I don't see why not.
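A sketch of the cron setup described above; the exclude pattern and mail address are made up, and the run() wrapper makes this a dry run that only prints the commands:

```shell
# Hypothetical sketch of the scheme above; the exclude pattern and the
# mail address are made up. run() just prints each command, making this
# a dry run; replace its body with "$@" to actually execute.
run() { echo "$@"; }

# Daily cron job: update everything except the excluded packages.
run yum -y --exclude='kernel*' update

# Second cron job, an hour later: mail whatever is still pending.
run sh -c "yum check-update | mail -s 'pending updates' root@example.com"
```

On a real system the exclude list could also live permanently in /etc/yum.conf instead of being passed on the command line.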

Re:Up2date ? (2, Insightful)

richlv (778496) | more than 5 years ago | (#27727915)

That won't quite work. Most likely the submitter does not want a fixed list of packages that never update, but instead wants to evaluate individual patches, so the decision is based on the exact patch, not made once for all possible patches to a particular package.

Use script + scriptreplay (2, Insightful)

AVee (557523) | more than 5 years ago | (#27727689)

Instead of a fancy web solution, you could use the script and scriptreplay commands on each system. Do whatever you need to do once on one system and record it using script. After that, replay the session on each of the other systems. You could either manually or automatically log on to each system and start the replay, or you could set up a cron job which fetches and replays the script you publish somewhere.

Not very fancy, but it will get the job done, and it will work for more than just updates; you could also use it to e.g. change settings or add packages, or basically anything else you can do from a shell in a repeatable way.

Check man 1 script and man 1 scriptreplay for details.
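For reference, a session might be recorded and replayed like this. One caveat worth noting: scriptreplay replays the captured terminal output at its original speed; it does not re-execute the commands, so to actually re-run the work on another machine you'd copy the commands into a plain shell script.

```shell
# Record a session; -t writes timing data to stderr, and -c runs a
# single command instead of an interactive shell (the command here is
# just a stand-in for your real update run).
script -t -c 'echo pretend-update' 2> update.timing update.typescript

# Later, on another terminal or machine: watch exactly what happened.
scriptreplay update.timing update.typescript
```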

pssh and clusterssh might be of interest (0)

Anonymous Coward | more than 5 years ago | (#27727691)

The easiest way would probably be to use clusterssh or pssh and define one cluster for each distro. Set up key-based login and then just have a cron job that does pssh -h ubuntu-machines -l username sudo apt-get update >> /var/log/ubuntu-update.log

Add a cron-entry and hosts file for every distro you need.
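Sketched out a bit further (the inventory contents and user name are made up; apt-get -s simulates the upgrade so the log shows what is pending without changing anything, and the pssh call is guarded so the snippet is harmless on machines where pssh isn't installed):

```shell
# Hypothetical inventory: one host per line.
cat > ubuntu-machines <<'EOF'
web1.example.com
web2.example.com
EOF

# Refresh indexes on every host, then log which upgrades are pending.
command -v pssh >/dev/null &&
    pssh -h ubuntu-machines -l admin \
        'sudo apt-get update && sudo apt-get -s upgrade' \
        >> ubuntu-update.log || true
```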

Spacewalk (1)

jon273 (619043) | more than 5 years ago | (#27727739)

I think Spacewalk from Red Hat [] would at least work with the CentOS machines. I believe it adds something similar to the update and configuration management features of RHN. It won't work with the Ubuntu desktops though.

Re:Spacewalk (1)

richlv (778496) | more than 5 years ago | (#27727833)

Note that Spacewalk currently requires Oracle, which means it might not be the best solution.

Re:Spacewalk (1)

xsuchy (963813) | more than 5 years ago | (#27728137)

Spacewalk should have support for PostgreSQL by the end of this year. []

Around the same time it will probably get support for DEB packages, so you may be able to manage not just Red Hat, CentOS, Fedora ... but Debian and Ubuntu as well. []

yum-updatesd is meant for that (5, Informative)

MrMr (219533) | more than 5 years ago | (#27727753)

1) yum erase whateveryoudontneed
2) chkconfig yum-updatesd on
3) Make sure do_update = yes, do_download_deps = yes, etc. are set in yum-updatesd.conf
4) /etc/init.d/yum-updatesd start
This makes your yum system self-updating.
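For reference, the relevant bit of /etc/yum/yum-updatesd.conf looks roughly like this (key names recalled from CentOS 5-era systems; double-check against your local file):

```ini
[main]
# how often to check for updates, in seconds
run_interval = 3600
# where to report: dbus, email or syslog
emit_via = syslog
# actually download and apply updates, including dependencies
do_download = yes
do_download_deps = yes
do_update = yes
```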

Re:yum-updatesd is meant for that (1)

buchner.johannes (1139593) | more than 5 years ago | (#27728067)

- not cross-distribution
  - yum can make mistakes (e.g. move your config files around)
  - even if your binaries are updated, the running servers are still executing the old (unlinked) code, so you'll have to restart your services eventually
  - if there is a critical kernel patch, you'll even have to reboot (probably less problematic if you run some virtualisation like Xen)
There is no such thing as a perfect self-updating system that doesn't need your supervision (although I've heard good things about classic Debian).

What a friend of mine suggested building for a similar use case was a cross-distribution server configuration tool (in Ruby) based on declarative configuration. For example, you provide a central configuration that says:


and maybe some asserts...

Then, you run the program on each host and it figures out which packages it needs to install and fetches configuration files from the central server.

At the end, you should get the desired state on your (heterogeneous) machines, or reports why it failed.

Man, I really should do this some time ...
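A minimal sketch of that idea in shell (the package list and tool detection are hypothetical, and it only prints the plan rather than executing it):

```shell
#!/bin/sh
# One central, declarative list of what should be installed.
WANTED="openssh-server rsync"

# Each host figures out its own package tool.
pkg_tool() {
    if command -v apt-get >/dev/null 2>&1; then echo "apt-get install -y"
    elif command -v yum >/dev/null 2>&1; then echo "yum install -y"
    else echo "echo would-install"; fi
}

# Dry run: print the commands this host would run to reach the state.
for pkg in $WANTED; do
    echo "$(pkg_tool) $pkg"
done
```

A real tool would also diff the desired state against the installed package set and report failures back to the central server, which is the hard part.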

Great! (4, Funny)

Frogbert (589961) | more than 5 years ago | (#27727785)

I for one look forward to the rational, well-thought-out debate on the various pros and cons of Linux distributions and their package managers that this story will become.

Re:Great! (2, Funny)

Opportunist (166417) | more than 5 years ago | (#27727831)

Dunno, should this be modded funny, redundant or flamebait? :)

Re:Great! (1)

sqldr (838964) | more than 5 years ago | (#27728171)


Re:Great! (1)

JohnConnor (587121) | more than 5 years ago | (#27728207)

Oh, what a clever sig!

Puppet, chef, cfengine (2, Informative)

^Case^ (135042) | more than 5 years ago | (#27727847)

Puppet [] is a tool made to do exactly what you're asking for by abstracting the specific operating system into a generic interface. It might be worth checking out. Also there's a newcomer called chef [] . And then there's the oldies like cfengine [] .
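For a flavour of that abstraction: a one-resource Puppet manifest like this (the package name is chosen arbitrarily) declares the desired state once, and Puppet picks the apt or yum provider per host:

```puppet
# The same manifest works on apt- and yum-based hosts; Puppet chooses
# the package provider appropriate to the platform.
package { 'openssh-server':
  ensure => latest,
}
```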

red carpet (0)

Anonymous Coward | more than 5 years ago | (#27727953)

Red Carpet used to be excellent for that; it was developed by Ximian back in the day, which is now part of Novell. But I haven't heard much about it in a while.

cron-apt+clusterssh (1)

julian67 (1022593) | more than 5 years ago | (#27727975)

On Debian-type systems cron-apt is extremely useful for having remote machines notify via email and/or syslog of available updates. By default it downloads but does not install new packages, though it can be set up to do anything you can do with apt-get; for example, you could have it automatically install security patches but not other packages. I don't have enough similar machines to benefit from using clusterssh, but cron-apt + clusterssh would seem to be ideal for remote package management of multiple similar Debian-type systems.
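For reference, the mail behaviour lives in /etc/cron-apt/config; from memory it is shaped roughly like this (check cron-apt's own documentation for the exact variable names):

```shell
# /etc/cron-apt/config -- sourced as shell variables by cron-apt
MAILTO="root@example.com"
# only send mail when packages were actually changed
MAILON="upgrade"
```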

Puppet (0)

Anonymous Coward | more than 5 years ago | (#27727989)

Puppet and Puppet Master (actual application names) work for RPM and Debian package management, and allow push-type updates, I think.

A win for Windows? (-1, Troll)

Computershack (1143409) | more than 5 years ago | (#27728013)

I thought Linux was supposed to be good at this? It does highlight the massive problem in Linux, in that choice isn't always a good thing.

Re:A win for Windows? (1)

colinrichardday (768814) | more than 5 years ago | (#27728077)

And is there such a repository for Windows that has roughly the same amount of software?

Apticron (0)

Anonymous Coward | more than 5 years ago | (#27728055)

The apticron package will e-mail you any pending updates on Ubuntu machines...

If you are good at scripting (1)

csshyamsundar (895231) | more than 5 years ago | (#27728071)

Try using Capistrano. A small script over the weekend will simplify your task.

It's been done easily for years now... (0)

Anonymous Coward | more than 5 years ago | (#27728113)

Although I'm sure there is a *nix version.

After all, it's ready for the desktop and the enterprise. Patch management and application distribution would have been one of the first things they sorted.


Some suggestions (0)

Anonymous Coward | more than 5 years ago | (#27728123)

The key is not to look for "ubuntu and centos solutions" (which sounds more like something typically asked of a consultant), but to apply your supposedly broad system administration skills and adapt the already existing administration tools to your environment.

Typically this is done with small snippets of shell script building on your environment's build and packaging tools, and so on, but more mature things do exist. I'll mention two:

arusha (ark) []
puppet []

Fully automatic install (1)

rolfc (842110) | more than 5 years ago | (#27728223)

I would say that FAI [] is worth looking at. You will have full control over which updates are applied.

Look at the problem differently (2, Insightful)

JohnConnor (587121) | more than 5 years ago | (#27728245)

I think that you are not putting your effort where it matters.
What is important is that the critical services run properly on each server. Sure, that can be affected by patching, but also by many other factors. So don't focus solely on the patching; focus instead on making sure all the services are running properly.
You should have your own scripts that check that each server is responding as required. Make your test suite as strong as you can and improve it each time a new problem crops up that wasn't caught by your monitoring tools.
Once you have this in place, you can safely do daily automatic updates and stop second-guessing the package maintainers. You will have a more reliable system and you will save yourself a lot of work over the years.
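A trivial starting point for such a check script (service names, hosts and ports are placeholders; real tests should exercise the application itself, not just the port):

```shell
#!/bin/sh
# Minimal health check: is each service still answering after an update?
check() {  # usage: check NAME HOST PORT
    if nc -z -w 5 "$2" "$3" 2>/dev/null; then
        echo "OK   $1"
    else
        echo "FAIL $1"
    fi
}

check web  www.example.com 80
check mail mx.example.com  25
```

Run it before and after the nightly update run and diff the two outputs to catch regressions immediately.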

KuRGaN (0)

Anonymous Coward | more than 5 years ago | (#27728271)

Cfengine is your friend ;-)

You can launch the update command every day and manage package installations with the package command.
Sorry for my bad English

Altiris (1)

MikeB0Lton (962403) | more than 5 years ago | (#27728291)

Altiris can do what you ask. You'll have to spend money to get it, but you can do software delivery and patching for Windows, *NIX, and Apple from it (and many other things, depending on the size of your wallet).

Configuration management (1)

cluening (6626) | more than 5 years ago | (#27728327)

There are oodles of configuration management tools out there that do at least most of what you want. My personal recommendation is Bcfg2: []. It doesn't quite have the entire web interface (yet), but it is fantastic for keeping everything up to date and clean and telling you when you have outliers. I currently use it for the 350 or so support machines for the 5th-fastest computer in the world [], and I know _much_ larger installations are using it too.

Answer: That's your job. (2, Insightful)

Qbertino (265505) | more than 5 years ago | (#27728397)

That's your job. The Bash CLI, the CLI toolkit, CLI Emacs, key-based SSH and a well-maintained, well-documented pack of your own scripts in your favourite interpreted language are just what it takes to do this sort of thing. No fancy bling-bling required or wanted. It would make your life worse, not easier, in the long run.

never update a live system .. (2, Interesting)

viralMeme (1461143) | more than 5 years ago | (#27728433)

"I administer several Ubuntu desktops and numerous CentOS servers. One of the biggest headaches is keeping them up-to-date with each distro's latest bugfix and security patches"

My advice is: if it ain't broke, don't fix it, especially on a production server. Have two identical systems and test the latest bugfix on one before you roll it out to the live system. You don't know what someone else's bugfix is going to break, and you may have no way of rolling it back.