Linux Patch Management
Ravi writes "Any system or network administrator will know the importance of applying patches to the various software running on their servers, be it for the numerous bug fixes or security vulnerabilities. When you are maintaining just a single machine, this is really a simple affair of downloading the patches and applying them to your machine. But what happens when you are managing multiple servers and hundreds of client machines? How do you keep all these machines under your control up to date with the latest bug fixes? Obviously, it is a waste of time and bandwidth to individually download all the patches and security fixes for each machine. This is where the book Linux Patch Management - Keeping Linux Systems Up To Date, authored by Michael Jang, gains significance. This book, released under Bruce Perens' Open Source Series, aims to address the topic of patch management in detail." Read the rest of Ravi's review
Linux Patch Management - Keeping Linux Systems Up To Date
author | Michael Jang |
pages | 270 |
publisher | Prentice Hall |
rating | 8 |
reviewer | Ravi |
ISBN | 0-13-236675-4 |
summary | This book offers Linux professionals start-to-finish solutions, and examples for every environment, from single computers to enterprise-class networks. |
The book is divided into seven detailed chapters, each covering a specific topic related to patch management. In the first chapter, the author starts by giving an introduction to basic patch concepts and the various distribution-specific tools available to the user, including the Red Hat up2date agent, SUSE YaST Online Update, Debian apt-get, and community-based sources like those for Fedora. What I found interesting was that, instead of just listing the various avenues the user has for patching his system, the author goes the extra mile to stress the need for maintaining a local patch management server and for supporting multiple repositories on it.
The second chapter deals exclusively with patch management on Red Hat and Fedora based Linux machines. Here the author walks the readers through creating a local Fedora repository. Maintaining a repository locally is not about just downloading all the packages to a directory on your local machine and hosting that directory on the network. You have to deal with a lot of issues here, like the hardware requirements, the kind of partition arrangement to make, what space to allocate to each partition, whether you need a proxy server and more. In this chapter, the author throws light on all these aspects in the process of creating the repositories. I really liked the section where the author describes in detail the steps needed to configure a Red Hat network proxy server.
The third chapter of this book, namely "SUSE's Update Systems and rsync Mirrors," describes in detail how one can manage patches with YaST. What up2date is for Red Hat, YaST is for SuSE. Around 34 pages have been exclusively allocated to explaining each and every aspect of updating SuSE Linux using various methods, like YaST Online Update and using rsync to configure a YaST patch management mirror for your LAN. But the highlight of this chapter is the explanation of Novell's unique way of managing the life cycle of Linux systems, which goes by the name ZENworks Linux Management (ZLM). Even though the author does not go into the details of ZLM, he gives a fair idea of this new topic, including accomplishing such basic tasks as installing the ZLM server, configuring the web interface, adding clients, and so on.
Ask any Debian user what he feels is the most important and useful feature of this OS, and in 90 percent of cases you will get the answer that it is Debian's superior package management. The fourth chapter takes an in-depth look into the working of apt. Usually a Debian user is exposed to just a few of the apt tools. In this chapter, though, the author explains all the tools bundled with apt, which makes this chapter a ready reference for anyone managing Debian-based systems.
If the fourth chapter concentrated on apt for Debian systems, the next chapter explores how the same apt package management utility could be used to maintain Red Hat based Linux distributions.
One of the biggest complaints from users of Red Hat based Linux distributions a few years back was the lack of a robust package management tool in the same league as apt. To address this need, a group of developers created an alternative called YUM. The last two chapters of this book explore how one can use YUM to keep the system up to date as well as how to host one's own YUM repository on the LAN.
Each chapter of the book explores a particular tool for achieving patch management in Linux, and the author gives an in-depth explanation of each tool's usage. All Linux users, irrespective of which distribution they use, will find this book very useful for hosting their own local repositories, because the author covers all the distribution-specific tools. The book is peppered with examples and walkthroughs, which makes it an all-in-one reference on the subject of Linux patch management.
Michael Jang has specialized in networks and operating systems. He has written books on four Linux certifications, and the one on the RHCE is very popular among students attempting to get Red Hat certified. He also holds a number of certifications, such as RHCE, SAIR Linux Certified Professional, CompTIA Linux+ Professional, and MCP.
You can purchase Linux Patch Management - Keeping Linux Systems Up To Date from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
Update: 02/07 14:52 GMT by J : Book rating changed from an intended 4 (of 5) stars to Slashdot-normalized 8 (of 10), by Ravi's request.
Patches using RPM (Score:4, Interesting)
Anyone?
Re:Patches using RPM (Score:1)
Re:Patches using RPM (Score:2)
Just click here and you'll be OK... trust us
Re:Patches using RPM (Score:4, Interesting)
You have two choices:
Make things easy for some who can't remember some cryptic command to download source, compile, install, patch, re-patch, re-re-patch, change the config, find it was the wrong config, hunt for the config, change the config, find they have to re-compile, re-compile, re-install, re-re-re-patch and finally use.
Stop whining.
Re:Patches using RPM (Score:2)
Re:Patches using RPM (Score:2)
Two to three steps needed. (Score:3, Informative)
Step 2 has two parts. Files that simply overwrite existing files can be installed with no further change; there probably wouldn't be too many examples of those. The other part is to install patch files into a patch archive directory.
Step 3 h
Re:Patches using RPM (Score:2, Informative)
Re:Patches using RPM (Score:5, Insightful)
I will try to explain why this probably won't happen in the foreseeable future, and why it probably isn't a good idea.
The only advantage that a binary patch system has over distributing the whole rpm package is that it saves bandwidth.
A major disadvantage of such a system is that it creates twice the overhead, since most of the work a Linux distributor has with patching its software is the (regression) testing. So now the Linux distributor has to track _and_ test two kinds of updates: binary diff packages, and whole packages. They can't skimp on testing one of the two types, since that would almost certainly mean that a trivial error borks the untested package, which would then hose thousands of machines. And if the distro skimps on distributing the whole packages, well, then types like me would start to whine about how much it sucks to keep track of "package" + "hotfix_1" + "hotfix_2" + "hotfix_3" instead of just getting "updatedpackage".
The package management systems would also have to be reworked, since they would now have to keep detailed track of packages and updates, and the exact order in which to apply those updates. (When I was working with MS Windows servers years ago, it was not uncommon for Windows Update to lose track of updates and installed software, so that old software would overwrite new security patches.)
In short, a binary diff patch system would mean a lot of work for a negligible gain.
Way back when I started with Linux, I also thought it was a good idea to distribute just binary diff updates, since that was what I was used to, and because it somehow seems wasteful to distribute a whole package.
I changed my mind when I actually started to manage some Linux servers.
--
Regards
Peter H.S.
Re:Patches using RPM (Score:1)
Historically, these have been provided in a tar.gz file that installed OVER the rpm files, which makes it almost impossible to go through and see what patch level something is at.
With RPM I can
Re:Patches using RPM (Score:2)
The downside to this is that it's prone to errors (e.g., you make a mistake and the rpm database could think that package owns files that don't ex
APT makes this a no-brainer. Perfect debian dist.. (Score:2)
Re:Patches using RPM (Score:2)
NAME
makedeltarpm - create a deltarpm from two rpms
SYNOPSIS
makedeltarpm [-v] [-V version] [-z compression] [-s seqfile] [-r] [-u]
oldrpm newrpm deltarpm
makedelt
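To make the man page excerpt above concrete, here is a minimal sketch of how the deltarpm tools are used together; the foo-* filenames are hypothetical stand-ins, and the commands assume the deltarpm package (which provides makedeltarpm and applydeltarpm) is installed:

```shell
#!/bin/sh
# Sketch only: the foo-* filenames are hypothetical.
# makedeltarpm and applydeltarpm ship in the "deltarpm" package.
if command -v makedeltarpm >/dev/null 2>&1; then
    # server side: build a binary delta between two revisions of a package
    makedeltarpm foo-1.0-1.x86_64.rpm foo-1.0-2.x86_64.rpm foo.drpm ||
        echo "sample rpms not present; nothing to delta"
    # client side would then reconstruct the full new rpm from the
    # installed old version plus the small delta:
    #   applydeltarpm foo.drpm foo-1.0-2.x86_64.rpm
else
    echo "deltarpm tools not installed"
fi
```

This is exactly the trade-off discussed above: the .drpm is small on the wire, but the distributor still has to test and ship the full rpm alongside it.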
ZENworks Linux Management (Score:5, Interesting)
Re:ZENworks Linux Management (Score:2)
Re:ZENworks Linux Management (Score:2)
Re:ZENworks Linux Management (Score:2)
Ximian used to provide this for you.
2 - the current version of ZLM only runs on SLES 9; however, ZLM 6.x runs on RHEL and SLES; in the future you may see Novell re-add support for RHEL as a hosting server.
ZLM 7 is required if you run RHEL4 clients.
3 - Shitty Novellisms.. such as?
All of the stuff I'm sure you'd describe as "va
Patch Management (Score:1)
Re:Patch Management (Score:1)
The Smart Package Manager project has the ambitious objective of creating smart and portable algorithms for solving adequately the problem of managing software upgrading and installation. This tool works in all major distributions, and will bring notable advantages over native tools currently in use (APT, APT-RPM, YUM, URPMI, etc).
Does this sound eerily similar to an academic publication to you? Regardless, it does seem to be something aspiring to be useful in the ente
Re:Patch Management (Score:1)
yawn: in linux, it's called a package, not a patch (Score:3, Insightful)
Re:yawn: in linux, it's called a package, not a pa (Score:1, Funny)
While you're downloading your 50 meg package *I'll* be smugly compiling the 300 megs worth of source code to patch the 3 changed lines of code.
Re:yawn: in linux, it's called a package, not a pa (Score:3, Funny)
Red Hat Enterprise - chkconfig --level 35 rhnsd on (Score:2)
Could Xen help? (Score:2)
I wrote a book on Linux Patch Management (Score:5, Funny)
apt-get update ; apt-get upgrade
Re:I wrote a book on Linux Patch Management (Score:3, Insightful)
For 10 machines? 50? 100? 500? No thanks.
Re:I wrote a book on Linux Patch Management (Score:3)
although that only works when the patch doesn't need human attention
Re:I wrote a book on Linux Patch Management (Score:1)
Re:I wrote a book on Linux Patch Management (Score:3, Insightful)
You don't point the production machines at the distro's repository, but non-retardation is assumed and hence these bits aren't usually made explicit.
Re:I wrote a book on Linux Patch Management (Score:1)
Re:I wrote a book on Linux Patch Management (Score:2)
for i in $HOSTS    # hypothetical variable holding the list of machines
do
    ssh $i command-to-roll-back-a-patch
done
Re:I wrote a book on Linux Patch Management (Score:2)
1. Use APT (or insert any other similar tool [YUM, Portage, etc.]) which is heavily tested by thousands or even millions of developers and allows you to make all packages uniform as far as installation and packaging whether it is a homebrew package or a distro package.
or
2. Homebrew package management. Don't get me wrong, I'm not saying that this isn't a viable option, there are advantages to
Package management includes testing. (Score:5, Informative)
Before anything goes into production, it goes into test.
YOU are the one responsible if a package breaks a production server.
You can still set a cron job to auto-magically download and install the apps, but you'd point it to your own repository where you put only the packages that have passed your testing.
The more "mission critical" something is, the less you want to automate ANY process that changes ANYTHING on the OS or apps.
For our critical database server, I come in on the weekend and hand apply every patch. And that is AFTER those same patches have been applied to the test server.
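The workflow above can be sketched with a pair of config fragments; the hostname repo.internal.example and the cron schedule are illustrative placeholders, not from the post:

```shell
# /etc/apt/sources.list on every production client: point ONLY at the
# internal repository holding packages already vetted on the test box
# (hostname is a placeholder)
deb http://repo.internal.example/debian stable main

# /etc/crontab on the clients: pull whatever has been promoted into
# that repository; nothing reaches it without passing testing first
17 3 * * * root apt-get update && apt-get -y upgrade
```

The automation stays, but the repository itself becomes the gate: promotion into it is a manual, post-testing step.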
I'd patch your book on Linux Patch Management (Score:2)
Anyway, first of all I'd use aptitude instead of apt-get. It has similar command line options (aptitude update, aptitude [dist-]upgrade), it has nice ways to resolve dependency problems, and it keeps a log of the upgrades (more precisely, of the upgrade requests, IIRC).
Then, having each box do an update on its own is an unnecessary waste of bandwidth. There is stuff like apt-proxy [sourceforge.net].
Another trick is to copy the
Re:I'd patch your book on Linux Patch Management (Score:2)
Can you also just put them in a nfs share and mount that on your remote hosts? It works for gentoo... But I try to avoid debian because every time I mess with it, I get pis
Re:I'd patch your book on Linux Patch Management (Score:2)
There might be problems when two machines mess with the "partial" subdirectory, which contains unfinished downloads. Of course that can be solved (remounting something over partial is the first thing that comes to mind), but then I'd choose some apt tools instead.
Re:I'd patch your book on Linux Patch Management (Score:3, Informative)
My post was simply meant to make light of someone's attempt to write a book on a topic that seems trivial to me. Although my original comment was quite simple in nature, I was meaning to point to a versatile set of tools. IIRC, Debian and the APT tools were developed because of Ian Murdock's need to keep the Pixar render cluster up to date. Any 'debian in the datacenter' SysAdmin can tell you that the entire suite of APT tools is very handy. RedHat's recent attempt
Re:I'd patch your book on Linux Patch Management (Score:1)
Re:I'd patch your book on Linux Patch Management (Score:1)
I find that apt-cacher [nick-andrew.net] is much simpler and nicer. It doesn't support every possible method of fetching packages like apt-proxy purports to, but how many do you really need? HTTP seems plenty good enough.
Re:I wrote a book on Linux Patch Management (Score:2)
emerge sync; emerge -uD world
Re:I wrote a book on Linux Patch Management (Score:2)
Re:I wrote a book on Linux Patch Management (Score:2)
If you have a really old install and have not done an 'emerge -Du world', then I could see you running into problems. I had problems because I had installed Gentoo about 4 or 5 years ago and was not using the "-D" option for a while, which updates libraries
Dumb question (Score:2)
Umm, why? Does a package repository need to be more super-optimized than any other network resource?
details details (Score:2)
Maybe you should re-read the book and pay more attention this time?
Re:details details (Score:2)
"re-read" implies that the book was read once already; from its depth, I assumed this review was based on a hard look at the table of contents.
How about a useful link? (Score:3, Informative)
http://www.phptr.com/promotions/promotion.asp?pro
This book will be there as a PDF in a few months, or you can buy it in dead tree format now.
Other books are also linked there.
Authors Wanted (Score:3, Interesting)
Book summary (Score:2, Interesting)
2. Next, set up your sources.list file to point only to that server.
3. 17 8 * * * root apt-get update; apt-get upgrade
4. ???
5. Profit!!!
Re:Book summary (Score:2)
You need to QA your packages that you push out to all systems. You need to make sure that the patches you install keep the system as stable as it was before. A sysadmin where I work once imaged two systems about two weeks apart. He patched one system (without testing) and it took
Re:Book summary (Score:1)
Common sense applies...then again, this is Slashdot.
four stars out of how many? (Score:2)
-b
Re:four stars out of how many? (Score:1)
Re:You know... (Score:1)
Re:You know... (Score:2)
It is essential for programming however. Which, considering how many programmers are purportedly on slashdot, I find ironic.
apt-proxy (Score:3, Informative)
Gentoo Linux anyone? (Score:2, Interesting)
Did it get a mention, or not?
Re:Gentoo Linux anyone? (Score:4, Funny)
Re:Gentoo Linux anyone? (Score:1)
What's in a name? (Score:2)
But the name "patch management", sorry, that really grates on me. Almost universally, GNU/Linux systems have abandoned patches and perform upgrades to whole components at a logical level. It's the best way found so far, but I don't think of those as "patches".
Or is that just me?
An alternative approach: Don't patch, use rsync (Score:3, Interesting)
I like making all files on all machines on a LAN, excluding network addressing, electronic licensing and logs, bit-for-bit identical. Doing so massively reduces management overhead and improves management control.
I've managed networks of several hundred machines this way and it works well. I checksum all files and directories on all machines on a regular basis, and if anything's different in time or space I find out why and make sure it doesn't happen again. I've found dozens of very obscure and troublesome software and hardware bugs this way, have very good uptime, and I can concentrate on making sure the master machines are well configured rather than waste time trying to put out fires all over the network all the time. If individual machine classes need to have different configurations, I partition those differences out and manage them separately.
Distributing patch packages is error prone. By working at the file level it's easy to be confident everything is okay. You can also often distribute and back out "patches" (just a list of files to be rsync'ed) in the background very quickly at short notice with minimal impact on users.
---
Keep your options open!