
How Do You Create Config Files Automatically?

timothy posted more than 5 years ago | from the hire-7-new-admins dept.

Networking 113

An anonymous reader writes "When deploying a new server/servergroup/cluster to your IT infrastructure, deployment (simplified) consists of the following steps: OS installation: to do it over the network, the boot server must be configured for this new server/servergroup/cluster; configuration/package management: the configuration server has to be aware of the newcomer(s); monitoring and alerting: the monitoring software must be reconfigured; and performance metrics: a tool for collecting data must be reconfigured. There are many excellent software solutions for those particular jobs, say configuration management (Puppet, Chef, cfengine, bcfg2), monitoring of hosts and services (Nagios, Zabbix, OpenNMS, Zenoss, etc.) and performance metrics (Ganglia, etc.). But each of these tools has to be configured independently, or at least its configuration has to be generated. What tools do you use to achieve this? For example, when you have to deploy a new server, how do you create configs for, let's say, a PXE boot server, Puppet, Nagios and Ganglia, all at once?"
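
One common answer to the question (a hedged sketch, not from the submitter) is to keep a single inventory file and fan it out into per-tool config fragments. Every file name, format, and template name here is illustrative:

```shell
#!/bin/sh
# Sketch: one inventory line per host ("name mac ip"), fanned out into
# fragments for two of the tools mentioned. All names are illustrative.

# sample inventory (in practice this file would already exist)
cat > hosts.txt <<'EOF'
web1 00:16:3e:aa:bb:cc
EOF

: > nagios-hosts.cfg
: > dhcpd-hosts.conf

while read name mac ip; do
  # Nagios host definition
  printf 'define host {\n  host_name %s\n  address %s\n  use generic-host\n}\n' \
    "$name" "$ip" >> nagios-hosts.cfg
  # dhcpd stanza so the PXE boot server knows about the machine
  printf 'host %s { hardware ethernet %s; fixed-address %s; }\n' \
    "$name" "$mac" "$ip" >> dhcpd-hosts.conf
done < hosts.txt
```

The point is that the inventory is entered once; each additional tool costs one more generator, not one more place to type the data.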


Here, let me google that for you (-1, Redundant)

Anonymous Coward | more than 5 years ago | (#28663183)

Re:Here, let me google that for you (0)

Anonymous Coward | more than 5 years ago | (#28663259)

That wasn't very helpful.

Re:Here, let me google that for you (2, Insightful)

sopssa (1498795) | more than 5 years ago | (#28663369)

Eh, has Linux server administration really come to this? Hire knowledgeable admins who can script. Linux is perfect for scripting this kind of configuration and setup. You only need to write those scripts once, and you're ready to deploy them on all systems after a minimum installation.

If you're a large company, just develop your own solutions; it's far better than using someone else's. Just look at Google or any other successful company.

You expect networkers to be coders? LOL (-1, Troll)

Anonymous Coward | more than 5 years ago | (#28663601)

Network Administrator & Network Engineer + Network Tech definition: Users with a better password, that only USE what their blatantly technical superiors, coders, create for them to USE (after they read the manual for the tools they USE, & act as if they know something (further maintaining their facade of being a knowledgeable asset)).

("Oh NOOOooo..." - here come the flocks of angry 'wannabes' coming down on this post, to "mod it down", in effete retaliation... too bad the truth IS the truth, eh boys?)


(Funny part is, the captcha is TRUEST)

Re:Here, let me google that for you (1)

BostjanSkufca (1596207) | more than 5 years ago | (#28663693)

Eh, has Linux server administration really come into this?

Nope, it hasn't. But I did ask the question in the first place to check whether I was missing something. Scripting is fun, I love it, but doing everything from scratch (although I am a fan of it, as it gives me knowledge and total control) is a bit time-consuming. So if there is simple software with a nice web and API interface for this, with the ability to hook in custom scripts that do the actual work, I would like to know about it.

Re:Here, let me google that for you (5, Informative)

TooMuchToDo (882796) | more than 5 years ago | (#28664581)

Re:Here, let me google that for you (1)

BostjanSkufca (1596207) | more than 5 years ago | (#28664597)

Looks promising! Tnx!

Re:Here, let me google that for you (1)

TooMuchToDo (882796) | more than 5 years ago | (#28664601)

Good luck. It's still not 1.0 release grade, but we're using it with several thousand servers without many problems.

Re:Here, let me google that for you (1)

TooMuchToDo (882796) | more than 5 years ago | (#28664603)

Excellent point. We admin 2500+ Linux servers, and while we use several open source toolkits to do a lot of the heavy lifting, they're all glued together with bash scripts and Python code (and a SQL backend).

Re:Here, let me google that for you (4, Interesting)

jvillain (546827) | more than 5 years ago | (#28664613)

I put all my config stuff into a noarch RPM and install it when I kickstart the box. When the configs need to be updated, I update the RPM and roll it out as an update. That way we know what version of everything we have, and you can use the RPM tools to check whether anything has been changed.
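
A minimal spec skeleton for such a noarch config RPM might look like the following (package name, version, and paths are hypothetical; a pure-config package needs no %build step):

```
Name:      site-configs
Version:   1.0
Release:   1
Summary:   Centrally managed configuration files
License:   Internal
BuildArch: noarch

%description
Config files rolled out and versioned via RPM.

%install
mkdir -p %{buildroot}/etc/myapp
install -m 644 myapp.conf %{buildroot}/etc/myapp/myapp.conf

%files
%config(noreplace) /etc/myapp/myapp.conf
```

`rpm -V site-configs` then reports any locally modified files, which is the verification the parent mentions.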

Re:Here, let me google that for you (0)

Anonymous Coward | more than 5 years ago | (#28665395)

If by "come into this" you mean people started to get a fucking clue, then yes, it has.
Custom scripts are rank amateur stuff. Consider an environment where rapid integration and regression testing takes place; try doing that with scripts. What's the lead-up time? 2 weeks? 3? Months? Now what happens when the application devs change something... repeat that lead-up time?
What most administrators consider scripting is not automation either; it's vim reduction. They do less direct text entry, but they still are not automating things based on variables. It's usually some bastardized conditional "scp" tripe with very little error checking, prevention or verification.
Consider this: how about moving things from that test/dev environment into stage and ultimately production? What's your lead time, how long are your maintenance windows, how many outages from broken scripts, etc.?
Does your script automatically adjust based on hardware specs, network name, SELinux being enabled, application load, content version, etc.? Doubtful.
Now, if you're using a standard config management system across the board, the lead time gets progressively lower the further along you get, and the historical "how is this managed" problems die off along with personal eccentricities and poor planning.

Re:Here, let me google that for you (1)

SlashWombat (1227578) | more than 5 years ago | (#28666071)

The very first thing that came to mind was "Isn't this what you lazy bastards were hired for?" Jeez, if you don't want to do the marginally interesting stuff, I would hate to see your performance on the day-in, day-out tedium that can be IT.

Emacs or vi... (1, Insightful)

Anonymous Coward | more than 5 years ago | (#28663187)

And I type the stuff I need.

(And I start a war on /. )

A Database w/ Config File Generators (5, Interesting)

Anonymous Coward | more than 5 years ago | (#28663189)

At my institution, we run a MySQL database which we use to store information (such as IP address and SNMP community) about network devices, Linux servers, etc. We then have config file generators that query the database and generate the appropriate configs for Nagios and our other tools, and restart them if needed. The idea is that once you seed the initial information in the database, the config generators will pick it up and do their work, so we won't have to remember to add the new hosts everywhere.

Re:A Database w/ Config File Generators (1)

BostjanSkufca (1596207) | more than 5 years ago | (#28663783)

Do you use server-push or client-pull method?

Re:A Database w/ Config File Generators (1, Interesting)

Anonymous Coward | more than 5 years ago | (#28664565)

We do something similar with maintenance scripts (written in Perl) which generate configuration files (amongst other functions) based on the contents of a central management database (we're using PostgreSQL).

By default, we do client-pull. A cron-job fires periodically and re-runs all of the maintenance scripts configured for that time interval. (Some scripts run every 15 minutes, some only run overnight.)

In the event that a change needs to be pushed out rapidly, then we make the change the same way as before, then use a mass-trigger utility to trigger the scripts immediately by firing up parallel SSH connections to the subset of machines concerned.

You may also be interested in Bootstrapping an Infrastructure.

Re:A Database w/ Config File Generators (1)

TooMuchToDo (882796) | more than 5 years ago | (#28664619)

Have you thought about using Rocks or Redhat's Spacewalk to manage the server configs/kickstarts/etc and then kick that info over to Nagios?

Re:A Database w/ Config File Generators (0)

Anonymous Coward | more than 5 years ago | (#28665477)

That is an excellent idea! I wonder why the original poster didn't think of automating the whole process!

How about Debian and aptitude? (1)

G3ckoG33k (647276) | more than 5 years ago | (#28663229)

How about Debian, which automatically includes dpkg, aptitude and synaptic?

From my experience it would take care of most anything.

And with a good admin, even more.


Re:How about Debian and aptitude? (-1, Troll)

Anonymous Coward | more than 5 years ago | (#28663379)

"From my experience" doesn't really qualify when all you have been doing is running that server in the closet. Also, how's the 'burging going? Does it feel good creating pointless links? Why didn't you link to your other hit-words?

Create a single boot image (1)

Colin Smith (2679) | more than 5 years ago | (#28663685)

Boot to ramdisk... Depending on how big your image is and how much ram you've got.

The problem with puppet, debian/apt etc is the inevitable gradual divergence of systems as time passes; scripts fail, packages don't get installed etc. It's exactly the same problem that life faces, you'll notice that all large multicellular organisms go through a stage where there is initially only a single cell. That's because mutations creep in otherwise and the cells diverge from one another over time. Eventually you're left with a random slime which is widely divergent in code.

Apply all your updates to a single image, boot the image on all the machines you want to run it on, they are now all running identical code. Guaranteed. Arrange your clusters such that any one machine can be offline. Plus, if you have an image you're booting, you can roll back to older versions trivially.

Re:Create a single boot image (2, Informative)

BostjanSkufca (1596207) | more than 5 years ago | (#28663993)

Can't boot to the same image; servers are colocated at different providers. For configuration management I find Puppet working quite reliably, and it does notify me about failed scripts/installations. And I prefer restarting only services, not whole servers, unless really necessary.

When I get to deploy a new server, the workflow I would like to achieve goes like this:

1. I input all the relevant data (MAC/IP/mounts/purpose/misc) into some sort of application, via browser (or API for larger installs).

2. This application then creates the necessary config files for:
- the PXE boot server (which does the initial install of the bare OS with a functional Puppet),
- the puppetmaster (which completes the installation and creates a fully functional server by compiling packages), or whatever configuration management software,
- Nagios (or whatever monitoring software),
- Ganglia (or whatever performance metrics software).

3. I just power up the machine and all the work gets done automatically.

The sysadmin's job should not primarily consist of repeating the items from step #2 above; those unnecessary steps are what I am trying to avoid. I still have to create templates for all the above stuff, but that is the fun part anyway.

Re:Create a single boot image (1)

Colin Smith (2679) | more than 5 years ago | (#28666015)

Can't boot to same image, servers are collocated at different providers.

We have servers all over the world, at multiple different providers; you just need a PXE/TFTP server at each site.

And I prefer restarting only services, not whole servers, unless really necessary.

Servers provide services. Without a service, the server is useless. You only need to reboot the server when the binaries are updated, i.e. when you are performing an upgrade. Anyway, with an OS image, the workflow is:

Add the MAC address to the DHCP server.
Config the BIOS to PXE boot.
Power it on.

Image boots and is immediately functional. No additional installation, no performing upgrade steps. No work needing to be done.

Re:Create a single boot image (1)

SanityInAnarchy (655584) | more than 5 years ago | (#28664003)

Boot to ramdisk... Depending on how big your image is and how much ram you've got.

In what way is that better than booting to ramfs? Then, if you have a local disk, map it as swap. Done.

Dear Slashdot.. (1)

Anonymous Coward | more than 5 years ago | (#28663271)

How do I automate away a sysadmin position?




Heh, the Captcha word is "unions"

Re:Dear Slashdot.. (1)

BostjanSkufca (1596207) | more than 5 years ago | (#28663733)

I am a sysadmin, and all I would like is to save some time by eliminating unnecessary typing/programming/scripting and spend it instead on evaluating, testing, heck, even thinking.

Re:Dear Slashdot.. (1)

maharb (1534501) | more than 5 years ago | (#28664853)

That's how the smart sys admins do it. Then their brains melt away because they have too much time to make first posts on various web forums and only the dumb ones are left.

Generate config files (4, Interesting)

atomic-penguin (100835) | more than 5 years ago | (#28663323)

That is what configuration management is supposed to do; as far as I know, Puppet and cfengine do this already. I believe Puppet compiles configuration changes and sends its hosts their configuration automatically every 30 minutes.

I don't know what Unix or Linux vendor you're using Puppet with, but whenever you do your network install, assuming you have some unattended install process, there should be some way to run post-installation scripts. Create a post-install script that joins your newly installed hosts to your Puppet server. Run this script with kickstart, preseed, etc., at the end of the install process. Once newly installed hosts are joined to your central Puppet server, Puppet can manage the rest of the configuration.
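
A hedged sketch of such a kickstart hook (the hostname is made up; the flags are the era-appropriate puppetd ones, and preseed's late_command would be the Debian equivalent):

```
%post
# Illustrative only: join the freshly installed host to the puppetmaster,
# waiting up to 60s for the certificate to be signed.
puppetd --server puppet.example.com --waitforcert 60 --test
```

After the certificate is signed on the master, subsequent agent runs need no further manual steps.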

Re:Generate config files (1)

mindstrm (20013) | more than 5 years ago | (#28665731)

Puppet actually pulls: the clients pull from the master (where the config tree lives), by default every 30 minutes, but this can be configured to whatever granularity you want.
This makes it trivial to have multiple masters and things like that. As far as I can tell, the master doesn't keep track of any state; it only provides relevant configuration information to authorized clients.
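
That pull interval corresponds to the agent-side runinterval setting; a puppet.conf fragment along these lines (value in seconds; older releases used a [puppetd] section instead of [agent]):

```ini
[agent]
runinterval = 1800
```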

a bit of a special case (1)

ILongForDarkness (1134931) | more than 5 years ago | (#28663347)

but at my work we use PXE boot and cfengine on one of our CentOS clusters. The nodes PXE boot off the disk array of the cluster; after the install, the next stage of the PXE/kickstart script installs and runs cfengine, which gives the node all its NFS mounts, etc. I don't see why you couldn't do a similar thing for Nagios configuration and Ganglia. In fact, for clusters, I think Rocks (which uses CentOS, PXE, and Sun Grid Engine just like our cluster) has the option of using Ganglia for monitoring too, so you could probably steal their setup and see how they automated it.

OpenNMS (0)

Anonymous Coward | more than 5 years ago | (#28663361)

OpenNMS runs a scan every 10 hours on my network. You tell it what your network ranges are, and it finds hosts and brings them into the configuration by itself, without having to generate config files. If you partition your network correctly and only use certain IP ranges for production hosts, you can bring a system into monitoring quickly. Depending on the size of the netblocks, you could also set OpenNMS to scan more frequently. Let's say you assign a window of 8 hours for a host to be in production; just have OpenNMS scan every 8 hours and you won't be bugged by the NOC paging you about the new server you keep rebooting.

Re:OpenNMS (1)

BostjanSkufca (1596207) | more than 5 years ago | (#28664047)

"Brings them in configuration..."

For monitoring? Or for other things also, like configuration management?

Re:OpenNMS (1)

Sadsfae (242195) | more than 5 years ago | (#28665749)

With a properly set up configuration management system you can have it all.

One-button, dummy-mode provisioning: OS install, configuration files, daemons, monitoring and metrics, authentication, and external NAS/SAN storage in one swoop.

I would recommend checking out cobbler/puppet/koan or a tuned cfengine/pxe+kickstart setup.

XCAT and post scripts (2, Informative)

clutch110 (528473) | more than 5 years ago | (#28663393)

We have XCAT and post scripts set up to do the majority of our work: imaging the machine (PXE generation, DHCP config), installing files based on group, setting the Ganglia config. I don't have any monitoring set up on compute nodes, as I have Ganglia open daily to watch for cluster node failures. Zenoss is done afterwards, as I have yet to find a good way to automate it.

xorg (1)

FudRucker (866063) | more than 5 years ago | (#28663415)

#!/bin/sh X -configure \ cp /root/ /etc/X11/xorg.conf

Re:xorg (1)

FudRucker (866063) | more than 5 years ago | (#28663839)

#!/bin/sh
X -configure
cp /root/ /etc/X11/xorg.conf

fixed it

Templates (2, Interesting)

Bogtha (906264) | more than 5 years ago | (#28663423)

I've had good results with some home-grown scripts that grab the project-specific details from a database and then generate the relevant config files using a templating system like Genshi. Run it periodically against the database, check in the changes, and email diffs to the admin.

Re:Templates (4, Interesting)

johnlcallaway (165670) | more than 5 years ago | (#28664229)

We did something even simpler on our Sun servers. We used a master server with directories that held the different app and web servers we had. Everything that needed a configuration file with server-specific items, like Apache, had a server-specific script to generate environment variables. The configuration script was created using the template:

cat <<EOD >realConfigFile
## put config file here replacing any server-specific items
## with $envVariable from the script
EOD

We could redeploy a server in 10 minutes from an empty hard drive. Creating a new one took about 10 more minutes to create the file.

This also gave us the ability to take scripts from dev to qc to production without having to change anything. Part of the script set things like home directories and such. We could even have multiple environments on one machine.
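
A runnable sketch of the pattern the parent describes: a per-server script sets the environment, and an unquoted heredoc template expands it into the real config. Variable names, values, and the file name are all hypothetical:

```shell
#!/bin/sh
# Per-server environment; in the scheme above this would come from a
# server-specific script on the master server.
SERVER_NAME=web1.example.com
DOC_ROOT=/srv/www/web1

# The template: an unquoted heredoc, so $VARS expand at generation time.
cat <<EOD > httpd-generated.conf
ServerName   $SERVER_NAME
DocumentRoot $DOC_ROOT
EOD
```

Because only the environment script differs per machine, the same template moves unchanged from dev to qc to production.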

Re:Templates (1)

vrmlguy (120854) | more than 5 years ago | (#28668551)

I've had good results with some home-grown scripts that grab the project-specific details from a database and then generate the relevant config files using a templating system like Genshi. Run it periodically against the database, check in the changes, and email diffs to the admin.

I've always used cpp as my template engine, but then again, I've been doing this since the '80s.

standard VM image? (0)

Anonymous Coward | more than 5 years ago | (#28663427)

Have a standard virtual machine image; copy it and voila.

Re:standard VM image? (1)

BostjanSkufca (1596207) | more than 5 years ago | (#28664025)

And if the servers are of a more heterogeneous nature and/or distributed across multiple datacenters?

Re:standard VM image? (4, Informative)

Decker-Mage (782424) | more than 5 years ago | (#28664959)

Actually, this is one of the goals VMware is proposing to meet with their vSphere, vCenter, ad nauseam initiatives. [Full disclosure: I've beta'ed VMware software since v1.] This also presupposes full P2V and V2P cross-machine conversions if required. The goal here is: be anywhere, and run anywhere.

Now if I had the money, I'd toss full de-dup into the storage array mix as well, so much of the image file size essentially disappears unless there is simply no duplication anywhere. And if you are in that situation, take my advice. Quit, or just shoot yourself and get it over with.

It's been a long time since I played at that level (six mainframes, eighteen minis, 575 desktops, and I never got an accurate count of the 100+ laptops), but at some point you have to ask yourself: when does the customization end? Standardization was the only thing that kept myself and my team of four !relatively! sane.

If you seriously need customization of that level, then you aren't doing things right. Reduce each VM to a single app (Apache, MySQL, IIS, network appliance, whatever) and use virtual switches to create a topology as required. Think of each VM as a particular Lego block, or IC: Systems Componentization as it were. And this is where de-dupe will also shine.

Which explains why a certain storage company bought VMWare, and a certain switching company has created a virtual switch. Now if you don't have the big bucks, you have a slight problem. However you can create this kind of topology if each box has more than one physical network adapter AND you get creative. Now that job I also wouldn't mind trying here. Time to resuscitate some old boxes and see what I can come up with. Been a while since I setup an enterprise class simulation :-).

It's high time that we all realize that the lines between the various (computer) engineering disciplines are now blurred. Sure, be a subject matter expert, but know how the other people think and work.

Anyone know of a F/OSS de-dupe?!

Re:standard VM image? (0)

Anonymous Coward | more than 5 years ago | (#28666129)

Anyone know of a F/OSS de-dupe?!

Yes. It's called ZFS. And the OS that has the most up-to-date version of it is called Solaris.

Re:standard VM image? (1)

Decker-Mage (782424) | more than 5 years ago | (#28666287)

Well, duh!!! I haven't thought of Solaris in years, although I recall it's a BSD derivative of some sort. That'll work, since I still have the Daemon book and experience running it on my Amiga back in the '80s.


Re:standard VM image? (1)

Rob Riggs (6418) | more than 5 years ago | (#28669303)

SunOS was BSD. Solaris is full-on SysV. And as others have already noted, de-dupe is on the wish list, not implemented. People don't read through Sun's marketing literature very well in these parts. Just yell "ZFS! ZFS! OMG!! ZFS!" and you'll get along fine here.

Don't get me wrong, ZFS is a nice, modern file system. But the hype around it is just bizarro. I don't think most folks really get what it can do today versus what Sun *says* it will do at some undefined point in the future. It is certainly better than anything previously available as part of the core Solaris OS. People shelled out megabucks to Veritas to deal with the lack of LVM and a decent file system in older versions of Solaris.

Re:standard VM image? (1)

paulius_g (808556) | more than 5 years ago | (#28666953)

ZFS has de-dupe, and it's free and open source. There are some companies making storage appliances (some even open source and free) using ZFS with all its amazing capabilities. Then you can connect via iSCSI for virtualization, or FTP, SMB, etc., for the rest.

Re:standard VM image? (0)

Anonymous Coward | more than 5 years ago | (#28668949)

ZFS does *not* have de-dupe yet.

FAI - Fully Automatic Installation (1)

Clark Rawlins (22060) | more than 5 years ago | (#28663487)

I have successfully used FAI to install Debian servers in the past. For what I needed, it worked great. It is supposed to support other distributions and automatic updates as well, but I haven't tried it for either of those uses.

LDAP (2, Interesting)

FranTaylor (164577) | more than 5 years ago | (#28663535)

Keep all your config information in LDAP.

Configure your servers to get their information from LDAP wherever possible. Then the config files are all fixed; they basically just point to your LDAP server.

If you have server apps that cannot get their configuration from LDAP, write a Perl script that generates the config file by looking up the information in LDAP.

If you are tricky, you can replace the config file with a socket. Use a Perl script to generate the contents of the config file on the fly as the app asks for it, and make sure the app does not call seek() on the config file.
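
A hedged sketch of the generate-from-LDAP idea. The directory lookup is stubbed with a here-doc; the real call would be something like `ldapsearch -x -LLL '(cn=web1)' ipHostNumber`. Attribute, host, and file names are illustrative:

```shell
#!/bin/sh
# Stand-in for the real ldapsearch query described in the parent comment.
lookup() {
  cat <<'EOF'
dn: cn=web1,ou=hosts,dc=example,dc=com
ipHostNumber: 192.0.2.10
EOF
}

# Pull the attribute out of the LDIF-style output and render the config.
ip=$(lookup | awk '/^ipHostNumber:/ {print $2}')
printf 'listen_address = %s\n' "$ip" > app-generated.conf
```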

Re:LDAP (1)

BostjanSkufca (1596207) | more than 5 years ago | (#28664199)

I find LDAP more useful for storing data about the "end-users" of our systems, like usernames, email accounts, quota data and such, and not that useful for storing the actual server configurations. But there could be something to it...

Re:LDAP (1)

ckaminski (82854) | more than 5 years ago | (#28665615)

Have you done this, or are you just talking out of your ass? j/k :) Make sure your app doesn't seek()? How would this work with Apache??

Re:LDAP (1)

mindstrm (20013) | more than 5 years ago | (#28665711)

I'd like to know that too... While plausible, this sounds like something that's more overhead than it's worth. It adds several layers of abstraction and complexity for what gain?

Pick and Choose the best (0)

Anonymous Coward | more than 5 years ago | (#28663591)

Just go with whatever works best for your environment.

OpenNMS, for example, uses discovery tools to automatically find new hosts, which works well unless you have a couple of hosts with specific one-off monitoring requirements. That makes it a heck of a lot easier to use than Nagios, which is a pain to install and manage.

Re:Pick and Choose the best (1)

BostjanSkufca (1596207) | more than 5 years ago | (#28664249)

Do you know if one can add a new host for monitoring to OpenNMS via some sort of API?

Re:Pick and Choose the best (1)

Ranger Rick (197) | more than 5 years ago | (#28668547)

The unstable version (what will become stable 1.8) does have a RESTful API for adding nodes. Additionally, 1.6.x and higher have an API for specifying your nodes manually, which can be called from external tools. This feature has been enhanced in what will be 1.8 to still scan interfaces on the nodes you specified, and so on.

M4 baby, M4 (4, Interesting)

cerberusss (660701) | more than 5 years ago | (#28663607)

Everyone seems to have forgotten about M4, an extremely handy standard Unix tool for when you need a text file with some parts changed on a regular basis. I'm a developer and I've used M4 in my projects.

In a build process, for example, you often have text files which are the input for some specialized tool. These could be XML text files for your object-relational mapping tool. These probably won't support any kind of variable input, and this is where M4 comes in handy.

Create a file with the extension ".m4" containing macros like these (mind the quotes, M4 is kind of picky about that):

    define(`PREFIX', `jackv')

Then let M4 replace all instances of PREFIX:

    $ m4 mymacros.m4 orm-tool.xml

By default, m4 prints to the screen (standard output). Use the shell to redirect to a new file:

    $ m4 mymacros.m4 orm-tool.xml > personalized-orm-tool.xml

Sometimes, it's nice to define a macro based on an environment variable. That's possible too. The following command would suit your needs:

    [jackv@testbox1]$ m4 -DPREFIX="$USERNAME" mymacros.m4 orm-tool.xml
The shell will expand the variable $USERNAME, and the -D option tells M4 that the macro PREFIX is defined as that value.

Re:M4 baby, M4 (1, Interesting)

Anonymous Coward | more than 5 years ago | (#28666349)

These could be text files in XML for your object-relational mapping tool.

That, mate, represents much of what is broken in the current state of this industry.

The fact that so many developers waste most of their time dealing with the object/relational impedance mismatch is one of the biggest mysteries of our IT era.

I *think* it's because said developers need the guarantees made by top-notch SQL DBs.

But why live with plumbing between OO and RDB? Either use an OO DB, or don't use an OO language.

I picked one of those two solutions, and I'm laughing all the way to the bank. I also laugh very hard when I read sentences like:

These could be text files in XML for your object-relational mapping tool.

XML and the OO/RDB impedance mismatch in one supposedly serious sentence. Yeah, right.

Happy plumbing.

Re:M4 baby, M4 (1)

Bazer (760541) | more than 5 years ago | (#28666519)

You'd get a cookie if I had my mod points. I would be twice as productive if I knew all the tool sets that come with a standard Unix installation. Problem is, most of those tools are older than me, and getting to know them takes a lot of time.

Re:M4 baby, M4 (1)

cerberusss (660701) | more than 5 years ago | (#28666579)

Problem is, most of those tools are older than me and getting to know them takes a lot of time.

Very true. I try to get to know them at the bare minimum level and then be done with it. Also, when digging up treasures like M4, it's not a given that your colleagues will appreciate it. In the case of M4, some saw it as violating graves instead :-)

Re:M4 baby, M4 (1)

illumin8 (148082) | more than 5 years ago | (#28668889)

Everyone seems to have forgotten about M4, an extremely handy standard Unix tool for when you need a text file with some parts changed on a regular basis. I'm a developer and I've used M4 in my projects.

Excuse me, but I'd rather gouge my eyeballs out of their sockets with a rusty spoon than try to read someone else's M4 macros. M4 fails at being readable, unlike other config-generating tools like Cfengine, whose code tells even a non-programmer exactly what it does. Have you ever tried to read them? If you have, you'll know what I'm talking about.

Re:M4 baby, M4 (1)

arth1 (260657) | more than 5 years ago | (#28669659)

And this is easier than creating a batch script HOW, exactly?

I had a discussion with a sysadmin-wannabe who wanted to use abstractions on absolutely everything. His idea was to use substitutions like you subscribe, thinking it was easier that way. I told him I could do the same with a single sed line. He then said "A-ha, but what if you need a second replacements -- all *I* have to do is add two lines to my m4 source file and regenerate it!!!" (yes, he would speak with multiple exclamation points). Whereupon I pointed out that all I had to do was add /one/ more line to the sed... And that in all likelihood, when a new and incompatible version of the config file comes out with the next version of the software, the .m4 will have to be rewritten, while the simple sed script likely will keep on working.

There /is no/ substitute for understanding. Any attempt at introducing automation without understanding will invariably introduce more points of failure, and make it harder to upgrade, migrate, or troubleshoot. And if you understand, why, then you don't /need/ abstractions. They get in the way of quicker and less fragile methods.

Old school sysadmin: Spends 7 hours on understanding something, then 5 minutes on writing a script, and 25 minutes rewriting it to be self-documenting and take into account any possible contingencies or race conditions. Management thinks he's slacking, because he is only doing productive work for an hour a day.

New school sysadmin: Spends 5 minutes not understanding something, 5 minutes on Google, then two full days obtaining and installing OTS software to do the magic for him, then applies for a training course to use that software. Management thinks he's the bee's knees, because not only does he do productive work much more of the time, but he also proactively seeks out training! And the software ends up running with horrible default configurations, because he never got that training BEFORE he had to use the software for the first time.

Sounds like an Ubuntu user (-1)

Anonymous Coward | more than 5 years ago | (#28663653)

On the mature Linux distributions (eg Redhat, Suse and Mandriva), there are numerous wizards, usually written in Perl, that will configure everything you can possibly dream of at the click of a mouse. You can also use Redhat Kickstart (on any of the above distros) to automatically install and configure a system.

If you need to deploy lots of new machines, then Ubuntu is the wrong solution...

Re:Sounds like an Ubuntu user (1)

BostjanSkufca (1596207) | more than 5 years ago | (#28664091)

Nope, a Slackware user, and on the servers I manage, all software that interacts with the external world (clients) is compiled from source, as are all the required libraries. But hey, I might be getting lazy just by not posting this from some Slackware shell telnet client, but from - you guessed it - Ubuntu :)

Re:Sounds like an Ubuntu user (1)

mysidia (191772) | more than 5 years ago | (#28665017)

So you're looking for enterprise capabilities like automated deployment and configuration management, and yet you chose a setup that doesn't have any vendor providing them, and requires you to build them yourself, why?

Of course you can cobble something together by writing custom scripts, and setting up puppet, bcfg2, or cfengine.

Which also involves some custom scripting. No matter how you slice it, there's going to be some initial manual programming work to get it working.

There's really no end-to-end pre-made CM solution you will find for Linux, for free, that's not tied to an Enterprise Linux offering of some sort, and doesn't require you to do manual scripting at least, and some initial manual config writing on your own.

Re:Sounds like an Ubuntu user (1) (1195047) | more than 5 years ago | (#28665879)

Right, because Debian isn't a mature operating system, and Ubuntu couldn't possibly be based on Debian...

That aside, good luck with your pretty point-and-click crud on servers that don't have X installed (about 99% of deployed Linux servers, probably).

too variable to automate (1)

bzipitidoo (647217) | more than 5 years ago | (#28663669)

In the small shops where I have worked, I find the uses and specific hardware a little too variable to easily automate configurations. One machine is a database server, another is part of a file server cluster, another is a web server, and yet another is a firewall and spam filter. One will have a single large hard drive, another will use software RAID, the others will have hardware RAID. Some have multiple network connections. A large organization that sets up many identical servers every day might find automatic configuration useful. But in that case, why not just use imaging? Much faster than installing an OS over and over.

If that isn't enough, things change so quickly. New versions of OSes come out a few times a year. Specific hardware might be available only in a 6 month window. Expect any automatic configuration to take lots of maintenance or quickly rot.

Re:too variable to automate (1)

mysidia (191772) | more than 5 years ago | (#28665045)

Software RAID is the devil. Don't use it except for testing; it's definitely not suitable for live use, and is not all that reliable (especially in RAID5 configurations). Oh yeah, and "fakeraid"/"hostraid" RAID "controllers" count as software RAID, not hardware RAID; those are even worse.

Use virtual machines (Xen or KVM) for application load scale-outs, instead of lots of physical servers. I suggest setting up a base 'virtual machine' image preconfigured with everything except hostnames and IP addresses, with virtual NICs configured to load in a quarantine/test network.

When you are setting up a new server, you clone the gold master, and then adjust its configuration as needed.

E.g. for a large file server, you add a second virtual drive.

For a large DB server, you manually add the second drive, and reconfigure the DB so its tablespaces live on drive 2.

Making the small tweaks to the master to suit the needs of the app is simple; system boot drives should have a standard layout and be separate from the application data files, anyways.

Re:too variable to automate (1)

Sadsfae (242195) | more than 5 years ago | (#28665781)

Software RAID is the devil, don't use that, except for testing, it's definitely not suitable for live use

Linux mdadm and FreeBSD's gmirror are both very stable, mature implementations of software RAID - both a viable solution in a production environment.

Especially so if you have servers without dedicated hardware controllers (with their own ASICs).

Re:too variable to automate (1)

mysidia (191772) | more than 5 years ago | (#28668759)

You can forego having a real UPS on your live servers too, but that doesn't mean it's a good idea.

mdadm/gmirror may be stable, but both still suffer from the basic problems of software-based RAID: there are serious failure modes with software RAID implementations, and disk IO and overall system performance are poor. These characteristics make it unsuitable for live servers, no matter how mature the code gets.

And there are also hard drive failure modes that a hardware RAID controller will detect but software RAID tools such as mdadm will not, for instance abnormal write latencies, and failures that are normally caught by the RAID controller's periodic scrubbing (or 'surface scan') and metadata checks.

There is also the RAID5 write hole: a system crash (or power loss) between the data and parity updates results in loss of redundancy and eventual data corruption.

Your mdadm or gmirror lives on top of a general purpose HDD driver, instead of a controller that presents just one volume. So, there is no means of mounting the array until the OS RAID drivers are loaded, hence, a method of loading boot code before the RAID driver has initialized is required: if your boot drive fails in a manner that allows access to bootsector but blocks access to the kernel image on Drive0, the system will not boot.

Drive hot swap is complicated by the additional requirement of running mdadm commands, which massively increases the possibility of human error, and that is unacceptable for a production environment.

Array health monitoring does not display red lights on failed drives, as it does with an integrated RAID controller. In an enterprise environment, hot swap must be extremely simple, so a datacenter tech can be assigned to do it; there must be visible indications of drive failure; and replacement must not require OS commands.

Integrated RAID devices typically integrate with system monitoring software and can send proper alerts to admins via SNMP and e-mail, in a manner that integrates with common production-grade monitoring solutions. On a system running mdadm, there is no method of doing so short of cobbling together an ad-hoc script, which would be error-prone.

Re:too variable to automate (1)

mindstrm (20013) | more than 5 years ago | (#28665699)

"We don't need configuration management because our configuration is an unmanaged mess and managing it would just be more overhead we don't have time for"... ?

Puppet, for one, is very generic. Even if you only use it to push out basic packages and standard configs, even if you don't use any of the templating and fancy hooks and stuff - you are saving yourself work down the road, whether it's moving to virtualization, switching from Linux to BSD, requiring test/QA/production systems, or maybe even a backup solution. It's got very little to do with rolling out systems every day, and everything to do with consistency and policy enforcement.

Yes, it will require maintenance as your requirements change - but without it, so does the ragtag set of systems you are running.... and unless you are really picky with your documentation and procedures, most of the important details are probably in your head. If you force yourself to define them in puppet (or something similar) then you can focus your efforts better.


Novell ZENwork Linux Management (0)

Anonymous Coward | more than 5 years ago | (#28663775)

Novell's ZENworks Linux Management (ZLM) is great for deployment, patching, and configuration management. It works with SUSE Linux Enterprise and Redhat Linux Enterprise. Combine this with Autoyast and a network install point, and it should do everything you need and more.
I use it to manage a large deployment of SUSE Linux Enterprise, with a small number of Redhat systems thrown in. It has a steep learning curve and is poorly documented, but once you have it up and running, it will make your life much easier.

Gentoo Ebuilds, CVS (3, Interesting)

lannocc (568669) | more than 5 years ago | (#28663831)

I run Gentoo on all my systems, and since the .ebuild file format was easy for me to understand (BASH scripts) I started creating Ebuilds for everything I deploy. These ebuilds are separated into services and machines, so emerging a machine will pull in the services (and configs) that machine uses.

Here's an example:
- lannocc-services/dhcp
- lannocc-services/dns
- lannocc-servers/foobar

On machine "foobar" I will `emerge lannocc-servers/foobar`. This pulls in my dhcp and dns profiles.

I use CVS to track changes I make to my portage overlay (the ebuilds and config files). I keep config files in a files/ subdirectory beneath the ebuild that then follows the root filesystem to place the file in the right spot. So lannocc-services/dhcp will have a files/etc/dhcp/dhcpd.conf file. I've been doing this for the last few years now and it's worked out great. I get to see the progression of changes I make to my configs, and since everything is deployed as a versioned ebuild I can roll it back if necessary.
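The "files/ subdirectory mirrors the root filesystem" mechanism can be sketched in plain shell (outside any real ebuild - in an actual ebuild this logic would live in src_install() using insinto/doins; all paths here are illustrative):

```shell
#!/bin/sh
# Sketch: install every file under files/ to the same path relative
# to a destination root, preserving the directory layout.
SRC=files
DEST=/tmp/demo-root

# Fake a tiny overlay: files/etc/dhcp/dhcpd.conf, as in the example above.
mkdir -p "$SRC/etc/dhcp"
echo "subnet 192.0.2.0 netmask 255.255.255.0 {}" > "$SRC/etc/dhcp/dhcpd.conf"

# Walk the files/ tree and replicate each file under $DEST.
( cd "$SRC" && find . -type f ) | while read -r f; do
    mkdir -p "$DEST/$(dirname "$f")"
    cp "$SRC/$f" "$DEST/$f"
done
```

Because the whole files/ tree lives next to the ebuild and both are in CVS, every config change is a versioned commit, which is what makes the rollback described above possible.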

Re:Gentoo Ebuilds, CVS (1)

BostjanSkufca (1596207) | more than 5 years ago | (#28664143)

Do you log into the machine to emerge? Look at puppet for that...

Re:Gentoo Ebuilds, CVS (1)

lannocc (568669) | more than 5 years ago | (#28664301)

Thanks. Puppet is worth taking a look at, but one problem for me is it does not recognize USE-flag changes.

Solution (1)

Bluebottel (979854) | more than 5 years ago | (#28663889)

I found it! It's already on slashdot! Here's the link [] . Oh, wait...

Puppet cr@p... (-1, Flamebait)

Anonymous Coward | more than 5 years ago | (#28663973)

...submitter is a shill for Puppet. Real admins achieve network system convergence using Cfengine.

Anyone using Puppet has been duped by its primary developer... someone who befriended Mark Burgess, author of Cfengine, and then betrayed him and stole his code and ideas.

And still managed to fail at it miserably.

Re:Puppet cr@p... (1)

BostjanSkufca (1596207) | more than 5 years ago | (#28664355)

How can you steal free software?

Anyway, what are the pros of Cfengine compared to Puppet, in your opinion?

Re:Puppet cr@p... (0)

Anonymous Coward | more than 5 years ago | (#28665279)

he stole the source code, rebranded it, and didn't give credit. he stole his ideas and then still failed to see the "big picture". the pros of Cfengine are convergence - something Puppet is not able to do or, for that matter, something very few people in general - even here - seem to get. anytime someone hypes Puppet here it's because they're shills. i'm 'flamebait'? people who steal open source code, rebrand it and then try to profit off of it - wait - aren't they the mortal enemies of /.? must be bizarro day here.

Re:Puppet cr@p... (1)

BostjanSkufca (1596207) | more than 5 years ago | (#28668995)

Thanks for the info.

Re:Puppet cr@p... (1)

Sadsfae (242195) | more than 5 years ago | (#28665727)

cfengine is great for what it does. It really just depends on your use case. The only downside is that I am not certain cfengine is still actively maintained.

If you want to customize cfengine you are going to use perl, if you are going to customize puppet you are going to use ruby.

Both are fine; you need to figure out your infrastructure and scalability needs. I have found puppet scales a bit better for large, complex stacks, but cfengine is easier for more static, less-changing environments.

Re:Puppet cr@p... (1)

runslothrun (524157) | more than 5 years ago | (#28666695)

not maintained? they just released a total rewrite as v3, and a commercially supported version as well. cfengine is designed for large, complex environments. Mark Burgess talks cfengine to google: []

RedHat Satellite Server (3, Interesting)

giminy (94188) | more than 5 years ago | (#28663991)

RedHat's satellite server has some pretty nice options for this, if you dig deeply enough.

RHSS lets you create configuration files to deploy to all of your machines. It lets you use macros in deployed configuration files, and you can use server-specific variables (they call them Keys iirc) inside of the configuration files to be deployed on remote servers. For example, you create a generic firewall configuration with a macro block that queries the variable SMBALLOWED. If the value is set, it includes an accept rule for the smb ports. Otherwise, those lines aren't included in the deployed config. Every server that you deploy that you expect to run an SMB server on, you set the local server variable SMBALLOWED=1. Satellite server can also be set up to push config files via XMPP (every server on your network stays connected to the satellite via xmpp, the satellite issues commands like 'update blah_config' to the managed server, and the managed server retrieves the latest version of the config file from the satellite server).
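From (possibly rusty) memory, a macro-using fragment in a Satellite config channel looks roughly like the following. The {| ... |} delimiters are Satellite's default, but the exact function names here are assumptions from memory, so check the Satellite documentation before copying:

```
# Fragment of a config-channel file (illustrative only).
# {| ... |} is the Satellite macro delimiter; the function names below
# may not match your Satellite version exactly.
HOSTNAME = {| rhn.system.hostname |}
SMB_ALLOWED = {| rhn.system.custom_info("SMBALLOWED") |}
```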

Satellite is pretty darned fancy, but also was pretty buggy back when I used it. Good luck!


Re:RedHat Satellite Server (1)

Swampcritter (1165207) | more than 5 years ago | (#28664679)

And if you are using CentOS or Fedora, I recommend looking at Spacewalk (an Open-Source version of RHEL's Satellite w/o the expensive license).

Spacewalk is an open source Linux and Solaris systems management solution. It allows you to:

        * Inventory your systems (hardware and software information)
        * Install and update software on your systems
        * Collect and distribute your custom software packages into manageable groups
        * Provision (Kickstart) your systems
        * Manage and deploy configuration files to your systems
        * Monitor your systems
        * Provision virtual guests
        * Start/stop/configure virtual guests

Wiki/Documentation --

Look at SME Server for Inspiration (1)

grcumb (781340) | more than 5 years ago | (#28664699)

If you want inspiration about automated configuration management done right, take a look at SME Server [] . It's got a template-based, event-driven configuration management system [] with a mature, well-documented API that could easily be appropriated for in-house use.

The SME Server distro itself is a general-purpose small office server, so it's likely not appropriate for your shop, but their approach to configuration management is simple, well-designed and extremely well-implemented.

Full disclosure: I worked for the company that developed SME Server for a couple of years, and I continue to deploy and support it widely.

Your configuration management toolkit should.. (1)

bol (152634) | more than 5 years ago | (#28664951)

Puppet can do all of that for you, including adding the host to Nagios - if you manage Nagios's configuration with Puppet, that is.

For my installations I'm currently using Cobbler to deploy a base install, which handles installing the OS and its configuration (IP, hostname, etc.). Cobbler also installs a number of post-install scripts which then run on first boot to install things like vendor-specific drivers/packages (e.g. the HP PSP) and do an initial run of puppet, which automatically registers with the puppetmaster. The node will pull down everything else it needs based on its standard configuration and any assigned classes. Cobbler can also control Puppet, via external files, to allow all of this to be configured via Cobbler on the command line when you add a host. If you control Nagios via Puppet, it can generate all of the nagios configurations for it as well.
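For the Nagios piece specifically, the usual trick is Puppet's exported resources together with its built-in nagios_* types (this requires storeconfigs to be enabled; the snippet below is a minimal sketch, not anyone's actual manifests):

```puppet
# On every managed node: export a Nagios host definition describing this box.
@@nagios_host { $fqdn:
  ensure  => present,
  address => $ipaddress,
  use     => 'generic-host',
}

# On the Nagios server only: collect every exported host definition
# into the local Nagios configuration.
Nagios_host <<| |>>
```

With this in place, provisioning a node through Cobbler and running puppet once is enough to make it show up in Nagios on the next puppet run on the monitoring server.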

As far as I'm concerned, generating configuration files lies solely with the configuration management system, e.g. Puppet or your own tools (stored in version control!). I use Puppet for everything possible, and for things that I am too lazy to put together in Puppet I generate them via custom tools and store the output in svn (apache vhosts, etc.).

It's also important to make things as generic as possible and try to use standard tools wherever possible, eg SNMP for monitoring.

ticketmaster's (0)

Anonymous Coward | more than 5 years ago | (#28664961)

Mod parent up (0)

Anonymous Coward | more than 5 years ago | (#28665611)

Seriously, I have a good friend who works on the filesystem backend for MySpace. (He once gave me some traffic/load numbers; wish I could remember them, they were so crazy.)

While he couldn't tell me specifics or even say what they use or not, he pointed out Spine and Provision from Ticketmaster, and more or less hinted that they are using them.

Trade secret (0)

Anonymous Coward | more than 5 years ago | (#28665049)

When did Slashdot become #techsupport for #india?

Seriously, I've done the R&D to find out what works and what doesn't. Why should I tell you, Mr. Anonymous? Why not hire someone instead of insulting them?

Huh! (1)

liquibyte (1151139) | more than 5 years ago | (#28665345)

Do/you/speak/english and/or any/other/language? AYFKM!!!

config management (1)

Sadsfae (242195) | more than 5 years ago | (#28665693)

We use a robust configuration management/provisioning system consisting of puppet, cobbler and koan.

Puppet is easily scalable for just about any sort of server need; cobbler and koan take care of the heavy lifting for provisioning. It's also fairly easy to write your own puppet types and modules for various tasks.

With one command we are able to provision a server from bare metal (or vm) to a fully working server, complete with SAN/NAS storage, fully operational daemons and authentication.

PECL (0)

Anonymous Coward | more than 5 years ago | (#28665743)

.pl's and a PHP interface that calls them.

Re:PECL (1)

CarpetShark (865376) | more than 5 years ago | (#28667245)

Yeah, .pls and php.

Also, anyone wanting to build a moonbase using an army of robots should start with a single robot arm, some materials, and a compiler. ;)

IBM Tivoli Provisioning Manager ... if you have $$ (0)

Anonymous Coward | more than 5 years ago | (#28665881)

TPM or TPM for OSD ...

If you have money ... Voyence (1)

DougReed (102865) | more than 5 years ago | (#28666937)

At the risk of sounding like some sort of advertisement for EMC: if you are working for a company with money, Voyence is a WAY cool product. It will do just about anything you could possibly want done to network devices. It will even tell you if you screw something up.

Reading it again (1)

mindstrm (20013) | more than 5 years ago | (#28667965)

Reading the original post again - I'm a little unclear what the question is.

If the question is "How can I manage all this stuff" - you can manage it through puppet.

If the question is "Is there something that can automatically do EVERYTHING for me" then the answer is "No" - no matter how much you want to abstract things, at some point you are going to have to plan and put the system together.

You could roll something sweet with OpenQRM to make it all drag and drop - but you'd have to put in the wrench time to model it after the types of things your organisation has/needs, and you'd have to roll quite a bit of infrastructure out underneath it to make it work.

What you are really asking, I think, is are you missing something in the big picture - and I don't think you are - it's just a matter of scale.

UniCluster (1)

CE@UIC (14343) | more than 5 years ago | (#28668495)

There is an open source cluster management stack called UniCluster available at (disclosure: I work for the company that makes UniCluster). It's intended for managing HPC clusters, but it can do everything you're looking for in one tool. It has support for ganglia, nagios and cacti already built in, and adding new third-party components is pretty simple. It has a tool to push config files around and will do bare-metal provisioning (i.e. set up PXE and kickstart for you).


Wrong direction (1)

vlm (69642) | more than 5 years ago | (#28668595)

But each of these tools has to be configured independently or at least configuration has to be generated.

You write that like it's bad or something. Decentralized is always more reliable overall.

The correct way is to work it thru in reverse. Automated tools should find things they can monitor, and then humans think about what to do.

NMAP periodically dumps its results in a DB. Watch your CDP too. Maybe sample your ARP cache on your switches. And keep an eye on your RANCID router configs.

One simple script analyzes the nagios config and emails a complaint to either one individual, a mailing list, or a gateway that autogenerates a ticket. The script sends one alert for each issue it finds, something like "WTF nmap found a device at that is not configured or commented as ignore in Nagios". I haven't met a plain-text config file yet that doesn't allow comments, so if you desire not to monitor something, you have a syntax in the config file like "# ignore" and your script understands that.
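A minimal sketch of that reconciliation, assuming you can dump the discovered addresses and the Nagios-configured addresses to one-per-line files (the file names and sample addresses below are made up; real input would come from parsing `nmap -sn -oG -` output and grepping the Nagios config, skipping "# ignore"-tagged entries):

```shell
#!/bin/sh
# Sketch: complain about hosts seen on the network but missing from Nagios.
cat > discovered.txt <<'EOF'
192.0.2.10
192.0.2.11
192.0.2.12
EOF
cat > nagios_hosts.txt <<'EOF'
192.0.2.10
192.0.2.12
EOF

# comm needs sorted input.
sort -u discovered.txt > seen.txt
sort -u nagios_hosts.txt > monitored.txt

# comm -23: lines only in the first file, i.e. seen but unmonitored.
# Here we just print one complaint per host; a real script would mail
# a person or a ticket gateway instead.
comm -23 seen.txt monitored.txt | while read -r ip; do
    echo "WTF: nmap found a device at $ip that is not in Nagios"
done > complaints.txt
```

The same skeleton works for the reverse check (configured in Nagios but gone from the wire): swap the comm flags to -13.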

Nothing wrong with your script generating alerts that contain sample "cut-n-paste" info to add to your configs.

Repeat for reverse DNS, munin monitoring system, MRTG polling of anything with an open SNMP port, etc.

Also, you need a well backed-up and replicated wiki with a page for every device your network monitoring tool detects.

Finally, don't forget that if something has been "red" in nagios for perhaps a week and/or it's been gone from the ARP table for a week, maybe it's time to formally delete it, also necessitating alert emails.

Conveniently this scheme also "forces" people to explain what they think they are doing, to at least one other sentient being, which can be very educational for all concerned if the end users are doing something crazy.
