

Improving Software Configuration Management?

Cliff posted more than 8 years ago | from the follow-the-growth-of-your-software dept.


Elvis77 writes "I am managing a project evaluating software to help us with software change management. There are numerous good applications around for doing this, and our purchasing is complete, but during my investigation I have been amazed at how many organizations rely on good manners, good intentions and good luck to manage the configuration of their organization's software assets, even in light of the Sarbanes-Oxley Act of 2002 (SOX), which affects US companies (I am in Australia). Organizations outside of the USA, without SOX implications, are usually still concerned about the quality of their software. What do my fellow Slashdot readers consider to be the best practices for configuration management?"


Best practices -- (1)

oneiros27 (46144) | more than 8 years ago | (#14988555)

What do my fellow Slashdot readers consider to be the best practices for configuration management?
There are way too many to list -- I've been in a number of organizations, ranging from ones with a whole 'workflow' process, where all of the managers were informed of any changes to be made at a weekly meeting (and because they didn't understand the implications of all of the changes, and didn't let the tech folks under them know, it was basically useless), to ones where it's completely informal.

I'd say the practices depend on the risk involved with each change, and that depends on the business -- where a single bad change might cost you $5mil, you're going to behave differently than at a place that doesn't even pull in $100k in a year.

My rule of thumb -- good backups. Make sure they're good before a change (i.e., that you can restore from them, not just that they ran), and don't make changes on Friday -- no one wants to spend their weekend cleaning up something that went wrong. (And if management wants me to come in and do a change on the weekend, I want 2 weeks notice... if I have to give them 2 weeks notice to use leave, I want 2 weeks notice before I lose it.)

And management signoff on changes is basically useless -- you get bitched at when something goes wrong, no matter how many notes you make in change management that it's a bad idea and it's going to break things.

Re:Best practices -- (1)

ScrewMaster (602015) | more than 7 years ago | (#14998659)

And management signoff on changes is basically useless -- you get bitched at when something goes wrong, no matter how many notes you make in change management that it's a bad idea and it's going to break things.

Useless insofar as not getting chewed out is concerned, but it can help you keep your job, or even keep you out of jail. Never doubt the power of good record-keeping to help point the fickle-finger-of-blame in the proper direction. On the other hand, if you're in a job where you are being unfairly held responsible for management failures, then maybe it's time to polish that resume.

First, try to buy software that logs this for you. (1)

xxxJonBoyxxx (565205) | more than 8 years ago | (#14988609)

First, try to buy software that logs this for you. That makes it easier to just point the auditors to the logs when they come sniffing around, and keeps your operators and system custodians from wasting time double-entering notes about what they did on some other third-party system.

Tripwire+CFEngine (2, Informative)

Asgard (60200) | more than 8 years ago | (#14988663)

Combine a Tripwire [tripwire.com] -like tool with a computer-immune-system like CFEngine [cfengine.org] , add in a Change Control workflow that isn't too painful and you'll be in good shape. Reconcile the Tripwire reports with the change control paperwork to check that changes are being properly recorded in the workflow.
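The reconciliation step described above can be partly automated. A minimal sketch, assuming you can dump both the Tripwire report and the approved-change tickets to one-path-per-line text files (the file names, formats and the helper name are all made up for illustration):

```shell
#!/bin/sh
# Flag files that Tripwire says changed but that no approved change
# ticket covers. Input format is an assumption: one path per line.
unapproved_changes() {
    # $1 = paths Tripwire reported modified
    # $2 = paths covered by approved change-control tickets
    sort "$1" > /tmp/tw.$$
    sort "$2" > /tmp/cc.$$
    comm -23 /tmp/tw.$$ /tmp/cc.$$   # lines only in the Tripwire report
    rm -f /tmp/tw.$$ /tmp/cc.$$
}
```

Anything this prints was changed outside the workflow and deserves a follow-up.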

Re:Tripwire+CFEngine (0)

Anonymous Coward | more than 8 years ago | (#14990853)

Combine a Tripwire [tripwire.com] -like tool with a computer-immune-system like CFEngine [cfengine.org] , ...

... and you'll have something that looks a lot like Radmind [radmind.org] .

Start small (3, Interesting)

Mark_Uplanguage (444809) | more than 8 years ago | (#14988684)

If you've gotten to the point where you realize how vital source control is to your project(s), then I'll assume that you're already bogged down with work and deadlines. In that case, start small. Just get your data into CVS or Subversion (for teams, put the repository on the LAN/web/SourceForge/etc.).

This first step will lead you to the next -- increased communication in the team and good documentation. For documentation I've been pleased with the ReadySet [tigris.org] project at Tigris.org. Start with the basics and work up from there. Bringing a whole team along for the ride is time-consuming, challenging and, in the end, absolutely necessary (make it part of their objectives if you can).
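For the Subversion route, that first step is only a few commands. A sketch; the repository path, project name and helper function are placeholders, not prescriptions:

```shell
#!/bin/sh
# Minimal "first step": put a project under Subversion.
REPO=/srv/svn/repo
PROJECT=myproject

# The actual one-time setup (commented out here):
#   svnadmin create "$REPO"
#   svn import "$PROJECT" "file://$REPO/$PROJECT/trunk" -m "initial import"
#   svn checkout "file://$REPO/$PROJECT/trunk" "$PROJECT-wc"

# A tiny helper to keep repository URLs consistent across a team:
trunk_url() { echo "file://$REPO/$1/trunk"; }
```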

Continue from there

Two paths (2, Insightful)

gclef (96311) | more than 8 years ago | (#14988714)

There are really two paths you're talking about here, and folks tend to confuse the two:

1) Software *development* change management. Meaning: tracking things like changes to software code.

2) Software *configuration* change management. Meaning: tracking changes to the configuration of the software. I presume you're talking about this one, but it's not completely clear.

Development change management is well served by tools of varying complexity from ClearCase to Subversion/CVS. Subversion/CVS seem to be very common, as they're free, but I've worked in offices using ClearCase before (not that anyone was terribly happy about it, though).

Configuration change management is much harder, as you're talking about managing the configuration of applications across potentially hundreds of machines. The tools for this vary widely, depending on how hard-core you want to be. They vary from CFengine to full-on provisioning systems (openQMS, for example)...none of them tend to be easy to set up or manage, which makes them less common.

Re:Two paths (1)

FatMacDaddy (878246) | more than 8 years ago | (#14989684)

Good post, and I agree that CM covers so much ground as to make this question very open-ended. I also agree about the less-than-gleeful opinion of ClearCase, which we've been forced to use. I started out with simple SCCS in Unix, which gives you all the basics of version control. But CM can entail much more than version control: it's version control, software configuration control, and instance/environment control if you're working on an advanced or large-scale project.

I would say that setting up some sort of version control and problem tracking system are the two key first steps from a development standpoint. But you could write a book (and I'm sure many people have) about what's required for a good, comprehensive CM effort on a big project.

Use RCS for config files (2, Informative)

suso (153703) | more than 8 years ago | (#14988718)

I would recommend using RCS for tracking changes to config files. CVS is based on it, but RCS works better with single files rather than entire directories and sets of source code. Just create an RCS directory in the directory that the config file is in, and do this:

ci -u configfile

when you want to edit:

co -l configfile

check it back in:

ci -u configfile

Easy enough.
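The cycle above is easy to wrap in a one-liner. A hypothetical convenience function (the name and the default log message are made up); it assumes the RCS tools (co/ci) are installed and the RCS/ subdirectory already exists next to the file:

```shell
#!/bin/sh
# Hypothetical wrapper for the check-out / edit / check-in cycle.
rcs_edit() {
    # $1 = config file, $2 = optional log message
    msg="${2:-routine change}"     # default log message
    co -l "$1" || return 1         # check out with a lock
    ${EDITOR:-vi} "$1"             # edit in place
    ci -u -m"$msg" "$1"            # check in, keep a readable working copy
}
```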

Re:Use RCS for config files (2, Insightful)

Asgard (60200) | more than 8 years ago | (#14988848)

Now, go make that same change on 100 servers, without disturbing the other contents of the file.

RCS is great for single-server applications or one-off config files, but it doesn't scale really well to large applications.

Re:Use RCS for config files (1)

jilles (20976) | more than 8 years ago | (#14989045)

If you are going to use version management, do not standardize on obsolete toolsets. RCS is very limited, and CVS only works around some of its restrictions; both tools are fairly limited. Putting your three files in RCS may have been pretty cool in the eighties, but modern times require more modern tools. There might actually be related changes in two files that neither cvs nor rcs is capable of keeping track of.

So use subversion. It's dead easy to set up, and there's plenty of tool support. You can commit changesets (or rather, each commit is a changeset), which is especially nice in the case of configuration files, since configuration changes are rarely limited to just one file. Alternatively, you can tag/branch working configurations. You can refer to specific revisions of the entire system rather than specific revisions of specific files. In cvs, tagging and branching are the only way to keep together the right versions of the right files; in subversion, the revision number tells you everything you need to know.

Re:Use RCS for config files (1)

jgrahn (181062) | more than 8 years ago | (#14990763)

So use subversion.

I note, with satisfaction, that you fail to mention any specific benefits of using Subversion over CVS.

[reordered] There might actually be related changes in two files that neither cvs nor rcs is capable of keeping track of.

And SVN would do that better than CVS, in some magical way? I cannot see how. If you have to back out of a change, you either have a tag or a known good date. In either case, you'd 'cvs diff' against it, review the changes, then revert all of them or cherry-pick. Both tools can do that (and probably RCS too, with some scripting).

Re:Use RCS for config files (1)

jilles (20976) | more than 8 years ago | (#14991447)

Not in some magical way, but by design and without scripting; I refer you to the Subversion book for further details. Really, there's a lot in that book you clearly don't know. Basically, subversion can do anything that cvs does (that was a design goal) and much more (because it couldn't be designed back into CVS; really, it is quite bad). It turns out that the "much more" part is actually quite valuable if you care about version management. Many CVS users don't quite know what they are missing. You seem to be a good example.

Anyway, to counter your example, how about:
svn commit -m "really stupid change on a really big directory affecting 10000 files"
svn delete svn+ssh://foo/trunk/dir -m "oops"
svn copy -r99 svn+ssh://foo/trunk/dir svn+ssh://foo/trunk -m "copy back last good version of dir"

r100 breaks the system, so you delete the change (all the files under dir). Don't worry, no bits are actually lost, because this is a proper versioning system: r100 is there to stay, and you can always go back to it. But the resulting r101 does not have dir at all, which is probably not what you want, so you copy back the last good version from r99. That gives you r102, which is identical to r99 (complete with version history, and without messing with a stupid Attic); no information is lost.

Of course, you might want to examine the bad change between r99 and r100 in more detail before you embark on your little undo mission, so you do a svn diff -r99:100 svn+ssh://foo/trunk, which gives you a complete patch for that change (i.e., all diffs for all files that were changed in that commit). Just figuring out this information using cvs is a lot of work, because cvs has no concept of all the files in a particular commit; it only commits individual files. If you are used to svn log, cvs log really sucks, I can assure you.

If you decide later on that revision 100 wasn't so bad after all but needs more work on a separate branch you do svn copy -r100 svn+ssh://foo/trunk svn+ssh://foo/branches/newbranch -m "interesting new stuff".

This little scenario is quite impossible in cvs. You might be able to do it manually, but good luck if the commit affected 10000 files under dir. BTW, cvs users would probably not dare to do such a commit in the first place (and rightly so; cvs is likely not to handle it gracefully).

Indeed, you'd probably end up cvs diffing your way through the directory, scavenging what little information cvs did bother to put in the commit message -- which of course conveniently does not include what other files were committed at the same time. Have fun on a repository where 20 other people are committing while you are fixing things. The odds are that somebody manages to land a commit before you can fix your mess.

Subversion protects you in two ways: commits are atomic (so you can actually commit 10000 files at once if you want to), and the above scenario is limited mainly by your typing speed, so you could land your bad fix and have it removed in under 30 seconds. Should the worst happen (someone commits another change before you undo), you can apply an inverse diff to the head: do a svn diff -r100:99 > patch, then apply the patch to head and commit the change. Barring conflicts, this procedure is quite foolproof.

One change at a time! (0)

Anonymous Coward | more than 8 years ago | (#14988865)

First step, forget about the tools.

Your organization has a process that it uses. The process may not be very formal, but it exists, and your employees are used to it, whether they like it or not.

Most people go out and find a tool that they think will solve all their problems, and then they try to force the tool (and the tool's internally defined process) on the organization as a single step.

Find the control process you want to be using, and work on getting your people to change their processes to match with your desires. Nope, this ain't easy. But guess what? Anything else will end up with rogues who learn the tool well enough to end-run it. People who break the process you depend on because they figured out how to do so.

Find the process you want in place. Teach it. Reinforce it. Encourage it.

While you are doing that, examine the tools, and find one you like that matches your process. As the process becomes institutionalized, as the people in your organization are not only using it, but are helping to keep each other in line with the process, then you can introduce the tool which MATCHES the process they are already using.

There will still be some resistance to the tool. But a good tool makes following the process it supports easier and simpler. If you have a good process and tool match, selling the tool actually becomes MUCH easier.

If you pick a tool, and then use implementation of that tool to force process changes, you will get to face all the resisters of both changes at the same time AND you will generate extra opposition because of the larger amount of change you are requiring in one step . . ..

Former USAF Capability Maturity Model Software Process Improvement Team member . . ..

NetDirector (1)

Mur! (19589) | more than 8 years ago | (#14988974)

Check out NetDirector from Emu Software [emusoftware.com] .

NetDirector does *NIX service configuration management across multiple servers on multiple Linux distros, Solaris, and BSD. And for SOX, it does change tracking and audit trailing, offers role-based management, change scheduling and rollback - all from a nifty Web-based GUI!

I've seen it in action and it's pretty cool. They have an online demo on their site. Personally, I've always just hand-edited config files (vi is your friend), but for a multiple-server environment, this looks like it would be a winner.

Re:NetDirector (1)

owensamo (964807) | more than 8 years ago | (#15060295)

Not only is NetDirector pretty cool, it's now freely (as in beer) available via a modified Mozilla Public License. Check out the community site [netdirector.org] .

More than code versioning (2, Insightful)

$ASANY (705279) | more than 8 years ago | (#14989070)

There's a whole other layer to CM, which is the issue tracking, requirements tracking, resource management and documentation management piece that probably is more critical than whether you can checkin/checkout changes and be able to back out a change.

We supported a government client where issues were managed with Word document templates and emails. It was a disaster, with things getting lost and weekly meetings being the only times decisions were made. They were spending a boatload of money developing something that looked suspiciously like a hobbled version of bugzilla, so we recommended and got approval to stand up bugzilla for issue tracking. It's been a big success and has been expanded to handle requirements management as well.

The key was to set up "products" that matched operational areas for a product rather than thinking that a "product" was defined as a single deliverable. We set up the standard "product" as deliverable, with subcomponents which somewhat matched the functional areas where developer responsibilities were broken down. Then we established a "product" which was essentially an area where management issues could be handled, and had subcomponents for tester access control, requirements definition, external coordination issues and the like. So when we went into testing and found an issue that we decided to defer to a subsequent release, it was moved to the admin area and the requirements subcomponent. This kept policy and requirements control out of the hair of the developers and allowed parallel workflows for requirements, design, development and testing.

It's not a perfect tool for all of that, but it's pretty close to "good enough", and the price is right.

What would I add to this mix if I were God? I would wiki the work-in-progress user's manual so developers could flag issues that should be addressed in documentation rather than in code. I would probably wiki and/or subversion the test plans, as Word documents utterly suck for test plan management. And I'd spend some time with wiki "special pages" and bugzilla customized components to integrate it all -- linking and content sharing between a wiki and bugzilla would bring the solution set into the 90%+ range of requirements matching.

Coders rarely are the problem in software CM. It's the management, architects, functionals and tester coordination that really has the potential to kill a project. But if you can coordinate all of them well, the flow of functional requirement - use case - design - develop - test - debug - requirement generation & traceability comes together cleanly and raises a development process into a portfolio management/enterprise architecture execution process.

Not tools, but process (2, Insightful)

Grab (126025) | more than 8 years ago | (#14989072)

The general process required is that a change can be tracked from a customer request, through the design, to a code file, and back up the V (or round the spiral) to the testing/verification/validation used to make sure that the design and code do what the customer asked for. And then that you can track all the changes that went into a particular version build, which means recording what files you used to build it.

How do you do that? I don't care, and nor does Sarbanes-Oxley. We used to use Access databases at our place (and still do on some projects). Other projects use a tool called Visual Intercept, which I'd recommend people avoid like a rabid man-eating tiger. Some projects have even used Excel (heavens preserve us!). Or simply use multiple text files (checked into your version control system of choice), one per customer request. Or Bugzilla or any number of other tools will serve your purpose. Tracking versions of files could involve some hugely complex build system, or it could just involve slapping a label across the files and doing "get all".

But if you don't grok the basic premise that every line in your code must be attributable to a customer request, then no tool you can buy will help you. This is a process you need to follow, not a magic-bullet tool you can install and everything will be fine. Repeat after me - There Is No Quick Fix For Quality. :-)

The most important thing you can do to make sure your system works is therefore to review each change before it gets signed off, to make sure changes are complete. That doesn't just mean reviewing that files are changed, but that they're all checked in under version control, AND that the change details (however you record them) are fully filled in. Even if there are automated fields to list all files and their versions, review them anyway -- there are bound to be cases where someone says "well, I've got one test file listed, so that'll keep the tool happy, never mind about the other 20".
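One way to enforce that attributability mechanically is a commit hook that rejects log messages lacking an issue reference. A hedged sketch for Subversion -- the ISSUE-nnn convention is invented, and in a real hooks/pre-commit script you would feed svnlook's output into the check:

```shell
#!/bin/sh
# Reject commits whose log message does not cite an issue-tracker ID,
# so every change stays traceable to a recorded customer request.
log_cites_issue() {
    # $1 = commit log message; accept references like "ISSUE-1234"
    printf '%s\n' "$1" | grep -Eq 'ISSUE-[0-9]+'
}

# In a real Subversion pre-commit hook (REPOS="$1", TXN="$2"):
#   MSG=$(svnlook log -t "$TXN" "$REPOS")
#   log_cites_issue "$MSG" || { echo "cite an ISSUE-nnn ID" >&2; exit 1; }
```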


Re:Not tools, but process (2, Insightful)

PacketKing (106950) | more than 8 years ago | (#14989346)

I will wholeheartedly agree with the parent post on many points.

Coming from someone who was the ConfigMgmt person for my company, I faced a lot of these issues. First and foremost, get a plan, even a simple one, and get it written down. Then modify it as needed. Also, label every time you build. Any decent source control tool will allow you to do this. Be consistent with your labels, and be clear with them. This way, when a need to build version x.y.z arises, you can get back to it. Traceability is key. Make sure your plan is built around it, because ultimately what you're looking for is being able to build, at any time, the same x.y.z that you released.
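The labeling advice lends itself to a tiny helper that keeps labels consistent and parseable. A sketch; the build-x_y_z-date convention and the helper name are assumptions, and the actual tag command depends on your VCS:

```shell
#!/bin/sh
# Generate a consistent label for every build.
make_label() {
    # e.g. make_label 1 4 2  ->  build-1_4_2-<YYYYMMDD>
    printf 'build-%s_%s_%s-%s\n' "$1" "$2" "$3" "$(date +%Y%m%d)"
}

# Apply it with whatever your source control tool uses, e.g.:
#   cvs tag "$(make_label 1 4 2)"
#   svn copy trunk "tags/$(make_label 1 4 2)" -m "build label"
```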

I also must second the phrase "There Is No Quick Fix For Quality". Do it right the first time (even if it takes a lot of time), because then you won't need to go back and fix it later. This goes for your end product, but also for your process, as ripping up processes to replace them is tough. This is not to say that you can't use prototype processes, but when you decide on one, stick with it.

Other tips & tricks:

- the wiki idea posted above is wonderful. I've worked with them for quite a few years now, and in a dev environment they can be awesome. One caveat: make sure you get a wiki that does revisioning of the pages; one example would be TWiki (http://twiki.org/ [twiki.org]). This can be a godsend, just like figuring out code changes with CVS.

- I've managed 6 different revision control systems in my career, even being certified as a ClearCase admin. I'd have to say, Subversion is a pretty competent version control tool, and not terribly hard to learn. It's worth the time for the features it offers. As for the others? I'd say stick with subversion or CVS because:
  1. the userbase is larger, therefore easier to find help
  2. they're more than adequate for most development houses
  3. they don't cost anything (compared to $3k per seat for ClearCase)
  4. usually, you can get your data out of them in case disaster strikes (and don't think it won't, because it's happened to me more times than I care to think of).

drop the wooly language (3, Insightful)

The Pim (140414) | more than 8 years ago | (#14989245)

Any effort such as yours should begin with clear thinking, which is aided by purging yourself of such vague, overloaded, and literally nonsensical terms as "configuration management". In case you think this is a troll, note that the posters so far have given your message a range of completely unrelated senses. Describe precisely what you're trying to accomplish if you want useful answers.

Rule Number 1: Process First, Tools Second (1)

SlySpy007 (562294) | more than 8 years ago | (#14989289)

Having seen a number of Bad Things happen as a result of incomplete CM processes, I can say firsthand that process is the most important part of any CM strategy (BTW, this rule also holds for other practices, such as systems engineering). If I had a nickel for every time I asked someone what their CM strategy was and they responded with something like "We're using tool XYZ...", I'd be living in the Bahamas by now.

It's downright foolish to believe that a tool will solve your problems. But you sound like a smart guy, so I'm sure you've figured that out. However when in doubt, it's always a safe bet to peruse the SEI site (here's the section on SCM [cmu.edu] ). As far as tools go, I'm always partial to the free ones, but here's my take:

  • CVS: Free and easy, but very basic. Difficult to manage in a large environment with lots of parallel development threads.
  • Subversion: Improves on some of CVS's shortcomings.
  • Arch: Haven't gotten a chance to try it out, but I hear good things. It's distributed, so there's no central repository.
  • ClearCase: IMHO, CC is not necessary unless you've got a project with a very large team and an extremely large code base (I'm thinking 1 million SLOC and up...)
  • Harvest: Avoid this tool like the plague.


Re:Rule Number 1: Process First, Tools Second (1)

eclectechie (411647) | more than 8 years ago | (#14992613)

ClearCase: IMHO, CC is not necessary unless you've got a project with a very large team and an extremely large code base (I'm thinking 1 million SLOC and up...)

Last year a team of some 35 of us actively worked on a project based on Mozilla. The Mozilla codebase is spread through more than 3500 directories, and $DEITY only knows how many (large) files. If each directory contained an average of 1000 SLOC, that's 3.5 million SLOC. Subversion handled this with no problems whatsoever.

Subversion gets 2 thumbs up from me (and thanks, Rick, for being the subversion expert!).

Re:Rule Number 1: Process First, Tools Second (1)

MeerCat (5914) | more than 8 years ago | (#14993411)

ClearCase: IMHO, CC is not necessary unless you've got a project with a very large team and an extremely large code base (I'm thinking 1 million SLOC and up...)

I'm working on a project of around 2.5 million lines of code with about 100 developers, and unfortunately Clearcase (which is what we're stuck with) is nowhere near up to the job - the merge tools die on merging large files, the reporting tools fall apart when the revision counts for a file get into the thousands, and the whole thing is so slow to do almost anything that we waste hours waiting for it and end up working around it - global use is, in particular, beyond a joke.

Clearcase may have been good 15 years or so ago, but nowadays it's just overpriced rubbish, and it seems to have been abandoned by the current owners... I'd ditch it for perforce or something else a little newer and faster at the drop of a hat...

You may have already failed the first test (2, Insightful)

rossifer (581396) | more than 8 years ago | (#14989670)

It doesn't sound as though you trust the professionalism of your staff. Not that you shouldn't provide the tools they need to be effective, but I notice that you're equating SOX and top-level control with software quality, which leads me to the conclusion that you don't trust your staff. In my experience, top-level control has almost nothing to do with software quality, while trust is highly correlated with high-quality products.

Step 1: Get the best staff you can. "10x" developers do exist, but you should be aiming for a staff of "3x" to "5x" developers of varying experience who work well together. The best developers won't really want to manage other people, but will want to be trusted with significant responsibility (i.e. they don't want to be fed detailed "specs"). Being good team members is at least as important as having top-notch skills.

Step 2: Get out of their way. At this point, you need to make it easier for them to get their jobs done. Most software development processes are about making it harder to do the wrong thing, which inevitably makes it harder to do the right thing. For many managers, software processes are like violence: "If some isn't good enough, more must work better." Don't fall into this trap.

Various things that will help (and not hinder) your developers:
  • Provide direct access to stakeholders with requirements, including customers if possible, domain experts, etc.
  • Fire poor performers quickly. Sooner if possible. Let the team decide who these people are (you'll hear the complaints quickly if you are receptive).
  • Get the team to agree on formatting and design conventions. This will save you more time and frustration than you know. People who insist on their own conventions are unprofessional and should have failed the criteria for being hired (they're not good team members).
  • Identify the team tie-breaker. You may use the title "team lead", "project manager", "architect", whatever. This is the person who makes the call when consensus doesn't happen quickly.
  • Get an effective issue tracking system. Bugzilla is a free and minimal option. Scarab may be ready for prime time (but wasn't a year ago when I last checked). Trac is simple but clean (and once they add the issue process enhancements... woo hoo). Rally is commercial and hosted, but very effective for agile teams.
  • Install subversion (use the file-based installation, the db installation option is still flaky). Understand and utilize subversion's features (branches and variations on branches are #1 on this list).
  • Protect the tip with a minimum of a nightly build. There are build checkers (calavista is good, there are some open source alternatives that can frequently check the status of the tip... I wish I could remember some names).
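The nightly-build bullet can be as simple as a cron job that updates, builds, and mails a verdict. A sketch under stated assumptions: the make target, mail alias and script path are all placeholders:

```shell
#!/bin/sh
# Nightly tip-protection sketch, meant to run from cron, e.g.:
#   0 2 * * * /usr/local/bin/nightly-build.sh
verdict() {
    # $1 = build exit status -> one-line summary for the morning mail
    if [ "$1" -eq 0 ]; then
        echo "nightly build OK"
    else
        echo "nightly build BROKEN (exit $1)"
    fi
}

# The actual build step (commented out; depends on your project):
#   svn update && make clean all
#   verdict $? | mail -s "nightly build" dev-team@example.com
```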

When you get to this point, you'll have read a lot more about software development (your good developers can recommend some fantastic books) and you'll have much more precise questions.

If you don't read at least "The Mythical Man Month" [amazon.com] by Fred Brooks before managing a software team, you will fail and you won't understand why.


Re:You may have already failed the first test (1)

Elvis77 (633162) | more than 8 years ago | (#14993095)

Thank you for your response.

We do have very good and well trusted developers and fairly good processes and systems for bug tracking and so on...

We are a PeopleSoft site, and our problem is that the complexity of releasing changes has become an issue for quality. We are concerned about developers overwriting other developers' changes because of the poor release tools we're using, rather than because of any lack of professionalism in the staff. We have a dedicated release team, and each week we release our latest updates to the system.

Throw in PeopleSoft patches and it becomes a challenge. It is enough of an issue for us to run a project to rectify the problem, but we haven't really had too many problems with this (yet)...

Excellent website (1)

smitherz (112924) | more than 8 years ago | (#14989769)

http://www.cmcrossroads.com/ [cmcrossroads.com] Everything you would want to ask.

good answers (0)

Anonymous Coward | more than 8 years ago | (#14989808)

This is the most interesting Ask Slashdot thread I've seen in years.

Back away from the micromanager's seat (0)

Anonymous Coward | more than 8 years ago | (#14990120)

..and take a look at the bigger picture: CoBIT and ITIL

Dimension - Serena ? (1)

zijus (754409) | more than 8 years ago | (#14990459)


I worked in a team trying to adapt XP to their work. I cared especially about nightlies automation. About the process, what I kind of remember (a bit fuzzy though) is: yes, do have a theoretical process pretty clear in mind, but building the real solution without an "experimenting game" with the developers is pretty hard. We had regular meetings in order to change the process we were following. My point: I agree with a previous post -- start simple, and expand. The tools: ClearCase plus a bug/service-request tracking system whose name I don't remember. Some issues: the two were not integrated, so it was not easy to track the causes of changes. We were using a file as a flag for mainline merges, and as a change log. It worked OK, though it relied on good will. The bug tracking system was a traditional heavy client; developers hated it, because it provided a lot of things but not the essentials in an easy manner. Another point: we had the nightlies automatically label everything (all sources, test material *and* all the built material). Great! (We were building middleware which had to be compiled on combinations of OS versions, compiler versions and JDK versions... about 15 platforms.) I'd say yes: label well and strive for the "single label" for one release, even for multi-component, multi-team apps. And automate everything.

Now, three weeks ago I started a new job as Configuration Manager, and I am faced with the Dimensions tool from Serena. This tool aims at version control _and_ change management: theoretically it is possible to track things up and down, as well as getting lifecycles for items, delegation, check-out/check-in... I had never heard of this tool before, and this thread does not mention it either, which makes me a bit uncomfortable, seeing so little in terms of forums and community. Hmm. Any opinions on that tool?

In general, one thing I am concerned about is that some people seem to imagine change tracking comes for free (not speaking about tools here). But if you do want to follow procedures, it actually _is_ complex; it takes time and cannot be done as a side task. Let me rephrase that: it _will_ somehow get in the way of the developers. Painful but real. How else could it be? Obviously, we can make it easier, more comfy...

Another point: I would like more opinions on combining a version tracking system with a change tracking system. Has anyone worked on real scenarios here? Any integrated solutions, or integratable ones? Subversion + JIRA, for example? I don't know Subversion; I liked JIRA.


I started with the installer. (1)

umeboshi (196301) | more than 8 years ago | (#14991004)

I feel that making a clean, well-preconfigured install is the first step in configuration management. It is also crucial to your backup plan, as it relieves you from making complete system backups. This is not to say that you shouldn't be tracking your installed files via IDS, but the actual files should already be in your repository. I use debian with apt repositories, but the general idea should be universal. This method lets me make more selective (and smaller) backups.

I started with FAI - http://www.informatik.uni-koeln.de/fai/ [uni-koeln.de]
which is really good. FAI shares its configuration style with cfengine. You can even use fai and cfengine in tandem with some sort of install/update strategy. I would highly recommend taking a good look at both of these systems.

Both fai and cfengine are written in perl. I can't stand perl, and since I have a desperate need for similar tools, I decided to roll my own in python. The project is here -> http://paella.berlios.de/ [berlios.de] . This code is still immature and isn't fit to be used for any activities deemed critical.

Another method I am using is simple tracking of changes in the /etc directory. I made a simple program for this too. http://developer.berlios.de/projects/etcsvn [berlios.de]
This program shines a little more if you have multiple similar hosts, because you can manage some config files with a working copy, patch the corresponding files in the relevant host config directories, commit the changes, and then restore/update the config on those hosts. It's really nothing more than a simple tool to keep your /etc from being a working copy, while keeping track of the ownership and permissions of those files.
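The core of that /etc-tracking workflow can be sketched in a few lines of shell. This is a toy illustration, not etcsvn itself (etcsvn also records ownership and permissions, which plain version control does not); a temp directory stands in for a real /etc, and git stands in for the back end.

```shell
#!/bin/sh
# Keep host config under version control: baseline import, changes as
# commits, restore by copying the tracked file back to the host.
set -e
ETC=$(mktemp -d)     # stand-in for a real /etc
REPO=$(mktemp -d)
CI="-c user.email=cm@example.com -c user.name=cm"

printf 'PermitRootLogin no\n' > "$ETC/sshd_config"

# Baseline: import the current host config.
git init -q "$REPO"
cp -p "$ETC/sshd_config" "$REPO/"
git -C "$REPO" add sshd_config
git -C "$REPO" $CI commit -qm "baseline"

# A change on the host becomes an ordinary commit...
printf 'MaxAuthTries 3\n' >> "$ETC/sshd_config"
cp -p "$ETC/sshd_config" "$REPO/"
git -C "$REPO" $CI commit -qam "tighten auth retries"

# ...and restoring the host is just copying the tracked copy back.
cp -p "$REPO/sshd_config" "$ETC/sshd_config"
```

With several similar hosts you would keep one directory per host in the repository and patch changes across them, as described above.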

I am currently looking at bcfg2 http://www.mcs.anl.gov/cobalt/bcfg2/ [anl.gov] , as a replacement for cfengine. I just found out about it recently, but it's also written in python and has limited client-side dependencies.

Probably the most important thing is to be prepared to spend a great deal of time in planning, implementing and testing your system. Every tool I have seen so far makes assumptions, or has requirements, that don't match yours. Mine do too, and there is really no way to get around this.

As a general rule, look for a system that stores its configuration in the form you can deal with most easily, regardless of the configuration it exports. The mechanics of the configuration processing should be implemented in a language you are comfortable enough with to make the changes needed for future strategies.

Software Config Mgmt != Hardware Config Mgmt (1)

IgLou (732042) | more than 8 years ago | (#14991545)

This is the first thing that needs to get established. I have too often seen people trying to equate these things, and it's bound to fail. I've been there, and I'm in Config; let me tell you, it's not pretty.

Software development is a process in and of itself.
The goal of software development is to produce a stable release that is suitable to be installed on a "live", real-world, production system. In this sense a Software Config Mgmt system keeps track of configuration items, commonly known as source code and documentation. All revisions to source and documentation should be checked into your SCM database. From this database you build and label your source to produce a software release, then send it to testing, and eventually you release your software as Generally Available. It's a hand-off to whoever owns the application (the customer or your sysadmin), who will eventually install your software. If the end user has a problem, they take it up with support (internal or external).
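That hand-off discipline — a GA build comes from a labeled revision in the SCM database, never from someone's working tree — can be sketched as follows. Tag and file names here are made up, and git stands in for whatever SCM database you actually use.

```shell
#!/bin/sh
# Build only from a labeled revision, so the exact sources behind any
# release can always be reproduced later.
set -e
SCM=$(mktemp -d)
git init -q "$SCM"
printf 'echo "app 1.2.0"\n' > "$SCM/build.sh"   # stand-in for a real build script
git -C "$SCM" add build.sh
git -C "$SCM" -c user.email=rel@example.com -c user.name=rel \
    commit -qm "prepare 1.2.0"
git -C "$SCM" tag REL-1.2.0

# Release build: export exactly what the label points at, build from that.
BUILD=$(mktemp -d)
git -C "$SCM" archive REL-1.2.0 | tar -x -C "$BUILD"
sh "$BUILD/build.sh"    # prints: app 1.2.0
```

Anyone who needs to reproduce the release checks out (or archives) the same label; the working trees of individual developers never enter the picture.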

On the hardware or systems side, you want to keep track of the exact setup of your production systems. Configuration items in this sense are your overall systems and software and all of the various settings that are possible. Any change to those systems needs to be tracked and recorded.

These are entirely different processes/controls, yet some schmoe coined the same phrase for both disciplines (and hopefully he rots in hell for it). In the end they are entirely different skillsets, processes and disciplines. Take, for example, parallel development and code merges: there's nothing like that on the systems side.

radmind (1)

More Trouble (211162) | more than 8 years ago | (#14992261)

Radmind [radmind.org] leverages tripwire-like information to provide very large scale configuration management. It supports certificate-based authentication of servers and managed machines. The latest release supports compression in the network layer for cases where CPU is more plentiful than network bandwidth.

From the Quality Assurance guy (0)

Anonymous Coward | more than 8 years ago | (#14993700)

SCM is only one part of your processes; they all have to work reasonably well for any of it to hold. What you write makes me feel little is in place.

Some suggestions:
  - document how things work (or do not work) today, no matter how painful the truth is
  - write down what you want to achieve. Look into ISO9000, CMM and CMMI for ideas. Particularly CMMI is very hands on, based on real life experiences (best practices) of real life companies and academia.
  - make sure your test crew and quality assurance crew have sufficient organizational independence to tell management the current status stinks when it does.
  - at each iteration and at the end of each project document status, what went right, what went wrong, lessons learned, and how to fix it
  - at the start of new projects look over lessons learned and make sure you do learn from your costly mistakes
  - make the tools for your processes and your processes fit your needs, not the other way round
  - make sure the QA crew audits everything and that their integrity remains at the highest standards, seriously. This is a high risk job and I lost mine quite possibly due to my integrity...
  - never ever allow Excel to be used for your processes. So far in my career it has been a surefire sign of a broken process; not even once has it failed to indicate process troubles.
  - record metrics but use common sense
  - make sure all information gained is used. Information pipelines that are open at either end are a sign of trouble.
  - allow for your processes and tools to change but don't change those for a running project.