
Ask Slashdot: System Administrator Vs Change Advisory Board

samzenpus posted about 9 months ago | from the get-along dept.

IT 294

thundergeek (808819) writes "I am the sole sysadmin for nearly 50 servers (win/linux) across several contracts. Now a Change Advisory Board (CAB) is wanting to manage every patch that will be installed on the OS and approve/disapprove for testing on the development network. Once tested and verified, all changes will then need to be approved for production. Windows servers aren't always the best for informing admin exactly what is being 'patched' on the OS, and the frequency of updates will make my efficiency take a nose dive. Now I'll have to track each KB, RHSA, directives and any other 3rd party updates, submit a lengthy report outlining each patch being applied, and then sit back and wait for approval. What should I use/do to track what I will be installing? Is there already a product out there that will make my life a little less stressful on the admin side? Does anyone else have to go toe-to-toe with a CAB? How do you handle your patch approval process?"
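Lacking a dedicated product, even a short script can consolidate pending updates into the kind of per-patch report a CAB asks for. A minimal sketch in Python; the input format, field names, and patch IDs below are hypothetical, just to show the shape of the report:

```python
import csv
import io

# Hypothetical input: one row per pending update per host, as you might
# export from "yum updateinfo list" on Linux or a WSUS report on Windows.
PENDING = [
    {"id": "RHSA-2014:0376", "host": "web01", "severity": "Critical",
     "summary": "openssl heartbeat information disclosure"},
    {"id": "KB2929961",      "host": "ad01",  "severity": "Critical",
     "summary": "Security update for Windows GDI+"},
    {"id": "KB2929961",      "host": "ad02",  "severity": "Critical",
     "summary": "Security update for Windows GDI+"},
]

def cab_report(pending):
    """Group pending updates by patch ID so the CAB sees one line per
    patch, with the list of affected hosts attached."""
    by_id = {}
    for row in pending:
        entry = by_id.setdefault(row["id"], {
            "severity": row["severity"],
            "summary": row["summary"],
            "hosts": [],
        })
        entry["hosts"].append(row["host"])
    return by_id

def to_csv(by_id):
    """Render the grouped report as CSV for the approval paperwork."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["patch", "severity", "hosts", "summary"])
    for pid, e in sorted(by_id.items()):
        writer.writerow([pid, e["severity"], " ".join(e["hosts"]), e["summary"]])
    return buf.getvalue()

report = cab_report(PENDING)
print(to_csv(report))
```

The point is that the report is generated from the same data the patching tool already has, so the paperwork costs minutes, not hours.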


SCCM (0)

Anonymous Coward | about 9 months ago | (#46777413)

Microsoft System Center Configuration Manager?

Re:SCCM (5, Insightful)

gl4ss (559668) | about 9 months ago | (#46777635)

sure, but how does that help with having to run the CAB through 102 patches?

I'd go for the easy solution: introduce the patches in batches for the board ("Monday updates for week 32").

The fucking board will not care after 2 weeks anyway, so just pay lip service for two weeks.

Nonsense (4, Insightful)

ruir (2709173) | about 9 months ago | (#46777417)

They want bureaucracy, they make the paperwork. Tell them to track the Windows and distro security pages; the changes are all there. I would be toast with that kind of red tape: I updated my servers in a pinch immediately after the first news of Heartbleed, at 3 in the morning. 0300, right. How about dusting off your resume and changing jobs? Let them play the report-shuffling game alone.

Re:Nonsense (0)

Anonymous Coward | about 9 months ago | (#46777469)

Agreed, but with a caveat: tell them about Heartbleed, tell them your response time, and then ask them for a test run. When it is complete, inform them of how much time it took to approve the necessary patches, and then submit the results to risk management to review the potential for loss in that time discrepancy. Let the folks who assign a dollar sign to risk have their say.

Re:Nonsense (5, Insightful)

Anonymous Coward | about 9 months ago | (#46777563)

Any decent change control process should have an emergency change category and process.

Re:Nonsense (5, Insightful)

Opportunist (166417) | about 9 months ago | (#46777639)

This. Ask them if they have taken care of things like this. The answer to this alone will tell you whether there is some kind of deep consideration behind it or whether some PHB had a consultant toss the cool buzzword "change advisory board" in his direction.

If it's the latter, run. Run like the wind.

Re:Nonsense (0)

Joce640k (829181) | about 9 months ago | (#46777487)

They want bureaucracy, they make the paperwork. Tell them to track the Windows and distro security pages; the changes are all there.

Yep. They're the "experts". Just tell them the Microsoft KB number, that's all the information they need.

Re:Nonsense (4, Interesting)

N1AK (864906) | about 9 months ago | (#46777533)

Any remotely well-organised IT department will have processes for handling both emergency deployments and retrospective approval. I'm not going to be a cheerleader for the concept of a CAB, but if you're going to make a case against it then at least make a reasonable one, because hiding behind obvious nonsense like this will just make you look stupid and change-averse to your employer.

Re:Nonsense (1)

mikelieman (35628) | about 9 months ago | (#46777655)

Pretty much this. Change management is a process. I wonder if they even have any systems in place to manage it; tracking migrations really benefits from a good system behind it. Maximo, however, is not that system.

Re:Nonsense (2)

Antique Geekmeister (740220) | about 9 months ago | (#46777673)

> Any remotely well organised IT department will have processes for handling both emergency deployments and retrospective approval

Not when the architect is offline and is needed for every significant change. If there is going to _be_ a policy, a manager needs to be ready to enforce it, or it's going to be everyone making up their own undocumented and impossible to synchronize policies.

Re:Nonsense (3, Insightful)

mysidia (191772) | about 9 months ago | (#46777871)

like this will just make you look stupid and change averse to your employer.

No... it's obviously just aversion to excessive, unnecessary, and crippling micromanagement. It's obviously some idiots in suits who are change-averse and feel they need to justify their existence by "approving" or "disapproving" of each and every required security update, patch, or sysadmin action.

Which involves real costs. With this kind of bullshit, they need to hire additional sysadmins just to deal with the reduced efficiency and increased waste caused by the bureaucracy.

Re:Nonsense (4, Interesting)

Anonymous Coward | about 9 months ago | (#46777583)

Somehow reminds me of that joke where initially there's just one worker, then layers and layers of staff are added to manage that worker, then finally the worker is fired for underperforming.

Can't find it on Google or Bing though for some reason.

Re:Nonsense (5, Insightful)

timepilot (116247) | about 9 months ago | (#46777671)

Dr. Seuss: “Oh, the jobs people work at! Out west near Hawtch-Hawtch there's a Hawtch-Hawtcher bee watcher, his job is to watch. Is to keep both his eyes on the lazy town bee, a bee that is watched will work harder you see. So he watched and he watched, but in spite of his watch that bee didn't work any harder not mawtch. So then somebody said "Our old bee-watching man just isn't bee watching as hard as he can, he ought to be watched by another Hawtch-Hawtcher! The thing that we need is a bee-watcher-watcher!". Well, the bee-watcher-watcher watched the bee-watcher. He didn't watch well so another Hawtch-Hawtcher had to come in as a watch-watcher-watcher! And now all the Hawtchers who live in Hawtch-Hawtch are watching on watch watcher watchering watch, watch watching the watcher who's watching that bee. You're not a Hawtch-Watcher you're lucky you see!”

Re:Nonsense (1)

fatp (1171151) | about 9 months ago | (#46777683)

In the version I heard, the worker was fired because the department is over-staffing.

Re:Nonsense (5, Funny)

sg_oneill (159032) | about 9 months ago | (#46777603)

Back when I worked as a web administrator at my local university in the early 2000s, the admin make-work types decided to bash out a web policy, mostly to keep standards up and guard against legal liability (admittedly, before that we had students setting up websites on chemistry-lab PCs turned webservers, with novel meth recipes and all sorts of shenanigans). All good and fine; I asked to be on the committee as an advisor, and so I was.

Then the whole thing went off the rails: every page needed to be approved by a department head, 10,000+ pages of previously existing content had to be retrofitted with full Dublin Core metadata descriptions, and so on and so on, for about 400 pages of rules and policy that, despite my best efforts, I could not stop. These people had no fucking idea.

The crowning touch was an insane rule that every new hyperlink had to be approved not just by a department head but by the vice chancellor himself.

And so that's what I did, and I made sure it was done good and proper. I wrote a Perl script that took all new pages on the webserver network (about 50-100 new pages a day) and, whenever a hyperlink appeared, spat out a one-page approval document *per link* requiring the vice chancellor and a lawyer to co-sign, all with witnesses. All in all, about 400 pages a day of paperwork for the vice chancellor and a lawyer.

The policy lasted 3 days before I was dragged into the admin building to be ordered to stop producing the reports. I went in with my union rep and said, "Sorry, no, that's the official policy as passed by the university senate, and the website will need to be shut down if this isn't done." Since the next senate meeting was two weeks away, I made sure that stack of paperwork was done by the vice chancellor every goddamn day for a glorious fortnight before the senate could revoke the whole policy.

It was a magical and golden time to be a union-protected government employee (universities are mostly run by the state in Australia).

For some reason later that year I was passed over for a promotion though. I wonder why, lol.

Re:Nonsense (1)

JosKarith (757063) | about 9 months ago | (#46777647)

You should have discussed being passed over for promotion with your union rep - you'd have had a pretty strong case.

Re:Nonsense (0)

Anonymous Coward | about 9 months ago | (#46777797)

" I went in with my union rep."

My god, I'd kill for a sysadmin union of some kind that actually mattered ...

Re:Nonsense (0, Troll)

Anonymous Coward | about 9 months ago | (#46777809)

So... the business made a stupid decision, and when they realised the error of their ways, rather than trying to reach agreement on the best way forward, you delighted in rubbing their noses in it, using processes designed to protect you to hurt your employing organization instead.

What a rebel. How stupid is your employer, eh?

Re:Nonsense (4, Insightful)

OzPeter (195038) | about 9 months ago | (#46777855)

So... the business made a stupid decision, and when they realised the error of their ways, rather than trying to reach agreement on the best way forward, you delighted in rubbing their noses in it, using processes designed to protect you to hurt your employing organization instead.

If he had said, "OK, sure, I'll stop sending you those 400 pages of paper per day," then the policy would still have been left in place, and sometime in the future his employer could have used his inability to follow policy as an excuse to ream him. Yes, it's CYA, but some employers are not above using any tool at their disposal to justify their actions.

Only by his being a genuine PITA did the stupid policy get removed, rather than just ignored until convenient.

Re:Nonsense (0)

Anonymous Coward | about 9 months ago | (#46777939)

Then you get a signed exemption to the policy from the vice chancellor for this specific instance.

You can CYA and not be an asshole at the same time.

Re:Nonsense (2)

rjune (123157) | about 9 months ago | (#46777899)

You should have asked them to put that in writing. In fact, you should have made a written request for a written directive to stop producing the reports: failing to generate the required reports would have given them grounds to fire you. What a glorious fortnight of rubbing their noses in their "Official Policy"!

Re:Nonsense (0)

Anonymous Coward | about 9 months ago | (#46777929)

+1 for strategic use of unions.

Re:Nonsense (4, Informative)

RabidReindeer (2625839) | about 9 months ago | (#46777853)

They want bureaucracy, they make the paperwork. Tell them to track the Windows and distro security pages; the changes are all there. I would be toast with that kind of red tape: I updated my servers in a pinch immediately after the first news of Heartbleed, at 3 in the morning. 0300, right. How about dusting off your resume and changing jobs? Let them play the report-shuffling game alone.

I've served on a change control board. Every application and system update was supposed to be bundled to make the sysadmin's job easier, and to include a document that outlined the nature of the change and why it was needed, instructions on how to apply the change, and instructions on how to recover if it didn't work.

Change committee met once a week, approved/scheduled, deferred, or rejected changes. In case of emergency, the CIO or designated proxy could approve an out-of-band change request.

We didn't attempt to micro-manage changes, just to understand the business risks and rewards. Obviously, the more details you could capture, the better prepared you were to understand the consequences and the ways you could recover. But when Microsoft hands you a CAB (cabinet) file that includes patches for SSL, IE, 6 GDI bugs, and Windows Notepad, that's their problem, not yours.

The one thing that we didn't do (obviously!) was allow automated Windows updates. Then again, considering the damage that some Windows updates have done to desktop machines, I didn't even allow that on my own desktop.

Re:Nonsense (1)

Threni (635302) | about 9 months ago | (#46777907)

> The one thing that we didn't do (obviously!) was allow automated Windows updates.
> Then again, considering the damage that some Windows updates have done to
> desktop machines, I didn't even allow that on my own desktop.

You have to perform OS updates in some industries. You might disable automatic updates, but that doesn't prevent damage; it just means you'll be kicking them off manually.

Patching.... (5, Informative)

Anonymous Coward | about 9 months ago | (#46777419)

What we normally do is get a blanket approval for anything coming from the OS vendor, with an understanding that patching will be done on a specific schedule.

I.e., if all the patches come from Red Hat, no per-patch approval is needed; it's necessary to keep them up to date for security purposes. The same is true for patches pushed out by Microsoft.

Then you're only dealing with 3rd-party applications, and even there we get the more common ones (e.g. Adobe) added to the blanket approval. This way you are only telling them that you are bringing systems into line with the latest set of patches provided by the vendor, without having to list every package being updated. Then they only have to ask you whether a given program has or does not have a certain bug.
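The triage that comment describes is simple enough to sketch. In this hypothetical, the pre-approved vendor list and the patch fields are assumptions, not any particular tool's format:

```python
# Hypothetical blanket-approval list: updates from these sources are
# pre-approved by the CAB and only need to be reported after the fact.
PRE_APPROVED_SOURCES = {"Red Hat", "Microsoft", "Adobe"}

def triage(patches):
    """Split a patch list into 'report only' (covered by the blanket
    approval) and 'needs CAB' (everything else)."""
    report_only, needs_cab = [], []
    for patch in patches:
        if patch["source"] in PRE_APPROVED_SOURCES:
            report_only.append(patch)
        else:
            needs_cab.append(patch)
    return report_only, needs_cab

patches = [
    {"id": "RHSA-2014:0376",    "source": "Red Hat"},
    {"id": "KB2929961",         "source": "Microsoft"},
    {"id": "vendor-agent-2.1",  "source": "SomeNicheVendor"},
]
report_only, needs_cab = triage(patches)
```

Under a blanket agreement, only the `needs_cab` pile generates real paperwork; everything else just goes into the notification report.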

Re:Patching.... (4, Insightful)

rioki (1328185) | about 9 months ago | (#46777483)

I totally agree with the above. This change-review rigmarole is usually done in the name of security and operational stability. That's a laudable goal, but the added red tape often makes the entire system more vulnerable when the board wants to decide which security fixes get applied. You need to hammer home that for every second between the time a security fix is published and the time it is applied, the systems are vulnerable, because once the fix is published, every attacker knows about the issue too. If you have something worthwhile to protect, which is probably the reason a change review board was established in the first place, you do not want to add more time to that window. If they need red tape, get a blanket agreement that you apply vendor security fixes for critical software (OS, databases, etc.) ASAP and that they get a notification of when and what patch was installed.

Re:Patching.... (0)

Anonymous Coward | about 9 months ago | (#46777529)

or simply institute a fast-track process in the change management process...

Re:Patching.... (5, Insightful)

N1AK (864906) | about 9 months ago | (#46777553)

If you have something worthwhile to protect, which is probably the reason why a change review board was established, you do not want to add more time to that window.

No, CABs often get implemented because someone is worried about the damage a borked patch/update could do and doesn't have confidence that it could be reliably fixed quickly. Most of the 'admin' in a change request is things like a process plan (which surely you already know if you're deploying an update to a critical live system) and a rollback process (which again, surely you should be considering before risking fubaring the system).

What I will say is that you should ensure the CAB members are aware of the need to handle emergency requests (meet, agree, and deploy in hours) and that there should be some process for retrospective requests when a business-critical update comes out and you can't wait for CAB approval. Normally the requirements for a retrospective request are that it's genuinely critical and that you submit a completed request before the update. It might sound odd, but the idea is that they can use that to see whether you had properly thought through the process and not just gone Rambo on it.

Re:Patching.... (2)

ixl (811473) | about 9 months ago | (#46777581)

In addition to the above two comments, if the policy changes the CAB is instituting impair sysadmin efficiency (and it sounds like they do), then the CAB should be held accountable for the effects of those changes. This means that they should have to find additional funding for additional sysadmins for these servers.

Re:Patching.... (3, Insightful)

Zocalo (252965) | about 9 months ago | (#46777613)

Blanket approvals and template documents that you can cut and paste notifications into are the way to go, especially when releases are on a schedule like MS, Adobe & Oracle. If they push back, suggest a documented process (this is ITIL, right? You can avoid the need for a CAB if it's an approved and documented procedure) where you push the patches to a few test systems on Tuesday (in the case of MS) and then deploy to the rest later in the week if there are no issues - whatever they are happy with. Depending on your timezone, Tuesday PM or Wednesday AM are good slots for weekly CABs to pick this up: push to the test servers on the day, then the rest at the end of the week. For *nix, I've done updates this way for anything that didn't require a reboot, so only stuff like kernel updates and major low-level libraries needed approval via a CAB.

For everything else, it's your call. Either the patch waits for the next regular CAB or you play the game and keep calling emergency CABs when there are justifiably critical updates, such as Heartbleed, or for the inevitable critical updates from MS every second Tuesday that impact your systems. The best tactic is to embrace ITIL and make it work for you, not allow them to make you jump through hoops and spend your time crafting unique documents for every patch. It also serves as a useful procedure check to make sure you don't mess up and have a contingency plan for when you do, and ultimately, if you get it right, you still get to dictate the schedule and make them do things in ways that you are happy to work with.
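The test-then-production schedule described above reduces to a date calculation. A sketch, where the soak period, tier names, and the "hold on reported issue" rule are assumptions for illustration:

```python
from datetime import date, timedelta

# Hypothetical rollout policy: push to test servers on release day
# (e.g. Patch Tuesday), then to production SOAK_DAYS later if no
# issues were raised during the soak period.
SOAK_DAYS = 3

def rollout_stage(released, today, issue_raised=False):
    """Return which tier a patch should be on, given its release date."""
    if issue_raised:
        return "held"          # pulled from the rollout pending review
    if today < released:
        return "pending"       # not released yet
    if today < released + timedelta(days=SOAK_DAYS):
        return "test"          # soaking on the test servers
    return "production"        # soak passed: deploy everywhere

# Example: an April 8 release is on test the next day,
# and in production once the soak period has elapsed.
print(rollout_stage(date(2014, 4, 8), date(2014, 4, 9)))
print(rollout_stage(date(2014, 4, 8), date(2014, 4, 12)))
```

Encoding the schedule this way is what lets it become a pre-approved standard change: the CAB signs off on the policy once, not on every patch.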

find another job... (0)

Anonymous Coward | about 9 months ago | (#46777421)

If they have an entire board reviewing patches and micromanaging the system, AND you are the sole admin for 50 servers (and probably several hundred if not thousands of users), then I would say you should go find another job. Obviously they can afford a bunch of paper pushers, but no help in the trenches... I'm just sayin'...

What helps... (4, Funny)

ProfessionalCookie (673314) | about 9 months ago | (#46777423)


Re:What helps... (2)

LifesABeach (234436) | about 9 months ago | (#46777763)

"...Is there already a product out there that will make my life a little less stressful on the admin side?..."

I was thinking along the lines of Meth, it's not going to change anything, but so what.

Re:What helps... (1)

OzPeter (195038) | about 9 months ago | (#46777865)


Yeah .. but methanol is a better solution - especially if it's not you drinking it

perhaps (4, Insightful)

dimko (1166489) | about 9 months ago | (#46777433)

The new product your company requires is called: a junior admin? Expensive stuff, but it does the job.

Always quit dumb jobs (2)

koinu (472851) | about 9 months ago | (#46777435)

You know that stress reduces your life expectancy? You get the most stress from dumb supervisors/bosses. Go and quit. That also has the effect of ultimately making your position on it clear.

I do this (5, Interesting)

beezly (197427) | about 9 months ago | (#46777439)

I have to do this and it's no problem at all, although our change management process doesn't sound quite as onerous as yours (I suspect yours will adapt over time -- the CAB will soon get bored if they have to approve every single OS patch).

I have to do a risk analysis for each change that gets made to a system (not just patches). Sometimes this risk analysis is fairly informal, for example if the change is to add more RAM to a VM, it's very unlikely to have a significant adverse impact and is easily reversible, so low risk. Other times the risk analysis (and processes that come out of that) may take a long time and require significant co-ordination with other parts of the organisation I work in.
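That kind of informal scoring can be written down so it's applied consistently. A sketch where the scales, thresholds, and category names are invented for illustration (not taken from any standard):

```python
def risk_level(impact, likelihood, reversible):
    """Classify a change. impact and likelihood are each on a 1-3
    scale; an easily reversible change (like adding RAM to a VM)
    knocks a point off the score."""
    score = impact * likelihood
    if reversible:
        score -= 1
    if score <= 2:
        return "low"      # pre-approved: just record the change
    if score <= 5:
        return "medium"   # needs CAB sign-off
    return "high"         # needs CAB sign-off plus a coordination plan

# Adding RAM to a VM: low impact, low likelihood of trouble, reversible.
print(risk_level(1, 1, True))   # low
```

Even a toy rubric like this makes the "is this low impact?" question a one-second, repeatable decision rather than a judgment call that varies by admin and by day.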

A good example is if we make a change to a service that impacts the look and feel of that service. It will require co-ordinating with our communications, helpdesk, training and documentation teams as well as other parts of the technical group I work in and the CAB really acts as a check to make sure all of that has happened properly.

There are still a few people in our organisation who see the CAB as a barrier to getting work done, but for me it is really a check to make sure we're delivering changes in a proper way.

I can recommend you take a look at The Phoenix Project by Gene Kim, Kevin Behr and George Spafford. http://itrevolution.com/books/... [itrevolution.com] - I had quite a few "this is where I work" moments whilst reading it :)

Re:I do this (0)

Anonymous Coward | about 9 months ago | (#46777693)

Holy shit you guys make things difficult.

We install patches one week after they are dropped; WSUS automagically installs them, and we check the workstations/servers monthly to make sure it did. We wait a week because if there is a bug in a patch, we have a week to hear about it and pull it from deployment. Third-party applications are always patched to the latest version. My customers are small banks; if we don't install a patch, we had better have a damn good reason for the auditors. The auditor does not care whether a patch might cause a work stoppage: if the banks are not fully patched, they are not in compliance, and if they are not in compliance they get written up. Trust me, they do not want to be written up; if they do, you will wish you had merely borked their computer with a bad patch.

Re:I do this (2)

beezly (197427) | about 9 months ago | (#46777799)

It needn't be difficult at all, and it doesn't have to impact your ability to apply security patches. For example, the patches Microsoft released on the 8th of April were applied to roughly 500 of our servers on the 11th. A couple of hundred of our servers applied the software remedy for Heartbleed within hours of its release, without any intervention from a human at all.

A change management process should take into account an organisation's appetite for risk. For us, we're keen to apply security patches quickly, so they are pre-approved by our CAB.

Re:I do this (2)

mysidia (191772) | about 9 months ago | (#46777923)

The auditor does not care if the patch may cause a work stoppage, if they are not fully patched, they are not in compliance, if they are not in compliance they get written up

Sounds like they need to fire and hire new auditors.

Re:I do this (1)

OzPeter (195038) | about 9 months ago | (#46777883)

I have to do a risk analysis for each change that gets made to a system (not just patches)

Which sounds like it's straight out of the OSHA playbook of considering the health and safety aspects of a physical job before performing it. While it's a PITA sometimes, when the shit does hit the fan you are glad you have all the correct responses ready to roll.

Re:I do this (1)

beezly (197427) | about 9 months ago | (#46777913)

Indeed. When we introduced our change management process I realised that I was informally doing this risk analysis anyway. The change management process and CAB just formalise it.

Risk analysis can be as simple as thinking "is this low impact" for a second and then deciding it is and continuing. Most of these types of changes are pre-approved by CAB and we just have to record the change. If we started creating outages from these types of changes then that pre-approval would probably be reviewed.

There are other times when that pre-approval is temporarily revoked when the organisation cannot tolerate the risk of any downtime caused by changes, but that only happens twice a year, and I get to put my feet up a bit and work on interesting hobby projects for a couple of weeks :) A few of my colleagues get irritated that they "can't get anything done", but if my employer chooses to stop me making changes and let me have a rest for a bit, I'm not going to complain!

i know that feel (0)

Anonymous Coward | about 9 months ago | (#46777447)

Are you working for Citi, by any chance?

Setup a WSUS server (5, Informative)

will_die (586523) | about 9 months ago | (#46777451)

Set up a WSUS server; you probably already have the licenses. From there you can pull the patches to it and then push them to the servers that need them, as approved.
There are commercial products that can also do this in a nicer manner, but they cost money.

Ask one question about patches (1)

Deon Lasini (3619671) | about 9 months ago | (#46777453)

Ask a simple question: will this patch cost lives if it is applied? If the answer is no, then apply the patch. Justification for applying the patch: no people will die if the patch is applied.

Re:Ask one question about patches (0)

DoofusOfDeath (636671) | about 9 months ago | (#46777473)

Unless you work for the CIA, in which case the question becomes, "Will this patch cost enough lies?"

Re:Ask one question about patches (1)

DoofusOfDeath (636671) | about 9 months ago | (#46777475)

Erm... I meant to write "lives", not "lies." Freudian slip.

Re:Ask one question about patches (0)

Anonymous Coward | about 9 months ago | (#46777657)

That's the NSA, not the CIA. They both end in A, so the confusion's understandable.

Re:Ask one question about patches (2)

csnydermvpsoft (596111) | about 9 months ago | (#46777521)

I'm sure that the "well, at least no lives were lost!" response will fly really well when a patch causes the company to lose $100,000 in worker productivity.

It's your job (0, Troll)

Anonymous Coward | about 9 months ago | (#46777455)

Don't like doing what you are told? Then leave. Life is too short. You've had it too cushy thus far, and clearly just apply patches by default. Welcome to the real world!

Are you still partying like its 1999, or what? (0, Informative)

Anonymous Coward | about 9 months ago | (#46777457)

Well, welcome to the big leagues.

Any company of any reasonable size NOT doing something like this is stupid.
Not that I have any love for CABs or change management - quite the opposite.

However, when the shit hits the fan, someone is going to be doing a Root Cause Analysis, and having all that documentation available is useful, necessary, or in some cases legally required.

You're not the only one out there that has to deal with this. Some places you need CAB approval via a Change Request in Remedy just to change port speeds.....

Some sort of Blanket Approval as mentioned earlier will solve a lot of the hassle, and let you minimize required Changes to a smaller subset of actions.

Re:Are you still partying like its 1999, or what? (2)

gbjbaanb (229885) | about 9 months ago | (#46777525)

oh god Remedy....I used that once.

But the concept is good - you need a 'bug tracker' where requests for patches can be made to you, and you can then assign them to the CCB. Once they approve a request, they assign it back to you for implementation.

Any dev bugtracker will provide you with this kind of audit trail - think 'requirements' for the CCB authorisation, 'development' for the implementation, 'test' for the testing. You might want to rename these though.

I'd make it web-based so access is simple for everyone involved - the last thing you need is an Excel-based solution. I've used Mantis and Redmine, but Bugzilla would work too, as would any number of web-based bug/task trackers. Get one installed before someone on the CCB says "we'll use a spreadsheet". Seriously.
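What any of those trackers gives you under the hood is a state machine with an audit trail. A minimal sketch of that idea (the states, transitions, and field names here are illustrative, not any tracker's actual schema):

```python
# Allowed transitions for a change request: the CAB approves or
# rejects, the admin implements, then testing closes it out.
ALLOWED = {
    "requested":   {"approved", "rejected"},
    "approved":    {"implemented"},
    "implemented": {"tested"},
    "tested":      set(),
    "rejected":    set(),
}

class ChangeRequest:
    def __init__(self, title):
        self.title = title
        self.state = "requested"
        self.history = [("requested", None)]  # the audit trail

    def move(self, new_state, who):
        """Advance the request, refusing any transition the workflow
        doesn't allow, and record who made the change."""
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"cannot go {self.state} -> {new_state}")
        self.state = new_state
        self.history.append((new_state, who))

cr = ChangeRequest("Apply RHSA-2014:0376 to web tier")
cr.move("approved", "CAB")
cr.move("implemented", "sysadmin")
cr.move("tested", "sysadmin")
```

The `history` list is the part auditors actually want: who moved the request, and through which states, with no way to skip the approval step.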

Re:Are you still partying like its 1999, or what? (1)

Stumbles (602007) | about 9 months ago | (#46777681)

Ug. Remedy is such a bitter pill.

Re:Are you still partying like its 1999, or what? (4, Insightful)

sjames (1099) | about 9 months ago | (#46777617)

That's not the big leagues, that's the short bus.

Yes, changes need to be documented, and they should be deployed on a test server before going into production. The rest is just people who were presumably traumatized by falling out of a tree as a child, seeking revenge.

Take the people in the CAB and replace them with extra admins who are bright enough to know what I said in the 2nd paragraph.

Re:Are you still partying like its 1999, or what? (1)

clickclickdrone (964164) | about 9 months ago | (#46777675)

Heck, we have a CR process for anything that touches a live server. I even had to go through the process to get details of a file, because doing so would have resulted in an unexpected file write. By way of background, the server used to fill up during the day's processing and empty out overnight. It sometimes got very tight, and when someone made a copy of a file without checking its size, it filled the filesystem and the server fell over. That particular outage cost several million, given what the server did.

ask yourself *why* and do the right thing (4, Insightful)

flinkflonk (573023) | about 9 months ago | (#46777461)

This is known as the change process in ITIL, and it does have a remedy. The remedy is pre-approved changes (standard changes), which should include patching the OS with patches approved by the vendor. It's meant for exactly this situation, and if your change process doesn't have them it's just a paper wall.
The ITIL change process is all about reducing risk. If there is a risk with patching your OS (there is, especially since you mention Windows, it's not that unheard of that a Windows patch makes your whole network inoperative) you have to weigh it against the risk of not patching it (meaning you leave known security holes in).
So, my advice is to get OS patches for your OSes pre-approved by the CAB, that is, when a vendor releases a set of patches you are allowed to patch your systems in the way and the order of that pre-approved change. Of course it's paper-pushing, but use it to your advantage and push some paper yourself. If a server gets compromised and you have the papers (changelog) to prove that you followed procedure, blame will be placed somewhere else. And things will be done differently from there on, since it has been proven that the procedure didn't work, and everybody wins.
Or you could go find another job (like some other posters recommended) where you are the sole *cowboy*-admin and nothing gets done properly. Your choice really.

Re:ask yourself *why* and do the right thing (1)

Maxo-Texas (864189) | about 9 months ago | (#46777491)

At my last place, this lowered productivity: turnaround went from 2 days to 47 days for changes of 1 to 400 lines, and from 3 months to 6-9 months (or never) for larger changes. Once the cost was recognized, it also resulted in a lot of small changes not being done, because their benefit no longer justified the cost.

However, it lowered our critical errors affecting production from about 6 unscheduled downtimes per year to about 6 unscheduled downtimes per year, so it was worth it.

All kidding aside - a few times a year, it prevented different departments from really stepping on each other's toes hard. As in, "But we have a major upgrade that's going to take 20 hours to install this weekend and you are going to have the system down for O/S patches the entire weekend!?!?!" and "But we have a major upgrade that requires Jane from your team. She's in Europe for the next two weeks!?!?!?!"

But other than those... seriously.. reduced unscheduled downtime from 6 a year to 6 a year. I.e. no benefit. All the successful testing in the world IS NOT PRODUCTION. Most companies can't afford to maintain a test system identical to production. It's always a subset in some way.

Re:ask yourself *why* and do the right thing (1)

Maxo-Texas (864189) | about 9 months ago | (#46777547)

Oh, and the worst case scenario was that the CAB meeting was a fixed length and the number of changes took too long, so all changes not approved were pushed back to the next CAB meeting unless you got a senior director to hold a special meeting for the projects. In one really bad stretch, a lot of critical projects slid over 90 days due to this.

But they were very serious about it. The CEO or president's ass was on the line if a change went in which wasn't approved or recorded. So it was a firing offense. You *did* follow procedures.

And those procedures changed... constantly. From cab meeting to cab meeting the procedures changed. The only notice was a series of emails. They did not maintain a central change procedure process document.

So the point here is that the people on your CAB may be very powerful and not follow the rules themselves and may change the rules with little notice to suit their own needs.

Re:ask yourself *why* and do the right thing (2)

sjames (1099) | about 9 months ago | (#46777661)

What you needed was a CAB CAB to maintain the change procedure process document. And then, of course the CAB CAB CAB to maintain the change procedure document change procedure process document.

They might need to lay off their production people to afford another layer of CAB or two, but that's OK; with the constant change in the change procedure change procedure, none of them knew what they were supposed to be doing anymore anyway.

Re:ask yourself *why* and do the right thing (2)

Pikewake (217555) | about 9 months ago | (#46777579)

I've been involved in setting up ITIL processes for several organizations and agree 100% with the above. The main benefit of a change process and a CAB is that you can get an overall picture of all incoming changes, compare it to the available resources, and prioritize based on fact instead of on who screams the loudest. Hope you have a competent change manager who can keep that focus and avoid greasing the squeaky wheels.
If the CAB starts micromanaging they will self-destruct.
After you get your standard changes approved, just make sure that the CAB is aware of the time you spend on doing them. CABs tend to forget that any work they're not actively involved in approving needs time to get done.

Re:ask yourself *why* and do the right thing (1)

Krokant (956646) | about 9 months ago | (#46777623)

Totally agree with this! The best approach is to introduce a classification of patches, e.g. "minor", "significant", "major" and "emergency" patches. Agree that you get the "minor" patches pre-approved (those with minor risk & impact, "minor" to be defined in agreement with the CAB). Other patches like service packs (significant?) and OS upgrades (major) should really go through a CAB, even if it is just to inform the other IT staff members about what you are doing (and to give them a chance to point out that application X or Y can break). Finally, also agree up front who you can call at night to get a "carte blanche" in case an emergency patch needs to be deployed to fix some 0-day expl0it.

micromanager jerk (3, Funny)

Anonymous Coward | about 9 months ago | (#46777463)

I bet your CEO or upper-level boss is the typical dimwit/jerk: knows nothing about the business, micromanager type of guy, plays stupid power games, calls you on purpose once his secretary tells him you are out the door. Small guy, stupid looking, maybe a goatee, cheap-looking suit. Tell him to sod off and change jobs...

Run away! (4, Insightful)

arcade (16638) | about 9 months ago | (#46777467)

Given your description, you're the sole sysadmin. This means you're the person who should make these decisions - nobody else. If the company disagrees with this, then either you've done a poor job previously, or they don't trust you to do your job for some strange reason.

Now, if it's you who has fscked up on previous occasions, then it's understandable that they want the red tape.

If you haven't, then it's time to put your foot down and say "Nope, that's my job". If they disagree with that, LinkedIn should be a relatively short distance away, and after you find yourself a new job, simply hand in your resignation, pointing out that you have no interest in having babysitters.

No good news (0)

Anonymous Coward | about 9 months ago | (#46777489)

- Explain to them that having a full board consider routine job tasks ain't gonna work, and tell them to make some streamlined process (like just letting you do your job).

  - Hire a second sysadmin to do your bitchwork.

  - Find a new job with more workable policies.

They have a point (3, Insightful)

distilate (1037896) | about 9 months ago | (#46777495)

As a software developer I have multiple times had a development box screwed over by an IT department pushing unneeded drivers and patches that cause problems. I say prove they are good or needed before you waste other people's time. If you just want to push any random patch that comes along then you should be forced to resolve all issues without the traditional "reinstall the machine" fix.

Re:They have a point (4, Interesting)

Anonymous Coward | about 9 months ago | (#46777595)

As a software developer I have multiple times had a development box screwed over by an IT department pushing unneeded drivers and patches that cause problems.

I say prove they are good or needed before you waste other people's time.
If you just want to push any random patch that comes along then you should be forced to resolve all issues without the traditional "reinstall the machine" fix.

Er, waste people's time?

As a software developer, you have no fucking idea how difficult it is to pick and choose patches and driver updates to push out to machines while also trying to maintain any sort of consistency in patch levels across the enterprise. But apparently this is something you want me to waste my time on, in order to ensure you've not lost a spare second on the rare and random occurrence that you experience a problem in 1 out of 200 patches (my patching track record over 15+ years puts the frequency well below that).

And if you're doing patching correctly, you're mainly concerned about patches deemed "critical", so again, you're not really afforded the luxury of picking and choosing here without risk.

As a seasoned sysadmin, I have a fix for you. It's called VMs. Play to your heart's content and press the rewind (snapshot) button when you find the environment screwed up (shockingly, it's not always the IT department that screws computers up... yes, I know this is breaking news).

Re:They have a point (2, Insightful)

Anonymous Coward | about 9 months ago | (#46777791)

And you sir, are why most people hate IT.

In short, yes, I do expect you to waste your time picking and choosing patches so I don't lose that spare second. After all, it's YOUR JOB to keep the computers running well. If you can't be bothered to do it, then what's the point of you being employed? My job as a developer is to develop products, not to battle with my machine. By you not doing your job properly and approving a patch that takes out my machine, I'm now unable to do my job. And likely I'm not the only person who will have been affected by this issue; thus, by you not doing your job, you may have cost 10, 20, 1000 people a couple of hours where they're not doing their assigned tasks.

If you can't be bothered to, as a sysadmin, then in my opinion a company shouldn't be bothered to employ you.

Fight absurd with absurd (1)

Anonymous Coward | about 9 months ago | (#46777497)

An absurd idea or ludicrous non-sense can be fought with even more surrealistic silliness.

Just apply this dogma.

Follow the instructions diligently, put the administrative burden on them with tons of notices and emails, and don't forget to ask for a quick answer every now and then, because security is at stake.

Stop caring and give them what they want. (0)

Anonymous Coward | about 9 months ago | (#46777513)

Sounds like you want to be efficient; the CAB doesn't want that. If this is impacting other aspects of your job, that needs to be communicated. If you personally can't stand it, as others have pointed out, they will probably get tired of this eventually. If they don't, it doesn't hurt to make sure your resume is up to date; there could be better opportunities out there for you.

Windows servers aren't always the best... (1)

chentiangemalc (1710624) | about 9 months ago | (#46777531)

It scares the hell out of me how many sysadmins don't know what a Microsoft KB article is... And you're not paying attention if you don't know about "Patch Tuesday" and where Microsoft announces out-of-band patches. Get WSUS and half the work is done for you.

Staff? (0)

Anonymous Coward | about 9 months ago | (#46777537)

Simple really. Ask for another five - ten staff members to manage those servers.

Ha-ha! (2)

hsa (598343) | about 9 months ago | (#46777539)

In the voice of Nelson from the Simpsons: Ha-ha!

They want to make your work more transparent. Apparently, they think you have too much spare time, too. Or you're getting fired/outsourced, and this is a gentle reminder to document your work.

Since all the reports are similar, I would just create a script to handle the documentation needs. I would also do some extra work: create a report on how much this affects the efficiency of patch/hotfix distribution and how much time all these process changes take (and maybe inflate that number a bit, just a bit).

This would also be a great time to ask for an assistant to ease the workload.
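The report-generating script the parent suggests can be tiny. A hedged sketch — the field names, report wording, and patch IDs are invented placeholders; feed it whatever your update tooling actually exports:

```python
def cab_report(batch_name, updates):
    """Render boilerplate change-request text for one patch batch."""
    lines = [f"Change request: {batch_name}",
             f"Number of patches: {len(updates)}", ""]
    for u in updates:
        lines.append(f"- {u['id']}: {u['title']} (severity: {u['severity']})")
    lines += ["", "Rollback plan: uninstall the listed patches or restore the pre-change snapshot."]
    return "\n".join(lines)

# Example input, as a list of dicts exported from your patch tooling.
updates = [
    {"id": "KB2919355", "title": "Windows Server 2012 R2 Update", "severity": "Critical"},
    {"id": "RHSA-2014:0376", "title": "openssl security update", "severity": "Important"},
]
report = cab_report("Monday updates, week 32", updates)
print(report)
```

Once the paperwork is one function call, the "lengthy report outlining each patch" costs you seconds, not hours.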

Give them what they want (0)

Anonymous Coward | about 9 months ago | (#46777549)

If they want bureaucracy, give it to them. These people pay you and are entitled to tell you how to work.

Spend lots of time testing windows patches, let other things go. On no account increase your hours to do this.

Also, be sure to mention that testing windows patches on anything other than an exact simulacrum of your development environment is not going to be effective. Get them to allocate a few devs as test subjects. Watch as their efficiency drops too.

Once they've realised that things slow down and you've stuffed so much paperwork down their throats that they choke, people may loosen up and realise that each patch doesn't need its own CR.

I do actually like change control, but I've got the situation set up where I don't have to ask for every patch.

As a Change Manager... (5, Insightful)

Pete (big-pete) (253496) | about 9 months ago | (#46777571)

I work in Change Management for a major telco, I chair the IT CAB, and I oversee server and client patching (amongst many other changes!). When we patch clients, we are patching up to around 30,000 real and virtual desktops - when we patch servers, they also number in the thousands.

There is no way we would allow a sysadmin to patch anything at any time without some level of oversight. An individual admin has no view of other patches, hardware interventions, application releases, network upgrades, business campaigns, etc. that may be happening on our environment at any given moment (it isn't their job to keep track of all of that). For server and client patching the process is as light as possible, but we still maintain close oversight.

On the Wednesday following the second Tuesday of each month (for example), I sit down with the Windows server guys and the Windows client guys, and we review their proposals to patch - usually we have a fairly rapid timescale that we can meet to ensure that the patches are deployed (including pilot testing, etc to catch any issues before everyone's desktop is broken!), sometimes there are other major interventions that overlap, and then we need to make prioritisation decisions and decide which has priority. We have made similar agreements with the Linux teams, where they have a special process to patch, and we have close oversight on Unix patches, as upgrading these servers with a reboot can be a very big deal.

The last thing you want is an application version release of a critical ordering application happening at the same time as a system software patch, and then to have an issue afterwards - is it the application version, is it the systems patch, was there some conflict with the activities being performed at the same time? Troubleshooting gets more difficult, teams point fingers at each other, and the whole time the business is screaming blue murder.

Of course in an Incident situation there is more flexibility to get things fixed fast, and with security issues I am keen to break open the S-CAB process to expedite a rapid approval flow to ensure that security holes are fixed as fast as possible - of course most changes are encouraged to follow the rules though, the change calendar is published, and everyone knows when the "standard" slots for deployment are, and if most people manage to schedule their changes within those windows, then it minimises potential conflict for everyone.

Change management are not your enemy, they are your friend - once you register your change with them, they have your back, they will guard from other interventions clashing with you, will stop you from inadvertently upsetting the business, and will decrease change related Incidents. However, with great power comes great responsibility, and Change Management need to find the right process for the right type of change - we cannot have a full in depth investigation into every configuration change, every patch, every bug-fix, every new server to be provisioned. A good Change Management team will guide changes to the appropriate flow, and grease the wheels for certain types of interventions - it seems that the CAB mentioned in the summary are still finding their feet a little, and I am sure they will evolve over time as they start to understand which changes are high risk, and which can be allowed to pass with a lighter touch.

-- Pete.

Re:As a Change Manager... (4, Interesting)

Anonymous Coward | about 9 months ago | (#46777677)

OP is managing 50 servers, you are managing tens of thousands of systems - the situations are hardly comparable.

Like the OP, I am the sole admin for our company's IT (60+ on-prem servers, a mix of WinTel and Linux, plus 10 Azure-hosted servers) and I am in charge of patch management.

If a committee in the organisation came and told me they were taking responsibility for patching away from me I would either tell them to sod off OR I would hand over all the admin accounts and wish them luck.

Re:As a Change Manager... (0)

Anonymous Coward | about 9 months ago | (#46777699)

Troubleshooting gets more difficult, teams point fingers at eachother, and the whole time the business is screaming blue murder.

Mod parent up. Start adding a complex software stack into the picture where you may be working with multiple software vendors: they will assume you changed something, regardless of whether it's a true defect in their product. If you can honestly answer "I have a change control process, we know exactly what happened on this system" you'll be able to troubleshoot issues that much quicker when they happen.

Maintenance Windows (1)

Fotis Georgatos (3006465) | about 9 months ago | (#46777575)

Probably all you are missing over there are scheduled maintenance windows.

You give them a list once per month about what is about to change, get a confirmation, proceed with them available on standby for fixes on the spot or, rollback.

Try to think of the big picture: how would you maintain the systems if they were life-supporting medical equipment? Why not give the same quality of service?
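A sketch of the "list once per month" idea, pegging the window to Patch Tuesday (the second Tuesday of the month). The Saturday offset is an invented example; pick whatever gap your business tolerates:

```python
import calendar
import datetime

def patch_tuesday(year, month):
    """Return the date of the second Tuesday of the given month."""
    cal = calendar.Calendar()
    tuesdays = [d for d in cal.itermonthdates(year, month)
                if d.month == month and d.weekday() == calendar.TUESDAY]
    return tuesdays[1]

def maintenance_window(year, month):
    """Assumed policy: the maintenance window is the Saturday after Patch Tuesday."""
    pt = patch_tuesday(year, month)
    return pt + datetime.timedelta(days=(calendar.SATURDAY - calendar.TUESDAY))

# Example: April 2014 (Patch Tuesday fell on the 8th).
print(maintenance_window(2014, 4))
```

Publish twelve of these dates at the start of the year and the monthly "what is about to change" list slots into a window everyone already agreed to.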

System Administrator Vs Change Advisory Board (5, Funny)

wonkey_monkey (2592601) | about 9 months ago | (#46777589)

System Administrator Vs Change Advisory Board

50 quatloos on the newcomer!

1 of 3 possibilities... (1, Informative)

Anonymous Coward | about 9 months ago | (#46777597)

1 of 3 possibilities:
1. You are perfect. You NEVER screw up. In this case, the CAB is just being a PITA.
2. You can make certain types of updates quickly, with little or no risk, and you never screw up. The CAB should agree to make these standard changes with very low overhead. The other types of updates are likely to help YOU, not to mention everyone else in the company that depends on you.
3. It's hard to say in advance - most of the time, things work OK, but sometimes problems arise and there is unexpected downtime (it's NEVER your fault, however). Bite the bullet. You are not running a world-class shop and you need help to improve. Anyway, downtime in production always takes more of your time than filing an RFC.

I've had this situation. (0)

Anonymous Coward | about 9 months ago | (#46777599)

Posting as AC for obvious reasons. I had this situation. change board was announced; I predicted productivity would take a nose dive; it has. Job satisfaction has taken a nose dive as well. Stuff that used to take hours now is wrapped in red tape and takes weeks instead. I'm currently looking for something else.

Re:I've had this situation. (0)

Anonymous Coward | about 9 months ago | (#46777697)

This is exactly why I quit my job at a huge company for a position at a much smaller one.

I used to be in charge of certain systems (not hundreds or thousands like some comments I see) but over the past year I've been relegated to little more than a ticket monkey. My latter days were like "submit a ticket about X" "follow up about Y" "schedule Z" .. no thanks, go hire a damn secretary.

Ask for a good vulnerability scanner (1)

Anonymous Coward | about 9 months ago | (#46777615)

Turn this request proactive. Ask for a good vulnerability scanner, one that can perform authenticated scans. Qualys or Rapid7 would be good choices. The scanner will list out all of the vulnerabilities on each server including those that have patches available and those that don't. Let the scanner do the work and then present the report of both patchable and unpatchable vulnerabilities and let them work off that. This is how we do the CAB at our 300 server and 2,000 desktop bank.

After the patches are installed run the same scan again and now you have proof that the patches did in fact close the vulnerability. Both the "before" and "after" scan becomes part of the CAB documentation.

This in fact will seriously increase your workload for months, because there are a whole lot more vulnerabilities than you know about, and many of those will be configuration issues. But for fifty servers it should take less than six months, and then you'll be in a good place. And the CAB will lighten up a lot as things show improvement. Too many sysadmins think that Windows Update and the RHN are the only tools they need for vulnerability management, and that is not anywhere close to the truth.
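The before/after comparison is a simple set difference once each scan is reduced to (host, finding) pairs. A minimal sketch — real Qualys/Nessus exports carry many more columns, and the hosts and CVEs below are invented examples:

```python
def closed_vulns(before, after):
    """Return findings present in the 'before' scan but absent from 'after'."""
    return sorted(set(before) - set(after))

# (host, vulnerability) pairs, as you might extract from two scanner exports.
before = [("web01", "CVE-2014-0160"), ("web01", "CVE-2014-1776"), ("db01", "CVE-2014-0160")]
after  = [("web01", "CVE-2014-1776")]
print(closed_vulns(before, after))
```

The resulting list is exactly the "proof that the patches did in fact close the vulnerability" that goes into the CAB documentation.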

Change management can cover your ass (5, Insightful)

Madman (84403) | about 9 months ago | (#46777619)

There is genuine value in a well-run change management program. Organizations need to know what is going on in their infrastructure, and plan things properly. In many industries there is a growing regulatory requirement to have change management, and auditors are looking for these things more often too. Many smaller shops are bringing in change control, so rather than handing in your badge my advice would be to deal with it and learn the lessons.
One lesson is rather than fight it, use it to your advantage. Yes, there's paperwork, however if you follow the system correctly they cannot blame you if things go wrong. What you thought of as freedom was also a risk to your own position as you had sole responsibility - change control means less freedom, but you are covered. Also, you can get budget for better management systems which will make your life easier. Put together a realistic list of what you need and get involved with setting up the change control process. If you stay silent or fight it you won't get a say.

I'm not a fan of CAB (3, Interesting)

Anonymous Coward | about 9 months ago | (#46777641)

I used to work for a Fortune 100 company. I'm not sure how CAB works at other companies, but I get the impression that their implementation was flawed. 1) You could easily go around the process. 2) I'm certain nobody reviewed the code - they just kind of discussed it. In my opinion this is a half-baked solution to prevent things from getting pushed to production which could cause problems (errors, leaking sensitive info, etc). I am 100% confident that I could have gotten CAB approval for nearly anything. I understand the idea behind CAB but in my experience it isn't effective.

I actually quit that job partially due to things like CAB. Increasingly control was taken away from people in the IT department, and handed to things like CAB or to 3rd parties who managed our systems, databases, etc. The jobs of myself and others in IT staff were being reduced from "actually doing the work" to "submitting tickets and following up on tickets." Nothing like being on hold when calling the 3rd party for a critical issue you yourself know how to fix in 5 minutes. It's also a blast when I had to tell the support guy what commands to run because he wasn't familiar.

And no we didn't fuck up anything to deserve this treatment. It was dictated to us from upper management.

Re:I'm not a fan of CAB (1)

Anonymous Coward | about 9 months ago | (#46777737)

You shouldn't have told the guy anything. First rule of consulting: say no more than is required.

Sounds like you work for the federal government (1, Funny)

MikeRT (947531) | about 9 months ago | (#46777659)

Do exactly what they say to the letter. After the second "patch Tues" where they pound the ever lovin fuck out of Windows Server with updates and the CAB has a pile of paperwork big enough to roast a wild boar they'll suddenly regain a measure of common sense.

Lack of process (0)

Anonymous Coward | about 9 months ago | (#46777667)

The paper trail for the process is the easy part. It's the part where some manager needs to be hand-guided through making a decision he is not qualified to make that adds cost, and reduces productivity to the point where it might affect stability.

And that's why enterprise systems are always 2 years behind on patches, chronically unstable, and riddled with security holes.
The end result of an under-resourced CAB process is always less patching, higher costs, and worse (or at best similar) stability, with the most common root cause for downtime being lack of due diligence in patching/maintaining systems.

There really should be some kind of predetermined rule-set for when a patch gets deferred and when it gets implemented. If there is, you don't need a board to look at every patch; and if there is not, the board will always lead to worse results.
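That predetermined rule-set could be as small as one function, agreed with the CAB once and then applied mechanically. The severity names and conditions below are invented placeholders; negotiate the real ones with your board:

```python
def route_patch(severity, reboot_required, vendor_approved):
    """Classify a patch under an assumed, pre-agreed rule-set."""
    if severity == "emergency":
        return "apply-now-notify-later"   # e.g. an actively exploited 0-day
    if vendor_approved and not reboot_required and severity in ("low", "moderate"):
        return "pre-approved"             # standard change, no board review
    return "cab-review"                   # service packs, OS upgrades, etc.

print(route_patch("moderate", False, True))   # routine vendor fix
print(route_patch("critical", True, True))    # big change, goes to the board
```

Anything the function routes to "pre-approved" never generates a meeting agenda item, which is the whole point.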

Get a vulnerability scanner (1)

dremspider (562073) | about 9 months ago | (#46777669)

Buy something like Tenable Nessus or Rapid7. They make reports very easy and work across Windows, Linux, Cisco, etc. If you get SecurityCenter it will track changes over time and you can see patching trends.

Must have missed the TLA (3, Funny)

Kahn_au (1349259) | about 9 months ago | (#46777679)

Where I come from CAB stands for "Change Acceptance Board", they don't get to make dumb decisions...

Pre-approvals (3, Interesting)

AndyCanfield (700565) | about 9 months ago | (#46777707)

Seems to me that you need to establish a list of pre-approved changes. For example, if you're running Windows and IIS, make sure there's a clause that says anything that comes down the pipeline via Windows Update does not need formal approval. That way you can offload the responsibility, and the work, onto Microsoft. You can keep your core software up-to-date. Third-party software, same thing for corporations. Student projects and your own shell scripts might need more examination; not a bad idea actually. But if there's a new version of Firefox, why in the world would a Change Advisory Board think it knows more than Mozilla?

In my experience... (1)

amalek (615708) | about 9 months ago | (#46777757)

.. as the admin for a couple of hundred Windows servers, an efficient CAB is your friend. As another said, they have your back, and that of the business (and by extension, the poor guy who is up at 4am fixing any issues introduced). That said, I've also worked with companies and CABs that know how everything is written in the ITIL handbook, but have no clue how to put it into (efficient) practice. It sounds like your CAB just wants the paperwork done - did you bring on consultants recently? - and thinks/hopes it will mitigate the risks involved with patching.

Change request for patching on a development environment? Routine change. Keep up with the news for any issues from this month's patches. You patch dev, or your pre-prod environment or whatever you have, monitor for a few days, and if all is good you apply the same patches to your production machines. This is enough risk mitigation for most, and it gets the job done at the end of the day.

Make up a nice RACI chart (Responsible, Accountable, Consulted, Informed) for the whole process - you are probably R/A for successful patching, but the CAB will provide the approval for you to go ahead. They won't allow you to do it if there's a big release, or some ongoing issues. Then you only need to know how to push the patches and have a good engineer to fix anything that might occur on the night, and the accountability trail takes care of any finger-pointing and addresses any gaps in the process you might have noticed. Start slow, start small. Work your way up in volume as it becomes more like a routine change.

It is called ITIL (0)

Anonymous Coward | about 9 months ago | (#46777773)

You need to join the modern world, your actions affect more than just the servers and need to be communicated throughout the organization. If you cannot speak to what a patch is going to do, then why the fuck would you apply it?

zOS Maintenance and CAB (1)

jacobsm (661831) | about 9 months ago | (#46777781)

I'm the zOS Systems Programmer at a Fortune 500 company. When we do system maintenance cycles our CRB just wants to know when the system environment is changing, not what's changing.

If anyone ever does want to know, I have detailed logs and a before-and-after image of the maintenance management database (SMP/E Consolidated Software Inventory) for them to peruse. They never do, since they don't understand zOS systems programming, and they shouldn't have to. It's their job to manage system availability and to ensure that proper testing and system validation activities were performed. It's my job to manage the environmental change.

For anyone who's foolish enough to ask for detailed documentation of every module, macro, load module, dataset, file in the Unix System Services file system that's being modified, well enjoy yourself.

What I won't stand for, is for someone to have veto power over what maintenance goes on. That's my decision, and since I'm the best person in the organization to decide, I do so.

Big company experience comes to small company (3, Interesting)

erroneus (253617) | about 9 months ago | (#46777783)

Yes, I know how they are thinking and the pain you are feeling. To accomplish the implementation of this change management process you will need a lot of people working for you. Use this to your advantage. Quickly study up on the subject so your experience with the systems will not leave you with a dog pile of new bosses to tell you how to do your job. Instead insist that you need to hire more people to manage the overhead.

In the end that probably won't work and you'll be kept "at the bottom" where you are now.

These changes are going to be enormously expensive and despite all you have done, it will be perceived that you created this mess by not having a change management system in place to begin with. Of course, they will also see that you don't know about change management and will prefer to hire someone who already knows about it.

Now I'm not going to down change management processes. They can prevent problems and identify people who would otherwise deflect blame and hide in the shadows. But from what I have seen, you're just getting the beginning of the tsunami of changes.

Push for testing systems and additional hardware to support it. Of course it will also require more space and other resources. Try to get ahead of this beast.

Routine changes. (1)

Vermifax (3687) | about 9 months ago | (#46777829)

We got our CAB to agree to a certain class of routine changes that require minimal review. They don't need any more detail than: test servers updated on Tuesday, production one week later, per maintenance windows.

Change Management is good (2)

MrNemesis (587188) | about 9 months ago | (#46777831)

...and necessary* but that doesn't stop some change management boards being needlessly obstructive.

Years back, I was working at a company where all of our servers got patched at build and then never patched again "in case it broke something". Myself and the rest of the ops team begged and pleaded with the business to allow us maintenance windows, to reboot the OS outside of business hours, to install patches... all to no avail.

Until the company lost a bid on a contract because they had no maintenance or patch management policy in place, and the business came running at us screaming why we don't patch our servers (they would listen to their potential clients about computer security and whatnot, but not to their own staff). Cue us showing them the dozen or so draft maintenance policies that we'd submitted over the years, all of which were rejected by the directors. Red faces all round in that meeting :)

So the latest draft gets pushed into force by a wheelbarrow full of cash and we go out and buy Shavlik, a really rather nice patch management solution... and then our change management board goes nuts when they see our report. Lots of w2k and w2k3 boxes had literally hundreds of service packs and patches outstanding, and, like the OP's board, they wanted an individual change raised for each patch going on each server. We then set up an email direct to the change board that gave them Shavlik's automated PDF report, which lists all the patches outstanding on a server along with a hyperlink to the MS KB or similar... but that wasn't good enough. They wanted a report on what each patch did, which files it altered, all the usual stuff. Now as another poster has pointed out, under ITIL this should all have been a "standard change" without needing so much paperwork (seriously, they should be at least aware of ITIL even if they're not going to follow it to the letter). We could sympathise with them that, even with our planned dependency-based staggered rollout over a 4-week period, this was both a radical shift in company culture and a significant opportunity for breakage... but still. Filing about 20,000 change requests it was to be.

So obviously, since we were dealing with obstructive officials, we did exactly that. We wrote a few dozen hacky shell scripts that took the PDFs Shavlik produced, curled down the contents of the linked KB pages and posted it all off into the change management system: one request per patch per machine. After about twenty minutes of this we'd submitted about 400 requests, and the change management system (an in-house pile o' shite that wasn't so much written as congealed out of various bits of SharePoint, and was universally hated) had slowed to a crawl: it took ten minutes just to open the page. It used funky whizz-bang AJAX to load *all* of the pending change requests in the background ("who needs a LIMIT on this SQL query?! We're never going to have more than fifty open change requests!" - the developer in question seemed to think that using a LIMIT clause was akin to taking the go-fasta stripes off your car. Wonder if he's doing webscale development now).

After some brief arguing in which they actually suggested we should open a change request in order to submit changes - at which point we cackled at the prospect of submitting another 20,000 pre-change-request changes - and after finding their ITIL manual down the back of the sofa, they finally agreed that no, they didn't need quite such a detailed report, and they were prepared to accept our risk assessment report as a single change for the first weekend's rollout.
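For anyone who wants to try this sort of malicious compliance at home, the per-patch flood can be sketched roughly like this. Everything here is made up for illustration: the hostnames, patch IDs and KB links are fake, and the final POST into the change system is faked as an append to a local file so it runs standalone (the real scripts scraped Shavlik's PDFs and drove an in-house SharePoint thing).

```shell
#!/bin/sh
# Hypothetical sketch: one change request per patch per machine.

# Stand-in for the patch list extracted from Shavlik's per-server report
cat > patches.txt <<'EOF'
web01 KB123456 http://invalid.invalid/kb/123456
db01 RHSA-2014:0123 http://invalid.invalid/errata/RHSA-2014-0123
EOF

: > change_requests.out
while read -r host patch url; do
    # Fetch the vendor KB page so the board gets their "what does it do" detail;
    # fall back quietly if the link is dead (or curl is missing)
    detail=$(curl -s --max-time 5 "$url" 2>/dev/null) || detail="KB text unavailable"
    # One request per patch per machine -- the real POST went to the change system
    printf '%s\t%s\t%.60s\n' "$host" "$patch" "$detail" >> change_requests.out
done < patches.txt

echo "Submitted $(wc -l < change_requests.out) change requests"
```

Point being: the script is trivial, which is exactly why "file one change per patch per server" is a policy that only survives until someone automates it.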

So about 20,000 patches/service packs were staged and installed over the next two months, and luckily we didn't have a single failure due to the patches (yes, I also thought this was miraculous considering the crufty applications). From then on, every patch cycle needed just four changes, one for each week. That's how it should be done.

* Yes, necessary! I've done more than my fair share of JFDI, but that just doesn't scale when you're working in teams of more than a few people - and it completely falls apart when you scale up to multiple teams. Perhaps most important, aside from scheduling potentially conflicting changes ("what do you mean the routers are down for an hour's maintenance while we're uploading the new data?!"), is making sure we admins document our changes and a rollback plan. Version control for config files and the like... once you're used to it, you wonder how you ever lived without it.

FWIW, I'm still a sysadmin and I still hate the paperwork of change management - why do I need to do this? It's never going to go wrong! But I've seen (and perpetrated) enough changes going wrong that I can see its value; you never really miss it until it's gone.

QA (2)

NapalmV (1934294) | about 9 months ago | (#46777887)

This makes no sense unless you also have a QA department where all these patches would be tested. The CAB would then need a list with each patch's description, justification, and impact on existing enterprise applications. Based on this list they could decide what gets applied immediately, bundled into a weekly/monthly release, scrapped, or postponed until a remediation plan is completed. Without QA results the CAB is useless.

What went wrong before? (3, Informative)

bwcbwc (601780) | about 9 months ago | (#46777893)

In my experience, a CAB usually gets introduced in a small organization after something really got screwed up under the old process. There are exceptions - you could get a CTO who is gung-ho for ITIL, or a new, important customer who insists on "process" - but fundamentally a CAB is an attempt to manage change and prevent problems in the working environment. So unless you have a better solution for preventing negative impacts from your change process, go do the paperwork, with special attention to any risks or issues associated with the change (extended maintenance window, complex install or backout process, partial or incomplete fixes that still leave issues open). You can probably half-ass the CAB and get your work done almost like in the old days, but when the next failed change occurs and they find out you hid risks or didn't do proper research, your ass could be out the door.

OTOH, if you really hate bureaucracy that much, hauling your ass out the door could be your best option - as long as you have a different career in mind besides sysadmin.

Patching - why bother (0)

Anonymous Coward | about 9 months ago | (#46777909)

Posting as anon for obvious reasons. We run an estate of roughly 400-500 servers and 3,500 PCs, and we never patch unless we absolutely have to. It solves all sorts of problems, so our XP estate is still running SP2! We have a full ITIL change management process, but we don't patch! Go figure.
