How Do You Manage Dev/Test/Production Environments?

timothy posted about 5 years ago | from the hotbed-of-hotbeds dept.

An anonymous reader writes "I am a n00b system administrator for a small web development company that builds and hosts OSS CMSes on a few LAMP servers (mostly Drupal). I've written a few scripts that check out dev/test/production environments from our repository, so web developers can access the site they're working on from a URL (ex: site1.developer.example.com). Developers also get FTP access and MySQL access (through phpMyAdmin). Additional scripts check in files to the repository and move files/DBs through the different environments. I'm finding as our company grows (we currently host 50+ sites) it is cumbersome to manage all sites by hacking away at the command prompt. I would like to find a solution with a relatively easy-to-use user interface that provisions dev/test/live environments. The Aegir project is a close fit, but is only for Drupal sites and still under heavy development. Another option is to completely rewrite the scripts (or hire someone to do it for me), but I would much rather use something OSS so I can give back to the community. How have fellow slashdotters managed this process, what systems/scripts have you used, and what advice do you have?"

How slashdot does it (5, Funny)

sopssa (1498795) | about 5 years ago | (#29811471)

How have fellow slashdotters managed this process, what systems/scripts have you used, and what advice do you have?"

I do the same as Slashdot.org does: make the changes on live code, expect a little downtime and weird effects, and then try to fix it - while actually never fixing it. After all, the results are not that significant:

- if someone posts about it in a thread, mods will -1 offtopic it and no one will hear your complaint
- many people will "lol fail" at the weird effects, like when kdawson decides to merge two different stories together [slashdot.org]

Re:How slashdot does it (0)

Anonymous Coward | about 5 years ago | (#29811607)

Aaahh, I see the Angel of Cynicism [dilbert.com] has blessed you.

Re:How slashdot does it (3, Insightful)

cayenne8 (626475) | about 5 years ago | (#29812659)

"I do the same as Slashdot.org does - Make the changes on live code, except a little downtime and weird effects and then try to fix"

That's not that far from the truth in MANY places and projects I've seen.

I've actually come to the conclusion that on many govt/DoD projects, the dev environment in fact becomes the test and production environment!!

I learned that it really pays, when spec'ing out the hardware and software you need, to get as much as they will pay for on the 'dev' machines... because it will inevitably become the production server as soon as stuff is working on it, the deadline hits, and there is suddenly no more funding for a proper test/prod environment.

Slashdot and this company ... (2, Funny)

Krishnoid (984597) | about 5 years ago | (#29812709)

Just roll them into one [thedailywtf.com]. It's even got a catchy name.

You are not a n00b (5, Insightful)

davidwr (791652) | about 5 years ago | (#29811537)

You may be a new system administrator, but you are not a n00b.

A n00b wouldn't realize he was a n00b.

Re:You are not a n00b (0)

Anonymous Coward | about 5 years ago | (#29811613)

A n00b wouldn't realize he was a n00b.

You, sir, just changed my life.

- mr. n00b

Re:You are not a n00b (-1, Offtopic)

sopssa (1498795) | about 5 years ago | (#29811675)

but if nobody tells a n00b that he is a n00b, is he really a n00b then?

Re:You are not a n00b (0)

Anonymous Coward | about 5 years ago | (#29811845)

the n00bs wouldn't believe it.

Re:You are not a n00b (-1, Offtopic)

paimin (656338) | about 5 years ago | (#29811863)

What if the n00b is in the forest? Does anyone hear the n00b fart?

Re:You are not a n00b (0, Offtopic)

N!k0N (883435) | about 5 years ago | (#29811909)

probably not... but then said n00b will be spamming help/shout/local/whatever about how they changed the controls since beta...

Re:You are not a n00b (0, Offtopic)

sopssa (1498795) | about 5 years ago | (#29812059)

but since the n00b is a n00b, will anyone actually hear a n00b screaming?

Re:You are not a n00b (0)

N!k0N (883435) | about 5 years ago | (#29812259)

depends on whether or not they're watching aforementioned shout/local/help/whatever channels...

Re:You are not a n00b (1)

Sam36 (1065410) | about 5 years ago | (#29812611)

Hey I am a n00b you insensitive cod!

Re:You are not a n00b (0, Offtopic)

sakdoctor (1087155) | about 5 years ago | (#29811665)

They could be a n00b.
If they only have a wooden sword and shield, and 100 gold pieces then they're probably a n00b.

Re:You are not a n00b (3, Funny)

hydroponx (1616401) | about 5 years ago | (#29812537)

OR they're Link.....

Re:You are not a n00b (3, Interesting)

Eil (82413) | about 5 years ago | (#29812663)

If he is indeed allowing FTP logins over the public Internet (as the submission suggests), he is a n00b whether or not he realizes it.

Re:You are not a n00b (0)

Anonymous Coward | about 5 years ago | (#29812723)

and your alternative is?

Re:You are not a n00b (1)

TaggartAleslayer (840739) | about 5 years ago | (#29812935)

VPN, SSH, SFTP.

Re:You are not a n00b (0, Redundant)

hesaigo999ca (786966) | about 5 years ago | (#29812941)

Yeah, right, see if that one works on WoW!

Separate SVN deploys (3, Informative)

Foofoobar (318279) | about 5 years ago | (#29811547)

Create separate SVN deploys as separate environments. Deploy them as subdomains. If they require database access, create a test database they can share or separate test databases for each environment. Make sure the database class in the source is written as DB.bkp so when you deploy it, your deployed DB class won't be overwritten by changes to the source DB class.
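
A minimal sketch of that provisioning step in shell, assuming Apache vhosts keyed on the checkout directory; the repo URL, paths, and database name below are all hypothetical:

    # check out one environment of one site as a subdomain docroot
    SITE=site1; ENV=dev
    svn checkout "http://svn.example.com/$SITE/trunk" "/var/www/$ENV/$SITE"
    mysqladmin create "${SITE}_${ENV}"
    # an Apache vhost then maps $SITE.developer.example.com to /var/www/$ENV/$SITE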

Re:Separate SVN deploys (2, Informative)

Antique Geekmeister (740220) | about 5 years ago | (#29812459)

Do _not_ use Subversion for this. Use git, even if you have to use git-svn to point it to an upstream Subversion repository. Subversion's security models in UNIX and Linux are exceptionally poor, and typically wind up storing passwords in clear-text without properly notifying you. (Now it notifies you before storing it, but uses it automatically.) Subversion also has very poor handling of multiple upstream repositories, and there is no way to store local changes locally, for testing or branching purposes, and only submit them to the central repository when your changes are complete.

git is faster, lighter weight, and performs far better for distributed systems, each of which may require local configurations.

Re:Separate SVN deploys (4, Insightful)

Foofoobar (318279) | about 5 years ago | (#29812693)

Git does not have integration with Apache and other tools that developers still find useful. Trac integrates with Subversion, as do several other tools. You also cannot coordinate Git with your IDE. Don't get me wrong: it is definitely where version control will be in the future, but the tools to support it have to get there first before widespread adoption should be advised for day-to-day use.

Re:Separate SVN deploys (2, Informative)

Antique Geekmeister (740220) | about 5 years ago | (#29812829)

What do you mean, git doesn't integrate with Apache? It works well as an Apache client, and there's 'viewgit' if you need a bare web GUI. And for this purpose, locally recordable changes seem critical.

Re:Separate SVN deploys (1)

Foofoobar (318279) | about 5 years ago | (#29813007)

You are talking about a repo browser vs. an Apache module. Git does not have anything like mod_dav_svn yet, so there is no full integration with Apache. Many people in the Git community want something like mod_dav_svn, but there isn't anything like it yet.

As I stated, the tools aren't there yet. Good version control, but the support tools have yet to be built.

Re:Separate SVN deploys (3, Informative)

garaged (579941) | about 5 years ago | (#29812837)

Re:Separate SVN deploys (0)

Anonymous Coward | about 5 years ago | (#29812961)

By integration with Apache I believe he means the ability to serve content from branches within git. For example, if you have 3 branches (dev, stage, prod), you can't point Apache at stage ... because with a branch there are no separate copies of the files on disk.

Re:Separate SVN deploys (4, Informative)

AlXtreme (223728) | about 5 years ago | (#29812785)

Subversion's security models in UNIX and Linux are exceptionally poor, and typically wind up storing passwords in clear-text without properly notifying you.

Auth token caching can easily be disabled, and svn export, not svn checkout, should be used for deploying test/prod environments (I've seen way too many people deploy from a checkout).

Git (or any other distributed version control system) is great if you are into distributed development, but don't blame the tool when you don't know how to use it properly or expect it to be something that it's not.
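
For example, a deploy from an export rather than a checkout (a sketch; the tag URL and path are hypothetical):

    # no .svn directories and no cached credentials land on the prod box
    svn export --force "http://svn.example.com/site1/tags/1.4.2" /var/www/prod/site1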

Re:Separate SVN deploys (1)

Antique Geekmeister (740220) | about 5 years ago | (#29813051)

"Auth token caching" is enabled by default, with no server or system to disable it. It's only disabled on a client by client basis: this is completely unacceptable in security terms and always has been. So unless you have direct control of the source code for Subversion for all the systems, then no, you can't "easily disable it".

'svn export' is fairly insane for most configuration environments, since it provides no hint of which files have been altered or modified locally against the base repository's version, provides no access to the log history, does not honor 'svn:ignore' to ignore or report differences as needed, and does not provide any later hint of where the configuration files came from. One can _write_ a complex set of configurations and tools to try and provide that, but once you do, you've gotten into more scripting than the original poster wants to deal with.

Don't blame subversion's poor excuse for a security model nor its mishandling of multiple simultaneous branches on my "lack of understanding". I understand it thoroughly, and these are unnecessary flaws when git is around.

Re:Separate SVN deploys (1)

IMightB (533307) | about 5 years ago | (#29812939)

I prefer Mercurial > git > SVN, but otherwise the grandparent is a good suggestion.

Re:Separate SVN deploys (1)

Jack9 (11421) | about 5 years ago | (#29813021)

Do _not_ use Subversion for this. Use git, even if you have to use git-svn to point it to an upstream Subversion repository. Subversion's security models in UNIX and Linux are exceptionally poor, and typically wind up storing passwords in clear-text without properly notifying you. (Now it notifies you before storing it, but uses it automatically.) Subversion also has very poor handling of multiple upstream repositories, and there is no way to store local changes locally, for testing or branching purposes, and only submit them to the central repository when your changes are complete.

Ahem, DO_USE_SUBVERSION for this. It's simple, fast, and easy to manage and teach. If you keep your dev/test environments secured by IP filtering to your network, there's no reason to use anything more elaborate for security. There's no need to version local changes, as the users can decide on and implement their own local versioning. Sheesh.

happy with phing (2, Informative)

tthomas48 (180798) | about 5 years ago | (#29811585)

There's really only so much you can do generically. I'm really happy with phing. I use the dbdeploy task to keep my databases in a similar state. I build on a local machine, deploy via ssh and then migrate the database.

I'd suggest that rather than checking out at each level, you create a continuous integration machine using something like CruiseControl or Bamboo, then push out build tarballs and migrate the database.
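
A sketch of that push step, assuming the CI job has already produced a tarball and a phing target exists on the far side; the host, file, and target names are hypothetical:

    # push a CI-built tarball to staging, unpack it, and migrate the database
    scp builds/site1-r1234.tar.gz deploy@staging.example.com:/tmp/
    ssh deploy@staging.example.com \
        'tar xzf /tmp/site1-r1234.tar.gz -C /var/www/site1 && cd /var/www/site1 && phing migrate-db'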

Re:happy with phing (1)

eratosthene (605331) | about 5 years ago | (#29812207)

I second this approach. My company has used phing quite successfully to manage both the staging and production instances of our code.

global config (0)

Anonymous Coward | about 5 years ago | (#29811601)

I use a config file that sets a global variable that all scripts reference to determine which environment (dev/test/prod) the code is running in. I also use CVS and RPM to handle code management and pushes.

Have the hosts email problems to an email account (1)

denis-The-menace (471988) | about 5 years ago | (#29811669)

When I did this years ago, each server would run scripts to read logs, etc., and if they found something bad they would email me what they found.

Simple and scalable

Re:Have the hosts email problems to an email accou (3, Informative)

BitZtream (692029) | about 5 years ago | (#29811747)

Never heard of a loghost eh?

Re:Have the hosts email problems to an email accou (1)

denis-The-menace (471988) | about 5 years ago | (#29812373)

Ever heard of UnixWare?!

remember this WAS 10 years ago!

(Now, get off my lawn!)

Re:Have the hosts email problems to an email accou (1)

Simon (S2) (600188) | about 5 years ago | (#29812591)

That's interesting. At the moment we have a loghost, and all logs of all applications go to that syslog server. Now we face the problem of allowing developers access to those logs. Say you have 50 production apps logging to that log server: do you know of some software (ideally a webapp) that can be configured to let developers log in and see the logs for the application they are responsible for? We could simply share the log files with a Samba share, but a webapp that has some kind of integrated tail, deep linking to specific lines, color highlighting, and stuff like that would be über cool.

Re:Have the hosts email problems to an email accou (1)

UNIX_Meister (461634) | about 5 years ago | (#29812899)

Try Splunk. It should do exactly what you need: a way to "grep" through all the logs intelligently.

puppet (0)

Anonymous Coward | about 5 years ago | (#29811695)

Not sure if this is what you're looking for, but a common solution is a configuration management tool.

Try Puppet (http://reductivelabs.com/products/puppet). It's simple, fast, and written in Ruby.

Duct tape (0)

Anonymous Coward | about 5 years ago | (#29811709)

We do it the unix way: duct tape svn, sqsh, rsync, and sendmail together with shell scripts. Reconciliation of what went where can be a little hairy, documentation is sparse, and some safeguards I'd like to see are not there, but it's a good base. I'm actually taking a break from documenting the deployment system at this very moment...

go virtual (1, Interesting)

Anonymous Coward | about 5 years ago | (#29811721)

Have a perfect virtual machine image ready. You can bring up a new server in about 5 minutes.

Re:go virtual (2, Interesting)

palegray.net (1195047) | about 5 years ago | (#29812329)

I was going to suggest using virtualized environments as well, but that still leaves the admin with the task of automating the management of all his different systems. Frankly, I've always approached this the way he's already doing it: with a set of scripts that manage things.

I'm not aware of any systems that do all this for you while still being flexible enough to accommodate lots of unique requirements. The script-based systems I've employed over the years all followed some basic rules:
  • Increment the version number and make a new DNS entry for each committed changeset.
  • Only allow migrations to "test" from "dev", and from "test" to "prod".
  • Allow automatic reverting to a previous version in test or prod, but require manual merging of changesets from later revisions to put them back into the older upper stage.

I've successfully managed hundreds of sites and web apps in this manner, with minimal fuss. Virtualization adds extra complexity in some respects, but makes other things easier, as each VPS can have its own customized environment. As long as everything in dev, test, and prod uses the same base VPS environment, problems should be minimal.
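
A sketch of one promotion step under those rules; every path, version scheme, and name here is hypothetical:

    # promote site1 from dev to test as an immutable, versioned copy
    SITE=site1
    VER=$(svnversion "/var/www/dev/$SITE")    # rule 1: one version per committed changeset
    rsync -a --delete "/var/www/dev/$SITE/" "/var/www/test/$SITE-r$VER/"
    ln -sfn "/var/www/test/$SITE-r$VER" "/var/www/test/$SITE"   # the test vhost/DNS entry points here
    # reverting (rule 3) is just re-aiming the symlink at an older $SITE-r* directory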

Final Solution (1)

Javarufus (733962) | about 5 years ago | (#29811725)

Put all of your application workspaces in an easily escapable situation involving an overly elaborate and exotic death.

But, if in doubt, add laser beams.

I thought... (1)

pyrr (1170465) | about 5 years ago | (#29811765)

...testing was what the production environment was for. Nothing like having dozens of end users flooding the help desk with calls because someone messed with a server or an active database. They take care of all that pesky and tedious testing for you!

/sarcasm (in case you couldn't tell)

Start an OSS Project (1)

royallthefourth (1564389) | about 5 years ago | (#29811777)

If there's no project that fits the bill, develop one internally and release it as an OSS project. It'll add some nice OSS experience to your resume and also add visibility for your employer. If it succeeds, it'll be a big deal for your company. If it doesn't, at least you got the project done. Sounds like everyone wins.

I've never actually done this (my employer balks at the suggestion), but I'd love to have that sort of opportunity.

Re:Start an OSS Project (1, Insightful)

Anonymous Coward | about 5 years ago | (#29812035)

Or... if you want a solution that's actually ever completed: go with some of the other suggestions ;)

Acronym hell... (1, Funny)

Anonymous Coward | about 5 years ago | (#29811779)

I am a n00b system administrator for a small web development company that builds and hosts OSS CMSes on a few LAMP servers (mostly Drupal). I've written a few scripts that check out dev/test/production environments from our repository, so web developers can access the site they're working on from a URL (ex: site1.developer.example.com). Developers also get FTP access and MySQL access (through phpMyAdmin). Additional scripts check in files to the repository and move files/DBs [...]

If you have a WYSIWYG front end done DIY style, then you need to CYA and RTFM, simply because the newer-style AJAX IDEs don't support IDEA. Make sure that you mind your P's and Q's, or the FBI will make you MIA thanks to the P.A.T.R.I.O.T. Act. It's pretty much a PEBKAC issue. Oh, did I mention that you should leverage as many TLAs as possible?

Re:Acronym hell... (0)

Anonymous Coward | about 5 years ago | (#29812871)

Too bad he only used 5 TLAs of the 8 you highlighted (and one of those is part of a larger name, 'phpMyAdmin', but I gave credit for it).

Don't fret, 5/8 isn't bad!

SVN etc. (2, Informative)

djkitsch (576853) | about 5 years ago | (#29811839)

My company (for upwards of 10 years) has been using:
  • An SVN (Subversion) server on our dev box
  • Developer- or group-specific subdomains in IIS/Apache on the dev server, to which working copies are checked out
  • Deployment to live servers via SVN checkout when the time comes
  • Global variables to check which server the app's running on, and to switch between DB connection strings etc.

Still not figured out an efficient way to version MSSQL and MySQL databases using OSS, though. Open to suggestions!

Re:SVN etc. (0)

Anonymous Coward | about 5 years ago | (#29812155)

version MSSQL and MySQL databases using OSS

mysqldump, mysqlimport, hg or git, and a couple short bash scripts
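
Something like this from cron, for instance (a sketch; the database and repo path are made up):

    # snapshot the database into a git repo so diffs show schema/data drift
    mysqldump --skip-dump-date --routines site1_db > /srv/db-history/site1_db.sql
    cd /srv/db-history && git add site1_db.sql && git commit -m "nightly DB snapshot"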

Re:SVN etc. (1)

dhasenan (758719) | about 5 years ago | (#29812401)

My company wrote a small project for this (not released in any form, though). It has a collection of SQL scripts identified by date (e.g., "2009-10-15 1415 Renamed Foo.Bar to Foo.Baz.sql") and a table with columns for script name and date applied. Any scripts it finds that aren't listed in that table, it applies in order according to the date in the script name.

You should be able to hack this together in a day or so.
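
A rough shell version of that hack, assuming a tracking table named schema_changes(script, applied_at); all names are hypothetical:

    # apply any dated migration scripts not yet recorded in schema_changes
    for f in migrations/*.sql; do    # lexical glob order == date order
        name=$(basename "$f")
        n=$(mysql -N -e "SELECT COUNT(*) FROM schema_changes WHERE script='$name'" site1_db)
        if [ "$n" -eq 0 ]; then
            mysql site1_db < "$f" &&
            mysql site1_db -e "INSERT INTO schema_changes VALUES ('$name', NOW())"
        fi
    done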

Re:SVN etc. (0)

Anonymous Coward | about 5 years ago | (#29812605)

I've had decent results with Liquibase.

"LiquiBase is an open source (LGPL), database-independent library for tracking, managing and applying database changes. It is built on a simple premise: All database changes (structure and data) are stored in an XML-based descriptive manner and checked into source control. "

http://www.liquibase.org/

Leverage your issue tracking and cvs (3, Interesting)

dkh2 (29130) | about 5 years ago | (#29811871)

If you're able to script deployments from a configuration management host you can script against your CVS (SVN, SourceSafe, whatever-you're-using).

There are a lot of ways to automate managing which file version is in each environment, but a smart choice is to tie things to an issue tracking system. My company uses MKS (http://mks.com), but BugTracker or Bugzilla will do just as well.

Your scripted interface can check out/export the specified version from controlled source and FTP/SFTP/XCOPY/whatever it to the specified destination environment. For issue-tracker-backed systems, you can even have this process driven by issue ID to automatically select the correct version based on the issues to be elevated. Additionally, the closing task of the elevation process can then update the issue tracking system as needed.

Many issue tracking systems will allow you to integrate your source management and deployment management tools. It's a beautiful thing when you get it set up.

Ant. (0)

Anonymous Coward | about 5 years ago | (#29811873)

Ant.

Hilarity (1, Interesting)

eln (21727) | about 5 years ago | (#29811917)

Another option is to completely rewrite the scripts (or hire someone to do it for me), but I would much rather use something OSS so I can give back to the community. How have fellow slashdotters managed this process, what systems/scripts have you used, and what advice do you have?"

I'm sure you have a legitimate problem, and there are lots of ways to solve it, but this line just cracks me up. You COULD write it yourself or pay someone, but if you use someone else's Open Source work (note: nothing is said about contributing to an OSS project, just using it) you'd be "giving back to the community."

Translation: I have a problem, and I don't want to spend any of my own time or money to solve it, so I'm going to try and butter up the people on Slashdot in hope of taking advantage of the free labor force that is the OSS community.

Simply using Open Source software is not giving back to the community...using open source software is what gives you the moral imperative to give back to the community, which you can do through contributing code, documentation, beta testing, providing support on the mailing lists, or whatever.

Mod Parent Up! (1)

acklenx (646834) | about 5 years ago | (#29812557)

How can this be a troll? This is so true. I would, however, grant that extending the user base for an OSS project isn't exactly a bad thing, nor is having your company exposed to more OSS - but I sure wouldn't call it "giving back to the community".

Re:Mod Parent Up! (3, Insightful)

Like2Byte (542992) | about 5 years ago | (#29812997)

Where eln failed is in how his post turned into nothing more than a personal attack on the parent poster. In the open source world there are users, documenters, developers, and visionaries. And guess what - the majority of those are users, and most users will never contribute to your project.

Simply attacking the guy with crass, harsh statements is not in the vein of "The Gift Culture."

So, yes, eln's comment is a troll comment.

As for a "moral obligation"? That's laughable. If you give someone something for free, don't expect them to do anything for you. Maybe that person doesn't have the time to invest in giving back at the moment. Making inflammatory comments will certainly push them away from your base of constituents, and that means fewer users. So attacking people who don't know better is counterproductive and does not serve the OSS cause or its beliefs.

If the someone feels to compelled to "give" or "give back" to the open source community - in whatever manner - count the community fortunate. Expecting anything is counter to the ideals of "The Gift Culture."

Please reread ESR's book.

Re:Mod Parent Up! (1)

Like2Byte (542992) | about 5 years ago | (#29813033)

If the someone feels to compelled to "give" or "give back" to the open source community - in whatever manner - count the community fortunate. Expecting anything is counter to the ideals of "The Gift Culture."

Gad! How did I miss that in my 4th preview!?

CORRECTION: If someone feels compelled to "give" or "give back" to the open source community - in whatever manner - count the community fortunate. Expecting anything in return is counter to the ideals of "The Gift Culture."

I manage them with an Iron Fist Of Death (1)

wiredog (43288) | about 5 years ago | (#29811935)

Or I would if I were in management. For some reason they won't promote me here.

Most important thing in my book (5, Interesting)

BlueBoxSW.com (745855) | about 5 years ago | (#29811941)

Most important thing is to treat your code and data separately.

Code:

Dev -> Test -> Production

Data:

Production -> Test -> Dev

Many developers forget to test and develop with real and current data, allowing problems to slip further downstream than they should.

And make sure you back up your Dev code and your Production data.

Re:Most important thing in my book (0)

Anonymous Coward | about 5 years ago | (#29812247)

Many developers forget to test and develop with real and current data, allowing problems to slip further downstream than they should.

Some companies don't allow developers to test with real/production data. Production data may not belong to the company (e.g. cloud computing providers) so they have strict controls over who has access to the data.

Re:Most important thing in my book (1)

elnyka (803306) | about 5 years ago | (#29812779)

Many developers forget to test and develop with real and current data, allowing problems to slip further downstream than they should.

Some companies don't allow developers to test with real/production data. Production data may not belong to the company (e.g. cloud computing providers)

Production data might not belong to the hosting company, but the developers that will test against it work for the client company. So, for a hosting company or a sysadmin, it is simply a matter of moving production data (belonging to the client company) from a production env (owned by the hosting company and leased to the client) into another environment (the test environment, owned by the hosting company, but leased to the client as well.)

This would be different from the hosting company moving production data around for purposes other than backups or recoveries without approval from the client company that owns the data.

so they have strict controls over who has access to the data.

Nope, they might have strict controls over who has access to a production environment. That is, the only entity accessing production data in a production environment is a production app running on a production server using a specific means (i.e., JDBC on port 1521 from machine A to machine B). Nobody else, not even developers, gets access to it.

But accessing production data moved at the client's request to a non-production server (done off-hours) that developers have access to according to the SLA, that's totally feasible, cloud or not.

The only time where accessing production data is a concern is for things that fall under HIPAA (i.e., medical records). Then and there, other restrictions apply, but these are business-side restrictions (forced on a business by itself to comply with federal law), not hosting or data-ownership issues.

Re:Most important thing in my book (1)

elnyka (803306) | about 5 years ago | (#29812827)

The only time where accessing production data is a concern is for things that fall under HIPAA

(i.e., medical records). Then and there, other restrictions apply, but these are business-side restrictions (forced on a business by itself to comply with federal law), not hosting or data-ownership issues.

I meant to say that the only time where accessing production data is a concern is with data subject to some sort of confidentiality agreement, security clearance, or privacy law (such as medical records under HIPAA).

Re:Most important thing in my book (1)

mcrbids (148650) | about 5 years ago | (#29812399)

Dang. Out of mod points, so I'll reply.

Parent covers an EXCELLENT point. We've gone to great lengths to replicate data from production to test/dev. We have scripts set up so that in just a few commands we can replicate data from production to test/dev, and they do data checks to make sure that something stupid isn't done (e.g., copying a customer's data from test -> production and wiping out current data with something 2 weeks old).

In our case, each customer has their own database and their own set of files. A single command sends it all, e.g.:

production$ senddata.sh customer3 testserver;

And that sends all the data for "customer3" to the test server, into a temp folder where it can be loaded as needed. This last bit is important because, often, when testing data, you screw things up and need to "start fresh" without having to wait another hour for the data to re-replicate over rsync. To keep things fast, all customers' data gets sent over to the test server nightly, and to the dev server weekly (a la cron). By keeping the off-site data fresh, it takes some 8-12 hours to get all of our customers' data over rsync at night.

testserver$ loaddata customer3;

That loads the data for customer3 from the temp directory into the test server. We have similar interfaces for publishing scripts from our dev server to test and production servers. We do something similar for backups, which are off-site to a separate location, behind a strict firewall, mirrors across multiple drives. (no, not RAID, 3 actual separate copies on separate disks) We back up our entire SVN repo, all scripts, all databases, and all files for all customers offsite nightly.
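
For the curious, a rough sketch of what a senddata.sh along these lines might look like; this is not the actual script, and every path in it is hypothetical:

    #!/bin/sh
    # senddata.sh <customer> <desthost>: ship one customer's DB dump and files
    CUST=$1; DEST=$2
    mysqldump "${CUST}_db" | gzip > "/tmp/${CUST}.sql.gz"
    rsync -az --delete "/srv/customers/$CUST/files/" "$DEST:/srv/incoming/$CUST/files/"
    rsync -az "/tmp/${CUST}.sql.gz" "$DEST:/srv/incoming/$CUST/"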

We have our test environment virtually identical to our production, only with fewer servers in the cluster. In this way, we have a "hot fail" server that has recent data at all times, and enough performance to do a meaningful job if we should somehow lose our primary production cluster.

All 4 environments would have to be compromised before we lose meaningful amounts of data. We have a tested and continuously verified D/R server that doubles as our test environment. We use SVN in our dev environment so that we can all work together smoothly.

All with virtually zero administration overhead after setup. It's amazing what you can do with bash, cron, and a few PHP/Perl scripts!

Re:Most important thing in my book (1, Insightful)

Anonymous Coward | about 5 years ago | (#29812765)

And don't forget: it's DTAP:

  Development -> Test -> Acceptance -> Production

  and vice versa (for the data).

Puppet (2, Informative)

philipborlin (629841) | about 5 years ago | (#29811957)

If you are in the Unix/Linux world, take a look at Puppet. You provision a set of nodes (node inheritance is allowed) and manage all your scripts, config files, etc. from one central location (called the puppet master). Changes propagate automatically to all servers they apply to. It is built around keeping the configuration files in a versioned repository and is ready to use today.

Tools, Practices and Standards (5, Informative)

HogGeek (456673) | about 5 years ago | (#29811973)

We utilize a number of tools depending on the site, but generally:

Version control (Subversion) for management of the code base (PHP, CSS, HTML, Ruby, Perl, ...) - http://subversion.tigris.org/ [tigris.org]
BCFG2 for management of the system(s) patches and configurations (uses SVN for managing the files) - http://trac.mcs.anl.gov/projects/bcfg2 [anl.gov]
Capistrano/Webistrano for deployment (Webistrano is a nice GUI for Capistrano) - http://www.capify.org/ [capify.org] / http://labs.peritor.com/webistrano [peritor.com]

However, all of the tools above mean nothing without defining very good standards and practices for your organization. Only you and your organization can figure those out...

build a self service virtual lab (0)

Anonymous Coward | about 5 years ago | (#29811981)

If you have some $$, off-the-shelf tools like VMware Lab Manager will kick some serious butt in this type of environment.

Check out Springloops (3, Informative)

Fortunato_NC (736786) | about 5 years ago | (#29812005)

It's hosted Subversion, with a slick web interface that walks you through darn near everything. You can configure development / test / production servers that can be accessed via FTP or SFTP and deploy new builds to any of them with just a couple of clicks. It integrates with Basecamp for project management, and it is really cheap - it sounds like either their Garden or Field plans would meet your needs, and they're both under $50/month.

Check them out here. [springloops.com]

Not affiliated with them in any way, other than as a satisfied customer.

Look at Capistrano, steal ideas from Rails (4, Informative)

bokmann (323771) | about 5 years ago | (#29812029)

Capistrano started life as a deployment tool for Ruby on Rails, but has grown into a useful general-purpose tool for managing multiple machines with multiple roles in multiple environments. It is absolutely the tool you will want to use for deploying a complex set of changes across one-to-several machines. You will want to keep code changes and database schema mods in sync, and this can help.

Ruby on Rails has the concepts of development, test, and production baked into the default app framework, and people generally add a 'staging' environment to it as well. I'm sure the mention of any particular technology on slashdot will serve as flamebait - but putting that aside, look at the ideas here and steal them liberally.

You can be uber cool and do it on the super-cheap if you use Amazon EC2 to build a clone of your server environment, deploy to it for staging/acceptance testing/etc., and then deploy into production. A few hours of a test environment that mimics your production environment will cost you less than a cup of coffee.

I have tried to set up staging environments on the same production hardware using Apache's virtual hosts... and while this works really well for some things, other things (like an Apache, Apache module, or third-party software upgrade) are impossible to test when staging and production are on the same box.

Re:Look at Capistrano, steal ideas from Rails (0)

Anonymous Coward | about 5 years ago | (#29812351)

This is the best advice. I use Capistrano with Django and some PHP apps and it works like a dream. It's pretty easy to set up symbolic links to control different environments on deploy. Another option from the Ruby world is Vlad the Deployer, though I haven't used it as much.

Separate Development and Production First . . . (1)

crrkrieger (160555) | about 5 years ago | (#29812081)

. . . everything else comes after that. A small illustration:

When I was system admin for a small brokerage, one of my first tasks was to determine the hardware configuration of every server. There was one particular server that I needed to shut down in the process. I asked every employee (it was that small) if there were any critical services on that machine. All agreed it was OK to take it offline. For the next 15 minutes, while the machine rebooted, no trading happened, because the main program was linking to some libraries that were served off of that server.

I immediately put a new task at the top of my to-do list: reconfiguring the network. Thereafter, production was done on one network and development on another. The router between them would not allow nfs mounts. Production users were not given accounts on development machines. Developers were no longer given the root password, but it was kept in a safe for emergencies.

I know that wasn't what you were asking, but that is the first thing I would take care of.

Bash and git (1)

Phred T. Magnificent (213734) | about 5 years ago | (#29812121)

I do mine with ssh, bash and git, for the moment. I'm looking at moving to something like puppet [reductivelabs.com] for system configuration, though. I've also heard good things about cobbler for initial provisioning, but it's mainly aimed at an RHEL environment and that's not what we're using.

The 2009 Utah Open Source Conference [utosc.com] had several good presentations on infrastructure automation. See, in particular, Phil Windley's slides on puppet and cobbler [windley.com] (hopefully audio and maybe video will be available soon).

best practice (1)

petes_PoV (912422) | about 5 years ago | (#29812175)

... is just to call everything beta, then you never have to bother with testing, or documenting anything (though, to be fair, you didn't ask about documentation - so I guess you'd already decided not to bother with that detail). That way you get much faster development time and keep your time to market down to the same as your competitors - who are using the same techniques.

The trick then is to move on to another outfit just before it hits the fan. Don't worry about your customers - if they are running web based businesses, chances are most of them will have gone down the tubes in a year or so. Long before they get anywhere near release 1.0.

What's a DEV environment? =:O (1)

starglider29a (719559) | about 5 years ago | (#29812179)

People are supposed to TEST this stuff first!?


Did he forget the Sarcasm Mark ~, or does he not know about it?

TPS reports (1)

Joe The Dragon (967727) | about 5 years ago | (#29812201)

TPS reports with lots of cover sheets.

I didnt know (1)

SolarStorm (991940) | about 5 years ago | (#29812209)

that /. allowed religious discussions

Re:I didnt know (0)

Anonymous Coward | about 5 years ago | (#29812617)

I'm agnostic, you insensitive clod!

My advice (0)

.Bruce Perens (150539) | about 5 years ago | (#29812237)

You're still new. Get out and choose a new career before you lose too much retirement.

Packaging Packaging Packaging... (4, Informative)

keepper (24317) | about 5 years ago | (#29812269)

It's amazing how this seemingly obvious question always gets weird and overly complex answers.

Think about how every Unix OS handles this. Packaging!

Without getting into a flame war about the merits of any packaging system:

- Use your native distribution's packaging system.
- Create a naming convention for packages (e.g., web-frontend-php-1.2.4, web-prod-configs-1.27).
- Use meta-packages (packages whose only purpose is to list out what makes up a complete system).
- Make the developers package their software, or write scripts for them to do so easily (this is a lot easier than it seems).
- Put your packages in different repositories (dev for dev servers, stg for staging systems, qa for QA systems, prod for production, etc.).
- Use other system management tools to deploy said packages (either your native package manager, or puppet, cfengine, func, sshcmd scripts, etc.).

And the pluses? You always know absolutely what's running on your system. You can always reproduce and clone a system.

It takes discipline, but this is how it's done in large environments.
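
A sketch of the repository side on an RPM-based distro; the repo paths and package names are hypothetical:

    # one repository per environment; promotion = copying a package and reindexing
    cp web-frontend-php-1.2.4-1.noarch.rpm /srv/repos/qa/
    createrepo /srv/repos/qa
    # a QA box whose yum config points only at the qa repo then just runs:
    yum install web-qa-configs web-frontend-php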

Re:Packaging Packaging Packaging... (1)

Antique Geekmeister (740220) | about 5 years ago | (#29812619)

Now hire me 3 engineers to preserve and bundle all this arcane packaging, and to do the package management itself. If you've created something smart enough to compare and manage all the packages, you've simply transferred the scripting into the package management, at a net engineering cost of all the time spent building those packages.

There are several tools I favor:

1: Checklists (to keep track of features needed or enabled for each machine)
2: Actual package management: installing Apache or Perl modules needs to be of a specific version with a well-defined release and configuration files, not just "install the one from CPAN" or "update it from the Subversion repository".
3: git, not Subversion, to store local configuration files. Changes can be recorded locally, then submitted centrally, which Subversion and CVS have never worked well to do.

Re:Packaging Packaging Packaging... (2, Informative)

chrome (3506) | about 5 years ago | (#29812687)

+1. Also, use the package signing system to verify that the packages distributed to machines are really released, and use the package dependencies to pull in all the required packages for a given system. If you do it right, all you need is an apt repository, and you type "apt-get install prod-foobar-system" and everything will be pulled in and installed, in the correct order. I converted a site to this method (on Fedora Core, many years ago) and we went from taking a day to build machines to 30 minutes: 1) Put the MAC address in the kickstart server and assign the appropriate profile. 2) Boot the machine from the network. 3) Watch it build; the profile for that machine lists the packages for the environment being built. 4) Reboot. The machine would come up with the right IP, completely configured and running. It just works.

Hudson (0)

Anonymous Coward | about 5 years ago | (#29812315)

Deploy a continuous build server. You've already done most of the work by writing all of these scripts. It will transfer nicely over to Hudson or maybe CruiseControl. As a bonus, if you guys use Selenium tests, you can automate those too.

Also, get a ticketing system (Jira/Trac/whatever) with a configurable workflow that you're comfortable with, so you can track deployments and approvals/disapprovals. The workflow should be configurable because you never know if you'll start adding environments or release steps (maybe your company's packaged releases won't get deployed to production as the final step, or you might add a customer environment).

Finally, print out instructions for the developers and post them on a wall or wiki where they are plainly visible. It's all useless if you're the only one who knows how the system works.

No, thanks (0)

Anonymous Coward | about 5 years ago | (#29812355)

OSS CMS !? Call me old fashioned (and it won't be the first time), but I will use plain HTML, thank you.

How do i manage those environments? (1)

Mister Whirly (964219) | about 5 years ago | (#29812367)

Carefully. And you?

Use CVS if you're feeling burly (1)

rho (6063) | about 5 years ago | (#29812495)

CVS (of whatever flavor) can help you do this. It's a pain in the ass, and everybody will hate it, but it works.

I've done this with virtual machines as well. It's kinda whizzy to do, but probably overkill.

The simplest way for me was simply to use rsync. Rigid delineation between live and test/dev environments is important. Use a completely separate database (not just a different schema) and, if possible, a completely separate database server. Changes to the database schema should be encapsulated in update scripts, tightly controlled, and thoroughly tested in the test environment. Use a database that supports transactions, and use them. Updating the live site should be performed by updating a clone of the live site in another directory; that way, if everything goes tits up for some unexpected reason, you can revert back to the old site while you lick your wounds. Virtual machines definitely make this all techy and bitchin', but editing httpd.conf and restarting Apache also works.

The best solution is going to be customized to the needs of the project. Most projects don't need a dev/test/live arrangement; dev and live are sufficient. The most important thing is to establish a framework for how changes are made to the code base or database, and stick to it. CVS will help enforce this, but at the cost of having to use CVS.
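
A sketch of the clone-and-swap update described above, with hypothetical paths:

    # deploy tested code into a fresh directory; keep the old tree for rollback
    NEW="/var/www/live-$(date +%Y%m%d)"
    rsync -a /var/www/test/ "$NEW/"
    # repoint DocumentRoot in httpd.conf at $NEW, then:
    apachectl graceful
    # if it all goes tits up, point DocumentRoot back at the old directory and reload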

KVM/Vmware/OpenSolaris zfs go virtual (1)

jwhitener (198343) | about 5 years ago | (#29812573)

The simple answer? Virtual machines. If you have to stay with Linux, go with VMware or, for a free solution, KVM. See http://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine

If you want to run LAMP on OpenSolaris/Solaris, Solaris zones are very robust, easy-to-manage virtual environments that pair well with ZFS snapshots. Sun also provides enterprise Ops Center software that can be used to manage the zones via a GUI: copy/create/rollback, etc.

After that, smart system administration is required to keep things easy to manage.

How you choose to separate data, apps, and the OS will depend on your requirements, but in general, keeping them separate is a good idea.

Another good idea is a /mnt/safe area mounted inside the prod/test/dev boxes that is NFS-shared between them. Oftentimes I'll make a request of my sysadmin: "Please refresh test and dev with prod." So I copy changes or other work to /mnt/safe, and then he overwrites dev and test with a recent ZFS snapshot (virtual machine snapshot) of production.

I see you use the words check-in/out. I'm assuming you have Subversion or something similar running that you use to check in/out to a new location. Do your developers need access to a VCS? If so, I'd just build it into the virtual machine, so each developer has their own Subversion installation.

The only thing you need to do when using zones/virtual machines (at least with zones) is change the hostname and IP, but that is easily scripted.

Re:KVM/Vmware/OpenSolaris zfs go virtual (1)

rho (6063) | about 5 years ago | (#29812819)

Sun also provides enterprise ops center software that can be used to manage the zones via a gui. Copy/create/rollback, etc..

This is very important. One of the most important traits of a 3-tiered development system is setting it up so that the "test" environment can be reset back to a clone of the live site. "Test" should be just that: for testing. If your test environment goes pear-shaped, who cares? Clone the live site, run the updates from "dev", and your "test" is back.

In general it's rarely a good idea to provide a migration path from "test" to "live". As development teams get larger it may make sense to have intermediate changes done on "test" and then ported up to "live" and down to "dev". At this point you'll be looking at a migration manager of some sort.

Re:KVM/Vmware/OpenSolaris zfs go virtual (1)

garaged (579941) | about 5 years ago | (#29812911)

Just to be the anal one in the thread: have you heard of Linux-VServer or OpenVZ? Sticking with Linux would be much more efficient with those than with VMware or KVM.

Quick Brief (4, Informative)

kenp2002 (545495) | about 5 years ago | (#29812641)

Develop 4 Environment Structures

Development (DEV)
Integration Testing (INTEG)
Acceptance (ACPT)
Production (PROD)

For each system create a migration script that generically does the following:
(We will use SOURCE and DEST for environments. You migrate from DEV->INTEG->ACPT->PROD)

The migration script at its core does the following:

1) STOP Existing Services and Databases (SOURCE and DEST)

2) BUILD your deployment package from SOURCE (this means finalizing commits to SVN, creating a dump of SOURCE databases, etc.). If this is a long process, you can leave DEST running and STOP DEST at the end of the build phase. I do this, as builds in my world can take 2-3 days.

3) CONDITION your deployment package to be configured for the DEST environment (simple find-and-replace scripts to correct database names, IP addresses, etc.; these should be config files that are read and processed). This is common if there are different security SAPs, certificates, etc. that need to be configured. For instance, you may not have SSL enabled in DEV, but you might in INTEG or ACPT.

4) BACKUP DEST information as an install package (this is identical to the BUILD done on the SOURCE; this BACKUP can be deployed to restore the previous version). This should be the same function you ran on SOURCE, with a different destination (say, Backups versus Deploys).

5) MIGRATE the install package from SOURCE to DEST, then START DEST

6) TEST, TEST, and RETEST

7) If all tests pass then APPROVE. This is the green light to re-start the SOURCE services so development can move on.

That is a brief of my suggestion.
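
A skeleton of such a script, following the seven steps above; every function name here is a hypothetical stand-in you would implement per system:

    #!/bin/sh
    # migrate.sh SOURCE DEST, e.g. "migrate.sh INTEG ACPT" (sketch only)
    SRC=$1; DST=$2
    stop_services "$SRC"; stop_services "$DST"        # 1) STOP
    build_package "$SRC" "/deploys/$SRC.tar"          # 2) BUILD
    condition_package "/deploys/$SRC.tar" "$DST"      # 3) CONDITION (DB names, IPs, certs)
    build_package "$DST" "/backups/$DST.tar"          # 4) BACKUP (same function, different target)
    install_package "/deploys/$SRC.tar" "$DST"        # 5) MIGRATE
    start_services "$DST"
    run_tests "$DST" || restore_package "/backups/$DST.tar" "$DST"   # 6) TEST, with a backout
    start_services "$SRC"                             # 7) APPROVE: development moves on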

DEV is obvious
INTEG is where you look for defects and resolve defects. Primary testing.
ACPT is where user and BL acceptance testing occurs and should mirror PROD in services available.
PROD ... yeah...

I handle about 790+ applications across 2000+ pieces of hardware, so this may appear to be overkill for some, but it can be as simple as 4 running instances on a single box with a /DEV/ /IT/ /ACPT/ /PROD/ directory structure and MySQL running 4 different databases. The "script" could be as simple as dropping the DEST database and copying the SOURCE database under a new name. Another option is creating modification SQLs that are applied onto the existing database,

e.g. STOP, UPDATE, START,

to preserve existing data. In the case of Drupal, your DEV might pull a nightly build and kick out a weekly IT update, a biweekly ACPT update, and a monthly PROD update.

JUST REMEMBER THAT YOU MUST MAKE SURE THE PROCESS IS ALWAYS REVERSIBLE!!

The script to deploy needs to handle failure. There has to be a good backout.

You should have a method to back up and restore the current state; integrate that into the script. Always back up BEFORE you make changes and AGAIN after you change. DEV may need to look at the failed deploy data (perhaps a substitution or patch failed, and they need to find out why).

Build both the Before backup and the After backup into the migration script.

And always 'shake out' a deployment at each environment level to make sure problems don't propagate. If you find problems in IT, you test to make sure what you found in IT is resolved in ACPT. Your testers should NOT normally be finding and filing new defects in ACPT environments, with the exception of inter-application communication that might not be available in earlier environments. (A great example: ACPT might have the ability to connect to, say, a marketing company's databases, where you use dummy databases in IT and DEV.) 80/20 is the norm for IT/ACPT defects that I see.

Good luck. Use scripts that are consistent and invest in a good migration method. It works great for mainframes and works great in the distributed world too.

A special condition is needed for final production, as you may need temporary redirects for online services (commonly called Gone Fishing pages or Under Construction redirects).

AppLogic (1)

ldgeorge85 (1660791) | about 5 years ago | (#29812657)

I would suggest using some virtualization technology for this: something that makes it easy to deploy multiple copies of the same template, easily manage different large-scale architectures, and such. I have personally used 3tera's AppLogic and have had a lot of great experiences there. With a few physical servers you can manage multiple separate VMs, create templates, automate functionality... blah blah. Good luck finding the best solution for you, though.

FTP? phpMyAdmin? (1)

Culture20 (968837) | about 5 years ago | (#29812857)

Web devs need to have security enforced, or they won't think about it for their sites. Shut off FTP and enforce SFTP only. If bandwidth is a factor in choosing FTP over SFTP, at the very least use Kerberized FTP. Make certain that phpMyAdmin is behind HTTPS and that authentication is required. Yes, this means they have to use two passwords. Tough.
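
For the SFTP-only part, one approach is chrooted internal-sftp (a sketch, assuming OpenSSH 4.9+; the 'webdevs' group and chroot path are hypothetical):

    # append to /etc/ssh/sshd_config, then restart sshd:
    Match Group webdevs
        ChrootDirectory /var/www/%u
        ForceCommand internal-sftp
        AllowTcpForwarding no
    # note: each chroot directory must be root-owned and not group/world-writable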

Read "Pro PHP Security" (1)

garyebickford (222422) | about 5 years ago | (#29812863)

I'm just now reading "Pro PHP Security" (Snyder & Southwell, Apress), and it's got a lot of good information - hands-on examples, best practices and technical background that is useful whether you support PHP or not. It covers both local and web-based attacks such as XSS, SQL injection, vuln exploits, etc.

Among other things, it suggests you set up virtual servers for each domain user. You could use FreeBSD jails, Linux virtualization tools, etc. - the book is agnostic on which ones you use, and doesn't cover a lot of detail in this area, at least so far. Virtualization of this type almost completely eliminates the ability of any client user to access anything outside their virtual server space.

It also suggests that you automate the creation of new domains and hosts, with form-entry input of some kind, perhaps a well-secured web-based front end, that ensures everything gets done properly. Such a parametric front end (of whichever type) helps prevent the sysadmin from forgetting or purposely ignoring certain setup tasks. I have not kept up to date in this area, because it's not something I do any more, but I'm sure there are packages that do most or all of this work.

You might even use webmin for some of this.

I also have on my shelf the Cisco book "Data Center Fundamentals" available directly from Cisco. It's $60 but has a slew of information.

I'm sure there are other books with more information, I just don't have them off the top of my head.

The answer's simple (1)

Shadow-isoHunt (1014539) | about 5 years ago | (#29812881)

Segue into a new paradigm and experience increased synergy - consolidate already!

Kidding, naturally.

If it's anything like where I work ... (0)

Anonymous Coward | about 5 years ago | (#29812963)

The Dev environment has better hardware than the Production environment. God forbid Dev and Test actually have the same specs so, you know, you can TEST ON IT!

Server names end in "dev" for Development, "tst" for Test, and "prd" for Prod, except where they end in "d", "t", "p", "pg", "02"... unless it's SQL Server, in which case they end in "Dev", "Tst", "Prod"... mostly.

All three of them have different names for Roles granting the same access to the same tables. And I have to fill out authorization forms before the databases are built, so good luck on schema names. And we're in a project where the technical spec was just rewritten yesterday, and process testing (that thing after dev) was supposed to finish 2 weeks from now. We had to start writing test cases two weeks ago, before I had access to the database, and I might as well throw a good chunk of the work away.

A small note (0)

Anonymous Coward | about 5 years ago | (#29812965)

I modded, so posting as AC.

A minor technique we use is a versioned install structure, with symlinks to the current version. E.g.:

    /opt/application_root
        /application_1023                 (1023 is our build number)
        /application_1034
        /application_1045
        /current ==> /application_1045    (symlink to one of the install trees)

This allows easy rollback if an upgrade fails, without having to reload from staging. Our update scripts build the new directory, stop the server processes, re-aim the symlink, and restart the server processes. All the execution scripts use the path /opt/application_root/current/...
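
The update flow, sketched with a hypothetical build number and service name:

    # upgrade to build 1056; rollback is just re-aiming the symlink
    tar xzf application_1056.tar.gz -C /opt/application_root/
    service myapp stop
    ln -sfn /opt/application_root/application_1056 /opt/application_root/current
    service myapp start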

Hudson Build Promotion (1)

bihoy (100694) | about 5 years ago | (#29813057)

One solution that I have implemented at several companies is to use Hudson [hudson-ci.org] and the Hudson Promoted Builds [hudson-ci.org] plugin. Read this brief introduction [blogspot.com] to the concept.

Symbolic links (1)

CyberDong (137370) | about 5 years ago | (#29813071)

I've found it's useful to put any env-specific properties in external properties files, and then make a copy for each env. On each environment, there's a one-time exercise of creating symbolic links to point to the appropriate files.
    e.g.
      ln -s db.properties.dev db.properties
      ln -s server.properties.dev server.properties
      ...

Then just use the links in the app code.
