
Environment Variables - Dev/Test/Production?

Woody asks: "It's common knowledge that the development environment should be the same as the test environment, which should mimic the production environment whenever possible. I'm currently working on a project where we develop on Dell, test on IBM, and the production system is up in the air. For embedded systems the differences can be running on a VM versus running on the actual hardware. What I want to know is what kinds of problems Slashdot readers have faced because of different environments being used for different phases of their projects, and how have they been overcome?"
This discussion has been archived. No new comments can be posted.

  • My company maintains a full Production, Disaster Recovery, UAT, Systems Test, and Development environment.

    There are plenty of issues. However, the breakdown is as such:

    Dev: Development only. New code, new app, etc.
    Systems Test: Dev+/Integration testing
    UAT: Pre-production. Used for 'User Acceptance Testing' (Prod with stale data)
    DR: Full mirror of production located 50 miles away.
    PROD: Full production environment.

    • Re:Apropos (Score:4, Interesting)

      by SpaceLifeForm ( 228190 ) on Wednesday January 19, 2005 @02:46PM (#11410762)
      That is all well and good; however, you didn't address the issue of environment variables, which your testing/QA people are typically afraid to change (or, conversely, do change and create a mess).

      The best approach is to create a single environment variable that defines the environment, for example DEVENV=DEV, DEVENV=TEST, DEVENV=IT, DEVENV=QA, DEVENV=PROD (or not needed for production at all). Then elsewhere (controlled, and basically kept hidden from the testers or users), the other environment variables are set based upon the value of DEVENV. Examples of these environment variables would be your PATH variables, ORACLE_SID, etc.

      Then, the final problem is educating the testers what DEVENV means, and more importantly, why that one has to be correct and that they should not mess with any other environment variables.

      If the testers can't understand that, you need smarter testers.
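A minimal Python sketch of the DEVENV scheme described above. The DEVENV and ORACLE_SID names are the poster's; the mapping values and PATH_PREFIX name are made up for illustration:

```python
import os

# Hypothetical per-environment settings keyed by DEVENV.
ENV_SETTINGS = {
    "DEV":  {"ORACLE_SID": "devdb",  "PATH_PREFIX": "/opt/app/dev/bin"},
    "TEST": {"ORACLE_SID": "testdb", "PATH_PREFIX": "/opt/app/test/bin"},
    "QA":   {"ORACLE_SID": "qadb",   "PATH_PREFIX": "/opt/app/qa/bin"},
    "PROD": {"ORACLE_SID": "proddb", "PATH_PREFIX": "/opt/app/prod/bin"},
}

def derive_environment(devenv: str) -> dict:
    """Return the derived variables for a given DEVENV value.

    Testers only ever set DEVENV; everything else is derived from it,
    so there is exactly one knob they can get wrong."""
    try:
        return ENV_SETTINGS[devenv]
    except KeyError:
        raise ValueError(f"Unknown DEVENV: {devenv!r}") from None

if __name__ == "__main__":
    devenv = os.environ.get("DEVENV", "DEV")
    for name, value in derive_environment(devenv).items():
        os.environ[name] = value
        print(f"{name}={value}")
```

Rejecting unknown DEVENV values loudly, rather than falling back to a default, is what keeps a mistyped setting from silently pointing a tester at production.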

  • by BigLinuxGuy ( 241110 ) on Wednesday January 19, 2005 @01:40PM (#11409798)
    I've worked on several similar projects, and one of the things that bit us was when the developers made assumptions based on characteristics of the development platform that were radically different from the production platform. Assumptions around the size of data (not just word size, but buffer sizes for some text retrieved from "standard" libraries) caused a number of problems on several Java projects I've worked on (not to mention the ones in other languages). Data size seems to be one of the more commonly overlooked items in design and implementation, so I'd urge you to do your due diligence to ensure that you don't find those after the fact.

    Another area is performance. For reasons I have yet to understand, there seems to be a prevalent myth that performance can be bolted on after the fact (the "make it work, then make it work fast" mindset). The truth of the matter is that performance has to be engineered in from the beginning or you simply spend a lot of time and money rewriting code that should never have been written in the first place. Sadly, educational institutions don't appear to place any emphasis on actual performance or teach the principles of performance tuning as part of their curricula.
    • by GoofyBoy ( 44399 ) on Wednesday January 19, 2005 @01:57PM (#11410034) Journal
      >there seems to be a prevalent myth that performance can be bolted on after the fact (the "make it work, then make it work fast" mindset).

      It's not just a myth; it's a damned lie that hurts society as a whole.
      • by Godeke ( 32895 ) on Wednesday January 19, 2005 @06:38PM (#11413549)
        So, which "performance" are we optimizing for? Memory footprint? Disk access? CPU utilization? Network utilization?

        It turns out you rarely know which will bite your face off until you get a representative data set running. If you made the wrong choice, you probably made things *worse* than if you had opted for working code that could be re-factored from "easy to read" to "mostly easy to read and performs where it counts".

        When you are working on server based solutions that will be hit hard, all of those could be your bottleneck: or none.
        • >So, which "performance" are we optimizing for? Memory footprint? Disk access? CPU utilization? Network utilization?

          That's a good question.

          You need to find that out at the start of programming. And yes, it's difficult, but there are ways to help. It won't be perfect, but it can't be an afterthought.
          • I would agree that being an "afterthought" isn't wise... but I have watched many people do really stupid things like "I want this to run as fast as possible so I will cache all my data close to me in objects", only to watch the server thrash pages so hard that it was useless. I have watched the other extreme happen as well: "Let's just hit the DB when I need data", and watched poorly thought-out queries bring the system to its knees.

            For every algorithm there are trade-offs. The most common are memory vs. time
          • You need to find out that at the start of programming. And yes, its difficult but there are ways to help things. It won't be perfect but it can't be an after thought.

            The problem is, to really know for sure, you have to build an extensive test framework and dataset. So extensive that when done it will look suspiciously like version 1.0

            That's what is meant by avoiding premature optimization. Instead, the better approach is to keep it in mind, but focus on correctness and modularization. That way you can

    • Here I would disagree. Performance should rarely even be a consideration until the product works.

      As such, this does not mean to use braindead implementations, but worry about a working product first.

      The first step in a serious project is to work through the design. This means looking at the interfaces that the software will provide, whether those are UI or API. Those are the targets that your users will work with.

      Then you will need to have test cases for the interfaces that you have agreed on. Thes
      • by SunFan ( 845761 ) on Wednesday January 19, 2005 @03:24PM (#11411267)
        Performance should rarely even be a consideration until the product works.

        ...until the prototype works. The final product has to perform well. Otherwise, people will find trivial excuses to say it sucks and needs to be replaced, even if, on the whole, it is a decent product.
      • Throwing hardware at performance problems isn't a viable solution.

        Unless your problem is CPU-bound, disks are going to be the big server bottleneck, and disk performance doesn't provide enough improvements to make a huge difference.

        Painstakingly engineering an elegant, but slow solution to a business problem only results in that elegant, tested code being ripped out to provide performance benefits later.
          It's a bad assumption that you are taking a terribly long time implementing. The only painstakingly slow part of the process should be the design of the interfaces. The rest of the code after that point should be modular enough to replace a poorly performing module, while the interface still exists.

          This allows the best use of developer time by producing "cheap" code for most work. Then the "expensive" code can be written only in the cases where it is needed. Certainly there will be a certain amount of re
    • For reasons I have yet to understand, there seems to be a prevalent myth that performance can be bolted on after the fact (the "make it work, then make it work fast" mindset). The truth of the matter is that performance has to be engineered in from the beginning or you simply spend a lot of time and money rewriting code that should never have been written in the first place.

      Myth, eh? Personally, I do no "bolting". The steps I happily follow are
      • make it work
      • make it right
      • make it fast

      The notion here is that

      • I absolutely agree.

        There is an addendum that I-can't-remember-who added, but it has to do with how a programmer often does most of the important optimizations automatically, anyway. Basic "duh" stuff about reducing nested loops, which data structure to use in which circumstances, etc. Things that experience, if not common sense, illuminate in such a way that making the "optimization" is quick, easy, and so natural that you probably didn't notice you were doing it.

        Sure, you might be wrong, or a hidden gotc
    • Yes...and yet, no. (Score:3, Insightful)

      by oneiros27 ( 46144 )
      There's another myth about projects -- the requirements were actually correct.

      Odds are, if someone is rushing for you to get a project done on an unrealistic timeline, they haven't done their analysis of the project correctly, either. Having _any_ prototype up there can help drive the requirements analysis, so that you can figure out what needs to be changed.

      But yes, then you scrap that entire thing, so you can do it correctly.

      If you're making minor modifications to an existing system, then yes, you mos
    • I worked on a project that didn't take any consideration of performance for a web environment. The project was to build a web interface, using XML, JSP and servlets to connect to various legacy database systems. The company was trying to cash in on the web craze and get everything to use a web interface for all of their products.

      On the individual machines and with the small sample databases, the system worked fine. However, I was the first one to connect it to a development copy of a live database. On the fi

    • I partly agree with you. Performance should be implemented from the start through good programming practice and clean development standards - Programmers should be aware of the programming practices resulting in efficient applications and programming practices that result in slow applications.

      However, I disagree that this should be a primary concern, because if it were then all development would be done in assembly. Programmers need to understand how to program contingencies into their code, so i

  • Cost? (Score:1, Insightful)

    by Anonymous Coward
    What if you can't afford a second machine to cover the "duplicate" test environment?
    • That's actually a good question. I've worked for ultra-tiny companies that ran across the cost-prohibitive issue. We would develop on what we had: a generic Windows 9x distributed system and some elderly woman with a Mac. We would drag beta-projects onto the production server so she could hit them and let us know what rendered correctly and what didn't. Nine times out of ten what rendered right for her would screw up the Windows side. So we had to go back and clean up what we could.

      We called it our Cool R

    • Re:Cost? (Score:2, Insightful)

      by dubl-u ( 51156 ) *
      What if you can't afford a second machine to cover the "duplicate" test environment?

      About 95% of the time I hear this, it's false economy. Most hardware is pretty cheap these days, and good developers are very expensive. It takes very little time savings to justify the purchase of new hardware.

      In the few cases where it's too expensive to duplicate hardware, then you can fall back on careful profiling and simulation. For example, if you know that your production hardware has X times the CPU and Y times t
      • Most hardware is pretty cheap these days, and good developers are very expensive. It takes very little time savings to justify the purchase of new hardware.

        Hardware is a fixed cost that bone-headed managers can wash away and claim a savings. Salaries are harder to get rid of. The result: no hardware, development costs twice as much, and it is easier to talk away labor delays than you might think.
      • About 95% of the time I hear this, it's false economy

        About 95% of the time I hear this, it's office politics getting in the way.

  • Couldn't get the production build team to set -DNDEBUG because it hadn't been done on development/test builds. SEI level whatever!

    No amount of explaining or arguing could do it: the idea just broke their concept of software production, plain and simple.

    Just had to pull the assert()s out.
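For context: C's assert() only compiles away when NDEBUG is defined at build time, which is what the production build team above refused to do. Python's -O switch behaves analogously, which makes for a self-contained illustration of why a debug-flavored build keeps its asserts (the temp-file plumbing here is incidental):

```python
import os
import subprocess
import sys
import tempfile

# A tiny script whose assert always fires unless asserts are compiled out.
SCRIPT = "assert 1 == 2, 'debug-only check'\nprint('ran to completion')\n"

def run(optimized: bool) -> int:
    """Run the snippet with or without -O, Python's rough analogue of -DNDEBUG."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(SCRIPT)
        path = f.name
    try:
        args = [sys.executable] + (["-O"] if optimized else []) + [path]
        return subprocess.run(args, capture_output=True).returncode
    finally:
        os.unlink(path)

if __name__ == "__main__":
    # Without -O the assert fires and the process exits non-zero;
    # with -O the assert is stripped and the script runs to completion.
    print("debug-style build exit code:", run(optimized=False))
    print("optimized build exit code:", run(optimized=True))
```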
  • When you're in an environment where you have to develop software for hardware that is also being developed internally, the problem gets much worse. You have to deal with the fact that the hardware you're developing on is pre-production hardware that is almost certain to be different from the shipping hardware.

    There's not really too much to be done about it. We don't want to have the code check to see which hardware it's on, as that would make it easier to have more bugs. In some ways, this can be good,
    • by marcus ( 1916 )
      Sometimes you just have to live with it and slog your way through.

      Schedules and design reviews be damned, we have a product to deliver!

      Design, document, go over hardware design docs, code, configure simulator to mimic the hardware, test and debug in sim, debug the sim configuration, build various utils that you expect to help test the hardware...It all sounds great and proper by the book but doesn't mean jack until you and the hardware EE get your hands on the real stuff. First you have to pass the smoke t
      • I recall one chip that shipped installed backwards. It happened to work in most situations too - except for a few corner cases that were blamed on software for a long time. (I'm not an EE, so I can't give you any more detail)

        Another time I spent a month chasing down bugs in my code, only to discover the test system was broken; but since it passed all the other tests with the workaround in place for my code not being done, they blamed me. It is not a good feeling to find out that your code was bug free afte

  • One issue I encountered was a Win 98 vs. Win NT difference in the way they handle memory allocation and freeing. The Win98 machine seemed to always give the same memory for the login screens (yes, screens). In particular, the title of the screen was always at the same memory location. We were developing on Windows 98 at that time. When we started testing/QA on NT, NT allocated different memory for the title bar. It was a bizarre C memory leak. NT started showing garbage in the title of the second login
  • Forcing your project to run in different environments is

    • a real PITA.
    • a great way to uncover problems in your project before release
    Of course, sometimes the problems you uncover aren't in your project, but in the underlying platforms. Ugh.
  • The app we are developing has to run on JRun 3.1 under HP Virtual Vault (an obsolete discontinued derivative of HP-UX), with JDK 1.3.1. Please don't ask why we are using such ancient infrastructure for a new application :(

    We were allocated only one development environment on JRun for this project, but we have to use it as a user test environment, so we develop and do QA on Tomcat 3.3 under Linux. Tomcat 3.3 and JRun 3.1 supposedly implement the same version of the JSP and Servlet specifications, but JRun

  • There are many dimensions of environment. Shell Environment is one of my favorites. Many many moons ago, a customer changed the root .profile/.cshrc or whatever so that "ls" became "ls -CF" at all times. This was on a clustered server.

    The next time the cluster tried to reconfigure, it failed mysteriously. Customer couldn't figure it out, local support people couldn't figure it out, I was one of those people but out of town (I'm convinced given the nature of the problem that I would have figured it out),

    • Try "su - oracle". The dash (or "-l") makes it a "login shell" the upshot being that it imports all your environment stuff.

      -Peter
      • Yes, I know how su works; I was MAKING A POINT. Many DBAs do NOT know how su works, and would be confused by the difference.

      • For reasons I don't understand, I've seen the "su -" syntax not get quite all the user's environment. It still seems to inherit some stuff no matter what.
        • I've noticed that too. It's particularly irksome in my production environment when I log in to my administrative account, then `su - mysql`. For some reason, the MySQL CLI tool continues to connect as my admin user, not the 'mysql' user.

          Sure, I can override with --user=mysql, or I could take the time to track down the issue. I'm just using it as an example that 'su -' is still different from an actual login (and a potential warning for people using Role accounts w/ MySQL CLI).

          Although it is handy for slo
    • In my experience, /bin/ls can be aliased just as well as ls can (I'm assuming that's how it was set up in .cshrc). If you want to make sure junk like that isn't there in your scripts, set up the environment from scratch. There are ways to start a shell with absolutely no environment; the easiest is to start the shell with no login scripts and no interactive scripts turned on.

      Just hard-coding /bin/ls is just as susceptible to the problem you are talking about. The real problem here is that you are using
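The "set up the environment from scratch" advice applies to launching subprocesses from code, too. A small Python sketch (the SOME_LOCAL_JUNK variable is made up) that passes an explicit, minimal environment instead of inheriting the caller's:

```python
import os
import subprocess

def run_clean(cmd, keep=("PATH",)):
    """Run a command with a minimal, explicit environment.

    Nothing from the caller's environment is inherited except the
    variables listed in `keep`, so the command behaves the same no
    matter what the invoking user has exported in .profile/.cshrc."""
    env = {k: os.environ[k] for k in keep if k in os.environ}
    result = subprocess.run(cmd, env=env, capture_output=True, text=True)
    return result.stdout

if __name__ == "__main__":
    os.environ["SOME_LOCAL_JUNK"] = "surprise"  # hypothetical stray variable
    print("leaked?", "SOME_LOCAL_JUNK" in run_clean(["env"]))
```

Note that shell aliases were never exported in the first place (they live only in interactive shells); it is exported variables and PATH entries that this scrubbing guards against.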

  • What I want to know is what kind of problems have Slashdot readers been faced with because of different environments being used for different phase of their projects, and how have they been overcome?

    If possible, convince management that your Test/Staging environment needs to be beefy enough to function as an "emergency backup system", and GET at least marginally equivalent hardware.

    On the side, if your production DB server is a cluster and your development DB server is not then where is your DBA going to practice
  • For development, I write C code which is converted by a tool so that it compiles and work under Windows as a binary plugin for a simulator. The production system is an OS-less Big-Endian microcontroller. Since the simulator is running on a little-endian PC, the tool we wrote has to modify all the code so that it stores its working variables in Big-Endian format and converts it before use. It also has some cleverness to convert all the bit-fields so that their storage and access is the same on both systems.
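The byte-order juggling described above can be made concrete with Python's struct module (the value is illustrative). Reading big-endian storage with a little-endian layout silently scrambles the bytes:

```python
import struct

value = 0x12345678

big = struct.pack(">I", value)     # byte order on a big-endian microcontroller
little = struct.pack("<I", value)  # byte order on the little-endian PC/simulator

print(big.hex())     # 12345678
print(little.hex())  # 78563412

# Misinterpreting big-endian bytes as little-endian yields a different number:
misread = struct.unpack("<I", big)[0]
print(hex(misread))  # 0x78563412
```

This is why the tool the poster describes has to rewrite every load and store: the bits in memory are identical, and only the interpretation differs.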
  • by _LORAX_ ( 4790 ) on Wednesday January 19, 2005 @02:56PM (#11410898) Homepage
    Repeat after me: it's not about the platform, unless the production units are seriously constrained in some way that cannot be replicated in development or testing. The one thing that has consistently hamstrung projects in the past is not being able to replicate, in development and testing, a dataset that comes close to reproducing production bugs. Without a full production dataset in either development or testing, you push something out that works, only to find that the data causes some completely unrelated thing to break.

    This is of course assuming that the software platform is adequately compatible and does not introduce stupid bugs because of differences between servers.
  • by Pathetic Coward ( 33033 ) on Wednesday January 19, 2005 @03:07PM (#11411053)
    test environment: well, none.

  • Please, take the effort to initialize a whole new instance of the database for each environment. Sure, it's a bit of work, but just don't go around stomping on each other's feet thinking it's more convenient. Yes, I have witnessed such stupidity before.
  • Where I work we have Dev, QA, and Production.

    All of it is Windows-based (this company drank the MS Kool-Aid), but all three platforms are basically equivalent.

    The SQL servers are physically three different machines, but they're all clones of each other.

    The web servers are either local machine running XP (Dev) or Web Farms running Win 2K3.

    We have a transfer utility to facilitate moving the files back and forth, but it doesn't get everything. This is a mixed blessing, but it's handy when you DON'T want so
  • I have encountered some problems regarding the difference in the number of CPUs between the development+test environment and the production environment. The development and test systems usually have 1 or 2 CPUs. I am currently hunting a timing-sensitive bug that only occurs on logical partitions with more than 3 CPUs assigned, and we don't have that kind of hardware ourselves. We are not allowed to log into the production system (even though it is only a customer test setup) and we have to debug via additi
  • by maxmg ( 555112 ) on Wednesday January 19, 2005 @06:01PM (#11413139)
    We are developing J2EE applications using a continuous integration server (currently anthill open source [urbancode.com], but others are available). Ant is used for building, testing and deploying.
    Now we have a number of environment-specific settings, for example database connection details, etc.
    All environment-specific stuff goes into .properties files which are included conditionally by the ant script (based on a single environment variable or ant parameter). All of those properties files live in a directory conf/<environment name>, where environment name is either a developer's name, or "test", "production", "staging", etc. Each night, new deployment packages for each of the different deployment targets (test, prod, etc.) are built and made available through anthill. Some of those targets are also automatically deployed for the testing team every night, so the latest features are always available to be tested somewhere the next day.
    Every successful build is tagged in CVS with an autoincrementing build number. When we have identified a release candidate, it is as simple as instructing anthill to (re)build a deployment bundle for a particular target with a specific build number. That deployment bundle (usually a .ear or .war) is then simply dropped into the production environment - remember that all the environment-specific settings are already included in that particular bundle. The benefit of this is that all environmental settings are maintained in the main source repository, the downside being that different packages exist for the different targets, but in practice that has not proved to cause any problems.
    An additional benefit is that each environment's individual settings (including development machines) are always available to all developers for comparison and troubleshooting.
    I guess the lesson learned is this:
    • Automate your build!
    • Extend your build system to include the environmental configuration
    • Automatically build separate targets for different environments
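The conf/&lt;environment name&gt; layout described above is easy to sketch outside Ant as well. A hypothetical Python version (the app.properties file name and the keys are invented) that resolves settings by target name:

```python
import tempfile
from pathlib import Path

def load_properties(conf_dir: Path, environment: str) -> dict:
    """Parse a simple key=value properties file from conf_dir/<environment>/.

    One directory per deployment target, following the poster's
    convention: a developer's name, "test", "staging", "production", ...
    Lines starting with '#' are treated as comments."""
    props = {}
    for line in (conf_dir / environment / "app.properties").read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

if __name__ == "__main__":
    # Tiny demo: build a conf/ tree on the fly and read the "test" target.
    with tempfile.TemporaryDirectory() as tmp:
        conf = Path(tmp) / "conf"
        (conf / "test").mkdir(parents=True)
        (conf / "test" / "app.properties").write_text(
            "# test target\ndb.url = jdbc:oracle:thin:@testhost:1521:testdb\n"
        )
        print(load_properties(conf, "test"))
```

As in the Ant setup, the point is that the bundle for a target is built from files kept in source control, not from hand-edited settings on the target machine.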
  • It's common knowledge that the development environment should be the same as the test environment which should mimic the production environment whenever possible.

    This is quite mixed up.

    The test environment should always be *identical* to the production environment, in all aspects where it is feasibly possible. Running tests in an environment that is in any way significantly different from your production environment is basically a waste of time and can lead to huge numbers of unforeseen holes in the softwa

  • As you will be running in a VM environment, I'm assuming it's Java. I develop on a Windows box, check into source control, build on a Solaris box, and deploy to Linux machines for some of our systems. Fun! Well, it's really not that big of a deal. Two major complaints.

    One of the things that you will find as you migrate around different OSes is the default character set. Sometimes it's UTF-8, others it's one of the ISO sets. You may even get Kanji or some other one that you're not prepared for if you distribute yo
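The default-charset drift described above is easy to demonstrate: the same bytes decoded under two different platform defaults produce different text.

```python
# Bytes written by a UTF-8 system, as the writer intended:
data = "café".encode("utf-8")

print(data.decode("utf-8"))       # café (correct)
print(data.decode("iso-8859-1"))  # cafÃ© (what a Latin-1 platform default shows)
```

The fix is the same in any language: never rely on the platform default; name the encoding explicitly at every encode/decode boundary.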
  • I've had to manage Java and .Net projects that dealt with corporate applications (no embedded or driver-dependent routines), so take this comment for what it's worth...

    My philosophy is to leave developers quite free. Of course, they have to get authorization to run a newer version of the VM (and that version has to be part of the roadmap), but I give them flexibility in their configuration. This is done with the vision of helping innovation through trial and error. The key there is to have clear guidelines
  • Where I work, people started using absolute paths for the new Java application. Dumb...
    Moved to the production server and it obviously did not work.
    • Also shared library paths.

      Different sites have differing conventions about where software should be installed. I'm surprised at how many people seem to have no conception of what it's like to run on anything other than a standalone system.

      Sure, on a system like that, it may make sense to install software in /usr/local, and have home directories in /home. But remember that those conventions are nominal at best. The real world is a lot more varied, because it has to be.
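A common remedy for the hard-coded absolute paths mentioned upthread is to resolve everything relative to the application's own location. A Python sketch (the conf/app.properties file name is hypothetical):

```python
from pathlib import Path

# Anchor on the application's own location instead of a hard-coded
# absolute path, so the same build works on any server layout.
# (Fall back to the working directory when __file__ is undefined,
# e.g. in an interactive session.)
APP_ROOT = Path(__file__).resolve().parent if "__file__" in globals() else Path.cwd()

def resource_path(relative: str) -> Path:
    """Resolve a bundled resource relative to the app, not the host."""
    return APP_ROOT / relative

if __name__ == "__main__":
    print(resource_path("conf/app.properties"))
```

Anything that genuinely varies per site (home directories, install prefixes) then belongs in configuration, not in the code.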

  • I've had far more problems because of 'bad' developers who do things like use single-character variable names, fail to add up accounting systems, or code using local dates so that we have to make sure our SQL servers run the US locale.

    Embedded systems and black-box development all have their problems, but you should be able to recognise them and compensate; having a few experienced engineers on the team will help everything go smoothly (and tell you when it's not).

    First off,

    Make sure your procedures are reasonably well
  • hyperthreading (Score:3, Interesting)

    by marcovje ( 205102 ) on Thursday January 20, 2005 @08:05AM (#11418203)

    One of the worst problems I had was related to hyperthreading. It turned out a 3rd-party component that was a black box was thread-safe, but not multi-processing safe.

    When I finally figured it out (of course it happened only once in days, had to write and tune exercisers for weeks to get it to occur within minutes), an update became available.
    (we had already nailed 4 of the 6 spots in that patch ourselves by then)

  • Well, how long do you have?

    - C programs with host names hard-coded in #ifdef...#else...#endif blocks that aren't compiled with the right macro defined for the given environment.

    - 20-character wide terminal device that vomited all over itself when the program tried to display a 21 character string, which we only tested on 80 character terms before sending to production

    - Intermittent communication failures because of a different version of firmware on a network device in production versus dev/QA.

    - WAR fil
  • I develop Java/J2EE apps on a Dell, and our test and production environments are AIX. Other than filenames, which can be stored in a persistence layer, there are rarely any changes that have to be made between environments. I guess one answer is: use the appropriate tool for the job. I see a lot of pissing and moaning about Java not being truly portable, but unless you are using bleeding-edge libraries like non-blocking I/O (JDK 1.4.x revisions have had all kinds of inconsistencies with this), you are very un
  • Generally, this is what we do at our site (we're a MS shop). We use a four stage system, where the first two stages are managed by the Developers and the last two stages are managed by the Production team (usually System Engineers and DBAs).

    Development
    Fully patched Windows XP or Windows 2000 Workstations, installed with all the development tools we need. Developed applications are _NEVER_ executed on the development workstations; this keeps the development platforms from being 'corrupted' by miscoded app

  • The main problem I have faced with development projects where groups/teams work in different environments is:

    Having to deal with compatibility issues on my own due to a variety of environments, and having this workload not properly tagged as part of the development of the product (just-fix-it-don't-want-to-hear-about-it syndrome).

    In the embedded arena, powerful+expensive tools like Windriver Tornado tackle this problem quite well. Support..support..support.

    With less sophisticated tools the amount o
  • <sarcasm>You mean the ones besides %windir%, %tmp% and %systemroot%?</sarcasm>

    In my case, home, home, and, um...oh yeah, home.

    Problems? Well, on Windows, errors, the occasional 0xC0000005 and the occasional total system crash; on Linux? I'm still learning, ask again in a few years...yes, I just program for fun. I'll go now...
