
Environment Variables - Dev/Test/Production?

Cliff posted more than 9 years ago | from the as-consistent-as-possible dept.

Programming 77

Woody asks: "It's common knowledge that the development environment should be the same as the test environment which should mimic the production environment whenever possible. I'm currently working on a project where we develop on Dell, test on IBM, and the production system is up in the air. For embedded systems the differences can be running on a VM versus running on the actual hardware. What I want to know is what kind of problems have Slashdot readers been faced with because of different environments being used for different phases of their projects, and how have they been overcome?"


77 comments


Apropos (1)

mjpaci (33725) | more than 9 years ago | (#11409729)

My company maintains a full Production, Disaster Recovery, UAT, Systems Test, and Development environment.

There are plenty of issues; however, the breakdown is as follows:

Dev: Development only. New code, new app, etc.
Systems Test: Dev+/Integration testing
UAT: Pre-production. Used for 'User Acceptance Testing' (Prod with stale data)
DR: Full mirror of production located 50 miles away.
PROD: Full production environment.

Re:Apropos (3, Interesting)

SpaceLifeForm (228190) | more than 9 years ago | (#11410762)

That is all good and fine, however, you didn't address the issue of environment variables, which your testing/QA people are typically afraid to change (or conversely, they do change and create a mess).

The best approach is to create an environment variable that defines the development environment, for example DEVENV=DEV, DEVENV=TEST, DEVENV=IT, DEVENV=QA, DEVENV=PROD (or not needed for production at all). Then elsewhere (controlled and basically kept hidden from the testers or users), other environment variables are set based upon the value of DEVENV. Examples of these environment variables would be your PATH variables, ORACLE_SID, etc.
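A minimal sketch of that idea on the application side (the DEVENV values and the conf/<env>.properties layout are made up for illustration, not a standard): read the single DEVENV variable and derive everything else from it, so testers only ever touch that one knob.

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    // Illustrative only: DEVENV selects which settings file supplies the rest
    // (paths, ORACLE_SID, JDBC URLs, ...), so nobody edits those by hand.
    public class EnvConfig {
        public static Properties load() throws IOException {
            String env = System.getenv().getOrDefault("DEVENV", "PROD");
            Properties props = new Properties();
            try (FileInputStream in = new FileInputStream("conf/" + env + ".properties")) {
                props.load(in);
            }
            return props;
        }

        public static void main(String[] args) throws IOException {
            System.out.println(load().getProperty("ORACLE_SID"));
        }
    }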

Then, the final problem is educating the testers about what DEVENV means, and more importantly, why that one has to be correct and why they should not mess with any other environment variables.

If the testers can't understand that, you need smarter testers.

Re:Apropos (0)

Anonymous Coward | more than 9 years ago | (#11421433)

That is all good and fine, however, you didn't address the issue of environment variables, which your testing/QA people are typically afraid to change (or conversely, they do change and create a mess).

I hope that you didn't think that the title, "Environment Variables", was referring to actual environment variables. It's just referring to variations in the environments.

Re:Apropos (0)

Anonymous Coward | more than 9 years ago | (#11436475)

Lucky sod. Some of us have to do all of the development, debugging/testing and production builds on our own, on one machine, with no idea of what machine the code is going to roll out on.

Any wonder some programs have bugs?

What a dumb question. (0)

Anonymous Coward | more than 9 years ago | (#11409730)

And misleadingly titled "Environmental Variables", no less.

The only kind of environmental variables I worry about are the ones preceded by an "export" statement.

Environment variables (1)

Bat_Masterson (250306) | more than 9 years ago | (#11410536)

But those are exactly the ones that should be analyzed and changed according to the environment you're working in.

Depends on the discipline of the developers (3, Insightful)

BigLinuxGuy (241110) | more than 9 years ago | (#11409798)

I've worked on several similar projects and one of the items that bit us was when the developers made assumptions based on characteristics of the development platform that were radically different from those of the production platform. Assumptions around the size of data (not just word size, but buffer sizes for some text retrieved from "standard" libraries) caused a number of problems on several Java projects I've worked on (not to mention the ones in other languages). Data size seems to be one of the more commonly overlooked items in design and implementation, so I'd urge you to do your due diligence to ensure that you don't find those problems after the fact.

Another area is performance. For reasons I have yet to understand, there seems to be a prevalent myth that performance can be bolted on after the fact (the "make it work, then make it work fast" mindset). The truth of the matter is that performance has to be engineered in from the beginning or you simply spend a lot of time and money rewriting code that should never have been written in the first place. Sadly, educational institutions don't appear to place any emphasis on actual performance or teach the principles of performance tuning as part of their curricula.

Re:Depends on the discipline of the developers (3, Funny)

GoofyBoy (44399) | more than 9 years ago | (#11410034)

>there seems to be a prevalent myth that performance can be bolted on after the fact (the "make it work, then make it work fast" mindset).

It's not just a myth; it's a damned lie that hurts all of society as a whole.

Re:Depends on the discipline of the developers (4, Insightful)

Godeke (32895) | more than 9 years ago | (#11413549)

So, which "performance" are we optimizing for? Memory footprint? Disk access? CPU utilization? Network utilization?

It turns out you rarely know which will bite your face off until you get a representative data set running. If you made the wrong choice, you probably made things *worse* than if you had opted for working code that could be re-factored from "easy to read" to "mostly easy to read and performs where it counts".

When you are working on server based solutions that will be hit hard, all of those could be your bottleneck: or none.

Re:Depends on the discipline of the developers (1)

GoofyBoy (44399) | more than 9 years ago | (#11414673)

>So, which "performance" are we optimizing for? Memory footprint? Disk access? CPU utilization? Network utilization?

That's a good question.

You need to find that out at the start of programming. And yes, it's difficult, but there are ways to help things. It won't be perfect, but it can't be an afterthought.

Re:Depends on the discipline of the developers (1)

Godeke (32895) | more than 9 years ago | (#11431721)

I would agree that being an "afterthought" isn't wise... but I have watched many people do really stupid things like "I want this to run as fast as possible so I will cache all my data close to me in objects" only to watch the server thrash pages so hard that it was useless. I have watched the other extreme happen as well: "Let's just hit the DB when I need data", and watched poorly thought-out queries bring the system to its knees.

For every algorithm there are trade-offs. The most common is memory vs. time, and some people even think about that one. Forgetting that they have to push the data down a narrow network channel they have just saturated, or being unaware of the impact on the disk array of the "clever" access patterns their algorithm creates, are common oversights. That's assuming they thought through the first part at all.

Personally, I would prefer to think about these implications and then choose a middle-of-the-road, easy-to-understand implementation. If testing (please note: not slapping your assumptions into production based on your guesswork, but testing in controlled load-test environments) shows that we have made performance errors, only then do we swap in harder-to-understand algorithms, caching mechanisms and other complexity-inducing methods. I would say that 10% of our code has been refactored based on such considerations.

Since our programs are web based and we can swap our underlying mechanisms in a day if necessary, perhaps that gives us more flexibility. Perhaps also because we build business systems, where "finding that out at the start of programming" is an absurd thing to say, as regulation changes and business stakeholder changes mean that anything we do today is likely to be ripped out in a month or a year. Premature optimization seems foolish in the face of the reality on the ground: "prematurely expecting to know what this module will *actually* do" in the real world. I would say that 50% of our code has been refactored (over the course of five years in business) based on business considerations.

Some days I wish I was writing code for some well-defined arena that didn't have stakeholders defining the requirements, but instead constraints that remained stable through the hirings and firings and mood swings of the clients. That isn't my world.

Re:Depends on the discipline of the developers (1)

sjames (1099) | more than 9 years ago | (#11439251)

You need to find that out at the start of programming. And yes, it's difficult, but there are ways to help things. It won't be perfect, but it can't be an afterthought.

The problem is, to really know for sure, you have to build an extensive test framework and dataset. So extensive that when done it will look suspiciously like version 1.0.

That's what is meant by avoiding premature optimization. Instead, the better approach is to keep it in mind, but focus on correctness and modularization. That way you can later drop more optimal routines in, and test against a slow but known-correct routine. The problem with highly optimal code is that it often tends towards indecipherability.

Later, the slower, more obvious code can be retained as documentation.

To use the database example, don't directly query the database or cache it in the main code; instead, call magic black boxes. In the first rev, the magic boxes will likely just pass everything to the database. Later versions will likely cache the database results transparently.
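A minimal Java sketch of the "magic black box" idea (names invented for illustration): the first rev passes straight through to the database, and a later rev layers a cache behind the same interface without touching callers.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Callers only ever see the black box interface.
    interface CustomerStore {
        String lookupName(int customerId);
    }

    // Rev 1: pass every request through to the database (stubbed here).
    class DbCustomerStore implements CustomerStore {
        public String lookupName(int customerId) {
            return "customer-" + customerId; // stand-in for the real query
        }
    }

    // Later rev: transparent cache dropped in behind the same interface,
    // testable against the slow-but-known-correct pass-through version.
    class CachingCustomerStore implements CustomerStore {
        private final CustomerStore backend;
        private final Map<Integer, String> cache = new ConcurrentHashMap<>();

        CachingCustomerStore(CustomerStore backend) { this.backend = backend; }

        public String lookupName(int customerId) {
            return cache.computeIfAbsent(customerId, backend::lookupName);
        }
    }

    // Quick demo: wire the cache in front of the pass-through version.
    public class BlackBoxDemo {
        public static void main(String[] args) {
            CustomerStore store = new CachingCustomerStore(new DbCustomerStore());
            System.out.println(store.lookupName(42));
            System.out.println(store.lookupName(42)); // served from the cache
        }
    }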

Re:Depends on the discipline of the developers (0)

Anonymous Coward | more than 9 years ago | (#11436577)

I bolt on performance after the fact all the time. Usually cleaning the behinds of people who won't or can't listen to common sense, and don't have any of their own.

Usually it's a developer(s) who views me as a rival or something, is told to consult with me, doesn't, goes and does whatever they want, it satisfies qa, they put it in production, and it melts down. Then my boss calls me into a conference call and asks me why I let this happen. Often I don't even know the project exists until that point.

Then the idiot who built it fires off suggestions on how to fix it on the call, which are usually idiotically wrong (or else we wouldn't be in this situation). I completely ignore them, fix their application for them, and let them think that I used their suggestion, to avoid the argument and tug-o-war that would surely follow if I disagreed with them. It's just more efficient that way. The downside is my boss remembers that "I let this happen" and the idiot who built the application, which I "allowed to fail", gets the credit for the application, and the fix.

While it's the wrong way to do it, and it causes me a great deal of stress, it's not a myth. Bolting on performance after the fact is just a great big pain in the ass and takes 3x as long as doing it right in the first place.

I prefer peer review over emergencies 100% of the time. Unfortunately, because I am the only one who has any inkling of what "development best practices" really means, my calls for mandatory peer review, supported by documentation, over the past 5 years have been largely ignored, along with most of my other attempts to bring sanity to our development process. The idiots have won and I am on Monster looking for a new job ; ).

I can't wait until I get a new job, leave on good terms, and get the call with the offer for 20k more a year from this company of fools, then turn it down so I won't have to work with schmucks.

And people wonder why I have an attitude.

l8,
AC

Re:Depends on the discipline of the developers (3, Insightful)

FroMan (111520) | more than 9 years ago | (#11410491)

Here I would disagree. Performance should rarely even be a consideration until the product works.

As such, this does not mean to use braindead implementations, but worry about a working product first.

The first step in a serious project is to work through the design. This means looking at the interfaces that the software will provide, whether those are UI or API. Those are the targets that your users will work with.

Then you will need to have test cases for the interfaces that you have agreed on. These test cases should validate the accuracy of the system. Accuracy is key, far more than performance.

Finally, work on implementation. The implementation of the project is the most fluid of the rest of the system. You do not want to be changing APIs or UIs after they have been agreed upon unless absolutely necessary.

Performance, while not exactly an afterthought, should only be worried about once the problem is known to exist. Often more/better hardware can be thrown at the situation, if not now, then 6 months down the road. If there is a problem with the algorithm, then you can change the implementation without affecting the interface.

Re:Depends on the discipline of the developers (3, Insightful)

SunFan (845761) | more than 9 years ago | (#11411267)

Performance should rarely even be a consideration until the product works.

...until the prototype works. The final product has to perform well. Otherwise, people will find trivial excuses to say it sucks and needs to be replaced, even if, on the whole, it is a decent product.

Re:Depends on the discipline of the developers (1)

duffbeer703 (177751) | more than 9 years ago | (#11411388)

Throwing hardware at performance problems isn't a viable solution.

Unless your problem is CPU-bound, disks are going to be the big server bottleneck, and disk performance doesn't provide enough improvements to make a huge difference.

Painstakingly engineering an elegant, but slow solution to a business problem only results in that elegant, tested code being ripped out to provide performance benefits later.

Re:Depends on the discipline of the developers (2, Insightful)

FroMan (111520) | more than 9 years ago | (#11411913)

It's a bad assumption that you are taking a terribly long time implementing. The only painstakingly slow part of the process should be the design of the interfaces. The rest of the code after that point should be modular enough to replace a poorly performing module, but the interface still exists.

This allows the best use of developer time by producing "cheap" code for most work. Then the "expensive" code can be written only in the cases where it is needed. Certainly there will be a certain amount of rewrite as the expensive code replaces the cheap code, but that only happens where necessary. Also, the second iteration through the development alone will usually help the situation because the problem domain is better known.

Re:Depends on the discipline of the developers (3, Insightful)

dubl-u (51156) | more than 9 years ago | (#11410956)

For reasons I have yet to understand, there seems to be a prevalent myth that performance can be bolted on after the fact (the "make it work, then make it work fast" mindset). The truth of the matter is that performance has to be engineered in from the beginning or you simply spend a lot of time and money rewriting code that should never have been written in the first place.

Myth, eh? Personally, I do no "bolting". The steps I happily follow are
  • make it work
  • make it right
  • make it fast
The notion here is that you get something basic working. Then you refactor your design to be the right one for where you ended up: a simple, clean design. Then you do performance testing, discover what the real bottlenecks are, and find ways to get the speed with minimal design distortion. Then you go back to step 1 and add more functionality.

There are a couple of benefits to doing the optimization last. One is that a clean design is relatively easy to optimize. But the big one is that by waiting until you have actual performance data, you get to spend your optimization time on the small number of actual bottlenecks, rather than the very large number of potential bottlenecks. That in turn means that you don't have a lot of premature optimization code of unknown value cluttering up your code base and retarding your progress.

Of course, this is not a license to be completely stupid. Before you start building, you should have a rough plausible architecture in mind. If there are substantial performance risks at the core of a project's architecture, it's worth spending a day or two hacking together an experiment to see if your basic theories are sound.

As Knuth says, "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."

Re:Depends on the discipline of the developers (1)

ACPosterChild (719409) | more than 9 years ago | (#11557107)

I absolutely agree.

There is an addendum that I-can't-remember-who added, but it has to do with how a programmer often does most of the important optimizations automatically, anyway. Basic "duh" stuff about reducing nested loops, which data structure to use in which circumstances, etc. Things that experience, if not common sense, illuminate in such a way that making the "optimization" is quick, easy, and so natural that you probably didn't notice you were doing it.

Sure, you might be wrong, or a hidden gotcha might show up, but you only spend time worrying about those issues when they actually come up.

Yes...and yet, no. (2, Insightful)

oneiros27 (46144) | more than 9 years ago | (#11412409)

There's another myth about projects -- the requirements were actually correct.

Odds are, if someone is rushing you to get a project done on an unrealistic timeline, they haven't done their analysis of the project correctly either. Having _any_ prototype up there can help drive the requirements analysis, so that you can figure out what needs to be changed.

But yes, then you scrap that entire thing, so you can do it correctly.

If you're making minor modifications to an existing system, then yes, you most likely wouldn't need a whole new prototype, but then again, you'd not be designing from the ground up either, I would hope. [unless you get some idiot manager who decides a new language is better, or you have to make some sort of fundamental internal change]

Oh -- and if an outside contractor asks for a couple of weeks of logs of the former systems, get rid of them -- a couple of _months_ will not identify cyclic trends that may be present. [especially when you work for a university, and it's the summer]

But be realistic about your goals for the project -- sometimes you're working to optimize on CPU, optimize on memory, optimize on disk usage, or optimize on the programmer's time. Until you get _everything_ running, you won't know which one will be the bottleneck. [although prior experience can give clues]

Re:Depends on the discipline of the developers (1)

realsablewing (742065) | more than 9 years ago | (#11414128)

I worked on a project that didn't take any consideration of performance for a web environment. The project was to build a web interface, using XML, JSP and servlets to connect to various legacy database systems. The company was trying to cash in on the web craze and get everything to use a web interface for all of their products.

On the individual machines and with the small sample databases, the system worked fine. However, I was the first one to connect it to a development copy of a live database. On the first query for all of the contacts, which worked fine with the sample database of only 20 contacts, the servlet kept dying. It turned out it was trying to return several thousand records and there wasn't quite enough memory to handle that. So a design change was made in how the contacts were brought up and selected.

The other item I remember is the first time our team did testing with more than one person logged into the system at a time, on an 8 CPU system, with tons of RAM, tons of drive space, basically a super top-of-the-line server system we were able to grab for testing. There were around 8-10 people trying to log in, enter data, check data, etc. Within 5 minutes, at the most, the system would crash, because it couldn't handle that many users. We tried this test for a half hour and couldn't get the system to stay up for longer than 5 minutes at a time. Time for the developer to go back and redesign some more code, when we were supposedly close to releasing. The code was finally released 2-3 months late.

Internally the company had an IT team that did get those versions to work, by using a load handler and some extreme tweaking of the systems, but even that setup could only handle a maximum of 130 users at a time. The company actually sold copies of those systems to customers, for which I heartily apologize; if you'd asked me, I would've said run away quickly. Perhaps that explains why I'm no longer employed by them.

Finally, for a new version and new marketing name version of the product, the developers were allowed to go in and start from scratch in developing the architecture and testing was done using multiple users on lower end machines instead of depending on single machine/single user and the final system for that version could support a lot more people.

I think a prototype has its place, which is basically what the first version was, and after you get done, you throw that one away and then focus on the performance design. Unfortunately, too many managers think that throwing away a prototype is 'wasteful' and that it's better to spend twice as much money fixing it and dealing with the bug reports instead of thinking ahead.

Re:Depends on the discipline of the developers (1)

L1TH10N (716129) | more than 9 years ago | (#11415126)

I partly agree with you. Performance should be implemented from the start through good programming practice and clean development standards - Programmers should be aware of the programming practices resulting in efficient applications and programming practices that result in slow applications.

However, I disagree that this should be a primary concern, because if it were then all development would be done in assembly. Programmers need to understand how to program contingencies into their code, so if something is not working as well as they would have liked, they can replace that bottleneck in their application without disrupting the rest of it. You can also use performance analysers to find exactly where your application is faltering, and this usually turns out to be the areas you least expect. Unit tests are also invaluable because they allow you to tune your application with a guarantee (almost) that nothing will break.

Cost? (1, Insightful)

Anonymous Coward | more than 9 years ago | (#11409814)

What if you can't afford a second machine to cover the "duplicate" test environment?

Re:Cost? (1)

TeleoMan (529859) | more than 9 years ago | (#11409905)

That's actually a good question. I've worked for ultra-tiny companies that ran across the cost-prohibitive issue. We would develop on what we had: a generic Windows 9x distributed system and some elderly woman with a Mac. We would drag beta-projects onto the production server so she could hit them and let us know what rendered correctly and what didn't. Nine times out of ten what rendered right for her would screw up the Windows side. So we had to go back and clean up what we could.

We called it our Cool Realtime Electronic Environment Process. Creepy, indeed.

Re:Cost? (2, Insightful)

dubl-u (51156) | more than 9 years ago | (#11411046)

What if you can't afford a second machine to cover the "duplicate" test environment?

About 95% of the time I hear this, it's false economy. Most hardware is pretty cheap these days, and good developers are very expensive. It takes very little time savings to justify the purchase of new hardware.

In the few cases where it's too expensive to duplicate hardware, then you can fall back on careful profiling and simulation. For example, if you know that your production hardware has X times the CPU and Y times the I/O bandwidth, you can set performance targets on your development environment that are much lower. Or if you can't afford a network of test boxes to develop your distributed app, then things like VMWare or User Mode Linux will let you find some things out.
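A back-of-the-envelope sketch of that scaling, with invented numbers (the ratios and target are assumptions, not measurements):

    // Derive a conservative dev-box latency budget from a production target,
    // assuming production has 4x the CPU and 2x the I/O bandwidth of the dev box.
    public class ScaledTarget {
        public static void main(String[] args) {
            double prodTargetMs = 200.0;
            double cpuSpeedup = 4.0;
            double ioSpeedup = 2.0;
            // Conservative: assume the least-improved resource is the bottleneck,
            // so meeting the dev budget still implies meeting the production target.
            double devBudgetMs = prodTargetMs * Math.min(cpuSpeedup, ioSpeedup);
            System.out.printf("Dev budget: %.0f ms for a %.0f ms production target%n",
                    devBudgetMs, prodTargetMs);
        }
    }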

Of course, every time your tests diverge from your production environment, you add risk. A classic mistake is to develop a multithreaded app on a single-processor box and then deploy it on a multiple-processor box. So as you get cost savings by reducing hardware, it's good to keep in mind the added cost of inadequate testing.

Re:Cost? (1)

SunFan (845761) | more than 9 years ago | (#11411312)

Most hardware is pretty cheap these days, and good developers are very expensive. It takes very little time savings to justify the purchase of new hardware.

Hardware is a fixed cost that bone-headed managers can wash away and claim a savings. Salaries are harder to get rid of. The result: no hardware, development costs twice as much, and it is easier to talk away labor delays than you might think.

Re:Cost? (1)

cerberusss (660701) | more than 9 years ago | (#11417688)

About 95% of the time I hear this, it's false economy

About 95% of the time I hear this, it's office politics getting in the way.

Re:Cost? (0)

Anonymous Coward | more than 9 years ago | (#11442307)

About 95% of the time, the statistics people make up on the spot like this are false statistics.

Re:Cost? (0)

Anonymous Coward | more than 9 years ago | (#11416943)

What if you ran a car repair shop and couldn't afford the emissions test equipment? You'd have to restrict the work you were willing to take on, or you could take out a loan, or you could lease the equipment, or you could go into another line of work.

To the other extreme with NDEBUG/assert() (1)

jamsho (721796) | more than 9 years ago | (#11409863)

Couldn't get the production build team to set -DNDEBUG because it hadn't been done on development/test builds. SEI level whatever!

No amount of explaining or arguing could do it: the idea just broke their concept of software production, plain and simple.

Just had to pull the assert()s out.

Re:To the other extreme with NDEBUG/assert() (0)

Anonymous Coward | more than 9 years ago | (#11410130)

Why wouldn't you have asserts in the production code? As I see it, it would only make it harder to track down a bug in production (and give a small performance hit, but that shouldn't be noticeable).

And a different question: Why would you compile with different options in the test and production builds? That would kind of invalidate the testing.

Re:To the other extreme with NDEBUG/assert() (2, Insightful)

SunFan (845761) | more than 9 years ago | (#11411353)

Why wouldn't you have asserts in the production code?

If the code were properly designed to fail gracefully in production, a failed assertion isn't very graceful.

Re:To the other extreme with NDEBUG/assert() (2, Informative)

stoborrobots (577882) | more than 9 years ago | (#11412137)

And a different question: Why would you compile with different options in the test and production builds? That would kind of invalidate the testing.

You shouldn't compile with different options between test and production, but you should do so (for things like -DDEBUG) between dev and test... developers need their extra debugging statements, and asserts, but they interfere with appropriate user interfacing in production.

As I see it, the steps are:
1. Develop
2. Test in a production-mimicking environment with -DDEBUG (still part of development phase)
3. Hand off to QA phase to test in a production-mimicking environment without -DDEBUG ("production version")
4. Release "production version" to production.

(obligatory...
5. ???
6. CODE NIRVANA! )

Re:To the other extreme with NDEBUG/assert() (1)

sjames (1099) | more than 9 years ago | (#11441722)

Why wouldn't you have asserts in the production code?

Assert is by design very useful in development and testing, but is the antithesis of desirable production behaviour. In testing/development, you want a failed assert to kill the app dead right there so you can examine the state that led to the problem. In production, you'd probably rather have it return some sort of error and continue to perform 99.9% of its functions correctly. Imagine if a server feature used once in 6 months brought the server down for the other 99,999 users instead of saying "internal error" and moving on.

Performance can be another good reason. Asserts can stomp all over your cache. In development, you have them all over to verify correctness. In production you want them turned off. You still need them as a regression test for the next version. Editing them away rather than defining them away introduces added risks of bugs from simple typos.
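The discussion above is about C's assert()/NDEBUG, but the same "define them away, don't edit them away" pattern exists elsewhere; as a rough illustration, Java assertions are compiled in yet disabled at runtime unless -ea is passed, so dev/test runs get the hard failures and production does not.

    // Run with `java -ea Transfer` in development/testing; without -ea the checks
    // stay in the source as regression documentation but never kill production.
    public class Transfer {
        static long applyDebit(long balanceCents, long debitCents) {
            assert debitCents >= 0 : "negative debit: " + debitCents;
            long result = balanceCents - debitCents;
            assert result <= balanceCents : "balance increased on a debit";
            return result;
        }

        public static void main(String[] args) {
            System.out.println(applyDebit(10_000, 2_500)); // 7500
        }
    }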

Even worse with custom hardware (1)

crow (16139) | more than 9 years ago | (#11409907)

When you're in an environment where you have to develop software for hardware that is also being developed internally, the problem gets much worse. You have to deal with the fact that the hardware you're developing on is pre-production hardware that is almost certain to be different from the shipping hardware.

There's not really too much to be done about it. We don't want to have the code check to see which hardware it's on, as that would make it easier to have more bugs. In some ways, this can be good, as dealing with buggy hardware is very similar to dealing with failing hardware, and we need to have robust code that can detect and handle hardware failures gracefully.

In fact, sometimes it is best to have different development and testing environments, as the differences will sometimes help obscure bugs show up more easily on one platform so that they can be fixed earlier.

Agreed (1)

marcus (1916) | more than 9 years ago | (#11410251)

Sometimes you just have to live with it and slog your way through.

Schedules and design reviews be damned, we have a product to deliver!

Design, document, go over hardware design docs, code, configure the simulator to mimic the hardware, test and debug in sim, debug the sim configuration, build various utils that you expect to help test the hardware... It all sounds great and proper by the book, but doesn't mean jack until you and the hardware EE get your hands on the real stuff. First you have to pass the smoke test, then the clocks, then flash preload. Does it boot? Can we reload the flash via JTAG? No, can you flash an LED? No, does the BDM work? No, what's the matter? Eventually you find that all of the address/data lines on the processor are hooked up backwards. Instead of signals 0-31 attached to pins 0-7, 8-15, 16-23 and 24-31, they are all reversed: 31-24 on pins 0-7, etc. How, you ask? It turns out that the intern who did such a standup job building the schematic capture symbols and package database back in the summertime (who is now back in school) screwed up. *Sigh* Arrgh. Did anybody check his work? We should have done a line-by-line after we laid out the board. Are there any other errors? Can we re-map the hardware and the address decoders? Yes, thankfully most of it goes through an FPGA...

That's just the beginning of bringing up some new hardware.

If you are lucky (1)

bluGill (862) | more than 9 years ago | (#11412433)

I recall one chip that was shipping installed backwards. It happened to work in most situations too - except for a few corner cases that were blamed on software for a long time. (I'm not an EE, so I can't give you any more detail)

Another time I spent a month chasing down bugs in my code only to discover that the test system was broken, but since it passed all the other tests while the workaround for my code not being done was in place, they blamed me. It is not a good feeling to find out that your code was bug free after months of being yelled at for bugs in your code. (I'm not going to comment on the schedule that didn't tell me about that part of the code until a workaround was needed - those problems always happen)

lots of issues (1)

josepha48 (13953) | more than 9 years ago | (#11409965)

One issue I encountered was a Win98 vs. Win NT issue: the way they handle memory allocation and freeing. The Win98 machine seemed to always hand back the same memory for the login screens (yes, screens); in particular, the title of the screen was always at the same memory location. We were developing on Windows 98 at that time. When we started testing/QA on NT, NT allocated different memory for the title bar. It was a bizarre C memory leak: NT started showing garbage in the title of the second login screen. It turned out that it was either a non-null-terminated char pointer or unallocated memory. I can't remember which.

It's hard to test and hit every scenario. That's just a fact of software development that we all have to live with. You cannot ever test every situation, so you go with what you've got.

At my current job, there is a HUGE difference between dev and prod, and QA is somewhat closer to prod, but still on a dev box. In our dev world our env vars are much different from those in our prod world. Also, our boxes are not pristine; they have been used and abused.

Re:lots of issues (0)

Anonymous Coward | more than 9 years ago | (#11410939)

I would imagine whatever it was that was wrong with the code was something you did, because apparently you have issues converting your thoughts to text. That post was very nearly incomprehensible due to all the misspellings, bad word choices, and poor grammar. Good lord, man!

environment variables? (0)

Anonymous Coward | more than 9 years ago | (#11410146)

I'm not really understanding the title of this article.

But I always try and use the exact same environment for all three. Even buying the same machine as the client if necessary (I'm a consultant, who only uses FreeBSD and Linux, btw).

However, don't let that stop you from writing your apps and code so they are completely "relocatable". For instance, if you have a big web app, you should be able to check it out, set it up, and run unit tests *anywhere*: on your PowerBook, on your Linux test machine, on your OpenBSD VMware machine, whatever. I can't stress enough how nice it is to have everything self-contained. Be sure to write step-by-step install instructions and test them every now and then on a fresh VMware install of your client's OS.

It's great for testing and development, it makes sure you haven't "forgotten" any settings, and when the client says, "okay, we need 10 more machines like this", you can install from scratch with confidence (or you can ghost I guess).

Re:environment variables? (1)

sfjoe (470510) | more than 9 years ago | (#11410428)

But I always try and use the exact same environment for all three. Even buying the same machine as the client if necessary (I'm a consultant, who only uses FreeBSD and Linux, btw).

Yes, but sometimes that simply isn't possible. Can you imagine trying to replicate the traffic and complexity of a Google or eBay on a dev machine? His question was to uncover some of the problems with testing on a facsimile of the prod environment.

Camel Eye of Needle (1)

4of12 (97621) | more than 9 years ago | (#11410156)

Forcing your project to run in different environments is

  • a real PITA.
  • a great way to uncover problems in your project before release
Of course, sometimes the problems you uncover aren't in your project, but in the underlying platforms. Ugh.

Tomcat vs JRun and Linux vs HP-UX (1)

carlos92 (682924) | more than 9 years ago | (#11410194)

The app we are developing has to run on JRun 3.1 under HP Virtual Vault (an obsolete discontinued derivative of HP-UX), with JDK 1.3.1. Please don't ask why we are using such ancient infrastructure for a new application :(

We were allocated only one development environment on JRun for this project, but we have to use it as a user test environment, so we develop and do QA on Tomcat 3.3 under Linux. Tomcat 3.3 and JRun 3.1 supposedly implement the same version of the JSP and Servlet specifications, but JRun 3.1 has a few bugs that we discovered when we first tested the half-developed application on JRun.

We use Apache Ant to build the application from source, and every test environment (we have one for each developer, plus one for QA, plus one for User Testing, plus another as Pre-Production) has a separate file of "environmental properties" which we substitute into several configuration files, so we can forget about different paths, JDBC URLs, JDK location, etc.

Running the application on different base software has some minor inconveniences, but most of the time it works fine everywhere, and it has a benefit: we definitely HAVE to make everything configurable and portable, and that forces us to do good design.

There is just one thing that we can't test accurately on Linux: SPEED. The HP-UX machines we use are *slow*, and we can't emulate that accurately.

Environment meaning what? (1)

elmegil (12001) | more than 9 years ago | (#11410256)

There are many dimensions of environment. Shell Environment is one of my favorites. Many many moons ago, a customer changed the root .profile/.cshrc or whatever so that "ls" became "ls -CF" at all times. This was on a clustered server.

The next time the cluster tried to reconfigure, it failed mysteriously. Customer couldn't figure it out, local support people couldn't figure it out, I was one of those people but out of town (I'm convinced given the nature of the problem that I would have figured it out), so they ended up having higher level engineers driven in from 6 hours away to come look at why the cluster reconfiguration was failing.

They found the problem--apparently some part of the scripts that were running the cluster reconfiguration process trusted "ls" from the environment instead of calling /bin/ls directly (as any good security person would tell you is the right way to do it). The characters at the end of filenames confused the reconfiguration scripts, breaking them. And yes, that obvious failure on the clustering scripts was corrected, and that version of the clustering software is antique now anyway, so you needn't worry about whether what I'm talking about will affect you.

This isn't really a production/dev environment difference so much as a cautionary tale about how things in the environment you may not expect can still affect what's going on. Even the difference between "su oracle" vs. logging in as oracle, for example, could change your environment noticeably. I've had other customers make similar changes with unwanted effects, told them specifically to look at the root .profile etc. to see why two "identical" machines behaved differently, only to have them pooh-pooh the whole idea until I had them send me the files in question and was able to prove to them that the cause was indeed some difference in the environmental setup.

Re:Environment meaning what? (1)

pete-classic (75983) | more than 9 years ago | (#11410731)

Try "su - oracle". The dash (or "-l") makes it a "login shell" the upshot being that it imports all your environment stuff.

-Peter

Re:Environment meaning what? (1)

elmegil (12001) | more than 9 years ago | (#11411435)

Yes, I know how su works, I was MAKING A POINT. Many DBA's do NOT know how su works, and would be confused by the difference.

Re:Environment meaning what? (2, Funny)

pete-classic (75983) | more than 9 years ago | (#11411595)

Well, maybe they'll read my post and you'll be able to chill out.

-Peter

Re:Environment meaning what? (1)

SunFan (845761) | more than 9 years ago | (#11411515)


For reasons I don't understand, I've seen the "su -" syntax not get quite all the user's environment. It still seems to inherit some stuff no matter what.

Re:Environment meaning what? (1)

lewiscr (3314) | more than 9 years ago | (#11412677)

I've noticed that too. It's particularly irksome in my production environment when I log in to my administrative account, then `su - mysql`. For some reason, the MySQL CLI tool continues to connect as my admin user, not the 'mysql' user.

Sure, I can override with --user=mysql, or I could take the time to track down the issue. I'm just using it as an example that 'su -' is still different from an actual login (and a potential warning for people using Role accounts w/ MySQL CLI).

Although it is handy for slowing down crackers. From the web servers, neither root nor nobody is allowed to connect to the database. It would help slow down an attacker while they figure out how to connect, giving me more time to intervene.

Re:Environment meaning what? (1)

ComputerSlicer23 (516509) | more than 9 years ago | (#11412513)

In my experience, /bin/ls can be aliased just as well as ls can (I'm assuming that's how it was set up in .cshrc). If you want to make sure junk like that isn't in your scripts, set up the environment from scratch. There are ways to start a shell with absolutely no environment, the easiest being to start the shell with no login scripts and no interactive scripts turned on.

Just hard-coding /bin/ls is just as susceptible to the problem you are talking about. The real problem here is that you are using shell scripts to do real work. Stop that. If you really want something to work, write it in a real language you really have control of. That's reliability. Shell scripts are nice, they are wonderful. However, it's precisely these sorts of problems that lead me to believe that re-writing the scripts in Python, C, or Perl is a good idea, especially if you avoid "system" and "popen" like the plague. In those cases, you control the environment much better, and have native data structures with well-defined interfaces. Instead of using "ls", you use "readdir" and a loop of some kind.

Kirby
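A tiny Java sketch of the "readdir in a real language instead of shelling out to ls" suggestion above (the directory argument is illustrative):

    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    // List a directory via the library call: no shell, no aliases, no PATH lookup,
    // and no "-CF" decorations to confuse whatever parses the output later.
    public class ListDir {
        public static void main(String[] args) throws IOException {
            Path dir = Paths.get(args.length > 0 ? args[0] : ".");
            try (DirectoryStream<Path> entries = Files.newDirectoryStream(dir)) {
                for (Path entry : entries) {
                    System.out.println(entry.getFileName());
                }
            }
        }
    }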

Re:Environment meaning what? (0)

Anonymous Coward | more than 9 years ago | (#11413689)

Actually, any good security person should tell you that all scripts must run in a *clean* environment, with known resource limits. Well, that's what *I'd* tell you anyway.

I use djb's daemontools for all periodic and long-running programs. One of the many awesome things about it is that it runs each program in an *empty* environment, in a *known* directory. You can also set resource limits and what user it runs as, etc., so your programs always run in a reproducible environment.

I always recommend this and it eliminates all kinds of goofy problems.

Test/Staging == Backup (1)

JMandingo (325160) | more than 9 years ago | (#11410301)

What I want to know is what kind of problems have Slashdot readers been faced with because of different environments being used for different phase of their projects, and how have they been overcome?

If possible, convince management that your Test/Staging env needs to be beefy enough to function as an "emergency backup system" and GET the marginally equivalent hardware.

On the side, if your production DB server is a cluster and your development DB server is not, then where is your DBA going to practice all those hideously complex operations like failover and zero-downtime promotions? If that is your situation, then that is asking for major trouble.

I'm working on something like this. (1)

sproket (568591) | more than 9 years ago | (#11410592)

For development, I write C code which is converted by a tool so that it compiles and works under Windows as a binary plugin for a simulator. The production system is an OS-less big-endian microcontroller. Since the simulator is running on a little-endian PC, the tool we wrote has to modify all the code so that it stores its working variables in big-endian format and converts them before use. It also has some cleverness to convert all the bit-fields so that their storage and access are the same on both systems. The code has to do a lot of I/O with other systems (both simulated and not), so internally storing the data in big-endian format on the little-endian system prevents us from having to change the code much or add special-purpose endian converters on all the I/O interfaces. Since the production system is a few orders of magnitude slower than the simulator, we don't care about wasting extra cycles there. Using this approach, we were able to convert the big-endian code to work under the simulator pretty quickly.
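The tool above rewrites C code, but the underlying trick (keep working data in big-endian layout regardless of the host's byte order and convert only at the point of use) can be sketched with Java's ByteBuffer; this is an illustration of the idea, not the tool itself:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    // Store a value in big-endian byte layout no matter what the host CPU uses,
    // so the bytes can be handed to big-endian hardware or I/O unchanged.
    public class BigEndianStore {
        public static void main(String[] args) {
            ByteBuffer buf = ByteBuffer.allocate(4).order(ByteOrder.BIG_ENDIAN);
            buf.putInt(0, 0x12345678);                              // stored as 12 34 56 78
            System.out.printf("first byte: 0x%02X%n", buf.get(0));  // 0x12 on any host
            System.out.printf("value: 0x%08X%n", buf.getInt(0));    // converted at point of use
        }
    }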

Tell me about the data, not some minor HW changes. (2, Insightful)

_LORAX_ (4790) | more than 9 years ago | (#11410898)

Repeat after me: it's not about the platform, unless the production units are seriously constrained in some way that cannot be replicated in development or testing. The one thing that has consistently hamstrung projects in the past is not being able to replicate, in development and testing, a dataset that comes close to reproducing production bugs. Without having a full production dataset in either development or testing, you push something out that works only to find that the data causes some completely unrelated thing to break.

This is of course assuming that the software platform is adequately compatible not to introduce stupid bugs because of differences between servers.

Development environment: India ... (2, Funny)

Pathetic Coward (33033) | more than 9 years ago | (#11411053)

test environment: well, none.

Re:Development environment: India ... (1)

jrumney (197329) | more than 9 years ago | (#11415162)

You must be doing really badly then if you don't have any customers to provide you with a test environment.

Production environment? What's that about?

Use separate databases, please (1)

SunFan (845761) | more than 9 years ago | (#11411223)


Please, take the effort to initialize a whole new instance of the database for each environment. Sure, it's a bit of work, but just don't go around stomping on each other's feet thinking it's more convenient. Yes, I have witnessed such stupidity before.

Develop in Production like a Man (1)

Gothmolly (148874) | more than 9 years ago | (#11412241)

It's what real men do. Real men work in over-capacity, understaffed environments, where the "Test Lab" budget is instead spent on things like "Six Sigma Training" and "Employee Appreciation Week".

Where I work (1)

Benanov (583592) | more than 9 years ago | (#11412663)

Where I work we have Dev, QA, and Production.

All of it is Windows-based (this company drank the MS Kool-Aid), but all three platforms are basically equivalent.

The SQL servers are physically three different machines, but they're all clones of each other.

The web servers are either the local machine running XP (Dev) or web farms running Win 2K3.

We have a transfer utility to facilitate moving the files back and forth, but it doesn't get everything. This is a mixed blessing, but it's handy when you DON'T want something to be pushed to one of the servers.

Generally that utility works only in one direction.

One exception: The business people work on QA. This means that sometimes we'll have to pull something down, but it does mean that they're discouraged from monkeying with the files directly on production. :)

CPUs, network and data (1)

isj (453011) | more than 9 years ago | (#11412988)

I have encountered some problems regarding the differences in the number of CPUs between the development+test environment and the production environment. The development and test systems usually have 1 or 2 CPUs. I am currently hunting a timing-sensitive bug that only occurs on logical partitions with more than 3 CPUs assigned, and we don't have that kind of hardware ourselves. We are not allowed to log into the production system (even though it is only a customer test setup), and we have to debug via additional logging and detailed instructions on how to do a few tests. This is not exactly ideal.

Network setup can also be a difference that is difficult to emulate in development/test setups. You have to emulate network latency, bandwidth, packet drop, etc. NIST Net [nist.gov] looks good, but it is a bit heavyweight for everyday use. I usually stick to suspending servers/clients with kill -STOP in order to test latencies, packet drop, and timeouts. In one case where latencies really mattered, I ended up being granted access to development servers in 2 different cities and also using my home computer as a remote server.

Another area which can be difficult to emulate is the data size. One thing is just generating enough data to match production environments, say 3 million records, but another is generating the data in such a way that the database becomes fragmented the same way it does in production. There is no real way to emulate this properly, except possibly mishandling the database setup to the horror of your DBA :-)

Related to the data size is the difference in the data itself: the variation in usernames, the fraction of invalid passwords, the fraction of unused accounts, usage patterns, etc. You have to know your customer's environment. Your own customer support people are your friends here, as you can rarely get a complete data dump and traffic logs for a whole week - it is usually sensitive data. The most productive way to handle this is getting a very good grasp of the production environment and the usage patterns, and then spending the time it takes to develop a test tool that emulates them closely.

Continuous integration (3, Informative)

maxmg (555112) | more than 9 years ago | (#11413139)

We are developing J2EE applications using a continuous integration server (currently anthill open source [urbancode.com], but others are available). Ant is used for building, testing and deploying.
Now we have a number of environment-specific settings, for example database connection details, etc.
All environment-specific stuff goes into .properties files which are included conditionally by the ant script (based on a single environment variable or ant parameter). All of those properties files live in a directory conf/<environment name>, where environment name is either a developer's name, or "test", "production", "staging", etc. Each night, new deployment packages for each of the different deployment targets (test, prod, etc.) are built and made available through anthill. Some of those targets are also automatically deployed for the testing team every night, so the latest features are always available to be tested somewhere the next day.
Every successful build is tagged in CVS with an autoincrementing build number. When we have identified a release candidate, it is as simple as instructing anthill to (re)build a deployment bundle for a particular target with a specific build number. That deployment bundle (usually a .ear or .war) is then simply dropped into the production environment - remember that all the environment-specific settings are already included in that particular bundle. The benefit of this is that all environmental settings are maintained in the main source repository, the downside being that different packages exist for the different targets, but in practice that has not proved to cause any problems.
An additional benefit is that each environment's individual settings (including those of development machines) are always available to all developers for comparison and troubleshooting.
I guess the lesson learned is this:
  • Automate your build!
  • Extend your build system to include the environmental configuration
  • Automatically build separate targets for different environments

Mixed up process (1)

brunes69 (86786) | more than 9 years ago | (#11414133)

It's common knowledge that the development environment should be the same as the test environment which should mimic the production environment whenever possible.

This is quite mixed up.

The test environment should always be *identical* to the production environment, in all aspects where it is feasibly possible. Running tests in an environment that is in any way significantly different from your production environment is basically a waste of time and can lead to huge numbers of unforeseen holes in the software.

Assuming the above, the development environment should then be *as close* to the test environment as is reasonable. However, this difference is always going to be somewhat substantial, since you are going to have development/debug copies of libraries, development software, probably more CPU/Ram/Disk on the box, etc. The important thing to remember is that the closer the development environment is to the test environment, the easier it will be for you to find, reproduce, and fix any problems that come up during testing.

I'm currently working on a project where we develop on Dell, test on IBM, and the production system is up in the air

To me, this sounds like a disaster in the making. I would go straight to your manager and tell him that testing and production really *have* to be identical, or at least reasonably close. If they refuse to give you the resources, I would reply that you cannot guarantee the testing, and save that email thread. If it impacts the timeline, or the project fails, at least you have an "out".

There is no way I would stand behind testing done on a totally different machine than the target platform.

Re:Mixed up process (0)

Anonymous Coward | more than 9 years ago | (#11421284)

You must work for Government, or a well heeled corporate. While I can agree in principle with what you're saying, many smaller shops have to use scaled down versions of production for test.

I've even had the misfortune to work (once) in a shop where testing was done by clustering the developers machines once a week to mimic production.

Nope. (1)

brunes69 (86786) | more than 9 years ago | (#11422703)

I work in an office of some 30-40 people. We are just all smart people :P You can cut some things back, but not testing. It would be better to cut back on developers' machines than to have a sub-standard testing environment that doesn't match production.

Improper testing and QA procedures are a major cause of failure for startups. Your product needs to be *rock solid*. In order to do this, you should **always** be running QA testing on your target platform. Anything else could lead to improper assumptions, which in turn could lead to lost sales, and eventual failure of the company.

Character Sets and Scripting (1)

flyboy974 (624054) | more than 9 years ago | (#11414420)

As you will be running in a VM environment, I'm assuming it's Java. I develop on a Windows box, check into source control, build on a Solaris box, and deploy to Linux machines for some of our systems. Fun! Well, it's really not that big of a deal. Two major complaints.

One of the things that you will find as you migrate around different OSes is the default character set. Sometimes it's UTF-8, sometimes it's one of the ISO sets. It may even be Kanji or some other one that you're not prepared for if you distribute your system externally. Most VMs have a way to force a character set. Always use this.
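A short Java sketch of "always force the character set" rather than trusting the platform default (the file name is hypothetical):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Read text with an explicit charset so the result is identical on Windows,
    // Solaris and Linux, instead of whatever the platform default happens to be.
    public class ReadUtf8 {
        public static void main(String[] args) throws IOException {
            for (String line : Files.readAllLines(Paths.get("messages.txt"), StandardCharsets.UTF_8)) {
                System.out.println(line);
            }
        }
    }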

The other thing I find frustrating is the start/stop scripts for servers. Windows uses its own system, as do Solaris and Linux. But in Linux, you may want to take advantage of some of the tools, like chkconfig. Also, where your VMs are installed varies: you normally have /usr/bin/java on some systems, or /opt/java/..blah.. or C:\java, etc. No fun.

Test is the key (1)

Goglu (774689) | more than 9 years ago | (#11414458)

I've had to manage Java and .Net projects that dealt with corporate applications (no embedded or driver-dependent routines), so take this comment for what it's worth...

My philosophy is to leave developers quite free. Of course, they have to get authorized to run a newer version of the VM (and that version has to be part of the roadmap), but I give them flexibility in their configuration. This is done with the vision of helping innovation through trial and error. The key there is to have clear guidelines and a published roadmap, so that "innovation" pays off!

When we hit the first beta, things get serious... Builds are deployed to the test environment. At that point, there is no compromise: the environment has to be as similar as possible to the production environment. This attitude has paid off time and time again.

By having the applications run in a production-like environment from the first beta, we isolated problems due to overly optimistic configuration changes very early in the game, when it doesn't cost too much to fix. We even once modified the roadmap to allow a new version of Tomcat earlier than planned, because tests demonstrated that back-porting around its great new features would cost more than pushing the config change through.

In summary:
- Development: Let them play, but with a nanny around;
- Test: Bring in the configuration-nazi;
- Production: This is what pays your salary, trust this anal administrator...

Absolute X Relative paths (1)

fok (449027) | more than 9 years ago | (#11414844)

Where I work, people started using absolute paths in the new Java application. Dumb...
When it moved to the production server, it obviously did not work.
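A minimal sketch of the alternative, assuming the file is shipped on the classpath (app.properties is a made-up name) instead of being pointed at with an absolute path:

<ecode>
import java.io.InputStream;
import java.util.Properties;

public class Config {
    public static Properties load() throws Exception {
        // Resolved relative to the classpath, so it works the same on a
        // developer's box and on the production server.
        InputStream in = Config.class.getResourceAsStream("/app.properties");
        if (in == null) {
            throw new IllegalStateException("app.properties not found on the classpath");
        }
        Properties props = new Properties();
        props.load(in);
        in.close();
        return props;
    }
}
</ecode>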

Re:Absolute X Relative paths (1)

starfishsystems (834319) | more than 9 years ago | (#11429006)

Also shared library paths.

Different sites have differing conventions about where software should be installed. I'm surprised at how many people seem to have no conception of what it's like to run on anything other than a standalone system.

Sure, on a system like that, it may make sense to install software in /usr/local, and have home directories in /home. But remember that those conventions are nominal at best. The real world is a lot more varied, because it has to be.
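One way to cope, sketched in Java under the assumption that the site sets an APP_HOME variable (that name, and the app.home property fallback, are invented for the example): resolve the install prefix from the environment instead of burying /usr/local in the code.

<ecode>
public class InstallPaths {
    // Resolve the install prefix from the environment, then from a system
    // property, and only then fall back to a conventional default.
    public static String baseDir() {
        String home = System.getenv("APP_HOME");
        if (home == null || home.length() == 0) {
            home = System.getProperty("app.home", "/usr/local/myapp");
        }
        return home;
    }
}
</ecode>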

not that many. (1)

oliverthered (187439) | more than 9 years ago | (#11415947)

I've had far more problems because of 'bad' developers who do things like use single-character variable names, write accounting systems that don't add up, or code with local date formats so that we have to make sure our SQL servers run a US locale.
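The date problem, in particular, goes away if dates are bound as dates rather than as locale-formatted strings; a minimal sketch (the invoices table is made up):

<ecode>
import java.sql.Connection;
import java.sql.Date;
import java.sql.PreparedStatement;

public class DateInsert {
    public static void insertDueDate(Connection conn, Date dueDate) throws Exception {
        // Bind the value as a date; no "01/13/2005" strings, so the
        // server's locale no longer matters.
        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO invoices (due_date) VALUES (?)");
        ps.setDate(1, dueDate);
        ps.executeUpdate();
        ps.close();
    }
}
</ecode>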

Embedded systems and black-box development all have their problems, but you should be able to recognise them and compensate. Having a few experienced engineers on the team will help everything go smoothly (and they will tell you when it's not).

First off,

Make sure your procedures are reasonably well tied down; automation helps a lot, since people always forget to cross the t's.

Then make sure that your development environment is robust: put in good revision management software and make sure it integrates properly with the feature-tracking and bug software.

Make sure the flow of communication between production and QA, and between QA and development, is good; if possible, try to give production 'some' access to the bug/feature tracking system.

Finally,
Make sure that as much as possible is proofed: bugs should have test cases (where possible), and releases should be 'delta' and have rollbacks. Try to test the rollout before going gold. You don't need the entire production environment, just a few boxes.

In the past I've done sandboxed and VM-based testing. I've also used a small server in the production environment to test rollouts and perform regression testing, which the software could be switched over to so that the client's QA team could do their own testing.

Portability and migration issues never amount to the problems you get from poor initial design, programmers who don't get enough sleep or failing to hire enough experienced employees.

4 environments (0)

Anonymous Coward | more than 9 years ago | (#11417411)

I work at a web development shop. We have (depending on the client), up to four different environments: dev, preview, qa, and production.

We write code on dev, push to preview for the clients to view, double check on QA (it's on the premises with the production machines and maintained by the client's IT staff), and finally move things to production when it's time to go live.

The client is a big media company, and accepts tar files and SQL scripts from us. All changes must go through dev -> preview -> qa -> production, without any exceptions.

The transfer of files from dev to preview is automated using a web based internal tool. Every time files are transferred, a uniquely numbered tar file is created and placed on an FTP server. When we want to move something live, we ask the client's IT staff to grab the tar file from our FTP server and move it to QA and then production.

At first it took a lot of getting used to not having access to the production machines, but now we just have to factor in the extra time it takes to move things live. Since our client's IT department is the one who requires these changes, it doesn't reflect badly on us, and lets us go home if the live servers crash.

hyperthreading (2, Interesting)

marcovje (205102) | more than 9 years ago | (#11418203)


One of the worst problems I had was related to hyperthreading. Turned out a 3rd-party component that was a black box was thread-safe, but not multiprocessing-safe.

When I finally figured it out (of course, it happened only once every few days; I had to write and tune exercisers for weeks to get it to occur within minutes), an update became available.
(We had already nailed 4 of the 6 spots in that patch ourselves by then.)

well... (0)

Anonymous Coward | more than 9 years ago | (#11421587)

In my experience, the kinds of problems you run into are differences in:
- shared libraries
- OS configurations
- users/groups
- differences in external resources within the tiers (dev/prod databases, app servers, etc)
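Most of those differences can at least be made explicit by keying configuration off a single variable per tier; a sketch, where DEPLOY_ENV and the config-*.properties names are assumptions for the example, not anybody's standard:

<ecode>
import java.io.InputStream;
import java.util.Properties;

public class TierConfig {
    public static Properties load() throws Exception {
        String tier = System.getenv("DEPLOY_ENV");   // e.g. dev, qa or prod
        if (tier == null) {
            tier = "dev";   // never fall back to production by accident
        }
        InputStream in = TierConfig.class.getResourceAsStream("/config-" + tier + ".properties");
        if (in == null) {
            throw new IllegalStateException("No configuration found for tier " + tier);
        }
        Properties props = new Properties();
        props.load(in);
        in.close();
        return props;
    }
}
</ecode>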

what kinds of problems? (1)

ecklesweb (713901) | more than 9 years ago | (#11422346)

Well, how long do you have?

- C programs with host names hard-coded in #ifdef...#else...#endif blocks that aren't compiled with the right macro defined for the given environment.

- A 20-character-wide terminal device that vomited all over itself when the program tried to display a 21-character string, which we had only tested on 80-character terminals before sending to production.

- Intermittent communication failures because of a different version of firmware on a network device in production versus dev/QA.

- WAR files that won't deploy in production because prod runs SunONE and dev runs WSAD.

- System commands not behaving the same because dev was Linux and prod was Unix.

- Integration test failures because someone copied the production encryption key to one dev/test server but not the other.

- Programmers forgot to take out the makefile hacks they added so the program would run in their own little world, before they compiled the test/QA version of the program.

- Version management commands that are different on different OSes.

- Automated test scripts fail because of some minute difference between dev, test, and QA (that one really pisses me off).

Shall I continue?
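The hard-coded host name problem, at least, disappears once nothing environment-specific is compiled in; a rough sketch of the idea in Java (billing.host is an invented property name):

<ecode>
public class ServiceHosts {
    // Set per environment at launch time, e.g.
    //   java -Dbilling.host=billing-qa.example.com ...
    public static String billingHost() {
        String host = System.getProperty("billing.host");
        if (host == null) {
            throw new IllegalStateException("billing.host is not set for this environment");
        }
        return host;
    }
}
</ecode>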

Java (1)

alan_dershowitz (586542) | more than 9 years ago | (#11423114)

I develop Java/J2EE apps on a Dell and our test and production environments are AIX. Other than filenames, which can be stored in a persistence layer, there are rarely any changes that have to be made between environments. I guess one answer is: use the appropriate tool for the job. I see a lot of pissing and moaning about Java not being truly portable, but unless you are using bleeding-edge libraries like non-blocking I/O (JDK 1.4.x revisions have had all kinds of inconsistencies with this), you are very unlikely to run into problems. I've been doing this for 3 years now, and NIO was the only thing to give me problems.

This is how we do it (1)

aprosumer.slashdot (545227) | more than 9 years ago | (#11423638)

Generally, this is what we do at our site (we're a MS shop). We use a four stage system, where the first two stages are managed by the Developers and the last two stages are managed by the Production team (usually System Engineers and DBAs).

Development
Fully patched Windows XP or Windows 2000 Workstations, installed with all the development tools we need. Developed applications are _NEVER_ executed on the development workstations; this keeps the development platforms from being 'corrupted' by miscoded applications.

Testing
VMware server, running a simulated NT domain as well as simulated test platforms. Our VMware-simulated test platforms are 'out of the box' (OEM) Windows or Linux images, as well as images which match the Pre-Production environment. As the Pre-Production Acceptance environment changes, we can simulate those changes in the test environment to see how our applications may break. We then work on a solution. When we find one, we commit the Pre-Production Acceptance environment changes to the test platforms. Applications which pass the QA tests are then sent on to Pre-Production Acceptance.

Pre-Production Acceptance
Pre-Production Acceptance is an environment managed by the Production team (not the developers!). Application bugs found in this environment are sent back down as reports to the Developers. This is where the application is integrated, and where the latency and load testing are done, using real production data and load replicated from the Production environment.

Production
Applications that pass the Pre-Production Acceptance finally are integrated here by the Production Team.

The main problem being...identifying it. (1)

cabazorro (601004) | more than 9 years ago | (#11424268)

The main problem I have faced with development projects where groups/teams work in different environments is:

Having to deal with compatibility issues on my own due to a variety of environments, and having this workload not properly tagged as part of the development of the product (just-fix-it-don't-want-to-hear-about-it syndrome).

In the embedded arena, powerful+expensive tools like Windriver Tornado tackle this problem quite well. Support..support..support.

With less sophisticated tools, the amount of work involved in tracking down compatibility issues ('works fine on my system, thank you very much') is usually ignored, and management tends to squint too much when attempting to assess/understand the overall impact... (is that really a problem?).

That being said, multi-platform development is not a curse but a given in the embedded development arena.

The solution, again, is either to buy expensive tools where the paid support takes care of cross-platform issues, or to build a well-disciplined, cross-department environment control group/resolution team.

Either way costs money.

cheers.

Environment variables? (1)

game kid (805301) | more than 9 years ago | (#11477542)

<sarcasm>You mean the ones besides %windir%, %tmp% and %systemroot%?</sarcasm>

In my case, home, home, and, um...oh yeah, home.

Problems? Well, on Windows, errors, the occasional 0xC0000005 and the occasional total system crash; on Linux? I'm still learning, ask again in a few years...yes, I just program for fun. I'll go now...
