
Ask Slashdot: Taming a Wild, One-Man Codebase?

timothy posted about 2 years ago | from the seeks-same dept.

Programming 151

New submitter tavi.g writes "Working for an ISP, along with my main job (networking) I get to create some useful code (Bash and Python) that's running on various internal machines. Among them: glue scripts, Cisco interaction / automatization tools, backup tools, alerting tools, IP-to-Serial OOB stuff, even a couple of web applications (LAMPython and CherryPy). Code has piled up — maybe over 20,000 lines — and I need a way to reliably work on it and deploy it. So far I've used headers at the beginning of the scripts, but now I'm migrating the code over to Bazaar with TracBzr, because it seems best for my situation. My question for the Slashdot community is: in the case of a single developer (for now), multiple machines, and a small-ish user base, what would be your suggestions for code versioning and deployment, considering that there are no real test environments and most code just goes into production? This is relevant because, lacking a test environment, I got used to immediate feedback from the scripts, since they were in production, and now a versioning system would mean going through proper deployment/rollback in order to get real feedback."
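A minimal sketch of the Bazaar migration the submitter describes, assuming the bzr client is installed; the repository location and script paths are hypothetical:

    # one shared repository holding all the ops scripts
    bzr init-repo ~/ops-repo
    bzr init ~/ops-repo/trunk
    cd ~/ops-repo/trunk
    cp -r /usr/local/scripts/. .      # bring in the existing Bash/Python code
    bzr add
    bzr commit -m "Initial import of existing scripts"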


first thought: (5, Interesting)

Tastecicles (1153671) | about 2 years ago | (#41402669)

rectify the testbed lack.

'cos there's nothing more likely to cause immediate termination of your employment than a bit of rogue code taking down the bread and butter of the business.

Test it first.

Re:first thought: (1)

Anonymous Coward | about 2 years ago | (#41402871)

The scripts are irrelevant if not run on the real environment; the test environment would have to be a clone of the production environments. Good luck with that with the described environment! He could test each piece of the scripts in testing - which he probably does - but that only gets you so far and tells you that there are no typos.

Devops and CI (2)

dna_(c)(tm)(r) (618003) | about 2 years ago | (#41404167)

[...]the test environment would have to be a clone of the production environments. Good luck with that with the described environment![...]

There is stuff like Puppet [wikipedia.org] (for declaratively deploying "services") and Vagrant [wikipedia.org] to provision Virtualbox guests (a rough sketch follows the list below).

Downsides:

  • It's only really efficient when your production environment can be provisioned with Vagrant/Puppet as well and no manual work is done on these guests. The way the question is formulated, I suppose that is not the situation.
  • Virtualbox is really only suitable for desktop use. I would love something similarly simple for KVM.
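A rough sketch of the Vagrant side of this, assuming a 2012-era base box; the script path is hypothetical:

    # fetch a base box once, then spin up throwaway guests at will
    vagrant box add precise64 http://files.vagrantup.com/precise64.box
    vagrant init precise64
    vagrant up
    vagrant ssh -c 'sudo bash /vagrant/deploy.sh'   # /vagrant is the shared project dir
    vagrant destroy -f                              # throw the guest away afterwards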

Re:first thought: (4, Insightful)

luis_a_espinal (1810296) | about 2 years ago | (#41404869)

The scripts are irrelevant if not run on the real environment,

Well, that's an oxymoron. Any program, large or small, is irrelevant if it never runs on the intended target platform. That's no excuse not to have a test server, however feeble compared to production it might be.

the test environment would have to be a clone of the production environments.

A clone does not have to be equivalent in terms of hardware or data. A good example is a test DB box for testing your SQL scripts. Such a box can have the exact same software, OS and patches, with equivalent database configuration and schemas, but on lower-cost hardware and with a fraction of the data. As long as a test bench provides a reasonable, objective measure of confidence in your code, that is all you need. You do not need an absolute guarantee (there is never one anyway).

Good luck with that with the described environment!

Yeah, because the task is so hard, he might as well give up, right, right, right? Let's do the paralysis-by-analysis chicken dance, shall we?

He could test each piece of the scripts in testing - which he probably does - but that only gets you so far

Which is better than nothing, and it is always better to carry out tests, however small they might be, on a test/sacrificial box than on production. It's not rocket science, man.

and tells you that there are no typos.

No. It can also tell you that it will not do something bad, like deleting all records in a table, or initiating a shutdown, or filling up the /tmp partition. Better to detect such things on a Mickey Mouse test box than on the real thing. It might not catch bugs triggered by characteristics present only in a production environment, but it will most likely catch bugs (annoying or fatal) that are not dependent on such characteristics.

Ideal? No. Better than nothing? Hell yeah.

Re:first thought: (2, Interesting)

Anonymous Coward | about 2 years ago | (#41403011)

Yes, by all means test. Then, deploy your tests into production by mistake, like on Wall Street or something, LOL. Seriously though, testing is good and I unit test right from the start; but there are no silver bullets.

Re:first thought: (4, Insightful)

Stiletto (12066) | about 2 years ago | (#41403547)

It's not a silver bullet, but lack of a test environment is sure to eventually cause disaster. It's by far the biggest problem mentioned above, even more of a problem than lack of version control.

Re:first thought: (2)

rvw (755107) | about 2 years ago | (#41404195)

It's not a silver bullet, but lack of a test environment is sure to eventually cause disaster. It's by far the biggest problem mentioned above, even more of a problem than lack of version control.

I would start with a versioning system. That's a lot easier to get working; you could have it running in one day. And it doesn't need a test environment. Yes, there should be one, but it's not a requirement. You can use the trunk as the production codebase. The big advantage is that you can roll back easily. You can even code on the server itself, and then update the codebase from there. No, not the wisest thing to do, but it's possible, and probably a lot wiser than coding on the server without versioning. And write a comment for each version update!
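A minimal sketch of that trunk-as-production arrangement; URLs and paths are hypothetical:

    # the production box runs a checkout of trunk
    svn co http://svnbox/repos/scripts/trunk /opt/scripts
    # deploy = update the checkout
    svn up /opt/scripts
    # rollback = pin the checkout to the last known-good revision
    svn up -r 1234 /opt/scripts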

Re:first thought: (1)

arth1 (260657) | about 2 years ago | (#41403941)

testing is good and I unit test right from the start

Out of curiosity, what tools do you use to unit test bash scripts?

Re:first thought: (1)

Relayman (1068986) | about 2 years ago | (#41403033)

There's also the issue of professional pride: If I were in his situation and I got hit by a bus (see the Six Feet Under pilot), I would want someone else to be able to pick up where I left off. I would also want this replacement to compliment the quality of my work. You need a test environment to do this.

Re:first thought: (1)

MaxQuordlepleen (236397) | about 2 years ago | (#41404279)

It's unlikely that an inheriting developer is ever going to compliment the quality of your code. If he did, he'd never get the green light to trash all your work and start again from scratch.

Re:first thought: (4, Interesting)

ILongForDarkness (1134931) | about 2 years ago | (#41403053)

"Rectify the testbed lack," like Yoda it is. I agree you need a testbed. Heck, run a few VMs on a workstation. If you can't build a VM to test something, it shouldn't be deployed, IMHO.

Re:first thought: (2)

Anne Thwacks (531696) | about 2 years ago | (#41403625)

And spend a week or two reading http://thedailywtf.com/ [thedailywtf.com]

Most interesting coder (2)

rfrenzob (163001) | about 2 years ago | (#41403807)

Proclaim yourself the most interesting coder, ThinkGeek [thinkgeek.com] style.

I don't often test my code, but when I do, I do it in production.

Re:first thought: (1)

jekewa (751500) | about 2 years ago | (#41403813)

How does that meme go? Something like this, I think:

I don't always test my code, but when I do, I test in production.

git (0)

Anonymous Coward | about 2 years ago | (#41402681)

git

Re:git (4, Insightful)

ThorGod (456163) | about 2 years ago | (#41403065)

git

Yes!!! Create git repos of all those various parts on some central git server. Create backups of those repos periodically, like a sane person...

Git really doesn't require a ton of understanding to "just start using git" competently. It's not going to trash whatever you have in place; it's mathematically proven to *not* lose data.

Also, freaking set up a dev server already! (That's like 2 machines, or a private, 3rd party git repo (bitbucket is what I use) and a dedicated test/dev machine).
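A minimal sketch of that setup, with a hypothetical central box ("gitbox") and hypothetical paths:

    # on the central box: one bare repo per project
    ssh gitbox 'git init --bare /srv/git/netscripts.git'
    # on the workstation: import the existing code and push it up
    cd ~/netscripts
    git init && git add -A && git commit -m 'initial import'
    git remote add origin ssh://gitbox/srv/git/netscripts.git
    git push -u origin master
    # backup: a mirror clone carries the complete history
    ssh gitbox 'git clone --mirror /srv/git/netscripts.git /backup/netscripts.git'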

Re:git (0)

Anonymous Coward | about 2 years ago | (#41403455)

Mod this up. Nothing works as well as git, as long as you're not dealing with binary blobs.

Re:git (1)

Stiletto (12066) | about 2 years ago | (#41403471)

Yes!!! Create git repos of all those various parts on some central git server. Create backups of those repos periodically, like a sane person...

The great thing about git is that there is no need for "central servers" or other such infrastructure. Just back up the current repositories wherever they are located.

Re:git (1)

gorzek (647352) | about 2 years ago | (#41403781)

Count me in with the others recommending git. I'm using it in a very diverse environment, where different teams have different platforms, workflows, and deployment strategies. I convinced them all to use git. They are extremely happy with it, and they love the flexibility it offers them over their old (or nonexistent) systems.

Re:git (2)

GoogleShill (2732413) | about 2 years ago | (#41404623)

... it's mathematically proven to *not* lose data.

I love git and use it on a daily basis, but you can't mathematically prove that it won't lose data. It is written by humans, and I have encountered bugs in it. You also still have to deal with manual merges, which are error prone. I've also had my local repo get in weird states that are very difficult to get out of. When this happens, I always copy out all my changes because I'm afraid of losing anything.

Re:git (1)

Barryke (772876) | about 2 years ago | (#41403509)

SmartGit I can recommend.

Re:git (1)

Mordok-DestroyerOfWo (1000167) | about 2 years ago | (#41403745)

I don't think I could live without git. I use it at home to track changes to family photos when I (attempt to) enhance them with Photoshop, and at work to keep my vim scripts and bash files consistent between machines. Needless to say, all of my code is in there too. It's dead easy to set up, and tutorials abound.

Re:git (1)

Ken_g6 (775014) | about 2 years ago | (#41404063)

He's using Bazaar, which is a lot like Git. The main problem I found with Bazaar is that it's S...L...O...W...! Git does things almost instantly.

Code versioning and deployment? (5, Insightful)

MetalliQaZ (539913) | about 2 years ago | (#41402683)

I don't understand why code versioning has to be coupled with deployment. You have no test environment, as you said... so just make releases and deploy them manually. Since you are going straight to production, you had better be there in person to roll it back if you screwed up. Right? So, SVN should be all you need...

Re:Code versioning and deployment? (2, Insightful)

dgatwood (11270) | about 2 years ago | (#41403397)

Git is a cleaner model in a lot of ways. In particular, the fact that you have a local copy of the entire repository makes it easier to roll back mistakes you make while editing the code. This isn't always important, but if you decide you're going to do a significant rewrite of some piece of code (and in particular, if you are ever remote while you're doing so), it helps a lot.

Re:Code versioning and deployment? (2)

inKubus (199753) | about 2 years ago | (#41403743)

Here's what I did, pre-git:

Create an svn repo, e.g. svn.company.lan/systems
Create the structure ./trunk, ./branches, ./tags
Create a directory for each hostname, e.g. ./trunk/sql1, ./trunk/web1, ./trunk/web2, etc.
Then you can svn import configuration directories on the host into the repo, e.g. svn import /etc svn.company.lan/systems/trunk/sql1/etc
Then check out: svn co svn.company.lan/systems/trunk/sql1/etc /etc
From that point forward, if you make changes locally you can svn ci, OR you can make them externally (i.e. in a test environment) and then svn up to update your local conf
I keep the same directory structure, so if I have some tomcat conf like /opt/jira/tomcat/conf it will be in svn as svn.company.lan/systems/trunk/web1/opt/jira/tomcat/conf

With some scripts, I automated the process and since then it's been really easy to maintain. I understand that cfengine is quite a bit more complex and can do a lot more, like verifying your configuration and that sort of thing, but for a small shop this is good enough to prevent Oh Shit moments with minimal extra work and almost no maintenance.

Need to make a change? First, check in to make sure the repo has the latest version. Make your changes, restart your daemons... if it works, check in. If it doesn't, you can keep working or svn revert back to the previous version.

With git, you'd have a similar thing, but the repo would be local and you'd have to find a way to back it up, or you could have something like Stash running as a central hub. DO NOT store configs on GitHub out of habit: conf files sometimes contain private keys and such, and it is extremely likely that GitHub will be targeted by crackers at some point. SVN is really easy to set up on a random utility server or even a workstation...

Re:Code versioning and deployment? (1)

arth1 (260657) | about 2 years ago | (#41404191)

I do something similar with svn, but the main problem is that Subversion doesn't preserve Unix ownerships, permissions, ACLs or attributes.
A secondary problem is the .svn directories: some directories are parsed automatically by various systems, and every file and folder in them gets acted on. In those cases the version control system needs to live outside the directory structure.

Re:Code versioning and deployment? (1)

amicusNYCL (1538833) | about 2 years ago | (#41404117)

Git is a cleaner model in a lot of ways. In particular, the fact that you have a local copy of the entire repository makes it easier to roll back mistakes you make while editing the code.

What does that mean? It's trivial to either check out or export any part of the repository from SVN. What does git bring to the table that I don't already have with SVN?

Re:Code versioning and deployment? (1)

dna_(c)(tm)(r) (618003) | about 2 years ago | (#41404287)

What does git bring to the table that I don't already have with SVN?

A lot (as do bzr, Mercurial and any other distributed versioning system):

  • You don't need a central server (but you can have one).
  • You don't need to have a network available to check in changes.
  • You don't need to have a network available to roll back or switch to another branch. E.g. you could edit /etc/init.d/networking, break stuff, and roll back (sketched after this list)...
  • It is really fast - it is mostly local stuff.
  • ...
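The /etc example above, sketched with git (the paths are real, the workflow a suggestion):

    # one-time: put /etc under local version control; no server, no network
    cd /etc && git init && git add -A && git commit -m 'baseline'
    # later: a bad edit to init.d/networking takes the network down...
    git checkout -- init.d/networking   # ...and the rollback needs no network at all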

Re:Code versioning and deployment? (1)

LordThyGod (1465887) | about 2 years ago | (#41404247)

Bazaar does the same, or at least can.

Re:Code versioning and deployment? (1)

Skewray (896393) | about 2 years ago | (#41403597)

I don't understand why code versioning has to be coupled with deployment. You have no test environment, as you said... so just make releases and deploy them manually. Since you are going straight to production, you had better be there in person to roll it back if you screwed up. Right? So, SVN should be all you need...

I used to use SVN as a single programmer, but I found it nothing but a burden. It left files all over the place, and it was really not convenient when no coordination with another programmer is needed. Now I just make a tarball of everything at obvious breakpoints and store it away.

Re:Code versioning and deployment? (0)

Anonymous Coward | about 2 years ago | (#41404161)

spoken like someone who hasn't really used modern scm...

With svn worse is not better, worse is just worse.


It's too late (5, Funny)

Antipater (2053064) | about 2 years ago | (#41402723)

Given the situation you describe, it won't be long before the whole system falls into corruption. Your only hope is to save two lines from every script on a USB stick, then flood the rest.

Re:It's too late (0)

Anonymous Coward | about 2 years ago | (#41404567)

and, 7 lines of every script you deem "clean"

Simple answer (4, Insightful)

girlintraining (1395911) | about 2 years ago | (#41402725)

My question for the Slashdot community is: in the case of a single developer (for now), multiple machines, and a small-ish user base, what would be your suggestions for code versioning and deployment, considering that there are no real test environments and most code just goes into production?

The simple answer is, "Whatever works best for you." You're the only developer for these projects. Unless your manager is giving you direction on a specific process or requirements, it's your ball game. You know how you work best -- pick your tools accordingly.

Re:Simple answer (1)

tool462 (677306) | about 2 years ago | (#41403043)

Pretty much. This is a very hard thing to answer in general terms. With only one developer, over-engineering the system can be very costly. You'll spend more time maintaining the dev/test/release environment than the actual code itself. But at the same time, some tools and scripts can be absolutely critical to the business and a bug could be disastrous enough that it warrants all the overhead of a more formal dev environment.

What you do is going to depend a lot on the exact details, and may not even be consistent across all the code you support. There's only one of you. Make sure you spend your time on the most critical pieces.

The only thing that I would consider absolutely mandatory is proper revisioning and branching so that you can recover quickly when something goes wrong.

Re:Simple answer (1)

Barryke (772876) | about 2 years ago | (#41403467)

Even using Git without merging or branching would be worthwhile.

A few things (5, Informative)

jlechem (613317) | about 2 years ago | (#41402763)

1. Buy or repurpose a machine to host SVN for version control. I work on my wife's company website and some basic management tools. SVN has saved my bacon multiple times when I thought I had lost some code.

2. Get a pre-production server and test your code! Sounds like you're living in the wild west and that shit flies until something goes horribly wrong and you're the guy who gets blamed.

Re:A few things (5, Insightful)

jellomizer (103300) | about 2 years ago | (#41402811)

If you can't get the hardware, try to virtualize a test environment with something like VMware or VirtualBox.
At least then you have something to play in before you put it out in the open.

Re:A few things (0)

Anonymous Coward | about 2 years ago | (#41403457)

I am in the same situation (sort of) and found SVN to be overkill. Mercurial is a good alternative to SVN in this case because you don't need to set up a separate server just to host the repo. You can use something like BitBucket (which offers free private repos) to backup online if your company policies allow.

Re:A few things (3, Insightful)

KingMotley (944240) | about 2 years ago | (#41403821)

Not sure why you think you need a separate server just to host the repo. Just host it on the same machine.

Sure at the office we have a server that hosts the repo, but at my house, I have the repo running on the same machine I develop on. Of course the repo is on a RAID-6, and my local copy I develop on is on a RAID-0, but I didn't need to buy another machine just to host the repo.

No real change (4, Informative)

chthon (580889) | about 2 years ago | (#41402775)

You can still change everything in place. Then you can run the script and get feedback. When it works, you commit. When it doesn't, you remove the problem, check and commit.

Or you can make your changes, review them and commit them, then do a run. When you have a problem, you commit again.

It is not because you use a versioning system that you need extra formality. You can still work the way you used to, but now you have an extra safety measure due to the versioning system.

Using trac is a way to better organise your problems. The main thing I can say about using trac effectively is that you should always have a browser window open on it, and when you have an idea, or notice something, or hit a problem, enter it immediately. Afterwards, take your time to look at new and open problems, classify them and process them.

Re:No real change (1)

cowboy76Spain (815442) | about 2 years ago | (#41403259)

And a few days after you put the changes in production and nothing has burned, make it a tag.

Better yet, make it a tag before putting changes in production (TAGbeta) and a few days later (TAGrelease). Tags are cheap.

Re:No real change (1)

crontabminusell (995652) | about 2 years ago | (#41403979)

Thanks for saving me the effort of typing that. =) What you wrote, chthon, sounds like the sanest application of version control to the OP's environment. And if another dev gets added to the mix down the road, the version control system will already be in place.

proper deployment/rollback (4, Insightful)

turbidostato (878842) | about 2 years ago | (#41402777)

You say that "now a versioning system would mean going through proper deployment/rollback in order to get real feedback."

But then, no, it wouldn't.

Storing your code in a versioning system means just that: you store your code in a versioning system. Nothing more, nothing less.

I'm starting to be an old fart, so you can believe me when I tell you I've already been in your position.

Back then I used CVS, and it didn't change my deployment procedures in the slightest; it only meant I had all those scripts in a single convenient place and could dig into past history when I found a regression or wanted to see how I'd done something before.

The most naive approach is to keep working just the way you do now, except that when you are confident in a script or set of scripts you check it in for posterity. You mainly develop on your own desktop and push your scripts to the servers with an rsync-based script. A step above that, you use a CM tool (say, Puppet), so instead of pushing to the servers you push to the puppetmaster and then run puppet agent --test on each server: that way configuration becomes code, and therefore repeatable.
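A minimal sketch of that rsync-based push; hostnames and paths are hypothetical:

    # push the working tree to each server; --delete keeps strays from piling up
    for h in srv1 srv2 srv3; do
        rsync -av --delete --exclude='.bzr' ~/scripts/ "$h":/usr/local/scripts/
    done
    # the Puppet variant pushes to the puppetmaster instead, then on each node:
    ssh srv1 'puppet agent --test'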

You can elaborate on this almost endlessly, but the basic idea stays the same: SCM is SCM is SCM; nothing more, nothing less.

Re:proper deployment/rollback (4, Informative)

turbidostato (878842) | about 2 years ago | (#41402909)

Oh, by the way, you really should listen to those telling you you *need* some kind of development environment.

Again, I've already been there, so I know your pain: even for the silliest development the developers get their own development environment, but from us systems people it's expected that everything just fits into place on the first try, no second chances. Of course, the next heavy refurbishment will be near impossible: being a good professional allows for more or less "clean" kaizen-style incremental change, but anything a bit dangerous becomes nearly impossible for lack of test environments (with luck, the next "heavy test window" will come in three or four years, when all the servers are decommissioned and new ones take their place). But that's the way it is; take it or leave it.

The good news is that virtualization, while not a panacea, even at desktop level (you really should have a look at Vagrant [1]) allows for a lot of testing that was impossible in the age of "real iron only".

[1] http://www.vagrantup.com/ [vagrantup.com]

Re:proper deployment/rollback (3, Insightful)

SQLGuru (980662) | about 2 years ago | (#41403267)

Another benefit of a versioning system is that you don't have to keep large chunks of commented out code. If it needs to go, delete it. It's in the history if you need to go back to it. This alone will clean up most of the spaghetti that a one-coder shop faces.

Stay with what you have. (0)

Anonymous Coward | about 2 years ago | (#41402797)

You are going to put yourself through a heap of misery changing over to something you have to learn all over again. Best stay where you are with Python.

Subversion. (0)

Anonymous Coward | about 2 years ago | (#41402855)

Check it into subversion. You can get your build/packaging tools to embed the svn revision into the artifact.

For all the git lovers out there: r564 is so much easier for a human to deal with than a large hex string, and most git advantages don't really apply to a single developer.

Mercurial then (0)

Anonymous Coward | about 2 years ago | (#41404797)

With Mercurial you can also use the -r564 syntax, and as with git you don't need a repository server running anywhere (whether on the same machine or a distant one). So if your only objection to git is "I don't want a large hex string", then use Mercurial, not Subversion. Really.

As for "most git advantages don't really apply to a single developer": once you get used to high-speed versioning, full local history, rollback, topic branching and merging, trust me, you never want to go back to svn, even when you're developing alone. Never.

Stopped reading 20% of the way through (-1)

Anonymous Coward | about 2 years ago | (#41402875)

I'm sorry, what? "Automatization"? Would you happen to mean automation?

I would be even more shocked at this, but on the other hand I've heard plenty of people use the word "equivilate" before, too.

Rename the files f1, f2, f3, etc. (4, Funny)

Maximum Prophet (716608) | about 2 years ago | (#41402895)

Quick! Rename all the files f1, f2, f3 etc, rename all the variables i1, i2, i3, etc and remove all whitespace.

Keep a translation sheet on you at all times. Suddenly, you're irreplaceable.

(:-) for the humor impaired. This is actually a riff on a joke from WKRP, when an engineer said he was replacing all the color-coded wiring with black wires for job security. (BTW, the engineer was played by one of the writers of the show.)

Re:Rename the files f1, f2, f3, etc. (0)

Relayman (1068986) | about 2 years ago | (#41403077)

Score:5 Funny (No mod points today, sorry. It looks like I have to comment to get mod points.)

Re:Rename the files f1, f2, f3, etc. (1)

HornWumpus (783565) | about 2 years ago | (#41403685)

irreplaceable == unpromotable.

Granted, it's a one-man shop, so there's not likely to be much in the way of upward mobility anyhow.

Re:Rename the files f1, f2, f3, etc. (1)

Medievalist (16032) | about 2 years ago | (#41404051)

I wasn't familiar with the WKRP schtick, but I actually worked for a DOD subcontractor and saw a guy wire an entire Harpoon missile controller using nothing but blue wire. It was for the test environment, of course, not for combat use. Hundreds of individual wires, all pale blue... most of them would be printed circuits in the real controllers.

That is one of the experiences (building out the Internet was another) that convinced me there is no such thing as a non-trivial test environment. You cannot simulate the interactions of very large amounts of completely normal human stupidity. You can't even really get close.

Re:Rename the files f1, f2, f3, etc. (1)

SolitaryMan (538416) | about 2 years ago | (#41404145)

What is "WKRP"?

Re:Rename the files f1, f2, f3, etc. (0)

Anonymous Coward | about 2 years ago | (#41404659)

WTF? You need to brush up on your 70's/80's TV shows. WKRP was awesome. http://en.wikipedia.org/wiki/WKRP_in_Cincinnati

The Story of Mel is instructive here. (5, Interesting)

Anonymous Coward | about 2 years ago | (#41402903)

Most of you who have seen this probably read it in the Jargon File. It's relevant. The short answer is "you don't":

The Story of Mel, a Real Programmer

This was posted to USENET by its author, Ed Nather (utastro!nather), on May 21, 1983.

A recent article devoted to the *macho* side of programming made the bald and unvarnished statement:

                  Real Programmers write in FORTRAN.

          Maybe they do now,
          in this decadent era of
          Lite beer, hand calculators, and "user-friendly" software
          but back in the Good Old Days,
          when the term "software" sounded funny
          and Real Computers were made out of drums and vacuum tubes,
          Real Programmers wrote in machine code.
          Not FORTRAN. Not RATFOR. Not, even, assembly language.
          Machine Code.
          Raw, unadorned, inscrutable hexadecimal numbers.
          Directly.

          Lest a whole new generation of programmers
          grow up in ignorance of this glorious past,
          I feel duty-bound to describe,
          as best I can through the generation gap,
          how a Real Programmer wrote code.
          I'll call him Mel,
          because that was his name.

          I first met Mel when I went to work for Royal McBee Computer Corp.,
          a now-defunct subsidiary of the typewriter company.
          The firm manufactured the LGP-30,
          a small, cheap (by the standards of the day)
          drum-memory computer,
          and had just started to manufacture
          the RPC-4000, a much-improved,
          bigger, better, faster --- drum-memory computer.
          Cores cost too much,
          and weren't here to stay, anyway.
          (That's why you haven't heard of the company,
          or the computer.)

          I had been hired to write a FORTRAN compiler
          for this new marvel and Mel was my guide to its wonders.
          Mel didn't approve of compilers.

          "If a program can't rewrite its own code",
          he asked, "what good is it?"

          Mel had written,
          in hexadecimal,
          the most popular computer program the company owned.
          It ran on the LGP-30
          and played blackjack with potential customers
          at computer shows.
          Its effect was always dramatic.
          The LGP-30 booth was packed at every show,
          and the IBM salesmen stood around
          talking to each other.
          Whether or not this actually sold computers
          was a question we never discussed.

          Mel's job was to re-write
          the blackjack program for the RPC-4000.
          (Port? What does that mean?)
          The new computer had a one-plus-one
          addressing scheme,
          in which each machine instruction,
          in addition to the operation code
          and the address of the needed operand,
          had a second address that indicated where, on the revolving drum,
          the next instruction was located.

          In modern parlance,
          every single instruction was followed by a GO TO!
          Put *that* in Pascal's pipe and smoke it.

          Mel loved the RPC-4000
          because he could optimize his code:
          that is, locate instructions on the drum
          so that just as one finished its job,
          the next would be just arriving at the "read head"
          and available for immediate execution.
          There was a program to do that job,
          an "optimizing assembler",
          but Mel refused to use it.

          "You never know where it's going to put things",
          he explained, "so you'd have to use separate constants".

          It was a long time before I understood that remark.
          Since Mel knew the numerical value
          of every operation code,
          and assigned his own drum addresses,
          every instruction he wrote could also be considered
          a numerical constant.
          He could pick up an earlier "add" instruction, say,
          and multiply by it,
          if it had the right numeric value.
          His code was not easy for someone else to modify.

          I compared Mel's hand-optimized programs
          with the same code massaged by the optimizing assembler program,
          and Mel's always ran faster.
          That was because the "top-down" method of program design
          hadn't been invented yet,
          and Mel wouldn't have used it anyway.
          He wrote the innermost parts of his program loops first,
          so they would get first choice
          of the optimum address locations on the drum.
          The optimizing assembler wasn't smart enough to do it that way.

          Mel never wrote time-delay loops, either,
          even when the balky Flexowriter
          required a delay between output characters to work right.
          He just located instructions on the drum
          so each successive one was just *past* the read head
          when it was needed;
          the drum had to execute another complete revolution
          to find the next instruction.
          He coined an unforgettable term for this procedure.
          Although "optimum" is an absolute term,
          like "unique", it became common verbal practice
          to make it relative:
          "not quite optimum" or "less optimum"
          or "not very optimum".
          Mel called the maximum time-delay locations
          the "most pessimum".

          After he finished the blackjack program
          and got it to run
          ("Even the initializer is optimized",
          he said proudly),
          he got a Change Request from the sales department.
          The program used an elegant (optimized)
          random number generator
          to shuffle the "cards" and deal from the "deck",
          and some of the salesmen felt it was too fair,
          since sometimes the customers lost.
          They wanted Mel to modify the program
          so, at the setting of a sense switch on the console,
          they could change the odds and let the customer win.

          Mel balked.
          He felt this was patently dishonest,
          which it was,
          and that it impinged on his personal integrity as a programmer,
          which it did,
          so he refused to do it.
          The Head Salesman talked to Mel,
          as did the Big Boss and, at the boss's urging,
          a few Fellow Programmers.
          Mel finally gave in and wrote the code,
          but he got the test backwards,
          and, when the sense switch was turned on,
          the program would cheat, winning every time.
          Mel was delighted with this,
          claiming his subconscious was uncontrollably ethical,
          and adamantly refused to fix it.

          After Mel had left the company for greener pa$ture$,
          the Big Boss asked me to look at the code
          and see if I could find the test and reverse it.
          Somewhat reluctantly, I agreed to look.
          Tracking Mel's code was a real adventure.

          I have often felt that programming is an art form,
          whose real value can only be appreciated
          by another versed in the same arcane art;
          there are lovely gems and brilliant coups
          hidden from human view and admiration, sometimes forever,
          by the very nature of the process.
          You can learn a lot about an individual
          just by reading through his code,
          even in hexadecimal.
          Mel was, I think, an unsung genius.

          Perhaps my greatest shock came
          when I found an innocent loop that had no test in it.
          No test. *None*.
          Common sense said it had to be a closed loop,
          where the program would circle, forever, endlessly.
          Program control passed right through it, however,
          and safely out the other side.
          It took me two weeks to figure it out.

          The RPC-4000 computer had a really modern facility
          called an index register.
          It allowed the programmer to write a program loop
          that used an indexed instruction inside;
          each time through,
          the number in the index register
          was added to the address of that instruction,
          so it would refer
          to the next datum in a series.
          He had only to increment the index register
          each time through.
          Mel never used it.

          Instead, he would pull the instruction into a machine register,
          add one to its address,
          and store it back.
          He would then execute the modified instruction
          right from the register.
          The loop was written so this additional execution time
          was taken into account ---
          just as this instruction finished,
          the next one was right under the drum's read head,
          ready to go.
          But the loop had no test in it.

          The vital clue came when I noticed
          the index register bit,
          the bit that lay between the address
          and the operation code in the instruction word,
          was turned on ---
          yet Mel never used the index register,
          leaving it zero all the time.
          When the light went on it nearly blinded me.

          He had located the data he was working on
          near the top of memory ---
          the largest locations the instructions could address ---
          so, after the last datum was handled,
          incrementing the instruction address
          would make it overflow.
          The carry would add one to the
          operation code, changing it to the next one in the instruction set:
          a jump instruction.
          Sure enough, the next program instruction was
          in address location zero,
          and the program went happily on its way.

          I haven't kept in touch with Mel,
          so I don't know if he ever gave in to the flood of
          change that has washed over programming techniques
          since those long-gone days.
          I like to think he didn't.
          In any event,
          I was impressed enough that I quit looking for the
          offending test,
          telling the Big Boss I couldn't find it.
          He didn't seem surprised.

          When I left the company,
          the blackjack program would still cheat
          if you turned on the right sense switch,
          and I think that's how it should be.
          I didn't feel comfortable
          hacking up the code of a Real Programmer.

This is one of hackerdom's great heroic epics, free verse or no. In a few spare images it captures more about the esthetics and psychology of hacking than all the scholarly volumes on the subject put together.

[1992 postscript --- the author writes: "The original submission to the net was not in free verse, nor any approximation to it --- it was straight prose style, in non-justified paragraphs. In bouncing around the net it apparently got modified into the `free verse' form now popular. In other words, it got hacked on the net. That seems appropriate, somehow."]

Re:The Story of Mel is instructive here. (1)

gorzek (647352) | about 2 years ago | (#41403961)

I never, ever get tired of this story.

Re:The Story of Mel is instructive here. (0)

Anonymous Coward | about 2 years ago | (#41404321)

Mel Kaye should cock slap all the current breed of VMed language weenies and teach them a thing or two about efficient coding.

DVCS for the win (1)

AwesomeMcgee (2437070) | about 2 years ago | (#41402933)

Create a git repository on 'production' and then a fork on your development machine. (Or better, a fork on a test machine, which you then fork to development.)

Do your development, check in, and then pull to test and execute there; if all goes well, pull to prod and execute there.
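A minimal sketch of that chain, with hypothetical hostnames; each box holds an ordinary (non-bare) clone:

    git clone ssh://prod/srv/scripts ~/scripts      # development fork of production
    # ...edit, test locally, commit...
    ssh test 'cd /srv/scripts && git pull ssh://devbox/home/me/scripts master'
    # execute on test; if all goes well, promote the same commits:
    ssh prod 'cd /srv/scripts && git pull ssh://test/srv/scripts master'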

Revision Control and Deployment (5, Insightful)

MrSenile (759314) | about 2 years ago | (#41402953)

Before it gets out of hand, I'd look to set up four things.

1. Set up a proper split environment. Even if you don't have the hardware for it, set it up in such a way that when the hardware becomes available, you can move it appropriately. That being, a standard dev -> qa -> stress -> prod infrastructure.
2. Set up a good revision control. I've started to really enjoy using GIT for this, as there's other software like gitolite that can give you fine-grained access control to your repositories. However, feel free to use subversion or any other well contained revision control platform.
3. Set up a good method for deployment. My suggestion? Try puppet. It's free, and it's powerful, and if you get it configured, adding new systems to it is exceedingly easy to do.
4. Packaging for your deployment. If you are installing a bunch of software (scripts, job control, etc.), package it and give it a revision; then it's easy to upgrade systems with the new package, or revert to the previous one, instead of manually copying files around or re-editing them (see the sketch below).
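Point 4, sketched as a plain versioned tarball cut from a tag (names hypothetical; an RPM or deb would follow the same idea):

    VERSION=1.4.2
    git archive --format=tar --prefix="opstools-$VERSION/" "v$VERSION" \
        | gzip > "opstools-$VERSION.tar.gz"
    # upgrade = unpack the new tarball; revert = unpack the previous one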

Hope that helps.

Re:Revision Control and Deployment (0)

Anonymous Coward | about 2 years ago | (#41403617)

Second git recommendation. Bazaar has almost no tools for inspecting the history, no bisect, and no rebasing is allowed. As a result it is almost completely useless for analyzing existing code.

Re:Revision Control and Deployment (1)

cc1984_ (1096355) | about 2 years ago | (#41404857)

Please, if you're going to troll try [canonical.com] a little harder [canonical.com] next time.

Hmm (2)

jameshofo (1454841) | about 2 years ago | (#41402967)

Yeah, that's interesting actually; I just ran into this myself. We're putting a project together, and when something breaks I end up doing small fixes and losing the changes across deployments (we only have 3 active, so it's very small). But I feel your pain. I'm not totally convinced that a full SVN system is necessary, but once you break down the problems it likely is. Given your closed infrastructure, you may want to consider adding some phone-home features to your scripts: something intelligent enough to auto-update smoothly, whether automatically or manually triggered. Make things easy for yourself so they're not difficult to work with, and you will be encouraging yourself (and others) to use them.

The absolute best advice I can give is keep it simple, there are a million different ways to do it, try not to do a massive migration of everything all at once or you may find out later that some minute bug is hindering everything you do.

Lastly, plan what you want it to look like and how it should work; it will save you weeks of work.

Git + Vagrant + Puppet (0)

Anonymous Coward | about 2 years ago | (#41403013)

Use Git for source control.

Use Vagrant to create virtualized testing environments (via headless Virtualbox) that you can ssh into, develop in, and test... all running directly within your laptop. Use Puppet or Chef to create recipes for all of your servers, and you can virtualize all of them in a "pretend" network of virtual machines. All of that can be checked into Git too.

Store your central Git repositories in Github or some other reliable place (you can stick them on one of your own servers too). Code in your virtual machine, commit, and push up to the central Git repo. Then pull it down to your live servers to automatically update them.

You can use Puppet in server/client mode to automate the deployment of server configuration changes out to your live machines also.

And if you want to get REALLY fancy, just throw a third set of machines in there and use that as your "staging" environment, where changes go after your virtualized environment, but before your live environment (mimicking the live environment as closely as possible).

Re:Git + Vagrant + Puppet (0)

Anonymous Coward | about 2 years ago | (#41403637)

This.

Perforce and a VirtualBox/VMware (1)

scorp1us (235526) | about 2 years ago | (#41403029)

I know there are plenty of open source tools out there, but I still prefer Perforce. Also, recently (as of February) Perforce opened up its 2-user license to 20 users/20 workspaces! This is fantastic news!

Check your mainline in (or migrate it) to Perforce under //depot/mainline
Integrate to a not-yet-existent branch //depot/testing/VERSION, and check that in.
Integrate //depot/testing/VERSION to a not-yet-existent branch //depot/release/VERSION, and check that in.

Now, with P4V, moving changesets from mainline to testing is as simple as drag and drop. Move changesets from mainline to testing, then from testing (including the changes made in testing) to release, and drag fixes back to mainline. (Dragging is the 'integrate' step.) You have now come full circle: you have two places where you can make changes, plus a release snapshot.

Now, get VirtualBox, because it supports snapshotting. Set up Perforce on a VM and take a snapshot. Then sync from Perforce, run your tests, and deploy as needed. Afterwards, revert the snapshot to the state just after you installed Perforce.

Then you can make packaging/deployment scripts that only work on release branches.
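A rough sketch of those integrations from the p4 command line, using the depot paths above; the change descriptions are hypothetical:

    p4 integrate //depot/mainline/... //depot/testing/1.0/...
    p4 submit -d "seed testing 1.0 from mainline"
    p4 integrate //depot/testing/1.0/... //depot/release/1.0/...
    p4 submit -d "promote tested changes to release 1.0"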

OMG@! (-1)

Anonymous Coward | about 2 years ago | (#41403035)

20,000 lines of code?! How do you get into your office?!
lol

You're making it too hard on yourself; give the script files good names and keep them in one folder, done.
When you get to 20,000 scripts, then give us a call.
Oh, and someone above said you didn't test? Newb.

Documentation (3, Informative)

Hjalmar (7270) | about 2 years ago | (#41403051)

Yes, set up a test environment. And implement some kind of versioning system, even if it's just "cp current_code old_code". You should always be able to fall back after a botched deployment.

But one of the best things you can do is start writing documentation. I like to write my documentation assuming it will be my replacement reading it, so I try to include everything. Justify every unusual implementation detail; explain why each task was done the way it was. List bugs, and any code you had to write to work around them. The best part of documenting your project is that as you work through it, you'll find things that no longer make sense, and you can make them better.

Any system, just use it. (1)

niftymitch (1625721) | about 2 years ago | (#41403059)

If you work on a single server, install RCS. You only need to
learn ci and co to start.

If you work on many boxes you need a network-friendly tool.
The obvious ones are git and Mercurial (CVS too).

Simple cp works too.

More important may be version tags and date/time hints in the scripts themselves.
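RCS really is just those two commands; a minimal sketch against a single script (the filename is hypothetical):

    ci -l backup.sh         # check in a revision, keep a locked working copy
    rlog backup.sh          # show the revision history
    co -f -r1.2 backup.sh   # force an older revision back into place if a change goes bad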

Git. (2)

blackcoot (124938) | about 2 years ago | (#41403105)

A great deal of the version wrangling you are facing is best done with a tool like Git.

The bigger problem (development discipline) is much harder to fix.

Chef & Jenkins (2)

terbeaux (2579575) | about 2 years ago | (#41403125)

You want something to track changes, deploy changes, and test software. Bazaar will track your changes.

Chef is open source infrastructure management. The central server maintains a searchable database of your nodes and all of the scripts (recipes) that run on them. The nodes query this database and run the scripts that they are supposed to. This is similar to your environment now. You can also check your chef-repo into scm. This allows you to mess around with production and only commit back into scm when you are fairly certain that it works.

Jenkins has a similar setup, but each node is ostensibly there to build and test software, although we have used it for deployment and integration testing.

Chef & Jenkins can definitely help in deploying code and maintaining your infrastructure, but you will need to take responsibility for testing your code somewhere along the process, whether on commit with Jenkins or on deploy with unit or other tests. I definitely feel the value after investing the time to learn these powerful tools.

Re:Chef & Jenkins (1)

undeadbill (2490070) | about 2 years ago | (#41404373)

Definitely Jenkins for code pushes. Not only can you decide how to push the code (even building and deploying through RPM), you can also use the Jenkins interface to manage testing and QA. Builds can be distributed across virtual machines, and automation can be tied into something like Chef or Puppet. That includes cleaning and restarting virtual host images during testing, automating deployment milestones, etc.

Also, the other benefit of using Jenkins is that you can manage future contributors through its management interface, limiting their access privileges while preserving a lot of flexibility. This means you can start someone out with a few privileges, say for QA validation only, and then add more as they show they are trustworthy.

I have a bunch of personal code that I tote around (2)

Omnifarious (11933) | about 2 years ago | (#41403217)

I keep it in a Mercurial repository and use symlinks into the repository to deploy it. I also make free use of Mercurial's subrepo feature for tools that others wrote that are not yet found as packages on the Linux distributions I use.

Yes, there is still a testing issue. For most of this code it's not a big deal because I'm the only user. I test it as I write it with a few simple hand tests and then it's good to go.

If I were doing this for something where the code mattered to other people I would just add unit tests for various subsections as made sense. I would also start sectioning off the tools and making them into separate repositories of their own. I'd also make much sparer use of the sub-repo feature and instead have deployment scripts that handled making sure the correct version was in place.

You still need test environments though for integration testing. And as the code grows, ad-hoc test environments stop being very practical. You should dedicate a VM or two (or even a machine or two) to replicating miniature versions of the real-world setups the code is expected to work in.

Lastly, it's never too early to start using source control on your code. 98% of my code is under source control, even most stuff I think is 'throwaway' or ad-hoc.

I would also strongly recommend Mercurial (or git (if you must)) over Bazaar. It's faster, and the mental model those two tools encourage is a much more accurate representation of what they're really doing. Bazaar lets you pretend that branching is still a big deal and takes some effort to resolve. It lets you continue to think in the model of centralized source systems even though it's not. You will be doing yourself a huge favor in productivity (yes, even for a single developer) to not use it and go for something that doesn't let you pretend anymore. Of those tools, I think Mercurial has a far more carefully thought out and better set of commands and options than git does.

test motivations (1)

Eponymous Hero (2090636) | about 2 years ago | (#41403339)

and now a versioning system would mean going through proper deployment/rollback in order to get real feedback.

not true. using a versioning system does not necessitate testing. just to be clear, testing is always necessary, and not enforced by any versioning system. you can use svn or git or cvs to keep versions of your files so when you do your testing on the production environment (shame on you) you won't have a stack of the same files with extensions like .bak, .bak.bak, .old, .delete, .undo, etc. sitting on your server.

test because it's the right thing, the proper thing, to do. not because you think some tech you choose to use is forcing you. you should be forcing you.

Git in production; version numbers are a nuisance. (1)

Barryke (772876) | about 2 years ago | (#41403355)

Keep the files per project in whatever production directory you want and start a Git repository in it. Version numbers are irrelevant and only a nuisance: you now have every version of every file, with any (commit) comment you want. Then, of course, add a scripted backup (such as FTP) to a central location, to recover from disasters if your production files get damaged.

Add version numbers if you start rolling out to multiple sites.

It's possible to exchange files between git repositories, or merge back changes made in another production system.

Pretend you're a team (3, Insightful)

slim (1652) | about 2 years ago | (#41403477)

Forget that you're a lone programmer. Set up a proper environment anyway.

This is going to seem like hard work, but once you've done the upfront effort, it will pay dividends.

Do *everything* that you'd do if you were a team. There are plenty of books / web sites on the subject.

  • Pick a version control system; since you're starting from scratch, Git or Mercurial. Get your code into it.
  • Pick a continuous build system; Jenkins is popular and free.
  • Write one unit test, and make Jenkins run it as part of the build process.
  • Decide on some sort of repository for your build artefacts.
  • Establish an integration testing box, and have your CI system deploy to it on every build. Ideally use something like Puppet for this, and use Puppet on your production machines too.
  • Write one integration test, and make Jenkins run it after deployment. (A sketch of a minimal build step follows this list.)
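One concrete shape for the build step Jenkins runs on every commit, as a sketch (script name and hosts are hypothetical):

    #!/bin/sh
    # build.sh: fail the build on the first broken test, then deploy to integration
    set -e
    python -m unittest discover -s tests
    rsync -av --delete --exclude='.git' . deploy@integration-box:/opt/scripts/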

You can dedicate a server to all of this, several servers, run it all on your laptop or in VMs; it really doesn't matter. But think ahead so that you can move it to dedicated machines later if you need to.

Lots of work, but now you have a nice, confidence inspiring build / code management system.

Once that's going, you can decide how to fix your lack of tests. One approach is to take a few weeks just writing tests. Another is to write tests as the need arises -- for new code as you write it; to demonstrate bugs before you fix them. Or somewhere in between.

Python isn't my area, but there is probably an ecosystem of pythonesque tools for a lot of this stuff. pyUnit, code coverage tools, etc.

You will have problems unit testing, since you won't have designed the code for testability. The choice is: live with fewer tests than might otherwise be possible, or refactor your design into something more unit-testable. (IoC is unit testing's best friend.)

GitHub (2)

the eric conspiracy (20178) | about 2 years ago | (#41403585)

Just get one of the inexpensive commercial subscriptions to GitHub. This solves all sorts of issues: remote backup, a robust version system, issue tracking, etc.

Re:GitHub (0)

Anonymous Coward | about 2 years ago | (#41403777)

And it adds all the problems of the cloud. This guy is a networking system admin working at an ISP. He should be able to handle running his own server, or at the very least, should learn how to do all those things.

Use Python's unittest framework (0)

Anonymous Coward | about 2 years ago | (#41403635)

I almost posted this exact question about 9 months ago. I ended up using git + GitHub for version control. There are enough comments posted already about version control so all I'll say is that even working by myself, using git for real branching is why I still have a job.

Python's unittest package is really great, at least for the small (10k lines of code) project I'm working on. Using no third-party code at all, you can set up a testing package so that all you have to do is type "python test" before you commit to your repo (make a Python package called "test" and put a "__main__.py" module in it that calls "unittest.main()"). Python is great for glue code, so you can write Python unit tests for your Bash code as well as your Python code, and make it all automatic using unittest's test discovery (start your file names with "test_*.py").
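A sketch of the pre-commit habit this enables; the "test" package name is the poster's convention:

    # run every test/test_*.py module, and only commit if they all pass
    python -m unittest discover -s test -p 'test_*.py' && git commit -m "..."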

Perhaps unit tests aren't as good as having a whole box dedicated to testing the entire environment, but it's *dead simple* to maintain, and its simplicity will encourage you to test before every commit (or at least before you merge your topic branches back into master).

Take it out back (1)

dutchwhizzman (817898) | about 2 years ago | (#41403659)

Take it out back and shoot it. If it's rabid, there is no cure.

Bazaar Hate? (0)

Anonymous Coward | about 2 years ago | (#41403677)

What's with the Bazaar hate people? The summary says he's already migrating to bazaar so there's no reason to say 'switch to my XXX version control system because I use it thus it's better for you too'. All the free distributed version control systems (git, mecurial, bazaar, etc..) have the same feature sets (with slightly different names) and none of them have to be used in a distributed fashion. Like RCS, CVS, SVN, and everything else you'll only be using two main commands: check in and check out. The other main commands you'll use are tag, revert, and diff. All the control systems are equally easy to setup and maintain.

Neither RCS nor CVS can track changes across multiple files as a single unit (a feature called atomic commits). They should be instantly dropped from consideration for that alone.

I would recommend Bazaar for a beginner, as it has excellent step-by-step tutorials for many use cases and good GUI tools (which flatten the learning curve until you're ready for custom command-line scripts). SVN is fine, but a distributed system will prove more flexible as you add more people.
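For the submitter's benefit, the day-to-day Bazaar cycle really is tiny; a sketch with made-up paths and messages:

<ecode>
cd ~/scripts
bzr init                        # version the existing directory in place
bzr add                         # recursively add everything
bzr commit -m "Initial import of admin scripts"
# ...edit...
bzr diff                        # review changes
bzr commit -m "Retry Cisco logins on timeout"
bzr tag production-2012-09-21   # mark what got deployed
bzr revert                      # or discard uncommitted edits
</ecode>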

Personally, I try to stay away from git and mercurial because of their communities. They tend to have a lot of people saying 'Git/Mercurial is the best version control system ever because it can do things the others can't', without ever trying to explain how their distributed system is better than any other distributed system. The alternatives simply don't exist in their warped fanboy world. The less popular products tend to have a truer view of the landscape (and thus, in my view, are better overall).

Ignore any claims about speed. 20,000 lines of code isn't a large project; it might be large for you personally, but in the field of software development it's a largish small project. Large projects run to millions of lines. While you're under a million lines and don't have a lot of binary files under version control (the commercial systems handle binaries much better than all the OSS version control software), speed is a non-issue: the minor differences won't add up to the time you'd spend figuring out which one is faster under your workload.

One last point: you don't need a dedicated computer to host your source code. You're not using 486s, are you?

Jenkins for deployment (1)

THE_WELL_HUNG_OYSTER (2473494) | about 2 years ago | (#41403733)

Use Jenkins [jenkins-ci.org] for deployment. You can automate the entire process. For example, imagine automatically deploying after checking in a revision that contains the word "***DEPLOY***" in the commit comment.
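That marker trick doesn't need anything Jenkins-specific; a shell build step can simply inspect the newest commit message and skip the deploy otherwise. A sketch assuming a Bazaar branch (host and paths invented):

<ecode>
#!/bin/bash
# Jenkins "Execute shell" build step
if bzr log -l 1 | grep -q '\*\*\*DEPLOY\*\*\*'; then
    rsync -av --delete scripts/ deployhost:/usr/local/admin-scripts/
else
    echo "No ***DEPLOY*** marker in the last commit; skipping."
fi
</ecode>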

Please pass this to your boss. (don't read it) (3, Informative)

HornWumpus (783565) | about 2 years ago | (#41403767)

You need to fire this cowboy. He doesn't think he needs to test his scripts.

I know he seems irreplaceable. That should be a big red flag.

Testing and modern Versioning System (1)

Morpf (2683099) | about 2 years ago | (#41403889)

First, please try to test your code. Unless you can formally prove it's correct, testing is the only way to shake most of the bugs out. Working test-first can improve code quality substantially, in function and in form, and with tests in place you can refactor safely. Try writing some mock-ups for the things outside your own code.
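For the mock-up suggestion, the mock library (third-party today; bundled as unittest.mock from Python 3.3 on) makes this painless. A self-contained sketch with invented names:

<ecode>
import unittest
from unittest import mock  # on Python < 3.3: pip install mock; import mock

def show_interfaces(host):
    """Stand-in for a function that really telnets into a switch."""
    raise RuntimeError("would talk to real hardware")

def count_down_interfaces(host):
    output = show_interfaces(host)
    return sum(1 for line in output.splitlines() if line.endswith("down"))

class TestInterfaceCount(unittest.TestCase):
    def test_counts_down_interfaces(self):
        fake = "Gi0/1 up\nGi0/2 down\nGi0/3 down"
        # Patch the hardware-facing call so the test never leaves the box.
        with mock.patch(__name__ + ".show_interfaces", return_value=fake):
            self.assertEqual(count_down_interfaces("core-sw-1"), 2)

if __name__ == "__main__":
    unittest.main()
</ecode>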

I would choose a distributed versioning system -- not so much because it's distributed, but because the best-known ones (git, mercurial, bazaar) behave way better than svn. The merging algorithms are better, and checking many small files in or out of svn is really, really slow, as it transfers one file after another. Bonus point: working on a local "copy" is fast, since no network is going to slow you down.

stop writing code, you are dangerous (1)

arkowitz (1185265) | about 2 years ago | (#41403935)

you are having too much fun dabbling and playing

wait (1)

mapfortu (2567463) | about 2 years ago | (#41403991)

Wait, I know...

Among them: glue scripts, Cisco interaction / automatization tools, backup tools, alerting tools, IP-to-Serial OOB stuff, even a couple of web applications (LAMPython and CherryPy)

And then I thought of my old lost vow to accomplish aLFS [linuxfromscratch.org].

Looking for ttervo's old "church of xut" pic (Tux holding a smoking high-calibre street sweeper and walking away from the monitor), I ran across hakin9. One of the cleaner front pages I have seen in a while. The ad: "become a pentester" -- a legal hacker. You know, before they began demanding buggy software on store shelves, beta tester was going to be a very happy career.

Well ... (4, Insightful)

gstoddart (321705) | about 2 years ago | (#41404073)

My question for the Slashdot community is: in the case of single developer (for now), multiple machines, and a small-ish user base, what would be your suggestions for code versioning and deployment, considering that there are no real test environments and most code just goes into production ?

If I'm the people running the company, I start firing people. If I'm the developer, I run like hell before anybody realizes what a complete mess I've made.

No versioning, no test environment, live changes in production ... these are warning signs of something which has been cobbled together, and which continues working by sheer dumb luck.

I had a developer once who edited a live production environment without telling anybody and broke it even worse -- he very quickly found himself with no access to the machines and being told that we no longer trusted him with a production environment.

Having worked in highly regulated industries where the stakes are really high, I've had it drilled into me that you simply have no room whatsoever to be doing this kind of thing ad hoc.

Glad you're starting to use something. But the risk to your employer of all of your stuff tanking and becoming something you can't recover is just too great. From the sounds of it, if you get abducted by aliens or hit by a bus, your company would come to a screeching halt.

apt for deployment (1)

grewil (2108618) | about 2 years ago | (#41404075)

If you use Debian, you can set up your own package repository with reprepro. It's really easy to deploy stuff using apt.
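Roughly what that looks like, for the curious (codename, host, and package names invented):

<ecode>
# /srv/repo/conf/distributions -- minimal reprepro config:
#   Codename: squeeze
#   Architectures: amd64
#   Components: main

# Publish a package into the repository:
reprepro -b /srv/repo includedeb squeeze nettools_1.2_all.deb

# On each client, point apt at the repo and install:
echo 'deb http://repo.example.internal/ squeeze main' \
    > /etc/apt/sources.list.d/internal.list
apt-get update && apt-get install nettools
</ecode>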

keep it simple and comment changes in the logs (0)

Anonymous Coward | about 2 years ago | (#41404343)

I personally keep all my admin config files, scripts, etc. under RCS control. I want per-file granularity in the comments describing changes. git, mercurial & company solve a different problem than the one you have doing system administration.

I'm working on a project right now that will probably use RCS for tracking individual files and mercurial for tracking the project at the release level. I use RCS a lot during development: write some code, test it, check it in; write some more, test it, check it in. It saves time when I find something doesn't work as expected. On one piece of code I currently have 51 versions saved, with comments describing the changes and the reasons for making them. It's scientific research code, so there are lots of experiments with different methods of doing things.
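For anyone who hasn't touched RCS in a while, that cycle is just a handful of commands (file name invented):

<ecode>
co -l backup.sh            # check out and lock the file for editing
vi backup.sh               # make the change
ci -u backup.sh            # check in (prompts for a change comment),
                           # keeping a read-only working copy
rlog backup.sh             # read the full per-file history back
rcsdiff -r1.48 backup.sh   # compare working file against revision 1.48
</ecode>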

SVN + Jenkins (1)

Gripp (1969738) | about 2 years ago | (#41404527)

I commit everything to SVN, then use Jenkins to manage updates. Once you create the Jenkins job, all you have to do in the future is run it, and you can string jobs together so that if a change needs to be pushed to a number of servers, it's still one click.

SQLite (1)

ericbg05 (808406) | about 2 years ago | (#41404549)

My personal programming hero, D. Richard Hipp, works with a very small team on SQLite (which you may have heard of). He uses his own, home-grown SCM called fossil [fossil-scm.org] . It probably doesn't scale to a zillion contributors but, like all of Hipp's work that I'm aware of, it's super clean and easy to use. Sounds pretty great for your use case.
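If the submitter wants to kick the tires, the basic fossil cycle is short (names invented); the repository is a single SQLite file, and the web UI, wiki, and ticket tracker come along for free:

<ecode>
fossil new tools.fossil      # create the repository (one SQLite file);
                             # newer builds also accept "fossil init"
mkdir tools && cd tools
fossil open ../tools.fossil  # check out a working copy
fossil add .
fossil commit -m "Initial import"
fossil ui                    # serve the built-in web UI locally
</ecode>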

And, as other people on this thread have already said: your habit of throwing stuff into production without testing it is similar to playing Russian Roulette with your company. Stop that. Stop that right now.

If his Boss Is An Idiot (0)

Anonymous Coward | about 2 years ago | (#41404855)

..he should continue that risky practice. The bozo will only ask him why things have slowed down if he does some proper testing. Apparently, it is not (yet) necessary.

There are situations where you can do this -- especially if you have excellent people who can quickly patch problems up, and if you don't operate one of those commercialware clusterfucks from HP to manage your network.

Please tell me what company / product this is for (2)

bratmobile (550334) | about 2 years ago | (#41404557)

Because I never, ever want to rely on anything you build this way. You are headed for a disaster, unless you 1) set up a test environment, and 2) use a revision control system.

Really, anything less than that is just a complete waste of everyone's time.

Source Control Axiom (1)

QuickBible (1143641) | about 2 years ago | (#41404667)

You won't think you need source control until you start using it -- then you'll wonder how you ever lived without it. Then you'll start making wild changes because, hey, you have source control now, so you can always roll back. That quickly leads to needing a testing environment.

It's all good (0)

Anonymous Coward | about 2 years ago | (#41404717)

dude don't worry; within a few weeks you'll be able to deal with all the spaghetti nonsense, since you'll eventually just learn the crazy, non-standard, and ridiculous code anyway...

and hey, once you're fired or quit, it won't be your problem anyway. So, short answer, there's NEVER a good time/reason to do tests unless required by regulations.

Open a bitbucket account (1)

Slackus (598508) | about 2 years ago | (#41404725)

Bitbucket supports both Git and Mercurial and has free accounts for unlimited private repositories. In addition to version control you get issue tracking, wikis etc.

SVN Repository (0)

Anonymous Coward | about 2 years ago | (#41404807)

Check in all major changes individually. This allows for nice rollback and for later analysis if something goes wrong. Also, if you fuck up something while editing you can always roll back. SVN is quite easy after some usage experience and it can be scripted. Don't use the graphical B.S. tools for SVN. Use proper commit comments.
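A sketch of that scripted cycle, with made-up URLs and file names:

<ecode>
svn checkout http://svnhost.example.internal/repos/admin ~/admin
cd ~/admin
# ...edit...
svn status                        # what changed?
svn diff scripts/alerting.py      # review before committing
svn commit -m "Alerting: retry SNMP timeouts before paging"
svn revert scripts/alerting.py    # back out an uncommitted mistake
svn update                        # pull in changes made elsewhere
</ecode>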

Also, check in all important documents (e.g. important configurations, network topology plans, setup procedures, etc.). Protect the repository well, as it will probably contain passwords (I know that is not optimal, but the God of the Mighty Dollar dictates it).
