Beta
×

Welcome to the Slashdot Beta site -- learn more here. Use the link in the footer or click here to return to the Classic version of Slashdot.

Thank you!

Before you choose to head back to the Classic look of the site, we'd appreciate it if you share your thoughts on the Beta; your feedback is what drives our ongoing development.

Beta is different and we value you taking the time to try it out. Please take a look at the changes we've made in Beta and  learn more about it. Thanks for reading, and for making the site better!

The P.G. Wodehouse Method of Refactoring

kdawson posted more than 6 years ago | from the on-the-wall dept.

Programming 133

covertbadger notes a developer's blog entry on a novel way of judging progress in refactoring code. "Software quality tools can never completely replace the gut instinct of a developer — you might have massive test coverage, but that won't help with subjective measures such as code smells. With Wodehouse-style refactoring, we can now easily keep track of which code we are happy with, and which code we remain deeply suspicious of."

Sorry! There are no comments related to the filter you selected.

Grok it. (5, Insightful)

symbolset (646467) | more than 6 years ago | (#22834864)

It's only 30k lines of code. This is no problem.

First, take ownership. This is your project. Identify your resources, name the gates you must get through to succeed. If you have help make sure they understand their changes must hit the corner cases or it's junk, then give them ownership of their piece explicitly. Create a safe environment for testing changes, with forward and backward versioning.

Define success. So many projects skip this essential step. If you cannot identify the destination you cannot tell when you've won.

Skip the 50,000 foot view and proceed directly to "what does this do and how can it be done better"? Believe it or not flowcharts and Venn diagrams are not obsolete. Create tree views of function calls. Identify processes that should be libraried. Create policies like "maximum function call depth", "Maximum process share", etc.

If you're the lead, look at issues like memory allocation and process management. Do your profiler due diligence.

If you're the lone ranger on this just absorb the whole thing and integrate it. Force feed your brain huge quantities of what-ifs until it gives you the right answer in self defense - and then have somebody else check the result.

30 days development and 60 days testing. Remember to give a nice presentation at the end and sell it!

Good luck.

Goatse (-1, Troll)

Anonymous Coward | more than 6 years ago | (#22834974)

Goatse. [twofo.co.uk] [goatse.ch]

You nerds love it.

In other news, Zeus sucks cock.

Divide and conquer (0)

Anonymous Coward | more than 6 years ago | (#22835276)

I agree with assigning ownership, but I don't think it is enough. How do you aggregate the combined efforts of all the teams into a birds eye perspective and how do you know where the boundaries of the project lie? If the project is big enough and the requirements are complex enough, you really need an evolutionary method to control the cost of adjusting direction. The number of questions you propose will simply be too many to deal with. On the other hand, if it's all about producing a release (prototype, proof-of-concept, save-my-butt-development) and then move on you may have the benefit of never having to taste your own dog food and the investors will pick up the bill instead and draw strange conclusions about the lifetime of an invention.

Re:Grok it. (3, Insightful)

jonaskoelker (922170) | more than 6 years ago | (#22835286)

Skip the 50,000 foot view and proceed directly to "what does this do and how can it be done better"?
While your post has many good and clearly expressed ideas, I'm not quite sure where you're driving at right here. The question "how can this be done better" can be asked at each node in the call graph, and the question is very broad.

To ask this question near the root, for architectural purposes, I think what you want is exactly the 50 kilofoot view. There's of course utility in asking the same question closer to the leaves, but I think it's a mistake to overlook the big perspective in favour of going low-level.

Re:Grok it. (4, Informative)

Like2Byte (542992) | more than 6 years ago | (#22835612)

I totally agree. Even if you are a developer for a large, corporate, multi-vendor project - knowing how components that feed components you directly interface with will allow you to become a better developer for the project and to point out problematic architectural design issues.

And if I hear one more project manager say, "Let's not worry about this corner case" (usually said with no idea how this is going to negatively effect the entire process tree) I'm going to punch them in the colon.

There are two ideas of thought about corner cases (and the GP pointed out one).
Thought #1) (GP) There's no such things as a corner. It is a requirement - it may be that fewer people/fewer processes use it; but, it is still a section of the total solution that must be designed to overcome some problematic section. Otherwise, why is the code being written?

Thought #2) Corner cases only effect a small number of your user-base; therefore, code to satisfy 95%-99% of your customers. The underlying principle here is that the manager will wait for another release. This approach is usually taken when the project manager failed to account for something and says (and I quote), "We'll just re-design it after the first release."

If you find yourself in an environment where #2 (hehe) permeates the thought structure of management you have few options available to you.
a) Kindly (because wrapping your hands firmly around their neck is just not understood these days) explain to them the flaw in that kind of thinking. It usually involves educating the manager to a level they've never even considered before. Completion of this project will be long and arduous. Good luck to you.

b) If step 'a' fails - inform management. Project Managers (in large corps) are not, usually, the final decision maker. Elevate this threat (to the project) to the PM's manager - a Director, perhaps.

c) If you're able, move to a new project within the company where the project manager in case 'a' has no influence. I know that's not feasible in most segments.

d) Find a new job.

If the project is sufficiently high profile enough then recourse option 'a', above, is your only solution. Mitigate the damage by engaging the offending PM and try to keep them under thumb by sharing your expertise with them. Good luck with that brick wall. YMMV.

There are no corner cases in really good gode (4, Interesting)

Terje Mathisen (128806) | more than 6 years ago | (#22836352)

There are two ideas of thought about corner cases (and the GP pointed out one).
Thought #1) (GP) There's no such things as a corner. It is a requirement - it may be that fewer people/fewer processes use it; but, it is still a section of the total solution that must be designed to overcome some problematic section. Otherwise, why is the code being written?

Thought #2) Corner cases only effect a small number of your user-base; therefore, code to satisfy 95%-99% of your customers. The underlying principle here is that the manager will wait for another release. This approach is usually taken when the project manager failed to account for something and says (and I quote), "We'll just re-design it after the first release."
I have taken part in a few optimization competitions, and each time #1 has been a crucial part of the solution:

The usual approach is to optimize the 90-95% case, then bail on the remainder, but this will almost always be beaten by code which manages to turn everything into the "normal" case, with no if/else handling, no testing, no branching.

When I was beaten by David Stafford in Dr.Dobbs Game of Life challenge, I had lots of specialcase code to handle all the border cases, while David had managed to embed that information into his lookup tables and data structures. (He had also managed to make the working set so much smaller that it would mostly fit in the L1 cache. :-)

When my Pentomino solver won another challenge, being twice as fast as #2, the crucial idea was to make the solver core as tiny as possible, with very little data movement and the minimum possible number of tests.

Terje

Re:Grok it. (-1)

Anonymous Coward | more than 6 years ago | (#22836984)

I have an engineer like that. He's a terrific designer and above-average coder with absolutely no business sense whatsoever. As a result, he's got no sense of good enough nor does he have any realization that a change that makes a problem better (eg taking us from 10 outages/year across the customer base to 1 or 2) with 1 unit of work is better than the ultimate solution that'd take 10 units of work and in that time he'd be unable to work on anything else.

*chuckle* In fact, I've watched him actively resist good enough engineering solutions since he understands that we'll tradeoff the 1-2 outages/year as a business decision so other features can be added or other subsystems stabilized and he'll never get to do it perfectly.

In a perfect world, we'd have an infinite amount of time for people to create the perfect beautiful design. In the real world, successful companies design something adequate for v1.0 and evolve their way there *if necessary*.

And I almost forgot, he had his latest feature pulled from the last release because he couldn't figure out minimum ship and has put the latest release in danger. Yes, it'll be less problematic code than most but releasing something is better than releasing nothing.

Re:Grok it. (2, Insightful)

Anonymous Coward | more than 6 years ago | (#22838688)

moderated -1, Asshat

If your engineer wasn't a perfectionist you would probably be a failure

Re:Grok it. (-1, Flamebait)

Anonymous Coward | more than 6 years ago | (#22839320)

Ah, the gleeful bleating of yet another programmer who never delivers a project on time, and would rather see the company go bust than ship something that is even 0.0000001% imperfect.

-- a project manger

Re:Grok it. (-1, Troll)

Anonymous Coward | more than 6 years ago | (#22835432)

Ummm... you're answering a question no one asked

Re:Grok it. (3, Interesting)

blahplusplus (757119) | more than 6 years ago | (#22835518)

"Believe it or not flowcharts and Venn diagrams are not obsolete."

Believe it or not I use mindmapping software to help plan out the structure of a program and draw relationship lines arbitrarily, I wish someone made these mindmapping programs and made them more accessable to programs and programming.

http://www.thebrain.com/ [thebrain.com]

Also great flowchart drawing tools:

http://www.smartdraw.com/ [smartdraw.com]

Re:Grok it. (4, Informative)

cbart387 (1192883) | more than 6 years ago | (#22835632)

Doxygen [stack.nl] is my favorite tool for C/C++/Java programming. It also handles some other random languages as well. Its main purpose is to create documentation (think javadoc but Open Source, handles more than just java, and better results). Here [usc.edu] 's an example of what it can do.

Anyways, related to your post, doxygen can map out the call graph [usc.edu] from functions and dependency/include graphs [usc.edu] of files. It may be helpful in understanding the structure.

What? (2, Insightful)

Peaker (72084) | more than 6 years ago | (#22838972)

Create policies like "maximum function call depth"
I would rather have a policy of maximum function size (which may increase function call depth) than this policy.

Do you want to encourage people to inline their functions manually, and not divide things into small, cute trivial functions?

Is this a misguided attempt to increase efficiency?

How can anyone turn down a refactoring story! (1)

iluvcapra (782887) | more than 6 years ago | (#22834868)

Just burning up the comment threads on this one.

Here's how I tell what needs refactoring (4, Funny)

Cecil (37810) | more than 6 years ago | (#22834898)

Code that was written while drunk, high, or half-asleep I will be deeply suspicious of, and probably needs to be refactored immediately. Anything else probably needs refactoring as well, but less urgently.

Re:Here's how I tell what needs refactoring (0)

Anonymous Coward | more than 6 years ago | (#22834962)

Code that was written while drunk, high, or half-asleep I will be deeply suspicious of, and probably needs to be refactored immediately.
Obviously you don't get drunk, high, or sleep deprived. (:

Wait.. how else would you know it was suspicious code!?

Re:Here's how I tell what needs refactoring (2, Funny)

ByteGuerrilla (918383) | more than 6 years ago | (#22835846)

It's suspicious if you're reading it and you don't realise it's your code because you keep thinking "What the hell is this guy doing?"

Re:Here's how I tell what needs refactoring (2)

0123456789 (467085) | more than 6 years ago | (#22836258)

It's suspicious if you're reading it and you don't realise it's your code because you keep thinking "What the hell is this guy doing?"


Ah, someone else who writes perl...

Re:Here's how I tell what needs refactoring (2, Insightful)

kaens (639772) | more than 6 years ago | (#22838638)

This happens to me pretty regularly if I write a section of code, wait three months, and then read it again.

It's happening less frequently though, so perhaps my skill level is leveling out.

Either that or I'm stagnating.

Re:Here's how I tell what needs refactoring (-1, Redundant)

Anonymous Coward | more than 6 years ago | (#22835654)

Well at one company I coded at I had a certain way of telling what code was suspicious. That was any almost any code written by a guy I'll call Chaz. I swear Chaz was allergic to whitespace or something. He didn't use structured coding. He used paragraph coding with run on sentences. Long sections of code would be all glommed together in multi-line chunks. Sure the compiler wouldn't flinch at it but it made it hard for mere humans to read and grasp.

I'd have to spend time inserting whitespace and some indentation. Once this was done and having been told what the section of code was intended to do, I'd have to determine what the code was actually doing and then decide whether or not the code could be made to do what it was intended to do in the first place. Sometimes this was possible, sometimes it wasn't.

What is sort of funny is that if Chaz had bothered to insert some whitespace he might not have lost track of what his code was doing and might have been able to make it work.

Re:Here's how I tell what needs refactoring (1)

fatnutz (988615) | more than 6 years ago | (#22836100)

Unless you were at your Balmer Peak [xkcd.com] .

Refactoring vs Sleep/Napping (1)

pg--az (650777) | more than 6 years ago | (#22838388)

>> Code that was written while ... half-asleep has 25 minutes of video which are worth watching just to catch e.g. the beautiful French accent when the researcher Van Cauter remarks that Americans regard going without sleep as a "badge of honour". The idea that you might actually be more effective at linking together all those pieces of information while asleep than awake - if true, it's a paradigm shift from the jolted-caffeine-philosophy.

qualitative vs. quantitative (0)

Doviende (13523) | more than 6 years ago | (#22834904)

Nice article! I thought it was an interesting way to bring a qualitative feel back into software development. In a word of mathematics and code, we often lose sight of those qualitative things in favour of hard numbers. I think developers too often live in the analytical world like european Chess when they should be combining intuition with analysis like in Go / Weiqi.

This sure beats the other literary refactorings (4, Funny)

plover (150551) | more than 6 years ago | (#22834912)

I hate the e.e. cummings [wikipedia.org] method of refactoring, which is to run all your code through a lower-case filter. Never seems to help very much.

Re:This sure beats the other literary refactorings (5, Funny)

Stanistani (808333) | more than 6 years ago | (#22835020)

I prefer the Raymond Chandler method - if you're having a problem with a section of code, have a man come through the door holding a gun in his hand.

Re:This sure beats the other literary refactorings (3, Funny)

martin-boundary (547041) | more than 6 years ago | (#22835256)

How about the Joel Spolsky method of refactoring [joelonsoftware.com] ?

10 Never throw old code away.

20 If code is broken, GOTO 10.

Re:This sure beats the other literary refactorings (-1, Troll)

Anonymous Coward | more than 6 years ago | (#22836882)

you forgot the part about having buttsex with other dudes.

Re:This sure beats the other literary refactorings (1)

Chemisor (97276) | more than 6 years ago | (#22835802)

> if you're having a problem with a section of code, have a man come through the door holding a gun in his hand.

Unfortunately, if you are having problem with spaghetti code, like I am, your man would have to crawl on his belly for several miles in twisty passages all alike before reaching the actual problem.

Re:This sure beats the other literary refactorings (5, Funny)

gbjbaanb (229885) | more than 6 years ago | (#22836612)

I say Jeeves, cancel my engagements for the morning, Aunt Agatha has decided that I must refactor my code so the Drones Club annual 'ship without testing' party will have to wait.

Adversity strikes when one least welcomes it Sir.

She claims my code 'smells'. I'll have her know my code smells as spiffly as a, as a, well, as a whatnot Jeeves.

Indeed sir.

Yes, a whatnot. I check my code against the very latest coding practices, and sometimes I even run it through unit tests!

Admirable qualities in a coder, if I may say, Sir.

Yes you may Jeeves. Now. to work! beastly testing.

Sir, perhaps one could use some automated tool or other method of achieving the requisite level of quality desired.

You know Jeeves, you've hit it right on the head there. I'll get Bernie Smetherington-Smythe to do it, he's such a ghastly bore but, well, when it comes to code review testing, there's no-one that can cut the mustard quite like him. Zip the source up Jeeves, we're to go pay Bernie a visit.

Certainly Sir, but what if Aunt Agatha finds out?

Pish Jeeves, pish! The auditors won't be around for months, no-one'll be any the wiser, and I can go to the ship-without-testing party after all. Life just falls into place sometimes doesn't it Jeeves? After all, What could go wrong?

Yes Sir.

Re:This sure beats the other literary refactorings (4, Funny)

The Fun Guy (21791) | more than 6 years ago | (#22837596)

There's someone at the door, Jeeves.

Very good, sir. Mr. Fink-Nottle, sir.

What ho, Gussie.

Oh, Bertie, thank heavens you're here! Someone is appropriating the prose style of the greatest author the English language has ever produced, and doing it in the most dreadful manner! He's even capitalizing the word "sir", and having Jeeves make interrogatory rather than simple declarative statements!

Sorry, Gussie, he did a simple what?

Oh Bertie, you ass, Jeeves would never actually question you! He would never say, "Certainly Sir, but what if Aunt Agatha finds out?" because that's a flat out question! Besides, he certainly wouldn't refer to your relative as "Aunt Agatha"! He might say, "Certainly Sir, but I might draw attention to the fact that Mrs. Gregson would take a dim view of such an approach." Bertie, you have to do something!

All well and good, Gussie old thing, but what am I to do? The hands of the Woosters are tied, as it were.

Not you, you fathead. We want Jeeves for this sort of thing!

Ah, of course. Jeeves?

Yo, Mr. B, what up?

Jeeves, if you could forego the anachronistic and inappropriate argot for the moment, we have a problem, or rather a sort of quandry which requires your attention.

Word. I talk my talk, yo.

Sharpen your wits, Jeeves, for this is unlike any you have faced before, and I fear that even you may not be up to the task.

De nada, boss. I got yer solution right here.

Jeeves? Do I hear correctly? We've not yet set the problem before you, and you have an answer for us?

Damn, bitch, didn't I just say that? Can't I hear my own self talking? Sh*t, I know what the problem is and I got the answer. It's self-referential code, dude. The problem is the solution, and vice versa. Get the code to recognize it's own faults, and set it to modify itself.

And we would then end up with...?

Undying prose, sir.

Yes, Jeeves. How appropriate.

With sincerest apologies to the Master, P.G. Wodehouse, whose writings gave me so much pleasure over the years, until I tried to write novels myself. Then they made me want to kill myself for my inadequacies as a writer.

Where are my mod points when I need them? (1)

SpammersAreScum (697628) | more than 6 years ago | (#22838686)

Of course, then I'd be in a quandry, choosing between "Funny" and "Informative"...

Smell-capable gut? (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#22834920)

Is my gut truly lined with olfactory receptors? If my gut can smell bad code, what's next? Will it be discovered that the soles of my feet are capable of excreting phlegm? Come here honey, and massage my feet... if you do it right, you may receive a slimy prize!

The idea of physically printing code... (5, Interesting)

nullchar (446050) | more than 6 years ago | (#22834934)

...is a neat idea. Besides the mentioned practice of raising and lowering pieces of code that the developers are happy and dissatisified with, hanging code encourages peer review.

Perhaps not in-depth code review, but physically hanging code in your office might "scare" developers into adhering to their organization's standards for fear of their coworkers mockery of poor code.

It might be difficult to hide shitty code when anyone can walk by and look at what *you* think is good.
(At least it might take just as much effort to hide bad code as it does to make it good.)

I know it's archaic, (4, Insightful)

symbolset (646467) | more than 6 years ago | (#22834990)

But the screen resolution of fanfold paper hanging on the wall cannot be beaten by the best modern monitors.

Sometimes just printing the stuff out, papering the floor with it and literally crawling over it yields answers that otherwise escape.

If the line width won't fit on the paper at a reasonable pitch, there's a clue right there.

Re:I know it's archaic, (1)

cheater512 (783349) | more than 6 years ago | (#22835230)

For some reason I dont think Linus uses that technique. ;)

BTW (4, Interesting)

symbolset (646467) | more than 6 years ago | (#22835030)

I'm agreeing with you. 30k lines is 500 pages. That's roughly 8' high by 50' wide. Definitely doable.

Not about the scaring though -- just about it being useful. Anxiety isn't something I'd want to deliberately introduce to a working programmer. Most of the ones I've known had enough performance anxiety issues of their own without adding any.

Hanging the code makes some errors more visible. Not all errors are bugs. Some are structural. Structural fixes sometimes repair "pernicious" bugs.

BTW-flagging performance (0)

Anonymous Coward | more than 6 years ago | (#22835392)

"Most of the ones I've known had enough performance anxiety issues of their own without adding any."

Wow! Lends new meaning to the term, performance review.

Even better idea! (4, Funny)

johannesg (664142) | more than 6 years ago | (#22835038)

I have an even better idea: instead of printing the code on paper, maybe we could represent it by making corresponding holes in little cards. The cards you could hang in front of the window. As the classes get simpler, the holes can get bigger (because less total space is needed) and they get spread around more easily, so more and more light filters through. This way we can emulate the "sun rising on the project", "light at the end of the tunnel" feeling we all love so dearly.

Need a status update? Just look into the room - if you can see sunlight, the work is done!

Re:The idea of physically printing code... (0)

Anonymous Coward | more than 6 years ago | (#22835690)

One day at work I was digging through some old documentation on one of the network drives and found a 2500 page scanned, OCRed, PDF file of previously printed code from one of the older systems. You haven't lived till you've browsed through 2500 pages of Z80 assembly language.

Re:The idea of physically printing code... (1)

Whiteox (919863) | more than 6 years ago | (#22835926)

Virtual Papier Mache is good for that kind of stuff.
My favourite is to blow up a virtual balloon, cover it with 2500 sheets of mangled and drenched Z80 assembly, let it dry and paint it. My last one was of Pluto....
Changed jobs after that. :)

Big Visible Charts (5, Interesting)

EponymousCoder (905897) | more than 6 years ago | (#22835094)

I really like the concept, and it fits in with a bunch of techniques we've been using at work in line with the "Big Visible Charts" ideas. Things like this and Agile stories written on index cards and pinned to the wall do sound hokey. A number of people like Johanna Rothman http://www.pragprog.com/titles/jrpm [pragprog.com] however point out, that these techniques are a lot more inclusive and (as I've found) you get much more animated discussions than the pm/architect/team lead writing a document "for discussion."
If nothing else it's fun to watch management trying to cope with your walls being covered with sheets of paper, cards and string when they've paid all this money for MS Project and the Rational Suite.
 

Wow, I like it! (4, Funny)

S3D (745318) | more than 6 years ago | (#22835144)

My code is not ugly. It's battle-scarred

The art (3, Insightful)

www.sorehands.com (142825) | more than 6 years ago | (#22835164)

Most of the books and documents that I read in the last 20 years go towards metrics, statistical analysis of code. This ignores the Zen and art of coding and debugging. While much of coding is science, there is a part of it that is feel. If it is only science, then code generators would have already eliminated programmers.

Don't do minor cleanups (1)

tinkerton (199273) | more than 6 years ago | (#22835190)

When refactoring dirty code, avoid doing minor cleanups on the other code. This way the places where you still need to work on stand out from the rest. In any case, as soon as minor cleanups go beyond layouting, it also means you're doing changes in code without test coverage. Even straightening out if/else clauses easily leads to errors.

Re:Don't do minor cleanups (1)

petermgreen (876956) | more than 6 years ago | (#22835670)

the problem with that is without doing minor cleanup it is sometimes rather hard to work out what a peice of code is trying to do. I'm talking the kind of function that has a cryptic name, one or two letter parameter/variable names and no comments.

Re:Don't do minor cleanups (2, Insightful)

tinkerton (199273) | more than 6 years ago | (#22835780)

Agreed. And you have to change code anyway when you're moving functions that are defined elsewhere, so the code does change.
The key idea though is, you have an array of visual cues that tell you instantly this code still needs to be refactored. These cues often can be removed in bulk, even automated with scripts. Indentation for example. Or use of deprecated functions. Certain types of comments. It's attractive to do these bulk cleanups because they give the overal code a healthier outlook. But they remove the cues. The actual rule would more be something like "don't work on cosmetics".

Visual perception is "easy" (5, Insightful)

jonaskoelker (922170) | more than 6 years ago | (#22835262)

The article highlights a principle which we all know (either explicitly or implicitly): we are highly vision-oriented creatures; visual perception is (relatively) easy for us. A quick convincer: coloured and neatly indented code is easier to read than monochromatic unindented code, right? So perception of colour and position is faster than that of symbols and their relationships.

The methods in the article plays right into this: by viewing the code zoomed out greatly, one can readily see the density of code, and get a visual "fingerprint" of each chunk. By coupling printout position to satisfaction with the printed code, one can readily see which piece of code needs the most work.

Interesting additions: adding colour to each class and method based on how memory they allocate (or how many objects they construct); or colouring functions relating to their position in the call graph, or their in-degree.

Re:Visual perception is "easy" (1)

DerekLyons (302214) | more than 6 years ago | (#22836644)

Interesting additions: adding colour to each class and method based on how memory they allocate (or how many objects they construct); or colouring functions relating to their position in the call graph, or their in-degree.

Careful, down that path lies dragons. Adding too much detail not only raises the temptation to waste time by fiddling with the presentation, it also risks turning your 'visible display' from simple line art into a pointillist [wikipedia.org] painting, where you can not only no longer see the broad details - but also where the eye can fool you into seeing something that isn't there. KISS.

Refactoring (1)

ettlz (639203) | more than 6 years ago | (#22835266)

...takes a very long time on the product of two large prime codes.

Or you could just ask Jeeves to do it for you. (4, Funny)

IainMH (176964) | more than 6 years ago | (#22835324)

Talk about decoupled classes..

What ho.

Only 30K lines anyway... (5, Insightful)

johannesg (664142) | more than 6 years ago | (#22835346)

I was under the impression that "large projects" started somewhere around the million lines of code mark, not at a mere 30K lines. But here is what I do, and none of this require any special insights into the source code (note that I do this primarily for C++):

1. Ruthlessly delete lines. Get rid of ***anything*** that does not contribute to correct operation or understanding. Even including things like version history (that's why you have the damn tool, use it already (1)!), inane comments (but keep the stuff that actually helps with understanding), code that is commented out (if you really need it, it will be in the aforementioned version tool), code that is not called, and code that is not doing anything at all (such as empty constructors or destructors).

2. Decrease the scope of everything to be as tight as you possibly can. Make everything that you can private, static, or whatever else your language offers to decrease scope. Declare variables in the innermost scope. Make them all const if possible.

3. Anything that belongs together should be in one file (even if that files becomes 5000 lines long). Anything that *doesn't* belong together should be split into separate files (but don't make a file for just a single function - instead create a file with "leftovers").

4. Anything that has a non-descriptive name is to be renamed to what it really represents. No more "int x; // x is the number of blarglewhoppers" - just use "int NumBlargleWhoppers" instead.

5. Keep an eye open for duplicate code. Get rid of the duplicates.

6. Any special insights gained, write them down as comments in the appropriate place. Anything you do NOT understand, also write them down as comments. Mark those with something you can grep for.

7. Any homegrown version of something that is available in STL or boost, to be replaced by its "official" alternative.

8. And that goes double for string operations! No more "char *" anywhere; it is the 21st century, use strings already! I'll make an exception for functions that allow "const char *" to be passed in, but only with the "const". If I find a "char *" without the "const", I *will* come to your office and bash your head against the wall. Repeatedly. Just so you know.

9. Any error handling through error return codes, probably to be replaced by exceptions, unless it turns the calling code into a wild mass of try/catch blocks.

10. Pointers, to be replaced by references where possible.

11. Negative logic and names, to be replaced by positive logic and names. Don't have "if (!NoPrinterAvailable()) {A();} else {B();}" - instead do "if (PrinterAvailable() {A();} else {B();}".

12. Anything that looks like it was written by drunk lemurs or the French, to be deleted on principle and replaced by something sane.

So there you have it. In my experience, doing this will remove about half of the lines of code (more if there was a significant number of lemurs on the team), at the gain of considerable clarity and usually performance.

(1) And honestly, I don't give a flying fuck which one of you messed up on the 29th of february 1823 or why you thought it was a good idea in the first place. I'm concerned with what the code will be doing in the future, not how it came to be in this sorry state. Chances are, whatever you thought at the time is long obsolete anyway. Get rid of the cruft. Get rid of anything that doesn't help - it just clutters the mind.

Re:Only 30K lines anyway... (1)

Anne Thwacks (531696) | more than 6 years ago | (#22835428)

Of course you could just chuck all this object-disoriented stuff and write in good, old fashioned C, like the rest of us.

If its too big to fit in the address space of a 6502, then you are doing it all wrong. (or maybe it should have been done in SNOBOL in the first place.)

Re:Only 30K lines anyway... (4, Insightful)

siride (974284) | more than 6 years ago | (#22835786)

> 9. Any error handling through error return codes, probably to be replaced by exceptions, unless it turns the calling code into a wild mass of try/catch blocks.

Exceptions should be used to mark, well, exceptional failure. I really really hate this pattern that Java (and perhaps from elsewhere) has foisted upon us where we get frickin exceptions because we reached the end of the file. That is technically an error condition in the reader function, but it is not exceptional and it shouldn't require me to write the "wild mess of try/catch blocks" just to read in data from a file. Exceptions say "we are really in a mess and have to abort this operation, and potentially the program. They do not say "could not find element x in array".

If that's what you were saying, however, then I apologize.

Re:Only 30K lines anyway... (1)

dubl-u (51156) | more than 6 years ago | (#22836842)

I really really hate this pattern that Java (and perhaps from elsewhere) has foisted upon us where we get frickin exceptions because we reached the end of the file. That is technically an error condition in the reader function, but it is not exceptional and it shouldn't require me to write the "wild mess of try/catch blocks" just to read in data from a file. Exceptions say "we are really in a mess and have to abort this operation, and potentially the program. They do not say "could not find element x in array".

If you're writing in Java, I couldn't disagree more. Trying to read past the end of a file, trying to reference elements in an array that aren't there? That should not happen, making them exceptional conditions. If you are doing them, that means that you messed up, and things should indeed blow up.

If you're writing in a DWIM language like Perl, then yeah, nothing should really blow up ever. Reference something that doesn't exist? Sure, you must have meant for that to exist, so we'll create it. And return you an nice undefined value, that other things will handle gracefully. And give you a cookie, just 'cause Larry Wall loves us all.

Note also that in Java those catch blocks for reading a file are not for the usual case, or generally even the retarded-programmer case. You have to catch IOException when trying to read a file because IO can always fail. You may be writing a program where it's ok to sail on past a disk or network failure, but some of us read and write data that people actually care about.

In that kind of coding, I'm me grateful for a well-designed set of checked exceptions: it makes me consciously think about and explicitly handle all of the weird little failure cases that I would rather pretend were impossible.

Re:Only 30K lines anyway... (1)

siride (974284) | more than 6 years ago | (#22836874)

What's wrong with reading until you get to the end of the file? That's how the idiom seems to be done in every other language I've used. Why is that an exception in Java? If it's the end of the file because there was an IO error, that's one thing. But in the case I'm talking about, it's not.

Re:Only 30K lines anyway... (1)

matfud (464184) | more than 6 years ago | (#22837166)

Cos the size of the file can change while you are reading it.

Re:Only 30K lines anyway... (1)

Limerent Oil (1091455) | more than 6 years ago | (#22837848)

Cos the size of the file can change while you are reading it.

Sine the size of the file can change too, but (sin^2 + cos^2) won't.

Re:Only 30K lines anyway... (1)

dubl-u (51156) | more than 6 years ago | (#22837566)

What's wrong with reading until you get to the end of the file? That's how the idiom seems to be done in every other language I've used. Why is that an exception in Java?
It's not. The typical Java idiom for reading lines from a file looks like this [exampledepot.com] . (Actually, in production, you would actually handle the errors in that exception block rather than swallowing them. I would personally put the close() call in a finally block, not the main try block, as you still want to close even if a read fails. But you get the idea.)

As you can see, you read until it comes back empty, and then you're done. No exceptions will be used except when things are exceptional.

That's still slightly ugly, but that should rarely matter in Java or in any well-built OO system. You never read files for the sake of it; you're always up to something specific. Even when I'm dealing with files all the time, I almost never write code like this, because I'm reading XML [cafeconleche.org] or reading a properties file [exampledepot.com] or something where all the details are taken care of in a method that I almost never see.

Re:Only 30K lines anyway... (1)

shutdown -p now (807394) | more than 6 years ago | (#22837276)

I really really hate this pattern that Java (and perhaps from elsewhere) has foisted upon us where we get frickin exceptions because we reached the end of the file. That is technically an error condition in the reader function, but it is not exceptional and it shouldn't require me to write the "wild mess of try/catch blocks" just to read in data from a file.
It depends on what you're doing. Quite often, getting EOF while reading a file of some known format is an exceptional situation - it means that file is corrupted, and you'd want to have a single try-catch block around the whole parsing routine to catch that. On the other hand, sometimes you actually expect it to happen often, and be handled like another valid branch of code. That's why I like the .NET pattern of having pairs of methods like Foo() and TryFoo(); the first one throws exceptions on soft failures, the second one has some more efficient way to report them (usually an out parameter).

Re:Only 30K lines anyway... (1)

siride (974284) | more than 6 years ago | (#22837304)

Something like that would be nice in Java. And it follows the pattern I described above much better. If you expect errors and failures or end of file as part of normal processing, it's not an exception and exceptions should need to be used. I wish over-use of exceptions was the only thing that bothered me about Java...

Re:Only 30K lines anyway... (4, Interesting)

Enleth (947766) | more than 6 years ago | (#22835950)

I'd disagree on pointers and references. If you pass something in by reference, you need to know it goes in there by reference, it's not visible in the calling code. If something's not visible - well, that's a bug just waiting to crawl in there. If you pass something by pointer, the calling code shows it clearly and you know that whatever was passed is likely to be changed by the called function. That's the rationale used by Trolltech [trolltech.com] and it is quite convincing to me.

Besides, using char * is a must sometimes, when using C libraries that accept, modify and return strings or just some chunks of arbitrary data as char *.

Re:Only 30K lines anyway... (1)

Wrath0fb0b (302444) | more than 6 years ago | (#22836860)

I disagree on your dislike of references - when you type the function name, intellisense (or whatever) pops up the relevant prototypes and you can see immediately whether the parameters are type& or const type&. This gives the benefit of uniform function calling, since passing by reference or const reference should be the default unless there is an explicit need for pass-by-value.

As to char*, there are very few places I can see it being necessary (ifstream.read((char*)(&data),sizeof(data) does NOT count). You can pass char* to the string constructor and you can get back with string::c_str() -- what else do you need?

Re:Only 30K lines anyway... (2, Insightful)

Enleth (947766) | more than 6 years ago | (#22837006)

That means you are dependent on a big, clunky IDE for writing your code. Not everyone uses them even for big projects - for example, KDE's Kate is sophisticated enough to handle those, yet still lightweight. Even worse if you are writing an API for a library: you are forcing everyone using it to memorize where the references were or use a big, clunky IDE. And even if you use an IDE, you sometimes need to read a piece of code and see such things without retyping the paren to force a dumb IDE to display the prototype or even hovering the mouse over each function for a smart IDE to give a hint.

Re:Only 30K lines anyway... (1)

Wrath0fb0b (302444) | more than 6 years ago | (#22839558)

First off, I'd hardly call Visual Studio either clunky or big - I have an XP VM almost solely for VS and it has always been responsive. The interface is fairly customizable and the debugging is top-notch (much better that SunStudio, the only other one I've tried).

If you don't like IDEs (which is apparent), then use your favorite text editor or shell to search for "functionName(*);" in all files. Shit, `grep -e WriteForce*\&* *.h*` ought to get you less than a screenfull of results, one of which is the prototype (regexp wizards can do better, I'm sure).

Finally (and I think most importantly), if you have to wonder whether a function changes a passed variable then either it is incorrectly named or you have more serious issues with program flow. A sensible naming scheme makes it clear what data is input versus output.

Space efficiency? (1)

tepples (727027) | more than 6 years ago | (#22837110)

You can pass char* to the string constructor and you can get back with string::c_str() -- what else do you need?
What I need is the extra RAM that the GNU libstdc++ implementation of the string class takes up. I develop for a handheld device, and its 4 MB of RAM is a lot smaller than the 1 GB of RAM that you're probably used to working with.

Re:Space efficiency? (1)

Wrath0fb0b (302444) | more than 6 years ago | (#22839328)

What I need is the extra RAM that the GNU libstdc++ implementation of the string class takes up. I develop for a handheld device, and its 4 MB of RAM is a lot smaller than the 1 GB of RAM that you're probably used to working with.
Absolutely. I should have qualified my post by excepting cases like yours where RAM is super-tight (embedded/handheld/etc. . . ) and cases where performance is at a huge premium (tight loops/bottlenecks/mission-critical/real-time). That said, in the vast majority of cases the performance gain from using char* are vastly outweighed by the simplicity, clarity and power of std::string. Heck, automatic destruction when the string goes out of scope is, IMHO, worth the cost of admission alone.

Re:Only 30K lines anyway... (0)

Anonymous Coward | more than 6 years ago | (#22837228)

Actually, saying it's always bad is flawed. I don't think there's any significant concept in programming that applies 100% of the time. For instance, if you make your in-parameters constant references & out-paramters just references then that's consistent too.

And as for being visible in code, most modern IDE's let you look up the function prototype inline with the code as a tooltip or whatnot. Sure if you're using a text-editor, then that doesn't help you, but even VIM & Emacs have ctags support.

Re:Only 30K lines anyway... (1)

shutdown -p now (807394) | more than 6 years ago | (#22837312)

If you really want the out parameters to be distinctly marked at call site (and I agree that it is a good idea, but it may be just my C# experience), just write a simple ref-wrapping class with an explicit constructor, and use it everywhere. Using pointers can lead to subtle problems not just because they can be null, but also because they have plenty of operators defined on them, and it can be all too easy to forget to dereference sometimes, with very interesting results that might manifest themselves a long way down the line (consider the difference between (p - q) and (*p - *q); then, note that there are more possible valid combinations).

Re:Only 30K lines anyway... (2, Insightful)

CaptainPinko (753849) | more than 6 years ago | (#22836450)

12. Anything that looks like it was written by drunk lemurs or the French, to be deleted on principle and replaced by something sane.

I'm sorry but I find all this French bashing racist. Unless, of course, you have some information on the coding tendencies of the French that I do not. But having worked with a handful French people and I can say nothing bad about them. I know French "jokes" may be acceptable in the U.S. but this is the Internet and try to behave yourselves. As a rule of thumb: replace the word 'French' with 'Blacks' or 'Jews', if you wouldn't get away with saying the resulting expression at work, the don't say the original.

Disclaimer: Not French, no French heritage, no French family, don't speak French (though I plan to learn one day), have no French friends, never dated anyone French, and don't live in France.

Re:Only 30K lines anyway... (1)

johannesg (664142) | more than 6 years ago | (#22837052)

You know, *I* have worked with French code for over ten years; from a multitude of companies and individuals. I feel fully qualified to state that most of the time, it is REALLY BAD.

Have *you* worked with any code written by the French? Or are you just randomly bashing people because that is such a fun, safe thing to do on the internet?

Re:Only 30K lines anyway... (1)

CaptainPinko (753849) | more than 6 years ago | (#22837114)

No I haven't and I said as much in my post. But I see French bashing everywhere and it was a general comment, and not a direct response to yours. That said I've never worked with anything but shit north american code... and I live there. So the question comes up as to whether you have a wide enough sample to make such a claim. And the point is if you had worked for 10 years with african-american code and it was all shit would you make such a comment? Unlikely.

Re:Only 30K lines anyway... (2, Informative)

ralphdaugherty (225648) | more than 6 years ago | (#22837648)

I know French "jokes" may be acceptable in the U.S. ...

      only for a small conservative subset of the U.S., which as we know have proven themselves to be a joke.

  rd

Re:Only 30K lines anyway... (5, Insightful)

jsebrech (525647) | more than 6 years ago | (#22837016)

So there you have it. In my experience, doing this will remove about half of the lines of code (more if there was a significant number of lemurs on the team), at the gain of considerable clarity and usually performance.

I work on a 2 million line code base, written by a few dozen people, most of them off-shore, that is poorly commented, poorly documented, and has many modules of code that no one in our team understands well. In other words, a typical large commercial code base.

At first, I would routinely aggressively clean up sections of code as I made changes in them. But then I started to notice a pattern: there were bugs in the functionality of that code that weren't there before I "cleaned it up". When you refactor highly convoluted code, it is seductive to make assumptions about the working of that code (especially in how the code interacts with the rest of the system), because it is hard work to actually figure it out completely. Those assumptions have a nasty tendency to be wrong.

Nowadays I approach code changes like this: if I don't understand the code 100 percent, I make my changes as low impact as possible, even if it means uglifying the code. If some part of the code base needs refactoring to allow implementing a new feature, I first figure it out fully, document its existing behavior (often line-by-line, call-by-call, class-by-class), look at every place in the entire code base where it is called (and document those places), and only then do I refactor it.

The point is this: if code is ugly and slow, but it works, it is better code than clean, fast, beautiful code with bugs. Better in the sense that it makes the user happier, and the user is one of only two metrics that truly matter in software development (the other being cost). Always resist modifying code just for the sake of cleaning it up. If it works, don't touch it.

Re:Only 30K lines anyway... (1)

ralphdaugherty (225648) | more than 6 years ago | (#22837770)

Always resist modifying code just for the sake of cleaning it up. If it works, don't touch it.

      This of course is wisdom of the ages which appears to have been lost somewhere where the term "refactoring" became vogue.

  rd

Re:Only 30K lines anyway... (1)

chromatic (9471) | more than 6 years ago | (#22838832)

... only if your "refactoring" doesn't include a comprehensive, fully-passing, and regularly run test suite.

Re:Only 30K lines anyway... (0)

Anonymous Coward | more than 6 years ago | (#22838068)

Refactoring as a concept was based on code transformations which do not alter semantics of the transformed code. Somewhere, along the way, that semantic part was somehow lost in the noise.

Re:Only 30K lines anyway... (2, Interesting)

dubl-u (51156) | more than 6 years ago | (#22837768)

I agree with most of your comments, and especially the spirit of keeping everything shipshape and avoiding the endless game of "who can be blamed". Two minor improvements:

Any error handling through error return codes, probably to be replaced by exceptions, unless it turns the calling code into a wild mass of try/catch blocks.
Sometimes instead of return codes, there are other good options. For example, you can spin a state machine out into an object, which in effect keeps the return codes safely in one place until you want to check them. In some cases, I love the Null Object Pattern [oberlin.edu] . And sometimes it makes sense to have a request object and a response object, with the response carrying possible error-related info.

Anything that *doesn't* belong together should be split into separate files (but don't make a file for just a single function - instead create a file with "leftovers").
For functions, maybe. But a lot of good objects really can have just one method. Ruby does that all the time with anonymous methods, for example, and sometimes that pattern is worth using in a more explicit language.

Also, more generally, I feel like unit tests are a much better place to store knowledge than comments.

Other than that, I agree completely!

For more details, consult... (1)

tiluki (74844) | more than 6 years ago | (#22835390)

``The Code of the Woosters''

http://www.amazon.co.uk/Code-Woosters-Penguin-Modern-Classics/dp/014118597X/ [amazon.co.uk]

What ho! Spiffing!

Re:For more details, consult... (0)

Anonymous Coward | more than 6 years ago | (#22835740)

Oh come on, mods. That's funny!

refactoring is not a word (1)

Dr. Tom (23206) | more than 6 years ago | (#22835500)

Nope, it's not. And it's a stupid practice, the way some people try to define it.

"What are you doing?"

"I'm refactoring code."

"Oh, you aren't doing anything."

Code can be sexy and beautiful (1)

DavidV (167283) | more than 6 years ago | (#22835532)

quicksort [] = [] quicksort (x:xs) = quicksort less ++ [x] ++ quicksort greater where less = [ y | y = x ]

Re:Code can be sexy and beautiful (2, Insightful)

jlarocco (851450) | more than 6 years ago | (#22835744)

Elegant as it may be, that version of quicksort is so slow that (IIRC) even the Haskell documentation suggests against using it in "real" code.

Personally, I think the C++ way is even easier to read, and it has the benefit of being really fast:

sort(xs.begin(), xs.end());

So, SeeSoft then... (0)

Anonymous Coward | more than 6 years ago | (#22835572)

I just scanned the article and I see that the big idea is to reimplement SeeSoft but not as well. Awesome.

It's not enough for the code to "just work" (4, Insightful)

kurisuto (165784) | more than 6 years ago | (#22835616)

From TFA:

"The problem is that warty old code isn't always just warty - it's battle-scarred. It has years of tweaks and bug-fixes in there to deal with all sorts of edge conditions and obscure environments. Throw that out and replace it with pristine new code, and you'll often find that a load of very old issues suddenly come back to haunt you. So, a total rewrite is out. This means working with the old code, and finding ways to wrestle it into shape."


There's a big difference between having code which just happens to somehow work, and having code which works because the code is clearly written and documented, where the person in charge of maintaining it actually understands what the code is doing.

Whether you rewrite from scratch or work with the legacy code, it's your job as the programmer to understand and document all of the tweaks, bug fixes, edge conditions, and obscure environments. If there aren't comments in the existing code to explain these things, then it's your job to understand why the code is doing what it is doing, and add the comments as needed. If the code isn't clear, it's your job to make it clear.

The author correctly points out that when you do a total rewrite, then the undocumented special cases handled by the old code will make themselves felt. As these problems present themselves, it takes time to fix them. However, you also get the opportunity to understand the undocumented special cases and get them clearly coded and properly documented, which reduces maintenence costs over the long term. Your judgment whether to maintain or to rewrite should take both of these factors into consideration.

When the code is the documentation... (1)

argent (18001) | more than 6 years ago | (#22836308)

There's a big difference between having code which just happens to somehow work, and having code which works because the code is clearly written and documented, where the person in charge of maintaining it actually understands what the code is doing.

But the point is that when the code is the documentation, which is what you have when you have undocumented code, you're throwing out the documentation with the code if you start from scratch. Refactoring includes documenting the code you're rewriting. In fact I've found that comments added when refactoring do a lot more to explain why the code does what it does than the comments that were already there. I don't mean that the existing comments were wrong, or didn't match the code, but that the comments added by the person who did the refactoring describe the things that were hard to understand in the original code, and often explain why the battle scars were necessary.

I've found that happening with my code that other people have worked on, with other people's code that I have worked on, with other people's code that other people have worked on, and with my code that I have worked on (because, after all, "you three years ago" might as well be another person... if that's not the case for you, you better ask yourself if you've stopped learning).

The author correctly points out that when you do a total rewrite, then the undocumented special cases handled by the old code will make themselves felt. As these problems present themselves, it takes time to fix them.

Sometimes. And sometimes they don't... yet. Sometimes you're reinstalling a time bomb that you thought you'd already defused.

With either a rewrite or a refactoring, too, you need to understand and document the result. If by the end of a refactoring project you haven't documented the depth charges... then you haven't really finished the job yet.

Your judgment whether to maintain or to rewrite should take both of these factors into consideration.

Indeed. I love rewriting code, myself. Start over from scratch. Throw out the old and crufty. But I gotta keep telling myself to watch out for deceptive arguments about why I'm rewriting... and I think this is one of the ones I've tried on myself often enough that I just don't trust it any more.

Re:It's not enough for the code to "just work" (1)

dubl-u (51156) | more than 6 years ago | (#22836862)

it's your job as the programmer to understand and document all of the tweaks, bug fixes, edge conditions, and obscure environments.

And in my view, it's my job as a programmer to document all of those things in readable automated tests. Only a programmer can check that a comment or a document has been followed. Automated tests mean that the computer can do the dirty work. And I'm all for that.
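
A rough sketch of what that can look like (JUnit 4, with made-up names; a tiny method stands in for the legacy routine):

    // Hypothetical example: an undocumented edge case recorded as an executable
    // test instead of a comment that nobody re-reads.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PriceParserTest {

        // Stand-in for the legacy routine: one upstream feed sends "N/A" for a
        // missing price, and the old code quietly treated that as zero.
        static double parsePrice(String text) {
            if (text.trim().equalsIgnoreCase("N/A")) {
                return 0.0;
            }
            return Double.parseDouble(text);
        }

        @Test
        public void missingPriceIsTreatedAsZero() {
            // Battle scar: downstream reports rely on "N/A" meaning zero.
            assertEquals(0.0, parsePrice("N/A"), 1e-9);
        }

        @Test
        public void ordinaryPriceStillParses() {
            assertEquals(19.95, parsePrice("19.95"), 1e-9);
        }
    }

The test runs on every build, so the battle scar can never be silently undone by a rewrite or a refactoring.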

Form follows function? (4, Interesting)

martyb (196687) | more than 6 years ago | (#22835764)

FTFA:

... A better solution would be to print a class per page. At the start of the project, the application had about 150 classes, and the refactoring effort is focussed on about 80 of those. Initially, gigantic classes would be an incomprehensible smudge of grey, but as the refactoring process starts tidying the code and factoring out into other classes, the weekly printout would start to literally come into focus, hopefully ending up with many pages actually containing readable code (which happens roughly when the class is small enough to fit on no more than 3 pages at normal size).

Brilliant! Absolutely brilliant! "Smell test?" Yah, right. But then I got to thinking, "Why are code formatting standards such a hot topic?" The computer doesn't care if indentation is expressed with 2 spaces, 3 spaces, or a tab. But I do! Over time, I've learned how to spot coding errors just from slight aberrations in the LOOK of the code. I couldn't tell you WHAT was wrong at first; it just felt (or smelled) wrong. So call it what you will, but I can now see how the "smell test" has some basis behind it. Then I got to thinking of an age-old question:

How do you find a needle in a haystack?

  1. Make the haystack smaller, and/or
  2. Make the needle(s) bigger

The technique in the article accomplishes BOTH of these. I'd suggest running the code through a pretty printer [wikipedia.org] to get consistent layout throughout the whole project. The more the semantics of the project can be represented by syntax, the more visible the troublesome code becomes.

Re:Form follows function? (1)

hcdejong (561314) | more than 6 years ago | (#22836402)

How do you find a needle in a haystack?

      1. Make the haystack smaller, and/or
      2. Make the needle(s) bigger
You forgot
      3. Run the haystack through an MRI machine.

Re:Form follows function? (1)

tepples (727027) | more than 6 years ago | (#22837136)

You forgot
3. Run the haystack through an MRI machine.
In your analogy, what's the equivalent of an MRI machine for finding places where a program could be improved?

Re:Form follows function? (1)

DerekLyons (302214) | more than 6 years ago | (#22836662)

I'd suggest running the code through a pretty printer to get consistent layout throughout the whole project.

Umm... running it through a pretty printer wipes out the very details that printing out is supposed to bring out. After pretty printing, you are no longer seeing the 'native' code - but rather you are seeing the patterns hard coded into the pretty printer.

Re:Form follows function? (2, Insightful)

martyb (196687) | more than 6 years ago | (#22836964)

I'd suggest running the code through a pretty printer to get consistent layout throughout the whole project.
Umm... running it through a pretty printer wipes out the very details that printing out is supposed to bring out. After pretty printing, you are no longer seeing the 'native' code - but rather you are seeing the patterns hard coded into the pretty printer.

I respectfully disagree. Consider a piece of code that has 8 levels of nesting. With judicious use of short variable names, parentheses, and curly braces, it is possible to make it *look* not so bad in the original code. It might look like there are only half a dozen levels. With a pretty printer, nesting LOOKS exactly as nesting IS. At a *GLANCE* I can see where things are getting hairy.

That does NOT mean that it must be refactored. Only that it is an area that may be worthy of additional consideration.

On second thought, nothing says this is a black-or-white choice. If it works for you to print out code as is, great! My experience has been that a pretty printer can be helpful. YMMV.
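
To make that concrete, here's an illustrative fragment (made up, not from the article's project): the same trivial method packed tight, then laid out the way a pretty printer would leave it.

    // Hypothetical Java example: identical logic, two layouts.
    import java.util.List;

    public class NestingDemo {

        // Packed form: the nesting is real, but it hides on one dense line.
        static int countPositivesPacked(List<List<Integer>> rows) {
            int n = 0; for (List<Integer> r : rows) { if (r != null) { for (Integer v : r) { if (v != null) { if (v > 0) { n++; } } } } } return n;
        }

        // Pretty-printed form: every level of nesting gets its own indent, so
        // the depth is obvious at a glance on a printout.
        static int countPositivesPretty(List<List<Integer>> rows) {
            int n = 0;
            for (List<Integer> r : rows) {
                if (r != null) {
                    for (Integer v : r) {
                        if (v != null) {
                            if (v > 0) {
                                n++;
                            }
                        }
                    }
                }
            }
            return n;
        }
    }

Same behaviour in both; only the second one tells you at a glance how deep you are.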

30k? (0)

Anonymous Coward | more than 6 years ago | (#22835850)

I know the author means well, but it's really hard to take him seriously when he calls 31 kloc a "fairly large" application. Besides that, too much of the "article" (if a blog entry can be called an article) comes off as ego stroking.

Netscape (1)

BorgDrone (64343) | more than 6 years ago | (#22835894)

I love the way the article uses the complete rewrite of Netscape as an example of why you shouldn't rewrite from scratch. Cause we all know how big a failure Firefox is </sarcasm>

Re:Netscape (1)

argent (18001) | more than 6 years ago | (#22836252)

Not to mention that Netscape was already doomed as a browser company well before that rewrite started.

Re:Netscape (3, Informative)

balster neb (645686) | more than 6 years ago | (#22836472)

I love the way the article uses the complete rewrite of Netscape as an example of why you shouldn't rewrite from scratch. Cause we all know how big a failure Firefox is
Have to disagree with you.

While the Mozilla story did have a happy ending, the rewrite resulted in IE getting a near monopoly of the browser market. The "new" Netscape was massively delayed, and was finally released as a rebranded version of the bloated Mozilla suite. It was in the period between about 1999 and 2004 that IE expanded its market share. In other words, Netscape lost as a result of throwing away the old code base.

It was only from around 2004 onwards, with Firefox, that Mozilla was able to present a viable alternative to IE.

Re:Netscape (2, Insightful)

dubl-u (51156) | more than 6 years ago | (#22837408)

I love the way the article uses the complete rewrite of Netscape as an example of why you shouldn't rewrite from scratch. Cause we all know how big a failure Firefox is </sarcasm>
Firefox is not Netscape.

The Netscape browser, like the company that made it, indeed ended up a failure. And my pals who were there at the time tell me that the poor code quality was a major factor in the inability to get anywhere. How long did it take between the last decent release of Netscape and the 1.0 release of Firefox? Four years? Six? I guess it depends on what you consider the last decent release. No matter how you count it, though, there were years of thrashing trying to get something based on the old code out the door. Eventually, they just gave up.

Firefox is a big success now, but Netscape was a giant crater of a failure during a crucial period, leaving Microsoft effectively unopposed in their attempts to take over the browser market.

And will the result be as delightful? (1)

tringtring (1227356) | more than 6 years ago | (#22836748)

Will the result of refactoring code using the PGW method be as funny as his books, leaving the users of the code laughing till they cry?

Perl 6 as a Cautionary Tale ... (4, Interesting)

joe_n_bloe (244407) | more than 6 years ago | (#22838868)

I'd like to focus on the author's comments about rewriting vs. refactoring. From July 25, 2000 [perl.com] :

Last Monday, nobody knew that anything unusual was about to happen. On Tuesday, the Perl 6 project started. On Wednesday, Larry announced it at his "State of the Onion" address at the Perl conference.

It's one thing to decide to rewrite rather than refactor a product that is losing market share because it is not performing as well as its competitors (e.g. Netscape). It's another thing to decide to rewrite (and redesign) rather than refactor a wildly successful and popular product because its continued development has become difficult. Just shy of eight years later, Perl 5 is still creaking along nicely, and Perl 6 (White Elephant Service Pack) is still as much under design as in development.

Is Perl 5 so hard to refactor that a determined effort couldn't have made progress, or been completed twice over, in 8 years? Along the way, a lot of the cruft and inelegance in the language could have been removed, and more elegant features inserted.

It happens over and over again - developers, even experienced ones, can't see the impracticality of what they're getting into, and can't see that they're doing work that isn't needed.