
Estimating the Size/Cost of Linux

CmdrTaco posted more than 12 years ago | from the now-thats-a-lotta-dough dept.

The Almighty Buck

2bits writes "Wow... A Billion Dollars Worth Of Software On My System For Free! Check This Guy Out, He Came Up With A Counting / Pricing Method For Quite A Few Types of Source Code. Here is the Program. The results on the site are sorta dated, based on RH 7.1, but the app is pretty cool!... Hey, I can finally find out how much all my side projects are worth / costing me..."


196 comments


CLIT is teh rawx (-1)

News For Turds (580751) | more than 12 years ago | (#3827204)

and canadia is teh sux

Re:CLIT is teh rawx (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#3827218)

dammit when r u going to give us cowards a chance!!!

Re:CLIT is teh rawx (-1)

Fucky the troll (528068) | more than 12 years ago | (#3827224)

When you learn to log in.

uuuh.... no wait.... uuh....

OMG, that guy looks like a fucking tool (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#3827237)

Check that guy's pic. Nice helmet hair, buddy.

fist post (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#3827206)

check it!

Re:fist post (-1)

News For Turds (580751) | more than 12 years ago | (#3827227)

dude. no way. not while the CLIT is in the house.

the CLIT are all blowing eachother (-1, Troll)

Anonymous Coward | more than 12 years ago | (#3827439)

in a big gay frenzy, bunch of ass fuckers

Re:fist post (-1)

Big Dogs Cock (539391) | more than 12 years ago | (#3827314)

I checked it. It was not a first post.

Notice there is a "create new account" link next to the login box.

Re:fist post (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#3827379)

suck my big dogs cock!!!

Re:fist post (-1)

Big Dogs Cock (539391) | more than 12 years ago | (#3827405)

I think you missed out a comma, an apostrophe and an apology.

Frosty Penis. (-1)

Fucky the troll (528068) | more than 12 years ago | (#3827207)

This is not an FP claim as such. I'm just saying Frosty Penis because I can. Thank you.

Re:Frosty Penis. (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#3827234)

Since the heat wave in the Northeast has been broken, your post is very appropriate & on topic.

Re:Frosty Penis. (-1)

Fucky the troll (528068) | more than 12 years ago | (#3827274)

Thanks for posting positively. I enjoy such encouragement, and will continue to post about my penis in relation to the weather.

Here in Cornwall, UK, it's around 15 degrees - quite a long way below average for this time of year. This makes my penis sad, as it's not getting anywhere near as much exposure as usual.

Re:Frosty Penis. (-1)

News For Turds (580751) | more than 12 years ago | (#3827310)

Fucky, Being a typical American, I must say that I am very offended by your p0st. You assume that the whole world uses your centigrade crap and that we are worthless. Therefore, I hereby request that in your further postings, you post temperatures either in Fahrenheit or in Kelvin. Thank you.

Re:Frosty Penis. (-1)

Big Dogs Cock (539391) | more than 12 years ago | (#3827342)

I do not wish to stir up dissent within the CLiT - as you know, none of my posts have been in any way critical of the US or its citizens - but I must point out that the US is only 5% of the world's population and everyone else uses Celsius.

Posted by Big Dogs Cock at 6:87:72 pm metric time.

Re:Frosty Penis. (-1)

News For Turds (580751) | more than 12 years ago | (#3827452)

Being a typical American, I must say that we are the only 5% that count.

Metric time is teh rawx

Re:Frosty Penis. (-1)

Fucky the troll (528068) | more than 12 years ago | (#3827473)

Please, sir Turds, calm yourself. I mean no offence with any of my anti-american posts. Unlike many, I have gotten over racism and prejudice, and see such posts as funny rather than offensive. Funny in a retarded way.

Suck on that, yankee pig-dog. And learn to say "aluminium". :)

loveyoubyebye

Re:Frosty Penis. (-1)

News For Turds (580751) | more than 12 years ago | (#3827478)

Hmmm.. Do we pronounce it wrong?

Re:Frosty Penis. (-1)

Fucky the troll (528068) | more than 12 years ago | (#3827567)

Well, one of us does. :-)

Billion dollars? (1)

SpatchMonkey (300000) | more than 12 years ago | (#3827223)

Where did he get the billion dollar estimate from? I see no direct correspondence between lines of code and monetary value.

Re:Billion dollars? (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#3827231)

it costs money to pay programmers

Re:Billion dollars? (1)

SpatchMonkey (300000) | more than 12 years ago | (#3827260)

I understand that, but I question the methods he has used to come to that figure. His very simplistic formula is listed in section 3.7. Compared to the analysis in the rest of the document, which is very interesting, this cost estimation seems relatively naive.

Re:Billion dollars? (3, Insightful)

Oculus Habent (562837) | more than 12 years ago | (#3827309)

Sure, but what about the time spent in bug fixes, patches, etc.? I suppose you could do something like this (see the sketch below):

  • Standard programming takes A minutes per line on average.
  • Bug fix/patching programming takes B minutes per line on average.
  • Standard/patch programming take up C/D percent of the time.
  • The average (mode, perhaps) programmer salary works out to E dollars per minute.

Programming cost = E dollars/minute * ((X lines * C percent * A minutes) + (X lines * D percent * B minutes))

You could even go fancy and calculate lines-per-minute based on each language. But then, what about man pages, documentation, support sites, etc.? These are things you would pay for in commercial software. Shouldn't these be a factor as well?
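In rough Python, that back-of-envelope model might look like this (a sketch only; A/B/C/D/E are the variables above, and the sample numbers are invented for illustration):

def project_cost(lines, a_min_per_line, b_min_per_line, c_frac, d_frac, salary_per_year):
    # One programmer-year of working minutes (assuming 52 weeks * 40 hours).
    minutes_per_year = 52 * 40 * 60
    rate = salary_per_year / minutes_per_year           # dollars per programmer-minute
    standard_minutes = lines * c_frac * a_min_per_line  # X * C% * A
    patch_minutes = lines * d_frac * b_min_per_line     # X * D% * B
    return rate * (standard_minutes + patch_minutes)

# Example: 30,000 lines, 5 min/line standard work, 15 min/line patching,
# a 70/30 standard/patch split, and a $56,286/year salary: about $108,000.
print(round(project_cost(30_000, 5, 15, 0.7, 0.3, 56_286)))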

Yeah, right (1)

af_robot (553885) | more than 12 years ago | (#3827391)

So 10 billion lines of bad, bloated code would be worth more than 10,000 lines of pure, clean, fast code?

Re:Yeah, right (2, Insightful)

Oculus Habent (562837) | more than 12 years ago | (#3827499)

I think Microsoft has proved that true.

Bloated code may not be best, but it gets out the door faster.

Can you imagine what would happen if Microsoft cleaned the code to Windows XP? Imagine: they release a 40 MB service pack that trims the OS size down by 300 MB, decreases boot time by 75%, improves program launch speed by 300%, and improves security, stability, and functionality, all while making the OS easier to upgrade and implement.

Of course, when this release is finally out in 2057, it won't make much difference.

Re:Yeah, right (1)

martyn s (444964) | more than 12 years ago | (#3827512)

See this post [slashdot.org]

The study talks about cost, not value.

Re:Billion dollars? (-1)

Big Dogs Cock (539391) | more than 12 years ago | (#3827415)

NO WAY!!! If it cost money to pay programmers, then you would have to charge for software rather than giving it away for free.

Re:Billion dollars? (2, Informative)

virve (63803) | more than 12 years ago | (#3827276)

Where did he get the billion dollar estimate from? I see no direct correspondence between lines of code and monetary value.

He specifically talks about cost, not value. But you are right that the correlation between SLOC and cost is non-trivial. That is one reason why cost estimation is hard, but it is far easier than guessing the cost of a project before one has the source.

--
virve

lets see here..... (4, Funny)

Anonymous Coward | more than 12 years ago | (#3827225)

[cmdrtaco@localhost]$ est slashcode
Analyzing slashcode.....
Result: $6.66

[cmdrtaco@localhost]$

Six dollars and 66 cents? (0, Funny)

Anonymous Coward | more than 12 years ago | (#3827279)

For slashcode? Dude, you got ripped off!

Re:Six dollars and 66 cents? (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#3827285)

AC misses 666 reference. Film at 11.

Re:Six dollars and 66 cents? (0)

Anonymous Coward | more than 12 years ago | (#3827423)

Did not, asswipe. Just because I chose to take another angle does not mean I missed his reference.

Here's one applicable to you: Slashbot shows the world he has a single-digit IQ. Film at 11.

Re:Six dollars and 66 cents? (0)

Anonymous Coward | more than 12 years ago | (#3827489)

Did not

Slashdotter whines about being made fun of. Film at 11.

Re:lets see here..... (0, Funny)

Anonymous Coward | more than 12 years ago | (#3827288)

according to VA Linux's most recent quarterly filings, slashcode has a negative value.

Re:lets see here..... (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#3827410)

Wasn't 666 the NUMBER OF TEH BEAST ?

The lameness filter is too sexual for posts.

Re:lets see here..... (1, Insightful)

Anonymous Coward | more than 12 years ago | (#3827449)

who the fuck is modding that offtopic? did you not read the article? the article deals with cost estimation of source code. In this post, we see a satirical representation of what CmdrTaco might experience if he were to run the cost estimator tool (the topic of the article) against slashcode, the code that runs Slashdot. The low value returned plays on the running gag that slashcode is a POS (similar to the ongoing gag of Slashdot always being infected with the latest IIS security hole)

The resulting value of 666 is also a common joke among geeks.

sigh -- Maybe this is why some people have .sigs saying offtopic means the moderator missed the joke.

WTF? (1, Flamebait)

Ctrl-Z (28806) | more than 12 years ago | (#3827229)


Okay, so now Slashdot is posting this story that is over a year old?

From the header of the paper:

More Than a Gigabuck: Estimating GNU/Linux's Size
David A. Wheeler (dwheeler@dwheeler.com)
June 30, 2001 (updated November 8, 2001)
Version 1.06

Re:WTF? (-1)

News For Turds (580751) | more than 12 years ago | (#3827243)

No. News For Turds! Get it right!

Re:WTF? (2, Funny)

be-fan (61476) | more than 12 years ago | (#3827252)

The funny thing is that this story was posted on Slashdot a year ago!

Re:WTF? (0)

Anonymous Coward | more than 12 years ago | (#3827397)

Yes, I remember having read that one a while ago.

Moderation on CRACK ! (0)

Anonymous Coward | more than 12 years ago | (#3827466)

This guy is correct.
The story already appeared on /. a year ago.
If you use moderation to abreact your sexual impotence, then get Viagra or stop moderating.

Slow news day, Taco? (5, Interesting)

damiam (409504) | more than 12 years ago | (#3827239)

Good god, people. This app has been out there for years. It's been mentioned in previous /. stories. Most people already know about it. This isn't news.

I know I'll get modded down for saying this, but Taco, as an "editor", couldn't you at least have fixed This Guy's Moronic Capitalization Scheme?

Re:Slow news day, Taco? (-1, Troll)

Anonymous Coward | more than 12 years ago | (#3827272)

Has anyone ever told you you'd look sexy covered in bird feces and jism?

because you would.

Jamie@iliketogetsuckedoffbymonkies.vg is a fag.

Re:Slow news day, Taco? (3, Funny)

carlos_benj (140796) | more than 12 years ago | (#3827326)

...couldn't you at least have fixed This Guy's Moronic Capitalization Scheme?

That's not a scheme. The entire post is a very long title for a very short book he's writing...

Re:Slow news day, Taco? (0)

Anonymous Coward | more than 12 years ago | (#3827448)

Actually, it's wrong even for title/headline case.

Yeah.... (3, Funny)

graphicartist82 (462767) | more than 12 years ago | (#3827240)

A Billion Dollars Worth Of Software On My System For Free!

Yeah, that's what happens when you use P2P _WAY_ too much

Re:Yeah.... (0)

Anonymous Coward | more than 12 years ago | (#3827566)

Someone mod this up!

I don't believe it!!!! (0, Troll)

joshsnow (551754) | more than 12 years ago | (#3827242)

Someone finally acknowledging that OpenSource/Free(beer) software actually has an associated cost - what next? Wait - is that a flying pig I see?

Re:I don't believe it!!!! (-1, Flamebait)

Anonymous Coward | more than 12 years ago | (#3827284)

No, that's just your mom committing suicide.

Jamie@Ifuckdeadpeople.vg is a fag.

Re:I don't believe it!!!! (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#3827305)

Nope, that's warcraft III!

Interesting. (2)

jellomizer (103300) | more than 12 years ago | (#3827246)

Although I rember this article in the Past a fiew months ago. But I am to lazy to look it up. But it is instering how the Open Source movement just by a lot of people just doing a lot of little things (and some not so little) has created a product that would take a lot resources for a large company to complete. Open Source Software in my opinion is the only way the Little Guy to play with the Big Guns.

Re:Interesting. (2)

rnd() (118781) | more than 12 years ago | (#3827347)

Open Source Software in my opinion is the only way the Little Guy to play with the Big Guns

Not the only way. A bunch of coders could put together a software company and develop great products and recruit top talent. The company would grow and might eventually displace Microsoft.

Microsoft was once a couple of college-age kids who stayed up all night writing code who happened to get the DOS contract.

Companies have an advantage over OSS developers in that when the company is poised for success, people want to invest money in the company in order to reap larger returns later. This gives the company the advantage of more money to recruit top full time talent, etc. Most people regrettably have bills to pay, and the poorly funded nature of most OSS projects will always limit the amount of some people's time that the projects can obtain.

Re:Interesting. (2, Interesting)

carlos_benj (140796) | more than 12 years ago | (#3827469)

Microsoft was once a couple of college-age kids who stayed up all night writing code who happened to get the DOS contract.

The chances of that happening again are fairly slim. This was clearly a case of being in the right place at the right time. A couple of years later and they would have found themselves trying to supplant the standard desktop OS. The combination of the right hardware platform, a 'new' OS and a viable business app all had to click at the same time. Had the PC revolution started years earlier, and had those same two college kids tried to unseat that alternate universe's Microsoft juggernaut, it wouldn't have happened, no matter how good a marketeer Bill is.

Companies have an advantage over OSS developers in that when the company is poised for success, people want to invest money in the company in order to reap larger returns later.

Precisely. Given the dominance of Microsoft in the market, those savvy people aren't likely to gamble with funds they want a return on. That's why OSS really is a viable way to make significant inroads in the market. You now have several companies helping to fund that development. Entire countries are looking to OSS to free them from the Microsoft treadmill of costly upgrades and zany licensing fees. The momentum is building and Microsoft sees it. They don't have a problem with Apple because they see them as a niche player, but I don't think they'd be writing licenses with anti-GPL language in them if they didn't genuinely see it as a threat to market share. As much as some of us like to bash Microsoft, the executives are not stupid and are quite capable of interpreting the GPL and understanding that their 'take' on the license just isn't supported by the GPL's language.

Re:Interesting. (1)

carlos_benj (140796) | more than 12 years ago | (#3827351)

Although I rember this article in the Past a fiew months ago. But I am to lazy to look it up. But it is instering how the Open Source movement just by a lot of people just doing a lot of little things (and some not so little) has created a product that would take a lot resources for a large company to complete. Open Source Software in my opinion is the only way the Little Guy to play with the Big Guns.

--
If My spelling bugs you. Then my work is done.


In that case, you can go home now.

Hidden costs (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#3827249)

It costs oh so much more when you factor in the tears, the sadness, the ass exams. Lunis makes you sad and gay and you shouldn't use it there, fella!

Re:Hidden costs (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#3827287)

Jamo!

EP (-1, Flamebait)

Anonymous Coward | more than 12 years ago | (#3827250)

In honor of penis birds, guys who are too dumb to realize that the security gaurd at the EL-AL ticket counter was most likely in the Israeli special forces, News for Turds, Sexual Asspussy, and all the other leet CliT meat collections.

fo shizzle my nizzle

Jamie@myassiselastic.vg is still a hamster-loving faggot.

oh right, and linux can't have a value, the whole point of it's existance is to make sure that all intellectual property becomes worthless. There for Linux, /. reading geeks, MacOS X, and yes, the guys who make mathematical lego sculptures are all worthless.

Re:EP (-1)

News For Turds (580751) | more than 12 years ago | (#3827290)

Oh hellZ yeah, bizzzzzznitch!

SMOEK CRIZACK!! (-1, Troll)

Anonymous Coward | more than 12 years ago | (#3827418)

we aer teh rawx.

How to put MS's 40 billion to good use. (0)

Anonymous Coward | more than 12 years ago | (#3827256)

Maybe MS could spend the money and have a working OS in about a week or two.

I want to know (0)

Anonymous Coward | more than 12 years ago | (#3827257)

how much can I charge for my 5 line C++ "Hello world\n" program? ;-)

Re:I want to know (0)

Anonymous Coward | more than 12 years ago | (#3827570)

come on, some moderator has to find this funny ;)

At last! The real name of the X-Window System(tm) (0)

Anonymous Coward | more than 12 years ago | (#3827261)

This must be about the first time I've read an article about Linux (GNU/Linux, if that's what you like to call it) that hasn't called the X Window System(tm) "X Windows".

As far as I am aware, Windows(tm) is a trademark of Microsoft Corporation. The X Consortium actually gives recommended names for X in the X man page.

bad news for Linux? (5, Funny)

tps12 (105590) | more than 12 years ago | (#3827262)

This looks like a serious problem for Linux distributors like Red Hat, Mandrake, and Debian. They sell their products (which consist of software and support and manuals) for $40-$100, usually. Now we see that what they put into their product (i.e., the cost) is orders of magnitude beyond that. Even if Red Hat sold every single copy it packaged (it doesn't even come close), and even if nobody downloaded it for free or copied the CDs for a friend (again, an incredibly optimistic assumption), it would still be looking at huge losses.

This might have worked a few years ago, but with accounting practices coming under scrutiny across the board, I fear that these companies are headed for trouble.

Re:bad news for Linux? (2, Flamebait)

John Hasler (414242) | more than 12 years ago | (#3827381)


This looks like a serious problem for Linux distributors like Red Hat, Mandrake, and Debian. They sell their products ... for $40-$100, usually.

Wrong. Debian doesn't sell anything.

Now we see that what they put into their product (i.e., the cost) is orders of magnitude beyond that.

Wrong again. Red Hat's costs are what they actually spend, not what the stuff they distribute would have cost if it had not been given to them.

even if nobody downloaded it for free...

There's your clue: _Red_ _Hat_ downloads the stuff they distribute for free.

Re:bad news for Linux? (2, Insightful)

dattaway (3088) | more than 12 years ago | (#3827440)

A serious problem for them?

The IRS is going to love me come audit day...

This guy is a fraud (0, Flamebait)

Randy Rathbun (18851) | more than 12 years ago | (#3827265)

Why? He obviously does not use Linux. Just look at his picture! What Linux user out there is gonna be caught dead wearing a white shirt and a tie? Okay, maybe to a wedding/funeral, but that's it.

He also went off and shaved and combed his hair for his picture.

The man just ain't right, I tell you!

Yup (-1)

News For Turds (580751) | more than 12 years ago | (#3827281)

I Agree With This Post

Hmmm... sloccount, you say? (1)

jaunty (56283) | more than 12 years ago | (#3827268)

woody:~# apt-cache search sloccount
sloccount - Programs for counting physical source lines of code (SLOC)

...so it appears there's a *.deb of it already (or is this an old story...). Hmmm... you be the judge.

Re:Hmmm... sloccount, you say? (1)

Ctrl-Z (28806) | more than 12 years ago | (#3827303)


EVAL: it appears theres a *.deb of it already (or is this an old story...)

RESULT: TRUE.

1000 LOC (-1)

bryans (555149) | more than 12 years ago | (#3827271)

1000 lines of code: $500
SLOCCount program: Free
Knowing you have a billion dollars worth of code: Priceless [mastercard.com]

value? (3, Insightful)

rnd() (118781) | more than 12 years ago | (#3827292)

It's fun to see someone do something like this. However, the fact that most people don't use Linux means that the value of using Linux is less than the cost of using Linux. Therefore, since the source code is free, there must be other costs that are preventing most people from using Linux.

Instead of wasting time figuring out fictitious pricing based on the way that corporate America prices software, why not figure out a way to remove the aforementioned hidden costs from Linux so that the masses can begin to see what many of us on /. have known for a while: that GNU/Linux and Open Source Software represent a great choice.

value / payback Linux-centric? (2)

fw3 (523647) | more than 12 years ago | (#3827454)

the fact that most people don't use Linux means that the value of using Linux is less than the cost of using Linux.

The cost analysis was done based on Linux; however, most of the code analysed is in fact for things that run on other platforms, and much of it was in development for years before Linux 0.9 hit the 'Net.

So the measure of value based on who uses Linux includes everyone who uses Linux-hosted Apache servers. The more general case includes everyone who accesses servers that depend on (Perl, BIND, sendmail, MySQL .... etc) or were/are developed using (X11, CVS, BitKeeper, emacs, gcc .... etc)

The economic value isn't small; that much I'm pretty certain of. Just how big? Well, it works for me; I'll leave the analysis to the economists.

Hmmm (1)

cca93014 (466820) | more than 12 years ago | (#3827297)

It may well contain "over 30 million physical source lines of code (SLOC)", but what about the lines of source code? Eh?

Didn't think about that, did you?

Nonsense (2, Interesting)

qlmatrix (588417) | more than 12 years ago | (#3827317)

I don't think the measurement of the length of the code, or of the time it took (or might have taken) to produce it, is in any way related to the value of using the software produced.

The same people who argue in these categories also try to legitimize open source software by its better "quality" in terms of fewer errors. The consequence of this argument is that MS software would be great to use if it contained fewer errors. But that's not the main point. As can be seen when MS does such horrible things as allowing themselves to destroy your software (the DRM EULA change), the problem is not the result but the way they produce their software. I'd argue that because their development model is bad the resulting software is bad too, but that's only a minor problem in comparison to the harm they do to software culture in general.

slashdotted! (0)

Anonymous Coward | more than 12 years ago | (#3827325)

This paper analyzes the amount of source code in GNU/Linux, using Red Hat Linux 7.1 as a representative GNU/Linux distribution, and presents what I believe are interesting results.

In particular, it would cost over $1 billion ($1,000 million - a Gigabuck) to develop this GNU/Linux distribution by conventional proprietary means in the U.S. (in year 2000 U.S. dollars). Compare this to the $600 million estimate for Red Hat Linux version 6.2 (which had been released about one year earlier). Also, Red Hat Linux 7.1 includes over 30 million physical source lines of code (SLOC), compared to well over 17 million SLOC in version 6.2. Using the COCOMO cost model, this system is estimated to have required about 8,000 person-years of development time (as compared to 4,500 person-years to develop version 6.2). Thus, Red Hat Linux 7.1 represents over a 60% increase in size, effort, and traditional development costs over Red Hat Linux 6.2. This is due to an increased number of mature and maturing open source / free software programs available worldwide.

Many other interesting statistics emerge. The largest components (in order) were the Linux kernel (including device drivers), Mozilla (Netscape's open source web system including a web browser, email client, and HTML editor), the X Window system (the infrastructure for the graphical user interface), gcc (a compilation system), gdb (for debugging), basic binary tools, emacs (a text editor and far more), LAPACK (a large Fortran library for numerical linear algebra), the Gimp (a bitmapped graphics editor), and MySQL (a relational database system). The languages used, sorted by the most lines of code, were C (71% - was 81%), C++ (15% - was 8%), shell (including ksh), Lisp, assembly, Perl, Fortran, Python, tcl, Java, yacc/bison, expect, lex/flex, awk, Objective-C, Ada, C shell, Pascal, and sed.

The predominant software license is the GNU GPL. Slightly over half of the software is simply licensed using the GPL, and the software packages using the copylefting licenses (the GPL and LGPL), at least in part or as an alternative, accounted for 63% of the code. In all ways, the copylefting licenses (GPL and LGPL) are the dominant licenses in this GNU/Linux distribution. In contrast, only 0.2% of the software is public domain.

This paper is an update of my previous paper on estimating GNU/Linux's size, which measured Red Hat Linux 6.2 [Wheeler 2001] [dwheeler.com] . Since Red Hat Linux 6.2 was released in March 2000, and Red Hat Linux 7.1 was released in April 2001, this paper shows what's changed over approximately one year. More information is available at http://www.dwheeler.com/sloc [dwheeler.com] .

1. Introduction

The GNU/Linux operating system has gone from an unknown to a powerful market force. Netcraft found that, of the systems running web servers on June 2001, GNU/Linux was now the second most popular operating system (with 29.6%, versus Windows' 49.6%) [Netcraft 2001] [netcraft.com] . Another survey, of primarily European and educational sites, found that GNU/Linux was used more than any other operating system (of the sites it surveyed) [Zoebelein 1999] [leb.net] . IDC found that 25% of all server operating systems purchased in 1999 were GNU/Linux, making it second only to Windows NT's 38% [Shankland 2000a] [cnet.com] .

There appear to be many reasons for this, and not simply because GNU/Linux can be obtained at no or low cost. For example, experiments suggest that GNU/Linux is highly reliable. A 1995 study of a set of individual components found that the GNU and GNU/Linux components had a significantly higher reliability than their proprietary Unix competitors (6% to 9% failure rate with GNU and Linux, versus an average 23% failure rate with the proprietary software using their measurement technique) [Miller 1995] [wisc.edu] . A ten-month experiment in 1999 by ZDnet found that, while Microsoft's Windows NT crashed every six weeks under a ``typical'' intranet load, using the same load and request set the GNU/Linux systems (from two different distributors) never crashed [Vaughan-Nichols 1999] [zdnet.com] .

However, possibly the most important reason for GNU/Linux's popularity among many developers and users is that its source code is generally ``open source software'' and/or ``free software''. A program that is ``open source software'' or ``free software'' is essentially a program whose source code can be obtained, viewed, changed, and redistributed without royalties or other limitations of these actions. A more formal definition of ``open source software'' is available from the Open Source Initiative [OSI 1999] [opensource.org] , a more formal definition of ``free software'' (as the term is used in this paper) is available from the Free Software Foundation [FSF 2000] [gnu.org] , and other general information about these topics is available at Wheeler [2000a] [dwheeler.com] . Quantitative rationales for using open source / free software is given in Wheeler [2000b] [dwheeler.com] . The GNU/Linux operating system is actually a suite of components, including the Linux kernel on which it is based, and it is packaged, sold, and supported by a variety of distributors. The Linux kernel is ``open source software''/``free software'', and this is also true for all (or nearly all) other components of a typical GNU/Linux distribution. Open source software/free software frees users from being captives of a particular vendor, since it permits users to fix any problems immediately, tailor their system, and analyze their software in arbitrary ways.

Surprisingly, although anyone can analyze GNU/Linux for arbitrary properties, I have found little published analysis of the amount of source lines of code (SLOC) contained in a GNU/Linux distribution. Microsoft unintentionally published some analysis data in the documents usually called ``Halloween I'' and ``Halloween II'' [Halloween I] [opensource.org] [Halloween II] [opensource.org] . Another study focused on the Linux kernel and its growth over time is by Godfrey [2000] [uwaterloo.ca] ; this is an interesting study but it focuses solely on the Linux kernel (not the entire operating system). Paul G. Allen posted some results from running Scientific Toolworks, Inc.'s tools on the Linux kernel [randomlogic.com] , but this analysis only considered C code (including headers) - ignoring the many other languages used in constructing the Linux kernel (e.g., assembly language), and only concentrating on the kernel. The Free Code Graphing Project at http://fcgp.sourceforge.net [sourceforge.net] generates a graphical representation of a program (currently, the Linux kernel), but only of the C code. In a previous paper, I examined Red Hat Linux 6.2 and the numbers from the Halloween papers [Wheeler 2001] [dwheeler.com] .

This paper updates my previous paper, showing estimates of the size of one of today's GNU/Linux distributions, and it estimates how much it would cost to rebuild this typical GNU/Linux distribution using traditional software development techniques. Various definitions and assumptions are included, so that others can understand exactly what these numbers mean. I have intentionally written this paper so that you do not need to read the previous version of this paper first.

For my purposes, I have selected as my ``representative'' GNU/Linux distribution Red Hat Linux version 7.1. I believe this distribution is reasonably representative for several reasons:

  1. Red Hat Linux is the most popular Linux distribution sold in 1999 according to IDC [Shankland 2000b] [cnet.com] . Red Hat sold 48% of all copies in 1999; the next largest distribution in market share sales was SuSE (a German distributor) at 15%. Not all GNU/Linux copies are ``sold'' in a way that this study would count, but the study at least shows that Red Hat's distribution is a popular one.
  2. Many distributions (such as Mandrake) are based on, or were originally developed from, a version of Red Hat Linux. This doesn't mean the other distributions are less capable, but it suggests that these other distributions are likely to have a similar set of components.
  3. All major general-purpose distributions support (at least) the kind of functionality supported by Red Hat Linux, if for no other reason than to compete with Red Hat.
  4. All distributors start with the same set of open source software projects from which to choose components to integrate. Therefore, other distributions are likely to choose the same components or similar kinds of components with often similar size for the same kind of functionality.

Different distributions and versions would produce different size figures, but I hope that this paper will be enlightening even though it doesn't try to evaluate ``all'' distributions. Note that some distributions (such as SuSE) may decide to add many more applications, but also note this would only create larger (not smaller) sizes and estimated levels of effort. At the time that I began this project, version 7.1 was the latest version of Red Hat Linux available, so I selected that version for analysis.

Note that Red Hat Linux 6.2 was released on March 2000, Red Hat Linux 7 was released on September 2000 (I have not counted its code), and Red Hat Linux 7.1 was released on April 2001. Thus, the differences between Red Hat Linux 7.1 and 6.2 show differences accrued over 13 months (approximately one year).

Clearly there is far more open source / free software available worldwide than is counted in this paper. However, the job of a distributor is to examine these various options and select software that they believe is both sufficiently mature and useful to their target market. Thus, examining a particular distribution results in a selective analysis of such software.

Section 2 briefly describes the approach used to estimate the ``size'' of this distribution (more details are in Appendix A). Section 3 discusses some of the results. Section 4 presents conclusions, followed by an appendix.

GNU/Linux is often called simply ``Linux'', but technically Linux is only the name of the operating system kernel; to eliminate ambiguity this paper uses the term ``GNU/Linux'' as the general name for the whole system and ``Linux kernel'' for just this inner kernel.

2. Approach

My basic approach was to:

  1. install the source code files in uncompressed format; this requires carefully selecting the source code to be analyzed.
  2. count the number of source lines of code (SLOC); this requires a careful definition of SLOC.
  3. use an estimation model to estimate the effort and cost of developing the same system in a proprietary manner; this requires an estimation model.
  4. determine the software licenses of each component and develop statistics based on these categories.

More detail on this approach is described in Appendix A. A few summary points are worth mentioning here, however.

2.1 Selecting Source Code

I included all software provided in the Red Hat distribution, but note that Red Hat no longer includes software packages that only apply to other CPU architectures (and thus packages not applying to the x86 family were excluded). I did not include ``old'' versions of software, or ``beta'' software where non-beta was available. I did include ``beta'' software where there was no alternative, because some developers don't remove the ``beta'' label even when it's widely used and perceived to be reliable.

I used md5 checksums to identify and ignore duplicate files, so if the same file contents appeared in more than one file, they were only counted once (as a tie-breaker, such files are assigned to the first build package they apply to, in alphabetic order).
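For illustration, a minimal sketch of that duplicate detection (not the paper's actual code; the package layout and tie-breaking are simplified):

import hashlib
from pathlib import Path

def unique_files(package_dirs):
    # Yield (package, file) pairs, counting identical file contents only once;
    # sorting the package list first makes alphabetic order the tie-breaker.
    seen = set()
    for pkg in sorted(package_dirs):
        for path in sorted(Path(pkg).rglob("*")):
            if not path.is_file():
                continue
            digest = hashlib.md5(path.read_bytes()).hexdigest()
            if digest not in seen:
                seen.add(digest)
                yield pkg, path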

The code in makefiles and Red Hat Package Manager (RPM) specifications was not included. Various heuristics were used to detect automatically generated code, and any such code was also excluded from the count. A number of other heuristics were used to determine if a language was a source program file, and if so, what its language was.

Since different languages have different syntaxes, I could only measure the SLOC for the languages that my tool (sloccount) could detect and handle. The languages sloccount could detect and handle are Ada, Assembly, awk, Bourne shell and variants, C, C++, C shell, Expect, Fortran, Java, lex/flex, LISP/Scheme, Makefile, Objective-C, Pascal, Perl, Python, sed, SQL, TCL, and Yacc/bison. Other languages are not counted; these include XUL (used in Mozilla), Javascript (also in Mozilla), PHP, and Objective Caml (an OO dialect of ML). Also code embedded in data is not counted (e.g., code embedded in HTML files). Some systems use their own built-in languages; in general code in these languages is not counted.

Re:slashdotted! (0)

Anonymous Coward | more than 12 years ago | (#3827349)

2.2 Defining SLOC

The ``physical source lines of code'' (physical SLOC) measure was used as the primary measure of SLOC in this paper. Less formally, a physical SLOC in this paper is a line with something other than comments and whitespace (tabs and spaces). More specifically, physical SLOC is defined as follows: ``a physical source line of code is a line ending in a newline or end-of-file marker, and which contains at least one non-whitespace non-comment character.'' Comment delimiters (characters other than newlines starting and ending a comment) were considered comment characters. Data lines only including whitespace (e.g., lines with only tabs and spaces in multiline strings) were not included.

Note that the ``logical'' SLOC is not the primary measure used here; one example of a logical SLOC measure would be the ``count of all terminating semicolons in a C file.'' The ``physical'' SLOC was chosen instead of the ``logical'' SLOC because there were so many different languages that needed to be measured. I had trouble getting freely-available tools to work on this scale, and the non-free tools were too expensive for my budget (nor is it certain that they would have fared any better). Since I had to develop my own tools, I chose a measure that is much easier to implement. Park [1992] [cmu.edu] actually recommends the use of the physical SLOC measure (as a minimum), for this and other reasons. There are disadvantages to the ``physical'' SLOC measure. In particular, physical SLOC measures are sensitive to how the code is formatted. However, logical SLOC measures have problems too. First, as noted, implementing tools to measure logical SLOC is more difficult, requiring more sophisticated analysis of the code. Also, there are many different possible logical SLOC measures, requiring even more careful definition. Finally, a logical SLOC measure must be redefined for every language being measured, making inter-language comparisons more difficult. For more information on measuring software size, including the issues and decisions that must be made, see Kalb [1990] [usc.edu] , Kalb [1996] [usc.edu] , and Park [1992] [cmu.edu] .
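For illustration, here is a toy physical-SLOC counter for C-style source, following the definition above (a simplification, not sloccount itself; among other things it ignores comment markers inside string literals):

def physical_sloc(text):
    # Count lines containing at least one non-whitespace, non-comment character.
    count = 0
    in_block_comment = False
    for line in text.splitlines():
        code_chars = []
        i = 0
        while i < len(line):
            if in_block_comment:
                end = line.find("*/", i)
                if end == -1:
                    break                    # comment continues on the next line
                in_block_comment, i = False, end + 2
            elif line.startswith("/*", i):
                in_block_comment, i = True, i + 2
            elif line.startswith("//", i):
                break                        # rest of the line is a comment
            else:
                code_chars.append(line[i])
                i += 1
        if any(not c.isspace() for c in code_chars):
            count += 1
    return count

print(physical_sloc("int main(void) {\n  /* setup */\n  return 0; // done\n}\n"))  # 3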

Note that this required that every file be categorized by language type (so that the correct syntax for comments, strings, and so on could be applied). Also, automatically generated files had to be detected and ignored. Thankfully, my tool ``sloccount'' does this automatically.

2.3 Estimation Models

This decision to use physical SLOC also implied that for an effort estimator I needed to use the original COCOMO cost and effort estimation model (see Boehm [1981]), rather than the newer ``COCOMO II'' model. This is simply because COCOMO II requires logical SLOC as an input instead of physical SLOC.

Basic COCOMO is designed to estimate the time from product design (after plans and requirements have been developed) through detailed design, code, unit test, and integration testing. Note that plans and requirement development are not included. COCOMO is designed to include management overhead and the creation of documentation (e.g., user manuals) as well as the code itself. Again, see Boehm [1981] for a more detailed description of the model's assumptions. Of particular note, basic COCOMO does not include the time to develop translations to other human languages (of documentation, data, and program messages) nor fonts.

There is reason to believe that these models, while imperfect, are still valid for estimating effort in open source / free software projects. Although many open source programs don't need management of human resources, they still require technical management, infrastructure maintenance, and so on. Design documentation is captured less formally in open source projects, but it's often captured by necessity because open source projects tend to have many developers separated geographically. Clearly, the systems must still be programmed. Testing is still done, although as with many of today's proprietary programs, a good deal of testing is done through alpha and beta releases. In addition, quality is enhanced in many open source projects through peer review of submitted code. The estimates may be lower than the actual values because they don't include estimates of human language translations and fonts.
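For illustration, the basic COCOMO arithmetic can be sketched as follows, assuming the common organic-mode constants (effort in person-months = 2.4 * KSLOC^1.05; nominal schedule in months = 2.5 * effort^0.38; the excerpt here does not quote the parameters used) and the salary and overhead figures given below. Note this applies the model to the whole 30-MSLOC distribution as one blob, whereas the analysis computes per build directory and sums, so with an exponent above 1 it overshoots the per-package totals; treat it as a rough cross-check only:

def cocomo_organic(sloc, salary=56_286, wrap=2.4):
    kloc = sloc / 1000.0
    effort_pm = 2.4 * kloc ** 1.05              # effort, person-months
    schedule_months = 2.5 * effort_pm ** 0.38   # nominal schedule, months
    cost = effort_pm * (salary / 12.0) * wrap   # monthly salary plus overhead
    return effort_pm, schedule_months, cost

effort_pm, months, dollars = cocomo_organic(30_000_000)
print(f"{effort_pm / 12:,.0f} person-years, ${dollars:,.0f}")  # ~10,000 person-years, ~$1.4 billion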

Each software source code package, once uncompressed, produced zero or more ``build directories'' of source code. Some packages do not actually contain source code (e.g., they only contain configuration information), and some packages are collections of multiple separate pieces (each in different build directories), but in most cases each package uncompresses into a single build directory containing the source code for that package. Each build directory had its effort estimation computed separately; the efforts of each were then totalled. This approach assumes that each build directory was developed essentially separately from the others, which in nearly all cases is quite accurate. This approach slightly underestimates the actual effort in the rare cases where the development of the code in separate build directories are actually highly interrelated; this effect is not expected to invalidate the overall results.

For programmer salary averages, I used a salary survey from the September 4, 2000 issue of ComputerWorld; their survey claimed that this annual programmer salary averaged $56,286 in the United States. I was unable to find a publicly-backed average value for overhead, also called the ``wrap rate.'' This value is necessary to estimate the costs of office space, equipment, overhead staff, and so on. I talked to two cost analysts, who suggested that 2.4 would be a reasonable overhead (wrap) rate. Some Defense Systems Management College (DSMC) training material gives examples of 2.3 (125.95%+100%) not including general and administrative (G&A) overhead, and 2.81 when including G&A (125% engineering overhead, plus 25% on top of that amount for G&A) [DSMC] [osd.mil] . This at least suggests that 2.4 is a plausible estimate. Clearly, these values vary widely by company and region; the information provided in this paper is enough to use different numbers if desired. These are the same values as used in my last report.

2.4 Determining Software Licenses

A software license determines how that software can be used and reused, and open source software licensing has been a subject of great debate. The Software Release Practice HOWTO [Raymond 2001] [linuxdoc.org] discusses briefly why license choices are so important to open source / free software projects:

The license you choose defines the social contract you wish to set up among your co-developers and users ...

Who counts as an author can be very complicated, especially for software that has been worked on by many hands. This is why licenses are important. By setting out the terms under which material can be used, they grant rights to the users that protect them from arbitrary actions by the copyright holders.

In proprietary software, the license terms are designed to protect the copyright. They're a way of granting a few rights to users while reserving as much legal territory as possible for the owner (the copyright holder). The copyright holder is very important, and the license logic so restrictive that the exact technicalities of the license terms are usually unimportant.

In open-source software, the situation is usually the exact opposite; the copyright exists to protect the license. The only rights the copyright holder always keeps are to enforce the license. Otherwise, only a few rights are reserved and most choices pass to the user. In particular, the copyright holder cannot change the terms on a copy you already have. Therefore, in open-source software the copyright holder is almost irrelevant -- but the license terms are very important.

Well-known open source licenses include the GNU General Public License (GPL), the GNU Library/Lesser General Public License (LGPL), the MIT (X) license, the BSD license, and the Artistic license. The GPL and LGPL are termed ``copylefting'' licenses, that is, the license is designed to prevent the code from becoming proprietary. See Perens [1999] [oreilly.com] for more information comparing these licenses. Obvious questions include ``what license(s) are developers choosing when they release their software'' and ``how much code has been released under the various licenses?''

An approximation of the amount of software using various licenses can be found for this particular distribution. Red Hat Linux uses the Red Hat Package Manager (RPM), and RPM supports capturing license data for each package (these are the ``Copyright'' and ``License'' fields in the specification file). I used this information to determine how much code was covered by each license. Since this field is simply a string of text, there were some variances in the data that I had to clean up, for example, some entries said ``GNU'' while most said ``GPL''. In some cases Red Hat did not include licensing information with a package. In that case, I wrote a program to attempt to determine the license by looking for certain conventional filenames and contents.

This is an imperfect approach. Some packages contain different pieces of code with different licenses applying to different pieces. Some packages are ``dual licensed'', that is, they are released under more than one license. Sometimes these other licenses are noted, while at other times they aren't. There are actually two BSD licenses (the ``old'' and ``new'' licenses), but the specification files don't distinguish between them. Also, if the license wasn't one of a small set of common licenses, Red Hat tended to assign nondescriptive phrases such as ``distributable''. My automated techniques were limited too, in particular, while some licenses (e.g., the GPL and LGPL) are easy to recognize automatically, BSD-like and MIT-like licenses vary the license text and so are more difficult to recognize automatically (and some changes to the license would render them non-open source, non-free software). Thus, when Red Hat did not identify a package's license, a program dual licensed under both the BSD and GPL license might only be labelled as having the GPL using these techniques. Nevertheless, this approach is sufficient to give some insight into the amount of software using various licenses. Future research could examine each license in turn and categorize them; such research might require several lawyers to determine when two licenses in certain circumstances are ``equal.''
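For illustration, a rough sketch of that filename-and-contents fallback (the marker phrases are illustrative, not the paper's exact heuristics; as noted above, BSD- and MIT-style variants largely defeat this kind of literal matching):

from pathlib import Path

LICENSE_MARKERS = [
    ("GNU LESSER GENERAL PUBLIC LICENSE", "LGPL"),
    ("GNU LIBRARY GENERAL PUBLIC LICENSE", "LGPL"),
    ("GNU GENERAL PUBLIC LICENSE", "GPL"),
]

def guess_license(build_dir):
    # Look in conventional license files for well-known phrases.
    for name in ("COPYING", "COPYING.LIB", "LICENSE"):
        path = Path(build_dir) / name
        if path.is_file():
            text = path.read_text(errors="replace").upper()
            for marker, license_name in LICENSE_MARKERS:
                if marker in text:
                    return license_name
    return "unknown"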

One program worth mentioning in this context is Python, which has had several different licenses. Version 1.6 and later (through 2.1) had more complex licenses that the Free Software Foundation (FSF) believes were incompatible with the GPL. Recently this was resolved by another change to the Python license to make Python fully compatible with the GPL. Red Hat Linux 7.1 includes an older version of Python (1.5.2), presumably because of these licensing issues. It can't be because Red Hat is unaware of later versions of Python; Red Hat uses Python in its installation program (which it developed and maintains). Hopefully, the recent resolution of license incompatibilities with the GPL license will enable Red Hat to include the latest versions of Python in the future. In any case, there are several different Python-specific licenses, all of which can legitimately be called the ``Python'' license. Red Hat has labelled Python itself as having a ``Distributable'' license, and package Distutils-1.0.1 is labelled with the ``Python'' license; these labels are kept in this paper.

No more functions for me... (3, Funny)

evilviper (135110) | more than 12 years ago | (#3827336)

I'll never use macros, functions, classes, or the STL again!

"Look, I wrote a program which does the exact same thing as another program, but mine is worth much, much more!"

Yeah, whatever... (1)

st0rmshad0w (412661) | more than 12 years ago | (#3827355)

Just try explaining it to your insurance company after your house gets robbed, or some idiot airport security inspector accidentally trashes your laptop.

Heck, given that theory, one fire should net me more than enough to retire on.

Slashdot costs industry $1billion/year (5, Interesting)

pubjames (468013) | more than 12 years ago | (#3827359)


I love these kinds of stats.

Slashdot has, say, 100,000 US readers per day.

Each spends an hour reading slashdot when they should be working.

Let's say an average Slashdot reader is worth say, $40 an hour, and they read Slashdot on 300 days during the year.

That means Slashdot costs the USA $1,200,000,000 dollars a year! Crikey! Don't tell Bush!

PWPBOT IS DEAD (0, Funny)

Anonymous Coward | more than 12 years ago | (#3827429)

I just heard some sad news on talk radio - troller/crapflooder pwpbot was found dead in its basement this morning. There weren't any more details. I'm sure everyone in the Slashdot community will miss it - even if you didn't enjoy his work, there's no denying its contributions to popular culture. Truly an Slashdot icon.

Re:Slashdot costs industry $1billion/year (0)

Anonymous Coward | more than 12 years ago | (#3827488)

Problems with your analysis:

1) Less than 1% of slashbots have ever held a job.
2) The average slashbot is worthless as a human being.

Your final result is correct, however.

That means Slashdot costs the USA $1,200,000,000 dollars a year!

Yes. In welfare payments.

Re:Slashdot costs industry $1billion/year (0)

Anonymous Coward | more than 12 years ago | (#3827593)

If you're going to analyze things to death, let me give it a whirl. Based on your two points, and your objection to the parent post, the final result is NOT correct. In order for Slashdot to cost the US 1.2B a year in welfare, that would have to mean that the slashbots don't have a job BECAUSE they read Slashdot. But you seem to disagree with that contention.

But.. (1)

iONiUM (530420) | more than 12 years ago | (#3827360)

A shorter program that does the same thing as a longer program, but more efficiently, might have taken much more time and effort to code. I don't think the tool could possibly take this into consideration.
Personally, I'd feel bad if I wrote a program that was just a bunch of spaghetti.

Now we know why... (2)

Navius Eurisko (322438) | more than 12 years ago | (#3827368)

Microsoft puts so much code bloat into their programs...

Re:Now we know why... (0)

Anonymous Coward | more than 12 years ago | (#3827456)

look at the article, Mozilla (M18) has more lines of code than the Linux kernel!!!
CAN SOMEONE SAY BLOAT?

Makes you wonder... (0)

Anonymous Coward | more than 12 years ago | (#3827383)

Part of that $1 billion could have helped feed a programmer's family and gone toward making a more stable OS. And with all the layoffs in the industry, don't you just feel awful for patronizing such software?!

That's 1 Billion (year 2000) US dollars (0)

Anonymous Coward | more than 12 years ago | (#3827411)

But how many rupees?
Just think how much they could have saved if they had outsourced it to an Indian contractor!

His Paper Is Bunk (5, Insightful)

dbretton (242493) | more than 12 years ago | (#3827412)

To put it mildly...

In his paper, he uses the basic COCOMO model for estimating the cost. This model, quite frankly, sucks. Boehm's book even states, more or less, that the COCOMO model is only accurate to a factor of 10.

Since I no longer have the Boehm book, this quote from a google-found web page will have to do. This is a quote of a quote from Boehm's book, Software Engineering Economics:

"Basic COCOMO is good for rough order of magnitude estimates of software costs, but its accuracy is necessarily limited because of its lack of factors to account for differences in hardware constraints, personnel quality and experience, use of modern tools and techniques, and other project attributes known to have a significant influence on costs."

Basically, this means that the estimate could be anywhere from $100M to $10B in true cost.

At the very least, this kid should have stated which of the model variants he was using.

Better yet, he should have subdivided the source code into multiple categories: kernel+drivers, tools, productivity software, etc. etc., and then applied the various models to them.

Just my 2 bits.

BTW, here [nasa.gov] is the google-found page which has the quote I stole. Plus, it gives a nice, albeit brief, overview of COCOMO.

-d

Re:His Paper Is Bunk (2)

sean23007 (143364) | more than 12 years ago | (#3827471)

If it's off by a factor of 10, how could it range between 100M and 10B? Wouldn't that be 2 factors of 10? And that's a whole hell of a lot of linux!

No, he's right (2)

vrt3 (62368) | more than 12 years ago | (#3827558)

Because we don't know if it's off to the low side or to the high side. If his estimate was 10 times too low, it was really 10B; if it was 10 times too high, it was really 100M.

Re:His Paper Is Bunk (-1, Troll)

Anonymous Coward | more than 12 years ago | (#3827479)

Just my 2 bits.

I would agree, that your IQ could be expressed in binary using two bits.

You aer teh gay.

Just like Jamie@Iwishiwasntacubanfaggot.vg, who is also a fag.

Don't be confused (2, Interesting)

EdMcMan (70171) | more than 12 years ago | (#3827417)

Well, when I saw the tidbit on /., I thought: wow, a billion dollars worth of software in a Linux distro? That is not what this article says. It simply says that Red Hat would have had to pay developers a billion dollars to complete that much work. To find out how much it should probably cost, add some money for profit, and divide that by the probable number of users. This would only make sense for Linux as a whole, and not just Red Hat.

isn't SLOC junk? (3, Interesting)

*weasel (174362) | more than 12 years ago | (#3827433)


if analyzing SLOC says nothing about developer contributions, efficiency, or effectiveness - then isn't estimating value based off SLOC fundamentally flawed?

i mean, you can't have it both ways. Either SLOC shows how productive programmers are, or it doesn't.

if it does - then get over the SLOC analysis in your job reviews.
if it doesn't - then you cannot even remotely accurately gauge monetary worth through SLOC.

good luck to the people trying to estimate worth of OSS. good luck to the people trying to estimate the worth of programmers.

i just don't know why people don't count 'Customer Problems Solved Over Time' as the end-all, be-all.

(and time and energy fixing software bugs doesn't count. that's not the customer's problem. it's the developer's)

who cares how many SLOC are in a product. how many needs of the end user does it fulfill, and how long did it take to get done from the word 'go'?

yeah, you'd need to define customer needs much more carefully than most shops do... but isn't that part of the eXtreme Programming retinue /. loves to trumpet?

Inflated prices? (2)

sean23007 (143364) | more than 12 years ago | (#3827457)

I kind of hope that nobody uses this to price software that they're selling to a company, lest they lose their credibility. There is no assurance that this guy did not lean toward making this software seem more valuable than it really is, thus making open source software more attractive (because you're getting something for nothing). I'd be careful using this in any other capacity than your home computer for the purpose of having fun.

On a similar note, do the prices seem accurate, for those of you who have used this thing?

Good Lord (0)

Numeros (145682) | more than 12 years ago | (#3827475)

<sarcasm>
Good lord, taco, you should have known that ~7000 stories ago somebody posted this already!!!
</sarcasm>

Come on people, cut the guy some slack. I am sure you can't remember every story posted!!

handle (1)

stud9920 (236753) | more than 12 years ago | (#3827503)

"2bits writes"
With such a handle, how many pennies is his opinion worth?

Computer Guy says... (-1, Offtopic)

Computer Guy (590478) | more than 12 years ago | (#3827522)

I'm scared of Linux. The penguin gives me nightmares. ( ? ) I'm finished

An interesting thumbsuck (5, Interesting)

Twylite (234238) | more than 12 years ago | (#3827535)

Run the same SLOC figures against the statistics from the Function Points methodology and you get a different picture. You are looking at 2500 person-years of effort, with a cost-optimum development time of 6.5 years. However, to deal with the complexity involved you will need approximately 3000 average and 1500 above-average developers (at an average development rate you could expect a 13-year delivery!). Total price tag: around $2 billion (that's 2e9, in case your definition of billion is different).

Of course, this is still a very skewed figure. There is no accounting for the quality of code (at the end of such a complex development cycle, you could expect as many as 7 million defects!), and both FP and COCOMO estimate development effort inclusive of design work and documentation, which in OpenSource typically don't match those in mature commercial development environments (from which the FP and COCOMO statistics are derived).

There is also a huge, and invalid, assumption made by the author, regarding the application of COCOMO (and my FP calculations suffer the same problem). The complexity of a system is MORE than the sum of its parts. This is because developer productivity declines as system complexity increases.

At 10,000 FP, a developer is often only 60% as productive compared to 1,000 FP. The situation is obviously far worse at 300,000 FP (the entire distribution), yet the kernel itself only weighs in at around 20,000 FP. And even then, clear modularisation reduces complexity for individual developers. So it is grossly unfair to base calculations on the system as a whole.

The kernel (around 2.5 MLOC) as a single system would be a task for 300 skilled developers over around 3 years, while the Gimp (around 500 KLOC, still near the top of the list in size) would be looking at 50 developers over 18 months. More complex projects need relatively more time and more developers. Doing all these projects in parallel (assuming it were possible, which it isn't because of dependencies, and that's another factor) would take less time than the most complex task (kernel = 3 years) and relatively fewer developers than estimated based on the complexity of that task (30 MLOC / 2.5 MLOC * 300 developers = max 3600 for the entire distribution). Max cost: 3600 * 3 * $55k = $594 million.

And you're STILL not accounting for the fact that employing someone costs a lot more than just paying a salary. Which puts all estimates (mine and the author's) up.
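The back-of-envelope ceiling above, in runnable form (same assumptions as stated; a sanity check, not a model):

kernel_mloc, kernel_devs = 2.5, 300   # kernel as a 300-developer, 3-year task
distro_mloc, years, salary = 30, 3, 55_000

max_devs = distro_mloc / kernel_mloc * kernel_devs  # 3600 developers
max_cost = max_devs * years * salary                # $594,000,000
print(f"{max_devs:.0f} developers, max cost ${max_cost:,.0f}")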

as everybody knows : (2, Insightful)

stud9920 (236753) | more than 12 years ago | (#3827556)

<flamebait>
Linux is free (as in beer) if your time is worthless.
</flamebait>