
Kernel Benchmarks

michael posted more than 13 years ago | from the retrospective dept.


kitplane01 writes: "Three students and a professor from Northern Michigan University spent the semester benchmarking a bunch of Linux kernels, from 2.0.1 to 2.4.0. Many functions improved in speed, but some did not. Total lines of code have tripled, and are on an exponential growth curve. Read their results here."


Re:silly graphs (1)

Anonymous Coward | more than 13 years ago | (#233528)

Most of these graphs contain a curve labeled "Expon" or something (once again, great legend). Why exponential? Why not some polynomial or some other function? What is the error in the fit / the correlation coefficient(s)? Just tell me something that gives me a reason to believe that this curve means something.

It's a good thing Moore didn't have a pompous ass like you as an instructor, or he might have been too traumatized to make the observation that processing capacity doubles every 18 months.

`strings' strikes again (1)

Anonymous Coward | more than 13 years ago | (#233529)

http://euclid.nmu.edu/~benchmark/total_growth.gif [nmu.edu]

$ strings total_growth.gif | head -2
GIF89a
Software: Microsoft Office

Nice graphwork (3)

Anonymous Coward | more than 13 years ago | (#233530)

http://euclid.nmu.edu/~benchmark/null_call.gif [nmu.edu]

This shows why computer guys are not scientists. My first year phys chem prof would tear his own arm off and beat you to death with it if you gave him a graph that looked that ugly.

The Excel defaults may be ugly, but you can change them.

GCC optimizations and benchmarking (5)

Anonymous Coward | more than 13 years ago | (#233531)

One problem with benchmarking is the optimizations settings for GCC. GCC is very sensitive to the proper choice of optimizations. Several years ago I did an extensive test of GCC using the Byte benchmark suite. I experimented with the various optimizations settings. The most important were the settings of -malign-jumps -malign-loops and -malign-functions. These flags each take a numerical argument representing a power of 2 on which the object will be aligned.

Thus "0" indicates byte alignment, "1" word (16 bit) alignment, "2" doubleword (32 bit), "3" quadword (64 bit), and "4" paragraph (128 bit). The other optimization of interest is the "-O" setting. Here arguments can take the value of 0, 1, 2, or higher. Personally, I found that -O2 was not necessarily the best setting, although it seems very common to find it set to that in Makefiles. I found using -O1 and tuning the alignment optimizations by hand provided better results.

My findings from benchmarking all the combinations of settings were that for a Cyrix 5x86, optimal alignment values were numerically lower than might be expected. For example, close-to-optimal settings as I recall were:

gcc -O1 -m386 -malign-jumps=1 -malign-functions=1 -malign-loops=1
That wouldn't be a bad starting point for any Intel processor. On modern processors, it is more important to achieve high cache hit rates, which is thwarted by certain wrong optimizations such as aggressive loop unrolling and excessive alignment. One particular setting to avoid is -m486 for most processors other than a 486, because the 486 alignment requirements (which tend to over-align) are less than optimal for both its predecessors and descendants. And if you don't need a debugging version of your code, -fomit-frame-pointer is almost always useful, as it frees up an extra general-purpose register.
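For illustration, a rough sketch of what such a flag sweep might look like; the benchmark source and binary names here are placeholders, and the flag set mirrors the one described above:

#!/bin/sh
# Sketch: sweep the gcc 2.x alignment settings and time one benchmark run each.
# "bench.c" / "./bench" stand in for whatever workload you actually measure.
for a in 0 1 2 3 4; do
    gcc -O1 -m386 -malign-jumps=$a -malign-functions=$a -malign-loops=$a \
        -fomit-frame-pointer -o bench bench.c
    echo "alignment = $a"
    time ./bench > /dev/null
done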

Yeah, but... (1)

Wakko Warner (324) | more than 13 years ago | (#233532)

...between 2.0 and 2.4, mmap() got 40 times faster, so there's still a little room for improvement, I'd say...

I can DEFINITELY tell the difference between 2.2.x and 2.4.x -- 2.4 beats the hell out of 2.2.

- A.P.

--
Forget Napster. Why not really break the law?

Some analysis comments (2)

Trepidity (597) | more than 13 years ago | (#233533)

Except for the lines of code graph, I don't see how they justify fitting exponential curves to any of the other graphs. Since the resulting "exponential" curves that were fit are nearly straight lines there's really no basis for doing anything other than a linear fit.

They note that this was all run on the same hardware, but all that means is that the results are valid *for* that hardware. Some of the drastic changes in some areas might be due to, for example, the replacement of a generic driver with a specific driver optimized for one of the pieces of hardware they used. Obviously this change wouldn't carry over to all other systems.

All in all not bad, though. It would've been nice to see some more rigorous data analysis (the data analysis expected in a typical college freshman chem lab class is more extensive than this).

the lines they should have counted... (1)

peterjm (1865) | more than 13 years ago | (#233535)

here's how they counted the lines in the kernel.

We counted lines of code in all files that:
ended in ".c" or ".h"
were in one of the following directories:

arch drivers fs include init ipc kernel lib mm net


when it really should have just been something like.

(root@mustard)-(/dev/tty0)-(/usr/src/linux)
(Wed May 9)-(05:53pm) 19 # find . -name '*.[ch]' -exec egrep "< some terrible curse words >" {} \; -print | wc -l

yeah, that would'a worked.

Benchmarks are fun. (1)

Forge (2456) | more than 13 years ago | (#233536)

I'll be taking a little of my time to dig through this to see how many of the well hyped performance hacks actually work as advertised.

Too bad they do little detailed things like lines of code and stat() rather than how much RAM/CPU your dynamic web server needs to saturate a T1.

Still educational for the non-kernel hacker in any case.

Re:Signal handling - so what? (1)

osu-neko (2604) | more than 13 years ago | (#233537)

Naw, just have programs use SIGUSR1 for dot and SIGUSR2 for dash, and you can have programs use morse code for interprocess communications...

--

Re:How to lie with statistics (1)

osu-neko (2604) | more than 13 years ago | (#233538)

Why does the 'Linux Lover' use MSExcel for his plots??

Why not? Just because you love Linux doesn't mean you don't use anything else. Heck, doesn't mean you don't love anything else. I'm in a polyamorous relationship with both Linux and NetBSD... :)

--

What about older kernels? (1)

Bwah (3970) | more than 13 years ago | (#233539)

I would be interested in seeing the pre-2.0 kernels stuck in there too ... (not interested enough to dust off my TOWER of old CDs and start compiling though :-)

I heard from some people who were using 1.2.something in an embedded project that its context switch times were quite a bit better than the latest.

Anyone out there know how the older kernels stack up?

Re:I learned something (1)

PD (9577) | more than 13 years ago | (#233540)

Must be a snow cow from Michigan that modded me down...

Why can't you admit that it's boring up there! Come on, you know it is! All they talk about is how many feet of snow will be left on the ground when June comes around.

Re:I learned something (1)

PD (9577) | more than 13 years ago | (#233541)

Because I can.

Re:Yeah, but... (1)

PD (9577) | more than 13 years ago | (#233542)

How about this one? I'm logged in. I have the karma. I can do what I want. If I post at +2, there are just as many levels above me as there are below me, so set your damn threshold appropriately.

I learned something (2)

PD (9577) | more than 13 years ago | (#233543)

I knew it was boring way up in Northern Michigan, but until now, I never imagined just how boring it actually is. I guess in Manitoba they must be benchmarking DOS calls in various MS operating systems. I guess it beats watching caribou mate.

Re:Devices (2)

PD (9577) | more than 13 years ago | (#233544)

The performance that I care about is "do it work???" and the NE2000 cards give me no trouble at all. 3c509 cards are also sweet and trouble free.

Re:I learned something (2)

PD (9577) | more than 13 years ago | (#233545)

Nothing beats when caribou mate.

Except YOU maybe. heh heh.

Re:Yeah, but... (4)

PD (9577) | more than 13 years ago | (#233546)

That's a lot of work just to print out a negative number on your screen...

Re:GCC optimizations and benchmarking (5)

The Famous Brett Wat (12688) | more than 13 years ago | (#233548)

...which just goes to prove that optimization is (justifiably, as it happens) much -maligned.

How to lie with statistics (2)

MSG (12810) | more than 13 years ago | (#233549)

Rule #4:
One can have a graph of any shape that he wants by carefully choosing the axes.

features and drivers (2)

josepha48 (13953) | more than 13 years ago | (#233550)

It is not surprising, though. Consider the number of drivers and new features that have been introduced, as well as the S/390 architecture. There are numerous new drivers, as well as framebuffer support and better SMP. I am sure there is more. It would be interesting if the kernel developers had a debug feature that, when built in, reports the execution time of each function (not sure if they do), similar to Perl's Benchmark module.
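For what it's worth, kernels of this era do ship with a crude built-in profiler; as a hedged sketch, assuming the box was booted with the profile=2 parameter and util-linux's readprofile is installed:

# Sketch: coarse per-function kernel tick counts from the built-in profiler.
# Assumes the kernel was booted with "profile=2" and a matching System.map.
readprofile -m /boot/System.map | sort -nr | head -20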

I quote" Hardware compatibility is a large part of the growth."

I don't want a lot, I just want it all!
Flame away, I have a hose!

Re:I am Michael's raging bile duct (1)

ethereal (13958) | more than 13 years ago | (#233551)

I find it interesting that both Michael and Jamie McCarthy post stories on /. - you would think Michigan wouldn't be big enough for the both of them :)

Caution: contents may be quarrelsome and meticulous!

silly graphs (4)

rangek (16645) | more than 13 years ago | (#233555)

Silly graphs are a pet peeve of mine. I hate it when my students give me graphs like these. Needless gridlines, unlabeled legends, connected dots, and poor statistical analysis.

  • I hate gridlines; they usually distract from the graph.
  • What the fuck is "Series 1"? For Christ's sake, take a minute and either delete the needless legend or at least overwrite the stupid defaults to make them meaningful.
  • Connecting the dots means something. If you plot Linux 2.1.1 and Linux 2.1.14 and draw a line or some other curve between those points, you are telling me that if I pick up Linux 2.1.7 it will lie on that curve. That is not a correct interpretation of this data.
  • Most of these graphs contain a curve labeled "Expon" or something (once again, great legend). Why exponential? Why not some polynomial or some other function? What is the error in the fit / the correlation coefficient(s)? Just tell me something that gives me a reason to believe that this curve means something.

I also find it ironic that they used MS Excel (which they don't say they did, but it sure looks like it)...

Re:Page fault latency: in all of 2.2, or fixed? (1)

Petrus (17053) | more than 13 years ago | (#233556)

If they tested stable kernels, they would probably get only one big step with each major release, without explaining where it actually came from.

That's because stable kernels are mostly on the security maintenance and driver update path; they do not tinker with the scheduling, memory, signal, and disk I/O routines.

Plotting development kernels is actually more relevant.

Re:Another study: (1)

Petrus (17053) | more than 13 years ago | (#233557)

That study seems to show that the exponential nature, as well as the bulk of the code, comes mainly from drivers. Some subsystems, e.g. the FS, seem to actually have decreasing LOC.

In short, supported hardware grows exponentially.
Notice that if hardware driver development grew linearly, the cumulative amount of drivers would be quadratic. Since the rate of adding hardware drivers is probably a little bit faster than linear, the curve seems to be quadratic to exponential.

This is far from being a sign of bloat disease. This is actually quite healthy growth.

Wah. (1)

Mr. Piccolo (18045) | more than 13 years ago | (#233558)

Based on these numbers and the test I just ran, Linux 2.4.0 kicks FreeBSD 4.0-STABLE's ass all over the place in every category.

Sure, I only have a 400MHz K6-III vs. their 850 MHz Pentium III, but it's not like Linux does everything twice as fast; it's much worse than that.

This benchmark was not that useful (4)

cartman (18204) | more than 13 years ago | (#233559)

First, the university benchmarking team simply ran lmbench (a free, popular, old kernel benchmarking utility) on a variety of kernels. Claiming that:

Three students and a professor from Northern Michigan University spent the semester benchmarking a bunch of Linux kernels

...somewhat exaggerates this accomplishment.

Second, no data were presented on the main areas of the kernel that were improved. How is SMP performance in kernel space? Did the finer grained locks help? How is the performance from the threaded IP stack? Does it prevent IO blocking?

THAT kind of information would have been interesting. They tested only things that the kernel has done forever.

lmbench for BSD or Windows? (1)

cpeterso (19082) | more than 13 years ago | (#233560)

If lmbench is a standard benchmark, I wonder what the same tests run across FreeBSD 2/3/4 and Windows NT 3.51/4/2000 would show.

For those who are interested, here [bitmover.com] is the LMbench home page.
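As a rough idea of what running it by hand looks like, here is a sketch using a few of the micro-benchmark binaries the lmbench distribution builds; exact paths and arguments vary by version, so treat these as assumptions:

# Sketch: a few lmbench micro-benchmarks invoked by hand after building the suite.
cd lmbench/bin/i686-pc-linux-gnu     # hypothetical build output directory
./lat_syscall null                   # null system call latency
./lat_sig catch                      # signal handling latency
./lat_ctx -s 0 2 8 16                # context switch latency
./lat_pagefault /tmp/somefile        # page fault latency against an existing file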

Re:Quite limited really (1)

Black Parrot (19622) | more than 13 years ago | (#233563)

> Changing BIOS memory setting from CAS 2 to CAS 3 : 3.7% speedup.

Oops. Make the obvious correction.

--

Re:Quite limited really (3)

Black Parrot (19622) | more than 13 years ago | (#233564)

> I definitely noticed a jump in performance between 2.2.16 and 2.4.0 so they must be missing something here.

I use a "real world" benchmark (which of course might be completely irrelevant to you, however relevant it happens to be to me).

Here are some recent observations regarding this specific benchmark, ranked in order of effect:
  • Changing BIOS memory setting from CAS 2 to CAS 3 : 3.7% speedup.
  • Changing to a different brand motherboard, and matching the original's BIOS settings as well as possible : 2.1% speedup.
  • Upgrading 2.4.3 to 2.4.4 : 1.1% speedup.
  • Running under kernel compiled as "Athlon" rather than "i686" : no substantial difference.
Moreover, although I have not had time to test it, a well-informed friend tells me that using certain recent versions of gcc rather than certain older ones can give a whopping 30% slowdown, even using the same flags for compilation. (N.B. - He did not say "gcc is getting worse with time". He merely remarked re two specific versions, whose numbers escape me at the moment.)

If performance tuning is your forte, then clearly you've got your work cut out for you.
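If you want to reproduce this kind of comparison, a minimal sketch of a repeatable timing run, assuming GNU time; the workload command is a placeholder:

# Sketch: average several passes of whatever workload you actually care about.
sync
for i in 1 2 3 4 5; do
    /usr/bin/time -f "%e seconds" ./myworkload > /dev/null
done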

--

Re:Yeah, but... (3)

the eric conspiracy (20178) | more than 13 years ago | (#233565)

Over three years it's still positive.

Re:Yeah, but... (4)

the eric conspiracy (20178) | more than 13 years ago | (#233566)

Every evening I run a disk/memory intensive program that does a 3 year analysis of the US stock market. When moving from 2.2.x to 2.4.x I obtained a run time decrease from 270 to 190 seconds. This to me was a VERY impressive upgrade. The same code running on Win2000 takes 1300 seconds to run.

Re:Yeah, but... (4)

the eric conspiracy (20178) | more than 13 years ago | (#233567)

It's the same code running on the same box - a dual P2 400 with 0.5 GB of RAM. No ifdefs. Programs are invoked from the command line. Relatively small results datasets are saved to files. Because of the size of the input dataset and the crappy indexes, the main performance determinant is the efficiency of disk I/O and the buffering thereof.

For this application the 2.4 kernel kicks butt up and down the street all day. YMMV.

Another study: (3)

MrClean (23413) | more than 13 years ago | (#233568)

Another, more extensive Linux evolution study is at:
http://plg.uwaterloo.ca/~migod/papers/icsm00.pdf

Re:Aaaah! Exponential! (1)

Tower (37395) | more than 13 years ago | (#233573)

Don't forget about the added S/390 arch files, too...
--

Re:This is off topic as hell but.... (1)

QuantumG (50515) | more than 13 years ago | (#233575)

in any language it is impossible (except maybe on alt.sex.stories).

Re:This is off topic as hell but.... (2)

QuantumG (50515) | more than 13 years ago | (#233576)

The other side [vidomi.com] of the story is on their site.

Re:Devices (3)

CJ Hooknose (51258) | more than 13 years ago | (#233577)

What I wish is that hardware manufacturers would just use one standard interface, then only one driver for each device would be necessary. Impossible you say? Look at current modems, old sound cards (all sound blaster compatible), NE2000 network cards (I won't buy any other kinds) ATAPI CD-Roms....

Yeah, right. The problem with this approach is that it leads to unnecessarily narrow definitions of functionality, and can prevent hardware manufacturers from doing things cheaper. Not only that, but the examples you chose are kind of screwy. "Current modems" without a qualifier implies the N+1 varieties of WinModems out there, which all do things differently. Many old sound cards did things their own way and had a small DOS TSR that provided SB compatibility in software. The floppy, IDE, and ATAPI command sets, as well as the RS232 serial-port standards, are published and standardized, but these are properly communications protocols between devices, not the devices themselves. The PCI and ISA busses are, again, more like protocols to allow devices to communicate rather than devices themselves. I don't see too many non-PCI, non-ISA devices that plug into the insides of an x86.

Non-x86 hardware platforms have it easier; one vendor like Apple/Sun/IBM says, "This is the list of hardware that works on our platform," and you use it. The multitude of hardware vendors for x86 boards and devices has led to a large amount of conflicting standards and weird, proprietary hardware. (If a vendor can save $0.10 per unit on a device by leaving out hardware functions which can be replicated by a kludged binary driver, they will. Think WinModems.) This approach has also made x86 hardware cheaper than the alternatives.

Simply put, things will change and change quickly in hardware. Standards are a good idea, but they quickly become the lowest common denominator; think "VGA".

Re:Quite limited really (3)

norton_I (64015) | more than 13 years ago | (#233582)

2.4.0 has a dramatically improved mm system, most of the benefits of which don't show up on these tests, yet make a world of difference in real life.

RAM, base mem & extended mem. (2)

dizzy_p (66204) | more than 13 years ago | (#233583)

Why the h*** do they list both RAM and a combination of base mem & extended mem on their 'resources' page? It would have mattered if they had tested MS-DOS 6.2, but not Linux!!

Re:GCC optimizations and benchmarking (2)

levendis (67993) | more than 13 years ago | (#233584)

Is this stuff documented anywhere? Has anyone gone through and done a thorough analysis of when each gcc option is best used? Doing so might be very beneficial for Linux overall....

----

Re:answered my own question (1)

Baki (72515) | more than 13 years ago | (#233588)

This is pretty useless. It compares different machines, and vastly out-of-date versions (of FreeBSD, at least). Tonight I'll run a test of current Linux and FreeBSD on the same hardware (both vmware virtual machines running on the same physical box) and post some results.

Re:This benchmark was not that useful (3)

Baki (72515) | more than 13 years ago | (#233589)

Another thing making this benchmark useless is that it only tests Linux performance under no-load conditions (i.e. the benchmark is the only thing that runs); it doesn't tell anything about scalability and keeping up performance under heavy load.

And that is exactly the point that Linux is often criticized for, compared to competitors (Solaris, FreeBSD): it may perform well under no- or light-load conditions, but it doesn't scale well. It would have been interesting to check whether this criticism is still valid for the 2.4 kernels.
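A crude way to get at that difference is to rerun whatever benchmark you care about while the machine is deliberately kept busy; a rough sketch, with the benchmark command as a placeholder:

# Sketch: generate some background disk/CPU load, then time the benchmark under it.
for i in 1 2 3 4; do dd if=/dev/zero of=/tmp/load$i bs=1M count=512 & done
time ./benchmark
wait
rm -f /tmp/load*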

Pretty sloppy presentation. (3)

cananian (73735) | more than 13 years ago | (#233590)

This was really a pretty sloppy writeup. The "performance note" from Linus was linked a page too early, there were no convenient navigation links, and far too little effort was spent identifying the sources of the performance improvements found. In addition, "capabilities" are blamed for what was really the result of excess debugging printks, and at one point "kernel 2.1.92" was blamed (a convenient culprit) when, looking at the graph, it is obvious that kernel 2.1.*32* was the outlier.

I'm not impressed.

Re:Aaaah! Exponential! (1)

mabinogi (74033) | more than 13 years ago | (#233591)

Most of the growth is in the drivers...and that is a good thing.

Three Students and a Professor (2)

nihilogos (87025) | more than 13 years ago | (#233592)

I rented that video last week. Very racy.

gotta love the mis-wordings (1)

thelaw (100964) | more than 13 years ago | (#233593)

check out the quote on http://euclid.nmu.edu/~benchmark/index.php?page=null_call [nmu.edu]:

"As mentioned in our methodology section, this is due to a bug in the kernel code that lead to a feature freeze in subsequent kernels."

if a bug in the kernel code can cause a feature freeze, someone better debug the developers! :)

jon

Re:Signal handling - so what? (1)

notsoanonymouscoward (102492) | more than 13 years ago | (#233594)

It seems to me that any program firing off thousands of signals per second has a serious design flaw.

Does your brain have a serious design flaw?

Re:We'll beat Microsoft yet! (1)

-brazil- (111867) | more than 13 years ago | (#233596)

The original statement is bullshit. The LOC of the kernel have increased almost exclusively to provide oodles of device drivers and support for more architectures, not because of bloat in the core parts of the kernel. All of it just increases the size of the full source download, not of the final compiled binary.

Re:MS Graph? (1)

wizbit (122290) | more than 13 years ago | (#233597)

why the FUCK must everyone insist on political correctness in linux-related stories? the fact that microsoft exists and that people choose to use their products is NOT reason to just blindly post inflammatory criticisms of their methods. if i want to use some in-house graphing program that produces graphs identical to the ones displayed by MS Excel, should i avoid treading linux waters with my statistical analyses simply because i'm afraid of bullshit backlash? give me a break.

Re:Yeah, but... (1)

SClitheroe (132403) | more than 13 years ago | (#233598)

bet you're using Cygnus...be gone, troll!

Re:We'll beat Microsoft yet! (5)

joto (134244) | more than 13 years ago | (#233600)

So when will line count surpass Windows 2000?

Depending on point of view, that has already happened long ago...

To make the comparison meaningful, you have to get systems of somewhat equal capacity. The Linux kernel by itself is in no way comparable to Windows 2000.

In addition we need various file utilities, an accelerated X11 server (with Mesa/OpenGL, the video extension, and antialiasing), one of Gnome/KDE (file manager, basic desktop utilities, a simple text editor, something akin to COM (which would be Bonobo or KParts)), a working web browser (Mozilla or Konqueror), some user-friendly utilities to replace the control panel, a user-friendly email client and newsreader, a simple web server, basic networking utilities (Samba with a user-friendly network neighborhood browser, telnet, ftp, ping, ...), a good media player (capable of playing at least wav, mp3, CDs, mpeg, avi, mov, and preferably asf and wmf), minicom, a PPP dialer, and probably quite a few other goodies I've forgotten to mention.

If we put all this into a Linux distribution, I doubt we would do much better than W2k. But to make things even worse, that wouldn't make much of a Linux system. Most Linux users wouldn't be too happy without emacs, gcc and friends, perl, python, tcl/tk, and most of the common command-line utilities (sed, awk, find, etc.), and probably also apache, MySQL or PostgreSQL, gimp, etc.

Line-count? Well, guess what... Linux has become bloatware... Even more than what's produced in Redmond!

Re:Devices (1)

evilviper (135110) | more than 13 years ago | (#233601)

I have 10/100Base-T cards in multiple systems (full-duplex of course) and they perform just as well as my SiS900, 3com509, Realtek, and others.

Re:Devices (1)

evilviper (135110) | more than 13 years ago | (#233602)

My point is, don't buy the crap that they make proprietary just to save a buck. 99% of the time the cheaper one has lost some functionality or stability (e.g. WinModems). While it hasn't made a huge impact, people aren't buying WinModems as much as their hardware-based counterparts. Why? Because they've been told what's wrong with just picking the cheapest one. Now if we could do that with other types of hardware....

Re:You contradicted yourself there pal.. (1)

evilviper (135110) | more than 13 years ago | (#233603)

I was speaking of odd hardware, not odd implementations of hardware (e.g. data acquisition cards, video capture cards, MPEG boards, etc.).

Re:Devices (1)

evilviper (135110) | more than 13 years ago | (#233604)

i86 has long been touted as the standard because of its lack of proprietary hardware, as opposed to Apple. The problem is, the devices coming with the CPU and motherboard are just as proprietary as Apple's systems.

Besides... Apple only seems like it qualifies because there isn't much different hardware for it. It's not that all video cards use one driver, it's that there are only 2 video cards (exaggeration, I know). If Apple got popular, they would be in the same boat. At least if i86 set the precedent, other platforms could take over and not run into the same problems later.

Re:Devices (2)

evilviper (135110) | more than 13 years ago | (#233605)

You've just hit on the killer problem there. OS developers just take it for granted that they have to write drivers for every device out there. What I wish is that hardware manufacturers would just use one standard interface, then only one driver for each device would be necessary. Impossible you say? Look at current modems, old sound cards (all Sound Blaster compatible), NE2000 network cards (I won't buy any other kinds), ATAPI CD-ROMs (all recent ones are), floppy drives, and many more devices. If people would put their foot down and say 'I want compatibility' then driver problems on any device would be a distant memory, OSes would be far smaller, hardware would be truly interchangeable, and Windows wouldn't be the only option for those with exotic hardware.

The most important 'benchmark' (3)

big.ears (136789) | more than 13 years ago | (#233606)

The most important benchmark they showed was their charts--ugly products of Microsoft Excel. Even though a lot has changed in those 4.5 years, it's still easier to make your charts in Windows.

Re:The entire semester? (1)

Pakaran2 (138209) | more than 13 years ago | (#233607)

It's NOT all they spent the semester doing. I assume it was an independent study for the students; as such it might have been 2-4 hours of coursework, but not all they did the entire semester. If it were, that would indeed be ridiculous.

Re:Quite limited really (1)

Lozzer (141543) | more than 13 years ago | (#233609)

That old open source saw:

If you're interested in some results that no one appears to have produced, go produce them yourself. Don't criticise someone who has scratched their own itch.

Re:Pretty sloppy presentation. (1)

KidSock (150684) | more than 13 years ago | (#233610)


Good points. But numbers are numbers. And as long as they performed the benchmarks consistently across all kernels tested, these numbers should be useful. Besides, do you think a professor would put his best grad student on something like this?

Very Poor 2.2 Page Fault Latency (2)

KidSock (150684) | more than 13 years ago | (#233611)


According to this graph [nmu.edu], page fault latencies suck in kernel 2.2. Is this true? I think I'm running a 2.2.17-ac kernel, though, and if I'm just doing development and not causing swapping then it doesn't matter, right?

Re:Aaaah! Exponential! (1)

Sheetrock (152993) | more than 13 years ago | (#233612)

Isn't a large part of the growing Linux code base hardware support (drivers/alternate architectures)? The exponential increase in the number of lines of code in *.c/*.h files doesn't necessarily mean that Linux is bloatware; rather, I think that it's a result of better support for the hardware out there.

I'd worry more if vmlinuz and modules start to grow exponentially.
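That is easy enough to keep an eye on; a trivial sketch, with the usual default paths assumed:

# Sketch: watch the compiled kernel and module footprint instead of source LOC.
ls -l /boot/vmlinuz*
du -sh /lib/modules/*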

---

---

is that the sound of pigs flying? (1)

STREMF (156983) | more than 13 years ago | (#233613)

Yeah sure, let me know when Windows 2000 becomes open source, then we'll be able to figure that out.

The waffle mafia (1)

cr@ckwhore (165454) | more than 13 years ago | (#233614)

This article contains some excellent information! I think the information would be 100x more valuable if every released kernel version was documented... This kind of stuff is cool! Perhaps somebody is willing to do these benchmarks on each kernel release and publish the results at the time of each release.

Knowing the expected performance of a kernel before installing it may be quite handy.

Re:NMU is not a real university (1)

de Selby (167520) | more than 13 years ago | (#233615)

Are you from tech?

Some Info on NMU (1)

de Selby (167520) | more than 13 years ago | (#233616)

I'm a soph now and a CS major at Northern, so I can tell you a few things about it.

1. No one--student or faculty--can communicate. From the application to the degree it's ambiguity and confusion.
2. There is no research being done; at least nothing more than could be done in a good high school.
3. Computer Science is not a major focus.
4. It gives liberal a bad name. (If liberal isn't already a bad name.)
5. For those expecting quality education, it can only serve as a very cheap stepping stone to graduate school.

Randy Appleton was my advisor a semester back. Strange man.

Lines of Code (1)

zesnark (167803) | more than 13 years ago | (#233618)

While counting lines of code is all very well and good, if you really want any semblance of a measure of the kernel's complexity you've got to count just the core kernel and exclude drivers. Just because there are more drivers now doesn't mean that the kernel is inherently that much more complex or that much bigger (unless you build everything and put it into a static kernel...). z
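A crude sketch of that kind of split, counting the whole tree versus only the core-ish directories (which directories count as "core" is a judgement call, so treat the second list as an assumption):

cd /usr/src/linux
find . -name '*.[ch]' | xargs cat | wc -l                           # whole tree
find init kernel mm fs ipc lib -name '*.[ch]' | xargs cat | wc -l   # core only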

We'll beat Microsoft yet! (3)

Beowulfto (169354) | more than 13 years ago | (#233619)

Total lines of code have tripled, and are on an exponential growth curve.

So when will line count surpass Windows 2000?
----

Re:Yeah, but... (2)

j-pimp (177072) | more than 13 years ago | (#233620)

If it's the same code then it has nothing to do with his development skills. Most calculations of that nature are done using programs that read input from a text file, perform the calculation, and dump it to the screen or another file. That should be completely portable with no #ifdef __POSIX. Now, what could be to blame is the libraries that are being linked against.

MS Graph? (2)

Fervent (178271) | more than 13 years ago | (#233621)

Uh, maybe it's just me, but does anyone else think it's funny they used MS Graph (and presumably Excel) to draw the result graphs? You'd think they'd use StarOffice.

Re:GCC optimizations and benchmarking (1)

fatphil (181876) | more than 13 years ago | (#233623)

And I've always barfed at the line
-fomit-frame-pointer.

It just _reads_ badly when I see it.

FP.
--

Re:This benchmark was not that useful (2)

psyclone (187154) | more than 13 years ago | (#233624)

I agree that the benchmark was not very useful, but it was still interesting. However, testing only the "basics" of the kernel enabled them to show a long-term trend over several kernel versions.

Re:Why I would mod you down (1)

fons (190526) | more than 13 years ago | (#233625)

Do you see the problem? You can't discuss slashdot without some moron like you coming along and modding you down.

What I'm saying is that I would NOT mod you down if you didn't call me a moron, idiot or fuckwit, and just made your point.

The link above wasn't modded down by me, and I probably wouldn't mod it down. BUT I can see why it is modded down. It IS offtopic at first sight. If there had been an explanation as to WHY this offtopic post is justified, I WOULDN'T understand any modding down.

I don't mean to, but I'm probably pissing you off bigtime by writing this reply. But could you explain to me what the whole michael problem is? Because I'm not a native English speaker and I just don't get it all by just reading the link. Call me a moron (which, eh, you did), but I need someone to summarize it for me.

tnx!

Why I would mod you down (2)

fons (190526) | more than 13 years ago | (#233626)


Yesterday I modded some of these Michael related posts WAY down. Why?

1. Because they are often insulting, and I don't like to read lame insults on my slashdot.
If you make an offtopic comment about a delicate subject, it really doesn't help if you start insulting.
Just state your opinion calmly and have respect for other people. If you'd post like that I would mod it up. (But sadly I wasted all my points modding you down yesterday :-)

2. You also always post so mysteriously. Why? I still don't really understand what all the fuss is about. And that's also really irritating. So would you please explain thoroughly what the problem is. Only if we all know what the problem is can we solve it.

So please post something objective and insightful about this, so we can discuss and solve the whole thing. If you keep posting like this you will only get modded down > get frustrated > post more insults > ...

Kernel Compilation for Performance (2)

PSUdaemon (204822) | more than 13 years ago | (#233627)

I've read that the kernel team has recommended egcs 1.1.2 as an alternative to gcc 2.95.2 for compiling the 2.4.0 kernel. How much effect does that have on the performance of an OS?

Is it worth the trouble?
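It is easy enough to measure for yourself, since make lets you override the compiler on the command line; a sketch, where the compiler binary names are only examples for whatever egcs/gcc builds you have installed:

cd /usr/src/linux
make dep && make clean
make CC=egcs bzImage        # build with egcs 1.1.2
make clean
make CC=gcc-2.95 bzImage    # rebuild with gcc 2.95.x, then benchmark both kernels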

Re:exponential growth curve? (1)

angry old man (211217) | more than 13 years ago | (#233628)

Somebody studied their Calculus I. Do you have a final exam this week, you young whipper-snapper?

Re:silly graphs (1)

baywulf (214371) | more than 13 years ago | (#233629)

"I also find it ironic that they used MS Excel (which they don't say they did, but it sure looks like it)... " I'm not so sure if they did it in Excel or not. I know from experience you can get pretty close to Excel appearence if you do it in Star Office.

Don't know what the benchmarkers were smokin'... (1)

drumsetdrummer (237047) | more than 13 years ago | (#233630)

...but I've noticed a performance gain when I compiled the 2.4.X kernels on my old Pentium I. Before I was using 2.2.18. This is a Red Hat 6.2 system. Apache & PHP under 2.4.X are definitely faster than with the various 2.2.X versions I compiled.

Are there any real benchmarks out there that compare the different kernels?
--

Re:Linux Summarized Nicely (2)

einhverfr (238914) | more than 13 years ago | (#233631)

Awww....

Then they are saying that it will take twice as long for Linux to tell my apps that I have ordered them killed.... (-1) so maybe that extra 1.5 microseconds might prevent a -9 switch.

Page fault latency: in all of 2.2, or fixed? (3)

rknop (240417) | more than 13 years ago | (#233632)

One thing that I wonder about: that huge performance hit on the page fault latency shown in 2.2.6. Is it still there as of 2.2.19? Did the fix make its way back into the 2.2 series, or is it only fixed as of the later 2.3's and the 2.4 series? 2.2.6 is the only 2.2 in their study, so the study doesn't answer the question.

-Rob

dont forget: (1)

ConsumedByTV (243497) | more than 13 years ago | (#233633)

Remember in your notes to say that you were testing (insert OS to be tested) on a (insert emulated chipset) on (insert real chipset) inside of (insert host OS).... etc.


Are you on the Sfglj [sfgoth.com] (SF-Goth EMail Junkies List) ?

Aaaah! Exponential! (1)

TheSHAD0W (258774) | more than 13 years ago | (#233637)

Exponential growth of program code always alarms me. Nothing worse than feeping creaturism is my belief. But don't be too alarmed; so long as the doubling rate is lower than Moore's Law (18-24 months, depending on Moore's mood), you'll still have an OS that is more efficient on newer hardware. The only worry is if the code base becomes so large no human can handle it.

Or has that already taken place? :-P

Converging in the Cauchy sense. (2)

refactored (260886) | more than 13 years ago | (#233638)

The results give a feeling that Linux is converging in the Cauchy sense.

i.e. there is not much fat left to trim...

Therefore the next dramatic improvements, if they are to come, will not be from tweaking this part or that part of the kernel, but rather from implementing entirely new classes of functionality.

i.e. Linux has arrived. It's settled down; time for it to start exploring as-yet-unimagined new things to do instead of new ways to do old things.

The future will be, umm, fun.

This post is not designed or intended for use in on-line control of aircraft, air traffic, aircraft navigation or aircraft communications; or in the design, construction, operation or maintenance of any nuclear facility.

Re:Signal handling - so what? (1)

Superkind (261908) | more than 13 years ago | (#233639)

Naw, just have programs use SIGUSR1 for dot and SIGUSR2 for dash, and you can have programs use morse code for interprocess communications...

SIGUSR2 SIGUSR1 SIGUSR2 SIGUSR2
SIGUSR2 SIGUSR2 SIGUSR2
SIGUSR1 SIGUSR1 SIGUSR2

SIGUSR1 SIGUSR1 SIGUSR1
SIGUSR1 SIGUSR1 SIGUSR2
SIGUSR2 SIGUSR1 SIGUSR2 SIGUSR1
SIGUSR2 SIGUSR1 SIGUSR2
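A toy sketch of the receiving end of that joke in shell, for anyone who wants to try it: run it, note the pid it prints, and send it SIGUSR1/SIGUSR2 from another terminal with kill.

#!/bin/sh
# Toy sketch: print a dot for each SIGUSR1 and a dash for each SIGUSR2 received.
trap 'printf .' USR1
trap 'printf %s -' USR2
echo "morse receiver running as pid $$"
while true; do sleep 1; done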

More lines of code...so what? (1)

EllisDees (268037) | more than 13 years ago | (#233640)

The article says that most of the increase in lines of code came in the form of device drivers. How is this news? Linux supports a much wider variety of devices in 2.4 than in 2.2. Would you expect fewer lines for more device support?

Re:Yeah, but... (1)

Ayende Rahien (309542) | more than 13 years ago | (#233641)

How did you compile it on both systems?

You contradicted yourself there pal.. (1)

Breakfast Pants (323698) | more than 13 years ago | (#233646)

"What I wish is that hardware manufacturers would just use one standard interface"

"and Windows wouldn't be the only option for those with exotic hardware"

Uhh I don't think there would be any exotic hardware if they all followed rigid standards.

--

Re:The most important 'benchmark' (2)

Canonymous Howard (325660) | more than 13 years ago | (#233647)

So what? Are you suggesting their conclusions are somehow invalid because they don't use a Linux-based system to draw charts?

Reread his post. He's not suggesting anything of the sort. He's suggesting that a) many people still find it easier to use Windows than Linux, and b) that's a more important benchmark than speed.

An on-going study would be really useful. (3)

Lethyos (408045) | more than 13 years ago | (#233648)

It would be nice to see updates to the data here as new versions of the kernel are released. For example, some users are not particularly concerned with newer versions of the kernel unless there are significant improvements. Consider this example: you're concerned mostly with performance aspects of the kernel. A new version is released that shows no improvement (or a decrease) in performance. No sense in upgrading immediately (of course, you may be one of those people who actually looks for and reports bugs); you can wait until you see a downward trend in the graph before taking the time. There are other potential uses for "live" data such as this. I think it'd be nice if these guys would keep maintaining it. :)

Quite limited really (4)

Professor J Frink (412307) | more than 13 years ago | (#233649)

Where are the results for IDE/SCSI transfer rates/latency?

Where are the results for networking?

I definitely noticed a jump in performance between 2.2.16 and 2.4.0 so they must be missing something here.

They note the large increase in hardware support, but don't seem to realise that this new and improved support has given Linux much more performance than their benchmarks might show.

Maybe the improvements in X etc have helped but no real performance difference between 2.1.38 and 2.4.0? Put any such machines through real world work and you'll soon spot the difference...

Re:Linux Summarized Nicely (1)

Publicus (415536) | more than 13 years ago | (#233650)

I know, I got snagged by a troll. But I have one question -

For what other reason would you make an explicit request to the OS? At least when my Linux app hangs because of a divide by 0 or whatever I don't have to reboot and lose unsaved info in other apps.

Keep dreaming AC, someday the Linux/Windows battle will be settled. When it is, I'll either be using Linux or paper.

Re:Very Poor 2.2 Page Fault Latency (1)

benjamindees (441808) | more than 13 years ago | (#233651)

This seems to be right. That was the first thing I noticed when I upgraded to 2.4 from 2.2.16: programs load about 100 times faster. With all the bugs already in RH7 (linuxconf can't even set the damn default gateway correctly; even Windows can consistently get this right), I wonder why they didn't just go ahead and throw in 2.4 as the default. As for the researchers, these people did some interesting work, but they came up with the wrong conclusions. 2.4 is much, much faster than 2.2, precisely for this reason. It makes for a much better desktop system, even if maybe slightly slower for a constantly-running machine.

Signal handling - so what? (1)

shawnmchorse (442605) | more than 13 years ago | (#233652)

So far as I see, all areas of Linux performance that they tested steadily improved over time with the singular exception of signal handling. But isn't this at least partially the goal? You optimize the performance of commonly executed code (e.g. context switching) at the expense of code that is not executed as often (e.g. signal handling). It seems to me that any program firing off thousands of signals per second has a serious design flaw.

Re:How to lie with statistics (1)

warmiak (444024) | more than 13 years ago | (#233653)

Because it works well ?

Re:GCC optimizations and benchmarking (1)

anonymous cupboard (446159) | more than 13 years ago | (#233655)

The Linux Kernel is very good for GCC exercises. There is often a close linkage between a given kernel version, the GCC version that can compile it and the options used.

If you change the options, bits of the kernel can and do sometimes break with the version of GCC used.

Playing with the optimisations is therefore a separate issue to the performance of a given kernel. However, it is an interesting exercise.

Re:We'll beat Microsoft yet! (1)

anonymous cupboard (446159) | more than 13 years ago | (#233656)

LOL, but think of it this way. Win2K is around 30 million lines of code (I wonder how that is measured, executable statements or what?).

If Linus got every man, woman and child in his native Finland to each write six lines of code, he would be able to reach Win 2K levels....

More seriously, the size of the code base is not a problem when you are discussing things like drivers and the range of hardware supported.

Re:exponential growth curve? (1)

danyelf (449491) | more than 13 years ago | (#233658)

Saying that it is currently on an exp. curve doesn't mean that it will someday take over the universe. If the portion of their curve under examination matches an exponential curve, this is a fact. But I like the statistics!

More info on the growth of linux (2)

migod (450880) | more than 13 years ago | (#233661)

For those of you who were interested in the "exponential growth" issue, I did a much more detailed study on the growth of the Linux kernel that was published in the 2000 Intl Conference on Software Maintenance. I think it's very readable by non-academics. Comments welcome. -- MWG http://plg.uwaterloo.ca/~migod/papers/icsm00.pdf [uwaterloo.ca]