Linux Kernel Gets Fully Automated Test

CmdrTaco posted more than 8 years ago | from the just-like-a-real-project dept.


An anonymous reader writes "The Linux Kernel is now getting automatically tested within 15 minutes of a new version being released, across a variety of hardware, and the results are being published for all to see. Martin Bligh announced this yesterday, running on top of IBM's internal test automation system. Maybe this will enable the kernel developers to keep up with the 2.6 kernel's rapid pace of change. Looks like it caught one new problem with last night's build already ..."

159 comments

Would be nice (-1, Redundant)

Anonymous Coward | more than 8 years ago | (#12729383)

Wouldn't it be nice if Linux had an automated test platform for the kernel?

now all we need is automated.... (4, Funny)

3seas (184403) | more than 8 years ago | (#12729387)

code generation...

Re:now all we need is automated.... (1)

caluml (551744) | more than 8 years ago | (#12729509)

Actually, that could be done, could it not? Throw in some random functions, if/while/do loops, return random variables, etc. It could create some funky new software. :)

Re:now all we need is automated.... (4, Interesting)

Curtman (556920) | more than 8 years ago | (#12729592)

Actually, that could be done, could it not?

Apparently it works for Samba [samba.org]. :)

Re:now all we need is automated.... (1)

petermgreen (876956) | more than 8 years ago | (#12729888)

Code generation is good for repetitive stuff, especially if your language doesn't have much in the way of a built-in preprocessor.

Say, for example, producing similar load-on-demand wrappers for a whole load of functions in a dynamic library (a sketch of one such wrapper follows below).

P.S. /. seems to be restricting me to one post every 15 mins right now, dunno why. The error says "Slashdot requires you to wait 2 minutes between each successful posting of a comment to allow everyone a fair chance at posting a comment. It's been 14 minutes since you last successfully posted a comment."
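A minimal sketch of the load-on-demand wrapper idea mentioned above, assuming a POSIX system with dlopen/dlsym (link with -ldl); the library libfoo.so and the function foo_add are hypothetical, purely for illustration. The wrapper body is identical for every function, so a generator (or, as here, the C preprocessor) can stamp out dozens of them:

#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical library name, purely for illustration. */
#define LIB_NAME "libfoo.so"

static void *lib_handle; /* opened on first use */

/* Look up a symbol, loading the library on demand the first time. */
static void *get_symbol(const char *name)
{
    if (!lib_handle) {
        lib_handle = dlopen(LIB_NAME, RTLD_LAZY);
        if (!lib_handle) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            exit(EXIT_FAILURE);
        }
    }
    return dlsym(lib_handle, name);
}

/* One macro stamps out a load-on-demand wrapper per function; a code
 * generator would emit the same boilerplate for each entry point. */
#define LAZY_WRAPPER(ret, name, args, call_args)        \
    ret name args                                       \
    {                                                   \
        static ret (*real) args;                        \
        if (!real)                                      \
            real = (ret (*) args) get_symbol(#name);    \
        return real call_args;                          \
    }

/* Example: wrap a hypothetical int foo_add(int, int) from libfoo.so. */
LAZY_WRAPPER(int, foo_add, (int a, int b), (a, b))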

Re:now all we need is automated.... (1)

Curtman (556920) | more than 8 years ago | (#12730005)

code generation is good for repetitive stuff especially if your language doesn't have much in the way of a built in preprocessor

There's a fair bit of repetitive code in the kernel. I had to do some hacking to make some RS-422 cards we had work properly, and found that a lot of the char drivers especially contain very similar code and structure. Code generation might help with older drivers that nobody cares about until they break. They tend to rot from the looks of things.

NOT FUNNY: Chinese Military Software (0)

Anonymous Coward | more than 8 years ago | (#12729527)

We should consider keeping the automated test technology under the label "top secret". The Chinese are aggressively trying to modernize their military [phrusa.org]. Computer software for coordinating the various pieces of battlefield hardware is integral to Chinese plans for the next generation of warfare.

Re:NOT FUNNY: Chinese Military Software (0)

Anonymous Coward | more than 8 years ago | (#12729745)

Let me guess... You're American?

Re:NOT FUNNY: Chinese Military Software (0, Offtopic)

dhakbar (783117) | more than 8 years ago | (#12730012)

Um, he's the well known "phrusa troll" that usually only posts in China-related stories.

Sometimes, though, he interjects his posts into unrelated articles.

Don't sweat it (0)

Anonymous Coward | more than 8 years ago | (#12730044)

GWB has been selling all sorts of technology to the chinese. This will end up in their hands so that halliburton can make a buck.

Re:now all we need is automated.... (2, Insightful)

maxwell demon (590494) | more than 8 years ago | (#12729573)

No problem. The following is an automated code generator. It generates a hello world program in C and writes it to stdout. (untested)
#include <stdio.h>

int main()
{
    /* format patterns for the pieces of the generated program */
    char const* program_pattern = "%s%s";
    char const* include_pattern = "#include <%s>\n";
    char const* function_declaration_pattern = "int %s(%s)";
    char const* function_definition_pattern = "%s\n{\n    %s;\n}\n";
    char const* print_pattern = "printf(%s)";
    char const* string_pattern = "\"%s\"";

    /* the concrete bits to plug into the patterns */
    char const* stdio_header_name = "stdio.h";
    char const* main_function_name = "main";
    char const* main_arguments = ""; /* we don't read command line arguments */
    char const* output_string = "hello world!";

    /* buffers sized to hold the generated fragments, including the NUL */
    char string[16];
    char print[32];
    char main_decl[16];
    char include[32];
    char main_func[64];

    sprintf(string, string_pattern, output_string);
    sprintf(print, print_pattern, string);
    sprintf(main_decl, function_declaration_pattern, main_function_name, main_arguments);
    sprintf(main_func, function_definition_pattern, main_decl, print);
    sprintf(include, include_pattern, stdio_header_name);

    /* emit the assembled program on stdout */
    printf(program_pattern, include, main_func);
    return 0;
}

Re:now all we need is automated.... (1, Insightful)

Anonymous Coward | more than 8 years ago | (#12729716)

It's called Lisp ;-)

Simple. (0)

Anonymous Coward | more than 8 years ago | (#12730070)

Just get an MS virus to work on Linux. Lots of code generation that way.

Re:now all we need is automated.... (1)

MemoryDragon (544441) | more than 8 years ago | (#12730322)

What's so funny about it? Code generation is used left and right in modern projects; this stuff is great for shifting the groundwork away from the developers and not having to go into outsourcing hell.

Why has it taken so long? (1, Redundant)

Beatlebum (213957) | more than 8 years ago | (#12729390)

Why has it taken so long?

Re:Why has it taken so long? (2, Insightful)

Anonymous Coward | more than 8 years ago | (#12729414)

Bitkeeper.

Re:Why has it taken so long? (0, Insightful)

Anonymous Coward | more than 8 years ago | (#12729454)

How many tests have your written? That's why.

Re:Why has it taken so long? (1, Funny)

Anonymous Coward | more than 8 years ago | (#12729479)

How many tests has his what written?

Re:Why has it taken so long? (3, Informative)

teh_cn (887491) | more than 8 years ago | (#12729588)

Mod me troll, but (Free)BSD has had this for years, and not only for the kernel but for world, too.

Re:Why has it taken so long? (0)

Anonymous Coward | more than 8 years ago | (#12730168)


OpenSolaris will have had this for years, too.

Re:Why has it taken so long? (1)

ikewillis (586793) | more than 8 years ago | (#12730268)

Good question, especially considering FreeBSD Tinderbox [sentex.ca] has been doing this sort of thing for years, and not just with the kernel but with the entire base system.

Within 15 Minutes? WTF (1, Insightful)

LCookie (685814) | more than 8 years ago | (#12729394)

"The Linux Kernel is now getting automatically tested within 15 minutes of a new version being released"

It would be much better to test it BEFORE a new version is released; otherwise this is completely useless...

Re:Within 15 Minutes? WTF (0)

Anonymous Coward | more than 8 years ago | (#12729398)

Why do you think they call it the 'bleeding edge'? Look at all that red.

Re:Within 15 Minutes? WTF (1)

Thing 1 (178996) | more than 8 years ago | (#12729424)

Great idea. You should ask IBM to integrate their test platform into Linus' processes. He might be dubious after BitKeeper (that idiot) about another company helping him, but in this case I think it's a great idea.

There may be (and probably are) other test beds out there, testing releases. It would be better for Linus (and the world) if he could release already-tested code to the world, instead of having the world duplicate all the testing effort, and IBM seems like a perfect solution.

Re:Within 15 Minutes? WTF (5, Informative)

oxfletch (108699) | more than 8 years ago | (#12729463)

I automatically test every nightly -git snapshot release, so it's fairly well tied in anyway. This also means my heaviest usage of our machines is at night, when most of the (US) developers are asleep.

So it's fairly well tied in already ... and the whole -rc cycle should enable us to catch a lot of stuff.

Re:Within 15 Minutes? WTF (1)

netdur (816698) | more than 8 years ago | (#12729793)

> when most of the (US) developers are asleep
As far as I know, (US) developers sleep during the night time in... China

Re:Within 15 Minutes? WTF (1)

DegeneratePR (889051) | more than 8 years ago | (#12729593)

In any case, most people, especially in mission-critical processes, don't compile a new kernel as soon as it's released. Myself, I try kernels after a while, when no major issues are found. Even then, I test them out first in different test machines. So 15 minutes before, 15 minutes after, it's all the same.

Re:Within 15 Minutes? WTF (3, Insightful)

DigiShaman (671371) | more than 8 years ago | (#12729432)

Sounds like the solution to this problem is clear. Always use the second-to-latest kernel released. Stay away from the new one until it's fully tested to your satisfaction.

Re:Within 15 Minutes? WTF (2, Informative)

ideut (240078) | more than 8 years ago | (#12729630)

Which would mean, for the last several 2.6.x releases, that you are always using a version with a known root hole in it. Here's an idea: use your vendor's QA-tested kernel that they package for your distribution.

Re:Within 15 Minutes? WTF (3, Insightful)

digitalunity (19107) | more than 8 years ago | (#12730431)

Ummm...

If everyone did this, the newest kernels would never get tested. I think it is important that we have a diverse range of users using new, almost new, and older but well tested kernels.

Re:Within 15 Minutes? WTF (3, Insightful)

doshell (757915) | more than 8 years ago | (#12729460)

"Release" in the open source world has a broader sense than in commercial software. In open source not all "released" versions are meant for general public consumption; they include unstable versions targeted mostly at developers, so that severe isues can be detected and patched quickly.

Taking this into account, I believe this is meant to catch bugs mainly in nightly (unstable) builds and release candidates, not in "final" versions (those should, at least in theory, have no serious bugs left around as the latter have already been eradicated from release candidates).

Re:Within 15 Minutes? WTF (0)

Anonymous Coward | more than 8 years ago | (#12729474)

A lot of the 'releases' are also point releases and -footree releases. Most of what's tested is probably development kernels anyway; before or after doesn't completely matter in those cases.

Same goes for -rc and -bk releases (though I guess neither really exists anymore).

Think of it this way: x.y.z is tested by all the x.y.z-rc releases coming before it.

Re:Within 15 Minutes? WTF (5, Informative)

Metteyya (790458) | more than 8 years ago | (#12729489)

Because they are nightly builds, that is, versions with the patch applied but not yet tested.

Wait a minute... (2)

RoLi (141856) | more than 8 years ago | (#12730055)

So let me summarize whether I understood it right:

You say it's "completely useless" because you have to wait 15 minutes when a kernel is released.

And this is modded "insightful".

Question: (4, Interesting)

bogaboga (793279) | more than 8 years ago | (#12729396)

How were the previous kernels being tested? Were sources for improvement/change/modification, bugs and areas requiring refactoring being discovered by chance?

Re:Question: (3, Informative)

Anonymous Coward | more than 8 years ago | (#12729421)

" How were the previous kernels being tested?"
Hey guys, new kernel is out, bang away at it and let me know what you think.

Re:Question: (0)

Anonymous Coward | more than 8 years ago | (#12729537)

" How were the previous kernels being tested?"

You basically use the bleeding edge kernel, while doing other stuff on your computer.

Re:Question: (1)

ignorant_coward (883188) | more than 8 years ago | (#12730156)


After a new kernel was released, power meters in mothers' basements everywhere saw a little blip. Add up all these blips, and you get a (somewhat) tested kernel.

How much testing? (2, Interesting)

anthony_dipierro (543308) | more than 8 years ago | (#12729400)

This is good, and long overdue (I'm surprised it hasn't been around for years), but just how much testing is being done? Compiling? Booting? Or are there actual functional and reliability tests which are being performed?

Re:How much testing? (5, Informative)

oxfletch (108699) | more than 8 years ago | (#12729483)

Compiles, boots, runs dbench, tbench, kernbench, reaim, fsx. If one test fails, it'll highlight it in yellow, rather than green or red. I have a few of those in the internal tests, but not the external set.

This is only the tip of the iceberg as to what can be done. We're already running LTP, etc. internally, and several other tests. Some have licensing restrictions on results release (SPEC) ... LTP is a pain because some tests always fail, and I have to work out the differential against the baseline. That will come later.
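For illustration only, a minimal sketch in C of that kind of baseline differential: it reads two results files with hypothetical "testname PASS/FAIL" lines (the real LTP output and harness formats differ) and flags only tests that passed in the baseline but fail in the current run, so tests that always fail are ignored.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_TESTS 1024
#define NAME_LEN  128

/* One record per test case: name plus pass/fail flag. */
struct result { char name[NAME_LEN]; int passed; };

/* Load a results file with lines like "testname PASS" / "testname FAIL"
 * (hypothetical format, just for the sketch). */
static int load(const char *path, struct result *r, int max)
{
    FILE *f = fopen(path, "r");
    char status[16];
    int n = 0;
    if (!f) { perror(path); exit(EXIT_FAILURE); }
    while (n < max && fscanf(f, "%127s %15s", r[n].name, status) == 2) {
        r[n].passed = (strcmp(status, "PASS") == 0);
        n++;
    }
    fclose(f);
    return n;
}

int main(int argc, char **argv)
{
    static struct result base[MAX_TESTS], cur[MAX_TESTS];
    int nb, nc, i, j;

    if (argc != 3) {
        fprintf(stderr, "usage: %s baseline.txt current.txt\n", argv[0]);
        return 1;
    }
    nb = load(argv[1], base, MAX_TESTS);
    nc = load(argv[2], cur, MAX_TESTS);

    /* A regression is a test that passed in the baseline but fails now;
     * tests that already failed in the baseline are ignored. */
    for (i = 0; i < nc; i++) {
        if (cur[i].passed)
            continue;
        for (j = 0; j < nb; j++) {
            if (strcmp(cur[i].name, base[j].name) == 0 && base[j].passed) {
                printf("REGRESSION: %s\n", cur[i].name);
                break;
            }
        }
    }
    return 0;
}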

Yikes! (0)

Anonymous Coward | more than 8 years ago | (#12729738)

Compiling is considered a test?

Re:Yikes! (0)

Anonymous Coward | more than 8 years ago | (#12729766)

I remember when "It compiles. Ship it!" was a joke about Microsoft. Now it's codified as the Open Source Development Methodology.

Re:Yikes! (1)

anthony_dipierro (543308) | more than 8 years ago | (#12730484)

Testing if it compiles on a certain platform is a test, yes. It's by no means the sole test which is sufficient to ship a product, but I'd definitely call it a test.

What took so long (3, Interesting)

Timesprout (579035) | more than 8 years ago | (#12729405)

Most projects of any complexity use automated continuous build and testing as a standard development practice.

Presumably... (4, Insightful)

Kjella (173770) | more than 8 years ago | (#12729433)

...the cross-platform, cross-hardware part? Setting up one machine to build automatically is easy. Setting up a whole bunch of them (all unique, read: administration nightmare) and tying them together into a system, that's quite a bit of work.

Kjella

Re:Presumably... (5, Informative)

oxfletch (108699) | more than 8 years ago | (#12729514)

Indeed. The automation system I wrote is just a wrapper around an internal harness called ABAT that has a massive amount of work behind it. If systems crash it can detect that, power cycle them, etc.

Going from 90% working to 99.9% working is frigging hard. I had all this working 3-6 months ago, but the results weren't good enough quality to be published. Several people internally put a massive amount of work into improving the quality and stability of the harness.

Re:Presumably... (0)

Anonymous Coward | more than 8 years ago | (#12729783)

so um... you're getting paid to devise more effective computer BDSM techniques. where do i sign up?

"improving the quality and stability of the harness." ... was leather involved?

Re:Presumably... (1)

TCM (130219) | more than 8 years ago | (#12729857)

...the cross-platform, cross-hardware part?

It's magic [netbsd.org]! A single script and I can build a complete operating system for a big-endian 64bit architecture on a 32bit little-endian architecture, or any of the other 48 supported archs. More than that, I can build a complete NetBSD for any arch on any halfway POSIXish system.

build.sh bootstraps its own contained build utils (compiler, binutils et al) and builds the system with that. You can even build the complete system as non-root and get tarballs that you can use to install a complete system.

To think that my weak SPARCstation 2 should build its own system would be madness. Instead, I just run build.sh -m sparc distribution on a dumb, fast i386 box and have a world in an hour instead of week(s).

Oh, and NetBSD "feels" the same on any arch, no administration nightmare, no matter what arch you run it on.

Just FYI.

Re:Presumably... (1)

duffbeer703 (177751) | more than 8 years ago | (#12730103)

We've been playing with some IBM tools at work that automate server setup and provisioning... it's pretty amazing stuff.

You can basically retask servers in something like 10-60 minutes depending on what you are doing, and it's a completely automatic process.

Maybe... (3, Interesting)

ratta (760424) | more than 8 years ago | (#12729406)

automated performance regression tests may be useful too.

Re:Maybe... (5, Informative)

oxfletch (108699) | more than 8 years ago | (#12729552)

The results are all there if anyone wants to play with them. Go to the results matrix, and click on the numerical part of the green box. Pick a test, and drill down to the results directory.

The numbers are there, it's just a question of drawing graphs, etc. I have some for kernbench already, but I'm not finished automating them. If anyone wants to email me code to generate them from the directory structure published there, feel free ;-) Preferably python or perl into gnuplot.

Re:Maybe... (1)

Nutria (679911) | more than 8 years ago | (#12730201)

Instead of just reading a bunch of complaints, let me be 1 Slashdotter to thank you for your efforts.

It's too bad the Stanford Checker can't be integrated into your system.

This is awesome (5, Insightful)

jnelson4765 (845296) | more than 8 years ago | (#12729410)

But it can't catch everything - the 1394 bus was screwed in 2.6.11. There are a lot of regressions that show up - and even that healthy cluster of systems will not show every problem.

Sound issues? Older network and SCSI cards? There are a lot of drivers that break, and no one notices it because there is nobody with the hardware testing the -rc or -mm kernels.

Wouldn't it make more sense to package these tools for someone to install on their collection of oddball equipment, and assist in the debugging/testing?

Where's the ARM, MIPS, and SH?

Re:This is awesome (5, Insightful)

Meshach (578918) | more than 8 years ago | (#12729520)

But it can't catch everything...
But that is not the point of automated testing. As a member of a QA team who is developing automated tests, I get comments like that every day.

Automated tests are not intended to catch everything or test strange permutations of pre-conditions. Their purpose is to provide a mechanism for verifying that a build satisfies the basic requirements of the project.

More exotic configs need to be tested manually as usual, but automated tests can provide a "failsafe" just in case a basic part of the build is broken.

Furthermore, it prevents regressions (3, Insightful)

xant (99438) | more than 8 years ago | (#12730389)

Reliable, repeatable testing is a great way to prevent fixes in one area from causing bugs in another. When I fix A, I generally only test A manually. I don't test every other conceivable code path, even though my fix for A might well impact them.

An automated test for B will catch regressions caused by my fix in A, making it harder to backslide. Backsliding is very expensive because bugs are far removed from their cause. If an automated test sees that changes in A caused a regression in B, the cause is immediately obvious.
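A tiny sketch of that point, with hypothetical names (nothing here is from the kernel tree): feature A and feature B share a helper, and it is the automated test for B that catches a "fix" to the helper made for A's benefit.

#include <assert.h>
#include <string.h>

/* Hypothetical shared helper used by both feature A and feature B. */
static int parse_size(const char *s)
{
    /* Suppose a "fix" for A changes this to return 0 for an empty string
     * instead of -1; A's manual test still passes, but B relies on -1
     * meaning "not set". */
    if (s == NULL || *s == '\0')
        return -1;
    return (int)strlen(s);   /* stand-in for real parsing */
}

static int feature_a(const char *s) { return parse_size(s) >= 0; }
static int feature_b(const char *s) { return parse_size(s) == -1 ? 4096 : parse_size(s); }

/* Automated tests: the B test is the one that catches a regression
 * introduced while "fixing" parse_size for A's benefit. */
static void test_feature_a(void) { assert(feature_a("128") == 1); }
static void test_feature_b(void) { assert(feature_b("") == 4096); }

int main(void)
{
    test_feature_a();
    test_feature_b();
    return 0;
}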

Re:This is awesome (1)

zappepcs (820751) | more than 8 years ago | (#12729551)

I agree with jnelson4765; new builds would be well served by being tested on a great many machines with a wide variety of hardware setups.

Who should map the hardware testing platforms? I don't know, but I do know that if the new kernel builds are tested against a generic group of hardware and released, and other testers then report on their tests using hardware X, you would end up with a relatively quick listing of a new build against many variants of hardware. Published correctly, it would allow people to search for problems regarding the new build and any particular hardware that they might be using.

That should allow reasonable release schedules and capture errors with older/arcane/little-used hardware. I think that would be quite acceptable.

As for ARM and other platforms that are not exactly mainstream, they could also be in the second round of testing. If an ARM supplier wanted to be certified in the early build testing, they could support that with hardware and other resources, thus pushing certification earlier for their hardware. Even MB or HD manufacturers could support the testing with their hardware for early certification, so that it is a community effort rather than just that of a small group of testers.

All together, I can see the process as quite workable.

Re:This is awesome (1)

Cylix (55374) | more than 8 years ago | (#12730098)

Unfortunately, organizing that kind of oddball testing would be a management nightmare unless you want to go out and collect all of the hardware. Remember, some people do post patches and whole driver releases without stepping inside the kernel team's realm.

The only real way to automate something like that would be a dummy-load facility: some software which would emulate the hardware being in place, or something conceptually similar to that effect anyway.

So then, for every driver for a device, you have a software kit which tests the driver to see if it's functioning. I have no idea how practical this is, as I've never really done any serious kernel hacking.

Re:This is awesome (1)

Nutria (679911) | more than 8 years ago | (#12730231)

Where's the ARM, MIPS, and SH?

IBM doesn't sell any ARM, MIPS or SH-based systems. So, they don't test them.

The Debian buildd system is an automatic building and semi-testing system for, of course, all the archs that Debian supports, and that includes ARM, MIPS, and SH.

Re:This is awesome (1)

team99parody (880782) | more than 8 years ago | (#12730247)

Wouldn't it make more sense to package these tools for someone to install on their collection of oddball equipment, and assist in the debugging/testing?

That's how the PostgreSQL build farm [pgbuildfarm.org] works. People with weird hardware [onlamp.com] apply to be added to the automated test farm. ARM, MIPS, PA-RISC, Alpha, PowerPC, SPARC, etc. are all well represented in the PostgreSQL automated tests.

ARM Linux has something similar (5, Informative)

kyllikki (88559) | more than 8 years ago | (#12729411)

ARM Linux has had something similar in Kautobuild [simtec.co.uk] for some time.

Although the testing and building is limited to the ARM platform.

The site also has a who's who that's worth looking at ;-)

News Flash (4, Informative)

sirReal.83. (671912) | more than 8 years ago | (#12729464)

Red Hat (and probably Novell/SuSE, since they use over one thousand kernel patches) runs a myriad of tests on each of its own kernel builds nightly - and has been doing so for years. On more than just the 3 architectures covered by this test.

That said, pushing tests upstream is a great idea. Just not revolutionary or anything.

Re:News Flash (0)

Anonymous Coward | more than 8 years ago | (#12729705)

Those internal tests could not keep pace with the rate of development of the Linux kernel, however. This test suite will test _every_ kernel release instead of just the few cherry-picked by the big vendors.

Long uptimes (4, Interesting)

rice_burners_suck (243660) | more than 8 years ago | (#12729465)

This is a very smart system. The Samba team uses something very similar. The key to finding regressions with this method is to create tests for every piece of functionality, and to integrate it with the rest of the testing suite, so that each function of the kernel will be continuously tested. For new features, it is preferable to create these tests as the features are being coded. For existing millions of lines of code, it is necessary for some brave souls to go in and create these tests.

I hope they are using code from the Linux testing suite. That piece of work has already formed a nice set of tests. Also, I hope that the kernel is automatically built with many different combinations of options. And with time, I hope this will become better. The more tests, with the more hardware configurations, with the more kernel configurations, with the more types of input data (including many imaginative forms of incorrect input data to test that the kernel handles it gracefully and thwarts attacks based on such methods), the better quality we will have in the kernel, and it is likely that Linux will be unmatched in quality, stability, efficiency (well, maybe not efficiency necessarily), and long uptimes.

through the looking glass... (3, Funny)

moviepig.com (745183) | more than 8 years ago | (#12729485)

With an automated test suite, what happens when a class of bug is discovered to be untested-for? Presumably, the suite is modified to detect it. Then, is the resulting new suite itself subjected to an automated test suite? And, then...[divide-by-zero error...]

Re:through the looking glass... (4, Informative)

oxfletch (108699) | more than 8 years ago | (#12729538)

There is indeed an internal self-test suite on the harness. It's not desperately sophisticated, and I wouldn't dare show it to anyone ;-) However, it does catch a lot of stupid bugs. It requires some manual intervention/inspection to work.

Plus, there's a separate development grid where we test new test-harness code before it's put onto the production grid.

Re:through the looking glass... (1)

EvanED (569694) | more than 8 years ago | (#12729665)

You're not looking at a divide-by-zero error, but a stack overflow from the infinite recursion.

Re:through the looking glass... (2, Funny)

moviepig.com (745183) | more than 8 years ago | (#12729955)

You're not looking at a divide-by-zero error, but a stack overflow from the infinite recursion.

You're right, I made a mistake. I shall modify my test suite forthwith... [divide-by-zero error]

Does this mean... (2)

blixel (158224) | more than 8 years ago | (#12729506)

Does this mean we'll get back to 2.6.x releases? Instead of new versions of 2.6.x being released as 2.6.x.x every third day?

One downside (0)

Anonymous Coward | more than 8 years ago | (#12729565)

One downside is that people can become sloppy if they rely too much on the test suite catching all bugs (it obviously cannot).

Anyway, I think that if put to good use it can be a step forward.

Regarding sharing code with other test projects... I hope they do not! What we need is as many different tests and testing methodologies as we can generate. The more the better, so let's go invent a better wheel!

cool to see this publicly announced (1)

emmastrange (768051) | more than 8 years ago | (#12729591)

I got to work on part of this system, which IBM calls Autobench, for my senior project at PSU. The system is a highly configurable framework which can download, compile, and run various benchmarks and profilers (for example, while compiling a kernel). It's all centrally administered, so IBM can run a battery of tests on a variety of different machines at once.

I think Martin Bligh said that IBM has been using this for a while now, automatically downloading kernels upon release and testing them. The new thing is the regression matrix which is posted to the web. Very cool.

Re:cool to see this publicly announced (0)

Anonymous Coward | more than 8 years ago | (#12729690)

You must mean Portland State University where there are NO grammer or English requirements! (UNST doesn't count)

2.6.12 on amd64 (1)

scharkalvin (72228) | more than 8 years ago | (#12729725)

needs work! The latest builds all failed!

Re:2.6.12 on amd64 (1)

StupidKatz (467476) | more than 8 years ago | (#12730216)

Considering that 2.6.12 hasn't been released yet, it just might be the case that they are still, oh, I don't know, working on it?

Re:2.6.12 on amd64 (0)

Anonymous Coward | more than 8 years ago | (#12730555)

Considering the VIA chipsets have cold-boot failures that weren't really fixed in 2.6.10, I'm surprised amd64 boots at all!

Call me cynical. My SK8V motherboard with ECC memory never boots on the first try. It always hangs at a random point in the kernel and needs a reboot before it will boot fully.

"The least they are required to do" (0)

Anonymous Coward | more than 8 years ago | (#12729804)

IBM's efforts vis-a-vis Linux are a real eye opener when you look at the scraps that Apple gives back to the open source community after taking so much.

User Mode Linux builds (0)

Anonymous Coward | more than 8 years ago | (#12729982)

I hope all this translates to easier compilation of Linux. I had worked with User Mode Linux a while back, and it takes a trip to hell and back to compile... you need just the right setup of gcc and config files to get it to compile, which took me forever to get right, and was honestly a waste of time.

Linux enters the world of QA 101! (1)

mrkitty (584915) | more than 8 years ago | (#12730000)

Years later and finally it is getting some *basic* QA testing done! What will they think of next!

Re:Linux enters the world of QA 101! (0)

Anonymous Coward | more than 8 years ago | (#12730126)

Every good distribution is already doing this, and they all benefit from each other's work.

If you want a rock solid system, why are you running a vanilla kernel? Grab a RHEL or a SLES kernel for that.

Re:Linux enters the world of QA 101! (1)

SlashMaster (62630) | more than 8 years ago | (#12730412)

I'd expect the community to start advocating unit testing, an agile development practice, at some point to increase the reliability of code before it is even merged into the nightly builds.

I realize that this is not the same as testing the entire package on dissimilar hardware like he is doing here; for instance, there are bound to be a few issues when the developers of some code and of its underlying code base both submit updates the same evening. IMHO, it'd especially help new developers if there were unit tests distributed with the source packages, allowing developers to ensure that they aren't breaking anything else with their new/revised code.

In any case, if this is a start, it's a nice start, and it means that Linux is going to get a LOT easier to compile and install for people, from my perspective...

Re:Linux enters the world of QA 101! (1)

LnxAddct (679316) | more than 8 years ago | (#12730421)

Individual distros have been doing this for years. Red Hat is one company that is known for its extensive testing of the kernel (as well as many other OSS projects). Don't use a vanilla kernel if you're running a production environment.
Regards,
Steve

Is this even worth anything? (1)

xenocide2 (231786) | more than 8 years ago | (#12730145)

One of the main goals appears to be whether the kernel builds or not. I shouldn't have to tell slashdot that build errors are among the most trivial of OS programming errors. They certainly exist, as the chart shows, but whoever is in charge of this project has a long way to go, by adding real tests of functionality. Consider it job security ;)

Re:Is this even worth anything? (1)

oxfletch (108699) | more than 8 years ago | (#12730610)

For one, did you actually bother to look at the results at all, and at what tests are being run and published?

For another, this is only the tip of the iceberg as to what can be done, but I'm not going to lock whatever I have now in some dingy dungeon until it's "finished". What's there is useful, albeit incomplete. Testing is *never* complete.

The main goal, as you put it, is to improve the quality of the linux kernel. If we can ensure the kernel builds, boots, and runs basic tests ... in a fully automated way ... then it frees up other resources to do more sophisticated tests, without getting dragged down by basic crap.

Onwards, and upwards ...

Well, this time I am really unhappy! (-1, Troll)

kompiluj (677438) | more than 8 years ago | (#12730188)

1) The very need for such tests means that current 2.6.x kernels are very unstable - this means that Linux currently does not have any stable version - not good.
2) Remember Microsoft? They have always done nightly builds of Windows since the beginning of time; the only sure thing is that it did not improve the quality of the code...

Re:Well, this time I am really unhappy! (0)

Anonymous Coward | more than 8 years ago | (#12730242)

1) Automated build/testing should be a standard part of any large development project.

2) Riiiiight. I'm really sure 2000 and XP aren't an improvement over Windows 95.

Re:Well, this time I am really unhappy! (1)

unleashedgamers (855464) | more than 8 years ago | (#12730467)

2) Riiiiight. I'm really sure 2000 and XP aren't an improvement over Windows 95.

2000 and XP are way different from 95.

Windows '95, '98 and ME are descended from DOS and Windows 3.x, and contain significant portions of old 16-bit legacy code. These Windows versions are essentially DOS-based, with 32-bit extensions. Process and resource management, memory protection and security were added as an afterthought and are rudimentary at best. This Windows product line is totally unsuited for applications where security and reliability are an issue. It is completely insecure, e.g. it may ask for a password but it won't mind if you don't supply one. There is no way to prevent the user or the applications from accessing and possibly corrupting the entire system (including the file system), and each user can alter the system's configuration, either by mistake or deliberately. The Windows 9x/ME line primarily targets consumers (although Windows '95 marketing was aimed at corporate users as well).

The other Windows product line includes Windows NT, 2000 and XP, and the server products. This Windows family is better than the 9x/ME line; at least these versions use new (i.e. post-DOS) 32-bit code. Memory protection, resource management and security are a bit more serious than in Windows 9x/ME, and they even have some support for access restrictions and a secure filesystem. That doesn't mean that this Windows family is as reliable and secure as Redmond's marketeers claim, but compared to Windows 9x/ME its additional features at least have the advantage of being there at all. But even this Windows line contains a certain amount of 16-bit legacy code, and the entire 16-bit subsystem is a direct legacy from Microsoft's OS/2 days with IBM. In short, all 16-bit applications share one 16-bit subsystem (just as with OS/2). There's no internal memory protection, so one 16-bit application may crash all the others and the entire 16-bit subsystem as well. This may create persistent locks from the crashed 16-bit code on 32-bit resources, and eventually bring Windows to a halt. Fortunately this isn't much of a problem anymore now that 16-bit applications have all but died out.

Of course Windows has seen a lot of development over the years. But in fact very little has really improved. The new features in new versions of Windows all show the same half-baked, patchy approach. For each fixed problem, at least one new problem is introduced (and often more than one). Windows XP for example comes loaded with more applications and features than ever before. While this may seem convenient at first sight, the included features aren't as good as those provided by external software. For example, XP insists on supporting DSL ("wideband Internet") networking, scanners and other peripherals with the built-in Microsoft code instead of requiring third-party code. So you end up with things like DSL networking that uses incorrect settings (and no convenient way to change that), scanner support that won't let you use your scanner's photocopy feature, or a digital camera interface that will let you download images from the camera but you can't use its webcam function. WiFi network cards are even more of a problem: where manufacturers could include their own drivers and client manager software in previous versions of Windows, users are now forced to use XP's native WiFi support. Unfortunately XP's WiFi support is full of problems that cause wireless PCs to lose their connection to the wireless access point with frustrating regularity. Also XP's native WiFi support lacks extra functions (such as advanced multiple-profile management) that manufacturers used to include in their client software. And of course applications (such as Internet Explorer and Outlook) have been integrated in the operating system more tightly than ever before, and more formerly separate products have been bundled with the operating system.

Any other Open Source projects have similar? (1)

team99parody (880782) | more than 8 years ago | (#12730213)

I think the PostgreSQL buildfarm [pgbuildfarm.org] is one of the coolest ones I've seen. It's distributed across a bunch of volunteer-run machines representing a broader selection of architectures than almost any other automated-test project I'm aware of. A nice article on it can be found here [onlamp.com].

Any other projects out there with similar transparency in their automated testing?
