
52 comments

5 years of first posts! (-1, Flamebait)

Anonymous Coward | more than 3 years ago | (#34123404)

and 5 years of you sucking my asshole.

Re:5 years of first posts! (0)

Anonymous Coward | more than 3 years ago | (#34137822)

Or so trolls would like to believe.

Windows Kernels (5, Interesting)

Anonymous Coward | more than 3 years ago | (#34123456)

What about running the same study on the Windows kernel from XP to 7?

Re:Windows Kernels (2, Insightful)

coolsnowmen (695297) | more than 3 years ago | (#34125766)

While interesting, it isn't exactly the same; in Linux you can change just the kernel, without changing all the services and startup software.

Re:Windows Kernels (1)

BrokenHalo (565198) | more than 3 years ago | (#34126388)

Even changing kernels can be problematic if you go back far enough; you start running into problems (as mentioned in TFA) with not being able to build your kernel with the same version of gcc. If it were not for this factor, I would be more interested to see a comparison against the 2.0-2.4 kernels. Having said that, since 2.4.37.10 was only released last September, I would imagine that it should be compatible with current compilers.

phoronix - bleh (0, Troll)

Anonymous Coward | more than 3 years ago | (#34123544)

phoronix is an embarrassment to open source and linux devs. Worst site ever.

Re:phoronix - bleh (0, Flamebait)

Ant P. (974313) | more than 3 years ago | (#34130830)

The results from these 26 pages of advertisements show that, for the most part, sensationalist bullshit and trolling is as profitable as ever.

Fixed that for them.

Virtual machine, really? (5, Insightful)

edelholz (1098395) | more than 3 years ago | (#34123604)

They tested in a VM. Now where's the proof that this by itself doesn't affect performance in an unpredictable way?

Re:Virtual machine, really? (0, Troll)

Anonymous Coward | more than 3 years ago | (#34123724)

They're phoronix, you can't seriously expect them to do anything approaching a proper job of the benchmarks, can you?

Re:Virtual machine, really? (4, Informative)

mtippett (110279) | more than 3 years ago | (#34123804)

Considering the effort going into virtualization these days and the massive deployments in Fortune 500 companies, the performance of VM based systems is predictable. All the testing with Phoronix Test Suite is repeated until there is less than 3% variance between the results - or the result set is discarded.

Realistically, looking at older kernels on modern hardware is actually a very critical dimension for corporate server environments. There are applications in that space that are deployed and supported only on some old distribution. Being able to predict and understand how Red Hat 7.1 will act vs Red Hat 5 is critical for some environments.

Re:Virtual machine, really? (1)

chrb (1083577) | more than 3 years ago | (#34124788)

the performance of VM based systems is predictable

I agree that benchmarking a single VM on a VM host is a valid thing to do, and will give fairly reproducible results. But it can get more difficult with more complex setups. You need to be able to manage the complexity and eliminate or randomise all the factors. Benchmarking a single VM running on a VM host with 20+ other active VMs, with snapshots being created and merged, and with variable network and disk configurations, gets more difficult.

All the testing with Phoronix Test Suite is repeated until there is less than 3% variance between the results - or the result set is discarded.

What is the minimum number of replicates for each setup? 3% variance may occur easily with a low number of replicates, but that is not going to give you statistically significant results. Also, what order are the experiments run in? There may well be some autocorrelation caused by caching etc.
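As a toy illustration of that risk (my own sketch in C, not anything from the Phoronix Test Suite): simulate a benchmark whose true run-to-run spread is well above 3%, and count how often a "stop once the relative standard deviation drops below 3%" rule terminates after only a handful of samples.

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    /* Toy benchmark: scores uniform in [90, 110], so the true relative
       standard deviation is about 5.8% -- well above the 3% threshold. */
    static double sample(void) { return 90.0 + 20.0 * (double)rand() / RAND_MAX; }

    int main(void)
    {
        const int trials = 10000, max_n = 30;
        int passed_early = 0;
        srand(42);
        for (int t = 0; t < trials; t++) {
            double x[30];
            for (int n = 1; n <= max_n; n++) {
                x[n - 1] = sample();
                if (n < 3)
                    continue;
                double mean = 0.0, var = 0.0;
                for (int i = 0; i < n; i++) mean += x[i];
                mean /= n;
                for (int i = 0; i < n; i++) var += (x[i] - mean) * (x[i] - mean);
                var /= n - 1;
                if (sqrt(var) / mean < 0.03) {   /* the "under 3%" stopping rule */
                    if (n < 10)
                        passed_early++;          /* passed with few replicates */
                    break;
                }
            }
        }
        printf("%.1f%% of runs passed the 3%% rule with fewer than 10 samples\n",
               100.0 * passed_early / trials);
        return 0;
    }

Even though the true spread never drops, a noticeable fraction of runs will sneak under the threshold with a few lucky samples, which is exactly why a fixed, larger replicate count is safer.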

Re:Virtual machine, really? (1)

BrokenHalo (565198) | more than 3 years ago | (#34126618)

One thing I'm curious about is the kernel configuration these guys used - I couldn't find it. Unless they built the kitchen sink into the kernel in the first place, I find it difficult to see how they could have used the same .config for that many builds.

Until a year or two ago, I used to be an inveterate kernel stripper; any driver or service that wasn't used or supported by my hardware got ruthlessly taken out. This did leave me with more responsive machines at the minor cost of my time. More recently I have become lazy, and I have adopted the default kernels that come with my preferred distro [archlinux.org] ; now that I am no longer at university, I'm not pushing my machines as hard with molecular modelling as I used to.

Re:Virtual machine, really? (1)

RichiH (749257) | more than 3 years ago | (#34143176)

How do you know that running in a VM doesn't affect one kernel version more than another?

Being too lazy/stupid to start a machine on bare metal? Come the fuck on.

Of course, Phoronix being the vile pretend-useful bottom-feeding site that it is, they would never put making sure there are no outside factors ahead of generating page impressions quickly and cheaply.

Re:Virtual machine, really? (1)

mtippett (110279) | more than 3 years ago | (#34143258)

How do you know that running on an AMD CPU doesn't affect one kernel version more than another, vs Intel? The same argument stands. It's a machine layer for running code.

Sure, it's not what you want, but don't consider it completely invalid. There are many people who have interest in virtualized performance.

Re:Virtual machine, really? (1)

RichiH (749257) | more than 3 years ago | (#34143430)

> How do you know that running on an AMD CPU doesn't affect one kernel version more than another, vs Intel?

It does, at least if you compile for it.

> There are many people who have interest in virtualized performance.

I am amongst them. We run a few hundred VMs.

> Sure, it's not what you want, but don't consider it completely invalid.

Not completely invalid. Yet a very basic benchmarking mistake was made, out of inability and/or laziness, which could have a major impact on the validity of the results.
We are used to this behavior from Phoronix. I am sick and tired of people pretending that is not the case.

Re:Virtual machine, really? (2, Interesting)

chrb (1083577) | more than 3 years ago | (#34124624)

They tested in a VM. Now where's the proof that this by itself doesn't affect performance in an unpredictable way?

If they test in a VM, on only one particular hardware configuration, then the results only apply to that specific test setup. If the fact that the experiments are run inside a VM introduces variability into the results, then this will show up as a large variance. [wikipedia.org] However, having a larger variance does not in itself negate the results - but remember that the results can't be generalised to other configurations - they only apply to this particular setup.

In order to produce experimental results that can be generalised you need to run your experiments on a randomised configuration of hardware and VM host software. Either test every possible combination of factors - hardware, VM host software, software under test - (full factorial) [wikipedia.org], or some subset (fractional factorial) [wikipedia.org].

I'm usually one of the first to bash Phoronix for not doing multiple replicates or any statistical analysis of their experiments, but things appear to have changed this time. Some of the big criticisms of Phoronix's benchmarks in the past were that they didn't consider whether or not their results were significant - instead doing only one replicate for each configuration, plotting a barchart, and concluding "X was 5 FPS faster. Therefore it wins!" Apparently they're now doing multiple replicates and some proper statistics to calculate whether or not observed differences are actually statistically significant ("our kernel test results were automated, easily reproducible, and statistically significant"). Also the graphs are showing error bars +/- 1SD [wikipedia.org] . This is good. This means that if you want to reproduce their experiments, it should be easy to do so. You can get an idea from the graphs whether a difference is significant.

(Having said that, I'm not sure why some of the data points don't have error bars - presumably the standard deviation was very low? I also can't see the number of replicates mentioned anywhere - maybe he used his "dynamic number of trials" [phoronix.com] scheme, but statistically speaking that may well be a bad thing: with only 2 or 3 trials there is a real chance of the first few samples happening to have low variance, so he should probably stick to a fixed 10 to 30 replicates instead.)
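To make "statistically significant" concrete, here is a minimal Welch's t-test sketch in C; the replicate scores below are invented for illustration, not Phoronix's data.

    #include <stdio.h>
    #include <math.h>

    /* Sample mean and unbiased variance. */
    static void mean_var(const double *x, int n, double *m, double *v)
    {
        double s = 0.0, ss = 0.0;
        for (int i = 0; i < n; i++) s += x[i];
        *m = s / n;
        for (int i = 0; i < n; i++) ss += (x[i] - *m) * (x[i] - *m);
        *v = ss / (n - 1);
    }

    int main(void)
    {
        /* Hypothetical per-run scores for two kernels, 10 replicates each. */
        double a[] = {101, 99, 100, 102, 98, 100, 101, 99, 100, 100};
        double b[] = {104, 103, 105, 102, 104, 103, 105, 104, 103, 104};
        const int na = 10, nb = 10;
        double ma, va, mb, vb;

        mean_var(a, na, &ma, &va);
        mean_var(b, nb, &mb, &vb);

        /* Welch's t statistic for two samples with unequal variances. */
        double t = (mb - ma) / sqrt(va / na + vb / nb);
        printf("mean A = %.2f, mean B = %.2f, Welch t = %.2f\n", ma, mb, t);
        /* Rule of thumb: with ~10 replicates a side, |t| well above 2 means
           the difference is very unlikely to be run-to-run noise. */
        return 0;
    }

Overlapping error bars on a chart are suggestive, but a test like this is what actually separates a real regression from noise.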

Re:Virtual machine, really? (2, Interesting)

mtippett (110279) | more than 3 years ago | (#34124974)

The "get to statistical variance" has been in Phoronix Test Suite for the better part of a year.

As part of the new work happening with Phoronix Test Suite, and the online aggregation site OpenBenchmarking.org, we'll be looking to expose the raw data and allow people to view a particular set of results in a possibly more meaningful way. What is being examined now is raw data (scatter diagrams), box plots (percentiles), violin plots (kernel density based), and full standard error reporting (error bars, numerical references to SD and SE).

Of course the general articles just show a simple form.

Obviously, infinite time and infinite runs across a broad variety of hardware would be better. As usual, contact us at Phoronix with a fully baked suggestion for improvements to Phoronix Test Suite, a benchmark suggestion, or an article suggestion, and we are more than willing to consider it.

Re:Virtual machine, really? (2, Insightful)

Anonymous Coward | more than 3 years ago | (#34124900)

They tested in a VM. Now where's the proof that this by itself doesn't affect performance in an unpredictable way?

Does it matter? They are after deltas, not absolutes.

*IF* they test each kernel in the same VM on the same metal, then any change is valid. The numbers are abstract; the difference between releases is what is key.

Re:Virtual machine, really? (1)

Anonymous Coward | more than 3 years ago | (#34128640)

They tested in a VM. Now where's the proof that this by itself doesn't affect performance in an unpredictable way?

The real problem with running this kind of comparative benchmark in a VM isn't even predictability. It's that virtualization affects kernel performance in many profound ways. Many performance metrics you might choose to test will depend on the host kernel and virtualization environment and how it interacts with the guest kernel. In other words, you're not testing the performance of the guest kernel in isolation.

For example, say you use a combination of host and guest which supports native IO (where the guest OS has special IO drivers which interface to the virtual machine instead of trying to drive hardware which is faked by the VM). Any attempt to benchmark guest IO performance will actually be benchmarking host IO performance with some relatively lightweight overhead.

If, on the other hand, you use the more primitive form of virtualization where the overhead is huge because the VM simulates IO hardware and the guest OS drivers poke at the simulated HW, you end up testing not just the guest kernel's IO performance, but also the massive overhead of emulation.

The same principle applies to virtual memory. Either the guest is making calls to the host to manage virtual memory on its behalf, or the virtual machine emulates all the instructions which manipulate page tables etc. and adds a lot of overhead. And you can't benchmark real loads without involving the virtual memory layer (virtual memory isn't just for when you start hitting the swap partition; it's everywhere).

So basically, this set of tests tells you nothing useful beyond "kernels x.y.z to x.y.z+N all perform similarly inside the particular VM+hostOS combo we chose".

Re:Virtual machine, really? (2, Interesting)

arth1 (260657) | more than 3 years ago | (#34130302)

In addition, a VM will use available assigned cores on the host, without locking them 1:1. This changes the behavior quite a bit, especially when it comes to CPU cache. The guest thinks it is running on the same core, but in reality it jumps between them, and has to reload from higher level cache or even memory.

Worse, from a benchmarking standpoint, hyperthreading will be exposed to the guest as separate CPUs. An intelligent scheduler would want to run distinct tasks on different cores, but can't do so in the VM.

And it also depends on what other VMs are running on the host, because virtual machines these days do "intelligent paging" and keep only one copy of identical pages. So if you're running two VMs with the same kernel or OS level, they're likely to run faster than two different OSes.

Anyhow, the test is horribly flawed from another point of view -- they test new hardware with old kernels. That's not fair, because the old kernels don't have optimizations for hardware that didn't exist when they were released.
I'd be much more interested in finding out how old hardware would perform when upgrading to a new kernel. The more than 50% increase in size of a basic kernel over the last few years is probably why my old server, with little RAM by today's standards, runs faster on 2.6.17 than on 2.6.34. It's quite possible that the newer kernel itself runs faster, but if it leaves less memory for the system and the apps you run start swapping, it's going to be much slower overall.

Obviously... (1)

basotl (808388) | more than 3 years ago | (#34123614)

Obviously... they forgot the bloat feature.

Fourth Post! (0)

Anonymous Coward | more than 3 years ago | (#34123616)

WHAT?!!? Nobody cares about old Linux benchmarks?!

Y'all musta forgot (2, Funny)

mark72005 (1233572) | more than 3 years ago | (#34124400)

I find this hard to believe, what with 2010 being the year of Linux on the desktop and all.

Re:Y'all musta forgot (0, Redundant)

BrokenHalo (565198) | more than 3 years ago | (#34126728)

Yawn. When will you kiddies grow up? Linux has been on the desktop for over a decade - in my case, since 1995.

Re:Y'all musta forgot (0, Redundant)

mark72005 (1233572) | more than 3 years ago | (#34132326)

It's an obligatory response, and your response is the obligatory response to the obligatory response! Should not be modded Redundant!

The jump from 2.6.28 to 2.6.30. (0)

Anonymous Coward | more than 3 years ago | (#34123620)

It would seem that disk performance was better on 2.6.28; then in 2.6.29 something was introduced that wasn't fixed and has been the legacy of 2.6.30 onward.

The question is: what prioritized process arrived in 2.6.30 that causes disk access to be inferior?

Also of note, the BENCHMARK IS IN A VIRTUAL MACHINE, not an actual installation: failure.

Re:The jump from 2.6.28 to 2.6.30. (0)

mtippett (110279) | more than 3 years ago | (#34123838)

As mentioned in a different comment, the only place where you will find older kernels in production these days is in a VM. These results are completely relevant for the people who will be running older kernels. Old hardware dies and the services migrate; old software doesn't die, it just keeps on living in a VM.

Re:The jump from 2.6.28 to 2.6.30. (1)

arth1 (260657) | more than 3 years ago | (#34130412)

As mentioned in a different comment, the only place where you will find older kernels in production these days is in a VM.

Not so. Red Hat Enterprise Linux and CentOS still ship with 2.6.18, for example.

And for embedded linux, there are products shipping with far older kernels than 2.6.17, the oldest kernel in this test.

In general, expect the "long term stable" kernels (like 2.6.18, 2.6.27 and 2.6.32) to be in production for a long time. When the life cycle of a product is 5+ years, having a stable kernel outweighs having a new one.

Re:The jump from 2.6.28 to 2.6.30. (1)

mtippett (110279) | more than 3 years ago | (#34130764)

At the last few companies I have worked at, the modern hardware doesn't run the old versions of Red Hat. The result is that RH and other legacy versions of Linux are easier to deal with behind the consistent and simple hardware abstraction provided by a VM. I haven't seen a bare-metal deployment of anything older than RHEL 5 for a while (either that, or the system it's running on is on life support).

I agree about embedded devices having older kernels - I'm regularly involved in "shiny and new" vs "old and known" discussions. Of course we're talking about PC class hardware that EOLs faster than the software. Again, my comments are focused on the enterprise use of older kernels which is a sensible interpretation of the article.

Re:The jump from 2.6.28 to 2.6.30. (1)

arth1 (260657) | more than 3 years ago | (#34134304)

I wasn't talking about old versions of Red Hat Enterprise Linux. The newest version, 5.6, uses kernel 2.6.18.

Results don't support conclusion (2, Interesting)

QuantumBeep (748940) | more than 3 years ago | (#34123696)

It seems almost every benchmark that had any difference was slower in more modern kernels. It's not all sunshine and roses.

Re:Results don't support conclusion (1)

TheBlackMan (1458563) | more than 3 years ago | (#34124782)

Actually, looking at the same graphs you did, I concluded the opposite.

Re:Results don't support conclusion (1)

olau (314197) | more than 3 years ago | (#34125320)

Yeah, note that some of the benchmarks are measuring bytes/sec so higher is better. :)

Re:Results don't support conclusion (4, Informative)

timeOday (582209) | more than 3 years ago | (#34125834)

I would agree it's not all sunshine and roses, but let's at least look a little more closely. There are some disturbing regressions in there, although keep in mind other improvements (such as moving to a journalling filesystem) may come at a cost to performance, which may be justified.

Better

  • Apache Compilation: 40% less time
  • Disk Transactions: 50% less time

Worse

  • GnuPG File Encryption: 60% more time
  • Transfer of 10GB via the TCP network loopback: 100% more time
  • Apache static web page serving: 50% more time
  • IOZone Writes: 20% more time

Same

  • CAMELLIA256-ECB cipher
  • OpenSSL
  • NASA's NPB
  • TTSIOD 3D renderer
  • C-Ray multi-threaded ray-tracing
  • Crafty, an open-source chess engine
  • MAFFT, a multiple-sequence alignment test from molecular biology
  • Himeno Poisson Pressure Solver
  • Blowfish performance with John The Ripper
  • LAME MP3 encoding
  • 7-Zip compression
  • Dhrystone 2
  • FS-Mark
  • IOZone Reads
  • Threaded IO tester
  • Parallel BZip2 compression

Re:Results don't support conclusion (2, Interesting)

CAIMLAS (41445) | more than 3 years ago | (#34128172)

Not only that, but they only looked at kernels built with a specific version of GCC. Because of this, some of the performance differences could theoretically be accounted for by minute differences in how the compiler handles things.

The bigger thing with Linux performance isn't just the kernel - it's the entire stack. You've got the kernel, sure - and then you've got the core libraries (glibc, etc.) and the compiler which built them. These can all change performance significantly, and in real-world environments, kernel and toolchain versions usually change together.

I'd be interested in seeing the results if they went back and looked at the kernel readme files and applied "requires version x or newer of y" and built everything that way. I suspect you'd see a performance curve inversely related to the kernel version.

wops (1)

dropadrop (1057046) | more than 3 years ago | (#34123702)

It seems that Phoronix needs a faster kernel on their server...

Seriously though, some of the performance drops (and how they have been sustained in later kernel versions) make me wonder if there is adequate load testing as part of the kernel QA process.

Re:wops (1, Insightful)

gmack (197796) | more than 3 years ago | (#34123884)

Keep in mind that the biggest drop was most likely due to ext4 adding data journaling, rather than the usual metadata journaling, to make file contents less likely to be corrupted after an unplanned shutdown (power outage etc.).

I didn't see any mention of them turning that feature off to find out one way or another.

Re:wops (2, Insightful)

ustolemyname (1301665) | more than 3 years ago | (#34124874)

Some of the changes noted in the Linux 2.6.30 kernel change-log that was used throughout the Linux testing process included...

Yeah, that new EXT4 filesystem that they didn't use for obvious reasons. Huge impact on the results.

Re:wops (1)

ustolemyname (1301665) | more than 3 years ago | (#34125036)

Sorry, slashcode seems to be blocking my ability to copy paste today (opensuse 11.3, chrome 7 beta, asus 701...)

change-log for the EXT3 file-system that was used throughout

Quote is available on the third page of the article, first paragraph.

Overkill (3, Funny)

TrailerTrash (91309) | more than 3 years ago | (#34124138)

What more Linux benchmarking do you need besides bogomips? Jeez.

Re:Overkill (1)

null8 (1395293) | more than 3 years ago | (#34125712)

1337 bogomips is enough for everybody!

You call those kernel benchmarks? (5, Insightful)

m4c north (816240) | more than 3 years ago | (#34124594)

Where are the kernel-level tests that do more than exercise the filesystem and network driver (singular) and the scheduler? More than half of those charts were flat, which could mean they weren't making appropriate measurements.

For example, show how mutexes have improved, or copy-on-write, or interrupt handlers, or timers, or workqueues, or kmalloc, or anything else that a system and kernel programmer would care about. I like the user-centric perspective: it's very good information to have and share, but don't call what you've done a kernel benchmark. Maybe call it a kernel survey of its impact on users.
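For instance, a mutex measurement can be as simple as the sketch below (compile with -lpthread, plus -lrt on older glibc; note that an uncontended futex-backed mutex never enters the kernel, so a serious version would add contending threads and pin them to cores):

    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    #define ITERS 10000000L

    int main(void)
    {
        pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
        struct timespec t0, t1;

        /* Time ITERS uncontended lock/unlock pairs. */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < ITERS; i++) {
            pthread_mutex_lock(&m);
            pthread_mutex_unlock(&m);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%.1f ns per lock/unlock pair\n", ns / ITERS);
        return 0;
    }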

Re:You call those kernel benchmarks? (0)

Timmmm (636430) | more than 3 years ago | (#34124746)

The only thing they changed was the kernel. Performance differences can only be due to changes in the kernel. In what way is that not a kernel benchmark?

Re:You call those kernel benchmarks? (1)

jdgeorge (18767) | more than 3 years ago | (#34125600)

It's not a COMPLETE kernel benchmark in that it only exercises certain parts of the kernel.

And since you obviously needed a car analogy: it's like ONLY testing how fast a car goes from 0 to 60 miles per hour, but not the towing capacity, fuel efficiency, braking distance, crash performance, or a bunch of other things.

Re:You call those kernel benchmarks? (1)

c (8461) | more than 3 years ago | (#34125748)

> Performance differences can only be due to changes in the kernel.

... or to the VM having better support for certain features used in that particular kernel version, or that particular VM being configured in such a way that some kernels run better than others, or the host kernel somehow having better support for some features of the VM and the benchmarked kernel, or...

Which is perfectly fine as long as it's made very clear that the benchmarks are subject to all of those conditions. Personally, I think the use of a VM for the benchmarking throws in enough noise that I wouldn't rely on the results for anything more than garnering /. page hits.

Re:You call those kernel benchmarks? (2, Insightful)

CAIMLAS (41445) | more than 3 years ago | (#34128222)

IF they were running the tests on real hardware, I'd be more likely to agree.

They weren't. They were running them on a virtualized host in KVM. This means that not only were their results largely determined by the specific network etc. drivers they used (which can see significant revision between kernels without accurately reflecting the kernel itself), but any idiosyncratic behavior in how KVM treats guest interfaces may account for the discrepancies.

ugh (5, Informative)

buddyglass (925859) | more than 3 years ago | (#34125408)

I love that Phoronix is willing to take the time to run tests like this. I just wish they'd learn how to run meaningful tests. For instance, why are they testing a bunch of CPU-bound things? The kernel won't affect those unless we're talking about SMP performance. If you want to test the kernel, test how well it handles SMP, network I/O and disk I/O. And bear in mind that disk I/O will be hugely affected by which filesystem is used and its configurable settings.

Another problem with their article is that it tests individual kernels. Most folks don't use a vanilla kernel. They use one provided by their distro, which may have distro-specific patches that address some of the performance problems (or add new ones). What I would have preferred to see is a comparison of different distro releases over the last 5 years, focusing on the most popular ones (say Ubuntu, Fedora and SuSE).

The meaningful tests (and their results) were:

1. GnuPG: avoid 2.6.30 and later.

2. Loopback TCP: avoid 2.6.30 and later.

3. Apache Compilation: avoid 2.6.29 and earlier.

4. Apache static content: avoid 2.6.12, 2.6.25, 2.6.26, then 2.6.30 and later.

5. PostMark: avoid 2.6.29 and earlier.

6. FS-Mark: avoid 2.6.17 and earlier, 2.6.29, then 2.6.33 to 2.6.36.

7. ioZone: unless you're willing to run 2.6.21 or earlier, avoid 2.6.29 and you're fine.

8. Threaded I/O: avoid 2.6.20 and earlier, 2.6.29, then 2.6.33 to 2.6.36.

Based on these results, #1 and #2 seem to be testing the same thing, and tests #3 and #5 seem to be testing the inverse of whatever that thing is. 2.6.29 seems to be especially crappy, performing worse than the kernels immediately before and immediately after it on tests #6, #7 and #8. In terms of recent kernels, tests #6 and #8 suggest a regression in 2.6.33 that has been resolved in 2.6.37.

If it were me, I'd look at either running 2.6.37 (when it's released) or falling back to 2.6.32 if my hardware were supported.
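For anyone who wants to sanity-check test #2 on bare metal, the loopback TCP benchmark strips down to something like this sketch (error handling omitted; a real harness would repeat it and report the variance):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    #define TOTAL (1L << 30)   /* move 1 GB through the loopback */
    #define BUF   65536

    int main(void)
    {
        struct sockaddr_in addr = {0};
        socklen_t len = sizeof(addr);
        int ls = socket(AF_INET, SOCK_STREAM, 0);

        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = 0;                        /* kernel picks a free port */
        bind(ls, (struct sockaddr *)&addr, sizeof(addr));
        listen(ls, 1);
        getsockname(ls, (struct sockaddr *)&addr, &len);

        if (fork() == 0) {                        /* child: the sender */
            char buf[BUF] = {0};
            int c = socket(AF_INET, SOCK_STREAM, 0);
            connect(c, (struct sockaddr *)&addr, sizeof(addr));
            for (long sent = 0; sent < TOTAL; sent += BUF)
                write(c, buf, BUF);
            close(c);
            _exit(0);
        }

        char buf[BUF];                            /* parent: the receiver */
        int s = accept(ls, NULL, NULL);
        struct timespec t0, t1;
        long got = 0;
        ssize_t n;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        while ((n = read(s, buf, BUF)) > 0)
            got += n;
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.1f MB/s over TCP loopback\n", got / secs / 1e6);
        wait(NULL);
        return 0;
    }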

Re:ugh (3, Insightful)

mtippett (110279) | more than 3 years ago | (#34126934)

This made me laugh - in a good way, not at you :).

When Phoronix does a distro comparison, the crowd calls out that the tests are only really testing gcc differences and should have fewer variables changing. When Phoronix does a fixed comparison varying only one part of the system, the crowd calls out that it isn't a good basis since people don't run it that way.

Phoronix runs tests in different ways to explore the performance landscape. For some it gives precisely the information that they need; for others it's completely irrelevant. In this particular case, I'm glad that the data gave you enough to have some open questions about 2.6.32 vs 2.6.37. If people walk away with those sorts of first-order interpretations, the article served its purpose.

Of course the next step would be how do we take a tighter look at the delta between 2.6.32 and 2.6.37 - any thoughts?

Regarding meaningful vs meaningless tests: the tests Phoronix runs are a collection meant to explore. The tests were run, and for some of them the results yielded nothing interesting but were still reported. You don't know until you run the tests, and if the tests are run, you report on them. Some tests may be stable now but may have sensitivity to other parts of the system. Even CPU-bound tests will yield different results in different cases (scheduler, etc.).

Re:ugh (2, Insightful)

TheLink (130905) | more than 3 years ago | (#34129196)

I suspect the scheduler would make a bigger difference if you were running multiple processes at the same time.

e.g. multiple processes in various scenarios:

  • CPU intensive
  • disk IO intensive
  • network IO intensive, single NIC
  • network IO intensive, two NICs
  • network IO intensive, four NICs
  • various combinations of CPU, disk and network

Then latency tests:

  • one to X processes with high CPU, while measuring the latency experienced by another process
  • one to X processes with high IO, while measuring the latency experienced by another process
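A crude version of those latency tests (in the spirit of cyclictest, but without its rigor): sleep for 1 ms in a loop and record how late each wakeup is, while the hog processes run alongside.

    #include <stdio.h>
    #include <time.h>

    #define LOOPS    1000
    #define SLEEP_NS 1000000L   /* ask for a 1 ms sleep */

    int main(void)
    {
        long worst = 0, total = 0;

        for (int i = 0; i < LOOPS; i++) {
            struct timespec req = {0, SLEEP_NS}, t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            nanosleep(&req, NULL);
            clock_gettime(CLOCK_MONOTONIC, &t1);
            /* How much later than requested did we actually wake up? */
            long lat = (t1.tv_sec - t0.tv_sec) * 1000000000L
                     + (t1.tv_nsec - t0.tv_nsec) - SLEEP_NS;
            total += lat;
            if (lat > worst)
                worst = lat;
        }
        printf("avg wakeup latency %ld us, worst %ld us\n",
               total / LOOPS / 1000, worst / 1000);
        return 0;
    }

Run it once on an idle box for a baseline, then again under each load mix, and compare kernels on the worst-case number as well as the average.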


Phoronix, really? (1)

Gothmolly (148874) | more than 3 years ago | (#34145262)

What's next, we all believe Eugenia from OSNews when she spews about BeOS? These guys are just page-view leeches, ignore them and they'll wither and die.

CPU-bound no better, disk & network worse (1)

WindShadow (977308) | more than 3 years ago | (#34180610)

This comes as no surprise. In any activity which is mostly limited by CPU in user mode, not much changes; you can track that across a number of operating systems. What has gotten slower is disk IO and network transfer time, and some tests, such as web serving, may be working entirely or mostly from pages in memory, so this is not as obvious as it might be.

In addition, the test was run in a virtual machine, so to some extent the huge host memory provided extra resources, and the very fast disk hides poor choices in IO scheduling and provides additional write cache and buffers. In other words, neither the tests chosen nor the environment used was typical of a small server or a generous desktop.

For a meaningful test, no more than four CPUs (or two with hyperthreading) should be used, and all IO should go to a real rotating disk, like a $100 1TB WD or Seagate, with the filesystems on that, not some fancy large SSD. Then the numbers would reflect performance on machines in the small-server or fast-desktop price range of a motivated home user or a budget-limited small business. The limitations of CPU and IO scheduler changes would be more evident, and perhaps performance with the deadline scheduler should be included, since discussions on the Linux-RAID mailing list indicate that many of us find the default scheduler a bottleneck for typical loads (particularly RAID-[56]).
