
In Tests Opteron Shows Efficiency Edge Over Intel, Again

Zonk posted more than 6 years ago | from the testy-testy dept.

AMD

Ted Samson writes "In their latest round of energy-efficiency tests between AMD Opteron and Intel Xeon, independent testing firm Neal Nelson and Associates finds AMD still holds an edge, but it's certainly not cut-and-dried. Nelson put similarly equipped servers through another gauntlet of tests, swapping in different amounts of memory and varying transaction loads. In the end, he found that the more memory he installed on the servers, the better the Opteron performed compared to the Xeon. However, at maximum throughput the Intel system fared better, power-efficiency-wise, by 5.0 to 5.5 percent for calculation-intensive workloads. For disk-I/O-intensive workloads, AMD delivered better power efficiency by 18.4 to 18.6 percent. And in idle states — that is, when servers were waiting for their next workload — AMD consistently creamed Intel."

98 comments

Frist Ps0t! (1)

Vulva R. Thompson, P (1060828) | more than 6 years ago | (#20428367)

Because I'm using an Opteron!

I'll bet it checks off "post anonymously" even better than an Intel too!

Re:Frist Ps0t! (0)

Anonymous Coward | more than 6 years ago | (#20428449)

Scrappy little company with 1/10th the resources makes a better chip?

Oh well, back to being a marketing company.

Blue man group standing by to answer all fanboy distress calls.

Horrible picture in my head . . . (5, Funny)

StefanJ (88986) | more than 6 years ago | (#20428397)

Opteron?

Xeon?

Why do these top of the line processors sound like character names from crummy 1980-vintage cartoons about giant robots who talk like street thugs?

"I'm calling you out Xeon! You will be defeated and all Processaria will bow before my superior power stats!"

"You're a fool if you believe those benchmarks Opteron! The true power is Inside!" (duh-Dah-dumm!)

Re:Horrible picture in my head . . . (3, Funny)

lotho brandybuck (720697) | more than 6 years ago | (#20428549)


Maybe because the folks currently running these companies grew up watching crummy 1980-vintage cartoons about giant robots who talk like street thugs?

Nope... (2, Insightful)

Colin Smith (2679) | more than 6 years ago | (#20430093)

It's the people buying the things they are appealing to. It's your intelligence they're insulting.

 

Re:Horrible picture in my head . . . (0)

Anonymous Coward | more than 6 years ago | (#20428639)

I don't disagree, but aren't you like a decade late in complaining about those names? What else bothers you, the Macarena? The Spice Girls?

Re:Horrible picture in my head . . . (1)

Xybre (527810) | more than 6 years ago | (#20428741)

Actually, your dialogue there sounded like a combination of ReBoot and Mr. T.

I pity da foo.

Re:Horrible picture in my head . . . (0)

Anonymous Coward | more than 6 years ago | (#20429835)

I pity the FOO$?

Re:Horrible picture in my head . . . (0)

Anonymous Coward | more than 6 years ago | (#20429511)

It's because anime is all the rage with geeks these days.

I specifically remember a Kingdom of Xeon from Gundam, and I know that one dude in Gundam wing was piloting a huge red beast by the name of Opteron near the end of the series. :P

Re:Horrible picture in my head . . . (1)

Ortega-Starfire (930563) | more than 6 years ago | (#20430955)

If you think it is bad now, read about the Asus breakup. One of the companies will be named (I kid you not) Pegatron.

Efficient Post! (5, Funny)

Anonymous Coward | more than 6 years ago | (#20428401)

This just in! AMD is more efficient than Intel when doing nothing!

For a really good test, they should compare AMD to an empty cardboard box, and see which one uses more power when processing no transactions.

Re:Efficient Post! (4, Insightful)

Ajehals (947354) | more than 6 years ago | (#20428479)

Idle power consumption may not be important for systems that are under a constant workload all the time, but for office file servers, where any given server may be under heavy load for 8 hours a day (probably closer to 6, and probably not "heavy" load at that), having it draw less power for the remaining 16 hours would be rather beneficial; after all, a server like that would be idle two-thirds of the time.

Obviously, ideally you would be using all your kit at 95% capacity all the time, but even then you would need some idle kit standing by to take care of any additional demand. Sadly, companies who aren't planning their IT systems with load in mind (but rather by which vendor takes them to lunch more often or which has the coolest flashing lights) are probably not too interested in power consumption stats anyway.
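To put the duty-cycle point in numbers, here is a minimal sketch; the wattages, the 8-hour busy window, and the electricity price are assumptions for illustration, not figures from the white paper:

    # Duty-cycle-weighted energy for one office server.
    # All wattages and the $/kWh price below are made-up assumptions.
    BUSY_HOURS_PER_DAY = 8
    IDLE_HOURS_PER_DAY = 24 - BUSY_HOURS_PER_DAY
    PRICE_PER_KWH = 0.10  # assumed electricity price

    def annual_kwh(busy_watts, idle_watts):
        """Yearly energy, weighted by the busy/idle split."""
        daily_wh = busy_watts * BUSY_HOURS_PER_DAY + idle_watts * IDLE_HOURS_PER_DAY
        return daily_wh * 365 / 1000.0

    # Same draw under load, different idle draw (hypothetical chips A and B).
    kwh_a = annual_kwh(busy_watts=250, idle_watts=150)
    kwh_b = annual_kwh(busy_watts=250, idle_watts=100)

    print(f"A: {kwh_a:.0f} kWh/yr  B: {kwh_b:.0f} kWh/yr")
    print(f"Idle savings per box: ${(kwh_a - kwh_b) * PRICE_PER_KWH:.2f}/yr")

Because the idle hours outnumber the busy hours two to one here, a watt saved at idle is worth roughly twice a watt saved under load.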

Re:Efficient Post! (0)

Anonymous Coward | more than 6 years ago | (#20429193)

* biggest tits *

Re:Efficient Post! (1)

try_anything (880404) | more than 6 years ago | (#20431481)

"Obviously ideally you would be using all your kit at 95% capacity all the time"

Yep, if your developers aren't working 24 hours a day, you need to outsource half of them to the other side of the world. That way you get your money's worth out of the development servers.

Re:Efficient Post! (4, Insightful)

Bert64 (520050) | more than 6 years ago | (#20428507)

Most servers spend a lot of time idle, often far more time idle than busy...
You don't buy a server that is just barely fast enough for your workload; you over-spec so that it can easily handle spikes in load and allow for future growth.
Also, many business operations have busy hours and quiet hours, for instance internal servers at a company will usually only see much load during working hours.

Depends on the kind of server (2, Insightful)

Sycraft-fu (314770) | more than 6 years ago | (#20428733)

A file server or webserver doing static pages, sure. A computation server or a server doing lots of dynamic content, not so much. A more useful benchmark would be to measure the actual loads for various tasks, then see how they perform for that. Say, "If you have a server doing X, this is what you can expect from these processors." Servers aren't a "one size fits all" kind of deal. I agree idle efficiency is something worth considering, but let's not pretend like all servers just idle. Also, I know many places are like us in that the more a server idles, the older the server that tends to go there. We don't have much acting as our LDAP servers, but then we don't need it. However, our computation servers are rather powerful, and loaded almost 24 hours a day.

Indeed. Screw the CPU. Benchmark the vendor. (1)

Colin Smith (2679) | more than 6 years ago | (#20430295)

Get numbers for actual server product lines. As another poster has pointed out, the PSU, case design, RAM configuration, disk config can all make a difference to power consumption.

So, benchmark the whole system. And don't bother with MIPS or FLOPS; they're arbitrary and don't allow you to compare differing architectures. So give us SPECmarks per watt or TPC-? per watt as well as per dollar.

Then you can simply choose a particular make/model based on requirements.
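As a rough illustration of that kind of whole-system comparison, a minimal sketch; the scores, wattages, and prices are placeholders, not measurements from the white paper:

    # Rank complete systems by performance per watt and per dollar.
    # The benchmark scores, wall wattages, and prices are invented placeholders.
    systems = [
        {"name": "Vendor A 1U", "score": 100.0, "watts": 340, "price": 4170},
        {"name": "Vendor B 1U", "score": 95.0, "watts": 290, "price": 2800},
    ]

    for s in systems:
        s["perf_per_watt"] = s["score"] / s["watts"]
        s["perf_per_dollar"] = s["score"] / s["price"]

    for metric in ("perf_per_watt", "perf_per_dollar"):
        best = max(systems, key=lambda s: s[metric])
        print(f"Best {metric}: {best['name']} ({best[metric]:.4f})")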

 

Re:Efficient Post! (0)

Anonymous Coward | more than 6 years ago | (#20429173)

Mods: no, that was not insightful. It was just plain stupid!

In other news (0, Offtopic)

weaponx86 (1112757) | more than 6 years ago | (#20428417)

And in idle states -- that is, when stopped at a red light -- Ford consistently creamed Ferrari."

Re:In other news (2, Interesting)

Xybre (527810) | more than 6 years ago | (#20428717)

In gas usage? Probably. Which matters, especially if you do a lot of city or stop-and-go driving. ;)

No matter.... (3, Insightful)

hurting now (967633) | more than 6 years ago | (#20428421)

Even if it's not cut and dry, this is EXCELLENT for the CPU industry. We need to see competition between the manufacturers.

Don't let that get lost in the arguments over which is better or what have you. Continued improvements and development benefit everyone.

Just for the record (1)

Tau Neutrino (76206) | more than 6 years ago | (#20428553)

This applies to the original post as well.

The idiomatic expression is cut and dried. It means ready-made, predetermined and not changeable. For example, "The procedure is not quite cut and dried. There's definitely room for improvisation."

It originally referred to herbs for sale in a shop, as opposed to fresh, growing herbs.

Re:Just for the record (0)

Anonymous Coward | more than 6 years ago | (#20428979)

No, really, it isn't. The idiomatic expression is that it is "a cut and dry example" or that this "is cut and dry." The use here is that of an adjective. Maybe in the part of the world you are from people use the verb phrase, but around here people use the adjectival form pretty much exclusively. It is perfectly fine to use the phrase referring to the state of the example rather than referring to the process of drying.

As for the testing, these tests, while important, are far from conclusive, and are pretty well tied up in the methodology. Real-world results will vary, as what specifically is driving the processor use can make a difference, as can the types of connections.

Re:Just for the record (1)

nuzak (959558) | more than 6 years ago | (#20429061)

> It originally referred to herbs for sale in a shop, as opposed to fresh, growing herbs.

I'm under the impression that it had to do with firewood -- you have to cut it and dry it before burning it. Chopping firewood seems like a far more universal activity of the time. But it certainly got applied to many other things in time, all with the same connotation of convenience, suitability, and uniformity.

It's amazing, and kind of depressing, how many "word origins" sites only serve to repeat long-debunked urban legends.

Boy, what a link search (5, Informative)

Anonymous Coward | more than 6 years ago | (#20428439)

Here [worlds-fastest.com] is the whitepaper, instead of the summaries.

MOD PARENT UP (3, Informative)

Bill Dimm (463823) | more than 6 years ago | (#20428631)

The submitted article is terrible. The full paper is much more informative. For example, the full paper gives the system specs (both systems at 3.0GHz) and shows that the Opteron system is much cheaper ($2800 vs. $4170 for 2GB configuration).

Re:MOD PARENT UP (-1, Troll)

geekoid (135745) | more than 6 years ago | (#20428847)

except price is irrelevant when just doing a performance comparison.
It's only needed for a price/performance comparison.

OTOH, don't let me stand in the way of your fan-boyism.
 

Re:MOD PARENT UP (2, Informative)

Bill Dimm (463823) | more than 6 years ago | (#20429051)

"except price is irrelevant when just doing a performance comparison."

As I said, the full paper is much more informative. You may consider that extra information to be irrelevant, but that doesn't change the fact that there is a lot of info in the full paper that the submitted article doesn't even hint at. The paper, by the way, focuses on power efficiency, not performance. If people are looking at power efficiency because they want to save money on electricity (there may be other reasons to consider it, of course), then the fact that the systems themselves have very different prices seems pretty relevant to me.

"OTOH, don't let me stand in the way of your fan-boyism."

75% of the computers I've bought have been Intel based. Give it a rest.

Re:MOD PARENT UP (1)

MojoStan (776183) | more than 6 years ago | (#20430553)

"the full paper gives the system specs (both systems at 3.0GHz)"

Unfortunately, the white paper doesn't say if the Xeon 5160s they benchmarked are from the relatively new G stepping. The new G stepping cuts idle power consumption by at least 30W for two Xeon 5160s. The Tech Report reported this a few weeks ago: New Xeons bring dramatically lower idle power [techreport.com].

30 watts is a very significant difference, but I'm not sure if it would make up for those power-sucking FB-DIMMs.

Re:MOD PARENT UP (3, Informative)

born2benchmark (1008349) | more than 6 years ago | (#20432423)

These tests were not run with the new G stepping. If someone can loan me a pair of the new chips for about a week, I will re-run the tests and promptly publish the results. Neal Nelson

What sort of Xeon? (1)

bcmm (768152) | more than 6 years ago | (#20428483)

I am confused about Intel branding. Last time I checked, Xeons were not their most efficient cores. Are these ones based on the Conroe architecture or something?

Re:What sort of Xeon? (3, Informative)

Wesley Felter (138342) | more than 6 years ago | (#20428541)

Xeon 51xx (Woodcrest) is essentially the same as Core 2 Duo (Conroe); it is very power-efficient.

Re:What sort of Xeon? (1)

atrus (73476) | more than 6 years ago | (#20431577)

But you lose some power efficiency from the use of power sucking FB-DIMMS.

SOI beats 65nm (0)

Anonymous Coward | more than 6 years ago | (#20444723)

Because the CoreDuo is 65nm, it is twice as efficient as 90nm technology under dynamic load. The 3.0 GHz Opterons made on the 90nm Windsor core, on the other hand, are not the most efficient AMD offerings; AMD makes 65nm chips only up to 2.6GHz. But when it comes to doing nothing, which is the norm rather than the exception, small geometry does not help enough. Intel chips are made on intrinsic silicon, which has high substrate parasitic currents, both DC and AC. AMD uses SOI, Silicon on Insulator (SiO2), which being an excellent dielectric draws nothing when idle.

I can't wait till AMD comes out with ZRAM [wikipedia.org]

    Petrus

Re:What sort of Xeon? (0)

Anonymous Coward | more than 6 years ago | (#20428579)

The Woodcrest core Xeon is indeed based on the same architecture as the Conroe/Core2.

Horribly Written (1)

Anonymous Coward | more than 6 years ago | (#20428509)

"...in cases where Intel outperformed AMD in power efficiency, the servers were configured with smaller larger memory sizes..."

"...At the maximum throughput, based on transactions per watt hour, the Intel system delivered better power-efficiency..."
This seems to imply that they are measuring throughput in transactions per watt hour, but those units are appropriate for power-efficiency, not throughput. At best, this is unnecessarily confusing.
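For what it's worth, the two metrics are easy to keep apart; a small sketch with invented numbers:

    # Throughput vs. power efficiency -- two different metrics.
    # The transaction count, window length, and wattage are invented examples.
    transactions = 1_200_000   # completed during the measurement window
    seconds = 3600             # length of the window
    avg_watts = 250            # average wall power during the window

    throughput = transactions / seconds        # transactions per second
    watt_hours = avg_watts * seconds / 3600    # energy used, in Wh
    efficiency = transactions / watt_hours     # transactions per watt-hour

    print(f"Throughput: {throughput:.1f} tx/s")
    print(f"Efficiency: {efficiency:.1f} tx/Wh")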

FTFA (4, Funny)

JedaFlain (899703) | more than 6 years ago | (#20428513)

"Further, in cases where Intel outperformed AMD in power efficiency, the servers were configured with smaller larger memory sizes."

It's all so clear dark to me now...

Original Submission (0)

Anonymous Coward | more than 6 years ago | (#20428551)

What the editors removed from the original submission was the submitter pondering how either processor had any power with little or no hair.

sort of useless (1, Insightful)

krog (25663) | more than 6 years ago | (#20428571)

Only a fool would specify an Opteron or a Xeon in a power-critical application. You might as well compare fuel consumption among a group of muscle cars; the very act of comparison indicates that you missed the point entirely.

not really (0)

Anonymous Coward | more than 6 years ago | (#20428613)

If I could get a muscle car that also gets 100mpg, I'd certainly do so, even if just for the money saved in gas. Similarly, the cost for electricity might not be much when you run 1 or 2 machines, but many companies have hundreds of servers, and can save a lot of money by using servers that require less power.

Re:not really (1)

krog (25663) | more than 6 years ago | (#20428681)

Yeah, we'd all like to be driving muscle cars that got 100mpg with zero emissions, but unfortunately we live on planet Earth, subject to reality and all the engineering tradeoffs it entails. If you care about gas mileage, truly care about it, you're going to have to leave that Dodge Charger on the lot. Same with these fine, pedal-to-the-metal CPUs.

Re:sort of useless (2, Insightful)

Applekid (993327) | more than 6 years ago | (#20428621)

Depends on the rest of the specs. If you have a muscle car with more power for less fuel, certainly it's worth noting.

Re:sort of useless (2, Insightful)

geekoid (135745) | more than 6 years ago | (#20428763)

Nope. Muscle cars are about power. If your car has more power and less fuel, you win. If your car has more power and more fuel, you still win.
It's not even worth noting.
Now if you are talking about high performance race cars, then it is pretty important.

Re:sort of useless (1)

smoker2 (750216) | more than 6 years ago | (#20431915)

There is not much point to a "muscle car" if it uses so much fuel that it can only run for 2 seconds, so I would say it IS worth noting!
It has to have a certain amount of fuel economy OR huge ferkin tanks!

Re:sort of useless (2, Informative)

Tinyn (1100891) | more than 6 years ago | (#20428683)

No, the point is people are starting to care about the total power usage of their 500-zillion-server colo facility, where even a 5% reduction in power usage can mean hundreds or thousands of dollars on the power bill.

Re:sort of useless (2, Insightful)

Wesley Felter (138342) | more than 6 years ago | (#20428735)

The point of the study is the relative power efficiency of the two processors, not absolute power efficiency. If you need the performance of an Opteron or Xeon, why wouldn't you choose the more efficient one (all else being equal)?

Re:sort of useless (1)

XaXXon (202882) | more than 6 years ago | (#20428791)

You missed the point. You care about power efficiency in a server because when you get outside being a rinky-dink operation and start designing entire data centers, you realize that there's a huge multiplier on your power consumption. You have to remember that increase in power use causes an increase in heat which requires an increase in cooling requirements. This also increases your generation requirements.

additional cost = power delta * 10,000 (machines) * 2 (for cooling)
                + additional cooling hardware
                + additional generation hardware
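A minimal sketch of that formula; only the 10,000-machine count and the 2x cooling factor come from the post above, while the 20 W delta, electricity price, and zero extra capex are assumptions:

    # Parent's rough formula: power delta * machines * 2 (cooling) + extra hardware.
    # The 20 W delta, $0.10/kWh price, and zero extra capex are assumptions.
    def extra_annual_cost(power_delta_w, machines=10_000, cooling_factor=2.0,
                          price_per_kwh=0.10, extra_capex=0.0):
        kwh_per_year = power_delta_w * machines * cooling_factor * 8760 / 1000.0
        return kwh_per_year * price_per_kwh + extra_capex

    print(f"${extra_annual_cost(20):,.0f} per year, before extra cooling/generation hardware")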

Re:sort of useless (1)

Craig Davison (37723) | more than 6 years ago | (#20428849)

The Opteron HE is AMD's best processor in terms of performance per watt for a given rack or blade unit. Sure, you could theoretically run a server farm of Intel Centrinos, but you would get far less computing speed overall, and a modest savings in power.

Performance per watt per... (3, Insightful)

SanityInAnarchy (655584) | more than 6 years ago | (#20428989)

It's all about performance per watt. Well, and other considerations, like how much the hardware costs up front, and how much physical space it will require.

The bottom line is: You want to spend your money in the most efficient way possible.

If you have two potential architectures, and one offers more performance per watt, then ignoring up front hardware costs, it's cheaper to run the one that costs you less power. That's a bit different than suggesting they just use a bunch of laptop CPUs.

Re:sort of useless (1)

nuzak (959558) | more than 6 years ago | (#20429083)

> Only a fool would specify an Opteron or a Xeon in a power-critical application.

Google has a power bill slightly higher than most people's home PC's. They don't run their bricks on ARM, do they? Any company with a big data center wants to see its electric bill go down.

Neal Nelson? (0)

Anonymous Coward | more than 6 years ago | (#20428583)

Mike Nelson is funnier.

What's an "In Test"? (-1, Troll)

Anonymous Coward | more than 6 years ago | (#20428607)

Commas, motherfucker. Do you speak it?

Tests show xeon performs equal to opteron (5, Insightful)

edxwelch (600979) | more than 6 years ago | (#20428655)

Actually, if you look at the raw test data (rather than the conclusions) you will see that both servers performed nearly equally. The Xeon does slightly better on some tests, while the Opteron does better on others. In most tests the results are about the same (within a 5% difference).

Can we actually see the damn test config (2, Insightful)

arivanov (12034) | more than 6 years ago | (#20428679)

Amazingly skimpy article. No effing data whatsoever.

I can bet a case of beer that this was run in a standard server config under Winhoze Server 2003. These are the results you more or less expect in that case.

If that is the case, neither the Opteron nor the Xeon utilises CPU frequency scaling, as there is no OS support. If you use CPU frequency scaling under, let's say, current RHEL or Debian, the idle and IO efficiency picture tends to reverse, because AMD is still not as good at this as Intel. In fact it is not even supported on many server BIOSes/motherboards.

As a result, even if supported (and it usually isn't), AMD power utilisation at reduced frequency in idle is higher than that of a Xeon system, which consumes nearly nothing when you slam it down to 250MHz. If the OS drops and ramps up the CPU frequency correctly, Intel should win on idle and IO-only benchmarks.

Not that it matters in the slightest as AMD will cream it on most real life loads anyway due to better memory and IO bandwidth.

Re:Can we actually see the damn test config (1)

Bill Dimm (463823) | more than 6 years ago | (#20428767)

"Amazingly skimpy article. No effing data whatsoever."

No argument there.

"I can bet a case of beer that this was run in a standard server config under Winhoze Server 2003"

What kind of beer? The full paper [worlds-fastest.com] says they were running 64-bit SUSE Linux Enterprise Server 10.

Re:Can we actually see the damn test config (1)

arivanov (12034) | more than 6 years ago | (#20431999)

They still ran it like Windoze. They did not use a single one of the Linux power control options and tunables, leaving everything at defaults. This is not how you run a power-efficient installation. There are plenty of tunables under cpufreq, and some less relevant ACPI stuff, that make up to a 70% power consumption difference on a 1U server. They touched none of them.

RTFA (4, Informative)

Wesley Felter (138342) | more than 6 years ago | (#20428771)

http://www.worlds-fastest.com/d.pdf/wfw991.pdf [worlds-fastest.com]

(Granted, it was buried several links deep.)

The article does not mention it, but SLES 10 enables cpufreq and the ondemand governor by default.

AMD power utilisation with reduced frequency in idle is higher than that of a Xeon system which consumes nearly nothing when you slam it down to 250MHz.

Uh, the lowest frequency of the Xeon 5160 is 2GHz.

Re:RTFA (1)

arivanov (12034) | more than 6 years ago | (#20431987)

Uh, the lowest frequency of the Xeon 5160 is 2GHz.

Utter bullshit. That is the base frequency. The lowest frequency adjustable through the cpufreq standard P4 runtime frequency interface is 200MHz or 256MHz. If the base is 2GHz it is 256MHz (it is usually in 8 equal steps).

To see the frequencies:

  • modprobe p4_clockmod
  • modprobe cpufreq_ondemand

To enable dynamic scaling (via kernel):

  • echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

Watch either /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq or the "cpu MHz" part of /proc/cpuinfo change as the system load changes. For a Xeon at the lowest frequency in the table the power consumption is under 10W. AMD currently cannot beat that; their power vs. frequency curve is not that steep. This all assumes the motherboard manufacturer has not turned off the interface in the first place, which is quite common for AMD SMP motherboards.
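If you would rather watch those sysfs values without a shell loop, a quick sketch; it uses the standard Linux cpufreq paths for cpu0, so adjust the CPU number to taste:

    # Poll the current cpufreq governor and frequency once per second (Ctrl-C to stop).
    # Reads the standard Linux cpufreq sysfs interface for cpu0.
    import time

    BASE = "/sys/devices/system/cpu/cpu0/cpufreq"

    def read(name):
        with open(f"{BASE}/{name}") as f:
            return f.read().strip()

    while True:
        print(f"governor={read('scaling_governor')} freq={read('scaling_cur_freq')} kHz")
        time.sleep(1)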

I just saw the article. While they did not run Winhoze, they still did not know how to run a power-efficient server installation under Linux (or BSD for that matter). They ran it like Winhoze. They installed with the belief that the OS itself has no power control and all you need to tweak is the BIOS. They tweaked all kinds of shit which has very little bearing on the system power consumption and performance. Classic case of pseudo-tweakers who have no clue whatsoever about how to set up a server.

Re:RTFA (1)

born2benchmark (1008349) | more than 6 years ago | (#20433243)

Your statement "They tweaked all kinds of shit ...." is incorrect. Appendix B in the white paper lists the two changes that we made to the Apache2 configuration files. One increased the number of user sessions and the other turned off logging. As noted in the text of the white paper we also set the BIOS fan speed control to automatic. I hardly think that these changes can be accurately described as tweaking "all kinds of shit". If you will send me a list of what you consider to be the proper tweaking changes for both the Xeon and the Opteron I will run a partial set of tests and publish: 1) Xeon before your changes compared to Xeon after your changes, 2) Operon before your changes compared to Opteron after your changes, and 3) Xeon after your changes compared to Opteron after your changes. Neal Nelson

Re:RTFA (1)

arivanov (12034) | more than 6 years ago | (#20433731)

1. They claim to be showing power vs. performance statistics, including for idle. If you do so, you need to know the factors affecting this for the OS used. They have shown that they know only the ones relevant under Winhoze.

2. They have not configured the system for optimum power vs. performance, neither for idle, nor for IO load, nor for varied load. In all of these cases you can improve power consumption and heat produced by anything from 30% to 70%. Instead of that they are scratching their testicles by tweaking settings that yield a couple of percent. Further to this, keeping the power dissipation in a server much lower also improves MTBF by several times. Knowing this and knowing how to do it is an essential skill for a server admin (I used to start interviews with this and march people out if they claimed to _know_ and _understand_ x86 hardware without knowing it).

Overall, a classic example of lack of clue. For reference, the difference in power consumption at the lowest frequency compared to the highest in some server designs (Intel OEM boards come to mind here) may be as much as 10 times. Compared to that, all other system settings yield crumbs that are not worth picking.

Re:RTFA (1)

born2benchmark (1008349) | more than 6 years ago | (#20434445)

I read your reply but I do not see any recommended settings. Can you provide some link to a "howto" or some published paper where this information is provided for the Xeon and the Opteron? Thanks, Neal Nelson

Re:RTFA (1)

arivanov (12034) | more than 6 years ago | (#20434723)

Go a couple of posts up the chain. There is a post by me that says how to do it in the simplest form using a kernel ondemand governor (no need to repeat it) including actual commands.

This is good enough for 99% of typical enterprise server loads, including nearly any file serving, webserving, etc. The only area where you may find this approach problematic is cases where the ramp-up from 0 to 100% load is not stepwise but nearly instantaneous, and the latency of the ramp-up is critical. There are a few applications like this out there (not many).

The reason for this limitation is that the latency for changing CPU frequency is in the ms range. It is not by any means instantaneous (and the governor may also introduce extra latency to avoid oscillating between frequencies). This usually does not matter for file serving requests or simple webserving requests; in that case the load is proportional to the number of connections, and these do not ramp up from 0 to max nearly instantaneously even if you are getting slashdotted. This also does not matter if you have complex CPU-intensive requests, like a very complex web-front-ended transaction or a CPU-intensive SQL query. It takes ms (if not s) to complete it, so ramping up the frequency in the meantime is more or less OK, and the overall transaction time with and without the ramp-up ends up being comparable.

You can further improve on this using userspace daemons like cpufreqd, which can run complex policies. Frankly, I used to use them only before the ondemand governor became stable enough and have not needed them since. They make better use of the interim frequency steps between minimum and maximum and will provide a smoother ramp-up curve.

While the tools are out there and they are part of the OS nowadays, there is no one-size-fits-all here. It is a good idea to start with the kernel ondemand governor and benchmark. If you do not like the results, try writing a policy for one of the userland daemons and benchmark again.

Personally, I would recommend using this interface on any P4- or Core-based CPU. There it rocks. Pentium M is a bit of a special case, as most of them usually come on motherboards that also support voltage/frequency alteration. While the drivers will whinge that you can achieve better results with those, the latency of the change is much bigger, so I would stay away from them. From the non-Intel CPUs, Via has a similar interface; at least at some point it was f***ed up very badly by having to reprogram half of the PCI bus to change the frequencies, and there are kernels where it is outright disabled. Transmeta often has bad frequency tables with only one frequency in them, so there it is quite useless. AMD also has its dose of bugs: there is a significant amount of errata, especially for SMP motherboards, and in many motherboards the interface is outright disabled and cannot be re-enabled (Tyan Thunder comes to mind). And so on. Your mileage may vary; you have to try.

Re:RTFA (1)

born2benchmark (1008349) | more than 6 years ago | (#20436469)

I checked both the Xeon and Opteron servers, and just as Wesley said, the ondemand governor was already set on both boxes. Neal Nelson

Re:RTFA (1)

Wesley Felter (138342) | more than 6 years ago | (#20434883)

Clock modulation makes the processor slower but not more efficient, so I don't recommend it. I think the results from these benchmarks would actually be worse if clock modulation were used. I wish Linux would go ahead and remove the p4_clockmod driver so that people like you will stop making their systems less efficient.

For a Xeon you want to use EIST instead of clock modulation; the proper driver is speedstep_centrino or acpi-cpufreq (depending on kernel version), and SLES 10 loads this driver automatically, so there's no need to tweak settings.

Re:Can we actually see the damn test config (0)

Anonymous Coward | more than 6 years ago | (#20428933)

Actually, there's no Xeon that runs at 250MHz. The smallest multiplier is 6, so on a 333MHz FSB you end up with ~2GHz. Anyway, frequency doesn't matter that much; voltage is much more important for saving (or wasting) power.

Re:Can we actually see the damn test config (1, Informative)

Anonymous Coward | more than 6 years ago | (#20429159)

Actually, a close look at the tests shows that they got the AMD to cycle down under no-load conditions, but couldn't get the Intel chip to do the same.

As you said, this probably has more to do with the OS, Motherboard, and BIOS than the chip being used.

cut and DRIED. (0, Troll)

Radon360 (951529) | more than 6 years ago | (#20428731)

If you're going to use a clichéd expression, at least try to use the correct tense [wsu.edu] .

cut and dry - wrong
cut and dried - correct

---

Please bitchslap the next person you encounter that writes the phrase "for all intensive purposes," thank you.

Re:cut and DRIED. (0)

Anonymous Coward | more than 6 years ago | (#20428799)

Come on—your link specifically notes that the "cut and dry" form of the idiom is listed in the OED. Language evolves; if the language you're willing to accept evolves slower than that listed in the OED, you're probably doing something wrong.

Re:cut and DRIED. (0)

Anonymous Coward | more than 6 years ago | (#20428851)

Did you bother to read your link? Cut and dry is acceptable. The etymology of the phrase relates to the practice of cutting and drying herbs. In which case the "cut and dr(y|ied)" in "cut and dr(y|ied) herbs" is adjectival and the whole refers to herbs that have been cut and are dry (equivalently, have been dried).

Re:cut and DRIED. (1)

Radon360 (951529) | more than 6 years ago | (#20428897)

Yes, I did read my link. Why don't you do the same? Go hover your cursor over the top of each of the images.

DUH. (0)

Anonymous Coward | more than 6 years ago | (#20428863)

This is an OBVIOUS test result. Intel boards use FB-DIMMs, which almost always REQUIRE heatsinks and, in large numbers, maybe even active cooling. AMD boards can run standard desktop DDR2/3.

AMD will always be greener if you stack up the RAM as this test has.

If AMD ran FB-DIMMs as well, then I suspect Intel would win.

Something I've noticed... (4, Informative)

NerveGas (168686) | more than 6 years ago | (#20429059)


If you fully load them down, my X2s use nearly as much as the Core2 systems - but when lightly loaded, my experience mirrors that of the article: the X2 systems use significantly less power.

In our call center, we built a large batch of X2-based systems - nothing too fancy, just an X2/3800, two gigs of memory, a 250-gig drive, a DVD burner, a 6200tc video card, and 19" LCD monitors. The cases and power supplies were pretty cheap - I think $35 for the case and a "400-watt" power supply. (Yes, the quotes are there for a reason.)

In order to size out the UPS units, we broke out the old, trusty Kill-A-Watt. Logging into a PDC server, browsing the web, checking email, etc., then logging out, the peak draw for one machine and monitor together was 140 watts, with the load *most* of the time at 80-100 watts. Those are some spankily low numbers, especially when you consider that the monitor's contribution was probably 25-40 watts.

And, as we speak, I have a dual-socket, dual-core Opteron with a 15K SCSI RAID array and 8 gigs running just a few feet away from me, with 4 instances of Prime95 running. The Kill-A-Watt says 296 watts with all of that going on. This is going to replace an old 4x700 MHz Xeon server which draws 500-700 watts. The power factor, however, is just 0.7 - I really need a better power supply in there.
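On the power-factor point, the VA number is what matters when sizing the UPS. A small sketch using the 296 W and 0.7 figures from the post above; the 0.99 comparison point is an assumed value for a typical active-PFC supply:

    # Apparent power for UPS sizing: VA = real watts / power factor.
    # 296 W and PF 0.7 are the figures quoted above; PF 0.99 is assumed
    # for a decent active-PFC supply, for comparison.
    real_watts = 296
    print(f"{real_watts / 0.7:.0f} VA with the current supply (PF 0.7)")
    print(f"{real_watts / 0.99:.0f} VA with a PF 0.99 supply")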

Re:Something I've noticed... (0)

Anonymous Coward | more than 6 years ago | (#20430291)

"This is going to replace an old 4x700 MHz Xeon server which draws 500-700 watts."

Keyword there is "old". 700 MHz? What's that? PIII-type architecture or P4-Netburst? The new Xeons are more similar to (in some cases, simply rebranded) Core 2 Duo Conroes, with much lower power usage.

Just because you didn't make that at all clear....

Re:Something I've noticed... (1)

NerveGas (168686) | more than 6 years ago | (#20430463)

It's the old P3 architecture. I wasn't trying to imply that current C2D-based Xeons use that kind of power. Sorry for the confusion.

I don't recall exact numbers, but my C2D and new Xeons definitely use more power at idle and low loads. I don't have apples-to-apples comparisons, though. :-(

When Will They Learn (4, Informative)

jonesy16 (595988) | more than 6 years ago | (#20429163)

Over and over again people try and compare the efficiencies between two "seemingly" identical servers / machines. But truly, how can you declare a winner (and base it on something like a 5% efficiency margin) when the two machines are using different power supplies? A 600 Watt for the Intel, 500 Watt for the AMD. I can't find those models listed on Delta's website at quick glance, but it'd be a stretch to imagine that two different power supplies have the exact same efficiency curves. I mean, I'd believe if they were accurate to within maybe 3%, so now we're arguing over whether or not Intel and AMD are more than 2% different in efficiency? Come on people. The whitepaper does say they assume there might be a 1% difference between the two power supplies, but that's based on "eyeballing" the efficiency curves.

We know that Intel takes a hit with FB-DIMM memory especially as you add more memory modules.

Another inconsistency appears to be related to the case design, where the cases for the Intel machines appeared to be providing inadequate cooling for the memory modules, causing the system management controller to bump up fan speed considerably. So now we're comparing two systems with different power supplies and with different requirements for cooling, which may or may not be related to the actual architecture but may be impacted by a design consideration made by the case manufacturer. How would these results change with different power supplies or a different case? Are the differences the same in a 2U case? A tower? Does it get worse? Better? I know that our Mac Pros NEVER speed up the fans above the 500/600 RPM that they bottom out at.

As noted by others, the paper is completely devoid of any discussion regarding CPU frequency / voltage scaling that may or may not be handled by the BIOS or Linux resident programs (cpuspeed daemon). It's possible they haven't even checked for it. As our company has both Intel and AMD linux boxes, I can testify that linux is very sensitive to motherboard/cpu combinations when it comes to cpu scaling and it's "possible" that this could be playing a MAJOR role in the idle performance values. It'd be nice to see it addressed.

Lastly, there's no discussion as to the optimizations made to the software being run on each of the boxes. Is the code compiled for each architecture individually taking into account support for 3DNow / SSE instructions, cache sizes, etc? Obviously more efficient or less efficient code execution would have a MAJOR impact on these studies, enough so that companies usually spend a large amount of time playing with compiler options to get the best performance on a given architecture. And when you're arguing over performance comparisons in the sub 20% difference arena, code efficiency should be addressed, especially if it's not a big commercial package that "everyone" in the industry would be using. Anyhoo, just my thoughts.
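To illustrate how much the supplies alone can move the needle, a minimal sketch; the DC load and the efficiency figures are invented, not taken from the white paper:

    # Wall-socket power for the same internal (DC) load through two supplies
    # of slightly different efficiency. All numbers are invented for illustration.
    def wall_watts(dc_load_w, psu_efficiency):
        return dc_load_w / psu_efficiency

    same_dc_load = 200.0  # identical work going on inside both boxes

    print(f"80% efficient PSU: {wall_watts(same_dc_load, 0.80):.1f} W at the wall")
    print(f"83% efficient PSU: {wall_watts(same_dc_load, 0.83):.1f} W at the wall")

A 3-point efficiency gap between the supplies shows up as roughly a 3.6% difference at the wall, which is on the same order as some of the margins being attributed to the CPUs.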

Mod back up (2, Insightful)

Wesley Felter (138342) | more than 6 years ago | (#20429429)

I don't fully agree with the parent post, but it's not a troll. Some of these are legitimate issues.

there's no discussion as to the optimizations made to the software being run on each of the boxes. Is the code compiled for each architecture individually taking into account support for 3DNow / SSE instructions, cache sizes, etc? Obviously more efficient or less efficient code execution would have a MAJOR impact on these studies, enough so that companies usually spend a large amount of time playing with compiler options to get the best performance on a given architecture.

In the real world, people use the binaries that are provided by the distro, which is also what was done in this test. Apache and MySQL are not particularly amenable to compiler optimization anyway.

Nope (0)

Anonymous Coward | more than 6 years ago | (#20431911)

They are showing an average database workload. MySQL can't take advantage of SSE/3DNow instructions anyway. As far as the power supply goes - who cares? They're measuring the power used at the wall. It's all factored into the tests.

AMD machines simply use less power in most situations. I'm sorry it does not agree with Intel marketing literature, but how many times does it have to be proven?

Re:Nope (1)

Vlad_the_Inhaler (32958) | more than 6 years ago | (#20432013)

The GP's comment on the Power Supply is spot on. Different power supplies behave differently under different loads (doh!). Some are more efficient when the load is low, others are better under high loads.

Where AMD has an advantage is having the memory controller in the processor.

Must be said... (1)

Araxen (561411) | more than 6 years ago | (#20429219)

Imagine a Beowulf cluster of those!

Re:Must be said... (0)

Anonymous Coward | more than 6 years ago | (#20432841)

>>This just in! AMD is more efficient than Intel when doing nothing!

>Imagine a Beowulf cluster of those!

Look at the government.

Too bad... (0)

Anonymous Coward | more than 6 years ago | (#20429453)

In many places, efficiency means little when performance means more.

Love how AMD is trying to make it seem like they have an advantage, when they lost the processor chip war fair and square.

AMD better than Intel? hmm... (0)

TheSpengo (1148351) | more than 6 years ago | (#20429673)

Considering how Intel is raping AMD in the mobile and desktop market, I'm not sure if I can believe these 'tests.' AMD really has nothing to compete with Intel's Conroe CPUs. They had their time as king of the hill when the Athlon 64 was where it was at. Nowadays, though, Athlon X2s really suck when compared to Core 2 Duos. The same goes for Opteron vs. Conroe Xeon, I expect.

Re:AMD better than Intel? hmm... (1, Informative)

Anonymous Coward | more than 6 years ago | (#20430245)

Don't expect, though. The Core 2 Duo is ahead of the Athlon 64 X2 in gaming performance... but when you go multi-socket, Opteron servers still spank similar Xeon systems, thanks to the integrated memory controllers and the superior HyperTransport interconnect. In enterprise or scientific use, data traffic can be the crucial thing to overall performance -- more so than actual ALU horsepower -- and that's what AMD got right many moons ago with the very first K8 'Hammers'.

Re:AMD better than Intel? hmm... (3, Informative)

MojoStan (776183) | more than 6 years ago | (#20430425)

"Considering how Intel is raping AMD in the mobile and desktop market, I'm not sure if I can believe these 'tests.' AMD really has nothing to compete with Intel's conroe CPUs."

"Athlon x2's really suck when compared to Core 2 Duos. The same goes for Opteron vs. conroe Xeon I expect."

It's all about the FB-DIMMs. Woodcrest (dual-core) and Clovertown (quad-core) Xeons are probably better and more power-efficient than Opterons in most "real world" dual-processor server/workstation benchmarks. However, a computer is much more than just a CPU. The only widely available chipsets for these Xeons use FB-DIMMs, which suck way more power than the standard DDR2 used in the Opteron chipsets. Ars Technica had a good article about this last month: AMD vs. Intel: power efficiency in the server room rests on RAM [arstechnica.com]

Re:AMD better than Intel? hmm... (1)

born2benchmark (1008349) | more than 6 years ago | (#20432601)

Stanley, you say "The only widely available chipsets for these Xeons use FB-DIMMs." I want to do a comparison test of a Xeon with DDR-II against a Xeon with FB-DIMMs. I did a search for a motherboard with the Xeon Woodcrest and DDR-II and could not find one anywhere. Do you know if anyone makes a motherboard for the Xeon Woodcrest with DDR-II? If so, who? Thanks, Neal Nelson

Servers are underutilized (1)

OrangeTide (124937) | more than 6 years ago | (#20429745)

Almost all servers are extremely underutilized according to the research I've seen (which is why companies like VMware say they can sell you expensive virtualization products, to better utilize your equipment). If you're lucky, a server might have some bursty load where the big CPUs you put on it are taxed for a significant amount of time, but most people simply average around 10 to 30% utilization during business hours (and that's not even counting the mostly idle 14 or so hours a day outside them).

Seems like good efficiency on the lower end of your operating range could have some value to a customer. I would like to see a real cost analysis, though; most AMD power comparisons I've seen are just raw data that is hard to make sense of.
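A back-of-the-envelope version of the cost analysis being asked for; the utilization figures, per-box wattage, and electricity price are all assumptions, and it ignores the extra draw on the consolidated hosts:

    # Rough consolidation estimate for underutilized servers.
    # Every number here is an assumption; real hosts draw more when loaded harder.
    import math

    servers = 50
    avg_utilization = 0.15      # roughly the 10-30% business-hours figure above
    target_utilization = 0.60   # leave headroom on the consolidated hosts
    watts_per_server = 250
    price_per_kwh = 0.10

    hosts_needed = math.ceil(servers * avg_utilization / target_utilization)
    saved_kwh = (servers - hosts_needed) * watts_per_server * 8760 / 1000.0

    print(f"{hosts_needed} hosts instead of {servers}")
    print(f"~{saved_kwh:,.0f} kWh/yr avoided, ~${saved_kwh * price_per_kwh:,.0f}/yr")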

FB-DIMMS (1)

Joe The Dragon (967727) | more than 6 years ago | (#20430111)

hurt Intel, as they need a lot more power and give off more heat than DDR2 ECC RAM.
And in AMD systems the RAM is linked to EACH CPU's built-in RAM controller, and the CPUs have a better CPU-to-CPU link. Also, the chipsets use less power.

Re:FB-DIMMS (1)

born2benchmark (1008349) | more than 6 years ago | (#20430323)

Is anyone aware of any other published power-efficiency data? It would be pretty easy to plug in a "Kill A Watt" or "Watts Up" device and measure the power at idle. Is there any data for other server configurations? Has anybody compared an Intel "Desktop Server" with DDR-II to a Xeon-based server with FB-DIMMs? Has anybody reported idle power under Windows versus idle power under Linux on the same machine?