
Server Benchmarking Lone Wolf Bites Intel Again

ScuttleMonkey posted about 7 years ago | from the everyone-loves-a-homecourt-ruling dept.


Ian Lamont writes "Neal Nelson, the engineer who conducts independent server benchmarking, has nipped Intel again by reporting that AMD's Opteron chips 'delivered better power efficiency' than Xeon processors. Intel has discounted the findings, claiming that Nelson's methodology 'ignores performance,' but the company may not be able to ignore Nelson for much longer: the Standard Performance Evaluation Corp., a nonprofit company that develops computing benchmarks, is expected to publish a new test suite for comparing server efficiency that Nelson believes will be similar to his own benchmarks that measure server power usage directly from the wall plug."


Great (1, Insightful)

Constantine XVI (880691) | about 7 years ago | (#20512251)

Now if they can get their laptop chips to be more efficient than Intel's, I'll be happy again.

FBDIMM (2, Informative)

RightSaidFred99 (874576) | about 7 years ago | (#20512273)

Yeah yeah, we all know. FBDIMM is a power sucker. FBDIMM is going the way of the dodo before long, though.

AMD also typically has lower idle clock multipliers, so when they're not doing anything, they draw less power. If you have a room full of computers sitting there doing nothing, you'll certainly use less power in that case.
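You can watch that idle down-clocking happen on a Linux box. A minimal sketch in Python, assuming a kernel that exposes the cpufreq sysfs interface (the path below is the standard location, but not every system has the driver loaded):

<ecode>
# Sketch: sample the current CPU clock to see idle down-clocking.
# Assumes the Linux cpufreq sysfs interface is available.
import time

def current_khz(cpu=0):
    path = f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_cur_freq"
    with open(path) as f:
        return int(f.read().strip())

for _ in range(5):
    print(f"cpu0 at {current_khz() / 1000:.0f} MHz")
    time.sleep(1)  # idle: expect the lowest multiplier; under load it ramps up
</ecode>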

Re:FBDIMM (1)

Joe The Dragon (967727) | about 7 years ago | (#20512435)

Not so fast: Intel's next Xeon chipset will use FB-DIMMs in the high-end version with PCIe 2.0 (though not all of the PCIe lanes will be 2.0), and DDR2 ECC in the lower-end one, which has fewer PCIe lanes and no PCIe 2.0.

Re:FBDIMM (2, Insightful)

visualight (468005) | about 7 years ago | (#20512595)

If you have a room full of computers sitting there doing nothing, you'll certainly use less power in that case.

That is what most servers spend most of their time doing - nothing. There are peaks and valleys, sure, but there are *a lot* of idle cycles.

Re:FBDIMM (1)

TheRaven64 (641858) | about 7 years ago | (#20512859)

That's what virtualisation is for. Even if you have all your peaks in the same place, you can save a lot of power by running something like Xen: migrate the virtual machines onto a smaller number of nodes during the troughs and turn off a few machines, then spread them out again for the peaks.
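As a rough illustration of the consolidation idea, here is a hypothetical first-fit-decreasing packer; the load figures are made up, and a real deployment would use Xen's live migration rather than this toy placement logic:

<ecode>
# Sketch: pack VM CPU loads (fractions of one host) onto as few hosts
# as possible, so the rest can be powered off during troughs.
def consolidate(vm_loads, host_capacity=0.8):
    hosts = []  # summed load per powered-on host
    for load in sorted(vm_loads, reverse=True):  # first-fit decreasing
        for i, used in enumerate(hosts):
            if used + load <= host_capacity:
                hosts[i] += load
                break
        else:
            hosts.append(load)  # no room anywhere: power on another host
    return hosts

night = [0.05, 0.10, 0.02, 0.08, 0.12, 0.03]  # trough: mostly idle VMs
day   = [0.45, 0.50, 0.30, 0.40, 0.35, 0.25]  # peak
print(len(consolidate(night)))  # 1 host needed overnight
print(len(consolidate(day)))    # 3 hosts needed at peak
</ecode>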

Re:FBDIMM (3, Insightful)

RingDev (879105) | about 7 years ago | (#20513601)

And the percentage of netadmins who have the time, budget, knowledge, and inclination to do so is right about 0.001%.

I agree that Virtualization is a great solution, but the vast majority of IT shops around the world don't have the knowledge or budget to pull it off these days. Give it another 5-10 years and it'll be the new standard, but right now it just doesn't have the market or education penetration. For the cost of investing in a Xen system and training, most IT shops will be financially better off just paying the extra electric bill.

-Rick

Re:FBDIMM (1)

asifyoucare (302582) | about 7 years ago | (#20519283)

I heard a rumour that servers could actually run more than one application at a time. Imagine that, running (say) 10 applications without needing to run 10 operating systems!

Yeah I know - applications suck and operating systems suck - meaning that virtualisation frequently IS the best option. I just wish that applications and operating systems sucked less.

Re:FBDIMM (1)

jgc7 (910200) | about 7 years ago | (#20516833)

Which is why measuring computations per watt at peak load is not a great indicator of energy efficiency. We typically buy enough hardware to handle a load that rarely happens, so measuring actual energy efficiency is tricky. Since we build out to peak load, absolute performance means fewer total processors. On the other hand, it also matters how much power usage decreases under light load. For instance, a processor that uses 30% of peak power at a 10% load may be more efficient than a processor that uses 60% of peak power under a 10% load, even though the first processor is less efficient at peak load.
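To put numbers on that, here is a quick sketch; the throughput and wattage figures are invented purely to illustrate the crossover:

<ecode>
# Sketch: work-per-watt at light load for two hypothetical servers.
def efficiency(peak_work, peak_watts, load_frac, power_frac):
    """Work per watt at load_frac of peak throughput and power_frac of peak power."""
    return (peak_work * load_frac) / (peak_watts * power_frac)

# A: slower at peak, but drops to 30% of peak power at 10% load.
# B: faster at peak, but still burns 60% of peak power at 10% load.
a = efficiency(peak_work=900,  peak_watts=300, load_frac=0.10, power_frac=0.30)
b = efficiency(peak_work=1000, peak_watts=300, load_frac=0.10, power_frac=0.60)
print(f"A: {a:.2f} units/W, B: {b:.2f} units/W")  # A: 1.00, B: 0.56
# Yet at peak load B wins: 1000/300 vs. 900/300 units/W.
</ecode>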

Re:FBDIMM (3, Informative)

InvalidError (771317) | about 7 years ago | (#20512645)

The original Advanced-Memory-Buffer-based FBDIMMs might be going away next year but Intel has not given up on off-chip memory bridges since they announced plans for AMB2. Instead of having the AMB2 chip on-DIMM, it will be either on multi-DIMM AMB2 risers or on the motherboard.

BTW, AMD also announced plans for off-chip AMB2-like memory bridges with multiple multi-gigabit serial lanes... they called it G3MX: G3 (socket) Memory eXtender.

So, while FBDIMMs may be going away soon, the idea of using external bridges to dump the RAM further away from the CPUs/chipset using serial interfaces is gaining traction - at least in the server space.

Re:FBDIMM (1)

Laxator2 (973549) | about 7 years ago | (#20514693)

Most corporations have large numbers of desktops which are left on 24/7 but they sit idle from 5PM to 9AM.
In such a case idle power becomes an issue. That is, of course, unless the desktops are busy doing their share of work for various botnets.

Re:FBDIMM (1)

ScrewMaster (602015) | about 7 years ago | (#20516757)

If you want to even out the difference between AMD and Intel in terms of server CPU utilization, just post a link to said servers here on Slashdot.


We're three days from the AMD Barcelona launch (1, Funny)

Anonymous Coward | about 7 years ago | (#20512293)

So who cares about those ancient CPUs.

Why all the hate for Intel? (-1, Troll)

Anonymous Coward | about 7 years ago | (#20512295)

Sure, I'm glad there's an AMD to compete with Intel...helps spur development and, most importantly, helps drive prices down.

But time and time again, the AMD chips and BIOSes for AMD chipsets have many, many more flaws than comparable Intel systems. That's just a fact. So, yes, maybe right here and now AMD saves a few percent more power under certain extenuating circumstances, but in the end, who cares? Intel is still where it's at, for my processing dollar. Nobody ever got fired for buying MS? You can say that doubly so for buying Intel.

Re:Why all the hate for Intel? (1)

Reality Master 101 (179095) | about 7 years ago | (#20512615)

Who has ever gotten fired for buying AMD? Your troll makes no sense.

Re:Why all the hate for Intel? (1, Interesting)

jimstapleton (999106) | about 7 years ago | (#20512631)

Yes, that's just a fact.

My question is: is it a true fact?

"Commander Taco is really an onion wearing a fedora doing a jig on top of the Vatican" is a fact, but certainly not a true fact.

Interestingly, AMD chips, in my experience, have been just as good as Intel's. The problem has been motherboard chipsets. These are very rarely produced by AMD, and until nVidia came along and made them, they did suck. Since nVidia started producing them, AMD has been just as stable and reliable a platform as Intel. Actually, if you knew where to look in the VIA chipsets, you could find a few gems pre-nVidia as well. I have an old Tyan Trinity MVP3 with a K6-III on it that is rock solid.

Re:Why all the hate for Intel? (2, Informative)

Vancorps (746090) | about 7 years ago | (#20512789)

All of my Opteron-based servers are rock solid with multiple chipset vendors; the days when that was a problem for AMD are long gone. There is a reason I have to reboot my Xeon servers once a week while my Opteron servers stay up until my maintenance window. They are both configured identically, but the Xeons just aren't as stable. I haven't been able to play with the newer Xeons, only the crappy P4-based ones. I've got some new servers coming, though, so I'll get an update on the stability issue.

Throughout the history of the Opteron, though, stability has never been an issue in my experience. The Athlon had problems like you were describing. There were plenty of Intel and AMD desktop chipsets that were horrible during that time - more of a chipset-maker problem than a CPU-maker problem. In both cases Intel and AMD had their own chipsets out which did work, although Intel motherboards declined sharply in quality around that time too. I remember having a bunch of Xeons that would reboot, and if you were lucky everything would come up okay. Firmware updates came out which gradually improved the issue; I believe it took three firmware updates to get stability to what you would expect from a 24/7 server. It wasn't a problem with the CPU, though.

Re:Why all the hate for Intel? (0)

Anonymous Coward | about 7 years ago | (#20512993)

Why do you have to reboot the Xeon servers once a week? What are the symptoms?

I've run 4 Dell Poweredge servers with 2x3GHz P4 Xeons for over two years. 3 run Linux, one runs Windows Server 2003. They are heavily used running Java/Python web applications, Oracle 10g, and brutally CPU-intensive video transcoding. I've never had to reboot them outside of OS maintenance windows.

Come on, put up or shut up.

Re:Why all the hate for Intel? (1)

Vancorps (746090) | about 7 years ago | (#20513303)

They are web servers for a site seeing millions of users. Over the course of the week they become less and less responsive, so I schedule a reboot late at night preemptively to keep them available. The Opteron web servers stay up until I reboot them willingly for security patching. Both sets are running 32-bit Windows. I don't have any 64-bit capable Xeons yet; I will soon, so we shall see. The servers in question are Dell PowerEdge 1750s with 2x 2.8GHz Xeons, 4GB of RAM, and mirrored OS drives.

Re:Why all the hate for Intel? (0)

Anonymous Coward | about 7 years ago | (#20513267)

How often do you have to reboot servers? Our maintenance window is the last Saturday night of the month. I am responsible for about 70 servers, which are HP ProLiants, and most are Xeons. I would say that maybe once every two or three months, I may have a server reboot outside of the maintenance window (it may reboot itself through the automated server recovery system, or I may do it manually). These are almost all MS servers, with even a Citrix farm on some of them (not the most stable thing). If I had to reboot a server once a week, I'd be calling the company that made that server.

Re:Why all the hate for Intel? (1)

Joe The Dragon (967727) | about 7 years ago | (#20513555)

So you wait 2+ weeks to install the Windows updates?

Re:Why all the hate for Intel? (1)

razorh (853659) | about 7 years ago | (#20514163)

Is that so unusual? Just because a patch fixes one security problem doesn't mean it won't break production code. Sometimes waiting a week or two and maybe doing some testing is worth more than installing every patch immediately.

Re:Why all the hate for Intel? (1)

Vancorps (746090) | about 7 years ago | (#20516129)

Actually I patch about once a month on production web servers that face the Internet, unless there is an IIS patch or something critical. Only ports 80 and 443 face the Internet since my servers are firewalled, so I can get away with extending my patch window. My Windows-based DNS servers are the same way. The recent RPC vulnerability meant jack to me because I don't manage the servers directly over the Internet; I have a point-to-point connection which is totally private, so most things you don't need to patch right away.

My Linux servers have the same patch schedule. It ensures that I'm looking at the server at least once a month. I recently deployed MOM however, so I don't need to monitor my servers physically anymore. I had to remote into a server all of once today.

Also, 99% of Windows updates actually don't require reboots; they merely require certain services to be restarted, and knowing which services allows you to do hot-patching. Of course, I'm moving into a VM environment now, so physical reboots will be pretty much unheard of. My patch window can then shrink significantly by setting up a pilot set of servers with SMS: if the patch goes well, the rest get patched and life goes on.

Gotta love the modern world we live in.
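The hot-patching approach above is easy to script. A minimal sketch; the service name is only an example (IIS's W3SVC), and which services a given patch actually touches has to come from the patch notes:

<ecode>
# Sketch: restart specific Windows services after a patch instead of rebooting.
# "W3SVC" (IIS) is an example; consult the update's documentation for the
# real list of affected services.
import subprocess

def restart_service(name):
    subprocess.run(["net", "stop", name], check=True)
    subprocess.run(["net", "start", name], check=True)

for svc in ["W3SVC"]:
    restart_service(svc)
    print(f"restarted {svc}")
</ecode>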

Re:Why all the hate for Intel? (0)

Anonymous Coward | about 7 years ago | (#20516903)

My Linux servers have the same patch schedule. It ensures that I'm looking at the server at least once a month. I recently deployed MOM however, so I don't need to monitor my servers physically anymore.

You make your mom maintain your servers? Dude, that's throwing the line "your mother doesn't work here" back in their faces! hard core

Re:Why all the hate for Intel? (0)

Anonymous Coward | about 7 years ago | (#20512943)

A fact is always a statement, but a statement isn't always a fact. A statement != a fact.

Re:Why all the hate for Intel? (1)

Chris Burke (6130) | about 7 years ago | (#20513885)

A fact is a statement that can (theoretically) be objectively evaluated as true or false. "AMD chips have stability problems."

As opposed to an opinion, a statement that is only subjectively true or false. "AMD is the best CPU company ever."

The most common usage of "fact" is a statement that has been determined to be true, but the GP was using the other definition to make his point.

Re:Why all the hate for Intel? (0)

Anonymous Coward | about 7 years ago | (#20514403)

A fact is a statement that can (theoretically) be objectively evaluated as true or false. "AMD chips have stability problems."

Wrong. Those are called "statements", or "propositions", or in a specific context, "sentences". Facts are statements that evaluate to true.

Examples: "You are obviously literate" is a statement, but is clearly false. It is not a fact. "You seem unable to use a dictionary" is a fact.

http://dictionary.reference.com/browse/fact [reference.com]
1. something that actually exists; reality; truth: Your fears have no basis in fact.
2. something known to exist or to have happened: Space travel is now a fact.
3. a truth known by actual experience or observation; something known to be true: Scientists gather facts about plant growth.

Re:Why all the hate for Intel? (1)

Chris Burke (6130) | about 7 years ago | (#20515097)

Ah. So I take it you would consider the statement "Dictionary.com is the only thing you need to have a complete understanding of the English language" to be a fact, since you clearly believe it to be true.

I agree that this is a fact, but not a true fact, since truth be told dictionary.com is a good quick reference but an overall shitty dictionary. The wikipedia page for "fact" includes the definition as something which may be true or false, but the citation is for the print version of the Oxford English Dictionary. I'm not paying for a subscription to their online version just to prove to an AC retard that "search results on the internet" is not the same as "something known to be true" or the opposite for that matter.

Re:Why all the hate for Intel? (1)

TapioNuut (615924) | about 7 years ago | (#20519177)

If some "facts" are not to be trusted or not true, they're just that: "facts", not facts.
Granted, Wikipedia (Oxford English Dictionary) tells that the meanings allegation or stipulation have a long history in English. It does not mean that it's really the case in contemporary every-day language.

And actually the Wikipedia (OED) example quote is: "the author's facts are not trustworthy". It would sound silly to say "author's facts are not facts" because it does not emphasize the point which is that the "facts" are not trustworthy. They might not be true.

And all this is even without taking the context into account. Here we are talking about scientific (or technical) facts, and in that context you can not talk about "true facts" or "non-true facts" because there is no such thing as a non-true fact. Fact is a fact is a fact. Fact is something that exists, has existed or something that has "the quality of being actual" (i.e. it is true). Even your own sources tell that this is the case in our context.

http://www.google.com/search?q=define%3A%20fact [google.com]
http://wordnet.princeton.edu/perl/webwn?s=fact [princeton.edu]
http://www.m-w.com/dictionary/fact [m-w.com]

Re:Why all the hate for Intel? (1)

Aranykai (1053846) | about 7 years ago | (#20513059)

Indeed. Pre-nVidia, VIA was the bomb. Of course, in my experience, it was the motherboard itself more so than just the chipset.

I've had Soyo and MSI boards last me literally weeks, while I still have an old KT-series Gigabyte trucking along just fine.

It's about the package, not just the processor.

Re:Why all the hate for Intel? (1)

ThosLives (686517) | about 7 years ago | (#20513195)

I'm pretty sure that for something to be called a 'fact' it has to have the property of being true.

I'm deeply disturbed that there are people out there who think that 'fact' can be used to describe something that isn't true.

words often have more than one definition (1)

brokeninside (34168) | about 7 years ago | (#20513365)

A fact can be ``something that actually exists; reality; truth'' or ``a truth known by actual experience or observation'' but it can also be ``something said to be true or supposed to have happened'' or even ``an actual or alleged event or circumstance''.

Only the first of these definitions points to a meaning where a `fact' is unequivocally and always true.

Re:words often have more than one definition (1)

ThosLives (686517) | about 7 years ago | (#20513501)

I can't seem to find anywhere which defines "fact" as the latter definitions you mentioned. Google [google.com] doesn't think so anyway: all of theirs seem to indicate 'truth' as a prerequisite (one of them says "possible to be evaluated as being true or false," and that's about as close to your definitions as I could get).

Those definitions were out of a dictionary (1)

brokeninside (34168) | about 7 years ago | (#20551781)

Are you really so functionally illiterate that you're unable to use a dictionary? If so, clicky: dictionary.reference.com [reference.com] . Note the usage note cited from the American Heritage Dictionary:

Fact has a long history of usage in the sense "allegation of fact," as in "This tract was distributed to thousands of American teachers, but the facts and the reasoning are wrong" (Albert Shanker). This practice has led to the introduction of the phrases true facts and real facts, as in The true facts of the case may never be known. These usages may occasion qualms among critics who insist that facts can only be true, but the usages are often useful for emphasis.

I also highly recommend taking a trip down to your local library and asking to see the Oxford English Dictionary. If you do, you can see the etymology of various senses of the word. You'll discover that the sense of the word ``Something that is alleged to be, or conceivably might be, a 'fact''' goes back at least to the early eighteenth century.

BULLSHIT (1)

Khyber (864651) | about 7 years ago | (#20518881)

In HP laptops, Intel BIOSes have been getting worse than AMD's as far as diagnosing problems goes. AMD-based HP laptops will give you a POST beep code if something is wrong; Intel-based ones do not. This applies to both the commercial and consumer laptop lines. I know - I spent six months repairing them.

The FB-DIMMS are sucking up alot of power......... (2, Insightful)

Joe The Dragon (967727) | about 7 years ago | (#20512313)

The FB-DIMMs are sucking up a lot of power and giving off a lot of heat. That is bad for Intel, as their chipsets use a lot more power as well, and that looks bad next to an AMD system with cheaper DDR2 ECC RAM.

Intel's new 4P systems with four FSBs, L3 cache in the chipset, and FB-DIMMs may use even more.

AMD systems can have more than one chipset link and more PCIe lanes as well.

Does it matter? (1)

comrade k (787383) | about 7 years ago | (#20512351)

I didn't RTFA, but my question is: are power savings a real necessity? I'd imagine that the answer depends on the size of the server farm. If you only have a few servers, the savings from lower power consumption may be peanuts next to the raw processing power of another processor bought at a similar price. Then, when you take the obsolescence of the processors into consideration, the power savings may be even more negligible.

As the size of the farm scales, however, I'd hazard a guess that the power consumption differences become far more noticeable.

Power usage means heat. (3, Informative)

khasim (1285) | about 7 years ago | (#20512407)

The other side of that is that lowering the power consumption means lowering the heat generated which means lowering the cooling requirements.

And cooling requires electricity too. So by reducing the power usage of one component, you also save money on your cooling costs. It's twice the savings.

Re:Power usage means heat. (1)

InvalidError (771317) | about 7 years ago | (#20512927)

It is far less than twice the savings unless you have a woefully inefficient air conditioner.

Since ACs usually have COPs better than 9, the AC would use less than 25W to offset the heat generation of a 200W system. So the savings from not having the extra heat to pump out in the first place far outweighs (>8:1) the cooling costs themselves.

As far as datacenters and server/render/etc. farms are concerned, though, lower-power and faster units just mean they can pack more units per rack and more racks per room in their current buildings without upgrading their power distribution and cooling systems.
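The arithmetic behind the 8:1 figure above, spelled out (COP and wattage as quoted in the comment):

<ecode>
# Sketch: cooling overhead per watt of server draw, given the COP above.
cop = 9.0             # coefficient of performance of the air conditioner
server_watts = 200.0  # heat the AC has to remove

ac_watts = server_watts / cop    # ~22 W of AC power to pump out 200 W of heat
total = server_watts + ac_watts  # wall power attributable to the server
print(f"AC draw: {ac_watts:.1f} W")                   # 22.2 W
print(f"total multiplier: {total/server_watts:.2f}x") # 1.11x, not 2x
</ecode>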

Re:Power usage means heat. (1)

afidel (530433) | about 7 years ago | (#20513407)

I think the 2:1 power usage is a total system measure, that is the inverter inefficiency, the heat from the inverter and batteries, the power lost in transit, power factor, etc. I think that a typical datacenter uses about 2x the power draw of the servers it houses.

Re:Power usage means heat. (1)

InvalidError (771317) | about 7 years ago | (#20515049)

If you include the heat from rectifiers, inverters and batteries, you are talking about an online UPS. In that case, the power factor should be a non-issue if your online UPS's rectifier front-end is properly balanced and PFC'd - most current models perform at 99+% out of the box. During normal operation, most of the power goes right through from rectifiers to inverters, with nearly no loss in the batteries other than the floating charge and ~5% AC ripple from the three-phase full-wave rectifiers. On top of that, rectifiers and inverters in the 10+kW range are at least 95% efficient, with 97% being more typical. I guesstimate that a datacenter's power distribution is 85-90% efficient under normal operating conditions.

Considering these extra losses does change the balance, but I do not think it would do as badly as to drop below 5:1.
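Chaining those stage efficiencies reproduces the estimate; a quick sketch, where the catch-all "other" factor is an assumption rather than a quoted figure:

<ecode>
# Sketch: end-to-end distribution efficiency from per-stage figures.
rectifier = 0.97
inverter  = 0.97
other     = 0.95  # assumed: transformers, cabling, battery float, etc.

chain = rectifier * inverter * other
print(f"distribution efficiency: {chain:.0%}")  # ~89%, inside the 85-90% guess
print(f"overhead watts per server watt: {1/chain - 1:.2f}")  # ~0.12
</ecode>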

Re:Does it matter? (3, Funny)

moderatorrater (1095745) | about 7 years ago | (#20512581)

Let's not forget the environmental factor of using less electricity. More electricity means more carbon, and even if it doesn't matter to your company, it matters to other companies that your company will deal with.

Re:Does it matter? (-1, Troll)

everphilski (877346) | about 7 years ago | (#20512837)

Al Gore, Inc.

Re:Does it matter? (4, Informative)

Azarael (896715) | about 7 years ago | (#20512697)

It's fairly common for third-party data centers to charge based on power consumption. If you want to rent space to have a few machines hosted, you can save a bunch of money by building servers that aren't power hogs. Any data center worth hosting at pays very close attention to how much power it has available, so even in the event of power loss, they have an alternate circuit to draw from and/or sufficient emergency generator power.

Re:Does it matter? (0)

Anonymous Coward | about 7 years ago | (#20512863)

For my thesis work, we looked at power saving with these larger clusters. I can't speak for data warehousing or web farms but, IIRC, for computing clusters (and I'm talking about VERY LARGE SCALE here), the energy budgets are approaching millions of dollars per year. That's REAL money there...so energy savings CAN be a useful selling point (or, more appropriately, unit computation per joule).

Re:Does it matter? (1)

AJWM (19027) | about 7 years ago | (#20513975)

Yes, power savings are necessary. Lower power used by the logic also means lower power needed by the cooling fans, which overall translates to less heat put out by the box, which means less cooling needed for your data center.

This is especially critical in older data centers. I know of one where they can't put more than a couple of blade enclosures in a rack because the DC wasn't designed to put that much power and cooling into one spot. Physical space is no longer the limitation.

Since there's always a demand for more computer power (and more storage), more efficient equipment means they can pack more of it in without having to build a new datacenter.

Re:Does it matter? (1)

VENONA (902751) | about 7 years ago | (#20515727)

It's difficult to answer your question, because you haven't RTFA, which talks about primarily-CPU vs. primarily-disk workloads, power consumption at idle, etc.

Overall, data center power consumption is a big deal. It's one of the main reasons that some corporations are after virtualization. It's one of the main reasons that Google is locating a datacenter in the Columbia Gorge.
http://www.iht.com/articles/2006/06/13/business/search.php [iht.com]

While you were 'hazarding to guess', and 'imagining' and thinking various things 'may be', you probably could have RTFA. At that point, the need to ask the question may have been obviated. If not, you would have been able to better frame the question, and possibly gotten an answer that actually *supplied you with useful information*.

Discussions are enriched when participants actually know what's being discussed. When a participant *doesn't* know what's actually being discussed, they're mostly adding entropy.

She Cannot Be Fooled (1)

ObsessiveMathsFreak (773371) | about 7 years ago | (#20512367)

...a new test suite for comparing server efficiency that Nelson believes will be similar to his own benchmarks that measure server power usage directly from the wall plug.
OK, I'm intrigued. What kind of fudge do the current efficiency tests consist of? Measuring generated heat with a thermometer?

Re:She Cannot Be Fooled (1)

XaXXon (202882) | about 7 years ago | (#20512411)

And I don't understand them not saying, "Well, we'll need 25% more AMD-based servers, so let's factor those extra machines into the power equation."

I think Intel has a reasonable beef with the test. I'm not an Intel fanboy... except that I think they have better stuff right now in the 2-socket (4 and possibly 8 core) arena.

Re:She Cannot Be Fooled (4, Interesting)

Vancorps (746090) | about 7 years ago | (#20512609)

Except that there is a very small performance difference in the 32-bit world and a non-existent one in the 64-bit world. The Opteron actually quite commonly wins in the 64-bit world, much like the Athlons do against the Core 2 Duo on the desktop side. Intel has an edge in 32-bit optimization right now, which is why the Core 2 Duo looks so good.

Add 4 and 8 sockets and you've got to be joking, considering Intel's shared bus. The cores are choked for memory throughput at that point, while the Opterons just perform better and better as they scale. In a 2-socket system they compete very well. In a 4-socket system the Opteron is by far the superior choice, both in power consumption and performance, especially with 64-bit database, email, and web servers.

Not true... (1)

Junta (36770) | about 7 years ago | (#20513993)

The Opteron's IPC, at least on floating point, is lower than Woodcrest/Clovertown in 64-bit. Note the top500 and the increase in Intel presence as of the Core 2 generation. Barcelona is supposed to either meet or beat Intel's floating-point IPC, but that's yet to be proven publicly. There is at least one significant 64-bit operation where Core 2 creams AMD. I don't know much about other types of instructions in general, though.

I agree, though, that AMD's architecture scales *much* better with socket count and, memory-architecture-wise, blows Intel away.

Re:Not true... (1)

Vancorps (746090) | about 7 years ago | (#20514775)

I was talking specifically about the Opteron vs. the Xeon, not the Athlon vs. the Core 2. The database benches I had done clearly put the Opteron in the winning spot, but Intel has had time to improve. I'm not saying either is a bad choice at this point; there is clearly healthy competition now. My experience with 64-bit Xeon performance was with the initial EM64T offerings, which were not impressive by any means. I was not aware there had been significant improvement in this area.

Of course, that is why we do research every year come acquisition time. Things change. I like AMD personally, but there are times when I buy Xeon because it is best suited for specific tasks, such as some of my video streaming and encoding processes.

Re:Not true... (2, Insightful)

ShapeGSX (865697) | about 7 years ago | (#20516503)

The latest Xeons are all Core 2 derived parts. Your comparison is horribly dated.

Re:She Cannot Be Fooled (3, Insightful)

bockelboy (824282) | about 7 years ago | (#20512703)

These tests *did* factor performance in (well, that's what the tester says; Intel is contesting this claim - you decide whom you believe). In fact, those tests draw the same conclusions as folks I know who recently bought Opteron servers.

The Intel chips have great performance per watt *as a chip*. Perhaps even better than AMD does; I've never measured a chip's power usage.

The Intel servers, on the other hand, have worse performance per watt *as a fully loaded server*. Unless you're running the chip without a server, you generally should care about the power draw from the outlet - like these tests did.

The Intel servers seem to have the edge in performance per watt when the server is going nearly unused. However, in my area, usually the CPU is pegged 24/7 (unlike, say, a webserver).

It's good to see the chip wars are still alive and kicking. When the competition is healthy, consumers benefit instead of stockholders.

Re:She Cannot Be Fooled (0)

Anonymous Coward | about 7 years ago | (#20513183)

The Intel servers seem to have the edge in performance per watt when the server is going nearly unused. However, in my area, usually the CPU is pegged 24/7 (unlike, say, a webserver).
That's interesting, because it contradicts the findings in the article. The article demonstrated that the Intel server with properly placed fan baffles was more efficient at 20% load, 40% load, 60% load, 80% load, and 100% load. The AMD server was only more efficient at 0% load.

Re:She Cannot Be Fooled (4, Funny)

click2005 (921437) | about 7 years ago | (#20512543)

OK, I'm intrigued. What kind of fudge do the current efficiency tests consist of? Measuring generated heat with a thermometer?

They used to, but now they time how long it takes to toast a marshmallow. It's useful because you can use the melted mallows as thermal paste. It's not as efficient as Arctic Silver 5, but I hear it's better than the standard ceramic stuff.

There's something i just don't understand... (2, Insightful)

Spy der Mann (805235) | about 7 years ago | (#20512405)

If Intel's chips are constantly exposed as being inferior to AMD's, why can't Intel improve its engineering, with all that money flowing to them?

What do AMD have in their design methodologies that Intel don't?

Re:There's something i just don't understand... (2, Insightful)

geekoid (135745) | about 7 years ago | (#20512507)

Your premise is flawed. They are not constantly exposed as being inferior to AMD; people supporting their own biases constantly "expose" AMD or Intel.

In fact, both are so close that only very specifically myopic tests make one the 'leader'. There is no noticeable performance difference between the two that matters.

Re:There's something i just don't understand... (1)

jimstapleton (999106) | about 7 years ago | (#20512785)

I wouldn't necessarily say myopic. It varies based on the time frame.

The original K7/Athlon (actually, even the P3) was noticeably better than the first-generation P4, without CPU beer goggles.
Later P4s managed to noticeably overtake the 32-bit Athlons, until the Athlon 64 came out and took the lead again.
It didn't hold that lead for long, because the Core 2s turned out to be major ass-kickers.

During these timeframes, there were extended periods where one was on par with the other, or they were too close to call. By "better", I mean that the CPU performed noticeably better on most benchmarks. The not-best manufacturer at any given time usually still has one or two tasks/benchmarks at which it excels over the other.

The thing is, neither manufacturer seems to be always-better, or better-at-everything. They bounce back and forth on better-at-most-things, and even on which tasks they are better at.

Re:There's something i just don't understand... (1)

InvalidError (771317) | about 7 years ago | (#20513455)

One of the funniest eras was the year that followed the P4's introduction. At the time, the 1.6GHz P4 was competing against the 1.3GHz P3T and ~1.3GHz Athlons... and it got ridiculed. It was not until the 2GHz Northwoods that the P4 gained a clear lead over the older P3T. After that, the P4's clocks ramped up explosively, leaving the Athlon and P3 in the dust performance-wise for a year or so, until the P4 crashed into the 3.6-3.8GHz brick wall, with Intel unexpectedly stalled for over a year thanks to Prescott's spectacular failure to scale beyond Northwood-class clocks. Halfway through this, the Athlon 64 came along and dominated most benchmarks... until Intel finally replied with the C2D over a year later.

The P4 saga was a rather odd and funny story; the rest is typical see-saw motion.

Re:There's something i just don't understand... (4, Informative)

Gr8Apes (679165) | about 7 years ago | (#20512919)

It depends upon what's important to you. Is power consumption important? AMD wins. Is multiple CPU cores in single servers important? Anything over 4 until recently, and now 8, is an AMD win. Do you need the most processing power possible for a single process in a 2P or less unit? Intel wins that one. Need high density stacked CPUs with loads of RAM? AMD wins that one (That's a power/heat/space issue). Need to process web calls? Sun wins that one hands down on a /$, /kW, and /J measure.

There are definite differences in performance between the various CPUs. A mere 5% difference in power draw across a day times 1000s of CPUs is significant. Same with a 5% thermal dissipation difference, as that turns into increased cooling requirements.

These things all matter in the server world.
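Scaled across a fleet, that 5% delta is real money. A back-of-the-envelope sketch; fleet size, per-server draw, and electricity price are all assumptions for illustration:

<ecode>
# Sketch: annual cost of a 5% per-server power difference at scale.
servers = 5000
watts_per_server = 300.0
delta = 0.05           # 5% difference in draw
usd_per_kwh = 0.10     # assumed electricity price

extra_kw = servers * watts_per_server * delta / 1000.0
extra_kwh_per_year = extra_kw * 24 * 365
print(f"extra draw: {extra_kw:.0f} kW")                           # 75 kW
print(f"extra cost: ${extra_kwh_per_year * usd_per_kwh:,.0f}/yr") # ~$65,700
# ...before counting the matching increase in cooling load.
</ecode>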

Re:There's something i just don't understand... (1)

Joe The Dragon (967727) | about 7 years ago | (#20512535)

On-board memory controller, better CPU-to-CPU links, a lot of chipsets to choose from (with Intel Xeons you can only use Intel chipsets), more CPU-to-chipset links, chipsets with more PCIe lanes, and other stuff.

Re:There's something i just don't understand... (4, Interesting)

conteXXt (249905) | about 7 years ago | (#20512587)

"What do AMD have in their design methodologies that Intel don't?"

Digital Equipment Corp's Alpha engineers.

Sorry to beat a dead horse to a pulp but those that know still know.

Please explain (1)

tacokill (531275) | about 7 years ago | (#20514971)

DEC was bought by Compaq way back when (1998). Compaq was bought by HP more recently. AMD was not involved in either of those takeovers.

So how did AMD get the DEC Alpha engineers? As far as I know, the DEC Alpha guys are still within HP. Did I miss something?

Re:Please explain (1, Informative)

Anonymous Coward | about 7 years ago | (#20515501)

Intel employs a large number of the former DEC Alpha team, many of whom helped develop CSI and the next-generation Itanium architecture. AMD was able to snag some of the former Alpha engineers during the HP takeover, and then later when Intel was given that department by HP. The mere fact that people change jobs doesn't mean AMD is filled with super-star Alpha people, many of whom wouldn't like AMD's culture of minimal R&D/innovation.

Re:Please explain (2, Insightful)

VENONA (902751) | about 7 years ago | (#20515995)

"AMD's culture of minimal R&D/innovation."

What? Who brought 64-bit instructions to x86, when Intel and HP were trying to drive everyone to high-dollar (and at the time miserably performing) Itanium for 64-bit? Who brought out an architecture that would let you plug FPGAs, etc., into CPU slots?

IMHO, AMD is lagging in semiconductor manufacturing processes. Their geometries are larger, etc. I doubt that they get the yields that Intel does, and that counts against them in price wars. But developing new fab processes costs a lot of money, and Intel has always had a huge financial edge. There's no conceivable way that AMD isn't doing their best with the resources they have available on this front, as it has a direct impact on the bottom line. Hence their history of fabrication R&D agreements with IBM.

BTW, I've worked for both companies (but some years ago) and did process engineering work for Intel. I have at least some clue, which is more than the A/C parent poster has.

"AMD's culture of minimal R&D/innovation" is completely unjustified bullshit.

Re:Please explain (1)

cheese_boy (118027) | about 7 years ago | (#20515781)

>DEC was bought by Compaq way back when (1998). Compaq was bought by HP more recently. AMD was not involved in either of those takeovers.
>So how did AMD get the DEC Alpha engineers? As far as I know, the DEC Alpha guys are still within HP. Did I miss something?

Alpha team was spun off to Intel.
http://news.com.com/Intel+gets+more+key+Alpha+alums/2100-1006_3-1023146.html [com.com]
http://www.theinquirer.net/?article=20024 [theinquirer.net]

How many of the people who worked on the Alpha EV7 are still at Intel would be a valid question - one you couldn't easily answer.
There are probably even some engineers who worked on the EV7 and changed jobs to work for Intel outside of Intel acquiring the teams.

Also - IIRC, Intel acquired some engineers from DEC (mostly process folks I think) back around when Compaq acquired them.

I'm sure there are people who work for AMD now that had worked on an Alpha project.
Just like I am sure there are people who work for Intel that worked on an Opteron project.
People switch jobs, and there are only so many companies that do microprocessor design (or, even more broadly, only so many semiconductor design companies).

Re:There's something i just don't understand... (-1, Troll)

Anonymous Coward | about 7 years ago | (#20512663)

#1: AMD aren't dicks to other companies, and are more open-minded.

#2: Other companies are more willing to work with AMD than with Intel (e.g. Z-RAM).

#3: Did I mention Intel are a bunch of arrogant bastards? See the OLPC: they laughed in the creator's face when he asked Intel if they could make a cheap chip for $100. But when he asked AMD, they were like, "We'll see what we can do."

Re:There's something i just don't understand... (1)

tonywong (96839) | about 7 years ago | (#20515373)

AMD does not have market leadership, so they can make radical gambles on better efficiencies in an attempt to win marketshare.

From what I can recall, Intel fellow Bob Colwell mentioned that the CPU designers could have integrated Ethernet on-chip, but they faced a fight from the Ethernet chip group, which has its own marketshare, budget, and design group.

I suppose that as long as chipsets (northbridges and southbridges) make money for Intel, memory controllers will stay on the northbridge and use more power than having memory controllers on the CPU.

Bob Colwell presentation:
http://stanford-online.stanford.edu/courses/ee380/040218-ee380-100.asx [stanford.edu]

Re:There's something i just don't understand... (1)

tonywong (96839) | about 7 years ago | (#20515507)

I forgot, the relevant statement is at 12:40ish in the presentation.

Re:There's something i just don't understand... (1)

tabby (592506) | about 7 years ago | (#20515731)

"with all that money flowing to them?"

You just answered your own question.

This has nothing to do with Intel's "chips" (2, Informative)

RecessionCone (1062552) | about 7 years ago | (#20515905)

This benchmark is a system benchmark, meaning that it takes into account power dissipation of much more than the processor alone. It is fair to say that Intel's current server platforms use more power than AMD's server platforms, but this is actually due to their memory technology, and not to the processors themselves.

To be more specific, the Xeon processor in this review uses the same core as the Merom/Conroe Core 2 Duo. If you benchmark Conroe on a platform using the same memory technology (DDR2) as AMD, you'll find that Intel's power consumption is significantly less than AMD's. But Intel decided to use a different technology (FBDIMM) for its server platforms in order to increase maximum memory capacity, whereas the Opteron uses a simpler technology which is severely limited in memory capacity per channel, since the outdated parallel multidrop DDR2 bus can't run at speed when heavily loaded.

FBDIMM is like PCI-Express or Hypertransport for a memory interface, meaning that it's serial and point to point, instead of parallel and multidrop. This allows Intel to add many more loads to the memory channel without slowing the channel down, because it is Fully Buffered (the FB part of FBDIMM), which increases memory capacity per channel. However, FBDIMM also turns out to be very power hungry, and Intel is now being forced (by benchmarks such as this one) to release server platforms without FBDIMM in order to lower power consumption for people who don't need large memory capacities. (for some confirmation of this, look here: http://theinquirer.net/?article=42183 [theinquirer.net] )

In any case, the results of this benchmark aren't about "chips", they're about platforms. Intel's current chips are pretty good, but their server platforms need some work. That's why Intel's coming out with a whole new platform next year (here's some reading material for you: http://realworldtech.com/page.cfm?ArticleID=RWT082807020032 [realworldtech.com] ).

So a quick answer to your question: Intel's chips ARE better than AMD's, but their platforms aren't. Here's the question you should have asked: why are Intel's platforms always behind AMD's? The answer is basically that Intel has a lot more internal politics, and therefore it is slow to change things that have impact across the company, like platforms. Intel has a lot of internal competition: lots of separate groups working on various competing processors, so the processors themselves are usually pretty good (Darwin at work). But the teams making the processors don't have the freedom to change the platform, since that's outside their scope and requires lots of corporate maneuvering. So Intel's platforms are much slower to change than AMD's.

Summing up: don't confuse a system benchmark for a processor benchmark! TFA isn't about processors at all, it's about systems.

Re:This has nothing to do with Intel's "chips" (0)

Anonymous Coward | about 7 years ago | (#20525167)

You are right about the multidrop thing. Keep in mind, though, the difference in bus architecture here. The memory bus on AMD is per-CPU; being DDR2, this means that each CPU is limited to a relatively small amount of RAM. On Intel, on the other hand, the bus is per-system, so a single-CPU Intel box can scale to much more memory than AMD. But Intel needs FBDIMM on multi-processor machines just to scale to as much memory as a multi-processor AMD can have with DDR2.

So, unfortunately, as always in the processor market, we're largely comparing apples and oranges. Are apples better than oranges, or vice versa? Well, it depends on your workload.

Hypertransport (1)

killmister (686470) | about 7 years ago | (#20512503)

I am sure AMD's HyperTransport is the king who rules! Unless Intel comes up with something similar...

Intel's version of hypertransport (1, Informative)

Anonymous Coward | about 7 years ago | (#20513033)

is http://en.wikipedia.org/wiki/Intel_QuickPath_Interconnect [wikipedia.org] , formerly known as CSI (Common System Interface).


YAY! (2, Interesting)

Colin Smith (2679) | about 7 years ago | (#20512721)

At last we'll be able to determine server power efficiency.

London, the world financial centre, has real problems with datacentre power supplies. Any new ones pretty much have to be built outside the M25, and there's pressure on the ones inside to use less power.
 

More Slashvertising for IDG! (-1, Redundant)

Frosty Piss (770223) | about 7 years ago | (#20512867)

Yet another IDG (ComputerWorld) story from an IDG shill in how many days? How many TODAY?

Looks like IDG (ComputerWorld, ITWorld, NetworkWorld...) is really hitting Slashdot HARD, either that or they have a deal with Slashdot. Here's a partial list of the shills that regularly show up and have almost 100% article acceptance rates: Ian Lamont [slashdot.com]
Lucas123 [slashdot.com]
coondoggie [slashdot.com]
inkslinger77 [slashdot.com]
narramissic [slashdot.com]
jcatcw [slashdot.com]

Looks like they spread out the work over a few shill user accounts, which is to be expected. If it's all OK and everything with the corporate ownership of Slashdot to be played by IDG, I suppose that's their business, but one would hope that they are actually getting PAID for being part of IDG's advertising program. And of course there should be disclosure so that visitors to Slashdot realize they are reading advertisements and not an article submitted by a "real" user...

Re:More Slashvertising for IDG! (2, Insightful)

6Yankee (597075) | about 7 years ago | (#20513073)

Are you going to paste this comment onto every post from one of these individuals? Despite the fact that you keep getting modded down for it? You must be really obsessed or really, really dense. Give it a rest already - or at least say something new.

I have mod points and could just smack you into oblivion, but decided to post instead and let others do the smacking.

Bias in the Front Page selections? (0)

Anonymous Coward | about 7 years ago | (#20513333)

I think the parent has a good point: why is IDG getting so much "facetime" at Slashdot? It certainly can't be because the content is the best out there; for every single topic posted by the IDG shills, there are much better sources. I mean, come on! What's with this? Don't we deserve to know when there is bias in the front-page selections?

Re:Bias in the Front Page selections? (1)

VENONA (902751) | about 7 years ago | (#20516107)

Maybe just variance. Until somebody has some statistics, I wouldn't read too much into it. Given that somebody has an issue, I'd guess that some stats will be forthcoming, if there really is a problem.

I think it is just a PR ploy. (1)

ngoy (551435) | about 7 years ago | (#20513773)

Does anyone else see any bias in a website called "worlds-fastest", seemingly dedicated to pro-AMD benchmarks, that he has done out of the goodness of his heart? And all the custom software he has written doesn't have any CPU-specific optimizations? None of the open source software has any optimizations slanted toward one side or the other? Coming out with an AMD Opteron vs. Intel NetBurst test result when the newer Intel stuff had been out for 6 months? It all looks like a bunch of PR to generate business for Neal and AMD. Plus some ego stuff going on: every time someone feels Intel isn't giving them the time of day, they go all out into a pissing contest (e.g., on his website, since "Intel claimed they had no Core systems to loan him", he goes off to benchmark 4-year-old Xeon and AMD machines). Remind me to pay a non-biased company for benchmarks, thank you.

Re:I think it is just a PR ploy. (1)

born2benchmark (1008349) | about 7 years ago | (#20517507)

You are correct that NNA hopes to generate some business from these tests, but I feel that your other speculations are wrong. The "worlds-fastest" web site is not a put-up job by AMD. It is true that several recent sets of test results show AMD as superior, but that could change, possibly as soon as Xeon servers are available with DDR2 RAM. The test methodology is valid, and when Intel starts building servers that use less power, the test will report that.

I don't believe that MySQL or SuSE linux has a bias toward AMD. I know that the NNA code (some of which was listed in the white paper appendix) was not written to favor any vendor.

Regarding the Netburst test, an Inquirer story in the same time frame reported that although the Core chips had been out for a while, 4 out of 5 processors being shipped by Intel at that point in time were Netburst processors. So it was reasonable to test Netburst, because 4 out of 5 people buying Intel machines were getting Netburst. The report clearly identified the processors involved, and both were models and revisions that were actively being sold as new products at the time of the test. I also feel that the test was worthwhile because I believe it was the first publicly released client/server transaction test where the CPU clock, memory size, disk drives, host adapter, operating system, application code and system tunables were all exactly the same.

- Neal Nelson

NetBurst? (1)

DimGeo (694000) | about 7 years ago | (#20514287)

Excuse me, NetBurst? You are testing against NetBurst? That's like comparing Core2Duo against Duron, imho. Nice try, astroturfers.

Re:NetBurst? (1)

wild_berry (448019) | about 7 years ago | (#20522955)

Read that Fine Article: they're Xeon 5160's. That's Woodcrest, the first generation of Core-based Xeon chip (Xeon Woodcrest at Wikipedia [wikipedia.org] ). No longer Netburst, Dim.


No relation (1)

Russ Nelson (33911) | about 7 years ago | (#20516269)

No relation, but Neal is a sharp cookie nonetheless. I've worked with him before.

Imagine... (1)

ArAgost (853804) | about 7 years ago | (#20520091)

Imagine a Lone Beowulf cluster of these!

Let's Talk About the Benchmark (1)

born2benchmark (1008349) | about 7 years ago | (#20520299)

I would like to discuss how the benchmark could be improved. In its current form:

1) It is a client/server test with web clients talking to an Apache2 web server.
2) The server runs SuSE Linux Enterprise Server.
3) The server's database tables are built on MySQL.
4) The transaction is a gasoline credit card purchase.
5) The test measures power consumed at 7 different transaction activity levels: idle, 5 different constant transaction rates, and the maximum that the server will deliver.
6) At each activity level the benchmark collects power used for 30 minutes.
7) The test reports its data for all levels tested.
8) The transactions are coded so that as the level of user activity increases, larger and larger areas of the database tables are accessed. This means that at lower user counts the disk I/O is cached and the test is calculation-intensive, while at higher user counts the database working set may exceed the kernel disk cache size and thus the test is limited by physical disk I/O.

Many real-world servers process web transactions against RDBMSs and are idle evenings and weekends. This test lets people: 1) compare the maximum throughput of different machines, 2) review the power consumed at maximum throughput, 3) review the power used at various intermediate transaction arrival rates, and 4) review the power consumed at idle.

Would this be a better benchmark with Oracle rather than MySQL, or Red Hat rather than SuSE? Would it be a better test without the client/server network traffic? Would it be better if it was based on floating-point calculations that did not do any database access or disk I/O? What can be done to make this a more useful benchmark?

- Neal Nelson
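For reference, the measurement schedule described above can be sketched as follows; read_wall_watts() and drive_load() are hypothetical stand-ins for the wall-plug power meter and the web-client load generator, since NNA's actual harness isn't published here:

<ecode>
# Sketch of the schedule above: idle, five constant transaction rates,
# then maximum throughput, logging wall-plug power for 30 minutes each.
import time

LEVELS = ["idle", "rate1", "rate2", "rate3", "rate4", "rate5", "max"]
SAMPLE_SECONDS = 30 * 60

def read_wall_watts():
    raise NotImplementedError("replace with your power meter's interface")

def drive_load(level):
    pass  # start web clients at the given constant transaction rate

def run_benchmark(poll_interval=10):
    results = {}
    for level in LEVELS:
        drive_load(level)
        samples = []
        deadline = time.time() + SAMPLE_SECONDS
        while time.time() < deadline:
            samples.append(read_wall_watts())
            time.sleep(poll_interval)
        results[level] = sum(samples) / len(samples)  # average watts at this level
    return results
</ecode>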

link for SPEC power benchmark (1)

walterbays (1136723) | about 7 years ago | (#20540885)

The forthcoming energy efficiency benchmark from SPEC is generally described at http://www.spec.org/specpower/ [spec.org]