
Bulldozer Server Benchmarks Not Promising

Unknown Lamer posted more than 2 years ago | from the cool-kids-jump-ship-to-fleet dept.

AMD 235

New submitter RobinEggs writes "Some reviews of Bulldozer's server performance have arrived. Ars Technica has the breakdown, and the results are pretty ugly. Apparently Bulldozer fares just as poorly with servers as with desktops. From the article: 'One reason for the underwhelming performance on the desktop is that the Bulldozer architecture emphasizes multithreaded performance over single-threaded performance. For desktop applications, where single-threaded performance is still king, this is a problem. Server workloads, in contrast, typically have to handle multiple users, network connections, and virtual machines concurrently. This makes them a much better fit for processors that support lots of concurrent threads. ... It looks as though the decisions that hurt Bulldozer on the desktop continue to hurt it in the server room. Although the server benchmarks don't show the same regressions as were found on the desktop, they do little to justify the design of the new architecture.' It's probably much too early to start editorializing about the end of AMD, or even to say with certainty that Bulldozer has failed, but my untrained eye can't yet see any possible silver lining in these new processors."


Happy Holidays from the Golden Girls! (-1)

Anonymous Coward | more than 2 years ago | (#38134490)

Thank you for being a friend
Traveled down the road and back again
Your heart is true, you're a pal and a cosmonaut.

And if you threw a party
Invited everyone you ever knew
You would see the biggest gift would be from me
And the card attached would say, thank you for being a friend.

This article makes no sense. (4, Funny)

hellop2 (1271166) | more than 2 years ago | (#38134492)

Bulldozers do not make good servers. Use a computer. Problem solved.

Re:This article makes no sense. (1)

Anonymous Coward | more than 2 years ago | (#38134538)

Perhaps, but they make a bad-ass Beowulf Cluster!

Re:This article makes no sense. (2, Funny)

Anonymous Coward | more than 2 years ago | (#38134562)

Have you any idea how much damage that bulldozer would suffer if I just let it roll straight over you?

Re:This article makes no sense. (1, Funny)

Tsingi (870990) | more than 2 years ago | (#38134880)

None at all?

Re:This article makes no sense. (-1)

Anonymous Coward | more than 2 years ago | (#38134586)

Where do you get flaming metal?

Where's the graphs?
Someone call tomshardware.

They are a catastrophe ... (4, Insightful)

unity100 (970058) | more than 2 years ago | (#38134498)

And yet three supercomputers with those Opterons were ordered in the last four weeks? And in a month, one of them - currently being revamped from the #3 supercomputer spot in the world - will be the #1 supercomputer in the world when complete? Was Lockheed Martin also full of morons for choosing an Opteron-based supercomputer?

Why was an article apparently written to bash AMD included on Slashdot, despite its apparent bias?

Re:They are a catastrophe ... (0)

Anonymous Coward | more than 2 years ago | (#38134534)

Why was an article apparently written to bash AMD included on Slashdot, despite its apparent bias?

HAHAHAHA... are you serious?

Re:They are a catastrophe ... (0)

nstlgc (945418) | more than 2 years ago | (#38134618)

You must be new here.

Re:They are a catastrophe ... (5, Insightful)

CajunArson (465943) | more than 2 years ago | (#38134578)

1. Nobody with a sig advertizing knock-off PHP plugins even has the right to use the word "supercomputer" in a sentence.

2. Supercomputers are NOT built based on processor speed. If you took the SPARC CPUs used in the K computer (the world's fastest, and *not* running Opterons) and put them into a regular server or desktop, you'd have a pretty underwhelming computer. Most of the $$$ going into supercomputers goes to the interconnects, not the CPUs. So sure, use the Opterons in the supercomputer, where AMD sells them at firesale prices and does not make any money. The rest of us will use Xeons and be very happy with the results.

3. You are a well known AMD fanboi and your repetitive posts are becoming less and less amusing.

Re:They are a catastrophe ... (-1, Troll)

unity100 (970058) | more than 2 years ago | (#38134622)

1. Nobody with a sig advertizing knock-off PHP plugins even has the right to use the word "supercomputer" in a sentence.

"weee wee weee ewww eewww ewww whine whine whine" -> i dont understand. what are you trying to say ?

Re:They are a catastrophe ... (1)

luis_a_espinal (1810296) | more than 2 years ago | (#38134712)

1. Nobody with a sig advertizing knock-off PHP plugins even has the right to use the word "supercomputer" in a sentence.

"weee wee weee ewww eewww ewww whine whine whine" -> i dont understand. what are you trying to say ?

What, you are not going to address #2 and #3?

Re:They are a catastrophe ... (0)

Captain.Abrecan (1926372) | more than 2 years ago | (#38134718)

That you are a huge focking troll and no one cares about your livelihood.

Re:They are a catastrophe ... (2, Insightful)

dave420 (699308) | more than 2 years ago | (#38135118)

He's saying, successfully, that you are out of your depth. Again.

Re:They are a catastrophe ... (1)

PIBM (588930) | more than 2 years ago | (#38134894)

You forgot to point out that many of the highest-performing supercomputers are using tons of NVIDIA video cards to achieve those numbers.

Re:They are a catastrophe ... (4, Interesting)

serviscope_minor (664417) | more than 2 years ago | (#38135230)

Supercomputers are NOT built based on processor speed.

Um.

That's rather an oversimplification, to the point of being wrong.

Supercomputers need good interconnects and lots of processing power. One or the other alone won't do.

Much of the $$$ goes into interconnects, but also into the CPUs and the cooling, which is very dependent on the CPUs. All things considered, neither AMD nor Intel has fast interconnects on-die (unlike Fujitsu), so pretty much the main thing to choose between the CPUs is, well, the CPUs.

And it seems like AMD are the best option at the moment for this kind of workload.

The rest of us will use Xeons and be very happy with the results.

No, you will. I'll stick with my Supermicro quad 6100s for as long as I can and be very happy with the immense price/performance they offered.

Re:They are a catastrophe ... (0)

Anonymous Coward | more than 2 years ago | (#38134580)

Point taken. Next time I build a supercomputer I'll have a good look at these chips. For the time being, though, I don't see a reason to upgrade either my Athlon X2 or my C2Q machines to these.

Re:They are a catastrophe ... (4, Insightful)

gman003 (1693318) | more than 2 years ago | (#38134596)

Supercomputer workloads are significantly different than server workloads, as they typically focus on embarrassingly parallel problems and on throughput rather than latency.

You may as well be saying "why are so many desktops built on x86 chips? It seems like every day I read something about how ARM is better for smartphones".

Re:They are a catastrophe ... (1)

gl4ss (559668) | more than 2 years ago | (#38134686)

in 'supercomputer' use it's more likely that processes can be herded to the right cores to get the best performance boost from the architecture.

also, when you're buying something in such high numbers, you buy what's actually available in such high numbers.

the article itself is quite poorly written, at points factoring the cost of software into the performance equation, and at times not saying whether the benchmarks are per core (or per "thread", in new AMD lingo).

Re:They are a catastrophe ... (4, Informative)

the linux geek (799780) | more than 2 years ago | (#38135364)

There is roughly zero overlap between what makes a good HPC processor and what makes a good datacenter processor.

Hint: AVX throughput matters almost none when running an SQL server, but looks very good on Linpack.

Re:They are a catastrophe ... (2)

Junta (36770) | more than 2 years ago | (#38135488)

one of them - which is being revamped from #3 supercomputer position of the world - will be #1 supercomputer of the world when complete ?

You mean Jaguar, which is adding nVidia Tesla GPUs and memory, and refreshing the cluster interconnect, while also moving to Bulldozer? Where the Bulldozers are replacing Istanbul processors and *not* Magny-Cours? Even among the Magny-Cours parts in the top systems, they are 8-core, not 12-core. Even for HPC there is some thought that the 12-core will outperform Bulldozer for many workloads due to the shared FPU, *but* GPUs are becoming the vogue way of doing that stuff anyway.

As others have pointed out, processors matter, but everything else matters *more* per dollar. Cray is a surprisingly small company that can't change their architecture (HTX IO oriented, IIRC) on a whim and even if Intel provides some boost in theory, it's not an effort they can afford.

Ars Troll Articles Are Arse (5, Insightful)

raddude99 (710064) | more than 2 years ago | (#38134510)

The standard of writing at "Ars Technica" has declined far more than AMD's performance relative to Intel's.

Re:Ars Troll Articles Are Arse (4, Insightful)

sgt scrub (869860) | more than 2 years ago | (#38135074)

I completely agree. You have to hunt down which link is the correct one to find the specs they eventually skewed to make an inflammatory point. They write articles to fill pages with advertisements, under a headline that is sure to piss someone off.

Re:Ars Troll Articles Are Arse (4, Insightful)

Kjella (173770) | more than 2 years ago | (#38135132)

I don't go there for the tech articles, but the part on page 2 where they pull AMD's TPC-C numbers apart is pretty damn good.

AMD claims 1.2 million tpmC for a two-socket Opteron 6282 SE system. The company compares this to a score for a two-socket Opteron 6176 SE system (each socket having 12 cores), (...) AMD also claims that this beats "competing solutions" by "as much as" 18 percent. (...) the reference AMD uses is another official result: dual Xeon X5690s (6 core, 12 thread, 3.46 GHz) with 384GB RAM. (...) looking just at the servers and their storage, and assuming similar discounts, we get prices of around $260,000 for the Opteron 6100 system, $879,000 for the Opteron 6200 system, and $511,000 for the Xeon system.

Basically, AMD's figures are doped with a massive SSD storage solution to make a slow CPU look good. And Ars shows that if you wanted to spend $879,000 on a system, there are much faster Intel solutions (even though the CPUs cost more). So they're doing pretty well on the economics end, at least.
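The price/performance gap is easy to check against the figures quoted above. The Xeon system's tpmC score isn't quoted in the excerpt, so this sketch infers it from AMD's own "as much as 18 percent" claim -- an assumption, not a published number:

```python
# Back-of-envelope tpmC-per-dollar from the figures quoted above.
# Assumption: the Xeon score is inferred from AMD's "18 percent" lead
# claim, since the excerpt doesn't quote it directly.

opteron_6200_tpmc = 1_200_000          # AMD's claimed TPC-C result
opteron_6200_cost = 879_000            # estimated server + storage price
xeon_cost = 511_000                    # estimated price of the X5690 system

xeon_tpmc = opteron_6200_tpmc / 1.18   # inferred from the claimed 18% lead

amd_efficiency = opteron_6200_tpmc / opteron_6200_cost
xeon_efficiency = xeon_tpmc / xeon_cost

print(f"Opteron 6200: {amd_efficiency:.2f} tpmC per dollar")
print(f"Xeon X5690:   {xeon_efficiency:.2f} tpmC per dollar")
```

On these numbers the Xeon system delivers roughly 45 percent more tpmC per dollar despite the lower headline score, which is exactly the point being made about the storage-doped benchmark.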

Re:Ars Troll Articles Are Arse (2)

Bleek II (878455) | more than 2 years ago | (#38135138)

I posted this below but realized it should go here.

Anandtech.com provides much more knowledgeable and professional reviews. They had this to say about AMD's new chip:

"Unfortunately, with the current power management in ESXi, we are not satisfied with the Performance/watt ratio of the Opteron 6276. The Xeon needs up to 25% less energy and performs slightly better. So if performance/watt is your first priority, we think the current Xeons are your best option. The Opteron 6276 offers a better performance per dollar ratio. It delivers the performance of $1000 Xeon (X5650) at $800. Add to this that the G34 based servers are typically less expensive than their Intel LGA 1366 counterparts and the price bonus for the new Opteron grows. If performance/dollar is your first priority, we think the Opteron 6276 is an attractive alternative." http://www.anandtech.com/show/5058/amds-opteron-interlagos-6200/1 [anandtech.com]

I don't understand why other sites are more popular.

Re:Ars Troll Articles Are Arse (1)

Curupira (1899458) | more than 2 years ago | (#38135212)

The standard of writing at "Ars Technica" have declined far more than AMD's relative performance to Intel.

That article was written by Peter Bright -- he is Ars Technica's John Dvorak. Yeah, the decline of Ars Technica is _that_ bad.

This post... (-1)

Anonymous Coward | more than 2 years ago | (#38134516)

... Intel's trolling troll.

Recall the Itanium (4, Insightful)

G3ckoG33k (647276) | more than 2 years ago | (#38134536)

Recall the Itanium from Intel and HP. It started out with great hype more than ten years ago; when the first benchmarks came, no one wanted to believe them. Yet that particular architecture is now about to die.

Unfortunately, Bulldozer may end up with a similar fate. The big difference is that Intel had its regular desktop cpu line-up to finance the Itanium disaster. If nothing can be much improved on the AMD cpu side, can the shrinking graphics card business save AMD?

I hope so.

Re:Recall the Itanium (2)

Xanny (2500844) | more than 2 years ago | (#38135590)

Itanium failed more because it tried to replace x86 with a new 64-bit-only design; that, more than any performance benchmark, is why it bombed. The sad thing for AMD is that Bulldozer is all-around unfavorable: it always comes up a 9/10 where someone else has a 10/10. It is a jack of all trades, and in processor land that is bad. It has somewhat decent power efficiency, but is terrible compared to other 32nm processors from Intel; it is more in 45nm territory. It has good performance in parallel tasks, but the high-end i7 2600K is better on price/performance and efficiency, and the new i7 3960X blows it out of the water with 12 threads at once. Its price-to-performance isn't bad, but the i5 2500K beats it fairly soundly, especially in serial tasks. AMD really needs to go back to the drawing board and crank out a platform with per-core efficiency rivaling Sandy Bridge. Lots of cores on a die doesn't mean much when the individual cores are regressions from the Phenom line.

it's sad (2)

madmayr (1969930) | more than 2 years ago | (#38134542)

I always liked AMD CPUs, mostly for nearly equal computing power at less money, but at the moment that doesn't really seem true anymore when I look at the benchmarks (desktop or server, it doesn't matter).

I don't get it. It beat the Xeons?? (5, Insightful)

TheSunborn (68004) | more than 2 years ago | (#38134546)

I really don't get the conclusion.

The Bulldozer is faster than the Xeon chip on all CPU benchmarks that can generate enough threads to fill all cores.

Each Bulldozer core is as fast as a core on an Opteron 6100.

It looks exactly like the cpu I want in my web/db server, and my supercomputer.

Re:I don't get it. It beat the Xeons?? (5, Insightful)

Chrisq (894406) | more than 2 years ago | (#38134610)

I agree. Its a very biased summary. From TFA:

In AnandTech's benchmarks, the 6200 failed to beat Intel's Xeon processors, in spite of Intel's core and thread deficit. In others, 6200 pulled ahead, with a lead topping out at about 30 percent.

That's hardly an unmitigated disaster for a cheaper chi and the first release from a new architecture.

Re:I don't get it. It beat the Xeons?? (1)

Anonymous Coward | more than 2 years ago | (#38134672)

Except that 6200 setup was _more_ expensive, I believe.

Re:I don't get it. It beat the Xeons?? (1)

serviscope_minor (664417) | more than 2 years ago | (#38135478)

Except that 6200 setup was _more_ expensive, I believe.

Yes. They were spending something like $1e6.

If you're under some kind of budget constraint and want servers more in the $10,000 range, the Opteron 6100s generally offer better price/performance than the Xeons.

Re:I don't get it. It beat the Xeons?? (0)

Anonymous Coward | more than 2 years ago | (#38134814)

That's hardly an unmitigated disaster for a cheaper chi

Could you share the results of your chi benchmarks? Speed is not everything, if it gets my chi flowing I'll buy it.

Re:I don't get it. It beat the Xeons?? (-1)

Anonymous Coward | more than 2 years ago | (#38134620)

Yes... they are testing a server CPU on the desktop and complaining about the performance.

It's like comparing apples to oranges and complaining that the oranges don't taste like apples.

Re:I don't get it. It beat the Xeons?? (1)

papabob2 (2227672) | more than 2 years ago | (#38134630)

FTFA: "This was tested against the not-quite-top-end 2.2 GHz Opteron 6174 and the several-below-top-end 2.93 GHz Xeon X5670"

Of course it beats a Xeon... one that's 18 months old. Anyway, this article is focused on showing that buying these CPUs doesn't make much sense from an economic point of view.

Re:I don't get it. It beat the Xeons?? (1)

confused one (671304) | more than 2 years ago | (#38134688)

That's how I read the other reviews as well. It seems like a fairly good chip for servers or workstations.

Re:I don't get it. It beat the Xeons?? (0)

Gaygirlie (1657131) | more than 2 years ago | (#38134710)

You are missing the whole point of the article. The point is that AMD went to great lengths in designing a new architecture and in advertising it as the Next Big Thing yet there is no benefit anywhere to be seen, the old architecture with as many cores would provide the exact same server performance and better desktop performance. That is the point. You are misreading the article by letting your bias colour it.

Re:I don't get it. It beat the Xeons?? (4, Insightful)

TheSunborn (68004) | more than 2 years ago | (#38134930)

No benefit???

I think that increasing the core count from 12 to 16 within the same power budget, using the same socket, counts as a benefit, but that might just be me.

Re:I don't get it. It beat the Xeons?? (0)

dave420 (699308) | more than 2 years ago | (#38135148)

That's just a slight upgrade, not the "next big thing" it's been touted as.

Re:I don't get it. It beat the Xeons?? (1)

Daniel_Staal (609844) | more than 2 years ago | (#38135220)

Not if the per-core performance went down by 20%. Overall performance hasn't dropped, but it also hasn't increased anywhere.

More cores are nice, but they don't mean much. The question any CPU has to answer is 'What can you do?' Throwing more cores in doesn't mean it can do more if the cores aren't well designed.

Re:I don't get it. It beat the Xeons?? (1)

epine (68316) | more than 2 years ago | (#38135190)

The point is that AMD went to great lengths in designing a new architecture and in advertising it as the Next Big Thing yet there is no benefit anywhere to be seen

And never before has the Next Big Thing entered the world with a whimper rather than a bang?

9 Gadgets That Prove You're a Hard-Core Early Adopter [wired.com]

What was your opinion on the Motorola DynaTAC 8000X back in the day? Did you bitch slap Motorola for wasting your time? Not the Next Big Thing after all?

My perspective is that this architecture is exposing weakness in AMD's process technology, and that it was designed on the premise that their process technology would be much further ahead. One of AMD's goals behind the scenes is to come up with a process technology equally well suited for the CPU and the GPU. The main problem here is that AMD is not a strong enough company to survive continued weakness. Obviously they had different ideas about where this would be at this point in time or it would not have been designed this way in the first place (or survived the massive pre-silicon performance simulation).

Here's why I sometimes want to punch the "What have you done for me lately?" crowd flat on the nose:

Performance improvements aimed at improving scalability started heavily with [Postgres] version 8.1, and running simple benchmarks version 8.4 has been shown to be more than 10 times faster on read only workloads and at least 7.5 times faster on both read and write workloads compared with version 8.0.

ACID first, performance second. Unless an early version is soundly trounced by MySQL in a page view benchmark, resulting in a giant lemming exodus.

The fixation on surface metrics also worked wonders in the race to the bottom at your local grocery store. "Organic" is actually just a synonym for "what used to be the default back in 1970, until we optimized out all the nutrition, nickel by nickel".

Re:I don't get it. It beat the Xeons?? (1)

gral (697468) | more than 2 years ago | (#38135232)

I believe it was AMD that came out with a working 64-bit processor right about when EVERYONE was saying there was no need. Intel ended up playing catch-up. This is a brand new architecture, and it is pretty cool how many cores they are putting into how few watts. For server farms and such, cool is where you want to be. To really see a comparison, I believe we need to see how many watts are used running several virtual systems doing calculations and so on.

Re:I don't get it. It beat the Xeons?? (4, Insightful)

RobinEggs (1453925) | more than 2 years ago | (#38134746)

I really don't get the conclusion.

The Bulldozer is faster than the Xeon chip on all CPU benchmarks that can generate enough threads to fill all cores.

Each Bulldozer core is as fast as a core on an Opteron 6100.

It looks exactly like the cpu I want in my web/db server, and my supercomputer.

Do the majority of real world uses 'fill all cores'? Are you arguing that the vast majority of these benchmarks are useless? I can't distinguish between which tests use all of the cores and which don't, but it's not my field.

However, the results fall far short of a resounding success for AMD. The results are broadly split between "tied with Opteron 6100" and "33 percent or less faster than Opteron 6100." For a processor with 33 percent more cores, running highly scalable multithreaded workloads, that's a poor show. Best-case, AMD has stood still in terms of per-thread performance. Worst case, the Bulldozer architecture is so much slower than AMD's old design that the new design needs four more threads just to match the old design. AMD compromised single-threaded performance in order to allow Bulldozer to run more threads concurrently, and that trade-off simply hasn't been worth it.

That's the problem. There are several instances in which AMD isn't even beating itself. Almost none of the tests show it working better than the old 6100 Opterons on a per-core basis. And the Xeons the 6200 only sometimes beat are 18 months old; new Xeons ship next quarter. I suppose if I accept your statement about "filling all cores" at face value, given my general ignorance of the server market, then I have to admit that Bulldozer could be superior in situations that filled all of the cores most or all of the time. Is that a significant potential market share? Does it justify an entirely new architecture?
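The per-core argument in that quoted paragraph reduces to simple arithmetic, sketched here with the core counts from the article (12-core Opteron 6100 vs. 16-core 6200) and illustrative speedup values:

```python
# Relative per-core throughput of the new chip: if the overall speedup
# is S and the core count grew from 12 to 16, each new core delivers
# S * 12/16 of an old core's throughput.

def per_core_ratio(overall_speedup, old_cores=12, new_cores=16):
    """New chip's per-core throughput relative to the old chip's."""
    return overall_speedup * old_cores / new_cores

for speedup in (1.00, 1.15, 1.33):     # observed range: tied .. +33%
    print(f"overall x{speedup:.2f} -> per-core x{per_core_ratio(speedup):.2f}")
```

Anything below 1.00 means each Bulldozer core is slower than an Opteron 6100 core; even the best observed case (+33 percent) only gets back to parity, which is the "best-case, AMD has stood still" conclusion.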

Re:I don't get it. It beat the Xeons?? (2)

swalve (1980968) | more than 2 years ago | (#38135006)

This sounds depressingly like when the Pentium 4 came out. And what are we all using now? Dual-core Pentium IIIs with extra stuff bolted on.

Re:I don't get it. It beat the Xeons?? (4, Insightful)

Curunir_wolf (588405) | more than 2 years ago | (#38135160)

Do the majority of real world uses 'fill all cores'? Are you arguing that the vast majority of these benchmarks are useless? I can't distinguish between which tests use all of the cores and which don't, but it's not my field.

Obviously. The high-performance server market these days doesn't really include web and mail servers. Most are being deployed for one of two purposes: (1) large database servers, and (2) virtual server hosts. Both of those uses will take advantage of this architecture, unlike the contrived "benchmarks" used to test these chips.

I haven't deployed a single server NOT used in a virtual environment in over 2 years. We are even deploying database servers as virtual these days, because the backup and fault-tolerant features are so good. These new Bulldozers look like they'll be on the list for the next set of hardware I need.

Lots of Real World Users (TM) try to use all cores (2)

amcdiarmid (856796) | more than 2 years ago | (#38135586)

The US Office of Management and Budget (OMB) has a virtual to physical server target of 15:1.

Every large business, and most medium sized ones, are going to try to (at least) match that target.

(although memory seems to be a bigger constraint.)

Re:I don't get it. It beat the Xeons?? (0)

Anonymous Coward | more than 2 years ago | (#38135348)

It beat old Xeons, based on the three-year-old Nehalem architecture, slightly, on some tests.

But in January Intel will release Sandy Bridge based Xeons. These will offer quite a big performance improvement over the old ones.

AMD needs its swagger back (2)

sunfly (1248694) | more than 2 years ago | (#38134554)

We need healthy competition to Intel, to keep pushing tech forward and prices down. Sadly AMD simply has not performed over the last year or two, with no real answers to Intel's I series.

Re:AMD needs its swagger back (0)

Anonymous Coward | more than 2 years ago | (#38134682)

This is a historical problem. AMD can out-design Intel any time it wants, but it has always had problems with execution. So you clock the chips back and win on price. You stay in business with loyal cheapskate users (like me). Intel gets the gold star from 1/2 of 1% of regular users, but tons of big copy from the monkeys in the blogosphere, most of whom are too ignorant to know the difference between 'then' and 'than'. Perhaps AMD is handing over the job of biting Intel's ankles to ARM. Somebody has to do it, otherwise you'll be paying $299.00 for a P866.

Re:AMD needs its swagger back (3, Informative)

Anonymous Coward | more than 2 years ago | (#38134824)

Sadly AMD simply has not performed over the last year or two, with no real answers to Intel's I series.

While I totally agree with your first statement, I don't with the second. The last two years, you say? My desktop is 1 year old, running a quad-core Phenom @ 3.4GHz. Not only was it the best value for money, costing me only 169 euro for the processor, it is also one of the fastest around - up to this very day, even for single-threaded tasks.

Here's a hint: artificial benchmarks don't say a thing. There's one thing where AMD is very, very good and outperforms Intel in every way, and that's memory management. I couldn't care less about floating point performance, or any other dry/wetstone-like test. What does count is how well a processor does running several tasks at a time: running 2 games at once (or 6 if I wish), plus the OS I'm ashamed to name, a numbercruncher, a web browser with a dozen tabs, chatting on Skype, and drawing a 30-layered image with the GIMP.

And this box doesn't skip a beat. It just does it - and each task runs just as well as if it were running alone. Now tell me again, how was AMD not good on the desktop these last 2 years?

Re:AMD needs its swagger back (0)

Anonymous Coward | more than 2 years ago | (#38134948)

-edit- admittedly, I purchased a mobo that was about the same price as the CPU.

Re:AMD needs its swagger back (5, Insightful)

serviscope_minor (664417) | more than 2 years ago | (#38135124)

Sadly AMD simply has not performed over the last year or two,

That's simply not true. On the server side, the quad 6100 1U servers are very competitive, supplying as much power as Intel boxes (sometimes more) for considerably less money. At this point they're a bit of a no-brainer in the server room.

On the desktop, it is different. More of the benchmarks show that the Core i5 is faster than the Phenom II X6 and the 8150. But some benchmarks show that the AMD parts can be considerably faster. The choice is really simple: if your workload is dominated by the kind of things Intel does well, buy Intel; otherwise buy AMD.

The CPUs are simply too close otherwise.

Re:AMD needs its swagger back (3, Insightful)

dc29A (636871) | more than 2 years ago | (#38135580)

We need healthy competition to Intel, to keep pushing tech forward and prices down. Sadly AMD simply has not performed over the last year or two, with no real answers to Intel's I series.

I built a Linux server/desktop earlier this year:
AM3+ motherboard (4 RAM slots, 6 x SATA 6Gb/s ports, 2 x USB 3.0 ports): $90
AMD 1090T six-core CPU: $160

Great performance, incredible value. Once Bulldozer gets better, I can seamlessly upgrade it. Now, I'd like to see an Intel equivalent for this.

Bulldozer outdated already ? (0)

billcopc (196330) | more than 2 years ago | (#38134564)

Can we really call Bulldozer an 8-core processor? Its real-world benchmarks would suggest otherwise. I guess the question should be: is modern computing still so integer-dependent that it would benefit from Bulldozer's twinned integer units? I thought we all switched to full-fat floating-point operations over 15 years ago when the Pentium hit the mainstream and everyone finally had an on-die FPU in their PC.

On a server, I would expect bus throughput to be a deciding factor. I'm not crunching fancy scientific data, mostly ferrying bits from disk to network and back. Having extra cores allows more simultaneous transfers by handling more handshakes and thus connections, but beyond that it's all DMA copies from memory to I/O.

Re:Bulldozer outdated already ? (4, Interesting)

Chrisq (894406) | more than 2 years ago | (#38134582)

I thought we all switched to full-fat floating-point operations over 15 years ago when the Pentium hit the mainstream and everyone finally had an on-die FPU in their PC

It's application dependent. I doubt if much FP stuff gets done in cryptography, routing, and many simulations.

Re:Bulldozer outdated already ? (0)

Anonymous Coward | more than 2 years ago | (#38134612)

I doubt if much fp stuff gets done in cryptography, routing, and many simulations.

None of which is usually CPU-intensive (unless you're brute-forcing), such that it would necessarily need it.

Not saying whether that's correct, but the reasons you've listed aren't big reasons for it.

Re:Bulldozer outdated already ? (1)

Anonymous Coward | more than 2 years ago | (#38134690)

Actually it was 20 years ago, with the 486, that we all got FPUs on die.

Re:Bulldozer outdated already ? (1)

GauteL (29207) | more than 2 years ago | (#38135026)

I had a 486sx. It may technically have had an FPU on the die, but it was defective and disabled.

Re:Bulldozer outdated already ? (-1)

Anonymous Coward | more than 2 years ago | (#38135756)

go back further... the 80386 also had a FPU on-die. the 386SX had the Fpu disabled.

Re:Bulldozer outdated already ? (1)

Hatta (162192) | more than 2 years ago | (#38135184)

It's application dependent. I doubt if much FP stuff gets done in cryptography, routing, and many simulations.

So it's like the Cyrix 6x86?

Re:Bulldozer outdated already ? (4, Interesting)

confused one (671304) | more than 2 years ago | (#38134760)

Windows does not (yet) know how to properly schedule threads on that hardware. This has caused issues with all the benchmarks, not unlike what happened when Intel Hyper-Threading was first released. Once proper support is added to the OS kernels, the results should be much better.

Re:Bulldozer outdated already ? (1)

deKernel (65640) | more than 2 years ago | (#38135116)

You are thinking the same thing I had in the back of my mind. The changes in hardware could very well be just different enough that existing kernels aren't designed to handle them properly. Hyperthreading is the case in point. Once Windows/Linux/BSD/Oracle and such do, in fact, make the changes needed to take full advantage of the hardware, the tests will be more valid. Now, if all or some don't see the need to make any changes, then we can use the word "flop" to describe the current CPU, because if the hardware design requires software changes to exploit the new features and the software does not change: flop, flop and flop.

Re:Bulldozer outdated already ? (3, Interesting)

confused one (671304) | more than 2 years ago | (#38135352)

Tech Report demonstrated this to be the case by setting the thread affinity in their tests, locking them to specific cores and using only one core per module. They saw as much as a 30% improvement in the single-threaded or lightly threaded benchmarks. Other sources, including AMD itself, have demonstrated as much as a 10% improvement in performance from a better thread scheduler. AMD has whitepapers discussing this issue.

As for changing the OS kernels... Windows 8 already has the changes. Windows 7 and Server 2008 may get them in a future update (a Service Pack?). Linux kernel support is ready and available as a kernel patch. Compiler support is now included in VS 2010. So, not necessarily a flop, but it might be a short while before the full capability of the architecture is realized.
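The affinity trick described above can be approximated by hand on Linux. Below is a minimal Python sketch, assuming a hypothetical Bulldozer-style layout where logical cores 2k and 2k+1 share module k (the real topology should be read from the OS); it uses the Linux-only `os.sched_setaffinity` API:

```python
import os

def pin_to_one_core_per_module(num_modules=8):
    """Pin this process to one core per (assumed) Bulldozer module.

    Assumption (hypothetical layout): cores 2k and 2k+1 share module k,
    so we keep only the even-numbered core of each pair, mimicking the
    "one core per module" affinity used in the Tech Report tests.
    """
    if not hasattr(os, "sched_setaffinity"):  # Linux-only API
        return None
    available = os.sched_getaffinity(0)
    preferred = {2 * m for m in range(num_modules)} & available
    mask = preferred or available  # fall back if the layout assumption fails
    os.sched_setaffinity(0, mask)
    return sorted(mask)
```

This only avoids two threads contending for one module's shared front end; a scheduler-level fix does the same thing automatically and without wasting the second core of each module when the machine is fully loaded.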

Re:Bulldozer outdated already ? (1)

chrb (1083577) | more than 2 years ago | (#38135754)

All of the integer ops are executed in those units, so yes, they are important. Every loop, jump, and code branch executed by the processor depends on some integer arithmetic being performed at as low a latency as possible. Even on a completely FPU-less system, you'd be surprised how few floating point ops are actually necessary. Without an FPU you can still do compiling, digital simulations, running kernels and virtualization, web/file/database serving, networking, and cryptography.

Look at the Sun T1/T2 CPUs [wikipedia.org]; they are designed to have low FPU power because the market they target doesn't care: "One of the limitations of the T1 design is that a single floating point unit (FPU) is shared between all 8 cores, making the T1 unsuitable for applications performing a lot of floating point mathematics. However, since the processor's intended markets do not typically make much use of floating-point operations, Sun does not expect this to be a problem. Sun provides a tool for analysing an application's level of parallelism and use of floating point instructions to determine if it is suitable for use on a T1 or T2 platform."

Virtualization (3, Interesting)

Anonymous Coward | more than 2 years ago | (#38134588)

When someone says that a CPU was designed around multiple threads, I think virtualization. Yeah, you can argue that servers are multithreaded in that they have to handle multiple users connecting, but that's bull: I can write a badly threaded application that doesn't effectively use the multiple cores...

So how do these cpus perform with something like ESX running on them?

Scott

Re:Virtualization (2)

gl4ss (559668) | more than 2 years ago | (#38134784)

The Anandtech benchmarks were exactly that, but they didn't make much sense.

Great for BOINC! (3, Interesting)

courteaudotbiz (1191083) | more than 2 years ago | (#38134594)

That's perfect for running BOINC though, which is very good at using multiple cores at their full capacity. Useless for business, but great for contributing to science projects :-)

Server performance on the desktop? (0, Troll)

Zero__Kelvin (151819) | more than 2 years ago | (#38134606)

"AMD's Bulldozer server benchmarks are here"

"One reason for the underwhelming performance on the desktop"

I stopped reading right there. When people start talking about the performance of a server on the desktop, it is pretty clear that they lack even the most basic understanding of what they are talking about.

Re:Server performance on the desktop? (0)

nedlohs (1335013) | more than 2 years ago | (#38134660)

It's more a case of you lacking a basic understanding of English.

Re:Server performance on the desktop? (0)

Zero__Kelvin (151819) | more than 2 years ago | (#38134874)

Should I presume that it is your belief that I cannot understand what you wrote that kept you from offering a justification for your objection? Or is it the case that you know any actual argument you made to support your claim would make you look foolish?

Re:Server performance on the desktop? (1)

nedlohs (1335013) | more than 2 years ago | (#38135752)

I can't see a way of phrasing it differently, so it seems a pointless exercise. Especially considering my writing tends to be verbose and hard to read at best and you had difficulty with what I hope is writing that went through an editor. Of course you stopped reading before the sentence gave the explanation so maybe if I just repeat that sentence and the few following it verbatim:

One reason for the underwhelming performance on the desktop is that the Bulldozer architecture emphasizes multithreaded performance over single-threaded performance. For desktop applications, where single-threaded performance is still king, this is a problem. Server workloads, in contrast, typically have to handle multiple users, network connections, and virtual machines concurrently. This makes them a much better fit for processors that support lots of concurrent threads.

If you really want my unskilled wording:

Previous benchmarks on the desktop variants of the architecture were unimpressive. The architecture emphasizes server features over desktop features and hence the server variant should be much better. Now that server benchmarks are in, however, the results are terrible.

Re:Server performance on the desktop? (0)

Anonymous Coward | more than 2 years ago | (#38135914)

So your comment argues that I don't understand English because I didn't understand what I quite explicitly stated I didn't read then? Thanks for your input. - ZK

Re:Server performance on the desktop? (1)

Kjella (173770) | more than 2 years ago | (#38134890)

Or maybe you should have finished that paragraph that explains:

Server workloads, in contrast, typically have to handle multiple users, network connections, and virtual machines concurrently. This makes them a much better fit for processors that support lots of concurrent threads. Some commentators have even suggested that Bulldozer was, first and foremost, a server processor; relatively weak desktop performance was to be expected, but it would all come good in the server room.

You're bashing them for supposedly not understanding exactly the thing that paragraph is meant to show they do understand. Epic fail.

The EPIC FAIL is your, mi amigo (0)

Zero__Kelvin (151819) | more than 2 years ago | (#38135020)

Or maybe you could think before you post. Talking about desktop performance for a processor designed for a server is like talking about the performance of a race car on trips to the grocery store. Newsflash: things used in applications for which they were not designed do not perform as well as things that were designed for said application!

Re:The EPIC FAIL is your, mi amigo (0)

dave420 (699308) | more than 2 years ago | (#38135260)

Yes, and I'm sure we all agree with you. The issue is THAT IS NOT WHAT THEY WERE SAYING! You've got it in your head that that is the case, but it simply isn't. You misread the article.

Re:The EPIC FAIL is your, mi amigo (0)

Zero__Kelvin (151819) | more than 2 years ago | (#38135428)

"You misread the article."

You misread my post, to wit the first line: "I stopped reading right there." If I was reading Car and Driver, and they started talking about poor gas mileage of a NASCAR car when compared to the Honda Civic, I would stop reading that article too. I simply cannot take seriously an article that goes there. Excuse the pun, but YMMV ;-)

Re:The EPIC FAIL is your, mi amigo (0)

Anonymous Coward | more than 2 years ago | (#38136022)

Wow your nick is apt, you really are an absolute zero.

Here's a more suitable site for you: http://www.learntoreadfree.com/ [learntoreadfree.com]

Please stop cluttering up this site with your crap.

Re:Server performance on the desktop? (0)

dave420 (699308) | more than 2 years ago | (#38135218)

You should have kept reading. The author is highlighting how the performance issues on the desktop were claimed to not be a hindrance on the server. The rest of the article goes into depth on that last point, focussing on the server.

It seems you are the one with the lack of understanding.

Justification (0)

Anonymous Coward | more than 2 years ago | (#38134662)

I thought the justification for the new architecture was that it adds more cores without adding as many transistors as a more traditional architecture, not that it makes the individual cores faster. If it doesn't "show the same regressions as were found on the desktop" and we've got more cores at less size per core, then surely this is a win for the new architecture?

And moreover (2)

unity100 (970058) | more than 2 years ago | (#38134670)

Bulldozer chips are in short supply due to sales. Because they are not able to immediately meet Opteron demand, AMD is keeping 8150 supply low, binning them as Opterons instead, and therefore leaving the desktop market undersupplied. Read the informative thread below.

http://www.overclock.net/t/1171264/compared-3-different-bulldozer-fx-8120s-want-to-know-the-difference/10 [overclock.net]

Bulldozer 8150s have been in short supply on Newegg and Amazon. Sometimes they are out of stock, and you can't even put them on a watchlist.

Way too high sales for a 'failed' processor?

Re:And moreover (3, Insightful)

PIBM (588930) | more than 2 years ago | (#38135008)

Or simply, way too low yield...

Re:And moreover (1)

unity100 (970058) | more than 2 years ago | (#38135078)

If it were a catastrophe, there wouldn't be enough sales to cause yield issues either.

Re:And moreover (1)

PIBM (588930) | more than 2 years ago | (#38135198)

Sadly, there are too many fanboys just like someone I know.

Re:And moreover (1)

unity100 (970058) | more than 2 years ago | (#38135088)

Re:And moreover (1)

PIBM (588930) | more than 2 years ago | (#38135222)

That's great news! That way, no one will make the error of buying one!

Now, go away.

Questioning the benchmark procedures (2)

KXeron (2391788) | more than 2 years ago | (#38134680)

One element has me curious about how these benchmarks were prepared: is the benchmark software compiled on the target platform/CPU combination with all available optimisations for that platform?

Many of these benchmarks ship a binary/library (or set thereof) built for a single target platform (the platform the original benchmark developers were working on): usually pre-compiled, usually for Intel, on an Intel system, by an Intel compiler, with Intel optimisations, or at least two of the four. This same binary is then run on whatever compatible systems are being compared, which has a high potential to produce skewed results on non-Intel platforms, as not all manufacturers use the same optimisations.

While this specific processor may not be as great as it should have been, I feel that benchmarks in themselves are usually flawed and must be taken with a grain of salt until real-world software outside a lab-style environment is run on it.
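The fix for the cross-platform-binary problem is to choose optimisation flags on the build host itself rather than reusing one vendor-tuned binary everywhere. A small illustrative sketch (the `-march=native`/`-mtune=native` flags are standard GCC/Clang options; the helper function itself is hypothetical):

```python
import platform

def native_cflags():
    """Pick benchmark build flags for the *host* CPU, instead of reusing
    a binary that was compiled and tuned on some other vendor's machine."""
    if platform.machine() in ("x86_64", "AMD64", "amd64"):
        # -march=native / -mtune=native target the exact microarchitecture
        # of the build host (AMD or Intel), enabling all of its ISA
        # extensions rather than those of the original build machine.
        return ["-O2", "-march=native", "-mtune=native"]
    return ["-O2"]  # conservative fallback for other architectures
```

Building each benchmark this way on each machine under test removes one obvious source of skew, at the cost of no longer comparing bit-identical binaries.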

Lets play a game. (-1)

Anonymous Coward | more than 2 years ago | (#38134692)

Those use multiple threads well.

The name of this game is "Spot the Intel astroturfers".

That article was a catastrophe... (2)

synapse7 (1075571) | more than 2 years ago | (#38134708)

Maybe it's early, but I was having a hard time seeing the comparisons they were trying to make. Also, when Ars was comparing pricing (X system is 400k and Y system is 600k), what the hell was that? Usually stats like that would be accompanied by a link to said system. It said benchmarks were "here", but I didn't see any. I'd like to see benchmark details such as the OS. It may be too early to judge, as this is a first-generation chip; will Bulldozer perform better under the next iteration of Windows (if that was the control)?

Windows is not optimized for Bulldozer (5, Informative)

Anonymous Coward | more than 2 years ago | (#38134786)

TPC-C was performed on Windows 2008; see http://www.tpc.org/tpcc/results/tpcc_result_detail.asp?id=111111501
Anandtech tested on Windows 7.
It is known that Windows 7 and 2008 are not optimized for Bulldozer, especially at the task-scheduling level.
So we do not know the real power of the Bulldozer architecture in the Windows world yet.
See http://hexus.net/tech/news/cpu/32394-bulldozer-benchmarks-correct-definitive which unfortunately has only very few benchmarks.
You can also look at the Phoronix site, where Bulldozer is tested on Linux.

Re:Windows is not optimized for Bulldozer (2)

confused one (671304) | more than 2 years ago | (#38135396)

Tom's Hardware and Tech Report have also discussed and tested this theory. They found the current Windows scheduler does not use the hardware correctly. AMD has whitepapers available explaining the issue and the changes required for the scheduler(s). Microsoft is working to make the changes, as is the Linux kernel team.

A much better source for this kind of information. (5, Informative)

Bleek II (878455) | more than 2 years ago | (#38135054)

Anandtech.com provides much more knowledgeable and professional reviews. They had this to say about AMD's new chip: "Unfortunately, with the current power management in ESXi, we are not satisfied with the Performance/watt ratio of the Opteron 6276. The Xeon needs up to 25% less energy and performs slightly better. So if performance/watt is your first priority, we think the current Xeons are your best option. The Opteron 6276 offers a better performance per dollar ratio. It delivers the performance of $1000 Xeon (X5650) at $800. Add to this that the G34 based servers are typically less expensive than their Intel LGA 1366 counterparts and the price bonus for the new Opteron grows. If performance/dollar is your first priority, we think the Opteron 6276 is an attractive alternative." http://www.anandtech.com/show/5058/amds-opteron-interlagos-6200/14 [anandtech.com]
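The quoted trade-off is easy to sanity-check with a few lines, using only the two prices from the quote and normalizing performance to the Xeon X5650 (a deliberate simplification, since Anandtech calls the Xeon "slightly better"):

```python
def perf_per_dollar(relative_perf, price_usd):
    """Performance-per-dollar ratio for a CPU at a given street price."""
    return relative_perf / price_usd

# Prices from the quoted Anandtech conclusion; performance normalized
# so that the Xeon X5650 = 1.0 and the Opteron is treated as equal.
xeon_x5650   = perf_per_dollar(1.0, 1000)
opteron_6276 = perf_per_dollar(1.0, 800)
# At equal performance, the $800 Opteron yields 1000/800 = 1.25x the
# performance per dollar of the $1000 Xeon.
```

That ~25% price advantage is exactly why the perf/dollar and perf/watt verdicts in the review point in opposite directions.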

Brings high end computing down to the home. (0)

Anonymous Coward | more than 2 years ago | (#38135112)

For those people who do lots of media transcoding, and 3d rendering, either as part of work, or just on their own time, I feel that the 62xx series are fantastic. I mean, under $6,000 for a 64-core workstation with 128GB of ram, and the capacity to add a high end video card? Consider me sold. It's like a cluster array in the bedroom, without having to worry about the networking headache.

Yes, performance falls behind in a few sectors, but compared to where computers were 3 years ago (my last large build), the 62xx chips pull ahead in every category.

Just because something isn't the fastest doesn't mean that it isn't fast enough xD

Hell, it's tempting to build such a system just for giggles and bragging rights.

One 2.1GHz 62xx core is still faster than my old Athlon 64 3000+, and that ran my games for ages and ages xD

Odd (0)

Anonymous Coward | more than 2 years ago | (#38135150)

Why would he even mention the idea of editorializing about the end of AMD?

If one failed product meant the end of a company, we'd have no companies. Intel has screwed up a lot in the past, and they're still around...

Bad article... (4, Insightful)

Junta (36770) | more than 2 years ago | (#38135302)

I'm suspicious that Bulldozer is going down remarkably like NetBurst (NetBurst made design compromises for marketable massive clock gains; Bulldozer similarly makes compromises to boost the now-marketable core count), and time may prove that wrong, but this article was crap.

It looked like they cherry-picked some benchmarks from the world at large with no control. As pointed out in the article, the tpmC benchmark had massive storage differences, and the cost delta means there were probably node-count differences. There are so many things in play that it is impossible to derive any sort of statement specifically about the processors. The article, however, uses that cost as a point to show AMD is more expensive, making AMD look bad, but in the same breath says better SSDs probably drove the benefit, stealing AMD's thunder. He can't have it both ways. I'm inclined to believe the storage architecture was the key factor in both cost and performance, given the nature of the test.

Later, the article says AMD should have just done a 16-core Magny-Cours. Clearly AMD should hire him, as he is a genius who *must* have considered all the complexities and figured out a way to achieve that core density when no one else in the industry has. No one pretends for a second that a Bulldozer module matches 2 'real' cores, but AMD can't just wave a wand and make a 16-core package of the old architecture. Bulldozer is all about trying to isolate the 'important' bits of a core and share the other bits, in the hope that the added resources give most of the benefit of an additional core without the downsides that make it impossible to fit that many cores on a socket.

Sunk cost fallacy (2, Insightful)

JDG1980 (2438906) | more than 2 years ago | (#38135306)

Bulldozer can't consistently beat Phenom X6 in desktop workloads.

It can't consistently beat Magny-Cours in server workloads.

It doesn't seem to be any more power-efficient than AMD's last generation, despite being built on a smaller process node (32nm vs 45nm).

At what point does AMD simply admit Bulldozer is a failure, pull the plug, and write off the sunk costs? Putting good money after bad is a classic business mistake that has killed many companies.

AMD should continue improving their existing cores on the 32nm process (they already have some of the work done with Llano) and forget about their "revolutionary new" architecture which is basically this decade's Prescott.

Or, heck, see if it's possible to scale up the Bobcat cores for mainstream desktop use. Don't forget, Intel's very successful Core 2 Duo came from a previous design (Pentium M) that had been reserved for laptops. AMD will probably have more luck increasing performance (both raw clock and IPC) on Bobcat than trying to tame the heat, insane transistor count, and long pipeline of Bulldozer.

Not much thought then (0)

Anonymous Coward | more than 2 years ago | (#38135320)

>but my untrained eye can't yet see any possible silver lining in these new processors.

Maybe you need to buy some new glasses then, or are you just another of the Intel trolls?

Same ol' BS benchmarks (1)

sgt scrub (869860) | more than 2 years ago | (#38135750)

After clicking on links, I finally found some benchmarks. As usual, they were bullshit. Can't these people think of a test that puts them through real hoops? I used to throw 60G pcap files (1 minute of traffic) at machines to determine if the hardware could run our IPS software. The machine with the fewest millions of unprocessed threads won. The application opened a thread for every packet that traversed a 1G NIC. The content of each packet was then sent (branched) through the appropriate inspections simultaneously: one thread for each protocol check, one thread for each header check, and one thread for each regular expression on the body, making a potential (65,535^2^10k + 4^252^200) new threads per second. No branch prediction can be used in this kind of test because the traffic is never predictable, so every path for every packet must be traversed completely. Note: the 10k and 200 are the number of rules (regular expressions) applied to the packets.
