
AMD Launches Piledriver-Based 12 and 16-Core Opteron 6300 Family

timothy posted about a year and a half ago | from the launching-needs-less-wooing-than-unveiling dept.


MojoKid writes "AMD's new Piledriver-based Opterons are launching today, completing a product refresh that the company began last spring with its Trinity APUs. The new 12 & 16-core Piledriver parts are debuting as the Opteron 6300 series. AMD predicts performance increases of about 8% in integer and floating-point operations. With this round of CPUs, AMD has split its clock speed Turbo range into 'Max' and 'Max All Cores.' The AMD Opteron 6380, for example, is a 2.5GHz CPU with a Max Turbo speed of 3.4GHz and a 2.8GHz Max All Cores Turbo speed."


133 comments

Boing (-1)

Anonymous Coward | about a year and a half ago | (#41879073)

avast me hearties!

shared FPU (4, Interesting)

Janek Kozicki (722688) | about a year and a half ago | (#41879077)

The 6200 series has a shared FPU (floating point unit), which means there are fewer FPUs than there are processing cores. To multiply two floating point numbers, cores wait in a queue until an FPU is free; this happens when all cores are calculating at the same time. If you are doing intensive calculations, this is going to be slower than the 6100 series, which has a dedicated FPU for each core.

I know this because we were recently buying a new cluster for calculations using YADE software.

Now, here's the question: how about the 6300 series? Is there a dedicated FPU per core?

Re:shared FPU (0)

Anonymous Coward | about a year and a half ago | (#41879109)

For most people, this would not be an issue. Very little of the average CPU's time is spent doing floating point math.

Re:shared FPU (2)

pipatron (966506) | about a year and a half ago | (#41879235)

If you buy a 12 or 16-core CPU, it's not because you want your Facebook page to load faster. It's because you have some serious parallel workload to process, likely involving a lot of calculations.

Re:shared FPU (2, Insightful)

Anonymous Coward | about a year and a half ago | (#41879269)

For some people, that's true. Others, not. Don't presume to know other people's applications. Multithreaded != FPU intensive

Re:shared FPU (4, Insightful)

neyla (2455118) | about a year and a half ago | (#41879301)

*shrug* Today's "top of the line" is tomorrow's facebook-renderer. I've got an 8-core CPU, and didn't even want one; it was just a side-effect of buying a reasonably-specced machine on other factors (that I -did- care about), with 8 cores being standard in a workstation in that performance range. If there'd been a "$25 off for half the cores" option I'd gladly have taken it, but there wasn't. (Yes, I know I could roll my own.)

Re:shared FPU (1)

jawtheshark (198669) | about a year and a half ago | (#41881975)

Just a question: what CPU is that? I've got a nice Core i7 (which I rarely use; it was cheap) that sports four cores. The task manager shows 8 cores, but that's because of hyperthreading [wikipedia.org]. I haven't been keeping up to date; I might have missed some leaps in the number of cores on the desktop market.

Re:shared FPU (0)

Anonymous Coward | about a year and a half ago | (#41882763)

They said workstation, so it's presumably the same 8-core Intel Xeon we have in our servers (which are hyperthreaded too, so the dual-CPU servers can run 32 threads).

Re:shared FPU (4, Insightful)

Anne Thwacks (531696) | about a year and a half ago | (#41879361)

If it's MONEY you are talking about, they are probably integer calculations. I am fairly certain our servers only ever execute floating point instructions by accident.

Re:shared FPU (1)

Sulphur (1548251) | about a year and a half ago | (#41880039)

If it's MONEY you are talking about, they are probably integer calculations. I am fairly certain our servers only ever execute floating point instructions by accident.

Sounds like a fixed point bug.

Re:shared FPU (5, Insightful)

ByOhTek (1181381) | about a year and a half ago | (#41879417)

Yep. Lots of servers where I work. Lots of high-CPU-use stuff. About 30-40 different applications across the servers. The vast majority of what they do is integer math. I doubt we'd notice if the CPUs shipped with the floating point math faked by the integer side of the house.

Mind you, another place I worked had twice as many applications, and fewer than a dozen were integer intensive; the rest were FP intensive. I.e., not everyone needs the large number of FPUs. There are different use cases, and if cutting the number of FPUs down reduces the CPU price and the power consumption, some of us would be all over it.

Re:shared FPU (0)

Anonymous Coward | about a year and a half ago | (#41880111)

I imagine if doing tons of FP ops was your priority, you would be using CUDA and a cluster of GPUs.

Re:shared FPU (1)

ByOhTek (1181381) | about a year and a half ago | (#41880899)

That would make sense, but I was working with such a group and NONE of the commercial software exported any workload to the GPUs. Also, these days, GPUs tend to do almost as well with integer performance as FP performance, so either way, offloading is not a bad idea.

Re:shared FPU (0)

Anonymous Coward | about a year and a half ago | (#41879579)

You mean like a web server that is doing no FPU work?

Re:shared FPU (4, Interesting)

DarkOx (621550) | about a year and a half ago | (#41879687)

No, actually, in most cases it's likely you are using it to drive a host server for a bunch of VMs. I am pretty sure that is the largest market segment for 16-core x86-64 processors today.

Re:shared FPU (1)

h4rr4r (612664) | about a year and a half ago | (#41880041)

That and webservers.
Or video processing, assuming for some reason you can't use GPU acceleration.

Re:shared FPU (0)

Anonymous Coward | about a year and a half ago | (#41881291)

That and database servers

Re:shared FPU (0)

Anonymous Coward | about a year and a half ago | (#41881567)

Horribly presumptuous, to the extent that you might as well not have said anything at all. There are many, many common workloads which absolutely are not FP-bound yet still lend themselves to parallelism.

Basically, your statement is best completely ignored.

Re:shared FPU (1)

Joce640k (829181) | about a year and a half ago | (#41883099)

If you buy a 12 or 16-core CPU, it's not because you want your Facebook page to load faster. It's because you have some serious parallel workload to process, likely involving a lot of calculations.

Even so, most software will have a difficult job using up every available FPU processing slot. Sharing the FPU between CPUs might not be as bad as you imagine. Having 16 cores/8 FPUs is almost certainly better than 8 cores/8 FPUs because you'll keep the FPUs busier. If your code does a lot of expensive operations like divides and sqrt()s then even 4 FPUs might be able to keep up with the CPU.
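A minimal sketch of that effect (my example, not the poster's; assumes Linux and GCC, timing a dependent chain of multiplies vs. divides so the loop is latency-bound):

/* Hypothetical micro-benchmark: latency-bound FP multiply vs divide.
 * Build: gcc -O2 -o fpops fpops.c (the volatile keeps the loop honest). */
#include <stdio.h>
#include <time.h>

static double bench(int use_div) {
    struct timespec t0, t1;
    volatile double x = 1.000001;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < 100000000L; i++) {
        if (use_div)
            x = x / 1.000000001;   /* divide: tens of cycles of latency */
        else
            x = x * 1.000000001;   /* multiply: a handful of cycles */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    printf("mul: %.2fs  div: %.2fs\n", bench(0), bench(1));
    return 0;
}

If the div loop runs noticeably slower than the mul loop, that gap is exactly the headroom a shared FPU can soak up.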

Re:shared FPU (2)

ByOhTek (1181381) | about a year and a half ago | (#41879131)

I assume that they have 8 FPUs? I'm curious to see how they split up. Do the 6300s have a similar shared-FPU configuration?

Damn, according to AMD's site, they have a TDP of 140W or 115W. Then again, that's 8.75W/core and 7.1875W/core. At 115W, the 12-core 6300s are about 9.6W/core. The 6200s have the same thermal profile, but with slightly lower clock speeds.

The 6100s are 85W, 115W and 140W for 12 cores (and don't have quite as high a clock speed).

Good numbers on the idle power would be nice too. I guess it's time to do some research.

Re:shared FPU (-1, Flamebait)

Anonymous Coward | about a year and a half ago | (#41879141)

"Which means that there are less FPUs THAT there are processing cores"

Let me guess - you're American.

Re:shared FPU (0, Flamebait)

K. S. Kyosuke (729550) | about a year and a half ago | (#41879239)

You must be Chinese, otherwise you'd notice the 'less' where 'fewer' would be a better fit, but I digress - do you really feel this urge to make stupid remarks in public? Go get it out of your system, chop up some firewood or run a few miles, you'll feel better, trust me.

Re:shared FPU (1, Insightful)

ByOhTek (1181381) | about a year and a half ago | (#41879393)

Probably not. We don't say 'that there are' here in America; we would use 'than'. Come on over some time; it might help alleviate some of that burden of ignorance you have.

There are also these things called 'typos', where people make a mistake in their typing, usually because they are thinking faster than they can type.

Re:shared FPU (1)

SuricouRaven (1897204) | about a year and a half ago | (#41879209)

Only if you're doing FP-intensive calculations, though. Heavy floating point math is actually quite rare outside of science and engineering, and even then I imagine that a substantial part of the processor time is spent on non-floating-point parts of the algorithm.

Re:shared FPU (1)

confused one (671304) | about a year and a half ago | (#41882177)

And if you look at AMD's strategy, they seem to be suggesting people look at recompiling so that floating-point-intensive code runs on GPUs. The GPUs are better suited to those computations than an FPU in a general purpose CPU core or module.

Re:shared FPU (1)

Anonymous Coward | about a year and a half ago | (#41879245)

You realize the FPU is also twice as wide as on K10, right?

Re:shared FPU (1)

unixisc (2429386) | about a year and a half ago | (#41879337)

Looks like the right market for you is a POWER- or Itanium-based system. Any idea which RISC-based workstations are still left standing?

Shared, but it can be split into two (5, Informative)

Anonymous Coward | about a year and a half ago | (#41879541)

Yes, they have a shared 256-bit FPU, but it can be split into two 128-bit parts. So no: multiplying two floating point numbers in two threads is performed immediately and simultaneously; the cores do not wait at all. I measured this on a previous-generation Opteron 6234, and the performance loss from running two threads on two cores of the same module vs. two cores in different modules was barely measurable, about 3%.
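A sketch of how such a measurement can be reproduced (my code, not the poster's; assumes Linux, GCC, glibc's non-portable affinity call, and that cores 0/1 share a module while 0/2 don't -- verify your topology first, e.g. with lstopo):

/* Hypothetical module-contention test: pin two FP-heavy threads to a
 * chosen pair of cores and compare wall-clock times between pairs.
 * Build: gcc -O2 -pthread -o fpu_pair fpu_pair.c */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

static void *spin_fp(void *arg) {
    int cpu = *(int *)arg;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    volatile double x = 1.0000001;          /* volatile: keep the FP work */
    for (long i = 0; i < 400000000L; i++)
        x *= 1.0000000001;                  /* nothing but FP multiplies */
    return NULL;
}

int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s coreA coreB\n", argv[0]);
        return 1;
    }
    int a = atoi(argv[1]), b = atoi(argv[2]);
    pthread_t t1, t2;
    pthread_create(&t1, NULL, spin_fp, &a);
    pthread_create(&t2, NULL, spin_fp, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

Run it as "time ./fpu_pair 0 1" (same module) and "time ./fpu_pair 0 2" (different modules); the gap between the two timings is the sharing penalty.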

Re:shared FPU (2, Informative)

Anonymous Coward | about a year and a half ago | (#41879549)

Depends on what you mean by "FPU". The "shared" FPU is really a single shared 256-bit SIMD unit that can also double as an FPU. It can do one 256-bit AVX op, two 128-bit SSE ops, four 64-bit floats, or eight 32-bit floats per cycle. It is fully shared, and is also capable of having one core do a 128-bit SSE op while the other core does two 64-bit floats per cycle, or one core do four 32-bit floats while the other does two 64-bit floats (assuming no dependencies and the OoO scheduler can manage it).

The only time this FPU is actually contended is when a 256-bit AVX instruction is being executed, *or* in the corner case that one core could have done four 64-bit floats out-of-order but is now limited to only two.
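To make those widths concrete, a small sketch using the usual x86 intrinsics (mine, not the parent's; assumes an AVX-capable CPU and gcc -mavx):

/* One 128-bit op carries 4 single-precision lanes; one 256-bit op
 * carries 8. On Bulldozer/Piledriver, two 128-bit ops (one per core of
 * a module) can issue side by side; a 256-bit op occupies the whole
 * shared unit. Build: gcc -O2 -mavx -o widths widths.c */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128 a4 = _mm_set1_ps(2.0f), b4 = _mm_set1_ps(3.0f);
    __m128 r4 = _mm_mul_ps(a4, b4);        /* 4 float multiplies at once */

    __m256 a8 = _mm256_set1_ps(2.0f), b8 = _mm256_set1_ps(3.0f);
    __m256 r8 = _mm256_mul_ps(a8, b8);     /* 8 float multiplies at once */

    float out4[4], out8[8];
    _mm_storeu_ps(out4, r4);
    _mm256_storeu_ps(out8, r8);
    printf("sse lane: %.1f  avx lane: %.1f\n", out4[0], out8[0]);
    return 0;
}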

Re:shared FPU (0)

Anonymous Coward | about a year and a half ago | (#41879665)

Which means that there are less FPUs that there are processing cores.

Fewer, not less.

Re:shared FPU (1)

Anonymous Coward | about a year and a half ago | (#41879729)

You are simply wrong.
The FPU can do 4 individual 128-bit operations per clock, of which two can be multiply-accumulate operations (D = A*B + C). As each 128-bit operation can do up to 4 x 32-bit or 2 x 64-bit float operations, the FPU can do a peak of 2 (FMA units) x 4 (32-bit lanes per op) x 2 (one multiply + one add) = 16 single precision FLOPS per clock.
The Intel Ivy Bridge (the current generation) can do one 256-bit floating point multiplication and one 256-bit floating point add per clock. The peak is therefore 256/32 = 8 single precision floats x 2 = 16 single precision FLOPS per clock.

Now, if you use the Intel processors with hyperthreading disabled, then yes, each Intel core has a theoretical peak 2x higher than an AMD core. In practice the difference isn't that high, partly because even floating-point-intensive code has many integer instructions, and partly because it's easier to do 128-bit SIMD than 256-bit SIMD in many cases.

If you do use hyperthreading, well, then the throughput per Intel virtual core and per AMD core is the same.
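Spelled out, using the poster's own numbers (peak single-precision FLOPS per clock, per AMD module vs. per Intel core):

\begin{align*}
\text{AMD (one module)}: \; \underbrace{2}_{\text{FMA units}} \times \underbrace{4}_{\text{32-bit lanes per 128-bit op}} \times \underbrace{2}_{\text{mul+add per FMA}} &= 16 \\
\text{Intel (one core)}: \; \underbrace{8}_{\text{32-bit lanes per 256-bit op}} \times \underbrace{2}_{\text{mul port + add port}} &= 16
\end{align*}

An AMD module is two cores, so per core that's 8; with hyperthreading on, Intel's 16 is likewise split across two hardware threads.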

Re:shared FPU (1)

cheesybagel (670288) | about a year and a half ago | (#41882119)

Yeah, but in practice a lot of people don't have binaries compiled with FMA instructions. Those will probably only start showing up long after Haswell comes out. This means that for a lot of people, AMD's processor will seem to have half the peak FLOPS it actually has.

Re:shared FPU (0)

Anonymous Coward | about a year and a half ago | (#41883559)

Piledriver's FPU structure is similar to Bulldozer's, so the answer is no: there is not a dedicated FPU per core on the 6300 series.

Compared to Intel's offerings, how do these compar (4, Insightful)

Hadlock (143607) | about a year and a half ago | (#41879091)

I'm not even sure how you could post a story about AMD, what with its decline this entire last decade, and not directly compare them to Intel.

Are these even desktop or server chips? It's been so long since I bought AMD, I really couldn't tell you which line Piledriver sits in anymore, or if they've consolidated them.

The general gist I've read is that AMD is cheaper than Intel, and in the past was "more green" due to power consumption, but with Ivy Bridge, bang for the buck and a much, much smaller lithography process have given Intel the advantage in both areas.

Re:Compared to Intel's offerings, how do these com (2)

ericloewe (2129490) | about a year and a half ago | (#41879143)

Piledriver is the architecture, like Intel's Ivy Bridge is the architecture.

These are server chips. Best case, these are finally faster than their pre-Bulldozer parts in real, consumer desktop use. They will not beat an 8 core Sandy Bridge Xeon in FP-heavy applications, and power consumption is, at best, on the same level as the Xeons.

All they can do is work like crazy on their next line (Steamroller, is it?) so they're truly competitive again.

AMD calculate TDP differently. (4, Informative)

Anonymous Coward | about a year and a half ago | (#41879459)

Intel calculate their TDP based on full load which isn't necessarily maximum power use.

AMD calculate their TDP based on maximum power use.

Re:AMD calculate TDP differently. (0)

Anonymous Coward | about a year and a half ago | (#41880225)

Except AMD tends to hit its TDP much more often.

Stock Intel i7-920 (2.66GHz, 1.2V): TDP 130 watts.
Intel i7-920 @ 3.8GHz, 1.3V: actual power draw with 8 threads of prime95 (blended): about 95 watts.

I've seen benchmarks of stock Bulldozers exceeding TDP as power draw. I have not seen Intel exceed TDP as power draw, and Intel has a lower TDP.

Re:AMD calculate TDP differently. (1)

Anonymous Coward | about a year and a half ago | (#41881051)

Except AMD tends to hit its TDP much more often.

Do you care to add citations? I have a single-processor 12-core Opteron 6234 (Bulldozer) server, with the processor rated at 115W, plus 4 drives and 48GB of RAM. Even if I stress-test the processor with 100% load on all cores using a burn-in tool, I cannot get past 160W for the whole server, and that includes the ~20% loss in the power supply. The processor is obviously well below its TDP.

Re:AMD calculate TDP differently. (0)

Anonymous Coward | about a year and a half ago | (#41881729)

Exceeding TDP could also mean that the cores are running very busy instead of idling. From a performance point of view that's not necessarily bad, other than power consumption, cooling, etc.

Re:Compared to Intel's offerings, how do these com (-1)

Anonymous Coward | about a year and a half ago | (#41879507)

Piledriver is the architecture, like Intel's Ivy Bridge is the architecture.

These are server chips. Best case, these are finally faster than their pre-Bulldozer parts in real, consumer desktop use. They will not beat an 8 core Sandy Bridge Xeon in FP-heavy applications, and power consumption is, at best, on the same level as the Xeons.

All they can do is work like crazy on their next line (Steamroller, is it?) so they're truly competitive again.

I piledrivered yo mama.

Re:Compared to Intel's offerings, how do these com (4, Informative)

SQL Error (16383) | about a year and a half ago | (#41879843)

Piledriver is the architecture, like Intel's Ivy Bridge is the architecture.

These are server chips. Best case, these are finally faster than their pre-Bulldozer parts in real, consumer desktop use. They will not beat an 8 core Sandy Bridge Xeon in FP-heavy applications, and power consumption is, at best, on the same level as the Xeons.

That's true. A 16-core Opteron has the same FP width as an 8-core Xeon, and a higher TDP for a given clock.

On the other hand, we buy almost all AMD because it lets us build cheap 1U or 2U 4-socket servers with 512GB of RAM each. 4-socket Intel chips (E5-4600 or E7) are much more expensive; mid-range servers work out to 50% more for Intel, and high-end servers about 80% more for equivalent speed.

Re:Compared to Intel's offerings, how do these com (0)

Anonymous Coward | about a year and a half ago | (#41882269)

Why do you need quad socket? Have you compared the supermicro boards with two E5-2600 Xeons and up to 512 GB RAM as 16 DIMMs? They even have ones with onboard dual 10G-BaseT in case that helps cut costs for your datacenter networking...

Re:Compared to Intel's offerings, how do these com (3, Informative)

gagol (583737) | about a year and a half ago | (#41879149)

Are these even desktop or server chips? It's been so long since I bought AMD, I really couldn't tell you which line Piledriver sits in anymore, or if they've consolidated them. The general gist I've read is that AMD is cheaper than Intel, and in the past was "more green" due to power consumption, but with Ivy Bridge, bang for the buck and a much, much smaller lithography process have given Intel the advantage in both areas.

Server chips. Opteron has always been about servers.

I am not a business owner and do not operate servers myself. For home use, a low-priced CPU with adequate power beats Intel's "we-cripple-all-but-i7" features anytime in value for my $. I do not do geek pissing contests.

the i5 is generally uncrippled for home use (1)

Chirs (87576) | about a year and a half ago | (#41881445)

It supports turbo boost, virtualization (vt-x and vt-d), speedstep, etc.

Re:Compared to Intel's offerings, how do these com (1)

ByOhTek (1181381) | about a year and a half ago | (#41879169)

AMD has lost on performance/watt recently. These are intended as server chips (G34 socket, not AM3+).

These might bring performance per watt back, as AMD has seemed to scale better in the multi-CPU-per-box / multi-core-per-CPU segment recently.

Re:Compared to Intel's offerings, how do these com (4, Informative)

greg1104 (461138) | about a year and a half ago | (#41879267)

These Opteron models are the new server line from AMD. The desktop version based on the same architecture (the Trinity alluded to in the summary) closed some of the gap against Intel [extremetech.com]. But Intel remains the market leader on single-core performance, performance per core, and power utilization. AMD continues to push the number of cores upward more aggressively, but there aren't many workloads where that matters enough for their slim advantage to result in a net win. And the lower efficiency means that sometimes even having more cores doesn't aggregate into enough speed to be a useful alternative. That leaves AMD to compete on pricing. And the CPU is a relatively small part of the total budget on larger servers. Load up a Dell 815 [dell.com] for example and you'll find the CPU pricing seems small compared to what filling its RAM capacity up costs. And then there's reliable storage, at a whole other price level altogether.

The rule of thumb I've been using for the last year, based on benchmarking of CPU-heavy database work, is that I expect a 32-core AMD server to be about as fast as a 24-core Intel one, while using significantly more power. The 40% performance-per-watt gain claimed here--from AMD's own hand-picked best-case benchmark--is only enough to shrink the Intel performance gap, not close it. We'll see if these new Opterons benefit from the recent re-engineering work more than the desktop parts did; so far it doesn't look good.

Re:Compared to Intel's offerings, how do these com (2, Insightful)

serviscope_minor (664417) | about a year and a half ago | (#41879451)

AMD continues to push the number of cores upward more aggressively, but there aren't many workloads where that matters enough for their slim advantage to result in a net win.

I disagree: that's exactly what Xeon and Opteron are about. What differentiates those two from the Core and Phenom processors is that the former have multiple crazy-fast and very expensive low-latency links to allow glueless multi-socket systems. Once you've got an 8-core/16-thread Xeon or a 16-core Opteron and more than one socket, you're already expecting a workload to scale to 32 distinct units.

Basically these chips only make sense for pretty parallel workloads.

Load up a Dell 815 for example and you'll find the CPU pricing seems small compared to what filling its RAM capacity up costs.

I use this as my go-to online quoter.

http://www.woc.co.uk/default6.aspx?nquoter=13 [woc.co.uk]

I have no affiliation except that I've bought such machines from them before.

Maxing out the RAM (512G) costs £3300. Maxing out the processors costs £2300. It's not quite as much, but it's substantially over half the price.

That said, I've heard rumours that the new Opterons can drive 32GB DIMMs, in which case you could load it up with 1TB of RAM for the low, low price of £30,000. In which case, your point certainly stands.

The 40% performance-per-watt gain claimed here--from AMD's own hand-picked best-case benchmark--is only enough to shrink the Intel performance gap, not close it

True, but the Opterons are substantially cheaper. If you factor in lifetime cost including bang for buck, power and cooling, it's basically a wash and really dependent on the specific workload.

If they have closed the gap this much, then they will be a substantially cheaper option overall.

Re:Compared to Intel's offerings, how do these com (1)

greg1104 (461138) | about a year and a half ago | (#41879749)

I wasn't clear enough on what I meant by number of cores. AMD's strengths when they did well in the server market (2003 to 2009) included more sockets, more cores per socket, and higher memory bandwidth to each socket. At this point the only one of those leads they maintain is that they still cram more real cores onto a socket than Intel does. Presuming the number of sockets is the same, I was suggesting that AMD's higher core count per socket doesn't give them much of a real-world advantage. As you suggested, the multi-socket situation isn't different enough between AMD and Intel for it to be a competitive advantage for either anymore; that's fairly level now.

An Intel server with 8 cores and the current generation of HyperThreading is not necessarily any slower than these new AMD ones with 16 real cores. There are times you run into memory bandwidth issues at the top end of concurrency, and Intel has been the leader on that since Nehalem in 2009. At the low end of active cores, sometimes there is just one thing you want to run really fast, and there Intel's Turbo approach is still better than AMD's. The middle area where AMD is at least competitive--lots of cores active but not constrained by memory bandwidth--is not that wide of a range of server workloads.

Re:Compared to Intel's offerings, how do these com (1)

serviscope_minor (664417) | about a year and a half ago | (#41879909)

There are times you run into memory bandwidth issues at the top end of concurrency, and Intel has been the leader on that since Nehalem in 2009.

Certainly on the desktop. I thought on the server they both have quad-channel DDR3 per socket. Of course, that gives Intel higher per-core bandwidth.

I thought the 6200 series supports slightly higher-clocked memory (1866) compared to Intel (1600).

I've not checked more thoroughly than looking up a few figures, though.

Re:Compared to Intel's offerings, how do these com (1)

dbIII (701233) | about a year and a half ago | (#41880459)

and you'll find the CPU pricing seems small compared to what filling its RAM capacity up costs

Yes, but you are going to get the same amount of RAM, and almost always the same type, no matter which way you go. The price of a CPU really matters once you go beyond a couple of sockets. Also, fuck Dell, since you just get the whitebox of the week with a Dell badge on it - Supermicro and a long list of others will give you something better far cheaper, and may even give you support from someone based in your own country who speaks your own language.

Re:Compared to Intel's offerings, how do these com (0)

Anonymous Coward | about a year and a half ago | (#41879561)

My wallet called; it said your fanboy attitude will not change its mind about buying an AMD CPU.

Re:Compared to Intel's offerings, how do these com (2)

Kartu (1490911) | about a year and a half ago | (#41879931)

The "Athlon 64", released in December 2003, beat the P4 in all regards: price, power consumption, performance. Intel recovered from it only in January 2006 with the first "Core" CPU. How does that put AMD in "decline" for the "entire last decade", pretty please?

No Ivy bridge on the top Xeons yet anyway (1)

dbIII (701233) | about a year and a half ago | (#41880343)

Are these even desktop or server chips?

With respect, why are you even commenting on this if you didn't get that much out of the summary? I'll try, though: these are for the sort of servers where a lot of tasks are done in parallel, and it's a big deal, since the best comparable Intel chips are 10-core, 2GHz, and horribly expensive. That may change, but Intel doesn't seem so interested in that end of the market for now and has let AMD undercut them by several multiples of the price ($9,000 for 64 cores vs $80,000 for 80 cores back in January).
It could be argued that 10 Xeon cores act like 20 Opteron cores, but that really depends on exactly what the tasks are.

Bang for the buck (0)

Anonymous Coward | about a year and a half ago | (#41880491)

My anecdotal experience is that Intel's total cost of ownership is quite steep.

I had an ASRock AM2+/AM3 motherboard I purchased in 2010 for $80. I also purchased an Athlon 64 X2 5200 Brisbane 2.7GHz for it. Mind you, that's not the Athlon X2; that's the Athlon 64 X2, which is 1-2 generations behind the one you're probably thinking of.

I also had another computer with an Intel Core 2 Duo 2.3GHz which ran a bit faster, but not by much.

The only upgrade path without a new motherboard for the Core 2 system was to get a Core 2 quad, which is still ridiculously expensive ($180+) for its age and not competitive.

On the other hand, AMD designed their architecture so that the Athlon 64 x2, Phenom, Phenom II could all use the same motherboard. That's at least three generations of chips. I got a 3.4GHz Phenom II quad-core black edition at newegg for $100, dropped it into my 3-year-old motherboard, overclocked it to 3.7GHz and have been very happy with my "bang for the buck." Sure, it isn't competitive cycle-per-cycle, but as a consumer, I mostly want it for video work (which it does great), photography work (which it does great), and gaming (which it does great).

That's what I would call bang for the buck, personally.

Re:Compared to Intel's offerings, how do these com (1)

Ritz_Just_Ritz (883997) | about a year and a half ago | (#41880871)

I recently bought one of their non-server Trinity APU processors specifically to be used for my HTPC. The power footprint is low enough that it fits in a shoebox sized enclosure and the integrated Radeon graphics mops the floor with anything from Sandy/IvyBridge and all at a lower cost. I use it to crank out 1080p video, send audio to my AVR and the kids use it to play games with a fair amount of eye candy turned on and at a playable resolution and frame rate.

Would I buy any current AMD processors for a server farm? Probably not.

Best,

Re:Compared to Intel's offerings, how do these com (2)

dshk (838175) | about a year and a half ago | (#41881275)

Would I buy any current AMD processors for a server farm? Probably not.

The predecessor of this series, the Opteron 6200, is used in quite a few supercomputers. Actually, I counted 21 Opteron-based systems in the latest supercomputer top 100 list. [top500.org]

Is this going to save AMD ? (0)

vikingpower (768921) | about a year and a half ago | (#41879103)

Will it contribute to its survival ? Or is this one in a long series of convulsions accompanying AMD's bleeding to death ?

Re:Is this going to save AMD ? (4, Funny)

bug1 (96678) | about a year and a half ago | (#41879113)

AMD have been dying for 20 years now; it's just fashionable for you followers to talk about it more in recent months.

They will probably die the year of the Linux Desktop.

Re:Is this going to save AMD ? (0)

Anonymous Coward | about a year and a half ago | (#41879189)

The way win8 is going, the year of the Linux Desktop is approaching..

Re:Is this going to save AMD ? (1, Insightful)

jiteo (964572) | about a year and a half ago | (#41879201)

AMD have been dying for 20 years now.

Except they haven't. They've been dying since Intel started their tick-tock strategy with the Core series, and AMD hasn't been able to keep up with Intel's gains in performance.

Re:Is this going to save AMD ? (4, Insightful)

serviscope_minor (664417) | about a year and a half ago | (#41879473)

They've been dying since Intel... bribed vendors not to use Opteron processors, so that even when AMD was clearly superior, they could never get ahead of Intel. That of course meant they never had the revenue to capitalise on their very substantial advantage. Intel, of course, got away with paying only $1bn, substantially cheaper than it would have been not to engage in illegal business practices.

FTFY.

Re:Is this going to save AMD ? (2)

greg1104 (461138) | about a year and a half ago | (#41879513)

The official tick-tock strategy goes back to the 2006 Core branding change. But Intel had been using two design teams to research and release alternate forms of optimization for a long time before that. In the mid-90s you could make out that one team focused on new architecture-style features (386, Pentium) while the other was more about performance tweaking (486, Pentium Pro). The Itanium work spawned a new team altogether. The Core architecture was birthed from realizing that two of those paths--the ones that led to the terrible Pentium 4 and Itanium products--had completely botched things. They pulled out of that tailspin by using the other active architecture at the time, the one that went from Pentium 3 to Celeron to Pentium M, as the basis for the new Core.

In some ways it's kind of a shame that it happened that way, because that was the last gasp for interesting new processor features from that style of design. We used to get major architecture changes: 8 to 16 to 32 to 64 bits, extra processing styles going from 387 to MMX to SSE. Now we get tick-tock, shrink and optimize. It's pretty boring.

Re:Is this going to save AMD ? (2)

cheesybagel (670288) | about a year and a half ago | (#41882281)

Geez man. The 486 and Pentium Pro were not performance tweaks. The Pentium Pro for example was a wholly new out-of-order design.

Re:Is this going to save AMD ? (1)

bug1 (96678) | about a year and a half ago | (#41879521)

AMD have been dying for 20 years now.

Except they haven't. They've been dying since Intel started their tick-tock strategy with the Core series, and AMD hasn't been able to keep up with Intel's gains in performance.

Take a look at their historical share price; it's all over the place. They have had lots of ups and downs.

Intel tried to kill them early by legal means.

They always had problems with margins due to being in the bottom end of the market.

I think financially they have had a few big years of losses and required extra external funding just to survive.

Re:Is this going to save AMD ? (0)

Anonymous Coward | about a year and a half ago | (#41879221)

Nah... they were KILLING it about 6-7 years ago. They looked poised to actually take over the PC market...

Then Intel partnered with Apple...

You think I'm joking... go look up the timeframe in which this all happened, and what the market value and shares of each company were at the time.

Re:Is this going to save AMD ? (4, Informative)

greg1104 (461138) | about a year and a half ago | (#41879375)

AMD had one period in the limelight. When the first good 64-bit x86 systems were Opterons [wikipedia.org] launched in 2003, they had a really competitive product for servers. Intel was busy jerking off with Itanium at that time, was oblivious to power consumption (the Pentium 4 was the desktop processor available), and just generally executing terribly. It was like a textbook classic case where the near monopoly market leader was fat and dumb, and got its ass handed to it by its scrappy competitor.

It took Intel until 2006 to release its first Core microarchitecture chips and start acting right again. By 2009 they had jumped back ahead of AMD in every market again, with the Nehalem [wikipedia.org] server chips. And that was it; Intel has stayed one to two generations ahead of AMD ever since.

Re:Is this going to save AMD ? (2)

drinkypoo (153816) | about a year and a half ago | (#41879653)

AMD had one period in the limelight. When the first good 64-bit x86 systems were Opterons launched in 2003, they had a really competitive product for servers

You're right about the one period in the limelight, but it began when they first released an Athlon processor, and it ended when Intel finally got control of their TDP, and thus it's substantially longer than you suggest, though the ending is the same.

Until recently the primary arguments for AMD were lower power consumption and better performance per dollar. Now neither is true, and the only argument is that it's cheaper. But if that's the case, and you do a little math, you can see that the argument is that you get less for less; and if you're going to do that, why not buy something with an ARM processor, and no ATI graphics drivers?

Re:Is this going to save AMD ? (4, Interesting)

greg1104 (461138) | about a year and a half ago | (#41879907)

From 1999 to 2003, AMD's Athlon was a moderately superior CPU to Intel's Pentium III competitor. For most of that time I felt that success was limited by AMD's lack of high-quality motherboards to put the CPUs in. My memory of the period matches the early history of the Athlon at cpu-info [cpu-info.com]. You can't really evaluate CPUs without the context of the motherboard and system they're placed into. And the Athlon units, as integrated into the systems they ran on, were still a hard sell relative to the slightly slower but more reliable Intel options. That situation didn't really change until the nForce2 [wikipedia.org] chipset was released, and now we're up to the middle of 2002 already.

I highlighted the 2003 to 2006 period instead because it was there AMD was indisputably in the lead. 64 bit support, nForce3 with onboard gigabit as the motherboard, the whole package was viable and the obvious market leader if you wanted high performance.

Re:Is this going to save AMD ? (1)

drinkypoo (153816) | about a year and a half ago | (#41881031)

You can't really evaluate CPUs without the context of the motherboard and system they're placed into.

That is so true, and you have to look at the chipset, too. Whether AMD or Intel, if you stuck with the vendor's chipset you were usually assured of reliable operation... but Intel's sucked down more power, over and over again, to the point that AMD's desktop stuff was better than Intel's mobile stuff for a while, and it was cheaper, and not that much slower. All that was around the Athlon 64 period, though. The original Athlon was just a little slower at most tasks, a bit slower at integer, but notably faster at FP, which had just become important due to 3D gaming becoming a major thing on the PC at around the same time, having originally come into flower in the Pentium era.

Re:Is this going to save AMD ? (1)

cheesybagel (670288) | about a year and a half ago | (#41882375)

Intel wasn't free of chipset issues at that time either. Remember the Rambus DRAM chipsets full of bugs, like the i820? AMD's own chipsets were pretty stable, even if they had somewhat obsolete feature sets back then. VIA's chipsets had less obsolete feature sets but were chock full of bugs. NVIDIA's nForce had a great feature set and lots of bugs. I actually got one of those; with the right drivers they worked pretty well...

Re:Is this going to save AMD ? (0)

Anonymous Coward | about a year and a half ago | (#41882899)

Ah, VIA. I remember the days when our drivers were full of code along the lines of:

if ( is_via_chipset() )
{
      disable_all_hardware_performance_features_because_they_dont_work();
}

As for RAMBUS, if I remember correctly Intel dumped those chipsets pretty quickly once it proved to be a braindead technology.

Re:Is this going to save AMD ? (1)

ifiwereasculptor (1870574) | about a year and a half ago | (#41880513)

Well, better performance per dollar isn't necessarily false nowadays. The Athlon lingered too long and Bulldozer was a failure, but the revised Piledrivers are actually doing OK, at least on the lower end. An A10-5800K is pretty much on par with an i3-3220 on heavier workloads, and you get a much better IGP. For about the same as an i3-3240 you get an FX-6300, which is way better for any kind of multithreaded work.

I don't think they'll be able to hold on much longer, since Haswell at 14nm vs Steamroller at 28nm will probably be embarrassing to watch, but for this generation, thanks to Intel not having gained a lot on the performance front by moving to 22nm, they managed to catch up, if only on price/performance.

Re:Is this going to save AMD ? (1)

fast turtle (1118037) | about a year and a half ago | (#41881645)

Yea, the AMD IGP is better and sure as hell offers better gaming performance, but - and this is the big fucking but - the i3-2120 offers far better performance in the areas that count for business/enterprise users. The CPU has enough performance that even with the crappy Intel IGP, they still do what's needed quite well, while offering a much lower TDP - and even that's becoming important to everyone.

I've been looking into this for a while and working up a build for the new year based on an Intel option. Much of the reason is that I'll be using a Xeon with the Intel HD 4000 IGP (good enough), as the TDP is 45W for the CPU while Passmark results show performance over 8000. The next best AMD offering is a meager 6000 at a 130W TDP for pretty much the same price. I hate to say it, but Intel is beating AMD in multiple areas, and performance per watt is just one of them.

Re:Is this going to save AMD ? (1)

dshk (838175) | about a year and a half ago | (#41883045)

the i3-2120 offers far better performance in the areas that count for business/enterprise users. The CPU has enough performance that even with the crappy Intel IGP, they still do what's needed quite well, while offering a much lower TDP - and even that's becoming important to everyone.

Are you talking about business desktops? They are idling all the time; you have to look at idle power, not TDP.

Too bad there is per core licensing (4, Interesting)

alen (225700) | about a year and a half ago | (#41879155)

Last year we bought some servers with 6-core CPUs. Then SQL 2012 came out with per-core licensing. I did some quick math, and it's cheaper to buy new servers with 4-core CPUs than to license SQL 2012 for 12 cores per server.
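For a sense of the arithmetic (illustrative only: SQL Server 2012 Enterprise listed at roughly $6,900 per core at launch, with a four-core minimum per socket; real agreement pricing varies):

\begin{align*}
\text{2 sockets} \times \text{6 cores} &: 12 \times \$6{,}900 \approx \$82{,}800 \\
\text{2 sockets} \times \text{4 cores} &: \;8 \times \$6{,}900 \approx \$55{,}200
\end{align*}

The difference, roughly $27,000 per server, easily covers a new two-socket 4-core box.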

Re:Too bad there is per core licensing (0)

Anonymous Coward | about a year and a half ago | (#41879321)

Usually you can disable all but N cores to save on licensing costs, which additionally lets you turn on more cores as needed and as you can afford them.
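On Linux, one knob for this is sysfs CPU hotplug; a minimal sketch (mine, with hypothetical core counts; needs root, and whether offlined cores count against a license depends on the vendor's fine print):

/* Take CPUs offline by writing 0 to /sys/devices/system/cpu/cpuN/online. */
#include <stdio.h>

static int set_cpu_online(int cpu, int online) {
    char path[64];
    snprintf(path, sizeof(path), "/sys/devices/system/cpu/cpu%d/online", cpu);
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;            /* note: cpu0 usually has no 'online' file */
    fprintf(f, "%d", online);
    return fclose(f);
}

int main(void) {
    /* keep cores 0-3, offline cores 4-11 on a hypothetical 12-core box */
    for (int cpu = 4; cpu < 12; cpu++)
        if (set_cpu_online(cpu, 0) != 0)
            fprintf(stderr, "failed to offline cpu%d\n", cpu);
    return 0;
}

(BIOS-level core disabling is the other common route; whether either one reduces what you owe is a licensing question, not a technical one.)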

Re:Too bad there is per core licensing (1)

Anonymous Coward | about a year and a half ago | (#41879381)

Last year we bought some servers with 6-core CPUs. Then SQL 2012 came out with per-core licensing. I did some quick math, and it's cheaper to buy new servers with 4-core CPUs than to license SQL 2012 for 12 cores per server.

PostgreSQL? http://www.postgresql.org. Last I heard it scaled linearly to 64 cores.

Re:Too bad there is per core licensing (5, Insightful)

greg1104 (461138) | about a year and a half ago | (#41879633)

PostgreSQL versions from 8.3 to 9.1 did pretty well using up to 16 cores. 9.2 was the version that targeted scalability up to 64 cores, released this September [postgresql.org].

The licensing model of commercial databases is one part of why PostgreSQL is becoming more viable even for traditional "enterprise" markets. PostgreSQL doesn't use processors quite as efficiently as its commercial competitors; the PostgreSQL code is optimized for clarity, portability, and extensibility as well as performance. Commercial databases rarely include its level of extensibility, which is why PostGIS, as an add-on to the database, is doing well against competitors like Oracle Spatial. And commercial vendors are often willing to do terrible things to the clarity of their source code in order to chase higher benchmark results. Those hacks work, but they cost them in bugs and long-term maintainability.

But if the software license scales per-core, nowadays that means you've lost Moore's Law as your cost-savings buddy. What I remind people who aren't happy with PostgreSQL's per-core performance is that adding more cores to hardware is pretty cheap now. Use the software license savings to buy a system with twice as many cores, and PostgreSQL's competitive situation against commercial products looks a lot better.

Re:Too bad there is per core licensing (1)

alen (225700) | about a year and a half ago | (#41879913)

DR is why we are on SQL Server.

SQL Server, Oracle and IBM have very nice DR capabilities built into the system.

Re:Too bad there is per core licensing (2)

h4rr4r (612664) | about a year and a half ago | (#41880093)

What DR is postgres missing?

I really want to know since I use it all the time. Streaming replication works great.
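For reference, the moving parts are small; a sketch of a 9.x-era streaming replication setup (parameter names as of PostgreSQL 9.2, with hypothetical hosts and users -- adjust to taste):

# primary: postgresql.conf
wal_level = hot_standby
max_wal_senders = 3
wal_keep_segments = 128

# primary: pg_hba.conf -- allow the standby to connect for replication
host  replication  repuser  192.168.1.0/24  md5

# standby: postgresql.conf -- allow read-only queries while replaying
hot_standby = on

# standby: recovery.conf (after seeding the data directory with pg_basebackup)
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com user=repuser'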

Re:Too bad there is per core licensing (1)

greg1104 (461138) | about a year and a half ago | (#41880231)

The main complaints I get are that the commercial databases provide DR with GUI- or web-based management tools ready to go. PostgreSQL provides APIs for building such things, but they only seem elegant if you agree that shell scripting is a good solution to some problems.

Re:Too bad there is per core licensing (1)

h4rr4r (612664) | about a year and a half ago | (#41880405)

Ah, so not a lack of functionality, just not shiny enough for some.

I am a big boy; I can do without the handholding shiny.

Re:Too bad there is per core licensing (0)

Anonymous Coward | about a year and a half ago | (#41880435)

http://phppgadmin.sourceforge.net/doku.php?id=start

Re:Too bad there is per core licensing (3, Interesting)

greg1104 (461138) | about a year and a half ago | (#41880107)

Some of the other developers at my company just recently released Barman [pgbarman.org] for PostgreSQL, obviously inspired by Oracle's RMAN DR capabilities. A fair number of companies were already doing work like that using PostgreSQL's DR APIs, but none of them were willing to release the result into open-source land until that one came out. We'll see if more pop out now that we've eroded the value of those private tools, or if there's a push to integrate more of this sort of thing back into the core database.

As a matter of policy, to keep the database source code complexity down, features that live happily outside of core PostgreSQL are not integrated into it. One of the ideas that's challenging to get across at some companies is just how many of a database's features really need to be officially part of it: part of adopting open-source solutions is expecting that you'll deploy a stack of programs, not just one giant one from a single provider.

Re:Too bad there is per core licensing (0)

Anonymous Coward | about a year and a half ago | (#41879619)

Per-core licensing with unlimited instances. Not that many people will be running lots of instances except in the case of VMs. Looking at Netcraft, Apache and nginx have gone down a bit and IIS has gone up over the past year. Maybe demand for MS SQL has gone up, and hosting companies are driving demand for stacking many instances of SQL.

Re:Too bad there is per core licensing (2)

dbIII (701233) | about a year and a half ago | (#41880595)

The opposite applied to me with geophysical software, when a cluster licence model was killed off and replaced by per-host licensing. I ended up with a few dozen now mostly idle 8-core machines replaced by a few 48- and 24-core machines that were cheaper in total than converting the licences.

How does it benchmark against the 'fastest' (-1)

Anonymous Coward | about a year and a half ago | (#41879261)

How does it benchmark against the fastest IBM processor, this 'fast' processor they sell for 100k-500k a time?:

http://hardware.slashdot.org/story/12/08/28/1457211/ibm-mainframe-running-worlds-fastest-commercial-processor

I'm guessing it is 2 orders of magnitude (i.e. x100) faster than the IBM chip, because IBM avoids benchmarking its chips and they're not very good.

No AMD, 8% is not enough to forego 20 core... (0)

Anonymous Coward | about a year and a half ago | (#41879271)

AMD states that "an 8% increase is enough to no longer need to widen their product line to 20-core CPUs". Guess what, AMD: when your CPUs perform like shit compared to other server CPUs, an 8% increase of shit means nothing. Another 4 cores could have compensated for the terrible performance your CPU provides in comparison to Intel's E5 line, particularly for highly concurrent programs. An 8% increase only warrants that our company purchase the new 4-socket Intel motherboards. Yeah, I know: "But hey, Intel CPUs cost much more... blah blah blah." If you're living in your mother's basement, then certainly Intel's CPUs cost too much for your World of Warcraft 10-hour/day "work". But when you're actually relying on CPU performance, as we are for scientific, financial, and industrial simulations and applications, then Intel's nearly 50% superior performance more than pays for the cost of the CPUs within a week.

Turbo (1)

Hsien-Ko (1090623) | about a year and a half ago | (#41879657)

If this is not about a button on my case, then I don't care what that old buzzword is regurgitated into.

Re:Turbo (1)

greg1104 (461138) | about a year and a half ago | (#41880191)

They re-used "HyperThreading" as the branding for something new, too, despite the name being associated with nothing but bad experiences the first time around. Anyway, Intel's Turbo Boost [wikipedia.org] is a great feature for making single-task systems faster, ones that weren't benefiting from having more cores around. AMD's Turbo CORE [amd.com] is obviously inspired by that, but hasn't been quite as good so far. This latest generation of chips from AMD closes more of the gap with Intel in that area, though.

Re:Turbo (1)

cheesybagel (670288) | about a year and a half ago | (#41882475)

IBM's POWER CPU line has had a Turbo mode for far longer than Intel. They have also had SMT for a long time, and they actually get it to perform well, unlike Intel.

AMD competitive CPU's - 2 generations for sure (1)

Anonymous Coward | about a year and a half ago | (#41879979)

Rubbish about AMD having only one competitive period: the T-Bird Athlons offered much better bang for the buck and performance than Netburst-based P4s.

Anyone who followed computer architecture knew this. Athlon XPs were generally considered better all-around chips than Netburst-based P4s until the P4s hit 3.2+ GHz. The only way Intel could even stay competitive with the T-Bird Athlons and early Athlon XPs was stuff like the Extreme Edition.

Likewise, the early dual-core Netburst-based P4s weren't great chips; they were massive, hot power-hogs. First-generation Opterons were a much better choice for many server workloads. The general advice now is to look at your workload. If your choice is an Opteron versus a Xeon, look at the system cost along with your workload; if the Opteron is cheaper, then you can buy more memory (if that helps your workload). HyperTransport was significantly better for multiprocessing than the Netburst-based Xeons' bus.

For single-threaded integer+FPU performance, Intel is definitely better than AMD right now.

If you're running prebuilt binaries that aren't heavily threaded, then Intel Xeons are your best bet, but don't forget to approach this as a systems engineering problem. If your workload loves memory, you may well be better off with an Opteron with lower single-threaded performance but a lower cost, and throw the cost delta at more memory.

If cost is no object, then your best bet is a Xeon. As much as people love to beat up on AMD, we have them to thank for decent x64 performance on the desktop. If they hadn't made a killer chip with the Opteron, Intel would have tried to push us all to Itanium, which had great theoretical performance, and great FP performance for apps the compiler could schedule decently, but sucked eggs running integer apps with any decent number of branches... Look at the SPEC CPU Perl benchmark results for an idea of how much Itanium sucked at that stuff.

AMD Marketing has a Stargate Reference (1)

CajunArson (465943) | about a year and a half ago | (#41880157)

"With this round of CPUs, AMD has split its clock speed Turbo range into 'Max' and 'Max All Cores.' "

Remember Episode 200 of Stargate and the "Set Weapons to Maximum!" line?

Re:AMD Marketing has a Stargate Reference (0)

Anonymous Coward | about a year and a half ago | (#41881817)

The next logical step would be the old Star Trek "set phasers to overload" trick.

Short-term vs long-term investment (0)

Anonymous Coward | about a year and a half ago | (#41880703)

From a CPU design standpoint, Intel may be winning at the moment, but AMD has been looking to the future with their processor designs.

Intel has been pushing relatively the same design methodology, with incremental improvements, since Core 2. AMD tore down their designs and started much closer to "from scratch". They didn't expect to be the market leader in performance with Bulldozer, and they refined it a good deal with Piledriver. This "half-core" strategy they've built on is actually a very good idea; it hasn't been done quite this way before, and they're still learning lessons. One of its best advantages, though, is the ability to put more cores onto a single chip.

Piledriver hit the performance improvement AMD claimed it would when Bulldozer was released. People compare it roughly to Intel i5s now. If they keep to their performance-increase roadmap, there's no reason to think its successor won't be comparable to i7s, and after that they're in step with i9s, or whatever Intel names their Core line next, if not ahead.

As far as FP performance goes, AMD's higher-end CPU-integrated Radeons put Intel's offerings to shame, and give FP performance to the processor that the general-purpose cores couldn't hope to match (if used properly).

Intel's policy of keeping prices high on older processors and changing sockets frequently between generations doesn't help their total cost of ownership either. You also have to include upgrade paths for a few years, motherboard/RAM costs, etc.

Re:Short-term vs long-term investment (0)

Anonymous Coward | about a year and a half ago | (#41881871)

lol

Re:Short-term vs long-term investment (1)

fast turtle (1118037) | about a year and a half ago | (#41882175)

As far as FP performance goes, AMD's higher-end CPU-integrated Radeons put Intel's offerings to shame, and give FP performance to the processor that the general-purpose cores couldn't hope to match (if used properly).

We'd already been hearing about CUDA from Nvidia when AMD announced their Fusion design (the first stage was Llano), and the first thing I thought at the time was that AMD was going to use the GPU cores to replace the shitty FPU performance on their chips. I've posted on this numerous times, including about the Bulldozer design. What I expect is that by the 4th generation, the new CPUs will look like a single core to the OS instead of the multiple cores we're used to seeing, but threading performance is going to drastically improve, as they're already beginning to implement the multi-thread design efforts. Simply put, AMD is completely redefining the CPU while Intel works on improving what's already there. Who wins in the long run will be interesting, though I suspect that if AMD can turn around their profit issues, they'll be able to improve their chip design to run as well as Intel's on a 45nm basis and not have to chase the die shrinks Intel is pushing.

Keep in mind that these die shrinks are beginning to reach the physical limits of the silicon itself, and Intel is researching new wafer processes that are going to cost a lot of money. If AMD can improve or match Intel's performance/TDP factors while using a proven 45nm process, they will be in a much better financial position than Intel, as they'll be able to keep using existing fabs that are already paid for, and thus offer their chips for less money.
