
AMD Confirms Kaveri APU Is a 512-GPU Core Integrated Processor

timothy posted about 10 months ago | from the can't-even-count-that-high dept.


MojoKid writes "At APU13 today, AMD announced a full suite of new products and development tools as part of its push to improve HSA development. One of the most significant announcements to come out of the sessions today, albeit in a tacit, indirect fashion, is that Kaveri is going to pack a full 512 GPU cores. There's not much new to see on the CPU side of things — like Richland/Trinity, Steamroller is a pair of CPU modules with two cores per module. AMD also isn't talking about clock speeds yet, but the estimated 862 GFLOPS that the company is claiming for Kaveri points to GPU clock speeds between 700 and 800MHz. With 512 cores, Kaveri picks up a 33% boost over its predecessors, but memory bandwidth will be essential for the GPU to reach peak performance. For a performance comparison, AMD showed Kaveri up against an Intel 4770K running a low-end GeForce GT 630. In the intro scene to BF4's single-player campaign (1920x1080, Medium Details), the AMD Kaveri system (with no discrete GPU) consistently pushed frame rates in the 28-40 FPS range. The Intel system, in contrast, couldn't manage 15 FPS. Performance on that system was solidly in the 12-14 FPS range — meaning AMD is pulling 2x the frame rate, if not more."
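
The 700-800MHz estimate can be sanity-checked with back-of-envelope arithmetic. Below is a minimal sketch in C, assuming each GCN shader core retires one fused multiply-add (2 FLOPs) per clock and assuming a rough CPU contribution of about 100 GFLOPS, a figure AMD has not broken out:

    #include <stdio.h>

    int main(void) {
        const double gpu_cores = 512.0;
        const double flops_per_core_clock = 2.0;  /* one FMA = 2 FLOPs (assumed) */
        const double cpu_gflops_guess = 100.0;    /* hypothetical CPU share */

        for (double mhz = 650.0; mhz <= 850.0; mhz += 50.0) {
            double gpu_gflops = gpu_cores * flops_per_core_clock * mhz / 1000.0;
            printf("GPU @ %3.0f MHz: %5.0f GFLOPS shader + ~%3.0f CPU = ~%5.0f total\n",
                   mhz, gpu_gflops, cpu_gflops_guess, gpu_gflops + cpu_gflops_guess);
        }
        return 0;
    }

With those assumptions, a shader clock in the low-to-mid 700MHz range lands close to the claimed 862 GFLOPS, consistent with the summary's estimate.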


AMD RUELZ !! (-1)

Anonymous Coward | about 10 months ago | (#45403587)

About time !!

Slashdot FINALLY gets nerd news today !!

Which GT630? (1)

jandrese (485) | about 10 months ago | (#45403621)

nVidia has at least three [geforce.com] versions of the GT630, each fairly different from one another. None of these would be an amazing accomplishment to beat, although they are more powerful than Intel's normal integrated offerings.

Re:Which GT630? (1)

timeOday (582209) | about 10 months ago | (#45403837)

The Intel Iris Pro 5200 is 28% faster than the Geforce GT 630 on the PassMark G3D [videocardbenchmark.net] benchmark. (I don't know how much the variants you linked differ in performance?)

Re:Which GT630? (1)

Anonymous Coward | about 10 months ago | (#45404035)

The Intel Iris Pro 5200 is 28% faster than the Geforce GT 630 on the PassMark G3D benchmark. (I don't know how much the variants you linked differ in performance?)

Except the Iris Pro variant is found in shiny i7 Haswells that start at about $470 in bulk for the cheapest part. In which case the price/performance ratio is clearly against Intel.

Re:Which GT630? (-1)

Anonymous Coward | about 10 months ago | (#45404151)

Except that the exact test kit was an i7 4770k, which *has* an Iris Pro 5200. They deliberately gimped the intel setup by using a slower GPU than the integrated one.

Re:Which GT630? (4, Informative)

Guspaz (556486) | about 10 months ago | (#45404321)

The 4770K has an Intel HD 4600, not an Iris Pro 5200. The nVidia GPU is faster than the 4600 in the CPU tested.

The only 4770 series chip to feature Iris Pro is the 4770R.

Reference: http://ark.intel.com/products/family/75023 [intel.com]

Re:Which GT630? (0)

Anonymous Coward | about 10 months ago | (#45406505)

OK, so what you're saying is that an AMD CPU/GPU that yet has to hit the market is slightly faster than an Intel one that is currently available, and slightly slower than another Intel one that is currently available. Gotcha!

Re:Which GT630? (0)

Anonymous Coward | about 10 months ago | (#45406811)

for a much much cheaper price.

Re: Which GT630? (0)

Anonymous Coward | about 10 months ago | (#45406977)

No. They're saying the NVIDIA card tested fits that performance criteria and the new unreleased AMD chip doubles the NVIDIA performance.

Re: Which GT630? (0)

Anonymous Coward | about 10 months ago | (#45404329)

Last time I checked, the 4770K doesn't have Iris Pro.

Re:Which GT630? (1)

Anonymous Coward | about 10 months ago | (#45403847)

Actually, the GT 630 scores 720 on passmark, while the Iris Pro 5200 scores 922. So not only did AMD choose a remarkably shitty graphics card to test against, they also chose one that's slower even than the integrated chip on the Intel CPU.

Re:Which GT630? (1)

Rockoon (1252108) | about 10 months ago | (#45404543)

AMD's last-generation APU scores 865 on that benchmark, so I'm not sure what point you are trying to make here.

I expect the top-end Kaveri to score ~1150 on PassMark's G3D, and it won't cost $450 or $650 like the two Intel chips that actually have the 128 MB of L4 cache that distinguishes the Iris Pro 5200 from the Intel HD 4600 (which only scores 598 on G3D).

Re: Which GT630? (0)

Anonymous Coward | about 10 months ago | (#45407011)

The Intel chip tested doesn't have Iris Pro 5200 and neither does any Intel chip that uses a solder free socket. The crappy NVIDIA card tested is still marginally quicker than the Intel chip tested.

catch me up now someone? (0)

Anonymous Coward | about 10 months ago | (#45403671)

Not paid any attention for 5+ years. So this is what, a GPU on a CPU that can play games?

Re:catch me up now someone? (1)

cheesybagel (670288) | about 10 months ago | (#45403709)

AMD has had those for a long time ever since they started their APU family post ATI acquisition.

Re:catch me up now someone? (4, Informative)

bobbied (2522392) | about 10 months ago | (#45403827)

Yes. One part with a middle of the road CPU and a middle of the road GPU.

The one advantage I see technically to this approach is you can get data from the CPU to the GPU without having to touch a trace on the motherboard. The overall complexity of the system goes down and the CPU-to-GPU performance goes up.

The disadvantages are many. More heat/power dissipation on the one part means it will run hotter (not that AMD doesn't do that anyway). Makes you pay for the GPU, even if you don't use/want it. Higher latency between the memory and the GPU, which is KEY to GPU performance. I'm sure there's more...

All this aside. Bully for AMD. These are great devices for low cost systems with reasonable performance.

Full Disclosure: I have a current low end AMD/GPU based system that I really like. It was CHEAP, and performs well enough for what I do.

Re:catch me up now someone? (2)

jacksonic (914470) | about 10 months ago | (#45403881)

The best benefit I find is the need for only a single heatsink. One giant radiator and a single large, slow case fan, and you have an extremely quiet system.

Re:catch me up now someone? (2)

0123456 (636235) | about 10 months ago | (#45404267)

The best benefit I find is the need for only a single heatsink. One giant radiator and a single large, slow case fan, and you have an extremely quiet system.

I have an i7 and GTX 660 in my gaming PC, and it's an extremely quiet system with a heck of a lot more performance than this thing.

Re:catch me up now someone? (1)

LordLimecat (1103839) | about 10 months ago | (#45404349)

It also probably costs as much for the CPU and GPU as it would for the entire AMD-based system.

Re:catch me up now someone? (4, Informative)

Rockoon (1252108) | about 10 months ago | (#45404763)

It also probably costs as much for the CPU and GPU as it would for the entire AMD-based system.

The highest-end AMD APU system you can currently build includes an A10-6800K, which is a whopping $140 for the CPU+GPU. Include the cost of RAM for the GPU so that it can be comparable with a discrete GPU setup: $22 to compensate for dedicating 2GB of DDR3-1866 to the GPU...

$140 + $22 = $166.

His GTX 660 is $190. His i7 is no less than $290 based on today's Newegg prices.

$190 + $290 = $480

So he is $314 in the hole. Clearly he doesn't want to talk about semantics such as cost.

Re:catch me up now someone? (0)

0123456 (636235) | about 10 months ago | (#45405641)

So he is $314 in the hole. Clearly he doesnt want to talk about semantics such as cost.

Clearly you didn't bother to read the thread, or you'd have noticed that it wasn't about cost.

Re:catch me up now someone? (2)

Rockoon (1252108) | about 10 months ago | (#45406119)

Clearly you didn't bother to read the thread, or you'd have noticed that it wasn't about cost.

The person I replied to specifically talked about cost, and in fact it's the only thing they talked about.

Maybe YOU should read the thread.

Re:catch me up now someone? (1)

0123456 (636235) | about 10 months ago | (#45405567)

It also probably costs as much for the CPU and GPU as it would for the entire AMD-based system.

If you'd read the thread, you'd see that the GP didn't mention cost at all, only noise.

Re:catch me up now someone? (0)

Anonymous Coward | about 10 months ago | (#45405633)

Yeah and you very likely paid out the dickhole for it. Good for you?

Re:catch me up now someone? (1)

0123456 (636235) | about 10 months ago | (#45405671)

Hey, anyone who can't read a thread before jerking their knee.

Re:catch me up now someone? (3, Insightful)

Dahamma (304068) | about 10 months ago | (#45404059)

You can call the advantage "complexity", but in practice that really means price, heat, and size, all of which are critical to laptops. Additionally, putting them on the same die makes it easier to have unified memory, which can further simplify things (and be as fast or faster in some applications for less money, if designed correctly - for example, compute tasks that touch a lot of the same data on the CPU and GPU, like video encoding, etc).

And it most definitely does not have to "run hotter" than *two* discrete parts (and is certainly easier to cool, anyway). Computers are *always* using a GPU these days, modern OSes do all sorts of 3D effects (even some mobile ones). If the GPU (and software/driver) is designed well, it would be a lot simpler, cheaper, and possibly even more power efficient than the dual-graphics design of Macbook Pros and some Wintel laptops...

For a desktop, this isn't anything all that exciting (except for those who want cheap PCs with reasonable performance). For a laptop/embedded system, it's a really interesting chip, even if it's not the cheapest.

Re:catch me up now someone? (1)

Anonymous Coward | about 10 months ago | (#45406125)

And it most definitely does not have to "run hotter" than *two* discrete parts

No, but it concentrates all the heat into the space of one of those discrete parts, so the cooler needs to be more efficient at dissipating heat from that tiny little surface.

Re:catch me up now someone? (3, Informative)

LWATCDR (28044) | about 10 months ago | (#45405145)

Actually this is a bigger deal than you think. I remember when you had to pay extra to get a floating-point processor. Most software worked really hard to use integers whenever it could, since it could not depend on an FPU being present in most systems.
By having a GPU as part of the CPU, more software will start to use GPU computing to speed up things like transcoding and even spreadsheets http://slashdot.org/story/13/07/03/165252/libreoffice-calc-set-to-get-gpu-powered-boost-from-amd [slashdot.org] .
We all know that GPUs can speed up a lot of operations but developers don't want to put in the work because not everyone has them.

Re:catch me up now someone? (2)

mrchaotica (681592) | about 10 months ago | (#45405693)

We all know that GPUs can speed up a lot of operations but developers don't want to put in the work because not everyone has them.

That, and the fact that programming for GPUs now is analogous to having to choose between different processor- and language-specific floating-point libraries.

Re:catch me up now someone? (1)

AcidPenguin9873 (911493) | about 10 months ago | (#45405361)

Higher latency between the memory and the GPU, which is KEY to GPU performance.

Latency doesn't matter very much for GPU. A little, but not much. Bandwidth is what matters for GPU. (Latency matters for CPU tasks.) As long as the GPU has enough buffer depth to cover the latency to and from memory (which it certainly does), the memory bandwidth is what will keep the GPU pipelines completely full.
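
A quick way to see why buffer depth covers latency (a sketch with illustrative numbers, not Kaveri specs): by Little's law, the data that must be kept in flight to hide memory latency is just bandwidth times latency, which is tiny compared to what a GPU's thousands of threads naturally keep queued.

    #include <stdio.h>

    int main(void) {
        const double bandwidth_gbs = 29.9;  /* assumed: dual-channel DDR3-1866 peak */
        const double latency_ns    = 100.0; /* assumed: rough DRAM round trip */

        /* GB/s multiplied by ns conveniently works out to plain bytes. */
        double bytes_in_flight = bandwidth_gbs * latency_ns;
        printf("~%.0f bytes (~%.0f 64-byte lines) outstanding saturate the bus\n",
               bytes_in_flight, bytes_in_flight / 64.0);
        return 0;
    }

A few dozen outstanding cache lines is trivial for a GPU to sustain, so the pipelines stay full as long as the bandwidth is there.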

Re:catch me up now someone? (1)

bobbied (2522392) | about 10 months ago | (#45406021)

I agree that it's about throughput and my "latency" term was a poor word choice. You've got to keep the GPUs busy, which means you have to get data into and out of memory as quickly as possible.

Re:catch me up now someone? (1)

Dizzer (251533) | about 10 months ago | (#45405493)

Makes you pay for the GPU, even if you don't use/want it.

This is like saying the disadvantage of apples is that they don't taste like oranges.

Re:catch me up now someone? (0)

Anonymous Coward | about 10 months ago | (#45405737)

FYI: latency on the GPU is not an issue. The whole GPU architecture is designed to tolerate latency. Bandwidth is the issue.

Cool (0, Troll)

0123456 (636235) | about 10 months ago | (#45403687)

So if I buy an AMD CPU, I can play games with low frame-rates at low detail settings (yeah, I know it says 'medium', but when almost all games now go at least up to 'ultra', 'medium' is the new 'low').

Or I could just buy a better CPU and a decent graphics card and play them properly.

Re:Cool (3, Insightful)

Fwipp (1473271) | about 10 months ago | (#45403771)

Yes, if you spend more money you can get more performance. The whole point of the APU is that you can spend less on a single piece of silicon than you would for "a better CPU and a decent graphics card."

Re:Cool (1)

spire3661 (1038968) | about 10 months ago | (#45403903)

The problem is that we want what the consoles have. A LOT more cores, GDDR5 and hUMA.

Re:Cool (0)

Anonymous Coward | about 10 months ago | (#45404507)

> The problem is that we want what the consoles have.

It depends on who you consider "we". If "we" are the mass market, then you're absolutely wrong.

Re:Cool (5, Insightful)

asliarun (636603) | about 10 months ago | (#45403865)

So if I buy an AMD CPU, I can play games with low frame-rates at low detail settings (yeah, I know it says 'medium', but when almost all games now go at least up to 'ultra', 'medium' is the new 'low').

Or I could just buy a better CPU and a decent graphics card and play them properly.

Yes, but could you do that in a compact HTPC cabinet (breadbox sized or smaller) and have your total system draw less than 100W or so?

I'm really excited by this news - because it allows traditional desktops to reinvent themselves.

Think Steam Machines, think HTPC that lets you do full HD and 4k in the future, think HTPC that also lets you do light-weight or mid-level gaming.
Think of a replacement for consoles - a computing device that gives you 90% of the convenience of a dedicated console, but gives you full freedom to download and play from the app store of your choice (Steam or anything else), gives you better control of your hardware, and lets you mix and match controllers (Steam Controller, keyboard and mouse, or something else that someone invents a year down the line).

I'm long on AMD for this reason. Maybe I'm a sucker. But there is a chance that desktops can find a place in your living room instead of your basement. And I'm quite excited about that.

Re:Cool (0)

Anonymous Coward | about 10 months ago | (#45404249)

Insensitive clod, most slashdotter's living rooms are already in the basement.

Re:Cool (1)

Nemyst (1383049) | about 10 months ago | (#45404291)

Why the needlessly stringent power draw? You can get passively cooled discrete GPUs or low-noise active cooling which would give you a major bump in performance. APUs won't be able to do 4K for a loooong time for anything but video.

Re:Cool (1)

Rockoon (1252108) | about 10 months ago | (#45404847)

APUs won't be able to do 4K for a loooong time for anything but video.

..if a "looooong time" means as soon as AMD and Intel support DDR4, which is in 2014... sure.

The main bottleneck for on-die GPUs is memory bandwidth. Intel "solved" the bandwidth problem in Iris Pro by including a massive L4 cache that cannot be manufactured cheaply. AMD hasn't solved the problem, but is far enough ahead in GPU design that the Iris Pro is only on par with AMD's 6800K APU.
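
For concreteness, the peak figures behind this argument can be worked out from the memory specs. A small sketch (theoretical maxima with assumed configurations; real-world throughput is lower):

    #include <stdio.h>

    /* Peak bandwidth of a dual-channel setup: 2 channels x 8 bytes x MT/s. */
    static double dual_channel_gbs(double megatransfers) {
        return 2.0 * 8.0 * megatransfers / 1000.0;
    }

    int main(void) {
        printf("Dual-channel DDR3-1866: ~%.1f GB/s\n", dual_channel_gbs(1866.0));
        printf("Dual-channel DDR4-2400: ~%.1f GB/s\n", dual_channel_gbs(2400.0));
        /* For comparison, the 128MB eDRAM on Iris Pro parts is commonly
           cited at roughly 50 GB/s in each direction. */
        return 0;
    }

The eDRAM buys Iris Pro bandwidth that a plain DDR3 interface cannot provide, at a die cost AMD is not paying.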

Re:Cool (1)

Kjella (173770) | about 10 months ago | (#45405123)

APUs won't be able to do 4K for a loooong time for anything but video.

..if a "looooong time" means as soon as AMD and Intel support DDR4, which is in 2014... sure.

I think by "anything but video" he was referring to gaming, even the 780 Ti and R9 290X struggle with 4K. What do you think DDR4 would change? As far as I know they already support the 4K resolution but it'll play like a slide show.

Re:Cool (1)

Rockoon (1252108) | about 10 months ago | (#45406249)

even the 780 Ti and R9 290X struggle with 4K.

30+ FPS in Far Cry 3 with Ultra settings at 4K resolution doesn't sound to me like "struggling"
49 FPS in Grid 2 with Ultra settings at 4K resolution doesn't sound to me like "struggling"
45+ FPS in Just Cause 2 with Very High settings at 4K resolution doesn't sound like "struggling"
30+ FPS in Total War: Rome II with HQ settings at 4K resolution doesn't sound like "struggling"

If you aren't right about existing hardware, how could you possibly be a good judge of future hardware? Are you unaware that people are already doing 1080p gaming on their APUs? My guess is that you were thinking that APUs "struggle" with 480p or some shit....

Re:Cool (1)

Lonewolf666 (259450) | about 10 months ago | (#45406243)

If expensive solutions count, why not a PC version of the PS4 board?
8GB would be enough for an average gaming PC, and with GDDR5 the bandwidth would be decent too :-)

Re:Cool (1)

asliarun (636603) | about 10 months ago | (#45405651)

Why the needlessly stringent power draw? You can get passively cooled discrete GPUs or low-noise active cooling which would give you a major bump in performance. APUs won't be able to do 4K for a loooong time for anything but video.

You make a valid point - and I don't know *all* the options that exist.
It would actually be a very interesting exercise to do this kind of comparison. Say, take some HTPC-like constraints such as space and heat, identify the options available - both CPU+discrete graphics and integrated CPU+GPU - and compare the options on price and performance.

Back to your point, it is not just power draw - space and cooling are also factors. A reasonably strong integrated CPU+GPU system lets you build a cabinet that can be very slim - say, something that resembles a compact Blu-ray player.

I would also imagine that an integrated solution like this will allow better airflow.

Finally there's price. Undoubtedly discrete graphics will always have the performance crown. However, if you think of Moore's law, CPUs have already reached the point of diminishing returns in terms of size of individual cores or even number of cores in a chip. From now on, IMHO, Moore's law will be all about integrating as much of the motherboard as possible into a single chip or package. And GPU is the most obvious starting point.

To put it another way, in terms of price-performance-heat, discrete GPUs will not be able to compete with a highly integrated solution - over time. They will keep getting pushed into smaller and smaller niches. An integrated solution will generally be cheaper and cooler for equivalent performance. It wasn't a viable solution in many cases until now only because the performance was sub-par - but Kaveri is the first viable chip that gives you enough horsepower to play last gen games at full HD with reasonable frame rates. In two years, Kaveri will be at 2 teraflops - the same as a PS4.

Re:Cool (1)

triffid_98 (899609) | about 10 months ago | (#45406399)

Based on the requirements, I'm guessing this is for HTPC purposes.

In that case the 'needlessly stringent' power draw is because

A. The case is probably tiny, it may not even have space for a discrete GPU. Less power input = less heat to dissipate.

B. For a completely fanless solution you want a picoPSU. These max out at around 160 watts.

C. Most people looking for a quiet HTPC couldn't care less if you can run Gears of Warcraft 5 on it.

Re:Cool (1)

StikyPad (445176) | about 10 months ago | (#45404929)

Won't happen. Integrated devices like "smart" TVs and dedicated media streaming hardware have already obsoleted HTPCs, and as much as I like to play some PC games on a big screen, the market is clearly satisfied with the walled gardens known as game consoles, most of which can serve as media streaming clients as well.

Re:Cool (2)

cloud.pt (3412475) | about 10 months ago | (#45403923)

The real deal here is you are purchasing an APU for less than half the price of a mid-range Intel & discrete graphics solution, and getting double the performance. Apple knows this is the way to go for price-performance, and that is why the new entry-level 15" MBPs lost the discrete GPU. OEMs like Apple are forcing Intel to catch up with integrated GPU technology. It's all about trade-offs: you place a less powerful GPU on the same die as the CPU in order to get the best possible memory interface. While you won't reach enthusiast or prosumer performance levels without adding a high-end GPU, you will definitely target the common-user market.

Re:Cool (0)

Anonymous Coward | about 10 months ago | (#45404077)

Yes, and if you buy a Ferrari you can go faster than you could in your Honda. How is "price/performance tradeoff" a hard concept to understand?

Re:Cool (2)

0123456 (636235) | about 10 months ago | (#45404199)

How is "price/performance tradeoff" a hard concept to understand?

Because if you care about games, this is too slow. If you don't care about games, this is irrelevant. I can't see any tradeoff that makes any sense outside of tiny niche markets (e.g. people who want to play games but don't care if they look like crap).

Re:Cool (2)

gl4ss (559668) | about 10 months ago | (#45404281)

it's fast enough to play though.

Re:Cool (1)

timeOday (582209) | about 10 months ago | (#45404309)

What's wrong with 28-40 FPS on BF4 at 1920x1080? That's a brand-new game with high end graphics.

Re:Cool (0, Troll)

Rockoon (1252108) | about 10 months ago | (#45404907)

Nothing.

This is what's going on here. User 0123456 purchased or built an i7 system and a GTX 660, which surely cost somewhere between $600 and $1200. He is now trying to justify this expense to himself by arguing with others. If he really felt that he made a good decision, he wouldn't feel the need to throw out opinions ("because if you care about games, ...") as if they trump the numbers.

Some people literally spend thousands on their gaming rigs, so why it is that he is so self-conscious about what he spent can only be speculated about.

Re:Cool (2)

0123456 (636235) | about 10 months ago | (#45405589)

What's wrong with 28-40 FPS on BF4 at 1920x1080? That's a brand-new game with high end graphics.

It's not 'high-end graphics' when you're playing on a low graphics setting.

Turn it up to Ultra and see what it runs at.

Re:Cool (0)

Anonymous Coward | about 10 months ago | (#45404357)

people who want to play games but don't care if they look like crap

We used to call those "console users".

Re: Cool (0)

Anonymous Coward | about 10 months ago | (#45404479)

Umm... I hope you realize that playing video games is a niche market compared to all other markets the CPU manufacturers address.

Re:Cool (2)

LordLimecat (1103839) | about 10 months ago | (#45404695)

Because if you care about games, this is too slow.

It's really not, unless it's not the game you care about but the eye candy. IIRC professional Starcraft 2 gamers (who could be said to "care about games") turn the graphics way down anyways.

Re:Cool (2)

LordLimecat (1103839) | about 10 months ago | (#45404679)

So I _COULD_ buy a car for under $20k that does 0-60 in a modest amount of time...
Or I could just buy a Bugatti Veyron with a better engine and drive properly.

Is that the argument you're making?

Re:Cool (0)

Anonymous Coward | about 10 months ago | (#45405599)

Cheap store-bought PCs will now be able to play games quite well, plus GPU-accelerate many tasks. Outside of gamers, few people ever buy extra graphics cards, so it makes sense to have acceptable performance in a computer straight out of the box.

Sure, gamers, hobbyists and those with lots of spare cash will opt for the most expensive pieces of kit, but that's no reason to ignore the lower end of the market. Think of how many now happily own cheap Android tablets and get by with $300 laptops. Not everyone has the money to blow on the fanciest hardware - and many of those who do would rather spend it on something else.

Yes, but... (5, Funny)

mythosaz (572040) | about 10 months ago | (#45403695)

...how much faster does it mine Bitcoins?

I need to mine some so I can put them in a totally safe online wallet.

Re:Yes, but... (-1)

Anonymous Coward | about 10 months ago | (#45403777)

what are these coins you speak of? can I store them in my butt?

Re:Yes, but... (0)

Anonymous Coward | about 10 months ago | (#45403855)

Certainly. Right next to that big rock and that clock.

Re:Yes, but... (1)

PRMan (959735) | about 10 months ago | (#45404017)

It won't. There are custom chips for that now. And you're right, online wallets aren't safe.

Re:Yes, but... (0)

Anonymous Coward | about 10 months ago | (#45404155)

that's why I print my bitcoins onto paper and keep them in my wallet. all .03821 BTC of them

Yeah i don't get it (1)

hypergreatthing (254983) | about 10 months ago | (#45403717)

What market is AMD shooting for?
Haswell with Iris Pro will probably beat out AMD for integrated graphics performance and will have better battery life.
On the top end, desktop users will always go for a dedicated graphics card.
On the mobile end, these things will eat up battery and have no reason to be on a tablet.
All that's left is the cheap OEM side of things. Haswell is still fairly expensive on the low end. If Intel can bring down the price a bit and make it competitive, they will beat out AMD in every category.

Re:Yeah i don't get it (1)

amiga3D (567632) | about 10 months ago | (#45404063)

The cheap OEM side is a huge market. This is good because now the cheap OEM side is decent instead of shitty in terms of performance. Hardcore gamers with money to blow are not the market for this.

Re:Yeah i don't get it (1)

hypergreatthing (254983) | about 10 months ago | (#45404367)

sorry, i should of said the cheap OEM side with mid-range 3D graphics. The market doesn't exist for that. The typical non-gaming user wouldn't care if the 3D graphics capability was from 3 years ago. If it can play video and run business apps, that's all they care about.

Old Sandy Bridge/Ivy Bridge chips already fit that market perfectly, and prices have already come down for those chips. Or throw in an old Richland/Trinity and they wouldn't know the difference.

Re:Yeah i don't get it (0)

Anonymous Coward | about 10 months ago | (#45405209)

"should of" said? SHOULD OF? Godamnit. It's SHOULD HAVE.

Re:Yeah i don't get it (1)

nigelo (30096) | about 10 months ago | (#45405757)

He said it on accident.

Re:Yeah i don't get it (1)

amiga3D (567632) | about 10 months ago | (#45406141)

Yet another grammar Nazi. Die already.

Re:Yeah i don't get it (1)

symbolset (646467) | about 10 months ago | (#45404109)

Small form factor business PCs, Media center PCs, low-end Steambox, emerging economies desktop. Strangely enough, servers. Integrating the GPU into the CPU gets the BOM cost down and raises the minimum performance standard. They are now approaching a teraflop on an APU. That is amazing.

Re:Yeah i don't get it (1, Insightful)

0123456 (636235) | about 10 months ago | (#45404239)

Small form factor business PCs,

Don't need 3D performance. Don't need GPGPU performance in 99% of cases.

Media center PCs

Plenty fast enough already to play video at 1920x1080.

low-end Steambox

If you want your games to look like crap.

Integrating the GPU into the CPU gets the BOM cost down and raises the minimum performance standard.

Because lots of people run 3D games on servers.

Certainly we do use GPUs for some floating-point intensive tasks on servers, but this is nowhere near fast enough to be useful.

Re:Yeah i don't get it (2)

SirSlud (67381) | about 10 months ago | (#45404537)

Because lots of people run 3D games on servers.

Certainly we do use GPUs for some floating-point intensive tasks on servers, but this is nowhere near fast enough to be useful.

We're not that far off thin client gaming. So suggesting that lots of companies won't be running 3D games server-side in the near future is disingenuous.

Re:Yeah i don't get it (1)

0123456 (636235) | about 10 months ago | (#45405609)

We're not that far off thin client gaming. So suggesting that lots of companies won't be running 3D games server-side in the near future is disingenuous.

If you're buying a server to run 'thin client gaming', you sure as heck won't be using integrated graphics to do so.

Re:Yeah i don't get it (0)

Anonymous Coward | about 10 months ago | (#45404551)

Don't get in the way of an AMD fanboy fapathon!

Re:Yeah i don't get it (1, Troll)

jkflying (2190798) | about 10 months ago | (#45404779)

Small form factor business PCs,

Don't need 3D performance. Don't need GPGPU performance in 99% of cases.

Doesn't matter, because it's cheap. Also, CAD and Photoshop *do* use GPGPU these days.

Media center PCs

Plenty fast enough already to play video at 1920x1080.

This should handle 4k video decoding.

low-end Steambox

If you want your games to look like crap.

I think you missed the "low end" part of that quote. Also, it will be really, really cheap compared to something with an additional dGPU. You don't even need PCIe on the motherboard. Not everybody can afford to game at 3x 1080p on high. These should handle 1080p on medium just fine.

Integrating the GPU into the CPU gets the BOM cost down and raises the minimum performance standard.

Because lots of people run 3D games on servers.

Certainly we do use GPUs for some floating-point intensive tasks on servers, but this is nowhere near fast enough to be useful.

These have HUMA. GPGPU-CPU interactions will be much faster than on any previous architecture because not only do they share memory space, they are also cache coherent at a hardware level. It suddenly makes having a whole bunch of FPUs on the graphics card useful for regular old FPU applications, because they can be accessed just as quickly as SSE/x87 FPUs. It makes OpenCL suddenly useful for very small kernels, instead of only being useful for massive data-processing chunks where the parallelisation had to be wide and simple enough to make up for memory copying overhead. TL;DR: I want this on my server, even if just for the stuff like generating graphics and accelerating database hashing. Never mind Folding@home and HPC kind of work.

Seriously, stop being such a downer.
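
The copy-overhead point is easy to put rough numbers on. Below is a toy break-even model in C; all three throughput figures are assumptions chosen only to illustrate the shape of the trade-off, not measured values:

    #include <stdio.h>

    int main(void) {
        const double pcie_gbs   = 8.0;    /* assumed effective PCIe bandwidth */
        const double cpu_gflops = 100.0;  /* assumed host FPU throughput */
        const double gpu_gflops = 700.0;  /* assumed APU shader throughput */
        const double flops_per_byte = 10.0;

        for (double mb = 1.0; mb <= 256.0; mb *= 4.0) {
            double bytes = mb * 1e6;
            double flops = bytes * flops_per_byte;
            double t_cpu      = flops / (cpu_gflops * 1e9);
            double t_discrete = 2.0 * bytes / (pcie_gbs * 1e9)   /* copy in + out */
                              + flops / (gpu_gflops * 1e9);
            double t_shared   = flops / (gpu_gflops * 1e9);      /* no copies */
            printf("%4.0f MB job: CPU %.4fs  dGPU+copies %.4fs  shared-memory GPU %.4fs\n",
                   mb, t_cpu, t_discrete, t_shared);
        }
        return 0;
    }

Small jobs that lose to the CPU once the PCIe copies are counted become wins when the copy term drops out, which is exactly the case being made for cache-coherent shared memory.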

Re:Yeah i don't get it (1)

Rockoon (1252108) | about 10 months ago | (#45404341)

Intel's problem is that the method they used to get Iris Pro to perform so well for them (which is actually only about equal to the existing Radeon HD 8670D in the 6800K APU) is expensive, and the method isn't going to get any cheaper any time soon.

The method is simply to add another cache level to their cache hierarchy, and to make it a massive and ridiculously expensive 128MB.

If cache was cheap, all their processors would have a 128MB L4 cache. Cache is quite expensive though, which is why their budget Haswells such as the Core i3-4130T only have 3MB of it, and the Intel HD 4400 on them performs at literally half the speed of the Iris Pro or the 6800K's Radeon HD 8670D.

Intel's GPU problem continues to be a shitty GPU architecture, which is a result of them not giving many actual shits about it.

Re:Yeah i don't get it (1)

edxwelch (600979) | about 10 months ago | (#45404461)

The eDRAM in the Iris Pro is quite expensive to manufacture, and hence Intel charges a premium for it. The Core i7-4770R, for instance, is listed at $392.00.

Re:Yeah i don't get it (1)

Aighearach (97333) | about 10 months ago | (#45404585)

This is huge because it means low-end systems will have strong performance with HTML5 apps, WebGL, and casual/educational 3D rendering. It also means that gaming on low-end systems will be vastly improved.

I can't imagine how you can claim how the market will respond to this vs. Intel's offering without some prices and delivery dates. Historically, the AMD offering will have a more favorable price/performance ratio, and Intel will sell more in the high end based on their brand.

And these are low power. An APU uses less power than a comparable CPU+GPU. It basically means that the mid-range laptops that might currently have a discrete GPU would use less power for the same graphics performance by not needing it anymore. It pushes the bar for needing a discrete GPU farther into the high end.

This could also be used in new classes of systems; for example, for people who already stopped needing more CPU, but buy mid to high end systems because they need lots of RAM and they want decent 3d performance - rarely. This probably describes most software developers these days. This could mean a mid-level system with lots of RAM that is still low power, and can do "everything."

Re:Yeah i don't get it (1)

Anonymous Coward | about 10 months ago | (#45404879)

What market is amd shooting for?

Clearly the Finnish one - 'kaveri' means a friend in Finnish.

Re:Yeah i don't get it (1)

Iniamyen (2440798) | about 10 months ago | (#45406749)

All that's left is the cheap oem side of things.

Isn't that pretty much the biggest market out there??

Cut the crap. We all know what needs to be made. (1, Interesting)

Anonymous Coward | about 10 months ago | (#45403849)

We need a CPU/GPU SoC based on the tech that's going in to the xbone and ps4. They both have multicore procs with a built in GPU that's capable of pushing next gen games at HD resolution.

We need that. A cheap PC built on a powerful single-chip solution. Not this wimpy shit.

Personally, I'd go for the PS4 solution. 8 gigs of high speed GDDR5 that's both the main memory and graphics memory? Fuck yes. Give me that. I'd forgo the ability to expand memory if I could have GDDR5 as main memory. (The DDR3+128meg edram solution in the xbone is cheaper and clearly inferior)

Re:Cut the crap. We all know what needs to be made (0)

dc29A (636871) | about 10 months ago | (#45403883)

They both have multicore procs with a built in GPU that's capable of pushing next gen games at HD resolution.

Not the Xbox One, it will render 720p and 900p and upscale it to 1080p. PS4 can render 1080p without upscaling.

Re:Cut the crap. We all know what needs to be made (0)

Anonymous Coward | about 10 months ago | (#45403989)

The PS4 doesn't maintain constant 60 FPS though. It dips here and there :)

Re:Cut the crap. We all know what needs to be made (0)

Anonymous Coward | about 10 months ago | (#45404183)

They both can render 1080p games at 60 fps. Developers on consoles make the choices on performance for games. With a console, you hope that the developer has chosen the best balance of performance and graphical fidelity since you don't get to choose or play around with it like you can on the pc. You'll see better performance from these machines as the engines become more mature and the developers more experienced with the platforms.

Re:Cut the crap. We all know what needs to be made (0)

Anonymous Coward | about 10 months ago | (#45404161)

Not the Xbox One, it will render 720p and 900p and upscale it to 1080p. PS4 can render 1080p without upscaling.

Both can render 1080p. In practice, neither does everything in 1080p (although the PS4 seems to have more titles in actual HD).

Re: Cut the crap. We all know what needs to be mad (0)

Anonymous Coward | about 10 months ago | (#45404025)

GDDR5 doesn't perform well under loads the CPU throws at it. GDDR in general does really well for sequential things like scanning screens, reading textures, antialiasing. Peak bandwidth is a pretty terrible measure of memory performance... it makes big numbers look better though, which is why marketing departments like it.

Re: Cut the crap. We all know what needs to be mad (0)

Anonymous Coward | about 10 months ago | (#45404279)

Err, 5Gb/s GDDR5 has about the same absolute latencies as DDR3-2133...

Intel Iris Pro (1)

nhat11 (1608159) | about 10 months ago | (#45404011)

AMD should at least try comparing against the Intel Iris Pro, which is Intel's highest-end GPU. The GT 630 is an OK low-end GPU, depending on which version they use.

Re:Intel Iris Pro (0)

Anonymous Coward | about 10 months ago | (#45404475)

As stated previously, the Iris Pro is not available on "Desktop" type Intel processors, only BGA style solder-on processors.

The real advantage is the programming model (5, Interesting)

Anonymous Coward | about 10 months ago | (#45404315)

These machines share the memory between CPUs and GPUs, and that's the advantage: you can use the GPU cores to do otherwise prohibitively expensive operations (such as detailed physics, particle simulations, etc.) very easily. Traditional systems need to copy data between VRAM and main memory over the system bus, which takes time.

Programming languages are already starting to support mixed CPU/GPU programming through new language constructs. At the moment, mainly rendering and physics are done on the GPU; soon it will be easy to do anything that can be efficiently parallelized.
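
One example of such a construct (my own illustration, not something the parent names) is the accelerator-offload support added in OpenMP 4.0: a directive marks a loop for execution on the GPU, and on a shared-memory APU the map() clauses need not turn into copies. A minimal sketch, assuming a compiler with OpenMP offload support:

    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static float a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

        /* Offload the vector add. The map() clauses describe data movement;
           a unified-memory target can satisfy them without copying, and if
           no device is available the region simply runs on the host. */
        #pragma omp target map(to: a, b) map(from: c)
        #pragma omp teams distribute parallel for
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        printf("c[42] = %f\n", c[42]);
        return 0;
    }

Build with something like gcc -fopenmp; without an offload-capable toolchain the loop just executes on the CPU.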

Re:The real advantage is the programming model (0)

Anonymous Coward | about 10 months ago | (#45404565)

HSA is efficient, but how fast is it going to be with so few resources? I have a feeling a 20+ CU GPU with "slow" PCIe is going to smoke a Kaveri with only 8 CUs.

Re:The real advantage is the programming model (1)

chuckugly (2030942) | about 10 months ago | (#45405373)

Just like the CBM Amiga

Re:The real advantage is the programming model (2)

idontgno (624372) | about 10 months ago | (#45405805)

Yup.

It's the wheel of reincarnation: [catb.org]

...a well-known effect whereby function in a computing system family is migrated out to special-purpose peripheral hardware for speed, then the peripheral evolves toward more computing power as it does its job, then somebody notices that it is inefficient to support two asymmetrical processors in the architecture and folds the function back into the main CPU, at which point the cycle begins again.

Re:The real advantage is the programming model (0)

Anonymous Coward | about 10 months ago | (#45405877)

I'll back this up as an anonymous coward. Even lowercase to emphasize coward. Anyway, people who need to churn on huge datasets (>16GB or larger) with frequent interactivity between nodes (here's looking at you molecular dynamics) will see large improvements with this type of technology.

Now, let's get cracking on DDR4 and get a useful amount of memory bandwidth to play with.

Big news! (1)

viperidaenz (2515578) | about 10 months ago | (#45404967)

Brand-new AMD APU with 512 GPU cores beats a discrete NVIDIA card with 128 cores that's more than a year old now.

Hang on, was this supposed to be impressive?

Re:Big news! (1)

idontgno (624372) | about 10 months ago | (#45405931)

Of course it was.

It's benchmarketing. You're not supposed to pay attention to the unbalanced comparison behind the curtain. You're supposed to suspend all critical thought and begin Pavlovian salivation. Otherwise, you're not fanboi enough and need some re-education. Or something.

Meh. The way you can tell a marketer isn't lying is when he's not breathing.
