
AMD's Kaveri APU Debuts With GCN-based Radeon Graphics

crookedvulture writes "AMD's next-generation Kaveri APU is now available, and the first reviews have hit the web. The chip combines updated Steamroller CPU cores with integrated graphics based on the latest Radeon graphics cards. It's also infused with a dedicated TrueAudio DSP, a faster memory interface, and several features that fall under AMD's Heterogeneous System Architecture for mixed-mode computing. As expected, the APU's graphics performance is excellent; even the entry-level $119 A8-6700 is capable of playing Battlefield 4 at 1080p with medium detail settings. But the powerful GPU doesn't always translate to superior performance in OpenCL-accelerated applications, where comparable Intel chips are very competitive. Intel still has an advantage in power efficiency and raw CPU performance, too. Kaveri's CPU cores are certainly an improvement over the previous generation of Richland chips, but they can't match the per-thread throughput of Intel's rival Haswell CPUs. In the end, Kaveri's appeal largely rests on whether the integrated graphics are fast enough for your needs. Serious gamers are better off with discrete GPUs, but more casual players can benefit from the extra Radeon horsepower. Eventually, HSA-enabled applications may benefit as well."
  • by SargentDU ( 1161355 ) on Tuesday January 14, 2014 @05:40PM (#45957403)
    The summary did not state what the prices are. Are they cheaper to buy than the Intel chips they are being compared with?
    • By at least $200. That doesn't include the difference in price when comparing an AMD APU socket motherboard vs. an Intel socket motherboard.
      • Re: (Score:2, Insightful)

        by Anonymous Coward

        It's $200 cheaper than an i3 4330? That's pretty impressive given that the i3 is $130; are AMD going to refund me $70 for buying their CPU?

          • It's $200 cheaper than an i3 4330? That's pretty impressive given that the i3 is $130; are AMD going to refund me $70 for buying their CPU?

          If so, I think I'm going to buy enough CPUs to retire early.

          • The only way to properly compare the pricing is to include mobo, CPU, RAM, and GPU.
            No way around that.

        • Re: (Score:2, Troll)

          by higuita ( 129722 )

          And the graphics card is free for you?

    • by s.petry ( 762400 ) on Tuesday January 14, 2014 @08:07PM (#45959449)
      The summary also spends a lot of time talking about how great Intel is. It makes sense that prices are not discussed because the submitter appears to be heavily biased, and price always favors AMD.
      • Comment removed (Score:5, Interesting)

        by account_deleted ( 4530225 ) on Wednesday January 15, 2014 @04:35AM (#45962801)
        Comment removed based on user account deletion
        • by Mdk754 ( 3014249 )
          If only I had mod points... Everyone gets so caught up in the top of the line comparison, when the midrange price/perf is untouchable.

          My only gripe with AMD is their quest to hit that higher perf at the expense of power consumption. The TDPs on some of their chips are nuts.
        • by s.petry ( 762400 )
          To fit in line with your point that every pro-AMD comment gets modded down, I get modded a troll for pointing out the bias to begin with.
        • Indeed, at the end of the day what matters is the 80%+ of users, i.e. Average Joes.
          Very few people purchase the very top of the line; it makes absolutely no sense to pay triple for marginal gains, even comparing within the Intel brand. For a very few people it does make sense, however.

          We use quite a few AMD products in our DC; they are very solid, with a very nice performance-to-price ratio.
          Because AMD is not as widely used, we don't have the multitude of choices, but the highest-end difference is:

          Intel

    • Re: (Score:2, Insightful)

      Comment removed based on user account deletion
      • by WuphonsReach ( 684551 ) on Wednesday January 15, 2014 @07:07AM (#45963393)
        As a long time AMD fan (if we whitebox build, it's always an AMD chip), I have to say "it depends".

        For a lot of applications, per-core performance is what matters. And for the last few years, Intel beats AMD hands-down on per-core performance. As in 30-50% faster. That i3 for $200 is going to run rings around the AMD for $200. For a lot of single-threaded programs (many games are CPU-bound by a single thread), that 30-50% faster speed matters.

        However, if your application is multi-threaded and the problem you are trying to solve (media transcoding) is easily done in parallel, then the AMD chips are a better fit.

        The "Bulldozer" architecture was a dud. Lots of cores for cheap, but low performance per core under a lot of workloads. The Piledriver architecture is better and AMD is at least somewhat competitive again.

        I'm very curious to see how well the new Steamroller (Kaveri) series chips perform.
      • Damn right!
        And this goes for servers as well.

        The bulk of nodes we sell are *first gen* ATOM. Yeah, first gen. And they mostly idle, since our workload is I/O-intensive, not CPU-bound.
        Even the high-end gear we purchase is two gens old - but it's still higher end than our last high-end gear.

        A 1U dual quad-core Xeon L5520 with 72GB RAM for less than $500, with room for 4x3.5"? Who could resist that, when we used to pay close to $200 per month for a Xeon W3560 with 32GB RAM and 2x2TB drives.
        And the CPUs offer more power than th

    • Re: (Score:3, Insightful)

      by guacamole ( 24270 )

      Benchmarks show that for pure CPU-intensive tasks, the A10 APUs are roughly comparable to Haswell Core i3s (the entry-level ones, at least). The i3-4150 costs $130-140, and the last-generation A10-6800K has dropped to $130-140. The new A10-7850K is listed for $189 on Newegg. Considering this, the new A10-7850K is not very enticing at all. It's not even convincingly faster than the A10-6800K, with the current drivers at least. AMD hinted that the new A10-7850K graphics performance will be on the level of the Radeon HD7730

      • How about the Intel Pentium G3220? It's Haswell, socket 1150, low power and nearly half the price of the i3.

        • Yes. To add insult to injury, the G3220 is priced at $69 on Newegg right now. It's basically a slightly lower-clocked i3 without hyperthreading. If you don't play games or edit multimedia, then that's all you really need in an entry-level desktop. Add a $100 video card, and it will probably run games at a faster frame rate than AMD's $189 A10 Kaveri.

      • by Anonymous Coward

        With the AMD chips you also get full virtualization, encryption, ECC RAM, and overclocking.

  • Saying something is "capable of playing" a game at X settings is a completely pointless statement for almost all hardware because the ability to play is based almost solely on having enough memory to load all of the assets. They need to state the average frame rate it gets at whatever settings they are playing it on.
    You could play BF4 on a 300 MHz processor as long as you have enough memory, it would just look like a slide show.
    • by Baloroth ( 2370816 ) on Tuesday January 14, 2014 @05:59PM (#45957657)

      Most people, when they say "capable of playing", mean that it can actually be played on those settings, i.e. that the frame rate is high enough for the game to be considered playable. Generally, this means an average frame rate of ~30 and minimums of 20 or more (although that depends a bit on the reviewer; some people consider a frame rate of 30 totally unplayable, while personally I find anything above 20 still playable).

      • Except that people who say "capable of playing" never mean 30 fps, and are likely completely oblivious to how many frames per second the game is running at. The GPU in question definitely does not average 30 fps on BF4 at medium settings.
        • I tried to find some hard data on either statement. It looks like the model number in TFS is a typo, and the test I found that showed results with BF4 [techreport.com] neglected to explain what the medium settings are. It does, however, show an average of 28fps, which supports your "definitely not 30fps" by a hairsbreadth. Now if only there were some technology [geforce.com] to make that difference from 20fps count...
    • Except that in this case "X settings" is 1080p30. It may be low quality otherwise, but it meets your requirement.

  • by gman003 ( 1693318 ) on Tuesday January 14, 2014 @05:44PM (#45957469)

    I'm helping a friend with a custom, low-cost gaming machine. We'd looked into using an APU, and I looked into it again today when I saw this. The gaming performance just isn't there yet. They're fine for regular desktop use, but even the top-of-the-line one can't handle gaming.

    The two things that could still be useful are GPGPU and dual graphics. Having an on-chip GPU just for compute purposes, especially with all the enhancements they've added, would be very useful if more things used GPU compute, but it just wasn't worth it for this build and this user. And they have spoken a bit about using both the integrated GPU and a discrete graphics card in tandem, similar to using two GPUs in Crossfire, but they haven't released the drivers for it, nor listed which cards will work, and the card they chose to demo it with was their bottom-end graphics card. Given all that, and that a similar CPU without the integrated graphics was about half the price, I couldn't justify getting one.

    I am pretty impressed with how tightly they've integrated them, though. Much better than Intel's offerings. If they made one that had the graphics horsepower for gaming, I'd have used one.

    • and the card they chose to demo it with was their bottom-end graphics card.

      Probably because you wouldn't notice a difference if you paired a tiny integrated GPU with a powerful standalone one. The added overhead may even reduce performance.

      • by jandrese ( 485 )
        I was thinking the integrated GPU might be useful for PhysX calculations while the discrete GPU does the graphics.
        • That's not valid for AMD cards or IGPs, because PhysX is Nvidia-only. If companies start using OpenCL to implement physics acceleration, that could change.
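          For what it's worth, a vendor-neutral physics step in OpenCL C could look like this minimal sketch (a hypothetical explicit-Euler particle integrator, not any shipping engine's code):

            /* OpenCL C kernel: each work-item advances one particle. */
            __kernel void integrate(__global float4 *pos,
                                    __global float4 *vel,
                                    const float4 gravity,
                                    const float dt)
            {
                size_t i = get_global_id(0);
                vel[i] += gravity * dt;   /* accumulate acceleration */
                pos[i] += vel[i] * dt;    /* advance position */
            }

          Since it's plain OpenCL, the same kernel would run on Nvidia, AMD discrete cards, or an APU's integrated GPU.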
    • by phorm ( 591458 )

      The article also notes that a lot of the tech in these is new, so older games don't necessarily take advantage of it. It would be interesting to see how this looks a year from now.

      • by 0123456 ( 636235 )

        The article also notes that a lot of the tech in these is new, so older games don't necessarily take advantage of it. It would be interesting to see how this looks a year from now.

        Game developers optimize to run best on the fastest computers out there, not slow CPUs with slow integrated graphics. AMD would have to pay them to put effort into optimizing for these things.

        • I think GP might be referring to AMD's Mantle API. [wikipedia.org] Apparently Battlefield 4 [pcgamer.com] supports it.
        • by phorm ( 591458 )

          Good game developers make games run at reasonable performance on most machines. It sells a lot more games that way...

          • by 0123456 ( 636235 )

            Good game developers make games run at reasonable performance on most machines. It sells a lot more games that way...

            But they won't go out of their way to implement special support for crappy hardware, because every game review and benchmark will be running on a high-end machine.

    • by Anonymous Coward

      It sounds like you were actually trying to build a low-cost high-end gaming machine. That can't be done, it doesn't work like that. Look at what GPU and CPU performance AMD A-series gives you, assess if it's enough for you, and if it is then look at the price and pick up your jaw; this is the strength of the AMD A-series, good CPU and GPU performance at an amazing price in a single package.

      If you want the best then you have to pay silly money for Intel and discrete graphics boards, that's just how it is.

    • I'm shocked GPUs, especially with all this integration, haven't taken over already. The whole reason Intel bought half the industry was that it became obvious a Pentium core could be tucked into a tiny corner of a 3D graphics chip, both speed- and transistor-count-wise, as GPU development, driven by infinite potential consumption of cooler and more complex virtual worlds, would ever-more outstrip a general-purpose CPU.

      Frankly, by now I was expecting a merged GPU/monster FPGA-type design with dynamic programmin

      • by Anonymous Coward

        GPU computation hasn't taken off because APUs still use separate address spaces for the CPU and GPU bits. It is a real bitch to copy from CPU to GPU and then back to the CPU again. Yes, even on a single chip. Insanity!

        A unified address space should fix the problems, but then memory protection needs to be handled properly. Apparently coming "Soon" from AMD, but not soon enough.
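        To make the contrast concrete, here's a minimal host-side sketch of the two models (hypothetical context/queue/kernel handles; assumes an OpenCL runtime, with the shared-pointer path requiring OpenCL 2.0 fine-grained SVM):

          #include <CL/cl.h>

          /* Separate address spaces: every round trip is an explicit copy. */
          void copy_based(cl_context ctx, cl_command_queue q, cl_kernel k,
                          float *data, size_t n)
          {
              cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                                          n * sizeof(float), NULL, NULL);
              clEnqueueWriteBuffer(q, buf, CL_TRUE, 0, n * sizeof(float),
                                   data, 0, NULL, NULL);         /* CPU -> GPU */
              clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
              size_t gsz = n;
              clEnqueueNDRangeKernel(q, k, 1, NULL, &gsz, NULL, 0, NULL, NULL);
              clEnqueueReadBuffer(q, buf, CL_TRUE, 0, n * sizeof(float),
                                  data, 0, NULL, NULL);          /* GPU -> CPU */
              clReleaseMemObject(buf);
          }

          /* Unified address space (fine-grained SVM): pass the pointer;
             both sides touch the same memory, no copies. */
          void shared_based(cl_context ctx, cl_command_queue q, cl_kernel k,
                            size_t n)
          {
              float *data = clSVMAlloc(ctx, CL_MEM_READ_WRITE |
                                       CL_MEM_SVM_FINE_GRAIN_BUFFER,
                                       n * sizeof(float), 0);
              for (size_t i = 0; i < n; i++)
                  data[i] = (float)i;             /* host writes directly */
              clSetKernelArgSVMPointer(k, 0, data);
              size_t gsz = n;
              clEnqueueNDRangeKernel(q, k, 1, NULL, &gsz, NULL, 0, NULL, NULL);
              clFinish(q);                        /* host reads directly after */
              clSVMFree(ctx, data);
          }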

    • by theGreater ( 596196 ) on Tuesday January 14, 2014 @07:17PM (#45958799) Homepage

      And they have spoken a bit about using both the integrated GPU and a discrete graphics card in tandem, similar to using two GPUs in Crossfire, but they haven't released the drivers for it, nor listed which cards will work, and the card they chose to demo it with was their bottom-end graphics card.

      That's not very truthy:
      http://www.amd.com/us/products/technologies/dual-graphics/pages/dual-graphics.aspx#3 [amd.com]

      • That link is not very truthy. Not only does it list just a single "recommended" card, rather than a list of any that are compatible, but it also has not been updated for these new GCN-based APUs. As noted in TFA (the Anandtech one, specifically), "AMD recommends testing dual graphics solutions with their 13.350 driver build, which [is] due out in February."

    • Comment removed based on user account deletion
  • by serviscope_minor ( 664417 ) on Tuesday January 14, 2014 @05:49PM (#45957527) Journal

    Really looking forward to the HSA benchmarks.

    Nothing out there will tax these chips. All GPGPU code is written assuming huge latency between CPU and GPU. With shared caches these things have nanosecond latency and should be able to bring the GPU to bear on a much wider class of algorithms.

    Now it's always worth shipping work to the GPU, since if the data is in the L2 cache, it's there for the GPU as well.

    It will take a while before people code to this though.

    • So you mean kind of like what the Intel chips already do?

      • by Bengie ( 1121981 )
        Intel's IGP is not the same. It's true that it also uses the same L4 cache, but it is not the same address space. This means data will be duplicated as it needs to be copied from the CPU address space to the GPU address space, even if they share the same physical memory.
        • Sort of like "Intel InstantAccess", which allows the CPU to directly access GPU memory space?

          It was a driver limitation, not a hardware one.

          • Re: (Score:3, Insightful)

            by Bengie ( 1121981 )
            Except "Intel InstantAccess" requires making system calls to allow the kernel to map GPU memory to user space. AMD's HSA requires nothing special at all. The GPU understands and honors protected mode, so you can arbitrarily pass pointers to and from the GPU with no system calls. You can even communicate between the GPU and CPU without system calls. AMD HSA even lets the GPU work with virtual memory. "Intel InstantAccess" only works with data that is in memory, AMD can issue page faults and let the OS load
            • Re: (Score:1, Flamebait)

              So you knew how it works and you just decided to spread lies in your previous post?

              • by Bengie ( 1121981 )
                Get back to me when Intel has something that requires no system calls and has a unified memory space, then they'll be comparable.
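                For reference, that's roughly what it buys you in code; a minimal sketch assuming a device that reports fine-grained system SVM (hypothetical queue/kernel handles):

                  #include <CL/cl.h>
                  #include <stdlib.h>

                  /* With fine-grained system SVM, any host pointer -- even
                     plain malloc'd memory -- is visible to the GPU, with no
                     mapping calls and no copies. */
                  void fine_grain_system(cl_command_queue q, cl_kernel k,
                                         size_t n)
                  {
                      float *data = malloc(n * sizeof(float));
                      clSetKernelArgSVMPointer(k, 0, data); /* raw pointer */
                      size_t gsz = n;
                      clEnqueueNDRangeKernel(q, k, 1, NULL, &gsz, NULL,
                                             0, NULL, NULL);
                      clFinish(q);          /* CPU reads data directly */
                      free(data);
                  }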
    • The Xbox One and PS4 have given us a preview of AMD's technology in this area, haven't they?
      • The PS4 has high-speed RAM shared between video and CPU.

        The Xbox has slower desktop DDR3 for video/CPU.

        The PS4 seems good, and the high-speed RAM makes it better than other on-board chips that use the slower desktop RAM.

    • by godrik ( 1287354 )

      I am a little bit skeptical about that. I am not really sure how much it will really change things. The use case actually seems very thin to me. You need a kernel which is compute-intensive and where the data transfer from memory to the core is expensive, because if there is little data to transfer, then the overhead is small. I read some benchmarks from AMD and only a few kernels seemed to be in the sweet spot. On top of that, because of the memory architecture, I feel like raw memory-to-core bandwidth will b

        • You need a kernel which is compute-intensive and where the data transfer from memory to the core is expensive.

        You're missing the CPU-GPU latency.

        So, the integrated GPU will already work as well as any other GPGPU of similar specs. Nothing has got worse there.

        There is a problem with writing GPU code in that some things work far better on the GPU and other things work far worse than a CPU. At the moment writing code which has a mixture of those is extremely hard.

        Basically, this architecture will allow one to

        • by godrik ( 1287354 )

          You're missing the CPU-GPU latency.

          The CPU-GPU latency is already very small, smaller than a millisecond, and I assume that most of the CUDA startup latency comes from configuring the kernel launch and not from the interconnect. PCI Express has very low latency; InfiniBand cards get network communication with a latency of less than 2 microseconds. That's about 6000 CPU cycles (assuming a 3GHz CPU).

          If you want to gain by removing latency, you need the computation to be VERY small and frequent.

          Moreover, we are very good at overlapping communication
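          As a sanity check on those numbers, here's the back-of-envelope calculation as a trivial C program (figures as assumed above: 3GHz clock, 2 microsecond round trip):

            #include <stdio.h>

            /* Cycles burned waiting out one round-trip latency. */
            int main(void)
            {
                double clock_hz  = 3e9;   /* 3GHz CPU, as assumed above */
                double latency_s = 2e-6;  /* ~2us interconnect round trip */
                printf("cycles per round trip: %.0f\n", clock_hz * latency_s);
                /* prints 6000 -- a kernel launch must amortize thousands of
                   cycles of work before transfer latency stops dominating */
                return 0;
            }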

    • by LWATCDR ( 28044 )

      That is going to be the kicker. Just like VLIW it will really depend on software tools and support. AMD is supporting a lot of FOSS projects that support OpenCL. Maybe AMD needs to throw some support to WebKit and Mozilla to support their GPU compute systems.

  • While the GPU is good, the Kaveri CPU is slightly slower than Richland in the benchmarks - after 4 years of waiting that's a big disappointment.

    • by elwinc ( 663074 )
      Anandtech points out that they chose a process with higher transistor density to go for greater IPC instead of high clock rates in the CPU. There's also an amusing comment in the review about how the Bulldozer CPU architecture "sure had a lot of low hanging fruit." In other words, why weren't most of these improvements included back in 2011?
      • In other words, why weren't most of these improvements included back in 2011?

        Developing hardware is a lot different from developing software. With software you can go "oh, that now works, let's add this" or "oh, that didn't work out, how about we take that out". With hardware you can't, without going back to the start of the manufacturing process.

        With hardware, a large part of the exercise is risk management: adding one feature that you can't get production-ready will kill the entire product. So most projects pick just one or two key areas to develop, the ones that will make the big

  • by Anonymous Coward on Tuesday January 14, 2014 @05:57PM (#45957613)

    It's 1:2 AMD:Intel, at the kindest level.

    It's 2:3 with Radeons:Nvidia.

  • In Finnish, kaveri means buddy. Quite a fitting name :)
    • In Finnish, kaveri means buddy. Quite a fitting name :)

      And "apu" means help or assistance, or auxiliary as a prefix. For example "apuprosessori" meaning co-processor.

  • Since AMD has gone fabless, who do they now use to manufacture these chips?
    • by Anonymous Coward

      GloFo

    • by Mashdar ( 876825 )
      The same people as always. They spun off "GlobalFoundries", but are still using them. (They are contractually obligated to, for the foreseeable future.)
  • by Salgat ( 1098063 ) on Tuesday January 14, 2014 @06:32PM (#45958121)
    It's very exciting seeing both AMD and Intel compete to push embedded GPUs. More and more of the computer is being pushed onto the CPU's package (SoC); one day we can expect to see RAM embedded too, as a new level of cache that is more than sufficient even for gamers. The reason discrete GPUs and other components will ultimately lose is latency. GPUs and CPUs will reach a point where the bottleneck between them hinders communication enough that embedded GPUs become a necessity. The same goes for RAM. One day we may even see hybrid CPU/GPUs, such that some cores are more general-purpose while others are more special-purpose. Ultimately we can thank our phones for helping drive this push, especially since phones are rapidly approaching the performance of desktop and laptop computers.
    • by 0123456 ( 636235 )

      Good luck getting a 100W CPU and 300W GPU into the same package.

      Well, OK, stuffing them in there won't be too hard, but cooling it will be a bastard.

  • by Anonymous Coward on Tuesday January 14, 2014 @06:34PM (#45958165)

    Most of the people who decide they still need a full-sized desktop computer will be completely covered by one of the AMD A-series APUs, at a bargain price. Only the remaining 1 out of 5 users are power users who need the highest CPU and/or GPU performance, and have to resort to expensive Intel CPUs and discrete graphics boards.

    • Re: (Score:2, Insightful)

      by guacamole ( 24270 )

      Expensive Intel CPUs? Intel's Core i3 is pretty much equivalent to the AMD A10 in general-purpose CPU power. Right now, the i3-4130 is $129 on Newegg while the A10-7850K is $189. The only thing the A10 has on the Core i3 is integrated graphics, but throw a $100 Radeon card into either of these systems, and it will run much faster than the integrated graphics on the A10. And don't forget the dual-core Haswell Pentium chips sold for under $100. A Pentium G3220 costs $69 on Newegg right now. Add a $100 Radeon HD77

  • by wjcofkc ( 964165 ) on Tuesday January 14, 2014 @07:38PM (#45959067)
    I would say this discussion fits in well with the frequent discussions we have about the alleged impending death of the desktop computer. Consider the argument that PCs which were brand new around 2007/2008 are still so overpowered for most needs that this is the cause of declining PC sales. Now consider that the much less expensive, yet brand spanking new AMD chips are practically supercomputers compared to chips from that era. If my 2008 PC is still really fast, but I want to upgrade anyway, why pay the Intel premium (outside of some ultra-demanding professional use) when I can save so much and still have a computer that will be faster than I need for years to come? That's AMD's advantage in this game. I currently have a quad-core 3GHz AMD system with the GPU disabled in favor of a low-cost NVIDIA card, and it's great. I am waiting till next fall for the price to plummet on the current 8-core 4GHz AMD chips for my next upgrade. And even then it will be just for the hell of it, not need.
  • It's pretty sad reading, IMHO. The Kaveri APU does not even seem decidedly faster than the last-generation A10. The only bright spot is that the 65W TDP A8 APU is not that much slower than the 95W A10 APU.

  • by xiando ( 770382 ) on Wednesday January 15, 2014 @12:03AM (#45961477) Homepage Journal
    Still using a Phenom II X3 *CPU* and it's fast enough for my GNU/Linux system, so I see little reason to upgrade it. But if I decide to do so, I would very much like to buy a CPU, not an APU. Would it be so hard for AMD and Intel to offer actual CPUs again? Am I the only one who would like to buy one at some point? APUs are nice if you want a cheap system with alright graphics... but why do they force us to buy one even if all we want/need is a CPU?
    • Well, at least Intel is not charging a huge premium for the integrated graphics. The Core i3-4150 is only $130, and the rest of the Core line uses the same basic GPU.

    • No one is forcing you to buy a CPU with integrated graphics. Look at the high end solutions. This includes AMD's FX series and Intel's socket LGA 2011 platform. Both Intel and AMD know that people buying high-end CPUs will buy a discrete graphics card anyway, so there is no point wasting valuable die space on it.
    • Comment removed based on user account deletion
    • The trend away from desktops to laptops continues. Both AMD and Intel design with this in mind. With Intel, the biggest improvement in Haswell was power consumption, which on a desktop is meaningless for the most part. Both are trying to greatly improve their integrated graphics, because making laptops with dedicated cards is expensive, and you can sell more of the cheaper ones. It is easier to make one design, more or less.

      If you are buying a desktop for gaming, the integrated graphics are almost useless
