AMD Betting Future On the GPGPU

arcticstoat writes with an interview in bit-tech: "AMD's manager of Fusion software marketing, Terry Makedon, revealed that 'AMD as a company made a very, very big bet when it purchased ATI: that things like OpenCL will succeed. I mean, we're betting everything on it.' He also added: 'I'll give you the fact that we don't have any major applications at this point that are going to revolutionize the industry and make people think "oh, I must have this," granted, but we're working very hard on it. Like I said, it's a big bet for us, and it's a bet that we're certain about.'"
This discussion has been archived. No new comments can be posted.

  • by Assmasher ( 456699 ) on Tuesday May 31, 2011 @03:17PM (#36300448) Journal

    If they're betting everything on OpenCL, this sounds very dangerous. Dangerous as in "Remember this really cool company named SGI that made uber-powerful, specialized computing platforms?"

    Personally, I actually use things like OpenCL to do real-time image processing (video motion analysis; a rough sketch of that kind of kernel follows below), but I don't know too many others in the industry who do, so I can't imagine the market is particularly large.

    There must be some huge potential markets that just don't seem to come to mind for me...
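
    To give a flavour of what that looks like, here's a minimal sketch of the kind of per-pixel kernel involved (frame differencing with a threshold). The kernel name, buffer names, and the threshold parameter are made up for illustration; this isn't production code:

      /* Illustrative OpenCL C kernel: absolute frame differencing,
         one work-item per pixel. Names and threshold are assumptions. */
      __kernel void frame_diff(__global const uchar *prev_frame,
                               __global const uchar *curr_frame,
                               __global uchar *motion_mask,
                               const uchar threshold)
      {
          size_t i = get_global_id(0);                     /* pixel index */
          uchar d = abs_diff(curr_frame[i], prev_frame[i]);
          motion_mask[i] = (d > threshold) ? (uchar)255 : (uchar)0;
      }

    Launched over every pixel of a frame, that's exactly the embarrassingly parallel shape of work a GPU chews through far faster than a CPU.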

    • The point is that most if not all computers made today have the potential to include OpenCL goodness. It doesn't mean that a particular user will need it, but it will be available for developers to tap. It might be something the user only touches from time to time, like video decoding/encoding, but if the hardware is already capable of handling a bit of that with minor design adjustments, there's little reason not to offer it.

      Plus, 3D accelerators used to be just for games, and since pretty

    • by JBMcB ( 73720 ) on Tuesday May 31, 2011 @03:26PM (#36300556)

      Mathematica 8 can use OpenCL (and CUDA). I think the new MATLAB can, too.

    • Maybe they're betting on governments buying video surveillance processing equipment...

      • Most of the corporate/government money for that is going into FPGA boxes that sit between the camera (or in the camera) and the network connection.

    • by Sir_Sri ( 199544 ) on Tuesday May 31, 2011 @04:09PM (#36301048)

      I don't think he means OpenCL specifically. OpenCL is a tool that connects you to GPU hardware. GPU hardware is designed for a different problem than the CPU, so the two have different performance characteristics. In the not-too-distant future, heterogeneous multi-core chips that do both the CPU and GPU calculations of today will be mainstream, and there will be general-purpose computing tools (of which OpenCL and CUDA are relatively early generations) to access that hardware.

      While I don't agree with the idea that this is the entire future, it's certainly part of it. Right now you can have 1200 mm^2 of top-tier parts in a computer, roughly split half and half between CPU and GPU - but not every machine needs that, and it's hard to cool much more than that. So long as there's a market that maximizes performance and uses all of that, CPU/GPU integration will not be total. But especially in mobile and non-top-end machines, there will be 'enough' performance in 600-800 mm^2 of space, which can be a single IC package combining CPU and GPU.

      It is, I suppose, a bit like the integration of the math co-processor into the CPU. GPUs are basically just really big co-processors, and eventually all that streaming, floating-point mathy stuff will belong in the CPU. That transition doesn't even have to be painful: a 'cheap' Fusion product could be 4 CPU cores and 4 GPU cores, whereas an expensive product might be an 8-core CPU in one package and 8 cores of GPU power on a separate card, but otherwise the same parts (with the same programming API). Unified memory will probably obsolete the dedicated GPU eventually, but GPU RAM is designed for streaming, in-order operations, whereas CPU RAM is for out-of-order, random memory-block grabs; RAM that handles in-order and out-of-order access equally well (or well enough) would solve that problem. Architecturally, though, I would have GPU RAM hold a *copy* of the piece of memory that the GPU portion of a Fusion part talks to.

      As to what the huge market is: OpenCL gives you easier access to the whole rendering subsystem for non-rendering purposes. So your 'killer' apps are laptops, tablets, mobile phones, low-powered desktops - really, anything anyone does any sort of 3D on (games, Windows 7, that sort of thing) - so basically everything, all your drawing tools.

      The strategy is poorly articulated in terms of OpenCL, but I see where they're going. I'm not sure what Intel is doing in this direction, though, which will probably be the deciding factor, and nVIDIA, rather than positioning for a buyout (by Intel), seems ready to jump ship to SoC/ARM-type products. Intel doesn't seem to have the GPU know-how to make a good combined product, but they can certainly try to fix that.
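
      To make the memory point above concrete, OpenCL already exposes both models on the host side today: an explicit copy into a separate device allocation (the discrete-GPU way) versus handing the runtime a host pointer and letting it decide whether any copy happens at all (the direction a Fusion-style shared-memory part would lean). This is a sketch only; the context/queue setup boilerplate is assumed, and the calls shown are standard OpenCL 1.1 API:

        #include <CL/cl.h>

        /* Sketch: two ways to get host data to the GPU. Assumes ctx and
           queue were created by the usual platform/device boilerplate. */
        static void upload_examples(cl_context ctx, cl_command_queue queue,
                                    const float *host_data, size_t bytes)
        {
            cl_int err;

            /* Discrete-GPU style: device allocation plus an explicit copy. */
            cl_mem copy_buf = clCreateBuffer(ctx, CL_MEM_READ_ONLY,
                                             bytes, NULL, &err);
            clEnqueueWriteBuffer(queue, copy_buf, CL_TRUE, 0, bytes,
                                 host_data, 0, NULL, NULL);

            /* Shared-memory style: let the runtime use the host pointer;
               whether a copy is made is the implementation's choice. */
            cl_mem shared_buf = clCreateBuffer(ctx,
                                               CL_MEM_READ_ONLY | CL_MEM_USE_HOST_PTR,
                                               bytes, (void *)host_data, &err);

            clReleaseMemObject(copy_buf);
            clReleaseMemObject(shared_buf);
        }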

      • I was also thinking about the analogy of the GPU to math co-processor. But I think the future is kind of the reverse where processor packages are different specialized and generic architectures mixed and matched both on a single chip and motherboards that evolve into back planes. Expansion slots are more or less becoming processing slots. Sure you can plug peripherals into them but by and large peripherals have all gone external. A desktop motherboard is becoming little more than a backplane with an integ

    • I don't believe you understand OpenCL's application domain, as you insinuate that it only applies to specialized use case scenarios. Even if you choose to ignore how widespread OpenCL is in domains such as games, you always have multimedia and graphics processing. Adding to this, there are countless people all around the world who desperately seek a "supercomputer in a box" such as what you can get if you are suddenly able to use graphics cards for this. I happen to be one of these people who desperately

    • SGI's computer systems weren't "uber powerful and specialized" in the late 90's. They had largely lost the commercial UNIX market to DEC and Sun by then, and were getting eaten alive in technical computing by HPPA. This continued right up until the demise of SGI MIPS.

      What they had going for them was NUMAlink-induced scale-up capacity, which they still have in their Itanium and x64 systems.
      • SGI had and maintained its niche market for workstation graphics processing, especially in FX houses. They never were competitors in the data centers, and for a long time, nothing really stood up against IRIX.

        What killed SGI was an explosion in capabilities from the x86 market. Suddenly a maxed-out $3-4K machine from Dell with a decent NVidia or AMD card could beat the pants off the $10K+ machines SGI was making. In-house IRIX apps were ported to Linux or BSD, Windows apps skyrocketed in popularity, and

    • by durrr ( 1316311 )
      Look at the OpenCL link from the article. Everyone has a hand in that cookie jar.
  • AMD lost that bet (Score:5, Informative)

    by blair1q ( 305137 ) on Tuesday May 31, 2011 @03:22PM (#36300506) Journal

    AMD famously overpaid by 250% for ATI, then delayed any fusion products for 2 years, then wrote off all of the overpayment (which they had been required to carry as a "goodwill" asset). At that point, they lost the money.

    Luckily for them, ATI was still good at its job, and kept up with nVidia in video HW, so AMD owned what ATI was, and no more. But their gamble on the synergy was a total bust. It cracked their financial structure and forced them to sell off their manufacturing plants, which drastically reduced their possible profitability.

    What they have finally produced, years later, is a synergy, but of subunits that are substandard. This is not AMD's best CPU and ATI's best GPU melded into one delicious silicon-peanut-butter cup of awesomeness. It's still very, very easy to beat the performance of the combination with discrete parts.

    And instead of leading the industry into this brave new sector, AMD gave its competition a massive head-start. So it's behind on GPGPU, and will probably never get the lead back. Not that its marketing department has a right to admit that.

    • by LWATCDR ( 28044 ) on Tuesday May 31, 2011 @03:30PM (#36300594) Homepage Journal

      Of course it isn't the best GPU with the best CPU. It is a good CPU with a good GPU in a small, low-power package. It will be a long time before the top GPU is going to be fused with the top CPU. That price point is in an area where there are few buyers.

      Fusion's first target is going to be small notebooks and nettops, the machines that many people buy for everyday use.
      GPGPU's mainstream uses are going to be things like video transcoding and other applications that are going to be more and more useful to the average user.
      For the big GPGPU powerhouses, just look to high-end discrete cards, just as high-end audio users still want discrete DSP-based audio cards. I am waiting to see AMD use HyperTransport as the CPU-GPU connection in the future for really high-end GPGPU systems like supercomputing clusters.

      • by blair1q ( 305137 )

        "It will be a long time before the top GPU is going to be fused with the top CPU. That price point is in an area where their are few buyers."

        If AMD wants to stay in business instead of rummaging in dumpsters for customers, it will do exactly that, and discover that they can take major desktop and notebook market share by selling lower power and lower unit cost at higher combined performance.

        But that's only if Intel and nVidia don't smack them down in that segment, because AMD's best stuff is not nearly as g

        • by LWATCDR ( 28044 )

          They are selling lower unit costs and higher power in just that segment. The i3 is the first target and then the i5. Most customers are using integrated graphics and want good-enough speed with long battery life. The fact is that most people want the best bang for the buck. The top end will always be served best by discrete GPUs. AMD's best stuff is every bit as good as nVidia's in the graphics space, and Intel doesn't even play in that space. AMD does well in the CPU space where they compete, but Intel does ha

          • by blair1q ( 305137 )

            "The I3 is the first target and then the I5."

            If they're targeting second-banana segments with their best-possible offer, they're aiming for continued marginalization.

            This strategy will trade one AMD CPU out for one AMD GPU in. The number of additional sales they make on the integrated GPU will be a smaller fraction on top of that.

            The money, btw, is still in the high end. Everything else is lower margin. Especially if it has an AMD sticker on it.

        • by Rinikusu ( 28164 )

          You speak as if Intel and nVidia are teaming against ATI when Intel would love nothing more than to eliminate nVidia so customers have to put up with their shitty GMA bullshit...

          • You speak as if Intel and nVidia are teaming against ATI when Intel would love nothing more than to eliminate nVidia so customers have to put up with their shitty GMA bullshit...

            There's only one thing that they'd love more, and that would be to eliminate AMD. Intel competes with ATI for the low end integrated graphics market. nVidia makes nicer but more expensive stuff.

      • by 0123456 ( 636235 )

        It will be a long time before the top GPU is going to be fused with the top CPU.

        Yeah, a long time like never.

        If you 'fused' the top GPU and the top CPU in the same package, you'd end up with a fused lump of silicon because you couldn't get the 500+W of heat out of the thing without exotic cooling.

        And who, out of the performance fanatics who buy the top CPU or top GPU, would want to have to replace both of them at the same time when they could be two different devices?

        • If they can put the best in one core it would make a dandy basis for a game console. Probably it will have to go in two packages and connect via HT, though. That will increase cost but, as you say, make cooling possible.

    • Their Brazos platform begs to differ.
    • Did anybody really think they'd meld the *best* GPU with the *best* CPU? That's beyond naive; it has never happened and will never happen.

      What it *is* is the best integrated graphics/video with an OK CPU. That combo is OK for 95% of users. Brazos is completely sold out thanks to much better sales than expected. The new APUs can have the same success, especially given Intel's over-emphasis on the CPU and its sucky integrated GPUs.

    • by Hydian ( 904114 )

      Why would anyone want a 500W+ chip? In a laptop? That's just stupid. Even in a desktop, I wouldn't necessarily want a top end CPU/GPU chip on one die. That's a lot of heat and power concentrated in one spot. Ever wonder why there isn't a 6GHz CPU? That's why they didn't do this with top end parts. These integrated chips (no matter who is making them) are not intended to be top end products. It is simply not feasible.

      AMD set their sights on the netbook/low end laptop market with their first release b

    • by mauriceh ( 3721 )

      So, basically you are saying you are an Intel shill?
      Actually the ATI revenue is what kept AMD in business the past 3 years.
      Not to mention the 1.6 billion in settlement from Intel for copyright infringements.

      • by blair1q ( 305137 )

        >So, basically you are saying you are an Intel shill?

        No, I'm saying I'm realistic about the disaster that is AMD.

        >Actually the ATI revenue is what kept AMD in business the past 3 years.

        That's what I said.

        >Not to mention the 1.6 billion in settlement from Intel for copyright infringements.

        For what? Dude. Seriously?

    • Luckily for them, ATI was still good at its job, and kept up with nVidia in video HW, so AMD owned what ATI was, and no more. But their gamble on the synergy was a total bust. It cracked their financial structure and forced them to sell off their manufacturing plants, which drastically reduced their possible profitability.

      And how do you think ATI was able to be so good at its job? With help from AMD's engineers, patents, and processes. ATI's cards only started getting really good after the buyout, for instance their idle power dropped by huge amounts after integrating AMD power saving tech. It was years before nVidia had any decent cards with sub-50 watt idle power (let alone less than 10 watt), and it cost them market share. Avoiding a process disaster like nVidia's recall also was no doubt influenced from being part of

  • What is the point? (Score:3, Insightful)

    by the_raptor ( 652941 ) on Tuesday May 31, 2011 @03:24PM (#36300528)

    So will this make people's web apps and office programs run noticeably better?

    Because that is what the vast majority of computers are being used for, even in the commercial sector. Computer hardware peaked for the average user around 2000. Now, as the article points out, we are sitting around waiting for better software*. AMD would be better off developing that software than pushing hardware for a need that mostly doesn't exist.

    * Why is it that stuff like user agents and other forms of AI mostly disappeared from the scene in the 90's? We have the power now to run the things that everyone seemed to be working on back then.

    • Why is it that stuff like user agents and other forms of AI mostly disappeared from the scene in the 90's? We have the power now to run the things that everyone seemed to be working on back then.

      My guess would be that the tasks people were envisioning for them got taken up by something else. Like Google, maybe.

      I just don't think that things like your own private thing to crawl the web are what people want/need any more. It wouldn't be the first time someone has postulated some "ground breaking" technology

      • by LWATCDR ( 28044 )

        I would say part Google and part Twitter. Google for directed knowledge and Twitter for breaking news. Twitter uses a massive, distributed, organic supercomputer as its user agent.

    • * Why is it that stuff like user agents and other forms of AI mostly disappeared from the scene in the 90's? We have the power now to run the things that everyone seemed to be working on back then.

      Because user agents are useless once you know how to use your own computer.

      "Useragent search 'AI', display the 4 most interesting results in separate tabs."
      - vs -
      [ctrl+J] AI [enter] [click] [click] [click] [click]

      Hint: middle button click == open in new tab [on FF].

      P.S. "Google" <-- search engines ARE user agents! They spider the web, and determine the most relevant results. All you have to do is type your interest at the moment (or say it if you have voice activation software ala Android), a

      • Um... [ctrl+J] opens up the download window on Firefox.
        • by gmhowell ( 26755 )

          Um... [ctrl+J] opens up the download window on Firefox.

          Yeah, and the Germans didn't bomb Pearl Harbor either. Doesn't really change the point of the rant.

    • What it means is that you can have a gaming machine where the GPU is completely shut off when you're not actually gaming. There are definitely cards out there that will consume nearly as much power as the rest of the system. While they're somewhat unusual for most people, there are definitely cards that will use over 100 watts by themselves, and that's without going the SLI route. And for a machine using that much power, you can still be talking about 50 watts being used on just normal tasks. Granted t

    • by LWATCDR ( 28044 )

      Yes it will. Most web browsers now use hardware acceleration.

      • by Luckyo ( 1726890 )

        Not on XP, not on Linux (at least not a properly working one), and I'm not sure about macOS.

        Gonna be a small market in this realm for a while.

        • by LWATCDR ( 28044 )

          These are new computers so XP isn't an issue. MacOS does but they only run Intel for now. Linux we will see.

    • by AmiMoJo ( 196126 )

      See IE9 and future versions of Firefox/Chrome that will include GPU accelerated graphics. There is definitely a benefit for the average user of web apps, or at least there is if they like graphics intensive sites. Flash can play back 1080p video on fairly low end machines as long as they have GPU video decoding now, and there is of course WebGL if that ever takes off.

      I think people underestimate the hardware requirements of modern systems. Yeah, you can run that stuff on a Pentium 3 with 128MB RAM but why b

  • by Script Cat ( 832717 ) on Tuesday May 31, 2011 @03:28PM (#36300564)

    Make it abundantly clear what you need to start development on this platform. Will it work on all new computers, or just a rare AMD chipset, leaving my code worthless on all other machines?

    • by Fwipp ( 1473271 )

      To use OpenCL, you need any device that has an OpenCL driver written for it, and the driver. These devices include, but are not limited to:
      AMD graphics cards
      NVIDIA graphics cards
      AMD CPUs (not just the new Fusion ones)
      Intel CPUs
      Multiple IBM products, including the Cell processor
      Some chipsets / embedded systems.

      To get started with an x86 processor, just download the AMD APP (accelerated parallel processing) SDK and follow the tutorials.
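
      For a rough idea of what "getting started" looks like once an SDK is installed (any OpenCL 1.1 SDK; the AMD APP SDK is just one option), a few lines of host C are enough to list whatever platforms and devices the installed drivers expose. This is a sketch, not one of the SDK's own samples:

        #include <stdio.h>
        #include <CL/cl.h>

        /* List every OpenCL platform and device the installed drivers expose. */
        int main(void)
        {
            cl_platform_id platforms[8];
            cl_uint num_platforms = 0;
            clGetPlatformIDs(8, platforms, &num_platforms);

            for (cl_uint p = 0; p < num_platforms; p++) {
                char pname[256];
                clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                                  sizeof(pname), pname, NULL);
                printf("Platform: %s\n", pname);

                cl_device_id devices[8];
                cl_uint num_devices = 0;
                clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                               8, devices, &num_devices);
                for (cl_uint d = 0; d < num_devices; d++) {
                    char dname[256];
                    clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                                    sizeof(dname), dname, NULL);
                    printf("  Device: %s\n", dname);
                }
            }
            return 0;
        }

      Build it with something like "gcc list_cl.c -lOpenCL" (the exact library name and include path depend on the SDK).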

      • Last time I checked, you also needed to link against vendor-specific libraries, meaning one library for AMD, one for NVIDIA, and one for Intel. This makes cross-platform OpenCL deployment a bitch. Unless and until these vendors get together and settle on an ICD standard, I don't see OpenCL going mainstream.

    • Use a VM -- write once, debug everywhere!
  • by assemblerex ( 1275164 ) on Tuesday May 31, 2011 @03:31PM (#36300606)
    solely based on their mediocre driver support.
    • I'm guessing you've had this stance for a very long time
      • by rwa2 ( 4391 ) *

        ... heh, ever since I took the plunge and bought an ATi Radeon 7500 All-in-Wonder, purely on the strength of their promise to work with the open source driver team. While that card did get decent support from the GATOS team at the time, my card was kinda the cutoff point for their subsequent closed and open source driver efforts.

        Anyway, nowadays I mostly just pine for that alternate universe where Intel bought ATi instead, in which we'd be rid of crappy Intel IGP hardware, but finally have had good open source dri

        • by Amouth ( 879122 )

          Anyway, nowadays I mostly just pine for that alternate universe where Intel bought ATi instead

          I don't think anyone expected AMD to buy ATI. When I first read about it I checked the date to make sure it wasn't an April Fools' joke.

      • I bought a 4870X2, which cost approximately $400. What I got was buggy drivers, gimped performance, and I was tossed to the curb once they had a new chipset out. I have to use very old drivers because the new drivers have a habit of not using half the GPUs, etc. Never buy ATI, even if it seems more powerful on paper. Nvidia will provide drivers with performance increases for years, unlike ATI.
    • solely based on their mediocre driver support.

      esp. on GLinux -- at least they are releasing open source drivers, but I haven't used them for a long time (don't they still require a binary blob alongside those "open source" drivers?). When will they learn? We buy your shit for the hardware; your drivers mean jack shit. They are not "uber bad ass top secret" - let the hardware stand on its own and give us the full open source code!

      Both the open and closed Nvidia drivers I use are a little flaky on GLinux too... Honestly, if you want to make it to the futur

    • And this has been the response of the entire visual effects industry, worldwide, too, for about the last 8-10 years.

      Almost 100% Linux-based pipelines. And now almost 100% Nvidia Quadro. Because of driver support, not the cards themselves. But it's a problem ATI can remedy for themselves. I think re-establishing their open-source driver project, and putting meaningful funds into it, is the first step.
      • I sure hope they do it sometime soon. I just put Vista back on to a machine here because it won't run anything else properly... and it's old :(

  • A computer is becoming something you carry everywhere and use almost everywhere. The PC/Mac is something most people will keep off 95% of the time and use a few times a week.

    • by MrHanky ( 141717 )

      That's just utter bullshit and marketing hype.

    • Really? Forrester and other major forecasters predict some decline in desktop sales because of tablets, but 15 million units in 2015 compared to 18 million now is still huge and hardly a "death". When I'm at a desk, a desktop is the nicest thing. I have a laptop and a cell phone with a fold-out screen, but that's not how I prefer to work. Give me my quad-core desktop monster with 8GB of RAM any day of the week.
      • How long before your phone has four cores and a crapload of RAM? If IBM could ever get the cost down on MRAM we could maybe get 'em with stacks of it :)

    • You're almost right.

      Except at work -- where desktops will never die. Editing a spreadsheet or writing code on a portable is retarded. Even if we go to a dockable solution, it's still a PC.

      P.S. The "smart" in smartphone == PC == Personal Computer.

    • Seriously, on what do you base this? The fact that you personally are a cellphone junkie?

      Sorry man, I've seen no evidence PCs are dying at all. They seem to be doing quite well.

      In fact, looking at history, I am going to predict they will continue to sell at or above current levels indefinitely. Why? Because that is what's happened with other computers.

      Mainframes are not dead. In fact, there are more mainframes in use now than back when the only computers were mainframes. They are completely dwarfed in number

  • Tip for Terry (Score:5, Interesting)

    by kop ( 122772 ) on Tuesday May 31, 2011 @03:36PM (#36300658)
    I have a great tip for Terry:
    invest a little money in the Blender Foundation: http://blender.org .
    They are working on a new renderer based on CUDA; with just a little support this renderer could work on OpenCL.
    The existence of a free and open source OpenCL renderer of professional quality would force closed source developers to develop GPU-based renderers as well or lose customers.
    You can even invest in secret; there are other substantial supporters of the Blender Foundation whose identity is not given.

    The Cycles renderer F.A.Q.
    http://dingto.org/?p=157
    • by Spykk ( 823586 )
      Repurposing 3d rendering hardware's programmable pipeline in order to render 3d scenes? Yo dawg...
    • by JamesP ( 688957 )

      The existence of a free and open source OpenCL renderer of professional quality would force closed source developers to develop GPU-based renderers as well or lose customers.

      Like this ? http://www.nvidia.co.uk/page/gelato.html [nvidia.co.uk]

      Don't kid yourself; true professionals use the professional/closed source solution.

      Don't get me wrong, Blender kicks ass (while it has its warts), but it's far (albeit not very far) from 3DS/Maya/Cinema 4D.

  • AMD are far from the only company to make this bet. For one, the bet is backed by Apple, who are the creators of OpenCL. Nvidia have a GPU computing SDK with full support for OpenCL for all major platforms. Even Intel has just recently provided Linux drivers for OpenCL, and has supported Windows for a while. ARM will have an implementation soon for their Mali GPU architecture.

    I use OpenCL for nonlinear wave equations. There may only be a few OpenCL developers at the moment, but with articles like this, the

  • Until relatively recently, it's always bugged me that there have been these incredible number-crunching processors that were mostly locked away due to the focus on one subset of graphics (rasterization), rather than an all-encompassing generic approach that would allow ray-tracing, particle interactions, and many other unique, weird and wonderful styles, as well as many amazing applications which couldn't otherwise exist.

    Finally, that's beginning to change with projects like OpenCL, CUDA, even GPU c

  • I'll give you the fact that we don't have any major applications at this point that are going to revolutionize the industry and make people think "oh, I must have this"

    Translation: We don't really understand how to market this, or the size of the market for this.

    it's a big bet for us, and it's a bet that we're certain about.

    Translation: We don't have any other promising R&D in the pipeline at the moment, so if this fails to play out well for us, we will still be number 2 but no longer a top-line mark; it will be back to the K6 days for us.

  • Why do we still need to buy "graphics" hardware to use GPGPU-like acceleration? Why not extend our general notion of the cpu?

    It makes me feel rather silly to be buying a graphics card just to improve the performance of some non-graphics-related computation.

    • by hey ( 83763 )

      My thoughts exactly. This should be part of the CPU. Like floating point.

    • Hence AMD's attempt to rebrand GPUs as APUs. The GPU basically is becoming a SIMD co-processor.
  • by Ken_g6 ( 775014 ) on Tuesday May 31, 2011 @05:11PM (#36301770) Homepage

    I've developed applications (for PrimeGrid.com) for both nVIDIA CUDA and AMD OpenCL. The thing about GPGPU is that you have to write very close to the hardware to get any reasonable speed increases. CUDA lets you do that. But OpenCL is practically CUDA running on AMD; not close to the hardware at all.

    By all rights, my application should be faster on AMD cards. It's embarrassingly parallel and has a fairly simple inner loop - albeit doing 64-bit math. If I could write it in AMD's close-to-the-metal language, Stream, I'm sure it would be. But while nVIDIA offered nice documentation and an emulator for CUDA, AMD didn't for Stream, and only recently did for OpenCL. nVIDIA has since removed their emulator, and since they (shrewdly) don't let one machine run both brands of cards together, I'm mainly aiming at CUDA now.

    If AMD had come up with a C-like, close-to-the-metal language, with good documentation and preferably an emulator, they could have run circles around CUDA. Maybe they still can, but I doubt it.
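
    For anyone who hasn't written either, the "OpenCL is practically CUDA" point is easy to see from a toy kernel. The sketch below is illustrative only - it is not my PrimeGrid code - but it has the same shape: embarrassingly parallel, one work-item per element, 64-bit math in a simple inner expression. The CUDA version differs mostly in spelling (__global__ instead of __kernel, blockIdx/threadIdx instead of get_global_id):

      /* Illustrative OpenCL C kernel: independent 64-bit multiplies. */
      __kernel void mul64(__global const ulong *a,
                          __global const ulong *b,
                          __global ulong *out)
      {
          size_t i = get_global_id(0);   /* one work-item per element */
          out[i] = a[i] * b[i];          /* 64-bit math, no cross-element dependency */
      }

      /* Roughly the same thing in CUDA:
         __global__ void mul64(const unsigned long long *a,
                               const unsigned long long *b,
                               unsigned long long *out)
         {
             int i = blockIdx.x * blockDim.x + threadIdx.x;
             out[i] = a[i] * b[i];
         }
      */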

  • Please, AMD, read ALL the comments on this page, even those with a score of 0 or -1...
