
AMD Breaks 1GHz GPU Barrier With Radeon HD 4890 144

MojoKid writes "AMD announced today that they can lay claim to the world's first 1GHz graphics processor with their ATI Radeon HD 4890 GPU. There's been no formal announcement made about which partners will be selling the 1GHz variant, but AMD does note that Asus, Club 3D, Diamond Multimedia, Force3D, GECUBE, Gigabyte, HIS, MSI, Palit Multimedia, PowerColor, SAPPHIRE, XFX and others are all aligning to release higher performance cards." The new card, says AMD, delivers 1.6 TeraFLOPs of compute power.
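That 1.6 TFLOPS figure is easy to sanity-check: assuming each of the chip's 800 stream processors retires one single-precision multiply-add (two floating-point operations) per clock, 800 stream processors x 2 FLOPs per clock x 1 billion clocks per second works out to exactly 1.6 TFLOPS.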
This discussion has been archived. No new comments can be posted.

AMD Breaks 1GHz GPU Barrier With Radeon HD 4890

Comments Filter:
  • It Was Epic (Score:5, Funny)

    by eldavojohn ( 898314 ) * <eldavojohn@noSpAM.gmail.com> on Thursday May 14, 2009 @03:06PM (#27954781) Journal

    AMD Breaks 1GHz GPU Barrier

    I was diligently working at XYZ Corp a few buildings down when Incident One happened in their lab. At first, I was just sitting in my cubicle when suddenly we felt a severe shuddering of space & time around us. Then a few seconds later everyone heard a loud "Ka-BOOM" and everyone stood up to see what was going on outside. The buildings directly adjacent to the AMD lab had all their windows blown out and every car alarm within a square mile was going off. Some scientists with their hair blown straight back and carbon scoring randomly on their faces and white lab coats were seen to climb out of the rubble of AMD's R&D building. They immediately began dusting themselves off, high-fiving each other and patting each other on the back, laughing and ecstatic. Then they headed towards the liquor store down the street to pick up some champagne. Shortly after it was discovered that 1GHz is the frequency at which æther vibrates when it is at rest so once you pass it, you leave a wake of æther behind your time cone. Roger Penrose and Stephen Hawking are due to give a speech at "GPU Ground Zero" this week; I hope to make it.

    If I were working in marketing for AMD, I would be pointing out how switching from base ten to base eleven, twelve, thirteen, etc. provides a theoretically unlimited amount of newsworthy advertisements in broken barriers. "We just need to make it to 2,357,947,691 hertz and we'll be the first to claim we've broken the 1GHz (base11) barrier! Where the hell was the report that we broke base9 last year?!"

    • base9 wasn't really that much of a feat. Not to mention, the class action law suit on differing bases really put a damper on that party.
    • by ArcherB ( 796902 )

      Shortly after it was discovered that 1GHz is the frequency at which æther vibrates when it is at rest so once you pass it, you leave a wake of æther behind your time cone.

      Wow! And here I thought it was 1.21GHz at 88 MPH.

      • Great Scott! Don't you mean 1.21 JHz (jigahertz)?
        • Re: (Score:3, Insightful)

          by yabos ( 719499 )
          jigawatts
        • by tlhIngan ( 30335 )

          Great Scott! Don't you mean 1.21 JHz (jigahertz)?

          "Giga" in some countries is actually pronounced "jiga". (History says that is how "Giga" is pronounced everywhere except the US, but that's debatable). Thus, 1.21GHz would be an accurate figure in this article.

          • Re: (Score:3, Informative)

            by fbjon ( 692006 )
            I would think history says it's pronounced with a hard 'g', specifically Greek history.
    • Re: (Score:3, Funny)

      by Anarchduke ( 1551707 )
      AMD broke the 1GHz barrier on their CPU, and now they break the 1GHz barrier on their GPU.

      It doesn't matter what base you use, AMD owns that achievement.

      According to AMD top researchers, whether it was base-9, base-10, or base-11 doesn't matter. According to AMD,

      "All your base are belong to us."
  • Didn't AMD break the 1GHz desktop CPU "barrier" too? ;)

    • Re:AMD CPU too (Score:5, Informative)

      by LoRdTAW ( 99712 ) on Thursday May 14, 2009 @03:10PM (#27954857)

      Digital broke that with the DEC Alpha (was it DEC at that time?). It wasn't popular, but it was a desktop CPU for high-end workstations.

  • by smooth wombat ( 796938 ) on Thursday May 14, 2009 @03:09PM (#27954849) Journal

    one will finally have a graphics card capable of playing Duke Nukem Forever.

    Oh wait...

  • by G3ckoG33k ( 647276 ) on Thursday May 14, 2009 @03:12PM (#27954897)

    Why is it harder to raise the clock frequencies on GPUs than on CPUs? Is more code in use at the same time per unit area, or something?

    • They've never needed to get the clock speed up that high before; remember, GHz != performance
      • Re: (Score:3, Insightful)

        by Pulzar ( 81031 )

        They've never needed to get the clock speed up that high before; remember, GHz != performance

        Err... It's not that black and white, you can't just say that GHz != performance. If you take a card and raise its clock, you'll usually get more performance. If you raise the memory speed, you'll usually get more performance. The only time you won't is when one is bottlenecking the other.

        All we've learned from the CPU wars is that between two different architectures, the faster one isn't necessarily the one with more GHz.
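        To put rough numbers on it: the 4890's core runs at 850MHz stock, so the 1GHz part is roughly an 18% clock bump. On a workload that's already limited by memory bandwidth, that might only buy a few percent more frames, while on a well-balanced workload it can scale almost linearly.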

    • by KillerBob ( 217953 ) on Thursday May 14, 2009 @03:21PM (#27955061)

      Heat. Because of the form factor, you can't put a massive heatsink on a graphics card, certainly not the kind that you see on high end desktop CPUs.

      GPUs are also generally a completely different architecture than a CPU... they're usually massively parallel and optimized for working with enormous matrices, whereas a CPU is significantly more linear in its operation, and generally prefers single variables.
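      To make that concrete, here's a toy SAXPY loop in C (nothing ATI-specific, just an illustration): a single CPU core steps through these iterations more or less one at a time, maybe four at a time with SSE, while a GPU farms each iteration out to one of its hundreds of stream processors.

        #include <stddef.h>

        /* y = a*x + y: the kind of embarrassingly data-parallel, matrix-style
         * work GPUs are built for. On a CPU this is one core walking a loop;
         * on a GPU every iteration can land on its own stream processor. */
        void saxpy(size_t n, float a, const float *x, float *y)
        {
            for (size_t i = 0; i < n; i++)
                y[i] = a * x[i] + y[i];
        }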

      • Re: (Score:2, Interesting)

        by dhanson865 ( 1134161 )

        Yeah, you can't put the exact same heatsink on them, but take a look at the Accelero S1 Rev. 2 at http://www.arctic-cooling.com/catalog/product_info.php?cPath=2_&mID=105&language=en [arctic-cooling.com]

        Even putting a 120mm fan on it doesn't cover the entire fin area. http://www.silentpcreview.com/article793-page5.html [silentpcreview.com]

        Yeah, with the fan it'll be a 3-slot solution, and yeah, it only weighs half as much as a high-end CPU heatsink, but then again that is not their biggest GPU heatsink.

        The heaviest solution on AC's site is the

        • by powerlord ( 28156 ) on Thursday May 14, 2009 @06:18PM (#27958359) Journal

          Pity there isn't a GPU socket on the motherboard the same as the CPU socket. Then we COULD use those big honking CPU cooling solutions (or some derivative of them), provided the case were designed to accommodate the board. You could also get high speed runs between memory (perhaps it could have its own bank), and the CPU.

          Pity some CPU maker couldn't come along, buy a GPU maker, and make something like this.

          (Of course, existing GPU solutions in slots are MUCH easier to upgrade, which is something against this sort of solution, unless they come out with a form factor that combines chip + cooling solution, similar to the old Slot1/A.)

          • Should be in an HTX slot, not a PCI-e one.

            Of course, existing GPU solutions in slots are MUCH easier to upgrade, which is something against this sort of solution, unless they come out with a form factor that combines chip + cooling solution, similar to the old Slot1/A.

            You're not gonna believe this dude, but someone beat you to the idea of a slotted GPU. Sorry. =[

            http://www.legitreviews.com/images/reviews/378/ati_radeon_x1950.jpg [legitreviews.com]

            They put all the pins on the bottom in such a way that it fits into a modular slot on the motherboard and even comes with built in cooling. =]

          • Pity there isn't a GPU socket on the motherboard the same as the CPU socket. Then we COULD use those big honking CPU cooling solutions (or some derivative of them), provided the case were designed to accommodate the board. You could also get high speed runs between memory (perhaps it could have its own bank), and the CPU.

            Not a bad idea, though discrete cards today have dedicated memory for the GPU, with a bus custom designed for that card. Not expandable, but high performance. The connection to main memor

      • by Ecuador ( 740021 )

        Something that I don't see other posters mention is that many parts of a CPU are hand-tweaked down to the transistor level exactly for this purpose: low heat, high frequency. GPUs are designed at a larger scale, which is logical if you remember that, excluding the cache, a CPU is much simpler (in transistor count) than a GPU, while GPU generations occur much more often and differ more from one to the next. So, you have a fraction of the CPU design cycle to incorporate a radically d

      • by Xest ( 935314 )

        I've actually had larger heatsinks on my GPUs than on my CPUs in recent years, with those double-height graphics cards taking up two back-plate slots, and that was with a 2.83GHz quad core when it was high end! That said, the heatsinks often seem to be on the wrong side of the card in most motherboards/cases, that is, they're on the bottom, and of course heat rises, so presumably it's more that than the actual size of the heatsinks? They seem to get round this by creating those heat tunnels that try to lead the h

    • by caerwyn ( 38056 )

      GPUs are a little more CISCy. Since the cycle time is constrained to be as long as the slowest operation that must complete in one cycle, it's a bit harder to cut down on cycle time.

      • Re: (Score:3, Interesting)

        Are they? Looking at CUDA, I'd say that this is debatable. More likely, the massively parallelizable problem space means they scale out instead of going for high clock rates, which also means less fancy crap with caches, as the speed differential is lower and memory access more predictable.
        • by caerwyn ( 38056 )

          Right- it's a design choice. Rather than incredibly simple micro-ops that the real instructions get translated to, or instructions spanning multiple clock cycles, they've chosen to keep the per-core implementation much simpler. That lets them pack more of the simple cores on the chip, getting additional parallel processing at the cost of per-core optimal performance- which is fine, because these are things running on massively parallel problems.

          CISCy was perhaps the wrong choice, but it's valid in a sense-

    • > "Why is it harder to raise the clock frequenceies on GPUs than CPUs?

      Speed costs money...how fast 'ya want to go?

    • Re: (Score:3, Insightful)

      by mdm-adph ( 1030332 )

      GPUs have recently become massively parallel -- there's not as much need to push the overall clock speed so high.

    • Re: (Score:2, Informative)

      by zolf13 ( 941799 )
      Wide vector processing with "800 stream processing units" (or "pipes" or "cores") - it is hard to put 800 cores in one chip and not boil the silicon.
    • I think it has to do with the massively parallel operations. You can't pipeline stuff as far. Of course, I'm just guessing.

      Basically, due to the parallelization it's more efficient to add more streams/'processors' than to ramp up the overall speed of the system - for example, the referenced 4890 has 800.

      In order to have all the stream processors work, you might have to be a bit more conservative in your timing.

      • I think it has to do with the massively parallel operations. You can't pipeline stuff as far. Of course, I'm just guessing.

        That can't be it. Graphics cards can have vast pipelines. Pipelines' main problems are with branches, and graphics cards don't need to be able to branch.
        • Newer (SM 3.0+) shaders allow flow control, so branching is supported in more recent architectures.
        • This site [codinghorror.com] suggests a couple possibilities.

          A: A GPU had, until fairly recently, only a single-slot-high cooler. Even with 2 slots, it has less room for cooling than the CPU, where weight actually matters more than size.
          B: Transistors. The site dates from 2006, but mentions that my Core 2 Duo has ~291 million transistors. An 8800 GTX has 680M, and my research shows that the 4890 this review is about has 959 million. Even a Core 2 Quad is 582M, and we know they cost a bit more for a given speed rating. A GT200 is liste

    • by mikael ( 484 ) on Thursday May 14, 2009 @03:40PM (#27955399)

      You have so much data being churned around. The high-end GPUs have 240+ stream processors, compared to a handful for a mobile phone. Then there is the constant punting of video data from the VRAM chips to the LCD screens (width x depth x RGB x bits/channel x Hertz). VRAM is like standard RAM except there is a special read channel to allow whole rows of memory to be read by the video decoder at the same time as it is being read/written by the GPU. It would be possible to raise the clock frequency, but they would need a larger heatsink. If you visit the overclocking websites, you will see some of the custom water-cooling systems that they have. Early supercomputers like Cray used Fluorinert [wikipedia.org].
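      To put a number on the scanout alone: a 1920x1200 desktop at 32 bits per pixel refreshed 60 times a second is 1920 x 1200 x 4 bytes x 60Hz, or roughly 550MB/s of read traffic before the GPU has drawn a single new pixel.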

  • I have an Intel quad core 2 duo, a Q6600 I think.

    How many TeraFLOPS is that?

    • Re: (Score:2, Informative)

      by pshuke ( 845050 )
      According to intel [intel.com] it's about 0.04.
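      That 0.04 TFLOPS figure works out if you count four single-precision FLOPs per clock per core: 4 cores x 2.4GHz x 4 = 38.4 GFLOPS, roughly a fortieth of the 4890's quoted 1.6 TFLOPS peak.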
    • by wjh31 ( 1372867 ) on Thursday May 14, 2009 @03:35PM (#27955299) Homepage
      Last time I checked, a graphics card will get about 100x more FLOPS than a similarly priced CPU, give or take an order of magnitude (hey, I'm an astrophysicist, order of magnitude is good enough)
    • Even quad-core x86 CPUs are in the 10s of GigaFLOPS.

      CPUs have to do a lot of integer ops, and have to be good at everything. GPUs simply have to crunch a lot of Floating Point numbers,

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        Modern GPUs, including every single Nvidia GPU since the G80 series, have had a full integer instruction set capable of doing integer arithmetic and bit operations.

        CPUs aren't designed to be good at everything, they're designed to be exceedingly good at executing bad code, which is the vast majority of code written by poor programmers or in high level languages.

        You can write code for a CPU without worrying specifically about the cache line size, cache coherency, register usage, memory access address patterns a

  • by Yvan256 ( 722131 ) on Thursday May 14, 2009 @03:25PM (#27955141) Homepage Journal

    As you may have seen from the sales of netbooks and low-power computers, the future is... wait for it... low-power devices!

    Where are the 5W GPUs? Does the nVidia 9400M require more than 5W?

    • Even for desktops, I'd like to see more of those. Let's say below 20W, so a not-too-massive passive heatsink will do.
      I'm quite happy with the performance of my NVidia 6800 GT, and it needs about 50W at full usage. With the latest chip technology (40 nm anyone?), the same performance should be possible with much less power consumption.

    • Re: (Score:2, Flamebait)

      by drinkypoo ( 153816 )

      Does the nVidia 9400M require more than 5W?

      Google is your friend [justfuckinggoogleit.com]

      The GeForce 9400M claims a TDP of only 12 W. [tomshardware.com]

      • So "only" about as much power as a hard drive [ixbtlabs.com].

        • It's a big step in the right direction. I had been hoping to answer the other question but it looked like it was going to be too hard to find information on an embedded GPU core (like for cellphones and stuff.) I wonder what's in the GP2x Wiz [dcemu.co.uk]

          • by ZosX ( 517789 )

            uhhh....wikipedia?

            Specifications

                    * Chipset: MagicEyes Pollux System-on-a-Chip
                    * CPU: 533MHz ARM9 3D Accelerator
                    * NAND Flash Memory: 1 GB
                    * RAM: SDRAM 64 MB
                    * Operating System: GNU/Linux-based OS

    • >Where are the 5W GPUs?

      Intel integrated graphics

      • by Ilgaz ( 86384 )

        Yes, when you offload the entire thing to the CPU and even ignore the hardware T&L feature from GeForce 2 days, it goes down to 5 watts.

        Even Apple couldn't stand their junk and switched back to real GPUs, down to the "non-pro" laptops.

    • Nvidia has the mobile graphics line, which is designed for cellphones. AMD used to have a mobile graphics division, but I believe that they sold it to Qualcomm.

      So in the next few months, we'll be seeing mobile chipsets from both companies (Nvidia's Tegra and Qualcomm's Snapdragon) that will have scaled-down tech capable of handling HD video and impressive 3D graphics on embedded devices.
    • No one said that this video card was going to be shoved into every computer. This video card is for people who use a computer for more than reading slashdot and checking email.
    • As you can see from the pictures of the massive heatsink (it covers the entire board), this is NOT a low-power device,

      and until there is a market for laptop gamers wanting 60fps and millions of polygons, specialized cards/chips like this will be found only in render farms, gamer desktop rigs and graphics workstations - which is their intended market anyway.

      You generally do not get high performance with an economical product, so, for my car analogy, I will say that a Pontiac Vibe that gets 35 miles to the gallon is n

      • by Ilgaz ( 86384 )

        Soon, not just gamers but ordinary users may need a way higher "FPS" than today. 3D stuff (200Hz), artificial 3D, massive amounts of transcoding, 12-bit-per-channel video, 2K (or even 4K) are all making their way to the average home user. Slowly but surely. These things were all pro high-end studio stuff just a few years ago.

        For example, Apple is still testing a technology which scales desktop to infinite levels of DPI. It is there, embedded to core of OS but not stable or complete yet. To display such a desktop on a

    • They've been around for a long time - they're called integrated graphics.
  • Power consumption? (Score:3, Interesting)

    by LoRdTAW ( 99712 ) on Thursday May 14, 2009 @03:28PM (#27955191)

    No mention of power consumption or heat dissipation. My PC is already a radiator and fights with my AC in the summer.

    I am interested in the computing power, though; 1.6 teraflops is no small number, even if it is single precision.

    • Re: (Score:3, Insightful)

      by wjh31 ( 1372867 )
      Wikipedia (http://en.wikipedia.org/wiki/Comparison_of_ATI_graphics_processing_units#Radeon_R700_.28HD_4xxx.29_series) suggests the 4890 comes in at 190W; go to a little under double that if they make an X2 version. The entry-level 4000 series comes in at 25W.

      If you want TFLOPS, try the 4870 X2 at 2.4 TFLOPS, or Nvidia's Tesla (http://en.wikipedia.org/wiki/NVIDIA_Tesla) series, made just for GPGPU, which reaches over 4 TFLOPS.
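      (That also works out to a bit over 8 GFLOPS per watt for the 4890 at 190W, for anyone keeping score on efficiency.)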
    • by F34nor ( 321515 )

      This is why I am going to literally make my next PC a hot water heater.

  • I'm a long-time Nvidia user because of good driver support on Windoze and Linux. I would love to give ATI a try, but I've read a lot of negative things about driver quality in Linux. Granted, that was some time ago and things may have changed today. I'd be interested to hear about other slashdotters' experiences using today's ATI hardware + drivers under Linux/X.

  • I can finally get a 5.0 on the Vista Experience Index!
    • The highest score you can get on Vista is 5.9.
      • I think that was the part that was meant to be funny, but my 8800 has gotten a 5.9 on that test for over a year now. Isn't it time we moved past the 'Vista is slow' thing?

        • Just because your graphics card is fast does not mean Vista isn't slow (yes, the double negative was on purpose).
  • 1600 FLOPs per Hz? That's actually rather impressive.
  • And.... (Score:2, Informative)

    by MasseKid ( 1294554 )
    And it's still slower than a GTX 285 OC edition. GHz != performance. And Nvidia, stop renaming your cards, damn it!
    • Re: (Score:3, Interesting)

      by PitaBred ( 632671 )
      The 4890 actually does DX 10.1, and probably has support for almost all the features in 11. Does the Nvidia card? Didn't think so.

      I'm also interested in your "slower than a GTX 285" assertion. I just looked at some benchmarks, and Xbit labs has an overclocked 4890@1GHz [xbitlabs.com] beating the tar out of the 285.
  • uhhh (Score:5, Funny)

    by nomadic ( 141991 ) <`nomadicworld' `at' `gmail.com'> on Thursday May 14, 2009 @04:07PM (#27955901) Homepage
    AMD does note that Asus, Club 3D, Diamond Multimedia, Force3D, GECUBE, Gigabyte, HIS, MSI, Palit Multimedia, PowerColor, SAPPHIRE, XFX and others are all aligning to release higher performance cards.

    Wait, let me get this straight. Graphics card manufacturers are actually attempting to make their graphics cards perform better? Why was I not informed of this before???
  • "Barrier" (Score:3, Insightful)

    by Burning1 ( 204959 ) on Thursday May 14, 2009 @04:10PM (#27955989) Homepage

    AMD Breaks 1GHz GPU Barrier [reference.com]

    You keep using that word. I do not think it means what you think it means.

  • Maybe you want to check the disclaimer too ...

    Note: Damage caused by overclocking AMD's GPUs above factory-set overclocking is not covered by AMD's product warranty, even when such overclocking is enabled via AMD software.

  • Offer the card at the same price, down to the cent, along with a well-written driver for Mac Pros and, even more miraculously, for last-generation G5s (Quad/Dual Core).

    Open Firmware, endianness, AltiVec, non-standard interface (???), all excuses gone. If anyone wonders what I'm talking about, just watch this card's price when (if!) it ships for Macs. You will understand the comedy going on. In PowerPC times, we had some sort of excuse, like "firmware is hard to code", "drivers, man, they can't code for PowerPC" etc. Now all e

  • by Cajun Hell ( 725246 ) on Thursday May 14, 2009 @07:41PM (#27959367) Homepage Journal
    Everyone thought it would be 999MHz this year, 999.9 MHz the next year, 999.99999 MHz a few years later. It looked uncrossable! Well done, AMD!
