Overclocked Radeon Card Breaks 1 GHz

dacaldar writes "According to Yahoo Finance, noted Finnish over-clockers Sampsa Kurri and Ville Suvanto have made world history by over-clocking a graphics processor to engine clock levels above 1 GHz. The record was set on the recently-announced Radeon® X1800 XT graphics processor from ATI Technologies Inc."
  • Huzzah! (Score:5, Funny)

    by Anonymous Coward on Wednesday October 26, 2005 @04:45PM (#13883998)
    What a day for world history! It will be remembered forever!
  • Awesome! (Score:1, Funny)

    by Anonymous Coward
    First Duke Nukem Forever post
    First Imagine a Beowulf Cluster of those post
    First Can Netcraft confirm that? post

    • Re:Awesome! (Score:2, Funny)

      by Anonymous Coward

      Hmm. Your ideas are intriguing to me and I wish to subscribe to your
      newsletter.

  • One wonders... (Score:5, Interesting)

    by kko ( 472548 ) on Wednesday October 26, 2005 @04:46PM (#13884000)
    why this announcement would come out on Yahoo! Finance
    • Looking at the article, I'd say it's to give ATI shares a boost.

    • Look at the bottom of the page. It's just a press release announcing that ATI was the first to get to 1 GHz. Basically a "fuck you" to Nvidia, nothing more.
    • Re:One wonders... (Score:5, Insightful)

      by Jarnis ( 266190 ) on Wednesday October 26, 2005 @04:51PM (#13884056)
      ... because ATI made a big press release about it.

      Since their product is still mostly vapor (you can't buy it yet), and nVidia is currently owning them in the high-end market because ATI's product is so late, one has to grasp at straws to try to look l33t in the eyes of potential purchasers.

      Wish they'd spend less time yapping and more time actually putting product on the shelves.

      Nice overclock in any case, but ATI putting out a press release about it is kinda silly.
  • by Z0mb1eman ( 629653 ) on Wednesday October 26, 2005 @04:46PM (#13884001) Homepage
    I didn't have Slashdot in a full screen window, so the headline read:

    Overclocked Radeon Card Breaks
    1 GHz

    Was wondering why an overclocked card breaking is such a big deal :p
  • Benchmarks? (Score:5, Funny)

    by fishybell ( 516991 ) <.moc.liamtoh. .ta. .llebyhsif.> on Wednesday October 26, 2005 @04:47PM (#13884009) Homepage Journal
    Without the pretty graphs how will I know what's going on?!
  • The performance of GPUs seems to grow faster than that of CPUs. I remember someone proposed using GPUs to process generic data. It would be 12 times faster than a CPU.
    • Re:GPU to excel CPU (Score:3, Interesting)

      by TEMM ( 731243 )
      GPUs are great at working on linear algebra problems, which are basically what graphics are. For general purpose computing, however, they would not be that much faster than a CPU
      • For general purpose computing, however, they would not be that much faster than a CPU

        Umm, you're out of your mind. Or, more precisely, you're trying too hard to guard your statements. "Not that much faster" is rubbish.

        Obviously, for "general purpose computing" a GPU would not only not be "that much faster" than a CPU, but indeed, it would be significantly slower.

        If this weren't so, we'd of course be using our GPUs as CPUs (or, more likely, construct CPUs the way GPUs are constructed).
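(The linear-algebra point above is easy to make concrete. Here is a minimal Python/NumPy sketch -- CPU-side and purely illustrative, with made-up sizes -- of the batched vertex transform a GPU's parallel pipelines perform natively: one matrix applied independently to every vertex.)

```python
import numpy as np

# One 4x4 transform (think model-view-projection) applied to a big batch
# of homogeneous vertices -- the bread and butter of a GPU pipeline.
mvp = np.eye(4)
mvp[0, 3] = 2.0                        # stand-in transform: translate x by 2
vertices = np.random.rand(100_000, 4)  # 100k vertices, one per row

# A single matrix multiply transforms every vertex. Each row is independent
# of every other row, which is exactly what lets a GPU run them in parallel.
transformed = vertices @ mvp.T

print(transformed.shape)  # (100000, 4)
```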

    • Re:GPU to excel CPU (Score:3, Interesting)

      by steveo777 ( 183629 )
      I recall reading an article on /. a long time ago involving a group of coders from MIT or something like that who pitted a P4 CPU against an ATI or nVidia GPU that was running at about a third the clock speed with a tenth of the memory. They were, of course, running mostly linear equations and the CPU got its pants kicked in by just under an order of magnitude IIRC.

      What I've been waiting for is some sort of mathematics program (I used to use Mathematica in college) that could utilize this concentrated power…

      • Re:GPU to excel CPU (Score:5, Informative)

        by Mostly a lurker ( 634878 ) on Wednesday October 26, 2005 @05:51PM (#13884522)
        A good question! This excerpt from a recent article in Extreme Tech [extremetech.com] seems relevant:
        The third future project at ATI is dramatically improved support for the GPGPU scene. These are researchers, mostly academic, who are tapping into the massive parallel computing power of graphics processors for general computing tasks, like fluid dynamics calculations, protein folding, or audio and signal processing. ATI's new GPU architecture should be better at GPGPU tasks than any that has come before, as it provides more registers per pipeline than either ATI's old architecture or Nvidia's new one. This is a sore spot for GPGPU developers but not really a limitation for game makers. The improved performance of dynamic branching in the new architecture should be a huge win for GPGPU applications as well. Developers working to enable general-purpose non-graphics applications on GPUs have lamented the lack of more direct access to the hardware, but ATI plans to remedy that by publishing a detailed spec and even a thin "close to the metal" abstraction layer for these coders.
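(On the dynamic-branching point in that excerpt: shader hardware of this era handled per-element branches poorly, so GPGPU code was typically written branch-free, computing both sides and selecting. A hypothetical NumPy sketch of that predication style, with invented data:)

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 8)

# Per-element branching ("result = x*x if x > 0 else -x") was expensive on
# GPUs of this generation, so GPGPU code evaluated both branches for every
# element and then selected -- the same shape np.where() has:
result = np.where(x > 0, x * x, -x)
print(result)
```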
    • Re:GPU to excel CPU (Score:3, Informative)

      by OzPeter ( 195038 )
      You mean like these people are doing?

      Generic GPU programming [gpgpu.org]
    • Re:GPU to excel CPU (Score:4, Informative)

      by Jerry Coffin ( 824726 ) on Wednesday October 26, 2005 @05:09PM (#13884188)
      The performance of GPUs seems to grow faster than that of CPUs. I remember someone proposed using GPUs to process generic data. It would be 12 times faster than a CPU.

      Go here [gpgpu.org] for several examples of this -- far from simply having been proposed, it's been done a fair number of times.

      The thing to keep in mind with this is that while the GPU has a lot of bandwidth and throughput, most of that is due to a high degree of parallelism. Obviously 1 GHz hasn't been a major milestone for CPUs for quite a while, but CPUs are only recently starting to go multi-core, while GPUs have been doing fairly serious parallel processing for quite a while.

      Along with that, the GPU has a major advantage for some tasks in having hardware support for relatively complex operations that take a fair amount of programming on the CPU (e.g., multiplying and inverting small vectors; it typically has a single instruction to find the Euclidean distance between two 3D points).

      That means the GPU can be quite a bit faster for some things, but it's a long way from a panacea -- you can get spectacular results applying a single mathematical transformation to a large matrix, but if you have a process that's mostly serial in nature, it'll probably be substantially slower than on the CPU.

      Along with that, development for the GPU is generally somewhat difficult compared to development on the CPU. Writing the code itself isn't too bad, as there are decent IDEs (e.g., ATI's RenderMonkey), but you're working in a strange (though somewhat C-like) language. Much worse is the essentially complete lack of debugging support. Along with that, you have to take the target GPU into account in the code (to some extent). I just got a call in the middle of a meeting this morning from one of my co-workers, pointing out that some of my code works perfectly on my own machine, but not at all on any of his. I haven't had a chance to figure out what's wrong yet, but I'm betting it stems from the difference in graphics controllers (my machine has an nVidia board but his has Intel "Extreme"(ly slow) graphics).

      --
      The universe is a figment of its own imagination.
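(A CPU-side NumPy sketch of the parent's parallel-vs-serial contrast -- the arrays and the recurrence are made up for illustration. The first computation is one uniform operation over independent elements, which parallel hardware loves; the second is a chain where each step needs the previous one, which it cannot help with.)

```python
import numpy as np

n = 100_000
a = np.random.rand(n, 3)
b = np.random.rand(n, 3)

# GPU-friendly: one uniform operation over a huge array -- thousands of
# independent 3D distances (the kind of thing the comment above says GPUs
# have near-native instructions for).
dist = np.sqrt(((a - b) ** 2).sum(axis=1))

# GPU-hostile: a serial recurrence. Element i needs element i-1 first,
# so extra parallel pipelines have nothing to chew on.
x = np.empty(n)
x[0] = 1.0
for i in range(1, n):
    x[i] = 0.5 * x[i - 1] + dist[i]

print(dist[:3], x[-1])
```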

    • What exactly is "generic data"? If a GPU chip could do a CPU chip's job 12 times as fast, it would be used as a CPU chip.
      GPU chips are designed to do a certain type of calculation (matrix multiplication) as quickly as possible, and, unsurprisingly, they can do it a lot faster than chips designed for a much wider array of calculations. 3D graphics is a sufficiently popular application that it has caused chips to be designed for rapid matrix multiplication, but various other applications require…
  • by neologee ( 532218 ) on Wednesday October 26, 2005 @04:48PM (#13884019) Homepage
    I always knew ati would finnish first.
  • by syphax ( 189065 ) on Wednesday October 26, 2005 @04:49PM (#13884029) Journal
    have made world history

    I think that's going a bit far. Good for them and everything, but world history? V-E Day, Einstein's 1905, Rosa Parks refusing to give up her seat on the bus -- these events impact world history (sorry for the all-Western examples); making a chip oscillate faster than an arbitrary threshold does not.
    • by bcattwoo ( 737354 ) on Wednesday October 26, 2005 @04:57PM (#13884097)
      What are you talking about? I'm sure I will have next October 26 off to celebrate Overclocked Radeon Broke 1GHz Barrier Day. Heck, this may even become Overclocked GPU Awareness Week.
      • Yeah, but the real bitching will come when they move the holiday to the fourth Monday of October so that people can get a three-day weekend. How dare they! Have they no sense of history?!
  • I'm not going to take that seriously until I see some actual 3DMark results. I can overclock my 9800Pro to some insane speeds but once I start to push it I get all kinds of corruption.
  • Global Warming (Score:1, Redundant)

    by koick ( 770435 )
    AHA! There's the proverbial smoking gun for global warming!
  • The culprit (Score:5, Funny)

    by ChrisF79 ( 829953 ) on Wednesday October 26, 2005 @04:50PM (#13884043) Homepage
    I think we've found the source of global warming.
  • World history? (Score:3, Insightful)

    by Seanasy ( 21730 ) on Wednesday October 26, 2005 @04:51PM (#13884053)
    ... have made world history...

    Uh, it's cool and all, but not likely to be in the history books. (Easy on that hyperbole, will ya?)

  • by Keith Mickunas ( 460655 ) on Wednesday October 26, 2005 @04:51PM (#13884054) Homepage
    where were you when the first video card was overclocked to 1 GHz? And most people will respond "huh?".

    Seriously, "world history"? There's no historical significance here. It was inevitable, and no big deal.
  • Historical? (Score:1, Insightful)

    by Anonymous Coward
    How is this any more historical than overclocking it to 993 MHz? It's not! 1 GHz is just a nice round number. If I overclock one to 1.82 GHz tomorrow, no one will care!
    • How is this any more historical than overclocking it to 993 MHz?

      What if Apollo 11 had travelled 99.3% of the way to the moon? What if the Manhattan Project built a bomb with only 99.3% of a critical mass of Uranium in it? What if the Continental Congress had gotten 99.3% of the votes to declare independence, but then decided just to stay a British colony?

      History is made by those who achieve something, not by those who just come really close and then fail. Centuries from now, when this event is looked back on…

  • by crottsma ( 859162 ) on Wednesday October 26, 2005 @04:52PM (#13884064)
    NVidia will make a competitive model, with blackjack, and hookers.
  • by pclminion ( 145572 ) on Wednesday October 26, 2005 @04:52PM (#13884067)
    If you cool a chip, you can make it run faster. This is a matter of physics that doesn't need to be tested any more than it already has been. In some small way I appreciate the geek factor but I'm far more interested in geek projects that have some practical use.

    And as for being the first people in the world to do this... the chances of that are small. I'm sure there are people at Radeon (and other companies) who have done things far more bizarre, but didn't announce it to the world.

    • Yes, but a lot of different chips have different overclocking potential. It's interesting to see which can be pushed the furthest, even if it's impractical. Besides, since when are geeky pursuits practical?
      • Yes, but a lot of different chips have different overclocking potential. It's interesting to see which can be pushed the furthest, even if it's impractical.

        Really, I don't think it's interesting whatsoever. It's like testing the strength of various bulletproof glass samples at a temperature of -100 C. The fact is, bulletproof glass is not used in such environments so the test gives no useful information.

        Besides, since when are geeky pursuits practical?

        I can't believe you're being serious. My geeky pursuits…

      • Well, Linux started off as a geeky pursuit. I'd say it's pretty practical :)
    • This is a matter of physics that doesn't need to be tested any more than it already has been.

      Yeah, but it's also well known that if you force more air/gasoline/nitrous into a car engine, it will go faster. But people continue to try to break land speed records. It's human nature. People do it just for the sake of doing it.

      I'm sure there are people at Radeon (and other companies) who have done things far more bizarre, but didn't announce it to the world.

      The company is ATI, and no, they don't do anything…

      • The company is ATI

        Yeah I goofed, sorry.

        They don't care what happens to their chips at -80C or +180C. All they do is test them a little bit beyond the limits of their recommended operating range.

        I highly doubt it. I have some pretty intimate knowledge of what sorts of things go on at a certain giant chipmaker, and believe me, all kinds of crazy shit has been done that you don't know about simply because they don't tell you. I imagine ATI is similar. I don't know if they've done anything exactly like…

  • Not for the weak (Score:5, Insightful)

    by Mr. Sketch ( 111112 ) <`mister.sketch' `at' `gmail.com'> on Wednesday October 26, 2005 @04:52PM (#13884071)
    The team, optimistic that higher speeds could ultimately be achieved with the Radeon X1800 XT, attained the record speeds using a custom-built liquid nitrogen cooling system that cooled the graphics processor to minus-80 degrees Celsius.

    It seems we may have a ways to go before it can be done with standard air cooling. I actually didn't think that operating temperatures for these processors went down to -80C.
    • Good point! Why must they be so cold? I always thought that the problem was removing heat (i.e. not letting it get hot) rather than making it ridiculously cold. Anyone?
      • If you can come up with a clever way of removing heat from a chip quickly without making its surroundings much cooler, I'm sure we'd all be happy to hear it.
        • If you can come up with a clever way of removing heat from a chip quickly without making its surroundings much cooler, I'm sure we'd all be happy to hear it.

          Actually, I doubt that you understand my point, but to answer your question: anything accepting heat at a temperature well below that of the uncooled system is OK. This need not be a cryogenic temperature.

          If you stuck the processor in ice water, it would remain at more or less zero degrees C, the only problem being that evaporation…

          • Yes, I'm quite aware of thermodynamic arguments. As you say, though, a major difficulty is conducting heat away from the hot portions. You could have a huge amount of room-temperature water flowing by the chip and cool it quite effectively. However, this requires, well, a huge flow rate, which necessitates noisy pumps, etc. Besides which, you get to deal with heat flux rates, which (to a good approximation) are simply proportional to the temperature differences between the hot portions and the cold portions…
            • Why not increase the heat flux by cooling the coolant? Then you don't need such ridiculous flow rates. And the nicest coolant around right now is liquid nitrogen.

              Firstly, it is rather easy to make a permanent "ice water reservoir" (or similar), while it is nontrivial to produce liquid nitrogen. Admittedly, for a short test this is no real issue (bear in mind I never said it was BAD, just asked whether using liquid N had any advantages).

              Secondly, this large temperature difference between the coolant and a…
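(The heat-flux argument in this sub-thread is back-of-the-envelope arithmetic: to a first approximation, flux scales with the temperature difference between die and coolant, which is why liquid nitrogen's enormous delta-T buys so much. Every number in the sketch below is an assumed illustrative value, not a measurement of this card.)

```python
# Newton's-law-of-cooling, back of the envelope. All values are assumptions.
h = 1000.0       # W/(m^2*K): assumed convective heat-transfer coefficient
area = 0.0004    # m^2: assumed ~2 cm x 2 cm die contact area
die_temp = 60.0  # deg C: assumed hot-spot temperature to hold

for name, coolant_temp in [("ice water", 0.0), ("liquid nitrogen", -196.0)]:
    delta_t = die_temp - coolant_temp
    q = h * area * delta_t  # heat removed scales with the delta-T
    print(f"{name}: delta-T = {delta_t:.0f} K, removes roughly {q:.0f} W")
```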

  • c'mon now (Score:5, Funny)

    by Silicon Mike ( 611992 ) * on Wednesday October 26, 2005 @04:55PM (#13884084)
    If I could only go back in time and add liquid nitrogen to my 8088 processor. I know I could have gotten it up to 5.33 MHz, no problem. NetHack benchmarks would have been off the chart.
  • GPU vs. CPU Speed (Score:1, Interesting)

    by Anonymous Coward
    I've always wondered... why have GPU speeds always been so much slower than CPU speeds?

    Are they made on a different process? Are they made with different materials? Are there significantly more transistors on a GPU?

    Why don't we have a 3 GHz GPU?
    • Re:GPU vs. CPU Speed (Score:4, Informative)

      by freidog ( 706941 ) on Wednesday October 26, 2005 @05:16PM (#13884241)
      Since DirectX 8 (I think), color values have been floating-point numbers; this avoids losing a lot of possible values through all the blending with multi-texturing and effects (fog, lighting, etc.). Floating-point operations are, of course, much slower than very simple integer calculations. Even on the Athlon 64, FP adds and muls take 4 cycles; you'd have to clock the top-end A64 at about 700 MHz to make them single-cycle. (Multi-cycle instructions aren't as bad a thing on a CPU, as there are plenty of other things to do while you wait; not so in GPUs.)

      GPUs have also tended to focus on parallel execution -- at least over the last few years -- increasing the number of pixels processed at the same time to compensate for not being able to hit multi-GHz speeds. So yes, they have many more transistors than typical CPUs (the 7800 GTX might break 300 million; it's well over 250 million) -- and of course heat is an issue if you push the voltage and/or clock speeds too far. The last few generations of GPUs have drawn around 65-80 W in the real world, more than most CPUs out there. And of course GPUs have very little room for cooling in those expansion slots.
      • the color values have been floating point numbers [...] which are of course much slower than very simple integer calculations.

        You say this as if it is both true and obvious. In fact, it is false. For several years now, floating-point arithmetic has been just as fast as (and in many cases faster than) integer arithmetic. I have actually sped up code that was previously cleverly written to use integers, just by switching to doubles.

        What is still very slow is converting from integers to floats, and vice versa.

      • The Athlon 64's adds and muls are pipelined, so it's still 1 per cycle. If you look at the pipeline of a GPU (dozens of stages), you'll probably find that add and mul take even longer.
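(The precision argument for floating-point color -- as opposed to the speed argument disputed above -- is easy to demonstrate: repeated blending in 8-bit integer color rounds on every pass, while a float channel keeps the fraction. A toy sketch with made-up values:)

```python
import numpy as np

# Blend a dark texel toward white ten times at 25% alpha, once in 8-bit
# integer color and once in floating point. Values are invented.
alpha = 0.25
int_c = np.uint8(10)   # 8-bit color channel
float_c = 10.0         # floating-point color channel

for _ in range(10):
    # Integer path: every blend pass rounds back down to an integer.
    int_c = np.uint8(int_c + alpha * (255 - int_c))
    # Float path: the fractional part survives between passes.
    float_c = float_c + alpha * (255.0 - float_c)

# The integer channel drifts away from the "true" float result.
print(int(int_c), round(float_c, 2))
```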
    • by xouumalperxe ( 815707 ) on Wednesday October 26, 2005 @05:40PM (#13884426)
      Well, while the CPU people are finally doing dual core processors (essentially, two instruction pipelines in one die, plus cache et al), the GPU people have something like 24 pipelines in a single graphics chip. Why is it that the CPU people have such lame parallelism?

      To answer both questions: graphics are trivial to parallelize. You know from the start that you'll be running essentially the same code for every pixel, and each pixel is essentially independent of its neighbours. So doing one or twenty at the same time is mostly the same, and since all you need is for the whole screen to be rendered, each pipeline just grabs the next unhandled pixel. No synchronization difficulties, no nothing. Since pixel pipelines don't stop each other to sync, you effectively have a 24 GHz processor in this beast.
      On the other hand, you have an Athlon 64 X2 4800+ (damn, that's a needlessly big, numbery name). It has two cores, each running at 2.4 GHz (2.4 * 2 = 4.8, hence the name, I believe). However, for safe use of two processors for general computing purposes, lots of timing trouble has to be handled. Even if you do have those two processors, a lot of time has to be spent making sure they're coherent, and the effective performance is well below twice that of a single processor at twice the clock speed.

      So, if raising the speed is easier than adding another core, and gives enough performance benefit to justify it without the added programming complexity and errors (there was at least one privilege-escalation exploit in Linux that involved race conditions in kernel calls, IIRC), why go multiprocessor earlier than needed? Of course, for some easily parallelized problems people have been using multiprocessing for quite a while, and actually doing two things at the same time is also a possibility, but not quite as directly useful as in the graphics card scenario.
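(The "each pipeline grabs the next unhandled pixel" model from the parent comment, sketched in Python with a process pool standing in for pixel pipelines. The shading function is invented; the point is that every pixel depends only on its own coordinates, so no synchronization is needed beyond handing out work items.)

```python
from concurrent.futures import ProcessPoolExecutor

WIDTH, HEIGHT = 64, 48

def shade(pixel):
    """Toy 'pixel shader': output depends only on this pixel's coordinates."""
    x, y = pixel
    return (x ^ y) & 0xFF  # arbitrary test pattern

if __name__ == "__main__":
    pixels = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]
    # Each worker just grabs the next unhandled pixel. No pixel waits on any
    # other, which is the property that lets a GPU scale to dozens of
    # pipelines without synchronization trouble.
    with ProcessPoolExecutor() as pool:
        framebuffer = list(pool.map(shade, pixels, chunksize=256))
    print(len(framebuffer), "pixels shaded")
```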
  • After the LCD screen "news" earlier, I am glad to see that the unashamed marketing was submitted to Yahoo on this one ;).
  • It was 2D mode only (Score:5, Interesting)

    by anttik ( 689060 ) on Wednesday October 26, 2005 @04:57PM (#13884098) Journal
    Sampsa Kurri said in a Finnish forum that it was over 1 GHz only in 2D mode. They are going to try to run it at the same clocks later. ATI left some tiny details out of their press release... ;P
    • by jandrese ( 485 ) * <kensama@vt.edu> on Wednesday October 26, 2005 @05:05PM (#13884161) Homepage Journal
      It also apparently crashed a lot. This is kind of like saying "I got a Volkswagen Beetle up to 200 kph[1]!!!" with a whole lot of modifications.

      [1] Going downhill
    • Are you sure? TFA does say "Noted Finnish over-clockers Sampsa Kurri and Ville Suvanto achieved graphics engine clocks of 1.003 GHz and a memory speed of 1.881 GHz (940.50 MHz DDR (dual data-rate) memory clocks) with maximum system stability and no visual artifacts."

      The phrase "maximum system stability" though might be misleading. If you define it as just POSTing, then man I've done some awesome overclocking myself! :)

      Interesting that these overclockers are "noted", and "Finnish." That does sort of give…
  • Liquid nitrogen cooling is a bit of a cheat; didn't Tom's Hardware get over 5 GHz out of a 3 GHz P4 a few years ago by constantly pouring liquid N over it?

    I'm gonna wait until you can get 1 GHz with a practical cooling solution before getting too excited (though the way CPUs are heading these days, cryogenic cooling may come as stock in a few years!)
    • LN2 isn't really a cheat; it's pretty accessible through common channels, and even commercial nitrogen cooling is available. Also, if you do a quick Google, you can find screenshots of a 7 GHz nitrogen-cooled P4. Kinda validates the platform, even if they couldn't ship it due to power constraints.

      GPUs, on the other hand, aren't anywhere near as overclockable, so this is quite the hack.
    • You are going to have to wait a long-ass time, as practical cooling is just a $5 fan from ATI. Yes, I am the victim of countless fried stock-speed ATI cards; I speak from experience.

  • Quake IV (Score:1, Funny)

    by Anonymous Coward
    And it still can't play Quake IV
  • by Ezku ( 806454 ) on Wednesday October 26, 2005 @05:00PM (#13884123)
    Sampsa and Ville already broke their own record by overclocking the same setup to over 1GHz for both the GPU and memory. See pictures over at Muropaketti [muropaketti.com].
  • by tradjik ( 862898 )
    Now, just pair that up with Intel's new dual-core Xeons and watch your power meter spin! At least you won't have to worry about the increase in gas prices this winter, you can just run your PC to heat your home.
  • FPS (Score:5, Funny)

    by koick ( 770435 ) on Wednesday October 26, 2005 @05:01PM (#13884131)
    Cuz you know it's like way better to play Quake IV at 953 Frames Per Second. Totally!
  • by Xshare ( 762241 ) on Wednesday October 26, 2005 @05:01PM (#13884136) Homepage
    That's just sad... that video card now has more clockspeed and more memory than my own main computer.
    • That's just sad... that video card now has more clockspeed and more memory than my own main computer.

      Curious thought - the human brain has about 50% of its mass devoted to processing sight and patterns in images... Sounds about right, doesn't it?
      • the human brain also runs on a dual-hemisphere model, and in the event of 'damage' to a critical controller section it can (sometimes, depending on the type and extent of damage to the brain's circuitry) remap other portions of the brain to control those functions

        the brain also relies on quantum effects for certain computational effects, has a 1.5 GB hardware ROM (DNA) that includes the full specification of every tissue and every protein in the body... and can store and recall an i…
    • You make a jest out of it...

      It's true for me...
  • by GrAfFiT ( 802657 ) on Wednesday October 26, 2005 @05:19PM (#13884259) Homepage
    ...the pictures of the rig: here they are [muropaketti.com], 3DMark05 included [muropaketti.com].
  • O RLY? (Score:3, Informative)

    by irc.goatse.cx troll ( 593289 ) on Wednesday October 26, 2005 @05:26PM (#13884317) Journal
    http://www.bfgtech.com/7800GTX_256_WC.html [bfgtech.com]

    BFG GeForce(TM) 7800 GTX OC(TM) with Water Block. Factory overclocked to 490MHz / 1300MHz (vs. 400MHz / 1000MHz standard), this built-to-order card will feature a water block instead of a GPU fan for those wanting to purchase or who may already have an existing liquid-cooled PC system. BFG will hand-build your card using Arctic Silver 5 Premium Thermal Compound. Easily hooked up to any existing 1/4" tubing system or to 3/8" tubes with the included adapters, this card runs cool and silent. BFG Tech is proud to offer their true lifetime warranty on this graphics card. (Card with water block requires internal or external water cooled system, sold separately.)

    • So? That's running at 490 MHz core / 1300 MHz memory. This Radeon is running at 1000 MHz core and 2000 MHz memory.
    • Re:O RLY? (Score:3, Insightful)

      by GarfBond ( 565331 )
      Good job not even reading the summary. The card you referenced is under half the speed of this ridiculously overclocked ATI card, which happens to be 1 GHz core / 1.8 GHz memory.
  • by David Horn ( 772985 ) <david&pocketgamer,org> on Wednesday October 26, 2005 @05:35PM (#13884385) Homepage
    It'll just about be able to handle Windows Vista... :-)
  • "We're just warming up," joked Kurri.

    To -80 degrees? Ouch! How cold were you before that?

  • by wcrowe ( 94389 )
    Mine goes to eleven.

  • by Dwonis ( 52652 ) *
    Since when is clock speed a meaningful indication of special-purpose chip speed?

    On the other hand, 'almost 2 GHz memory speed' is a little more meaningful. At least they mentioned that.
