AMD RV790 Architecture To Change GPGPU Landscape?

ScuttleMonkey posted more than 5 years ago | from the continuing-the-leapfrog-process dept.


Vigile writes "To many observers, the success of the GPGPU landscape has really been pushed by NVIDIA and its line of Tesla and Quadro GPUs. While ATI was the first to offer support for consumer applications like Folding@Home, NVIDIA has since taken command of the market with its CUDA architecture and programs like Badaboom and others for the HPC world. PC Perspective has speculation that points to ATI addressing the shortcomings of its lineup with a revised GPU known as RV790 that would both dramatically increase gaming performance and more than triple the compute power on double precision floating point operations — one of the keys to HPC acceptance."


102 comments

OpenCL? (3, Interesting)

Yvan256 (722131) | more than 5 years ago | (#27125505)

I hope all these new things will be compatible with OpenCL.

Re:OpenCL? (2, Informative)

Anonymous Coward | more than 5 years ago | (#27125631)

Since OpenCL is just an abstraction layer like OpenGL and DirectX, most modern hardware can already handle it; it just needs driver support.

Re:OpenCL? (1)

Chees0rz (1194661) | more than 5 years ago | (#27131111)

Yes, and if the hardware does NOT support the standard, then the drivers will be doing all the work (workarounds), and we're right where we started- SLOW computations.

These hardware devices MUST be designed, developed, and tested for the OpenCL standard... And believe me- they are.

Re:OpenCL? (0, Troll)

Anonymous Coward | more than 5 years ago | (#27125665)

I also hope that these new things support all 0 lines of OpenCL code out there.

OpenCL = ClosedBS. you must be an Apple fanboi

Re:OpenCL? (1)

lakeland (218447) | more than 5 years ago | (#27125839)

Surely not!

I'd heard it was recently ratified as an ISO standard...

Re:OpenCL? (1)

slashdot_commentator (444053) | more than 5 years ago | (#27131617)

Really. What's the ISO #?

Put up or shut up.

Re:OpenCL? (1)

lakeland (218447) | more than 5 years ago | (#27141879)

It was a joke in reference to a company recently pushing ISO to ratify a non-standard.

But you're right, I should probably have used a tag.

Re:OpenCL? (3, Informative)

LWATCDR (28044) | more than 5 years ago | (#27125877)

I say HUH?
OpenCL is supported by Apple but also by AMD and nVidia. The standard is being managed by a not-for-profit.
Compared to CUDA it is actually very open.
It is currently vaporware, but everything starts out that way for the most part.

How is OpenCL more Closed BS than CUDA or DX?
I just hope that it actually becomes a working standard.

Re:OpenCL? (0)

Anonymous Coward | more than 5 years ago | (#27126287)

OpenCL standardization was open to anyone who paid a very large amount to the not-for-profit Khronos Group to get in on the standardization process.

Unlike OpenCL, CUDA is the de facto standard for GPGPU computing. It isn't BS: you can actually compile and run many CUDA code examples right now.
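
For the curious, this is roughly what the smallest useful CUDA example looks like (a minimal sketch assuming the standard CUDA runtime API; compiles with nvcc):

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes), *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device memory and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);   // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}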

I blame Apple for the poor signal-to-noise ratio associated with OpenCL. They announced support for something non-existent and then the whole GPU industry said "me too" to get on Apple's good side.

Nvidia will build an OpenCL to CUDA bridge to appease Apple. AMD will be left wondering why Apple isn't using AMD chips in Macbooks even though they are OpenCL compatible. Meanwhile Microsoft will build their own GPGPU standard into Windows 7, which will become the de facto standard.

Everyone in High Performance Computing will have long since abandoned these low-level standardization attempts and stuck with platforms that address programmability, not compatibility.

Re:OpenCL? (1)

UnknowingFool (672806) | more than 5 years ago | (#27126705)

I blame Apple for the poor signal-to-noise ratio associated with OpenCL. They announced support for something non-existent and then the whole GPU industry said "me too" to get on Apple's good side.

According to wiki:

On June 16th 2008 the Khronos Compute Working Group was formed with representatives from CPU, GPU, embedded-processor, and software companies. This group worked for the next 5 months to finish the technical details of the specification for OpenCL 1.0 by Nov 18 2008. This technical specification was reviewed by the Khronos members and approved for public release on December 8, 2008.

If I understand OpenCL correctly, Apple initially conceived of it but turned it over to the Khronos Group for development. The Khronos Group is a consortium of companies including Apple, Intel, AMD/ATI, NVIDIA, etc. It also developed OpenGL. So you're criticizing Apple for the Khronos Group having poor documentation for a new standard which has only been released in the last 3 months. Do you also blame Netgear because WEP security is not very secure?

Re:OpenCL? (1)

Funk_dat69 (215898) | more than 5 years ago | (#27129171)

What's with the 'tude? Do you work for Nvidia or something?

The truth is, if Nvidia wanted CUDA to be a standard, they should have opened it up to Khronos or whoever to make it generalized. They didn't, and it will remain in the niche that it is in.
I'm no Apple fan, but at least they recognized a huge hole and did something to fill it.
Would a Windows-only GPGPU standard really become the de facto standard? Maybe for games that may use it, but who does HPC on Windows?


Re:OpenCL? (1)

UnknowingFool (672806) | more than 5 years ago | (#27126775)

According to wiki and the Khronos Group: "OpenCL (Open Computing Language) is the first open, royalty-free standard for general-purpose parallel programming of heterogeneous systems." The initial specification was released in December 2008. One of their members is Apple, who originally proposed it. It is a counterpart to OpenGL, but for computing instead of graphics. I suppose you would say "OpenGL = ClosedBS. you must be an SGI fanboi"

Re:OpenCL? (-1, Troll)

hairyfeet (841228) | more than 5 years ago | (#27128491)

I don't know about him, but I would call OpenGL completely pointless. Pretty much DirectX IS the standard that everyone uses, and from what I have read OpenGL is light years behind DirectX when it comes to features or ease of programming. In the same vein, OpenCL can be as "open" as it freaking wants to be, but if CUDA is the "standard" that 90%+ of the GPGPU code gets written in, then it too will be pointless.

The fact that it is vaporware while CUDA is out there getting code written for it already puts it at a disadvantage IMHO, same as the amount of DirectX code and programmers familiar with it simply blows OpenGL out of the water. While I wish it weren't so, so that I could play games on whatever OS I want, DirectX owns the market. From the looks of it, unless OpenCL hits the ground running and does it fast, CUDA will own the market too.

CAD (1)

Clarious (1177725) | more than 5 years ago | (#27128867)

Until my CAD programs use DirectX, I won't call it 'standard'. Sure, most games on Windows use DirectX, but that doesn't mean OpenGL is pointless.

Re:OpenCL? (1)

gbutler69 (910166) | more than 5 years ago | (#27129061)

Huh? So Wii, PS3, Linux, and Mac use DirectX?

Re:OpenCL? (-1, Troll)

hairyfeet (841228) | more than 5 years ago | (#27130445)

For those that labeled me troll, sorry to hurt your little feelings, but try reading the post again without the blinders. We are talking about CODE, and NOT about a specific niche like CAD, okay. We all do know what code is, correct? And in LOCs do you HONESTLY believe that OpenGL is anywhere near DirectX? Really? Because if so I have some really nice swampland in AR to sell you.

This is what I was talking about in a nutshell, without trying to flame, just pointing out some facts. FACT: in LOC, DirectX is WAY more popular than OpenGL. FACT: There are a HELL of a lot more programmers that are familiar with DirectX and how to write efficient code in it than OpenGL. FACT: This makes it a LOT easier for businesses who need 3D acceleration for some product to hire a DirectX coder over an OpenGL one. And finally FACT: There are a LOT more support sites and places out there to learn quality DirectX coding than there are for OpenGL.

NOW do you see why OpenCL is already at a disadvantage and why I said OpenGL is kinda pointless? You can say CAD all you want. That niche is what? 0.01% of the computer users of the world? Now compare that with ALL 3d accelerated apps. You will see that DirectX OWNS the market. You can be sure that when new 3d hardware is released it supports the latest DX out of the box. The entire ecosystem of 3D is built around it. Do I wish it weren't so? Hell yes! Then there would be competition and MSFT wouldn't be able to bone Windows gamers by keeping the latest DX only on whatever platform they want you to use.

But right now CUDA is out there. Programmers are learning it, sites are being built around giving you support for it, and more importantly CODE, which is what this post was about, not some FLOSS vs MSFT post, is being written. Having some "open" vaporware is doing NOBODY any good. Just as having OpenGL supported by a niche that is less than 0.01% of the world's users and programmers doesn't really help much. If you want to WIN, if you want your code to be run by more than 0.01% of the population, then you have to aim high. Maybe that is why OpenGL programmers were threatening to go to DX [theregister.co.uk] and sites were proclaiming the war is over [tomshardware.com] and OpenGL is as good as dead?

But if you want OpenCL to be a microscopic niche just like OpenGL, please just try to drown out anyone who points out the emperor has no clothes under an avalanche of negative mods. You can sit at home and enjoy your 0.01% while the rest of the planet passes you by. Play to win, or don't play at all.

Re:OpenCL? (1)

UnknowingFool (672806) | more than 5 years ago | (#27131069)

NOW do you see why OpenCL is already at a disadvantage and why I said OpenGL is kinda pointless? . . . But right now CUDA is out there. . . Having some "open" vaporware is doing NOBODY any good.

If my memory serves me correctly, CUDA only works with NVIDIA GPUs. That basically makes it USELESS on ATI or Intel or any other GPU. Again, you call something that has just been released "vaporware" while calling something that was released in Feb 2007 the de facto standard. So CUDA was released earlier. That does not mean it will become the standard, and given the fact that you need an NVIDIA GPU for it, that makes adoption rather limited.

Re:OpenCL? (1)

hairyfeet (841228) | more than 5 years ago | (#27131267)

The problem with that argument is that in the market we are talking about, Nvidia OWNS it, with the Tesla and Quadro platforms, and has had the market pretty much locked for years now. I don't really see any workstation with ATI chips, but I have seen more Quadro setups than I can count. This is like saying "But it only works on Windows!" And? When Windows drops below 60% then that might be an argument. Same as when Nvidia drops below 60% of the workstation market, then your argument is valid. But until then you can go to Dell, HP, wherever, and look up professional graphics workstations. What do they have in them? The Quadro. And it has been that way for years. I would be surprised if the ATI chips had even a 10% market share. And all of those workstations sporting Quadro in the wild can already support CUDA. Kinda gives them a nice built-in market full of professional coders, don't it? Now if you are asking could OpenCL take off five or ten years from now by beating Nvidia to the punch with some new programming technique? Sure it could. But never underestimate the advantage of market dominance.

By the time OpenCL becomes something other than vaporware there will be a lot of quality high performance code written by a lot of professional coders for CUDA. It will be hard to get them to give up and start again, having to port all that working code to a new arch, unless OpenCL is light years ahead of CUDA. And since CUDA is put out by one of the largest GPU manufacturers in the world, one that knows how to squeeze every bit of power out of those chips because they built them, I just don't see that happening.

I have nothing against ATI. Just bought one of their cards for my oldest and it is quite nice. But I am also a realist who can see what is the norm in graphics workstations. And everywhere I look, even in a little backwater state like mine, is the Quadro. And it has been that way for a very long time now. And from the looks of things, Tesla is heading towards that kind of dominance with its "supercomputer in a box" design. So while I wish the OpenCL guys well, they will really have to be 120% better than the other guy simply because of the dominance of the market and the large head start. Same as OpenGL in anything but CAD is pretty much toast. The competition is simply too far ahead for them to catch up. It sucks for the free market, but that is just reality.

Re:OpenCL? (1)

UnknowingFool (672806) | more than 5 years ago | (#27133785)

The problem with that argument is that in the market we are talking about, Nvidia OWNS it, with the Tesla and Quadro platforms, and has had the market pretty much locked for years now. I don't really see any workstation with ATI chips.

NVIDIA has about 60% of the market right now. While it has the majority of the market right now, that doesn't mean they will continue to have that market. So NVIDIA OWNING the market is an exaggeration and an oversimplification.

I went to Dell and HP and they both offer ATI cards in their workstations. They wouldn't do so if it wasn't a viable option for them. Even Apple offers an ATI option. And the market is larger than just high-end workstations.

Now if you are asking could OpenCL take off five or ten years from now by beating Nvidia to the punch with some new programming technique? . . . By the time OpenCL becomes something other than vaporware there will be a lot of quality high performance code written by a lot of professional coders for CUDA

What I'm saying is that both are relatively new, and saying one is vaporware and one is the de facto standard is pure speculation. You keep using the term "vaporware" without using it right. Vaporware is software promised and never delivered. Duke Nukem Forever is vaporware. OpenCL and CUDA are frameworks and both have been released. Whether they are adopted or not is another question. They are not vaporware.

In essence your entire argument has been heard over and over again in things like gaming consoles. That would be like saying in January 2007 that the Xbox 360 OWNS the market while the PS3 is vaporware. Turns out both are now losing to the Wii.

Re:OpenCL? (1)

hairyfeet (841228) | more than 5 years ago | (#27135937)

And yet again I get marked as a troll for pointing out the emperor has no clothes. Must be nice to only allow groupthink into one's head. Makes all those nasty problems just "disappear". And yet I noticed not a SINGLE person had a link or even a legitimate argument against anything I put down that was marked with FACT beforehand. Maybe because they are true?

Yes, I know there are FireGL cards. And I'm willing to bet my last buck they don't even have 10% of the market. Part of my business is buying and selling used and off lease workstations. There are plenty of businesses out there that don't mind not being "cutting edge" in return for a lower price. I have been doing this since the days of Win9X and the Rage Pro. And do you know what I have found? Pop open a workstation in the last 9 years and there is more than a 95% chance it is going to have a Quadro card.

Scream about FLOSS all you want, that is what I find when I pop off the side of a workstation. Now while Nvidia may only have 60% of the HOME market, that isn't the market we are speaking of. Which, just as I pointed out to the OpenGL guy who screamed "What about CAD?", doesn't make a diddly damn of difference in the big picture. If you figure every single CAD user on the entire planet and put them together you are talking about MAYBE a 2% niche. Wow, that is SO going to change the direction of the market there, buddy! But keep marking me down ALL you want, I got karma to burn. But I'll make a prediction using my incredible psychic powers, and it is this: OpenGL, because they refuse to innovate and compete, will keep dying a slow death. In 3 to 5 years a major CAD company will release their program with DX support and watch as it sells more units. Other CAD vendors will notice this and jump on board. And then OpenGL will be just another dead product on SourceForge.

I'm sorry if that reality hurts your feelings. I am actually sorry that it turned out that way, as I was an early OpenGL adopter and for a while it looked like they might bring true competition to the 3d market, and I am a firm believer in competition being ultimately good for the market. Just look at how badly MSFT has stagnated and ignored their customers (especially businesses) since Ballmer took over and decided he "Must fucking kill Apple" instead of actually making good products. That wouldn't have happened if they actually had competition. BOTH OpenGL AND Dx would be better solutions today if they had actually competed instead of the OpenGL group simply giving up on everything but CAD.

But you ignore the realities of the market at your own peril. OpenCL is letting the competition get a HUGE lead. This is not good. The CUDA programming language works on the dominant workstation card. Again, advantage Nvidia. Finally, Nvidia knows that this is a good market, where they can push high end solutions like Tesla and multi-Quadro setups. This gives them an incentive to make their product the best they can to keep their advantage. By the time OpenCL releases anything actually usable I'm betting CUDA will have gone through several versions and will have settled on an easy to use framework that makes writing code for it VERY easy. Why? Because it is in Nvidia's best interests to do so. They want the market, and that is what it takes to get it. But screaming about it not being FLOSS really isn't going to change shit. Sorry to burst your bubble, but it is true. Just as not being FLOSS hasn't hurt Windows, DotNET, Dx, or now Silverlight, which seems to be taking a chunk from Adobe, another non-free vendor. Like I said before, play to win or don't play. It looks like from where I'm sitting that CUDA is playing to win. OpenCL? Looks like they are taking meetings. Good luck with that.

Re:OpenCL? (1)

xouumalperxe (815707) | more than 5 years ago | (#27136985)

I'm sorry if that reality hurts your feelings. I am actually sorry that it turned out that way, as I was an early OpenGL adopter and for a while it looked like they might bring true competition to the 3d market, and I am a firm believer in competition being ultimately good for the market.

So were you actually around programming OpenGL in the very early 90s, or do you count as "early" the time around when GLQuake was released ('97)? Whichever way you look at it, Direct3D only gained relevance long after OpenGL was already well established, which makes statements like "it looked like they might bring true competition to the 3d market" sound completely off.

And I'd expect anyone who's been doing 3D graphics for as long as you claim to realize that, irrespective of the majority of Windows installations on desktops and the majority of Direct3D software for Windows, everything else on the desktop is OpenGL (not much, granted), and, more relevantly, the mobile device and game console markets are dominated by OpenGL (with both the Wii and PS3 using it as their graphics libraries).

Re:OpenCL? (1)

hairyfeet (841228) | more than 5 years ago | (#27140111)

And yet again we have another poster trying to take the discussion ANYWHERE except the actual subject at hand, because their argument doesn't hold water there. This is NOT about, in no particular order: game consoles, cell phones, netbooks/nettops, or FLOSS vs proprietary code. Okay? This entire discussion is over HPC with regards to workstations and other avenues where HPC is actually relevant. While I know there are a few that are using a bunch of linked-together PS3s as a "poor man's supercomputer", I really don't see Fortune 500 companies buying their "supercomputers" from Gamestop, do you?

This discussion is NOT FLOSS vs proprietary, but specific languages for programming applications for a specific market, in this case HPC. And I have yet to hear anyone give a reason why they would use OpenGL except for CAD. Even the OpenGL group says they are writing their new specs with CAD in mind, so why would you try to shoehorn it into something else? Really hate MSFT THAT bad? And as far as OpenCL goes, they are taking meetings. The competition is getting a HUGE head start, they are already the dominant platform in workstations, which is one of the core markets we are speaking of, and their Tesla seems as "plug and go" a solution as such a demanding field can have. Is anything I have just said in ANY way incorrect? Then why do the mods keep trying to bury me? Afraid your emperor has no clothes?

Look, I WANT there to be competition, I really do. While everyone else looked at me like I was nuts, I went out of my way to buy the more expensive cards that had better OpenGL support, in the hopes that putting my money where my beliefs were would help grow the market. Instead OpenGL has rotted on the vine while Dx has come out with more and more features. Sorry, but it is true. Read any 3d related forum and you will find post after post complaining about how a function that is simple to code in Dx is a royal PITA in OpenGL. Are all these coders around the world incompetent? Or maybe, just maybe, the writers of Dx went out of their way to make Dx easy to code for while OpenGL took meetings.

And I have a sinking feeling that is what is going to happen to OpenCL. The CUDA writers are going to get feedback, take notes, and make usability Job #1 while the OpenCL guys take meetings. By the time they get out version 0.01 (which will pretty much be a POC and not actually usable for anything) CUDA will be at version 4 or 5 and will have most of the bugs already worked out. But this isn't about whose philosophy is better or worse. This is about providing a solution to a specific problem, in this case GPGPUs and HPC applications. Right now Nvidia is the ONLY way to get anything done. Do you think companies are going to sit around for a few years waiting on OpenCL because it is "free as in freedom"? Nope, they will just write CUDA apps.

And we BOTH know the reason Sony and Nintendo went OpenGL is NOT because of the merits of OpenGL over Dx. It is because they didn't want to cut a check to MSFT with every console they make, not with said MSFT being their competitor. That is just good business sense, but doesn't really say anything about OpenGL. If anything it gives MSFT even MORE of an advantage, because of the ease of porting between PC and 360. Again, that is just reality, and doesn't make one better than the other. But OpenGL is pretty much dead except for a few niches. More code is being written for Dx every day and sites have proclaimed the war is over [tomshardware.com] and Dx has won. Because THAT is the reality of the market. But if you want to be like OpenGL and cling to a tiny percentage of the market as the world passes you by, enjoy. Play to win or don't play. MSFT knew that, Nvidia knows that, OpenGL and OpenCL are taking meetings. Good luck with that.

Re:OpenCL? (1)

Chees0rz (1194661) | more than 5 years ago | (#27131227)

What most people don't understand is that CUDA is an ugly language tied very closely to NVIDIA's architecture. Props for being first - but it is too closed. OpenCL is an *attempt* to abstract this architecture away - which is why it can supposedly be used on GPU, CPU, Cell, etc.
If OpenCL picks up, then CUDA will be nothing but a layer in the software stack to save the Nvidia driver team some lines of code:
OpenCL -> CUDA -> Hardware

Nvidia has already accepted this by announcing OpenCL support - I don't see why anyone would write for CUDA anymore, except for the fact that they already know it.
You are right, there is no code yet, which is a problem. But the great thing about OCL is that once compilers and drivers are released, the code can be run on the CPU before hardware even hits the market.
Something you don't seem to get to in your rant, though, is that CUDA is not OpenCL's competition... DX11 and Compute Shaders are.
ps. OGL and OCL will be able to share memory objects - so cool! (DX11 will probably do the same). These abstractions are just COOL, whoever wins (as long as it isn't CUDA).

Re:OpenCL? (1)

slashdot_commentator (444053) | more than 5 years ago | (#27131727)

OpenCL fanboy religion.

OF COURSE, open standards are more desirable than closed ones, EXCEPT when the open standard doesn't outperform the market standard OR, in OpenCL's case, DOESN'T EXIST.

There are capitalists now that want to make their buck NOW, not wait two years just to find out that they STILL have to wait another X years for something to roll out. If you want to do GPU offload processing, mathematical processing, or the state-of-the-art game NOW, you're stuck with CUDA. If you want to wait for the ideologically pure, send your resume to 3-D Realms. They may want to add someone to their Duke Nukem team. Why don't you help out with GNU Hurd? I hear they can use some help.

As for Nvidia making noises about helping out OpenCL, that's all it is, hot air. They're not deferring to OpenCL, they just don't give a crap about marketing the niche CUDA market. Why do a Microsoft and front? Just smile, make polite noises to keep everyone happy, and just keep moving CUDA forward. They win if CUDA's the standard. They don't lose if CUDA loses relevance.

Re:OpenCL? (1)

UnknowingFool (672806) | more than 5 years ago | (#27130923)

The fact that it is vaporware

You call OpenCL vaporware when it was first proposed last June, had an initial draft by November, and was released in December. By my count it's less than 9 months old. I don't know about you, but in my world it might be considered vaporware if that timeline were years instead of months.

nVidia rules (3, Insightful)

Anonymous Coward | more than 5 years ago | (#27125555)

... the "rename the same old shit four times to try and con people"-market, that's for sure.

Re:nVidia rules (5, Informative)

i.of.the.storm (907783) | more than 5 years ago | (#27126065)

It's sad that this is actually almost true... Geforce 8800GT->9800GT->GT2x0 (I think 250 or something) are all the same GPU...

Re:nVidia rules (2, Informative)

StarHeart (27290) | more than 5 years ago | (#27127261)

No, they are all of the same base architecture, but aren't the same card. The 8800GT and the 9800GT are pretty close. Probably the biggest difference is some 9800GT cards are 55nm chips instead of 65nm. On the other hand there is a lot of difference between the 8800GT and the GTX260. The GTX260 has 32 dedicated double precision processors that the 8800GT does not. My rough understanding is that those double precision processors are roughly equal to 1.5x a Q6600 (quad core), or 6 cores. The GTX260 also comes with more streaming (single precision) processors. The 8800GT is 96/112 and the GTX260 is 192/216, depending on model.

Just look at this graphic.

http://pyrit.googlecode.com/svn/tags/opt/pyritperfaa3.png [googlecode.com]

Re:nVidia rules (2, Informative)

kirillian (1437647) | more than 5 years ago | (#27127373)

Yes, the GTX260 is a different card... but the 250 is an improved die-shrink of the original 8800 GTS/9800 GTX, i.e. the 9800GTX+. It takes 20 seconds to find an article for reference: http://techreport.com/articles.x/16504 [techreport.com] The card is actually a little different, but the GPU architecture is the same... in fact, the GPU is just a re-branded chip from the same fab line as the 9800GTX...

Re:nVidia rules (2, Informative)

i.of.the.storm (907783) | more than 5 years ago | (#27127699)

They aren't the same card, but the 9800GT is only different from the 8800GT in adding triple SLI support. Some later 9800GTs are a die shrink but the original card was not. And I said GT250, not 260. The 250 is a rebranded 9800GTX+, although I thought it was a 9800GT. See http://www.maximumpc.com/article/news/nvidia_geforce_250_rebranding_complete [maximumpc.com] and http://www.hexus.net/content/item.php?item=14656&page=2 [hexus.net]

correction and performance calculations (0)

Anonymous Coward | more than 5 years ago | (#27130919)

The GTX260 has 32 dedicated double precision processors

No version of the GTX 260 has this number of dedicated double-precision units. In both the 192- and 216-unit versions, 1 of the 8 units in each core can do one double-precision MADD per cycle.
So:
GTX 260: 192/8 = 24 cores, thus 24 double-precision units
GTX 260 216SP: 216/8 = 27 cores, thus 27 double-precision units

Even the GTX 280 has only 30 such units:
240/8 = 30

The GTX260 also comes with more streaming (single precision) processors.

The terminology for both the G8x/G9x series and the GT200/D10U series is to call all the units "streaming processors" without distinction between single- and double-precision ability; perhaps you're just clarifying that the GTX 260 has more units [as in, is more capable] even if we only consider single-precision units.

My rough understanding is that those double precision processors are roughly equal to 1.5x a Q6600 (quad core), or 6 cores.

1) It depends heavily on the job (compute- or memory-bound? I'm not familiar with Pyrit, but table generation for cipher attacks sounds memory-bound)
2) Those particular numbers are off for both single- and double-precision.

From the above numbers, the maximum theoretical double-precision rate for the GTX 260-216SP is ~67 GFLOPS, and stock memory is 896MB @ ~112GB/s. Real-world is about half that because you seldom get both instructions per cycle and only reliably get 1. Single-precision is 3 instructions/clock theoretical but 2 practical, with 8 times the units, so 67 * (3/2 = 1.5) * 8 ~= 805 GFLOPS.
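
Spelled out (assuming the stock 1242 MHz shader clock, which is not stated above):

double precision: 27 units * 2 FLOPs/clock (one MADD) * 1.242 GHz ~= 67 GFLOPS
single precision: 67 GFLOPS * (3/2) * 8 ~= 805 GFLOPS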

The Q6600 Kentsfield is 2400MHz/1066MHz; we'll be generous for the sake of memory bandwidth and talk about the Xeon E5420 at 2500MHz/1333MHz. Each core can theoretically perform 4 FLOPs per cycle whether single- or double-precision, and we have 4 of them running at the known frequency, so the theoretical max is 80 GFLOPS, and not quite so hard to exploit because it and its compilers were engineered to give good performance on poorly-behaved data, unlike GPUs (and their hamstrung, underdeveloped compilers).

Since we're using Socket-F we can saturate the FSB: (2*64-bit channels) * (2*667MHz interleaved per channel) ~= 21.3GB/s theoretical, which is rather optimistic, but with efficiency comparable to that of a graphics card's device memory.

If your task can be done with single-precision, is memory-bound, and fits in about 1GB per job, the GTX 260 compares favorably to a Xeon: 800 (500, say) vs 80 (60, say) GFLOPS, and 112 (85, say) vs 21 (15, say) GB/s to memory. For double-precision it drops to 67 (33, say) GFLOPS, so there's no clear-cut speed advantage and you should either use AMD's current offerings or stick to a standard CPU (or ClearSpeed, or a proper supercomputer if you're made of money).

If your dataset doesn't fit AMD's or Nvidia's GPU device memory (1~3GB), you pretty much have to go with a CPU or supercomputer since you can bolt 64GB or 128GB to your dual quad-core Xeon system, and your code will be vastly more portable for free.

correcting my thinko (0)

Anonymous Coward | more than 5 years ago | (#27131205)

No idea why I said 2 units * (1 MUL + 1 ADD) * 4 cores * 2500MHz = 80GFLOPS (!?) for the Kentsfield/Harpertown; obviously it's 40GFLOPs, and obviously the analysis holds.
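
Written out, the corrected arithmetic is: 2 units * (1 MUL + 1 ADD) * 4 cores = 16 FLOPs/cycle, * 2.5 GHz = 40 GFLOPS.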

Re:nVidia rules (1)

root_42 (103434) | more than 5 years ago | (#27127809)

Actually it's the GTS250, which uses the G92b chip. The changes compared to the other G92 based chips are relatively small though. Hence the similar chip-name.

Re:nVidia rules (1)

i.of.the.storm (907783) | more than 5 years ago | (#27127853)

Ah right, GTS 250. But it uses the exact same G92b GPU as in the 9800GTX+.

Waiting for: (4, Funny)

residieu (577863) | more than 5 years ago | (#27125585)

Waiting for GPGPGPUs

Re:Waiting for: (1, Insightful)

Anonymous Coward | more than 5 years ago | (#27126235)

Your wish has been granted:

http://www.gpgpgpu.com/

Re:Waiting for: (2, Funny)

trum4n (982031) | more than 5 years ago | (#27126279)

I expected that to redirect to a Rick Roll.

Re:Waiting for: (2, Funny)

Hurricane78 (562437) | more than 5 years ago | (#27126543)

And because you love Rick Rolls, you clicked it anyway? ;)

Re:Waiting for: (4, Funny)

trum4n (982031) | more than 5 years ago | (#27126651)

Yep. It's a great song!

Re:Waiting for: (1)

UnknowingFool (672806) | more than 5 years ago | (#27126515)

2 GPs, 1 GPU?

Me too (1)

roystgnr (4015) | more than 5 years ago | (#27126637)

Well, waiting for cheapie GPGPUs, anyway.

Re:Waiting for: (1)

Chris Burke (6130) | more than 5 years ago | (#27127255)

I want ASGPUs. I want a fat rendering pipeline that is optimized for, and can only render, dancing babies.

Re:Waiting for: (0)

Anonymous Coward | more than 5 years ago | (#27128199)

Waiting for GPGPGPUs

Yeah, I hear those will finally support the eieio [ibm.com] instruction. :)

Re:Waiting for: (0)

Anonymous Coward | more than 5 years ago | (#27128231)

What if you encrypt using PGP running on a GPU?

PGPGPGPU!

Re:Waiting for: (1)

forkazoo (138186) | more than 5 years ago | (#27128481)

Waiting for GPGPGPUs

Well, I haven't heard that Gnu Privacy Guard has announced support for Programmable Graphics Processing Units, but I'm sure it's only a matter of time.

Apparently I'm behind on my acronyms... (0)

Anonymous Coward | more than 5 years ago | (#27125595)

What in the screaming blue hell is a GPGPU?

GPGPU= General Purpose GPU (5, Informative)

frith01 (1118539) | more than 5 years ago | (#27125663)

General Purpose GPUs = massively parallel FLOPS operations. (Think matrix math, real time sims, lab testing, SETI, etc.)

Still separate from a CPU, which has additional capabilities.

For the older folks, think of this as a math co-processor :) [with its own fan]
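
The kind of loop such a part is built for, as a minimal CUDA-style kernel sketch (illustrative only; one thread per array element, thousands running at once):

// y = a*x + y (SAXPY), one element per GPU thread
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}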

Re:GPGPU= General Purpose GPU (0)

Anonymous Coward | more than 5 years ago | (#27125957)

For the older folks, think of this as a math co-processor :) [with its own fan]

Thank you. And shit, apparently I'm old. When did that happen?

Re:GPGPU= General Purpose GPU (2, Informative)

PopeRatzo (965947) | more than 5 years ago | (#27126373)

Think matrix math, real time sims, lab testing, SETI, etc

My wife told me to add "fluid dynamics!"

(note: I often read slashdot comments aloud to her, and sometimes she throws back replies that I dutifully pass along)

Re:GPGPU= General Purpose GPU (5, Funny)

Hurricane78 (562437) | more than 5 years ago | (#27126581)

I'm sorry to break this to you, but... *whispers* she's not real...

Slashdot readers having wives was already crazy... but them being interested in it too... Yeah, right...

Re:GPGPU= General Purpose GPU (0)

Anonymous Coward | more than 5 years ago | (#27128253)

If his wife knows what fluid dynamics is, and in all likelihood actually understands it (which puts her well above MY level), it is not surprising at all that she can toss back a witty response to /. trivia.

Just because most /.ers are losers in real life doesn't mean all of them are. I do have a wife, for example, and she is damn cute. She doesn't know fluid dynamics, but not everyone can be perfect and I love her as she is, fluid dynamics or not :)

Re:GPGPU= General Purpose GPU (0)

Anonymous Coward | more than 5 years ago | (#27133909)

Maybe this is sexist, but fluid dynamics? A woman who looks good enough to be married?
Naaa...

Oh, wait... If the husband looks just as bad... ^^

Re:GPGPU= General Purpose GPU (0)

Anonymous Coward | more than 5 years ago | (#27127443)

Fluid dynamics falls into matrix math as far as the number crunching part is concerned.

Re:Apparently I'm behind on my acronyms... (4, Insightful)

jandrese (485) | more than 5 years ago | (#27125677)

It's basically using your video card as a general purpose processor [google.com]. You might think such an acronym would be hard to find on google, but it turns out it isn't.

Re:Apparently I'm behind on my acronyms... (2, Insightful)

Lord Ender (156273) | more than 5 years ago | (#27126293)

It is bad journalism on the part of the slashdot editors to force the readers to google for acronyms. Common, long-standing acronyms, like CPU, are one thing. But GPGPU should absolutely be defined in the summary. I find it hard to believe some people pay money for this site, and that "editors" get paid money for their "editing."

Re:Apparently I'm behind on my acronyms... (-1, Flamebait)

Anonymous Coward | more than 5 years ago | (#27126431)

Quit bitching and google shit you don't know next time. Most devs in my circle know the acronym. if you want fluff tech stories GTFO.

Re:Apparently I'm behind on my acronyms... (1)

PopeRatzo (965947) | more than 5 years ago | (#27126449)

I find it hard to believe some people pay money for this site, and that "editors" get paid money for their "editing."

The small amount I pay to subscribe to Slashdot is just about the best bargain I get on the intertubes. Yes, I often have to google stuff that I find in stories and comments, but now that Firefox lets me just highlight a word or phrase, right-click and google I don't mind a bit. I find that I learn a lot in the process.

[NOW will you take that black mark off of my soul, Commander Taco?]

Re:Apparently I'm behind on my acronyms... (1, Offtopic)

Hurricane78 (562437) | more than 5 years ago | (#27126629)

Question: What do you get from the subscription, other than being able to read stories half an hour earlier? (No idea why I would need that.)

Re:Apparently I'm behind on my acronyms... (1)

Chabo (880571) | more than 5 years ago | (#27127153)

Back in the days before AdBlock, another benefit to subscribing was the removal of ads.

Re:Apparently I'm behind on my acronyms... (0)

Anonymous Coward | more than 5 years ago | (#27127653)

(No idea why I would need that.)

Ever noticed the content free, early +5 posts pushing some piece of commercial propaganda? Commercial astroturfers use it to frame the discussion. Don't think of the elephant. [amazon.com]

Re:Apparently I'm behind on my acronyms... (1)

complete loony (663508) | more than 5 years ago | (#27128941)

I wouldn't call GPGPU [google.com] a new term. There's been heaps of stories here about it.

Re:Apparently I'm behind on my acronyms... (2, Funny)

Slumdog (1460213) | more than 5 years ago | (#27125867)

What in the screaming blue hell is a GPGPU?

I think you meant "screaming green hell"

Re:Apparently I'm behind on my acronyms... (0)

Anonymous Coward | more than 5 years ago | (#27127743)

That would be a GPU built for the U.S. Army, widely used in WWII, Korea, and Vietnam.

And I am assuming "screaming blue hell" refers to Dr. Manhattan?

That's a lot of pages supporting guesswork. (4, Insightful)

jandrese (485) | more than 5 years ago | (#27125647)

So this is what some anonymous guy on the internet thinks might happen? Granted, he has a lot of material in there, but in the end it's all just guesswork. Apparently he's a big fan of cheaper lower end video cards as well, and is hoping that ATI releases one.

Re:That's a lot of pages supporting guesswork. (0)

Anonymous Coward | more than 5 years ago | (#27126087)

Yeah well I think that he's wrong.

also, seems to be guessing at the wrong thing (2, Informative)

Trepidity (597) | more than 5 years ago | (#27126329)

AMD's double-precision floating point performance is already great. What they lack is the rest of it. The programming model is pretty bad compared to CUDA (nobody is using Brook+), and they seem to be basically waiting for OpenCL to fix that. The bottlenecks in most attempts to use AMD chips for GPGPU code are also not really the floating-point units themselves, but the rest of the architecture; it's hard to keep the ALUs fed with your data without a magic compiler, a better programming model, a better architecture, or some combination of those.
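
For a concrete feel of what "keeping the ALUs fed" means, here is a minimal sketch of the standard remedy in CUDA terms (a hypothetical kernel, not Brook+ or AMD's actual toolchain): stage data in fast on-chip shared memory so the math units can reuse it instead of stalling on device memory.

// Illustrative only; assumes blockDim.x == 256. Each block loads its slice
// of the input into shared memory once, then every thread reuses it,
// instead of each thread re-reading slow device memory three times.
__global__ void blur3(const float *in, float *out, int n)
{
    __shared__ float tile[256 + 2];                  // slice plus a one-element halo each side
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    if (i < n)
        tile[threadIdx.x + 1] = in[i];               // cooperative load of the slice
    if (threadIdx.x == 0 && i > 0)
        tile[0] = in[i - 1];                         // left halo
    if (threadIdx.x == blockDim.x - 1 && i + 1 < n)
        tile[blockDim.x + 1] = in[i + 1];            // right halo
    __syncthreads();                                 // wait until the tile is complete

    if (i > 0 && i + 1 < n)                          // 3-point average read from fast memory
        out[i] = (tile[threadIdx.x] + tile[threadIdx.x + 1] + tile[threadIdx.x + 2]) / 3.0f;
}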

Yep (1)

Sycraft-fu (314770) | more than 5 years ago | (#27127393)

And, of course, like most people who write a "My favored company will come out with the bestest thing EVAR!" piece, he's ignoring the fact that nVidia won't sit still. I don't know what's coming next from nVidia. What I do know is they currently have a powerful card for gaming and GPGPU (GTX285) that supports double precision as well as single precision, though DP is much slower. So it's fairly safe to say their next generation card will also support DP, and will probably be faster than their current card.

To me, this just seems like fanboy rambling. Yes I'm sure ATi's next card will be better than their current cards. What of it? Unless you've got specifics AND specifics of what their competitor is doing, you can't really say how it'll change things. I mean even if you found out that ATi was making a card that was 10x as fast as their current one, that wouldn't mean anything unless you also knew nVidia wasn't.

We'll know what happens..... When it happens.

Well, I hope they hurry up... (1)

mdm-adph (1030332) | more than 5 years ago | (#27125907)

...because since I learned that BOINC now supports CUDA (but still has no love for GPGPU), I'm about to ditch my ATI cards for a few Nvidia ones.

Re:Well, I hope they hurry up... (2, Informative)

Tweenk (1274968) | more than 5 years ago | (#27125979)

CUDA = an Nvidia-specific way to do GPGPU...

Personally I'm waiting for OpenCL, which would be to GPGPU what OpenGL was for 3D graphics when it was released - essentially a vendor- and platform-neutral general processing interface to the GPU.

Re:Well, I hope they hurry up... (1)

mdm-adph (1030332) | more than 5 years ago | (#27126195)

Hey -- whatever it's called, I'm just about to make a purchase decision based upon the fact that my hardware isn't supported. Somebody needs to get coding. :P

Re:Well, I hope they hurry up... (1)

heson (915298) | more than 5 years ago | (#27126771)

Sadly, experience dictates that whatever card you buy now will be insignificant in performance by the time OpenCL is mature enough to use.

LOLNO (4, Insightful)

MostAwesomeDude (980382) | more than 5 years ago | (#27125941)

As far as I know, the RV790 will be in the R600/R700 family and will work almost perfectly with existing R600/R700 code. While I have no guarantees on this, current talks with AMD employees haven't given off any indication that this chipset will be radically different from its cousins.

Didn't know there was a "landscape" yet to change (0)

Anonymous Coward | more than 5 years ago | (#27126091)

Meanwhile, isn't this just yet another area where AMD/ATI is playing catch-up? Not a role I'd like to be in against Intel and NVidia.

ATI or AMD? (0)

Anonymous Coward | more than 5 years ago | (#27126183)

ATI or AMD?

Re:ATI or AMD? (2, Informative)

Sinning (1433953) | more than 5 years ago | (#27126655)

yes

Re:ATI or AMD? (1)

B1oodAnge1 (1485419) | more than 5 years ago | (#27126765)

Yes.

Re:ATI or AMD? (1)

Aranykai (1053846) | more than 5 years ago | (#27127193)

Both. Nvidia for linux, ATI for windows.

Re:ATI or AMD? (1)

CheshireFerk-o (412142) | more than 5 years ago | (#27128751)

not that ive used windows, or an ATI, but i hear the drivers are equally shitty on both OS.

Re:ATI or AMD? (1)

robthebloke (1308483) | more than 5 years ago | (#27133583)

traditionally yes, but actually they aren't as bad these days as everyone says...

Re:ATI or AMD? (0)

Anonymous Coward | more than 5 years ago | (#27129091)

ATI for Linux are you out of your mind???!

What I want from the GPU (2, Interesting)

Belial6 (794905) | more than 5 years ago | (#27126861)

What I want from the GPU is features like what the CPUs have, so that the GPU can have multiple VMs running on it. The only reason that I don't run inside a VM as my primary computing environment is because graphics acceleration pretty much sucks in one. When AMD bought ATI I expected virtualized video to be one of their early announcements.

Imagine if your VMed OS could believe that it had 100% control of the video card, but your video card would display on its own 'surface', and still use full hardware acceleration for the process. As far as I can tell, video is the only serious stumbling block left in virtualizing the x86 architecture.

Re:What I want from the GPU (1)

kriebz (258828) | more than 5 years ago | (#27127201)

That's an interesting idea and maybe it will happen one day, but hardware virt hasn't trickled down that far yet. It's still at the mid-range server level, except for a few power users, developers, and engineers. Cards now have dynamic virtual memory mapping, which might just make this possible, but certainly not simple.

In the Land of UNIX Where Everything Works you can send GLX over the network for 3D graphics wherever the card lives, whether it's a VM host or a cluster headnode. That's probably more useful than emulating the 25 year old VGA BIOS and umpteen stupid extensions.

Re:What I want from the GPU (1)

Late Adopter (1492849) | more than 5 years ago | (#27132379)

In the Land of UNIX Where Everything Works you can send GLX over the network for 3D graphics wherever the card lives, whether it's a VM host or a cluster headnode. That's probably more useful than emulating the 25 year old VGA BIOS and umpteen stupid extensions.

That's a neat idea, I had forgotten OpenGL worked like that. However, I don't really see a use case. You're going to virtualize an X11 app and have it connect to the X11 server on the host? Surely this is something you only want to do for one app at a time, in which case why the VM?

Re:What I want from the GPU (0)

Anonymous Coward | more than 5 years ago | (#27129567)

The next VMware ESX version handles that: I/O passthrough, baby! You can bind a video card to a given VM. Can't wait till it trickles down to VMware Workstation.

Re:What I want from the GPU (0)

Anonymous Coward | more than 5 years ago | (#27129591)

The reason it's the only stumbling block left is because no one cares about video.

Re:What I want from the GPU (2, Interesting)

fast turtle (1118037) | more than 5 years ago | (#27130289)

VirtualBox is supposed to have started solving this problem. It's beta and still experimental, but if it works well then it's exactly what I've been looking for, as it means I can finally run XP in a VBox setup under 64-bit Gentoo Linux.

Re:What I want from the GPU (0)

Anonymous Coward | more than 5 years ago | (#27137977)

Yeah, as of the latest VirtualBox, you can enable 3D acceleration and it definitely seems to work - I've been using it for some time now - Windows XP running on Linux.

The article has nothing to do with reality (1)

hkultala (69204) | more than 5 years ago | (#27127151)

Some guy who does not know very much posts a long speculation article, all speculation done with his limited understanding. And then this is posted as news.

RV790 is just a higher-clocked RV770. There are no more shader units. There are no shader units converted to 64-bit. It's just a ~10% clock speed increase, giving about 10% more performance.

RV800 will come at the end of the year; that will have much more power.

Mod Parent Up (1)

adisakp (705706) | more than 5 years ago | (#27128911)

Just like the parent says: the actual article is a work of fiction and speculation with no hard facts on future products.... merely "what if's".

Re:The article has nothing to do with reality (0)

Anonymous Coward | more than 5 years ago | (#27128991)

I agree. Pure speculation. My favourite part of the "news" article was this:

These guesses are simply guesses based on what we have seen before, as well as a feel for where the industry has been going. I have no inside information, or anything even close to being an official or unofficial source for these guesses. These guesses are also based on us being 10 months out of the last major update to AMD's chip lineup.

So he's basically just made something up because it was his turn to write an article this week.

How 64-bit operations on RV7x0 work (1)

hkultala (69204) | more than 5 years ago | (#27131895)

Some more information on how RV7x0 calculates 64-bit floating point:

All shader processors in RV7x0 are natively 32-bit. There are 5 ALUs in each shader processor. When RV7x0 calculates a 64-bit MUL operation, it does it by using 4 of those 32-bit ALUs together. When RV7x0 calculates a 64-bit ADD operation, it combines 2 of the 32-bit ALUs.

That's why RV7x0's 64-bit floating point MUL throughput is 1/5 of its 32-bit MUL throughput. There is no "group of 64-bit ALUs" like the article thinks.
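
Per 5-ALU shader processor per clock, that works out to 5 single-precision MULs vs. 1 double-precision MUL (4 ALUs ganged), a 1/5 ratio; and presumably 5 single-precision ADDs vs. 2 double-precision ADDs (2 ALUs each), a 2/5 ratio (the ADD ratio is an inference from the above).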

As usual:AMD/ATI = good designer, poor marketing (0)

Anonymous Coward | more than 5 years ago | (#27127921)

Actually the AMD Firestream is far superior to nVidia's offering, for several reasons including true double precision and generally better performance.

Further, the power consumption on the Firestream cards is far lower than on the nVidia cards.

However, as usual, shoddy AMD marketing has left their offerings out in the "What is this?" cloud.

Misunderstanding about double precision (1)

SoftwareArtist (1472499) | more than 5 years ago | (#27128913)

His predictions about double precision appear to be based on a misunderstanding about how the 4800 series works. Here's what he says about it: "That 680 GFLOPs would be assuming AMD converts 2/5 of the stream units to double precision. Now, if AMD were to convert 3/5 of those units to double precision, a single card could do slightly over 1 TFLOP." He seems to believe that 1/5 of the stream units support double precision, and they could simply convert some additional ones to support it as well. But that isn't the case. In fact, it has no double precision units. Instead, it can have four single precision units work together to act as one double precision unit. That's how they were able to support double precision without using a lot of extra silicon. Actually having dedicated double precision units would require far more silicon, and would be a major change.
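
For scale, plugging in the stock Radeon HD 4870 figures (assumed here, not taken from the article: 160 shader processors * 5 stream units = 800 units at 750 MHz):

single precision: 800 units * 2 FLOPs/clock (MAD) * 0.75 GHz = 1.2 TFLOPS
double precision: 160 ganged units * 2 FLOPs/clock * 0.75 GHz = 240 GFLOPS, i.e. 1/5 of the single-precision rate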

Re:Misunderstanding about double precision (0)

Anonymous Coward | more than 5 years ago | (#27129239)

hrmm.. actually I thought it was that in each block one unit supported double precision, and you could also use the other 4 combined to do 2 at the same time... hence the 2/5 thing

Drivers (1)

Brandybuck (704397) | more than 5 years ago | (#27129479)

I would rather have quality Open Source drivers. Yeah, you threw the specs "over the wall", but it would be nice if you were a bit more active. Like giving us an actual Open Source driver. Or patches. Or something. We shouldn't be doing your work for you.

Slashdot editrolls to reuse same old marketspeak? (0)

Anonymous Coward | more than 5 years ago | (#27129747)

Let's cut out all the hype: AMD is working on a new GPU. It is expected to be faster than the current one, and they would very much like us to believe it will be better than the competition.

Can I run Perl scripts on it? Can GPGPU routines safely coexist with games? Can they please just agree on a common interface so I don't get locked in to the GPU-of-the-month and these companies' tendencies to break backward compatibility on an annual basis?

The day I can use a GPGPU like any other processor is the day I start giving a damn about GPGPU. For now, just shut the fuck up and give me a video card that doesn't suck.

Loading...