
AMD Banks On Flood of Stream Apps

timothy posted more than 5 years ago | from the streams-come-with-banks-anyhow dept.

AMD 124

Slatterz writes "Closely integrating GPU and CPU systems was one of the motivations for AMD's $5.4bn acquisition of ATI in 2006. Now AMD is looking to expand its Stream project, which uses graphics chip processing cores to perform computing tasks normally sent to the CPU, a process known as General Purpose computing on Graphics Processing Units (GPGPU). By leveraging thousands of processing cores on a graphics card for general computing calculations, tasks such as scientific simulations or geographic modelling, which are traditionally the realm of supercomputers, can be performed on smaller, more affordable systems. AMD will release a new driver for its Radeon series on 10 December which will extend Stream capabilities to consumer cards." Reader Vigile adds: "While third-party consumer applications from CyberLink and ArcSoft are due in Q1 2009, in early December AMD will release a new Catalyst driver that opens up stream computing on all 4000-series parts and a new Avivo Video Converter application that promises to drastically increase transcoding speeds. AMD also has partnered with Aprius to build 8-GPU stream computing servers to compete with NVIDIA's Tesla brand."
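
To make the idea in the summary concrete, here is a minimal, hypothetical GPGPU sketch written in CUDA syntax (one of the toolkits discussed in the comments below) rather than AMD's own Stream/Brook+ API: a simple SAXPY loop in which each of thousands of GPU threads computes one array element. The kernel and variable names are illustrative only.

    // Hypothetical sketch of the GPGPU idea in CUDA syntax (not AMD's Stream/Brook+ API):
    // a data-parallel loop is split across thousands of GPU threads, one element each.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
        if (i < n)
            y[i] = a * x[i] + y[i];                      // each thread handles one element
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));        // unified memory keeps the sketch short
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);  // ~4096 blocks of 256 threads
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]);                     // expect 5.0
        cudaFree(x);
        cudaFree(y);
        return 0;
    }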


124 comments


An alternative spin. (0, Troll)

wild_quinine (998562) | more than 5 years ago | (#25755881)

"AMD will release a new driver for its Radeon series on 10 December which will probably kill your family."

There, fixed that for you.

Re:An alternative spin. (0)

Anonymous Coward | more than 5 years ago | (#25756229)

"AMD will release a new driver for its Radeon series on 10 December which will probably kill your family."
There, fixed that for you.

Pearl Harbor is December 7th dumbass.

I'm concerned. (1)

gnutoo (1154137) | more than 5 years ago | (#25756883)

What? Is it Windows 7?

That's both deadly and vaporware at the same time.

I hope they make this free software. That's where the supercomputer market is, so non-free releases don't make sense.

"deadly and vaporware" acronym (2, Funny)

NotQuiteReal (608241) | more than 5 years ago | (#25756973)

deadly and vaporware is more commonly known as sbd. Everyone knows that.

Re:An alternative spin. (0)

Anonymous Coward | more than 5 years ago | (#25758203)

Troll? Cmon, that was hilarious.

Useless without free drivers! (5, Interesting)

neonleonb (723406) | more than 5 years ago | (#25755885)

Surely I'm not the only one who thinks this'll be useless without open-source drivers, so you can actually make your fancy cluster use these vector-processing units.

Re:Useless without free drivers! (1, Insightful)

ceoyoyo (59147) | more than 5 years ago | (#25756037)

Uh, what? Just like your video card is useless for displaying graphics without open source drivers?

Re:Useless without free drivers! (3, Informative)

Chandon Seldon (43083) | more than 5 years ago | (#25756187)

Uh, what? Just like your video card is useless for displaying graphics without open source drivers?

We're not talking about video games here. Some people use computers for important work, not just for screwing around.

Re:Useless without free drivers! (5, Funny)

Cassius Corodes (1084513) | more than 5 years ago | (#25756209)

We're not talking about video games here. Some people use computers for important work, not just for screwing around.

How dare ye!

Re:Useless without free drivers! (4, Informative)

ceoyoyo (59147) | more than 5 years ago | (#25756357)

Yes, thank you for telling me. I use mine for cancer research. That includes GPGPU, by the way. Yes, I'm serious.

I don't believe I know anyone who uses the source code for their video driver. All the GPGPU people use one of the GPU programming languages. The hard core ones use assembly. The young 'uns will grow up with CUDA. None of those requires the source code for the driver.
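
For what it's worth, the point that GPGPU work goes through a published API rather than the driver source can be illustrated with a hypothetical host-side fragment using the CUDA runtime: every interaction with the card (querying it, allocating memory, moving data) happens through documented calls, and the driver underneath is treated as a black box.

    // Hypothetical host-side sketch: the documented runtime API is the entire
    // contract; nothing here needs (or could use) the driver's source code.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);              // query device 0 through the API
        printf("GPU: %s, %d multiprocessors\n", prop.name, prop.multiProcessorCount);

        float host[256] = {0};
        float *dev = 0;
        cudaMalloc(&dev, sizeof(host));                 // allocate device memory
        cudaMemcpy(dev, host, sizeof(host), cudaMemcpyHostToDevice);   // upload
        // ... kernel launches against `dev` would go here ...
        cudaMemcpy(host, dev, sizeof(host), cudaMemcpyDeviceToHost);   // download
        cudaFree(dev);
        return 0;
    }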

Re:Useless without free drivers! (1)

lysergic.acid (845423) | more than 5 years ago | (#25756443)

i think the point is that if you want to use GPGPU for cluster computing it's ideal to have open source drivers, especially if you're not going to release Linux/Unix drivers yourself.

Re:Useless without free drivers! (5, Insightful)

ceoyoyo (59147) | more than 5 years ago | (#25756507)

And my point is, why? All you need is a decent API. Claiming it's useless without open source drivers is just a silly ruse by an open source zealot to advance an agenda.

Open source has a lot of things going for it, but its more fanatical followers are not among them.

Re:Useless without free drivers! (3, Insightful)

lysergic.acid (845423) | more than 5 years ago | (#25756829)

what agenda are they advancing? the agenda of being able to use this feature on the platform they are running?

sure, if all hardware manufacturers were in the habit of releasing Unix & Linux drivers then closed-source binaries and a decent API would be fine. but the reality is that many manufacturers do not have good Linux/Unix support. that is fine. but if they want to leave it to the community to develop the Linux/Unix drivers themselves then it would be really helpful to have open source Windows drivers to use as a template.

it's not useless to you since you're running Windows, but not everyone uses a Windows platform for their research. for those people it would be useless without either open source drivers or a set of Linux/Unix drivers. i mean, if you're already running a Beowulf cluster of Linux/BSD/Solaris machines then it might not be practical to convert them to a Windows cluster (can you even run a Beowulf cluster of Windows machines?), not to mention the cost of buying 64 new Windows licenses and porting all of your existing applications to Windows.

it's probably an exaggeration to say that closed-source drivers are useless. and perhaps AMD will release Linux/Solaris/Unix drivers. but if they're not going to, then open sourcing the Windows drivers and the hardware specs would be the next best thing. and the outcry for open source drivers isn't without some merit, since past Linux support by AMD/ATI with proprietary drivers has left much to be desired, with Linux drivers only receiving updates half as often as the Windows drivers and consistently underperforming against comparable graphics cards.

Re:Useless without free drivers! (5, Insightful)

ceoyoyo (59147) | more than 5 years ago | (#25756927)

It's fine to lobby for open source drivers. It's also great if you want to run something on your chosen platform and you want the company who makes the hardware to support that. Both of those I can wholeheartedly support.

Claiming that something is useless without open source drivers is either dishonest or deluded. As I said, I don't think the important goals of the open source movement are served by either lying or ranting about your delusions.

When I was a kid, my mom used to tell me (3, Insightful)

coryking (104614) | more than 5 years ago | (#25756943)

with Linux drivers only receiving updates half as often as the Windows drivers and consistently underperforming against comparable graphics cards

If something hurts, stop doing it.

You expect the world to cater to your lifestyle choices. You made the choice to run a platform that isn't well supported by video card manufacturers. Either stop using the platform, or find video cards that work on your platform. What if there are no good video cards for your platform? Tough luck. Sorry. You should have considered that before installing the OS, eh?

It is beyond arrogant to expect the world to cater to your choice of operating system.

Re:When I was a kid, my mom used to tell me (1)

ustolemyname (1301665) | more than 5 years ago | (#25757097)

well, now I know what follows arrogance: optimism.

Re:When I was a kid, my mom used to tell me (4, Informative)

lysergic.acid (845423) | more than 5 years ago | (#25757193)

i'm a graphic designer, so i run Windows. i haven't touched Linux or Unix in over half a decade. but i'm not a selfish jackass who thinks that only my needs are important, and that as long as they are met everyone else can just go to hell.

there's nothing arrogant about expecting hardware manufacturers to support the 3 most popular OSes: Windows, OS X, and Linux. and it's precisely because people understand that hardware manufacturers can't be expected to support every single OS out there (even well-known ones like Solaris, FreeBSD, BeOS, etc.) that people are pushing for open source drivers.

your mom may not have told you this, but businesses depend on their customers to make money. so listening to consumers and meeting consumer demands is generally a good idea (ever heard of market research?). by allowing their hardware to be used on a wider range of platforms they are broadening the market for their products.

AMD isn't in the business of selling video card drivers, just the video cards. that is why they have open sourced their Radeon drivers in the past. and if we were all as simple-minded as your mom, then no one would ever speak up for themselves. and hardware manufacturers aren't run by mind readers.

Re:When I was a kid, my mom used to tell me (0, Troll)

jsoderba (105512) | more than 5 years ago | (#25758705)

Your mom might have told you that the customer is always right, but that is not actually true. When customers make a nontrivial request, a wise manager will do a cost-benefit analysis. Is the gain from fulfilling the customers' request greater than the cost of doing so? (Where cost includes the direct cost of labor and materials as well as the PR cost of snubbing customers.) Developing and supporting drivers is a lot of work. So is writing and supporting documentation.

AMD did not open-source their Radeon drivers. They released partial specs and left implementing the driver to the X.org community. It seems clear from the slow pace at which updated specs are released for new chips that AMD has put a low priority on supporting the radeonhd project.

Re:When I was a kid, my mom used to tell me (1)

hotdiggitydawg (881316) | more than 5 years ago | (#25760011)

your mom may not have told you this, but businesses depend on their customers to make money. so listening to consumers and meeting consumer demands is generally a good idea (ever heard of market research?)

There's your problem: for most big businesses, consumer != customer. As a consumer you have no importance as an individual, and often you are only vaguely relevant as a secondary market. Especially if you are a niche market.

AMD isn't in the business of selling video card drivers, just the video cards.

To a distributor. Who sells them to a retailer. Who sells them to you.

When your mom-and-pop computer shop opens their own chip manufacturing plant and sells direct to the public, then your opinion might start to matter.

Re:When I was a kid, my mom used to tell me (1)

Kjella (173770) | more than 5 years ago | (#25760039)

there's nothing arrogant about expecting hardware manufacturers to support the 3 most popular OSes: Windows, OS X, and Linux. and it's precisely because people understand that hardware manufacturers can't be expected to support every single OS out there (even well-known ones like Solaris, FreeBSD, BeOS, etc.) that people are pushing for open source drivers.

Most people just want the magic driver fairy to come, and don't really care if it's open or closed source, but they've heard that the OSS community will write drivers for their ultra-obscure platform even though no one else will. Yes, sometimes the community writes its own driver, but in many other cases companies still have to do most of the work to make an OSS driver happen, and then get flamed if it's not on par with the closed-source Windows drivers anyway. Writing up and clearing detailed specs for release, and not just responding with STFU when people take up developer/support time asking about said specs, costs money too. And even if you hit them over the head with the world's biggest "not supported" sign, end users will ask for support. Nothing about supporting other OSes is really free as in beer, and if the added sales don't make up for the cost then those users are asking for subsidies from others.

Re:When I was a kid, my mom used to tell me (3, Insightful)

Fulcrum of Evil (560260) | more than 5 years ago | (#25757529)

You expect the world to cater to your lifestyle choices.

Of course - we are a customer base, and we expect to have our needs catered to.

You made the choice to run a platform that isn't well supported by video card manufacturers. Either stop using the platform, or find video cards that work on your platform. What if there are no good video cards for your platform? Tough luck. Sorry. You should have considered that before installing the OS, eh?

Third choice: lobby for support from major chip manufacturers. What makes you think a large group of users is powerless, anyway? 5% of 100M is 5MM people, with some of them having cause to buy 100+ units of product.

It is beyond arrogant to expect the world to cater to your choice of operating system.

Don't need the world. Just need a couple companies.

Re:Useless without free drivers! (1)

cj1127 (1077329) | more than 5 years ago | (#25757347)

True. I may be sticking my neck out, but from my point of view I'd say that the open-source movement could really use losing Stallman in order to gain market share. Closed-source software isn't immoral, nor is it required for a large number of applications, so why are Stallman et al pushing for "all or nothing" when it comes to open source?

Re:Useless without free drivers! (1)

spauldo (118058) | more than 5 years ago | (#25758477)

They're pushing for it because someone has to.

If there weren't people out there pushing the cause in its purest sense, we wouldn't have half the progress we do. Stallman has a role, and it's a vital one. If it wasn't him (or at least his push for a total free software world) the GNU system wouldn't have started up, and we wouldn't have Linux at all.

That doesn't mean you have to buy into it. Everyone pretty much knows there's lots of room for closed software. Stallman just represents the extreme, which you need to balance the other extreme. After all, how many times have we heard large companies tell us open source is inferior/unsecure/communist/etc? It's all gotta balance out.

Re:Useless without free drivers! (3, Interesting)

Anonymous Coward | more than 5 years ago | (#25758441)

> Any my point is, why? All you need is a decent API.

Well, that assumes closed source can provide a decent API.
Considering the huge number of bugs even the CUDA compilers have (and they are fairly good compared to others, particularly FPGA synthesis tools), there is a severe risk that you will get stuck in your project without the possibility of doing _anything_ about it.
Closed source also leads to such ridiculousness as the disassembler only being available as a third-party tool (decuda), making it even harder to find the bugs in the tools.
(But yes, calling it useless is over-the-top.)

Re:Useless without free drivers! (1)

Chandon Seldon (43083) | more than 5 years ago | (#25756513)

None of those requires the source code for the driver.

And Google just serves web pages, which doesn't require access to the source code for the web server. That doesn't mean they'd be caught dead using a binary blob for a web server, though. It's just not an acceptable business risk.

It's like having backups. Sure, restoring from backups isn't part of the plan, but not having backups isn't a risk that anyone takes with important business data. Personally, I'd consider my research data to be even more important than that. I sure as hell wouldn't risk it to a poorly maintained and unstable binary blob - at least not if there was absolutely any other choice, and even then not without a hell of a lot of precautions to make sure it didn't get quietly corrupted or randomly lost.

And someday (3, Funny)

coryking (104614) | more than 5 years ago | (#25756575)

Lord only knows what kinds of boobie traps they put in power supplies. The CIA and the NFL probably know more about you than you realize thanks to that "120V power supply" on the back of each computer in Google's data center. I mean, unless you have the schematics, how do you really know what it is doing?

You don't. Neither does Google. The wise are already beginning to short GOOG. Will their shareholders wake up and demand schematics? Only time will tell.

Re:And someday (2, Interesting)

Max Littlemore (1001285) | more than 5 years ago | (#25756693)

Hmmm. The old "I don't know everything about everything, therefore I don't care if I don't know everything about something" argument. Gets 'em every time..

Re:And someday (0)

Anonymous Coward | more than 5 years ago | (#25757889)

Have you ever been to a datacenter? We use 240v there.

Re:Useless without free drivers! (3, Insightful)

ceoyoyo (59147) | more than 5 years ago | (#25756729)

Like I said about zealots making things up....

Your argument about the business world not using non-open source is spot on. Excellent example. Of COURSE nobody would trust their critical systems to, say, an OS they don't have the source for! Never mind closed source apps! Naturally they only buy video cards that have open source drivers too.

Re:Useless without free drivers! (1, Interesting)

Chandon Seldon (43083) | more than 5 years ago | (#25756959)

Like I said about zealots making things up....

I think you're letting your personal ideology cloud your view of the world around you.

Of COURSE nobody would trust their critical systems to, say, an OS they don't have the source for!

Most major companies don't. They happily run employee desktops on Microsoft Windows, because they can easily swap them out when they break. They run critical legacy systems on IBM mainframes (or whatever). And they run new critical systems on platforms that are almost entirely FOSS. I'm sure you can easily come up with a counterexample, but they're the exception, not the rule.

Re:Useless without free drivers! (2, Informative)

drinkypoo (153816) | more than 5 years ago | (#25757565)

I think you're letting your personal ideology cloud your view of the world around you.

I think you missed the sounds of sarcasm. Not too hard to do, as it was confusingly mixed with some other, simpler HHOS-style text.

Of COURSE nobody would trust their critical systems to, say, an OS they don't have the source for!

Most major companies don't. They happily run employee desktops on Microsoft Windows, because they can easily swap them out when they break.

Not a critical system, then. Critical systems are the machines that cause serious problems when they fail.

They run critical legacy systems on IBM mainframes (or whatever). And they run new critical systems on platforms that are almost entirely FOSS.

IBM sells more Linux than AIX today, and they sell quite a bit of Linux across their line, at least on the systems-formerly-known-as-S/390-and-RS/6000. I'm not sure if I'm disagreeing with you, or proving your point, but whatever.

All I know for sure is that the EULA for Windows prohibits using it to control a nuclear reactor, or at least it used to, and it bloody well should.

Re:Useless without free drivers! (1)

Jah-Wren Ryel (80510) | more than 5 years ago | (#25757781)

All I know for sure is that the EULA for Windows prohibits using it to control a nuclear reactor, or at least it used to, and it bloody well should.

But voting machines, that can ultimately send hundreds of thousands to war, no problemo! [youtube.com]

Re:Useless without free drivers! (0, Troll)

zunicron (1344365) | more than 5 years ago | (#25756791)

Cancer research? Don't you mean more like, get-the-govt-to-fund-my-zero-progress-and-let-me-sit-on-my-ass-all-day-spending-taxpayer-money research?

Sarah Palin? (1, Funny)

Anonymous Coward | more than 5 years ago | (#25756963)

Is that you?

Re:Sarah Palin? (1)

Cassius Corodes (1084513) | more than 5 years ago | (#25757021)

Release the fruit flies!

You are serious? I'm scared. (1)

inTheLoo (1255256) | more than 5 years ago | (#25756955)

All the GPGPU people use one of the GPU programming languages. The hard core ones use assembly.

Well, yeah. That's what you have to do when you don't have a free implementation. Kind of sucks to rewrite everything when you swap hardware platforms, don't it? Tell me it would not be nicer to have a free, vendor supported framework that would at least port to different generations by the same vendor.

Also, please tell me why you would waste time with that kind of thing when there are lots of spare cycles on supercomputers at every university. Learning GPU programming languages and assembly is a nice hobby and all, but I'm not sure you should write it into your grant.

Oh, I see, I'm talking to the CEO Yoyo. I've been trolled. Nice one.

Re:Useless without free drivers! (0, Redundant)

aliquis (678370) | more than 5 years ago | (#25756535)

What?!! You can use your graphics card to get screwed!?!

Crossfire HD4870X2, BABY!

Length? What the fuck do I care, my case is big enough.

Re:Useless without free drivers! (1)

Tubal-Cain (1289912) | more than 5 years ago | (#25757815)

Seeing as you are currently using yours to browse slashdot, you obviously aren't one of those people.

Re:Useless without free drivers! (0)

Anonymous Coward | more than 5 years ago | (#25760913)

We're not talking about video games here. Some people use computers for important work, not just for screwing around.

Yep, everyone is scrambling to get a heater. Yes, a heater. I looked at one video card the other day and it recommended a 400 watt power supply.

This segment is ignoring the middle road user completely. I want a card with dedicated video RAM that will go into a 250 watt system and leave enough juice for a second hard drive. Does not need 1000 CPU threads at 100 watts, just better performance than shared memory will allow.

Besides, who that is prudent with money is going to spend more for the GPU than the CPU case and all? Ya, there are performance freaks, but most want a decent non-shared cheap video card.

Re:Useless without free drivers! (3, Insightful)

TubeSteak (669689) | more than 5 years ago | (#25756203)

Surely I'm not the only one who thinks this'll be useless without open-source drivers, so you can actually make your fancy cluster use these vector-processing units.

You may or may not be surprised by this, but not all of the magic happens in hardware, which is why you don't see open sourced drivers for a lot of stuff.

Sometimes it just makes sense to put the optimizations in the driver, so that when you tweak them later, you don't have to flash the BIOS.

Re:Useless without free drivers! (1)

ratboy666 (104074) | more than 5 years ago | (#25756289)

Think.

GPGPU is USELESS unless it is in hardware. If it were a "driver thing", it couldn't possibly work. Indeed, the GPGPU user would get worse performance, because the multiple execution units would have to be simulated...

So, these "optimizations" are not in the driver.

Re:Useless without free drivers! (2, Interesting)

644bd346996 (1012333) | more than 5 years ago | (#25757141)

What about things like GLSL compilers? Are they in hardware too?

No, really useless. Hence the reason AMD opened up (1)

ciroknight (601098) | more than 5 years ago | (#25756911)

You may or may not be surprised by this, but not all of the magic happens in hardware, which is why you don't see open sourced drivers for a lot of stuff.

That was true in the mid-90s, when drivers meant the difference between 10fps and 30fps. We're in the late 00s now, where drivers are as thin as possible, so that APIs are sitting right on top of hardware commands, and compiling shaders is about the only thing left to do in software alone.

Any such optimizations or color corrections are typically done in baked-in firmware, if they are done at all. With GPGPU, there's just zero reason not to give programming languages and APIs full access to the hardware in the same way that Linux has full access to your CPU (minus a few notable exceptions, most of which are meaningless to most of us).

This is why AMD has opened up the floodgates on documentation of their hardware, and the open drivers have been improving ever since.

Re:Useless without free drivers! (2, Insightful)

Wesley Felter (138342) | more than 5 years ago | (#25756261)

Plenty of people seem to be getting serious work done with NVidia's proprietary Linux CUDA drivers.

blah blah blah (2, Insightful)

coryking (104614) | more than 5 years ago | (#25756347)

You chose to run on a platform knowing full well these things aren't likely to be supported. Very little sympathy from me. Sorry.

If you want to encourage more drivers on your platform of choice, perhaps you might consider making it easier for hardware companies to target your kernel. Maybe consider, oh I don't know, a stable, predictable ABI?

Maybe lose the attitude as well. The world doesn't owe you or your OS choices anything. All you can do is focus your efforts on making your platform of choice attractive to those whose support you wish to seek.

For hardware vendors, it is easy. Simply make it cheap and easy to write drivers on your platform of choice. Ask yourself, "Is it cheap and easy for hardware vendors to target my platform?" If the answer is "No", then figure out what you can do to make it cheap and easy for them. If that is impossible because it violates some guiding values of your platform, well shucks, either be a man and deal with it, or reconsider your value system. Whining about how the world isn't giving you what it owes you doesn't help anybody.

Re:blah blah blah (1)

trooper9 (1205868) | more than 5 years ago | (#25756705)

Mod parent up!

Re:Useless without free drivers! (1)

Aardpig (622459) | more than 5 years ago | (#25757767)

You may not be the only one, but that doesn't mean you're not wrong. I'm currently working on using NVIDIA GPUs w/ CUDA to do seismic modeling of massive, luminous stars. The fact that the drivers are closed-source doesn't count for shit.

Re:Useless without free drivers! (1)

MostAwesomeDude (980382) | more than 5 years ago | (#25758115)

Yeah, no, you're the only one.

Unless, of course, you feel like writing the support into the open drivers. I've got other things to do, as do most of my compatriots.

Not to be insulting, but OpenGL 2.0 support, DRI2, KMS, are all of much higher importance than yet another GPGPU language, especially since Radeons are still not on the Gallium system yet.

~ C.

BS (1)

yabos (719499) | more than 5 years ago | (#25759679)

If that were true then there wouldn't be any software today. You don't need open source drivers to use the published API. That's what an API is for. You certainly won't need the source code to the Mac GPU drivers to use Apple's upcoming OpenCL APIs.

I for one... (1)

Enderandrew (866215) | more than 5 years ago | (#25756157)

...welcome our new GPGPU overlords.

All I'm wondering is why it took so long? They are in the business of selling hardware, and if we can find a new use for it, then we're more likely to purchase AMD/ATI's hardware.

What they might have been waiting for (1)

Ungrounded Lightning (62228) | more than 5 years ago | (#25756275)

All I'm wondering is why it took so long? They are in the business of selling hardware, and if we can find a new use for it, then we're more likely to purchase AMD/ATI's hardware.

A new driver that would turn a consumer-grade high-end video card into an easy-to-use midrange supercomputer? More powerful than the stuff the US nuclear weapon programs were using when they designed the last batch of bombs?

Maybe they were waiting for a congress and president that they don't think will declare their graphics cards to be weapon manufacturing tools and apply export controls. B-)

Re:What they might have been waiting for (1)

Enderandrew (866215) | more than 5 years ago | (#25756469)

They're produced outside the US and the technology to produce processors isn't exactly some US trade secret the rest of the world doesn't have.

But you keep up that baseless paranoia.

Re:What they might have been waiting for (2, Funny)

zippthorne (748122) | more than 5 years ago | (#25756937)

Precisely. The Democrats would never tinker with computing hardware or software (like trying to force everyone to use the same, weak, encryption algorithm AND turn over their keys to the government) like the Republicans would.

Re:I for one... (1)

gbjbaanb (229885) | more than 5 years ago | (#25758861)

Because Nvidia doesn't have the same API set, any GPGPU processing has to be 'lowest common denominator' for consumer-marketed software.

In other words, unless you have the luxury of specifying a particular make of graphics card, you won't write code to take advantage of these new APIs because you will immediately limit your target audience (and no doubt several people will criticise it for not running on their hardware and give it a bad reputation).

Now, if you could make it so it ran in hardware on your AMD card, but fell back to software emulation if you had an Nvidia card, that'd be different. Your software would then run, and you'd have a competitive advantage over Nvidia. People would start buying ATI cards... until Nvidia pulled the same trick.

Re:I for one... (1)

marcosdumay (620877) | more than 5 years ago | (#25758963)

That's normal. Advances in consumer hardware take a long time to develop. It's risky, requiring a very elaborate business plan; research is slow, since everything must be checked several times before it is tried; and most of the time it requires modifications to manufacturing plants, which takes a lot of time.

People say that the normal development time for VLSI is around 7 years. I don't know if that holds today, since I'm not in this market, but in the past I've seen several examples of it.

Coprocessor. (1)

Ostracus (1354233) | more than 5 years ago | (#25756179)

Actually what I had in mind when I mentioned it on the previous story is that AMD could make the SIMD part of the CPU. That way you could use whatever video card you wanted and have the advantages SIMD brought.

Hope it's not like the last transcoding software.. (4, Informative)

Amphetam1ne (1042020) | more than 5 years ago | (#25756247)

Last time I looked at the Catalyst/Avivo hardware transcoding software it was somewhat less usable than I had hoped. The thing that killed it for me was the lack of batching or command-line options. It actually turned out to be less time consuming to use a software transcoder with batching and leave it on overnight and while I was at work than to go back over to the PC after every finished run to set up the next file for transcoding on the video card. The quality of the video that was transcoded in hardware was a bit on the patchy side as well.

Something that I would be interested in is integrating support into burning software to speed up the transcoding side of DVD video burning. Unfortunately it doesn't look like it's happening any time soon. I think the problem is that by the time the technology has matured enough to make it viable, the increase in CPU speed will have made it redundant.

Re:Hope it's not like the last transcoding softwar (1)

coryking (104614) | more than 5 years ago | (#25756491)

the increase in CPU speed will have made it redundant.

I think the deal is, CPU speed is what has matured. These days, we don't get fast by increasing clock speed, we get fast by going massively parallel. Hence the article and the hardware. The GPU is reaching the point where it is a fancy specialized CPU. Kind of like the FPUs back in the day before the 486 DX was king.

No matter what happens, though, the "CPU" as we currently define it will never be able to outperform what we now define as the "GPU", no matter how much CPU clock speed increases. Clock speed doesn't matter as much for the kinds of problems GPUs are built to solve.

Open standard API (3, Interesting)

Chandon Seldon (43083) | more than 5 years ago | (#25756255)

So... is there an open standard API for this stuff yet that works on hardware from multiple manufacturers?

If not, developing for this feels like writing assembly code for Itanium or the IBM Cell processor. Sure, it'll give you pretty good performance now, but the chances of the code still being useful in 5 years are basically zero.

Re:Open standard API (3, Informative)

Wesley Felter (138342) | more than 5 years ago | (#25756285)

OpenCL will be the standard; it should support real processors, ATI, NVidia, and maybe Cell if someone bothers to write a backend.

Re:Open standard API (2, Insightful)

Anonymous Coward | more than 5 years ago | (#25756493)

No standard API yet, because the NVIDIA chips only work on integers and the Stream processors actually can do double precision. From a scientific computing standpoint, portability of my codes is almost as simple as a patching process to get the keywords correct (nevermind the memory handling and feedback, that is still pretty different) and that is only because of the floating point. It's a pain in the you know what trying to keep numbers straight when you have to multiply everything by 10000.
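
(The replies below dispute the premise about integer-only hardware, but the "multiply everything by 10000" workaround the poster describes is classic fixed-point scaling. A hypothetical CUDA-style sketch of the bookkeeping involved, with made-up names:)

    // Hypothetical fixed-point sketch: every value carries an implicit scale of
    // 10000, so 3.1416 is stored as the integer 31416. Addition is free, but every
    // multiplication must divide the scale back out, or the result silently ends
    // up at scale 10000*10000 -- the bookkeeping the poster is complaining about.
    #define FX_SCALE 10000LL

    __host__ __device__ long long fx_mul(long long a, long long b) {
        return (a * b) / FX_SCALE;          // rescale after multiplying
    }

    // One thread per element: area[i] = pi * r[i]^2, entirely in scaled integers.
    __global__ void circle_area(const long long *r, long long *area, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        const long long pi = 31416;         // pi at scale 10000
        if (i < n)
            area[i] = fx_mul(pi, fx_mul(r[i], r[i]));
    }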

Re:Open standard API (2, Informative)

644bd346996 (1012333) | more than 5 years ago | (#25757189)

NVIDIA's Tesla products all support single precision IEEE-754 floating point, and their 10-series supports double precision.

Re:Open standard API (3, Informative)

tyrione (134248) | more than 5 years ago | (#25757655)

NVIDIA's Tesla products all support single precision IEEE-754 floating point, and their 10-series supports double precision.

Nvidia is moving to OpenCL compliance, as well.

OpenCL (1)

LordMyren (15499) | more than 5 years ago | (#25758315)

OpenCL is supposed to be it; it's officially endorsed by Khronos (of OpenGL fame) and Apple. The release date is still unknown, and the drivers are even less known. It promises a general purpose stream processing API that can inter-operate with OpenGL.

I've been scouring today's press releases for OpenCL, and thus far I've been extremely disappointed to hear numerous promises about Brook+ (the proprietary stream API AMD originally backed, which I don't give a crap about) and nothing about OpenCL. AMD better fucking not renege, or their hardware is going to be useless as all fuck.

The future of Computing is in... (1, Redundant)

rm999 (775449) | more than 5 years ago | (#25756451)

Low cost, low power CPUs. Already, 99% of people don't use 99% of the power of their CPUs 99% of the time. Why then, is AMD banking on people wanting massive power on specialized hardware that will require fancy compilers?

I consider myself in the ever-dwindling group of "PC gamers," and even I am looking forward to the death of GPUs in my computer. I'm tired of building expensive and power-sucking desktops that go obsolete in 18 months. Instead of building a 1000 dollar desktop next year, I'm getting a laptop and an Xbox 360. My 350 dollar EEE PC will be able to load PDF files and power-points without a 350-watt, 400 dollar video card, thank you very much.

Re:The future of Computing is in... (5, Insightful)

waferhead (557795) | more than 5 years ago | (#25756603)

You do realize this article has ~absolutely nothing to do with gaming, or even normal users, right?

The systems discussed using CUDA or GPGPU will probably spend ~100% of their lives running flat out, doing simulations or such.

Visualize a Beowulf Cluster of these. Really.

Re:The future of Computing is in... (1)

rm999 (775449) | more than 5 years ago | (#25758535)

My point had more to do with the future of AMD than with video games. I was talking about what 99% of users need from their computers, not what 0.000001% of people want (super computer architects). AMD acquired ATI because they thought that was the future of mainstream computing, not because they wanted to serve the niche supercomputer community.

I'm asserting that AMD made a huge mistake buying ATI. I am the audience they meant to target, and they miserably failed. Now, the best they can do is serve a small market at a huge cost; that makes me sad, because I used to be a faithful fan of their brand (and an investor in their company.)

Re:The future of Computing is in... (1)

ypctx (1324269) | more than 5 years ago | (#25758619)

Real speech recognition and communication will require a lot of power, as will real 3D games and other relaxation/leisure environments. Current games are just Wolfenstein with more details. The human brain is on a never-ending quest to be entertained - you show it something new, and it won't want the old one anymore. We are underutilizing our hardware now, but that's because it is not fast enough to run next-gen applications, or because we haven't yet discovered how to make such applications. But it will come.

You misunderstand how people use computers (3, Insightful)

coryking (104614) | more than 5 years ago | (#25756649)

Already, 99% of people don't use 99% of the power of their CPUs 99% of the time

So by your logic, those people would be happy with a computer that was 1% as fast as what it is now?

Make no mistake, once you actually hit that 1% of the time when you need 100% of your CPU, the more the better. I can think of two horsepower-intensive things a normal, everyday Joe now expects his computer to do:

1) Retouching photos
2) Retouching and transcoding video (from camera/video camera -> DVD)

Don't underestimate transcoding video either. More and more people will be using digital video cameras and expect to be able to output to DVD or Blu-ray.

Re:You misunderstand how people use computers (1)

rm999 (775449) | more than 5 years ago | (#25758489)

"So by your logic, those people would be happy with a computer that was 1% as fast as what it is now?"

Well, no, and I think you answered your own question. But, to make it clear, that's why I said 99% of the time and not 100% of the time. Regardless, my point wasn't that we need less power, it's that the power we have now is adequate for most people. We don't need to jump through hoops to eke out more raw computing power.

Re:The future of Computing is in... (1)

Varun Soundararajan (744929) | more than 5 years ago | (#25757723)

Low cost, low power CPUs. Already, 99% of people don't use 99% of the power of their CPUs 99% of the time.

I can see you haven't kept upgrading Microsoft operating systems. Everything that used to run well on my XP machine crawls after a fateful Vista upgrade. Why should the machine take 1GB of RAM just to boot up and start operating, when all I do is check mail?

--
talking about Windows in Slashdot; whining about it. I'm not new here.

Re:The future of Computing is in... (1)

LordMyren (15499) | more than 5 years ago | (#25758311)

Power efficiency is a good reason. If you want to play a video, your average Atom or ARM CPU might possibly have enough headroom to play it, but it will consume close to its maximum draw to do so. A GPU, on the other hand, is specialized hardware that can do things like video playback with ease. Even rendering a media player visualization will heavily tax a CPU, but a GPU may hardly notice the load. It's easy to generalize these examples: for nearly any task that is highly parallelizable, there are enormous power gains to be had by putting it on special-purpose massively parallel computing hardware.

I think right now you're locked in the "a GPU is an expensive PCI-Express add-on card" mentality. In the future, the GPU will be part of the same die as the CPU. Actually, screw the future: the iPod has a PowerVR GPU built in. What the iPod does would not be possible without the GPU.

Re:The future of Computing is in... (1)

tacocat (527354) | more than 5 years ago | (#25758665)

I think you are half right.

There are two developing markets for computers: cheap low power home users and high performance hobbyist, research, business. This second one used to belong to a formal server environment. This is being replaced by heavy workstations.

Similarly, many server environments are being replaced by the pizza boxes and blade servers, making low power very important here.

Many are starting to move into an architecture of modest home/desktop performance machines with some really impressive hardware supporting back-end operations. In a fashion, we are moving back into the thinner-client, fatter-server model of days gone by. This makes sense because the client is not thin, but thinner. And the computer today that is considered thinner has more than enough power to surf, email, Quicken, PowerPoint, and Excel while pushing database and modeling work onto a back-end processor.

Re:The future of Computing is in... (1)

marcosdumay (620877) | more than 5 years ago | (#25759017)

People don't "need" it only because they don't have it. If you are a gammer, think about this chip as a very flexible physics processing unit, now if you were a home investor, you could think about it as an at-home market simulator. Of course, if you are doing any kind of CAD, the uses are obvious, as are the uses if you are doing any kind of image (or movie) manipulation. Of course, if could be used simply as a GPU (but more flexible, allowing some interesting optimizations), but the other uses are just too numerous.

Also, it can lead to lower power consuption if you are able to turn your main CPU partialy off and send the floating point heavy work to this one.

From the corn field department ... (1)

kawabago (551139) | more than 5 years ago | (#25756525)

If you let them build it, they will come.

AMD Is Out to Lunch (3, Insightful)

Louis Savain (65843) | more than 5 years ago | (#25756579)

And so are Intel and Nvidia. Vector processing is indeed the way to go, but GPUs use a specific and highly restrictive form of vector processing called SIMD (single instruction, multiple data). SIMD is great only for data-parallel applications like graphics but chokes to a crawl on general-purpose parallel programs. The industry seems to have decided that the best approach to parallel computing is to mix two incompatible parallel programming models (vector SIMD and CPU multithreading) in one heterogeneous processor, the GPGPU. This is a match made in hell and they know it. Programming those suckers is like pulling teeth with a crowbar.

Neither multithreading nor SIMD vector processing is the solution to the parallel programming crisis. What is needed is a multicore processor in which all the cores perform pure MIMD vector processing. Given the right dev tools, this sort of homogeneous processing environment would do wonders for productivity. This is something that Tim Sweeney [wikipedia.org] has talked about recently (see Twilight of the GPU [slashdot.org]). Fortunately, there is a way to design and program parallel computers that does not involve the use of threads or SIMD. Read How to Solve the Parallel Programming Crisis [blogspot.com] for more.

In conclusion, I will say that the writing is on the wall. Both the CPU and the GPU are on their death beds but AMD and Intel will be the last to get the news. The good thing is that there are other players in the multicore business who will get the message.
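
The claim above that SIMD "chokes" on general-purpose parallel programs usually comes down to branch divergence, which is easy to show in a hypothetical CUDA-style sketch: threads that execute in lockstep groups (warps) must serialize whenever they disagree on a branch, which is harmless for uniform data-parallel work but costly for irregular, branch-heavy code. Names here are illustrative.

    // Hypothetical sketch of branch divergence on SIMD-style hardware: threads in
    // the same lockstep group (warp) that take different branches execute both
    // paths one after the other with the inactive lanes masked off. Uniform
    // data-parallel code never pays this cost; irregular "general purpose" code does.
    __global__ void divergent(const int *flags, float *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        if (flags[i])                    // if flags differ within a warp, the warp
            data[i] = data[i] * 2.0f;    // runs this path first...
        else
            data[i] = data[i] + 1.0f;    // ...then this one, roughly doubling the cost
    }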

Re:AMD Is Out to Lunch (1)

DigiShaman (671371) | more than 5 years ago | (#25756817)

What is needed is a multicore processor in which all the cores perform pure MIMD vector processing.

You mean like what's in the Playstation 3; an IBM Cell processor?

Re:AMD Is Out to Lunch (1)

Louis Savain (65843) | more than 5 years ago | (#25756991)

Nope. Sorry. The Cell Processor is a perfect example of how not to design a multicore processor. Just my opinion, of course.

Re:AMD Is Out to Lunch (3, Interesting)

hackerjoe (159094) | more than 5 years ago | (#25757385)

Wow, that's really fascinating, because the Sweeney article you mention has him going on and on about generally programmable vector processors which make heavy use of that SIMD thing you so hate.

Oh wait, I didn't mean fascinating, I meant boring and you're an idiot. Engineers don't implement SIMD instructions because vector processors are easy to program, they implement them because they are cheap enough that they're worth having even if you hardly use them, never mind problem domains like graphics where you're expecting to use them all the time.

(And yes, I did read your article too, just to be charitable. It's amusing that you think the Cell is "a perfect example of how not to design a multicore processor", because the first step of writing software that performs well on the Cell is to break it down into a signal processing chain. What's hilarious, though, is that you think this will make software easier to write. Clearly you haven't tried to write any real software using your proposed system yet, much less worked with a team on it.)

Re:AMD Is Out to Lunch (2, Insightful)

White Flame (1074973) | more than 5 years ago | (#25757183)

Wait a minute. Typically the SIMD of GPU commands is for handling vector triples (coordinates or colors) and matrices, which completely translates into supercomputer tasks that are being talked about in TFA: "tasks such as scientific simulations or geographic modelling".

GPUs nowadays have hundreds of parallelized vector/matrix processors and the drivers & hardware take care of scheduling them all through those pipelines for you. Within the targeted fields, I can't see a downside of this sort of development.

Re:AMD Is Out to Lunch (1)

644bd346996 (1012333) | more than 5 years ago | (#25757237)

These products are targeted at the scientific computing market. Almost all of that software is written to use BLAS or something similar. SIMD processors are plenty well suited to the matrix and vector operations needed for that. Sure, MIMD architectures may be theoretically nicer, but the way to establish the stream computing market is to create a product that can accelerate current software, which means providing an interface similar to what current software uses (ie. BLAS).
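
To make the BLAS point concrete: a level-2 operation like matrix-vector multiply decomposes into one independent dot product per output row, which is exactly the shape SIMD-style GPU hardware wants. A hypothetical, deliberately unoptimized CUDA sketch (a real port would call a vendor-tuned GPU BLAS rather than a hand-rolled kernel):

    // Hypothetical, unoptimized sketch of y = A*x for a row-major m x n matrix:
    // one GPU thread per output row, each computing an independent dot product.
    // A tuned GPU BLAS would tile and reuse loads, but the mapping is the same.
    __global__ void gemv(const float *A, const float *x, float *y, int m, int n) {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row >= m) return;

        float acc = 0.0f;
        for (int col = 0; col < n; ++col)
            acc += A[row * n + col] * x[col];   // dot product of row `row` with x
        y[row] = acc;
    }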

Re:AMD Is Out to Lunch (0)

Anonymous Coward | more than 5 years ago | (#25757311)

"Intel and AMD, and NVidia and everyone is wrong, on their death bed, I see dead electronic companies!"

Louis Savain is at it again, plugging links to his RebelScience blog, touting pseudo-relations with Tim Sweeney, and explaining how the entire world doesn't "get it"!

Hey, what happened with your project decoding AI algorithms hidden in the Bible, son? There's unstable, and there's crazy, but that's batshit crazy.

death of cpu (0)

Anonymous Coward | more than 5 years ago | (#25757473)

film at 11

Re:AMD Is Out to Lunch (1)

dbIII (701233) | more than 5 years ago | (#25758249)

Vector processing is indeed the way

That's what it looks like, until after a few weeks of used car salesman antics you get a quote on a machine with a cell processor. AMD and Intel etc have a solid future even in the numerical processing field for as long as anything that does vector processing requires black ops military budgets. Why buy one decent machine when you can get twelve that give you 3/4 of the speed each?

I don't think we'll see much change until Intel or AMD step into that arena.

Re:AMD Is Out to Lunch (1)

GreatBunzinni (642500) | more than 5 years ago | (#25758305)

You seem to be a bit lost. This GPGPU deal is not a means to get to the holy grail of multicore computing, nor was it ever seen as such. It is just a means to enable users, whether they are regular folks or scientists in computing-intensive fields like structural engineering and biomedical research, to harness the computing power of high-power processors which happen to be very cheap and available on any shelf in any computer store. It's a means to take advantage of the hardware that sits idle on your computer and a means to get the job done spending only a couple thousand dollars on off-the-shelf parts instead of a multi-million dollar CPU cluster. It isn't a solution for tomorrow's computing needs that uses tomorrow's hardware. It's a solution for today's computing needs that takes advantage of yesterday's computing hardware.

And if you really believe that Intel and even AMD don't have a clue about making CPUs or even how the IT market is run, then you probably should just leave Slashdot and never return, impostor.

Re:AMD Is Out to Lunch (4, Informative)

LordMyren (15499) | more than 5 years ago | (#25758391)

There are a lot of tall claims here, but the one that sticks out as most needing some kind of justification is that "The industry seems to have decided that the best approach to parallel computing is to mix two incompatible parallel programming models (vector SIMD and CPU multithreading)". GPUs mix these models fine, and I haven't seen anyone bitching about the thread schedulers on them or bitching about not being able to use every transistor on a single stream processor at the same time. How you can claim these models are incompatible, when in fact it's the only working model we have and it works fine for those using it, is beyond me. You criticize the SIMD model, but the GPU is not SIMD: it is a host of many different SIMD processors, and that in turn makes it MIMD.

Moving on to what you suggest, I fail to see how superscalar out-of-order execution is not MIMD, and we've been doing that shit for years. The decoder pulls in a crap ton of things to do, assigns them to work units, and they get crunched in parallel. Multiple inputs, multiple data sources, smart CPU to try to crunch it all. Intel tried to take it a step further with EPIC (Explicitly Parallel Instruction Computing) and look how that fucking turned out: how many people here know what Itanium even fucking is?

The "how to solve the parallel programming crisis" link is pretty hilarious. Yes, lists of interlocking queues are badass. Unfortunately the naive implementation discussed at the link provides no allowance for cache locality. In all probability the first implementation will involve data corruption and crap performance. Ultimately the post devolves into handwaving bullshit that "the solution I am proposing will require a reinvention of the computer and of software construction methodology". This is laughable. Just because stream processing isn't insanely easy doesn't mean we have to reinvent it just so we aren't burdened with dealing with multiple tasks. Even if you do reinvent it, as say XMT has done, you still have to cope with many of the same issues (XMT's utility in my mind is as a bridge between vastly-superscalar and less-demanding EPIC).

Good post, I just strongly disagree. The GPU is close to the KISS philosophy: the hardware is dumb as a brick and extremely wide, and it's up to the programmers to take advantage of it. I find this to be ideal. I've seen lots of muckraking shit saying "this is hard and we'll inevitably build something better/easier", but a lot of people thought the internet was too simple & stupid to work too.

Re:AMD Is Out to Lunch (-1, Flamebait)

Anonymous Coward | more than 5 years ago | (#25760869)

Man, you're not even wrong. You come across as a know-it-all but you are in fact clueless. It would be a waste of everybody's time to address your nonsense. How you were upmodded is a sign that the moderators are as clueless as you are.

Re:AMD Is Out to Lunch (1)

Have Brain Will Rent (1031664) | more than 5 years ago | (#25758559)

What is needed is a multicore processor in which all the cores perform pure MIMD vector processing.

Ummmm it seems to me that if it's MIMD then it's not vector processing.

Re:AMD Is Out to Lunch (0)

Anonymous Coward | more than 5 years ago | (#25759657)

Not quite sure I got the point of the "Parallel Programming Crisis" article you linked to. It sounded an awful lot like he said "multithreading isn't the way to go", followed by, "what we really need is a way to break programs down into multiple threads...". What he seems to suggest is the obvious solution everyone already uses that applies to certain sorts of parallel & repetitive problems (iterative problems on a grid like finite element analysis, cellular automata, etc.) The pictures he draws look a lot like the Cell architecture (which is exactly CPU multithreading + vector SIMD).

Vector SIMD looks to me like it fulfills a different niche - places where you do the same sort of processing on multiple data points. It's more limited, but it seems relatively cheap in terms of silicon so you can get up to a 4x improvement at minimal cost. A few things still irk me about the common implementations of SIMD (on Intel, particularly):
1. poor support for trig functions ... I want a fast SIMD version of sincos(), exp(), etc. not to make some poor substitute using a Taylor series.
2. wider registers so I can have 4 doubles ... using floats for scientific programming is like asking, "do you feel lucky, punk?"
3. better GCC documentation. They have a long list of functions listed, with no description of what they do.

Consumer GPGPU Will Happen, Just Not On ATI Cards (2)

rsmith-mac (639075) | more than 5 years ago | (#25756697)

Consumer products using GPGPU tech will happen (and indeed are happening), but it's sure as heck not going to be on ATI's GPUs. The performance is there, but the development tools are a joke. The main runtime (Brook+) is the technological equivalent of giving yourself a root canal every time you program in it, and the rest of the Stream SDK supporting toolset is more or less entirely AWOL. It's rather unfortunate, but compared to NVIDIA's CUDA the whole system is a joke; CUDA is an excellent toolkit that someone clearly put a lot of thought into in order to meet the needs of developers, while the Stream SDK/Brook+ still feels like a research project that nobody optimized for commercial use.

The hardware is there, but no one in their right mind is going to program with AMD's software. Everyone is waiting for OpenCL. Even if it doesn't really take off, it still can't possibly be worse than the Stream SDK.

Re:Consumer GPGPU Will Happen, Just Not On ATI Car (3, Insightful)

LordMyren (15499) | more than 5 years ago | (#25758405)

Yes yes yes and yes.

However, AMD already said it was backing OpenCL. I'm pissed as fuck I didn't hear anything about OpenCL this press cycle, but they're the only major graphics company to have ever stated they were getting behind OpenCL: I'm holding onto hope.

You're right: no one uses Brook. Trying to market it as in any way part of the future is a joke and a mistake: a bad one hopefully brought on by a $2.50 share price and pathetic marketing sods. On the other hand, I think people using CUDA are daft too; it's pre-programmed obsolescence, marrying yourself to proprietary tech that one company, no matter how hard they try, will never prop up all by themselves.

OpenCL isn't due out until Snow Leopard, which is rumored to be next spring. There's still a helluva lot of time.

Re:Consumer GPGPU Will Happen, Just Not On ATI Car (0)

Anonymous Coward | more than 5 years ago | (#25758471)

> On the other hand, I think people using CUDA are daft too; its pre-programmed obsolesence, marrying yourself to proprietary tech that one company no matter how hard they try will never prop up all by themselves.

Agreed, but it still is the "only" choice if you want to use that processing power.
Also, in my case most of the time was spent analyzing different algorithms to see if they are suitable for GPUs and still good enough, so at least that is not wasted.

Re:Consumer GPGPU Will Happen, Just Not On ATI Car (1)

mzechner (1351799) | more than 5 years ago | (#25758615)

given the recent track record of the khronos group (anyone remember the opengl 3.0 specs?) i don't think we'll see opencl anytime soon. i hope they don't screw that one up if it ever sees the light of day. cuda has gotten a lot of scientific attention in the past year. it's a nice paper machine: just take your favorite algorithm and port it, even if it's not the best fit for the architecture. sure, missing caching for global memory will bite you in the ass, but it's still better than ati's close to metal initiative or going the graphics pipeline way. any competing api will have a hard time against cuda. the ati thing is too little, too late.

what for the CPU? (1)

Eil (82413) | more than 5 years ago | (#25756825)

By leveraging thousands of processing cores on a graphics card for general computing calculations, tasks such as scientific simulations or geographic modelling, which are traditionally the realm of supercomputers, can be performed on smaller, more affordable systems.

So if I understand this right, they're adding transistors to the graphics card that would normally be added to the CPU. How exactly does that help? It's not really a graphics card anymore if you're doing general processing on it. Why not just put that horsepower on the CPU?

Have AMD forgotten that they're still a CPU business? I used to like AMD but now that they're having to compete with two industry powerhouses who have held the lead for a long time (Intel and nVidia) they seem to be getting a little schizophrenic.

Re:what for the CPU? (1)

Nikker (749551) | more than 5 years ago | (#25757673)

This is just the first iteration.

First they put together a bit of a hack and see what happens. I think it would be more cost effective to deliver both solutions on the die rather than on an expansion slot; it will also make the unit more potent clock for clock, since it is sitting on all the bandwidth the motherboard and northbridge can supply. Once we start getting good communication going with the device we can start playing with it.

I have to admit, though, that for all the complaining, open sourcing the specs of the card at the hardware level is really cool and will help as far as creative programming goes, since it opens the door to some crazy efficient programming as well as opportunities to take advantage of anomalies that might turn out to be interesting and change the course of the industry. There is nothing at all wrong with telling these companies what you want, and if they give it a chance I think they will be impressed.

Re:what for the CPU? (2, Informative)

marcosdumay (620877) | more than 5 years ago | (#25759841)

"It's not really a graphics cards anymore if you're doing general processing on it."

It is still capable of hight speed rendering, so it can still be used as a GPU.

"Why not just put that horsepower on the CPU?"

Because the processors at a GPU are simpler, it is way cheaper to add power there. Also, there is nothing stoping AMD to add transistors at both, since they are at different chips.

"Have AMD forgotten that they're still a CPU business?"

They are also at the GPU business, and this chip is on both.

Arcsoft (2, Funny)

Toll_Free (1295136) | more than 5 years ago | (#25756879)

Is this the same arcsoft that gave us .arc back in the 80s?

If so, I'd like to get rarsoft involved. Any idea how long it takes my P233 machine to unrar a .264 video? I mean, like HOURS.

Imagine if I could harness my video card, an S3Virge.... holy bat, crapman, I'd probably cut my derar times by what, a third?

--Toll_Free

Gut feeling... (1)

perlchild (582235) | more than 5 years ago | (#25757049)

My gut feeling is that they're talking shareholderese, and it has no bearing whatsoever on what technical merit their platform has... But how they can sell it to investors matters to them.


Open Source Audio and Video Encoders (2, Insightful)

Brit_in_the_USA (936704) | more than 5 years ago | (#25760663)

The traction for this will come when someone releases open-source audio, and later video, encoder libraries using GPU acceleration based upon this (or another) abstraction layer.

MP3, OGG, FLAC - get these out the door (especially the first one) and a host of popular GUI and CLI encoders would jump on the bandwagon. If there are huge speed gains and no incompatibility issues, because the abstraction layer and drivers are *stable* and retain *backwards compatibility* with new releases, then more people will see the light and there will be pressure to do the same with video encoders. Before you know it the abstraction layer would become de facto and all GPU makers would follow suit - at that point we (the consumers) would win by having something that works on more than one OS, not just on one particular card from one particular GPU maker, and we can get on with some cool innovations.