
Khronos Releases OpenCL Spec

kdawson posted more than 5 years ago | from the fast-work dept.

Graphics

kpesler writes "Today, the Khronos Group released the OpenCL API specification (which we discussed earlier this year). It provides an open API for executing general-purpose code kernels on GPUs — so-called GPGPU functionality. Initially backed by Apple, the API garnered the support of major players including NVIDIA, AMD/ATI, and Intel. Motivated by inclusion in OS X Snow Leopard, the spec was completed in record time — about half a year from the formation of the group to the ratified spec."
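For readers who haven't looked at the spec yet: the kernel language is C-based. A minimal sketch of what a data-parallel kernel looks like (illustrative names, not an example taken from the spec itself):

    /* OpenCL C: element-wise vector add, one work-item per element. */
    __kernel void vadd(__global const float* a,
                       __global const float* b,
                       __global float* c)
    {
        int i = get_global_id(0);   /* this work-item's global index */
        c[i] = a[i] + b[i];
    }

The host compiles such source at runtime through the OpenCL API and enqueues it over an N-element index space; the implementation maps the work-items onto the GPU's parallel hardware.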


115 comments

Second person to post (-1, Flamebait)

Anonymous Coward | more than 5 years ago | (#26044965)

Sucks cock.

Re:Second person to post (0)

Anonymous Coward | more than 5 years ago | (#26044983)

Only when I can get it.

"Slow Down Cowboy!" indeed!

yeah... (0, Funny)

Anonymous Coward | more than 5 years ago | (#26044987)

...but does it run Linux?

Re:yeah... (4, Funny)

CarpetShark (865376) | more than 5 years ago | (#26045273)

Yes, but you get 2**256 very tiny virtual consoles on screen, each with only 128 bits of RAM. On the up side, every console can be at a slightly different angle, with different specularity.

Dear moderator... (0, Offtopic)

CarpetShark (865376) | more than 5 years ago | (#26045399)

I don't think redundant means what you think it means ;)

If I had mod points (0, Offtopic)

Chrisq (894406) | more than 5 years ago | (#26045603)

I wouldn't know whether to mod you insightful, funny or off topic. You are all three!

Re:If I had mod points (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#26045901)

I wouldn't know whether to mod you insightful, funny or off topic. You are all three!

There, modded troll for you.

what does it DO? (4, Interesting)

Bizzeh (851225) | more than 5 years ago | (#26045017)

Is this simply a spec that people expect ATI and NVIDIA to conform to? Or is this another API outside of CUDA and CAL that wraps the two up, so that a single API can execute code on all GPGPUs?

Re:what does it DO? (4, Informative)

u38cg (607297) | more than 5 years ago | (#26045051)

No, it basically turns your graphics card into a general-purpose floating-point number cruncher, which is potentially useful for all sorts of things (although I predict that in a few years Moore's Law will render it as obsolete as the math co-processor).

Re:what does it DO? (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#26045057)

Go look at CNN, China just got attacked by nukes!

Re:what does it DO? (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#26045627)

Go look at Soviet Russia, nukes just got attacked by China.

Re:what does it DO? (1)

FunkyRider (1128099) | more than 5 years ago | (#26052545)

How about: your mom got attacked by my dick 20 years ago, and then here you come?! Nuke jokes aren't funny.

Re:what does it DO? (0)

Anonymous Coward | more than 5 years ago | (#26045137)

Wouldn't Moore's Law also work on GPUs?

Re:what does it DO? (3, Insightful)

Free the Cowards (1280296) | more than 5 years ago | (#26046299)

If the past few years are any indication, it works much better on GPUs than on CPUs.

Re:what does it DO? (5, Interesting)

moogord (904702) | more than 5 years ago | (#26045139)

It has applications beyond that; the SIMD architecture of GPUs makes them almost perfect as a hugely powerful non-general-purpose processor. Do you want to use this to handle AI? No. Do you want to use this to enable millions of crates to go flying every which way when you fire a rocket? Yes. It's essentially what GLSL is to NVIDIA's Cg, but instead of Cg it's an open (that's the important thing) CUDA replacement.
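To make the flying-crates example concrete, here is a hypothetical sketch of that kind of kernel: one work-item per crate, all stepped in lockstep, which is exactly the shape of work SIMD hardware eats up (the buffer names and the naive Euler integration are invented for illustration):

    /* One work-item per crate; a naive Euler step, purely illustrative. */
    __kernel void integrate_crates(__global float4* pos,
                                   __global float4* vel,
                                   const float dt)
    {
        int i = get_global_id(0);
        vel[i].y -= 9.81f * dt;    /* apply gravity to this crate */
        pos[i] += vel[i] * dt;     /* advance its position */
    }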

Re:what does it DO? (1)

deniable (76198) | more than 5 years ago | (#26045421)

If there are enough processing elements for SIMD, you could use it for the hard-core matrix operations in engineering analysis. I'm wondering what this could do for things like FEA.

Re:what does it DO? (3, Informative)

volsung (378) | more than 5 years ago | (#26045807)

CUDA is already doing great things in molecular dynamics, which bears some similarity to FEA:

HOOMD Benchmarks [ameslab.gov]

A single 8800 GTX reaching 75% of the performance of a 32-node cluster isn't bad. I imagine the GTX 280 would easily beat the cluster.

Re:what does it DO? (3, Insightful)

mrchaotica (681592) | more than 5 years ago | (#26046999)

The two major issues to be solved with that are that you need double-precision hardware (I can't remember whether the NVIDIA 9000 series supports that) and, more importantly, you need to write GPU algorithms for solving sparse matrices.

Re:what does it DO? (1, Informative)

Anonymous Coward | more than 5 years ago | (#26047475)

I'm about 99% sure that the 8800/9800 series are single precision, and the GTX 260/280 have double precision. I'd guess everything beyond the 260/280 will have double precision, but you never know.

Re:what does it DO? (1, Informative)

Anonymous Coward | more than 5 years ago | (#26048437)

Oh, and you need floating point exceptions, not silent over/underflow.
Otherwise you get nonsense out the other end and have to start over.

I've yet to see a graphics card that's fully IEEE-compliant.

Re:what does it DO? (0)

Anonymous Coward | more than 5 years ago | (#26054881)

you need to write GPU algorithms for solving sparse matrices

Let me rephrase that: you need to write algorithms for solving sparse matrices in the stream computing style.

Re:what does it DO? (0)

Anonymous Coward | more than 5 years ago | (#26055093)

Make that ONE major issue. The GTX280 already supports double precision floating point.

Re:what does it DO? (1)

smidget2k4 (847334) | more than 5 years ago | (#26045483)

Actually, this would be GREAT for AI. Game AI? I have no idea. But using a floating-point co-processor like this, you could do the calculations directly on, say, a robot, instead of having to send the data back to a mainframe for processing. It's also much cheaper than buying a really fast CPU for the same tasks.

This would be much faster than a general-purpose CPU for all sorts of machine learning concepts like hidden Markov models, computer vision, speech recognition... if only I had one of the cards...

Re:what does it DO? (4, Informative)

chris_oat (5511) | more than 5 years ago | (#26046971)

Do you want to use this to handle AI?

It depends on what kind of AI you are talking about. Path finding actually maps nicely to the GPU. AMD released a demo that showcases this by running a path finding simulation on the GPU for several tens of thousands of agents. Read all about it in Chapter 3 of the Advanced Real-Time Rendering course notes [amd.com] from SIGGRAPH 2008. Demo and screenshots here: Froblins Demo [amd.com]

Re:what does it DO? (1)

docgiggles (1425995) | more than 5 years ago | (#26048049)

I plan on using it to model proteins for my Science Olympiad team without upgrading the processor, given our limited team budget.

Re:what does it DO? (1)

ChrisA90278 (905188) | more than 5 years ago | (#26049687)

Actually, it could handle AI. If your AI project involves neural networks, this could make them run very fast. It could also do some rather simple everyday tasks well, such as transcoding media files, adjusting the color of images, and the first levels of processing for voice recognition. All of these tasks involve massive numbers of simple calculations.

Re:what does it DO? (1)

V!NCENT (1105021) | more than 5 years ago | (#26050017)

An open spec for crunching a shitload of calculations on a GPU on a card that handles dedicated raster graphics...

Call it far-fetched, but... do I smell the opportunity of real-time ray-traced games running on top of a rasterised 3D desktop?

Re:what does it DO? (0)

Anonymous Coward | more than 5 years ago | (#26045409)

The math coprocessor isn't obsolete; it's just integrated into the main CPU chip. That may be what happens to the GPU too: witness AMD's efforts to integrate ATI's product into its own.

math co-proc have always been there! (4, Informative)

malaba (9813) | more than 5 years ago | (#26045783)

They just have been integrated into the main chip, by the 486 era if I remember correctly.

By that time there were enough transistors to put everything inside the same silicon chip: faster, cheaper.

Today, every CPU has an IEEE floating-point unit.
To say we don't have math co-processors is misleading.

Re:math co-proc have always been there! (1)

Hatta (162192) | more than 5 years ago | (#26049359)

And eventually what happened with the FPU will happen with the GPU.

Re:math co-proc have always been there! (1)

SendBot (29932) | more than 5 years ago | (#26052307)

Intel had 486s without integrated math coprocessors (the 486SX), though the FPU was actually on the die but disabled (either intentionally or because the chip had a defective math coprocessor). They had a 487 you could couple with your 486SX that was still basically the same chip as the 486DX, and I think it may have just disabled the original 486SX and run everything on that one processor. From what I've heard, all that craziness was due to marketing rather than technical reasons.

Re:what does it DO? (3, Insightful)

/ASCII (86998) | more than 5 years ago | (#26046257)

The math co-processor wasn't made obsolete. It became so vital to system performance that Intel and friends started including it in on the CPU proper. These days, they call it an FPU.

Re:what does it DO? (1)

DragonWriter (970822) | more than 5 years ago | (#26053991)

The math co-processor wasn't made obsolete. It became so vital to system performance that Intel and friends started including it in on the CPU proper. These days, they call it an FPU.

A more cynical view is that as math coprocessors became more popular, Intel started losing market share to alternative coprocessor vendors, which it ended by putting the co-processor onto the CPU, making it much harder for alternatives to compete.

Re:what does it DO? (1)

Creepy (93888) | more than 5 years ago | (#26047349)

Um, the math co-processor never became obsolete; they started building it on-die rather than in a separate package.

And yes, it is similar to CUDA and CAL, but designed for any general-purpose parallel computing, not just GPUs, from what I can tell.

Re:what does it DO? (5, Informative)

san (6716) | more than 5 years ago | (#26045107)

is this simply a spec that people expect ati and nvidia to conform to? or is this another api outside of CUDA and CAL, that wraps the two up so that a single api can execute code on all GPGPU's?

It's the latter: a single API + kernel language for any GPU. Because both NVIDIA and AMD are represented in the contributor list, it actually has a chance of being adopted.

Re:what does it DO? (5, Informative)

mikrorechner (621077) | more than 5 years ago | (#26045611)

It's the latter: a single API + kernel language for any GPU. Because both NVIDIA and AMD are represented in the contributor list, it actually has a chance of being adopted.

According to heise.de [heise.de] (in German), NVIDIA says that OpenCL applications will run seamlessly on any GPU with a CUDA-compliant driver. Does anyone know if that applies to the proprietary Linux drivers?

If this really takes off, how long until the hardworking people from the x264 or VLC or ffmpeg or MPlayer projects can write an H.264/AVC decoder that uses the GPU?

Re:what does it DO? (3, Informative)

Anonymous Coward | more than 5 years ago | (#26045737)

Yes, there is a CUDA driver and SDK for Linux on NVIDIA's site: http://www.nvidia.com/object/cuda_get.html

Re:what does it DO? (0)

Anonymous Coward | more than 5 years ago | (#26045745)

If it says any GPU with a CUDA-compliant driver, then I don't see why it wouldn't apply to the Linux drivers, since they are CUDA-compliant. In my experience, CUDA actually seems to be better supported on Linux than it is on other OSes.

Re:what does it DO? (2, Insightful)

fuzzyfuzzyfungus (1223518) | more than 5 years ago | (#26046249)

I strongly suspect that Nvidia knows that a lot of CUDA crunching boxes are going to be running Linux. It's cheap, it's stable (not mainframe stable, but easily reliable enough for commodity crunching boxes), it has low overhead, and it is easy to administer. Plus, it runs on boring commodity x86 whiteboxes. I imagine that, even if they didn't support graphics acceleration on Linux, they would support CUDA.

Re:what does it DO? (1)

pato101 (851725) | more than 5 years ago | (#26046243)

Does anyone know if that applies to the proprietary Linux drivers?

The proprietary Linux drivers do CUDA, don't they? If I'm not wrong, NVIDIA's proprietary Linux drivers don't lack any features... why would they start lacking now?

Re:what does it DO? (1)

SkybuckFlying (1140667) | more than 5 years ago | (#26046473)

Who cares about the decoder? A 2.0 GHz Core 2 Duo can handle decoding without breaking a sweat. Encoding 20 minutes of 1080p H.264 video, however, takes a fair chunk of time. Encoding is where we want to see some action. OK, it's a nice _extra_ to free some CPU time when watching a video, but the typical use case is such that while you're watching a video you don't have much else with HIGH realtime priority going on. So... encoding: make it faster, make it smooth, make me cum!

Re:what does it DO? (1)

mikrorechner (621077) | more than 5 years ago | (#26046659)

Who cares about the decoder? A 2.0 GHz Core 2 Duo can handle decoding without breaking a sweat.

That's all well and good when we're talking about desktop systems. But think about MythTV media center PCs: if you could combine an Atom CPU and a passively cooled nVidia or AMD GPU, a super-silent, HDTV-capable home-grown set-top box would be possible.

Of course, an OpenCL encoder would help, too, for this kind of setup - broadcast TV encoding, for example.

Re:what does it DO? (2, Informative)

3.1415926535 (243140) | more than 5 years ago | (#26047531)

This isn't necessary for that because modern GPUs already have dedicated hardware for video decode [slashdot.org].

Re:what does it DO? (1)

drdaz (994457) | more than 5 years ago | (#26048301)

That's an nVidia-only API...

Re:what does it DO? (0)

Anonymous Coward | more than 5 years ago | (#26050027)

Better yet, drop that Atom and go with ARM.

Re:what does it DO? (1)

Jorophose (1062218) | more than 5 years ago | (#26052825)

But if you intend to play games, a VIA Nano with a possible GeForce chipset (coming sometime soon!) would be way better (or at least the CN896 chipset and a low-power GeForce).

ARM is low power, but other than that, what advantages does it have? Does it really scale to desktop performance? I mean, I'd love to have a small board like the BeagleBoard, but the thing isn't that great for desktop performance. Where are ARM's upsides?

Re:what does it DO? (1)

ConanG (699649) | more than 5 years ago | (#26046865)

There are already CUDA H.264 encoders and decoders. I don't know of any open source tools yet, though.

http://www.badaboomit.com/?q=node/4 [badaboomit.com]

Hardworking people at VLC you say? Bah. (1)

Vu1turEMaN (1270774) | more than 5 years ago | (#26047323)

VLC is fine if you don't care about preserving the quality of the format, or if you're too braindead to install proper codecs. Or if you want your integrated subtitles to look like shit, unless you run a nightly build with a few tweaks.

Honestly, I've found that Zoom Player's codec downloader and auto-configured silent install work the best for everyone, from the common person to the hardcore encoder to the obscure-format enthusiast. It's a nice little stand-alone exe that, when run, will actually update your codecs too if you don't have the newest version.
http://www.mediafire.com/?emxigti2dwh [mediafire.com]

The hard-working people of the REAL open source codec scene (i.e. none of the ones you mentioned) are who I want to look at this.

Re:what does it DO? (0)

Anonymous Coward | more than 5 years ago | (#26051279)

Check out Cuda Codecs on sourceforge

http://sourceforge.net/projects/cudacodecs/

Re:what does it DO? (1)

Jorophose (1062218) | more than 5 years ago | (#26052787)

Just out of curiosity, what about ATI?

(9600GT still looks like a good deal, but the HD4670 might just be slightly better for me at the moment... But do these include something like PhysX? Is it even a consideration?)

Re:what does it DO? (1)

larry bagina (561269) | more than 5 years ago | (#26045629)

Microsoft isn't one of the companies listed. That's probably a good thing, since they have a tendency to stall progress and then release their own proprietary version (à la OpenGL).

Re:what does it DO? (0)

Anonymous Coward | more than 5 years ago | (#26048881)

ATI and NVIDIA have already committed to using OpenCL (NVIDIA probably already has it working on CUDA-capable hardware). It is almost identical to CUDA in many ways, but will be slightly more vendor-neutral.

I hope this becomes a cross-platform thing. (4, Insightful)

Anonymous Coward | more than 5 years ago | (#26045037)

There's no way I'm writing a single line of CUDA code when it only works with nVidia hardware, and I think there are a lot of other people like me. This could open up GPGPU programming to a much wider group of programmers.

Re:I hope this becomes a cross-platform thing. (0)

LingNoi (1066278) | more than 5 years ago | (#26045663)

No, they aren't like you, because everyone else knew you could run CUDA on ATI too.

Re:I hope this becomes a cross-platform thing. (0)

Anonymous Coward | more than 5 years ago | (#26046351)

Link please? There have been grumblings of people accomplishing this, but not with great success... not anything I've seen, anyway.

Re:I hope this becomes a cross-platform thing. (2, Insightful)

drfireman (101623) | more than 5 years ago | (#26046369)

CUDA on ATI would help me a bit. But a quick googling of this concept turned up a bunch of pages saying it'll never happen, doesn't work, etc. Could you post a link?

No way for CUDA on ATI (2, Interesting)

DrYak (748999) | more than 5 years ago | (#26050001)

CUDA on ATI can't be done easily.
As a writer of CUDA code, I can tell you that a lot of CUDA isn't as high-level as nVidia's marketing would like you to believe, and is thus very much tied to specific properties of nVidia's current hardware.

The lower level of CUDA enables the programmer to do some really clever optimisations. But since the hardware peculiarities aren't abstracted away, writing a compatible implementation for chips from a different manufacturer, which aren't exactly the same underneath, isn't trivial, even if the specifications of CUDA *are* published.

On the other hand, Brook really is a high-level language which completely abstracts the details of the hardware implementation. The BrookGPU implementation supports several back-ends, including an OpenGL + GLSL back-end which works on GPUs from both ATI and nVidia.
Though I haven't followed the latest developments since ATI hired the main guy to write Brook+ for them.

Because it is supposed to be vendor-neutral, OpenCL looks promising too, but I haven't read the specifications yet.

What about other chipmakers? (2, Interesting)

elh_inny (557966) | more than 5 years ago | (#26045059)

While I see quite a few members that I wasn't expecting (Creative Labs), my concern is that some companies that should definitely be participating in this aren't.
By that I mean graphics chip makers such as VIA or S3; for now it seems we're tied to the major players (nVidia, AMD, Intel) for desktop/laptop implementations, and that's never good for the consumer.

Either way, the spec itself is a great initiative and I can't wait to get my hands on beta builds of Snow Leopard to try it out...

Re:What about other chipmakers? (2, Interesting)

elh_inny (557966) | more than 5 years ago | (#26045119)

Oops... I just noticed S3 is on the list; they managed to get a lot of companies on board after all.

Re:What about other chipmakers? (1)

ThePhilips (752041) | more than 5 years ago | (#26045337)

Well, it doesn't mean a thing. You know, M$ was on the ODF OASIS board for quite some time too...

To me, the litmus test of OpenCL would be an independent (of any video card vendor) portable implementation which runs on Linux.

Participation doesn't say much about how the spec will develop.

To put it in other words: I'm waiting for a reaction from the Mesa and X.org folks. If their reaction is positive, the news will get me excited.

Re:What about other chipmakers? (3, Interesting)

larry bagina (561269) | more than 5 years ago | (#26045675)

Open Source means all of us. X.org and Mesa don't have a magic cow that shits code; it has to be written by people in their spare time (and X.org is stagnating due to a lack of developer interest). Nobody on the OpenCL list particularly gives a shit about Linux, and adoption will happen with or without Linux or open source. Instead of waiting for other people to tell you how to feel, maybe you should sit down and read the spec.

Re:What about other chipmakers? (0)

Anonymous Coward | more than 5 years ago | (#26047049)

(burp). They used to have a magic cow. Then I had a nummy steak and now I shits the code.

Re:What about other chipmakers? (1)

setagllib (753300) | more than 5 years ago | (#26046259)

I don't think your litmus test is reasonable. Because the "big 3" video vendors have signed on, virtually all modern Linux desktops will end up with OpenCL support, and in the case of Intel and perhaps AMD, with open source drivers. nVidia will hold out on open sourcing, but not many people will care because the performance will be good anyway.

Having an independent implementation here is not important, and not at all useful, given that you'd be independent for the sake of it rather than letting the vendor write you an open source driver!

Re:What about other chipmakers? (1)

JebusIsLord (566856) | more than 5 years ago | (#26049825)

What does this have to do with the Mesa and X.org teams? They're graphics guys, and this isn't for graphics. I'd like to see inclusion in GCC (auto-vectorization maybe?) though.

Re:What about other chipmakers? (1)

Repossessed (1117929) | more than 5 years ago | (#26049449)

Does VIA make anything with enough power for this to matter? All their graphics (and, for that matter, everything else) seem to be bare-minimum-or-less hardware that competes on price and power.

Don't get me wrong, I love them*, but I have a hard time imagining it'd be worth much to exploit the power in their graphics.

*Except that they don't make a server-quality chipset to go with their processors. That pisses me off.

Great! (3, Insightful)

johannesg (664142) | more than 5 years ago | (#26045077)

Now, if only they could do the same for OpenGL... Which is needed by a lot more people, and is in my opinion a lot more important for anyone who wishes to be free of Windows.

Re:Great! (0)

Anonymous Coward | more than 5 years ago | (#26045149)

...to be free of Windows

Sounds like Windows is a disease. Come on, don't be so critical.

Re:Great! (0)

Anonymous Coward | more than 5 years ago | (#26045231)

Actually, many things about Windows are a giant PITA for people who have experienced "bliss" elsewhere. File system semantics, API, dynamic libraries, using development packages of libraries (headers), software package management etc. are all very painful. Console + shell is almost unusable.

Re:Great! (0)

Anonymous Coward | more than 5 years ago | (#26045371)

Actually, many things about Windows are a giant PITA for people who have experienced "bliss" elsewhere.

*nod* *nod*

Re:Great! (0)

CaptnMArk (9003) | more than 5 years ago | (#26045479)

try this in a command prompt:

start c:\

start "C:\program files"

???

start /?

and laugh...

Re:Great! (1, Informative)

Anonymous Coward | more than 5 years ago | (#26046137)

Sorry, your joke is that the utility functions exactly as documented? Really?

Re:Great! (1)

DAldredge (2353) | more than 5 years ago | (#26052041)

Explain to me what is so wrong with PowerShell.

Re:Great! (1, Insightful)

AliasMarlowe (1042386) | more than 5 years ago | (#26045411)

...to be free of Windows

Sounds like windows is a disease. Come on don't be so critical

Windows is a condition, rather than a disease. It has unpleasant and often expensive consequences (spyware, antivirus subscriptions, etc.) for those afflicted with it and for many others (spam botnets, net worms, etc.). Luckily, it is avoidable and curable in many (but far from all) cases: just use BSD or Linux.

Re:Great! (1)

ThePhilips (752041) | more than 5 years ago | (#26045349)

Now, if only they could do the same for OpenGL.

Care to elaborate?

OpenGL is quite well supported on both Mac OS X and Linux. So with OpenGL you are already free.

Re:Great! (5, Insightful)

robthebloke (1308483) | more than 5 years ago | (#26045953)

I think the OP meant, "If they could finally get around to ratifying an openGL 3.1 specification in 6 months (instead of being 2 or 3 years late as GL3.0 was); turn it into a useful standard that people actually want to use (which GL3.0 is not); and finally make good on all the things we were promised for 3.0, which they ended up ditching at the last minute. If that happens linux/mac openGL developers around the world will feel less dirty than they do right now"....

He wasn't implying anything about windows + GL as such, more making the observation that openGL is vital to Mac/linux - and as such those OS's are very much at the mercy of the Khronos group's actions (or more accurately - no action at all as was the case with GL3).

Re:Great! (1)

johannesg (664142) | more than 5 years ago | (#26047083)

I think the OP meant, "If they could finally get around to ratifying an openGL 3.1 specification in 6 months (instead of being 2 or 3 years late as GL3.0 was); turn it into a useful standard that people actually want to use (which GL3.0 is not); and finally make good on all the things we were promised for 3.0, which they ended up ditching at the last minute. If that happens linux/mac openGL developers around the world will feel less dirty than they do right now"....

He wasn't implying anything about windows + GL as such, more making the observation that openGL is vital to Mac/linux - and as such those OS's are very much at the mercy of the Khronos group's actions (or more accurately - no action at all as was the case with GL3).

Thank you sir. That is indeed exactly what I meant, except that you phrased my frustrations a lot better than I did.

Re:Great! (1)

tyrione (134248) | more than 5 years ago | (#26047113)

I think the OP meant, "If they could finally get around to ratifying an openGL 3.1 specification in 6 months (instead of being 2 or 3 years late as GL3.0 was); turn it into a useful standard that people actually want to use (which GL3.0 is not); and finally make good on all the things we were promised for 3.0, which they ended up ditching at the last minute. If that happens linux/mac openGL developers around the world will feel less dirty than they do right now"....

He wasn't implying anything about windows + GL as such, more making the observation that openGL is vital to Mac/linux - and as such those OS's are very much at the mercy of the Khronos group's actions (or more accurately - no action at all as was the case with GL3).

The 6 months for OpenCL built upon 3 or more years of Apple's own work; what the 6 months produced is the fleshed-out spec that satisfies all of the big 3 GPU vendors and the more specialized vendors. Perhaps now that this is done, Apple will do more of the heavy lifting and accelerate OpenGL 3.1 so it's done soon as well, seeing how much of a role it will play in Snow Leopard alongside OpenCL.

Re:Great! (1)

jerep (794296) | more than 5 years ago | (#26049013)

The OpenCL spec is also way shorter than the OpenGL spec, and so are the header files. If you want to compare the time it takes to bring specifications out, you have to compare the resulting size of those specifications.

Re:Great! (1)

beelsebob (529313) | more than 5 years ago | (#26045375)

Actually, that's kinda the point. OpenGL needed to go much more general-purpose (read: push a load of vertices and then do general-purpose computing on them to turn them into colours). OpenCL is rather more flexible than CUDA in that it can read from vertex buffer objects and write to framebuffers, which means that it could sensibly be used for "software" graphics engines.
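The GL-sharing entry points are in the 1.0 registry's cl_gl.h (linked in a comment further down). A hedged fragment of what sharing a vertex buffer looks like, assuming ctx, q, and vbo were created elsewhere and the context was set up for GL sharing:

    /* Wrap an existing OpenGL VBO as an OpenCL buffer, and acquire/release
     * it around kernel launches (cl_gl.h, OpenCL 1.0). */
    cl_int err;
    cl_mem clbuf = clCreateFromGLBuffer(ctx, CL_MEM_READ_WRITE, vbo, &err);
    clEnqueueAcquireGLObjects(q, 1, &clbuf, 0, NULL, NULL);
    /* ... enqueue kernels that read or rewrite the vertex data ... */
    clEnqueueReleaseGLObjects(q, 1, &clbuf, 0, NULL, NULL);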

Re:Great! (0)

Anonymous Coward | more than 5 years ago | (#26048983)

I've wondered about this since OpenGL 3.0 came out and I heard a few people suggest some new library to use. Seeing people doing ray tracing with CUDA (go look on the NVIDIA website; there are a few examples listed), I thought, "Well, hell. When OpenCL comes out, if someone's solution is to come out with a new library, build it in that. Then you gain the graphics hardware acceleration while only needing OpenCL drivers."

Of course, as I continued thinking, I suppose you could make a way to have some sort of drivers for the library that could use that native acceleration for things when available on the cards instead of just the OpenCL code. But, in theory, it could make those drivers a lot easier to make for the hardware guys. Now, I'm not sure if that would be true. Just playing with thoughts in my head, thinking of some of the complaints with OpenGL brought up.

Of course, that would take someone willing to do it. I suppose I could. I've taken interest in wanting to learn OpenCL just for the hell of it, and have been playing around with OpenGL in mostly C++. As I thought about it, I've even looked up some of the concepts of doing such a thing, and found some decent resources.
But uh... I have this nasty habit of getting my program/library nice and functional with just some cleaning up needed and maybe actually making it do something more significant than what something else already does, and then going "Wait. If I did it this way, it would be so much better, for so many reasons!" And then I just ditch the whole thing and start from scratch. And then I get distracted by another project that I'm going to do the same thing on, and. Well. Yeah, I never get anything done with my programming...

Re:Great! (1)

SpinyNorman (33776) | more than 5 years ago | (#26046149)

You've got that exactly backwards.

It's OpenCL that's trying to follow in the footsteps of OpenGL, not vice versa.

OpenGL is an open specification that has many implementations taking rather good advantage of graphics hardware! ;-)

OpenCL wants to establish a similar standard but for GPU based kernel execution rather than graphics rendering.

..but (-1, Redundant)

Anonymous Coward | more than 5 years ago | (#26045199)

...does it run linux?

Namespace conflict (1)

rombust (1361309) | more than 5 years ago | (#26045463)

Looking at http://www.khronos.org/registry/cl/api/1.0/cl_gl.h [khronos.org], they are using the CL prefix. This will cause a huge headache for existing code that uses the ClanLib SDK. http://clanlib.org/docs/clanlib-0.9.0/reference/modules.html [clanlib.org]

Re:Namespace conflict (1)

jgtg32a (1173373) | more than 5 years ago | (#26045685)

This looks like a job for sed

Re:Namespace conflict (1)

ArchMageZeratuL (1276832) | more than 5 years ago | (#26045755)

Doesn't seem to be an actual issue, though. OpenCL uses cl_lowercase for typedefs, CL_UPPERCASE for defines, and clCamelCase for functions, while ClanLib seems to use CL_lowercase for everything.
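A fragment makes the three prefixes concrete (ctx and size are assumed to be declared elsewhere; the calls are from the 1.0 headers):

    cl_int err;                           /* cl_lowercase: a typedef */
    cl_mem buf = clCreateBuffer(          /* clCamelCase: a function */
        ctx, CL_MEM_READ_ONLY,            /* CL_UPPERCASE: a #define */
        size, NULL, &err);
    if (err != CL_SUCCESS) { /* handle the error */ }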

Re:Namespace conflict (0)

Anonymous Coward | more than 5 years ago | (#26047681)

Also, no one uses ClanLib, so who cares?

Re:Namespace conflict (0)

Anonymous Coward | more than 5 years ago | (#26048201)

Looks like OpenCL uses clNaming, while ClanLib uses CL_Naming. No problems here (and also not a namespace conflict, it's C, not C++).

What about other vector processors... like Cell? (0)

Anonymous Coward | more than 5 years ago | (#26045571)

Did the other major players in SIMD (or SIMT, as NVIDIA has it) systems also sign on?

I would like to know if code written in OpenCL will be able to use Cell processors or other multi-threaded systems just as easily.

Re:What about other vector processors... like Cell (2, Insightful)

larry bagina (561269) | more than 5 years ago | (#26046857)

OpenCL code is a subset of C, with an API that the device must implement. So it will work on Cell (or any CPU) with gcc and a library implementing the API.
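The host API backs this up: device selection is generic, so a Cell implementation could advertise its SPEs under a non-GPU device type. A hedged sketch (the fallback-to-CPU policy here is invented for illustration):

    #include <CL/cl.h>

    /* Prefer an accelerator-class device (how Cell-style hardware could
     * present itself), falling back to the host CPU. */
    cl_device_id pick_device(cl_platform_id platform)
    {
        cl_device_id dev;
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_ACCELERATOR,
                           1, &dev, NULL) != CL_SUCCESS)
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &dev, NULL);
        return dev;
    }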

Re:What about other vector processors... like Cell (1)

Jorophose (1062218) | more than 5 years ago | (#26053019)

You think we could get this little bugger working on Linux (and BSDs, etc) then?

http://www.leadtek.com.tw/eng/tv_tuner/specification.asp?pronameid=447&lineid=6&act=2 [leadtek.com.tw]

It looks very nice, being a 4-core Cell CPU on a card with 128MB of XDR. PCIe x1 (it's too bad I can't find PCIe splitters/risers...) and half-height, but it uses a fan (I'm sure we can fix that, heh), so it's almost the right card for a MythTV backend (or a combo, or a frontend, since it can decode video).

It would also be nifty for those of us who like to do... "creative" programming.

But Leadtek hobbled it, with drivers only for XP and Vista, and requiring their bloatware to go with them.

Re:What about other vector processors... like Cell (1)

644bd346996 (1012333) | more than 5 years ago | (#26053167)

The Cell architecture is mentioned in the intro section of the spec as an example of what OpenCL might be good for. The language is generic enough that most any vector processor should work as a target.

Moore's Supercomputer. (1)

Ostracus (1354233) | more than 5 years ago | (#26045601)

Actually, I'm glad to hear this. With IGP and Crossfire/SLI with dual-GPU cards in a quad arrangement, one could have an inexpensive (relatively speaking) supercomputer under one's desk. Throw in the upcoming quad-cores on 45nm fabrication, and now is a good time to be into computers.

What about your other stuff Khronos?! (0)

Anonymous Coward | more than 5 years ago | (#26045721)

I wish Khronos would put some effort into their OpenVG [khronos.org] stuff, because that is something that is badly needed. Cairo is not a worthwhile substitute because its performance is not very good and the API is too big (and it sucks), and Glitz is too complicated/buggy.

Currently there is no decent and complete free OpenVG implementation. The reference implementation is slow, ShivaVG is incomplete and appears dead, the Qt version requires Qt, ginkoVG doesn't support OpenGL, etc.

Re:What about your other stuff Khronos?! (1)

644bd346996 (1012333) | more than 5 years ago | (#26053183)

Khronos isn't in the business of implementing their specs. They just define them. And OpenVG was just updated.

That's nice (1)

800DeadCCs (996359) | more than 5 years ago | (#26045785)

Now, work on getting double-precision.

Re:That's nice (1, Informative)

Anonymous Coward | more than 5 years ago | (#26048929)

It's in the spec as an extension. The feature will appear when hardware implements it.

Drivers? (0)

Anonymous Coward | more than 5 years ago | (#26046193)

Anyone know how long we have to wait for an OpenCL enabled driver from nVidia or ATI?

The 80's called, they want their PRAGMA back (0, Troll)

fletch_wins (1332929) | more than 5 years ago | (#26048595)

Why are all HPC languages targeting the GPU pretending that compiler technology stopped with C? I can understand wanting the low-level control, but not always. How about some higher-level constructs to encapsulate the often-repeated GROUPSIZE, group_id, local_id logic?

There's only one way of accessing memory that performs, so make the compiler enforce it, unless I explicitly want to use the low-level API to do something fancy.
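For anyone who hasn't written one of these kernels, this is the boilerplate in question; nearly every kernel opens with the same index arithmetic (a representative sketch, not code from the spec):

    __kernel void scale(__global float* data, const float k, const int n)
    {
        /* The oft-repeated opening lines: */
        int gid = get_group_id(0) * get_local_size(0) + get_local_id(0);
        /* get_global_id(0) already computes the same value; built-ins like
         * it are about the only higher-level convenience on offer. */
        if (gid < n)
            data[gid] *= k;
    }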

Plugins (1)

jlebrech (810586) | more than 5 years ago | (#26048995)

Plugins for ffmpeg, or any other codec for that matter, could provide a huge boost for Linux. What about Blender or YafRay support?

There would be a huge uptake of Linux among video editors and 3D graphics artists.

FAIL (0)

Anonymous Coward | more than 5 years ago | (#26053789)

I cannot express how disappointed I am by OpenCL. Take CUDA, add some levels of overengineering, and make it as unusable as OpenGL.

CUDA is far from perfect, but to a certain degree free of hassles: get your memory with cudaMalloc(), write a __global__ function, and launch it. Done.
Things that can be done with 5 lines of CUDA now need 3 pages of OpenCL setup code. It's not rocket science, but it's very annoying to do, and I really hope CUDA will stay around for a while.
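For those who haven't seen it, the setup being described looks roughly like this on the host side: a compressed sketch with all error handling stripped, where src, bytes, and global are assumed to be declared elsewhere:

    cl_platform_id   plat; clGetPlatformIDs(1, &plat, NULL);
    cl_device_id     dev;  clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context       ctx  = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q    = clCreateCommandQueue(ctx, dev, 0, NULL);
    cl_program       prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel        k    = clCreateKernel(prog, "vadd", NULL);
    cl_mem           buf  = clCreateBuffer(ctx, CL_MEM_READ_WRITE, bytes, NULL, NULL);
    clSetKernelArg(k, 0, sizeof(buf), &buf);
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);

The CUDA equivalent really is closer to a cudaMalloc(), a __global__ function, and a <<<blocks, threads>>> launch.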
