
An Open Source Compiler From CUDA To X86-Multicore

timothy posted more than 4 years ago | from the abstraction-gains-a-layer dept.

Programming 71

Gregory Diamos writes "An open source project, Ocelot, has recently released a just-in-time compiler for CUDA, allowing the same programs to be run on NVIDIA GPUs or x86 CPUs and providing an alternative to OpenCL. A description of the compiler was recently posted on the NVIDIA forums. The compiler works by translating GPU instructions to LLVM and then generating native code for any LLVM target. It has been validated against over 100 CUDA applications. All of the code is available under the New BSD license."
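For readers unfamiliar with CUDA, here is a minimal sketch of the kind of single-source program Ocelot targets; the kernel and sizes are made up, not taken from Ocelot's test suite. The same .cu file compiles to PTX with nvcc, and Ocelot can then JIT that PTX either back to a GPU or, through LLVM, to native x86 code.

    // Hypothetical example kernel; any CUDA program compiled to PTX would do.
    #include <cuda_runtime.h>

    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];   // one element per thread
    }

    int main() {
        const int n = 1 << 20;
        float *x = 0, *y = 0;
        cudaMalloc((void **)&x, n * sizeof(float));
        cudaMalloc((void **)&y, n * sizeof(float));
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaThreadSynchronize();      // wait for the kernel to finish
        cudaFree(x);
        cudaFree(y);
        return 0;
    }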


71 comments

Open Source poll: AVR or PIC? (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#30537722)

Which one do you prefer, and why?

Alternative? (4, Insightful)

Guspaz (556486) | more than 4 years ago | (#30537744)

This isn't an alternative to CUDA; it lets CUDA code run on x86, but still doesn't do anything for AMD graphics cards. In other words, your choices as a developer are to use OpenCL and have your code run everywhere (AMD, nVidia, x86 slowly), or use CUDA and have your code run on nVidia or x86 slowly.

What possible reason could you have to want to be locked into one GPU vendor?

Re:Alternative? (2, Insightful)

Anonymous Coward | more than 4 years ago | (#30537834)

I think CUDA was out there first; OpenCL came along later. And I see it as a bad thing really, since it binds you to using an Nvidia card. I hope it won't become popular; I don't want to be stuck with Nvidia. (OT: especially once AMD has its Linux drivers open sourced.)

Re:Alternative? (1, Informative)

beelsebob (529313) | more than 4 years ago | (#30538306)

Pardon? OpenCL does not in any way bind you to an nVidia card; it is a standard created by Apple (not nVidia) and handed over to Khronos to manage as an open standard (also not nVidia). ATI have just announced OpenCL drivers for their cards.

Re:Alternative? (3, Informative)

Sloppy (14984) | more than 4 years ago | (#30538428)

He means CUDA was here first, and it does (did) lock you into Nvidia. So if you jumped on the bandwagon early, your code is Nvidia-only. If you waited for a standard (OpenCL), or ported your app, then you're cross-platform.

Re:Alternative? (2, Informative)

Elbows (208758) | more than 4 years ago | (#30539646)

On top of that, the CUDA tools are still much better than OpenCL. OpenCL is basically equivalent to CUDA's low-level "driver" interface, but it has no equivalent to the high-level interface that lets you combine host/device code in a single source, etc. CUDA also supports a subset of C++ for device code (e.g. templates), which I don't believe is the case for OpenCL. CUDA also has a debugger (of sorts), profiler, and in version 3 apparently a memory checker. But I haven't been following OpenCL that closely lately -- it may be catching up on the tool front.
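A rough sketch of those two points, single-source host/device code and C++ templates in device code; the names and sizes here are invented for illustration:

    // Device code: a templated kernel (not currently expressible in OpenCL C).
    template <typename T>
    __global__ void scale(T *data, T factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= factor;
    }

    // Host code in the same .cu file; nvcc splits the two behind the scenes.
    void scaleOnDevice(float *d_data, int n) {
        scale<float><<<(n + 127) / 128, 128>>>(d_data, 2.0f, n);
    }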

If you're developing an in-house project where you have control over the hardware you're going to run on, or you know that most of your customers have Nvidia cards anyway, there are still good reasons to go with CUDA.

Re:Alternative? (0)

Anonymous Coward | more than 4 years ago | (#30539110)

I didn't say that. OpenCL doesn't, CUDA does.

Re:Alternative? (2, Informative)

TeXMaster (593524) | more than 4 years ago | (#30538438)

I think Cuda was first out there, later on OpenCL occurred.

Yes and no. CUDA and CTM/Brook+/FireStream came to life at more or less the same time, when NVIDIA and ATI realized that GPGPU (general-purpose computing on the GPU) was gaining traction in the scientific computing world (it was originally implemented using OpenGL and shaders).

OpenCL was essentially an effort (by Apple first and foremost, although obviously with cooperation from both NVIDIA and ATI) to get a standardized interface to SIMD multicore programming. It's actually quite close to low-level CUDA programming, although I'm not sure how close it is to the ATI solution (I've tried going through the ATI docs a couple of times, but their stuff is absolutely abysmal compared to the NVIDIA docs and SDKs, sadly).

Re:Alternative? (1)

Trepidity (597) | more than 4 years ago | (#30539278)

As far as I can tell, OpenCL is pretty much based on CUDA, not on an attempt to unify CUDA and CTM/Brook+/FireStream. That's partly because ATI's solutions never really caught on, and have been sorta ignored.

Re:Alternative? (2, Informative)

Anonymous Coward | more than 4 years ago | (#30539348)

OpenCL isn't ALL that close to either CUDA or anything from AMD (CAL, Brook+). The status quo with AMD is that their OpenCL implementation is very immature, e.g. it doesn't support a lot of fairly basic and highly desirable OpenCL "extensions" (actually it didn't support ANY until about 2 days ago, and now they're just beta testing a few of the most rudimentary ones). Additionally there are still lots of issues with missing / unclear documentation, missing features, bugs, development / runtime platform portability issues, et al. Most significantly, the OpenCL performance is still a fraction of what is commonly achievable with Brook+ or CAL in many common scenarios on the AMD platform. This is sometimes / often true for their 58xx series boards, and pervasively so for their older 4xxx series cards (which, by architectural limitations as well as by lack of planned OpenCL development toolchain support / optimization, will never really perform well with OpenCL).

On the NVIDIA side, CUDA performance and usage flexibility is still typically and substantially higher than is achievable via OpenCL, since obviously CUDA exists to fairly optimally exploit their GPU architectural capabilities whereas OpenCL is a generic GPU-vendor / architecture "neutral" platform that doesn't give as much card specific control as CUDA (or CAL in AMD's case).

Development tools and platform portability are still poor in both the NVIDIA and AMD cases. NVIDIA, for instance, lacks CUDA/OpenCL support on platforms like Solaris and FreeBSD. AMD, AFAIK, doesn't even have graphics driver support (much less OpenCL/Stream/CAL/Brook+) on BSD, Solaris, or Mac(?), and the support is still pretty rocky on Linux.

Linux open source drivers for AMD hardware are still barely at the stage of providing high quality basic 2D functionality for R600/R500 GPUs; R700 isn't there yet, and R800 is farther out still. In none of these cases does anything like Stream / Brook+ / OpenCL work with the open source driver. It seems it may take the better part of 2010 before we see even the first good previews of OpenCL and decently useful 3D graphics running on R600/R700/R800 GPUs with Gallium, X.org, Mesa, et al. all coming together with the open source radeon drivers.

Basically if you want high performance within the next few months, plan on writing GPU model specific code in CUDA for NVIDIA, and deal with platform / software / card portability issues that will come up frequently. If you're targeting AMD, either target R800 generation cards only, or assume that you'll be getting only a fraction of the performance from R700/R600 cards using OpenCL, and even in the case of R800, don't assume there will be production quality comprehensive high performance driver/toolchain support before mid to late 2010.

If you just want stuff to be "portable" across GPU vendors and do graphics-like computations with the GPUs, use either OpenCL or DX11 (on Windows Vista/7 platforms), or just stick to shaders in DX9/DX10 for even better portability.
Don't expect OpenCL to be "write once, run anywhere" with minimal developer issues or end-user runtime configuration / linking issues for at least a few more months in the case of AMD/NVIDIA on Windows. As of now, even a lot of developers have issues with DLL compatibility / versioning / paths / capability detection, etc.

I think 18 months from now it may really be a more streamlined experience to use OpenCL across OS platforms and GPU cards, but probably still mostly for GPU generations that are DX11 and beyond, not so much the legacy models (which are still 95% of the deployed market).

Re:Alternative? (1)

TeXMaster (593524) | more than 4 years ago | (#30542276)

OpenCL isn't ALL that close to either CUDA or anything from AMD (CAL, Brook+). The status quo with AMD is that their OpenCL implementation is very immature, e.g. it doesn't support a lot of fairly basic and highly desirable OpenCL "extensions" (actually it didn't support ANY until about 2 days ago, and now they're just beta testing a few of the most rudimentary ones). Additionally there are still lots of issues with missing / unclear documentation, missing features, bugs, development / runtime platform portability issues, et al. Most significantly, the OpenCL performance is still a fraction of what is commonly achievable with Brook+ or CAL in many common scenarios on the AMD platform. This is sometimes / often true for their 58xx series boards, and pervasively so for their older 4xxx series cards (which, by architectural limitations as well as by lack of planned OpenCL development toolchain support / optimization, will never really perform well with OpenCL).

On the NVIDIA side, CUDA performance and usage flexibility is still typically and substantially higher than is achievable via OpenCL, since obviously CUDA exists to fairly optimally exploit their GPU architectural capabilities whereas OpenCL is a generic GPU-vendor / architecture "neutral" platform that doesn't give as much card specific control as CUDA (or CAL in AMD's case).

I do wonder how much of this is because OpenCL is vendor-neutral and thus 'far' from the underlying architecture, and how much depends on the quality of the compilers. I suspect that NVIDIA does not have much interest in optimizing their OpenCL compiler as much as their CUDA compiler, for the obvious reason that with CUDA they have vendor lock-in and can sell more hardware, whereas with OpenCL there is the (remote) possibility that a better compiler from ATI might lead people to look at the other hardware more.

Re:Alternative? (1)

DeKO (671377) | more than 4 years ago | (#30543712)

On the NVIDIA side, CUDA performance and usage flexibility is still typically and substantially higher than is achievable via OpenCL, since obviously CUDA exists to fairly optimally exploit their GPU architectural capabilities whereas OpenCL is a generic GPU-vendor / architecture "neutral" platform that doesn't give as much card specific control as CUDA (or CAL in AMD's case).

That's not true. I've run many equivalent CUDA and OpenCL kernels on NVIDIA cards, and they both perform the same, pretty much in accordance with those benchmarks [sisoftware.net].

There's no reason for OpenCL code to be any slower than CUDA code (the same compiler is used, with only small changes in the frontend). Maintainability, on the other hand... with CUDA you can launch a kernel just as if you were calling a function; with OpenCL you have almost a dozen setup steps (reminds me of programming Win32 applications directly with raw Win32 API calls). Function and operator overloading, templates... those are nice things to have at your disposal when you need them. Let's hope they make an "OpenCL++" standard too.
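To make the "dozen setup steps" concrete, here is roughly what the two launches look like; error checking is omitted and the kernel name "scale" is assumed to exist in the source string, so treat this as a sketch rather than production code.

    // CUDA runtime API: launching a kernel reads like a function call.
    //     scale<<<blocks, threads>>>(d_data, 2.0f, n);

    // OpenCL host API: the equivalent launch needs explicit setup.
    #include <CL/cl.h>

    void launchWithOpenCL(const char *src, cl_mem buf, int n) {
        cl_platform_id platform;
        cl_device_id device;
        cl_int err;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
        cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "scale", &err);
        float factor = 2.0f;
        clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
        clSetKernelArg(k, 1, sizeof(float), &factor);
        clSetKernelArg(k, 2, sizeof(int), &n);
        size_t global = (size_t)n;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clFinish(q);
    }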

Re:Alternative? (0, Offtopic)

Anonymous Coward | more than 4 years ago | (#30538754)

How do you mod retarded?

Re:Alternative? (2, Funny)

Score Whore (32328) | more than 4 years ago | (#30538830)

Hard to say, but it must be easy since there are lots of mods that are, at the very least, a bit challenged. If you know what I mean.

Re:Alternative? (0, Offtopic)

Cytotoxic (245301) | more than 4 years ago | (#30539932)

-----Hard to say, but it must be easy since there are lots of mods that are, at the very least, a bit challenged. If you know what I mean.-----

<The Tick>

                      Nope.

</The Tick>

Ok, that one was strictly for the two other people who would get a Warburton/Tick reference. But the three of us laughed our asses off..... "Yes, it is I, Bat-Manuel! I saved them three times later that night, if you know what I mean." Heh, funny... Ok, well, if you had watched the show you'd laugh too. And the stupid thing wouldn't have been canceled after a half-season. So basically it's doubly your fault that you didn't get it.

Spooooon!

Re:Alternative? (2, Interesting)

Yvan256 (722131) | more than 4 years ago | (#30537852)

When did AMD drop the ATI brand?

Re:Alternative? (3, Insightful)

Guspaz (556486) | more than 4 years ago | (#30538198)

Progressively more and more.

Example: Go to "ati.com" and you get redirected to the regular amd.com front page. Go to desktop graphics products and you get a page titled "AMD Graphics for Desktop PCs" inviting you to shop for "AMD Desktop Graphics Cards".

The cards themselves still carry the product name "ATI Radeon", but describing an "ATI Radeon" as an "AMD graphics card" is accurate.

Re:Alternative? (1)

Yvan256 (722131) | more than 4 years ago | (#30538738)

I'm not sure it would be wise for AMD to drop a known brand name like ATI.

Re:Alternative? (1)

PitaBred (632671) | more than 4 years ago | (#30539184)

AMD is working on a unification of GPU and CPU. It makes perfect sense to start attaching the AMD name to the GPUs.

Re:Alternative? (1)

Darundal (891860) | more than 4 years ago | (#30540996)

Not right off the bat, but a slow transition from one brand to the other, like what is happening now, can be quite good for them.

Metal Gear Solid Joke Here (0)

Anonymous Coward | more than 4 years ago | (#30537920)

Ocelot!

Re:Alternative? (2, Informative)

raftpeople (844215) | more than 4 years ago | (#30537990)

What possible reason could you have to want to be locked into one GPU vendor?

The reason is that today CUDA has a head start and is more mature. Eventually things will probably shift to OpenCL, but that takes time and people don't want to sacrifice features today.

Re:Alternative? (2, Informative)

Pinky's Brain (1158667) | more than 4 years ago | (#30538038)

I've seen feature requests suggesting they are considering it, but at the moment too much information is lost in the PTX->LLVM step to be able to generate CAL or OpenCL.

Re:Alternative? (4, Informative)

CDeity (467334) | more than 4 years ago | (#30539802)

The greatest challenges lie in accommodating arbitrary control flow among threads within a cooperative thread array. NVIDIA GPUs are SIMD multiprocessors, but they include a thread activity stack that enables serialization of threads when they reach diverging branches. Without hardware support, this kind of thing becomes difficult on SIMD processors, which is why Ocelot doesn't include support for SSE yet. It is also one of the obstacles to supporting AMD/ATI IL at the moment, though solutions are in order.

Translation from PTX to LLVM to multicore x86 does not necessarily throw away information concerning the PTX thread hierarchy initially. The first step is to express a PTX kernel using LLVM instructions and intrinsic function calls. This phase is [theoretically] invertible and no information concerning correctness or parallelism is lost.

To get from here to multicore, a second phase of transformations inserts loops around blocks of code within the kernel to implement fine-grained multithreading. This part isn't necessarily invertible or easy to translate back to GPU architectures, and it is what is referenced in the note you are citing.
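A conceptual sketch of that second phase, not Ocelot's actual generated code: the per-thread kernel body is wrapped in a loop over the thread index so that one CPU core executes a whole thread block.

    // GPU kernel body (one CUDA thread per element):
    //     y[tid] = a * x[tid] + y[tid];
    // Serialized onto one CPU core, roughly:
    void ctaOnOneCore(float a, const float *x, float *y, int blockDimX) {
        for (int tid = 0; tid < blockDimX; ++tid) {   // one iteration per CUDA thread
            y[tid] = a * x[tid] + y[tid];
        }
    }
    // A __syncthreads() barrier splits the body into separate loops, which is
    // part of why this transformation is hard to invert back into GPU code.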

Disclosure: I'm one of the core contributors to the Ocelot project.

Re:Alternative? (1)

Icegryphon (715550) | more than 4 years ago | (#30538152)

What possible reason could you have to want to be locked into one GPU vendor?

Hardware, libraries, and toolkit.
CUDA was usable way before anything else.
At the time CUDA came out, AMD was using CTM [wikipedia.org],
which is absolutely painful to use.

Re:Alternative? (4, Informative)

TheRaven64 (641858) | more than 4 years ago | (#30538716)

it lets CUDA code run on x86, but still doesn't do anything for AMD graphics cards

Actually, it does. It lets CUDA code run on any processor that has an LLVM back end. The open source Radeon drivers have an experimental LLVM back end and use LLVM for optimising shader code.

Re:Alternative? (0)

Anonymous Coward | more than 4 years ago | (#30539838)

This isn't an alternative to CUDA; it lets CUDA code run on x86, but still doesn't do anything for AMD graphics cards. In other words, your choices as a developer are to use OpenCL and have your code run everywhere (AMD, nVidia, x86 slowly), or use CUDA and have your code run on nVidia or x86 slowly.

What possible reason could you have to want to be locked into one GPU vendor?

Because the hardware doesn't suck?

Re:Alternative? (1)

Tycho (11893) | more than 4 years ago | (#30540320)

Because the hardware doesn't suck?

We'll see about that in 2Q10, the earliest that Fermi could be released. This assumes nVidia avoids bankruptcy and can get the steaming pile of poo known as Fermi to actually work acceptably and reliably enough for general release to "consumers".

Because (1)

Groo Wanderer (180806) | more than 4 years ago | (#30540068)

"What possible reason could you have to want to be locked into one GPU vendor?"

Perhaps because you are sick and tired of GPUs that don't die an early death, and love sitting on the phone and being told that it isn't covered by warranty by HP, Dell, Apple, Sony, and the rest.

                -Charlie

Re:Alternative? (1)

triso (67491) | more than 4 years ago | (#30540880)

What possible reason could you have to want to be locked into one GPU vendor?

Only that the other GPU vendor, AMD/ATI, doesn't have a working Linux driver for 3D, proprietary or open. In addition, there isn't much support for their older cards.

Re:Alternative? (1)

badkarmadayaccount (1346167) | more than 4 years ago | (#30551588)

Same reason people stick to Flash: superior development tools. But there is a catch: LLVM has been gaining vector support, and I believe Clang is used as an OpenCL frontend, so anything with an LLVM backend can, in principle, support OpenCL.

Performance? (1)

pablodiazgutierrez (756813) | more than 4 years ago | (#30537790)

I wonder how the performance of the open source solution compares to NVidia's proprietary compiler. If it's good enough, they might be scared.

Re:Performance? (1)

Gregory Diamos (1706444) | more than 4 years ago | (#30541266)

Here's a performance graph [gdiamos.net]. The GPU version uses NVIDIA's JIT to generate native instructions for a particular GPU, so the GPU results here should be more or less the same as if the program had been compiled with NVIDIA's static compiler.

Wait wut? (3, Insightful)

Icegryphon (715550) | more than 4 years ago | (#30537808)

Why would you go from CUDA (fast floating point) to x86 (slower floating point)?
Is there support for double-precision floating point on Nvidia cards yet?
This makes as much sense as a Wookiee on the planet Endor.
Unless the point is portability, but then why write it in CUDA to begin with?

Re:Wait wut? (3, Insightful)

tepples (727027) | more than 4 years ago | (#30537870)

Why would you go from CUDA (fast floating point) to x86 (slower floating point)?

For running legacy apps that were developed between the release of CUDA and the release of OpenCL. There aren't many, I'd guess.

Re:Wait wut? (0)

Anonymous Coward | more than 4 years ago | (#30539714)

The FPU in modern x86 CPUs is much faster than the ones in GPUs. The difference is that your GPU has hundreds of them and your CPU has only one per core. It's perfect for testing and debugging and will probably also be perfect for when x86 CPUs get hundreds of cores.

Re:Wait wut? (1)

Tycho (11893) | more than 4 years ago | (#30540384)

This assumes that your GPU can perform (or that you can do without) double-precision operations, can carry out renormalization, offers rounding as well as chopping, and properly handles "Not a Number" and Infinity values by following the IEEE 754 standard.

Re:Wait wut? (2, Interesting)

Midnight Thunder (17205) | more than 4 years ago | (#30540160)

For running legacy apps that were developed between the release of CUDA and the release of OpenCL. There aren't many, I'd guess.

Sounds like there is great potential for a tool that will convert CUDA to OpenCL.

Re:Wait wut? (1)

Jesus_666 (702802) | more than 4 years ago | (#30541388)

Or for running science-related apps on computers without an NVIDIA GPU. As far as I can tell, computational science is all about CUDA. Even in courses about GPGPU computing you get brief rundowns à la "CUDA is [15 minute explanation]. Then there's also OpenCL and Sh but nobody uses those" and requirements like "everyone needs to use CUDA. If you don't have a supported NVIDIA GPU please buy one or drop the course", because the lecturer is convinced that teaching anything but CUDA would be a waste of time for everyone.

I don't know if things are different elsewhere but in the science sector CUDA has massive brand recognition whereas OpenCL doesn't.

Re:Wait wut? (1, Interesting)

Anonymous Coward | more than 4 years ago | (#30537880)

Suppose you have working CUDA code but your dataset is relatively small, say a block of 1000 floating point numbers. Then the overhead of delegating the work to the GPU isn't necessarily worth the trouble.
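A hypothetical dispatch sketch of that trade-off; the cutoff value is made up and would have to be measured on real hardware:

    void saxpyAdaptive(int n, float a, const float *x, float *y) {
        const int kGpuThreshold = 100000;   // made-up cutoff
        if (n < kGpuThreshold) {
            // Small input: the PCIe copy and launch overhead dominate, so stay on the CPU.
            for (int i = 0; i < n; ++i)
                y[i] = a * x[i] + y[i];
        } else {
            // Large input: copy to the device, launch the kernel, copy back.
            // (device path omitted in this sketch)
        }
    }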

Re:Wait wut? (2, Informative)

beelsebob (529313) | more than 4 years ago | (#30538330)

Which is exactly why you should be using OpenCL, not CUDA – because it lets the OpenCL driver decide whether to run it on the CPU or the GPU.

Re:Wait wut? (1)

Trepidity (597) | more than 4 years ago | (#30539314)

It doesn't really, though. OpenCL "decides" based on some very high-level, coarse-grained features of devices it can enumerate. In practice, if you want your code to run reasonably well, you know which parts are going to run on the GPU and which on the CPU. OpenCL isn't an auto-parallelization solution, just a set of primitives for parallel programming: more like an MPI or OpenMP that also supports GPGPU than the old '70s holy grail of auto-parallelization, where the compiler or runtime magically figures out how to chunk up your computation and where to send the chunks.
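For what it's worth, here is roughly what that enumeration looks like in host code (illustrative only): the programmer asks for a device class explicitly, and nothing here decides on its own where a kernel belongs.

    #include <CL/cl.h>

    cl_device_id pickDevice(void) {
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        // Fall back to a CPU device if no GPU is available; the choice of
        // which kernels go where is still the programmer's, not OpenCL's.
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS)
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);
        return device;
    }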

Re:Wait wut? (2, Insightful)

SpinyNorman (33776) | more than 4 years ago | (#30538036)

I can think of a couple of reasons it may be useful on x86:

- Better debugging tools
- Allows CUDA development without buying specialized hardware up-front (a lesson I've learnt - don't buy hardware until the software is ready)

It's also another option for multi-core programming. If the CUDA API is good, maybe it's an efficient way to develop certain types of parallel apps even if you never intend to use it on a GPU.

CUDA is only fast on some computers (1, Insightful)

Anonymous Coward | more than 4 years ago | (#30538594)

Why would you go from CUDA (fast floating point) to x86 (slower floating point)?

Because if you don't have the right hardware, CUDA isn't fast floats. It's a program that doesn't run at all.

Doesn't sound like a compiler (3, Interesting)

gnasher719 (869701) | more than 4 years ago | (#30537846)

Seems to be just a front-end for LLVM. And if it is just a front-end for LLVM, then why doesn't it support ATI graphics cards? That would actually make it useful; there is no need for a second CUDA compiler for NVidia cards.

Recompiler to what back-end? (1)

tepples (727027) | more than 4 years ago | (#30537882)

And if it is just a front-end for LLVM, then why doesn't it support ATI graphics cards?

That depends on whether LLVM has a back-end for ATI graphics cards. Is the Stream Computing SDK based on LLVM or something else?

Re:Doesn't sound like a compiler (1)

beelsebob (529313) | more than 4 years ago | (#30538344)

Seems to be just a front-end for LLVM. And if it is just a front-end for LLVM, then why doesn't it support ATI graphics cards?
Because OpenCL already does that job just fine. The only possible use for this is to have legacy CUDA apps actually run while people port them to use OpenCL instead.

Re:Doesn't sound like a compiler (3, Informative)

MostAwesomeDude (980382) | more than 4 years ago | (#30538904)

There is no LLVM backend for AMD/ATI cards. Of the few of us that actually understand ATI hardware, most of us are working on other things besides GPGPU. Sorry.

Re:Doesn't sound like a compiler (0)

Anonymous Coward | more than 4 years ago | (#30552052)

There is no LLVM backend for AMD/ATI cards.

Watch the LLVM tree carefully over the next few months. There may be some interesting checkins on the way.

I'm betting.. (2, Funny)

RightSaidFred99 (874576) | more than 4 years ago | (#30538160)

NVidia isn't real happy about this. No Christmas cards for those guys! In fact the developers should expect some insipid, obvious, and unfunny cartoons will be drawn about them.

just-in-time compiler? (0, Troll)

Snaller (147050) | more than 4 years ago | (#30539112)

Doesn't that mean "not compiled at all"?

Re:just-in-time compiler? (0)

Anonymous Coward | more than 4 years ago | (#30540200)

No.

"Not compiled at all" means that every time through a given set of code, you're translating it to machine code.

A JIT compiler, on the other hand, when it runs through code, will translate that code into machine instructions and keep that translation around in case there's a next time. So for code that's run once, you translate, and have a bunch of machine code that's never run again... but for code that's run multiple times (function calls, loops, that sort of thing), you translate, and run that bunch of machine code multiple times.

So for a task that runs through its code just once, there's no difference. But for a task that runs through particular code paths multiple times - which is going to be just about anything of reasonable size - JIT is a win. Not as fast as ahead-of-time compilation, of course, but relatively few applications need that extra performance boost nowadays.
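A toy sketch of that caching idea (invented types and names; real JITs are far more involved):

    #include <map>

    typedef void (*NativeCode)();                         // pointer to generated machine code

    static void compiledStub() {}                         // stands in for generated code here
    static NativeCode translateToNative(int /*blockId*/)  // the expensive translation step
    {
        return &compiledStub;
    }

    static std::map<int, NativeCode> translationCache;    // keyed by code-block id

    void runBlock(int blockId) {
        std::map<int, NativeCode>::iterator it = translationCache.find(blockId);
        if (it == translationCache.end()) {
            // First execution: pay the translation cost once and keep the result.
            it = translationCache.insert(
                     std::make_pair(blockId, translateToNative(blockId))).first;
        }
        it->second();   // every later execution reuses the cached machine code
    }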

OpenCL not a magic bullet (3, Insightful)

smcdow (114828) | more than 4 years ago | (#30539248)

A bit off topic, but since I'm seeing posts about OpenCL and portability...

OpenCL will indeed get you portability between processors; however, OpenCL does not make any guarantees about how well that portable code will run. In the end, to get optimum performance you still have to code to the particular architecture on which your code is going to run. For example, performance on Nvidia chips is extremely sensitive to memory access patterns. You could write OpenCL code that runs very well on Nvidia chips, but runs poorly on a different architecture.

Not saying that portability isn't a good thing, but a lot of people seem to think that OpenCL will solve all your portability problems. It won't. It will only let code run on multiple architectures. You'll still have to more or less hand-optimize for the architecture.
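As a sketch of the memory-access-pattern point (illustrative kernels, not benchmarks): on NVIDIA parts, adjacent threads reading adjacent addresses coalesce into a few memory transactions, while a strided pattern does not, and the same source can behave very differently on another architecture.

    __global__ void copyCoalesced(const float *in, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[i];                  // thread k touches element k: coalesced
    }

    __global__ void copyStrided(const float *in, float *out, int n, int stride) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[(i * stride) % n];   // neighbouring threads hit far-apart addresses
    }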

Re:OpenCL not a magic bullet (2, Informative)

Midnight Thunder (17205) | more than 4 years ago | (#30540224)

Not saying that portability isn't a good thing, but a lot of people seem to think that OpenCL will solve all your portability problems. It won't. It will only let code run on multiple architectures. You'll still have to more or less hand-optimize for the architecture.

Like the assembler-vs-C argument, I think that as time goes on we will find that, given a block of OpenCL code, tools can do a better job of optimising it for a specific processing core than the programmer can. Sure, there will always be specific cases where the programmer can do a better job, but most programmers IMHO would rather write portable code and leave the optimisation to tools that do it better than they can, for reasons of both lack of intrinsic knowledge and lack of time.

Re:OpenCL not a magic bullet (0)

Anonymous Coward | more than 4 years ago | (#30549042)

The fundamental flaw in your assumption that optimizers will solve everything is that the things that make the most difference actually change the algorithm used.

Example: I can sort a 2-dimensional array by columns, but that will cause major cache misses on most processors; if I change the algorithm to run by rows, or rearrange the data structure so that it is stored (y, x) instead of (x, y), then performance improves. The only way an optimiser could do this is by haphazardly undermining the programmer (there may be a reason it was done by columns that can't be changed) or by using an obscenely high-level "write me a program that sorts this array" command (largely impractical).
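A rough illustration of that layout point, using a sum instead of a sort to keep it short: with a row-major array, walking along rows is cache-friendly and walking down columns is not, and an optimizer cannot legally swap the layout for you if the (x, y) arrangement is part of the program's contract.

    float sumRowOrder(float a[1024][1024]) {
        float s = 0.0f;
        for (int y = 0; y < 1024; ++y)
            for (int x = 0; x < 1024; ++x)
                s += a[y][x];        // consecutive addresses: cache-friendly
        return s;
    }

    float sumColumnOrder(float a[1024][1024]) {
        float s = 0.0f;
        for (int x = 0; x < 1024; ++x)
            for (int y = 0; y < 1024; ++y)
                s += a[y][x];        // 4 KB stride between accesses: cache-hostile
        return s;
    }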

Re:OpenCL not a magic bullet (1)

TheRaven64 (641858) | more than 4 years ago | (#30550578)

There's only so much an optimizer can do, and it depends on how high-level a language is. With C, for example, the optimizer can't turn three arrays of colour values into an array of structures, which would let it use the vector unit for operations. In a higher-level language that doesn't expose the memory layout to the programmer, this is possible.
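For concreteness, the two layouts mentioned above (three colour arrays vs. an array of structures) look like this in C; once the programmer has written one, the compiler is not free to switch to the other behind the scenes.

    // Three separate arrays ("structure of arrays"):
    float red[1024], green[1024], blue[1024];

    // One array of structures:
    struct Pixel { float r, g, b; };
    struct Pixel pixels[1024];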

In general, high-level languages have more potential for optimization than low-level ones. In contrast, low-level languages make it easier for the programmer to write optimised code.

Re:OpenCL not a magic bullet (1)

complete loony (663508) | more than 4 years ago | (#30543172)

This is one of the strengths of LLVM. If your hardware performs better with some specific tweaks to the code, write an optimising pass that makes the appropriate transformations. Then you can keep your back-end machine code generator as simple as possible. Even better, write your optimiser in a generic way so anyone else tackling a similar problem can reuse your work. Heck, if you're lucky, someone else has already done so.
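For readers who haven't written one, an LLVM optimization pass is a small C++ class along these lines; this skeleton does nothing useful, the name is invented, and the pass-manager API has drifted across LLVM releases, so treat it as a shape rather than a drop-in recipe.

    #include "llvm/Pass.h"
    #include "llvm/IR/Function.h"
    #include "llvm/Support/raw_ostream.h"

    using namespace llvm;

    namespace {
    struct HardwareTweaks : public FunctionPass {
        static char ID;
        HardwareTweaks() : FunctionPass(ID) {}
        bool runOnFunction(Function &F) override {
            // A real pass would rewrite instructions here for the target's quirks.
            errs() << "visiting " << F.getName() << "\n";
            return false;   // report whether the IR was modified
        }
    };
    }

    char HardwareTweaks::ID = 0;
    static RegisterPass<HardwareTweaks> X("hw-tweaks", "Hypothetical hardware-specific tweaks");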

larrabee (0)

Anonymous Coward | more than 4 years ago | (#30539608)

Had Larrabee turned into a product this Christmas, I think a lot of people would have been interested in CUDA to x86. I'm sure the people still working on it will be interested in it.

Next step: CUDA to ATI...

We need the opposite... (0)

Anonymous Coward | more than 4 years ago | (#30540140)

An (Open Source) Compiler From X86-Multicore To CUDA.... This way, the ION3 could completely miss the Atom part of the equation, and we would get one more player in the x86 field.

Why? (3, Informative)

Gregory Diamos (1706444) | more than 4 years ago | (#30541194)

So there seem to be several questions as to why people would want to use CUDA when an open standard exists for the same thing (OpenCL).

Well, honestly, the reason why I wrote this was because when I started, OpenCL did not exist.

I have heard the following reasons why some people prefer CUDA over OpenCL:

  • The toolchains for OpenCL are still immature. They are getting better, but are not quite as bug-free and high performance as CUDA at this point.
  • CUDA has more desirable features. For example, CUDA supports many C++ features such as templates and classes in device code that are not part of the OpenCL specification.

Additionally, I would like to see a programming model like CUDA or OpenCL replace the most widespread models in industry (threads, OpenMP, MPI, etc.). CUDA and OpenCL are both examples of Bulk Synchronous Parallel [wikipedia.org] models, which are explicitly designed with the idea that communication latency and core count will increase over time. Although I think it is a long shot, I would like to see more applications written in these languages so there is a migration path for developers who do not want to write specialized applications for GPUs, but can instead write an application for a CPU that can take advantage of future CPUs with multiple cores, or GPUs with a large degree of fine-grained parallelism.
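A minimal sketch of the bulk-synchronous shape both models share (made-up kernel, assuming 256-thread blocks): each thread does local work, a barrier ends the superstep, and only then do threads read what their neighbours wrote.

    __global__ void bspStep(float *data, int n) {
        __shared__ float tile[256];                      // assumes blockDim.x == 256
        int i = blockIdx.x * blockDim.x + threadIdx.x;

        tile[threadIdx.x] = (i < n) ? data[i] : 0.0f;    // superstep 1: local computation
        __syncthreads();                                 // barrier between supersteps

        float left = (threadIdx.x > 0) ? tile[threadIdx.x - 1] : 0.0f;
        if (i < n)
            data[i] = tile[threadIdx.x] + left;          // superstep 2: use neighbours' results
    }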

Most of the codebase for Ocelot could be re-used for OpenCL. The intermediate representation for each language is very similar, with the main differences being in the runtime.

Please try to tear down these arguments; it really does help.
