
NVIDIA Shaking Up the Parallel Programming World

ScuttleMonkey posted more than 6 years ago | from the best-discoveries-made-by-accident dept.


An anonymous reader writes "NVIDIA's CUDA system, originally developed for their graphics cores, is finding migratory uses into other massively parallel computing applications. As a result, it might not be a CPU designer that ultimately winds up solving the massively parallel programming challenges, but rather a video card vendor. From the article: 'The concept of writing individual programs which run on multiple cores is called multi-threading. That basically means that more than one part of the program is running at the same time, but on different cores. While this might seem like a trivial thing, there are all kinds of issues which arise. Suppose you are writing a gaming engine and there must be coordination between the location of the characters in the 3D world, coupled to their movements, coupled to the audio. All of that has to be synchronized. What if the developer gives the character movement tasks its own thread, but it can only be rendered at 400 fps. And the developer gives the 3D world drawer its own thread, but it can only be rendered at 60 fps. There's a lot of waiting by the audio and character threads until everything catches up. That's called synchronization.'"



need some brains (2, Funny)

conan1989 (1142827) | more than 6 years ago | (#23283112)

where's the MIT CS guys when you need them?

Re:need some brains (2, Funny)

JamesRose (1062530) | more than 6 years ago | (#23283124)

Leave them a post it, they'll get back to you ;)

Re:need some brains (1)

mrbluze (1034940) | more than 6 years ago | (#23283132)

Leave them a post it, they'll get back to you ;)
But how do you know it's them and not some kind of AI script?

Re:need some brains (0, Offtopic)

definate (876684) | more than 6 years ago | (#23283178)

They turned me into a newt!

I got better.

Re:need some brains (0)

Anonymous Coward | more than 6 years ago | (#23283378)

I need better, too. :(

Re:need some brains (0)

Anonymous Coward | more than 6 years ago | (#23283390)

Plenty of other folks are doing what the parent thread post/article notes (SETI@Home's end-user client has been for years now, iirc, since 2006, & on a GPU)!

Example/Proof? This use of multiple thread design is only 1 of 1,000's out there:

---

APK Dr. Who Screensaver 2008++:

http://www.drwhodaily.com/community/index.php?showtopic=386 [drwhodaily.com]

----

(That single file monolithic .scr Win32 PE multithread designed screensaver that also contains data access pointers for playing its animation files, which are stored literally inside itself as a runtime accessible resource for using its data in memory, not on disk (so it is only "1 moving part" K.I.S.S. designed))

AND, not just "MIT CS guys" can handle/deal with it!

(Not that they're "that much 'smarter'" (fact is, I doubt they are) than the rest of the joes out there writing code either, especially experienced devs)

MOST "ordinary coders" can "handle it", especially IF they "think out/architect" their apps wisely...

So, as far as it being the "sole province" of "MIT type guys"? Hey, trust me... it's far from that & their "exclusive province of the elite/brainchildren of MIT" (they're just students too, & there is a HUGE diff. between academia level experience, & that of the "real working world" in this "art & science" - experience usually DOES create that effect in a highly technical field!)

Sure, occasionally, you DO get a "prodigy" that comes outta the academic world, but, the odds are STRONG they discovered it @ an early age too (mostly due to exposure to programming by parents I'd say).

STILL, in essence - multithreaded apps that are implicitly "SMP ready/Multicore ready" that have to sync sound & animation abound online, such as the one noted above!

Heck - multithreaded, implicitly SMP/MultiCore Ready apps + Operating Systems exist in fairly large numbers & have for years! Using taskmgr.exe in Windows can show anyone that much, easily (if you have the PROCESS Tab's THREADS column selected as visible! e.g.-> Here, I have 30 apps running, & 28 of those ARE 2-N thread bearing & thus, my system (AMD Athlon64 X2 dualcore) is being taken advantage of via the OS Process Scheduler kernel subsystem shunting off child OR parent threads of app execution to multiple cores, when & IF necessary (especially when the first of N cores gets near to saturated)).

(The screensaver above is one that's not only "coarse multithreading designed" (meaning diff. threads of execution from the parent thread processing diff. discrete tasks & data), but, also does the "fine grained" multithreading problems this thread notes also (by taking the same set of data & busting it across diff. threads (2-3 of them in this app's case, depending on what's going on @ any given moment during its operations + having to perform synchronization & blocking as needed)).

Imo & experience - today's (& even last decade's) programming tools do NOT "take a prodigy" to do this level of work, + it's almost as simple as programming using LEGOs (the building blocks toy)... you largely program CONTROLS for more than a decade now (oh, there IS more to it, but it has gotten simpler than when I first started coding Windows apps (Win16 stuff)).

MY BOTTOM-LINE/POINT, after all that is now "said & aside"?

Plus, I'd wager NVidia &/or ATI's toolkits make it elegant, & relatively "simple/easy" to do as well (today's programming IDE &/or addon tools such as .OCX/ActiveX controls/OLEServers, or .VCL make things a LOT simpler than it was prior to say, 15 yrs. ago too, using tools like Microsoft Macro Assembler OR even C/C++ dev environments))

Today & FOR YEARS NOW (more than a decade)?

With Today's dev. tools?? Put it THIS way - YOU DON'T NEED TO BE "TONY STARK BOY GENIUS INDUSTRIALIST" (outta MIT no less, lol, saw the film last night & have been huge fan of that series since I was a child in the 1970's) to do this level of work, IF you think out a SOLID design.

Great film by the way, go see it ("IRON MAN")

APK

P.S.=> I don't really understand this sudden "big fuss" about multithreaded design OR even "explicitly parallel designed apps" (those that do their OWN process/thread scheduling via SetProcessAffinity type Win32 API calls). Multithread designed apps & Operating Systems've been out there for a decade or more (especially Windows NT-based ones, who had it down pat far before say, Linux did)...

HOWEVER - using a GPU to do it? That is news, to SOME extent though (once more though - I know SETI has been using this for years now already in their Seti@Home clients), I'd wager the toolset used makes it much simpler than it would be in say, raw or even Macro Asm type languages, by far!

(Again - the "hard part's" finding processes that lend themselves to "fine grained" multithreaded design (since imo @ least, doing "coarse multithreading" is simple enough) - not every program or dataset lends itself to that type of design, @ least not easily)... apk

Re:need some brains (1)

Eli Gottlieb (917758) | more than 6 years ago | (#23284772)

Just don't call a string-theory lab!

Dumbing down (5, Funny)

mrbluze (1034940) | more than 6 years ago | (#23283116)

'The concept of writing individual programs which run on multiple cores is called multi-threading. That basically means that more than one part of the program is running at the same time, but on different cores.

Wow, I bet nobody on slashdot knew that!

Re:Dumbing down (1)

Lobais (743851) | more than 6 years ago | (#23283184)

Well, it's copied straight from TFA.
If you RTFA you'll notice that it's about "NVIDIA's CUDA system, originally developed for their graphics cores, is finding migratory uses into other massively parallel computing applications."

Re:Dumbing down (2, Informative)

Lobais (743851) | more than 6 years ago | (#23283190)

Oh, and CUDA btw. http://en.wikipedia.org/wiki/CUDA [wikipedia.org]

CUDA ("Compute Unified Device Architecture"), is a GPGPU technology that allows a programmer to use the C programming language to code algorithms for execution on the graphics processing unit (GPU).

Re:Dumbing down (1)

aliquis (678370) | more than 6 years ago | (#23283226)

Yeah, I'd have preferred it if the summary had given some short explanation of how CUDA helps and why it does it better. Because as the summary is written now it's like "Maybe a video card dude will fix it because they need to run more threads", not "This video card dude came up with a new language which made it much easier to handle multiple threads", or whatever.

HOW does CUDA make it easier? I'm very confident it's not just because Nvidia hardware contains lots of stream processors.

Oh well, guess I need to RTFA, and maybe Wikipedia as well..

Re:Dumbing down (1)

smallfries (601545) | more than 6 years ago | (#23283942)

There is no real detail in the article, so dredging my memory for how CUDA works... It probably is because they are stream processors - i.e. a pool of vector processors that are optimised for SIMD. The innovation was that the pool could be split into several chunks working on separate SIMD programs. Rather than threads there are programmable barriers to control the different groups, and explicit memory locking to ensure the cache is partitioned between the different groups.

So to put it another way, the big threading "innovation" in CUDA is to not use threading, but instead to partition the memory and use low-level synchronisation primitives. Something the supercomputing guys are well aware of, although they prefer to stick an MPI layer on top of it.
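
To make that concrete, here is a rough sketch of the idiom being described (my own, not from the article): each block stages its slice of the input in on-chip __shared__ memory, and __syncthreads() is the programmable barrier that keeps the block's threads in step. No mutexes and no OS threads anywhere in sight.

    /* Block-wise sum: each block reduces its own partition of the input.
       Illustrative kernel; launch with 256 threads per block. */
    __global__ void blockSum(const float *in, float *blockResults, int n)
    {
        __shared__ float tile[256];              /* per-block on-chip memory */
        int tid = threadIdx.x;
        int i = blockIdx.x * blockDim.x + tid;

        tile[tid] = (i < n) ? in[i] : 0.0f;      /* stage this block's slice */
        __syncthreads();                         /* barrier: whole tile loaded */

        /* Tree reduction within the block; a barrier after every step. */
        for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
            if (tid < stride)
                tile[tid] += tile[tid + stride];
            __syncthreads();
        }

        if (tid == 0)
            blockResults[blockIdx.x] = tile[0];  /* one partial sum per block */
    }

The partial sums can then be added on the host or with a second launch; the point is that partitioning the data across blocks plus the barrier is the whole synchronisation story.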

Re:Dumbing down (2, Funny)

Kawahee (901497) | more than 6 years ago | (#23283336)

Slow down cowboy, not all of us are as cluey as you. It didn't come together for me until the last sentence!

There's a lot of waiting by the audio and character threads until everything catches up. That's called synchronization

Why was this greenlit (0)

Anonymous Coward | more than 6 years ago | (#23283864)

Why was that headline greenlit? Next we'll be like the NY Times and have to avoid those confusing acronyms and spell out Central Processing Unit and Redundant Array of Inexpensive Disk [drives].

Re:Dumbing down (1)

elloGov (1217998) | more than 6 years ago | (#23284040)

'The concept of writing individual programs which run on multiple cores is called multi-threading. That basically means that more than one part of the program is running at the same time, but on different cores.

Wow, I bet nobody on slashdot knew that!

Although your comment is funny at Slashdot, it's a wise-ass, arrogant reply in reality. It's always good to reinforce knowledge and be reminded of it. Meanwhile, thank you for reinforcing the stereotype of a programmer, that of arrogance. :)

Re:Dumbing down (1)

ultranova (717540) | more than 6 years ago | (#23284228)

Although your comment is funny at Slashdot, it's a wise-ass, arrogant reply in reality. It's always good to reinforce knowledge and be reminded of it. Meanwhile, thank you for reinforcing the stereotype of a programmer, that of arrogance. :)

Why would anyone but a fairly advanced programmer be interested in the new fads in parallel programming? Besides, the summary is misleading, giving the impression that multithreading is exclusive to multicore processors, which is false; it can give huge benefits on a single-core processor to have the UI run in a thread of its own so it won't get blocked by a long-running task, for example.

While reinforcing knowledge is good, explaining the addition of integers in a university-level mathematics book is ridiculous. So is the summary, and for the same reason. More so because the explanation is wrong.

Re:Dumbing down (1)

TapeCutter (624760) | more than 6 years ago | (#23284446)

Regardless of your arrogance quotient, you are correct.

IAACS; multi-threading and parallel processing are two different but related concepts. The hard part is coming up with a parallel algorithm for certain classes of problems; implementing low-level synchronization is trivial by comparison. OTOH I've seen a lot of programmers stab themselves in the eye with forks.

Re:Dumbing down (0)

Anonymous Coward | more than 6 years ago | (#23284146)

Threads are simply OS constructs that allow for multiple paths of execution.

On single-processor systems threads give the illusion of concurrent processing due to timeslicing.

In multicore environments, threads are often used to logically divide a program into components that are concurrently executed on different cores. After all, why not use a programming model (threads) that everyone is already familiar with?

However, there is no requirement that one use threads for programming in a multicore environment, so the statement "The concept of writing individual programs which run on multiple cores is called multi-threading" is a tad misleading, and an oversimplification.
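
For completeness, the simplest possible illustration of "multiple paths of execution" (a present-day C++ sketch with made-up task names, nothing specific to CUDA): the same code runs whether the OS timeslices the two threads on one core or spreads them across two.

    #include <cstdio>
    #include <thread>

    /* One independent path of execution. */
    void worker(const char *name, int steps)
    {
        for (int i = 0; i < steps; ++i)
            std::printf("%s: step %d\n", name, i);
    }

    int main()
    {
        std::thread audio(worker, "audio", 3);    /* hypothetical task names */
        std::thread render(worker, "render", 3);
        audio.join();                             /* wait for both paths to finish */
        render.join();
        return 0;
    }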

Where's the story? (4, Informative)

pmontra (738736) | more than 6 years ago | (#23283118)

The article sums up the hurdles of parallel programming and says that NVIDIA's CUDA is doing something to solve them, but it doesn't say what. Even the short Wikipedia entry at http://en.wikipedia.org/wiki/CUDA [wikipedia.org] tells more about it.

Re:Where's the story? (3, Insightful)

mrbluze (1034940) | more than 6 years ago | (#23283128)

No offence, but I'm perplexed as to how this rubbish made it past the firehose.

Re:Where's the story? (1)

pmontra (738736) | more than 6 years ago | (#23283134)

Agreed.

Re:Where's the story? (0, Offtopic)

harry666t (1062422) | more than 6 years ago | (#23283164)

Indeed.

Re:Where's the story? (0, Offtopic)

definate (876684) | more than 6 years ago | (#23283188)

Exactly.

Re:Where's the story? (1)

mrbluze (1034940) | more than 6 years ago | (#23283204)

Precisely.

Re:Where's the story? (0)

Anonymous Coward | more than 6 years ago | (#23283216)

It's the middle of the night, they need to post something.

Re:Where's the story? (0)

Anonymous Coward | more than 6 years ago | (#23283328)

Right on!

Re:Where's the story? (0)

Anonymous Coward | more than 6 years ago | (#23283368)

Me, too.

Re:Where's the story? (3, Funny)

alex4u2nv (869827) | more than 6 years ago | (#23283936)

I sense a race condition developing

Re:Where's the story? (1)

harry666t (1062422) | more than 6 years ago | (#23284016)

You really made me lmao (: +8, funny :P

Re:Where's the story? (1)

linRicky (961271) | more than 6 years ago | (#23283144)

Exactly! Not the sort of link I was expecting for a Slashdot article. There are no technical details whatsoever. And how exactly is the CUDA architecture alleviating the present issues with parallel programming?

Re:Where's the story? (1)

UziBeatle (695886) | more than 6 years ago | (#23283686)

Oh, as if it will help, I rate this thread a 10 out of 10.

Mega dittos.

Makes me wonder why I still bother to check into Slashdot. Force of habit now, perhaps.

I know this is an old saw but it is true: Slashdot has degraded vastly from the site I recall back in the 90's.

I hope they are not paying the 'editors' to review and link story submissions to the main page. If so, they surely are not getting their money's worth. That random-Brownian-motion site, Digg, can do as well, if not better. Okay, that last was a bit over the top, but I meant well.

Re:Where's the story? (1)

EvilNTUser (573674) | more than 6 years ago | (#23283946)

In my opinion it doesn't even summarize the hurdles properly. I'm not a game programmer, so I don't know if the article makes sense, but it left me with the following questions. Hopefully someone can clarify.

-Why would character movement need to run at a certain rate? It sounds like the thread should spend most of its time blocked waiting for user input.

-What's so special about the audio thread? Shouldn't it just handle events from other threads without communicating back? It can block when it doesn't have anything to do.

-How do semaphores affect SMP cache efficiency? Is the CPU notified to keep the data in shared cache?

-What is a "3D world drawer"? Is it where god keeps us in his living room?

For all I know, I have ridiculous misconceptions about game programming, but this article certainly didn't make anything clearer.

Re:Where's the story? (2, Informative)

Yokaze (70883) | more than 6 years ago | (#23284164)

-Why would character movement need to run at a certain rate? It sounds like the thread should spend most of its time blocked waiting for user input.

You usually have a game-physics engine running, which practically integrates the movements of the characters (character movement) or generally updates the world model (position and state of all objects). Even without input, the world moves on. The fixed rate is usually taken, because it is simpler than a varying time-step rate.

-What's so special about the audio thread? Shouldn't it just handle events from other threads without communicating back?

Audio is the thing most sensitive to timing issues: contrary to video (or simulation), you cannot drop arbitrary pieces of sound without the user immediately noticing.

-How do semaphores affect SMP cache efficiency? Is the CPU notified to keep the data in shared cache?

Not in any special way; they are simply a special case of the problem of how to access data.
Several threads may compete for the same data, but even if they are just accessing data in the same cache-line, it will lead to lots of communication (thrashing the cache).
In CUDA, a thread-manager is aware of the memory layout and will decide which parts of memory will be processed by which shaders/ALUs/CPUs. Thereby it is also possible to make more efficient use of the caches.

-What is a "3D world drawer"? Is it where god keeps us in his living room?

Drawer as in "someone who draws", or 3D world painter. It draws/paints the state of the world as updated by the simulation thread.
This can happen asynchronously, as you will not notice if a frame is dropped occasionally.
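
A small host-side sketch of the cache-line point above (my own illustration; nothing to do with CUDA's thread manager): two counters that happen to share a cache line make the cores fight over it, while padding them onto separate lines removes the contention even though the work is identical.

    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    /* Two counters packed together almost certainly share one 64-byte cache
       line, so two cores keep stealing that line from each other. */
    struct Packed {
        std::atomic<long> a{0};
        std::atomic<long> b{0};
    };

    /* The same counters padded onto separate cache lines: no false sharing. */
    struct Padded {
        alignas(64) std::atomic<long> a{0};
        alignas(64) std::atomic<long> b{0};
    };

    template <typename T>
    double hammer(T &c, long iterations)
    {
        auto start = std::chrono::steady_clock::now();
        std::thread t1([&] { for (long i = 0; i < iterations; ++i) c.a++; });
        std::thread t2([&] { for (long i = 0; i < iterations; ++i) c.b++; });
        t1.join();
        t2.join();
        return std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
    }

    int main()
    {
        const long n = 20000000;
        Packed p;
        Padded q;
        std::printf("same cache line: %.3f s\n", hammer(p, n));  /* typically slower */
        std::printf("padded         : %.3f s\n", hammer(q, n));  /* identical work */
        return 0;
    }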

Re:Where's the story? (1)

EvilNTUser (573674) | more than 6 years ago | (#23284368)

Thanks for the reply, but I still don't understand why audio would be a synchronization issue. As you say, it needs a certain amount of CPU time or it'll stutter, but isn't that a performance issue?

Also, the article would've done better just talking about the thread manager you mention. That makes more sense than the stuff about semaphores affecting performance positively (unless I misunderstood the sentence about the cache no longer being stale).

And, uh, that drawer comment was a joke...

Re:Where's the story? (1)

adonoman (624929) | more than 6 years ago | (#23284494)

One issue is that the audio thread may need priority access to event data. If a lower priority thread has locks on data that the higher priority thread needs, you can end up with a priority inversion where the lower priority thread starves out the high priority threads.

Thats.. (4, Funny)

mastershake_phd (1050150) | more than 6 years ago | (#23283126)

There's a lot of waiting by the audio and character threads until everything catches up. That's called synchronization.
That's called wasted CPU cycles.

Artificial Intelligence. (1)

Safiire Arrowny (596720) | more than 6 years ago | (#23283230)

Like I was saying in another post, since everything per game object must be synchronized to the slowest procedure (video rendering of the object), the way to not waste CPU cycles is to spend them on AI.

In essence, the faster your CPU (fixed on consoles), the more time you can devote to making your game objects smarter once you're done with the audio and visuals.

Re:Thats.. (1)

aliquis (678370) | more than 6 years ago | (#23283232)

... and bad planning / lack of effort / simplest solution.

But we already know it's hard to split up all kinds of work evenly.

Anyway, what does CUDA do to help with that?

CUDA helps by... (1)

Joce640k (829181) | more than 6 years ago | (#23283372)

CUDA helps by moving more work to the GPU - where the biggest bottleneck is.

Um, no, that can't be right... :-(

Re:Thats.. (1)

badpazzword (991691) | more than 6 years ago | (#23283358)

That's called wasted CPU cycles.
BOINC for PS3?

BOINC for PS3 (0)

Anonymous Coward | more than 6 years ago | (#23285118)

BOINC is only a framework for organizing job-level massive parallelism. It's not an abstraction for parallelism at the application level: when you write a BOINC application, you don't get any parallelism for "free". It's still up to you as the application developer to target your app for a specific platform, let alone hardware, because BOINC simply hands off / manages execution of your application. The app developer must write for x86/Win, x86/Linux, x86/Mac, PPC/Mac, etc. Most critically, that means that you have the privilege and responsibility of exploiting the hardware (x86, amd64, PPC, PS3, etc.) yourself, specific to your application's needs, at the application level. BOINC will then handle job management and scheduling between your server and each instance of the client.

So your question is actually a bit ill-formed. Instead of asking "could we run a framework on the PS3?", which would provide no free parallelism, you probably meant to ask "could we run BOINC applications on the PS3?". The problem lies not in porting BOINC to PS3 but in having yet another platform which users (application authors such as SETI@H or Einstein@H) would need to target. Some (most...) of those guys are fairly small operations and stick to x86 hardware and often only Windows at that, at least for a while until they get Mac and Linux clients working alongside.

The Folding@home operation is well-organized and has more resources than most, and they don't run on BOINC. They're the ones who have a PS3 client (which is much tougher to write than an x86 client to exploit the given hardware), and who even support a handful of ATI's recent but disparate GPUs (Windows only I believe). It's not that BOINC on PS3 (or whatever) is impossible; it's that it gains the application developer nothing without a LOT more effort. The question of whether or not it's worth that effort falls to the user and not the authors of BOINC.

CUDA = NVIDIA desperate to compete with Intel? (4, Insightful)

Cordath (581672) | more than 6 years ago | (#23283446)

CUDA is an interesting way to utilize NVIDIA's graphics hardware for tasks it wasn't really designed for, but it's not a solution to parallel computing in and of itself. (more on that momentarily) A few people have gotten their nice high end Quadros to do some pretty cool stuff, but to date it's been limited primarily to relatively minor academic purposes. I don't see CUDA becoming big in gaming circles anytime soon. Let's face it, most gamers buy *one* reasonably good video card and leave it at that. Your video card has better things to do than handle audio or physics when your multi-core CPU is probably being criminally underutilized. Nvidia, of course, wants people to buy wimpy CPU's and then load up on massive SLI rigs and then do all their multi-purpose computation in CUDA. Not gonna happen.

First of all, there are very few general purpose applications that special purpose NVIDIA hardware running CUDA can do significantly better than a real general purpose CPU, and Intel intends to cut even that small gap down within a few product cycles. Second, nobody wants to tie themselves to CUDA when it's built entirely for proprietary hardware. Third, CUDA still has a *lot* of limitations. It's not as easy to develop a physics engine for a GPU using CUDA as it is for a general purpose CPU.

Now, I haven't used CUDA lately, so I could be way off base here. However, multi-threading isn't the real challenge to efficient use of resources in a parallel computing environment. It's designing your algorithms to be able to run in parallel in the first place. Most multi-threaded software out there still has threads that have to run on a single CPU, and the entire package bottlenecks on the single CPU running that thread even if other threads are free to run on other processors. This sort of bottleneck can only be avoided at the algorithm level. This isn't something CUDA is going to fix.
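
The serial-bottleneck argument in the previous paragraph is just Amdahl's law by another name (my gloss, not a term the post uses): if only a fraction p of the work can be spread across cores, the achievable speedup is capped no matter how good the threading layer is.

    S(n) = \frac{1}{(1 - p) + p/n}, \qquad \lim_{n \to \infty} S(n) = \frac{1}{1 - p}

With p = 0.9, for example, even infinitely many cores or shader units top out at a 10x speedup, which is why the redesign has to happen at the algorithm level rather than inside CUDA or any other toolkit.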

Now, I can certainly see why NVIDIA is playing up CUDA for all they're worth. Video game graphics rendering could be on the cusp of a technological singularity. Namely, ray tracing. Ray tracing is becoming feasible to do in real time. It's a stretch at present, but time will change that. Ray tracing is a significant step forward in terms of visual quality, but it also makes coding a lot of other things relatively easy. Valve's recent "Portal" required some rather convoluted hacks to render the portals with acceptable performance, but in a ray tracing engine those same portals only take a couple lines of code to implement and have no impact on performance. Another advantage of ray tracing is that it's dead simple to parallelize. While current approaches to video game graphics are going to get more and more difficult to work with as parallel processing rises, ray tracing will remain simple.

The real question is whether NVIDIA is poised to do ray-tracing better than Intel in the next few product cycles. Intel is hip to all of the above, and they can smell blood in the water. If they can beef up the floating point performance of their processors then dedicated graphics cards may soon become completely unnecessary. NVIDIA is under the axe and they know it, which might explain all the recent anti-Intel smack-talk. Still, it remains to be seen who can actually walk the walk.

Re:CUDA = NVIDIA desperate to compete with Intel? (1)

smallfries (601545) | more than 6 years ago | (#23284020)

First of all, there are very few general purpose applications that special purpose NVIDIA hardware running CUDA can do significantly better than a real general purpose CPU, and Intel intends to cut even that small gap down within a few product cycles.
That's not strictly true. Off the top of my head: sorting, FFTs (or any other dense linear algebra) and crypto (both public key and symmetric) cover quite a lot of range. The only real issue for these applications is the large batch sizes necessary to overcome the latency. Some of this is inherent in warming up that many pipes, but most of it is shit drivers and slow buses.

The real question is what benefits will CUDA offer when the vector array moves closer to the processor? Most of the papers with the above applications used pre-CUDA hardware with all of the horrors of general-purpose coding running under OpenGL. A couple of the applications would already receive a significant boost from running in CUDA on modern hardware (primarily from latency reduction).

It doesn't surprise anyone that we are watching the second generation of FPU being folded into the processor. It wouldn't surprise me personally if ten years from now the individual floating-point EUs inside most chips had disappeared completely, leaving small integer / control pipes as a front-end to a massive vector array of FP units. There is more at stake than who can trace rays the quickest.

Re:CUDA = NVIDIA desperate to compete with Intel? (1)

Barny (103770) | more than 6 years ago | (#23284542)

As an article earlier this month [bit-tech.net] pointed out, they are in fact in the process of porting the CUDA system to CPUs.

The advantages would be (assuming this is the wonderful solution it claims) that you run your task in the CUDA environment; if your client only has a pile of 1U racks then he can at least run it, and if he replaces a few of them with some Tesla [nvidia.com] racks, things will speed up a lot.

I did some programming at college, and I do not claim to know anything about the workings of Tesla or CUDA, but it sure sounds rosy if this stuff works.

Re:CUDA = NVIDIA desperate to compete with Intel? (1)

Spatial (1235392) | more than 6 years ago | (#23284748)

How is a raytracing renderer going to render the view through a portal with "no performance impact"? Magic? It still has to draw the view through the portal just like the render target method used in the normal renderer does, even if it might be more efficient. That isn't free.

The whole raytracing thing seems like empty hype to me. How is it going to be a significant step forward when we already have proven methods that're capable of graphics bordering on the photo-realistic? It's hard to move forward when you're already at the end of the line.

Re:CUDA = NVIDIA desperate to compete with Intel? (0)

Anonymous Coward | more than 6 years ago | (#23285030)

A few people have gotten their nice high end Quadros to do some pretty cool stuff, but to date it's been limited primarily to relatively minor academic purposes.
From what I've heard, that's far from true. There are quite a few hard-core industrial applications that have seen speedups of 200X using CUDA. Stuff to do with complex medical imaging, oil field flow analysis, Monte Carlo computations on whatever.

Re:Thats.. (1)

mpbrede (820514) | more than 6 years ago | (#23283666)

There's a lot of waiting by the audio and character threads until everything catches up. That's called synchronization.
That's called wasted CPU cycles.
Actually, synchronization and waiting does not necessarily equate to wasted cycles. Only if the waiting thread holds the CPU to the exclusion of other tasks does it equate to waste. In other cases it equates to multiprogramming - a familiar concept.

Re:Thats.. (1)

Machtyn (759119) | more than 6 years ago | (#23284420)

That's called wasted CPU cycles.
Just like what is happening when I scroll through these comments

/it's funny, laugh.

Oversimplifying is bad (2, Insightful)

BattleCat (244240) | more than 6 years ago | (#23283130)

Topic is rather interesting, especially for game developers, among whom I sometimes lurk, but what's the point of simplifying descriptions and problems to the point of being meaningless and useless?

Re:Oversimplifying is bad (2, Insightful)

mrbluze (1034940) | more than 6 years ago | (#23283140)

but what's the point of simplifying descriptions and problems to the point of being meaningless and useless?
This isn't information, it's advertising. The target audience is teenagers with wealthy parents who will buy the NVIDIA cards for them.

just hype and commercialism (1)

MauricioAP (67495) | more than 6 years ago | (#23283150)

This is just hype; it is well known that for real high-performance applications CUDA is compute-bound, i.e. a lot of bandwidth is wasted. CUDA is just another platform for niche applications, never to compete with commodity processors.

Re:just hype and commercialism (1, Interesting)

Anonymous Coward | more than 6 years ago | (#23284774)

It's definitely not just hype. In our company, we're using it to speed up some image processing algorithms which can now be applied in real time by just utilizing the <$100 video card in the PC. We are quite excited about this, as we would otherwise have to invest in expensive special-purpose hardware accelerators (which are usually obsolete by the time they're designed, so you spend the rest of their lifetime paying off hardware which has already lost its edge).

Perhaps the CUDA model in itself is a little clumsy, but the fact that it opens up commodity hardware to do some impressive work is very nice.

Computer Programming: An Introduction for the Scientifically Inclined [curly-brace.com]

Nothing new (0)

Anonymous Coward | more than 6 years ago | (#23283160)

I wonder what the hell I've been doing when I needed multiple threads to have a consistent view of state.

Oh wait, it's called SYNCHRONISATION.

I'm sure it's exciting to the OP and all, but hell, this is basic CS101 shit.

Er. (1)

Safiire Arrowny (596720) | more than 6 years ago | (#23283176)

So make it all synchronize to the lowest rate, the video of course.. We are talking about one game object after all.

In a real application, the audio/video must be calculated for many objects; the video is a fixed 30 or 60 fps, and the audio a fixed number of samples per second, perhaps CD-quality 44100 samples per second but likely less.

This synchronization is not unsolved. Every slice of game time has a budget: so many $SampleRate frames of audio divided among the game objects producing audio, and so many triangles drawn versus the number of triangles possible.

You take the lowest amount possible per slice of game time (here a second) and call that the target amount. You don't put more game objects than resources in your environment at a time; the AI can know this secret detail and not show up.

How does having more than one core ever help a game object to be expressed better? There is only AI left. Use the extra cores to display more objects, give the objects better AI, or make the objects sound and look better.

I don't understand the point of this article. (1)

destruk (1136357) | more than 6 years ago | (#23283234)

This tells me nothing. Why would you want a game (a common single-threaded application) to compete with your DivX compression and your ray-tracing Bryce3D application running in the background? Are they (Intel, AMD, IBM) all saying that we need to hook up 8 or 12 or 24 processor cores at 3GHz each to get an actual speed of 4GHz while each one waits around wasting processing cycles to get something to do? That is the lamest thing I've heard in a long time. I'd much rather have a SINGLE-CORE graphene processor at 12GHz than a quad-core or octo-core at 4GHz.

Re:I don't understand the point of this article. (1)

Safiire Arrowny (596720) | more than 6 years ago | (#23283264)

Even though I think this is a very speculative and information free article, if you imagine it in the domain of the PS3 console for example, where any time a core is not doing anything useful it is wasting potential, I guess you could see where they're coming from.

At least that is the idea I had while reading it, I wasn't thinking about running other cpu intensive PC apps at the same time as a game.

Re:I don't understand the point of this article. (1)

rdebath (884132) | more than 6 years ago | (#23283340)

But you can't have 12GHz; at that speed light goes about ONE INCH per clock cycle in a vacuum, anything else is slower, and signals in silicon are a lot slower.

So much slower that a modern single-core processor will have a lot of "execution units" to keep up with the instructions arriving at the 3GHz rate. These instructions are handed off to the units in parallel and the results drop out of the units "a few" clock cycles later. This is good except when the result of UnitA is needed before UnitB can start. At that point UnitB has to wait; Intel discovered that nearly half their execution units were waiting most of the time so they invented HyperThreading.

With HyperThreading the execution units are shared between two threads that the OS wants to run at the same time, which means more of the silicon is used and the machine is faster.

At this point clock speeds have hit a hard wall; they will continue to go up, but only in ratio to the feature size on the silicon. OTOH the number of gates goes up with the square of the feature size. I would expect the mass retail sale of 16 to 30 cores on a single piece of silicon before we hit 12GHz.

Of course that brings its own problems; refactoring a program into (say) 10 threads is easy when compared to 100 or 10000!

Re:I don't understand the point of this article. (2, Informative)

TheRaven64 (641858) | more than 6 years ago | (#23283762)

But you can't have 12GHz; at that speed light goes about ONE INCH per clock cycle in a vacuum, anything else is slower, and signals in silicon are a lot slower.

An inch is a long way on a CPU. A Core 2 die is around 11mm along the edge, so at 12GHz a signal could go all of the way from one edge to the other and back. It uses a 14-stage pipeline, so every clock cycle a signal needs to travel around 1/14th of the way across the die, giving around 1mm. If every signal needs to move 1mm per cycle and travels at the speed of light, then your maximum clock speed is 300GHz.

Of course, as you say, electric signals travel a fair bit slower in silicon than photons do in a vacuum, and you often have to go a quite indirect route due to the fact that wires can't cross on a CPU, so the practical speed might be somewhat lower.

Intel discovered that nearly half their execution units were waiting most of the time so they invented HyperThreading.
Minor nitpick, but actually IBM were the first to market with SMT, and they took it from a university research project. Intel didn't discover anything other than that their competitors were getting more instructions per transistor than them.

Re:I don't understand the point of this article. (1)

rdebath (884132) | more than 6 years ago | (#23284024)

Personally I would have guessed the speed to be over the current 3GHz (or so) but CPU companies haven't increased the clock rate for a long time and there are a lot of people (like the OP) who would pay top dollar for a faster clocked CPU.

SMT: Oh, the DEC Alpha was the first commercial CPU, was it? I thought about checking it after I posted! Still, HyperThreading is a somewhat special variant because only the absolute minimum of hardware is duplicated to allow SMT, making it reasonable to run both with and without the second thread.

Re:I don't understand the point of this article. (1)

smallfries (601545) | more than 6 years ago | (#23284050)

I think that you've oversimplified a tad too much. You are assuming instant switching time on your gates. Sure, light could propagate that fast in a vacuum, and electrons in a wire could do some comparable percentage. But a pipeline stage may have a combinatorial depth of several hundred gates, and once you subtract their switching time, signal propagation is a serious problem. The current range of Core2s has to use lots of fancy tricks (like asynchronous timing domains) to get around clock skew at 3GHz on an 11mm-square die.

Re:I don't understand the point of this article. (1)

rdebath (884132) | more than 6 years ago | (#23284178)

I'll have a minor nitpick too, please.

Electrons in a wire move at around 3 inches per hour; it's the signals that move at near lightspeed.

NVidia is doing that? an insult to INMOS... (4, Interesting)

master_p (608214) | more than 6 years ago | (#23283212)

Many moons ago, when most slashdotters were nippers, a British company named INMOS provided an extensible hardware and software platform [wikipedia.org] that solved the problem of parallelism, in many ways similar to CUDA.

Ironically, some of the first demos I saw using transputers were raytracing demos [classiccmp.org].

The problem of parallelism and the solutions available are quite old (more than 20 years), but it's only now that limits are being reached that we see the true need for them. But the true pioneer is not NVIDIA, because there were others long before them.

Re:NVidia is doing that? an insult to INMOS... (2, Interesting)

ratbag (65209) | more than 6 years ago | (#23283352)

That takes me back. My MSc project in 1992 was visualizing 3D waves on Transputers using Occam. Divide the wave into chunks, give each chunk to a Transputer, pass the edge case between the Transputers and let one of them look after the graphics. Seem to recall there were lots of INs and OUTs. A friend of mine simulated bungie jumps using similar code, with a simple bit of finite element analysis chucked in (the rope changed colour based on the amount of stretch).

Happy Days at UKC.

couldn't resist a quick Inmos story... (4, Interesting)

Fallen Andy (795676) | more than 6 years ago | (#23283432)

Back in the early 80's I was working in Bristol UK for TDI (who were the UCSD p-system licensees) porting it to various machines... Well, we had one customer who wanted a VAX p-system so we trotted off to INMOS's office and sat around in the computer room. (VAX 11/780 I think). At the time they were running Transputer simulations on the machine so the VAX p-system took er... about 30 *minutes* to start. Just for comparison an Apple ][ running IV.x would take less than a minute. Almost an hour to make a tape. (About 15 users running emulation I think). Fond memories of the transputer. Almost bought a kit to play with it... Andy

OMG Slashdot (0)

Anonymous Coward | more than 6 years ago | (#23283214)

> The concept of writing individual programs which run on multiple cores is called multi-threading

What the hell has happened to you, dear slashdot? I told you visual basic was bad for your brain...

New programming tools needed (3, Insightful)

maillemaker (924053) | more than 6 years ago | (#23283220)

When I came up through my CS degree, object-oriented programming was new. Programming was largely a series of sequentially ordered instructions. I haven't programmed in many years now, but if I wanted to write a parallel program I would not have a clue.

But why should I?

What is needed are new, high-level programming languages that figure out how to take a set of instructions and best interface with the available processing hardware on their own. This is where the computer smarts need to be focused today, IMO.

All computer programming languages, and even just plain applications, are abstractions from the computer hardware. What is needed are more robust abstractions to make programming for multiple processors (or cores) easier and more intuitive.

Re:New programming tools needed (1, Insightful)

Anonymous Coward | more than 6 years ago | (#23283266)

Erlang?

Re:New programming tools needed (1)

destruk (1136357) | more than 6 years ago | (#23283290)

I can agree with that. Any error that crashes 1 out of 20 or so concurrent threads, on multiple cores, using shared cache, is too complex for a mere human to figure out. After 30+ years programming single threaded applications, it will take a lot of new tools to make this happen.

Re:New programming tools needed (0)

Anonymous Coward | more than 6 years ago | (#23283330)

The programming model of CUDA is SIMD (single instruction multiple data), like SSE, but with much more D. It is not really parallel programming with complex dependencies, but doing the same thing to lots of data at the same time. If your program has complex dependencies between different tasks which do not involve huge gobs of data, then it probably won't translate to an efficient CUDA program.
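
A tiny sketch of why that is (hypothetical kernels, purely for illustration): when every thread applies the same statement to its own element the hardware stays fully busy, but the moment neighbouring threads want different code paths, a warp has to execute both branches one after the other and the speedup evaporates.

    /* Good fit for CUDA's SIMD model: one identical operation per element. */
    __global__ void scale(float *x, float k, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            x[i] *= k;                        /* every thread does the same thing */
    }

    /* Two made-up "heavy" element functions, just to have divergent work. */
    __device__ float heavyA(float v) { for (int j = 0; j < 64; ++j) v = v * 1.0001f + 1.0f; return v; }
    __device__ float heavyB(float v) { for (int j = 0; j < 64; ++j) v = v * 0.9999f - 1.0f; return v; }

    /* Poor fit: a data-dependent branch makes neighbouring threads diverge,
       so the warp serialises both paths. Still correct, just not fast. */
    __global__ void diverge(float *x, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        x[i] = (((int)x[i]) % 2 == 0) ? heavyA(x[i]) : heavyB(x[i]);
    }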

Re:New programming tools needed (2, Interesting)

TheRaven64 (641858) | more than 6 years ago | (#23283414)

There's only so much that a compiler can do. If you structure your algorithms serially then a compiler can't do much. If you write parallel algorithms then it's relatively easy for the compiler to turn it into parallel code.

There are a couple of approaches that work well. If you use a functional language, then you can use monads to indicate side effects and the compiler can implicitly parallelise the parts that are free from side effects. If you use a language like Erlang or Pict, based on a CSP or pi-calculus model, then you split your program into logically independent chunks with a message passing interface between them, and the compiler or runtime can schedule them independently.

More investment needed in e.g Erlang (3, Interesting)

Kupfernigk (1190345) | more than 6 years ago | (#23283538)

The approach used by Erlang is interesting as it is totally dependent on message passing between processes to achieve parallelism and synchronisation. To get real-time performance, the message passing must be very efficient. Messaging approaches are well suited to parallelism where the parallel processes are themselves CPU and data intensive, which is why they work well for cryptography and image processing. From this point of view alone, a parallel architecture using GPUs with very fast intermodule channels looks like a good bet.

The original Inmos Transputer was designed to solve such problems and relied on fast inter-processor links, and the AMD Hypertransport bus is a modern derivative.

So I disagree with you. The processing hardware is not so much the problem. If GPUs are small, cheap and address lots of memory, so long as they have the necessary instruction sets they will do the job. The issue to focus on is still interprocessor (and hence interprocess) links. This is how hardware affects parallelism.

I have on and off worked with multiprocessor systems since the early 80s, and always it has been fastest and most effective to rely on data channels rather than horrible kludges like shared memory with mutex locks. The code can be made clean and can be tested in a wide range of environments. I am probably too near retirement now to work seriously with Erlang, but it looks like a sound platform.

Re:More investment needed in e.g Erlang (2, Interesting)

jkndrkn (1092917) | more than 6 years ago | (#23283678)

> and always it has been fastest and most effective to rely on data channels rather than horrible kludges like shared memory with mutex locks.

While shared-memory tools like UPC and OpenMP are gaining ground (especially with programmers), I too feel that they are a step backwards. Message passing languages, especially Erlang, are better designed to cope with the unique challenges of computing on a large parallel computer due to their excellent fault tolerance features.

You might be interested in some work I did evaluating Erlang on a 16 core SMP machine:

http://jkndrkn.livejournal.com/205249.html [livejournal.com]

Quick summary: Erlang is slow, though using the Array module for data structure manipulation can help matters. Erlang could still be useful as a communications layer or monitoring system for processes written in C.

Yes, I read your paper (2, Interesting)

Kupfernigk (1190345) | more than 6 years ago | (#23284304)

It doesn't surprise me in the slightest. Erlang is designed from the ground up for pattern matching rather than computation, because it was designed for use in messaging systems - telecoms, SNMP, now XMPP. Its integer arithmetic is arbitrary precision, which prevents overflow in integer operations at the expense of performance. Its floating point is limited. My early work on a 3-way system used hand coded assembler to drive the interprocess messaging using hardware FIFOs, for Pete's sake, and that was as high performance as you could get - given the huge limitations of trying to write useful functions in assembler.

That in a nutshell is why I suggested that investment in Erlang would be a good idea. It's better to start with the right approach and optimise it, than go off into computer science blue sky and try to design a perfect language for paralleling GPUs - which practically nobody will ever really use.

Re:More investment needed in e.g Erlang (0)

Anonymous Coward | more than 6 years ago | (#23284438)

Eh, the message passing architectures always have higher overhead than the simpler ones. Just look at the many overhauls OSes such as Mach have had (which still doesn't scale to high parallelism). Shared memory IPC is the one that works fastest in the real world within a machine; message passing is for infrequent communication on very slow channels such as a network.

Re:New programming tools needed (4, Interesting)

maraist (68387) | more than 6 years ago | (#23283880)

Consider that if you've ever done UNIX programming, you've been doing MT programming all along - just by a different name: multi-processing. Pipelines are, IMO, the best implementation of parallel programming (and UNIX is FULL of pipes). You take a problem and break it up into wholly independent stages, then multi-process or multi-thread the stages. If you can split the problem up using message-passing then you can farm the work out to decoupled processes on remote machines, and you get farming / clustering. Once you have the problem truly clustered, then multi-threading is just a cheaper implementation of multi-processing (less overhead per worker, fewer physical CPUs, etc).

Consider this parallel programming pseudo-example

find | tar | compress | remote-execute 'remote-copy | uncompress | untar'

This is a 7 process FULLY parallel pipeline (meaning non-blocking at any stage - every 512 bytes of data passed from one stage to the next gets processed immediately). This can work with 2 physical machines that have 4 processing units each, for a total of 8 parallel threads of execution.

Granted, it's hard to construct a UNIX pipe that doesn't block.. The following variation blocks on the xargs, and has less overhead than separate tar/compress stages but is single-threaded

find name-pattern | xargs grep -l contents-pattern | tar-gzip | remote-execute 'remote-copy | untar-unzip'

Here the message-passing are serialized/linearized data.. But that's the power of UNIX.

In CORBA/COM/GNORBA/Java-RMI/c-RPC/SOAP/HTTP-REST/ODBC, your messages are 'remoteable' function calls, which serialize complex parameters; much more advanced than a single serial pipe/file-handle. They also allow synchronous returns. These methodologies inherently have 'waiting' worker threads.. So it goes without saying that you're programming in an MT environment.

This class of Remote-Procedure-Calls is mostly for centralization of code or central synchronization. You can't block on a CPU mutex that's on another physically separate machine.. But if you RPC to a central machine with a single variable mutex then you can.. DB locks are probably more common these days, but it's the exact same concept - remote calls to a central locking service.

Another benefit in this class of IPC (Inter Process Communication) is that a stage or segment of the problem is handled on one machine.. But a pool of workers exists on each machine.. So while one machine is blocking, waiting for a peer to complete a unit of work, there are other workers completing their stage.. At any given time on every given CPU there is a mixture of pending and processing threads. So while a single task isn't completed any faster, a collection of tasks takes full advantage of every CPU and physical machine in the pool.

The above RPC type models involve explicit division of labor. Another class are true opaque messages.. JMS, and even UNIX's 'ipcs' Message Queues. In Java it's JMS. The idea is that you have the same workers as before, but instead of having specific UNIQUE RPC URI's (addresses), you have a common messaging pool with a suite of message-types and message-queue-names. You then have pools of workers that can live ANYWHERE which listen to their queues and handle an array of types of pre-defined messages (defined by the application designer). So now you can have dozens or hundreds of CPUs, threads, machines all symmetrically passing asynchronous messages back and forth.

To my knowledge, this is the most scaleable type of problem.. You can take most procedural problems and break them up into stages, then define a message-type as the explicit name of each stage, then divide up the types amongst different queues (which would allow partitioning/grouping of computational resources), then receive-message/process-message/forward-or-reply-message. So long as the amount of work far exceeds the overhead of message passing, you can very nicely scale with the amount of hardware you can throw at the problem.

For true multi-threaded programming, there are highly efficient message passing systems.. You don't have to persist the message, nor do you even have to copy it - you merely pass a reference to the message container, as all threads share the same memory-space. A linked-list queue is very efficient to synchronize against for adding-to/pulling-from.
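
For the in-process case, here is a bare-bones sketch of such a queue (a host-side C++ illustration; names are mine): producers hand over messages, consumers block until one arrives, and the mutex is held only long enough to touch the underlying queue.

    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    /* Minimal thread-safe message queue: messages are handed over by
       reference/move within one address space, never serialized. */
    template <typename T>
    class MessageQueue {
    public:
        void push(T msg) {
            {
                std::lock_guard<std::mutex> lock(m_);   /* hold the lock briefly */
                q_.push(std::move(msg));
            }
            cv_.notify_one();                           /* wake one waiting consumer */
        }
        T pop() {                                       /* blocks until a message exists */
            std::unique_lock<std::mutex> lock(m_);
            cv_.wait(lock, [this] { return !q_.empty(); });
            T msg = std::move(q_.front());
            q_.pop();
            return msg;
        }
    private:
        std::queue<T> q_;
        std::mutex m_;
        std::condition_variable cv_;
    };

    int main() {
        MessageQueue<std::string> queue;
        std::thread consumer([&] {
            for (int i = 0; i < 3; ++i)
                std::printf("got: %s\n", queue.pop().c_str());
        });
        queue.push("stage-1 done");                     /* hypothetical stage names */
        queue.push("stage-2 done");
        queue.push("shutdown");
        consumer.join();
        return 0;
    }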

The ideal is to have as many workers as you have physical processing units.. And this is the great thing about clustered or message-passing systems.. You can have as few as one, or as many as a thousand workers for any given stage.. So a deployment engineer (separate from the designer) can tweak the number of threads allocated to each stage / RPC unit based upon the physical deployment configuration (8 core machine, dual core, single-core, etc).

Now while I'm mostly talking about web-services, database-driven frameworks, and enterprise-class systems, the same applies to home-grown UNIX nightly cron-tasks. Backups, nightly updating file-indexing programs, web browsers, or merely trying to copy a LOT of files over a slow network. Splitting up sequential tasks into pipelinable stages is extremely powerful (Ford knew it umpteen years ago). And is often accessible to the general public.

But with the exception of overcoming latency and keeping pipeline buffers small, traditional UNIX pipelines are slower on single CPU architectures than multi-CPU. By this I mean 'tar cvzf' combines two stages into one, which is optimized for a single CPU. But what if you wanted to add an encryption stage. 'tar czf - | openssl'. Now we have potentially two CPUs active 100% of the time. Two competing processes on the same CPU with no IO-wait are slower than a single process, unless you have multiple CPUs. With that, then having 3 CPUs would be best for a script like 'tar cf - | gzip | openssl'.

Now what this slashdot community seems to whine about the most is an entirely different class of programming problems: non-symmetric, non-message-isolated, shared-active-resource models. I don't have a good fancy name to throw at you. This is complex and VERY buggy stuff. I really like Java for this type of work, because you can create safe exception-friendly synchronized 'blocks' of code. All that you need to do is make sure you enter and exit those blocks in an orderly fashion (always enter synchronized regions in the same hierarchical order). The problem is that this very often becomes a MASSIVE bottleneck. The bottleneck is as follows:

average_wait_time = number_concurrent_process * min_lock_time
This is to say that if you have a common piece of locking code that can take min_lock_time milliseconds, there is a high probability that two threads will eventually hit it.. If a third also hits it, it has to sit at the back of the line until both the first and second thread finish.. But the second thread can't even start until the first is done.. And so on with the fourth, fifth, etc.. Until at 64 threads, you don't have millisecond wait, but second wait... And even if the probability of two threads hitting the same code is VERY small.. If it ever does happen, you double, quadruple, etc, the probability for other threads to pile up.

To overcome this, there are various techniques that aren't as 'elegant' but don't have this sort of characteristic.. There are special atomic CPU operations which don't involve blocking: simple numeric operations, test-and-set, semaphores.. NEVER use global variables except for these 'atomic' operators. Use an if-statement with a test-and-set type atomic operation, then execute one of multiple paths on an isolated suite of global variables. Use 'thread-locals' if your programming language facilitates it (namely globally addressable variables that are stored relative to your thread's stack segment, so there's guaranteed zero collision). These types of operations are bug-free for seasoned programmers.. Styles of programming that are HIGHLY safe.. And you can write good code analyzers to validate that there aren't unsafe uses of global variables (though it's harder to validate against concurrent use of referenced variables (class objects, pass-by-reference / pointers)).
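
One concrete shape of that "atomic test-and-set instead of a lock" advice (a host-side C++ sketch with made-up names): a single atomic flag decides which thread performs the one-time work, everything else stays thread-local, and nobody ever blocks.

    #include <atomic>
    #include <cstdio>
    #include <thread>
    #include <vector>

    std::atomic<bool> claimed(false);        /* the only "global" is an atomic flag */
    std::atomic<int>  counter(0);            /* plain counting needs no lock either */

    void worker(int id)
    {
        /* Atomic test-and-set: exactly one thread sees 'false' and wins. */
        bool expected = false;
        if (claimed.compare_exchange_strong(expected, true))
            std::printf("thread %d does the one-time setup\n", id);

        /* Everything else stays in thread-local variables... */
        int local = 0;
        for (int i = 0; i < 1000; ++i) local++;

        /* ...and is published with one atomic add at the end: no mutex, no waiting. */
        counter.fetch_add(local);
    }

    int main()
    {
        std::vector<std::thread> pool;
        for (int id = 0; id < 8; ++id) pool.emplace_back(worker, id);
        for (auto &t : pool) t.join();
        std::printf("total = %d\n", counter.load());   /* expect 8000 */
        return 0;
    }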

Finally are the ultra-high performance explicit lock and unlock operations.. Latches, etc which are highly buggy.. If one process forgets to count-down, unlatch, restore a consumed semaphore slot, unlock something.. Or does any of the above in the wrong order, then Lord help you debug your code.

In my code, I try to keep this last class to an absolute minimum. Something like farming out 300 simultaneous RSS web feeds, then latching until they all complete - then writing a periodic timer which validates that all spawned threads are still in fact running with open TCP connections. It's really trial-by-error seasoned programming.. You don't even think of all the worst cases until you experience them, one by one. Hell, we've had our stateful firewall drop the connection in such a way that the OS reported that the socket was still established, yet nothing was ever going into or out of that pipe again. How do you code against such errors? Have a periodic process scrub the /proc/net/ip_conntrack table? In Java code??

The point is, the further you deviate from symmetric, pipelineable code, the more difficult the challenge. We only did the above because we wanted more control than generic symmetric JMS provided. Targeted optimization.

Re:New programming tools needed (1)

ultranova (717540) | more than 6 years ago | (#23284588)

Consider that if you've ever done UNIX programming, you've been doing MT programming all along - just by a different name: multi-processing. Pipelines are, IMO, the best implementation of parallel programming (and UNIX is FULL of pipes).

Unix pipes are a very primitive example of a dataflow language [wikipedia.org].

Re:New programming tools needed (1)

Tim Browse (9263) | more than 6 years ago | (#23284134)

When I came up through my CS degree, object-oriented programming was new. Programming was largely a series of sequentially ordered instructions. I haven't programmed in many years now, but if I wanted to write a parallel program I would not have a clue.

But why should I?

What is needed are new, high-level programming languages that figure out how to take a set of instructions and best interface with the available processing hardware on their own. This is where the computer smarts need to be focused today, IMO.
Crikey, when was your CS degree? Mine was a long time ago, yet I still learned parallel programming concepts (using the occam [wikipedia.org] language).

Re:New programming tools needed (1)

Kjella (173770) | more than 6 years ago | (#23285072)

Yes, they could be better, but the problem isn't going to go away entirely. When you run a single-threaded application from A to Z you only need to consider sequence. When you make a multi-threaded application you have to specify not only the sequence but also the choke points where the state must be consistent. There are already languages that make it a lot easier to fit into the "pool" design pattern, where you have a pool of tasks and a pool of resources (threads) to handle them; that works when you've got static parallelization, but it doesn't cover anywhere near all the issues.
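As a rough illustration, the pool pattern in plain Java is just a fixed pool of worker threads draining a queue of independent tasks, with a single join point at the end (the task bodies here are trivial placeholders):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // The "pool" pattern: a fixed pool of worker threads drains a queue of
    // independent tasks. Fine for static parallelism; useless once the tasks
    // need to talk to each other mid-flight.
    public class PoolPattern {
        public static void main(String[] args) throws Exception {
            ExecutorService workers = Executors.newFixedThreadPool(
                    Runtime.getRuntime().availableProcessors());

            List<Callable<Long>> tasks = new ArrayList<>();
            for (int i = 0; i < 1000; i++) {
                final long n = i;
                tasks.add(() -> n * n);          // each task is self-contained
            }

            long sum = 0;
            for (Future<Long> f : workers.invokeAll(tasks)) {
                sum += f.get();                  // results collected at a single join point
            }
            workers.shutdown();
            System.out.println(sum);
        }
    }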

Imagine you're doing a physics simulation with each thread handling an object, but if two objects come too close you need to simulate the interaction as well. Now the parallelism is dynamic, with objects moving in and out of different interactions and plenty of messaging back and forth to do collision detection. The compiler is never going to figure that out - colliding objects and ethereal objects passing through each other are both "valid" solutions as far as it knows.

Uh, what a crap (4, Informative)

udippel (562132) | more than 6 years ago | (#23283240)

"News for Nerds, Stuff that matters".
But not if posted by The Ignorant.

What if the developer gives the character movement tasks its own thread, but it can only be rendered at 400 fps. And the developer gives the 3D world drawer its own thread, but it can only be rendered at 60 fps. There's a lot of waiting by the audio and character threads until everything catches up. That's called synchronization.

If a student of mine wrote this, a Fail would be the immediate consequence. How can 400 fps be 'only'? And why is threading bad, if the character movement is ready after 1/400 of a second? There is not 'a lot of waiting'; instead, there are a lot of spare cycles to calculate something else. And 'waiting' is not 'synchronisation'.
[The audio-rate of 7000 fps gave the author away; and I stopped reading. Audio does not come in fps.]

While we all agree on the problem of synchronisation in parallel programming, and maybe especially in the gaming world, we should not allow uninformed blurb on Slashdot.

Re:Uh, what a crap (1)

destruk (1136357) | more than 6 years ago | (#23283300)

Samples per second would be more accurate.

Re:Uh, what a crap (1)

maxume (22995) | more than 6 years ago | (#23283326)

I'm pretty sure it means "fixed at 400 fps" rather than "just 400 fps".

Re:Uh, what a crap (0)

Anonymous Coward | more than 6 years ago | (#23283466)

[The audio-rate of 7000 fps gave the author away; and I stopped reading. Audio does not come in fps.]

My guess is that the author tried to find a common base unit to compare computing time required by different tasks. Rather than using time (e.g. milliseconds) per task, he chose to use frames per second which may be easier to understand by the average gamer. Both units indicate how processing intensive a task is.

Re:Uh, what a crap (1)

Quixote (154172) | more than 6 years ago | (#23284432)

While I agree that the "article" was by a nitwit, I do have to quibble about something you wrote.

How can 400 fps be 'only'?

You are responding to the following (hypothetical) statement:
but it can be rendered at only 400 fps

Which is different from the one written:
but it can only be rendered at 400 fps

See the difference?

Re:Uh, what a crap (0)

Anonymous Coward | more than 6 years ago | (#23284842)

>[The audio-rate of 7000 fps gave the author away; and I stopped reading. Audio does not come in fps.]

Well not to argue, but digital audio for soundtracks often is expressed in terms of frames.

hardware encoded timestamps (0)

Anonymous Coward | more than 6 years ago | (#23283382)

Oh and could you figure out some way to timestamp FPS game captures for the upcoming olympic video games?

yawn (1)

nguy (1207026) | more than 6 years ago | (#23283408)

Except for being somewhat more cumbersome to program and less parallel than previous hardware, there is nothing really new about the nVidia parallel programming model. And their graphics-oriented approach means that their view of parallelism is somewhat narrow.

Maybe nVidia will popularize parallel programming, maybe not. But I don't see any "shake-up" or breakthroughs there.

Nvidia should just put out their own OS (1)

Latinhypercube (935707) | more than 6 years ago | (#23283416)

Why not start again with a massively parallel GPU, skipping all the years of catch-up that will be necessary with multi-core CPUs? Make an OS for your chips...

Is this why there's no OpenGL 3.0? (1)

zackhugh (127338) | more than 6 years ago | (#23283610)

NVidia is one of the major voices in the Khronos Group, the organization that promised to release the OpenGL 3.0 API over six months ago. The delay is embarrassing, and many are turning to DirectX.

It occurs to me that NVidia may not want OpenGL to succeed. Maybe they're holding up OpenGL development to give CUDA a place in the sun. Does anyone else get the same impression?

Re:Is this why there's no OpenGL 3.0? (1)

mikael (484) | more than 6 years ago | (#23283664)

Delays are mainly due to disagreements between different vendors rather than any one company wanting to slow the show down.

Look at the early OpenGL registry extension specifications - vendors couldn't even agree on what vector arithmetic instructions to implement.

Re:Is this why there's no OpenGL 3.0? (2, Insightful)

johannesg (664142) | more than 6 years ago | (#23283726)

NVidia has every reason to want OpenGL to succeed - if it doesn't, Microsoft will rule supreme over the API to NVidia's hardware, and that isn't a healthy situation to be in. As it is, OpenGL gives them some freedom to do their own thing.

However, having mentioned Microsoft... if *someone* does not want OpenGL to succeed, it's them. If and when OpenGL 3.0 ever appears, I bet there will be talk of some "unknown party" threatening patent litigation...

Destroying OpenGL is of paramount importance to Microsoft, since it will grant them total dominance over 3D graphics. Apple, Linux, Sony (PS3), and other vendors that rely on OpenGL will completely lose their ability to compete.

CUDA is limiting, not liberating (4, Informative)

njord (548740) | more than 6 years ago | (#23283632)

From my experience, CUDA was much harder to take advantage of than multi-core programming. CUDA requires you to use a specific programming model that can make it difficult to exploit the full hardware. The restricted caching scheme makes memory management a pain, and the global synchronization mechanism is very crude - there's a barrier after each kernel execution, and that's it. It took me a week to port ('parallelize') some simple code I had written to CUDA, whereas it took me an hour or so to add the OpenMP statements to my 'reference' CPU code. Sorry Nvidia - there is no silver bullet. By making some parts of parallel programming easy, you make others hard or impossible.

Re:CUDA is limiting, not liberating (1)

ameline (771895) | more than 6 years ago | (#23283852)

Mod parent up... His is one of the best on this topic.

Re:CUDA is limiting, not liberating (1, Interesting)

Anonymous Coward | more than 6 years ago | (#23283984)

You make a good point: The data-parallel computing model used in CUDA is very unfamiliar to programmers. You might read the spec sheet and see "128 streaming processors" and think that is the same as having 128 cores, but it is not. CUDA inhabits a world somewhere between SSE and OpenMP in terms of task granularity. I blame part of this confusion on Nvidia's adoption of familiar sounding terms like "threads" and "processors" for things which behave nothing like threads and processors from a multicore programming perspective. Managing people's expectations is an important part of marketing a new tech, and hype can lead to anger.

That said, for a truly data parallel calculation, CUDA blows any affordable multicore solution out of the water. By removing some of the flexibility, GPUs can spend their transistor budget on more floating point units and bigger memory buses.

OpenMP is nice, but it will be a while before multicore CPUs can offer you 128 (single precision) floating point units fed by a 70 GB/sec memory bus. :)

(I'm willing to forgive more of the limitations of CUDA because now putting a $350 card into a workstation gives us a 10x performance improvement in a program we use very frequently. Speed-ups in the range of 10x to 40x are pretty common for the kind of data parallel tasks that CUDA is ideal for. If you only see a 2x or 3x improvement, you are probably better off with OpenMP and/or SSE.)

Ah, but that's the point! (1)

nxmehta (784271) | more than 6 years ago | (#23285086)

The entire reason why CUDA works and is powerful is exactly because it is limited. Nvidia knows that there is no silver bullet. They're not claiming that this is one (David Kirk has said so himself at conferences). CUDA is a fairly elegant way of mapping embarrassingly data parallel programs to a large array of single precision FP units. If your problem fits into the model, the performance you get via CUDA will smoke just about anything else (except maybe an FPGA in some scenarios).

Your notion about particular models making some parts of parallel programming easy while other parts are hard is what people really need to learn to accept about parallel programming. If you're expecting a single model to make everything easy for you, trust me, stop programming right now.

You need to pick the programming model that matches the parallelism in your application - there will never be one solution. When sitting down to write code, you have to ask yourself: what is the right model for this algorithm? Is it:

Data parallel (SIMD, Vector)
Message Passing
Actors
Dataflow
Transactional
Streaming (pipe and filter)
Sparse Graph
Etc...

There are many models out there, and many languages + hardware substrates for these models that will give you orders-of-magnitude speedups for parallel programs. The key is just to sit down, think about the problem, and pick the right one (or combination).

The real research focus in parallel programming should be to make a taxonomy of models and start coming up with a unified infrastructure to support intelligent selection of models, mixing and matching, and compilation.

Blog spam. Link to actual article. Nvidia loss? (2, Interesting)

Futurepower(R) (558542) | more than 6 years ago | (#23284520)

Avoid the blog spam. This is the actual article in EE Times: Nvidia unleashes Cuda attack on parallel-compute challenge [eetimes.com].

Nvidia is showing signs of being poorly managed. CUDA [cuda.com] is a registered trademark of another hi-tech company.

The underlying issue is apparently that Nvidia will lose most of its mid-level business when AMD/ATI and Intel/Larrabee begin shipping integrated graphics. Until now, Intel integrated graphics has been so limited as to be useless in many mid-level applications. Nvidia hopes to replace some of that loss with sales to people who want to use their GPUs to do parallel processing.

No one is going to "solve" the problem (1)

swillden (191260) | more than 6 years ago | (#23284744)

Multi-threaded programming is a fundamentally hard problem, as is the more general issue of maximally efficient scheduling of any dynamic resource. No one idea, tool or company is going to "solve" it. What will happen is that lots of individual ideas, approaches, tools and companies will individually address little parts of the problem, making it incrementally easier to produce efficient multi-threaded code. Some of these approaches will work together, others will be in opposition, there will be engineering tradeoffs to be made (particularly between efficiency of execution and ease of development), and the incremental improvements will not so much make it easier to do multi-threaded programming as make it feasible to attack more complex problems.

Pretty much just like the history of every other part of software development.

Re:No one is going to "solve" the problem (0)

Anonymous Coward | more than 6 years ago | (#23284982)

Exactly! I couldn't agree more...

Multi-threaded, parallel algorithm development is anything but easy - however, it's a mighty powerful approach to many problems.
While CUDA may not be the perfect solution to every problem, it is yet another tool which I can use (as a software engineer) to solve complex problems.

One thing that's nice about NVIDIA putting out CUDA is that it's free. If you already have a high-endish card you can make use of it. Yeah, it only runs on NVIDIA hardware, and right now it only works on some of their cards, but that will change. As time goes on, every NVIDIA card will be able to make use of it, and something like an OpenGL for GPGPU will evolve (boy, wouldn't that be nice). Hell, my OS X laptop can already use it for parallel-applicable work. I now have a bunch of additional processors on my machine which I can use for a TON of things, and I didn't have to buy anything... Fabulous!

Life is not synchronous - parallel is the future.

Reminds me of the OLD stories I used to hear... (3, Interesting)

JRHelgeson (576325) | more than 6 years ago | (#23284920)

I live in Minnesota, home of the legendary Cray Research. I've met with several old timers that developed the technologies that made the Cray Supercomputer what it was. Hearing about the problems that multi-core developers are facing today reminds me of the stories I heard about how the engineers would have to build massive cable runs from processor board to processor board to memory board just to synchronize the clocks and operations so that when the memory was ready to read or write data, it could tell the processor board... half a room away.

As I recall:
The processor, as it was sending the data to the bus, would have to tell the memory to get ready to read data through these cables. The "cables hack" was necessary because the cable path was shorter than the data bus path, so the memory would get the signal just a few ms before the data arrived at the bus.

These were fun stories to hear, but seeing the development challenges we now face in parallel programming on multi-core processors gives me a whole new appreciation for those old-timers. These are old problems that have been dealt with before, just not at this scale. I guess it's true what they say: history always repeats itself.

Why?? (0)

Anonymous Coward | more than 6 years ago | (#23284954)

I always wondered why the parallelization problem couldn't be solved using a concept similar to TCP/IP. If you think of CPU instructions as "packets" and assign them sequence numbers, then the CPU can keep track of the order in which the instruction results should come out. In addition to the L1 and L2 caches, there would be a buffer to hold the results of CPU instructions until they can be streamed out in the correct order. In essence, it would look like a single-core CPU to the outside world but, using buffers and sequencing tricks, perform the work in parallel.
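As a toy illustration of that idea (purely hypothetical, and nothing like how real hardware implements it), here is a Java sketch where work finishes out of order on several threads, results are parked in a buffer keyed by sequence number, and a single drain loop emits them in order, so the outside world sees sequential output:

    import java.util.Map;
    import java.util.concurrent.ConcurrentSkipListMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ThreadLocalRandom;

    public class ReorderBuffer {
        public static void main(String[] args) throws InterruptedException {
            final int total = 20;
            ExecutorService pool = Executors.newFixedThreadPool(4);
            Map<Integer, String> completed = new ConcurrentSkipListMap<>();

            for (int seq = 0; seq < total; seq++) {
                final int s = seq;
                pool.submit(() -> {
                    try {
                        Thread.sleep(ThreadLocalRandom.current().nextInt(50)); // finishes out of order
                    } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                    completed.put(s, "result " + s);                           // parked by sequence number
                });
            }

            // Drain in sequence order, waiting whenever the next result isn't in yet.
            int next = 0;
            while (next < total) {
                String r = completed.remove(next);
                if (r == null) { Thread.sleep(1); continue; }   // crude wait keeps the sketch short
                System.out.println(r);
                next++;
            }
            pool.shutdown();
        }
    }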