
Is Parallel Programming Just Too Hard?

kdawson posted more than 7 years ago | from the Moore's-Law-for-software dept.

Software 680

pcause writes "There has been a lot of talk recently about the need for programmers to shift paradigms and begin building more parallel applications and systems. The need to do this, and the hardware and systems to support it, have been around for a while, but we haven't seen a lot of progress. The article says that gaming systems have made progress, but MMOGs are typically years late, and I'll bet part of the problem is trying to be more parallel/distributed. Since this discussion has been going on for over three decades with little progress in terms of widespread change, one has to ask: is parallel programming just too difficult for most programmers? Are the tools inadequate, or is it simply very difficult to reason about parallel systems? Maybe it is a fundamental human limit. Will we really see progress in the next 10 years that matches the progress of the silicon?"


Nope. (2, Insightful)

Anonymous Coward | more than 7 years ago | (#19305169)

Parallel programming isn't all that hard; what is difficult is justifying it.

What's hard, is trying to write multi-threaded java applications that work on my VT-100 terminal. :-)

Re:Nope. (5, Interesting)

lmpeters (892805) | more than 7 years ago | (#19305295)

It is not difficult to justify parallel programming. Ten years ago, it was difficult to justify because most computers had a single processor. Today, dual-core systems are increasingly common, and 8-core PCs are not unheard of. And software developers are already complaining because it's "too hard" to write parallel programs.

Since Intel is already developing processors with around 80 cores [intel.com] , I think that multi-core (i.e. multi-processor) processors are only going to become more common. If software developers intend to write software that can take advantage of current and future processors, they're going to have to deal with parallel programming.

I think that what's most likely to happen is we'll see the emergence of a new programming model, which allows us to specify an algorithm in a form resembling a Hasse diagram [wikipedia.org], where each point represents a step and each edge represents a dependency, so that a compiler can recognize what can and cannot be done in parallel and set up multiple threads of execution (or some similar construct) accordingly.
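
Something in that spirit can already be approximated with today's libraries. A minimal Java sketch, assuming CompletableFuture and made-up step names, not any actual dependency-graph compiler: independent steps are submitted as futures, and a step with two incoming edges is expressed as a combination of its prerequisites, so the runtime overlaps whatever the declared graph allows.

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class DependencyGraphDemo {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(4);

            // Steps A and B have no edge between them, so they may run in parallel.
            CompletableFuture<Integer> a = CompletableFuture.supplyAsync(() -> loadPart("A"), pool);
            CompletableFuture<Integer> b = CompletableFuture.supplyAsync(() -> loadPart("B"), pool);

            // Step C has two incoming edges (A and B), so it is scheduled
            // only after both prerequisites complete.
            CompletableFuture<Integer> c = a.thenCombine(b, Integer::sum);

            System.out.println("result = " + c.get());
            pool.shutdown();
        }

        // Hypothetical leaf step standing in for real work.
        private static int loadPart(String name) {
            return name.hashCode() & 0xff;
        }
    }

The difference from the model described above is that the programmer still spells out the edges by hand; the scheduling, however, falls out of the declared dependencies.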

Re:Nope. (5, Interesting)

poopdeville (841677) | more than 7 years ago | (#19305465)

I think that what's most likely to happen is we'll see the emergence of a new programming model, which allows us to specify an algorithm in a form resembling a Hasse diagram, where each point represents a step and each edge represents a dependency, so that a compiler can recognize what can and cannot be done in parallel and set up multiple threads of execution (or some similar construct) accordingly.

This is more-or-less how functional programming works. You write your program using an XML-like tree syntax. The compiler utilizes the tree to figure out dependencies. See http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-10.html#%25_sec_1.1.5 [mit.edu]. More parallelism can be drawn out if the interpreter "compiles" as yet unused functions while evaluating others. See the following section.

Re:Nope. (5, Interesting)

Lost Engineer (459920) | more than 7 years ago | (#19305587)

It is still difficult to justify if you can more easily write more efficient single-threaded apps. What consumer-level apps out there really need more processing power than a single core of a modern CPU can provide? I already understand the enterprise need. In fact, multi-threaded solutions for enterprise and scientific apps are already prevalent, that market having had SMP for a long time.

Re:Nope. (1)

creimer (824291) | more than 7 years ago | (#19305309)

Didn't "time slicing" take care of the VT-100 terminal problem?

Re:Nope. (3, Funny)

dch24 (904899) | more than 7 years ago | (#19305443)

Parallel programming isn't all that hard
Then why is it that (as of right now) all the up-modded posts are laid out sequentially down the comment tree?

Re:Nope. (4, Insightful)

Gorshkov (932507) | more than 7 years ago | (#19305651)

Then why is it that (as of right now) all the up-modded posts are laid out sequentially down the comment tree?
Because one of the things TFM neglects to mention is that parallel programming, like any other programming method, is suitable for some things and not for others... and the hard reality is that most application programs you see on the desktop are basically serial, simply because of the way PEOPLE process tasks & information.

There is a very real limit as to how much you can parallelize standard office tasks.

Yes. (-1, Redundant)

Anonymous Coward | more than 7 years ago | (#19305171)

Yes.

our brains aren't wired to think in parallel (5, Insightful)

rritterson (588983) | more than 7 years ago | (#19305175)

I can't speak for the rest of the world, or even the programming community. That disclaimer spoken, however, I can say that parallel programming is indeed hard. The trivial examples, like simply running many processes in parallel that are doing the same thing (as in, for example, Monte Carlo sampling), are easy, but the more difficult examples of parallelized mathematical algorithms I've seen, such as those in linear algebra, are difficult to conceptualize, let alone program. Trying to manage multiple threads and process communication in an efficient way when actually implementing it adds an additional level of complexity.
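
For the trivial case, here is a minimal Java sketch of Monte Carlo style parallelism (estimating pi; the sample count is arbitrary): every sample is independent, so the only coordination needed is the final count.

    import java.util.concurrent.ThreadLocalRandom;
    import java.util.stream.LongStream;

    public class MonteCarloPi {
        public static void main(String[] args) {
            long samples = 10_000_000L;

            // Each sample is independent: no shared mutable state,
            // just a count reduced at the end, so this parallelizes trivially.
            long hits = LongStream.range(0, samples)
                    .parallel()
                    .filter(i -> {
                        double x = ThreadLocalRandom.current().nextDouble();
                        double y = ThreadLocalRandom.current().nextDouble();
                        return x * x + y * y <= 1.0;
                    })
                    .count();

            System.out.printf("pi ~= %.5f%n", 4.0 * hits / samples);
        }
    }

The hard cases the parent describes are precisely the ones where the samples (or matrix blocks) are not independent and the communication pattern has to be designed by hand.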

I think the biggest reason why it is difficult is that people tend to process information in a linear fashion. I break large projects into a series of chronologically ordered steps and complete one at a time. Sometimes if I am working on multiple projects, I will multitask and do them in parallel, but that is really an example of trivial parallelization.

Ironically, the best parallel programmers may be those good managers, who have to break exceptionally large projects into parallel units for their employees to simultaneously complete. Unfortunately, trying to explain any sort of technical algorithm to my managers usually elicits a look of panic and confusion.

Re:our brains aren't wired to think in parallel (0)

Anonymous Coward | more than 7 years ago | (#19305221)

our brains aren't wired to think in parallel

Very ironic, since they are so good at it. [wikipedia.org]

I would also wonder if most programmers are just too left-brained, since non-linear big-picture stuff is more the domain of the right brain (at least in common knowledge...feel free to correct me with updated technical information).

Re:our brains aren't wired to think in parallel (4, Insightful)

buswolley (591500) | more than 7 years ago | (#19305389)

Our brains are massively parallel, but we do not consciously attend to more than a couple of things at a time.

Re:our brains aren't wired to think in parallel (5, Informative)

Anonymous Coward | more than 7 years ago | (#19305223)

I do a lot of multithreaded programming; this is my bread and butter, really. It is not easy - it takes a specific mindset, though I would disagree that it has much to do with management. I am not a manager, never was one and never will be. It requires discipline and careful planning.

That said, parallel processing is hardly a holy grail. On one hand, everything is parallel processing (you are reading this message in parallel with others, aren't you?). On the other, when we are talking about a single computer running a specific program, parallel usually means "serialized but switched really fast". At most there is a handful of processing units. That means that whatever you are splitting among these units has to lend itself well to being split that number of ways. Do more, and you are overloading one of them; do less, and you are underutilizing resources. In the end, it is often easier to do the processing serially. The potential performance advantage is not always very high (I know, I get paid to squeeze out this last bit), and is usually more than offset by the difficulty of maintenance.

Re:our brains aren't wired to think in parallel (2, Insightful)

bladesjester (774793) | more than 7 years ago | (#19305311)

you are reading this message in parallel with others, aren't you?

I don't know about the rest of Slashdot, but I read comments in a linear fashion - one comment, then the next comment, etc. Most people that I have known read in a linear fashion.

Walking and chewing gum is a parallel process. Reading is generally linear.

Re:our brains aren't wired to think in parallel (5, Insightful)

poopdeville (841677) | more than 7 years ago | (#19305593)

That's debatable. You don't look at text one letter at a time to try to decipher the meaning, do you? You probably look at several letters, or even words, at a time. Understanding how the letters or words relate (spatially, syntactically, semantically) is a parallel process. Unless you're a very slow reader, your eyes have probably moved on from the words you're interpreting before you've understood their meaning. This is normal. This is how you establish a context for a particular word or phrase -- by looking at the surrounding words. Another parallel process.

Every process is serial from a broad enough perspective. Eight hypothetical modems can send 8 bits per second. Or are they actually sending a single byte?

Re:our brains aren't wired to think in parallel (2, Insightful)

bladesjester (774793) | more than 7 years ago | (#19305647)

If you want to make that argument, you might as well make the argument that a computer doing the simple addition of two numbers is doing parallel processing.

It could also be stated in a twisted manner of your view - looked at narrowly enough, anything can be considered to be parallel. However, realistically, we know that isn't really the case.

Our brains run in parallel but think in serial (1)

Samarian Hillbilly (201884) | more than 7 years ago | (#19305521)

The basic problem is one of optimization. THERE WOULD BE ABSOLUTELY NO REASON TO EVER PROGRAM IN PARALLEL IF COMPUTERS WERE FAST ENOUGH. This means that there needs to be a separation of concerns between the algorithm, expressed serially, and its parallel optimization. So far, automatic parallelization has had only limited success, and there is no reason to believe this will change in the near future. This suggests a three-fold architecture.
1) A parallelizable (but not parallel) serial language for "every" programmer to write in.
2) A specialized "mapping" meta-language for optimization experts, that would parallelize expressions in the serial language in an optimal way for the target hardware.
3) The resulting machine code.

NOBODY SEEMS TO BE WORKING ON THIS APPROACH. In fact, object-oriented programming, with its state (object) based approach, works dead against parallel programming. The parallelizable/serial language would probably have to be functional.

Re:our brains aren't wired to think in parallel (1)

catchblue22 (1004569) | more than 7 years ago | (#19305257)

I wonder if one day there will be higher level tools to help programmers accomplish this. Programmers seldom work in assembler code, because such low level tasks are taken care of by the compiler. Why can we not have similar tools for optimizing parallel programming? It seems to me that this kind of complexity is best handled by a computer.

Re:our brains aren't wired to think in parallel (4, Insightful)

ubernostrum (219442) | more than 7 years ago | (#19305305)

You may want to look into Erlang, which does two things that will interest you:

  • Concurrency is handled by lightweight thread-like pseudo-processes passing messages to one another, and supported directly in the language.
  • Shared state between these processes is absolutely forbidden.

There are still concurrent problems which are hard, but generally it boils down to the problem being hard instead of the language making the problem harder to express.
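
A rough approximation of that style in Java, assuming a plain thread with a BlockingQueue as its mailbox (real Erlang processes are far lighter than OS threads): the only way the two sides interact is by passing messages, never by touching shared mutable state.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class MailboxDemo {
        public static void main(String[] args) throws InterruptedException {
            // The worker's "mailbox": the only channel between the two threads.
            BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(16);

            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        String msg = mailbox.take();   // block until a message arrives
                        if (msg.equals("stop")) return;
                        System.out.println("worker got: " + msg);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.start();

            // No shared mutable state: the sender only posts immutable messages.
            mailbox.put("hello");
            mailbox.put("world");
            mailbox.put("stop");
            worker.join();
        }
    }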

Re:our brains aren't wired to think in parallel (0)

Anonymous Coward | more than 7 years ago | (#19305427)

If it involves message passing, it isn't lightweight. 'Nuff said.

Re:our brains aren't wired to think in parallel (2, Interesting)

Organic Brain Damage (863655) | more than 7 years ago | (#19305299)

People may or may not process information in a linear fashion, but human brains are, apparently, massively parallel computational devices.

Addressing architecture for Brain-like Massively Parallel Computers [acm.org]

or from a brain-science perspective

Natural and Artificial Parallel Computation [mit.edu]

The common tools (Java, C#, C++, Visual Basic) are still primitive for parallel programming: not much more than semaphores and some basic multi-threading constructs (start/stop/pause/communicate from one thread to another via common variables). I've written programs, specifically spiders, that can usefully run 200-1,000 simultaneous threads on a PC. They work OK, as long as the inter-thread coupling is minimized. Until we get enough exposure to parallel systems that we develop new languages to express the solutions, parallel programming will remain accessible to the few, the proud, the geeks. But I don't think it's because of our brains' architecture.
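
A stripped-down sketch of that kind of spider in Java (the URLs and the fetch method are placeholders, not a real crawler): a bounded thread pool runs many mostly I/O-bound tasks that share nothing except the work they are handed.

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class SpiderSketch {
        public static void main(String[] args) throws InterruptedException {
            // Hypothetical seed list; a real spider would discover links as it goes.
            List<String> urls = List.of("http://example.com/a",
                                        "http://example.com/b",
                                        "http://example.com/c");

            // A few hundred threads mostly blocked on network I/O is workable,
            // provided the tasks barely interact with each other.
            ExecutorService pool = Executors.newFixedThreadPool(200);

            for (String url : urls) {
                pool.submit(() -> fetch(url));   // each task owns its own state
            }

            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }

        // Stand-in for the actual HTTP fetch and parse.
        private static void fetch(String url) {
            System.out.println("fetched " + url);
        }
    }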

Re:our brains aren't wired to think in parallel (2, Insightful)

buswolley (591500) | more than 7 years ago | (#19305413)

Our brains may be massively parallel, but this doesn't mean we can effectively attend to multiple problem domains at once. In particular, our conscious attentional processes tend to be able to focus on at most a couple of things at once, albeit at a high level of organizational description.

Re:our brains aren't wired to think in parallel (1)

Eivind (15695) | more than 7 years ago | (#19305581)

True. In fact, it is very hard to do in parallel even tasks that are individually well within our abilities.

Moving your left foot in a circle is trivial. Drawing a square on a piece of paper is trivial (as long as neither needs to be perfect). Now try doing both simultaneously.

Re:our brains aren't wired to think in parallel (2, Interesting)

Anonymous Coward | more than 7 years ago | (#19305315)

In my experience it's not really that hard, and it doesn't really come down to thinking in parallel except in cases where you are going for really fine-grained parallelism. Coarse-grained parallelism is easy and can give HUGE benefits to users. A simple example is creating a background thread to handle incremental saves: suddenly the app stays responsive, because its GUI thread is no longer being stalled by factors beyond its control (e.g. network load).

Under C++ I use the Boost threading libraries and they are excellent, allowing me to write once and run on all required platforms, and when my Java hat is on it's a snap too because of the superb libraries available. Moral: it is hard if you don't use the right tools or try to split the task along poorly chosen boundaries.

Granted, tasks like large matrix solves are a brute to multi-thread... but fortunately people who are way smarter than me have already done it, and I can handle plugging in a library.
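
To make the incremental-save example above concrete, here is a minimal Java sketch (the class and method names are hypothetical): the GUI thread hands an immutable snapshot to a single background executor and returns immediately, so the save never stalls the event loop.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class BackgroundSaver {
        // One dedicated thread: saves are serialized among themselves,
        // but never block the GUI/event thread.
        private final ExecutorService saveThread = Executors.newSingleThreadExecutor();

        // Called from the GUI thread; returns immediately.
        public void requestSave(String documentSnapshot) {
            saveThread.submit(() -> writeToDisk(documentSnapshot));
        }

        // Stand-in for the slow part (disk, network share, ...).
        private void writeToDisk(String snapshot) {
            System.out.println("saved " + snapshot.length() + " chars");
        }

        public void shutdown() {
            saveThread.shutdown();
        }
    }

Passing a snapshot rather than the live document is the key design choice: it keeps the two threads from sharing mutable state.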

 

Re:our brains aren't wired to think in parallel (3, Interesting)

bloosqr (33593) | more than 7 years ago | (#19305331)

Back when I was in graduate school we used to joke: in the future, everything will be Monte Carlo. :)

While perhaps not everything can be solved using Monte Carlo-type integration tricks, there is more that can be done with "variations on the theme" than is perhaps obvious (or perhaps you can rephrase the problem and ask the same question a different way), especially if you let yourself dream a little: what happens if I have 100,000 processors at my disposal?

Re:our brains aren't wired to think in parallel (0)

Anonymous Coward | more than 7 years ago | (#19305363)

Unfortunately, trying to explain any sort of technical algorithm to my managers usually exacts a look of panic and confusion.

Perhaps you are not as good at explaining technical issues to non-technical people as you think you are? You need to step back, put yourself in the other person's shoes, and try to see it through their eyes. They haven't spent days thinking about the problem like you have, and you are trying to cram a lot of information into a few minutes. Duh! Of course it will sound foreign.

Re:our brains aren't wired to think in parallel (5, Insightful)

jd (1658) | more than 7 years ago | (#19305395)

Parallel programming is indeed hard. The standard approach these days is to decompose the parallel problem into a definite set of serial problems. The serial problems (applets) can then be coded more-or-less as normal, usually using message passing rather than direct calls to communicate with other applets. Making sure that everything is scheduled correctly does indeed take managerial skills. The same techniques used to schedule projects (critical path analysis) can be used to minimize time overheads, and the same techniques used to minimize the use of physical resources (the simplex method, a.k.a. operations research) work just as well on software's use of physical resources.

The first problem is that people who make good managers make lousy coders. The second problem is that people who make good coders make lousy managers. The third problem is that plenty of upper management types (unfortunately, I regard the names I could mention as genuinely dangerous and unpredictable) simply have no understanding of programming in general, never mind the intricacies of parallelism.

However, resource management and coding is not enough. These sorts of problems are typically either CPU-bound and not heavy on the networks, or light on the CPU but are network-killers. (Look at any HPC paper on cascading network errors for an example.) Typically, you get hardware which isn't ideally suited to either extreme, so the problem must be transformed into one that is functionally identical but within the limits of what equipment there is. (There is no such thing as a generic parallel app for a generic parallel architecture. There are headaches and there are high-velocity exploding neurons, but that's the range of choices.)

Errors are difficult to find in such systems (2, Informative)

himanshuarora (881139) | more than 7 years ago | (#19305435)

Well, I was a teaching assistant for "Concurrent Algorithms". There were 30 students in that class. For a given assignment there would be 30 different solutions, and many of them would be wrong.

It is difficult to find errors in these kinds of algorithms. Papers have been published at major conferences whose mathematical proofs were later found to be wrong, because the authors could not think of every scenario in which the algorithm might fail.

Re:our brains aren't wired to think in parallel (0)

Anonymous Coward | more than 7 years ago | (#19305475)

This is an average RTL designer's task, every single day. All parallel tasks are still broken down into linearly executed tasks that are recombined at some point. I think the biggest hurdle is that modern languages attempt to abstract far too much, and this prevents the designer from creating an effective design. I also don't think a language-only approach is enough. The hardware itself needs to support a means of inter-process handshaking, as that's exactly how we do it when we design custom parallel hardware.

We do it all the time! (1)

taniwha (70410) | more than 7 years ago | (#19305519)

Our brains are highly parallel; we wouldn't be able to carry out such complex thought processes with a bunch of meat otherwise. Streams of thought and reasoning, however, are pretty linear.

Up-thread someone wondered if we'd make the same rate of progress in parallel programming as in chip design (Moore's law). Guess what: hardware design these days is largely programming, and highly parallel programming at that, and it's done by mere mortals. I have a decade in logic design and maybe 15 more years as a programmer, largely as a kernel hacker and doing embedded systems, and they're really all the same stuff. Spending a lot of time doing logic helps you hone those parallel programming skills; after a while the deadlocks and the places that need locks just start to become obvious.

Note to peanut gallery: favorite gdb debugging command "t a a bt"

Our brains can rewire themselves (1)

deek (22697) | more than 7 years ago | (#19305579)

Yes, you too can _learn_ how to program in parallel. You're right that we fundamentally do things in a linear fashion, but we can learn to think differently, and learn different methods that work in a parallel way.

Taking your mathematical example, here's a parallel example of a change in thinking. At one stage, the concept of "zero" was unfathomable. A little over 2000 years ago, there was no zero. You could say that people's brains weren't wired to think in zeroes. That's certainly changed these days. Once the concept was discovered, and taught, it eventually became instinctive for everyone.

We obviously haven't reached that stage with parallel programming, but all it takes is familiarity with the concepts and methods, and a bit of practice. I rate it up there with object oriented programming. In fact, they're both very suited for each other.

One word.... (-1, Troll)

Anonymous Coward | more than 7 years ago | (#19305177)

FORTRAN

Two words: map-reduce (3, Interesting)

Anonymous Coward | more than 7 years ago | (#19305183)

Implement it, add CPUs, earn billion$. Just Google it.

Re:Two words: map-reduce (1)

shannara256 (262093) | more than 7 years ago | (#19305347)

I found Joel Spolsky's article on map-reduce ("Can Your Programming Language Do This?" [joelonsoftware.com] ) very enlightening, much more so than the wikipedia article [wikipedia.org] . Unfortunately, a google search for map reduce ranks it ninth.

Re:Two words: map-reduce (5, Informative)

allenw (33234) | more than 7 years ago | (#19305487)

Implementing MapReduce is much easier these days: just install and contribute to the Hadoop [apache.org] project. This is an open source, Java-based MapReduce implementation, including a distributed filesystem called HDFS.

Even though it is implemented in Java, you can use just about anything with it, using the Hadoop streaming [apache.org] functionality.
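
For readers who haven't seen the model, here is a minimal in-memory word-count sketch in plain Java. This is not the actual Hadoop API, just an illustration of the two phases; Hadoop's value is in running the same map and reduce steps across a cluster with fault tolerance.

    import java.util.Arrays;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class WordCountSketch {
        public static void main(String[] args) {
            String[] documents = {
                "the quick brown fox",
                "the lazy dog",
                "the quick dog"
            };

            // Map phase: each document independently emits its words.
            // Reduce phase: counts for the same word are summed.
            Map<String, Long> counts = Arrays.stream(documents)
                    .parallel()
                    .flatMap(doc -> Arrays.stream(doc.split("\\s+")))
                    .collect(Collectors.groupingBy(w -> w, Collectors.counting()));

            System.out.println(counts);
        }
    }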

Re:Two words: map-reduce (0)

Anonymous Coward | more than 7 years ago | (#19305543)

Map-reduce is one of those things functional programmers do all day. (If your functions have no side effects, who cares if you run 10 at the same time?)

Perhaps what the author meant to say (had he gone to college) is "Have we hit the point beyond which procedural/OO programming does not scale?".

Have some friggin' patience (4, Insightful)

Corydon76 (46817) | more than 7 years ago | (#19305207)

Oh noes! Software doesn't get churned out immediately upon the suggestion of parallel programming! Programmers might actually be debugging their own code!

There's nothing new here: just somebody being impatient. Parallel code is getting written. It is not difficult, nor are the tools inadequate. What we have is non-programmers not understanding that it takes a while to write new code.

If anything, the fact that the world hasn't exploded with massive amounts of parallel code is a good thing: it means that proper engineering practice is being used to develop sound programs, and the johnny-come-lately programmers aren't able to fake their way into the marketplace with crappy code, like they did 10 years ago.

Re:Have some friggin' patience (0)

Anonymous Coward | more than 7 years ago | (#19305281)

It is not difficult, nor are the tools inadequate. What we have is non-programmers not understanding that it takes a while to write new code.

Have you done any serious parallel programming yourself? If so, how certain are you that it's correct? I ask because I've been following programming research, and the general consensus seems to be that it is too hard with the existing tools. I haven't done anything serious requiring multiple threads, but the experience I have had suggests that it can be a very tricky thing indeed to do correctly.

Re:Have some friggin' patience (1)

QuantumG (50515) | more than 7 years ago | (#19305407)

I remember learning about model checking at university. We did some finite state analysis and looked at how people did more comprehensive, formal analysis (but never did it ourselves). This was 10 years ago. I've never used it in my career. I've never known anyone who's used it. I assume people who work on automotive or aerospace systems must though.

Re:Have some friggin' patience (1)

GileadGreene (539584) | more than 7 years ago | (#19305437)

I ask because I've been following programming research, and the general consensus seems to be that it is too hard with the existing tools. I haven't done anything serious requiring multiple threads, but the experience I have had suggests that it can be a very tricky thing indeed to do correctly.
Yes, and then again no. Concurrency is hard. But it doesn't have to be as hard as most developers make it. Shared state and threads are a horrible way to do concurrency, but they're all most developers are taught. We've had the tools to do concurrent programming in a much more manageable way for 15-20 years (Google for the occam programming language, or CSP; for a modern take, see Erlang or E). I've personally written software consisting of 1000+ interacting "threads" in a complex, dynamically changing communications topology. Concurrency bugs have generally not been a problem with that code (I spent more time debugging some of the sequential stuff) because of the design paradigm I used (occam and CSP). Granted, that toolset is not "mainstream", and there are more research dollars in selling stuff like Software Transactional Memory (which nominally lets programmers stay in their little sequential comfort zone) than there are in taking existing, proven solutions mainstream (again, Erlang is about the closest we've got right now).

It's not trivial, and often not necessary (5, Interesting)

Opportunist (166417) | more than 7 years ago | (#19305211)

Aside from my usual lament that people already call themselves programmers when they can fire up Visual Studio: parallelizing your tasks opens quite a few cans of worms. Many things can't be done simultaneously, many side effects can occur if you don't take care, and generally, programmers don't really enjoy multithreaded applications, for exactly those reasons.

And often enough, it's far from necessary. Unless you're actually dealing with an application that does a lot of "work", calculating or displaying, preferably simultaneously (games would be one of the few applications that come to mind), most of the time your application is waiting: either for input from the user or for data from a slow source, like a network or even the internet. The average text processor or database client is usually not in a situation where it needs more than the processing power of one core. Modern machines are orders of magnitude faster than anything you usually need.

Generally, we'll have to deal with this issue sooner or later, especially if our systems become more and more overburdened with "features" while advances in processing speed fail to keep up. I don't see an overwhelming need for parallel processing within a single application for most programs, though.

Re:It's not trivial, and often not necessary (1)

ubernostrum (219442) | more than 7 years ago | (#19305387)

Many things can't be done simultanously, many side effects can occur if you don't take care and generally, programmers don't really enjoy multithreaded applications, for exactly those reasons.

This is why the languages which have the highest level of "enjoyment" for concurrent tasks tend to move further along the spectrum toward pure functional programming, and partially or completely ban side effects and mutable state (e.g., Erlang forbids shared state in its concurrency model, and also forbids reassignment).

Not justifyable (3, Interesting)

dj245 (732906) | more than 7 years ago | (#19305235)

I can see this going down in cubicles all through the gaming industry. The game is mostly coming together, the models have been tuned, textures drawn, the code is coming together, and the coder goes to the pointy-haired boss.

Coder: We need more time to make this game multithreaded!
PHB: Why? Can it run on one core of a X?
Coder: Well I suppose it can but...
PHB: Shove it out the door then.

If Flight Simulator X is any indication (a game that should have been easy to parallelize), this conversation happens all the time and games are launched taking advantage of only one core.

Re:Not justifyable (1)

lmpeters (892805) | more than 7 years ago | (#19305383)

If Flight Simulator X is any indication (a game that should have been easy to parallelize), this conversation happens all the time and games are launched taking advantage of only one core.

How about Adobe Creative Suite 3? The professionals using it are the most likely to buy quad- or 8-core systems, but CS3 only supports up to two cores.

Re:Not justifyable (0)

Anonymous Coward | more than 7 years ago | (#19305595)

If you don't leave some juicy features for the next revision, CS4, your customers won't run on the upgrade treadmill...

Re:Not justifyable (4, Insightful)

Grave (8234) | more than 7 years ago | (#19305501)

I seem to recall comments from Tim Sweeney and John Carmack that parallelism needs to start from the beginning of the code, i.e., if you weren't thinking about it and implementing it when you started the engine, it's too late. You can't just tack it on as a feature. Unreal Engine 3 is a prime example of an engine that is properly parallelized. It was designed from the ground up to take full advantage of multiple processing cores.

If your programmers are telling you they need more time to turn a single-threaded game into a multi-threaded one, then the correct solution IS to push the game out the door, because it won't benefit performance to try to do it at the end of a project. It's a fundamental design choice that has to be made early on.

Are Serial Programmers Just Too Dumb? (4, Interesting)

ArmorFiend (151674) | more than 7 years ago | (#19305243)

For this generation of "average" programmers, yes, it's too hard. It's the programming language, stupid. The average programming language has come a remarkably short distance in the last 30 years. Java and Fortran really aren't very different, and neither is well suited to parallelizing programs.

Why isn't there a mass stampede to Erlang or Haskell, languages that address this problem in a serious way? My conclusion is that most programmers are just too dumb to do major mind-bending once they've burned their first couple of languages into their ROMs.

Wait for the next generation, or make yourself above average.

Re:Are Serial Programmers Just Too Dumb? (0)

Anonymous Coward | more than 7 years ago | (#19305307)

"Java and Fortran really aren't very different, and neither is well suited to paralellizing programs."

Maybe you haven't seen the latest Fortran 2003 standard? Even Fortran 95 had parallel constructs.
Fortran is still the most widely used parallel and high-performance computing language.

Re:Are Serial Programmers Just Too Dumb? (1)

QuantumG (50515) | more than 7 years ago | (#19305373)

Why isn't there a mass stampede to Erlang or Haskell, languages that address this problem in a serious way?
Ummm, because just writing a simple game like tic-tac-toe or Tetris is considered worthy of scientific papers?

People who feel a need to coin terms like Functional Reactive Programming [haskell.org] and develop 40 different "frameworks" to shoehorn event processing into a functional environment are the reason why these languages are shunned by people who just want to get work done.

Re:Are Serial Programmers Just Too Dumb? (1)

coolgeek (140561) | more than 7 years ago | (#19305391)

My conclusion is that most programmers are just too dumb to do major mind-bending once they've burned their first couple languages into their ROMs.
I think it's more that most programmers are just too dumb to try a different paradigm for designing their code. "WAAAH IT'S TOO HARD!" I dunno. Maybe I'm different. I got my first job as a programmer in 1984, working on multiprocessor machines.

Re:Are Serial Programmers Just Too Dumb? (2, Interesting)

Opportunist (166417) | more than 7 years ago | (#19305619)

That may be because most programmers nowadays are not really programmers but people who learned how to write code. When the IT biz was a make-money-fast scheme, people picked up whatever computer skill more or less fit them; some went into web-based programming, since that's where the really big bucks were, and learned PHP and a few tidbits of logic.

Then the dot-com bubble blew up and they were dead in the water. Today you need a fraction of the PHP artists who were sought after in 2000. So they picked up C, in a sorta-kinda way, since it's "about the same", and now they push into the programming market as a cheap workforce.

Well, you get what you pay for.

Programmers (2, Insightful)

Anonymous Coward | more than 7 years ago | (#19305245)

Well, the way they 'teach' programming nowadays, programmers are simply just typists.

No need to worry about memory management, java will do it for you.

No need to worry about data location, let the java technology of the day do it for you.

No need to worry about how/which algorithm you use, just let java do it for you, no need to optimize your code.

Problem X => Java cookbook solution Y

Re:Programmers (1)

compro01 (777531) | more than 7 years ago | (#19305321)

This is probably the reason why, in my college course, they had us use assembler (on the PIC16F84A) last semester before starting on higher-level stuff next semester.

Re:Programmers (1)

creimer (824291) | more than 7 years ago | (#19305377)

I just graduated with an associate degree in computer programming. (This is my second associate degree from the same community college; my first degree was in General Ed in 1994.) Java is the catch-all programming language where you really don't have to think about anything. I went out of my way to learn C++, even though I had to think about stuff that I never did in Java. Parallel programming was a topic we never came across.

Re:Programmers (1)

bladesjester (774793) | more than 7 years ago | (#19305617)

After doing classes in college using C, C++, Assembly, and a host of other things then picking up Java, Ruby, and another host of other stuff, the only time I really had to worry about doing anything in parallel was multithreading things in my OS and internetworking courses.

As a general rule, for your standard, vanilla business stuff, it just doesn't tend to come up...

Did someone say paralell programming? (0)

Anonymous Coward | more than 7 years ago | (#19305249)

It may come in useful for the numerous Beowulf clusters I have imagined on Slashdot.

Parallel Language... (3, Insightful)

vortex2.71 (802986) | more than 7 years ago | (#19305251)

Though there are many very good parallel programmers who make excellent use of the Message Passing Interface, we are entering a new era of parallel computing where MPI will soon be unusable. Consider when the switch was made from assembly language to higher-level programming languages: when the processor contained too many components to be effectively programmed in machine language. That same threshold has long since been passed with parallel computers. Now that we have computers with more than 100 thousand processors and are working to build computers with more than a million processors, MPI has become the assembly language of parallel programming. It hence needs to be replaced with a new parallel language that can control great numbers of processors.

Re:Parallel Language... (0)

Anonymous Coward | more than 7 years ago | (#19305405)

Control great numbers? I think MPI is doing quite well on BG/L on 130K processors - surely it works pretty well. I don't imagine the typical application is going to require that many parallel threads for a long, long time. I do believe that in many cases MPI can become the 'assembler' of message-passing, and already several packages out there provide libraries to handle parallel scientific methods - look at ScaLAPACK, CHARM++, FFTW, etc.

The real difference is between applications that have multiple tasks doing the same operations (SPMD - essentially most MPI tasks in a typical code) and parallelism where each processor handles different aspects of an application, such as (for example) one for the AI of a game, one for the user interface, one for graphics, etc. It is the latter case where I believe your analogy to the earlier switch from assembly to a higher-level language is key. Keeping track of 1000 identical tasks operating on different data is one thing, but keeping track of 1000 different tasks is a whole other beast!

Re:Parallel Language... (0)

Anonymous Coward | more than 7 years ago | (#19305611)

"I think MPI is doing quite well on BG/L on 130K processors - surely it works pretty well." Bah, the guys who won the Gordon Bell prize in 2005 on BG/L had a hell of a time with getting the thing to run. Ever tried to manage I/O on 140K processors? Think MPI All Reduce is a valid operation on Blue Gene L when you are using the whole machine? Ever wonder why the GB in 2005 was for achieving 100 Teraflops on a machine with a theoretical peak of over 300 Teraflops?

Re:Parallel Language... (1)

GileadGreene (539584) | more than 7 years ago | (#19305421)

MPI is a heavyweight solution resting on top of essentially sequential languages. What is needed are languages that integrate share-nothing message passing into the language itself, in a lightweight manner. Erlang and E are good examples of that kind of approach.

Yes, difficult, but our brains are not limited. (3, Insightful)

themoneyish (971138) | more than 7 years ago | (#19305255)

In one word, the answer is yes. It's difficult for people who are used to programming for a single CPU.

Programmers who are accustomed to non-parallel programming environments forget to think about the synchronization issues that come up in parallel programming. Many conventional programs do not take into account the shared-memory synchronization or message-passing requirements needed for them to work correctly in a parallel environment.

This is not to say that there will not be any progress in this field. There will be, and there has been. The design techniques and best practices for parallel programming differ from those for conventional programming. Also, there is currently limited IDE support for debugging purposes. There are already several books on this topic and classes in the universities. As the topic becomes more and more important, computer science students will be required to take such classes (as opposed to them being optional), and more and more programmers who are experts in parallel programming will be churned out. It's just not as popular yet because universities don't currently seem to make it a required subject. But that will change because of the advancement in hardware and the market demand for expert parallel programmers.

Our brains might be limited about other things, but this is just a matter of better education. 'Nuff said.

One thing about our brains and programming... (1)

themoneyish (971138) | more than 7 years ago | (#19305317)

is that our brains do NOT need to think in parallel to write solid parallel code. Just certain design principles need to be used and it is happening all the time.

And someone said languages are a limitation, and that's probably partially true. The most popular programming languages don't make it easy for a programmer to write parallel code. The ones that do are not so popular yet. So there's a little gap there, but as hardware technology grows more powerful, that will change.

Clusters? (3, Insightful)

bill_mcgonigle (4333) | more than 7 years ago | (#19305263)

Since this discussion has been going on for over three decades with little progress in terms of widespread change

Funny, I've seen an explosion in the number of compute clusters in the past decade. Those employ parallelism, of differing types and degrees. I guess I'm not focused as much on the games scene - is this somebody from the Cell group writing in?

I mean, when there's an ancient Slashdot joke about something there has to be some entrenchment.

The costs are just getting to the point where lots of big companies and academic departments can afford compute clusters. Just last year multi-core CPUs became affordable enough to make it into mainstream desktops (ironically, more so in laptops so far). Don't be so quick to write off a technology that's just out of its first year on the desktop.

Now, that doesn't mean that all programmers are going to be good at it - generally programmers have a specialty. I'm told the guys who write microcode are very special, are well fed, and generally left undisturbed in their dark rooms, for fear that they might go look for a better employer, leaving the current one to sift through a stack of 40,000 resumes to find another. I probably wouldn't stand a chance at it, and they might not do well in my field, internet applications - yet we both need to understand parallelism - they in their special languages and me, perhaps with Java this week, doing a multithreaded network server.

Re:Clusters? (1)

Lost Engineer (459920) | more than 7 years ago | (#19305627)

If multi-core chips made it into laptops first, it's because they're more efficient. You can easily shut down the second core in the OS if it's not needed, buying yourself massive power savings.

Also, as the Core Duo is the successor to the Pentium M, its technology was already in laptops, even if they were single-core. Even a dual-core Core chip easily burns less juice than that dog the Pentium 4, as anyone who's ever had a Pentium 4 laptop sitting on his lap will attest. As a side note, I also believe the Pentium 4 contributed to the trend of huge (in screen size and weight) laptops, which now seems to be reversing itself.

Yes and no. Languages help (or hinder). (2, Insightful)

Anonymous Coward | more than 7 years ago | (#19305267)

It's very easy to just say "Parallel programming is too hard", and try to ignore it. But consider this: the languages we use today are designed to work in the same way as the CPUs of old: one step at a time, in sequence, from start to finish.

A special case of this is seen in the vector units found in today's CPUs: MMX, SSE, Altivec, and so forth. You can't write a C compiler that takes advantage of these units (not easily, anyway), because the design of C means that a programmer takes the mathematics and splits it up into a bunch of sequential instructions. Fortran, on the other hand, is readily adapted, because the language is designed for mathematics; you tell it what you need to do, but not how to do it.

In the same way, trying to cram parallelism into a program written in C is a nightmare. Semaphores, exclusive zones, shared variables, locking, deadlocks... there's a hell of a lot to think about, and you only have to get it wrong once to introduce a problem that is very hard to reproduce and debug. Other languages are more abstract, and have much greater opportunities for compilers to extract the inherent parallelism; it's just that to date they have seen more use in academia to illustrate principles than in the real world to solve problems.
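
As a concrete illustration of the deadlock case, here is a minimal Java sketch (the lock names are made up): two threads take the same two locks in opposite order. The sleeps make the unlucky interleaving nearly certain here; in real code it happens only occasionally, which is exactly why such bugs are so hard to reproduce.

    public class DeadlockDemo {
        private static final Object lockA = new Object();
        private static final Object lockB = new Object();

        public static void main(String[] args) {
            // Thread 1 takes A then B; thread 2 takes B then A.
            // If each grabs its first lock before the other releases,
            // both wait forever.
            new Thread(() -> {
                synchronized (lockA) {
                    sleep(50);
                    synchronized (lockB) { System.out.println("thread 1 done"); }
                }
            }).start();

            new Thread(() -> {
                synchronized (lockB) {
                    sleep(50);
                    synchronized (lockA) { System.out.println("thread 2 done"); }
                }
            }).start();
        }

        private static void sleep(long ms) {
            try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }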

As time marches on, and the reality of the situation becomes increasingly obvious, I would expect that performance-intensive apps will start to be written in languages better suited to the domain of parallel programming. Single threaded apps will remain - eg, Word doesn't really need any more processing power (MS' best efforts to the contrary notwithstanding) - and C-like languages will still be used in that domain, but I don't think inherently sequential languages, like C, C++, and others of that nature, will be as common in five or ten years as they are today, simply because of the rise of the parallel programming domain necessitating the rise of languages that mimic that domain better than C does.

tag yes (0)

Anonymous Coward | more than 7 years ago | (#19305271)

please someone tag yes.

wtf happened anyway? the tags got all boring.

not hard (-1, Troll)

Anonymous Coward | more than 7 years ago | (#19305285)

however, if you're using cheap outsourced or H1B insourced labor, they probably came from a degree mill and don't have the low level understanding to do it. (This criticism also applies to every VB monkey I've ever met).

Yes and No (5, Interesting)

synx (29979) | more than 7 years ago | (#19305289)

The problem with parallel programming is that we don't have the right set of primitives. Right now the primitives are threads, mutexes, semaphores, shared memory, and queues. This is the machine language of concurrency - it's too primitive for anyone who isn't a genius to write lots of effective code with.

What we need is more advanced primitives. Here are my 2 or 3 top likely suspects:

- Communicating Sequential Processes (CSP). This is the programming model behind Erlang - one of the most successful concurrent programming languages available. Writing large, concurrent, robust apps is as simple as 'hello world' in Erlang. There is a whole new way of thinking that is pretty much mind-bending. However, it is that new methodology that is key to the concurrency and robustness of the end applications. Be warned, it's functional!
- Highly optimizing functional languages (HOFL) - These are in the proto-phase, and there isn't much available, but I think this will be the key to extremely high performance parallel apps. Erlang is nice, but it's not high-performance computing; HOFLs, in turn, won't be as safe as Erlang. You get one or the other. The basic concept is that most computation in high-performance systems is bound up in various loops. A loop is a 'noop' from a semantic point of view. To get efficient highly parallel systems, Cray uses loop annotations and special compilers to get more information about loops. In a functional language (such as Haskell) you would use map/fold functions or list comprehensions, both of which convey more semantic meaning to the compiler. The compiler can auto-parallelize a functional map where each individual map computation is not dependent on any other.
- Map-reduce - the paper is elegant and really cool. It seems like this is a half way model between C++ and HOFLs that might tide people over.

In the end, the problem is the abstractions. People will consider threads and mutexes as dangerous and unnecessary as we consider manual memory allocation today.

Re:Yes and No (0)

Anonymous Coward | more than 7 years ago | (#19305419)

So why don't people take a page from Verilog, VHDL, etc.? Doesn't get a whole lot more parallel than that.

Re:Yes and No (1)

pnotequalsnp (1077279) | more than 7 years ago | (#19305493)

The problem is definitely with the primitives and the model. Without them we cannot even solve wait-free consensus (or equivalently elect a leader) in an asynchronous distributed system! This stuff has been proven really hard a long long time ago, but not in a galaxy far away. -- Don't use "Engineer" in your programming job title. When your code fails horribly you definitely will not be held legally accountable.

Re:Yes and No (2, Interesting)

GileadGreene (539584) | more than 7 years ago | (#19305549)

Communicating Sequential Processes (CSP). This is the programming model behind Erlang
Technically speaking, Erlang has more in common with the Actor model than with CSP. The Actor model (like Erlang) is based on named actors (cf. Erlang pids) that communicate via asynchronous passing of messages sent to specific names. CSP is based on anonymous processes that communicate via synchronized passing of messages sent through named channels. Granted, you can essentially simulate one model within the other. But it pays to be clear about which model we're discussing.

There is a whole new way of thinking that is pretty much mind bending.
It's really not that new. CSP has been around in the literature since at least 1978 (the date of Hoare's first paper on the topic). Hewitt's Actor model predates that by a number of years. Languages implementing both CSP and the Actor model have been around for at least 20 years. They just haven't seen widespread use so far.

Be warned, it's functional!
Not necessarily. Erlang is certainly implemented that way. But it's not a requirement. The 20-year old occam programming language, which was based pretty heavily on CSP, is basically an imperative language. The E programming language is OO to the core. The Oz language freely mixes functional, imperative, and OO constructs. All three permit message-passing concurrency in the Actor or CSP style.

The compiler can auto-parallelize a functional-map where each individual map-computation is not dependent on any other.
This can help to improve the performance of computational tasks. But it doesn't address the fact that (a) the world is fundamentally concurrent, and (b) most modern apps (web browsers, word processors, games) have a large interactive component that could really benefit from a concurrent design. Too much emphasis is being placed on pure computational speed in discussions like this one, when in most cases computation isn't the bottleneck - it's I/O and interactivity. As Joe Armstrong (the guy behind Erlang) has said, "In Concurrency Oriented Programming, the concurrent structure of the program should follow the concurrent structure of the application. It is particularly suited to programming applications which model or interact with the real world."

It's still the wild west... (2, Interesting)

Max Romantschuk (132276) | more than 7 years ago | (#19305293)

Parallel programming and construction share one crucial fundamental requirement: Proper communication. But building a house is much easier to grasp. Programming is abstract.

I think part of the problem is that many programmers tend to be lone wolves, and having to take other people (and their code, processes, and threads) into consideration is a huge psychological hurdle.

Just think about traffic: If everyone were to cooperate and people wouldn't cut lanes and fuck around in general we'd all be better off. But traffic laws are still needed.

I figure what we really need is to develop proper guidelines and "laws" for increased parallelism.

Disclaimer: This is all totally unscientific coming from the top of my head...

My two cents. (2, Interesting)

Verte (1053342) | more than 7 years ago | (#19305297)

I'd like to think compilers should be smart enough to assess data dependencies and space those instructions out; we've always had to do this anyway, at least since pipelined processors hit the market, but loops still aren't cascaded properly. An example is a loop that calculates a sum of products: the add instruction must wait for the multiplication instruction to finish, when in fact the processor could be doing a heap of multiplications and using associativity to cut down dataflow problems in the add stage. Spreading the dataflow graph out as much as possible at compile time also helps with cache coherency between many processors.
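
As a worked example of that associativity trick, here is a Java sketch done by hand rather than by a compiler: the second version keeps four independent partial sums, so the multiply-adds no longer form one long serial dependency chain. (Reassociating floating-point additions can change the rounding slightly.)

    public class DotProduct {
        // Naive version: each iteration's add depends on the previous sum,
        // so the whole multiply-add chain is strictly sequential.
        static double dotSerial(double[] a, double[] b) {
            double sum = 0.0;
            for (int i = 0; i < a.length; i++) sum += a[i] * b[i];
            return sum;
        }

        // Using associativity: four independent partial sums, each with its own
        // dependency chain, which the hardware (or several cores) can overlap.
        static double dotUnrolled(double[] a, double[] b) {
            double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
            int i = 0;
            for (; i + 3 < a.length; i += 4) {
                s0 += a[i] * b[i];
                s1 += a[i + 1] * b[i + 1];
                s2 += a[i + 2] * b[i + 2];
                s3 += a[i + 3] * b[i + 3];
            }
            for (; i < a.length; i++) s0 += a[i] * b[i];
            return (s0 + s1) + (s2 + s3);
        }
    }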

I think a program compiled that way would need hardware that will understand that the data dependencies are spread out so that it can distribute instructions among the processors, although the distribution could be very simple if the dependencies could be spread out significantly- instructions could almost be distributed like dealing cards. It's a much finer granularity than threading, but I think more applications suit this sort of parallelism.

Another barrier to parallel programming is accessing [for read] data that should be global to all threads. You can do it by passing pointers to state [messy], using globals [dangerous, obese] or by copying all the data onto the stack of the thread [slow]. Threads need to really share address space- use the same stack and everything, IMHO.

loosely coupled tasks (1)

TheSHAD0W (258774) | more than 7 years ago | (#19305301)

Splitting up a single task can be a lot of trouble, but modern programs, games and applications with lots of creeping featurism, can benefit enormously by sticking those tasks in separate threads and letting them run on different processors. Time effects on 3D scenes, AI for NPCs, real-time changes in embedded data, all can be offloaded, which improves the response time for the main thread to the user.

Non-Repeatable Errors (1)

MrSteveSD (801820) | more than 7 years ago | (#19305313)

Finding errors can be hard enough, but it becomes much harder when they are non-repeatable. That's exactly the sort of bug you get with multi-threaded programming. Run a function 99 times, fine. Run it the 100th time, bang! On the 100th run the threads were juggled by the OS in such a way that your flawed programming suddenly shows up and the threads clash. I've already used some graphics software that had this exact problem. Thinking in a parallel/multi-threaded way may be a challenge, but I think debugging is the real issue.
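
A minimal Java illustration of that kind of non-repeatable bug (the loop counts are arbitrary): the unsynchronized counter below usually prints the expected total, and only occasionally loses updates, depending entirely on how the scheduler interleaves the two threads.

    public class RacyCounter {
        // Not volatile, not synchronized: counter++ is a read-modify-write,
        // so two threads can read the same value and one update gets lost.
        private static int counter = 0;

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) counter++;
            };

            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start(); t2.start();
            t1.join();  t2.join();

            // Expected 200000, but on some runs it will be less,
            // and which run fails depends entirely on thread scheduling.
            System.out.println("counter = " + counter);
        }
    }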

Yes, because programmers are too conservative (5, Insightful)

Coryoth (254751) | more than 7 years ago | (#19305319)

Parallel programming doesn't have to be quite as painful as it currently is. The catch is that you have to face the fact that you can't go on thinking with a sequential paradigm and have some tool, library, or methodology magically make everything work. And no, I'm not talking about functional programming. Functional programming is great, and has a lot going for it, but solving concurrent programming issues is not one of those things. Functional programming deals with concurrency issues by simply avoiding them. For problems that have no state and can be coded purely functionally this is fine, but for a large number of problems you end up either tainting the purity of your functions or wrapping things up in monads, which end up having the same concurrency issues all over again. It does have the benefit that you can isolate the state, and code that doesn't need it is fine, but it doesn't solve the issue of concurrent programming.

No, the different sorts of paradigms I'm talking about are no-shared-state, message-passing concurrency models a la CSP [usingcsp.com], the pi calculus [wikipedia.org], and the Actor model [wikipedia.org]. That sort of approach to thinking about the problem shows up in languages like Erlang [erlang.org] and Oz [wikipedia.org], which handle concurrency well. The aim here is to make message passing and threads lightweight and integrated right into the language. You think in terms of actors passing data, and the language supports you in thinking this way. Personally I'm rather fond of SCOOP for Eiffel [se.ethz.ch], which elegantly integrates this idea into OO paradigms (an object making a method call is, ostensibly, passing a message after all). That's still research work though (only available as a preprocessor and library, with promises of eventually integrating it into the compiler). At least it makes thinking about concurrency easier, while still staying somewhat close to more traditional paradigms (it's well worth having a look at if you've never heard of it).

The reality, however, is that these new languages, which provide the newer and better paradigms for thinking and reasoning about concurrent code, just aren't going to get developer uptake. Programmers are too conservative and too wedded to their C, C++, and Java to step off and think as differently as the solution really requires. No, what I expect we'll get is kludginess retrofitted onto existing languages in a slipshod way that sort of works, in that it is an improvement over previous concurrent programming in that language, but doesn't really make the leap required to make the problem truly significantly easier.

Re:Yes, because programmers are too conservative (1)

tirerim (1108567) | more than 7 years ago | (#19305433)

People will adapt eventually. You don't see too much stuff written in COBOL these days, because better stuff has come around since. Yes, it will take a few years before the new languages and new techniques take over, but any major paradigm shift takes time. And all it will really take to kick the transition into high gear is a few significant pieces of software that blow the competition away by being written to take proper advantage of parallel architectures. Seems like a business opportunity to me if that's your bag.

Re:Yes, because programmers are too conservative (1)

Coryoth (254751) | more than 7 years ago | (#19305507)

People will adapt eventually...Yes, it will take a few years before the new languages and new techniques take over, but any major paradigm shift takes time.
I'm not so confident. Consider, for example, the OO revolution. It was a new paradigm, which provided a good way to deal with the ever larger programming projects that were being undertaken. At the time there were a number of languages that took the bull by the horns and produced very nice solutions, such as Smalltalk and Eiffel. Developer conservatism kicked in, however, and what we ended up with was a bastardized kludge of the clean OO concepts hacked onto C in the mess that is C++. Eventually someone was kind enough to wipe up the worst of the drool and make the child a little more presentable, and we got Java. I expect to see the same with concurrent paradigms. We'll see some wonderful languages which elegantly implement the new paradigm, like Erlang, but what we'll end up with is whatever kludge can be arranged to make things slightly better in C++ and Java (Software Transactional Memory seems to be the popular option right now). Years later someone will again clean up the resulting mess and make a language that is merely not quite as slovenly, but still relatively backward.

Careful with process algebra and process calculii (1)

dr_pump95 (869367) | more than 7 years ago | (#19305509)

No, the different sorts of paradigms I'm talking about are no-shared-state, message-passing concurrency models a la CSP and pi calculus

You need to be careful with these paradigms. They all use an interleaved concurrency model as the basis for their semantics. This is OK as a way of defining semantics, but not for execution in a truly parallel environment. In essence, the semantic model says that running computation A in parallel with computation B is the same as executing "A" then "B" or "B" then "A". Notice that the parallelism has been removed to make the semantics easier to reason about.

In a truly parallel application, you need to be very sure that executing "A" in parallel with "B" is not actually going to break anything. Often this means some implicit synchronisation, which is what we're trying to remove. There are better models around. Petri nets are truly parallel, but also introduce some synchronisation requirements. Models based on Winskel's event structures are somewhat better, but not that widely known or understood.

So, there is still some way to go here.

Amdahl's law (5, Insightful)

apsmith (17989) | more than 7 years ago | (#19305325)

I've worked with parallel software for years - there are lots of ways to do it, lots of good programming tools around even a couple of decades back (my stuff ranged from custom message passing in C to using "Connection-Machine Fortran"; now it's java threads) but the fundamental problem was stated long ago by Gene Amdahl [wikipedia.org] - if half the things you need to do are simply not parallelizable, then it doesn't matter how much you parallelize everything else, you'll never go more than twice as fast as using a single thread.
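
For the record, Amdahl's law says that with parallelizable fraction p running on n processors, the speedup is 1 / ((1 - p) + p/n). A tiny illustrative sketch (the numbers are just for demonstration):

<ecode>
public class Amdahl {
    // Amdahl's law: with parallel fraction p on n processors,
    // speedup = 1 / ((1 - p) + p / n)
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        double p = 0.5;   // half the work is inherently serial
        for (int n : new int[] {2, 4, 16, 1024})
            System.out.printf("n=%-5d speedup=%.3f%n", n, speedup(p, n));
        // Tops out just under 2.0 no matter how many cores you add.
    }
}
</ecode>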

Now there's been lots of work on eliminating those single-threaded bits in our algorithms, but every new software problem needs to be analyzed anew. It's just another example of the no-silver-bullet problem of software engineering...

Discussed for years on comp.arch (1)

calidoscope (312571) | more than 7 years ago | (#19305335)

Anyone who's followed the discussions on comp.arch will be familiar with Nick McLaren's appeals for more to be done on parallelizing algorithms. The writing has been on the wall for several years now that we'll only be seeing incremental improvements in single-threaded performance over the next few years.


Along those lines, Sun's work on threading was largely in support of adapting applications to run on multiprocessor systems. This work has been going on for more than a decade and has received additional impetus with the 'Niagara' series processors. Seems to me that Intel has finally seen the light.

Functional programming removes the problem (0)

Anonymous Coward | more than 7 years ago | (#19305359)

Instead of continuing to write software in imperative languages like C and Java and dealing with the added complexity of multiple threads, race conditions, etc. it seems to this AC a bit simpler to just start pushing functional programming as a solution: let the compiler do the work.

Historical precedent: as processors and their instruction sets became more complex and varied, there was no "crisis" of programmers failing to adapt to more complicated assembly language; instead the solution was to start programming in "higher level" languages (ya know, like Cobol and Fortran!).

BeOS (0)

Anonymous Coward | more than 7 years ago | (#19305365)

If you want to solve the problem, think BeOS. Join Haiku or code for it: www.haiku-OS.com. If you want to learn how parallelism works in that OS, just take a look at the BeBook (the API reference, easy to find on the web) or one of the now-free O'Reilly books on the subject.

Too much emphasis on instruction flow (3, Interesting)

putaro (235078) | more than 7 years ago | (#19305401)

I've been doing true parallel programming for the better part of 20 years now. I started off writing kernel code on multi-processors and have moved on to writing distributed systems.

Multi-threaded code is hard. Keeping track of locks, race conditions and possible deadlocks is a bitch. Working on projects with multiple programmers passing data across threads is hard. (I remember one problem that took days to track down: a programmer had passed a pointer to something on his stack across threads. Every now and then, by the time the other thread went to read the data, it was not what was expected. Most of the time it worked.)

At the same time we are passing comments back and forth here on Slashdot between thousands of different processors using a system written in Perl. Why does this work when parallel programming is so hard?

Traditional multi-threaded code places way too much emphasis on synchronization of INSTRUCTION streams, rather than synchronization of data flow. It's like having a bunch of blind cooks in a kitchen and trying to script their instructions so precisely that, as long as everyone follows them, each cook will be in exactly the right place at the right time. They're passing knives and pots of boiling hot soup between them. One misstep and, ouch, that was a carving knife in the ribs.

In contrast, distributed programming typically puts each blind cook in his own area with well defined spots to use his knives that no one else enters and well defined places to put that pot of boiling soup. Often there are queues between cooks so that one cook can work a little faster for a while without messing everything up.

As we move into this era of cheap, ubiquitous parallel chips we're going to have to give up synchronizing instruction streams and start moving to programming models based on data flow. It may be a bit less efficient but it's much easier to code for and much more forgiving of errors.
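
To make the kitchen-queue picture concrete, here is a minimal Java sketch of two "cooks" joined by a bounded queue; the dishes are obviously just placeholders:

<ecode>
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class KitchenQueue {
    public static void main(String[] args) throws Exception {
        // The pass-through window between two cooks: bounded, so a fast producer
        // can get a little ahead without flooding the consumer.
        BlockingQueue<String> window = new ArrayBlockingQueue<>(4);

        Thread prepCook = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) window.put("soup #" + i);  // hand off, never share
                window.put("done");
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread lineCook = new Thread(() -> {
            try {
                for (String dish = window.take(); !dish.equals("done"); dish = window.take())
                    System.out.println("plating " + dish);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        prepCook.start(); lineCook.start();
        prepCook.join();  lineCook.join();
    }
}
</ecode>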

Re:Too much emphasis on instruction flow (3, Interesting)

underflowx (1108573) | more than 7 years ago | (#19305541)

My experience with data flow is LabVIEW [wikipedia.org]. As a language designed to handle simultaneous slow hardware communication and fast dataset processing, it's a natural for multi-threading. Parallelization is automated within the compiler based on program structure. The compiler's not all that great at it (limited to 5 explicit threads plus whatever internal tweaking is done), but... the actual writing of the code is just damn easy. Not to excuse the LabVIEW compiler: closed architecture, tight binding to the IDE, strong typing that's really painful, memory copies everywhere. But the overall dataflow model is just superior for parallel applications. It's unfortunate that there seem to be few alternatives out there with similar support for data flow but more overall utility.

Maybe this will eventually force better coding? (-1, Troll)

Anonymous Coward | more than 7 years ago | (#19305411)

It seems to be the norm to expect customers to increase the number of CPUs, use faster CPUs, add more memory. Efficient coding seems to occur only in textbooks and magazine articles. Even worse is when they resort to Java: when you make it easier for idiots to write programs, they write some pretty horrible code that sometimes works. Don't ask them to debug it.

And now you want them to understand parallel programming? Good one! Next, you will expect them to learn C or C++, maybe even learn optimization techniques?

Most professional programmers today are dumb as dirt. The hours are long, the pay sucks, you have no control over the specs, and scope creep will make you want to scream. It seems like a lot of the best ones have taken other jobs that are easier yet pay more.

Before parallel programming becomes serious, you will start seeing tightly coded binaries again, like the kind you used to see twenty years ago. Pretty to imagine.

IBM (or Intel) dude at GDC 2007 said. . . (1)

ookabooka (731013) | more than 7 years ago | (#19305431)

I went to the Game Developers Conference and attended a session where a gentleman from either IBM or Intel gave a talk on how to utilize extra cores without penalizing those who did not have multicore. As an aside, Intel also offers a library called Threading Building Blocks [intel.com], which parallelizes primitive things such as for loops. Anywho, back to something game-specific, a few of the suggestions are below:

Particle physics for client-side visual effects (omit on non multi-core machines)
Animate faces more, smoother more natural animations when walking up stairs/inclines (again more static and rigid on single core machines)
Animate cloth/hair

And my personal favorite
Dynamic texture/model tessellation. (models far away are less complex than models close up, make the transition smoother)


So, what's my opinion? It's the libraries, programming languages, and compilers that will change; the programmer just needs to have an idea of synchronization issues and whatnot.
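
Threading Building Blocks itself is C++, but a rough Java analogue of the same "parallelize the loop" idea is a parallel stream over the index range; the vertex/tessellation names below are made up purely for illustration:

<ecode>
import java.util.stream.IntStream;

public class ParallelLoop {
    public static void main(String[] args) {
        double[] vertices = new double[1_000_000];

        // Sequential version: for (int i = 0; i < vertices.length; i++) vertices[i] = heavyWork(i);
        // Parallel version: the runtime carves the index range into chunks across cores.
        IntStream.range(0, vertices.length)
                 .parallel()
                 .forEach(i -> vertices[i] = heavyWork(i));
    }

    // Hypothetical per-element computation (e.g. a tessellation weight for vertex i).
    static double heavyWork(int i) {
        return Math.sin(i) * Math.cos(i);
    }
}
</ecode>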

Not hard, but not that easy (1)

mveloso (325617) | more than 7 years ago | (#19305447)

One reason that parallel programming is hard is simple: lots of things can't be parallelized effectively. Much of the time the computer is really doing one thing, namely waiting for input (i.e. the app is user-bound). If your app isn't user-bound, then parallelism is possible.

If your app isn't user-bound, it should be pretty easy to parallelize, assuming that your processing stream isn't serialized (i.e. does a result depend on the previous result?). The problem then becomes contention. For some reason, lots of developers can't wrap their heads around threading and contention issues. Maybe it's been presented incorrectly in textbooks, or something like that. People have problems programming asynchronously as well. Maybe that's the problem?

Anyhow, once you figure out the contention issue, then there are all kinds of other things you have to deal with, like asynchronous status notification. Timeouts. Data that never comes back. Unexpected termination. Data coherency. Read and write locking. Synchronization.

In the end, sometimes it's easier to break your data set up into chunks and spawn a couple of processes in the background, then use cat to aggregate the data sets. It's definitely easier to do it that way (i.e. not a lot of mental lifting involved).
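
The same chunk-process-aggregate pattern also works in-process. A rough Java sketch, with a trivial per-chunk sum standing in for whatever the real work would be:

<ecode>
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ChunkAndMerge {
    public static void main(String[] args) throws Exception {
        int[] data = new int[1_000_000];
        java.util.Arrays.fill(data, 1);

        int chunks = 4;
        int chunkSize = data.length / chunks;
        ExecutorService pool = Executors.newFixedThreadPool(chunks);

        // One task per chunk, each producing a partial result.
        List<Callable<Long>> tasks = new ArrayList<>();
        for (int c = 0; c < chunks; c++) {
            final int from = c * chunkSize;
            final int to   = (c == chunks - 1) ? data.length : from + chunkSize;
            tasks.add(() -> {
                long sum = 0;
                for (int i = from; i < to; i++) sum += data[i];
                return sum;
            });
        }

        // The "cat" step: aggregate the partial results in order.
        long total = 0;
        for (Future<Long> f : pool.invokeAll(tasks)) total += f.get();
        System.out.println("total = " + total);
        pool.shutdown();
    }
}
</ecode>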

Most developers don't care about scalability. The ones that do, parallelize. The ones that don't, don't.

bad education (3, Insightful)

nanosquid (1074949) | more than 7 years ago | (#19305459)

No, parallel programming isn't "too hard", it's just that programmers never learn how to do it because they spend all their time on mostly useless crap: enormous and bloated APIs, enormous IDEs, gimmicky tools, and fancy development methodologies and management speak. Most of them, however, don't understand even the fundamentals of non-procedural programming, parallel programming, elementary algorithms, or even how a CPU works.

These same programmers often think that ideas like "garbage collection", "extreme programming", "visual GUI design", "object relational mappings", "unit testing", "backwards stepping debuggers", and "refactoring IDEs" (to name just a few) are innovations of the last few years, when in reality, many of them have been around for a quarter of a century or more. And, to add insult to injury, those programmers are often the ones that are the most vocal opponents of the kinds of technologies that make parallel programming easier: declarative programming and functional programming (not that they could actually define those terms, they just reject any language that offers such features).

If you learn the basics of programming, then parallel programming isn't "too hard". But if all you have ever known is how to throw together some application in Eclipse or Visual Studio, then it's not surprising that you find it too hard.

Two Problems (3, Insightful)

SRA8 (859587) | more than 7 years ago | (#19305469)

I've encountered two problems with parallel programming:

1. For applications which are constantly being changed under tight deadlines, parallel programming becomes an obstacle. Parallelism adds complexity which hinders quick changes to applications. This is a very broad generalization, but often the case.

2. Risk. Parallelism introduces a lot of risk, things that aren't easy to debug. Some problems I faced happened only once every couple of weeks, and involved underlying black-box libraries. For financial applications, this was absolutely too much risk to bear. I would never do parallel programming for financial applications unless management was fully behind it (as they are for HUGE efforts such as Monte Carlo simulations and VaR apps).

To the tune of "Sitting On The Dock Of The Bay" (1)

iminplaya (723125) | more than 7 years ago | (#19305471)

First I'll put a bit over here
Then I'll put a bit over there
Watchin' all those bits go in
Then watch 'em do that I/O again

I'm sittin' at my desk all day
Watching the bits toil away
Ooo, I'm just sittin' at my desk all day
Wastin' time

too many options (1)

penguinbroker (1000903) | more than 7 years ago | (#19305505)

Parallel programming has a lot of obvious benefits. But say I want to write parallel code: do I take advantage of only 2 cores, or should I focus on a robust solution that identifies the number of cores and optimizes accordingly?

Erlang is interesting because it presents a parallel platform that scales to the number of cores available. Ultimately, one of the strongest features of modern code is the ability to run across numerous hardware architectures and on a diverse group of processors. As a result, it's usually best to follow the old CS mantra of programming for the 'worst case scenario', which today manifests itself as the single-core processor.

Parallel programming isn't impossible; it's just not that worthwhile right now for 99% of coders.
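
That said, on the 2-cores-versus-robust question: at least in Java you can simply ask the runtime how many cores it sees and size your worker pool from that. A minimal sketch:

<ecode>
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CoreCount {
    public static void main(String[] args) {
        // Don't hard-code "2 cores": ask the runtime and size the pool to the machine.
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("running with " + cores + " worker threads");

        ExecutorService pool = Executors.newFixedThreadPool(cores);
        // ... submit work ...
        pool.shutdown();
    }
}
</ecode>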

What I find most entertaining ... (1)

TechnoLuddite (854235) | more than 7 years ago | (#19305527)

... is that most of the pertinent replies are modded 2.

It has long been said that multi-threaded programming is going to be the next quantum shift, much like OOP was, and that the difficulty is going to be the same: getting the programmer's brain to wrap around the concept. The shift from OOP to threaded programming is likely to be at least as difficult as the shift from linear to OOP.

What I'm seeing here (caveat: not a programmer by trade, only a lowly QA ... but I do have a rudimentary awareness of programming) is that the tools aren't fully ready. This is something I could fully believe -- I've been witness to the development cycle without tools, and the development cycle with tools. It's equivalent to editing a photo with MS Paint vs. Photoshop.

As for those accusing programmers of being lazy, I'll pass on one question posed to me by a friend:

"Why doesn't everyone just code in assembly language?"

No, it's not hard. (1)

ShakaUVM (157947) | more than 7 years ago | (#19305555)

I taught parallel processing (as a TA) at UC San Diego, and worked at the supercomputer center for years doing parallel code.

No, parallel processing is not "too hard". Phah.

It's simply hard for people that are being forced to write parallel code without taking a class in the subject, or really understanding what they're doing. Parallel code is as mind bending as recursive code the first time you saw it, only more so. It takes a lot of work to wrap your mind around the weirdness of having the same code being executed in different places with different data, making sure you pass the data back and forth correctly, synchronize correctly, and doing it all without making mistakes that destroy your performance.

To anyone interested in the field: sit down, learn MPI, run through some tutorials, and get to the point where you can run something like a radial blur on an array in parallel without breaking a sweat; then you know you're mentally ready to write parallel code. In class, getting to this state usually takes 2-3 weeks, but if you're working on your own, you should be able to do it faster.
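
The exercise above is meant for MPI, but the decomposition idea carries over. Here is a non-MPI sketch in plain Java threads, doing a simple 3-tap box blur over an array split into stripes (standing in for the radial blur):

<ecode>
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelBlur {
    public static void main(String[] args) throws Exception {
        double[] src = new double[1_000_000];
        double[] dst = new double[src.length];
        for (int i = 0; i < src.length; i++) src[i] = Math.random();

        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        int stripe = src.length / workers;

        // Each worker owns a contiguous stripe of the output; overlapping reads of
        // src are harmless because nobody writes to src.
        for (int w = 0; w < workers; w++) {
            final int from = w * stripe;
            final int to   = (w == workers - 1) ? src.length : from + stripe;
            pool.submit(() -> {
                for (int i = from; i < to; i++) {
                    double left  = (i > 0) ? src[i - 1] : src[i];
                    double right = (i < src.length - 1) ? src[i + 1] : src[i];
                    dst[i] = (left + src[i] + right) / 3.0;   // 3-tap box blur
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);   // the "synchronize correctly" part
        System.out.println("dst[0] = " + dst[0]);
    }
}
</ecode>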

Math Barbie says (1)

Russ Nelson (33911) | more than 7 years ago | (#19305571)

Math Barbie says "Parallel Programming is hard!"

Superwaste (1)

gustgr (695173) | more than 7 years ago | (#19305601)

Recently my university [www.usp.br] bought a supercomputer listed [top500.org] among the top 500 computer systems in the world. During a Computational Physics class, my professor was commenting on this issue and noted that the system was intended to serve about 80 research groups with tasks that demand parallel processing. The reality was much more modest, though: only two groups were actually using the system (one of them was my professor's research group), and of these two groups, only his was taking advantage of parallel programming techniques to use the system the way it should be used.

He said that a new policy would be implemented; this was about 3 or 4 months ago, so I don't know how it is right now. But it reflects the lack of preparation to use such a system: they've spent tons of cash on something they don't really know how to use properly.

A Case Study (2, Informative)

iamdrscience (541136) | more than 7 years ago | (#19305603)

My brother just recently started doing IT stuff for the research psych department at a respected university I won't name. They do a lot of mathematical simulations with Matlab and in order to speed these up they decided to buy several Mac Pro towers with dual quad core Xeons at $15,000*. The problem is, their simulations aren't multithreaded (I don't know if this is a limitation of Matlab or of their programming abilities -- maybe both) so while one core is cranking on their simulations to the max, the other 7 are sitting there idle! So while a big part of this ridiculous situation is just uninformed people not understanding their computing needs, it also shows that there are plenty of programmers stuck playing catch-up since computers with multiple cores (i.e. Core 2 Duo, Athlon X2, etc.) have made their way onto the desktops of normal users.

I think this is a temporary situation though, and something that has happened before: there have been many cases where powerful new hardware seeped into the mainstream before programmers were prepared to use it.



*I know what you're thinking: "How the hell do you spend $15,000 on a Mac?". I wouldn't have thought it was possible either, but basically all you have to do is buy a Mac with every single option that no sane person would buy: max out the overpriced RAM, buy the four 750GB hard drives at 100% markup, throw in a $1700 Nvidia Quadro FX 4500, get the $1K quad fibre channel card, etc.

And what of the other limit? (2, Insightful)

holophrastic (221104) | more than 7 years ago | (#19305631)

I would argue that most user tasks cannot be fundamentally parallel. Quite simply, if a user highlights some text and hits the bold button, there really isn't anything to split across multiple cores. No matter how much processing is necessary to make the text bold (or to build the table, or to check the spelling of a word, or to format a report, or to calculate a number, or to make a decision, as in A.I.), it's a serial concept and a serial algorithm, and cannot be anything more. Serial is cool too, remember.

So we're looking at multiple tasks. Obviously, gaming and operating systems get to split by engine (graphics, network, interface, each type of sound, A.I., et cetera). I'd guess that there is a limit of about 25 such engines that anyone can dream up. Obviously raytracing gets to have something like 20 cores per pixel, which is really really cool. But that's clearly the exception.

So really, in my semi-expert and wholly professional opinion, I think priming is the way to go. That is, it takes the user up to a second to click a mouse button. So if the mouse is over a word, start guessing. Get ready to bold it, underline it, turn it into a table, look up related information, pronounce it, stretch it, whatever.

Think of it as: "we have up to a full second to do something. We don't know what it's going to be, but we have all of these cores just sitting here. So we'll just start doing stuff." It's the off-screen buffering of the user task world.

The result is that just about any task, no matter how complicated, can be presented instantly, having already been calculated.

Doesn't exactly save power, but hey, the whole point is to utilize power. And power requires power. There must be someone's law -- the conservation of power, or power conversion, or something that discusses converting power between various abstract or unrelated forms -- muscle to crank to generator to battery to floating point to computing to management to food, et cetera; whatever.

Right, so priming / off-screen buffering / preloading. Might as well parse every document in the directory when you open the first one. Might as well load every application on start-up. Can't have idle cores lying around just for fun.
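
One sketch of that priming idea in Java, using futures on the spare cores; the render methods here are hypothetical stand-ins for whatever expensive formatting work the application really does:

<ecode>
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Priming {
    static ExecutorService spareCores = Executors.newFixedThreadPool(
            Math.max(1, Runtime.getRuntime().availableProcessors() - 1));

    public static void main(String[] args) throws Exception {
        String wordUnderMouse = "parallel";

        // Mouse hovers a word: start guessing what the user might ask for.
        Callable<String> boldTask      = () -> renderBold(wordUnderMouse);
        Callable<String> underlineTask = () -> renderUnderline(wordUnderMouse);
        Future<String> boldReady      = spareCores.submit(boldTask);
        Future<String> underlineReady = spareCores.submit(underlineTask);

        // Up to a second later the click arrives; the answer is (hopefully) already computed.
        System.out.println(boldReady.get());

        spareCores.shutdownNow();   // discard any guesses that weren't needed
    }

    // Hypothetical stand-ins for real formatting work.
    static String renderBold(String w)      { return "<b>" + w + "</b>"; }
    static String renderUnderline(String w) { return "<u>" + w + "</u>"; }
}
</ecode>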

Has anyone ever thought that maybe we won't run out of copper after all? I'd bet that at some point in the next twenty years we'll go back to clock speed improvement. I'd guess that it's when core busing becomes ridiculous.

I still don't understand how we went from shared RAM as the greatest thing in the world to GPUs with on-board RAM, to CPUs with three levels of on-chip RAM, to cores each with their own on-core RAM.

Hail to the bus driver; bus driver man.

The Bill comes due (4, Insightful)

stox (131684) | more than 7 years ago | (#19305637)

After years of driving the programming profession to its least common denominator and eliminating anything that was considered non-essential, somebody is surprised that current professionals are not elastic enough to quickly adapt to a changing hardware environment. Whoda thunk it? The ones you may have left with some skills are nearing retirement.

Time (1)

Baldrson (78598) | more than 7 years ago | (#19305641)

Maybe if physicists started with a proper theory of time they'd figure out the reason our formalisms are so prone to over-serialize algorithmic descriptions. But then asking physicists for consilience with other disciplines is basically like asking a gang of tweakers to reflect on the bloody holes they're itching in their heads.

As for computer scientists... well... I really don't know that there is much going on of any value outside of Kolmogorov Complexity. Maybe something will come out of that that will resolve the issue while the physicists are trepanning themselves into theoretic epilepsy.

Hardware Engineers VS. Software Designers (-1, Redundant)

Anonymous Coward | more than 7 years ago | (#19305653)

Computer engineers and hardware designers have been using parallel design techniques for a while now. They successfully realize these designs in ASICs and FPGAs.

Maybe programmers and software people should learn some hardware design techniques and translate them to the software domain.