Time to Get Good At Functional Programming?

Soulskill posted more than 5 years ago | from the dysfunctional-programming-on-the-way-out dept.

Programming 620

prone2tech writes "From an article at Dr. Dobb's: Chipmakers have essentially said that the job of enforcing Moore's Law is now a software problem. They will concentrate on putting more and more cores on a die, and it's up to the software industry to recraft software to take advantage of the parallel-processing capabilities of the new chips. As is argued in this article, this means becoming proficient in parallel functional programming. The bad news? Getting good at functional programming is hard, harder than moving from iterative Pascal or Basic or C coding to object-oriented development. 'It's an exaggeration but a useful one: When you move to FP, all your algorithms break.'"

Convince your boss. (5, Funny)

narcberry (1328009) | more than 5 years ago | (#26009343)

You mean oo isn't the only option?

Re:Convince your boss. (5, Interesting)

fyngyrz (762201) | more than 5 years ago | (#26009541)

Another question you might ask yourself is: are you going to let the CPU designers push you into a programming paradigm you are not effective in? Personally, I can see a machine being quite useful with, say, 16 or 32 cores, just because these machines do more than one thing at a time. But I'd much rather see them speed the cores up than endlessly multiply the number of them. There is a *lot* of room left to do this. 3D architectures offer more connectivity than is currently being used. Both the number and type of one-cycle instructions within a CPU can be increased until the answer is "all of 'em" (which I doubt they're ever going to get to), and orthogonality can be increased until, again, all instructions are available to the same degree for all registers and addressing modes, no matter what. Compilers like broad orthogonality (and so do assembly programmers, not that there are a lot of us left).

If CPU designers run off towards the horizon making many-core designs, what if no significant number of people follow them there? Which... frankly... pretty much seems to be the case. I've an 8-core machine, and just about the only things that actually use those cores together are the easy ones: graphics and waveform encoding/decoding. Aperture sure enough uses all my cores in a well-balanced fashion and can build a JPEG in a snap; but that's a far cry from my web browser doing the same thing while trying to render a page.

I'm just saying that the direction the CPU folks have chosen lately doesn't have to be the direction we actually end up going, and there are points in favor of this as the best choice.

Just some thoughts.

Re:Convince your boss. (0)

Anonymous Coward | more than 5 years ago | (#26009639)

Multi-core is the way to go. And it's easy to write software for it.

OpenMP anyone?

#pragma omp parallel for
for (int i = 0; i < 10000; i++)
{
    // loop body here: OpenMP splits the iterations between CPU cores
    // (build with an OpenMP-enabled compiler, e.g. gcc -fopenmp)
}

Re:Convince your boss. (5, Insightful)

Zironic (1112127) | more than 5 years ago | (#26009645)

Well, the problem is that no matter how much you bash an algorithm with a functional language you can't magically make a sequential task into a parallel one.

Re:Convince your boss. (5, Insightful)

Chandon Seldon (43083) | more than 5 years ago | (#26009753)

Well, the problem is that no matter how much you bash an algorithm with a functional language you can't magically make a sequential task into a parallel one.

Thing is, you probably have a parallel task that was already bashed into a sequential one.

Most real-world problems are parallel problems. Even the ones that aren't (say... compiling a file in C) you can usually run a lot of instances of in parallel.

Re:Convince your boss. (3, Funny)

danwesnor (896499) | more than 5 years ago | (#26009651)

But I'd much rather see them speed the cores up than endlessly multiply the number of them.

Your idea is not feasible because it screws up too many marketing campaigns. Please revise your idea and run it through sales before submission to management.

Broken Algorithm BS (5, Insightful)

marnues (906739) | more than 5 years ago | (#26009361)

When you move to FP, all your algorithms break

If moving to a functional programming language breaks your algorithms, then you are somehow doing it wrong. That line doesn't even make sense to me. Algorithms are mathematical constructs that have nothing to do with programming paradigm. Assuming the language is Turing complete, how is that even possible?

Re:Broken Algorithm BS (5, Insightful)

marnues (906739) | more than 5 years ago | (#26009395)

Christ on a crutch...I didn't even pick up on it the first time around. How can Moore's Law ever be a software issue? I can accept that most people don't care about transistor count, but saying it can somehow become a software issue is just too many steps removed from the original meaning. I love functional programming languages, but this article is hurting my brain.

Re:Broken Algorithm BS (5, Informative)

reginaldo (1412879) | more than 5 years ago | (#26009611)

Moore's Law becomes a software issue when we need to change our coding paradigm to use all of the cores on the chip. The hardware holds up its end of the deal, but we need to develop software that utilizes the hardware correctly.

That's where FP comes into play, as it allows developers to write heavily parallelized code that is also safe and fault-tolerant.

Re:Broken Algorithm BS (2, Informative)

Anonymous Coward | more than 5 years ago | (#26009613)

No, it makes sense.

If you want to keep seeing Moore's-law increases in CPU performance, the improvements are no longer going to come from throwing more transistors at single-threaded programs. We hardware engineers are simply out of smart ideas to eke exponentially better performance out of a single instruction stream.

Instead, we're going to produce chips with more parallelism -- more cores and more threads. But it's going to take a lot of software work (languages, transactional memory, libraries, et cetera) for these improvements to translate to functionally faster computers.

Hardware is going parallel, like it or not. Software has to change as well.

Re:Broken Algorithm BS (2, Interesting)

Sancho (17056) | more than 5 years ago | (#26009815)

Moore's law dealt specifically with hardware. To say that "Moore's law is now a software problem" shows as much of a misunderstanding as attributing "any mention of Nazis causing someone to immediately lose the argument" to Godwin's Law[1].

It's reasonable to suggest that increasing the speed at which software runs will start requiring learning multi-threaded programming techniques, but to say that software will allow Moore's Law to continue is incorrect.

The idea that performance will increase at the same rate as doubling of transistors is attributed to a colleague of Moore. I have not heard of a moniker for this "law."

[1] Godwin's law states that as a USENET thread increases in length, the probability of a comparison to Hitler or a Nazi goes to 1. Godwin's law states nothing about the thread ending or about a winner or loser.

Re:Broken Algorithm BS (1)

thermian (1267986) | more than 5 years ago | (#26009653)

How can Moore's Law ever be a software issue?

It can't be. What we have here is hero worship. A simple statement made in a magazine that few people read has been elevated to the status of eternal truth.

Moore's law is a hardware thing, and has been applied to more and more improbable situations in recent years. I suspect this is because many people in computer science feel envious of the laws other disciplines enjoy. Computer science has few of its own, and of those few, this is the most often quoted.

Any transposition to software is without merit. Therefore any line of reasoning using this as a basis is irredeemably flawed.

It is true that the multi-core 'revolution' means we have to change our ways. I'm in the midst of a total re-write of my open source product for this very reason. However, my choice of language was determined by suitability to the problem, not fluffy nonsense about Moore's Law -- and it isn't a functional language.

Re:Broken Algorithm BS (5, Informative)

Anonymous Coward | more than 5 years ago | (#26009715)

It's true that Moore actually said the transistor count doubles every 18 months. However, for a long time, an immediate corollary of Moore's Law was that software doubled in speed every 18 months, which is essentially why Moore's Law was important. I think what the author is trying to say is that in order for this corollary to remain true, people must learn to write parallel software. It is much easier for a compiler to get a functional program (FP) running in parallel than a sequential program (SP). Hence, those who can write in FP languages will be better suited to write the software of tomorrow than those who can only write in SP languages.

Re:Broken Algorithm BS (3, Insightful)

DiegoBravo (324012) | more than 5 years ago | (#26009855)

>> How can Moore's Law ever be a software issue?

In a sense, it can be: if we start rewriting Java/C#/VB apps in assembler, I'm pretty sure the performance will at least double each year, and we can forget about those cores for good.

Thermodynamic computing (5, Funny)

CustomDesigned (250089) | more than 5 years ago | (#26009909)

Pure functional programming removes all side effects. This makes memory optimization (critical to efficient multiprocessing) much easier. It also makes garbage collection easier - but that is pretty much canceled by an increase in garbage.

But beyond functional programming is thermodynamic computing. This starts with functional, but requires all operations to be reversible. Ideally, the total electrons are conserved - you can never clear a bit - just exchange bits (and of course more complex operations like add, mul, etc - but all reversible and charge conserving). Of course real hardware will still need to make up for losses, but power consumption and heat go way down.

The fascinating thing is that thermodynamic programming requires a pool of known 0 bits and known 1 bits. As the algorithm progresses, you can't just throw away results you aren't interested in - you collect the unwanted results in an entropy pool. Eventually, you run out of known bits, and need to clear some entropy bits in order to continue. This takes lots more power (like erasing a flash block). The analogy to real world entropy is striking.

Re:Broken Algorithm BS (0)

irtza (893217) | more than 5 years ago | (#26009497)

the way I loosely (I guess strict interpretation would likely just lead to a contradiction) interpreted this is that the algorithms are not directly applicable. In other words, you have to pick a new algorithm to accomplish the same job without incurring some form of a penalty. Maybe I'm wrong, but this makes sense in my world... now I await the reply from the folks who can not tolerate deviation from the strictest rules of language. You know, let me make this easy for them: I am going to

Re:Broken Algorithm BS (5, Insightful)

sexconker (1179573) | more than 5 years ago | (#26009539)

While algorithms won't break, you'll certainly have to rewrite a lot of them to take advantage of multiple processors.

This problem is not new.
The solutions are out there, and are also not new.

Article is pure shit.

Re:Broken Algorithm BS (1, Interesting)

Anonymous Coward | more than 5 years ago | (#26009617)

Turing complete doesn't mean the running times are the same. Your algorithms don't break in the sense that they no longer produce the correct output. They break in the sense that their running time is multiplied by some polynomial function of the input size; i.e. they are broken for practical purposes.

Re:Broken Algorithm BS (1)

Jhyrryl (208418) | more than 5 years ago | (#26009693)

I believe the intent of the statement is that all of your patterns break. For example, as TFA points out, in functional programming expect to use "Recursion as the primary tool for iteration."

Any of your programming patterns that use loops would instead use recursive function calls.
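To make that concrete, here is a minimal sketch (Haskell; the names are mine) of the same computation in both styles:

-- imperative style: total = 0; for each x in the list: total += x
-- functional style: the loop becomes recursion over the list
sumList :: [Int] -> Int
sumList []     = 0
sumList (x:xs) = x + sumList xs

-- or, idiomatically, a fold that packages up the recursion pattern
sumList' :: [Int] -> Int
sumList' = foldr (+) 0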

Re:Broken Algorithm BS (2, Informative)

DoofusOfDeath (636671) | more than 5 years ago | (#26009719)

If moving to a functional programming language breaks your algorithms, then you are somehow doing it wrong.

Easy. Pure functional programming doesn't permit side-effects. Algorithms that perform I/O at various points in the algorithm can't easily be expressed in languages like that.

Also, although some popular functional languages like ML and Erlang have hacks to get around this, purely functional programming doesn't let a function modify global state. Without those hacks in the language, algorithms that require in-place modification of arrays (such as some sorting algorithms) can't be expressed at all in those languages. (You can modify the algorithms to not do in-place modifications of arrays, but then that's not the original algorithm any more.)
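For what it's worth, Haskell's version of such a hack is the ST monad, which permits genuinely in-place mutation behind a pure interface. A minimal sketch (names are mine):

import Control.Monad.ST
import Data.STRef

-- the mutation is real but local: runST guarantees it cannot leak,
-- so callers still see an ordinary pure function
sumLoop :: [Int] -> Int
sumLoop xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc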

Why would the Algorithm break? (3, Interesting)

Zironic (1112127) | more than 5 years ago | (#26009363)

I don't see why an algorithm would break just because you're changing language type, the whole point of an algorithm is that it's programming language independent.

Re:Why would the Algorithm break? (4, Informative)

koutbo6 (1134545) | more than 5 years ago | (#26009425)

Functional programming languages make a rather restrictive assumption, and that is that all variables are immutable.
This is why functional programs are more suited for concurrency, and this is why your sequential algorithms will fail to work.

Re:Why would the Algorithm break? (1)

Zironic (1112127) | more than 5 years ago | (#26009549)

There is nothing that prevents you from implementing sequential algorithms in a functional language.

Re:Why would the Algorithm break? (1)

koutbo6 (1134545) | more than 5 years ago | (#26009633)

Indeed... that's why functional languages are useful. The claim made in the article is that this is merely an exaggeration:

Getting good at functional programming is hard, harder than moving from iterative Pascal or Basic or C coding to object-oriented development. It's an exaggeration but a useful one: When you move to FP, all your algorithms break

If you can get past the idea that you only have immutable variables, you can do pretty much anything you usually do in other sequential languages. The algorithms are identical at an abstract level.
If, however, your algorithms are more concrete and you tend to think about variables when developing them, you are in for a hard time.

Re:Why would the Algorithm break? (1)

DragonWriter (970822) | more than 5 years ago | (#26009583)

Functional programming languages make a rather restrictive assumption, and that is that all variables are immutable.

This doesn't break algorithms. It just changes the way a particular algorithm will be expressed in code. Any functional language will provide a way to create and access a mutable storage cell, though it may be more programming work to do so than it would be in a language where that was the normal use of a variable.

Re:Why would the Algorithm break? (1)

johanatan (1159309) | more than 5 years ago | (#26009769)

Yea but if you did that, you'd be missing the point. That's like using C++ as if it were C. You're really missing the elegance of the language/paradigm when you do that.

Re:Why would the Algorithm break? (2, Informative)

techno-vampire (666512) | more than 5 years ago | (#26009605)

..all variables are immutable.

You mean to say that in a functional programming language, variables aren't? A paradox, a paradox, a most delightful paradox!

Re:Why would the Algorithm break? (4, Informative)

Chandon Seldon (43083) | more than 5 years ago | (#26009699)

You mean to say that in a functional programming language, variables aren't? A paradox, a paradox, a most delightful paradox!

Functional variables are like mathematical variables - they're variable in that you may not have discovered their value yet, but once you discover their value it stays the same for the current instance of the problem. For the next instance of the problem (i.e. the next call to the function), you're talking about a different set of variables that potentially have different values.
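A tiny sketch of that idea (Haskell; the function is hypothetical):

-- x is bound once per call and never changes within that call;
-- the next call simply creates a fresh binding with its own value
square :: Int -> Int
square x = x * x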

Re:Why would the Algorithm break? (1)

koutbo6 (1134545) | more than 5 years ago | (#26009733)

Indeed... but that's what I called them for lack of a better term. I can't remember what the proper term is for a variable in a functional language.
If you read Joe Armstrong's book on programming Erlang, I believe he uses the term "variable" since it is what most programmers are used to.

Re:Why would the Algorithm break? (2, Informative)

Nedmud (157169) | more than 5 years ago | (#26009479)

In practice, algorithms are generally programming language independent *within the set of imperative languages*. They are almost always written as a series of steps. And that's why they don't apply directly to functional languages.

I just checked Knuth's definition, and he lists: finiteness, definiteness, input, output and effectiveness as the requirements for an algorithm. All of these translate with more or less effort into the functional programming landscape.

As an example, I have seen the quicksort algorithm written something like:

sort :: Ord a => [a] -> [a]
sort []     = []
sort (p:xs) = sort lowers ++ equals ++ sort highers
  where lowers  = [x | x <- xs, x < p]
        equals  = p : [x | x <- xs, x == p]
        highers = [x | x <- xs, x > p]

(The original was in Haskell; this is reconstructed from memory, so treat it as a sketch.)

So I think I agree that "all your algorithms will break" is exaggeration.

Re:Why would the Algorithm break? (2, Interesting)

Anonymous Coward | more than 5 years ago | (#26009767)

The reason quicksort is quick is that in a language with modifiable random-access arrays, partitioning can easily be done in-place (without having to copy the array). You can't do that in functional programming.

Re:Why would the Algorithm break? (1)

pla (258480) | more than 5 years ago | (#26009881)

I don't see why an algorithm would break just because you're changing language type, the whole point of an algorithm is that it's programming language independent.

More to the point, why would your choice of programming paradigm have a significant impact on parallelization of algorithms? They all have equal computational power, and differ only in the style of expressing your intent.

If anything, current CPUs best match the imperative model (and more cores of a similar design won't change that); but they only do so out of popularity - if functional programming weren't such a complete bitch to implement, with no benefits over imperative or OO, CPUs would export a more functional-friendly ISA.

And for the record, I can code adequately in Tcl... and given a choice, I'd choose C(++/#/whatever) for any task (even intended recursion - on a von Neumann architecture, the nonrecursive form of any given algorithm will outperform the recursive version).

Amdahl's Law (5, Insightful)

Mongoose Disciple (722373) | more than 5 years ago | (#26009367)

Question is, how realistic is that?

Amdahl's Law also tells us that the amount that parallelization can speed something up is ultimately limited by the parts that can't be done in parallel.
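For reference, the standard statement of the law: if a fraction P of the work can be spread over N cores, overall speedup is 1 / ((1 - P) + P/N). Even with P = 0.95, the speedup can never exceed 20x, no matter how many cores you add.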

Re:Amdahl's Law (5, Informative)

Tenek (738297) | more than 5 years ago | (#26009631)

And Gustafson's Law tells us that as the problem size increases the non-parallelizable portion of the program will shrink.
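(Standard statement, for reference: scaled speedup is S(N) = (1 - P) + N*P, where P is the parallel fraction. Keep the runtime fixed, grow the problem, and the serial fraction matters less and less.)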

Re:Amdahl's Law (1)

blair1q (305137) | more than 5 years ago | (#26009673)

If something can't be done in parallel, maybe it can be pipelined.

Or you can reserve one CPU to do the non-parallelizable, non-pipelineable parts, while the rest are working on the next parallel/pipelined task.

Treating the processors as heterogeneous, with the ability to make homogeneous subsets, is the key to getting nearer to unit efficiency from a multiprocessing system.

Re:Amdahl's Law (2, Informative)

Anonymous Coward | more than 5 years ago | (#26009799)

Yes.

But furthermore, saying that FP "solves" the multithreading problem is intellectually dishonest. FP proponents like to pretend that because FP programs are intrinsically parallelizable, you don't have to worry about it: the magic compiler or runtime will do it for you. Wrong. The difficulty of multithreaded programming is not using threads, it's using them efficiently. Sure, if the problem is sorting 1 billion ints, you'll multithread from the top of the recursion, and that'll be quite efficient, but anyone can do the same trivially in any language. The real problems start when you have to schedule multiple heterogeneous tasks, and FP has no intrinsic answer to that.

As for the legendary reliability that Erlang fans love to point out, I'm glad we got an example at last. Ericsson AXD-301. How? "No shared state and a sophisticated error-recovery model" he says. Yeah, right! It's a f*cking switch! How about custom hardware built like a tank, with predictable failure modes, and specs (protocols) that haven't changed in 10 years (at least). Give me a break, it's not exactly difficult to get plenty of nines in those conditions!

Scheme (4, Funny)

Rinisari (521266) | more than 5 years ago | (#26009377)

(have I (feeling ((become popular Scheme) again)))

Oh noes! (3, Funny)

TypoNAM (695420) | more than 5 years ago | (#26009525)

Lisp! NO!!!!!!!!!!!!!!!!

Rolls over and dies...

(added to make filter happy)

Re:Scheme (1)

smitty_one_each (243267) | more than 5 years ago | (#26009827)

au_contraire :: String -> String
au_contraire "Scheme" = "Haskell [realworldhaskell.org] "

it's always a good time to try functional (5, Interesting)

cats-paw (34890) | more than 5 years ago | (#26009385)

It's been said in the comments on Slashdot many times: learning functional programming techniques will improve your programming skills. There are many good functional languages out there, and many have imperative features for ease of transition. No, functional programming will not solve all of your problems, but it will give you that most valuable of all lessons: how to think about a problem _differently_.

You don't need an excuse, start today.

Re:it's always a good time to try functional (2, Interesting)

Duhavid (677874) | more than 5 years ago | (#26009459)

What language would you recommend for an OO programmer to start with?

Re:it's always a good time to try functional (1)

LaskoVortex (1153471) | more than 5 years ago | (#26009599)

What language would you recommend for an OO programmer to start with?

Scheme [mit.edu]

Re:it's always a good time to try functional (1)

Chandon Seldon (43083) | more than 5 years ago | (#26009731)

What language would you recommend for an OO programmer to start with?

Scheme or Standard ML would be good places to start. One of the key points of learning a functional language, if you think of yourself as an "OO programmer", is being able to look at problems and come up with solutions that *aren't* OO, so try to avoid looking for a functional / OO hybrid language until you're comfortable with functional problem decomposition.

Re:it's always a good time to try functional (1)

fishbowl (7759) | more than 5 years ago | (#26009749)

I would suggest XSLT, because it has obvious real-world applications that make the endeavor of learning it not just an academic one.

I would also recommend taking a few simple algorithms and implementing them in Haskell, ML and Scheme. I would also suggest doing the same thing in, say, C or Perl but limiting yourself to a pure functional model.

One of the most enlightening courses I took dealt with lambda calculus as its own idiom, distinct from applications to computer science, but the material would make for some awfully dry recreational reading.

Re:it's always a good time to try functional (1)

johanatan (1159309) | more than 5 years ago | (#26009789)

Haskell

Re:it's always a good time to try functional (1)

nschubach (922175) | more than 5 years ago | (#26009813)

It's not really a "programming language" as much as it is a scripting language right now, but I've been digging into Ruby (not rails) and I like what I see so far.

Re:it's always a good time to try functional (4, Informative)

j1m+5n0w (749199) | more than 5 years ago | (#26009905)

I would say Haskell, but I think that's the language everyone should learn, so I'm biased. The typeclass system provides for some of the functionality of object oriented programming.

If Haskell scares you, Ocaml is another good choice. It's a multi-paradigm language with an emphasis on functional programming, but it also allows you to use mutable state wherever you like (whether this is a good thing or not is a matter of some debate). It even has some traditional object-oriented programming features, but they tend not to get used much in practice.

If you care about performance, they both have decent native-code compilers. My impression is that Ocaml is a bit faster for single-core tasks, but Haskell's parallel programming features are much better.

Re:it's always a good time to try functional (5, Interesting)

LaskoVortex (1153471) | more than 5 years ago | (#26009465)

You don't need an excuse, start today.

The excuse is: it's fun. But if you do start, choose the right language for the job. Python for example seems good for fp, but was not designed for the task. Don't choose a language that simply supports functional programming. Choose a language that was designed specifically for functional programming. You'll be happier in the long run when you don't run into limitations of the language you choose.

My 2c.

Re:it's always a good time to try functional (5, Informative)

TheRaven64 (641858) | more than 5 years ago | (#26009551)

Python for example seems good for fp

Last time I heard this, I checked, and the Python developers were refusing to commit tail-recursion optimisation patches because it 'made debugging too hard'. Since most functional algorithms are tail-recursive, you will blow your stack very quickly without this. It's even in GCC, meaning that C is better suited to functional programming than Python.
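For anyone unsure of the term: a call is a tail call when it is the entire result of the function, so the compiler can reuse the stack frame instead of growing the stack. A sketch in Haskell (with a seq to keep the accumulator strict):

-- tail-recursive sum: the recursive call IS the result, so an
-- optimizing compiler turns this into a loop with constant stack
sumTo :: Int -> Int -> Int
sumTo acc 0 = acc
sumTo acc n = let acc' = acc + n
              in acc' `seq` sumTo acc' (n - 1)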

Re:it's always a good time to try functional (1)

NinthAgendaDotCom (1401899) | more than 5 years ago | (#26009581)

Oh man, thanks for warning me. I don't want to blow my stack at the office. Think of the mess and the embarrassment.

Re:it's always a good time to try functional (2, Informative)

blair1q (305137) | more than 5 years ago | (#26009683)

s/stack/wad

dang newbies

Re:it's always a good time to try functional (1)

rk (6314) | more than 5 years ago | (#26009807)

You can implement tail-recursion optimization in pure Python. It's not speedy at all (if you want speedy, write in C/C++, Java, or something), but it keeps recursive functions from filling the stack. It would be better if it were implemented in the environment, but you can make it work.

Multi Threaded programming (1)

Samschnooks (1415697) | more than 5 years ago | (#26009399)

Quotes from TFA:

The answer is that FP languages store state in function parameters -- that is, on the stack. ...

Everything is a function.

I think of FP programming as if it were a multithreaded program with some threads having their own core. I've written threaded programs that didn't use global variables -- OK, maybe a semaphore, but nothing like global variables. I don't see what the big deal is. I don't think this is going to cause programmers as many problems as he is saying.
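To illustrate "state in the parameters" (a sketch in Haskell; names are mine): the running total below would be a mutable variable in C, but here it lives in the argument list.

-- each step passes an updated accumulator onward instead of
-- overwriting anything in place
runningTotal :: Int -> [Int] -> [Int]
runningTotal _   []     = []
runningTotal acc (x:xs) = (acc + x) : runningTotal (acc + x) xs

-- runningTotal 0 [1,2,3] == [1,3,6]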

Formal Methods Initiative (4, Insightful)

Janek Kozicki (722688) | more than 5 years ago | (#26009407)

This reminds me about the /. article "Twenty Years of Dijkstra's Cruelty" [slashdot.org] , just a few days ago.

The problem boils down to the fact that programming is in fact a very advanced calculus, and writing a program is 'deriving' it -- as in reaching a correct formula, with a proof that it's correct. That's how software should be written anyway. And functional programming will only make it *simpler*, not harder.

Re:Formal Methods Initiative (1)

sexconker (1179573) | more than 5 years ago | (#26009561)

What's cal coo lus?

Question (3, Interesting)

Anonymous Coward | more than 5 years ago | (#26009411)

So a quick question before I go and master FP...does the compiler automatically compile the code that can be done in parallel in the proper "way", or do I have to specify something?

Also, if I rewrote an app written in an imperative language in an FP one like Haskell, would I see that much of a difference on a multi-core processor?

Re:Question (1)

Timothy Brownawell (627747) | more than 5 years ago | (#26009563)

So a quick question before I go and master FP...does the compiler automatically compile the code that can be done in parallel in the proper "way", or do I have to specify something?

I would imagine that this depends on the language and compiler you use.

Also, if I rewrote an app written in an imperative language in an FP one like Haskell, would I see that much of a difference on a multi-core processor?

That depends on a great many things, including the quality of the compilers, your skill at writing code in the different languages, and the nature of the particular app.

Re:Question (2, Interesting)

Zironic (1112127) | more than 5 years ago | (#26009597)

The way it should work is that if you're implementing a parallel algorithm like merge sort http://en.wikipedia.org/wiki/Merge_sort [wikipedia.org], it should just go and grab a new processor core each time you make a recursive call.

Re:Question (1)

johanatan (1159309) | more than 5 years ago | (#26009829)

'Should' and 'do' are not the same though. Even Haskell (which *should* do these things automatically) does not. You still essentially have to specify a little 'tag' to say 'parallelize this'. It's a lot less verbose than the equivalent parallel_for in C# or C++, but still not as automatic as I'd like.

I'm not sure if other FP languages/compilers do this, but it is certainly possible in theory (and the Haskell guys are working on it too I think).
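For reference, the "tag" in GHC's case is `par` from Control.Parallel (in the parallel package), which sparks an expression for evaluation on another core. A sketch -- compile with -threaded, run with +RTS -N:

import Control.Parallel (par, pseq)

-- `par` sparks `left` so a spare core may evaluate it, while `pseq`
-- forces `right` here before the two halves are combined
sumBoth :: [Int] -> Int
sumBoth xs = left `par` (right `pseq` (left + right))
  where (as, bs) = splitAt (length xs `div` 2) xs
        left     = sum as
        right    = sum bs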

Re:Question (1)

Chandon Seldon (43083) | more than 5 years ago | (#26009833)

The way it should work is that if you're implementing a parallel algorithm like merge sort http://en.wikipedia.org/wiki/Merge_sort [wikipedia.org], it should just go and grab a new processor core each time you make a recursive call.

How do you implement that so that for N processors each call after the Nth isn't horribly inefficient? Sometimes you want a new thread, sometimes you don't, and if you can figure out how to tell the difference with static analysis before you know about the target machine...

Re:Question (1)

johanatan (1159309) | more than 5 years ago | (#26009841)

Depends on the language and compiler. It is certainly possible in theory, but I don't yet know of any language/compiler which does it.

And, it wouldn't so much be the compiler exclusively that does this, but rather some co-operation between it and the runtime.

Slowly getting there (0)

Anonymous Coward | more than 5 years ago | (#26009443)

Fortunately, people are also getting better and better at the comparably difficult task of writing good books about FP. ;-) See the new book on Haskell [realworldhaskell.org].

Re:Slowly getting there (2, Funny)

K. S. Kyosuke (729550) | more than 5 years ago | (#26009491)

Hmm, strange, I thought I had logged on. You gotta love web apps. :]

Break? How? (1)

grumbel (592662) | more than 5 years ago | (#26009445)

I don't quite see why algorithms should break. I would say the opposite is true: functional languages allow you to write an algorithm down much more clearly and compactly, leading to fewer bugs and better code.

The problem I have with functional programming is more that most programming isn't about clear, well-specified algorithms, but about routing UI events around, connecting framework components, and simple batch processing, i.e. areas where functional programming doesn't provide an obvious advantage.

And of course, functional programming doesn't make your algorithms magically parallel; it can help sometimes, but there is still plenty of manual work involved.

Moore's Law? (2, Insightful)

DragonWriter (970822) | more than 5 years ago | (#26009447)

Chipmakers have essentially said that the job of enforcing Moore's Law is now a software problem.

How is maintaining the rate of increase in the number of transistors that can be economically placed on an integrated circuit a software problem?

Re:Moore's Law? (1)

perlchild (582235) | more than 5 years ago | (#26009755)

More like "we can't make data transfers fast enough to keep supplying Moore's Law, so your software will have to use fewer of them." Having more parallel algorithms means chipmakers can seem to respect Moore's Law (which is good; their shareholders are happy). When in fact it was an approximation in the first place, and probably should have said "you will route twice as much information through the chip in the same amount of time every 18 months." The paraphrase is mine, because outside of specific problems, having more transistors might be good, but what you really care about is the amount of arbitrary, problem-specific data you can push through your chip. I posit that the new GPU fad is based at least in part on the attraction of having pipelines dedicated to processing vector-based data at high speeds, and is related to this.

It's just hype (0)

Anonymous Coward | more than 5 years ago | (#26009463)

Some things are easy to parallelize because they don't need mutable shared data (known as embarrassingly parallel [wikipedia.org] problems); other things are hard because they do need it.

Functional programming prohibits using mutable data... so parallelization in FP is easy, but not because FP actually makes anything easier - it's because the only things you can do in FP are the embarrassingly parallel problems.

parallel algorithms, mutable data, and STM (4, Insightful)

j1m+5n0w (749199) | more than 5 years ago | (#26009777)

While pure functional code isn't allowed to manipulate mutable, shared state, functional languages often provide some mechanism to mix "pure" and "impure" (stateful, imperative) code.

In the haskell world, there is the IO monad, which is sort of a sandbox where anything is allowed. Functions within the IO monad (often called "IO actions") are allowed to invoke other IO actions or call pure code, but the reverse is not true; pure code cannot invoke an IO action. Also, due to laziness, pure functions that were passed an unevaluated "thunk" as an argument might trigger deferred IO, but this is (usually) transparent to the programmer.

So far, this doesn't sound any better than a pure imperative language, but there is also an STM monad (for software transactional memory), which is like pure code except that you're allowed to access shared mutable data through a restricted API. STM is based on the idea that if two processes race and manipulate the same shared data structures at the same time, the conflict can be detected by the run-time system, which can stop and replay the transactions one after the other.

The reason STM transactions can be safely replayed by the run-time system is that the language guarantees that the STM transaction doesn't have any hidden state somewhere, that might cause problems if the transaction were replayed. This is not a guarantee you can make in C, C++, Java, or any other popular language that I am aware of.

Note 1: It is possible for STM transactions to livelock if they continually conflict and are replayed, so you can still shoot yourself in the foot. However, it does make avoiding certain other problems much easier.

Note 2: I'm not really a haskell guru, so everything above should be taken with a grain of salt. Real World Haskell has a chapter [realworldhaskell.org] on STM, which is the basis of my current understanding (I haven't yet had cause to use STM in any program I've written.)
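To make the API concrete, a minimal sketch (the account example and names are mine):

import Control.Concurrent.STM

-- move money atomically; if another transaction races with this one,
-- the runtime detects the conflict and replays the transaction
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  check (balance >= amount)  -- retries until funds are available
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 40)
  mapM_ (\v -> readTVarIO v >>= print) [a, b]  -- prints 60, then 40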

Re:It's just hype (1)

mevets (322601) | more than 5 years ago | (#26009785)

It's not just hype; Church's thesis asserts that lambda calculus and Turing machines are equivalent. I have a slightly different problem with 'FP is the answer', in that FP implementations that do not support lazy evaluation are so slow in practice as to be useless. Lazy evaluation, which is approximately caching, monitors the dependencies between entities and thus limits re-evaluation to just what might have changed. I don't really see how this relationship management can be implemented effectively without 'mutable storage', which resurrects all the problems of more conventional programming models.

Perl (0)

Anonymous Coward | more than 5 years ago | (#26009471)

If you think Perl is unreadable, wait till you read FP code... it's even more compact and complicated than Perl one-liners.

Function is easy (3, Insightful)

TheRaven64 (641858) | more than 5 years ago | (#26009477)

The biggest problem with functional languages tends to be their type systems (I'm looking at you, Haskell). A functional language with a nice type system, like Erlang, is easy to pick up. And the example I picked totally at random, Erlang, also happens to have CSP primitives in the language, which makes parallel programming trivial. I've written code in it and then deployed it on a 64-processor machine and watched it nicely distribute my code over all 64 processors. If you program in a CSP style (which is easy), then your code will exhibit 1000-way parallelism or more, and so will trivially take advantage of up to that many processors.

And, actually, object orientation is a good option too. Alan Kay, who coined the term, defined it as 'simple computers [objects] communicating via message passing' - sounds a lot like CSP, no? The main difference is that OO is usually implemented with synchronous message passing, but if you implement it with asynchronous messaging (the actor model), you have something almost identical to CSP. You can also add this implicitly with futures. I've done this in Objective-C for Etoile. Just send an object an -inNewThread message, and any subsequent message you send to it is passed via a lockless ring buffer to the other thread and executed. We use it in our music jukebox app, for example, to run the decode in a separate thread from the UI. Implementing it in the language, rather than the library, means you can do it more efficiently, so this by no means replaces Actalk or Erlang in the general case; but modern processors are fast serial processors, so it makes sense to program much larger chunks of serial code on these systems than Erlang or Actalk encourage.
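The same CSP flavor is available in most FP runtimes. A tiny Haskell sketch with channels (a stand-in for the Erlang idiom described above; names are mine):

import Control.Concurrent (forkIO)
import Control.Concurrent.Chan

-- one "process" owning its own state, reachable only by message
main :: IO ()
main = do
  inbox <- newChan
  reply <- newChan
  _ <- forkIO $ do                 -- the worker process
        msg <- readChan inbox
        writeChan reply ("echo: " ++ msg)
  writeChan inbox "hello"
  putStrLn =<< readChan reply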

Re:Function is easy (0)

Anonymous Coward | more than 5 years ago | (#26009817)

CSP? I think you mean CPS (Continuation Passing Style).

Re:Function is easy (1)

johanatan (1159309) | more than 5 years ago | (#26009861)

And actually MS windows is a good option too. Just use 'PostMessage' instead of 'SendMessage' and run all windows in separate processor-bound threads. :-)

And, yes, I'm joking. That would be a pain to manage.

I'm a functional programmer, but... (1, Funny)

Anonymous Coward | more than 5 years ago | (#26009513)

I can't find a job. So don't worry, your non functional programming will lend you jobs at least fro another 30-50 years.

If you want to find a job learn to type (0)

Anonymous Coward | more than 5 years ago | (#26009911)

"...at least fro another..."

Sounds like a cop-out to me! (1)

Jane Q. Public (1010737) | more than 5 years ago | (#26009515)

If they want people to use their goddamned chips, then THEY can bloody well develop the necessary software tools.

Buying a 16-core chip that does not have adequate software support is like buying a Cadillac in a part of the world that has no gasoline. Most people won't do it!

My suggestion is: boycott. Just don't buy the damned things until there IS adequate software support. (There isn't yet, even for 4-core chips!)

Then watch the chipmakers jump to hire lots of programmers!

Re:Sounds like a cop-out to me! (1)

sexconker (1179573) | more than 5 years ago | (#26009575)

But if you don't buy the hardware, who will write the software? And without any cars, who will open a gas station?

Man, that was an easy one.

Re:Sounds like a cop-out to me! (1)

sumdumass (711423) | more than 5 years ago | (#26009825)

Christ, what are you doing? Trying to get Intel and AMD to beg the government for bailout money too? Imagine all the people without jobs if they go down...

XSLT is functional (1)

kbrasee (1379057) | more than 5 years ago | (#26009529)

So I think I'll just switch from Java to that.

Breaking algorithms (1)

DragonWriter (970822) | more than 5 years ago | (#26009545)

The bad news? Getting good at functional programming is hard, harder than moving from iterative Pascal or Basic or C coding to object-oriented development.

I suppose it's hard for someone who's done lots and lots of programming, none of it in a functional style, much less a functional language, and who is used to treating artifacts of other languages as if they were fundamental elements of programming.

It's an exaggeration but a useful one: When you move to FP, all your algorithms break.

Algorithms don't break. The algorithms that have a simple, concise realization in an FP language may not be the same ones that do so in, e.g., C, and the implementation of an algorithm, even one that is similarly easy in the FP language and C, may be very different, but changing programming paradigms doesn't do anything remotely like breaking algorithms.

Suggested reading. (4, Informative)

DoofusOfDeath (636671) | more than 5 years ago | (#26009567)

I've recently gotten into FP. I started with Erlang and then branched into ML and Haskell. In case you're interested, here are the best books I've encountered for each language:

Programming Erlang [amazon.com]

Programming Haskell [amazon.com]

ML for the Working Programmer [amazon.com]

Also, I'd definitely recommend starting with Erlang, because the Programming Erlang book made for a very easy introduction to functional programming.

Re:Suggested reading. (0)

Anonymous Coward | more than 5 years ago | (#26009675)

Did you mean: me for the working programmer?

NOOOOO!

FP not the problem (1)

Todd Knarr (15451) | more than 5 years ago | (#26009649)

The big problem won't be moving to FP. The big problem will be that many common programs simply aren't massively parallel. If your job doesn't involve many parallel tasks, you can't take advantage of multiple cores in it.

Of course, you don't have to move to FP to take advantage of multiple cores. You just need to move to multi-threaded programming. I've been doing that for 20 years now. It takes a different approach to breaking down problems into implementable units, that's all. Certain algorithms are easier to express in a functional language, but that's more a matter of the algorithm than of multi-threaded vs. single-threaded. And multi-threaded is hard. It's not a matter of the language; it's a matter of having to remember that other things may have their fingers in your data, or of designing things so they don't. That's what gives most programmers fits, and I don't see FP making that any easier, because it happens before you've gotten to the code.

It will be solved in other ways (1)

Cthefuture (665326) | more than 5 years ago | (#26009661)

While I think every programmer worth their salt should know at least one functional language, I think the ALGOL-like (or C-like if you will) imperative languages will continue to dominate. I believe they work more like the human mind. More like the way we do elementary math. I think we will just get better compilers and libraries that will help with the parallelism.

Another thing that is interesting is that scripting languages and virtual machines really seem to be the future of programming. The problem is that currently very few of these languages even support threads, let alone anything more sophisticated.

"multicore paradigm shift" is nonsensical hype (0)

Anonymous Coward | more than 5 years ago | (#26009687)

man pthread_create(3) and quit pretending the sky is falling.

Which is more likely? (5, Insightful)

That's Unpossible! (722232) | more than 5 years ago | (#26009689)

A. Many programmers start writing or re-writing their code in functional programming languages.

or

B. Programmers continue writing to their platform of choice, e.g. .NET, Java, etc., and the guys writing the virtual machines do the heavy-lifting, making the VM execute more efficiently with multi-cores?

I'll go with B.

Apple is already proving this. Mac OS X Snow Leopard will have a lot of this built-in. Read about "Grand Central."

Sounds like BYTE magazine in 1985 (4, Interesting)

Stuntmonkey (557875) | more than 5 years ago | (#26009697)

Look at the table of contents of this BYTE magazine from 1985 [devili.iki.fi] . In a nutshell it said the same thing as this article: Functional languages are the great hope for solving the parallel programming problem. Only then the languages were different: Hope, Linda, and Prolog were among them.

My response back then was to get excited about FP. My response now is: Where is the proof? Can anyone name a single instance where a functional paradigm has yielded the best measured performance on a parallel computing problem? In other words, take the best functional programmers in the world, and pair them up with the best tools in existence. Can they actually create something superior, on any problem running on any hardware? This is a very low bar, but until it's demonstrated FP will be confined mostly to the lab.

IMHO the path forward is to treat parallel programming like just another optimization. As we know, the vast majority of your code doesn't need to run fast, and you get most of the performance benefit by optimizing small bits of code that really matter. I suspect the same thing will happen with parallel programming: In a given application only a few areas will benefit much from parallelism, and these tasks will probably be very similar across applications. Graphics rendering, large matrix math, video encoding/decoding, and speech recognition would be examples. People will treat these as special cases, and either develop special-purpose hardware (e.g., GPUs), or libraries that encapsulate the nitty-gritty details. The interesting question to me is what is the best runtime model to support this.

Re:Sounds like BYTE magazine in 1985 (1, Informative)

Shados (741919) | more than 5 years ago | (#26009837)

Functional programming isn't faster in a multi-core environment per se. What makes it better for multi-core programming is that with functional programming, it is easier for a non-human entity (read: the compiler) to infer your -intent- from your code. Once the compiler, and more so the runtime, know your intent, they can do the heavy lifting to make your code work in parallel, which is easier than the alternative (manage and spawn threads on your own, the efficiency of which depends on the hardware, so you'd always have to test how many cores are present, how many are available under which load, etc... no fun).

So by moving some stuff over to functional programming, we can work on compilers and frameworks/runtimes that will take said code and do what's best on the given platform to make it go fast -- which, while possible, is much, much more difficult with procedural code.

An example off hand is Microsoft's Parallel LINQ. LINQ is just fancy syntactic sugar over functional programming, and it lets you turn any LINQ code into parallel code by adding basically one method, adjusting itself to current system load and number of cores. mylist.Select( o => DoSomething(o)) simply becomes mylist.AsParallel().Select( o => DoSomething(o)), and poof, you're now calling DoSomething on every element of mylist, in parallel, spreading the load as well as possible.

Now this is a simple example, and there IS equivalent syntactic sugar to turn a normal for loop into parallel code... but with functional programming, you can push it further.

That's all there is to it.

Re:Sounds like BYTE magazine in 1985 (2, Interesting)

gomoX (618462) | more than 5 years ago | (#26009877)

Functional programming is not a buzzword that is supposed to be better at parallelism. When coding in a stateless fashion (which is what FP is all about), function reduction can be split transparently across many computers. There are no locks, no funny semaphores and mutexes, no deadlocks, no nothing. It just works, because of its uncoupled nature of "no side effects, ever".

There is one kind of early optimization that is not premature: architectural optimization. If you design your whole system to be synchronous, you trade off scalability for simplicity. If you design your whole system around the imperative paradigm, there will be a significant amount of work involved in making things work in parallel environments. There is no amount of later optimization that will fix this kind of architectural issue.

Functional design is an architectural decision.

example (4, Interesting)

bcrowell (177657) | more than 5 years ago | (#26009721)

As an example of the learning curve, I wanted to learn a little OCaml. I played around with this [inria.fr] insertion sort example. I used it to sort a list of 10,000 integers, and it took 10 seconds, versus <1 second in C with linked lists. Not too horrible. But changing it to 100,000 integers made it die with a stack overflow, so I'm guessing that its memory use goes like n^2. However, it's not at all obvious to me from looking at the code that this would be the case. I think if I wanted to do a lot of OCaml programming I'd have to develop "FP Eye for the Straight Guy." Probably if you wanted to make it perform better on big arrays you'd want to make it tail-recursive, but it's not totally obvious to me from the code that it's *not* tail-recursive; although the recursive call isn't the very last line of code in the function, it is the very last thing in its clause...?
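(For what it's worth: a call like `y :: insert x ys` -- the usual shape of insertion sort's helper -- is not a tail call, because the cons still has work to do after the recursive call returns. So each element costs a stack frame even though the call is the last expression in its clause, and that linear stack growth is what blows up on large inputs.)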

I know of at least one well known OSS project in Haskell, written by a very smart guy, that is really struggling with performance issues. I wonder whether bad performance is to FP as null-pointer bugs are to C. Sure, a sufficiently skilled programmer should theoretically never write code that will dereference a null pointer, but nevertheless my Ubuntu system needs a dozen security patches every month, many of which are due to null pointers, buffer overflows, etc.

From experience (1)

Neotrantor (597070) | more than 5 years ago | (#26009737)

You just have to tackle a few interesting projects in FP to start to really get it. I've been doing FP for 3 years or so. In college we had Lisp and ML, but I opted to do a lot of Haskell stuff.

Haskell, Erlang, and OCaml are all very similar and have growing user bases. I'm biased towards Haskell.

If you really want to know FP, buy a book or read a lot of docs, and go idle in any of the major FP language channels on irc.freenode.net. Just like learning imperative languages, hands-on is best.

I recommend #haskell

Functional Perl is a good way to start (1)

filmmaker (850359) | more than 5 years ago | (#26009751)

I read a book not too long ago called Higher Order Perl, which highlights how natural functional programming in Perl is. That's right. Perl has pretty nice syntactical support for functional programming, too. The object-oriented stuff is bolted on, clumsily, in my opinion. But functional Perl is just as natural as imperative Perl. I've been writing Perl for eight years, although much less in the last few years, but this familiarity makes the transition from imperative (or what most programmers actually do, which is combine imperative and OOP styles, esp. in languages where the OOP part is bolted on, like Perl and PHP) to functional pretty simple. I had meant to use functional Perl as a stepping stone to OCaml and Haskell. But now I'm having too much fun with functional Perl...there's just no way this can end well... =D

SemiStupidQuestion. (1)

sumdumass (711423) | more than 5 years ago | (#26009781)

This is probably a stupid question but,

Shouldn't it be possible for the operating system to provide a multicore framework layer that would allow normal single-core algorithms and applications to run in a quasi-virtual mode, splitting the work to take advantage of parallel processing while reporting it back as if it were a single processor?

I mean, if it could be done at a low enough level -- and I believe most operating systems provide that level by taking over processing control from the BIOS -- it should be possible to create a sort of layer that takes advantage of the multiple cores directly, allowing old-school programs to function as normal while seeing the advantage of multiple cores. I'm sure it would be slower than a true multi-threaded application, but would the overhead make it worse than a regular application that can only take care of one processor?

I probably used quite a few terms wrong in that; that's why I'm asking -- I don't pretend to be an expert here. Anyway, I was thinking that if it were a framework, then small alterations could be made to implement the ability, which could also limit exactly how differently an application could run and thereby eliminate some of the problems in the process. My guess is the problem would be getting information back before the subsequent instruction had returned, which a modified scheduler could help with. But then again, I don't know.

take it from a computational physicist (0)

Anonymous Coward | more than 5 years ago | (#26009821)

Not all tasks can be made parallel. Markov-chain Monte Carlo simulations -- or any task where each step requires knowing the outcome of the previous one -- cannot be made parallel.

"Hard" is not the problem (1)

unity100 (970058) | more than 5 years ago | (#26009847)

In a lot of industries, many people are doing very hard jobs and making the world go around. That's not the problem.

The problem is feasibility. Up to this point, from what it seems, mainstream computer users and even the non-niche I.T. world haven't needed the multi-core performance increase. What they do daily already runs on existing stuff.

And if multi-core programming takes longer -- and time is the main cost in programming and development projects -- it's a no-go. Huge time, small efficiency gain that few people care about -> infeasible.

Check out games. It's one of the most cutting-edge areas of mainstream computer usage, and what? Crysis uses 2 cores and is still the most performance-hungry game out there. How many 4-core games are there? None?

There.

This may work for supercomputing -- and then again, only for a niche part of it -- but I don't see it for mainstream computer usage or I.T. industry work.

I wish the Itanic hadn't been such a horrible chip (1)

Omnifarious (11933) | more than 5 years ago | (#26009913)

Because it had a good solution to this problem. Basically it presented the compiler with the opportunity to launch several parallel operations.

That's where this problem should be solved. Convincing all the software people to change how they do things to make up for the inability of the hardware people to design good hardware isn't the way to solve the problem.

The Itanic (erm, some people might call it the Itanium) had an architecture that was ostensibly designed to fix this. But in reality it was a horrible failure, for a number of reasons that I think had more to do with Intel's hubris and its temporary failure, in the 2002-2006 timeframe, to realize that good marketing is no substitute for good engineering.

Though I also think that a reason for its failure is that radical new chip architectures basically require open source software to succeed, because otherwise you have to get too many proprietary software vendors on board to port their software. For example, the port to the x86_64 architecture was essentially complete about 2.5 years ago for Linux and still isn't really done for Windows.
