
Is Parallelism the New New Thing?

kdawson posted more than 6 years ago | from the still-working-on-the-old-new-thing dept.

Supercomputing

astwon sends us to a blog post by parallel computing pioneer Bill McColl speculating that, with the cooling of Web 2.0, parallelism may be a hot new area for entrepreneurs and investors. (Take with requisite salt grains as he is the founder of a Silicon Valley company in this area.) McColl suggests a few other upcoming "new things," such as SaaS as an appliance and massive memory systems. Worth a read.


174 comments


About time (4, Funny)

olddotter (638430) | more than 6 years ago | (#22893390)

When I was in graduate school in the mid '90s, I thought parallelism would be the next big thing. Needless to say, I was a bit early on that prediction. Finally, maybe those graduate classes and grant work will pay off. :-)

Actually (-1, Troll)

geekoid (135745) | more than 6 years ago | (#22893434)

That was a good prediction, but AMD decided to be a competitor and it's turned into a consumer numbers war, putting a lot of technology on the back burner.

Fortunately, we can hear AMD's death rattle.

Yes, I dislike AMD, for many reasons... pricks.

Re:Actually (3, Funny)

garett_spencley (193892) | more than 6 years ago | (#22893496)

Paul?

Paul Otellini?

I didn't know you posted on Slashdot!

So what's up, man? Can I buy you a beer?

Re:About time (2, Funny)

UbuntuLinux (1242150) | more than 6 years ago | (#22893554)

As a user of Linux, I have to say that parallelism is the 'old thing', as Linux has supported parallel operations for over a decade. Compare this to closed source, proprietary operating systems, such as Windows, where this sort of thing is relatively new.

I remember back in the late '90s, writing some kernel modules for Linux, I was astounded by how easy it was. Even though my CPU at the time only had a single core, the power of Linux allowed it to execute more than one code stream at a time. When attempting the same thing on a closed source, proprietary operating system, things were much more difficult. This is yet another reason for people to support open source software - it is through the contributions of the general public that Linux has grown so vastly superior to every other mainstream operating system in this regard, and just about every other.

Microsoft are literally shitting themselves about Linux, and articles like this really drive it home.

Re:About time (5, Informative)

pleappleappleap (1182301) | more than 6 years ago | (#22894174)

As a user of Linux, I have to say that parallelism is the 'old thing', as Linux has supported parallel operations for over a decade. Compare this to closed source, proprietary operating systems, such as Windows, where this sort of thing is relatively new.

Windows is not the only closed-source proprietary operating system out there. AIX and Solaris have supported parallel functions for a number of years, and various IBM mainframe operating systems have had those functions since the '70s. There are architectures which had it in the '60s.

Proprietary closed-source operating systems had these functions FIRST before Linux was a twinkle in Linus Torvalds's shorts.

Re:About time (4, Funny)

Gerzel (240421) | more than 6 years ago | (#22895162)

"Proprietary closed-source operating systems had these functions FIRST before Linux was a twinkle in Linus Torvalds's shorts."

Do not mock the shorts of Torvalds, for they are mighty indeed!

Re:About time (1, Interesting)

Anonymous Coward | more than 6 years ago | (#22894202)

>> Compare this to closed source, proprietary operating systems, such as Windows, where this sort of thing is relatively new.

Windows NT has supported multiple processors since 1993.

>> When attempting the same thing on a closed source, proprietary operating system, things were much more difficult.

Multithreading historically had better support on closed OSes like Windows NT and BeOS than in the first releases of the Linux kernel (where you were stuck with a user-mode threading scheme).

Rule #1: Windows/Microsoft are shitty enough on their own. No need to invent stuff just to make them appear worse than they are.

Re:About time (0)

Anonymous Coward | more than 6 years ago | (#22894264)

An excellent troll - people who understand technology will be annoyed, and Linux fanboys will rabidly defend your comment. Well done, sir!

Multithreading Is to Blame (2, Insightful)

MOBE2001 (263700) | more than 6 years ago | (#22894378)

Needless to say I was a bit early on that prediction.

The reason is that all academic researchers jumped on the multithreading bandwagon as the basis for parallel computing. Unfortunately for them, they could never get it to work. They've been at it for over twenty years and they still can't make it work. Twenty years is an eternity in this business. You would think that after all this time, it would have occurred to at least one of those smart scientists that maybe, just maybe, multithreading is not the answer to parallel computing. Nope: they're still trying to fit that square peg into the round hole.

Both AMD and Intel have invested heavily in the multithreading model. Big mistake. A multiple-billion-dollar mistake. To find out why threads are not part of the future of parallel computing, read Nightmare on Core Street [blogspot.com]. It's time for the computer industry to wake up and realize that the analytical engine is long gone. This is the 21st century. It's time to move on and change to a new model of computing.

Re:Multithreading Is to Blame (2, Informative)

Anonymous Coward | more than 6 years ago | (#22894590)

Unfortunately for them, they could never get it to work.

Hyperbole much? Parallel systems such as MPI have been the staple of high-performance computing since the mid '90s, and there are plenty of developers (including myself) who can write multi-threaded code without breaking into a sweat, and get it right.

At what point did parallel and concurrent programming "fail"? I really must have missed that memo.

Re:Multithreading Is to Blame (1, Informative)

MOBE2001 (263700) | more than 6 years ago | (#22894700)

Parallel systems such as MPI have been the staple of high-performance computing since the mid '90s, and there are plenty of developers (including myself) who can write multi-threaded code without breaking into a sweat, and get it right.

In that case, you should hurry and tell Microsoft and Intel to refrain from giving that $20 million they want to give to UC Berkeley and the University of Illinois at Urbana-Champaign to find a solution to the parallel programming problem. According to you, Microsoft, Intel, AMD and all the others are wasting hundreds of millions in research labs around the world trying to make it easy to build apps with threads. After all, you already found the solution, right? And you found an easy way to build threaded programs, right?

Sure.

Re:Multithreading Is to Blame (0)

Anonymous Coward | more than 6 years ago | (#22894958)

You seem confused. I simply said I can do it and I know other developers who can. Just because the majority of developers can't doesn't mean it has "failed": it just means there are a lot of mediocre developers who need new tools to manage the job, which is where that $20 million is being spent. AMD and Intel and others want to make it easier for you. Personally, I'm quite at home writing multithreaded kernel-level C, but clearly not everyone is.

Re:Multithreading Is to Blame (0, Troll)

MOBE2001 (263700) | more than 6 years ago | (#22895056)

You seem confused. I simply said I can do it and I know other developers who can. Just because the majority of developers can't doesn't mean it has "failed": it just means there are a lot of mediocre developers who need new tools to manage the job, which is where that $20 million is being spent. AMD and Intel and others want to make it easier for you. Personally, I'm quite at home writing multithreaded kernel-level C, but clearly not everyone is.

IOW, you're smart and everybody else is an idiot. You sound like a pompous ass to me. So much for your solution to the parallel programming problem.

Re:About time (1)

drooling-dog (189103) | more than 6 years ago | (#22894694)

When I was in graduate school in the mid '90s, I thought parallelism would be the next big thing. Needless to say, I was a bit early on that prediction.
I had a company in the early '90s dedicated to heterogeneous parallel computing in what we now call genomics and proteomics. Despite the ongoing boom in DNA sequencing and analysis, it was hard at the time to interest either end-users or (especially) investors in distributed processing. Most worried that it was overkill, or that the computations would somehow be out of their control. How times change...

Re:About time (1)

rbanffy (584143) | more than 6 years ago | (#22894744)

Parallelism is the new new thing since at least around the ILLIAC IV...

Let me know when I can buy (2, Insightful)

geekoid (135745) | more than 6 years ago | (#22893412)

32GB chips and put them in 4 slots on my 64-bit PC before talking about 'massive memory'.

Re:Let me know when I can buy (1)

asliarun (636603) | more than 6 years ago | (#22893752)

32GB chips and put them in 4 slots on my 64-bit PC before talking about 'massive memory'.
Can't you already do that with a server motherboard? Even if you're looking for a PC, Skulltrail supports gobs of RAM and 8 cores.
On the server side, Intel is coming out (soon) with Dunnington [wikipedia.org], which will be a 6-core single-die CPU with a monster cache... AND you can put 4 of them on a motherboard, giving you a 24-core machine. Then you can also get custom workstations (Tyan?) that support multiple motherboards in a single box with a high-speed interconnect. This is only going to get better when CSI/QPI [wikipedia.org] gets released later this year, and in a couple of years, Larrabee [anandtech.com] (a large number of simple cores).

In my opinion, the benefits of parallelism can be more easily extracted at the VM or OS level rather than at the application level. If you look at how programming is evolving, it is clear that low-level implementation details are increasingly being made transparent to the programmer. Heck, they're even abstracting out your CPU and application processes and instead using platform virtual machines (JVM/CLR) and application domains. After all this, it just doesn't make sense to have programmers start coding/optimizing for multiple cores. You might as well ask them to write in assembly (however studly it may be).

Re:Let me know when I can buy (1)

denisfalqueto (987529) | more than 6 years ago | (#22894108)

I can't agree with you about virtual machines easing the making of parallel programs. There is nothing they can do to make a multi-threaded program correct if the programmer is sloppy. When you talk about parallelization in applications, you are almost certainly talking about threads, and the inherent problem with them is the sharing of state (data) between two or more threads. This requires synchronization, and the reasoning involved is not straightforward. And it rests solely on the shoulders of the programmer, not the runtime or the language.
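
(A minimal Java sketch of that burden, for illustration: nothing in the JVM stops two threads from corrupting a shared counter; the programmer has to add the synchronization by hand. The class and the iteration counts here are made up.)

public class SharedCounter {
    private long count = 0;

    // Without the synchronized keyword, the two threads interleave
    // their read-modify-write cycles and silently lose updates.
    public synchronized void increment() { count++; }
    public synchronized long get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        SharedCounter c = new SharedCounter();
        Runnable work = () -> { for (int i = 0; i < 1_000_000; i++) c.increment(); };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // 2000000 only because the programmer synchronized
    }
}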

Re:Let me know when I can buy (2, Insightful)

Sancho (17056) | more than 6 years ago | (#22894110)

I agree that parallelism is more easily (and transparently) used at the OS level, but that doesn't mean we don't need to start moving that way in applications too. As we move towards a point where extracting more speed out of the silicon becomes harder and harder, we're going to see more and more need for parallelism. For a lot of applications it's going to be irrelevant, but for anything at all CPU-intensive (working with video, games, etc.) it's eventually going to be the way of the future.

Re:Let me know when I can buy (1)

billcopc (196330) | more than 6 years ago | (#22894020)

Truth.

Actually, it would be so much easier if we just had more than 4 slots. I don't want to be stuck in "server land" with its smorgasbord of crap boards, just for the privilege of running a high-memory PC.

Re:Let me know when I can buy (1)

pleappleappleap (1182301) | more than 6 years ago | (#22894214)

Get more slots.

Seriously.

I've had a machine that could take that kind of memory size for almost ten years.

TRIPS (3, Interesting)

Bombula (670389) | more than 6 years ago | (#22893446)

Someone in another recent thread mentioned the TRIPS architecture [wikipedia.org]. It's quite interesting reading.

Re:TRIPS (2, Interesting)

The Living Fractal (162153) | more than 6 years ago | (#22893604)

TRIPS and EDGE are interesting approaches to parallel processing. The thing is, data interdependency is going to make execution speed remain the trump card for computing. That is to say: you cannot parallelize an algorithm that requires step-by-step completion. EDGE simply identifies, at the compilation level, what parts of a program can be parallelized, based on interdependencies, and then it creates the program based on this in the form of 'hyperblocks'.

If each subsequent step is dependent on the previous step, then this by nature makes the program impossible to parallelize. So what we're most likely going to find out is that we'll hit the 'wall' of parallelization in programs in a relatively short period and be back to the familiar place: increasing the clock speed.

Re:TRIPS (0)

olddotter (638430) | more than 6 years ago | (#22893896)

You are making that assumption by only considering "small window" compile-time parallelism. But if we train developers correctly (I know, I know: why start doing that again, we haven't for decades) to look at software design at a high level and design parallelism into the software architecture from the beginning, it should be possible to reach better performance on many different types of applications.

M$ made it easy to use low-skilled developers to write (barely) passable software. But I think if parallelism takes off, it will separate the "men from the boys" in the software development space. Given the 4-, 6-, and 8-core processors around today, I see no reason that parallelism will not become important, at least in areas where speed matters.

Re:TRIPS (3, Interesting)

The Living Fractal (162153) | more than 6 years ago | (#22894252)

I certainly agree with your post from a system-level perspective of abstraction, i.e. if we design it into the system at every level, from hardware up through the layers of abstraction to software and the OS, then we will see the largest possible gain from parallelization. Computers will be able to utilize hundreds, thousands, millions or more 'micro-cores' to perform complex tasks faster than ever before.

I guess my point is that I think we'll actually create the basic, expandable model fairly quickly. Would you agree that today's supercomputing, which utilizes parallelization on a scale far beyond desktop computing, has successfully harnessed parallelization? I hope you would. If so, then the next step is miniaturization of what supercomputing is already doing. That step is just now taking place. It's not something that will happen overnight, but I do think that after we've fully integrated parallelization into everyday computing we'll be back to the same old game again: that of looking for ever better ways to increase FLOPS through transistor/switch speed.

My basic thoughts on this are that it is, in theory, easier to model the perfect parallelization of a program, and the optimum number of cores for a specific type of computer, than it is to model the fastest possible clock speed of a CPU. Because of this, we'll probably see diminishing returns in the advancement of parallelization sooner than in CPU design and clock speed.

Re:TRIPS (1)

Howlett (102725) | more than 6 years ago | (#22894382)

I have recently been reading a series of articles here [blogspot.com] and here [rebelscience.org] regarding the multi-core parallel programming problem. The guy seems like he could be a little out there on the edge, but his concept for the COSA project and fine-grained parallelism seems really attractive to me. I have been thinking about trying to implement a COSA virtual machine to try it out.

I have also been thinking about trying to implement the COSA hardware in an FPGA, but that seems like a much harder project.

1% of programmers (4, Insightful)

LotsOfPhil (982823) | more than 6 years ago | (#22893454)

Only around 1% of the world's software developers have any experience of parallel programming.

This seems far, far too low. Admittedly I work in a place that does "parallel programming," but it still seems awfully low.

Re:1% of programmers (1)

LotsOfPhil (982823) | more than 6 years ago | (#22893492)

Sigh, replying to myself. The source for the 1% figure is a blog by someone at Intel:

A more pressing near-term problem is the relative lack of experienced parallel programmers in industry. The estimates vary depending on whom you ask, but a reasonable estimate is that 1% (yes, that's 1 in 100) of programmers, at best, have "some" experience with parallel programming. The number that are experienced enough to identify performance pitfalls across a range of parallel programming styles is probably an order of magnitude or two fewer.

Still open to debate.

Re:1% of programmers (1)

PhrostyMcByte (589271) | more than 6 years ago | (#22893580)

Perhaps they meant it as in "specifically designing to scale up", as opposed to a developer who just uses a thread to do some background processing.

One thing that's always saddened me is that most embarrassingly parallel problems, like web and database development, are still in the dark ages with this. They have so much potential, but almost nobody seems to care. To date, the only frameworks I know of that allow fully asynchronous, efficient processing of requests and database queries (so that you don't need to spawn a new thread/process for every request) are ASP.NET, WCF (SOAPish stuff in .NET), and some of .NET's database APIs. Does anyone know of any non-MS web technologies that allow this?

Re:1% of programmers (2, Interesting)

AKAImBatman (238306) | more than 6 years ago | (#22893924)

To date, the only frameworks I know of that allow fully asynchronous, efficient processing of requests and database queries (so that you don't need to spawn a new thread/process for every request) are ASP.NET, WCF (SOAPish stuff in .NET), and some of .NET's database APIs.

How do you see that as different from what Java J2EE does? Most J2EE servers these days use pools of threads to handle requests. These threads are then utilized based on poll/select APIs so that one thread can handle many requests, depending on the availability of data. Database connections are similarly pooled and reused, though any optimization at the blocking level would need to be done by the JDBC driver.
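
(For readers who haven't seen it, a rough sketch of the pooling idea using plain java.util.concurrent rather than any particular app server; the handle() method is a hypothetical stand-in for real request handling.)

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class RequestPool {
    public static void main(String[] args) {
        // A fixed pool: 8 worker threads shared across all requests,
        // instead of one fresh thread per connection.
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 100; i++) {
            final int requestId = i;
            pool.submit(() -> handle(requestId)); // queued until a worker frees up
        }
        pool.shutdown();
    }

    // Hypothetical stand-in for real request handling.
    static void handle(int id) {
        System.out.println("request " + id + " on " + Thread.currentThread().getName());
    }
}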

Re:1% of programmers (1)

pleappleappleap (1182301) | more than 6 years ago | (#22894230)

Oracle and DB2 have done it for quite a long time. It wouldn't surprise me if PostgreSQL could do it too.

Re:1% of programmers (1)

PhrostyMcByte (589271) | more than 6 years ago | (#22894316)

How do you see that as different from what Java J2EE does? Most J2EE servers these days use pools of threads to handle requests. These threads are then utilized based on poll/select APIs so that one thread can handle many requests depending on the availability of data. Database connections are similarly pooled and reused, though any optimization on the blocking level would need to be done by the JDBC driver.

I've never used Java/J2EE before so I couldn't say.

.NET uses a per-operation callback to notify the app of I/O completion - it doesn't expose whatever internal mechanism it uses to achieve async, so the server can choose the most efficient method (IOCP, epoll, kqueue, how many threads, etc.). If you use SQL Server, DB queries can be done async too. If you do it right, in many cases you can keep from ever blocking on I/O.

Re:1% of programmers (2, Interesting)

Shados (741919) | more than 6 years ago | (#22893598)

I'd be sceptical of the source of the information too (Intel's blog, as you posted), but that doesn't seem that low to me...

The entire hierarchy system in the IT field has to deal with the painfully obvious fact that less than 1% of programmers know what they're doing. That is, in an ideal scenario, everyone would know what they're doing, and you'd have a FEW hardcore computer scientists to handle the nutso theoretical scenarios (most parallel programming, for example, can be done with only basic CS knowledge... like, let's say, trimming all of the strings in an array... everyday stuff). But that's not how it works: in the current world, you usually have a few dozen code monkeys with some understanding of the theory (but no clue how to use it), and a few software architects who call the shots (note I'm talking about software architects... those who work on software design, not system architects).

In an ideal world where programmers know how to do the -basics- of their job, software architects would be an obsolete job, yet they're really not.

Now, that's a side note, but my point is that the typical IT shop hierarchy is MADE with "all programmers suck" in mind... so it really has to be a problem, and thus the 1% number isn't far-fetched.

As for anecdotal evidence... I'm a .NET and Java dev, and those languages, while they could be better, make parallel programming relatively simple (at least, it is definitely within the grasp of a code monkey). Yet I have -never ever ever- seen any async/parallel code in anything but my own code, everywhere I've worked (and I do consulting, so I've worked in a lot of places in a short amount of time).

The closest to parallel I've seen was people using a queue dispatcher (MSMQ) and having multiple triggers watching the same queue, thus resulting in some kind of parallel execution... that's it, though.

So, 1% from my point of view actually seems pretty -high-, as sad as it is.

Re:1% of programmers (2, Insightful)

postbigbang (761081) | more than 6 years ago | (#22893712)

Your number is a bit insulting.

Consider that parallel computing means keeping extra monolithic cores busy. There are a number of programmers who need the discipline to know how to spawn, use, and tear down threads to keep them busy. But there are a helluva lot of them that plainly don't need to know. What we lack are reasonable compilers that allow the hardware layer to be sufficiently abstracted so that code can adapt to hardware infrastructure appropriately. If that doesn't happen, then code becomes machine-specific rather than task-specific/fulfilling. If an app is written, then it should be able to take advantage of parallelism, and the plumbing should take care of the substrate, be that substrate a Core Duo, or 64 cores and more, perhaps on separate systems with the obvious latencies that distance might inject.

Those layers are only nominally addressed in operating systems, which should be the core arbiter of how parallelism is manifested to an application instance. It's up to the kernel makers to figure out how to take advantage and make that advantage useful to application writers -- who should be at least knowledgeable about how to twig those features in the operating system. But app writers shouldn't have to know all of the underlying differences to write flexible code.

Re:1% of programmers (2, Insightful)

AKAImBatman (238306) | more than 6 years ago | (#22893864)

In an ideal world where programmers know how to do the -basics- of their job, software architects would be an obsolete job

Not really. The structure of a large system still has to be defined by someone. The key difference is that the architect would get a lot more feedback from his team, and could possibly even farm out high-level pieces of the design to be further architected by other developers.

I'm a .NET and Java dev, and those languages, while they could be better, make parallel programming relatively simple (at least, it is definitely within the grasp of a code monkey). Yet I have -never ever ever- seen any async/parallel code in anything but my own code, everywhere I've worked (and I do consulting, so I've worked in a lot of places in a short amount of time).

That's because parallel code is the job of the J2EE or IIS server. Programmers develop modules that are loaded by the app server and run asynchronously. Granted, things are divided along the lines of one connection == one thread (though some app servers use poll/select for more granular control), but that's good enough to where systems like Sun's T1 "Niagara" processor can churn through a web load WAY faster than the single-threaded monstrosities we've been using to date.

And before you try to tell me that "application servers don't count", ask any parallel computing researcher if he thinks the threaded programming model is a good idea. You'll almost always get a resounding, "NO". Most of the parallel computing folks I've spoken with rave on and on about lambda and the inherent parallel nature of lambda functions.

The thing is, once parallelism takes off (if one can reasonably argue that it hasn't already), coding will be more about creating parallelizable modules than about creating threads. The theory will be that the platform running the modules knows more about its resources and how to balance them than the code does. So by exposing areas where parallelism can occur, a language exposes the opportunity to split execution across many processors.

Let's take an ideal example: say you create a raytracing function to cast a ray. Well, that function is inherently parallel in nature. All you need is to map a list of rays to the function and let the platform work out how to balance that function across many processing units.
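
(To make that concrete, a hedged Java sketch: Ray, Color and castRay are hypothetical stand-ins, and the parallel stream is just one way a platform could balance the map across cores.)

import java.util.List;
import java.util.stream.Collectors;

public class ParallelTracer {
    record Ray(double x, double y) {}   // hypothetical ray description
    record Color(int rgb) {}            // hypothetical result type

    // A pure function of its input: no shared state, so every ray
    // can be cast on a different core with no coordination at all.
    static Color castRay(Ray r) {
        return new Color((int) (r.x * r.y) & 0xFFFFFF);
    }

    static List<Color> render(List<Ray> rays) {
        // The runtime, not this code, decides how to split the work.
        return rays.parallelStream()
                   .map(ParallelTracer::castRay)
                   .collect(Collectors.toList());
    }
}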

A slightly less ideal (yet still parallelizable) example is collision detection in a video game. Collision detection is a matrix of objects that can interact. In general, that means that you want to test two objects against each other to see if they have collided or not. If they have, trigger an event to update the state of those objects. (e.g. explode, reverse direction, etc.) Once again, you can have a collision function that takes two items and works out if they have collided or not. A very parallelizable situation. Even firing any resulting events can be done in parallel, as long as the platform is careful not to dispatch multiple events on the same object in parallel. (That creates an out-of-order code issue which can be a bit tricky to resolve.)

Long story short: Parallelism is hard; expect the solution to be as invisible as possible.

Re:1% of programmers (1)

Shados (741919) | more than 6 years ago | (#22894004)

Not really. The structure of a large system still has to be defined by someone. The key difference is that the architect would get a lot more feedback from his team, and could possibly even farm out high-level pieces of the design to be further architected by other developers.


Note I said software architect, and I specifically stated I was not talking about system architects. Big difference between the two.

And yes, application servers DO count. But during one connection, there's a LOT that can be split... especially in these days of clouds and SOA. All the waiting on web services, all the waiting on databases... it's all time wasted. It's fairly simple: start a worker thread with a callback. Bing bang poof. Sure, it's not as ideal as parallelizing a function call, but it is simple, within the grasp of code monkeys, it's clean, and it can make a 1-second request become a 0.3-second request.
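
(A sketch of that worker-thread-with-a-callback pattern in Java, using CompletableFuture, which postdates this discussion; the web-service and database calls are hypothetical placeholders.)

import java.util.concurrent.CompletableFuture;

public class OverlapWaits {
    public static void main(String[] args) {
        // Kick off both waits at once instead of serially.
        CompletableFuture<String> svc = CompletableFuture.supplyAsync(OverlapWaits::callWebService);
        CompletableFuture<String> db  = CompletableFuture.supplyAsync(OverlapWaits::queryDatabase);

        // The callback fires when both independent results are ready.
        svc.thenCombine(db, (a, b) -> a + " + " + b)
           .thenAccept(System.out::println)
           .join();
    }

    // Hypothetical stand-ins for the real remote calls.
    static String callWebService() { sleep(300); return "service result"; }
    static String queryDatabase()  { sleep(300); return "db rows"; }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}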

My point was: parallelism in complex, scientific scenarios is hard. Parallelism in simple, everyday business applications for simple tasks is simple, and it's where you'll get one of the biggest bangs for your buck, and programmers should at least know the basics of it to handle these basic scenarios. We're not talking about micro-optimisations here; we're talking a few lines of code to gain 20-30% in a standard everyday business app, just by doing work during idle time and splitting off obviously independent processes.

Re:1% of programmers (1)

AKAImBatman (238306) | more than 6 years ago | (#22894192)

Note I said software architect, and I specifically stated I was not talking about system architects. Big difference between the two.

Not really. A software architect represents a division of labor between the guys who build the hardware and the guys who build the software. The software architect obviously deals with the software aspect of the system (and when I say system, I mean a complete, large-scale application) and is thus responsible for how the code will be organized and constructed. You simply can't do away with that leadership aspect.

But during one connection, there's a LOT that can be split... especially in these days of clouds and SOA. All the waiting on web services, all the waiting on databases... it's all time wasted.

Modern systems tend to work on poll/select rather than outright blocking the thread. If the thread is blocked, the system keeps itself busy doing other things.

Start a worker thread with a callback. Bing bang poof.

Welcome to AJAX 101. Obviously, there are inherent difficulties in running event handlers in an HTTP Request/Response. That's why we're pushing some of the parallelism to the client. By making many smaller calls, the application can provide a fully parallel experience to many clients, keeping both the client and the server busy performing useful work.

Re:1% of programmers (1)

BotnetZombie (1174935) | more than 6 years ago | (#22893708)

You don't even need to work in a place that does parallel programming to find that number low. It may also depend a bit on the programming languages used. I don't know many people that are good at C++ multithreading, while the majority of Java devs I know have at least some experience with it. That's not to say all of them are good, but many are - at least a much higher portion than 1%.

Re:1% of programmers (4, Insightful)

Otter (3800) | more than 6 years ago | (#22893754)

Admittedly I work in a place that does "parallel programming," but it still seems awfully low.

I think your experience is wildly skewed toward the high end of programming skill. The percentage of working programmers who can't iterate over an array is probably in the 15-20% range, even without getting into whether "web programmers" are included in that statistic. I'd be astonished if the number with parallel experience is significantly above 1%.

Re:1% of programmers (5, Insightful)

Anne Thwacks (531696) | more than 6 years ago | (#22893942)

I'd believe 1% as having "some experience". Some experience is what you put on your CV when you know what the buzzword means.

If you ask how many can "regularly achieve significant performance through use of multiple threads", then 0.1% is far too high. If you mean "can exchange data between a userland thread and an ISR in compliance with the needs of reliable parallel execution", then it's a safe bet that less than 0.1% are mentally up to the challenge. /. readers are not typical of the programming community. These days, people who can drag-and-drop call themselves programmers. People who can spell "l337" are one!

Re:1% of programmers (1)

Otter (3800) | more than 6 years ago | (#22895108)

If you ask how many can "regularly achieve significant performance through use of multiple threads", then 0.1% is far too high.

I was thinking more along the lines of "learned something about parallelism in a CS class and remember having done so, although not necessarily what it was".

Re:1% of programmers (0)

Anonymous Coward | more than 6 years ago | (#22895098)

I'm not a programmer myself, but I know how to do this very well. It seems ridiculous.

Re:1% of programmers (1)

gladish (982899) | more than 6 years ago | (#22894654)

Define "parrallel programing". Behind "realtime", I think "parallel programming" is the most overloaded term in the computing industry. For me, I consider "parallel progamming", pvm, mpi and other similar technologies. I think others will consider plaing old multi-threaded programming parallel programming. When these guys talk about the new new thing, I'm not sure if they're talking about the HPC market sort of opening up to the consumer (albeit packed up nicely) or are they just suggesting that new apps use the multi-core processors better?

Performance is the feature, Parallelism the means (1)

tjstork (137384) | more than 6 years ago | (#22893474)

Parallelism is just a means to a business feature - performance. If clients want it, then there will be capital for it. If they don't, then it won't matter to them.

evolution, not revolution (5, Insightful)

nguy (1207026) | more than 6 years ago | (#22893528)

The guy has a "startup in stealth mode" in parallel computing. Of course he wants to generate buzz.

Decade after decade, people keep trying to sell silver bullets for parallel computing: the perfect language, the perfect network, the perfect os, etc. Nothing ever wins big. Instead, there is a diversity of solutions for a diversity of problems, and progress is slow but steady.

Re:evolution, not revolution (1)

SanityInAnarchy (655584) | more than 6 years ago | (#22893596)

It looks to me more like progress is completely stalled.

I mean, yes, there are all kinds of solutions. Most of them are completely unused, and we're back to threads and locks. Nothing's going to be perfect, but I'll buy the "no silver bullet" when we actually have wide adoption of anything -- even multiple things -- other than threads and locks.

No (1, Insightful)

Anonymous Coward | more than 6 years ago | (#22893532)

Like most topics in computer science, it's a "New Old Thing".

Parallel computing pioneer likes Parallelism (3, Insightful)

Alzheimers (467217) | more than 6 years ago | (#22893536)

A guy who's made it his life's work to study parallel computing has come forth to say he thinks parallelism is the next big thing?

Shock! And Awe!

"Next hot thing" my hiney (4, Insightful)

Rosco P. Coltrane (209368) | more than 6 years ago | (#22893540)

Having been in the computer industry for too long, I reckon the "next hot thing" usually means the "latest fad" that many of the entrepreneurs involved hope will turn into the "next get-rich-quick scheme".

Because really, does anybody believe Web-Two-Oh was anything but the regular web's natural evolution with a fancy name tacked on?

Re:"Next hot thing" my hiney (2, Insightful)

SanityInAnarchy (655584) | more than 6 years ago | (#22893628)

Oh, now that's not entirely fair.

Web 2.0 was a single name for an amorphous collection of technologies and philosophies. It was even worse than AJAX.

Parallelism is a pretty simple, well-defined problem, and an old one. That doesn't mean it can't turn into a buzzword, but I'm not convinced Web 2.0 can be anything but a buzzword.

Re:"Next hot thing" my hiney (1)

zappepcs (820751) | more than 6 years ago | (#22893654)

Yep, the phrase "the next big thing in tech" is something uttered by people who are no longer allowed to work on Wall Street.

Here's a clue for you: "ultra-cheap computers" are the next big thing in tech, or haven't you heard about the impending financial crisis that is about to consume the world's economies? That's right, kiddies, no shiny new computers for your Christmas... just new ISOs from Linux.

Meh, can't blame him for trying to drum up business, I guess.

Re:"Next hot thing" my hiney (1)

samkass (174571) | more than 6 years ago | (#22893910)

I thought that multi-touch interfaces and embedded computing were the next big thing!

Seriously-- we've had enough computing power for average desktop tasks for a long time. Instead of putting 8 CPUs on a die and bottling up all the processing power on the desktop, put 8 CPUs in 8 separate domain-specific embedded devices sitting around you...

Re:"Next hot thing" my hiney (1)

zappepcs (820751) | more than 6 years ago | (#22894060)

Actually, I've already commented on this. By making a PC that supports maybe 8 plug-in systems-on-a-card (blade style, as was pointed out) and one main CPU to supervise the processors/boards, plus some RAID storage with digital storage on the cards, you can have the equivalent of 8 PCs running on your one desktop with one interface and near-zero bottlenecks. A real click-and-rip situation.

You should also be able to choose the number of processors you wish to have running, so others can power down when not in use. There are many advantages to this when you are trying to run multiple CPU-intensive applications at once. Even though a P2P download runs in the background, it's still taking up CPU cycles. It only needs, say, a 1GHz CPU to do its thing, and a 1GHz CPU is just fine for you to surf pr0n while it's downloading, etc...

By specializing some hardware in the CPU architecture beyond what we have today, there can be many benefits. With Ethernet in more common use, it will be able to tie them all together on several layers if need be. Shove all the video over internal Ethernet, let the supervisor CPU mangle it onto the screen, or have multiple GPUs and displays... whatever you like or want, or can afford to upgrade to. I think that is the next worthy big and powerful PC architecture.

Look at the hardware in a DVD player; it's not all that special. Why not support multimedia on a custom-built CPU board just for such things instead of trying to make generic hardware do EVERYTHING? Well, the list goes on, but by adding clever resources and eliminating duplication within your system, much more can be accomplished.

Re:"Next hot thing" my hiney (1)

mrchaotica (681592) | more than 6 years ago | (#22894950)

I thought that multi-touch interfaces... were the next big thing!

Multi-touch and parallelism are both the "next big thing," because multiple touches are touches in parallel!

; )

Re:"Next hot thing" my hiney (1)

esocid (946821) | more than 6 years ago | (#22893674)

It may be that, but it's too early to tell, since it seems like not many programmers can write parallel algorithms as opposed to their sequential counterparts. It could also be due to the lack of libraries and standards. It is lame that this guy is making it sound like a pyramid scheme, but then he has an investment in it. And I agree with that lame web 2.0 shit. Buzzwords don't mean jack.

Re:"Next hot thing" my hiney (1)

fpgaprogrammer (1086859) | more than 6 years ago | (#22893768)

there are some serious problems in parallel computing, the greatest of which is teaching people how to write good parallel programs. this isn't like building social networks or applications on top of social networks--that really is the latest-latest fad. anyone seeking to get rich quick off parallel computing ought to reconsider their field of choice. it's more like a get-frustrated-quick scheme.

More so now, but depends ... (3, Insightful)

Midnight Thunder (17205) | more than 6 years ago | (#22893620)

Now that we are seeing more and more in the way of multi-core CPUs and multi-CPU computers, I can definitely see parallelism becoming more important, for tasks that can be handled this way. You have to remember that in certain cases trying to parallelise a task can end up being less efficient, so what you parallelise will depend on the task at hand. Things like games, media applications and scientific applications are usually likely candidates, since they are either doing lots of different things at once or have tasks that can be split up into smaller units that don't depend on the outcome of the others. Server applications can benefit to a certain extent, depending on whether they are contending for the same resources or not (an FTP server accessing the disk, vs. a time server which does no file I/O).

One thing that should also be noted is that in certain cases you will need to accept increased memory usage, since you want to avoid tasks locking on resources that they don't really need to synchronise until the end of the work unit. In this case it may be cheaper to duplicate resources, do the work and then resynchronise at the end. Like everything, it depends on the size and duration of the work unit.
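
(A hedged Java sketch of that duplicate-then-merge idea: each worker fills its own private list with no locks, and the only synchronisation is the final merge. The work being done is made up.)

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class DuplicateThenMerge {
    public static void main(String[] args) throws Exception {
        int workers = 4;
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        List<Future<List<Integer>>> parts = new ArrayList<>();

        for (int w = 0; w < workers; w++) {
            final int id = w;
            // Each task gets its own private list: no locking while it works.
            parts.add(pool.submit(() -> {
                List<Integer> local = new ArrayList<>();
                for (int i = id; i < 1000; i += workers) local.add(i * i);
                return local;
            }));
        }

        // Resynchronise only once, at the end of the work unit.
        List<Integer> merged = new ArrayList<>();
        for (Future<List<Integer>> f : parts) merged.addAll(f.get());
        pool.shutdown();
        System.out.println(merged.size()); // 1000
    }
}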

Even if your application is not doing enough to warrant running its tasks in parallel, the operating system could benefit, so that applications don't suffer on sharing resources that don't need to be shared.

Re:More so now, but depends ... (2, Interesting)

Shados (741919) | more than 6 years ago | (#22893736)

Well, actually... I think more things can be parallelized than one would think at first glance... The very existence of the foreach loop kinda shows this... Looking at most code I have to work with, 90% of such loops simply do "iterate through the entire list/collection/array and process only the current element", some simple aggregates (the kind where you can split the task and aggregate the result at the end), etc. Virtually all applications have those, and call them often.

Being able to simply say "the order in which these tasks are performed doesn't matter" lets you run a lot of tasks in parallel right there.

Then you have a lot of tasks which do not depend on each other at ALL, but will still wait on each other... Let's say a typical MVC web app a la Rails or Struts... you have a lot of resource handling going on in the view that doesn't rely on the Model/database whatsoever... and then you have the operation of rendering the model data in the view... so querying the data and handling the resources could be done at the same time, and once both are done, do the actual render. ASP.NET has a decent mechanism for that, but no one uses it.

These are all things that are being worked on (and some already exist, but most people ignore them), and they can dramatically boost performance of virtually ANY application... but there needs to be a certain awareness... And some of these features are just freakin' late (Microsoft is working on PLINQ... ways to make it easy to process data structures in parallel... for example: int[] blah = listOfInts.ForAll( item => item * 2 );... and it will do it in parallel. Seems so obvious, but why hasn't this been around for years in .NET?)
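
(For comparison, a one-line Java analogue of that hypothetical PLINQ call, using Java 8 parallel streams, which postdate this discussion; a sketch, not what the eventual PLINQ API looked like.)

import java.util.Arrays;
import java.util.stream.IntStream;

public class ParallelMap {
    public static void main(String[] args) {
        int[] listOfInts = {1, 2, 3, 4, 5};
        // Same contract: order doesn't matter, double every element,
        // and the runtime is free to split the range across cores.
        int[] blah = IntStream.of(listOfInts).parallel().map(item -> item * 2).toArray();
        System.out.println(Arrays.toString(blah));
    }
}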

Race conditions (2, Insightful)

microbox (704317) | more than 6 years ago | (#22894220)

Being able to simply say "the order in which these tasks are performed doesn't matter" lets you run a lot of tasks in parallel right there.

Mucking around with language design and implementation highlights some of the deep problems that the "parallelize-everything" crowd often don't know about.

In your example, the loop can only be efficiently parallelized if it doesn't have any side effects. If any variables outside the scope of the loop are written to, then they are exposed to race conditions in both the read and write directions. It's still possible to parallelize the code, but only if the machine code doesn't take advantage of the memory model of your architecture, no other thread is modifying those variables, it doesn't matter in which order the variables are modified, and modifications are atomic. As an implementation detail, memory reads and writes will be significantly slower, and synchronization of atomic operations is also not without cost. So much so, in fact, that you'd probably be better off just running the loop on a single thread. Your code will be more predictable, regardless.
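
(A concrete Java illustration of the parent's point, assuming Java 8 streams: the first loop writes to a variable out of the loop's scope and races; the second has no side effects and parallelizes cleanly.)

import java.util.stream.IntStream;

public class LoopRace {
    static long racyTotal = 0; // shared, written from many threads

    public static void main(String[] args) {
        // Broken: the loop body has a side effect on out-of-scope state,
        // so parallel read-modify-write cycles silently lose updates.
        IntStream.range(0, 1_000_000).parallel().forEach(i -> racyTotal += i);
        System.out.println("racy:    " + racyTotal); // usually NOT 499999500000

        // Correct: a pure reduction; nothing is shared until the combine step.
        long total = IntStream.range(0, 1_000_000).parallel().asLongStream().sum();
        System.out.println("correct: " + total);     // always 499999500000
    }
}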

At the moment, it seems that effective parallelization requires some planning and effort on behalf of the programmer. The bugs can be extraordinarily subtle, impossible to debug and difficult to reason about. Expect single-threaded programming models to be around for a long time.

Re:Race conditions (1)

Shados (741919) | more than 6 years ago | (#22894312)

Of course, the work needs to be thread-safe in that example. That's why I said "lets you run a lot of tasks in parallel right there".

My point was, if you go through a typical everyday business app (since those are probably the most common kind developed these days, or close), you'll find that this situation is more common than not. A LOT of loops contain operations without side effects and without shared resources. Being able to easily handle those nets you a large gain on the spot.

Basically, I'm advocating the 80/20 way. Make all of the "obvious" and "simple" scenarios easy, and you'll already get a large performance boost with little effort (it's already being done... my parallel loop example can easily be done in virtually all languages, and is already used in many places, and it works quite well).

Then we can attack the harder scenarios that require more than a new language construct.

Re:More so now, but depends ... (1)

brewstate (1018558) | more than 6 years ago | (#22894224)

Threading is parallel programming as well. I agree that it will be the "Next Big Thing", but only in the sense that in order to use the Cell processors and multi-core CPUs, we are going to have to use threading to optimize most of the serialized code (where possible).

Transistor Efficiency (1)

Detritus (11846) | more than 6 years ago | (#22893624)

Tied in with parallelism is the issue of doing something useful with the billions of transistors in a modern computer. During any microsecond, how many of them are doing useful work as opposed to just generating heat?

Re:Transistor Efficiency (1)

crispin_bollocks (1144567) | more than 6 years ago | (#22894270)

It's the ones doing "useful work" that generate the heat; the idle transistors should have very little dissipation.

Didn't we have this debate last week? (5, Informative)

Nursie (632944) | more than 6 years ago | (#22893630)

Oh yes, here it is [slashdot.org].

And the conclusion?

It's been around for years, numbnuts - in commercial and server applications, middle tiers, databases, and a million and one other things worked on by serious software developers (i.e. not web programming dweebs).

Parallelism has been around for ages and has been used commercially for a couple of decades. Get over it.

Re:Didn't we have this debate last week? (1)

esocid (946821) | more than 6 years ago | (#22893742)

But think outside the box, man. This is going to revolutionize the web 2.0 experience with its metadata socialized infrastructure.

Sorry, I totally got sick of using even that many buzz words. I'll stop now.

Re:Didn't we have this debate last week? (2, Funny)

spottedkangaroo (451692) | more than 6 years ago | (#22893850)

But, at the end of the day (where the rubber meets the road) this will utilize the core competencies of solutions that specialize in the new ||ism forefront.

Re:Didn't we have this debate last week? (0, Offtopic)

Creepy Crawler (680178) | more than 6 years ago | (#22893992)

You know what the best "buzz word" is? Dildo.

Bzzzzzzzzzzzzzz

Yes, & evidence of multithreaded code in FREEW (0)

Anonymous Coward | more than 6 years ago | (#22894486)

"Parallelism has been around for ages and has been used commercially for a couple of decades." - by Nursie (632944) on Friday March 28, @10:50AM (#22893630) Homepage
True, & NOT JUST IN COMMERCIALWARE (freeware/shareware too, for a LONG time):

APK Registry Cleaning Engine 2002++ SR-7:

http://www1.techpowerup.com/downloads/389/APK_Registry_Cleaning_Engine_2002++_SR-7_.html [techpowerup.com]

or

http://www.techpowerup.com/downloads/389/APK_Registry_Cleaning_Engine_2002++_SR-7_.html [techpowerup.com]

(Either link SHOULD work fine to see it)

Above ALL else:

SCREENSHOT:

http://www1.techpowerup.com/downloads/screenshots/389.jpg [techpowerup.com]

or

http://www.techpowerup.com/downloads/screenshots/389.jpg [techpowerup.com]

(Again - either link URL should work to check this out as an example of what you state: that multithreaded (& thus SMP-ready) code has been around for decades - that app began its life in 1997 in fact, in freeware no less... so it's NOT just present in commercialware apps, but in shareware/freeware as well).

Hey - I am just a SINGLE example of a guy that's been doing "multithreaded apps", since 1995 or so, online: & there are TONS of others that do also!

(In fact, anyone can examine their windows machine for the presence of multithreaded code, & see that MOST of what you use today IS multithreaded in fact - I run 30 processes here, & 28 of those ARE MULTITHREADED & thus, SMP-ready code!)

Multithreaded code is taken advantage of by the kernel & memory-management subsystems of today's MODERN OSes for better MULTITASKING (getting more done, with more cores/arms to do work with, so overall MORE GETS DONE - think of Gantt charts).

(Microsoft isn't alone here either - BSD variants & Linux variants do this too (Linux's original round-robin usermode/Ring 3/RPL 3 "threads" were all funneled through a SINGLE kernel-mode/RPL 0/Ring 0 thread)...

In fact, that fact held Linux back from "Enterprise-ready class OS" acceptance for a LONG time, until they built a REAL SMP-ready setup (iirc, around the 2.2 kernel build or so on Linux, but I could be off here on the specifics)).

APK

P.S.=> Granted, what I use is "coarse-grained" multithreaded design, which amounts to doing diff. tasks that process/touch DIFF. DATA as they operate (rather than "fine-grained multithreading", which is taking a single data set & running portions of its processing on diff. threads BUT on the SAME DATA (harder to do & NOT everything lends itself to it)).

E.G.=>

A=B+C
B=A-C
C=B+A

You cannot put this type of thing into threaded design, & expect gains out of it... simply because B has to WAIT on the completion of A, first... no point in placing A or B onto diff. threads, in other words... apk

Re:Yes, & evidence of multithreaded code in FR (1)

Nursie (632944) | more than 6 years ago | (#22894808)

Oh yeah, no "commercial only" thing was meant there, sorry to imply it. Apache web server would be an example of something that's been using these feature for some time too.

"You cannot put this type of thing into threaded design, & expect gains out of it... simply because B has to WAIT on the completion of A, first... no point in placing A or B onto diff. threads"

Oh sure, but where you have multiple threads doing separate, non-codependent tasks, you can parallelise really quite well.

Anyway, yes, my main point is my continued exasperation at folks who say "it's the next big thing" or "it'll never take off". It's here and has been for years!

Please no (3, Funny)

Wiseman1024 (993899) | more than 6 years ago | (#22893650)

Not parallelism... Why do MBA idiots have to fill everything with their crap? Now they'll start creating buzzwords, reading stupid web logs (called "blogs"), filling magazines with acronyms...

Coming soon: professional object-oriented XML-based AJAX-powered scalable five-nines high-availability multi-tier enterprise turnkey business solutions that convert visitors into customers, optimize cash flows, discover business logic and opportunities, and create synergy between their stupidity and their bank accounts - parallelized.

Most companies need parallel developers (2, Informative)

1sockchuck (826398) | more than 6 years ago | (#22893670)

This sure looks like a growth area for qualified developers. An audience poll at the Gartner Data Center conference in Las Vegas in November found that just 17 percent of attendees [datacenterknowledge.com] felt their developers are prepared for coding multi-core applications, compared to 64 percent who say they will need to train or hire developers for parallel processing. "We believe a minority of developers have the skills to write parallel code," said Gartner analyst Carl Claunch. I take the Gartner stuff with a grain of salt, but the audience poll was interesting.

McColl's blog is pretty interesting. He only recently started writing regularly again. High Scalability [highscalability.com] is another worthwhile resource in this area.

Re:Most companies need parallel developers (1)

ClientNine (1261974) | more than 6 years ago | (#22893848)

| ...coding multi-core applications... What the hell does that mean, anyway?

Can someone explain what the difference is between a programmer being "prepared for parallelism" and a programmer who knows how to do a good job with threading?

Writing multi-threaded apps has always been hard, and likely always will be harder than writing single-threaded apps. Go figure-- doing more stuff at once is trickier than doing one thing at a time. ("Duh.") I fail to see what the Big New Thing is.

Re:Most companies need parallel developers (3, Interesting)

Hasmanean (814562) | more than 6 years ago | (#22894664)

In common usage, threading usually implies different "streams" of execution doing independent things at the same time. If the same function is executing in n different threads, then you might call it "parallel" programming. A lot of multithreaded programming involves taking pieces of program functionality and breaking them out into separate threads, each executing independently.

Calling the latter architecture parallel computing is a misnomer; it is really "simultaneous" computing, i.e. things can happen at the same time, but there is a big difference between the same thread executing n times in parallel and different threads doing different things simultaneously.

For example, a "Trivial" program which reads in a list of numbers from a file, computes something (say the sum of the magnitudes squared), and prints the result out to the screen might be implemented as follows:

while not eof
        read n numbers
        compute something from them
        print result


a multithreaded version might look something like this

Thread 1 (Disk IO): read n number from disk, write them to a queue/shared memory, repeat
Thread 2 (Outputting): wait for outputs to become available, print them, repeat
Thread 3 (Compute): wait for inputs to arrive in a queue, process them, write output to another queue, repeat



A parallel version would just have more than one Compute thread, and they would subdivide the work between them (for example, 2 threads dividing the input array into stripes, one handling even indices, the other odd... or a bunch of threads computing different slices of the array). Note that the threads would still have to combine their results at the end of the computation, and that is not always simple to do in parallel.
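
(A hedged Java sketch of exactly that striping scheme: two compute threads take even and odd indices of a made-up input array and combine their partial sums at the end.)

public class StripedSum {
    public static void main(String[] args) throws InterruptedException {
        double[] data = new double[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i * 0.5;

        final int stripes = 2;
        double[] partial = new double[stripes];   // one slot per thread: nothing shared
        Thread[] workers = new Thread[stripes];

        for (int s = 0; s < stripes; s++) {
            final int stripe = s;
            workers[s] = new Thread(() -> {
                double sum = 0;
                // Stripe the array: thread 0 takes even indices, thread 1 odd.
                for (int i = stripe; i < data.length; i += stripes)
                    sum += data[i] * data[i];      // sum of magnitudes squared
                partial[stripe] = sum;
            });
            workers[s].start();
        }

        for (Thread w : workers) w.join();

        // Combining the partial results is the one step that stays sequential.
        double total = 0;
        for (double p : partial) total += p;
        System.out.println(total);
    }
}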

Some problems or algorithms simply cannot execute in parallel. Also, issues of memory access patterns, caching, and branch divergence (if threads take different code paths, will this affect performance?) come into play. It introduces a whole new set of issues to worry about, but they are not too difficult. As the professor who teaches the course http://courses.ece.uiuc.edu/ece498/al1/ [uiuc.edu] says, learning parallel programming is not hard; he could teach it to you in 2 hours. But doing parallel programming well and efficiently is difficult. You can write a trivial parallel program which uses just 1 processor and has just 1 thread, which is identical to the sequential version, and it will work logically, although the performance will be severely limited. You can then extend it to use n threads, and it will experience a speedup. But to take full advantage of the hardware on your board, you will need to know a few tricks, and understand the hardware and your program's behavior intimately.

Another issue is that programmers were spoiled by processor upgrades coming along and speeding up their programs "for free" by virtue of their higher clock speed. Now with clock speeds reaching physical limits, the only evolution in new processors will be in the number of cores they have. So the only way to coax more out of a program will be to make it more parallel, and that might be trivial or it might be difficult. We're going to have to think laterally to get more performance out of software.

Who cares... (0)

Anonymous Coward | more than 6 years ago | (#22893710)

...about the new new thing? That's old. We want the new new new thing!

"the bastards say 'welcome'" (4, Informative)

david.emery (127135) | more than 6 years ago | (#22893720)

So all of a sudden people have discovered parallelism? Gee, one of the really interesting things about Ada in the late 80s was its use on multiprocessor systems such as those produced by Sequent and Encore. There was a lot of work on the language itself (that went into Ada95) and on compiler technologies to support 'safe parallelism'. "Safe" here means 'correct implementation' against the language standard, considering things like cache consistency as parts of programs execute on different CPUs, each with its own cache.

Here are a couple of lessons learned from that Ada experience:
1. Sometimes you want synchronization, and sometimes you want avoidance. Ada83 Tasking/Rendezvous provided synchronization, but was hard to use for avoidance. Ada95 added protected objects to handle avoidance.
2. In Ada83, aliasing by default was forbidden, which made it a lot easier for the compiler to reason about things like cache consistency. Ada95 added more pragmas, etc, to provide additional control on aliasing and atomic operations.
3. A lot of the early experience with concurrency and parallelism in Ada taught (usually the hard way) that there's a 'sweet spot' in the number of concurrent actions. Too many, and the machine bogs down in scheduling and synchronization. Too few, and you don't keep all of the processors busy. One of the interesting things that Karl Nyberg worked on in his Sun T1000 contest review was the tuning necessary to keep as many cores as possible running. (http://www.grebyn.com/t1000/ [grebyn.com] ) (Disclosure: I don't work for Grebyn, but I do have an account on grebyn.com as a legacy of the old days when they were in the ISP business in the '80s, and Karl is an old friend of very long standing....)
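For readers who never touched Ada, lesson 1 maps roughly onto the pthreads world. Here is a hedged C sketch (not Ada, and the names are mine) contrasting avoidance, a mutex serializing access to shared state much like an Ada95 protected object, with synchronization, a condition-variable handshake loosely analogous to a rendezvous:

#include <pthread.h>

/* Avoidance: callers never collide, they just take turns. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter;

void bump(void)                    /* like an Ada95 protected procedure */
{
    pthread_mutex_lock(&lock);
    counter++;
    pthread_mutex_unlock(&lock);
}

/* Synchronization: one task waits until another signals readiness. */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t ready = PTHREAD_COND_INITIALIZER;
static int data_present;

void producer_side(void)           /* loosely, the accepting task */
{
    pthread_mutex_lock(&m);
    data_present = 1;
    pthread_cond_signal(&ready);
    pthread_mutex_unlock(&m);
}

void consumer_side(void)           /* loosely, the entry caller */
{
    pthread_mutex_lock(&m);
    while (!data_present)          /* re-check: wakeups can be spurious */
        pthread_cond_wait(&ready, &m);
    data_present = 0;
    pthread_mutex_unlock(&m);
}

The point of lesson 1 is that you want both idioms available; a language that gives you only one makes the other painful to express.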

All this reminds me of a story from Tracy Kidder's Soul of a New Machine http://en.wikipedia.org/wiki/The_Soul_of_a_New_Machine [wikipedia.org] . There was an article in the trade press pointing to an IBM minicomputer, with the title "IBM legitimizes minicomputers". Data General proposed (or ran, I forget which) an ad that built on that article, saying "The bastards say, 'welcome' ".

dave

Re:"the bastards say 'welcome'" (1)

non (130182) | more than 6 years ago | (#22895336)

Although many aspects of Brian Wilson's 'A Quick Critique of Java' [ski-epic.com] are no longer particularly valid, and indeed I find myself programming more and more frequently in a language I too once scorned, the section on multi-threaded programming surely still has relevance. I certainly second his opinion that multi-threaded applications tend to crash more often. I have in fact turned down offers of employment from companies whose code failed to run on my workstation because it wasn't exactly thread-safe, despite the requirement of a 'high-performance' machine to run it on.

"...with the cooling of Web 2.0,..." (4, Insightful)

sm62704 (957197) | more than 6 years ago | (#22893740)

Sorry guys, web 2.0 was never cool and never will be.

Will "Parallelism" be the next new thing? No... (1)

Jerf (17166) | more than 6 years ago | (#22893874)

Will "Parallelism" be the next new thing? Well... no. That's like asking if "for loops" are going to be the next big thing. It's a tool, to be used when appropriate and not used when appropriate. It's going to be very hard to convert "Parallelism" into a magic VC pixie dust.

I say this as someone who has recently been tuning his career, experience, and personal projects towards learning more about parallel programming in practice, and I still don't see this as a "next big thing". It's just another in a long list of skillsets that is going to be gradually more useful over the next few years. I still think it's a good move to learn about it (and I'm putting my effort where my mouth is), but I don't see it sustaining any sort of bubble.

One thing I did learn from the last discussion is that a lot of people are very far behind on parallelism, though; if you think locks and pthreads are the state of the art, you've got some catching up to do. There's been a lot of practical progress in the past ten years that doesn't seem to have made it into the popular consciousness of Slashdot programmers. (And I emphasize the word practical; it's not just academic wanking anymore, or libraries optimized for such.) I suggest anyone interested in the topic check out Erlang and Software Transactional Memory (in that order). Whether these technologies will "win" I don't know, but Erlang in particular, while it may in some sense not contain a single idea that can't be traced to the 1960s, is unique in being a complete runtime and library based on ideas-that-will-be-new-to-you (most likely, based on the previous /. discussion).

Parallel Software: Hard. Analysis Tools: Overdue (1)

Znord (610696) | more than 6 years ago | (#22893950)

Parallel software is mastered, often, only by those who put up huge designed-in fences to partition the software, or by those who use languages with innate safety but wasteful, non-pipelined code. We've been drawing on the Moore's Law well for too long, and it's running dry.

Analysis tools, code that analyzes code accurately, will finally need to get off the ground if we're going to get out of this gap. Otherwise, we're just going to see hundreds to thousands of hardware-supported, virtualized, inefficient pseudo-threads and tons of message-passing analogies.

Shared-memory contention is a problem hardware has solved at this level of integration, but shared-memory code never lives up to its possible speed in most programming, except in the OS.

I'm entering grad school for this. There's more funding than there are students! Let's get on it!

Total Tripe (1)

Zebra_X (13249) | more than 6 years ago | (#22894052)

So I started writing without reading TFA. Then I read TFA to be sure that I was not in fact missing something. I'm not, and you're not either... The assertions of the article are absurd - someone must have felt that they needed to put something on the web for Google to index.

While not related to parallelism I especially like "SaaS as an Appliance. One area within SaaS that is growing quickly is the opportunity to deliver a SaaS product as an appliance."

So you mean to tell me that the next big thing is installing software on a server, installing that server in my datacenter, and supporting it?! New!? HA HA HA. So actually what you are saying is that SaaS is too limiting, unreliable (from a connectivity perspective) or not secure enough for client needs, and that we have to do it the old way.

As far as the last item in the blog goes, I'm not sure where all of this crazy excitement over parallelism is coming from. There have been several posts on Slashdot over the last few weeks regarding it. Massive parallelism has been available for decades for the very special types of scientific problems that merit the additional complexity of coding them up. Parallelism has also been available in the form of threads.

The real conundrum is that not many problems on the desktop are, in fact, parallel; when one is, you would use threads, and if you really need to be sure the OS is behaving itself, processor affinity as well (a sketch of that follows below). This would allow, given the proper task, one to fully utilize every cycle of the available cores. The real desktop problem is how to most efficiently allocate processes across multiple cores and physical CPUs.
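For the curious, pinning a thread to a core on Linux looks roughly like this. This is a sketch only: pthread_setaffinity_np and sched_getcpu are GNU-specific, other OSes spell it differently, and the core number chosen here is arbitrary.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *work(void *arg)
{
    int core = *(int *)arg;
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(core, &set);       /* pin this thread to the given core */
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    printf("pinned to core %d, now on core %d\n", core, sched_getcpu());
    /* ... the real work goes here ... */
    return NULL;
}

int main(void)
{
    pthread_t t;
    int core = 1;              /* assumes the machine has a core 1 */

    pthread_create(&t, NULL, work, &core);
    pthread_join(t, NULL);
    return 0;
}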

The tools to do this are available today, here and now! Just get your latest copy of Xcode, Visual Studio, or GCC! Unless someone reinvents the way we code, I'm not sure that anything is going to happen here.

Flavor of the month? (1)

plopez (54068) | more than 6 years ago | (#22894062)

Former venture capitalist Jeff Nolan has been agonizing this week over "what's next?" and "where the venture capitalists will put all that money?"

It really sounds like he is shilling.

But seriously, first it was DP and COBOL.
Then expert systems.
Then relational databases.
Then object orientation.
Then webification.
Then XMLification.
Then web 2.0.

And probably a few I missed.

I think this is just another fad.

Re:Flavor of the month? (1)

Dr. Sp0ng (24354) | more than 6 years ago | (#22894850)

Interesting definition of "fad" you have there. Are you seriously claiming that relational databases went away after the initial burst of enthusiasm? Or OO? The web? XML?

Man emerges from cave... and states the obvious. (1)

divisionbyzero (300681) | more than 6 years ago | (#22894078)

I'm not trolling. Sun, Intel, Nvidia, and ATI/AMD (for obvious reasons) have been investing in this area for years. I think that constitutes Silicon Valley investment (except for those crazy Canadians at ATI).
 

What am I missing? (3, Insightful)

James Lewis (641198) | more than 6 years ago | (#22894094)

Now that multi-core computers have been out for a while, I keep hearing buzz around the idea of parallel computing, as if it were something new. We've had threads, processes, multi-CPU machines, grid computing, etc. for a long time now. Parallelism has been in use on single-processor machines for a long time. Multi-core machines might make it more attractive to thread certain applications that were traditionally single-threaded, but that's the only major development I can see. The biggest problem in parallel computing is the complexity it adds, so hopefully developments will be made in that area, but it's an area that's been researched for a long time already.

Re:What am I missing? (1)

olddotter (638430) | more than 6 years ago | (#22895022)

Parallelism has been done where it is easy (web servers and the like) and where there was no other choice (scientific computing, etc.), but it has not been done well in mainstream software.

Historically, software development has been lazy (with a few notable exceptions), sitting back and relying on new silicon (EEs, Moore's Law, higher clock rates) to improve performance. But in the future that may change. Breaking your software up into parallel tasks may be required to get performance benefits from new silicon.

As a computer scientist I am ashamed of the lack of progress of my discipline. But I also harbor hope that this will put computer science back into the software development process. Right now development is mostly about visual editors and high-level scripting languages.

Dont believe? Intel & MS have made a $20M bet (2, Informative)

kiyoshilionz (977589) | more than 6 years ago | (#22894158)

You think that nobody has a real interest in parallel computing? Intel's put their money on it already - they've allotted $20 million between UC Berkeley [berkeley.edu] and the University of Illinois [uiuc.edu] to research parallel computing, both in hardware and software.

I am an EECS student at Cal right now and I have heard talks by the UC Berkeley PARLab [berkeley.edu] professors (Krste Asanovic and David Patterson, the man who brought us RAID and RISC), and all of them say that the computing industry is going to radically change unless we figure out how to efficiently use parallelism. This is the first time in history that software performance is lagging behind how fast we can make our hardware. The failure of frequency scaling to keep improving system performance showed in the failure of the NetBurst microarchitecture - remember the Prescott? And the cancelled Tejas and Jayhawk [wikipedia.org]? Building faster individual cores is over; it's now a thermal engineering issue - we can make chips put out more heat per area than the surface of the sun. Quoting professor Hennessy from Stanford:

"...when we start talking about parallelism and ease of use of truly parallel computers, we're talking about a problem that's as hard as any that computer science has faced. ... I would be panicked if I were in industry. ... you've got a very difficult situation."

To whoever is saying that parallelism is just a fad: you're really missing a lot of what's going on in the computing world. We've already switched to dual- and quad-core CPUs, and it doesn't look like that's going to stop any time soon.

The three kinds of parallelism that work (1)

Animats (122034) | more than 6 years ago | (#22894206)

We know three kinds of parallelism that work: clusters, shared memory multiprocessors, and graphics processors. Many other ideas have been tried, from hypercubes to SIMD machines, but none have been big successes. The most exotic parallel machine ever to reach volume production is the Cell, and that's not looking like a big win.

Graphics processors are the biggest recent success. They're still very difficult to program. We need new languages. C and C++ have the built-in assumption that all pointers point to the same memory space, and don't address concurrency at the language level at all. That's not going to work. The "dynamic languages" (Javascript, Python, etc.) have too much overhead. Special purpose graphics languages (Renderman, etc.) address the high concurrency issue, but only for a limited class of problems. None of the traditional parallel languages (Occam, etc.) have enough of a user base to make them compelling. Interestingly, what does look promising is compiling Matlab to GPU code; the big matrix operations of Matlab translate well to GPU-type machines, and number-crunching engineers like and use Matlab.

If your problem isn't a good match to either Matlab or RenderMan, though, it's all uphill right now.
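To make the aliasing point above concrete, here is a small C99 sketch (the function name is mine, not from TFA). Without the restrict qualifier the compiler must assume x and y might overlap, so every store through y could change what x points at, and it cannot safely vectorize or parallelize the loop.

/* "restrict" promises the compiler that x and y do not alias,
   which is what lets it vectorize or parallelize the loop. */
void saxpy(int n, float a, const float *restrict x, float *restrict y)
{
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}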

Re:The three kinds of parallelism that work (1)

ceoyoyo (59147) | more than 6 years ago | (#22894568)

Every major desktop, notebook or workstation processor in the last ten years has had SIMD units built in. It's been very successful. Graphics processors are basically massive SIMD machines with a limited instruction set and restrictive architecture. They're not hard to program, but they are limited.
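As a taste of what those built-in SIMD units look like from C, here is a sketch using the SSE intrinsics that x86 compilers have shipped for years; it adds four floats per instruction and, to keep the sketch short, assumes n is a multiple of 4.

#include <xmmintrin.h>                     /* SSE intrinsics */

void add_arrays(int n, const float *a, const float *b, float *out)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);   /* load 4 floats at once */
        __m128 vb = _mm_loadu_ps(&b[i]);
        _mm_storeu_ps(&out[i], _mm_add_ps(va, vb));
    }
}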

You've got two choices if you want to run lots of stuff in parallel. You can do it very easily and live with the restrictions, a la GPUs or simple coarse-grained cluster stuff. Or you can have a lot more flexibility at the expense of some extra bookkeeping. Just like anything else.

The Cell isn't really an exotic parallel machine. It's a regular multiprocessor/multicore machine (like a ten year old desktop Mac) except that some of those processors are special purpose. Just like every desktop computer in the last five years or so has had a programmable GPU. The difference with Cell is that those special purpose processors are a little more diverse, which means that Cell is great for things like game consoles, but most of the chip goes unused for checking your e-mail and surfing the web. GPUs weren't a big hit either until there were enough of them that it was worth writing software to use them. When I got my first Voodoo you could pretty much play a Tomb Raider tech demo and that was it.

Re:The three kinds of parallelism that work (1)

Animats (122034) | more than 6 years ago | (#22894630)

The Cell isn't really an exotic parallel machine. It's a regular multiprocessor/multicore machine (like a ten year old desktop Mac) except that some of those processors are special purpose.

No, it's a non-shared-memory multiprocessor with limited memory (256K) per CPU. It belongs to roughly the same family as the nCube, although the Cell has a block DMA-like path to main memory rather than relying entirely on CPU to CPU data paths like the nCube.

It's typically used like a DSP farm; data is pumped through each Cell processor, crunched a little, and pumped out the other side. It's only good for problems that fit that model. Audio guys love it. Physics programmers, not so happy.

Then we are all doomed (3, Interesting)

DrJokepu (918326) | more than 6 years ago | (#22894294)

You see, the majority of the programmers out there don't know much about parallelism. They don't understand what synchronization, mutexes or semaphores are. And the thing is that these concepts are quite complex. They require a much steeper learning curve than hacking a "Web 2.0" application together with PHP, Javascript and maybe MySQL. So if everybody now starts writing multithreaded or otherwise parallel programs, that's going to result in an endless chain of race conditions, mysterious crashes and so on. Remember, race conditions have already killed people [wikipedia.org] .
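For anyone who hasn't been bitten yet, the canonical race takes one line of C. In this sketch, two threads incrementing a shared counter almost never produce 2000000, because counter++ compiles to a separate read, add, and write that the threads interleave; the mutex version serializes the update and always gets the right answer.

#include <pthread.h>
#include <stdio.h>

static long counter;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *unsafe_bump(void *arg)        /* the race */
{
    for (int i = 0; i < 1000000; i++)
        counter++;                         /* read-modify-write, not atomic */
    return NULL;
}

static void *safe_bump(void *arg)          /* the fix */
{
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, unsafe_bump, NULL);
    pthread_create(&b, NULL, unsafe_bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("racy total:   %ld (expected 2000000)\n", counter);

    counter = 0;
    pthread_create(&a, NULL, safe_bump, NULL);
    pthread_create(&b, NULL, safe_bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("locked total: %ld\n", counter);
    return 0;
}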

Re:Then we are all doomed (1)

pohl (872) | more than 6 years ago | (#22894576)

Thank you for that wikipedia link. That is a tragic story. I don't understand why you think that "we're all doomed", though. Being in the minority of programmers who understand a given technique is potentially lucrative, so those people are not doomed. And there's no need whatsoever to eke out every last drop of 16-core performance from a machine that needs to make sure a beam-spreader is rotated into position before activating the high-power X-ray emitter - so the obvious solution in that case is not to apply superfluous concurrency optimizations in the first place. Meaning that the patients who need such a machine need not be doomed either (any more than they already were, given their malady). In short, who says that everybody will start writing parallel programs irrespective of their applicability to a given problem?

Know how? (0)

Anonymous Coward | more than 6 years ago | (#22894436)

Nobody knows how to efficiently program such pervasively parallel systems. Yes, it could be the next big thing, but it won't be.

After Parallelism, the next big thing is... (1)

SpyPlane (733043) | more than 6 years ago | (#22894498)

Electricity, Automobiles, and the telephone!

Seriously, didn't we decide that parallel programming was the next big thing when Sutter wrote a big article in Dr. Dobb's a couple of years ago?

Welcome to the party pal, we've been here a while already!

Software as a Service != Parallelism (1)

Aaron Isotton (958761) | more than 6 years ago | (#22894500)

TFA is mainly about SaaS - Software as a Service (yes, I had to look it up). GMail, Google Calendar etc. in other words.

Honestly I think that parallelism and SaaS are pretty much on opposite sides of the spectrum. Your typical SaaS application requires no parallelism whatsoever, since it is typically a low-impact program. The only real improvement over ordinary software is that you don't have to install it, don't have to maintain it, and can access it anytime from anywhere.

A typical SaaS provider has a few dozen to a few thousand servers running a few hundred to a few million instances of its software. Since a single server will typically run many instances of the software, parallelization will "just happen" for free.

A typical SaaS client runs in a web browser. Since a web browser is probably about the least efficient application GUI ever made (it runs Javascript over XML over HTTP and uses HTML and the DOM to "emulate" a drawing library), parallelization is not your problem there either.

SaaS can be great and so can parallelization. But they're not related.

Re:Software as a Service != Parallelism (1)

BotnetZombie (1174935) | more than 6 years ago | (#22895068)

SaaS can be great and so can parallelization. But they're not related.

Really? You say yourself:

A typical SaaS provider has a few dozen to a few thousand servers running a few hundred to a few million instances of his software. Since typically a single server will run many instances of the software, parallelization will "just happen" for free.

If that's not massively parallel, I don't know what is. And someone has to code that end too. Even if you're mostly thinking about the client end, who's to say you necessarily have to go through the typical browser, as is most common now? Or that browsers can't be enhanced to do more work, more efficiently?

an 8000 node cluster is a parallel supercomputer (1)

vkg (158234) | more than 6 years ago | (#22894516)

http://vinay.howtolivewiki.com/blog/hexayurt/supercomputer-applications-for-the-developing-world-375 [howtolivewiki.com]

We've seen unambiguously that **GIGANTIC** data sets have their own value. Google's optimization of their algorithms clearly uses enormous amounts of observed user behavior. Translation efforts draw on terabyte source canons. Image integration algorithms like that thing that Microsoft were demonstrating recently... gigantic data sets have power because statistics draw relationships out of the real world, rather than having programmers guess at what the relationships are.

I strongly suspect that 20 years from now, there are going to be three kinds of application programming:

1> Interface programming

2> Desktop programming (in the sense of programming things which operate on *your personal objects* - these things are like *pens and paper* and you have your own.)

3> Infrastructure programming - supercomputer cluster programming (Amazon and Google are *supercomputer* *applications* *companies*) - which will provide yer basic services.

Real applications - change the world applications - need parallel supercomputer programming.

Dynamic Execution Backgound (1)

deadline (14171) | more than 6 years ago | (#22894738)

The HPC cluster people have thought about this stuff for a while. One approach that I have thought about is described in the article Cluster Programming: You Can't Always Get What You Want [clustermonkey.net]. There are two follow-on articles as well: Cluster Programming: The Ignorance is Bliss Approach [clustermonkey.net] and Cluster Programming: Explicit Implications of Cluster Computing [clustermonkey.net].

Of course if you really want to know how I feel about this: How The GPL Can Save Your Ass [linux-mag.com]

enjoy

Again, enough already!!! (1)

mlwmohawk (801821) | more than 6 years ago | (#22894802)

We *already* live in a parallel computing environment. Almost every computer has a large number of processes and threads running simultaneously. This *is* parallelism.

Granted, yes, certain products could benefit from extreme threading - e.g. PostgreSQL breaking the hierarchy of query steps into separate threads and running them in parallel, doing a more exhaustive search in the query planner using multiple threads, and stuff like that - but there is always going to be competition between performance for a single logical process and performance for the system as a whole.

Maybe when/if we get more cores than processes it makes sense, but right now we use all the cores we have. As long as your load average is non-zero, you may be able to benefit from more cores.

threads and multiple cores in current OSes (1)

steverar (999774) | more than 6 years ago | (#22895070)

I'm pretty sure that OS X will distribute threads across however many cores it has to work with. What about Windows XP, Vista, or the Linuxes? If a program creates 2 threads and the machine has 2 cores, will the OS assign the threads accordingly? It probably depends on the current workload, but if the 2 threads are on separate cores then they're running in parallel, right?
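A quick experiment to check on Linux (a sketch only: sched_getcpu is a glibc extension, Windows and OS X have their own equivalents, and the scheduler is always free to place the threads however it likes):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *spin(void *arg)
{
    /* Busy-loop so both threads want the CPU at the same time,
       then report which core each one ended up on. */
    volatile long i;
    for (i = 0; i < 100000000L; i++)
        ;
    printf("thread %ld finished on core %d\n", (long)arg, sched_getcpu());
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, spin, (void *)1L);
    pthread_create(&t2, NULL, spin, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

On a multi-core box the two threads almost always report different cores.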

Twenty-plus years on... (1)

AlecC (512609) | more than 6 years ago | (#22895312)

Parallelism was the New New Thing when the Inmos Transputer rolled out in 1984 - a CPU explicitly designed to allow multiple CPUs to co-operate, with a hardware scheduler to provide on-chip parallelism. Then we had the GAPP (Geometric Arithmetic Parallel Processor), the Connection Machine, and lots of other weird architectures whose names I cannot recall. I designed one myself in the late eighties, and we took it to breadboard stage (essentially Hyperthreading writ very large).

Forgive me for not getting too excited by this new dawn. I have seen too many false dawns. Yes, parallelism is getting more and more prevalent, but expect more incremental steps, not an explosion.