
Intel Updates Compilers For Multicore CPUs

kdawson posted more than 6 years ago | from the what-about-gcc dept.


Threaded writes with news from Ars that Intel has announced major updates to its C++ and Fortran tools. The new compilers are Intel's first that are capable of doing thread-level optimization and auto-vectorization simultaneously in a single pass. "On the data parallelism side, the Intel C++ Compiler and Fortran Professional Editions both sport improved auto-vectorization features that can target Intel's new SSE4 extensions. For thread-level parallelism, the compilers support the use of Intel's Thread Building Blocks for automatic thread-level optimization that takes place simultaneously with auto-vectorization... Intel is encouraging the widespread use of its Intel Threading Tools as an interface to its multicore processors. As the company raises the core count with each generation of new products, it will get harder and harder for programmers to manage the complexity associated with all of that available parallelism. So the Thread Building Blocks are Intel's attempt to insert a stable layer of abstraction between the programmer and the processor so that code scales less painfully with the number of cores."
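The abstraction the summary describes can be pictured with a hand-rolled sketch in plain C++ threads -- roughly the loop splitting that a library like Threading Building Blocks automates. The function name and chunking scheme here are illustrative, not Intel's API:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <thread>
#include <vector>

// Hand-rolled sketch of what a TBB-style parallel loop abstracts away:
// split an index range into chunks, one per hardware thread, and join.
// Names and chunking scheme are illustrative, not Intel's actual API.
void parallel_fill(std::vector<int>& data) {
    unsigned workers_n = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (data.size() + workers_n - 1) / workers_n;
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < workers_n; ++t) {
        std::size_t begin = std::min(data.size(), t * chunk);
        std::size_t end = std::min(data.size(), begin + chunk);
        workers.emplace_back([&data, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                data[i] = static_cast<int>(i) * 2;  // independent per element
        });
    }
    for (auto& w : workers) w.join();
}
```

A TBB-style library hides the chunking, thread creation, and join behind a single parallel-loop call, which is the "stable layer of abstraction" the summary refers to.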


208 comments

Umm.. (-1, Offtopic)

Mockylock (1087585) | more than 6 years ago | (#19402123)

What?

Re:Umm.. (4, Funny)

Timesprout (579035) | more than 6 years ago | (#19402343)

Intel has added kitten whiskers and pixie dust to its compilers so your ponies can now play on multiple paddocks.

Re:Umm.. (0, Flamebait)

Anonymous Coward | more than 6 years ago | (#19402951)

You know, it's not high school here. You don't have to pretend to be stupid. It will actually make people think worse of you, not better.

And if you simply are ignorant, you could always read about the things in the summary. You might learn something that way.

But has /. really got to the stage that people think it's somehow clever to be stupid? News for nerds, and all that....

Anyone want to... (4, Funny)

u-bend (1095729) | more than 6 years ago | (#19402129)

...briefly translate this article into cretin for me, so that I can understand a bit more of why it's so cool?

Re:Anyone want to... (5, Informative)

Trigun (685027) | more than 6 years ago | (#19402197)

The compiler worries about the cores so you don't have to. Is that too cretin?

Re:Anyone want to... (1)

u-bend (1095729) | more than 6 years ago | (#19402263)

haha! Maybe a little too cretin. I might be able to handle information that's a *tiny* bit more technical.
:)
Soooo, at the risk of sounding really stupid, wasn't this sort of thing happening with previous compilers?

Re:Anyone want to... (0, Funny)

Anonymous Coward | more than 6 years ago | (#19402293)

Go back to your PHP and leave this to the real programmers.

Re:Anyone want to... (3, Funny)

u-bend (1095729) | more than 6 years ago | (#19402393)

Heh, now that's what I really needed to hear. So crap's going to automatically make use of multiple cores better.

FYI, not a programmer/developer/etc., not even PHP, just interested in tech, but love the attitude anyway, AC ;)

Re:Anyone want to... (4, Informative)

BecomingLumberg (949374) | more than 6 years ago | (#19402221)

>>>So the Thread Building Blocks are Intel's attempt to insert a stable layer of abstraction between the programmer and the processor so that code scales less painfully with the number of cores.

They found a way for the compiler to determine automagically how a program can use the computer's many CPU cores. It is cool, since it is really hard to figure out how to split a given workload 16 even ways.

Re:Anyone want to... (1)

Applekid (993327) | more than 6 years ago | (#19402297)

Sounds like snake oil to me.

I can't speak for Fortran but what standard C++ mechanisms are there for threading? If they added stuff to the CLR, shouldn't it have gone through the organizations that maintain them? Weird compiler extensions are bad for cross-compatibility. (Which I guess is the point since Intel compilers -> Intel CPUs -> No other CPU manufacturers).

Besides, threading is still an OS specific venture. Do these optimizations just work by looking for calls to fork() or the Windows alternative?

I'd rather they take my code and automagically compile sections to take advantage of SSE invisibly.

OK (0)

Anonymous Coward | more than 6 years ago | (#19402241)

Translation: "They have made improvements that can better translate regularly programmed code into machine code that runs faster on their CPUs."

Hopefully, your games will be out faster and run more realistically (in terms of AI and graphics), because programmers will spend less time making sure their code makes full use of the features of the CPU(s).

Re:Anyone want to... (2, Informative)

CaptainPatent (1087643) | more than 6 years ago | (#19402257)

Essentially the compiler will automatically optimize thread splitting (timing and number of splits, if I'm reading this correctly), which is a very handy feature, as it will quickly become nearly impossible to manage future processors with 16+ cores by hand. They do seem to hide a lot of the true features underneath market-speak, though.

Re:Anyone want to... (5, Funny)

Mockylock (1087585) | more than 6 years ago | (#19402271)

The parallelism of the Compiler Fortran and Professional Edition of the uranium core both sport improved auto-vectorizationalism of the fortran and format that can target Intel's new SSE4 extensionalism. For thread-level parallelismisitic quantum theory, the compilers support the use of Intel's Threadtastic Building Block nationalism for objectionism for automatic thread-level optimizationalism that takes place simultaneously with auto-vectorization of parellel universes... Intel is encouraging the widespread use of its Intel Threading quantum physics parallel vectorizationistic Tools as an interface on the enterprise bridge to its Spock multicore processors. As the parallel company raises the vectorized core count with each multitudinal generation of new vector parallel products, it will get harder and harder for programmers to manage the complexity associated with all of that available parallelismistic forces.

See, it's not that hard to understand.

Re:Anyone want to... (4, Interesting)

LWATCDR (28044) | more than 6 years ago | (#19402463)

SSE4 is the latest and greatest vector instruction set from Intel: MMX->SSE->SSE2->SSE3->SSE4. These instructions speed up things like transcoding video and audio. They are also good for anything that does a lot of floating-point. The downside is that very few systems have CPUs that support SSE4, and selecting it may hurt systems that don't have SSE4, or the program might not run at all, depending on how the compiler is written. My bet is it will degrade gracefully. Overall, SSE4 is most useful for people writing custom software right now, and will become commonplace in off-the-shelf software once AMD supports it and systems that support it are more common.
The Threading Building Blocks are yet another attempt to make writing multithreaded code easier. Frankly, I don't find pthreads hard, but maybe I am just odd.
Threading is very important because we are not going to see an endless increase in clock speed anymore. Intel, AMD, and IBM are all pushing multiple cores. While adding an extra core or three really does help modern systems at least a little, since we are often running multiple tasks, current software will not scale as well when the cores start growing in a Moore-like fashion. Right now we are at four cores; if Moore's law holds, in two years we might see eight, then 16, then 32... As you can see, it gets out of hand pretty quickly. Your average desktop will not use four cores very well, much less eight, until software is written to take advantage of more cores.
Yes, I know that Moore said 18 months, but I was going for nice round numbers.

And to make vector ops even simpler than in Parent (0)

Anonymous Coward | more than 6 years ago | (#19402621)

Regular version:

for(i = 0; i < 10; i++) a[i] = b[i] + c[i];


Vectorized version is something like this:

a[0..9] = b[0..9] + c[0..9];

Obviously, the above isn't valid code, but the idea is there, I hope?
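The whole-array notation above can actually be written in standard C++ with std::valarray, which exists precisely to express loop-free element-wise arithmetic that a vectorizing compiler may lower to SIMD instructions. A minimal sketch:

```cpp
#include <cassert>
#include <valarray>

// std::valarray expresses the whole-array form sketched above as real
// C++: the element-wise sum is written without an explicit loop, and a
// vectorizing compiler is free to lower it to SIMD (e.g. SSE) code.
std::valarray<int> add_arrays(const std::valarray<int>& b,
                              const std::valarray<int>& c) {
    return b + c;  // element-wise addition over the whole array
}
```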

Re:And to make vector ops even simpler than in Par (0)

Andrew Kismet (955764) | more than 6 years ago | (#19403383)

That's interesting, but what if step 11 of the loop is dependent on step 10? How does one vectorise that?
I can imagine vectorisation working alright for basic loops like the one you described, which'll help in a number of cases, but it's not going to scale exceptionally well. It's good, but it's nothing amazing, if I'm reading this right.
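A concrete instance of the dependency being asked about is a running sum, where each iteration needs the previous iteration's result; sketched in C++:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A loop-carried dependency in miniature: each element depends on the
// accumulator written by the previous iteration, so the iterations
// cannot simply be spread across SIMD lanes. A compiler must keep such
// a loop scalar or restructure it (e.g. into a parallel prefix sum).
std::vector<int> running_sum(const std::vector<int>& in) {
    std::vector<int> out(in.size());
    int acc = 0;
    for (std::size_t i = 0; i < in.size(); ++i) {
        acc += in[i];   // loop-carried: reads the previous iteration's acc
        out[i] = acc;
    }
    return out;
}
```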

Re:Anyone want to... (0)

Anonymous Coward | more than 6 years ago | (#19403147)

They're making it easier for n00b h4x0rs to develop leet apps that take advantage of multi-core chips.

Intel - The Software Company (5, Insightful)

Necroman (61604) | more than 6 years ago | (#19402251)

We see Intel mainly as a CPU/chipset maker, but don't pay much attention to their software side. I believe they are one of the largest software development companies in the world. Between drivers, compilers, and all the other goodies to support all their hardware, they spend a lot of time doing software development.

And as much as they develop compilers to optimize code for Intel CPUs, most of the time the code will also see a speed increase on AMD CPUs. Who else do you want developing a compiler but the people who made the hardware it's running on?

Re:Intel - The Software Company (2, Insightful)

Tribbin (565963) | more than 6 years ago | (#19402327)

"Who else do you want developing a compiler but the people who made the hardware it's running on."

You mean like nvidia making nvidia drivers for linux?

Re:Intel - The Software Company (1)

RedElf (249078) | more than 6 years ago | (#19403711)

And just earlier today I was reading a comment right here on slashdot where someone was ranting because ATI wasn't letting the community do the driver development instead of doing it in-house.

Talk about polar opposites swimming in the same pond.

Re:Intel - The Software Company (1)

geekoid (135745) | more than 6 years ago | (#19403801)

I'm sure ATI's drivers being 'teh suck' has a lot to do with that. Seriously, they are always bad and do not take advantage of the chips.

I am pretty sure that if their drivers were well made, the call to open-source their drivers would not be so loud.

Re:Intel - The Software Company (1)

RedElf (249078) | more than 6 years ago | (#19403813)

Ahh yes, I found the comment I was talking about earlier, it can be read right here [slashdot.org].

Re:Intel - The Software Company (0)

Anonymous Coward | more than 6 years ago | (#19403965)

You mean like nvidia making nvidia drivers for linux?
The difference is Intel actually uses Linux. I went to their little show-and-tell in Pittsburgh [intel-research.net] last year and every single machine there was running Ubuntu, except for one MacBook. No Windows.

Re:Intel - The Software Company (4, Interesting)

dmoore (2449) | more than 6 years ago | (#19402371)

I have not tried their compiler, but for the Intel Performance Primitives (IPP), a library of useful MMX/SSE-optimized functions written by Intel, the functions explicitly fall back to slow versions of the code if the library detects an AMD processor, even if the AMD processor has MMX/SSE/SSE2. This kind of behavior is one reason you may not want to trust Intel for your compiler needs if you are planning on doing development for more than just Intel-branded CPUs.

Re:Intel - The Software Company (2, Funny)

Elbereth (58257) | more than 6 years ago | (#19402707)

From the viewpoint of Intel, this is actually good practice. They don't know which features AMD actually supports (through possibly intentional ignorance), and they don't want to cause someone's system to lock up. While I'd rather see my AMD CPU be supported by Intel's compiler, I can understand why they might be reluctant to support certain features, even when the CPU reports support for them.

Anyways, it's not like MMX/SSE are really used for much of anything but benchmarks and voice synthesis. Or, at least, that's what it was like last time I actually cared enough to look.

When I was a kid, we didn't even have MMX. We made do with math coprocessors, and sometimes we didn't even have that. In fact, I remember using CPUs that didn't even have onboard MMUs or support for protected mode operation. Kids today are spoiled. Try using a VIC 20 or TI 99/4a for a few hours, then tell me how important it is to have your competitor design a compiler that optimizes for your CPU.

Re:Intel - The Software Company (1)

Red Flayer (890720) | more than 6 years ago | (#19402791)

Try using a VIC 20 or TI 99/4a for a few hours, then tell me how important it is to have your competitor design a compiler that optimizes for your CPU.
Noob. Try punchcards. At the very least, the Commodore PET. The VIC-20 was an awesomely powerful piece of hardware compared to that.

Re:Intel - The Software Company (0)

Anonymous Coward | more than 6 years ago | (#19402979)

I disagree. Intel disabled the optimizations in a very underhanded way, and they did not come clean about it until they were 'outed'. There is no compelling technical reason for what they did.

The best thing to do when you can't trust Intel's compilers is to just not use them. Who knows what other crappy sneaky things they put in their tools?

Re:Intel - The Software Company (0)

Anonymous Coward | more than 6 years ago | (#19403279)

Who knows what other crappy sneaky things they put in their tools?
IIRC, the source of icc is available for viewing.

Re:Intel - The Software Company (0)

Anonymous Coward | more than 6 years ago | (#19403705)

Maybe some of the sources for the std libraries are available, but I don't think sources for the compiler proper are.

It's pretty much a black box with "crappy sneaky things" in it.

Re:Intel - The Software Company (3, Insightful)

Wesley Felter (138342) | more than 6 years ago | (#19403255)

So if an Intel processor reports SSEn support you assume that it works, but if an AMD processor reports the same feature, you assume that it doesn't work? Great idea.

This matters because the whole purpose of IPP is to take advantage of newer instructions. If you say "new instructions don't matter because no one uses them" it becomes a self-fulfilling prophecy. Optimized libraries could break out of that cycle, but only if they aren't used as competitive weapons.

Re:Intel - The Software Company (2, Insightful)

Bert64 (520050) | more than 6 years ago | (#19403641)

On the contrary, they should check for the presence of the appropriate feature, and then use it.
They should also let you build binaries without those fallback code paths, as a lot of code will never run on older machines (e.g. x86 Macs, which all have at least SSE3).
If someone's system locks up because AMD claimed to support a feature it doesn't actually support, that's AMD's fault, and Intel could claim the moral high ground instead of the other way round.
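The feature-presence check proposed above can be sketched with the GCC/Clang builtin `__builtin_cpu_supports`, which tests the CPUID feature flags themselves rather than the vendor string; the kernel names here are illustrative:

```cpp
#include <cassert>
#include <cstring>

// Dispatch on the instruction set itself, not the vendor string.
// __builtin_cpu_supports is a GCC/Clang builtin that reads the CPUID
// feature flags; the kernel names returned here are illustrative.
const char* pick_kernel() {
#if defined(__x86_64__) || defined(__i386__)
    if (__builtin_cpu_supports("sse2"))
        return "sse2";    // would take the SIMD code path here
#endif
    return "scalar";      // portable fallback for everything else
}
```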

Re:Intel - The Software Company (3, Interesting)

Chandon Seldon (43083) | more than 6 years ago | (#19402481)

It's really useful for a CPU company to develop an optimizing compiler for their hardware. It forces them to understand how their CPU features actually speed up software, and it gives them the opportunity to prove that certain hard optimizations actually work. It would probably be best for everyone if the compiler were open source, but if Intel thinks they need to sell it as a commercial product to justify it financially we still get all of the benefit on their future processor designs.

Re:Intel - The Software Company (1)

cyfer2000 (548592) | more than 6 years ago | (#19403291)

I thought every CPU provider does. Or they make CPUs compatible with existing compilers.

Re:Intel - The Software Company (1)

Chandon Seldon (43083) | more than 6 years ago | (#19404085)

All CPU makers make their CPUs compatible with existing compilers - but that completely ignores new instructions like SSE4. For that sort of thing, either the programmer has to take advantage of it with hand-coded assembly, or someone needs to write a compiler optimized for the new instruction set. If the CPU vendor does it themselves rather than waiting for Microsoft and the GNU project to get around to it, they can see results faster and feed information from/to hardware design more quickly and efficiently.

Re:Intel - The Software Company (1, Informative)

Anonymous Coward | more than 6 years ago | (#19403745)

It would probably be best for everyone if the compiler were open source, but if Intel thinks they need to sell it as a commercial product to justify it financially we still get all of the benefit on their future processor designs.

If it were open source you could modify it to work on AMD processors. In the past, I specced out an Intel workstation rather than AMD specifically because my software used the Intel Math Kernel Libraries. Granted, it was only one computer many years ago (when AMD was faster than Intel), but when you see companies building big Beowulf clusters or considering processor/math-intensive apps, I bet there's a few extra sales to be made there.

And yes, the MKL gave me 60x speedup over hand-written matrix algebra. Big deal when things go from an hour to a minute.

Re:Intel - The Software Company (1)

15Bit (940730) | more than 6 years ago | (#19402681)

You can say the same for most of the other major chip makers - IBM and Sun both do the same, and in years gone by DEC used to make an arse-kicking Fortran compiler for the Alpha. In fact, probably the only major chip producer that doesn't make compilers is AMD.

They don't have to. (1)

Ayanami Rei (621112) | more than 6 years ago | (#19404125)

PGI and Sun both make auto-parallelizing and optimizing Fortran/C/C++ compilers specifically for K8 (and i386).

Re:Intel - The Software Company (1)

jimicus (737525) | more than 6 years ago | (#19403375)

Who else do you want developing a compiler but the people who made the hardware it's running on.

My goodness... you can't mean... that the company which developed the hardware is in a strong position to get a few people from the hardware dev team onto the team developing software for it?! And that these people are well placed to know what's worth optimising, where and how?

No shit, Sherlock.

The only amazing thing about this is that it is such a novel insight that it is necessary for you to be modded as such.

Re:Intel - The Software Company (2, Funny)

KingMotley (944240) | more than 6 years ago | (#19403917)

Who else do you want developing a compiler but the people who made the hardware it's running on.
Who else do you want developing an office suite but the people who made the operating system it's running on.

GCC (3, Insightful)

Anonymous Coward | more than 6 years ago | (#19402255)

Will they add these features to GCC or make docs available so others can?

Yeah, GCC is Key... (1)

mkcmkc (197982) | more than 6 years ago | (#19403427)

I recently downloaded Intel's compiler to see whether my C++ code would run faster on it--I ended up giving up on it (for now) after spending a day trying to get it to work. I'm sure their compiler has many whizzy features in it, but for me, they don't really matter unless they're in GCC. I hope Intel will realize that it's in their interest to migrate these advances there.

No and yes (2, Informative)

Sycraft-fu (314770) | more than 6 years ago | (#19403657)

No, they won't add them to GCC. Intel's compiler competes with GCC, and it is the best there has ever been: in every test I've ever seen on Intel chips, it comes out ahead, and I'm sure they've no interest in changing that. However, yes, the docs are out there. Intel's processors are extremely well documented and you can get everything you need. The problem isn't that the GCC people are having to guess how the processors work; the problem is that their coders aren't as good as Intel's at optimising their compiler. This isn't helped by the fact that GCC targets many architectures where ICC targets only one.

However don't expect Intel to help GCC out. Their answer will just be "buy the ICC".

Re:No and yes (0)

Anonymous Coward | more than 6 years ago | (#19404157)

Nope. Intel has paid for development on GCC multiple times. The Itanium, for example, had GCC and full Linux distributions before Windows (and I think before wide release).

Will they port their newest features? Probably not. Will they ensure that GCC advances at a pace where Linux runs equally well (or better) on Intel as on AMD? Probably. They have nothing to gain from closing specs and not publishing their research, especially as work that is shown to work on GCC can later be reimplemented from scratch on other compilers, giving Intel processors an edge.

Re:GCC (0)

Anonymous Coward | more than 6 years ago | (#19403873)

I think this is mostly done by the gcc people. GCC already has sse4 support in mainline (specifically, support for ssse3, sse4.1, and sse4.2). It also has an autovect-branch (tree-ssa is already in mainline). Intel® Threading Building Blocks seems to be yet another threading library, though only for C++, and only on Intel® machines (unlike, e.g., pthreads or OpenMP). If (for some reason) you still want to use this less-portable threading library, you can use it perfectly well with gcc.

Re:GCC (1)

sofla (969715) | more than 6 years ago | (#19404043)

The Thread Building Blocks look interesting, and are available for multiple compilers (MS, Intel, GCC) and platforms (Win, Linux, Mac). Reading between the lines, it looks like their strategy is to convince you to use TBB for your threading (maybe not a bad idea, by-hand threading code is boring and error prone), and then profile your code with VTune (which I have used before, its good as profilers go) plus the VTune plugin to do thread performance analysis.

As far as this actual announcement goes, there's not much to it. Looks like they upgraded their Fortran and C++ compilers to work better with TBB. TBB and VTune are Intel's strategy when it comes to optimizing for multiple cores. No surprise there - VTune has been the tool for optimizing for Intel for a while now, if you want to optimize for Intel (which != optimizing for x86).

So depending on what you mean by "add these features to GCC", they already have. If you're looking for GCC extension support in the Intel compiler, though - well, I can't help you there. And likely neither can they, thanks to GPL. You painted yourself in that corner when you chose to code to the GCC compiler (and not ANSI) in the first place. Don't get me wrong, I like GCC, but coding to a specific compiler is always a bad idea, in my book.

Moore's law onto programmers?! (1)

iknownuttin (1099999) | more than 6 years ago | (#19402279)

FTFA: I've outlined before how multicore moves the burden of taking advantage of Moore's law from hardware onto developers.

From Wikipedia [wikipedia.org]:Moore's Law is the empirical observation made in 1965 that the number of transistors on an integrated circuit for minimum component cost doubles every 24 months

Alrighty, then. It's been a while since my CS classes. How does that apply to software? Does he mean that instead of increasing transistors on a single chip, the transistors are virtually increasing by using multiple cores?

Re:Moore's law onto programmers?! (0)

Anonymous Coward | more than 6 years ago | (#19402403)

Moore's law states that the number of transistors you can squeeze into a certain area effectively doubles every 24 months. Most people interpret this to mean that every 2 years, your performance will double (you have twice as many transistors after all). Just adding a second core will not double your performance unless you have written your programs to actually run on both cores at once. Thus, instead of using the extra transistors to improve performance of the CPU, they will simply add a second core and put the work of optimizing programs on programmers.

Re:Moore's law onto programmers?! (0)

Anonymous Coward | more than 6 years ago | (#19402505)

Multicore allows you to double your transistor count by just copying the existing design. Thus Moore's law keeps going. The downside is that simple, single-thread programs no longer go twice as fast like they did in previous doublings. Thus, the programmers have to do much more work to take advantage of the extra transistors.
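The point that single-thread programs no longer double in speed is usually quantified with Amdahl's law: if only a fraction p of a program's work can run in parallel, n cores yield a speedup of 1 / ((1 - p) + p/n), which flattens out no matter how large n gets. A one-line sketch:

```cpp
#include <cassert>

// Amdahl's law: the serial fraction (1 - p) bounds the achievable
// speedup regardless of the core count n.
double amdahl_speedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}
```

For example, with 90% of the work parallel, even 1000 cores cannot push the speedup past 10x.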

Re:Moore's law onto programmers?! (1, Interesting)

Anonymous Coward | more than 6 years ago | (#19402795)

I know people like car analogies, bear with me. Let's imagine a CPU is like a horse.

For years, people have been selectively breeding horses (CPUs) to allow them to go faster and faster. If you wanted a job done quickly, you simply bought a faster horse. That worked great until recently. Now, there isn't much improvement. So, instead of a faster horse, people are offered more horses for the same price. That's great, except that now the challenge becomes managing multiple horses in parallel and figuring out how to accomplish some of the same tasks as before, using all those horses efficiently. Hitching two horses to a wagon is easy. Four, harder. Eight, even more difficult. Eventually there is a practical limit, but in all cases, all you get from the horse breeder is a horse (the hardware). It's up to you to figure out how to put them together to get the job done effectively (software). Worst case, it's as if you had only a single horse that isn't much faster than before. Hence, the burden of an actual performance increase is shifting more to the user of the hardware than has been the case in the past. Adding more horses isn't as easy or simple a solution as before.

To stretch the analogy even more ridiculously, I guess this announcement is like Intel just released their latest and greatest version of a multi-horse harness especially for scientific stagecoaches (FORTRAN).

Re:Moore's law onto programmers?! (1)

creimer (824291) | more than 6 years ago | (#19403039)

The problem with a multi-horse harness, if not configured properly, is that the extra manure from the horses can cause the stagecoach to slide around and force the driver to do more work to keep everything in line. Plus it can stink to high heaven. ;)

Re:Moore's law onto programmers?! (1)

kybred (795293) | more than 6 years ago | (#19403455)

Moore's Law is the empirical observation made in 1965 that the number of transistors on an integrated circuit for minimum component cost doubles every 24 months

Alrighty, then. It's been a while since my CS classes. How does that apply to software? Does he mean that instead of increasing transistors on a single chip, the transistors are virtually increasing by using multiple cores?

No, it means the number of programmers on a project doubles every 24 months.

learn better parallel programming techniques? (3, Interesting)

sr. taquito (996805) | more than 6 years ago | (#19402289)

If compilers keep abstracting away the interface between the programmer and the cpu, programmers will be less likely to write better code or learn new techniques that take advantage of all the power a few extra cores can provide right? That's just my take on it. Then again, I also think learning parallel programming techniques is fun, and a little more academic than most career programmers might like.

Re:learn better parallel programming techniques? (2, Insightful)

BoChen456 (1099463) | more than 6 years ago | (#19402599)

If compilers keep abstracting away the interface between the programmer and the cpu, programmers will be less likely to write better code or learn new techniques that take advantage of all the power a few extra cores can provide right?

If compilers keep abstracting away the interface between the programmer and the cpu, and getting better at optimization, programmers won't need to write better code or learn new techniques to take advantage of all the power a few extra cores can provide.

Instead the programmer can concentrate on writing more understandable code.

Re:learn better parallel programming techniques? (1)

suggsjc (726146) | more than 6 years ago | (#19403977)

First, I agree (somewhat).
I've got a couple of thoughts that I'm not sure how to get out, so just see if can put the pieces together.

Low-level languages like C are powerful because they can interact (almost) directly with the hardware. Then there are other languages that are built on top of those languages that are designed to hide complexity and allow programmers to code more efficiently at the cost of non-optimized code.
I didn't RTFA, but if the compilers start taking liberties and "hiding the complexity" of writing multi-threaded code, then unless they are absolutely perfect how would someone truly take advantage of hardware even while programming in C?

I'm sorry that I don't have the time to refine my thoughts, but basically, if low-level languages (LLLs) start becoming more like mid/high-level languages, then how do you go back to being able to optimize code like you did back when LLLs were real LLLs?

Re:learn better parallel programming techniques? (0)

Anonymous Coward | more than 6 years ago | (#19402619)

we've already seen that with 'higher' level languages like Java and C# - programmers today don't really understand how to use memory properly, and as a result apps use up masses of the stuff, because 'the garbage collector will handle it for me'.

However, in this case the advantages are in better handling of the code you write. you still have to do it right. This is not an excuse for sloppiness.

Parallel programming is dull; it's all splitting code into separable sections and using the proper synchronisation mechanisms to prevent data corruption. Making it work fast (i.e. with few context switches, no long-held locks, etc.) is more fun. Career programmers like getting their work done well; we don't care at all if parallel programming stays in the university or is used only for specialist applications.

Re:learn better parallel programming techniques? (1)

peragrin (659227) | more than 6 years ago | (#19402713)

Have you run Windows? Yep, that's C with poorly utilised memory.

Good programmers can write good, highly optimized, and mostly bug-free code.

Unfortunately, good programmers are like good car drivers: everyone says they are good, but very few really are.

Re:learn better parallel programming techniques? (2, Insightful)

BlueCollarCamel (884092) | more than 6 years ago | (#19402869)

Actually, I've always thought that telling the compiler what you wanted to do, instead of how to do it, would result in the compiler being able to determine the best path to take for a given task.
Even more so for interpreted/compiled on the fly languages. They can be dynamically compiled to take advantage of whatever hardware is available on each machine, without the developer having to code for it.

Re:learn better parallel programming techniques? (1)

Kupek (75469) | more than 6 years ago | (#19402871)

Aside from the auto-vectorizing stuff, most of Intel is advertising does not happen automatically. Instead, they provide abstractions that make it easier to write high performance multithreaded code. But programmers will still have to do the hard stuff, which is figure out how best to parallelize their algorithms, distribute their data, and synchronize their threads.
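The "hard stuff" listed above -- splitting the work, keeping data thread-private, and synchronizing only at the merge -- in a minimal generic C++ sketch (not Intel's API):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <mutex>
#include <thread>
#include <vector>

// Each thread scans its own slice and accumulates privately; the shared
// total is touched only under a lock, once per thread. A generic sketch
// of manual parallelization, not Intel's API.
long parallel_count(const std::vector<int>& data, int target) {
    const unsigned workers_n = 4;
    const std::size_t chunk = (data.size() + workers_n - 1) / workers_n;
    long total = 0;
    std::mutex m;
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < workers_n; ++t) {
        workers.emplace_back([&, t] {
            std::size_t begin = std::min(data.size(), t * chunk);
            std::size_t end = std::min(data.size(), begin + chunk);
            long local = 0;                       // thread-private, no locking
            for (std::size_t i = begin; i < end; ++i)
                if (data[i] == target) ++local;
            std::lock_guard<std::mutex> lock(m);  // synchronize only the merge
            total += local;
        });
    }
    for (auto& w : workers) w.join();
    return total;
}
```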

Re:learn better parallel programming techniques? (0)

Anonymous Coward | more than 6 years ago | (#19402967)

This is not new at all. Compilers (and interpreters) have been abstracting away the interface between the programmer and the CPU since the beginning. Very-high-level languages like LISP and Prolog date from the 1960s and '70s.

1970s-era LISP systems were built on special hardware suited to LISP's functional style, evaluating the code more quickly than standard CPUs could. Today, each independent S-expression could be evaluated on a separate CPU in parallel.

Prolog is an inherently parallel language: a Prolog query can build several knowledge trees at a time for the same variables. On the proposed Intel 80-core chip, up to 80 answers could be found concurrently.

I favor building parallelism into the compilers and interpreters where possible.

Some of us only want to *USE* it (1, Insightful)

Anonymous Coward | more than 6 years ago | (#19402999)

If I am writing a quantum physics calculation package, or compiling one (let us say Molpro 96), I want it to correctly use the many cores I assign it to run on, via a highly parallelized Fortran compiler. I do not want to know how or why it does it; I just want it to do it. I am not a computer scientist, I am not writing a thesis on computer science, and I don't care one iota about any of this; I leave that to the computer scientists. Neither does my chief care about academic computer science. The same goes for Intel's multicore. The fun fact is that most of us just want to use the power of the many cores, and don't care a bit about the how and why.

Re:learn better parallel programming techniques? (1)

zakeria (1031430) | more than 6 years ago | (#19403913)

All compilers abstract away the interface between the programmer and the CPU; it's the main reason you use a compiler in the first place.

Looks like something they rushed out (3, Informative)

Doctor Memory (6336) | more than 6 years ago | (#19402337)

I was looking at the Thread Building Blocks paper, and it reads like it was somebody's hastily-scribbled draft:

"The Intel Threading Tools automatically finds correctness and performance issues" (The tools finds?)
"Along with sufficient task scheduler and generic parallel patterns" (Who has insufficient task scheduler?)
"automatic debugger of threaded programs which detects many of thread-correctness issues such as data-races, dead-locks, threads stalls" (Sarcasm fails me...)

And that's just in the first few paragraphs, I haven't even gotten to the real meat of the article!

I'm used to informative, well-written and reasonably complete technical documentation from Intel — WTF is this?

You're lucky... (1, Funny)

Anonymous Coward | more than 6 years ago | (#19402425)

... the version before this one was in ebonics.

Re:Looks like something they rushed out (0)

Anonymous Coward | more than 6 years ago | (#19402647)

Who has insufficient task scheduler?

Apparently you haven't used ......

What? That's nothing! (0)

Anonymous Coward | more than 6 years ago | (#19402789)

There was an ad today on the slashdot mainpage that read: "Want to jump of your version control tool?"

Re:Looks like something they rushed out (1)

Red Flayer (890720) | more than 6 years ago | (#19402895)

"The Intel Threading Tools automatically finds correctness and performance issues" (The tools finds?)
No, the "Intel Threading Tools" is a product, in the singular -- it finds. Maybe Intel threading tools would find, but notice the subtle difference?

"Along with sufficient task scheduler and generic parallel patterns" (Who has insufficient task scheduler?)
OK, so it's a bit awkward to parse, but isn't it obvious by the grammar that "sufficient" modifies both "task scheduler patterns" and "generic parallel patterns"?

"automatic debugger of threaded programs which detects many of thread-correctness issues such as data-races, dead-locks, threads stalls" (Sarcasm fails me...)
Oh wait, nevermind. This sentence shows that the author truly can't write clearly. Silly me, thinking that the author intentionally used correct grammar, instead of stumbling upon it by accident. Sorry about that.

I'm used to informative, well-written and reasonably complete technical documentation from Intel -- WTF is this?
This? This is Slashdot. Maybe Intel decided to go with the flow and make it seem like they are a true Slashdot insider. It's quite a nice bit of trickery, really -- if they really want it to succeed, they'll issue the same press release tomorrow.

Re:Looks like something they rushed out (1)

Mike1024 (184871) | more than 6 years ago | (#19404101)

"automatic debugger of threaded programs which detects many of thread-correctness issues such as data-races, dead-locks, threads stalls" (Sarcasm fails me...)
Oh wait, nevermind. This sentence shows that the author truly can't write clearly.


Couldn't you make that sentence sound pretty normal just by removing that one errant 'of', i.e.

"[The software includes an] automatic debugger of threaded programs, which detects many thread-correctness issues such as data-races, dead-locks, threads stalls [...]"

Although, one would think Intel would be more careful about proof-reading their sales literature; western developers (i.e. the target of English-language technical sales literature) probably prefer not to be reminded that jobs like theirs are being exported to places like India, which poorly-written technical documentation may do.

Re:Looks like something they rushed out (2, Informative)

presearch (214913) | more than 6 years ago | (#19402937)

The Intel Compiler Lab is based in two Russian cities - Moscow and Novosibirsk.
Probably the source of the less than optimal text.

How's the documentation on -your- compiler coming along?

Re:Looks like something they rushed out (1)

Doctor Memory (6336) | more than 6 years ago | (#19403739)

The Intel Compiler Lab is based in two Russian cities - Moscow and Novosibirsk.
Probably the source of the less than optimal text.
The point is, whatever tortured, twisted prose was submitted should have been edited and polished before going out with an Intel logo on it. This was a white paper on the corporate web site, not a post on some random Intel engineer's blog — different standards apply.

Seriously, check out this opening paragraph from the Intel® 64 and IA-32 Architectures Application Note:
TLBs, Paging-Structure Caches, and Their Invalidation

The Intel® 64 and IA-32 architectures may accelerate the address-translation process by
caching on the processor data from the structures in memory that control that process.
Because the processor does not ensure that the data that it caches are always consistent
with the structures in memory, it is important for software developers to understand how
and when the processor may cache such data. They should also understand what actions
software can take to remove cached data that may be inconsistent and when it should do
so. The purpose of this application note is to provide software developers information about
the relevant processor operation. This application note does not comprehend task switches
and VMX transitions.
Notice how they even get the fact that "data" is plural right? That's the kind of documentation I'm talking about.

Re:Looks like something they rushed out (1)

geekoid (135745) | more than 6 years ago | (#19403753)

A large company develops and release an profession compiler, decent documentation is a reasonable expectation. To say snide comments does not help, and shows that you have no real argument.

Grow up.

Re:Looks like something they rushed out (1)

Doctor Memory (6336) | more than 6 years ago | (#19403919)

develops and release an profession compiler
Too easy, moving on....

To say snide comments does not help, and shows that you have no real argument.
Um, I'm not arguing. I'm making an observation. If you disagree with me, then you're making the argument. Which is fine, just so we know where we stand. Nice non-sequitur, though.

Grow up.
So expressing dismay that a respected corporation is showing less-than-professional work is a sign of immaturity? Buy a vowel and solve the puzzle, honey, Real World moves [jargon.net] and all...

OK, I'll Byte (2, Interesting)

Skjellifetti (561341) | more than 6 years ago | (#19402409)

As the company raises the core count with each generation of new products, it will get harder and harder for programmers to manage the complexity associated with all of that available parallelism.

As a programmer, I already have abstractions such as Active Objects [wustl.edu]. While this may make it easier for compiler writers or kernel hackers, what benefits does it bring to us ordinary mortals?

The inevitable... (4, Funny)

R2.0 (532027) | more than 6 years ago | (#19402411)

Cue "Fortran is Dead" comments in

30
20
10

Re:The inevitable... (0)

Anonymous Coward | more than 6 years ago | (#19402501)

Fortran is LIFE!

Fortran is dead? (0)

Anonymous Coward | more than 6 years ago | (#19402629)

I thought it was BSD that was dead?

It is, you know.

Re:The inevitable... (4, Insightful)

TeknoHog (164938) | more than 6 years ago | (#19403131)

Fortran is dead, and it has had native parallel math since 1990. C is alive and it needs ugly hacks to get parallel math.

I got yer dead Fortran... (0)

Anonymous Coward | more than 6 years ago | (#19404113)

...hangin' right here. [data-backu...torage.com]

Note the caption at the bottom of the photo that says how FORTRAN will make the machine easy to use!

intel's product page (3, Informative)

non (130182) | more than 6 years ago | (#19402431)

the intel product page has somewhat more detail. it can be found here [intel.com].


I dont understand this statement: (5, Insightful)

JustNiz (692889) | more than 6 years ago | (#19403387)

>> As the company raises the core count with each generation of new products, it will get harder and harder for programmers to manage the complexity associated with all of that available parallelism.

I'm very surprised and disappointed by the pervasiveness of the incorrect myth, promoted even amongst supposedly technically knowledgeable groups, that:
a) writing multithreaded code is terribly difficult, and
b) you need to implement code with the same number of threads as your target hardware has cores.
Neither of these is true, at least for the PC architecture.

The way to develop multithreaded code is to exploit the natural parallelism of the problem itself. If the problem decomposes down most neatly into one, three or 6789 threads, then design and write the implementation that way. Consequently the complexity of the problem does not increase as the number of cores available increases.

In the PC architecture case, attempting to design your code based on the number of cores in your target hardware just leads to a twisted and therefore bad and also non-portable design.

I'm surprised how few developers seem to understand that it's in fact OK, normal, and often desirable to have more than one application thread running on the same core. You really can't ensure, or even assume, that your multi-threaded app will get one core per thread even if the hardware has enough cores, or that it will work best if it does, since core/thread allocation is dynamically scheduled by the OS depending on load. Not to mention there are all sorts of other apps, drivers and operating system tasks running concurrently too; depending on each core's load, one app-thread per core may not be the optimal approach anyway.

Re:I dont understand this statement: (1)

vidarh (309115) | more than 6 years ago | (#19403647)

The problem is that if you have a problem that decomposes neatly into four parts, and you want to take advantage of new systems with far more cores, the work needed to decompose the problem further may be orders of magnitude more complex than the original four-way decomposition. The problem isn't when it decomposes naturally into more parts than you have cores, but when it decomposes into fewer.

Developers who fail to handle that will be unable to compete with those who can: as the number of cores in relatively low-end systems increases, not parallelizing your apps sufficiently leaves you unable to take advantage of the full potential of the systems they run on.

Re:I dont understand this statement: (1)

bcd (675118) | more than 6 years ago | (#19403863)

>> If the problem decomposes down most neatly into one, three or 6789 threads, then design and write the implementation that way.

Agreed. But the problem is that most programs are inherently serial in nature. Intel and others are targeting multi-core everywhere, not just the highly parallel scientific community, but the average desktop as well.

These Intel tools are trying to solve the problem by letting you write an apparently single-threaded application, that the compiler turns into something multi-threaded under the covers. There's no harm in not exploiting the extra parallelism available, but you're missing out on some potential performance if you don't.

The other approach is to make programmers think about the parallelism. In my experience, most programmers just aren't good at this. Some argue that we need better primitives than just semaphores, queues, etc., but I think it's human nature to think serially and that "thinking parallel" all the time just isn't going to happen.
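One of the primitives mentioned above can be sketched in a few lines of standard C++ (an illustrative toy, not a production design): a thread-safe queue built from a mutex and a condition variable, where pop() blocks until an item arrives.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>

// Minimal thread-safe queue: push() hands an item to any thread blocked
// in pop(). Illustrative only -- a production queue would also need
// shutdown handling and probably a capacity bound.
template <typename T>
class SyncQueue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            items_.push(std::move(value));
        }
        ready_.notify_one();  // wake one waiting consumer
    }

    T pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        ready_.wait(lock, [this] { return !items_.empty(); });  // handles spurious wakeups
        T value = std::move(items_.front());
        items_.pop();
        return value;
    }

private:
    std::mutex mutex_;
    std::condition_variable ready_;
    std::queue<T> items_;
};
```

Even this "simple" primitive shows why parallel thinking is hard: the predicate passed to wait() is what guards against spurious wakeups, a detail that is easy to get wrong by hand.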

Personally, I don't think this will be an issue at all for several more years, because systems typically run an SMP-aware OS with lots of processes/threads at a time anyway (just run ps -ef, or open the Windows Task Manager, and see how much is there!). Users are also becoming more sophisticated and multitasking at the user level, e.g. web browsing, listening to music, and whatever else all at the same time. Parallelism at the top should be exploited first; more fine-grained parallelism can be dealt with later, IMO.

Re:I dont understand this statement: (1)

mandolin (7248) | more than 6 years ago | (#19404067)

In the PC architecture case, attempting to design your code based on the number of cores in your target hardware just leads to a twisted and therefore bad and also non-portable design

Additionally, at least for "embarrassingly parallel" problems, it is easy enough to get the number of online processors at runtime, and (slightly harder) make the program use that information to decide how many worker threads to use.
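The runtime sizing described above is a one-liner in standard C++ (a hypothetical sketch; the helper names are mine): ask the OS how many hardware threads are online and size the worker pool from that, rather than from a compile-time constant.

```cpp
#include <thread>
#include <vector>

// Size the pool from the machine we actually run on, not the one we
// compiled for. hardware_concurrency() may return 0 when the count is
// unknown, so fall back to a small default.
unsigned worker_count() {
    const unsigned n = std::thread::hardware_concurrency();
    return n ? n : 2;
}

// Run one copy of `task` per online hardware thread and wait for all
// of them; suitable for embarrassingly parallel work.
void run_parallel(void (*task)(unsigned worker_id)) {
    std::vector<std::thread> pool;
    const unsigned n = worker_count();
    for (unsigned i = 0; i < n; ++i)
        pool.emplace_back(task, i);
    for (auto& t : pool)
        t.join();
}
```

Code written this way scales to new core counts without recompilation, which is exactly the portability point the parent makes.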

Re:I dont understand this statement: (0)

Anonymous Coward | more than 6 years ago | (#19404171)

Your suggestion works in fantasy land where threads are free and switching between them is free also. In the real world, they aren't. They cost stack space and OS data structures. Having lots of threads in a runnable state is not a good thing for most server applications, because you will be spending a lot of time context switching (ie, book keeping) and not as much time getting real work done.

Would the OS benefit from using this? (3, Interesting)

wazzzup (172351) | more than 6 years ago | (#19403551)

I know OS X is compiled using GCC, but I wonder if Apple would see performance gains by using Intel's compilers instead. If they did, would doing so somehow introduce problems? Basically, I'm wondering if there would be a downside to using the Intel-optimized compilers as opposed to the all-purpose GCC compiler.

As an aside, Linux is obviously compiled using GCC, but I wonder whether Microsoft compiles Windows using the Intel compilers.

Re:Would the OS benefit from using this? (0)

Anonymous Coward | more than 6 years ago | (#19403777)

For applications that spend most of their time in user mode, it won't matter if Apple uses icc. Intel's compiler generally can produce faster code than gcc. Until now, though, gcc was the only way to compile 64-bit EM64T code on a Mac (and then, restricted to command line apps, until 10.5 Leopard comes out to support GUI apps). EM64T/AMD64 code runs faster than X86 code in most applications, due in large part to twice the number of registers being available, so gcc was both the only 64-bit game in town for Macs, and it tended to produce the fastest code on a Mac (as long as it was 64-bit). The new icc can produce both 32-bit X86 and 64-bit EM64T code, though, which should squeeze even better performance out of MacIntels. For apps that spend most of their time in user mode, this will be a big benefit, regardless of whether Apple uses icc.