
Intel Reveals the Future of the CPU-GPU War

CowboyNeal posted about 7 years ago | from the state-of-the-chip dept.


Arun Demeure writes "Beyond3D has once again obtained new information on Intel's plans to compete against NVIDIA and AMD's graphics processors, in what the Chief Architect of the project presents as a 'battle for control of the computing platform.' He describes a new computing architecture based on the many-core paradigm with super-wide execution units, and the reasoning behind some of the design choices. Looks like computer scientists and software programmers everywhere will have to adapt to these new concepts, as there will be no silver bullet to achieve high efficiency on new and exotic architectures."


Great! (3, Informative)

Short Circuit (52384) | about 7 years ago | (#18696287)

As I recall, AMD's Athlon beat out the competing Intel processor in per-clock performance, partially as a result of having a more superscalar architecture. It's nice to see that, with the NetBurst architecture dead, Intel's finally taking an approach that's expandable and extensible.

The CPU wars have finally gotten interesting again. I'm going to go grab some popcorn.

Re:Great! (4, Interesting)

JordanL (886154) | about 7 years ago | (#18696385)

So when Intel decides that it's time to implement new architectures and force new methods of coding it's an awesome thing, but when Sony does it people tell them to stop trying to be different... I know people will cry about the console market being different, but the principles behind the decisions are the same. If people cried about the Cell I expect them to cry about Intel's new direction. And this had to be said... I have karma to burn.

Re:Great! (3, Funny)

nuzak (959558) | about 7 years ago | (#18696543)

Intel doesn't go around telling us that their design will push 17 hojillion gigazoxels, with more computing power than Deep Blue, HAL, and I AM put together, in order to render better-than-real detail in realtime while simultaneously giving you a handjob and ordering flowers for your gf.

Re:Great! (4, Interesting)

JordanL (886154) | about 7 years ago | (#18696587)

Speaking as someone who wrote reports on the Cell as early as 2004, it was mostly the people who thought Sony was the devil himself who hyped it up.

Easiest way to make sure a product doesn't meet expectations is to raise expectations.

Great expectations! (2, Funny)

Anonymous Coward | about 7 years ago | (#18697241)

"Easiest way to make sure a product doesn't meet expectations is to raise expectations."

Sex with geeks is great!

Re:Great! (2, Interesting)

Short Circuit (52384) | about 7 years ago | (#18696547)

I thought Sony's processor design was awesome, and I still do.

Re:Great! (5, Informative)

strstrep (879828) | about 7 years ago | (#18697255)

It's a good design, it just doesn't seem like a good design for a video game system. It's a general purpose CPU attached to several CPUs that essentially are DSPs. DSP programming is very weird, and you need to at least understand how the device works on the instruction level for optimal performance. A lot of DSP code is still written in assembly (or at the very least hand-optimized after compilation).

It's very expensive to have DSP code written compared to normal CPU code, and video game manufacturers have been complaining that the cost of making a game is too high. Also, most of the complexity in a video game nowadays is handled by the GPU, not the CPU. Now the Cell would be great for lots of parallel signal processing or some other similar task, and I bet it could be used to create a great video game, but it would be prohibitively expensive.

The Cell is a great solution to a problem. However, that problem isn't video games. A fast traditional CPU, possibly with multiple cores, attached to a massively pipelined GPU would probably work better for video games.


yay (2, Interesting)

Anonymous Coward | about 7 years ago | (#18696299)

Maybe they will ditch the shiatty 950 graphics chip that is all too common in notebook computers.

Re:yay (1)

phalse phace (454635) | about 7 years ago | (#18696381)

But not everyone needs to have a GeForce Go 7900 or whatever GPU in their notebooks. The GMA 950 is perfectly suitable for those who just use their notebooks to browse the web, send emails, etc.

Re:yay (1)

Joe The Dragon (967727) | about 7 years ago | (#18696465)

But not for Vista users. Why can't Intel make an onboard video chip with its own RAM?

Re:yay (0)

Anonymous Coward | about 7 years ago | (#18697077)

But not for Vista users. Why can't Intel make an onboard video chip with its own RAM?

GMA 950 works great for Mac OS X, though. Vista sucks.

Re:yay (0)

Anonymous Coward | about 7 years ago | (#18697211)

Onboard 950 works just fine with Beryl/Compiz. And Mac OSX. Looks to me like Microsoft's doing something wrong.

Re:yay (2, Informative)

ewhac (5844) | about 7 years ago | (#18696475)

The 945/950 GMCH is common in notebooks because it's easy to implement (Intel's already done almost all the work for you), it's fairly low-power and, most important of all, it's cheap.


Re:yay (5, Insightful)

Anonymous Coward | about 7 years ago | (#18696517)

Funny, I wouldn't consider a mobo without one, because Intel are working towards an open source driver. I'm sick of binary drivers and unfathomable nvidia error messages. At least Nvidia expend some effort; ATI are a complete joke. Even on Windows, ATI palm you off with some sub-standard media player and some ridiculous .NET application that runs in the taskbar. (What fucking planet are those morons on?)

So you can bash Intel graphics all you like, but for F/OSS users they could end up as the only game in town. We're not usually playing the latest first-person shooters; performance only needs to be "good enough".

Re:yay (1)

negative3 (836451) | about 7 years ago | (#18696617)

Exactly my thought - I have never had problems with any of the laptops with Intel graphics chipsets and Linux. In fact, Fedora pretty much kicks ass with the GMA950 on my Macbook, as opposed to my desktop with an ATI card that has 2 DVI outputs (I can never get both outputs to work at the same time in Fedora, but I freely admit I am probably screwing up the video configuration).

Re:yay (1)

Ajehals (947354) | about 7 years ago | (#18696791)

Ahem. Mail me your X configuration and I'll sort it for you if you want (or have a damn good go); what you describe has become something of a regular occurrence in my part of the world. My email is shown, but replace the domain with gmail.com as I'm out and about at the moment.

As for Intel graphics on notebooks, I agree: there is nothing like having a component in a notebook that you don't have to worry about being some bizarre, non-standard, randomly hacked (or firmware-crippled) part, especially when you are talking about the graphics card.

Not a troll -- Meta-Mod unfair (1, Insightful)

Anonymous Coward | about 7 years ago | (#18696539)

This is a valid criticism and comment.
The 950 is barely passable, especially with Vista.
Not really Intel's fault. Their target was the "barely passable" segment, leaving the real GPU makers the rest of the field. Probably Intel's main reason to offer this was the OEMs' need for one-stop shopping from Intel.

My Dell has the 950 and Vista Business and I wish I had upgraded to a more powerful GPU.

BTW, I am not the same AC as the original post.

Re:Not a troll -- Meta-Mod unfair (2)

Blahbooboo3 (874492) | about 7 years ago | (#18696759)

Uh, turn off all the cutesy graphics and Vista would be fine. The Windows Classic theme removes all the GUI candy, and the 950 is fine...

Re:Not a troll -- Meta-Mod unfair (1)

postmortem (906676) | about 7 years ago | (#18697261)

...must be even better if you run DOS on it.

Seriously, intel shitsets offer almost nothing. It is not just 3D that is lacking, but hardware decoding features, which are perhaps more important.

But you can't expect more from a $2 chip.

Most people are not willing to pay more; that is the root of the problem. Intel just delivers a cheap solution. Then some users figure "oh, integrated graphics sux and I can't do a thing"... and then it is too late.

Re:Not a troll -- Meta-Mod unfair (0)

Anonymous Coward | about 7 years ago | (#18696805)

You decided to buy an OS that requires a high-end video card to be usable; it's your fault, not Intel's.

Sure there is (5, Insightful)

Watson Ladd (955755) | about 7 years ago | (#18696319)

Abandon C and Fortran. Functional programming makes multithreading easy, and programs can be written for parallel execution with ease. And as an added benefit: goodbye buffer overflows and double frees!
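To make the claim concrete (sketched in Python rather than a functional language, since the idea carries over; the function names are invented for the example): a pure function touches no shared mutable state, so fanning it out across threads needs no locks at all.

```python
from concurrent.futures import ThreadPoolExecutor

# A pure function: its result depends only on its argument and it
# mutates nothing, so it is safe to run on any thread, in any order.
def sum_squares(chunk):
    return sum(x * x for x in chunk)

def parallel_sum_squares(chunks):
    # No locks or critical sections are needed: purity makes both the
    # fan-out and the recombination order-independent.
    with ThreadPoolExecutor() as pool:
        return sum(pool.map(sum_squares, chunks))

print(parallel_sum_squares([range(1, 101), range(101, 201)]))  # 2686700
```

The same shape works with processes or machines instead of threads, because nothing in `sum_squares` depends on where or when it runs.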

Re:Sure there is (0)

alphamugwump (918799) | about 7 years ago | (#18696489)

Good luck writing that game of yours in Haskell. For that matter, good luck writing a game in anything other than C/C++.

Re:Sure there is (1)

bhtooefr (649901) | about 7 years ago | (#18696519)

There are quite a few games written in Python.

Anyway, you seriously expect someone to click on a TINYURL link in a Slashdot sig?

Re:Sure there is (1)

alphamugwump (918799) | about 7 years ago | (#18696757)

Not any serious games that push as many polygons as possible. A demanding game must get at the raw hardware, and you don't really want to do that with a scripting language. Sure, you can compile python, but good humans will still beat compilers, especially on exotic hardware.

When you're writing a game engine, you want to manage your own memory. For many games, calls to malloc and free are too expensive. You want to be able to shuffle game data around. You want to be able to use inline assembly, for floating point stuff. You want to be able to take a pointer and run it across a goddam buffer. You want actual for loops, not a for each.

Sure, python is pretty good for web programming. And it works OK for small, 2D games, too. But it's not like people avoid functional languages out of ignorance. Imperative programming is still popular for a reason.

Re:Sure there is (2, Insightful)

beelsebob (529313) | about 7 years ago | (#18696861)

Actually, no... That's exactly the point here. These chips are so complex to think about that a human can't possibly juggle it all in their head. A good human will *never* be as good as a compiler for these chips, and a good functional language compiler for these chips will almost always be better than a good procedural language compiler.

Re:Sure there is (2, Interesting)

alphamugwump (918799) | about 7 years ago | (#18697157)

What makes you think a compiler will be able to do it better than a human?

When people write games, they do all kinds of crazy stunts to ensure they have as few multiplications as possible. Can you really trust a compiler to get the code right for that tight inner loop? Figuring out parallelism might be hard, but game programming has always been hard.

Also, you avoided mentioning memory. It doesn't matter if Haskell uses marginally less memory if it's in the wrong place when you need it. Is that texture in RAM, VRAM, or swap? That sort of thing makes a big difference in terms of performance. And games must maintain a certain framerate. Sometimes it's completely unacceptable to use swap, even if the time gets made up later.

I'm not against functional languages; it's just a question of using the right tool for the job. You use high-level languages for high-level tasks, and low-level languages for low-level tasks. If you're writing a compiler or an AI or a raytracer, where realtime performance is not an issue, sure, functional languages are great. But games have always been married to the hardware, and I don't see how that could change any time soon.

Re:Sure there is (0)

Anonymous Coward | about 7 years ago | (#18696913)

Not any serious games that push as many polygons as possible.

"Serious games"? Games are entertainment, and there are vast numbers of entertaining games where performance is not a limiting factor. If you only want to talk about games where performance is critical, then fair enough. But when most people say "games", they mean, you know, anything designed for interactive entertainment. Christ, mobile phones typically have a bunch of games on them, it's one of the most profitable parts of the games industry, so performance clearly isn't universally important.

Sure, you can compile python, but good humans will still beat compilers

Completely and totally irrelevant. What matters when you are talking about performance is whether you can perform well enough to be entertaining. Whether one approach performs better than another only matters to language fanboys; in reality as long as you meet your constraints, absolute performance isn't important. For example, if a really slow implementation of sudoku runs a hundred times slower than something you cook up in assembly, it doesn't matter as long as the crappy implementation runs fast enough, and it's actually the best choice if you can write the stupid, slow version faster than the smart, fast version.

Re:Sure there is (1)

bryxal (933863) | about 7 years ago | (#18697205)

Yeah, certainly not a serious game like Civilization 4. Oh sure, it's not all-out Python, and many of the parts are written in C, but it shows how Python has its uses and its strengths.

Re:Sure there is (1)

beelsebob (529313) | about 7 years ago | (#18696561)

Okay then, I'll go write it with the Haskell OpenGL and OpenAL libraries, and I'll sit here smug in the knowledge that while it may run a little slower now, when these CPUs come along, it'll run 30 or 40 times faster than your C implementation.

Re:Sure there is (3, Interesting)

Goalie_Ca (584234) | about 7 years ago | (#18696699)

Tim Sweeney of Epic has a PDF floating around. Basically, a game can be divided into three groups: shading (using a shader language), numeric computation (functional), and game state/logic. He quoted 500 Gflops for shading, 5 Gflops for numerics, and 0.5 for game state/logic. Shading is a solved problem as far as concurrency is concerned. The numeric computation uses the most CPU; that is mostly small numerical jobs for things like checking collisions, and it can be parallelized and done functionally. The game state/logic and scripting were said to be best done in C++/scripting. That is the part you won't really thread, but you could use STM if needed.

Re:Sure there is (4, Insightful)

QuantumG (50515) | about 7 years ago | (#18696491)

Cool, with that kind of benefit, I'm sure you can point to some significant applications written in a functional language for parallel execution.

This kind of pisses me off. People who are functional programming enthusiasts are always telling other people that they should be using functional languages, but they never write anything significant in these languages.

Re:Sure there is (2, Interesting)

beelsebob (529313) | about 7 years ago | (#18696599)

How about Google search? Is that a big enough example for you? It's written using a functional library called MapReduce.

Re:Sure there is (1)

QuantumG (50515) | about 7 years ago | (#18696635)

Proprietary software doesn't count. Why? Because no one can see the benefits of using a functional language except the privileged few who have access to the source code. So, can you please name a large open source program written in a functional language that benefits from this supposed ease of parallelisation being claimed here? Or are we just supposed to take your word for it?

Re:Sure there is (1)

beelsebob (529313) | about 7 years ago | (#18696653)

How about Perl 6's compiler? Pugs is written entirely in Haskell.

Re:Sure there is (1)

QuantumG (50515) | about 7 years ago | (#18696755)

And the second part of my question? Is it multithreaded? Where are the benefits being claimed?

Re:Sure there is (2, Interesting)

beelsebob (529313) | about 7 years ago | (#18696797)

Can you show me any open source project where massive parallelism is being exploited? I'm not sure I can think of any.

In the meantime, you were given a nice example -- ATC systems and telephone exchanges -- and you can have a research paper about MapReduce; if you don't believe me, believe the peer-reviewed research.

Re:Sure there is (2, Insightful)

Anonymous Coward | about 7 years ago | (#18697265)

If you actually read that paper, you will notice from the code snippet at the end that MapReduce is a C++ library. So it kind of proves the exact opposite of what you intended: people are doing great stuff with the languages that you are saying should be dropped.

Re:Sure there is (3, Informative)

Fnord (1756) | about 7 years ago | (#18696985)

A Perl 6 *interpreter* was written in Haskell, and it's considered a non-performance-oriented reference implementation, purely for exploring the Perl 6 syntax. No one has ever doubted that interpreters and compilers are easier to do in functional languages. One of the first things you learn when you take a class in Lisp is a recursive descent parser. But the version of Perl 6 that's expected to actually perform? Parrot is written in C. The fact that it's nowhere near done is a completely different matter...

Re:Sure there is (1)

Pseudonym (62607) | about 7 years ago | (#18696793)

Proprietary software doesn't count. Why? Because no one can see the benefits of using a functional language except the privileged few who have access to the source code.

And just how many open source applications that rely on parallelisation (and we won't count any kind of parallelisation where there's mostly no coordination required between threads/processes, such as Apache) have been written recently?

Re:Sure there is (1)

QuantumG (50515) | about 7 years ago | (#18696935)

A lot... but not as many as I would expect have been written in functional languages, seeing as they are supposedly so much easier to write multithreaded apps in.

Look, I don't think I'm asking something too unreasonable here. If functional languages are so much easier to write multithreaded apps in, then show me. Either point at something that already exists which demonstrates this claim, or write something new that demonstrates it.

The claim has been made; back it up.

Re:Sure there is (2, Insightful)

Bill Barth (49178) | about 7 years ago | (#18696847)

If you read the paper you linked to below, you'll find that Google's MapReduce is implemented as a C++ library. Specifically, check out Appendix A of their paper.

Re:Sure there is (1)

beelsebob (529313) | about 7 years ago | (#18696887)

Yes, indeed you will -- if you read up on any functional language, you'll always discover that they are eventually based on procedural processes; after all, our microprocessors are all procedural engines. The library itself is functional, and that, as the paper points out, is where they get the speed -- not in the implementation detail that the library is written procedurally.

Re:Sure there is (2, Interesting)

Bill Barth (49178) | about 7 years ago | (#18696957)

Yes, but if you look at the code in Appendix A, it's written in C++. Yes, it calls the MapReduce library, which is functional in style, but the user still must write their code in an essentially procedural language. Yes, the library hides a lot from the user, but so what? They still have to write their code in C++! The OP exhorted us to "abandon C and Fortran," but you're touting a C++ class library as an example of a win for functional languages. I assume you can see why we might object!

Of course Google's programmers can write cool parallel programs with this powerful library, but it's not a functional programming library! It's a C++ library that borrows some map and reduce ideas from functional languages.

I'd rephrase the objection to the OP as: "Show me the climate simulator in Haskell. Show me the ML hypersonic flow code for computing reentry flow over the shuttle."

Re:Sure there is (1)

beelsebob (529313) | about 7 years ago | (#18697007)

On the contrary -- this is a functional language. The reason this can work so fast and be so parallel is that they have referential transparency, and that they can do lots of things without them interfering with each other. That's because they're writing functionally. It doesn't matter whether the translation happens into a high-level language (like C++) or a low-level one when the compiler translates it into machine code.

The bottom line is that until we change architecture, we have to translate our functional programs into procedural code. That doesn't mean, though, that we can't get massive benefit by writing in a functional style (as Google has done).
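The style being argued about can be sketched in a few lines of a procedural host language (Python here; the word-count task and names are invented for the example). The map phase is a pure per-document function and the reduce phase is an associative merge, which is exactly the property that lets a MapReduce runtime shard the work across workers in any order:

```python
from collections import Counter
from functools import reduce

# Map phase: a pure function from one document to partial word counts.
def count_words(doc):
    return Counter(doc.split())

# Reduce phase: an associative, order-insensitive merge, so partial
# results from different workers can be combined in any order.
def merge(a, b):
    return a + b

def word_count(docs):
    return reduce(merge, map(count_words, docs), Counter())

result = word_count(["the cat sat", "the cat ran"])
print(result["the"], result["cat"], result["ran"])  # 2 2 1
```

Nothing here is functional *language* machinery; it is a functional *style* expressed with ordinary procedural building blocks, mirroring the C++ library under discussion.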

Re:Sure there is (3, Insightful)

Bill Barth (49178) | about 7 years ago | (#18697107)

There's no MapReduce compiler. The programmer writes C++. So, at best, the programmer is the functional language compiler, and he has to translate his MapReduce code into C++.

Again, no one disagrees with your idea that writing in a functional style is a good idea for parallel programming, but the OP said that we should give up on two specific languages and pick up a functional one. Clearly a program in a functional style can be written in C++ (which is a superset of C89, more or less, and C is one of the two languages the OP mentioned). The challenge to the OP was to show the world a massively parallel program of significance written in a functional language, not one written in a procedural language in a functional style. You showed us the latter; we'd still like to see the former.

Re:Sure there is (1)

beelsebob (529313) | about 7 years ago | (#18697139)

I think we've come to a violent agreement then. In the meantime, for your real-world programs, see lower down for your telecom systems being functional (Erlang), your ATC systems being functional, and your 3D games being made parallel in functional languages.

Re:Sure there is (1)

DragonWriter (970822) | about 7 years ago | (#18696689)

Cool, with that kind of benefit, I'm sure you can point to some significant applications written in a functional language for parallel execution.

Would the telephone switching systems that Erlang was made for the express purpose of implementing count, or the air traffic control systems also implemented with it, or would it have to be something more of a significant application than that?

If games are more your thing, how about this [lambda-the-ultimate.org] .

This kind of pisses me off. People who are functional programming enthusiasts are always telling other people that they should be using functional languages, but they never write anything significant in these languages.

Please share your definition of "significant".

Re:Sure there is (1)

Watson Ladd (955755) | about 7 years ago | (#18697275)

Erlang is used for some massively parallel problems like telephone switches. SISAL outperformed Fortran on some supercomputers. Jane Street Capital uses OCaml for their transaction processing system. Lisp is used in CAD programs. So FP is being used.

Re:Sure there is (1, Funny)

Anonymous Coward | about 7 years ago | (#18696495)

And hello bloat!

Re:Sure there is (5, Insightful)

ewhac (5844) | about 7 years ago | (#18696527)

Cool! Show me an example of how to write a spinning OpenGL sphere that has procedurally generated textures and reacts interactively to keyboard/mouse input in Haskell, and I'll take a serious whack at making a go of it.

Extra credit if you can do transaction-level device control over USB.


Re:Sure there is (0)

Anonymous Coward | about 7 years ago | (#18696701)

If you want to be serious, take a look at this C++/OCaml raytracer comparison [ffconsultancy.com] and a random benchmark [debian.org].

Functional languages will let us utilize multiple cores without the headaches, and performance is acceptable; to claim otherwise is plain short-sighted. Programmers are not going to be doing manual memory management or thread handling in application-level code.

Re:Sure there is (2, Insightful)

QuantumG (50515) | about 7 years ago | (#18696827)

No, dude, he's asking you to solve real problems using your functional language, which you claim to be so much better at solving real problems. And, as usual, the response from the functional programming crowd is to point at supposed case studies that no one can verify, or at contrived benchmarks.

Re:Sure there is (4, Insightful)

ewhac (5844) | about 7 years ago | (#18696907)

Functional languages will let us utilize multiple cores without the headaches, and performance is acceptable; to claim otherwise is plain short-sighted.

I've done some rudimentary reading on functional programming languages -- mostly Haskell and Lisp (which is sorta FP) -- and I believe you when you cite all the claimed benefits. The architecture of the languages certainly enables it.

However, every time I've tried to get a handle on Haskell, all the examples presented tend to be abstract. In other words, they contrive a problem that Haskell is fairly well-suited to solving, and then write a solution in Haskell, using data structures and representations entirely internal to Haskell. "Poof! Elegance!" Well, um...

I'm a gaming, graphics, and device driver geek, and so my explorations of new stuff tend to lean heavily in that direction. I'm interested in more "concrete" expressions of software operation. Could Haskell offer new or interesting possibilities in network packet filtering? Perhaps, but first you have to read reams of text on how to bludgeon the language into reading and writing raw bits.

The other issue with FP is that it tends to treat all problems as a collection of simultaneous equations -- things that can be evaluated at any time, in any order. There's a huge class of computing problems that can't be described that way. You can't unprint a page on the line printer. There are facilities for sequencing/synchronizing operations (Haskell's monads, for instance), but I get the impression that FP's elegance starts to fall apart when you start using them.

Understand that my exposure to FP in general and Haskell in particular is less than perfunctory, and I am very likely misunderstanding a great deal. I'd like to learn and understand more about FP, but so far I haven't encountered the "Ah-hah!" example yet.


It's an old argument (3, Insightful)

Bozdune (68800) | about 7 years ago | (#18697313)

Functional languages are nicely parallelizable because they don't have side effects. Unfortunately, real life is full of side effects. So a pure functional language has to "hack" the side effect by passing it around everywhere as a closure. That gets old really, really quickly. Which is why useful functional languages contain constructs with side effects (not without accompanying hand-wringing from purists).
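One common flavor of that "passing it around" can be shown in a few lines (Python here, with invented names): instead of mutating a hidden counter, the pure version takes the current state as an argument and hands back the successor state alongside the result -- the bookkeeping that Haskell's State monad automates.

```python
# Impure style: a hidden, mutable counter (a side effect).
_counter = 0
def fresh_name_impure():
    global _counter
    name = f"tmp{_counter}"
    _counter += 1
    return name

# Pure style: the state is an explicit argument and an explicit
# result, threaded through by hand at every call site -- workable,
# but tedious, which is the "gets old quickly" described above.
def fresh_name(state):
    return f"tmp{state}", state + 1

names, s = [], 0
for _ in range(3):
    n, s = fresh_name(s)
    names.append(n)
print(names)  # ['tmp0', 'tmp1', 'tmp2']
```

The pure version is trivially safe to test and parallelize; the price is that every caller has to carry the state along explicitly.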

Back in the '70s, people like Jack Dennis used to promise DARPA that they could parallelize the old Fortran code used for complex military simulations by converting it to a pure functional language. It would be wonderful! Well, they couldn't, and it wasn't.

The above notwithstanding, IF you can coerce a problem into a form in which a functional language can be effectively employed, the benefits can be huge. The code tends to be more elegant and more readable; algorithms that would be difficult to write in an applicative language like C become easy; data structure manipulation is trivial; and so on. Arguments that functional languages are "slow" have been debunked. Arguments that functional languages must be interpreted are wrong.

And all the syntactic nonsense of C++ and the rest of the "object oriented" languages can be (mercifully) shed. Pure functional languages are object-oriented by nature. However, functional languages do have their own idiosyncrasies, such as the infamous Lisp "quote" and implementation-dependent funarg problems. So there are cobwebs still.

To sum up: If you have a hard algorithmic problem to solve, a functional language will probably be a better choice, even if you end up re-coding the algorithm in an applicative language later. If you have a device driver to write, though, roll up your sleeves and get out the C manual. But first: make sure to put a debug wrapper around your mallocs (and pad your malloc blocks with patterns on both sides) so you can trap double-frees, underwrites, and overwrites. It will pay many dividends.

Re:Sure there is (1)

beelsebob (529313) | about 7 years ago | (#18696833)

Okay then... First you write a simple function that generates the points on the sphere based on a stream of poke positions, then you write a function that generates the textures, then you use the OpenGL libraries to generate the output... Then you sit back and gloat, because on these chips, that's gonna run a whole lot faster than your C code.

Re:Sure there is (5, Insightful)

AstrumPreliator (708436) | about 7 years ago | (#18696895)

I couldn't find anything related to procedurally generated textures, not that I really looked. I could find a few games written in Haskell, though. I mean, they're not as advanced as a spinning sphere or anything like that...

Frag [haskell.org] which was done for an undergrad dissertation using yampa and haskell to make an FPS.
Haskell Doom [cin.ufpe.br] which is pretty obvious.
A few more examples [cin.ufpe.br] .

I dunno if that satisfies your requirements or not. Though I don't quite get how this is relevant to the GP's post. This seems like more of a gripe with Haskell than anything. But if I've missed something, please elaborate.

Re:Sure there is (0)

Anonymous Coward | about 7 years ago | (#18696581)

LOL, it's always funny to see you functional programming guys get all worked up every time you see some hope that maybe now, this time, there's a reason functional programming will catch on.

Sorry to say, it won't catch on this time either. The world's programmers have voted. You lost.

Now go hang out with the Amiga fan-boys and former Lisp-machine users, and bemoan your loss.


Not quite (1)

fm6 (162816) | about 7 years ago | (#18697331)

Dude, you think C and Fortran are the main alternatives to functional languages? You're about 20 years out of date! Nowadays, the Big Thing is OOP languages. Everybody programs in C++, Java, or C# these days.

You do have a point. I wrote the Concurrency chapter in The Java Tutorial (yeah, yeah, it wasn't my idea to assign a bunch of tech writers to write what's essentially a CS textbook), struggled for 15 pages just to discuss the basics of the topic, and ran out of time to cover more than half of what I should have. Most of what I wrote was about keeping your threads consistent, statewise. All of which is irrelevant to functional programming, because functional programs don't have state! If Java were a functional language, the whole chapter would have been a paragraph. A short one.

But getting programmers to give up stateful programming is not gonna happen, because most programmers are just not Mr. Spock enough to create whole programs that are all logic and no variables. Yes, I know, functional languages have variables too. But those variables are just fancy semantic shortcuts for the lambda of whatever. To most programmers, a "variable" isn't a symbol, it's a place where you store information. Doing without all the cubbyholes of information that are used in procedural programming is just too difficult for most of us.

(Somebody's going to reply, "It's not that hard! You just..." Dude, you have the Abstract Math gene. Most of us don't. Go away.)

Also, buffer overflows and double frees are a symptom of languages where the application programmer is responsible for managing their own memory. That's the case in C++ (and yes, C and Fortran) but not in more recent languages.

Re:Sure there is (1)

Coryoth (254751) | about 7 years ago | (#18697351)

Abandon C and Fortran. Functional programing makes multithreading easy and programs can be written for parallel execution with ease. And as an added benefit, goodbye buffer overflows and double frees!
Functional languages seem to regularly get trotted out when the subject of multi-cores and multi-threading comes up, but it really isn't a solution -- or, at least, it isn't a solution in and of itself. If you program in a completely pure functional manner with no state then, yes, things can be parallelised. The reality is that isn't really a viable option for a lot of applications. State is important. Functional languages realise this, and have ways to cope. With the ML family you get a little less purity as a tradeoff for a little more practicality. With truly pure languages like Haskell you just have to bind up state into monads, which is fine, except then you still have to reason about how to multithread the monadic portions of code.

This is not to say that functional programming isn't good. It is. It's great! It just doesn't magically make multi-threading problems go away -- at least not for the majority of applications. Functional programming is a fantastic way to go with many benefits, but it isn't a magic cure-all for multi-threading, and I wish people would stop presenting it as if it were. If you want languages that are great for multi-threading, then try languages designed with concurrency in mind, like Erlang, or E, or Oz, or Scala. They won't magically make your problems go away either, but they will, at least, provide you with a paradigm that allows you to think and reason about your multi-threaded code much more easily.

Super-wide? (0)

Toe, The (545098) | about 7 years ago | (#18696327)

As a highway gets wider and wider... it approaches a parking lot.

Re:Super-wide? (0)

Anonymous Coward | about 7 years ago | (#18696469)

No it doesn't. In a parking lot, cars stop. Nothing about widening a highway implies that cars stop (in fact, quite the opposite!).

Re:Super-wide? (1)

Broken scope (973885) | about 7 years ago | (#18697017)

Because of how people behave while driving, beyond six lanes you stop getting increased traffic flow, and you can actually see slowdowns.

Re:Super-wide? (1)

Short Circuit (52384) | about 7 years ago | (#18696613)

Unlike ultra-long pipelines, wide execution units don't add to instruction latency. Your analogy doesn't follow.

sense of humor? (0)

Anonymous Coward | about 7 years ago | (#18696669)

Ummm... I think it's a joke, dude.

Re:Super-wide? (0)

Anonymous Coward | about 7 years ago | (#18696751)

That is one of the dumbest comments I have ever read on slashdot. And that's saying something.

Re:Super-wide? (0)

Anonymous Coward | about 7 years ago | (#18696785)

But what's even dumber is the fact that his comment was getting a (+4, Insightful) for a while.

It just boggles the mind.

Cell (2, Insightful)

Gary W. Longsine (124661) | about 7 years ago | (#18696343)

The direction looks similar to the direction the IBM Power-based Cell architecture is going.

Re:Cell (1)

peragrin (659227) | about 7 years ago | (#18696577)

That was my thought too: multiple cores, with a handful of specialized cores.

IBM's Cell processor would be a lot more useful in general if it were slightly modified: replace one or two aux cores with GPU cores, keep 2-4 as more general-purpose cores, and add one or two FPGA-style cores, possibly with preset configuration options (e.g. audio processing, physics, video, etc.).

Complicated, yes. But can you imagine what one could do? Your single computer could be switched on the fly to encode an audio and video stream far more efficiently. Using GPUs and PPUs, gameplay could be improved.

It is a pipe dream. There is a lot of work: software would have to be modified, and the net gain may not be all that much. Of course, the long-term gain on such a project would be very useful.

Astroturf (5, Insightful)

Anonymous Coward | about 7 years ago | (#18696361)

Arun Demeure writes "Beyond3D has once again obtained new information...

If you are going to submit your own articles [beyond3d.com] to Slashdot, at least have the decency to admit this instead of talking about yourself in the third-person.

Re:Astroturf (0)

Anonymous Coward | about 7 years ago | (#18696909)

One of the wonders of text-based internet communication is that everyone can try guessing your age based on vastly insufficient information. But it gets scary when every year, it looks like people think you're growing younger, not older. One theory is that I'm getting cooler all the time. Another is that I'm just getting dumber. Either way, I think I'd rather not know the answer to this question.
What a prick.

Re:Astroturf (0)

Anonymous Coward | about 7 years ago | (#18696943)

Said the anonymous coward

future of computing? (2, Interesting)

jcgf (688310) | about 7 years ago | (#18696367)

I'm just waiting till they come out with a complete single-chip PC (I know there are examples, but they don't perform spectacularly). Just enough PCB for some external connectors and some voltage regulation.

We need a new architecture (5, Interesting)

jhfry (829244) | about 7 years ago | (#18696369)

I don't know what it is, or how it will be different from x86, but progress can't continue if we don't look for better ways of doing things.

It cannot be argued that x86 is the best architecture ever made; we all know it's not... but it is the one with the most research behind it. We need the top companies in the industry (Intel, AMD, MS, etc.) to sit down and design an entirely new specification going forward.

New processor architecture, a new form factor, a new power supply, etc...

Google has demonstrated that a single-voltage PSU is more efficient, and completely doable. There is little reason that we still use internal cards to add functionality to our systems; couldn't these be more like cartridges, so you don't need to open the case?

Why not do away with most of the legacy technology in one swoop and update the entire industry to a new standard?

PS, I know why, money, too much investment in the old to be worth creating the new. But I can dream can't I?

Re:We need a new architecture (1)

Short Circuit (52384) | about 7 years ago | (#18696703)

We need that about as much as we need to switch to trinary. (It's obvious, after all, that binary is preventing us from making advances in the fields of logic and compression, isn't it?)

No, we'll get a new architecture as soon as we need one. That is to say, once advancing the x86 architecture becomes more expensive than is cost-effective, someone's going to come up with a cheaper replacement that still has room to grow.

Sure, it would be nice if assembler instructions allowed one to designate a destination register, but that isn't really important, or even an area of focus. Code size for desktops, laptops and many embedded applications has become largely irrelevant. Even code efficiency has taken a backseat everywhere but high-performance data-processing applications. Hell, when you've got 3D games being written in managed code, it must be obvious that developers would rather focus on complexity than speed. (Though that might change if the focus continues to shift towards data-driven web applications. Those applications qualify as high-performance, and maintaining racks upon racks of servers isn't cheap.)

Re:We need a new architecture (1)

chris_eineke (634570) | about 7 years ago | (#18696705)

Once you find something that works an order of magnitude (10x) better than the old technology, people will adopt it.

In the great CPU-GPU War... (0)

Anonymous Coward | about 7 years ago | (#18696391)

The only winners were the power-supply industrial complex.

And the living envied the dead despite the real-time raytracing.

Similiar to what sun is doing with Niagara (3, Interesting)

mozumder (178398) | about 7 years ago | (#18696479)

Basically, put in dozens of slow, low-IPC but area-efficient processors per CPU. Later on, throw in some MMX/VLIW-style instructions to optimize certain classes of algorithms.

The first Niagara CPUs were terrible at floating-point math, so they were only good for web servers. The next generation, I hear, is supposed to be better at FPU ops.

Heavens to Betsy NO! (1)

jddj (1085169) | about 7 years ago | (#18696573)

Why do I think this means more gawdawful graphics controllers and drivers from Intel? Example: the chipset can do 1600x1200, the monitor can do 1600x1200, and I can even see it flash on the screen for a half-second before I'm limited to 1024x768. Example: the Java applet is on monitor 2; click on a drop-down box: WTF? The dropdown list appears on monitor 1!!! Avoid, avoid, avoid like the plague, smart people...

Itanium (0)

Anonymous Coward | about 7 years ago | (#18696655)

"Looks like computer scientists and software programmers everywhere will have to adapt to these new concepts"

Is everyone on slashdot too young to remember the Itanium?

While the changes are needed, just because Intel says something doesn't mean the whole industry "has" to follow. NetBurst was terrible. Itanium was unsuccessful. Those that adapted to those new concepts also generally adapted to failure.

when will intel figure it out.... (1, Funny)

Anonymous Coward | about 7 years ago | (#18696723)

They suck so badly at making GPUs it's like watching the special olympics...

They will just not be supported... (1)

bky1701 (979071) | about 7 years ago | (#18696813)

It's a bad move on Intel's part. Many common programs don't even make full use of multi-core, extended instruction sets, and 64-bit. If they are relying on something exotic to put them ahead... it just isn't going to work out, unless they think up a way to run unmodified code efficiently on their exotic architectures.

Intel against NVIDIA/ATI/AMD? OSS? (5, Insightful)

WhiteWolf666 (145211) | about 7 years ago | (#18696963)

If intel keeps supporting its equipment with excellent OSS support, I'll happily switch to an all-intel platform, even at a significant premium.

NVIDIA's Linux drivers are pretty good, but ATI/AMD's are god-awful, and both NVIDIA's and AMD/ATI's are much more difficult to use than Intel's.

I'd love to see an Intel GPU/CPU platform that was performance competitive with ATI/AMD or NVIDIA's offerings.

Good for linux (4, Insightful)

cwraig (861625) | about 7 years ago | (#18697045)

If Intel starts making graphics cards with enough power to compete with NVIDIA and ATI, they will find a lot of Linux support, as they are the only ones who currently have open source drivers: http://intellinuxgraphics.org/ [intellinuxgraphics.org]. I'm all for supporting Intel's move into graphics cards, as long as they continue to help produce good Linux drivers.

News at 11:Sometimes specialized hardware is fast (1)

iamacat (583406) | about 7 years ago | (#18697081)

We have been hearing about digital convergence forever, but most people want a computer that's separate from their cellphone or TV. Processors from Intel itself still have completely separate sets of instructions for integers and floating point. In the same vein, even if Intel's architecture is possible, it will be less upgradable, more difficult to program for, and have less backward compatibility over time than a set of components with well-defined functions.