
Unleashing the Power of the Cell Broadband Engine

ScuttleMonkey posted more than 8 years ago | from the wish-my-brain-had-nine-cores dept.

Technology

An anonymous reader writes "IBM DeveloperWorks is running a paper from the MPR Fall Processor Forum 2005 that explores programming models for the Cell Broadband Engine (CBE) processor, from the simple to the progressively more advanced. With nine cores on a single die, programming for the CBE is like programming for no processor you've ever met before."


136 comments


PS3 Suggestion (-1, Redundant)

5, Troll (919133) | more than 8 years ago | (#14122545)

The Sony PS3 seems like a good development-kit alternative for open source programmers, low-budget laboratories, or even startup companies.
It will carry a Cell with a powerful graphics chipset, a hard drive, a good number of ports, and a Linux distribution.

The problem I see, however, is that it is restricted to 256 MB of RAM.
This is very small in comparison with the data processing capabilities of the Cell. It is also too little for modern OSes, which usually start working decently above 512 MB.
Virtual memory helps, but the PS3 will use 2.5-inch hard drives, which are quite slow.

My suggestion is that Sony make a limited-edition PS3 with more memory for development, like 512 MB or 1 GB. After all, if they agreed to open the Cell to the industry, why not help the technology's adoption by selling cheap development kits?

It would be nice if IBM could back this idea, and convince Sony to make it a reality, don't you think?

Re:PS3 Suggestion (4, Informative)

spoonboy42 (146048) | more than 8 years ago | (#14122668)

The PS3 has 512 MB of memory by default. It is half Rambus XDR and half GDDR3, but both segments of memory can be addressed by both the processor and the GPU.

Re:PS3 Suggestion (0)

5, Troll (919133) | more than 8 years ago | (#14123123)

The PS3 has 256 MB for the CPU and 256 MB for the graphics system.
As I said before, 256 MB for the OS is not enough.

Maybe you are thinking of the Xbox 360 console, which has 512 MB of unified memory (it's a UMA).

Re:PS3 Suggestion (2, Interesting)

Trigun (685027) | more than 8 years ago | (#14122700)

Just cut Sony out of the loop, and have IBM do the work. They could re-revolutionize the desktop PC market.

Re:PS3 Suggestion (1, Interesting)

Doppler00 (534739) | more than 8 years ago | (#14122730)

Waaaaiiit a minute. This is the same DRM-the-heck-out-of-everything Sony we are talking about here, right? There is no chance they are going to allow a Linux distribution to run easily on this platform. They are probably encrypting everything, like Microsoft is doing with the Xbox 360.

People keep forgetting that Sony and Microsoft are in absolutely no way interested in providing you with a cheap computing platform for your Linux cluster endeavours at their loss. They make money off of selling games for these things. If people find ways of loading their own software on these boxes, you'd better believe they are going to start filing lawsuits. Not that I agree with that, but that's what will happen.

Re:PS3 Suggestion (4, Interesting)

rpdillon (715137) | more than 8 years ago | (#14122742)

Every PS3 hard drive is shipping [linuxdevices.com] with Linux [engadget.com] onboard [joystiq.com].

Re:PS3 Suggestion (0)

Anonymous Coward | more than 8 years ago | (#14124456)

Waaaaiiit a minute. This is the same DRM-the-heck-out-of-everything Sony we are talking about here, right? There is no chance they are going to allow a Linux distribution to run easily on this platform. They are probably encrypting everything, like Microsoft is doing with the Xbox 360.


WHooooo! Mod this tard up!
Massive multinational corporations act like they're a single unit with a single mind and a single philosophy to be applied from video game electronics to music!

Information was meant to be free
our only crime was curiosity!

Slash-dot! Slash-dot!
Nerd! nerd! nerd!

Mod parent down, FUD (1)

Troglodyt (898143) | more than 8 years ago | (#14124752)

This is in fact not the "DRM-the-heck-out-of-everything Sony" we are talking about; this is another part of Sony.
The part of Sony that has been providing Linux kits [playstation2-linux.com] for the PS2 [ps2linux.com] since 2002.

The console homebrew scene is rather big, and Sony and Microsoft can do nothing about it.

Mambo development (4, Informative)

iota (527) | more than 8 years ago | (#14122751)

Development for the Cell is open. You are free to download IBM's Cell Simulator [ibm.com].
Written in C, a significant part of the Full-System Simulator's simulation capability is directly attributed to its simulation multitasking framework component. Developed as a robust, high-performance alternative to conventional process and thread programming, the multitasking framework is a lightweight, multitasking scheduling framework that provides a complete set of facilities for creating and scheduling threads, manipulating time delays, and applying a variety of interthread communication policies and mechanisms to simulation events.
The simulator runs a Red Hat kernel, so the programming model will be familiar. Also, both SCE's (GCC-based) and IBM's (XLC) compilers are available for both the PPU and SPU.

IBM will also be releasing Cell-based Blade servers next year, so pick one up if you're serious about development!

Re:PS3 Suggestion (-1, Offtopic)

Anonymous Coward | more than 8 years ago | (#14122850)

While reading all this stuff, crawling the web looking for something, don't forget about Christmas and your family!

Find the best gifts and Xmas ideas at:
  http://christmas.seavenue.net/ [seavenue.net]
  http://jewelry4xmas.seavenue.net/ [seavenue.net]

Re:PS3 Suggestion (1)

wheany (460585) | more than 8 years ago | (#14123072)

I've wondered where the spammers have been hiding all this time. You fuckers spammed the hell out of my guestbook, even though I made no promise not to delete any messages. Here you have Slashdot, one of the world's most popular discussion forums, and they don't delete (practically) any comments. Sure, you will be moderated to -1 in no time, but the message will still be there for everyone to see, if they choose to.

Usually you are much more inventive. What the hell took you shitheads so long this time?

MOD PARENT DOWN (4, Informative)

imroy (755) | more than 8 years ago | (#14123083)

Note to moderators: the user "5, Troll" likes to cut and paste posts from other sites to gain karma. This one [ibm.com] was found on the DeveloperWorks site with a quick Google search.

Re:PS3 Suggestion (0)

Anonymous Coward | more than 8 years ago | (#14124691)

This would be good if Sony weren't already planning to abandon the Cell processor with the next iteration of the PlayStation and go with a derivative of the Emotion Engine. Currently it's a low-yield chip that is well ahead of its time. If you thought the Xbox 360's Xenon CPU was going to be hard to program for, wait until you start development for the Cell. While the GPU is a pretty generic PC chip, the Cell is unlike anything you have ever seen. Using all of those in-order SPEs is going to be damn near impossible unless IBM and the others come out with development tools a few generations beyond what you see today.

Re:PS3 Suggestion (2, Informative)

TheGSRGuy (901647) | more than 8 years ago | (#14124813)

2.5" drives don't have to be slow. Most laptops ship with 5400rpm (or even worse, 4200rpm) drives. I paid to upgrade to a 7200RPM drive w/8MB cache in my Dell notebook. Huge speed jump. You can even get 2.5" drives with 16MB caches. That would offer a significant speed bost.

Frankly, I don't see why they couldn't just use flash memory instead...everyone's doing it these days.

As Usual... (-1, Offtopic)

Anonymous Coward | more than 8 years ago | (#14122548)

I read this on digg days ago...

Re:As Usual... (0)

Anonymous Coward | more than 8 years ago | (#14122569)

Congratulations... but perhaps not everyone else did, and hopefully seeing a summary of a story you'd seen elsewhere didn't completely ruin your day.

Re:As Usual... (0)

Anonymous Coward | more than 8 years ago | (#14122642)

Then go back there.

Re:As Usual... (0)

Anonymous Coward | more than 8 years ago | (#14122666)

Funny, I read this on your mama's back last night.

Re:As Usual... (0, Offtopic)

Durinthal (791855) | more than 8 years ago | (#14122749)

Not to feed the troll or anything, but I haven't [digg.com]. And I do read digg regularly. Maybe I've missed something in the half-dozen different search terms I've tried (and feel free to correct me if I have), but it seems that this comment is nothing but a troll. And honestly, even if it was on digg a week ago or whatever... why? Do you really hate /. so much that you take time out of your day to make such caustic (and useless) posts?

Cell architecture (2, Informative)

rd4tech (711615) | more than 8 years ago | (#14122551)

The Cell Architecture grew from a challenge posed by Sony and Toshiba to provide power-efficient and cost-effective high-performance processing for a wide range of applications, including the most demanding consumer appliance: game consoles. Cell - also known as the Cell Broadband Engine Architecture (CBEA) - is an innovative solution whose design was based on the analysis of a broad range of workloads in areas such as cryptography, graphics transform and lighting, physics, fast Fourier transforms (FFTs), matrix operations, and scientific workloads. As an example of innovation that ensures the clients' success, a team from IBM Research joined forces with teams from IBM Systems Technology Group, Sony and Toshiba to lead the development of a novel architecture that represents a breakthrough in performance for consumer applications. IBM Research participated throughout the entire development of the architecture, its implementation and its software enablement, ensuring the timely and efficient application of novel ideas and technology into a product that solves real challenges. More [ibm.com]...

New Me (0)

Doc Ruby (173196) | more than 8 years ago | (#14122554)

I just want to draw a flowchart and have the compiler and realtime scheduler distribute processes and data among the hardware resources. If we are getting a new architecture and new "programming models", and therefore new compilers and kernels, how about a new IDE paradigm?

Re:New Me (4, Funny)

NanoGator (522640) | more than 8 years ago | (#14122578)

"I just want to draw a flowchart and have the compiler and realtime scheduler distribute processes and data among the hardware resources. If we are getting a new architecture and new "programming models", and therefore new compilers and kernels, how about a new IDE paradigm."

Bingo, sir.

Re:New Me (1)

jo42 (227475) | more than 8 years ago | (#14123464)

...and don't forget to give everything a new, academic, name.

Mindstorms! (1)

fub (126448) | more than 8 years ago | (#14122955)

Time to port the Lego Mindstorms development environment to the Cell processor!

Re:New Me (1)

ultranova (717540) | more than 8 years ago | (#14123171)

I just want to draw a flowchart and have the compiler and realtime scheduler distribute processes and data among the hardware resources. If we are getting a new architecture and new "programming models", and therefore new compilers and kernels, how about a new IDE paradigm?

Isn't this basically the dataflow [wikipedia.org] paradigm?

I think it should be possible to make any programming language automatically spread its work across multiple processors simply by analyzing which operations depend on which, and generating "wait until operation X is complete" points (essentially the same thing current processors do to feed their multiple pipelines from a single stream of instructions, but it might work better since the compiler has more info than the processor); how efficient this kind of system would be is another matter.

Or simply include support for constructs like

future int result = operation();
doSomeThing();
doSomeThingElse(result);

where the keyword "future" tells the compiler to perform "operation()" in a separate thread if possible, and block at "doSomeThingElse()" if result is not yet available (or, better yet, block in doSomeThingElse() at the line where result is actually needed). This would allow for extremely easy thread programming - but it would propably still be neccessary to include a separate threading facility.

You can do something like this in Java with the Future interface, the FutureTask class and the Executor interface, but it requires a lot of extra work from the programmer. It would be nice if the compiler could generate the code for me. It would be even better if the JVM included support for such constructs, since it would allow the exact same bytecode to be run as a single-threaded application on a uniprocessor machine without any overhead, and still take advantage of multiple processors when available.
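
To make the "future" idea concrete, here is a minimal hand-rolled sketch in C using POSIX threads. The names (future_t, future_start, future_get) are invented for illustration and come from no real library; a compiler-supported "future" keyword would generate roughly this plumbing for you.

    #include <pthread.h>
    #include <stdio.h>

    /* A hand-rolled "future": a worker thread fills in the result,
       and future_get() blocks until it is ready. */
    typedef struct {
        pthread_t thread;
        int result;
    } future_t;

    static void *run_operation(void *arg) {
        future_t *f = arg;
        f->result = 6 * 7;              /* stand-in for operation() */
        return NULL;
    }

    static void future_start(future_t *f) {
        /* "future int result = operation();" */
        pthread_create(&f->thread, NULL, run_operation, f);
    }

    static int future_get(future_t *f) {
        /* the implicit block at the point where the result is needed */
        pthread_join(f->thread, NULL);
        return f->result;
    }

    int main(void) {
        future_t f;
        future_start(&f);               /* operation() now runs concurrently */
        printf("doSomeThing()\n");      /* overlapped with the worker */
        printf("doSomeThingElse(%d)\n", future_get(&f));
        return 0;
    }

Compile with gcc -pthread. Note that each future here pays for a full thread create/join, which is exactly why compiler- or JVM-level support could be so much cheaper.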

Re:New Me (1)

MikeFM (12491) | more than 8 years ago | (#14125122)

Real geeks don't use IDEs! ;)

I'll use an IDE when they invent one that is as fast, flexible, and chrome-free as a normal bash prompt. I have hated every IDE I've tried due to slowness and their insistence on an ugly, bloated interface. I don't need the IDE to try to guess what I'm typing, to offer code debugging while I'm typing, to FTP my files for me, etc.

Basic project management, checking my code for errors (when I ask... not constantly), good search and replace (with regular expressions and across multiple files), and built-in reference guides for everything I'm doing (language, protocols, etc.) would be good enough. Possibly an interface builder, if it works well and stays out of the way when not needed. Much more than that and you're just letting playing with an interface replace actual coding time.

Re:New Me (1)

Doc Ruby (173196) | more than 8 years ago | (#14125209)

Enjoy your wallow in the 1970s. My coding experience, which started there, includes typing hex machine code into the Apple ][+ machine language monitor. It also includes working with customers, graphic artists and mathematicians. Which is why productivity is clearly highest throughout the cycle with flowcharts of reusable schematic objects. We've got lots of native topological intuition and skills that we almost use while coding and debugging, but which we're not skilled at operating by typing.

I want to keep the lexical tools you love, especially for geeks who can't use the multidimensional tools, but also just to keep their proven value. So what I'm looking for is a callgraph visualizer for your stuff, and a flowchart compiler that produces procedural code like C and Java. Then we can each use the machines for maximum productivity among all our individual idiosyncrasies.

Has nothing to do with Broadband (2)

ScottCooperDotNet (929575) | more than 8 years ago | (#14122579)

Damn you marketing droids! This has nothing to do with broadband at all.

Re:Has nothing to do with Broadband (0)

Anonymous Coward | more than 8 years ago | (#14122617)

Okay then, Sherlock, define broadband.

Re:Has nothing to do with Broadband (1, Informative)

Anonymous Coward | more than 8 years ago | (#14122729)

Frequency division multiplexing of multiple signals for transmission over a medium, such as coaxial cable.

Re:Has nothing to do with Broadband (3, Informative)

Guilly (136908) | more than 8 years ago | (#14122652)

I would assume they call it broadband because the 8 SPEs can communicate with each other over a 100 GB/s link (called the Element Interconnect Bus -- yes, that's 100 GB not 100 Gb) and also because it provides plenty of SIMD instructions.

Oh yeah. If you read their web page [ibm.com] they also mention the Cell processor will be able to handle broadband rich media applications and streaming content:
The first-generation Cell Broadband Engine (BE) processor is a multi-core chip comprised of a 64-bit Power Architecture processor core and eight synergistic processor cores, capable of massive floating point processing, optimized for compute-intensive workloads and broadband rich media applications.

Re:Has nothing to do with Broadband (5, Insightful)

ScottCooperDotNet (929575) | more than 8 years ago | (#14122810)

Simply because IBM mentions broadband [wikipedia.org] doesn't mean it has anything to do with system-to-system data transmission. This sounds a bit like Intel's marketing of "shiny new Pentiums make the Internet faster."

"The Pentium III will make the Internet a much more consumer-friendly environment," says Jami Dover, Intel's marketing vice president. Surfing today, Dover maintains, is a limited experience because data-transfer rates over ordinary telephone lines do not allow for high-quality audio, video and 3D graphics. "You take people raised on TV and show them a flat, text [Web] page," says Dover. "It's quite a juxtaposition." [asiaweek.com] I guess Intel was hoping the world could go through a phone line with enough compression.

To us this is a nitpick; to the general public it is more confusion in a jargon-filled marketplace.

Re:Has nothing to do with Broadband (0)

Hurricane78 (562437) | more than 8 years ago | (#14122916)

Well... in fact, you won't believe it, but it actually DID make the internet faster. At least that's what ZDNet wrote in some Internet Pro and PC Pro tests... I'm sure I still have the article somewhere below these piles of magazines... hmmm...

If I remember correctly, they did several "real-life" benchmarks by rendering huge graphics- and plugin-loaded webpages and measuring the time. They even wrote that beforehand they laughed at Intel, because they believed the claim was absurd. But not after they did their tests...

Well, sure, this depends on how much you trust the ZDNet labs. I did back then. Nowadays I don't know... but who cares anyway, because I'm no fan of electricity-powered heating plates with data processing facilities... they're not very economical... for either function... ;)

Re:Has nothing to do with Broadband (1)

pomo monster (873962) | more than 8 years ago | (#14123074)

Faster processors enable better compression, so you can squeeze more into less bandwidth. H.264, for example, is by most accounts the most bandwidth-efficient video codec widely available right now, but it takes a lot of processor muscle to decode. AAC, same thing compared to MP3, though you'll never max out your processor playing AAC the way you might for an HD H.264 stream.

Re:Has nothing to do with Broadband (0)

Anonymous Coward | more than 8 years ago | (#14123332)

This is a perfectly legit use of the term broadband from the perspective of a computer architect or communications specialist. Just because you like to apply the word to computer networks doesn't mean it is the only place you can apply the word.

Re:Has nothing to do with Broadband (1)

wpmegee (325603) | more than 8 years ago | (#14124014)

Intel's pulled this trick before too. The P4 core is called NetBurst, and it makes the intarwebs really fly!

Wow ... (4, Interesting)

JMZorko (150414) | more than 8 years ago | (#14122583)

... all those _registers_ make me salivate! One of the coolest things about the RCA 1802 (the processor I learned on) compared to others in its time was that it had _loads_ of registers compared to a 6502 or 8085. It spoiled me, though... when I started exploring those other CPUs, I always thought "Huh? Where are all the registers?"

So yes, I want a Cell-based devkit now, 'cuz this sounds like _fun_ :-)

Regards,

John

Re:Wow ... (1)

Ziviyr (95582) | more than 8 years ago | (#14122656)

Umm, did the 6502 need many registers? If there were many more, and they were to be useful, the instruction size might outgrow the bus width; some fancy dancing might make it break even on speed. The best idea might be a built-in scratchpad area, though that's still not much of a win over just using the zero page.

I wonder how well an uber-multicore 6502 at appropriately modern clock speeds would fare today...

A couple of hundred registers... (1)

Gordonjcp (186804) | more than 8 years ago | (#14122708)

What do you think Page 0 was for?

Re:Wow ... (1)

tgd (2822) | more than 8 years ago | (#14123469)

What, an accumulator wasn't enough for you?

Kids these days...

Re:Wow ... (1)

Ziviyr (95582) | more than 8 years ago | (#14125030)

The 6502 had X and Y too.

I remember the 1802, too (1)

Flying pig (925874) | more than 8 years ago | (#14125109)

You had to do subroutine call and return in actual code. It wasn't until the 1805 that you got a very slow microcoded call and return, and the ability to load and save those registers to the stack without hand coding (RSXA and RLDX). The trouble with the 1802 was that its microarchitecture was just too exposed to the programmer - like the PDP-8. I really don't want to go there again.

Another horrible early processor was the TMS9900, which pretended to have 16 16-bit registers, but they were just mapped memory. And it too didn't have a proper subroutine call and return. It really wasn't better in the old days.

ps3 programming (3, Insightful)

orlyonok (729444) | more than 8 years ago | (#14122588)

From the article, and if the PS3's Cell CPU is even half the processor this monster is, I say that game companies will need a lot of real programmers to make real good games (as if they cared).

Re:ps3 programming...no, not really (2, Insightful)

Fallen Kell (165468) | more than 8 years ago | (#14122667)

I say no, they won't need lots of real programmers. They only need 1 or 2 per game team to do the overall design, and the compilers can do the rest, since the real guts of it will be compiler optimization. If your lead designers do their job, the compiler will be able to do its job and everything will work like it should.

It's when you take old code from previous projects and try to do a direct port that you will see some performance hits. But if the code is designed from the ground up for a Cell environment (or ANY CPU architecture), it is all in the hands of the few top-level software architects to properly structure the overall workings of the game's code. Once the structure is correct, sending the bits and pieces that need to be made to the rest of the code monkeys is no problem; they just need to follow the UML or whatever other design docs they are specifically supposed to implement.

Re:ps3 programming (0)

Anonymous Coward | more than 8 years ago | (#14122670)

From the article, and if the PS3's Cell CPU is even half the processor this monster is, I say that game companies will need a lot of real programmers to make real good games (as if they cared).

Huh? I always thought real programmers were the ones who could get Doom 3 to run faster on an 8088, not use excessive processing power to do simple tasks. Real programmers don't need the new processors. But real programmers are expensive and hardware isn't. Hire a couple of girlie-men programmers and some powerful hardware and then ship a bloated product. If real programmers were in such demand we wouldn't see pseudo-languages like C# and VB (among others) to protect girlie-men programmers from themselves.

Ooh. Obligatory real programmer note: real programmers program only in FORTRAN, assembly language, or machine code.

Re:ps3 programming (1)

tepples (727027) | more than 8 years ago | (#14122701)

Obligatory real programmer note: real programmers program only in FORTRAN, assembly language, or machine code.

Or, in situations where use of the lowest-level languages is deprecated, such as when one set of program logic has to work on both a PC-class machine and a handheld device[1], they program in C and look at the generated assembly code (gcc -O3 -S) to make sure the compiler is in fact doing its job.

[1] This situation is common in multiplatform video game development.
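
For anyone who hasn't tried the parent's workflow, here's what it looks like in practice - a hedged sketch (the saxpy function is made up for illustration; gcc -O3 -S is the real invocation):

    /* saxpy.c -- a loop worth checking the compiler's output on.
       Generate the assembly listing with:

           gcc -O3 -S saxpy.c

       then read saxpy.s to see whether the loop was actually
       unrolled and scheduled sensibly for your target. */
    void saxpy(float *y, const float *x, float a, int n) {
        int i;
        for (i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }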

Re:ps3 programming (1)

orlyonok (729444) | more than 8 years ago | (#14122778)

Yes, that's true, and you can have this scenario developing for handhelds: you have coded the fastest possible algorithm in C and your code is beautifully optimized by gcc, but the testers say the frame rate is still 20 to 30% below the bare minimum and your boss wants it fixed by tomorrow morning. In those cases you develop a radically novel algorithm from scratch, take the assembly generated by the compiler and hand-tune it to the maximum, or pray to have a real programmer nearby. If all else fails, you find an elegant way to end your suffering.

Re:ps3 programming (2, Interesting)

MikeFM (12491) | more than 8 years ago | (#14122680)

It'd seem to me that a lot of the development trickery will be in getting a proper compiler and specialized libs out there that take advantage of this parallelism without requiring massive changes to how the average developer writes their code.

Most of the bitching we've heard from developers so far hasn't been that the Cell sucks, but that their existing codebases don't take advantage of its design and they don't want to do a rewrite that locks them into the platform.

As with every platform, the really good stuff will come out a couple of years after its release. At least with the Cell they are pushing it to go mainstream instead of keeping it just for gaming consoles, so we should expect to see development moving along much faster than with a plain console.

Re:ps3 programming (2, Insightful)

iota (527) | more than 8 years ago | (#14122800)

It'd seem to me that a lot of the development trickery will be in getting a proper compiler and specialized libs out there that take advantage of this parallelism without requiring massive changes to how the average developer writes their code.

Certainly people are working on that very idea. However, it's a long way off and not likely to happen in the lifetime of this version of the processor. Both XLC (IBM's optimizing compiler) and GCC have a very difficult time vectorizing (i.e. taking advantage of the SIMD instruction sets) within a single processor. IBM has released a Cell SDK for managing the PPU/SPU at a higher level, which should make the transition slightly easier for some developers, but on the whole there is no way around the fact that the final algorithms and data design are very different when targeting a Cell.

Most of the bitching we've heard from developers so far hasn't been that the Cell sucks, but that their existing codebases don't take advantage of its design and they don't want to do a rewrite that locks them into the platform.

These developers that are bitching are just the descendants of the developers that were bitching when games moved from 2D to 3D. That caused a major upheaval as well. We lost a lot of programmers in that transition; we're bound to lose some here too. But times change, and multi-processing has been a long time coming - it's not going anywhere. The Cell may be a hit, or not - but the software techniques will be the basis of what we do for quite a while.
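
One hedged illustration of why "the data design is very different": SIMD units want the same field of many objects packed contiguously (struct-of-arrays), while most existing codebases interleave fields per object (array-of-structs). A plain-C sketch, with no real SPU intrinsics used, so it stays portable:

    /* Array-of-structs: fields are interleaved per particle, so a
       SIMD unit cannot load four x values with one instruction. */
    struct particle_aos { float x, y, z, pad; };

    /* Struct-of-arrays: all x values are contiguous -- the layout a
       vectorizer (or hand-written SIMD code) can actually use. */
    struct particles_soa { float *x, *y, *z; int n; };

    void integrate_x(struct particles_soa *p, const float *vx, float dt) {
        int i;
        /* Independent iterations over contiguous floats: a compiler
           like XLC or GCC at least has a chance of emitting 4-wide
           SIMD operations here. */
        for (i = 0; i < p->n; i++)
            p->x[i] += vx[i] * dt;
    }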

Re:ps3 programming (2, Interesting)

TheRaven64 (641858) | more than 8 years ago | (#14123484)

C is an incredibly bad language for programming a modern CPU. There are many parts of the C language that assume the target machine looks a lot like a PDP-11. Trying to turn this code into something that runs on a superscalar machine, let alone a vector processor, is incredibly difficult - we can only do it at all because such a huge amount of effort has been invested in research. If you want a good language for programming something like the Cell, then you should take a look at more or less any functional language.

Oh, and anyone who thinks functional languages are scary should realise that they probably use (a very primitive and unfriendly) one for their build system - make.

HLL like Java & Smalltalk have two faults (1)

crovira (10242) | more than 8 years ago | (#14123883)

The first is that they don't deal well with resource contention. No language, or any other thing for that matter, does.

When you fork N processes on N objects and you have N-M processors, it costs you computationally, which translates into lost efficiency.

It's one thing to think of this situation as a bunch (N) of ball bearings going into a bunch of holes (N-M), with each ball bearing keeping its state information local to it. (Any kind of sieve can serve as a 'gedanken' experiment.)

The situation becomes hopelessly confused when there is any dependency on external data or process sources.

The mechanisms for handling that confusion are all basically ones of reducing the many threads down to a single thread and meting out the shared resource piecemeal.

A sufficiently evolved schema is capable of handling replication of a shared 'read-only' resource but, despite the efficiencies inherent in that situation, it merely shifts the burden of resource access up one level. There will be a stiffer computational penalty when 'access starvation' is reached.

Hopefully the replication penalty will be acceptable, and there are ways to mitigate its computational cost, but the trade-off is an instance-level, existential sort of thing: it exists at run time and can only be guesstimated at algorithm/method design time.

The second fault is one of design of the languages themselves.

They are not designed to operate within a schema. Actually, no language is, so the efficiencies to be gained from using a schema are bolted onto the application and not an inherent part of it.
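
For what it's worth, the "reduce the many threads down to a single thread and mete out the shared resource piecemeal" mechanism has a very concrete shape in C: a mutex-protected request queue drained by one owner thread. A minimal sketch with POSIX threads (the names and queue size are illustrative only):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define QSIZE 64

    /* Requests from many threads funnel through this queue; only the
       single owner thread ever touches the contended resource. */
    static int queue[QSIZE];
    static int head, tail, count;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t nonfull = PTHREAD_COND_INITIALIZER;

    void submit(int request) {            /* callable from any thread */
        pthread_mutex_lock(&lock);
        while (count == QSIZE)
            pthread_cond_wait(&nonfull, &lock);
        queue[tail] = request;
        tail = (tail + 1) % QSIZE;
        count++;
        pthread_cond_signal(&nonempty);
        pthread_mutex_unlock(&lock);
    }

    void *owner_thread(void *arg) {       /* the one thread that metes out */
        (void)arg;
        for (;;) {
            int req;
            pthread_mutex_lock(&lock);
            while (count == 0)
                pthread_cond_wait(&nonempty, &lock);
            req = queue[head];
            head = (head + 1) % QSIZE;
            count--;
            pthread_cond_signal(&nonfull);
            pthread_mutex_unlock(&lock);
            /* resource access is serialized by construction */
            printf("serving request %d\n", req);
        }
        return NULL;
    }

    int main(void) {
        pthread_t owner;
        pthread_create(&owner, NULL, owner_thread, NULL);
        submit(1);
        submit(2);
        sleep(1);                         /* let the owner drain the queue */
        return 0;
    }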

Re:HLL like Java & Smalltalk have two faults (1)

dlaur (135032) | more than 8 years ago | (#14124116)

I wonder if we will see new language designs emerge due to the relative increase in the number of multi-processor/multi-core CPUs out there. I mean something at a high level - not compiler improvements that try to overlap CPU operations from original top-down style code, but something that encourages the actual developers to use producer-consumer patterns, coroutines, whatever, at their level to maximize utilization of CPU resources. We keep using the same old languages and assuming the compiler is smart enough to keep all the circuits busy. Anyone out there work on huge multi-processor systems or clusters who can free me from my ignorance?

Re:HLL like Java & Smalltalk have two faults (2, Informative)

TheRaven64 (641858) | more than 8 years ago | (#14124585)

Java and Smalltalk are both imperative languages and, while I am quite fond of Smalltalk, my post was about functional languages. Most functional languages don't permit aliasing, which dramatically reduces locking issues related to resource contention (and copy-on-write optimisations can make them very fast).

Re:ps3 programming (1)

waxwing (917397) | more than 8 years ago | (#14125242)

A Bluebottle port to the Cell would be ideal as far as I'm concerned. Then, if a functional language like OCaml or Moby were wanted, it could be written in Oberon. I've got a software realtime raytracer running on Bluebottle (dual-boot Linux/BB x86 box). Boy, would I like to port it to the Cell. Of course the tracer could be converted to C++. But that would be ugly.

Re:ps3 programming (1)

NanoGator (522640) | more than 8 years ago | (#14122727)

"from the article and if the ps3 cell cpu is even half the processor than this monster is i say that game companies will need a lot of real programmers to make real good games (as if they cared)."

I'm not so worried about the programming aspect of it. Yeah, it'll cost more, and in the beginning it'll be nerve-wracking to get adjusted, but I expect that in the end it'll be the least of their concerns. Not only will the techniques be laid down, but I imagine there'll be a lot of engine licensing going on. I imagine they'll even get to a point where it's less about the technical aspects and more about the technique. But... I really am babbling about something I know little about, so I'll let it rest there. Mainly I'm replying because I remember hearing comments like this about the PS2, and today you never really hear about it anymore, other than the occasional developer comment that the Xbox and GameCube are easier to develop for. It's not like its game library suffered badly for it.

I imagine the big increase in spending will go towards the artists, designers, and the sound people. In summary, I mean content/assets. Lotsa RAM to fill up there with visuals. I'm seriously wondering if we're going to see a 100 million dollar game in the next 10 years.

Re:ps3 programming (4, Insightful)

iota (527) | more than 8 years ago | (#14122770)

From the article, and if the PS3's Cell CPU is even half the processor this monster is, I say that game companies will need a lot of real programmers to make real good games (as if they cared).

1. Some of us do care, actually.
2. The Cell processor described is exactly the processor in the PS3.
3. Yes, regardless of what some would like to believe, there is no magic. It's different, but it's the way things are going, so some of us are adapting the way we develop. It'll take work, and maybe a little time, but that's always been our job - we get hardware and we figure out how to do something cool with it.
4. It is actually really fun to work on and very impressive.

Re:ps3 programming (1)

ctid (449118) | more than 8 years ago | (#14123073)

2. The Cell processor described is exactly the processor in the PS3.


Not quite. I heard a talk from a Sony representative, and she said that the PS3's CPU has eight SPEs, but only seven of them are enabled. This is to increase yields.

Re:ps3 programming (1)

BasilBrush (643681) | more than 8 years ago | (#14123161)

True. But that may well be just software. Perhaps at power-on it loads a test program into each of the SPEs and chooses 7 that complete without error. Or perhaps it's done as part of a soak test at the factory, and the identity of the SPE that's not to be used is stored in non-volatile memory on the PS3.

Re:ps3 programming (1)

ctid (449118) | more than 8 years ago | (#14123169)

That's a good point. I hadn't thought of it that way.

Re:ps3 programming (1)

Andy_R (114137) | more than 8 years ago | (#14123207)

The article suggests to me that game companies will only need average programmers to make real good games, but they will need some absolute geniuses if they want to optimise them.

Even if you forget the SPEs entirely, and just write for one of the two threads the PPC offers, you'll have a lot more horsepower than the PS2 to throw around. I expect we'll see some pretty impressive early games that have the SPEs doing either nothing, or fairly minimal things like generating music or doing little bits of algorithmic texture generation, and it will be years before we have games routinely pushing the Cell's bandwidth and processing power limits on all available cores.

If I were programming, I'd be looking at learning to deal with SPEs at the lowest possible level, as that's going to be a very important skill as other CBEA chips (i.e. different varieties of Cell) start popping up everywhere.

20 core die (3, Funny)

Anonymous Coward | more than 8 years ago | (#14122633)

Amazing progress. So with 20 cores on a single die, we can play D&D in real time?

It's Saturday night and I'm all alone here, cut me some slack...

Faster Than Realtime - Just port Nethack (1)

billstewart (78916) | more than 8 years ago | (#14122774)

Faster than realtime? Sure, no problem....

You can run a 68000 or 80386 emulator in each of the SPUs, or just run lots of native processes in parallel.

Re:Faster Than Realtime - Just port Nethack (1)

imsabbel (611519) | more than 8 years ago | (#14123176)

Because those SPUs are SOOOO good at integer work, especially the branch-heavy stuff emulators need...
Not to mention having no cache at all will be SO great in such a non-streaming application (and no, that 256K of RAM doesn't count).

Re:20 core die (0)

Anonymous Coward | more than 8 years ago | (#14123164)

Think it will be long until the 100 core for Cthulhu? ;)

Blowjob (-1, Offtopic)

Anonymous Coward | more than 8 years ago | (#14122644)

Oh man, getting blowjob while reading Slashdot... Is this not every geek's dream :)

You earned your geek gold card!! (0)

Anonymous Coward | more than 8 years ago | (#14122673)

Looks like you officially earned your geek gold card today! Congrats.

Oh baby! (1, Funny)

Anonymous Coward | more than 8 years ago | (#14122719)

Damn, nothing gets me fired up on a Saturday night like the thought of a nine way!

Re:Blowjob (0, Funny)

Anonymous Coward | more than 8 years ago | (#14122972)

Did you really have to give us details of the fun you are having with your dog?

yea but (2, Funny)

rrosales (847375) | more than 8 years ago | (#14122722)

can it do infinite loops in 5 seconds?

Re:yea but (0)

Anonymous Coward | more than 8 years ago | (#14122738)

Can Linux really do this? What's the story behind this quote?

Re:yea but (1)

Maavin (598439) | more than 8 years ago | (#14122826)

No, but I heard it can do the Kessel Run in less than twelve parsecs!

Re:yea but (1)

dohzer (867770) | more than 8 years ago | (#14122938)

If it's still only 12 parsecs I guess we haven't really advanced much from "a long time ago".

Remind anyone... (2, Insightful)

Kadin2048 (468275) | more than 8 years ago | (#14122744)

... of the promotional material for the Sega Saturn from a few years back?

I remember that right about the time it came out, there was a lot of hype about its architecture. Two main processors and a bunch of dedicated co-processors, fast memory bus, etc., etc. I don't remember any more specifics, but at the time it seemed very impressive. Of course it flopped spectacularly, because apparently the thing was a huge pain in the ass to program for and the games never materialized. Or at least that's the most often-cited reason I've heard.

Anyway, and I'm sure I'm not the first person to have realized this, the Cell is starting to sound the same way. The technical side is being hyped and seems clearly leaps and bounds ahead of the competition, but one has to wonder what MS is doing to prevent themselves from producing another Saturn on the programming side.

Re:Remind anyone... (1)

seebs (15766) | more than 8 years ago | (#14122752)

MS?

Uhm. MS is the one with the 3-core PPC. This is a 1-core (dual-threaded) PPC with 8 coprocessors.

And I want one.

Re:Remind anyone... (1)

earnest murderer (888716) | more than 8 years ago | (#14122783)

At the time there was also the issue of the PS1 being more powerful, in addition to doing 3D, while the Saturn basically had 3D support added in response - and it wasn't very good.

This same "difficult to program" issue came up with the PS2, but it seems not to have had much of an impact on sales overall. :)

Re:Remind anyone... (1)

Jarnis (266190) | more than 8 years ago | (#14122843)

Well, that would mean the first PS3 games that are any good are at least 2 years away.

Only recently (in the past 1-2 years) have PS2 devs begun to really exploit the aging hardware. And due to the teeny RAM of the PS2, it really takes skill.

Re:Remind anyone... (0)

Anonymous Coward | more than 8 years ago | (#14122785)

There are some key differences between Sega and Sony. With the exception of the Master System, Sega's consoles were always inferior to the Nintendo (and later Sony) consoles. Sega was always playing catch-up to Nintendo, even though they were first to market with each generation. Sega made the mistake in the beginning of being the only publisher for Master System games. Getting developers to sign on was *already* a problem for Sega when the Saturn came out, and they were consistently doing poorly compared to Nintendo. Sega was also relatively small.

In contrast, Sony is huge, doing well, and has many publishers writing titles for their current consoles and portables.

Re:Remind anyone... (0)

Anonymous Coward | more than 8 years ago | (#14122794)

Odd you mention that, since it's the same tale told about the PS2, and I don't recall it flopping.

they gave up... (5, Interesting)

YesIAmAScript (886271) | more than 8 years ago | (#14122795)

Both Sony and MS realized they couldn't make a single true general-purpose CPU with the performance they wanted for a price they could afford to sell in their consoles.

Sony went to a CPU, a GPU and 7 co-processors (Cell).
MS went to 3 CPUs with vector assist and a GPU.

Both companies are going to need to spend a lot of time and money on developer tools to help their developers more easily take advantage of their oddball hardware, or else they will end up right where the Saturn did.

I guess the good news for both companies is that there is no alternative (like the PS1 was to the Saturn) which is straightforward and thus more attractive.

The PS2 requires programming a specialized CPU with localized memory (the Emotion Engine) and it seems to get by okay. So developers can adapt, given sufficient financial advantage to doing so.

Re:they gave up... (2, Interesting)

thomasscovell (887105) | more than 8 years ago | (#14122961)

No alternative? The Nintendo codename-Revolution will be comparatively "under"-powered, but will definitely be a simpler machine to code for, and will have novel (not novelty!) controller hardware that will afford the kind of possibilities Sony's and Microsoft's idea of "next generation" doesn't offer. Just pushing more polygons isn't where it's at. There's been no growth in the size of the gaming market since the SNES era, just more spending by those who do game. Nintendo's next-generation model is at least looking to increase the gaming demographic, just the way their Nintendo DS handheld has (senior gamers? plenty of those in Japan now, thanks to the DS!).

Re:they gave up... (0)

Anonymous Coward | more than 8 years ago | (#14123497)

There's been no growth in the size of the gaming market since the SNES era, just more spending by those who do game.


Pardon me but...

My 3-year-old nephew has been given some lessons in how to hold a controller recently, and although I'm not ready to call him "hardcore," he seems destined to become a "gamer".

So your statement is completely wrong. Just in my own life I've helped to increase the gaming market by 1.

You must be too young still to have any next-generation relatives, or you would realize that the gaming market has grown unbelievably since the "SNES era" - hell, I have 4 cousins - all gamers - who weren't born when the SNES was hot stuff.

mnb Re:they gave up... (0)

Anonymous Coward | more than 8 years ago | (#14123943)

You must be too young still to have any next-generation relatives, or you would realize that the gaming market has grown unbelievably since the "SNES era" - hell, I have 4 cousins - all gamers - who weren't born when the SNES was hot stuff.


I and most of my friends have stopped buying consoles and playing video games since the days of the SNES.

Add your 4, subtract my 4 and you get zero growth.

you're probably right (2, Interesting)

YesIAmAScript (886271) | more than 8 years ago | (#14124207)

Although Nintendo isn't even talking about the hardware specs yet, so we can't be sure.

But I didn't include the Revolution because Nintendo is saying the same thing they did with the GameCube: that they don't need 3rd-party developers. The Revolution seems largely like a platform for Nintendo to sell you their older games again. Additionally, if the Revolution is sufficiently underpowered compared to the other two, it may be that 3rd parties just plain cannot port their games to the platform, or else have to "dumb down" their game in a way that might make it uncompetitive with games that don't have to run on the Revolution.

So, basically, Nintendo is downplaying new development so much on the Revolution that I simply left it out as a platform that would attract developers who were fed up with the other two. But probably I shouldn't have done so.

By the way, with all of this, I want to mention I'm a huge Nintendo fan. I have three GBAs, a DS and a GameCube, plus all their other consoles back to the SNES. I just think Nintendo is concentrating on 1st/2nd-party development more than 3rd-party development.

Revolution (1)

HalAtWork (926717) | more than 8 years ago | (#14123591)

I guess the good news for both companies is that there is no alternative (like the PS1 was to the Saturn) which is straightforward and thus more attractive.

It's called the Revolution.

Re:Remind anyone... (3, Insightful)

Sycraft-fu (314770) | more than 8 years ago | (#14122956)

Well, not quite. The odd processors were a problem for the Saturn, but not the major one. The really major problem was that it wasn't good at 3D. The Saturn was basically designed to be the ultimate 2D console, which it was. However, 3D was kinda hacked on later and thus was hard to do and didn't look as good as the competition. This was at a time when 3D was new and flashy, and thus an all-important selling point.

However, you are correct in that having a system with a different development model could be problematic. Game programmers (and I suppose all programmers) can be fairly insular. Many are already whining about the multi-core movement. They like writing single-threaded code, a big while loop in essence, since that's the way it's always been done. However, the limitations of technology are forcing new thinking. Fortunately, doing multi-threaded code shouldn't require a major reworking of the way things are done, especially with good development tools.

Well, the Cell is something else again. It's multi-core to the extreme in one manner of thinking, but not quite, because the SPEs aren't full, independent processor cores. So programming it efficiently isn't just a matter of having 8 or 9 or however many cores' worth of tasks for it.

Ultimately, I think the bottom line will come down to the development tools. Game programmers aren't likely to be hacking much assembly code. So if the compiler knows how to optimise their code for the Cell, it should be pretty quick. If it doesn't, and the chip requires a very different method of coding, it may end up underutilised.

Now, it may not be all that important. Remember, this isn't like the PS2: the processor isn't being relied on for graphics transformations; the graphics chip will handle all that. So even if the processor is underutilised and thus on the slow side, visually stunning games should still be possible.

However, it is a risk, and a rather interesting one. I'm not against new methods of doing things, but it seems that for a first run of an architecture, you'd want it in dev and research systems. Once it's been proven and the tools are more robust, then maybe you look at the consumer market. Driving the first generation out in a mass consumer device seems risky, especially given that the Xbox has lead time and thus its development model is already being learned.

Re:Remind anyone... (2, Interesting)

TheRaven64 (641858) | more than 8 years ago | (#14123494)

Many are already whining about the multi-core movement. They like writing single-threaded code, a big while loop in essence, since that's the way it's always been done.

Meanwhile, those of us who have been advocating building large numbers of loosely coupled, message-passing components, each running in its own process space, have enormous grins on our faces at the thought of being able to do the message passing via a shared cache with only a cycle or two of penalty...
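
A toy sketch of that model in C: two components in separate address spaces passing a message through a pipe. On a multi-core chip the same pattern could ride on a shared cache rather than a kernel copy; the message text here is made up for illustration.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];
        char buf[32];

        pipe(fd);                        /* the message channel */
        if (fork() == 0) {               /* child: one component, own address space */
            read(fd[0], buf, sizeof buf);
            printf("worker got: %s\n", buf);
            _exit(0);
        }
        /* parent: another component sends a message (17 chars + NUL) */
        write(fd[1], "transform batch 7", 18);
        wait(NULL);
        return 0;
    }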

Re:Remind anyone... (1)

MikeFM (12491) | more than 8 years ago | (#14125299)

I imagine the multi-core design will improve AI code, weather simulation, physics simulation, etc. - anything that needs lots of small concurrent calculations.

It seems that the Cell design, once utilized, should make for games that feel better even if they look the same. Maybe the difference won't show in a screenshot, but you'll be able to tell it's there when you play the game.

I'm interested in the rumor that multiple Cell processors will be able to work together, even over the PS3's built-in networking. That could open up some interesting possibilities. Bigger tasks could be broken up across multiple Cells for more bang. Especially awesome if the same will be true of Cells in blade servers and other real-world applications. Not needing a third-party solution to run a program across multiple machines would be really awesome.

Reminds me of programming the nCube (3, Interesting)

Animats (122034) | more than 8 years ago | (#14122769)

The nCube, in the 1980s, was much like this: 64 to 1024 processors, each with 128KB and a link to neighboring processors, plus an underpowered control machine (an Intel 286, surprisingly).

The Cell machines are about equally painful to program, but because they're cheaper, they have more potential applications than the nCube did. Cell-phone sites, multichannel audio and video processing, and similar easily parallelized stream-type tasks fit well with the Cell model. It's not yet clear what else does.

Recognize that the cell architecture is inherently less useful than a shared-memory multiprocessor. It's an attempt to get some reasonable fraction of the performance of an N-way shared memory multiprocessor without the expensive caches and interconnects needed to make that work. It's not yet clear if this is a price/performance win for general purpose computing. Historically, architectures like this have been more trouble than they're worth. But if Sony fields a few hundred million of them, putting up with the pain is cost-justified.

It's still not clear if the cell approach does much for graphics. The PS3 is apparently going to have a relatively conventional nVidia part bolted on to do the back end of the graphics pipeline.

I'm glad that I don't have to write a distributed physics engine for this thing.

Re:Reminds me of programming the nCube (1)

bit01 (644603) | more than 8 years ago | (#14123192)

I'm glad that I don't have to write a distributed physics engine for this thing.

Granted, it's harder to program on a multi-processor. But it's not that much harder; it's more just fear of the unknown.

Programmers are already multiprocessing bigtime to handle multiple IO devices and to watch wall-clock time (independent of processing time), and it's a rare real-world programming problem that can't be easily partitioned, usually geometrically. In the case of the physics engine, I'd initially just put the physics engine on a separate CPU. If that wasn't enough, I'd use a farmer-worker paradigm to farm out the physics work to multiple CPUs.

Having said that, I'm a little scared of the number of programmers out there who don't know what a race condition is and how to avoid it...

---

Unrestricted DRM = Total Customer Control = Ultimate Customer Lockin = Death of the free market.
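
A bare-bones version of the farmer-worker split the parent describes, in C with POSIX threads: the farmer hands each worker a disjoint slice of the object array, so the physics step itself needs no locking. (The object count, worker count, and integration step are placeholders.)

    #include <pthread.h>
    #include <stdio.h>

    #define NOBJ    1024
    #define WORKERS 4

    /* Farmer-worker: each worker owns a disjoint range of objects,
       so the inner loop is lock-free by construction. */
    static float pos[NOBJ], vel[NOBJ];
    static const float dt = 1.0f / 60.0f;

    struct slice { int first, last; };

    static void *physics_worker(void *arg) {
        struct slice *s = arg;
        int i;
        for (i = s->first; i < s->last; i++)
            pos[i] += vel[i] * dt;       /* integrate this worker's objects */
        return NULL;
    }

    int main(void) {
        pthread_t tid[WORKERS];
        struct slice s[WORKERS];
        int w;
        for (w = 0; w < WORKERS; w++) {  /* farmer: partition and farm out */
            s[w].first = w * (NOBJ / WORKERS);
            s[w].last  = (w + 1) * (NOBJ / WORKERS);
            pthread_create(&tid[w], NULL, physics_worker, &s[w]);
        }
        for (w = 0; w < WORKERS; w++)    /* barrier before the next frame */
            pthread_join(tid[w], NULL);
        printf("stepped %d objects\n", NOBJ);
        return 0;
    }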

Reply to: Reminds me of programming the nCube (1)

arrrrg (902404) | more than 8 years ago | (#14123281)

IANAGP (game programmer), but it would seem to me that physics and lighting calculations should be easily parallelizable. Each processor can compute the physics for a separate set of objects/pixels/etc. Same for AI for each agent, if the companies actually bothered to put some effort into gameplay over graphics. On the other hand, I would guess that things like fluids (e.g. Far Cry) would be more difficult to do in parallel, due to the less local nature of the interactions.

Re:Reminds me of programming the nCube (2, Interesting)

plalonde2 (527372) | more than 8 years ago | (#14123891)

I disagree that the Cell architecture is "inherently less useful than a shared-memory multiprocessor".

Shared memory is the cause of 80% of the nasty little race conditions programmers leave peppered through their code on parallel machines - it's just too easy to break discipline, particularly considering the crap programming-language support we have. C and C++ are just not up to the task because of their assumption that you may touch anything in the address space.

Cell-like architectures have one other advantage, particularly in performance-sensitive applications: the explicit DMA to local stores *makes* you look at how the busses work on that machine, and they do not really differ from the busses on non-Cell-like modern machines; the structure of your Cell code will be bus-friendly on pretty much any architecture you port it to. And in our modern world, the bus is the bottleneck.
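
For the curious, the "explicit DMA to local store" discipline looks roughly like the double-buffered loop below. This is a hedged sketch based on the mfc_get/tag-wait intrinsics that IBM's SPE programming documentation describes (spu_mfcio.h); treat the exact signatures as an assumption to be checked against the SDK, and note that real buffers must be 128-byte aligned, as here.

    #include <spu_mfcio.h>

    #define CHUNK 4096

    /* Double buffering: DMA the next chunk into one local-store buffer
       while computing on the other, hiding memory latency behind work. */
    static float buf[2][CHUNK / 4] __attribute__((aligned(128)));

    static void wait_tag(unsigned int tag) {
        mfc_write_tag_mask(1 << tag);
        mfc_read_tag_status_all();       /* block until that DMA completes */
    }

    void stream(unsigned long long ea, int nchunks) {
        int cur = 0, i;
        mfc_get(buf[cur], ea, CHUNK, cur, 0, 0);   /* prime buffer 0 */
        for (i = 0; i < nchunks; i++) {
            int nxt = cur ^ 1;
            if (i + 1 < nchunks)                   /* prefetch the next chunk */
                mfc_get(buf[nxt], ea + (unsigned long long)(i + 1) * CHUNK,
                        CHUNK, nxt, 0, 0);
            wait_tag(cur);               /* ensure our chunk has landed */
            /* ... compute on buf[cur] here ... */
            cur = nxt;
        }
    }

The point of the structure, as the parent says, is that the bus traffic is explicit: you can see every transfer, so you naturally batch and overlap them.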

Task switching... (0)

Anonymous Coward | more than 8 years ago | (#14122975)

The Cell seems to be great for systems with a limited number of tasks (like a game console), but what about a general OS? Context switches seem to be a big problem, and it looks like this CPU will be very bad in a general desktop computer.

I wonder... are there any processors that are good at task switching (by having, for example, several sets of registers and TLBs, so a task switch just means using another set instead of saving and loading everything to memory)?

Re:Task switching... (2, Interesting)

freeduke (786783) | more than 8 years ago | (#14123211)

For use with general OSes, what could be interesting would be a dual-core PPE with those 8 SPEs.

The first core could be the main processor, handling processes, and the second core could just be there to be interrupted by the dedicated threads executing on the SPEs, and to communicate with them. The main problem would come from the memory bandwidth used by the core that handles the 8 SPEs; it should be designed to minimize the impact on the first core.

A solution to this could be to have a Cell processor and a traditional single-core processor, both of them using HT to improve memory bandwidth. But that seems complicated. Anyway, this Cell processor could be interesting as a thread-management unit.

Another option would be to double the memory of each SPE and prefetch context switches while another thread is running on it; once the context switch is done, retrieve the data from the previous thread. This could be managed by the PPE. And if you combine this with non-synchronized timer interrupts on each SPE, I bet you can get a pretty good improvement in the memory bandwidth consumed by a Cell unit...

With all those basic ideas, I think there is plenty of room to use those Cell processors efficiently.

Does it run Windows? (0, Offtopic)

plusser (685253) | more than 8 years ago | (#14123271)

The problem for much of the IT industry will be that those making the decisions ask only one question:

    Does it run Windows?

If the answer is no, then the manager will say something like:

    "I don't care if the processor is the most powerful ever developed, costs next to nothing to produce, and will allow us to build a powerful computer the size of a pea. If it doesn't run Windows, then I'm not interested."

And that sums up the total IT knowledge of that manager.

CBE = Failure (1, Troll)

MOBE2001 (263700) | more than 8 years ago | (#14123507)

Mod me down if you wish, but I think the CBE architecture is bound to fail. The reason is that you don't design your software model around a new processor; it should be the other way around. You first come up with a software model and then design a processor optimized for that model. This way you are guaranteed a perfect fit. Otherwise, you're asking for trouble.

The primary reason that anybody would want to devise a new software model is to address the single most pressing problem in the computer industry: unreliability. The reason that software is unreliable is that it is based on the algorithm. Switch to a non-algorithmic, signal-based, synchronous model and the problem will disappear. Unfortunately current processor architectures, including the CBE, are optimized for the algorithm. Click on the link below for details on a new software model designed to solve the reliability problem.

Re:CBE = Failure (4, Insightful)

plalonde2 (527372) | more than 8 years ago | (#14123916)

You're right - you don't design around a new processor.

But you should design around the changes in architecture that have been coming at us for the last 5-10 years: the bus is the bottleneck, and the Cell makes this explicit. It goes so far as to deal with the clock-rate limits we've reached by taking the basic "bus is the limit" fact and exposing it in a way that lets you stack a bunch of processors, without excessive interconnect hardware (and associated heat), into a more power-efficient chip.

I've been working on Cell for nearly a year now, and it's been really nice being forced to pay attention to the techniques that will be required to get performance on all multi-core machines, which in essence means all new processors coming out. Our bus/clockrate/heat problems are all inter-dependent, and Cell is the first project I've seen that gets serious about letting programmers know they need to change to adapt to this new space.

rent out your ps3s redundant cell core? (1)

chumba (204996) | more than 8 years ago | (#14123527)

If every PS3 was networked and Sony rented out your redundant Cell core to the DoD, how fast would the world's most powerful supercomputer be?

interconnect restrictions (2)

CdBee (742846) | more than 8 years ago | (#14124504)

Since most of the inter-processor "interconnects" would be consumer-grade DSL/cable links, it'd have phenomenal capacity to process chunks of data but serious latency issues in distributing work units. Commercial cluster data-processing setups probably use gigabit Ethernet or faster connections to get around this.

Wow (1)

RESPAWN (153636) | more than 8 years ago | (#14124281)

I haven't really done much programming since college, and none of those programs were multithreaded, so maybe I don't have the right background to comment. But all I can say is wow. This is crazy compared to the SPARC processors that I learned assembly on. As somebody pointed out, not only does this processor have multiple cores, but apparently each one has 128 registers?! Processor design has come a long way.

That said, I see a lot of comments reflecting on how hard it will be for programmers to adjust to programming on this architecture. While I agree that there may be some learning that will have to take place, shouldn't most of the optimization take place at the compiler level? I mean, that's partly the point of languages such as C/C++: write a minimum amount of architecture-specific code and let the compiler do the rest.

Anyway, I find this new architecture very impressive and can't wait to see devices take advantage of this hardware.

Fail! (1)

IntergalacticWalrus (720648) | more than 8 years ago | (#14124959)

"programming for the CBE is like programming for no processor you've ever met before"

Which is exactly why it will never take off.