
Octopiler to Ease Use of Cell Processor

Zonk posted more than 8 years ago | from the ps3-where-are-you dept.

Sean0michael writes "Ars Technica is running a piece about The Octopiler from IBM. The Octopiler is supposed to be a compiler designed to handle the Cell processor (the one inside Sony's PS3). From the article: 'Cell's greatest strength is that there's a lot of hardware on that chip. And Cell's greatest weakness is that there's a lot of hardware on that chip. So Cell has immense performance potential, but if you want to make it programmable by mere mortals then you need a compiler that can ingest code written in a high-level language and produce optimized binaries that fit not just a programming model or a microarchitecture, but an entire multiprocessor system.' The article also has several links to some technical information released by IBM."


'Octopiler' (0, Funny)

Anonymous Coward | more than 8 years ago | (#14805044)

Wasn't that a James Bond film?

Re:'Octopiler' (-1, Offtopic)

Anonymous Coward | more than 8 years ago | (#14805162)

FRTFP!!!!

pwnezd!

So don't hire mere mortals (4, Funny)

ScrewMaster (602015) | more than 8 years ago | (#14805048)

Hire "Real Programmers". You know, the ones that only code in Assembler, and if they can't do it in Assembler then it isn't worth doing.

Re:So don't hire mere mortals (2, Funny)

stedo (855834) | more than 8 years ago | (#14805096)

Hire "Real Programmers". You know, the ones that only code in Assembler, and if they can't do it in Assembler then it isn't worth doing.
Hmph. "Real Programmers" needing a bleedin' assembler to tell them what their bleedin' instructions mean? Why, back in my day we had to write our programs in machine language. We saved our work by means of a small bar magnet held a short distance above a hard disk platter. And we had to pay for our own bytes.

You had machine language? (2)

Flying pig (925874) | more than 8 years ago | (#14805154)

You were lucky. We had to write our own microinstructions using a 12 bit ALU with no barrel shifter, and then burn them into ROM using a magnifying glass to vaporise the aluminium interconnect. And you had hard disks? We had to hand code on paper tape using a leather punch to make the holes. And we thought we were lucky. Next door, the guys in Alan Turing's department were having to stick together infinite paper tapes for some machine he made in the 30s.

Re:You had machine language? (1)

Mike Savior (802573) | more than 8 years ago | (#14805240)

>Next door, the guys in Alan Turing's department were having to stick together infinite paper tapes for some machine he made in the 30s.

And it was uphill, both ways!

Re:So don't hire mere mortals (3, Funny)

SkyFire360 (889512) | more than 8 years ago | (#14805111)

So don't hire mere mortals, Hire "Real Programmers"

Zeus was booked, Apollo was out of town, Hermes is still learning, Poseidon just signed a 500-year agreement with Apple and Ares was killed off in God of War, so most of the good non-mortal programmers were out of the question. Hades claims to be a writer instead of a programmer, but most of the plot lines he comes up with end up with everyone dead.

Re:So don't hire mere mortals (3, Funny)

Kadin2048 (468275) | more than 8 years ago | (#14805212)

Oh, come on. Everyone knows that Hades isn't a programmer any more, not since he got promoted to Management and got that whole division to run down there.

Re:So don't hire mere mortals (1)

StikyPad (445176) | more than 8 years ago | (#14805220)

And Jesus has already agreed to be my co-pilot.

Re:So don't hire mere mortals (0)

Anonymous Coward | more than 8 years ago | (#14805324)

...and any impugning of Allah will get Slashdot burnt down.

Re:So don't hire mere mortals (1)

creimer (824291) | more than 8 years ago | (#14805268)

Hades claims to be a writer instead of a programmer, but most of the plot lines he comes up with end up with everyone dead.

Hamlet wasn't that bad. Besides, on some programming projects, having everyone dead is a blessing. It's the ghost of previous projects that continues to haunt the living.

Re:So don't hire mere mortals (1)

MobileTatsu-NJG (946591) | more than 8 years ago | (#14805349)

"Zeus was booked, Apollo was out of town, Hermes is still learning, Posideon just signed a 500-year agreement with Apple and Ares was killed off in God of War, so most of the good non-mortal programmers were out of the question. Hades claims to be a writer instead of a programmer, but most of the plot lines he comes up with ends up with everyone dead."

That still leaves Boomer and Starbuck!

Re:So don't hire mere mortals (1)

dcapel (913969) | more than 8 years ago | (#14805361)

Pity Hephaestus is more of a hardware person. Companies have considered Aphrodite, but she seems to always screw things up, and Dionysus is entirely too busy working on WINE. Athena has a bit of trouble working with male programmers, so she is out of the question. Artemis is really good at hunting down bugs, but she never seems to return calls.

Meh, immortal programmers are so hard to come by these days.

Re:So don't hire mere mortals (1)

Crafack (16264) | more than 8 years ago | (#14805200)

I wonder why nobody has posted this yet...

The story of Mel, the Real Programmer: http://www.pbm.com/~lindahl/mel.html [pbm.com]

/Crafack

Re:So don't hire mere mortals (1)

RyuuzakiTetsuya (195424) | more than 8 years ago | (#14805251)

that reminds me of one of the most intimidating things a girlfriend's mom EVER told me.

"We always coded in assembler. We never let the compiler do all the work for us."

I crapped myself right then and there.

Re:So don't hire mere mortals (1)

creimer (824291) | more than 8 years ago | (#14805280)

That's what you get for dating the daughter of a code monkey. :P

Re:So don't hire mere mortals (1)

Tim Browse (9263) | more than 8 years ago | (#14805345)

It's a fine attitude. And certainly why I bang nails in with my forehead.

Makes you wonder (5, Insightful)

Egregius (842820) | more than 8 years ago | (#14805050)

It makes you wonder what the release titles of the PS3 will be like, if they didn't have a decent compiler until now. And 'the PS3 is due out in 2006.'

Re:Makes you wonder (1, Interesting)

general_re (8883) | more than 8 years ago | (#14805061)

...they didn't have a decent compiler until now.

Actually, it sounds like they still don't have one, just some ideas on how to make one someday.

No, it's there alright (4, Informative)

Daath (225404) | more than 8 years ago | (#14805160)

Nah, it's there. Download it [ibm.com], if you want ;)

Not really (1)

CaptainCheese (724779) | more than 8 years ago | (#14805230)

keyword: "decent"

according to the article, the compiler's still in early stages of development...

Tier II programmers (1)

tepples (727027) | more than 8 years ago | (#14805252)

It makes you wonder what the release titles of the PS3 will be like, if they didn't have a decent compiler until now.

Obviously titles whose programmers earn a hefty salary premium for having Tier II skills (as defined in The Article). The art might not look as "next-gen" as it could because the developer had to reallocate some of the art budget toward programming.

Re:Makes you wonder (1)

bfizzle (836992) | more than 8 years ago | (#14805436)

If the Cell processor is as badass as everyone makes it out to be, Sony has nothing to worry about. All the PS3 has to be when it is launched is as good as the Xbox 360. Game developers need to be able to port games to either platform easily, and if the PS3 has tools that make it easy to utilize just enough processing power to match the 360, then they will be successful. The Cell processor will only really come into play after the developer tools mature and games are able to take full advantage of it.

Hello, Itanium... (5, Insightful)

general_re (8883) | more than 8 years ago | (#14805052)

Sound familiar? "All we need to make it work as advertised is a really slick compiler that doesn't actually exist yet..."

Re:Hello, Itanium... (2, Insightful)

Ceriel Nosforit (682174) | more than 8 years ago | (#14805107)

Sound familiar? "All we need to make it work as advertised is a really slick compiler that doesn't actually exist yet..."

From TFA:
"I say "intended to become," because judging from the paper the guys at IBM are still in the early stages of taming this many-headed beast. This is by no means meant to disparage all the IBM researchers who have done yeoman's work in their practically single-handed attempts to move the entire field of computer science forward by a quantum leap. No, the Octopiler paper is full of innovative ideas to be fleshed out at a further date, results that are "promising," avenues to be explored, and overarching approaches that seem likely to bear fruit eventually."

Too early to say for sure, of course, but I'd rather take this guy's word for it than study the papers myself. - Would I invest/bet money on it? Yes, I would.

Re:Hello, Itanium... (3, Informative)

Brain_Recall (868040) | more than 8 years ago | (#14805145)

More familiar than you may think. Some of the first Itanium compilers were spitting out nearly 40% NOPs, which are simply do-nothings. Because the IA-64 is explicitly parallel, instructions are generated and bundled together to be executed in parallel. The problem is branches, which destroy parallelism since they can change the code direction. On average, there are about 6 instructions between branches, so such a design is very costly, since the memory controller will be stuck fetching instructions that are empty. Of course, speculation and branch prediction are generally a good way to increase performance, but like many things on the IA-64, that's left to the compiler to figure out.

These are some of the exact same problems with the Cell, although I wish I knew what the instruction set was like. If it's more like Itanium, then they've got all of the problems of the Itanium. If it's more of a direct approach, they may be able to pull it off because of the work on multi-processor systems that is done today. But they simply can't expect the "super-computer" numbers Sony keeps flashing around. It may be good on certain tightly coded scientific calculations, but when it comes down to real-world code, it's stuck with the stripped-down Power4 that is coordinating the Cells.


They didn't call it the Itanic for nothing...

Re:Hello, Itanium... (3, Insightful)

timeOday (582209) | more than 8 years ago | (#14805181)

Everybody prefers a simpler programming model, there's no doubt about that. But with the recent lack of progress in single-core speeds, something has to give, and apparently that "something" is programming complexity. While the PC world moves from 1 to 2 cores, the PS3 is jumping straight to 8. But going from 1 to 2 threads is a bigger conceptual jump than from 2 to 8 anyway.

Fortunately for IBM and Sony, games are one place where hand-optimizing certain algorithms is still practical. I doubt they will place all their eggs in the Octopiler basket. I can't imagine a compiler will find that much parallelism in code that isn't explicitly written to be parallel. Personally, I think they should instead focus on explicitly parallel libraries for common game algorithms like collision detection.
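
To make the "explicitly parallel library" idea concrete, here is a minimal sketch under some assumptions: it uses today's standard C++ threads rather than anything Cell-specific, and checkPair() is a made-up placeholder for a real narrow-phase collision test.

#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

struct Pair { int a, b; };

// Stand-in for a real collision test; the placeholder logic is arbitrary.
bool checkPair(const Pair& p) { return (p.a + p.b) % 2 == 0; }

// Split the candidate pairs across a fixed set of worker threads.
// Each thread writes only its own slots of 'hit', so no locking is needed.
void parallelCollide(const std::vector<Pair>& pairs,
                     std::vector<char>& hit, unsigned nThreads)
{
    hit.assign(pairs.size(), 0);
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < nThreads; ++t) {
        workers.emplace_back([&, t] {
            for (std::size_t i = t; i < pairs.size(); i += nThreads)
                hit[i] = checkPair(pairs[i]) ? 1 : 0;
        });
    }
    for (std::thread& w : workers) w.join();
}

int main()
{
    std::vector<Pair> pairs;
    for (int i = 0; i < 1000; ++i) pairs.push_back({ i, i + 1 });
    std::vector<char> hit;
    parallelCollide(pairs, hit, 4);
    std::printf("%zu candidate pairs tested\n", hit.size());
}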

what are you talking about? (1)

twitter (104583) | more than 8 years ago | (#14805270)

Sound familiar? "All we need to make it work as advertised is a really slick compiler that doesn't actually exist yet..."

That's kind of a weird comparison given the differences in innovation, demonstrated results and company attitudes.

IBM's Cell is a much more radical break from previous chips like Itanium [wikipedia.org] , but the CES demo was reported to be very impressive. IBM has already released the SDK [slashdot.org] and openly published all specifications [slashdot.org] . The pace of development has been very rapid and people are predicting the replacement of Intel [linuxinsider.com] . The missing piece was a compiler to ease transition. It looks like that's coming along just fine.

The Itanium, on the other hand, was obsolete at its launch. Even HP dumped it, after killing their own better-performing 64-bit processor for it and spending billions of dollars and ten years building it.

We can only wonder how things would have been if Intel had opened things up like IBM has, instead of making it so people have to figure things out on their own [unsw.edu.au] .

Re:what are you talking about? (1)

samkass (174571) | more than 8 years ago | (#14805405)

The pace of development has been very rapid and people are predicting the replacement of Intel.


Sorry, you lost all credibility there. The Cell is a single core with a bunch of DSPs tacked on. It's a great replacement for a general-purpose PowerPC in many embedded applications, but won't touch Intel's target market any time soon. In the year and a half since that article was written we've learned how much Intel and AMD can do to keep ahead of the game and how applicable to general-purpose computing the Cell isn't.

Octoplier? (0)

Anonymous Coward | more than 8 years ago | (#14805060)

what you say?

Sadly, not a lotta FPU hardware. (4, Insightful)

mosel-saar-ruwer (732341) | more than 8 years ago | (#14805064)


'Cell's greatest strength is that there's a lot of hardware on that chip. And Cell's greatest weakness is that there's a lot of hardware on that chip.

Sadly, there's almost no FPU hardware to speak of: 32-bit single-precision floats in hardware; 64-bit double-precision floats are [somehow?] implemented in software and bring the chip to its knees [wikipedia.org].

Why can't someone invent a chip for math geeks? With 128-bit hardware doubles? Are we really that tiny a proportion of the world's population?

Re:Sadly, not a lotta FPU hardware. (1)

sedyn (880034) | more than 8 years ago | (#14805093)

Math geeks that would need 128-bit double precision are a subset of all math geeks...

Therefore an even smaller portion of an already small population.

Re:Sadly, not a lotta FPU hardware. (1)

ScriptedReplay (908196) | more than 8 years ago | (#14805303)

Math geeks that would need 128-bit double precision are a subset of all math geeks...

Perhaps you meant long double precision. Math geeks that can live with 32-bit floating point precision are also a small subset - most of those who do heavy math (not pixel processing) pretty much require 64-bit double precision. And that is not available in hardware from Cell (come to think of it, not for AltiVec, either).

Re:Sadly, not a lotta FPU hardware. (0)

Anonymous Coward | more than 8 years ago | (#14805127)

Because 128 bit would be a quadruple, not a double. You silly math geek, you. :)

QED, e pluribus unum, et cetera, et cetera...

Re:Sadly, not a lotta FPU hardware. (1)

rodac (580415) | more than 8 years ago | (#14805144)

What benefit does increasing the precision of floats to 128 bits bring?
64 bits are more than enough for 99.9999% of cases, and the remaining ones can be handled in software emulation.

You can still not solve (without massive growth of the error terms) an equation system described by a Hilbert matrix using Gaussian elimination, no matter how many bits you make the mantissa.
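
For anyone who wants to see that effect first-hand, here is a rough sketch (my own toy code, not from any paper): ordinary Gaussian elimination with partial pivoting in 64-bit doubles, solving Hx = b where b is chosen so the exact solution is all ones. The printed deviation from 1 grows quickly with n even though every operation is done in full double precision.

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

// Solve A x = b by Gaussian elimination with partial pivoting (A and b are copies).
std::vector<double> solve(std::vector<std::vector<double>> A, std::vector<double> b)
{
    const int n = (int)b.size();
    for (int k = 0; k < n; ++k) {
        int p = k;                                   // largest pivot in column k
        for (int i = k + 1; i < n; ++i)
            if (std::fabs(A[i][k]) > std::fabs(A[p][k])) p = i;
        std::swap(A[k], A[p]);
        std::swap(b[k], b[p]);
        for (int i = k + 1; i < n; ++i) {            // eliminate below the pivot
            double m = A[i][k] / A[k][k];
            for (int j = k; j < n; ++j) A[i][j] -= m * A[k][j];
            b[i] -= m * b[k];
        }
    }
    std::vector<double> x(n);
    for (int i = n - 1; i >= 0; --i) {               // back substitution
        double s = b[i];
        for (int j = i + 1; j < n; ++j) s -= A[i][j] * x[j];
        x[i] = s / A[i][i];
    }
    return x;
}

int main()
{
    for (int n = 4; n <= 14; n += 2) {
        std::vector<std::vector<double>> H(n, std::vector<double>(n));
        std::vector<double> b(n, 0.0);
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j) {
                H[i][j] = 1.0 / (i + j + 1);         // Hilbert matrix entry
                b[i] += H[i][j];                     // so the exact solution is all ones
            }
        std::vector<double> x = solve(H, b);
        double err = 0.0;
        for (double xi : x) err = std::max(err, std::fabs(xi - 1.0));
        std::printf("n=%2d  max |x_i - 1| = %g\n", n, err);
    }
}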

Check out William Kahan at UC-Berkeley. (3, Informative)

mosel-saar-ruwer (732341) | more than 8 years ago | (#14805296)


What benefit does increasing the precision of floats to 128 bits bring? 64 bits are more than enough for 99.9999% of cases, and the remaining ones can be handled in software emulation. You can still not solve (without massive growth of the error terms) an equation system described by a Hilbert matrix using Gaussian elimination, no matter how many bits you make the mantissa.

Check out some of Professor Kahan's shiznat at UC-Berkeley:

In particular, look at the pictures of "Borda's Mouthpiece" [page 13] or "Joukowski's Aerofoil" [page 14] in the following PDF document:
How Java's Floating-Point Hurts Everyone Everywhere
http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf [berkeley.edu]
WARNING: PDF DOCUMENT
As I understand it, the "wrong" pictures are computed using Java's strict 64-bit requirement; the "right" pictures are computed by embedding the 64-bit calculation within Intel/AMD 80-bit extended doubles, performing the calculations in 80-bits worth of hardware, and then rounding back down to 64-bits to present the final answer.

MORAL OF THE STORY: Precision matters. You can never have enough of it.
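
A tiny illustration of the same "carry extra precision through the intermediate steps, round once at the end" principle (my own toy example, not one of Kahan's): accumulate the same 32-bit inputs once in a float and once in a double, then round the double back down.

#include <cstdio>

int main()
{
    const int N = 10000000;
    float  fsum = 0.0f;             // 32-bit accumulator: rounding error piles up
    double dsum = 0.0;              // wider accumulator for the same 32-bit inputs
    for (int i = 0; i < N; ++i) {
        fsum += 0.1f;
        dsum += (double)0.1f;       // identical inputs, 64-bit arithmetic
    }
    std::printf("float accumulator     : %f\n", fsum);
    std::printf("double, rounded at end: %f\n", (float)dsum);
    std::printf("target                : %f\n", N * (double)0.1f);
}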

Re:Check out William Kahan at UC-Berkeley. (1)

honkycat (249849) | more than 8 years ago | (#14805370)

Interesting (although unforgivably badly formatted) document.

The two plots you point out aren't really examples of precision errors. Rather, they are errors brought about by not tracking the distinction between "positive 0" and "negative 0." You'll have this problem to some degree no matter how many bits of precision you've got if you don't track the sign of your numbers that round to 0.

Quad precision (1)

pkhuong (686673) | more than 8 years ago | (#14805148)

SPARCv8(?) and up have quad precision.

I've also implemented a simple double-double (represents numbers as an unevaluated sum of two non-overlapping doubles) arithmetic in CL. It was ~25% as fast as doubles (mostly branchless; each op expands into ~2-8 double-precision ops). That gives an upper bound on the slowdown ratio for the emulation of doubles with singles.
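
For anyone curious what that looks like, the building block is an error-free transformation: add two doubles and keep exactly what was rounded away. Below is a minimal sketch in C++ rather than CL (the names and the renormalisation step are mine, not from the post above), and it only works if the compiler doesn't reassociate floating-point math (so no -ffast-math): Knuth's two-sum plus a double-double accumulate built on it.

#include <cstdio>

struct dd { double hi, lo; };           // value is hi + lo, with lo the small correction

// Knuth's two-sum: s + err == a + b exactly, where s = fl(a + b).
static dd twoSum(double a, double b)
{
    double s   = a + b;
    double bp  = s - a;                 // the part of b that actually landed in s
    double err = (a - (s - bp)) + (b - bp);
    return { s, err };
}

// Add a plain double to a double-double and renormalise.
static dd addDD(dd x, double y)
{
    dd t = twoSum(x.hi, y);
    t.lo += x.lo;                       // fold in the previously accumulated error
    return twoSum(t.hi, t.lo);
}

int main()
{
    dd acc = { 0.0, 0.0 };
    for (int i = 0; i < 10; ++i) acc = addDD(acc, 0.1);
    // Plain doubles famously give 0.9999999999999999 here; the double-double
    // keeps the rounded-away piece in the low word instead of losing it.
    std::printf("hi = %.17g  lo = %.17g\n", acc.hi, acc.lo);
}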

Re:Quad precision (1)

lordholm (649770) | more than 8 years ago | (#14805210)

Yes, the V8 has support for quads, but I can't think of a single implementation that does not force the OS to emulate it.

I corresponded with a Sparc designer. (1)

mosel-saar-ruwer (732341) | more than 8 years ago | (#14805263)


I corresponded with the Sparc designer about this very question, because LabVIEW supports a 128-bit "quad-precision" double for Sparc platforms:
I sent some email back and forth with one of the dudes on the Sparc design team, and he said that Sparc's 128-bit quad-precision double is a purely software implementation.

Compare e.g.

Re:Sadly, not a lotta FPU hardware. (3, Insightful)

stedo (855834) | more than 8 years ago | (#14805164)

The basic purpose of the Cell is to make the PS3 work. The basic purpose of the PS3 is to play games. Games, as a rule, don't give a damn about 64-bit floating point. Games can get away with 32-bit because they don't need to be incredibly accurate, they just need to be fast. No gamer will care whether or not the trajectory of the bullet was out by 0.000000000023~ as long as it moves fluidly. So, in making a chip for gaming, you are far better off making 32-bit really fast than spending time and die space on perfecting useless 64-bit.

Re:Sadly, not a lotta FPU hardware. (0)

Anonymous Coward | more than 8 years ago | (#14805183)

No gamer will care whether or not the trajectory of the bullet was out by 0.000000000023

you don't spend much time around hardcore gamers do you? "WTF?!? Missed F&$K you!"

of course, usually they missed because of something they did, not because of any error in the trajectory math...

Re:Sadly, not a lotta FPU hardware. (3, Interesting)

Animats (122034) | more than 8 years ago | (#14805304)

Games, as a rule, don't give a damn about 64-bit floating point.

You wish. In a big 32-bit game world, effort has to be made to re-origin the data as you move. Suppose you want vertices to be positioned to within 1cm (worse than that and you'll see it), and you're 10km from the origin. The low order bit of a 32-bit floating point number is now more than 1cm.

It's even worse for physics engines, but that's another story.

If the XBox 360 had simply been a dual- or quad-core IA-32, life would have been much simpler for the game industry.
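
If you want to see how coarse that grid actually gets, the spacing between adjacent 32-bit floats at any coordinate is one std::nextafter call away. A small sketch, assuming 1 unit = 1 metre (the unit is an arbitrary choice here):

#include <cmath>
#include <cstdio>
#include <limits>

int main()
{
    // Print the size of the low-order bit of a float ("one ULP") at
    // increasing distances from the origin.
    for (float x = 1.0f; x <= 1.0e7f; x *= 10.0f) {
        float ulp = std::nextafter(x, std::numeric_limits<float>::infinity()) - x;
        std::printf("at %10.0f m, adjacent floats are %g m apart\n", x, ulp);
    }
}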

Re:Sadly, not a lotta FPU hardware. (2, Interesting)

stedo (855834) | more than 8 years ago | (#14805373)

True

Actually, what I can't figure out is why you want floating point at all. Floating-point data stores a certain number of bits of actual data, and a certain number of bits as a scaling factor. To use your example, this would mean that while items near the origin would be picture-perfect, the object 10km away would be out by well more than a cm.

Back when integer arithmetic was so much faster than floating point that it was worth the effort, game coders used to use fixed-point arithmetic. This kept a uniform level of accuracy across the entire world, unlike floating point, which makes data near the origin more accurate. It was also very fast, and easy to implement. Why hasn't anyone implemented fast fixed-point arithmetic in hardware? You could afford to go 64-bit if it was fixed-point, since it is so much easier to compute (think integer arithmetic versus floating point), and 64-bit is accurate enough for very small detail in a very large world.
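
In software, at least, fixed-point is easy enough to sketch; here is a bare-bones 48.16 format on 64-bit integers (the format split and the names are arbitrary choices, and the multiply leans on the GCC/Clang 128-bit integer extension, so treat it as an illustration rather than anything production-ready):

#include <cmath>
#include <cstdint>
#include <cstdio>

// 48.16 fixed point: 48 integer bits, 16 fractional bits. Resolution is a uniform
// 1/65536 of a unit everywhere in the world, near the origin or far from it.
struct Fix {
    int64_t raw;
    static Fix fromDouble(double d) { return { (int64_t)std::llround(d * 65536.0) }; }
    double  toDouble() const        { return raw / 65536.0; }
};

inline Fix operator+(Fix a, Fix b) { return { a.raw + b.raw }; }
inline Fix operator-(Fix a, Fix b) { return { a.raw - b.raw }; }

inline Fix operator*(Fix a, Fix b)
{
    // Widen to 128 bits (GCC/Clang extension) so the intermediate product
    // cannot overflow, then shift the extra 16 fractional bits back out.
    return { (int64_t)(((__int128)a.raw * b.raw) >> 16) };
}

int main()
{
    Fix pos  = Fix::fromDouble(10000.0);            // 10 km from the origin
    Fix step = Fix::fromDouble(0.01);               // move 1 cm
    std::printf("%.6f\n", (pos + step).toDouble()); // accurate to the 1/65536 grid
}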

Re:Sadly, not a lotta FPU hardware. (1)

octopus72 (936841) | more than 8 years ago | (#14805394)

You can move 64-bit floating point data around, but as long as you don't do the double-precision FPU math on the Cell, it is as fast as if the processor were capable of doing it in hardware; the GPU takes care of transformations on current-generation consoles anyway. No need to use slow FPU emulation.

Physics is a problem, as you say, but I don't think precision is that important for games; 32-bit is often enough.

Re:Sadly, not a lotta FPU hardware. (1)

soldack (48581) | more than 8 years ago | (#14805207)

Doesn't the Itanium do pretty well on floating point?
-Ack

Re:Sadly, not a lotta FPU hardware. (1)

JFMulder (59706) | more than 8 years ago | (#14805211)

Consider the fact that the movie industry is slowly adopting HDR (that's a 32-bit float per component, 4 components per pixel) as the preferred depth for image processing; I don't see why games should use more. At least in graphics. Plus, using 128-bit floats would cut the number of whatever you want to process each second by 4, since you would need to move 4 times the data for the same work. No, we don't need 128-bit floats for games just yet, or shall I say, 32-bit floats should be enough for everyone. ;)

Re:Sadly, not a lotta FPU hardware. (2, Informative)

Frumious Wombat (845680) | more than 8 years ago | (#14805322)

They have, although outside of certain implementations of double-complex, 64-bit double-precision (REAL*8 to Real Programmers) is enough.

Those machines are Cray Vector Processors, MIPS R8K and later, DEC Alpha, HP/Intel Itanium, IBM Power 4/5/n, IBM Vector Facility for the 3090, etc.

Notice how many of those you see every day, and how many fewer of those you can still buy.

Yes, unfortunately, you are that tiny a proportion of the world pop. I had hoped by this point that we'd have Cray Vector Processors on a chip, or integrated into the base chipset (like the old Proc/Math-CoProc combos), or be running EV10 Alphas on our desktops. Unfortunately, double-precision floating point benefits so few people that it's not worth it from a design standpoint to optimize the processors around it. The R8000 was a good example of this; incredible FP for the time, but terrible integer (early Itanium-2 falls into this category as well). So, it crushes numbers like mad in the background, but your word processor, etc, are no faster and possibly slower than the previous generation, less expensive processor.

Just a couple of years ago my boss commented that we had problems in quantum chemistry which were still more time-effective to solve on mid-90s Crays than modern MPPs, because the algorithms vectorized easily but didn't parallelize. Some of them have been fixed by now, and alternatives found for others, but there are a lot of problems (by the standard of scientists) that would benefit from having a processor optimized for double-precision ops. Unfortunately, by the standards of the cell-phone-camera wielding email junkies, those problems are an invisible subset of the things you do with a computer. Ergo, good enough for home entertainment and PowerPoint, less than ideal for scientific use.

Thankfully Power5 and Itanium will be around for a few more years.

YES! Re:Sadly, not a lotta FPU hardware. (1)

perler (80090) | more than 8 years ago | (#14805342)

Why can't someone invent a chip for math geeks? With 128-bit hardware doubles? Are we really that tiny a proportion of the world's population?

Yes, in fact you are a really tiny proportion of the world's population!

Re:Sadly, not a lotta FPU hardware. (3, Funny)

OldManAndTheC++ (723450) | more than 8 years ago | (#14805358)

Are we really that tiny a proportion of the world's population?

You math geeks need to multiply. :)

Re:Sadly, not a lotta FPU hardware. (1)

Tim Browse (9263) | more than 8 years ago | (#14805362)

Why can't someone invent a chip for math geeks? With 128-bit hardware doubles?

Because the math geeks won't pay for the fab plants.

Are we really that tiny a proportion of the world's population?

Yes. You're the math geek - you do the math.

Re:Sadly, not a lotta FPU hardware. (0)

Anonymous Coward | more than 8 years ago | (#14805393)

Real math geeks use only integers.

Re:Sadly, not a lotta FPU hardware. (1)

Watson Ladd (955755) | more than 8 years ago | (#14805442)

MMIX! [stanford.edu]. 256 general-purpose registers, 32 special-purpose, a simple calling convention, and bitwise matrix multiplies. All we need is some real silicon.

Octointerpreter (2, Interesting)

yerdaddie (313155) | more than 8 years ago | (#14805072)

Reading this is making me nostalgic for LISP machines [comcast.net] and interpreter environments that let programmers really play with the machine instead of abstracting it away. What I'd really like to see is someone who takes all the potential for reconfiguration and parallelism and doesn't hide it away but makes it available.

Threads and vectors (1)

tepples (727027) | more than 8 years ago | (#14805397)

What I'd really like to see is someone who takes all the potential for reconfiguration and parallelism and doesn't hide it away but makes it available.

It's called threads on the one hand and vector data types on the other. Once you have learned how to use those, you're a tier II developer (as defined in The Article) working with a PowerPC-based computer connected through low-latency pipes to seven DSPs, and you can just spawn tasks in threads that die when the tasks finish. Trouble is that a lot of development firms that can only afford the lower salaries of tier III and IV programmers don't want to take the time to adapt a 90-percent-finished single-threaded PC game to a highly threaded, vectorized environment.
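
The "vector data type" half of that is less exotic than it sounds; a toy four-wide float vector looks like the sketch below (hand-rolled loops here purely for illustration; real Cell code would map this onto the 128-bit SIMD registers and intrinsics instead).

#include <cstdio>

// A toy 4-wide vector: every operation applies to all four lanes at once.
struct vec4 {
    float x[4];
};

inline vec4 operator+(vec4 a, vec4 b)
{
    vec4 r;
    for (int i = 0; i < 4; ++i) r.x[i] = a.x[i] + b.x[i];
    return r;
}

inline vec4 operator*(vec4 a, float s)
{
    vec4 r;
    for (int i = 0; i < 4; ++i) r.x[i] = a.x[i] * s;
    return r;
}

int main()
{
    vec4 pos = { { 1.0f, 2.0f, 3.0f, 4.0f } };
    vec4 vel = { { 0.5f, 0.5f, 0.5f, 0.5f } };
    vec4 next = pos + vel * 0.016f;     // one ~60 Hz step for four values at once
    std::printf("%f %f %f %f\n", next.x[0], next.x[1], next.x[2], next.x[3]);
}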

Am I ignorant or . . . (1)

Nomihn0 (739701) | more than 8 years ago | (#14805074)

isn't this a bit of a pipe dream? A compiler that optimizes a program for multiple processors is a nice idea, but how can you foresee worst-case scenarios that only emerge with human use? Take driving as a very abstract example. You "write" a car. You want it to both accelerate and brake on a dime while still being fuel efficient. Without knowing the driving conditions, city or country, how can you optimize your driving for efficiency?

Re:Am I ignorant or . . . (1)

Tinned_Tuna (911835) | more than 8 years ago | (#14805299)

They could write a compiler that only works on the Cell processor in the PS3, taking away a lot of the hardware variables. I think that would make the process of creating a compiler easier (not easy, easier).

Re:Am I ignorant or . . . (1)

slavemowgli (585321) | more than 8 years ago | (#14805367)

But you do know the driving conditions: they're the specs of the target architecture. It's still not an easy problem, of course, but it's not like you are supposed to write a compiler that emits perfect code for any target architecture - that would indeed be a rather hard problem.

Doesn't work that way... (1)

CarpetShark (865376) | more than 8 years ago | (#14805425)

You engineer programs in a sense similar to cars, yes. But, you interact with your tools on a much higher level than putting in a pedal and a brake pad. I suspect you do in actual car design too: it wouldn't be a huge step to be able to model a car in a 3D app and ask the computer how that shape of car will perform in terms of aerodynamics, gears, engine power and therefore miles per gallon or acceleration etc.

It's similar with programming. Instead of saying, this is a car, and it goes in that world, and we'll see what happens, you also design the world, and the way they interact, and you do it all at as high a level as you can. So, the compiler can see what you're doing at a fairly high level, and ideally, can understand and optimise that. Similarly, if you're programming multiple processors/cores with threads, then you use a compiler that understands threads. You tell it when threads can run at full speed, and when they need to stop and catch up with each other. Then, the compiler can hopefully examine what needs to be done and when, and what processors are available to do it on, and optimise accordingly. This is nothing new; lots of compilers/APIs do this sort of thing now in various ways.

What I want to know is... will this just be limited to a single 8-workhorse cell chip, as the name "Octopiler" suggests, or will it use the promised power of Cells, so that a program will spread its workload across all the Cell devices in your home if you have more than one? Somehow I doubt they're there yet.

Is it just me or... (1)

Kawahee (901497) | more than 8 years ago | (#14805079)

Is it just me or is it a bad idea to make something that completely breaks most programming paradigms, and requires a special compiler to compile it properly, and *then* use it in a next gen console, due out this year?

Surely it was screaming at them that this isn't something that's meant to be released so soon. I mean, the compiler has 4 tiers of 'optimisation', which are meant for the programmers to set so the compiler doesn't make a mess of their memory-management code if they manage memory correctly, or something like that. What this shows to me is that if IBM can't even get the code behind the compiler to make sense of the Cell's architecture, what chance do we have of programming it?

Re:Is it just me or... (1)

Tx (96709) | more than 8 years ago | (#14805108)

Is it just me or is it a bad idea to make something that completely breaks most programming paradigms, and requires a special compiler to compile it properly, and *then* use it in a next gen console, due out this year?

Not really, it's future-proofing. It can be used as a still pretty powerful single-core machine for the initial release titles, and as the programmers get to grips with how to get the most out of the Cell architecture, and better tools come out, the titles will keep getting better over several years. Actually it's pretty much ideal, given the desired life of the console.

Re:Is it just me or... (1)

TheRaven64 (641858) | more than 8 years ago | (#14805179)

I would imagine that a lot of the problem is trying to generate SPU code from a language such as C. I would have thought that the solution would be to design a language more like Erlang[1] that is designed for parallelism, and allow your programmers to express their algorithms in this, rather than getting them to program for the PDP-11 and then trying to turn this into optimal code for something like the Cell.

[1] Much as I like Erlang, it would not actually be quite suitable for the Cell.

Re:Is it just me or... (0)

Anonymous Coward | more than 8 years ago | (#14805180)

Is it just me or is it a bad idea to make something that completely breaks most programming paradigms, and requires a special compiler to compile it properly, and *then* use it in a next gen console?

The guys at IBM wanted to make sure that when Dead or Alive comes out for that console, they can get maximum framerates and set 'Bouncing Boobies' to Hyperealistic.

These *ARE* math geeks, you know.

vcl v2 (1)

Space cowboy (13680) | more than 8 years ago | (#14805298)

On the PS2, there are two vector units (vu0 and vu1), which are basically where all the grunt work is done - the MIPS chip is there for housekeeping and non-time-critical code. Each VU has 2 code paths (the instruction word is 64-bit, and there are two 32-bit instructions in each word). There are limitations on what you can do in each of the two words simultaneously. Sony have a GUI tool (in their professional kit) which allows the programmer to write essentially sequential code, and have it take full advantage of the vector units. According to Sony, it performs as well as a skilled programmer.

For the linux kit, they only released vcl (a commandline version). It's a bit like a compiler-stage. It takes sequential assembly language for a single VU and re-orders code, inserts wait-states etc. Finally producing another assembly output which is optimised for the dual-issue nature of a VU.

It strikes me that optimising for constraints over 2 code paths in a single unit isn't too far a stretch from optimising for constraints over 8 code paths in 8 units. The differences are mainly to do with locality of reference. On a VU it was up to the programmer to DMA data into scratch-space RAM, and set flags as semaphores on operation. There's no real reason why a computer program can't do that - a basic approach would be to do it function by function, or use #pragma constraints in the code. There's no need to have the all-singing, all-dancing version of the optimiser as version 1...

Simon.

A new era in performance breakthroughs? (1)

PornMaster (749461) | more than 8 years ago | (#14805080)

Microsoft's Todd Proebsting claims [microsoft.com] that compiler optimization only adds 4% performance per year, based on some back-of-the-envelope calculations on x86 hardware.

A change in architecture this radical should at least provide accelerated growth from its introduction through the next several years, which I'm sure will provide added incentive for those involved in compiler optimization -- finally, some real enhancements.

Re:A new era in performance breakthroughs? (1, Interesting)

stedo (855834) | more than 8 years ago | (#14805182)

Microsoft's Todd Proebsting claims that compiler optimization only adds 4% performance per year, based on some back of the envelopes on x86 hardware.

Then Microsoft's Todd Proebsting is wrong. Ask some Gentoo users. Personally, I recently wrote a bit of fairly simple mathematical code (computing difference sets). The total runtime on my 3 GHz P4 was 22 seconds. I shaved off 2 seconds by optimizing the algorithm myself. By using gcc -O3, I shaved off a further 10 seconds, halving the runtime.

Anyway, this compiler isn't so much optimization as taking code intended for one paradigm (simple single-threaded code) and converting it to another (code with 8 cores of execution).

Re:A new era in performance breakthroughs? (2, Funny)

hunterx11 (778171) | more than 8 years ago | (#14805216)

Your post reminds me of the old adage, "Any sufficiently advanced fanboyism is indistinguishable from trolling."

Re:A new era in performance breakthroughs? (0)

Anonymous Coward | more than 8 years ago | (#14805258)

I like how you stopped reading and started mouth-foaming before you got to the "per year" part.

Re:A new era in performance breakthroughs? (0)

Anonymous Coward | more than 8 years ago | (#14805266)

Todd Proebsting said that compiler optimization adds 4% performance per year.

You optimized an inefficient bit of code. You didn't optimize the compiler.

Re:A new era in performance breakthroughs? (0)

Anonymous Coward | more than 8 years ago | (#14805338)

Yet more proof that Gentoo rots the brain. That's assuming you have one to start with.

Yay! A new generation, FINALLY! (2, Interesting)

porkThreeWays (895269) | more than 8 years ago | (#14805231)

I'm glad to see some real progress in the processor world. We are so guided by the enterprise market that we've had to support x86 WAY longer than we should have. The Cell looks like it has a real chance of becoming the next big advancement. For one, IBM is working heavily with the open source community. This is possibly one of the best things they could have done to help the Cell. By doing this, you make open source developers happy and more inclined to port over their applications. One of the hardest things to do in getting a new arch out is getting application support, and they've pretty much guaranteed a modest number of applications by going open source. The Nokia 770 is a perfect example of this. They've supported open source and made available more than enough tools for quick porting of applications, and look at the huge number available already in the first few months. The Nokia 770 probably sets records for how many applications were ported in such a short period of time.

Make the developers happy, and they will port their apps. With large amounts of available applications, the consumers will buy. When the consumers buy, you have a successful new arch.

Re:Yay! A new generation, FINALLY! (1)

jadavis (473492) | more than 8 years ago | (#14805404)

The Cell looks like it has a real chance of becoming the next big advancement.

It will be interesting to compare the Cell with the UltraSPARC T1 (Niagara). They both have about 8 cores (T1 is 8 cores, Cell is 8+1), but the T1 can do 32 threads of execution simultaneously. The Cell has good floating point performance, but the T1 only has 1 FPU for all 8 cores (it's specifically not designed for FP performance). The T1 has very low power requirements, at about 72 watts (79 peak), while (as far as I can tell from google) the Cell will have high power consumption and they have not disclosed the exact figures yet.

And both companies are working very closely with the open source community. Sun actually went further, and open sourced the entire SPARC architecture [sunsource.net] . As far as I know, IBM is not opening up their architecture.

They clearly have different markets, but they are similar in the multithreading aspect. Whoever does a better job of the multithreading and makes good compilers that can help the programmers write parallel code will then be able to move into the other company's space (if Sun does it better, they can add FPUs, if IBM does better they can remove them). And that success depends on open source involvement, which depends on an architecture that is easy to code for. If open source programmers get heavily involved in a concurrent compiler for one architecture, it will win in the long term.

So, it's clear why both companies are fighting to get the attention of the open source community, which is becoming (in a lot of ways) the force that drives which technologies are actually used in business. And that's certainly good news.

Re:A new era in performance breakthroughs? (1)

PhrostyMcByte (589271) | more than 8 years ago | (#14805255)

There is no magic silver bullet for vectorizing code. Compilers need to guarantee that your app will run the way you meant it to run, and that is no small task when they need to infer parallelism from a language without explicit support for it. If the PS3 uses standard C++, I doubt this compiler will do much to help measurably.

At the last PDC, Microsoft announced some very exciting ideas it is looking at to propose for the next C++ standard that will give language support for parallelism, essentially letting you do things like:
vector<int> vec;

future<int> i = active { return 1+1; };

// do stuff while we wait for i to complete
for (int val : vec) active {
    // process each in parallel
}

use(i.wait());
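
For comparison, the future/wait part of that idea can be approximated with library facilities rather than new syntax. A rough sketch using std::async and std::future (these landed in standard C++ years after this discussion, so treat it as an illustration of the concept, not of the proposal quoted above):

#include <cstdio>
#include <future>
#include <numeric>
#include <vector>

int main()
{
    std::vector<int> vec(1000);
    std::iota(vec.begin(), vec.end(), 1);

    // Kick off the small task asynchronously; 'i' is a handle to its eventual result.
    std::future<int> i = std::async(std::launch::async, [] { return 1 + 1; });

    // Do other work while it runs -- here, process the vector.
    long long sum = std::accumulate(vec.begin(), vec.end(), 0LL);

    std::printf("sum = %lld, i = %d\n", sum, i.get());   // get() blocks if needed
}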

A summary of the idea here... (1)

edashofy (265252) | more than 8 years ago | (#14805102)

Posit: Parallel processing can solve certain types of problems much faster than serial processing.
Posit: The Cell architecture is highly parallel.
Posit: Most programmers today are good at writing serial, not parallel, code.

Hypothesis: A compiler can be developed that takes serially written programs and auto-transforms them into parallel programs to exploit the benefits of parallelism.

Now comes the research to attempt to validate that hypothesis. Will it succeed? We'll find out in several years. There are likely to be some surprising results, and maybe even a paradigm-shattering breakthrough. Or, this line of research may just peter out. It happens.

Re:A summary of the idea here... (4, Insightful)

irexe (567524) | more than 8 years ago | (#14805213)

Hypothesis: A compiler can be developed that takes serially written programs and auto-transforms them into parallel programs to exploit the benefits of parallelism.

Parallel programming and automated parallelization were already researched exhaustively throughout the last thirty years of the 20th century. The outcome of all this research is that it is not feasible/tractable to create a compiler that is capable of recognising parallelism, as you suggest. Compilers that can do this are sometimes called 'heroic' compilers, for the reason that the required transformations are so incredibly difficult, and heroic compilers that actually work (well) simply don't exist.

Re:A summary of the idea here... (1)

stedo (855834) | more than 8 years ago | (#14805228)

Personally, I think we are going to need new languages to cope with parallel execution models. Consider this example: the for loop. A for loop in C or any of its offspring (C++, Java, C#, etc) relies on side-effects inside the loop to advance the code, and eventually one such side-effect will cause the loop to exit. This design implies serial execution, and converting it to parallel code would be extremely difficult.

Now consider one of the common uses of a for loop: to perform the same operation on an array or matrix of data. This is a conceptually parallel operation, which the programmer has had to force into the language's serial execution structure. To write a special compiler to convert it back out into parallel code is an unnecessary waste of time and effort. Instead, a new language should be written, which allows programmers to directly write code in parallel units. Imagine a language where, for example, all function calls were asynchronous.

AFAIK, the APL language was data-parallel, which meant that you could perform operations, at least conceptually, on large sets of data at the same time. However, this language was last popular in the 60s. Anyone know of a modern language that can exploit parallelism?
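
To make the array case concrete: written as a per-element function applied to a container, the operation carries no loop-order dependence at all, which is exactly the room a parallelising compiler or runtime needs. A small sketch (std::transform here; whether any given implementation actually fans it out across cores is another matter):

#include <algorithm>
#include <cstdio>
#include <vector>

// The per-element operation: no shared state, no dependence on loop order.
static float brighten(float p) { return p * 1.2f; }

int main()
{
    std::vector<float> pixels(8, 0.5f);

    // Serial form: the loop counter and in-place writes impose an ordering.
    //   for (std::size_t i = 0; i < pixels.size(); ++i) pixels[i] = brighten(pixels[i]);

    // Data-parallel form: "apply brighten to every element". The elements are
    // independent, so nothing stops them from being processed in parallel.
    std::transform(pixels.begin(), pixels.end(), pixels.begin(), brighten);

    for (float p : pixels) std::printf("%.2f ", p);
    std::printf("\n");
}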

Re:A summary of the idea here... (1)

Space cowboy (13680) | more than 8 years ago | (#14805340)

See my reply above (vcl v2 [slashdot.org] ) and look on the linux for PS2 website for VCL [playstation2-linux.com] .

VCL takes sequential code and splits it up into parallel code based on the constraints of the vector-units (each VU is dual-issue, with some restrictions). It'll re-order code, insert wait states, etc. Certainly it's a good start at auto-parallelisation of the code. It's supposed to do as well as a skilled engineer...

Simon

Anyone having flashbacks? (4, Insightful)

SmallFurryCreature (593017) | more than 8 years ago | (#14805106)

I seem to remember that the PS2 was a bitch to code for as well and that many of the early titles did not make full use of its capabilities. So?

All this meant that as the PS2 aged it could 'keep up' because the coders kept getting better and better.

Mere mortals do not write the latest graphics engines. I think there are a lot more tier 1 people running around than /. seems to think. They are just too busy to comment here.

All that really matters is whether the launch titles will be 'good' enough. Then the full power of the system can be unleashed over its lifespan.

If you're a game company and you're faced with the choice of either making just another engine OR spending some money on the kind of people that code for supercomputers and getting an engine that will blow the competition out of the water, then it will be a simple choice.

Just because some guy on website finds it hard doesn't mean nobody can do it.

Re:Anyone having flashbacks? (1)

buffer-overflowed (588867) | more than 8 years ago | (#14805191)

Or you could put a quarter of those resources into a platform that's far easier to develop for and wind up with the same result. You know, like the XBox or Gamecube versus the PS2.

The only thing Sony has going for it is inertia, and everyone knows this.

I'm totally having deja vu. (2, Interesting)

Inoshiro (71693) | more than 8 years ago | (#14805332)

"All that really matters is wether the launch titles will be 'good' enough. Then the full power of the system can be unleashed over its lifespan."

Yea, but what's the full power of a system? Prettier graphics?

The "full power" of the PS1 seemed to be that its games became marginally less ugly as time went on, although FF7 was very well done since it didn't use textured polygons for most of it (the shading methods were much sexier). When I think about FF9, I don't like it more because it uses the PS1 at a fuller power level than FF7, I like it better because the story is cuter.

I like PGR2 better than PGR3 because PGR2 has cars I know and love from Initial D and my own experience, whereas PGR3 has super cars I've never driven or seen before.

I don't think Rez taxes the PS2 more than Wild Arms 3, but I like it better than Wild Arms 3. I also like most of the iterations of DDR, and they're not taxing in the slightest.

The full power of a system is not its graphics capability or how easy it is to control or its controller or its games -- it's the entire package. Does the PS3 have a good package? The Xbox 360 sure doesn't -- the controller power-up button is nice, but there is nothing new or interesting; it's a rehash. The PS3 is a rehash too.

The Sega Saturn was a rehash of the 8-bit and 16-bit 2D eras. It died. The PS3 and Xbox 360 are rehashes of the 64-bit and 128-bit 3D gaming eras.

compilers ... (4, Insightful)

dioscaido (541037) | more than 8 years ago | (#14805126)

... can get you only so far. You need to have parallelism in mind when you write the high-level code, otherwise it may end up with needless dependence on serial execution that a compiler may not be able to break, reducing the benefits of such an architecture. It will be interesting to see how well games are suited for concurrent execution. Logically there are lots of computations that can be performed independently (AI, physics) but all of it has inherent interaction with a central data source (the game world).

Why hasn't this been done before ? (1)

zymano (581466) | more than 8 years ago | (#14805140)

Always wondered why there is no cooperation between chip makers and even video card companies to make a compiler like this.

Far too complex? (2, Insightful)

hptux06 (879970) | more than 8 years ago | (#14805159)

Cell's big programming problem goes right down to each SPE: the assembly instructions for which cannot actually address main memory! Every time information is read into or out of the 256K "local storage" on each SPE, a DMA command must be issued. Now, while this is Cell's greatest asset (execution continues while seriously slow memory movement occurs), it is also difficult to work with.

Your average C programmer doesn't take architecture into account, and so there's no user indication of whether a variable can be paged to main memory, if code needs to be fetched, and crucially: how far in advance data can be pre-loaded into the local storage, to avoid the SPE hanging on a memory operation.

I'd guess that this new compiler will try to address these issues, as the article suggests.
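
The usual answer to "the SPE can't see main memory" is double buffering: start the DMA for the next block while crunching the current one. Here is a sketch of the shape of it; dmaStart() and dmaWait() are made-up placeholders (stubbed out with memcpy so the sketch compiles), not the real SPE intrinsics, and the block size is arbitrary.

#include <cstddef>
#include <cstring>

const std::size_t BLOCK = 4096;          // elements pulled into local store at a time

// Placeholder DMA primitives. On a real SPE these would be the asynchronous
// DMA intrinsics; here the "transfer" just happens instantly.
void dmaStart(float* localDst, const float* mainSrc, std::size_t bytes, int /*tag*/)
{
    std::memcpy(localDst, mainSrc, bytes);
}
void dmaWait(int /*tag*/) {}

void crunch(float* block, std::size_t count)     // stand-in for the real work
{
    for (std::size_t i = 0; i < count; ++i) block[i] *= 2.0f;
}

void processStream(const float* mainMem, std::size_t total)
{
    static float buf[2][BLOCK];                  // two buffers in the 256K local store

    dmaStart(buf[0], mainMem, BLOCK * sizeof(float), 0);      // prime the pipeline
    for (std::size_t i = 0; i * BLOCK < total; ++i) {
        int cur = (int)(i & 1), nxt = cur ^ 1;
        if ((i + 1) * BLOCK < total)                           // prefetch the next block
            dmaStart(buf[nxt], mainMem + (i + 1) * BLOCK,
                     BLOCK * sizeof(float), nxt);
        dmaWait(cur);                                          // wait for our block only
        crunch(buf[cur], BLOCK);                               // compute overlaps the DMA
    }
}

int main()
{
    static float data[BLOCK * 4];
    for (std::size_t i = 0; i < BLOCK * 4; ++i) data[i] = (float)i;
    processStream(data, BLOCK * 4);
}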

Re:Far too complex? (3, Insightful)

stedo (855834) | more than 8 years ago | (#14805201)

Your average C programmer will not be developing the core code. Most likely, a group of very good coders will create a game engine, and the average C programmers can use the API that the highly-skilled, highly-paid engine coders created to hide unnecessary implementation details.

OK, Great a compiler, but ... (1)

dnamaners (770001) | more than 8 years ago | (#14805176)

Hmm, that FA was totally devoid of any real details. As it seems to me - and granted, I do not develop on Cell processors, and I am not a stickler for the "next big thing" - these things may be interesting. Unfortunately, if they want me to use them I need to know they work for me. I want my existing code to compile with minimal changes so I can test the new platform in the raw. I have the resources to test a few "maybe good, maybe not" systems a year. What I want to know, in short, is whether it "could" work well. This means I need to be able to use my existing code base in part (their tier IV). I am happy to optimize in my spare time, if need be, once I know it "could be the thing". If the platform passes that test I'll buy a few more units and make a real go of it. I don't think the Cell processor is at that point yet - too little hardware on sale right now and no software - and therein lies the problem. Open source compiler support would be a big plus, but if the platform is "just that good" I can make an exception.

my $0.02

Think About This for a Moment (1)

Quantam (870027) | more than 8 years ago | (#14805196)

Somebody had to code this monstrosity of a compiler, and it wasn't you. Isn't that enough of a reason to believe there's a god?

And when it fails... (1)

errxn (108621) | more than 8 years ago | (#14805197)

...it will be known far and wide as the "Octopile o' Crap."

special compilers, expert programmer = DOA product (2, Insightful)

idlake (850372) | more than 8 years ago | (#14805229)

If a CPU needs a special compiler in order to give good performance, it's basically dead; there are simply too many different applications that do binary code generation.

Also, the division into "expert programmer" and "regular programmer" is silly. Most coding is done by people who aren't experts in the cell architecture (or any other architecture). That's not because people are too stupid to do this sort of thing, it's because it's not worth the investment.

If Cell can't deliver top-notch performance with a simple compiler back-end and regular programmers who know how to write decent imperative code, then Cell is going to lose. Hardware designers really need to get over the notion that they can push off all the hard stuff into software. People want hardware that works reliably, predictably, and with a minimum of software complexity.

Maybe CISC wasn't such a bad idea after all--you may get less bang for the buck, but at least you get a predictable bang for the buck.

Re:special compilers, expert programmer = DOA prod (0)

Anonymous Coward | more than 8 years ago | (#14805281)

"If a CPU needs a special compiler in order to give good performance, it's basically dead; there are simply too many different applications that do binary code generation."

Do you mean like the Pentium 4? AFAIK it was quite successful.

Re:special compilers, expert programmer = DOA prod (1)

Frumious Wombat (845680) | more than 8 years ago | (#14805409)

Probably not true. Consider the yelling and screaming that went on in the late 90s as code had to become 'thread-safe'. Now that fight is mostly over, so you're already on the right track. The next step is to take a page from the technical computing market and generalize 'thread' to 'non-local access', i.e. your thread may be on another proc, with another cache or memory to access. This gets you to dual-core or OpenMP-type (SMP) systems. One more step, and you're at NUMA, where that other core could be another entire computer, with a longer latency. Usable techniques are known (after all, somebody is using BlueGene, and there are codes such as NAMD which run segmented across hundreds of machines), so compilers have to be taught how to do as much of this automatically as possible, while programmers will have to be up to speed on multi-threaded, hierarchical memory access patterns.

The key is whether enough processors can be sold to make this investment of time worthwhile. Advances in Windows (quit yelling) have already driven some of those changes, as can be seen if you compare the behaviour of current programs versus those aimed for 3.1/95, but you haven't noticed it much because those changes are incremental. More tasks run asynchronously, dialogues don't lock the entire window manager while waiting for your response, systems wait until idle periods to do heavy I/O. The proposed Cell compiler is just one step beyond multi-threaded, so the transition will, in the end, be less fuss than is currently anticipated.

Dig into the technical docs for Intel's current Fortran versus its ancestral DEC variants, and you'll see compilers are already doing an amazing amount of work in terms of code reorganization, execution order prediction, etc., that their ancestors didn't. The language the programmer sees is almost identical to the one they saw 20 years ago, and only comes with a few more 'gotchas' to avoid. This has to happen, as the Market has decided that it's cheaper to add cores than design faster ones, so this sort of distributed programming is going to become the norm. You'll look back at simple, imperative code some day soon and say, "How quaint". From the programmer's view, all that their new, miraculous Octopiler has to do is take OpenMP statements within a current language, and they can continue working much as they did before.

On that note, it's somewhat heartwarming to envision hordes of recent CS grads, soaked in the latest OO paradigms, being told, "there's great money to be made programming for the Cell, but you're going to do it in High-Performance Fortran."
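
The OpenMP route mentioned above really is about as low-fuss as it gets for the simple cases. A minimal example (a standard OpenMP pragma, built with something like -fopenmp; whether anything this naive maps well onto the SPEs is exactly the open question):

#include <cstdio>

int main()
{
    const int N = 1000000;
    static double a[N], b[N];
    for (int i = 0; i < N; ++i) { a[i] = i; b[i] = 2.0 * i; }

    double dot = 0.0;
    // The iterations are independent, so OpenMP can split them across cores;
    // the reduction clause combines the per-thread partial sums at the end.
    #pragma omp parallel for reduction(+:dot)
    for (int i = 0; i < N; ++i)
        dot += a[i] * b[i];

    std::printf("dot = %g\n", dot);
}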

CISC? (1)

Billly Gates (198444) | more than 8 years ago | (#14805293)

Is it just me or is it that we went from cisc to risc and now going back to risc again?

I assumed less complex chips with optimizations coming from compile time were more efficient or cost effective?

Re:CISC? (2, Interesting)

tarpitcod (822436) | more than 8 years ago | (#14805355)

A key problem with CISC was that doing virtual memory and handling page faults on a CISC processor was so incredibly, insanely complicated that you ended up going insane: your pipeline could throw multiple page faults on one instruction, and you had a god-awful mess to clean up.

The problem with the Cell is actually pretty interesting. They decided to go for in-order CPUs for the SPEs, which means that to get good performance you sure as hell better know what your dependencies are and take into account memory latency, etc.

OTOH, modern RISC CPUs normally do nice out-of-order stuff which, whilst making the CPU more complicated, makes life easier for the programmer and compiler.

Itanium took the clean approach - and it flies on FP workloads that the compiler can do a good job on. The PS3 (like Itanium) should rock - once programmers get lots of nice little kernels that do groovy stuff (think super shader programs) in the SPEs. Just that will make the eye candy pretty.

The counter argument is 'Look at what happened with the i860'. It had amazing performance on kernels but was just totally evil to program, and compiler writers pulled out their hair.

I don't know enough about modern game programming to know if the PS3 route is a good one to take - and it's easy to bitch at Sony for going too far - OTOH look at the PS2 games now vs at release. The PS3 games should slowly get better and better and better if they don't crash and burn and give up...

--Tarp

Re:CISC? (2, Funny)

Tim Browse (9263) | more than 8 years ago | (#14805392)

Is it just me or is it that we went from cisc to risc and now going back to risc again?

Yeah, but the advantage of doing it this way is that the 2nd transition (from risc back to risc) is really quick!

Wasn't this the same mistake Sega made? (1)

Lead Butthead (321013) | more than 8 years ago | (#14805325)

I recall a common complaint by development houses about Sega consoles was that they were very difficult to code for because of hardware complexity. Isn't Sony now making the very same mistake that doomed Sega's console business? Speaking of which, is the XB360 easier to code for than the PS3?

Re:Wasn't this the same mistake Sega made? (2, Interesting)

MobileTatsu-NJG (946591) | more than 8 years ago | (#14805432)

"I recall a common complaint by development houses about Sega consoles were that they were very difficult to code for because of hardware complexity. Isn't Sony now making the very same mistake that doomed Sega's console business?"

Sega didn't make a single mistake, they made a LOT of them. I imagine you're thinking of the Saturn. It was supposed to be a SNES killer; in other words, all the fancy technology it had was meant to throw sprites on the screen. Then Sony showed up with its fancy-ass 3D architecture, and Sega said oops. So they band-aided some hardware in there to perform 3D functions. Unfortunately, this added another processor to the mix. The result? It was a bitch to program for, it never really reached the performance levels of the PS, and Saturn games looked inferior to PS games. However, in the 2D fighter realm, the Saturn did quite well. As I recall, the Saturn was actually fairly successful in Japan because of this.

The Genesis was pretty easy to program for, at least compared to the SNES. The SNES had a weaker CPU, but it had extra hardware to beef up its graphics. In the end, the SNES won, but not without a couple of years of Genesis superiority. I remember lots of people bitching about the SNES slowing down when it came to a lot of sprites on the screen. This complaint died when Donkey Kong Country hit the scene.

The Dreamcast... well I don't know as much about it. As I understand it, it wasn't too hard to program for. It even had some great hardware for throwing textures on the screen. This gave the DC an edge against the first generation of PS2 games despite having considerably weaker specs.

The Saturn definitely hurt Sega. One could attribute this to the difficulty of programming for the system, and they'd likely be correct. PS ports to the Saturn often came many months after the original release, and they simply didn't do as well graphically. Sega had also flooded the market with hardware: between the Genesis, the Sega CD, the 32X, and the Saturn, the market was pretty confused. Sega wasn't focused where it should have been, and it came back and bit them in the keister.

Sega was in pretty sad shape financially when the DC was released. I vaguely recall that the president of Sega at the time had given up most of his shares of stock to keep the company afloat. (I want to say it was around 100 million dollars, but I don't recall the specifics. I do remember thinking "wow, that's one dedicated dude.") In the end, though, Sega needed several hundred million dollars to get 10 million DCs out there and really start raking in money. But they simply didn't have the assets to do it. Kerplunk, the Dreamcast died, and Sega focused on software.

With all that said, I'm sure a number of people will chime in with their own contributory reasons for Sega's demise. They wouldn't necessarily be wrong, either. It took a number of things to take Sega down, not just one key mistake.

"Speaking of which, is XB360 easier to code for than PS3?"

I read an interview with Carmack not too long ago, and his answer was basically 'yes'. He did NOT go on to say whether the difference would be a huge factor, though. Frankly, I have difficulty imagining it making all that big of a difference, at least from a financial point of view. As these machines get more powerful, the weight of development shifts more towards the artists than the actual programmers. That is just an opinion, though. I'm a 3D artist by trade. Maybe my view is biased. But I know how much it costs to keep me seated at my desk. I know how the work piles up by orders of magnitude as projects get more ambitious. And I have a pretty good sense of how artistry in video games has evolved over the last decade. Compare Super Mario 64 to Resident Evil 4 and you'll see what I mean.

Simple parallelism? (1)

calambrac (722059) | more than 8 years ago | (#14805348)

I haven't done a lot of multi-threaded programming, so maybe this is actually commonly available, but I think a nice language-level parallelism feature would be something that could handle a really basic "for each" type loop:
serialCode();

pfor(element in collection) {
    element.parallelCode();
}

serialCode();
without having to worry about manually setting up the threads, etc.: if there are multiple resources available, they get used; if not, it happens in serial. Is there anything like this out now? btw... how do you get proper indentation using this ecode tag?
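
For what it's worth, OpenMP's parallel for pragma looks like it does more or less exactly this for C/C++ and Fortran. A rough, untested sketch, where serial_code(), parallel_code(), and the int collection are made-up placeholders mirroring the pseudocode above:

    void serial_code(void);
    void parallel_code(int element);

    void process(int *collection, int n)
    {
        serial_code();

        /* the runtime decides how many threads to use; compiled without
           OpenMP (or run on a single CPU) this is just an ordinary loop */
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            parallel_code(collection[i]);

        serial_code();
    }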

Re:Simple parallelism? (1)

DichotomicAnalogist (955934) | more than 8 years ago | (#14805424)

Several variants of Fortran have something like this. Other concurrent languages typically don't need or use for at all, but most offer a concurrent/distributed map or fold, which is the usual "clean" approximation of for.

Time to let C die ? (2, Interesting)

DichotomicAnalogist (955934) | more than 8 years ago | (#14805396)

(Warning: troll venting off.)
Let me summarize:
  1. take one of the most unsafe, slowest-to-compile, pitfall-ish, unspecified languages in existence (ok, I might be exaggerating on the "unspecified" part)
  2. add even more #pragmas and other half-specified annotations which are going to change the result of a program near invisibly
  3. don't provide a debugger
  4. require even more interactions between the programmer and the profiler, just to understand what's going on with his code
  5. add unguaranteed and slow static analysis
  6. ...
  7. lots of money?
Am I the only one (with Unreal's Tim Sweeney [uni-sb.de]) who thinks that now might be the right time to let C die, or at least return to its assembly-language niche? I mean, C is a language based on technologies of the 50s and 60s (yes, I know, the language itself only came around in the early 70s), and it shows. Since then, the world has seen
  • Lisp, Scheme, Dylan, ... -- maximize code reuse and programmer's ability to customize the language, automatic garbage-collection
  • ML, Ocaml, Haskell, ... -- remove all hidden dependencies, give more power to the compiler, make code easier to maintain, check statically for errors
  • Java, C#, VB, Objective-C ... -- remove pitfalls, make programming easier to understand, include a little bit of everything
  • Python, Ruby, JavaScript -- maximize programming speed, make code readable, make writing prototypes a breeze ...
  • Erlang, JoCaml, Mozart, Acute -- write distributed code (almost) automatically, without hidden dependencies, with code migration
  • Fortress -- high-performance low-level computing, with distribution
  • SQL, K, Q -- restrict the field of application, remove most of the errors in existence
  • and probably plenty of others I can't think of at the moment.

And what are C and C++ programmers stuck with ?
  • a macro system which was already obsolete when it was invented
  • slow compilers
  • no modules or any reasonable manner of modularizing code
  • neither static guarantees nor dynamic introspection
  • no static introspection
  • an unsafe language in which very little can be checked automatically
  • mostly-untyped programming (not to be confused with dynamically-typed programming)
  • about a thousand different incompatible ways of doing just about everything, starting with character strings
  • manual memory management (yes, I know about the Boehm garbage collector -- but I also know about its limits, such as threads)
  • a false sense of safety with respect to portability
  • extreme verbosity of programs.

So now we hear that IBM is trying to keep C alive on life support. IBM, please stop. Let granddaddy rest in peace. He had his time of glory, but now he deserves his rest.

Oh, and just for the record: I program in C/C++ quite often as an open-source developer, and my field is distributed computing. But I try to keep those two subjects as far away from each other as I can.
(well, venting off feels good)

I really hope.. (1)

LeddRokkenstud (945664) | more than 8 years ago | (#14805439)

This octopiler doesn't allow the PS3 to be the beginning of Skynet.