
IEEE Spectrum On The PS3 Learning Curve

Zonk posted more than 7 years ago | from the see-the-cells-divide dept.

PlayStation (Games) 88

An anonymous reader writes ""The Insomniacs" is the cover article in the December issue of IEEE Spectrum, discussing developers ramping up to the PS3 hardware. The article features Insomniac Games, who developed the PS3 launch title Resistance: Fall of Man. Despite mixed reports in the press, the Insomniac folks are delighted to be working with Sony's technology, and describe the process of helping to make or break a console launch." From the article: "Despite the delays, there's something inside the PS3 that burnished Sony's reputation as a hardware company. The heart of the machine is the powerful new Cell Broadband Engine microprocessor. Developed over the last five years by Sony, IBM, and Toshiba on a reported budget of $400 million, the Cell is not just another chip: it is a giant leap beyond the current generation of computer processors into a nextgen muscle machine optimized for multimedia tasks."


88 comments

Oh man.... (3, Funny)

Pojut (1027544) | more than 7 years ago | (#17153002)

"...the Cell is not just another chip: it is a giant leap beyond the current generation of computer processors into a nextgen muscle machine optimized for multimedia tasks."

Anyone else react the same way I did?

Fox News is now spinning CPU development?

Re:Oh man.... (2, Insightful)

Broken scope (973885) | more than 7 years ago | (#17153074)

If that's not a canned response, I don't know what is. OMG IT TEH CELL!!!! IT SO FUCKING AMAZING LOLZ!!!! HAX!!!!!!
Cellular processing. DO something with it on the console that could only be done on the console, then TOUT IT, DAMMIT.

Re:Oh man.... (0)

Pojut (1027544) | more than 7 years ago | (#17153124)

Not to mention the fundamental flaw in what the guy said...he said that it is a nextgen muscle machine...

at this point, wouldn't that make it current gen?

Re:Oh man.... (1, Insightful)

AKAImBatman (238306) | more than 7 years ago | (#17153174)

You heard Hirai at E3: "The next generation doesn't start until Wii say it does!"

I don't think Nintendo has remembered to make the announcement yet. :P

I had to crank up the 'fish for this one. (1)

numbski (515011) | more than 7 years ago | (#17154528)

I entered:

"the Cell is not just another chip: it is a giant leap beyond the current generation of computer processors into a nextgen muscle machine optimized for multimedia tasks."

and the fish sayeth:

"Cell is not just another chip, it is a big buzzword buzzword buzzword processor buzzword buzzword buzzword buzzword."

So it's not just another chip...it's a processor! :D

Re:I had to crank up the 'fish for this one. (1)

Broken scope (973885) | more than 7 years ago | (#17154666)

I have an IBM systems journal about the Cell. They mention some cool things. They admit to some limitations. Then they think up ways to make games nickel-and-dime the hell out of you. Want to join a server? $.05. Want a gun? Ohhhh, $.20. Ohhhhhh, want to move? $1.00!!!!

Re:Oh man.... (0)

Anonymous Coward | more than 7 years ago | (#17159324)

Were you not paying attention at E3?

Real time weapons switching?
Giant enemy crabs?
Ridge Racer?
Monsters Instantiating?

All these things made possible through the sheer power of Cell Processors.
No other consoles could ever even attempt to compete.

Re:Oh man.... (1)

pdabbadabba (720526) | more than 7 years ago | (#17153146)

naw. Then it would be a "nextgen freedom machine" [cue rousing patriotic music]...

Re:Oh man.... (1, Funny)

steveo777 (183629) | more than 7 years ago | (#17153156)

That's okay. They said the same thing about the Emotion Engine years ago, and everyone found out exactly how life-like all those images were when they fired up Evergrace [wikipedia.org] for the first time. (Oh, and if you haven't played it, save yourself the trouble: light firecrackers in both ears so you'll never hear again, then gouge your eyes out with rusty scalpels... trust me, it would be a lot more fun.)

Re:Oh man.... (2, Insightful)

Quantam (870027) | more than 7 years ago | (#17153944)

I think you're overreacting. While they may be exaggerating a bit, the Cell is a pretty insane piece of hardware. I'm a professional programmer (though not on the Cell), and I've been reading various architecture specs for the Cell. Its peculiar architecture means that it's difficult to make use of its full power for many types of tasks (don't ask me why they're selling Cell-based blade servers; it doesn't make much sense to me); but if you have an application that fits with what the Cell is optimized for, that thing is ungodly fast. Expect to see it become popular in rendering and scientific clusters. Whether it's a good fit for a game console requires more industry-specific knowledge than I have.

Re:Oh man.... (2, Insightful)

Pojut (1027544) | more than 7 years ago | (#17154580)

Noted, but something that raised a red flag for me:

If it were truly as powerful as they say (for gaming purposes, of course), they wouldn't need to talk it up. They would simply say "hey, you will see... what you will experience will be beyond what I could convey to you here today."

I THINK Nintendo did something along those lines, if I remember correctly... and now they have the little console that could going strong.

Re:Oh man.... (2, Interesting)

be-fan (61476) | more than 7 years ago | (#17154806)

The article doesn't even involve Sony talking up the chip. It's an IEEE article.

A lot of the Cell press has nothing to do with Sony, actually. There are a lot of EE/CompE types who get a hard-on over Cell for the same reason they do for Itanium (simple, fast hardware driven by complex compilers).

Re:Oh man.... (1)

imroy (755) | more than 7 years ago | (#17157352)

Are you seriously trying to compare the Cell with the Itanium? The problem with Itanium (and probably all IA64) is that the CPU core cannot adapt to changing conditions. Everything's encoded into the instructions by the compiler and, AFAIK, cannot be rescheduled by the CPU. So scientific and engineering workloads written in FORTRAN run great, but nothing else does. It's pretty mediocre at common server tasks, e.g. web, email, etc.

Now, the Cell is basically an under-performing PowerPC G5 core with 7 or 8 SPE vector coprocessors. Nobody is seriously suggesting that you drive the coprocessors with compiler magic. Auto-vectorising compilers have been around for a while and, AFAIK, they haven't achieved much. IBM offers a number of models for scheduling code to run on the SPEs (e.g. fixed, dynamic, etc.), but it always involves writing code especially for the SPEs.

The Cell is certainly facing a steep learning and development curve, but it's not because of compilers. Programmers simply have to learn how to identify the parts of their codebase that would run well on the vector units and break them out into little routines. Now, other development tools and a good framework for dealing with these SPE routines would help, and I believe IBM and others have already put quite a bit of work into those sorts of things, but the compiler is not the weak point.

Re:Oh man.... (1)

be-fan (61476) | more than 7 years ago | (#17157434)

Way to miss the point COMPLETELY.

Both Cell and Itanium remove complexity from the hardware, in order to fit more hardware into the available space, at the expense of more complexity in the compiler. The Itanium does this by using VLIW and eliminating out-of-order execution (OOO); the Cell does this by eliminating OOO in the PPE and SPE, and eliminating dynamic branch prediction and hardware-managed cache in the SPE.

Hardware guys jack off to this sort of thing, because they don't have to write software for the damn things. That's one of the reasons why there has been a lot of press about Cell --- from a hardware guy's point of view (and remember, this article was in an IEEE publication), it's really an aggressive design.

Re:Oh man.... (1)

imroy (755) | more than 7 years ago | (#17157636)

Ok, so both the Itanium and Cell have simplified but highly parallel hardware. But where is the complexity in the Cell compiler? Everything I've read about the Cell says that programmers have to break up their code themselves for the SPEs, not the compiler.

Re:Oh man.... (1)

be-fan (61476) | more than 7 years ago | (#17158426)

IBM is working on a compiler to split that stuff up, but that's sort of beside the point. Writing a compiler for Cell is more traditional than writing one for Itanium, but it's still really hard. The SPEs and PPE are in-order, and unlike Itanium, don't have any special latency-hiding mechanisms like the advanced-load table. They also have long pipelines (~18 stages), high cache latencies (5-cycle L1 on the PPE, 6-cycle LS load on the SPE), and poor (PPE) to no (SPE) branch prediction. All the technology exists to write a compiler for such an architecture (IBM has it), but it's very hard. GCC is not going to generate good code for this thing, for example.

Anyway, my point wasn't really that compilers for Cell are hard, but rather that one of the reasons it's getting a lot of press is that it's an aggressively simple architecture. There is a large contingent of EEs (those that worship the Alpha, among other things) who really like Cell because of its ambitious design. The press regarding Cell has indeed come more from the CPU folks responding to information from IBM than from anyone in the game community (including Sony).

Re:Oh man.... (1)

Retric (704075) | more than 7 years ago | (#17174350)

The programmer needs to find some use for 7 SPEs, but the compiler lets you use each SPE to its fullest.

Basically, the Cell is an 8-way chip designed around having smart programmers and smart compilers that optimise for it. You can't base a desktop CPU on a design like this, because you need to run old software fast; but when starting over, you can go for it.

Re:Oh man.... (1)

default luser (529332) | more than 7 years ago | (#17165954)

The Itanium looks great to hardware engineers until you realize that it executes BOTH paths from a branch (until it can verify the correct branch direction), then throws out the incorrect path. When you combine this with the fact that the compiler cannot always fill each VLIW bundle with the full 3 instructions (due to dependencies between instructions), you've got a mediocre architecture for general-purpose code. Still, at least the Itanium is a scalar processor with multiple execution pipes - this is much simpler than programming for Cell.

The SPEs in the Cell run as independent threads, so getting the most out of your code is mindbogglingly complex, and you can't easily make a compiler to handle it. There are only a handful of general processing tasks on this planet that can make full use of multiple independent processing threads:

Rendering (an obvious parallel task) is already handled by the GPU. Physics is another obvious parallel task, but it can only add so much to the game (and is limited by the triangle rate of the video hardware anyway). AI is a much tougher problem, and making it run in parallel AND take full advantage of an SPE thread or two will be a difficult challenge. And sure, it's easy to parallelize tasks like physics, but once you have many threads running, real-time synchronization becomes a serious problem.

Unfortunately, moving the complexity to the software side is the way of the future, because we're seeing diminishing returns from moving the complexity to the hardware side. I like to think of Cell in one good light: it will give software designers a taste of what a pain real-time hardware design is like. Having to deal with timing and synchronization of 8 or more high-speed tasks is nothing compared to doing the same with millions of high-speed transistors on a processor.

Re:Oh man.... (1)

try_anything (880404) | more than 7 years ago | (#17167464)

(simple, fast hardware driven by complex compilers)

Unfortunately, the only compiler that can produce decent code for the Cell is the most complex, expensive, and undependable compiler on the market today.

Re:Oh man.... (1)

tbannist (230135) | more than 7 years ago | (#17161832)

Actually, "they" don't "need" to talk it up. But, marketing people get bored too, you know.

Re:Oh man.... (1)

Pojut (1027544) | more than 7 years ago | (#17161944)

Marketing is like a bar with a four-drink minimum; what are you talking about?

Re:Oh man.... (1)

tbannist (230135) | more than 7 years ago | (#17162098)

Exactly, nobody havers like marketing!

And occasionally they have to put out, you know, stuff so that the people who hired them think they're actually working.

Re:Oh man.... (1)

Pojut (1027544) | more than 7 years ago | (#17162218)

You mean like a consultant, Dogbert-style?

"Here is how it works. I will walk around with large empty binders for a couple of weeks. At the end of the couple of weeks, I will call your business a failure and give you the bill."

Re:Oh man.... (2, Insightful)

RzUpAnmsCwrds (262647) | more than 7 years ago | (#17154620)

Its peculiar architecture means that it's difficult to make use of its full power for many types of tasks (don't ask me why they're selling Cell-based blade servers; it doesn't make much sense to me); but if you have an application that fits with what the Cell is optimized for, that thing is ungodly fast.


So is a GPU. So is a DSP. So is an FPGA. So is an ASIC.

There have always been ICs that are "insane" compared to CPUs - the CPU's power comes not from its raw performance, but from its ease of programming and flexibility. My GeForce 6200 can do more GFLOPS than the fastest Core 2 Duo, but you're not going to run Linux on it.

Cell is a very interesting piece of hardware, and it will no doubt see wide use in many different applications. But calling it a "revolution" is just plain wrong. It's just a better, more integrated version of what we have been doing for years.

Re:Oh man.... (1)

ClamIAm (926466) | more than 7 years ago | (#17155406)

My GeForce 6200 can do more GFLOPS than the fastest Core 2 Duo, but you're not going to run Linux on it.

I see your 6200 and raise you a Tamagotchi.

Re:Oh man.... (1)

mrchaotica (681592) | more than 7 years ago | (#17156194)

My GeForce 6200 can do more GFLOPS than the fastest Core 2 Duo, but you're not going to run Linux on it.

O rly? [nvidia.com]

(Yes, I'm aware that the 6200 isn't CUDA-compatible. And yes, I'm aware that CUDA doesn't compile all the code to run on the GPU. It is getting closer, however...)

Re:Oh man.... (1)

adam31 (817930) | more than 7 years ago | (#17156478)

But calling it a "revolution" is just plain wrong. It's just a better, more integrated version of what we have been doing for years.


See, it's such an integrated version of what we've been doing for years that I would call it a revolution... not to mention the education possibilities it opens. Technologically, it takes different facets of computing -- supercomputers using high-bandwidth interconnects (turns into the 25/30 GB/s IOIF and the 300 GB/s EIB), GPUs using separate superscalar vector processors, and mainstream CPUs growing bigger and bigger banks of low-latency memory (becomes programmable SPUs with 256 KB of single-cycle local memory) -- and combines them with memory of high enough bandwidth to pump and drain 8 local processors.

My hope is that, with the availability of PS3 Linux, colleges offering "Game Development" curricula will finally begin to offer real professional-level training on real console hardware. You wouldn't believe how many people graduate with the concept of a computer consisting of just 2 components -- CPU and main memory in one, and GPU and VRAM in the other.

I think it's not that Cell is particularly difficult to program for, or in the realm reserved for genius. It's just that people aren't taught to conceptualize the flow of data or its latency, or how the internal pipelines and dependencies of algorithms map onto hardware. With Cell, you have to address this complexity explicitly, and I think students will learn that it's complicated... but not too complicated. You just have to learn to think "how does the hardware want to process my algorithm?"

Re:Oh man.... (1)

7Prime (871679) | more than 7 years ago | (#17154772)

But the problem is, for all the power increase you get, you have to spend that much more time and money to actually take advantage of it. To make bigger, more immersive worlds, you have to invest in more programmers, more art designers, more architects, etc. The question is, are game companies ready and willing to take advantage of this power to its full extent? Or will it simply be a distraction from what's really important: making a good game?

Re:Oh man.... (2, Insightful)

xero314 (722674) | more than 7 years ago | (#17155554)

But the problem is, for all the power increase you get, you have to spend that much more time and money to actually take advantage of it.

This is only true in the sense that unified libraries or processes need to be conceived and/or developed before it becomes easy to take advantage of the more complex architectures. This can be seen in the history of GPUs: originally it took specialized knowledge of the specific GPU to get the most out of it, but now standardized APIs have been developed which allow one to make effective (though not necessarily maximal) use of the GPU without needing to understand the specific hardware. The same thing will happen with the Cell processor, in that common libraries will be converted to effectively use the power of the processor (or more specifically its SPEs). In the end, developing for the Cell will be as easy as developing for any processor, and yes, some things will be better on the Cell (single-precision floating-point-intensive applications) and some not (double-precision floating-point-intensive applications, though I assume a good library will get around this as well by using single-precision operations whenever possible, even if they have to do more total calculations).

The question is, are game companies ready and willing to take advantage of this power to its full extent?

Ready? Probably not, but some are certainly willing. Insomniac (Ratchet and Clank), Naughty Dog (Jak), and SCEI (Ico) are just a few from whom I am fairly confident we will see some amazing work.

Re:Oh man.... (1)

mrchaotica (681592) | more than 7 years ago | (#17156228)

In the end developing for the Cell will be as easy as developing for any processor

No it won't, because to take advantage of the Cell you have to write parallel code. Any way you slice it, that's always going to be inherently harder than writing single-threaded code.

Re:Oh man.... (1)

xero314 (722674) | more than 7 years ago | (#17158514)

No it won't, because to take advantage of the Cell you have to write parallel code.

Incorrect. Your program must compile to parallel instructions -- an important and not very subtle difference. This can be hidden by a good library. The most you should be expected to do is manage threading, which exists in most architectures out there. The fact that a thread would run on an SPE rather than a general-purpose CPU should be transparent with the right APIs. The SPEs are Turing complete and so should be able to execute any code (in their instruction set), and the compiler should determine which code is appropriate for the PPE rather than an SPE. Compiler writers and select library developers will need to know the lower-level details, but the average developer should not.
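
For what it's worth, IBM's SDK already leans this way: libspe exposes each SPE as something you drive from an ordinary PPE thread. A minimal sketch using the libspe2-style API as I understand it (the program handle name is illustrative, and it would normally be embedded with the SDK's embedspu tool -- treat the details as assumptions, not gospel):

    #include <libspe2.h>
    #include <pthread.h>

    extern spe_program_handle_t my_spe_program;  /* hypothetical embedded SPE binary */

    static void *spe_thread(void *arg)
    {
        spe_context_ptr_t ctx = (spe_context_ptr_t)arg;
        unsigned int entry = SPE_DEFAULT_ENTRY;
        /* Blocks until the SPE program stops, so each SPE is driven by
           one ordinary pthread on the PPE side. */
        spe_context_run(ctx, &entry, 0, NULL, NULL, NULL);
        return NULL;
    }

    int main(void)
    {
        spe_context_ptr_t ctx = spe_context_create(0, NULL);
        spe_program_load(ctx, &my_spe_program);

        pthread_t tid;
        pthread_create(&tid, NULL, spe_thread, ctx);
        pthread_join(tid, NULL);

        spe_context_destroy(ctx);
        return 0;
    }

From the calling code's point of view, that's just pthreads; whether the work lands on the PPE or an SPE is a detail of the library underneath.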

Re:Oh man.... (1)

UnknownSoldier (67820) | more than 7 years ago | (#17159654)

No disrespect, but have you even done any console or graphics development??

The graphics pipeline is much more rigid and simple compared to trying to make the best use of a multi-threaded architecture. Not all algorithms are parallelizable; most are serial. Sure, you can split the general execution of your game loop across N cores into streaming, render, physics, audio, input, and AI, but each of the sub-components is still relatively single-threaded, even before adding the further complications of dealing with multi-threaded code! And with designers doing more coding these days, they sure as hell aren't going to want to touch MT.

The other problem you seem to completely ignore is that each "GPU" core still needs access to the scene assets. I.e., lighting data needs to be shared -- meaning that when changes happen, you need to propagate them amongst all the cores. That's a lot of extra data pushing. Sure, you could dedicate a core for landscape rendering, one for each character, one for static objects, one for dynamic objects, but how are you going to synchronize access to the framebuffer? Oh, and let's add multiplayer complexity on top of that.

I'm a PS2 dev -- the PS2 is already complicated enough that getting optimal performance out of it is all about balancing the various pipelines / processors. And you're going to tell me that some "magic lib" is going to make it easy on the PS3?! Riiiiiiiight.

The root problem is that we still deal with far too many algorithms that only run in serial. There is a logical limit to how far the game loop can be broken up into parallel pieces. My guess is that the XBox 360 provides a good starting point for slowly weaning PC programmers onto thinking about, and eventually coming up with, solutions for good engine design. The PS3 just overly obfuscates an already complicated subject.

Cheers

--
When your PS2 render guy wears an XBox shirt to work, you have to wonder why Sony can't take the time & money to provide dev tools as simple as Microsoft's.

Re:Oh man.... (1)

xero314 (722674) | more than 7 years ago | (#17164102)

Thanks for the insightful information. I like what you have to say, and you hit the nail on the head in a roundabout way. Working effectively with architectures such as the PS2 and PS3 does require a change in thought, but interestingly enough it's a reversion back to an older thought process rather than a newer one. I am a software engineer, though I have not worked on any game development in many years, but these newer architectures do get me interested again. If you look at the early PCs, you will see that working with processes executing on separate processors was actually very common.

I recall the need to write portions of programs so that they could run on the SPUs of the main system or the CPUs of a peripheral (this is when certain manufacturers, like Commodore, had specialized CPUs in their peripherals) such as the sound processor or even an external drive. This code was not always specific to the processor in question -- for example, running small logic code on a drive's CPU during times when no disk access was happening, so that the main CPU could be free for other purposes. This was when you tried to get the absolute most out of the system you were working on, and programs were considerably more efficient than they are today. I really doubt modern programming is as difficult, even on these "new" architectures, as it was when the developer had to directly manipulate memory by address, at least not for your average developer.

My point is that the serial vs. parallel problem is often, but not always, a matter of the way the problem is thought about, not just how it is coded.

That all being said, as a bit of an aside:

And you're going to tell me that some "magic lib" is going to make it easy on the PS3?!

I'm not saying that it will be done, but I am saying that it could and should be done. Remember that the reason a lot of developers don't understand the issues you brought up is that they don't need to know them, since there are libraries that hide those complexities. It is very possible, since it has been done, to write complex games that compile cross-platform through the use of common libraries, APIs, and languages such as OpenGL or Cg.

The only issue here is whether the Cell processor will have enough longevity and popularity to make this happen. If Sony, or another manufacturer, were to use the Cell processor in the next generation, we would certainly see some of these libraries and tools become available.

Re:Oh man.... (1)

be-fan (61476) | more than 7 years ago | (#17157562)

That's not really true. More powerful hardware allows you to use higher-level programming techniques, pre-made game engines, etc., all of which reduce cost and the time required. Also, a lot of game art is scalable, or can be made scalable. Shaders, for example, don't really care what the polygon count is, or what the output resolution is. Indeed, more power can help reduce art development time as well. Game models, for example, are usually developed at a high level of detail, and then the polygon count is pared down to optimize for the target system. If you don't have to do this, you can save time. And powerful systems also open the way for more automated content-generation tools that were perhaps not feasible before, because the algorithms couldn't tune for low poly counts as well as a human designer could. For example, it's pretty easy to get a terrain generator to create detailed terrain, but more complicated to get a human designer to make terrain that looks good at low polygon counts.

Yes, game budgets are going up, but attributing it to more powerful hardware is a bit of a cop-out. The major reason game budgets are increasing is that gaming as an industry is getting bigger. The gaming industry is bigger now than Hollywood, and people are expecting more highly polished products.

Re:Oh man.... (2, Informative)

xero314 (722674) | more than 7 years ago | (#17154928)

don't ask me why they're selling Cell-based blade servers
They are selling Cell-based servers, blade or otherwise, because the Cell processor was designed with scientific computing in mind. For those who don't know, this is the category of computing done on all supercomputers at this time. IBM is hoping to replace the current generation of x86/POWER-based supercomputers, and superclusters, with Cell-based clusters. The current top-ranked supercomputer is capable of 367 teraflops peak using 131072 IBM PowerPC CPUs. This configuration could, in theory, be replaced with 896 Cell processors. That is a massive savings in power consumption, physical space, and cooling requirements. It could also be used to scale up to even faster supercomputers using thousands of Cell processors, which were built for distributed computing as well as the scientific aspect.
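
A quick back-of-the-envelope check on those numbers (note that the per-chip figure this implies sits at the very optimistic end; the commonly cited single-precision peak for Cell's 8 SPEs at 3.2 GHz is about 205 GFLOPS, which would roughly double the chip count):

    367 TFLOPS / 131072 PowerPC cores ≈ 2.8 GFLOPS per core
    367 TFLOPS / 896 Cells            ≈ 410 GFLOPS per Cell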

Re:Oh man.... (1)

aevans (933829) | more than 7 years ago | (#17155502)

Whoa! A professional programmer on Slashdot!

Re:Oh man.... (-1, Troll)

Anonymous Coward | more than 7 years ago | (#17156716)

Whoa! A professional programmer on Slashdot!

Yeah, it's amazing, isn't it? Please let us in on your secrets Mr. Programmer! Even though I bet you develop crap in VB.NET or Java, I really want to know your opinion of the Cell because you can PROGRAM!!!!1111 which we all know is a dark art restricted to all but the most brilliant people on the planet!

I live in awe of your 31337 VB sk1llz. Please tell me how to live my life next. k thx bye.

Re:Oh man.... (1)

Anne Thwacks (531696) | more than 7 years ago | (#17159740)

don't ask me why they're selling Cell-based blade servers; it doesn't make much sense to me

It's pretty clear that you are not involved in any projects requiring real-time voice recognition of hundreds of datastreams simultaneously, then. This is the only realistic architecture for that kind of task. It's also pretty good for many kinds of massive database processing, such as might be required for real-world artificial intelligence -- face recognition from poor CCTV pictures, say -- and for problems such as computing the risk envelope of incoming meteor strikes, where there are many particles on poorly determined trajectories. Once we have a decent tool chain (RTFA), we can expect to see games where you can talk to your colleagues on the battlefield, and have them recognise your (non-American) accent, and reply in appropriate language, while acting on your orders.

In short, it will address problems where the Pentium is three orders of magnitude or more short of the processing power required.

I accept that the performance of Cell architecture with MS Word is not impressive. MS Word actually works fine on my P1, so no problem here.

Re:Oh man.... (1)

big4ared (1029122) | more than 7 years ago | (#17156386)

They should at least get their facts straight. The article continually states that the Cell has 1 PPU and 8 SPUs, which is WRONG. It has 1 PPU and 7 SPUs, because one of the SPUs is disabled to increase yields.

Re:Oh man.... (1)

be-fan (61476) | more than 7 years ago | (#17157602)

The article is right and you are wrong. Cell is a joint development of Sony and IBM. The ones this article is talking about -- the ones that will be used in scientific computing, manufactured by IBM -- have 8 SPUs, because IBM throws away the ones that have a defective SPU. Sony also makes Cell processors for use in the PS3, and that Cell is specced to have 7 active SPUs.

Re:Oh man.... (0)

Anonymous Coward | more than 7 years ago | (#17169566)

All current Cell processors have 8 cores, whether produced by Sony in Japan or IBM in North America. The PS3 Cell CPUs have the 8th core disabled, and you can actually tell whether the disabled core was good or not by reading a particular bit in a register (my NDA would prohibit me from saying more than this). It's actually theoretically possible to re-enable that core.

I've verified that my friend's PS3 (20GB) had a working 8th core, but unfortunately my PS3 (60GB) had a bad 8th core. I'm thinking of trading with him and working on re-enabling that core.

Re:Oh man.... (1)

be-fan (61476) | more than 7 years ago | (#17176900)

That's exactly what I said. The Sony Cells are specced to have 7 active cores, so they can still use ones whose 8th core is defective.

Serious problems? I should say so. (3, Funny)

Control Group (105494) | more than 7 years ago | (#17153266)

See any serious problems with this story? Email our on-duty editor.

Yeah, I sure do - it's about good news for the PS3.

That can't be right.

Just wait (0, Troll)

Square Snow Man (985909) | more than 7 years ago | (#17153274)

I think we all should just wait and see what this Cell CPU is capable of; just like the "Emotion Engine", it was laughed at but did fairly well (imagine, for example, "Hitman: Blood Money" coming out when the PS2 was released :)). Also, don't expect games from EA or Ubisoft to be using all of the PlayStation 3's power, as they will probably write them in a generic way so they can be ported around from crapbox360 to PlayStation 3 and vice versa.

Re:Just wait (1)

fitten (521191) | more than 7 years ago | (#17153444)

Not a fanboi, are you? You are right, though... Exclusives will make or break the PS3, because it's so different that the only way for it to shine will be the exclusives. Similarly, the cross-platform games will all default down to a single PPC (with two threads), because that's all the two consoles have in common, and the graphics libraries will hopefully take care of the graphics differences. The rough part of exclusives for the PS3 is that the programming model for the Cell is very, very complicated... much more complicated than the XBox 360 model (which is just plain SMP). It's further complicated by the tools available for it being (from what I hear) not so great.

The Cell's programming model isn't that new... it's been around for a while (a PPC surrounded by a bunch of DSPs that talk to each other, the PPC, and the world through DMA). I've programmed to that model before... it's a massive pain compared to the relative simplicity of generic SMP.

Re:Just wait (1)

Square Snow Man (985909) | more than 7 years ago | (#17153658)

To make life easy for developers, they (IBM and Sony) should start writing patches for GCC so it can produce code that makes full use of the Cell CPU. This need not be a hard task; they can publish guidelines for programmers to make it easy for the compiler to optimize code, for example. In the end, I hope it will be like switching on a compile flag, as you would do to compile with support for SIMD.

Not so easy or useful. (1)

LWATCDR (28044) | more than 7 years ago | (#17153728)

SIMD is easy compared to working with the Cell.
I am sure that eventually there will be some toolkits that optimize some common tasks for the SPEs, but it will not be easy.
My guess is that it will be a few years before any game really uses the Cell to the max.

Re:Just wait (2, Insightful)

badboy_tw2002 (524611) | more than 7 years ago | (#17153918)

Can a compiler flag redesign your code? No? Then it can't do what you're asking. In order to take full advantage of the Cell, you have to break your task up into parallel units and have them run on the SPUs. The SPU has only a small local store, so you can't just throw any old piece of code at it. If you were writing something for a similar architecture I could see it, but otherwise just enabling "cell mode" won't get you much.
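
To make the contrast concrete (a hedged, illustrative sketch, not anyone's actual engine code): a SIMD flag works because a flat loop already contains the parallelism, whereas an SPU port needs the programmer to invent a task decomposition, DMA staging, and a working set that fits the local store -- none of which is visible in the source for a compiler to discover.

    /* Independent iterations: this is what auto-vectorization flags
       can handle on their own. */
    void scale(float *dst, const float *src, float k, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = src[i] * k;
    }

    /* What a flag can't do: decide that this loop should become an SPU
       program, chunk dst/src into local-store-sized blocks, overlap the
       DMA with compute, and split the blocks across the available SPUs. */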

Re:Just wait (1)

reanjr (588767) | more than 7 years ago | (#17155278)

No, but IBM has some pretty sweet refactoring tools. If they wanted, they could come up with a tool that does something similar.

Re:Just wait (1)

fitten (521191) | more than 7 years ago | (#17162802)

Good luck with that... people have been working on it for a long time with no solution yet. Just refactoring code is not horribly difficult. Data partitioning and data flow (overlapping communication with computation in meaningful ways) are a bit more difficult to handle automagically just by looking at a bunch of source code (and those are only examples of the problems such a tool would face).

Re:Just wait (1)

be-fan (61476) | more than 7 years ago | (#17153842)

The Cell's model isn't so much complicated as it is different. The only game programmers that have any experience with multi-threading are PC developers, and SMP is the only thing they know. SMP is hardly simple, and it's very hard to get right in a scalable way. In contrast, Cell's SPEs aren't DSPs, but fully general-purpose processors. As a result, if you're familiar with an MPI model of concurrency (Cell has a built-in message-passing mechanism to facilitate this), then you should feel right at home.
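
(For the curious: the "built-in message-passing mechanism" is, among other things, a set of hardware mailboxes -- small 32-bit FIFOs between each SPE and the PPE. A minimal sketch of an SPE-side echo loop using the spu_mfcio.h intrinsics; the shutdown convention is mine, not the hardware's:)

    #include <spu_mfcio.h>

    int main(void)
    {
        for (;;) {
            uint32_t msg = spu_read_in_mbox();  /* stalls until the PPE writes */
            if (msg == 0)
                break;                          /* 0 as a shutdown token (our convention) */
            spu_write_out_mbox(msg + 1);        /* 32-bit reply back to the PPE */
        }
        return 0;
    }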

Re:Just wait (1)

k8to (9046) | more than 7 years ago | (#17154786)

The only programmers familiar with multithreading are PC developers? Do you think the concurrent programming on the PlayStation 2, for example, is really so different? Concurrent hardware programming on game consoles has in some form been standard issue since at least the days of the Genesis with its 68k and Z80 (IIRC), and probably longer than that. Of course the skills of generations ago are fairly different, but I'd say there are plenty of people in the industry familiar with concurrency.

I'd go so far, actually, as to suggest the average Windows developer is poorer at concurrency than the average game-programming veteran.

Of course, none of this really contradicts your argument that concurrency is not easy and will require adaptation.

Re:Just wait (1)

be-fan (61476) | more than 7 years ago | (#17157356)

That's not really the same sort of concurrency, though. Game consoles have tended to have asymmetric concurrency, so the work involved less splitting up of a single algorithm to run on multiple processors and more figuring out how to run different algorithms on each processor. That's where a lot of the "one core can handle graphics, one core can handle sound, one core can handle AI" bunk comes from. Both the Xenon's cores and the Cell's SPEs are symmetric (with each other), so algorithms can be designed to use all of the processors for the same algorithm. Scientific programmers have a lot of experience with this (parallelizing a big matrix inversion in a finite-element code); game programmers do not.

Re:Just wait (1)

k8to (9046) | more than 7 years ago | (#17232142)

Hmm, maybe you're right about the difference between hardware juggling and threading.

I disagree, though, that message passing and MPAR will give a lot of insight into optimal 9-way "SMP" design. MPAR algorithms typically assume "sufficient" nodes. 9 is quite discrete.

Re:Just wait (1)

fitten (521191) | more than 7 years ago | (#17162752)

Heh, yeah... I'm familiar with MPI (message-passing) models of concurrency and with various MPI (the Message Passing Interface standard) libraries. From everything I've read, the SPEs are comparable to modern DSPs. They have lots of functionality but aren't really suited to running a general-purpose OS (compared to the PPE, for example) for a number of reasons. Do you have a good link that talks about Cell's built-in message passing? Having built-in message-passing functionality would obviously help a lot over having to write libraries to handle such things built only on their DMA capabilities.

Sony Hype Machine (1, Informative)

Cadallin (863437) | more than 7 years ago | (#17153306)

The only thing I've seen for the PS3 that looks even remotely impressive is White Knight, and I won't believe for a second that it would be impossible on the 360. If the PS3 has any edge at all over the 360, it's the one-year-newer Nvidia graphics chip. The PS3 MAY have more memory speed than the 360, but the 360 has twice the capacity (512 vs. 256), so I'd say that's a wash, especially since I'm not sure how the RAMBUS components in the PS3 compare latency-wise to the GDDR3 in the 360. I'm sick of the Sony Hype Machine. Of course, I'm also tired of the Final Fantasy fanboys that keep Sony alive in the console market. Both Sony and Squeenix need to be taken down a notch or two.

Re:Sony Hype Machine (1)

Sycraft-fu (314770) | more than 7 years ago | (#17153684)

Actually, the nVidia chip probably isn't anything special. Given the date that the PS3 was supposed to ship and the specs that have been released, it looks like the RSX is basically a 7900. While a 7900 is a great card, it's nothing more than what the 360 can do (which is something of a modified ATi 1900).

http://en.wikipedia.org/wiki/RSX_'Reality_Synthesizer' [wikipedia.org]

Re:Sony Hype Machine (-1, Troll)

Anonymous Coward | more than 7 years ago | (#17153946)

"the RSX is basically a 7900."

What an idiot.

The PS3's graphics system is a hybrid design where you have massive geometry-transforming power on the Cell side connected to massive pixel-painting power on the RSX side. Only a fucking idiot would talk about 'the GPU in the PS3'. PS3 engines are designed to load-balance between the two halves of the rendering hardware (Cell/RSX) depending on the nature of the currently displayed scene composition. Cell is used for the majority of transform operations, but RSX can do so to a lesser extent. And RSX is used for the majority of pixel-painting tasks, but Cell is able to do so to a lesser extent.

Every fucking clown with a PC now thinks he has the competence to run his mouth off about console hardware. Please keep your dimwitted mouth shut about shit you have no clue about.

Re:Sony Hype Machine (1)

fistfullast33l (819270) | more than 7 years ago | (#17155098)

I can't tell if that's sarcasm or truth. Scary.

Re:Sony Hype Machine (1)

ravyne (858869) | more than 7 years ago | (#17157232)

Sorry to break it to you, but you're dead wrong, buddy.

The RSX does have vertex processing units, 8 to be specific, which are the primary sources of geometry calculations. It's true that the Cell *can* do these calculations, and can actually do them quite well, but it cannot do even half of what the RSX is capable of, even discounting all the other tasks the Cell would typically be handling.

It's again true that the Cell *can* do pixel calculations, and again it actually does them quite well; however, the memory read bandwidth from GPU RAM into the Cell CPU is excruciatingly slow, at under 20 MB/s. There's a VRAM-to-SPU DMA transfer mode that's much faster, but if you tried that you would end up with coherency problems between the GPU's buffers (color, depth, stencil, etc.), because the Cell memory would not receive updated info from the GPU as it goes on drawing more stuff.

Bottom line: to call the Cell/RSX combo a "hybrid design" capable of "load balancing" is either wishful thinking or gross misunderstanding. Graphically, the Cell can only be useful to supplement vertex calculations, and it's far outstripped by the GPU's capability there (besides, few if any games have been geometry-limited since the GeForce4). What it's actually quite useful for is doing as much graphical pre-calculation (various forms of culling) as possible, so that the GPU can focus as much as possible on stuff that will end up in the final scene.

To the grandparent poster -

The 360's GPU is actually nothing like the ATI X1900 series. It's a completely different architecture that was originally designed as the successor to the 9800 series (which itself evolved into the X800 and X1900 series due to its success). ATI's (AMD's now, I suppose) next chip, R600, is based on the same design from which the Xenon GPU is derived, but lacks the embedded framebuffer memory, from what I've heard.

Somewhat ironically, while both designs perform well, it's the ATI Xenon chip that is actually "next gen" from an architectural standpoint (despite the fact that the 360 has been in production a full year), while the RSX is, relatively speaking, old hat -- having a design that has been available on the desktop for more than a year (and which itself is basically an evolution of the 6800 series, making it even more old hat).

In many ways, the 7900/RSX (as well as the X1900) architectures represent the pinnacle of modern-day graphics, while the Xenon/R600 (as well as the 8800) architectures represent the first steps toward the future of graphics card technology.

Re:Sony Hype Machine (1)

cptgrudge (177113) | more than 7 years ago | (#17154270)

Both Sony and Squeenix need to be taken down a notch or two.

To be fair, Square Enix has titles for both the PS2 and GameCube, though not the same ones. They've said within the past couple of years that they'd like to go more multiplatform. There are new Crystal Chronicles games in development for the DS and Wii. And I don't think they've definitively chosen a console for the next Kingdom Hearts game. They have a lot more titles than just Final Fantasy.

But I get your point. I'll take my time to cut down the people who play Final Fantasy games to the exclusion of other "RPG" games and have that "OMG FinFan is PS2 only FTW!!!!!" mentality.

Re:Sony Hype Machine (1)

be-fan (61476) | more than 7 years ago | (#17155166)

The problem is that it's not just FF that's only on the PS2. So are Star Ocean, Xenosaga, Shadow Hearts, etc. The XBox and GC had a few RPGs here and there, but that was it. If you're an RPG fan (and there are a lot of them; the FF series is one of the best-selling of all time), then buying a non-Sony console just doesn't make any sense.

This might change a bit, with the RPG scene looking better on the 360 (with Blue Dragon and whatnot), but Microsoft's presence in Japan is still minimal, and Japan is still where nearly all the RPG development happens*.

*) And a disproportionate amount of the game development in general, really. EA at #2 is an American company, and Ubisoft at #3-#4 is a French company, but most of the rest of the $500m+ third-party companies are Japanese (Capcom, Square Enix, Konami, Sega, etc.), and so are the biggest first-party companies (Nintendo, Sony).

Re:Sony Hype Machine (3, Insightful)

androvsky (974733) | more than 7 years ago | (#17154566)

Last I checked, both systems had the same amount of memory: 512 MB. The Xbox 360 CPU and GPU have to share it, while in the PS3 the Cell gets 256 MB and the GPU gets 256 MB. As simple as the math here is, this is the second time today I've seen someone post the "fact" that the 360 has twice the memory. Where is the 360 supposed to store geometry and textures? The PS3's Nvidia chip really isn't anything special compared to the 360's ATI chip... the only really interesting thing is that the Cell has easily twice the vector processing capability of the three-core Xbox CPU. That probably won't translate into better graphics for at least a year, if ever. It could make for some interesting background applications, like enhanced physics processing or doing something interesting (read: Wii-like) with the HD EyeToy. We won't know for a while, though. I agree Sony needs the competition, but it's not as if Microsoft doesn't need to be taken down a couple of notches also.

Re:Sony Hype Machine (1)

ravenshrike (808508) | more than 7 years ago | (#17155238)

The PS3 does have a competitor, one who will do much better than them for at least a couple of years. Nintendo is going to have a much stronger showing this time around than it did before. It's easily going to wipe the floor with the 360. Apart from GoW, the 360 does not have anything really riveting, and the problem with GoW as a system seller is that it has a narrow focus for its audience. As time goes by, I'm betting the PS3 will slowly overtake the 360, and near the end of their respective lifespans the Wii and PS3 will probably be relatively close in number of consoles sold.

Re:Sony Hype Machine (1)

Cadallin (863437) | more than 7 years ago | (#17157812)

You are correct, and I apologize for my error. I'm so used to consoles being UMA systems that I didn't read any further in the Wikipedia article. Although I think my general point, that the PS3 and 360 are largely equal in terms of performance and capability, stands.

I remain EXTREMELY skeptical about Cell's vector performance being leveraged in the ways Sony keeps claiming. Your statement about Microsoft is very true. Personally, I'm rooting for Nintendo to come back in a big way, although I don't really expect that we'll have any indications one way or the other for at least 3-6 months.

Re:Sony Hype Machine (1)

big4ared (1029122) | more than 7 years ago | (#17156428)

Actually, they both have 512 megs of RAM. The difference is that on the Xbox 360, the GPU and the CPU share the same bank of 512 megs (plus an additional 10 megs of EDRAM for the framebuffer), whereas on the PS3, 256 megs are dedicated to the CPU and the other 256 megs are dedicated to the GPU.

Wow... (0)

Anonymous Coward | more than 7 years ago | (#17153570)

...even the staunchest fanboy must admit, after reading that, that the PS3 has about 15 tons of potential. Can you imagine the level of AI and graphical detail you could suck out of that chip! Tough as it may be to develop for right now, I can't imagine any developer who isn't drooling over the possibilities.

The burning question is: will enough developers take up the challenge and endure the headaches to reap the rewards, when two much simpler boxes are out there, albeit with less potential?

I for one hope folks step up to the plate.

Re:Wow... (1)

HeavenlyBankAcct (1024233) | more than 7 years ago | (#17153824)

I agree that the Cell certainly does exhibit a lot of potential. However, I think that drawing the conclusion that excellent hardware marks an end to all criticism ignores the primary sticking point of the console itself -- the price. I don't think many people out there, aside from the rabid Nintendo types, honestly believe that the PS3 is a lackluster system without the ability to play great games. My sole issue with the console, and I'm sure it's one that's fairly commonly shared, is that $599 is a hell of a lot to pay for a toy, no matter how amazing a toy it is. Like it or not, the price-point difference between $250 and $599 is more than a marginal one, and there are plenty of more fiscally reserved folks out there like me who will absolutely refuse to drop what amounts to the majority of a month's rent on a machine used to play games.

There are plenty of hardcore gamers out there who think this is a reasonable price, and I'm not knocking them for it. I'm just stating the "everyman" point of view here, which is that Sony cannot expect to achieve any sort of pervasiveness with this unit until they adjust the price so it lands somewhere in the universe of a "casual purchase" instead of a "fairly serious investment." I could easily purchase a Wii and not feel like I'd made any major sacrifice for it; however, a PS3 would require budgeting -- and budgeting around a new game system is not something that the average individual is going to be willing to do when there are other options out there that could easily fall into the realm of general affordability.

The good news, I suppose, is that developers should really be starting to create some great stuff for the PS3 right around the time it becomes something I can justify purchasing without hating myself for it.

Re:Wow... (-1, Troll)

Anonymous Coward | more than 7 years ago | (#17154064)

LOL, some fanboy trying to claim the PS3 costs $599. Who the fuck do you think you are impressing with your bullshit?

The PS3 costs $499.

Go buy your fucking Wii and its shitty graphics and gimmick controller imbecile. No one cares.

Re:Wow... (1)

CronoCloud (590650) | more than 7 years ago | (#17154252)

This Sony fanboy knows that there are two PS3 versions, one for $499 and one for $599, the more expensive having a larger hard drive, built-in Wi-Fi, and a built-in card reader. So when the parent says it costs $599, it costs $599.

I think it's worth it, for all that it does, but I don't have the $599 right now.

Re:Wow... (1)

HeavenlyBankAcct (1024233) | more than 7 years ago | (#17154918)

As a fanboy of neither item, I wasn't accurate about the price. My apologies. I think my point still stands, though: the higher price of the PS3 does, to a lot of consumers, push it into a different realm of purchase. I don't really see how that is an arguable point, unless your concept of money is somehow different from mine.

As for "going to buy my fucking Wii," I'm going to wait a few months until it's cheaper, which is the nature of these sorts of beasts. My world will continue to spin without a "next-gen gaming system" at its center.

While I recognize the appeal of attempting to frame all dissent as a "NINTENDO vs. SONY" war, I stopped investing my emotions in the goings-on of multinational corporations with which I have no connection years ago. I made this post not to criticize the PS3 in any way, just to point out the fact that, for some of us, "potential" does not equate to a justifiably higher price point.

Now, deep breaths and cool your jets.

Re:Wow... (1)

oliverthered (187439) | more than 7 years ago | (#17163868)

My girlfriend's interested in the PS3 because of its potential to become a media centre. There are plenty of people who would fork out $599 for a Blu-ray player, media centre, and games console. (Just look at the number of people who pay more than that for a TV!)

Re:Wow... (1)

tompatman (936656) | more than 7 years ago | (#17165024)

How much did the Atari 2600 cost when it first came out?

The chip we've been dreaming of for years (5, Interesting)

Anonymous Coward | more than 7 years ago | (#17153602)

Seven years ago, I remember whiteboard discussions with other engineers, when we first started work on the first PS2 devkits, about where we hoped Sony would take the amazing technology we now had access to with the PS2 hardware. Cell is in essence exactly where we wanted to see Sony take the PS2 design philosophy.

As game developers we spend a huge amount of our time 1) organizing data, 2) feeding that data to someplace that operates on it, and 3) sending that data back to step one to repeat the process.

Cell's design makes our lives vastly simpler. It is an absolute dream to work with.

The insanely high floating-point power is what gets talked about most with the Broadband Engine, but the memory architecture is the best part of the design. The internal ring bus allows us to write code that hides memory latency.

Writing for Cell is extremely straightforward. You set each SPU up to operate on three regions of local memory: 1) static data, 2 & 3) a double buffer of dynamic data. Data is fed into one buffer while the SPU operates on the other. With this setup, optimal Cell code has all available SPUs plowing through data with very little latency from the memory subsystem.

In many ways it is very similar to writing old-style code where you got your data into the chip's cache, operated on it, and then wrote it back out to main memory or somewhere else. But with Cell you now have total control over how the data is loaded into your "cache," thanks to the SPU's ability to scatter DMA into local memory; you have the internal ring bus to pass data around to other SPUs instead of having to go out to slow main memory; and of course you have 6-8 (depending on the hardware you are using) SPUs all running in parallel.
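
A minimal sketch of that double-buffered loop, using the SDK's spu_mfcio.h DMA intrinsics (BUFSZ, the effective-address math, and process() are illustrative, and a real kernel would also mfc_put its results back out):

    #include <spu_mfcio.h>

    #define BUFSZ 16384                    /* bytes per block; DMA-friendly multiple of 128 */
    static char buf[2][BUFSZ] __attribute__((aligned(128)));

    extern void process(void *block, int nbytes);  /* illustrative compute kernel */

    static void wait_tag(unsigned int tag)
    {
        mfc_write_tag_mask(1u << tag);     /* select which DMA tag group to wait on */
        mfc_read_tag_status_all();         /* stall until those transfers complete */
    }

    void stream(uint64_t ea, int nblocks)  /* ea = effective address in main memory */
    {
        int cur = 0;
        mfc_get(buf[cur], ea, BUFSZ, cur, 0, 0);   /* prime the pipeline */
        for (int i = 0; i < nblocks; i++) {
            int nxt = cur ^ 1;
            if (i + 1 < nblocks)           /* start fetching block i+1 ... */
                mfc_get(buf[nxt], ea + (uint64_t)(i + 1) * BUFSZ, BUFSZ, nxt, 0, 0);
            wait_tag(cur);                 /* ... while only waiting for block i */
            process(buf[cur], BUFSZ);      /* compute overlaps the in-flight DMA */
            cur = nxt;
        }
    }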

It is wonderful that every PS3 is set up to easily allow installing Linux and getting access to the Cell devkit. There is a wonderful world beyond the archaic x86 architecture just waiting for you.

Re:The chip we've been dreaming of for years (0)

Anonymous Coward | more than 7 years ago | (#17154200)

You've got a little cum on your chin

Re:The chip we've been dreaming of for years (1)

CronoCloud (590650) | more than 7 years ago | (#17154308)

Cell is in essence exactly where we wanted to see Sony take the PS2 design philosophy.

Yes, it's the ideas introduced with the PS2, taken to the "next level": SPEs instead of the VUs.

single precision floating point rounding error??? (1)

mosel-saar-ruwer (732341) | more than 7 years ago | (#17157890)


As game developers we spend a huge amount of our time 1) organizing data 2) feeding that data to someplace to operate on it 3) sending that data back to step one to repeat the process

I assume that most of this "operating on it" involves floating point operations on triangles.

And I suppose that most of the results are essentially triplets of eight-bit RGB [red, green, blue] values [i.e. your results are expressed in 24-bit color], and I assume that you rarely venture much beyond screen sizes of about 1600 x 1200 pixels, or refresh rates much greater than about 60 frames per second.

Now, current implementations of the Cell processor can only perform 32-bit [single precision] floating point operations in hardware; as I understand it, 64-bit [double precision] floating point operations suffer an enormous performance penalty by contrast.

But 32-bit [single precision] floating point numbers are notoriously inaccurate; for instance, they begin to lose integer granularity as early as 16 million [2^24].

So here's my question: Have you seen any instances where 32-bit [single precision] floating point number rounding error caused unacceptable inaccuracies?

Any of your triangles come out blurry, or mis-colored, or mis-placed, or mis-aligned, simply because 32-bit floating point calculations were insufficiently exact for, say, 24 bits of color, 1600 x 1200 pixels, and 60 frames per second?
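
(For anyone who wants to see the 2^24 granularity loss directly -- a float has a 24-bit significand, so above 2^24 the spacing between representable values is 2. A trivial C demonstration, not from the parent poster:)

    #include <stdio.h>

    int main(void)
    {
        float f = 16777216.0f;          /* 2^24 */
        printf("%.1f\n", f + 1.0f);     /* 16777216.0 -- the +1 rounds away */
        printf("%.1f\n", f + 2.0f);     /* 16777218.0 -- spacing is now 2.0 */
        return 0;
    }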

Re:single precision floating point rounding error? (0)

Anonymous Coward | more than 7 years ago | (#17159162)

Most of what you're talking about (colors of triangles) is the job of the GPU, which is not really a part of the CBE. But to address some of your questions:

as I understand it, 64-bit [double precision] floating point operations suffer an enormous performance penalty by contrast.

Well, it's not exactly an enormous penalty. The CBE cannot pipeline double-precision math instructions, which have a throughput of 7 cycles. Additionally, you can only store 2 DP floats per register instead of 4 SP, so your double performance is effectively 1/14 of single. However, many things can prevent you from reaching peak SP performance in the first place, such as instruction-level latency stalls due to dependencies, memory-level latency due to unpredictable memory access or being DMA-bound, branch penalties, load/store latency, etc. DP is certainly slower, but if that's your biggest bottleneck then you are already "winning".
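
Spelling that 1/14 out, assuming fully pipelined SP (one 4-wide op per cycle) against unpipelined DP:

    SP: 4 floats/register x 1 issue/cycle = 4 results/cycle
    DP: 2 doubles/register / 7 cycles     ≈ 0.29 results/cycle
    ratio: (2/7) / 4 = 1/14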

But 32-bit [single precision] floating point numbers are notoriously inaccurate; for instance, they begin to lose integer granularity as early as 16 million [2^24]. So here's my question: Have you seen any instances where 32-bit [single precision] floating point number rounding error caused unacceptable inaccuracies?

Constantly. One of the keys to using single precision is to never rely on precision. In fact, even within the domain of SP precision some crucial instructions are approximations: reciprocal, reciprocal square root. Even operations like mul+add vs madd differ in the result. Everyone knows that if you actually try to dot two "normalized" vectors and take the acosf of the result, you're heading to NaN in a hurry. Fortunately, there are no FP exceptions so you can pretty much just do your math and then mask away bogus results at the end.

This doesn't really apply to rendering though. Mostly collision detection and resolution are subject to the vagaries of the precision gods ("If something ever can go wrong, it will.") The other edge of the precision sword is that you don't pay for what you don't need. Particles, for instance, go through tons of sin, cos, and quaternion operations where no more than 8 or 9 bits of precision are needed... and that's all the polynomial they get.

Thanks! And a request... (1)

mosel-saar-ruwer (732341) | more than 7 years ago | (#17164156)


I realize it's a really big [as in REALLY BIG] subject, but do you know of any books that treat this sort of thing very well?

Also, because the Cell can perform so many single-precision floating point operations in parallel, do you know of any good texts which concentrate on the theory of the parallelization of common floating point algorithms [or, better yet, on the provability of the NON-existence of parallelization of floating point algorithms]?

Re:single precision floating point rounding error? (2, Interesting)

shplorb (24647) | more than 7 years ago | (#17160394)

They can, but anyone making a game with maps large enough to cause such issues commonly splits the map up into chunks, each with its own co-ordinate space, and re-centres the global co-ordinate space's origin to the origin of each chunk as the camera moves into it.

Animated objects like characters have all of their calculations performed in their local co-ordinate space before the result is transformed into world space.

Most also use a scale of 1.0f = 1 m, so you'll be going for a few km before precision becomes much of an issue.

So overall, it's hardly an issue.
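
A minimal sketch of that chunk-relative scheme (names are illustrative; this is the idea, not any particular engine's code):

    typedef struct { float x, y, z; } Vec3;

    typedef struct {
        Vec3 chunk_origin;  /* origin of the entity's chunk, in world space */
        Vec3 local;         /* position relative to the chunk; stays small */
    } Position;

    /* With 1.0f = 1 m and chunks a few km across, |local| stays small
       enough that a float's ~7 significant digits still resolve
       millimetres, no matter how far the chunk is from the world origin. */
    static Vec3 to_world(const Position *p)
    {
        Vec3 w = { p->chunk_origin.x + p->local.x,
                   p->chunk_origin.y + p->local.y,
                   p->chunk_origin.z + p->local.z };
        return w;
    }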

Unbiased Source (2, Insightful)

HappySqurriel (1010623) | more than 7 years ago | (#17153894)

" article features Insomniac Games, who developed the PS3 launch title Resistance: Fall of Man."

Which is a game that is published by Sony and developed by a company that is owned by Sony...

What's next "Bungie, the Developers of the XBox 360's highly anticipated shooter Halo 3, have announced that the XBox 360 is Super Powerful and that Sony Rapes Babies!"

I want to hear from EA, Ubisoft, Activision, and Sega (i.e. companies with little stake in any one platform) about which is easy or hard to develop for; so far EA has said that next-gen development is insanely expensive.

Mod Parent Up! (1)

fujiman (912957) | more than 7 years ago | (#17155466)

This is the real problem with the article. It has nothing to do with the Cell per se; it's just that we're getting PR crap instead of engineering information. Ask someone whose salary is not tied to the success of the platform.

Re:Unbiased Source (0)

Anonymous Coward | more than 7 years ago | (#17155478)

Sony doesn't own Insomniac. They might as well, since Insomniac only publishes on Sony hardware, but Insomniac is at least technically capable of saying "Hey, let's only make Dreamcast games from now on!" if they feel like it.

Proof [insomniacgames.com]. See the word "independent".

Anyway, your real point stands. I'd rather hear about PS3 vs. 360 from Rockstar, Capcom, or the Ubisoft guys working on Assassin's Creed too.

Re:Unbiased Source (1)

DarkJC (810888) | more than 7 years ago | (#17155520)

Insomniac is independent. They've so far chosen to release on PlayStation and be published by Sony, but according to an interview I read with Ted Price, they are still completely independent of Sony.

Re:Unbiased Source (1)

PingSpike (947548) | more than 7 years ago | (#17165416)

So... what you're saying is that a company that releases PlayStation 3 exclusives should have no bias toward the PlayStation 3 at all, then. That makes sense; after all, it's not like they've tied their very success to that particular console or anything.

Re:Unbiased Source (1)

DarkJC (810888) | more than 7 years ago | (#17183088)

No, I'm correcting the parent. Bungie is owned by Microsoft; of course they'll rave about the 360. Insomniac is not owned by Sony, therefore the comparison wasn't fair. Stop putting words in my mouth.

Re:Unbiased Source (1)

daveisfera (832409) | more than 7 years ago | (#17155698)

I couldn't have said it any better.

Well, maybe I would have left out the "raping babies" comment, and that would have probably been at least slightly better.

Re:Unbiased Source (1)

buffer-overflowed (588867) | more than 7 years ago | (#17158156)

As has now been said twice, Insomniac is not owned by Sony. They have extremely close relations with the Sony second party Naughty Dog, and tend to be published by Sony on Sony platforms (I think they've made a game or three not on a Sony platform, though), but they're indie.

Of course, they're so far up Sony's ass you'd have to dig in there with that drill made from unobtainium from that horrible movie "The Core" to find them.