How Sony's Development of the Cell Processor Benefited Microsoft

The Wall Street Journal is running an article about a recently released book entitled "The Race for a New Game Machine," written by two of the engineers who worked on the Cell processor and detailing Sony's development of it. They also discuss how Sony's efforts to create a next-gen system backfired by directly helping Microsoft, one of their main competitors. Quoting: "Sony, Toshiba and IBM committed themselves to spending $400 million over five years to design the Cell, not counting the millions of dollars it would take to build two production facilities for making the chip itself. IBM provided the bulk of the manpower, with the design team headquartered at its Austin, Texas, offices. ... But a funny thing happened along the way: A new 'partner' entered the picture. In late 2002, Microsoft approached IBM about making the chip for Microsoft's rival game console, the (as yet unnamed) Xbox 360. In 2003, IBM's Adam Bennett showed Microsoft specs for the still-in-development Cell core. Microsoft was interested and contracted with IBM for their own chip, to be built around the core that IBM was still building with Sony. All three of the original partners had agreed that IBM would eventually sell the Cell to other clients. But it does not seem to have occurred to Sony that IBM would sell key parts of the Cell before it was complete and to Sony's primary videogame-console competitor. The result was that Sony's R&D money was spent creating a component for Microsoft to use against it."
  • by symbolset ( 646467 ) on Thursday January 01, 2009 @05:53AM (#26289735) Journal
    Pray I do not alter it further.
  • E_TOO_VAGUE (Score:1, Informative)

    by Anonymous Coward

    What parts of the processor did IBM pass on to Microsoft? The Xbox 360 processor, Xenon, is basically a three-core hyperthreaded PowerPC. The PlayStation 3 has a single PowerPC core (not hyperthreaded) and 7 (or 8) simpler SPU processors.

    • Re: (Score:3, Informative)

      by CronoCloud ( 590650 )

      Cell is Hyperthreaded, as any Linux on the PS3 user can show you:

      [CronoCloud@mideel ~]$ cat /proc/cpuinfo
      processor : 0
      cpu : Cell Broadband Engine, altivec supported
      clock : 3192.000000MHz
      revision : 5.1 (pvr 0070 0501)

      processor : 1
      cpu : Cell Broadband Engine, altivec supported
      clock : 3192.000000MHz
      revision : 5.1 (pvr 0070 0501)

      timebase : 79800000
      platform : PS3
      model : SonyPS3

  • a few facts please? (Score:3, Interesting)

    by Anonymous Coward on Thursday January 01, 2009 @06:08AM (#26289777)

    The Cell is not "made from scratch". It's based on PowerPC, minus the branch prediction and some other goodies, and with additional cores specialised for numerics called "SPEs". Without the SPEs it's a piece of junk. And the Xbox 360's processor doesn't have the SPEs.

    This article is full of shit.

    Big deal if M$ got their hands on a crap, slow design based on the G5 PowerPC, and made it able to execute 2 threads per core with 3 cores on a die. It has NOTHING LIKE the gigaflops of the Cell.

    • by TheRaven64 ( 641858 ) on Thursday January 01, 2009 @07:56AM (#26290017) Journal
      Even the SPEs aren't exactly built from scratch. They're based on the VMX units from the PowerPC 970 with widened register sets and a modified memory architecture with explicit DMA commands. If the meeting in question took place, I'd imagine IBM showed Microsoft the Cell, the PowerPC 980MP, the 40x, and said 'we can do anything on this spectrum - what requirements do you have?'.

      The chip they sold to Microsoft in the end is more or less the same design as the PPU core in the Cell, but that, in turn, is an in-order variant of the 970 with a few bits from the POWER4 that were originally dropped (the 970 itself was a cut-down POWER4 with a VMX unit bolted on) re-added.

      IBM would be crazy not to reuse parts of old designs on any new one. They've spent hundreds of millions of dollars creating a library of CPU designs that fit anywhere from a mobile phone to a supercomputer. You're very unlikely to have a set of requirements that they can't meet with a tweaked version of one of their existing designs, and if you really need them to work from scratch then you probably can't afford the final product.

      • Re: (Score:3, Interesting)

        by Mr Z ( 6791 )

        I came here to make pretty much the same point. IBM has a habit of reusing the same microarchitecture with tweaks to run different variants of the POWER or PowerPC instruction set as needed to fit a particular application niche.

        I suspect IBM didn't say specifically "Here's the Cell Broadband Architecture and what it can do." Rather, since the Xbox 360's CPU doesn't have any SPEs, I imagine the presentation had more to do with what the PPU would be capable of, and was part of the IBM processor roadmap anyway.

    • Re: (Score:2, Flamebait)

      Big deal if M$ got their hands on a crap, slow design based on the G5 PowerPC, and made it able to execute 2 threads per core with 3 cores on a die. It has NOTHING LIKE the gigaflops of the Cell.

      And the observation I have to make on that is - you are right, big deal, because MS 'obviously' getting the inferior CPU has certainly affected market share against them, hasn't it? Or is it really a case of the Cell not having lived up to the massive hype granted it in the run-up to the PS3's release?

      • by TheRaven64 ( 641858 ) on Thursday January 01, 2009 @08:53AM (#26290211) Journal

        They are very different approaches. The 360's CPU is basically a 3-core, 6-context, in-order variant of the POWER4 with a vector unit. In terms of pure number crunching ability, it's pretty pathetic next to the Cell. On the other hand, it is based on a model that we have spent 30 years building compilers for. You only need to write slightly-parallel, conventional code to get close to 100% of the theoretical performance out of it.

        In contrast, the Cell has one PPU which is roughly equivalent to one context on the 360's CPU (somewhere between 1/3 and 1/6 of the speed). It also has 7 SPUs. These are very fast, but they're basically small DSPs. They have very wide vector units and are limited to working on 256KB of data at a time. You can use them to implement accelerator kernels for certain classes of algorithm, but getting good performance out of them is hard.

        In terms of on-paper performance, the Cell is a long way out in front, but it is a long way behind in ease of programming, meaning that you generally get a much smaller fraction of the maximum throughput.

        • That's precisely my point - the Cell's supposed technical superiority doesn't matter because it isn't being utilised to anywhere near its capability, and doesn't look to be any time soon.

          At the end of the day, the argument that the Cell is harder to develop for is technically valid, but totally missing the point - the market isn't about to say 'ahhh, OK then! Let's all wait for the developers to get their shit together!'.

          Or in other words, going for the technically superior solution is not necessarily the
          • Re: (Score:3, Informative)

            by ZosX ( 517789 )

            It was the same problem with the PS2. It took developers a few good years to really start to push the hardware. Look at some of the later games that really push the envelope, like, say, Final Fantasy XII or Shadow of the Colossus. The PS2 was certainly capable of some nice visuals, but the other consoles were ultimately superior while basically using off-the-shelf hardware. Developers were pushing the Xbox and the Gamecube nearly from day one. I think the Cell has backfired, but not for the reason that Mic

    • Although the Cell does have more raw power than the 360 CPU, the SPUs have to do the brunt of the transform and lighting for rendering on the PS3, whereas that's handled by the GPU on the 360.

      For games you can't really compare the CPU on its own as so much work gets handed off to the GPU. The PS3 wins on CPU power, the 360 on GPU power.

      • by Michael Hunt ( 585391 ) on Thursday January 01, 2009 @09:13PM (#26295413) Homepage

        Do you have even a vague understanding of what 'transform' and 'lighting' actually mean? Allow me to elucidate.

        'transform' refers to the act of converting vertex positions in model space (the coordinate system used in the vertex buffers) to clip (camera) space. This is typically one matrix * vector multiplication per vertex; the vertex's position in model space is multiplied by the world-view-projection matrix. On modern hardware, this is generally done in the vertex program (other things may be done to the vertex's position before or after the co-ord transform, mind you, such as multiplication by a set of bone matrices for hardware animation, etc.)

        'lighting' refers to the process of deciding the colour of each fragment ('potential pixel'). Before programmable graphics hardware, this was done by taking the dot product of the vertex normal and the light vector (or position, depending on light type), and multiplying it by the light's diffuse colour. The resulting colour intensity was then linearly interpolated across the face between vertices, and used to light the texture in conjunction with the ambient term. With modern programmable hardware, lighting is usually done per-fragment based on a normal map, which is input as a second texture to the fragment program. The light position is converted from object (or world) space into 'tangent' space, which is a coordinate system whose basis vectors are parallel and orthogonal to the plane being lit, and the surface is lit based on the dot product of the light vector in tangent space and the normal from the normal map.
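
        To make those two terms concrete, here's a minimal sketch in plain C of the fixed-function math described above (the vec3/mat4 types and function names are mine, purely for illustration - on modern hardware this work lives in vertex and fragment programs, not on the CPU):

            typedef struct { float x, y, z; } vec3;
            typedef struct { float m[4][4]; } mat4; /* row-major world-view-projection */

            /* 'transform': one matrix * vector multiply per vertex (w assumed 1) */
            void transform_vertex(const mat4 *wvp, const vec3 *in, float out[4]) {
                for (int r = 0; r < 4; r++)
                    out[r] = wvp->m[r][0] * in->x + wvp->m[r][1] * in->y
                           + wvp->m[r][2] * in->z + wvp->m[r][3]; /* clip-space position */
            }

            /* 'lighting': the classic N.L diffuse term, clamped at zero */
            float diffuse_intensity(const vec3 *n, const vec3 *to_light) {
                float d = n->x * to_light->x + n->y * to_light->y + n->z * to_light->z;
                return d > 0.0f ? d : 0.0f; /* scales the light's diffuse colour */
            }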

        Back in the bad old days, when men with beards owned IRIX boxes and everybody else had a TNT2 or worse, transform and lighting were done in software for most folks, by a client of the rendering system, before the primitives were submitted as draw calls to the rendering system. From late 1999 on, cards with hardware T&L, such as the GeForce 256, showed up in the PC space. These cards were the first consumer 3D hardware to perform fixed-function transform and lighting (roughly as I've described it above) in silicon. The API didn't change much, although there was a DirectX version bump (6 to 7). OpenGL programmers didn't really notice; the library itself, obviously, had to know if it was talking to a fixed-function card or a dumb card, but most OpenGLs were provided by hardware vendors in any event, so this wasn't a factor.

        Fast forward to today, everybody's using hardware which allows parts (most, these days) of the rendering pipeline to be replaced entirely with programs written by the engine developer (or even the artist, in some cases.) Transforming vertices can be done in conjunction with all manner of other crap, and lighting can be done using whatever model the programmer/artist desire. Regardless, however, it's all done in the same pipeline on the GPU. If the SPUs, as you suggest, were pre-transforming and pre-lighting vertices before writing them to 3d hardware's vertex buffers, then all you'd get is some really confused 3d hardware. RSX (the 3D chip loosely based on nvidia's G70 architecture) has 8 vertex pipelines and 24 fragment pipelines, all programmable. This is more than enough power to do significantly more with each vertex than simple transformation, and enough power to perform even complex effects such as steep parallax-mapped lighting in the fragment pipeline.

        In conclusion, while Xenos (360's GPU) may or may not be better than RSX, RSX is CERTAINLY more than powerful enough to handle its own T&L. Cell's SPEs are, at least on some level, a compromise between the massively data-parallel yet somewhat braindead pipelines of a GPU and the more-or-less serial yet significantly intelligent threads of a modern CPU. They'd be great for accelerating physics (Bullet, i believe, has a Cell backend) or AI, but really add fuck all to the 3D rendering side of the console.

        • by Guspaz ( 556486 )

          Actually, I understand they perform rather poorly when it comes to AI simulations. Perhaps because they lack any sort of branch prediction. So there are going to be some serious bubbles in the SPE pipelines whenever they hit a branch instruction, which sort of defeats their purpose for such work.

          The same argument was used against physics early on, until people who knew better spoke up and informed everyone that branch prediction doesn't do much for physics interactions anyhow.
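
          (A minimal sketch of the usual workaround, assuming plain C: on a predictor-less core you compute both outcomes and select, so the pipeline never has to guess - the SPU ISA has a hardware select instruction for exactly this pattern.)

              #include <stdint.h>

              /* Branch-free max: build an all-ones/all-zeros mask from the
                 comparison, then blend - no branch, so no pipeline bubble. */
              int32_t select_max(int32_t a, int32_t b) {
                  int32_t mask = -(int32_t)(a > b); /* -1 if a > b, else 0 */
                  return (a & mask) | (b & ~mask);
              }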

        • by Glonk ( 103787 ) on Friday January 02, 2009 @01:55AM (#26297247) Homepage

          While it certainly sounds like you know what you're talking about, it's pretty clear to anyone with a game-dev background you do not.

          Cell's SPEs are actually PRIMARILY used as aids to graphics processing (T&L) by most developers. Look into how games like Heavenly Sword use the SPEs as part of its "faux" HDR or games like Killzone 2 use SPEs to implement deferred rendering for awesome smoke effects. The SPEs are, in PRACTICAL TERMS to PS3 game developers, very essential to the 3D rendering side of the console.

          While RSX is "powerful enough" to do its own T&L, it cannot compare to the standalone power of the 360's Xenos chip. There are many reasons for this (from the 6 fixed vertex shaders on RSX vs the unified shaders on the 360, which permit far higher vertex workloads, to the RSX's limited bandwidth vs the 360's eDRAM bandwidth, to triangle setup rates). On the PS3, developers need to leverage Cell in intelligent ways to draw comparable graphics to the 360. If an intelligent and determined PS3 developer really leverages Cell, it can make unparalleled graphics in the console world. The problem is, it costs a fortune in time and money to do it, and very few developers can. It's simply not worth it to even attempt it for most developers.

          As a sidenote, Cell is not at all good for most game AI for many reasons (not the least of which is the lack of branch predictors in the SPEs).

          Additionally, people keep making the mistake of assuming the PPU in the Cell is basically the same as each core in the 360's CPU. That's not at all true. There are some significant differences, ranging from native Direct3D format support in the 360's CPU to the new VMX128 vector units (which have 128 registers per context per core [6 total], vs 32 on the PPU), as well as additional instructions specifically tailored towards 3D games (like single-cycle dot-product instructions). The combined triple VMX128 units on the 360 are still faster than most quad-core Core i7 chips in vector processing, so I'm perplexed by the notion, which I've read from some people, that it's somehow slow or underpowered.

          If you're truly interested in how PS3 games use Cell, check out the Beyond3D community where PS3 developers post in detail about how they do what they do. And Cell is a major factor in 3D rendering on the PS3. It has to be.

          • A breath of fresh air after the stank of Mike Hunt's post.

            Oh man, did I just really type that?

            Srsly, that was some liar's club quality stank and I couldn't have responded to it any better.
            The Cell is poor for AI (as AI is currently designed), excellent for image and signal processing, including rendering duties.

            I could think of some ways the SPE units could be useful for AI, but that would mean throwing out everything currently in use for this purpose, and starting over with a Minsky like hive-model for the

  • by Anonymous Coward on Thursday January 01, 2009 @06:09AM (#26289785)

    I'm not a games console programmer, but I understood that the 'core' of the Cell and the chip used in the XBox 360 are both derivatives of the standard PowerPC chip. This smells like a couple of trolls being mischievous. IBM can do what they like with PowerPC, and that includes selling it to both Microsoft for the XBox 360 and to Nintendo to power the Wii.

    Sony's payback comes when Playstation3 programmers learn to fully utilize the Cell architecture.

    • by Bastard of Subhumani ( 827601 ) on Thursday January 01, 2009 @06:46AM (#26289877) Journal

      Sony's payback comes when Playstation3 programmers learn to fully utilize the Cell architecture.

      It has direct hardware support for rootkits.

    • Sony's payback comes when Playstation3 programmers learn to fully utilize the Cell architecture.

      Yeah, that EIEIO instruction is a real bitch: http://publib.boulder.ibm.com/infocenter/systems/index.jsp?topic=/com.ibm.aix.aixassem/doc/alangref/eieio.htm [ibm.com]

      Or maybe it is a joke. I dunno.

      • by Animats ( 122034 ) on Thursday January 01, 2009 @12:22PM (#26291157) Homepage

        Sony's payback comes when Playstation3 programmers learn to fully utilize the Cell architecture.

        As someone else pointed out, if that was going to happen, it would have happened by now.

        The fundamental problem with the Cell is that each SPU only has 256KB of RAM. (Not 256MB, 256KB.) Data can be moved in and out of main memory in the background with explicit DMA-like operations. Given that model, you have to turn your problem into a data-flow problem, where a data set is pumped sequentially through a Cell processor. The audio guys love this. It's useful for compression and decompression. It's a pain for everything else.
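
        (A rough sketch of what that streaming model looks like, using the Cell SDK's spu_mfcio.h DMA calls; the chunk size and process() function are placeholders of mine, not SDK names. The idea is to double-buffer: DMA the next block in while crunching the current one.)

            #include <spu_mfcio.h>
            #include <stdint.h>

            #define CHUNK 16384 /* two of these fit comfortably in the 256KB local store */
            static char buf[2][CHUNK] __attribute__((aligned(128)));
            extern void process(char *data, int len); /* hypothetical work function */

            void stream(uint64_t ea, int nchunks) { /* ea = source in main memory */
                int cur = 0;
                mfc_get(buf[0], ea, CHUNK, 0, 0, 0); /* kick off the first transfer */
                for (int i = 0; i < nchunks; i++) {
                    if (i + 1 < nchunks) /* start fetching the next block now */
                        mfc_get(buf[cur ^ 1], ea + (uint64_t)(i + 1) * CHUNK,
                                CHUNK, cur ^ 1, 0, 0);
                    mfc_write_tag_mask(1 << cur); /* wait only on the current block */
                    mfc_read_tag_status_all();
                    process(buf[cur], CHUNK); /* crunch while the other DMA runs */
                    cur ^= 1;
                }
            }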

        It's not good for graphics. There's not enough memory for a full frame, not enough memory for textures, not enough memory for the geometry, and not enough processors to divide the frame up into squares or bands. Sony had to hang a conventional nVidia GPU on the back to fix that. It's useful for particle systems. If you need snow, or waves, or grenade fragments, the Cell is helpful, because that's a pipelineable problem.

        There are some other special-purpose situations where a Cell SPU is useful. But not many. If each SPU had, say, 16MB, the things might be more useful. But at 256KB, it's like having a DSP chip. The Cell part belongs in a cell phone tower, processing signal streams, not in a game machine. It's a great cryptanalysis engine, though. Cryptanalysis is all crunch, with little intercommunication, so that fits the Cell architecture.

        We're back to a historical truth about multi-CPU architecture - there are only two things that work. Shared-memory multiprocessors ("multi-core" CPUs, or the Xbox 360) work; they're well understood and straightforward to program. Clusters, like Google/Amazon/any web farm, also work; each machine has enough resources to do its own work and can live with limited intercommunication. Everything in between those extremes has historically been a flop: SIMD machines (Illiac IV through Thinking Machines), dataflow machines (tried in the 1980s), and mesh machines (nCube, BBN Butterfly). The only exceptions to this are graphics processors and supercomputers derived from them. [nvidia.com] That, not the Cell, is cutting-edge architecture.

        I've met one of the architects of the Cell processor, and his attitude was "build it and they will come". They didn't.

        • Re: (Score:2, Interesting)

          by 91degrees ( 207121 )
          I never thought the cell was intended for graphics anyway. 3D hardware is simple SIMD, with very long pipelines. Unless you're after ray tracing, a more general purpose chip would be a waste of resources.

          Cell is probably good for complex physics, and sophisticated AI, but that's a bit of a problem because programmers haven't really worked out how to use the resources efficiently yet. Game developers have a very procedural approach to solving problems.
      • by Ilgaz ( 86384 )

        If you look at the reference URL you gave, it is actually AIX documentation. If they expect game developers to code in mainframe or gigantic-server style to get the power of a gaming chip, trouble begins right there.

    • by sleeponthemic ( 1253494 ) on Thursday January 01, 2009 @08:04AM (#26290041) Homepage

      Sony's payback comes when Playstation3 programmers learn to fully utilize the Cell architecture.

      That statement is fast becoming another hallowed urban myth of gaming.

      Tech specs aside, do you really believe the hype when absolutely nothing has come out on the PS3 that blows the 360's capabilities away? Haven't they had enough time? Where is the practical proof? Folding@home performance? Not really applicable.

      Not to mention the fact that no developer making cross platform games is going to go very much further on a ps3 version. There's just simply no point.

    • Sony's payback comes when Playstation3 programmers learn to fully utilize the Cell architecture.

      When's that going to be? How long is the customer base willing to wait for the developers to get their act together?

      • How many developers can afford to? The Cell is very fast, but it has a very different programming model from most other processors, and from all other mainstream gaming processors. If you write optimised code for the Cell, it's an investment that only goes into the PS3 port. If you write optimised code for a conventional architecture, then it makes every version better (or, if it's just a small amount of asm tweaking, costs a lot less than an SPU rewrite). Which would you choose?
        • Very good point :) The benefit for the developer is very niche - especially when there is no guarantee the next PlayStation architecture will follow the same lines...
    • Re: (Score:2, Interesting)

      by Anonymous Coward

      I'm not a games console programmer, but I understood that the 'core' of the Cell and the chip used in the XBox 360 are both derivatives of the standard PowerPC chip.

      There is no such thing as a "standard PowerPC chip." PowerPC and POWER is an architectural specification and there are a wide variety of implementations of those specifications, ranging from embedded system-on-chip CPUs all the way to supercomputer processors.

      The story here is that IBM created a specific PowerPC implementation which serves as t

  • by Sarusa ( 104047 ) on Thursday January 01, 2009 @06:19AM (#26289817)

    This is really kind of misleading. The PowerPC, which is at the core of the Cell and is what MS uses as the cores of the Xbox 360, has been IBM's baby for years.

    The Xbox 360 uses 3 of the cores. The Cell uses one of the cores plus 8 SPEs (6 of which you can actually use in a game). If you will recall, the Wii uses a PowerPC too, a slightly beefed up Gamecube CPU which IBM made for Nintendo even before they made Cell. And of course Apple used to use PowerPCs (and IBM itself did and does, for servers).

    Anyhow, without the Cell's SPEs, there's not a lot to really 'steal'. The lack of SPEs is what makes the Xbox 360 so easy to program for, but the SPEs are what really define the Cell and make it such a floating-point-crunching monster (better suited to supercomputing than to writing video games, in my opinion, and that's not intended as a dis here).

    • I thought that Apple would end up using Cell processors.
  • It also helped MS (Score:4, Informative)

    by Sycraft-fu ( 314770 ) on Thursday January 01, 2009 @06:22AM (#26289825)

    Because it was a really misdirected effort when it came to a console. Sony really had no idea what the hell they were doing as far as making a chip for their console. Originally, they thought the Cell would be the graphics chip. Ya, well, it turned out not to be near powerful enough for that, so late in the development cycle they went to nVidia to get a chip. Problem was, with the time frame they needed, they couldn't get it very well customized.

    For example, in a console you normally want all the RAM shared between GPU and CPU. There's no reason to have them have separate RAM modules. The Xbox 360 does this; there's 512MB of RAM that is usable in general. The PS3 doesn't; it has 256MB each for CPU and GPU. The reason is that's how nVidia GPUs work in PCs, and that's where it came from. nVidia didn't have the time to make them a custom one for the console, as ATi did for Microsoft. This leads to situations where the PS3 runs out of memory for textures and the 360 doesn't. It also means that the Cell can't fiddle with video RAM directly. Its power could perhaps be better used if it could directly do operations at full speed on data in VRAM, but it can't.

    So what they ended up with is a neat processor that is expensive, and not that useful. The SPEs that make up the bulk of the Cell's muscle are hard to use in games given the PS3's setup, and often you are waiting on the core to get data to and from them.

    It's a neat processor, but a really bad idea for a video game console. Thus despite the cost and hype, the PS3 isn't able to outdo the 360 in terms of graphics (in some games it even falls behind).

    I really don't know what the hell Sony was thinking with putting a brand new kind of processor in a console. I'm willing to bet in 10 years there are compilers and systems out there that make real good use of the Cell. However that does you no good with games today.

    Thus we see the current situation of the PS3 having weak sales as compared to the 360 and Wii. It is high priced, with the idea that it brings the best performance, but that just doesn't bear out in reality.

    • Re: (Score:3, Informative)

      by master_p ( 608214 )

      I don't think the PS3 has the problem you mention (SPEs not being able to work directly on VRAM). It has a huge bus and can easily stream huge amounts of texture data from the SPE cores to the GPU.

      • Re:It also helped MS (Score:5, Informative)

        by Sycraft-fu ( 314770 ) on Thursday January 01, 2009 @07:39AM (#26289985)

        Specs on it I see show the system bus as being around 2GB/sec. That's comparable to PCIe (about the same as an 8x connection). While that's fine, it isn't really enough to do much in terms of back and forth operations. You'll find on a PC if you try that things get real slow. You need to send the data to the graphics card and have it work on its own RAM.

        Now that isn't to say that you can't do things to the data before you send it, but then that's of limited use. What I'm talking about is doing things like, say, you write code that handles some dynamic lighting that the CPU does. So it goes in and modifies the texture data directly in VRAM. Well, you can't really do that over a bus that slow. 2GB/sec sounds like a lot, but it is an order of magnitude below the speed the VRAM works at. It is too slow to be doing the "read data, run math, write data, repeat a couple million times a frame" sort of thing that you'd be talking about.
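
        (Back-of-envelope with illustrative numbers of my own: a 1280x720 buffer at 4 bytes per pixel is about 3.5MB, so one read-modify-write pass over it moves roughly 7MB across the bus. At 60fps that's over 400MB/sec for a single full-frame pass - a handful of such passes, plus ordinary geometry and texture traffic, eats most of a 2GB/sec link before latency is even counted, while the GDDR3 on the far side of it runs an order of magnitude faster.)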

        You see the same sort of thing on a PC. While in theory PCIe lets you use system memory for your GPU transparently, in reality you take a massive hit if you do. The PCIe bus is just way too slow to keep up with higher resolution, high frame rate rendering.

        So while it's fine in terms of the processor getting the data ready and sending it over to the GPU (which is what is done) it isn't a fast enough bus to have the SPEs act as additional shaders, which is what they'd probably be the most useful for.

        • Re: (Score:2, Informative)

          by hptux06 ( 879970 )

          2Gb/s? The RSX is on the FlexIO bus, giving it ~20Gb/s to play with according to specs [ign.com].

          • Re: (Score:2, Interesting)

            by Anonymous Coward

            OP is talking about gigabytes per sec (GB/s), not gigabits per second (Gb/s). 20Gb/s = 2.5GB/s. Everything the OP says is accurate. 20 gigabit isn't that fast, especially for an internal bus.

      • I don't think the PS3 has the problem you mention (SPEs not being able to work directly on VRAM). It has a huge bus and can easily stream huge amounts of texture data from the SPE cores to the GPU.

        Well, the PS3 DOES have the problem he mentions. The PPE ends up being mostly a means to shovel data into and out of SPEs and you can barely use it, which is another reason this article is fucking stupid. On the PS3 the PPC is a bus driver. On the Xbox 360, the PPCs are the main processors.

    • Re:It also helped MS (Score:5, Informative)

      by Zixx ( 530813 ) on Thursday January 01, 2009 @08:04AM (#26290043)

      For example, in a console you normally want all the RAM shared between GPU and CPU. There's no reason to have them have separate RAM modules. The Xbox 360 does this; there's 512MB of RAM that is usable in general. The PS3 doesn't; it has 256MB each for CPU and GPU. The reason is that's how nVidia GPUs work in PCs, and that's where it came from. nVidia didn't have the time to make them a custom one for the console, as ATi did for Microsoft. This leads to situations where the PS3 runs out of memory for textures and the 360 doesn't. It also means that the Cell can't fiddle with video RAM directly. Its power could perhaps be better used if it could directly do operations at full speed on data in VRAM, but it can't.

      Being a (former) PS3 and 360 dev, I have to say this is not true. Let's start with the memory split. Both consoles have about 20GB/s of memory bandwidth per memory system. Only the PS3 has two of them, giving it twice the memory bandwidth. The 360 compensates for that by having EDRAM attached to the GPU, which removes the ROPs' share from your bandwidth budget. Especially with a lot of overdraw, the bandwidth needed by the ROPs can get huge (20GB/s, anyone?), so this would be a nice solution were it not for two things: the limited EDRAM size and the cost of resolving from EDRAM to DRAM.
      The RSX can also read and write both to XDR (main memory) and DDR (VRAM), giving it access to all of memory. The reason it is tighter on texture memory is because the OS is heavier.

      About access to VRAM, it is true that reading from VRAM is something you don't want the Cell to do. It's a 14MB/s bus, so it is of no practical use for texture data. Writing into VRAM is actually pretty ok, as it's at 5GB/s, which is more or less achievable without trouble. At 60fps that's more than 80MB per frame.

      In general, both design teams made sound decisions. The 360 has a significant ease-of-use advantage for PC developers with DirectX experience. The PS3, on the other hand, is a lot more to-the-metal, but allows for some pretty crazy stuff. Sadly, most development these days is cross-platform, so you won't see a lot of Cell-specific solutions. It's just not cost-effective.

    • Re:It also helped MS (Score:5, Interesting)

      by xero314 ( 722674 ) on Thursday January 01, 2009 @08:06AM (#26290049)
      There are a number of errors in the comment above and a number of oversights.

      First, it is true that the graphics processing of the PS3 was originally intended to be handled by a Cell processor, but this is not the same as saying the Cell processor was built to be a graphics processor. The original specs for the PS3 included 4 fully functional Cell processors. This would have meant there was no need for a dedicated GPU. Time and cost made this configuration prohibitive.

      The reason the PS3 does not have dedicated memory is because it is a very different design. First, the PS3 contains a very high speed data bus, which allows the system to keep its lower amount of memory full of the data it needs at any given time, with no need to store data not actively in use. Secondly, the GPU in the PS3 has direct access to almost all of the memory in the system (480MB to be exact). It's just not the same picture that some people would like to paint. Dedicated memory has its advantages (which is why all high-end PC GPUs have such).

      Now, the reason that Sony, Toshiba and IBM designed the Cell and crammed it into a PS3, prematurely, is ingenious, but we won't see this for a number of years. The Cell processor is designed from the ground up to work effectively as a single node of a multi-processor system. This means that you can include more than one, utilize the same code, and get much faster program throughput. What this means is that for computing today you can use a single Cell processor and have a fast machine. In the future you can have a machine with 4, 8, 16, or more Cell processors and have an unbelievably fast machine. On top of that speed you also get a very energy-efficient machine. Take a look at the top 500 supercomputer list to see what a difference the Cell processor makes. Putting it in the PS3, on the other hand, was a good move because it meant mass production, greatly reducing costs, so that they can finally build the system they want in the next console generation.

      OK, I'm too tired to finish this, but as you can see if you look, the Cell is an interesting chip with great potential, and it has already surpassed other chips in a number of applications.
      • The problem with this assumption is that while the processor itself is fast - which the Cell obviously is, at data processing - the core of the Cell suffers from being a PowerPC stripped of almost anything that would give it speed. You can basically see that in the PS3: its 2D processing is awesome - I have yet to see better MPEG2 scaling - but problems start wherever more is needed than data shoveling; then suddenly games start to become slower than their Xbox counterparts, etc...
        I person

      • The problem with this assumption is the idea that, by the time this is viable, 8-16 Cell processors will be easier to handle and cheaper than a 16-core processor. I rather doubt it.

        • by xero314 ( 722674 )

          The problem with this assumption is the idea that, by the time this is viable, 8-16 Cell processors will be easier to handle and cheaper than a 16-core processor. I rather doubt it.

          Not sure if you realize this, but 16 fully functional Cell processors contain 144 cores (1 PPE plus 8 SPEs is 9 cores per chip), not 16. The Cell already contains more cores than any currently commercially available processor. Intel's top offering right now is the i7 965 (a 4-core processor that costs over $1k), which is capable of 51.2 GFlops (single precision). The low-end Cell used in the PS3 (available to consumers for $400 with a game console included) has already shown to exceed 100GFlops (double prec

          • Well, there is a huge problem with this kind of design, besides that you reach a high number of cores (btw, Sun is still king here, if you count out modern graphics hardware, which has around 400-600 cores that do very simple SIMD tasks!)

            Sony probably wants to push so many cores in the long run to make the graphics processors obsolete, but I rather doubt that will work out. AMD and NVidia already are at the design level sony wants to achieve with the cell, in one single processor. The latest ATI graphic cards

          • Ah, btw, I almost forgot: modern PC graphics hardware is currently around 1-1.6 TFLOPs single precision for pure general-purpose computing, with the mid-range being around 1 TFLOP!
            Double precision is probably around 500-1000 (I have yet to dig up the numbers).
            But the problem is that the number of domains where you can apply that stuff to its full extent is limited! Classical 2D processing, some parts of 3D, and everything where you can work on data streams and perform operations on them.

    • Re: (Score:2, Informative)

      by hptux06 ( 879970 )

      The SPEs that make up the bulk of the Cell's muscle are hard to use in games given the PS3's setup, and often you are waiting on the core to get data to and from them.

      While I agree the SPEs are a pain in the neck to program for, one of their redeeming features is that they use asynchronous IO when writing/reading to/from main memory. One of the key design points with Cell was that modern processors spend an enormous amount of time waiting on memory operations to complete, something that gets worse when you introduce extra processors competing for memory cycles. An individual SPE can be reading in one set of data, writing back another, and processing a third all at the same time.

    • Re: (Score:3, Interesting)

      Well, the idea at Sony was to advance the PS2 design further - in my opinion a broken design, having two SIMD vector processors doing everything.
      They probably wanted to reduce the design down to one SIMD processor doing everything.
      Sony's design philosophy seems to be that vector processing is everything and you don't need multithreading anyway. The problem with this approach is that even for modern games you need a good mix of everything: good vector processing for graphical stuff and physics, good general purpose processing

      • by Otis_INF ( 130595 ) on Thursday January 01, 2009 @10:03AM (#26290497) Homepage

        Well, the idea at Sony was to advance the PS2 design further - in my opinion a broken design, having two SIMD vector processors doing everything

        It's not broken, it's just an advanced system, so a developer who wants to write really fast code has to know how it works. If you look at God of War 2, for example, what the engine can do on a system with 32MB of RAM and a pretty slow CPU, it really shows that a skilled developer who knows what s/he's doing can get the results desired.

        I.o.w.: a 'lamer' can't get the performance desired. Well, what a shame, ain't it? If one really understands what it takes to write 3D engine code, it shouldn't be hard to understand that what the PS3 offers is in theory not really broken, but an opportunity to really get results which are beyond what one could imagine.

        Sure it's hard to write that code, but that's no different from writing solid, performing, scalable data-access code for example. It doesn't require thousands of developers to write that code: only a few are required, they can write the hard part, the rest of the developers can build on top of that. After all, a game is often mostly written in a script-like language of the engine or C/C++ utilizing engine libraries, not a lot of people developing games are really writing engine cores.

        • Well, the problem with the SIMD approach simply is: is it worth it? You won't gain that much; you can use SIMD in classical mathematical operations which need matrices, which reduces its use to sound, 2D/3D processing, and some physics, depending on the power. For graphics, dedicated hardware is more expensive but better. This works out for some games, but makes the engine programming ten times harder than it needs to be. Microsoft went the PC route successfully! The problem with the SIMD approach simply is, the


          It's not broken, it's just an advanced system, so a developer who wants to write really fast code has to know how it works.

          Presumably spoken by such a developer. That's great if you're a superstar and want to develop specifically for that machine. But what about the game development company that's looking to make a buck? Superstar developers are, by definition, rare. Or maybe you ARE a superstar, and care more about cross-platform work, or productivity?

          Designing a system that requires the developers to be extr

          It's not broken, it's just an advanced system, so a developer who wants to write really fast code has to know how it works. If you look at God of War 2, for example, what the engine can do on a system with 32MB of RAM and a pretty slow CPU, it really shows that a skilled developer who knows what s/he's doing can get the results desired.

          The issue is that a 'lamer' can write a game that doesn't choke on the Xbox, but not on the PS2, or on the 360, but not on the PS3. And if you are trying to do a port to the PS2 or PS3, forget it. You're doing a rewrite. That, or you're writing a dog.

          My understanding is that the PS2's processor is actually made up of three MIPS cores itself; both VU0 and VU1 are actually based on the same core, but tweaked in different directions. They're glued together, again by a less popular processor, but back then Sony

    • Re: (Score:3, Interesting)

      by Otis_INF ( 130595 )

      It also means that the Cell can't fiddle with video RAM directly. Its power could perhaps be better used if it could directly do operations at full speed on data in VRAM, but it can't.

      Everyone who has written assembler code for an Amiga 500 knows that this isn't true: if you have multiple processors fiddling with data in video memory, they also share a bus, and that sharing is precisely what makes it slow. At least compared to memory which is only for 1 processor.

      Microsoft's 512MB memory runs at a very slo

      • The question is more like: does anyone outside of Sony want to use the extra stuff given? The other problem is: what are the benefits, and can you achieve even some?
        The problem the PS3 currently has is simply that it lacks cores; it lacks general purpose computing power. You cannot vectorize everything. All it does well is feed data and process matrices, but that is not everything. You might get decent particle effects, you might get impressive physics, but even on the graphics side a 3 year old nvidia graphics proc

      • Re: (Score:3, Insightful)

        by faragon ( 789704 )

        Microsoft's 512MB memory runs at a very slow speed compared to the 3ghz frequency the PS3 cpu memory runs on. It's not a surprise why this is: the bus is shared: display hardware, video chip, main cpu, all have to utilize a bus to the same memory. To schedule all these requests, you have to use even/odd cycle schemes or similar, you can't use the bus all for one chip.

        RAM access cycle interleaving works for pre-burst memories (e.g. DRAM, SRAM). Current synchronous RAMs (since late-nineties SDRAM) operate in bursts, i.e., the address is set on the bus, and then, at every clock, a read (or write) operation is performed, with the next address increased implicitly (burst transfer). So my bet is that there is no RAM cycle interleaving for modern synchronous DRAMs, as it would be very complex and nonsense to add "interleaving logic" in between the DRAM controller and th

    • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Thursday January 01, 2009 @10:23AM (#26290573) Homepage Journal

      I really don't know what the hell Sony was thinking with putting a brand new kind of processor in a console. I'm willing to bet in 10 years there are compilers and systems out there that make real good use of the Cell. However that does you no good with games today.

      It IS a bit hilarious, isn't it? The Playstation murdered the Saturn in part because it was easier to develop for, with one CPU (a MIPS core at that!) and one graphics chip. Then Sony completely blew it with the PS2, made the most complicated video game console to program for ever, and Microsoft made huge inroads. Then they blew it again with the PS3. A majority of developers willing to speak on such issues despise both systems. Sony would be out of the video game market completely at this point if it weren't for the Xbox 360 RROD.

      • Well, considering that the PS2 was the most successful console ever and is STILL going strong, Sony must have done something right.

        • Consumer-wise they did right by including DVD, and they could ride on the PS1; it gave the machine enough momentum to be worth programming for (after all, the developers are the last ones to be asked whether it is worthwhile to develop for a platform). That does not change the fact that developers still hate the platform, and they similarly hate the PS3 for exactly the same reason!
          You can see that in the quality: outside of Sony's own development, all stuff done for the PS3 is a rather unoptimized quick port, often the PS3

      • Re: (Score:3, Informative)

        by rtechie ( 244489 ) *

        most complicated video game console to program for ever

        Actually, the Saturn was probably the most difficult console to program for ever. Sega basically told developers "Here's some of the system calls and incomplete design docs. Have fun." and it NEVER got any better. To this day there are parts of the Sega Saturn that are basically totally undocumented. Notice how you've never seen any Saturn emulators? This is why.

        The first 2 years of the PS2 were painful, and then much better development tools arrived on the scene that handled much of the fiddly crap. Nowada

        • Actually, the Saturn was probably the most difficult console to program for ever. Sega basically told developers "Here's some of the system calls and incomplete design docs. Have fun." and it NEVER got any better.

          Yes, one developer likened it to "a pile of chips on a board" (as accurate a quote as I can remember anyway.) But on the plus side, it uses mostly-standard chips, and the two processors are symmetric. And let's be fair, only a few developers ever really got the power out of the PS2. Some games you look at and you're like, how did they do that. Most games you look at and you're like, I'd rather play this on the Xbox.

      • Re: (Score:3, Insightful)

        Almost every console that has ever failed has failed because they screwed the developers.

        Developers jumped ship from Nintendo to Sony because Nintendo was still requiring carts, huge lead times, limited number of titles to each company, certification, and so on. Sony removed these restrictions, and developers didn't let the door hit their asses on the way out.

        Sega had a history of screwing up with developers; random hardware add-ons like the Sega CD, 32X and so on; if the Dreamcast had come out without sev

    • I was watching interviews with PS3 developers, and one who had made a decent-looking game said all they used was the PowerPC core and none of the SPEs.

      Any performance deficiency compared to the 360 is just a programming issue. The hardware is there.

      • Re: (Score:3, Informative)

        by Ilgaz ( 86384 )

        Programming issue as a result of development tools? I am a Symbian user since the Nokia 7650 (first S60), and I keep getting amazed at developers' love for the iPhone: how a very advanced application like Fring can ship in a matter of months without any kind of help from Apple, and how wisely OpenGL (ES) acceleration was used, while it was ignored on my poor UIQ3 Sony Ericsson P1i for years until the Opera 9.5 beta.

        People say SSE could just reach the point of Altivec after new Xeons and yet as a G5 owner, I kept wondering w

        • AltiVec was not used by Apple that extensively because they were already planning to jump ship to Intel. The parallel Intel port of OS X always had to compile and run beside the PowerPC port, even from the time Apple bought NeXT!

        • AltiVec was not used much? It was used for the software fallback for OpenGL shaders, in CoreImage (i.e. anything that did image processing) for most filters, and in QuickTime (for all of the in-house Apple codecs, including the MPEG series). Some things, like video conferencing with iChat, were AltiVec-only (they wouldn't work on a fast G3, but would on a slow G4). Apple also contributed a lot of autovectorisation stuff to GCC, so a lot of code used AltiVec without explicit modification. Lots of third party devel
        • But it is just more complicated.

          Game developers had a hard time learning to take advantage of multi-core, so even early 360 titles weren't that hot. It's even harder with the PS3, with one CPU plus 6 processing elements.

          Not so much the tools, but a complete philosophy change in the design of the code.

      • Well, for decent graphics you do not need the CPU that much; you can offload things onto the GPU, which is more or less an nvidia G70, fast enough to pull off decent graphics. Things become nasty once you get into the area of having to control multiple things in parallel and when things become multithreaded!
        The SPEs are a neat addition if you need physics and particle effects and maybe realtime decompression of textures to bypass the memory limits, but that's it. The SPE units have their place, but more in other mac

    • by Xest ( 935314 )

      "I really don't know what the hell Sony was thinking with putting a brand new kind of processor in a console."

      I do, exactly the same they were thinking with putting Bluray in it, only the Bluray gamble paid off.

      Sony were over-confident in their ability to carry over their first-place console success from the PS1 to the PS2 and then to the PS3; they didn't count on coming in anything other than first place - you only have to look at their comments prior to and shortly after the PS3's release to see evidence of this

    • Re:It also helped MS (Score:4, Informative)

      by TheNetAvenger ( 624455 ) on Friday January 02, 2009 @01:33AM (#26297119)

      Ya, well, it turned out not to be near powerful enough for that, so late in the development cycle they went to nVidia to get a chip.

      The funny part is that the NVidia chip Sony is using in the PS3 only exists because of Microsoft and Microsoft funding.

      On the original XBox MS Engineers worked with NVidia to create what was the technology behind the Geforce4ti. The GPU created for the XBox was the first (NVidia at least) GPU that had Pixel Shader technology.

      It was the work of the MS engineers and NVidia creating this custom GPU that NVidia took on to become the Geforce4ti (high end) and the GeforceFX (5xxx series) lines of GPUs.

      It wasn't until the 8xxx series of NVidia GPUs that they abandoned the architecture that was co-designed and funded by Microsoft originally.

      This is why, when NVidia was asking for more money per GPU for MFR on the XBox GPU, MS basically told them to pound sand, as they had already helped to create and fund the entire line of PC GPUs that was giving NVidia the success they were having.

      So not only did Sony screw themselves by shoving a 'slower' Geforce 7900 into the PS3, which caused their own delays, but the Geforce 7900 in the PS3 is based on designs from Microsoft engineers and MS funding that NVidia got during the original XBox development.

      Besides adding the GPU into the PS3 at a late date, Sony screwed themselves with their own problems that were beyond anything IBM was doing.

      Look at the PS3 Development tools. Even if Sony was waiting on parts from IBM, they could have at least had a mature set of development tools, instead even 'after' waiting on IBM or whatever their excuses are, their development tools sucked ass and can be argued to this day still don't properly harness the power of the Cell processor.

      So if it was just waiting on IBM, the development tools would have been done and waiting, instead, the hardware was available before even a realistic or solid set of development tools were available.

      In contrast, MS's development tools for the XBox 360 were ahead of the hardware and developers were using two G5 Macs running a custom version of Windows2003 x64 with a full set of development tools. And when the REAL XBox 360 hardware was made available to developers, they again got updated development tools from Microsoft that directly targeted the tri-core PowerPC and the MS designed ATI GPU that was optimized for the actual hardware.

      Basically MS didn't even have the XBox 360 hardware, but had development tools in the hands of game developers and even found a way to provide these on an emulated hardware configuration. Sony could have done this; instead they screwed developers and still do, and now they blame IBM for delaying their 'precious' chip. Holy lord of the rings...

      With regard to the poster I am replying to, they are spot on with many things. The PS3 GPU is a slower version of the NVidia 7900 - this means laptops from 2005 have faster GPUs in them than a PS3. How is that for sad and scary...

      Additionally, it was MS designs (that they kept ownership to this time) on the ATI based GPU technology in the XBox 360 that set the standard for all current GPUs on the market today. It was a unified shader technology, with on chip cache for AA, and also was designed to use the shared memory architecture that the Vista WDDM model is built around.

      So every time you see a video card from ATI or NVidia with DX10, the design comes from MS engineers. (Yes NVidia didn't have access to, but used the design specifications behind the DX10 hardware specifications designed and written by Microsoft for their 8xxx and newer GPUs.)

      Technically the GPU in the XBox 360 is a DX11 based GPU that is ahead of the current generation of GPU architectures still, and won't see desktop PC equivalents until you see DX11 GPUs on the shelves. (As it has hardware WDDM 1.1 hooks that current desktop GPUs do not have.)

      I actually think the PS3 is a good gaming system for what it is. It is a good Blu-Ray player too.

      It was

  • by Anonymous Coward

    I don't see how Sony's research money was used, or is really in question, for any of this.
    The PowerPC both CPUs are based on is the PowerPC 970, the processor Apple used in their G5 series - but from there, the difference is that they disabled out-of-order execution and implemented SMT from the POWER5; on Sony's behalf they added 8 newly developed SIMD coprocessors known as SPEs.
    For Microsoft, well, they wrote a new version of VMX called VMX-128, something not to be found on the Cell, which still uses the old VMX (Mostly A

  • Hmm, really? (Score:5, Interesting)

    by Ecuador ( 740021 ) on Thursday January 01, 2009 @06:26AM (#26289835) Homepage

    Maybe I have to read the book to get a better picture; it is possible that the article blows things out of proportion. So, I thought that the whole "deal" about the Cell is the SPEs. The Xenon CPU that powers the Xbox 360 is just a custom-made triple-core PowerPC. Now, I guess the "customization" of that core is similar to what is done for the PPE of the Cell, so research there could have overlapped, but I would not think that the PPE is the "essence" of the Cell - at least that is what Sony's and IBM's own claims have made me believe.
    Additionally, I have to admit that I always thought the usage of the Cell processor a very bad (or, more precisely, very arrogant) decision. It is not just that it has many "cores"; the fact that they are asymmetric and that SPEs are not your usual general-purpose cores was bound to make it very hard for developers to utilize them. If you wanted to develop for many platforms, there is no way you would want to optimize for the SPEs when all other architectures (PC, Xbox...) use symmetric, general-purpose cores. So, in my book, the Microsoft engineers knew much better what they were doing than the Sony ones. I guess they are not the same engineers responsible for gems like Me, Vista or the Zune firmware.
    What I would like to know is what differences the modified core has compared to a "classic" PowerPC core. If MS had not benefited at all from Cell research and got a triple-core whose cores were closer to the original PowerPC, would it be a much different CPU? Anybody know? If the answer is no, the whole discussion about MS benefiting from Sony is moot...

    • Re: (Score:3, Informative)

      The XBox360 cores don't have any superscalar features, things like branch prediction, instruction re-ordering or speculative execution. That means they use much less power than a regular core (and so generate less heat), but only run branchy game logic type code at around half the speed.

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        > instruction re-ordering

        I don't know if any of the current generation of console CPUs has re-ordering... and for good reason. In an ideal world the compiler would schedule instructions well (or, rather, "well enough that dicking with it further in hardware wouldn't be a productive use of silicon") However in the real world 99.999% of users aren't running gentoo — instead, they have binaries which weren't likely compiled for the exact cpu that they're running. A hardware reorderer can make signif

      • by Glonk ( 103787 )

        By definition, a superscalar processor does not need instruction re-ordering, speculative execution, or branch prediction. All a processor needs to do to be called superscalar is dispatch more than one instruction per clock cycle to redundant functional units on the processor -- the 360's CPU is absolutely superscalar.

        Incidentally, you are incorrect on other aspects anyway: the Xenon cores DO have branch prediction, just in a significantly diminished capacity compared to what we're used t

      • No branch prediction? I find that incredibly hard to believe. The average bit of general purpose code has a branch every 7 instructions. Even a 1-bit branch predictor gets around a 50% performance improvement over not having one. No one has designed CPUs (other than 8-bit microcontrollers) without branch prediction since the early '90s.
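
        (Rough arithmetic behind a figure like that, with assumed numbers: take a branch every 7 instructions, a 5-cycle flush on a missed branch, and an otherwise-ideal 1 instruction per cycle. Always stalling costs 1 + 5/7 ≈ 1.71 cycles per instruction; a simple predictor that's right 80% of the time costs 1 + 0.2 x 5/7 ≈ 1.14 - right around a 50% speedup.)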
    • Re: (Score:3, Insightful)

      I agree here as well; there is nothing from the Cell design which went into the Microsoft PowerPC core. IBM's processor business nowadays is mostly customizing PowerPC processors for various customers. The design which went to Microsoft is basically just a trimmed-down G5 core, times three cores, while the Cell's is a trimmed-down G5 core with a load of SIMD units!

    • EXCUSES~1 .. (Score:3, Interesting)

      by rs232 ( 849320 )
      "Maybe I have to read the book to get a better picture, it is possible that the article blows things out of proportion. So, I thought that the whole "deal" about the Cell are the SPE's. The Xenon CPU that powers the Xbox 360 is just a custom-made triple core PowerPC", Ecuador

      Well, unless you know different, we'll just have to take the points raised in the article as accurate. And if the Cell were just a custom-made core, then why the need to commit $400 million over five years?

      "I agree here as well, there
      • by Ecuador ( 740021 )

        "And if the CELL was just a custom-made core then why the need to commit $400 million over five years?"

        You are confusing me: while your statement agrees with mine, you write as if in disagreement. Which is it?
        I said exactly that while the Xenon IS just a custom-made triple core, the WHOLE POINT (and the money, I guess) of the Cell is what lies beyond its PowerPC-derived core.

        Still confused by your further comment on IamTheRealMike: did you just want to add the link for reference purposes? The link is what the poster said, while he also analyzed what "in-order execution" means.

  • by stephen70 ( 1192101 ) on Thursday January 01, 2009 @09:33AM (#26290379)

    Slashdot users, read and learn, because anyone who fails to understand the following is uninformed >

    The SPUs on the Cell and the PPC AltiVec unit on the Xenon (X360) are very closely related; never before had IBM done a 128-register, 128-bit AltiVec unit. The 128-bit x 128-register AltiVec VMX128 unit on the Xenon is the best of any CPU; it is also an almost perfect subset, or cut-down version, of the Cell's SPU!

    In non-branching calculations, and assuming no cache misses, VMX128 performance is equal to the SPU's performance. This is not a coincidence; it's a newly shared design feature in both the instruction sets and the silicon fab, and it clearly shows the CPU designers shared a lot.

    The older VMX has only 32 registers. Only the Xenon PPC cores and the Cell's SPUs have this new VMX128-type arrangement with 128 SIMD registers - especially enhanced for multimedia and gaming.
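
    (To make the shared flavour concrete, here is a minimal 4-wide dot product in C using the standard altivec.h intrinsics - alignment handling and the horizontal sum are simplified for illustration, and this is the plain 32-register VMX view, not VMX128:)

        #include <altivec.h>

        /* Dot product of two 16-byte-aligned float arrays; n must be a multiple of 4. */
        float dot(const float *a, const float *b, int n) {
            vector float acc = (vector float){0.0f, 0.0f, 0.0f, 0.0f};
            for (int i = 0; i < n; i += 4) {
                vector float va = vec_ld(0, &a[i]); /* aligned 128-bit load */
                vector float vb = vec_ld(0, &b[i]);
                acc = vec_madd(va, vb, acc);        /* fused multiply-add, 4 lanes */
            }
            float tmp[4] __attribute__((aligned(16)));
            vec_st(acc, 0, tmp);                    /* spill the 4 lanes and sum them */
            return tmp[0] + tmp[1] + tmp[2] + tmp[3];
        }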

    • by Ecuador ( 740021 )

      Could you please provide a link to a valid source? I tried to find something, but I always come up with statements like "Xbox 360 has VMX128 while PS3 only has VMX", and on the IBM website the only mentions of VMX128 are about the Xbox's Xenon CPU.

  • by EXTomar ( 78739 ) on Thursday January 01, 2009 @03:02PM (#26292177)

    Japan has always been like this. Take a look at the PS3 and Wii. Both offer highly proprietary, custom-built, in some ways convoluted technology applied to the same problem. But for some reason Sony is treated as idiots while the author sort of forgets that the Wii takes the prize. For whatever reason, Japanese engineers like doing this: when there is no technology that exactly fits a problem, they tend to build a new one, even if there are pre-existing solutions that almost achieve it. Just like other capital projects, it sometimes pays off and sometimes fails.

    Another thing not considered is the fact that the XBox 360 is the most conservative console out of the three. The software and hardware technology in the Wii and PS3 are dramatically different from their predecessors, with features that simply don't exist in their ancestors. On the other hand, the XBox 360 is more like a beefier XBox. I think the real story is that Sony gambled on some fundamental technology shifts and it didn't pan out. Microsoft, on the other hand, "played safe" and iterated. There is nothing wrong with that, but to claim it's some technology shift or special insight, especially given their production and software problems, is a bit much.

  • by CTho9305 ( 264265 ) on Thursday January 01, 2009 @03:53PM (#26292595) Homepage

    At first glance, the Xbox CPU [ctho.ath.cx] doesn't really resemble Cell [ctho.ath.cx], but if you just compare Cell's PPE to one of Xenon's three cores the similarity is striking: Xenon [ctho.ath.cx], Cell [ctho.ath.cx]

  • I have no patience with inflicting lawyerese on mass market consumers, but these are big boys playing in a big money game. They can afford to hire the best lawyers, especially when they're slinging this kind of money around.

    A good lawyer doesn't stand in the way of a business deal, he just makes what you assume about the business relationship explicit. If Sony was surprised by what IBM did, they have nobody to blame but themselves.
