A Brief History of Chip Hype and Flops

kdawson posted more than 5 years ago | from the open-mouth-insert-marketing dept.

Hardware 275

On CNET, Brooke Crothers has a review of some flops in the chip-making world — from IBM, Intel, and AMD — and the hype that surrounded them, which is arguably as interesting as the chips' failures themselves. "First, I have to revisit Intel's Itanium. Simply because it's still around and still missing production target dates. The hype: 'This design philosophy will one day replace RISC and CISC. It is a gateway into the 64-bit future.' ... The reality: Yes, Itanium is still warm, still breathing in the rarefied very-high-end server market — where it does have a limited role. But... it certainly hasn't remade the computer industry."


Itanium would have worked-AMD screwed it for intel (4, Interesting)

Prodigy Savant (543565) | more than 5 years ago | (#26869929)

If AMD hadn't rushed out their 64-bit version of x86, Itanium would be getting popular, and hence cheap, by now.
Market forces have so much to do with technology advancement. A lot of times, superior technology has to take a back seat ...

Re:Itanium would have worked-AMD screwed it for in (5, Insightful)

hannson (1369413) | more than 5 years ago | (#26869947)

I don't know enough about the architectures to say which one is better (x86-64 vs IA-64), but backwards compatibility with x86 is a big win for x86-64.

Re:Itanium would have worked-AMD screwed it for in (3, Interesting)

wisty (1335733) | more than 5 years ago | (#26870699)

It didn't help that part of the advantage of IA-64 was that it let programmers write their own branch prediction, which they didn't want to do.

Re:Itanium would have worked-AMD screwed it for in (4, Informative)

Jurily (900488) | more than 5 years ago | (#26869973)

I don't think so. x86-64 is fully backwards-compatible with x86. Itanium is not.

Wanna guess why they're not that popular?

Re:Itanium would have worked-AMD screwed it for in (5, Informative)

snowgirl (978879) | more than 5 years ago | (#26870001)

I don't think so. x86-64 is fully backwards-compatible with x86. Itanium is not.

Wanna guess why they're not that popular?

You don't know the architecture? The first Itaniums had hardware x86 support on die. The only reason they don't now is that it turned out to be faster to emulate x86 in software than to run it on that diminished hardware.

Re:Itanium would have worked-AMD screwed it for in (5, Interesting)

Bert64 (520050) | more than 5 years ago | (#26870349)

The first Itaniums were pretty much a dismal failure...
They ran at around 800MHz, so they were clocked lower than x86 systems of the time, which were around 1.4GHz if I remember (and the MHz myth was still very much alive, with Intel fuelling it using the P4)... Their x86 support was roughly the speed of a P90 and therefore of little use beyond running one or two small legacy apps.
In terms of outright performance they were behind Alpha and POWER at the time; so much for the new architecture. And when it came to price and power consumption they were behind everyone else.

When Itanium 2 came around it performed a lot better, though it still guzzled power, and they realised that software emulation of x86 was faster than the hardware support. Other than that, the chips were still too expensive for what they were.

Now, Itanium is pretty much relegated to the high end niche that Alpha occupied before it was canned.

Itanium suffered from end users being locked in to proprietary, binary-only software, which only the original vendor could port... Some were unwilling, some didn't see the business case, some demanded that HP/Intel fund the porting, but they couldn't fund everything, so Itanium was left with a very limited set of apps...
OSS support was better, but it suffered from the high cost and rarity of the hardware, in that hobbyists had little chance of getting hold of the hardware to play with.

Personally I think HP/Intel would have been better off putting the effort into continued development of Alpha... It already had a software and user base, it already had x86 emulation which performed reasonably well, and it had a legacy of old hardware that was cheaply available to OSS developers. Even today, the Alpha ports of Linux seem far more active than the IA64 ones... Plus any customers already using Alpha would not have needed to migrate (and many of them migrated to Sun or IBM instead).

Re:Itanium would have worked-AMD screwed it for in (1, Interesting)

Anonymous Coward | more than 5 years ago | (#26870661)

It's probably not too important in the grand scheme of things, but the Alpha worked fine in workstations, something Itanium doesn't seem to be suitable for. Having workstations on the same architecture is likely one of the reasons Linux/alpha is doing well, and it might be convenient for the other developers as well.

Re:Itanium would have worked-AMD screwed it for in (4, Interesting)

anothy (83176) | more than 5 years ago | (#26871095)

HP/Intel would have done better, technically, to work on Alpha, but they couldn't have dominated the market to their taste in that case. half the point was to have something they controlled, and Alpha, while technically great, was already too widespread for that.
which, really, is the most important response to the original parent's point. what was AMD supposed to do, sit around while Intel dictated the terms of the next stage of the market? what gives Intel some inherent right to that sort of dominance? AMD did exactly the right thing, from a business perspective: they saw what they believed to be a strategic mistake that left a hole in the market, and produced a product to fill it. turns out they were right.
turns out it was the right thing to do technically, too. when Itanium hype was at its peak, I remember lots of actual engineers I knew (and even some subset of the tech press) pointing out that EPIC was really just tweaked VLIW, and that VLIW had already been tried and had failed a few times. amd64 has consistently outperformed IA64.

even the quote in the summary is misleading. yes, IA64 is still plodding along in the high-end server market, but it's an also-ran even there. POWER and amd64, in particular, continue to trounce it, both in the normal "server" market and in the really high-end scientific cluster stuff (it's got, what, one spot on the top500 list?). it's a pretty substantial failure, really, all around.

Re:Itanium would have worked-AMD screwed it for in (1)

drinkypoo (153816) | more than 5 years ago | (#26871207)

Now, Itanium is pretty much relegated to the high end niche that Alpha occupied before it was canned.

All I know about iTanic sales is that the only customer I know of was unwilling. Yuba College in Marysville used a 4-way alphaserver to handle the system on which their student information is kept. The upgrade path for that software after the death of the Alpha was to go to iTanic. So they now have an 8-way iTanic2 server to do what? Some database shit, basically. Run HP-SUX. I used some of the extra horsepower to implement ipsec to the windows clients for them, but it's still a massive overkill and a massive waste of tax and tuition money. But they had no choice.

Given just how fucking dismal iTanic sales numbers have been, I wonder how many of those sales are customers given no choice by their vendor and forced to install such a system.

Re:Itanium would have worked-AMD screwed it for in (0)

Anonymous Coward | more than 5 years ago | (#26869977)

Well, I guess having better compilers for IA64 would have helped greatly, considering that the architecture's performance depends critically upon the compiler detecting instructions that are not interdependent.

Re:Itanium would have worked-AMD screwed it for in (4, Insightful)

snowgirl (978879) | more than 5 years ago | (#26870055)

Well, I guess having better compilers for IA64 would have helped greatly, considering that the architecture's performance depends critically upon the compiler detecting instructions that are not interdependent.

That's pretty much hitting the nail on the head. Intel made the IA64 under the assumption "make a better chip, and the compiler will follow". Unfortunately, they didn't realize how much inertia was behind x86. AMD exploited it and POOF, Itanium goes down in flames. :(

Re:Itanium would have worked-AMD screwed it for in (1)

cheesybagel (670288) | more than 5 years ago | (#26870611)

The original Itanium was shit. That was why it went down in flames. It was worse than the end-of-lifed processors it was meant to replace.

Re:Itanium would have worked-AMD screwed it for in (0)

Anonymous Coward | more than 5 years ago | (#26870405)

Compilers that one wasn't expected to PAY FOR? Yeah, I tried to get a compiler, way back when. I can't remember the price - but it was much more than I could afford to pay, just to play with it. And, I'm sure that some businesses that needed it pretty badly balked at the price tag.

Re:Itanium would have worked-AMD screwed it for in (5, Interesting)

learningtree (1117339) | more than 5 years ago | (#26870009)

The biggest advantage of AMD x64 over Itanium is the ability to run x86 32-bit code natively without any performance penalty.
The comparison is not just about better technology. Think of the trillions of lines of x86 32-bit code that have been written.
Would you render all this code unusable just because you want to move to a better architecture?

Re:Itanium would have worked-AMD screwed it for in (1)

harry666t (1062422) | more than 5 years ago | (#26870215)

> Would you render all this code unusable just because
> you want to move to a better architecture?

Yeah, and I'd put Debian on that machine.

Re:Itanium would have worked-AMD screwed it for in (3, Insightful)

Bert64 (520050) | more than 5 years ago | (#26870369)

Very little code is written in x86 assembly; the vast majority is written in higher-level languages and then compiled or interpreted... When you have the source code, porting it to IA64 is relatively easy. Look at Linux: it runs on a variety of architectures, as do a huge number of applications. Many of the original authors of those apps would never have considered that they might be running on IA64, Alpha, ARM, MIPS or SPARC someday...

The problem is software being delivered as binaries. Binary software distribution is holding back progress, making it necessary to continue supporting old kludgy architectures instead of making a clean break to something new and modern.

Re:Itanium would have worked-AMD screwed it for in (0, Flamebait)

frdmfghtr (603968) | more than 5 years ago | (#26871035)

The problem is software being delivered as binaries. Binary software distribution is holding back progress, making it necessary to continue supporting old kludgy architectures instead of making a clean break to something new and modern.

How would you address the issue of software distribution to home users, who may have neither the time nor the patience to wait for the compilation process? I'll assume for now that the compiling would be added as part of the installation process.

"Pro" or "power" users that pay for big apps may appreciate the flexibility, but home users, as we all know, want it to "just work."

Re:Itanium would have worked-AMD screwed it for in (1)

gzipped_tar (1151931) | more than 5 years ago | (#26871143)

How would you address the issue of software distribution to home users, who may have neither the time nor the patience to wait for the compilation process? I'll assume for now that the compiling would be added as part of the installation process.

"Pro" or "power" users that pay for big apps may appreciate the flexibility, but home users, as we all know, want it to "just work."

Home users are "end users". No matter how the software is distributed, end users don't have to compile everything. It's the distributors' job to release pre-compiled binaries for all targeted architectures, and Open Source makes porting to a new architecture possible and easier for the distributors.

You are worrying about all home users going the Gentoo way, which is not happening.

Re:Itanium would have worked-AMD screwed it for in (0)

Anonymous Coward | more than 5 years ago | (#26870371)

I'd not hesitate, assuming I had a way to virtualize the code (16-bit, 32-bit, and amd64) in such a way that a user wouldn't care that it was running under a hypervisor.

Itanium not superior technology at all (5, Insightful)

TheLink (130905) | more than 5 years ago | (#26870017)

The Itanium is not superior at all.

Even before the AMD64, the Itanium was mainly good at contrived FPU benchmarks. It was dismal in integer performance.

When you didn't care about x86 compatibility and wanted to spend lots of money for the usual reasons, it was better to go with IBM's offerings like POWER (which is still a decent contender in performance).

Intel couldn't offer you much else other than the CPU. They had to rely on HP, who just left their Tandem and VMS stuff to rot. Yes there were other big names pretending to do Itanium servers, but in practice it was HP.

The Itanic was an EPIC failure.

Re:Itanium not superior technology at all (2, Interesting)

BikeHelmet (1437881) | more than 5 years ago | (#26870617)

Didn't the Power6 have insane FPU performance? Double that of its contenders?

I think it still beats every CPU out there. (FPU only)

I remember seeing benchmarks where a 4-core Power6 beat 8 Xeon cores and 8 Opteron cores by a safe margin.

But those things are so huge... at the time of release, they were bigger than any GPU. :P Lots and lots of transistors, and lots of GHz.

Re:Itanium would have worked-AMD screwed it for in (2, Informative)

anss123 (985305) | more than 5 years ago | (#26870073)

If AMD hadn't rushed out their 64-bit version of x86, Itanium would be getting popular, and hence cheap, by now. Market forces have so much to do with technology advancement. A lot of times, superior technology has to take a back seat ...

Perhaps, but how superior is that superior technology?

The idea with Itanium was to make a CPU that could perform on the level of RISC and CISC CPUs with a relatively simple front end. In essence the Itanium executes a fixed number of instructions each cycle, then leaves it to the compiler to select which instructions are to be executed in parallel and make sure they don't read and write to the same registers and such (instead of having logic in the CPU figuring this stuff out).

It was a neat idea, but advances in manufacturing technology favored CPUs with more complicated front ends. The Itanium advantage never materialized on the desktop, so had this "superior" technology taken off, we might have had faster computers at the cost of making all our software run on this bling architecture.

Making big ISA changes for a mere speed boost is not worth it, and it's not certain you'd even get that, as the Itanium does not always outperform the x86.
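To make the scheduling point concrete, here is a minimal C sketch (plain C, not IA-64 assembly; the variable names are made up for illustration) of what the compiler has to find on an EPIC/VLIW-style design: groups of mutually independent operations it can bundle for parallel issue, versus dependency chains it cannot.

    /* Illustration only: on an EPIC/VLIW-style target the compiler, not the
     * CPU, must discover which operations are independent and pack them into
     * a bundle for parallel issue; an out-of-order x86 finds this at run time. */
    #include <stdio.h>

    int main(void)
    {
        int a = 1, b = 2, c = 3, d = 4;

        /* Independent of each other: a compiler for a 3-wide machine could,
         * in principle, schedule these three operations in one bundle. */
        int x = a + b;
        int y = c * d;
        int z = a - d;

        /* A serial dependency chain: each step needs the previous result,
         * so no amount of compile-time scheduling can overlap these. */
        int w = x + y;
        w = w * z;
        w = w + a;

        printf("%d\n", w);
        return 0;
    }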

Re:Itanium would have worked-AMD screwed it for in (3, Insightful)

Hal_Porter (817932) | more than 5 years ago | (#26870155)

The idea with Itanium was to make a CPU that could perform on the level of RISC and CISC CPUs with a relatively simple front end. In essence the Itanium executes a fixed number of instructions each cycle, then leaves it to the compiler to select which instructions are to be executed in parallel and make sure they don't read and write to the same registers and such (instead of having logic in the CPU figuring this stuff out).

Actually you could see that Itanium was in deep trouble when it launched at a lower clock rate than x86. The whole idea behind EPIC (Explicitly Parallel Instruction Computing) was that you move instruction scheduling to the compiler, which allows you to essentially out-RISC RISC, i.e. build a dumber chip that can be clocked faster. I think you're right about technology too. Back in the CISC vs RISC days an R4000, for example, could be clocked faster than a 486 due to its ultra-streamlined pipeline (MIPS originally meant "Microprocessor without Interlocked Pipeline Stages"). Itaniums, for a variety of reasons, ended up clocked slower than x86. Partly I think too much stuff got added to the architecture, and partly I think x86 chips were already very close to the process limit for frequency, so a simpler architecture wouldn't run any faster.

I sort of wonder if .Net might have been part of the sketchy Itanium strategy too. The big thing about .Net is that it is a VM that is designed to be JITted rather than interpreted. Part of EPIC was that chips would be binary compatible, at least for user code, but that old binaries would not necessarily run optimally. It's easy to see why - a binary compiled for an old chip with n functional units would have fewer instructions scheduled to run in parallel than one compiled for a new one with 2n units assuming the scheduling was done at compile time.

Of course with .Net the applications are compiled for a VM and then JITted. If you had a new chip, the .Net JITter could detect this and schedule optimally.

Re:Itanium would have worked-AMD screwed it for in (1)

Bert64 (520050) | more than 5 years ago | (#26870381)

IA64 wasn't so much about clock rate as about theoretical instructions per clock...
Rather than having multiple cores, the idea was a sort of SIMD-like parallelism throughout the processor, relying on the compiler to generate optimal code...
Assuming you have optimal code, an Itanium should be able to get a lot more work done in a single clock cycle than any x86 chip.

Re:Itanium would have worked-AMD screwed it for in (2, Funny)

Anonymous Coward | more than 5 years ago | (#26870389)

Boohoo! AMD is being so *mean*! They're *competing* with us! It's just not *fair*!

Re:Itanium would have worked-AMD screwed it for in (1)

howlingmadhowie (943150) | more than 5 years ago | (#26870471)

no, the main problem is proprietary software. the amd64 could establish itself because it supported the existing proprietary software. it would be interesting to know what percentage of x86-64 systems are still running 32-bit software exclusively. i'd estimate about 90%. the reason? broken or missing flash for x86-64, broken or missing windows, broken or missing microsoft office, broken or missing photoshop, broken or missing autocad, etc.

free software has the advantage that, as long as you aren't embedding assembler, the entire free software stack should work on a new architecture after adding a new target to gcc and a couple of assembly routines to the kernel. this is why i can switch seamlessly between hppa, sparc32, ppc, x86, x86-64, alpha and mips for all my daily needs without having to know which architecture i'm using.

Re:Itanium would have worked-AMD screwed it for in (1)

Tony Hoyle (11698) | more than 5 years ago | (#26871289)

It's a *little* harder than that... you have to worry about endianness, alignment issues, word sizes, etc., and these do affect higher-level languages. Compiling for a new architecture, even with code that's been compiled on other architectures before (if it hasn't, chances are it'll need modification), is more than a simple recompile.
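As a small, hedged illustration of the kind of thing that bites you (ordinary C, nothing assumed beyond an x86 host for the stated output), consider code that compiles cleanly everywhere but quietly assumes little-endian byte order and a particular word size:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        uint32_t value = 0x11223344;
        uint8_t first_byte;

        /* The first byte in memory is 0x44 on a little-endian machine (x86)
         * but 0x11 on a big-endian one (classic PowerPC, SPARC), so any code
         * that serializes structs by dumping raw bytes breaks when ported. */
        memcpy(&first_byte, &value, 1);
        printf("first byte in memory: 0x%02x\n", first_byte);

        /* Word-size assumptions break too: long is 32 bits on ILP32 targets
         * and 64 bits on most 64-bit Unix ABIs (LP64), so casts and masks
         * involving long behave differently depending on the architecture. */
        printf("sizeof(long) = %zu\n", sizeof(long));
        return 0;
    }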

FTA: (1)

hannson (1369413) | more than 5 years ago | (#26869935)

Intel Itanium never took off / didn't replace Xeon

The PowerPC architecture was dumped by Apple and failed to challenge Intel in the PC market in a big way.

AMD is consistently beaten by Intel in the mobile marketplace.

Re:FTA: (4, Insightful)

snowgirl (978879) | more than 5 years ago | (#26870035)

The PowerPC architecture was dumped by Apple and failed to challenge Intel in the PC market in a big way.

You missed the proper order. The PowerPC architecture didn't have the money behind it that the x86 architecture did. Take a crappier design but spend a ton more money on it, and you can easily make it faster than a better design.

The PowerPC failed to compete effectively against the Intel/AMD competition, and thus, Apple was pretty much forced to switch because of simple economics.

Re:FTA: (1)

cheesybagel (670288) | more than 5 years ago | (#26870659)

One reason for PowerPC's market failure was Apple killing the Mac clones by refusing to sell the OS to those vendors. Without a desktop OS for it, the machines themselves were pretty much useless, and the platform has dwindled ever since. Apple did this because they were taking heavy losses, since the cloners (e.g. Power Computing) made better machines than Apple themselves did.

Re:FTA: (1)

anothy (83176) | more than 5 years ago | (#26871157)

if what you're saying were true, we ought to have seen OS X's market share decrease after the clones were killed; the opposite happened. the primary reason for PowerPC's failure to remain competitive on performance in the desktop or laptop markets is that it simply wasn't the focus of the main designer and manufacturer, IBM. Apple machines (including clones) were always a minority of the PowerPC market, and IBM (and Motorola, then Freescale) simply focused on the larger market. IBM also focused a lot of its engineering effort on POWER, rather than PowerPC, for the very high-end stuff; while that had a limited trickle-down effect on the PowerPC parts Apple eventually saw, it was delayed by a generation and came with a set of trade-offs on power consumption and heat that didn't suit Apple's products (where's my G5 laptop again?).

Re:FTA: (1)

anothy (83176) | more than 5 years ago | (#26871173)

PowerPC failed to compete effectively against Intel/AMD in the laptop and desktop markets, so Apple was pretty much forced to switch. PowerPC was (and still is) doing quite well in very many embedded applications. Apple was the highest-profile PowerPC user, but they never represented anything close to the majority of the market; most of the engineering work went into environments where a different set of trade-offs was appropriate.

How can we have a serious discussion about flops (3, Insightful)

Anonymous Coward | more than 5 years ago | (#26869975)

How could the writer blatantly ignore the 486SX, the WinChip, or the original (cacheless) Celeron??

I've always contended that TI's 486DLC (which fit in a 386 socket) was one of the worst chips I ever used: it overheated, lacked full 486 compatibility, and froze up the system with random halts whenever I needed to get something done on it!

Re:How can we have a serious discussion about flop (2, Insightful)

Cprossu (736997) | more than 5 years ago | (#26870033)

AC, how could you have forgotten to mention the Socket 4 Pentiums, or the K5 on AMD's side, the Transmeta Crusoe, the Cyrix MII, or the Slot 1 PIII 1.13?? On extraordinary cost alone, you could also call most of the Intel OverDrives flops.

Although the WinChip (shudders)... I hope no one was unlucky enough to have to depend on a box with one of those running it.

Re:How can we have a serious discussion about flop (2, Interesting)

anss123 (985305) | more than 5 years ago | (#26870121)

TI's 486DLC (which fit in a 386 socket) was one of the worst chips I ever used: it overheated, lacked full 486 compatibility.

What app did you run that needed full 486 compatibility? Being able to plug a 486 into an old 386 mobo seems like a neat idea, and any software that ran on that 386 would of course run on the nerfed 486, right?

Too bad about the overheating though.

Re:How can we have a serious discussion about flop (0)

Anonymous Coward | more than 5 years ago | (#26870277)

The program I needed the full compatibility for was a compiler, IIRC...

It was a cool idea, except the original mobo I had for it didn't work right, so I bought another one which was compatible with it, and that got it working... sort of...

It was annoying because they mostly didn't make 386 heatsinks, and a 486 heatsink wouldn't fit and clear all the components on the motherboard without modification...
It was more than a nerfed 486 though; it was more of a badly copied 386 with _some_ 486 instructions, as sometimes it failed to run even correct 386 assembly properly...
The best day I had was when I replaced it with a... I think it was a 486SL integrated on an IBM Blue Lightning motherboard...
Speaking of flops, I really, really enjoyed the heck outa my Blue Lightning. The (rather tiny-looking) onboard 486SL was clocked at 75MHz, and I found it could even run circles around the 75MHz Pentium that I later got... and yes, I did send my 75MHz Pentium back to Intel during the recall. I kinda wish I'd kept it for fun, though.

Re:How can we have a serious discussion about flop (1)

drsmithy (35869) | more than 5 years ago | (#26871229)

How could the writer blatantly ignore the 486sx, the winchip, or the original (cacheless) Celeron??

The 486SX was hardly a flop; they sold _very_ well.

What about ACE? (4, Insightful)

Hal_Porter (817932) | more than 5 years ago | (#26869981)

Back in 1999 the ACE Consortium had Compaq, Microsoft, MIPS Computer Systems, DEC, SCO, and a bunch of others [wikipedia.org].

The plan was to launch a MIPS-based open-architecture system running Windows NT or Unix. Back then the MIPS CEO said MIPS would become "the most pervasive architecture in the world". The whole thing fell apart as Compaq defected, and MIPS ran out of cash and got bought by SGI. DEC obviously moved to supporting Alpha instead. Microsoft shipped NT for MIPS, Alpha and PPC for another few releases and then gave up the ghost.

Re:What about ACE? (2, Informative)

anss123 (985305) | more than 5 years ago | (#26870163)

Back in 1999

Back on 1991 you mean?

I only know about that since it was mentioned in an article describing boot.ini. It was from an age before the web so I guess only those who bought certain dead tree magazines ever heard of it.

Re:What about ACE? (1)

Hal_Porter (817932) | more than 5 years ago | (#26870289)

Back in 1999

Back on 1991 you mean?

I only know about that since it was mentioned in an article describing boot.ini. It was from an age before the web so I guess only those who bought certain dead tree magazines ever heard of it.

Umm, yeah.

Re:What about ACE? (1)

Bert64 (520050) | more than 5 years ago | (#26870399)

A similar thing happened with PowerPC: it was going to be the next big thing... Apple came on board, Microsoft made a version of NT for it, Sun ported Solaris to it...

Motorola's m68k was the last big thing, so they assumed everyone would follow their migration path to PPC... Instead, most players dumped Motorola. They could have extended m68k like Intel has done with x86; the result would still have been messy, but not as bad.

Re:What about ACE? (2, Interesting)

anss123 (985305) | more than 5 years ago | (#26870665)

They could have extended m68k like Intel has done with x86, the result would still have been messy but not as bad.

Don't be too sure about that. The good old m68k had some instructions that gave CPU designers a headache at a glance :-) On the 68060 they literally dropped a number of commonly used instructions outright (I don't think Intel ever did that), and with the ColdFire descendant they dropped so much that it's not possible to write a "coldfire.library" like Amiga users did for the 68060.

By luck or by wisdom, x86 avoids the hardest problems normally associated with CISC.

Re:What about ACE? (1)

cheesybagel (670288) | more than 5 years ago | (#26870711)

They did, for a time. It was called the 68060 and had Pentium-like features. But it was too late, and they didn't bother raising the clock speed later on.

That's it? (5, Insightful)

Anonymous Coward | more than 5 years ago | (#26870011)

A short paragraph about Itanium (or, as the Register likes to call it, Itanic)? A few brief paragraphs about PowerPC? A few brief paragraphs about Puma?

Come on. There's a lot more scope for this sort of article. What about Rock [wikipedia.org] , promised three years ago, with tape out two years ago, and yet we're still waiting for systems? What about the iAPX 432 [wikipedia.org] ?

You've got the basis for a good article, but dear $DEITY, flesh it out! There's more meat on Kate Moss than on this article!

Re:That's it? (4, Insightful)

Hal_Porter (817932) | more than 5 years ago | (#26870281)

He'd be better off structuring the article as quiche eaters (computer scientists) vs hardware designers.

Hardware designers try to build something which can be clocked fast. They don't care if it's aesthetically pleasing and so on.

Quiche eaters moan about how limited von Neumann architectures are. They try to do CISCy things like reducing the abstraction level between the programmer and the instruction set with lots of hard-to-implement features, and they design ISAs where it is impossible, newspeak style, to write incorrect code (e.g. segmentation or capability-based addressing [wikipedia.org]). The hardware engineer way to do this is a TLB and page table.

x86 has had input from both camps, but back compatibility has limited the damage the quiche eaters can do. In the end most of the quiche eater features go unused (e.g. segmentation and complex instructions) and you end up running ugly, primitive but very fast instructions translated to run on a RISC core. It kicked the ass of the quiche-eater-designed iAPX 432 and Itanium.

Of course the dequicheffication of the x86 was to some extent triggered by competition from the very low-quiche RISC chips. In fact MIPS did memory protection by implementing only a TLB in hardware; TLB writes and the rest of paging were done in software. Of course, sometimes RISC designs are so fundamentally anti-quiche that the very fundamentalism is a form of quiche eating, like SPARC's multiply-step and divide-step instructions that ended up being slower than the 68K's full multiply and divide instructions.

Re:That's it? (2, Funny)

serveto (1028028) | more than 5 years ago | (#26870693)

I like quiche.

Re:That's it? (1)

PurPaBOO (604533) | more than 5 years ago | (#26870951)

"dequicheffication". Ha. I like that.

Re:That's it? (1)

RiotingPacifist (1228016) | more than 5 years ago | (#26870911)

I dunno enough to be sure, but reading part 1 leads me to two conclusions:
1.) The guy really hates the Itanium
2.)

Advice for AMD: Hold the superlatives. First deliver in quantity the actual, viable physical chip that's supposed to do all these things better than the shipping Intel chip (shipping since October 2006). The adage "talk is cheap" has special meaning to journalists. And, I would imagine, special meaning to AMD's waiting customers.

The guy doesn't get marketing strategies: talking up something that is late, in a market you're losing, is simply a holding tactic so that people don't run into the hands of the competition.

Re:That's it? (1)

drinkypoo (153816) | more than 5 years ago | (#26871225)

The AMD chips are still cheaper, so what if you can build a faster computer with Intel chips? The average user isn't going to do nuclear blast modeling anyway. They'd do fine with a netbook with a 1.6GHz Atom, probably for the rest of their life if it never broke.

The Software IS the Computer, Chips Just Carry H2O (4, Insightful)

ausoleil (322752) | more than 5 years ago | (#26870045)

Reading through the article, it seems that other than AMD's Puma, most of these failures have one thing in common: they are not backward compatible with the chips they replace.

People are loath to buy a new computer and all-new versions of software to run on it. Look at the 64-bit Windows architectures. How many folks are running 32-bit software on those?

Bottom line: the software IS the computer, and the chips ultimately are sexy only to EEs and gearheads.

Re:The Software IS the Computer, Chips Just Carry (1)

jabithew (1340853) | more than 5 years ago | (#26870233)

Do you think that the i7's new socket will prove to be a barrier to upgrade?

I recently had to get a new motherboard, and the combined cost premium of an i7, taken over the processor and motherboard, was far too high to even consider. I could have bought three computers for it!

Re:The Software IS the Computer, Chips Just Carry (1)

anss123 (985305) | more than 5 years ago | (#26870309)

Do you think that the i7's new socket will prove to be a barrier to upgrade?

Nah. CPU only upgrades are actually pretty uncommon. New CPUs often require new FSB speeds and lower voltages so you'll end up having to change the mobo anyway.

Don't buy a mobo thinking you'll get to upgrade to a much faster CPU later on, unless you buy a slow-ass Celeron today and snag a cheap Extreme Edition off eBay in a few years (and even then you might be better off with the slow-ass Celeron of the future :-)

Re:The Software IS the Computer, Chips Just Carry (1)

cowbutt (21077) | more than 5 years ago | (#26870895)

I've had mixed luck with CPU-only upgrades.

I've got a 440BX Asus P2B machine that went from a PII-266 in 1998, to a Celeron 500 in about 2000, and a PIII-450 in about 2003. I've also got a i845PE Gigabyte GA-8PE667 Ultra which went from a Celeron 1.7GHz in 2002 to a P4 2.53GHz in 2008. On the other hand, I've had two machines that I've never upgraded the CPU on because the upgrade path disappeared, or simply wasn't economic.

Re:The Software IS the Computer, Chips Just Carry (1)

hattig (47930) | more than 5 years ago | (#26870823)

Core i7 is enthusiast high end though.

Core i5 will be out soon, and yes, it has a new socket, but the motherboards will be cheaper.

I find it odd that when CPUs are reviewed against each other, the motherboard cost is very rarely factored in. They'll pit the Intel CPU against the AMD CPU of the same price and then declare the Intel the winner, without factoring in the $300 Intel motherboard price (for the i7) when the AMD is on a $100 board.

Still, AMD aren't executing very well, and haven't for a couple of years now. Core 2 knocked them back a lot. AMD could have spent some resources last year on developing a 45nm single-core CPU with built-in graphics and chipset and they could be winning the netbook war, but no, the company has no vision. Just roadmaps.

Oh, and the article sucked. PowerPC failed? What about Cell? The 360? The Wii? POWER servers? Hundreds of millions of embedded devices like set-top boxes? It only failed as a viable desktop CPU because it didn't get the investment that x86 has had over the past ten years.

Re:The Software IS the Computer, Chips Just Carry (1)

anothy (83176) | more than 5 years ago | (#26871253)

lack of chip-level backwards compatibility is an issue, but not a deal breaker. that can be reasonably managed, and has in plenty of cases you can point to without trying too hard. these examples failed to deliver on their promise for entirely unrelated reasons.

look at the examples given, and you'll see compatibility wasn't really a factor for the first two, either.
Itanic had explicit backwards compatibility, at first in hardware (through the use of a separate embedded core), then in software. that compatibility failed to save it from other market forces.
i'm not sure what you think backward compatibility did to PowerPC. it wasn't compatible with "the chip it replaced" (the Motorola 68k series), sure, but Apple managed that transition quite well, including backwards compatibility higher up the stack (Apple, you'll note, has a history of handling these potentially fatal cut-overs very well). it wasn't compatible with the x86, but it was never designed to replace that; rather, it competes with it.

Re:The Software IS the Computer, Chips Just Carry (0)

jellomizer (103300) | more than 5 years ago | (#26871271)

Exactly. You could have the fastest computer in the world, but if it doesn't run the software that people want, people won't buy it. And the companies that make the software people want won't make software for that hardware if no one has it.

It's kind of a catch-22. The only way out is minor upgrades: removing an old, rarely used feature and adding a new feature that won't break much. Software companies will slowly adopt the new features that take advantage of the new hardware, but the big switch off the old x86 to something newer and better will not happen anytime soon.

Chips... and their platforms too. (1)

w0mprat (1317953) | more than 5 years ago | (#26870053)

AMD's 4x4 Quad FX dual-socket motherboards were also a flop. AMD's line of FX-7x series processors for these boards was a limited run; they're now considered collectors' items. If you can find them! Intel's Skulltrail was much hyped, but it has now been very quietly pensioned off by Intel, although it sold a few more boards than 4x4.

Anyway, where are the mandatory FLOP puns I was expecting? (Considering this is a brilliant set-up by the article poster.)

(Mandatory wiki linkage: http://en.wikipedia.org/wiki/AMD_Quad_FX_platform [wikipedia.org] )

Re:Chips... and their platforms too. (1)

Cprossu (736997) | more than 5 years ago | (#26870395)

I actually built a 4x4 rig for my cousin with two FX-72s and an Asus 'Quadfather' motherboard.
FWIW, it's been rock-solid stable and is still pretty quick.

Nice title... (2, Insightful)

V!NCENT (1105021) | more than 5 years ago | (#26870069)

I just got out of bed 2 minutes ago, and on vaguely reading the word FLOP I thought of Floating-point Operations Per Second...

CISC vs RISC became a non-issue (2, Insightful)

m.dillon (147925) | more than 5 years ago | (#26870081)

It turns out that the cost of a translation layer has become irrelevant as chips have gotten faster. It's not even considered a pipeline stage any more, not really. That is, it is no longer a bottleneck to have a layer of essentially combinational logic that converts a CISC instruction set into a mostly RISC/VLIW one internally. This saving grace is also why the fairly badly bloated Intel instruction set no longer has any real impact on the performance they can squeeze out of the chips.

-Matt

Re:CISC vs RISC became a non-issue (0)

Anonymous Coward | more than 5 years ago | (#26870149)

Also, when we were back at the 1m transistors per CPU stage (the original Alpha), that layer was a significant chunk of the die area and came straight out of your cache, but now we're at 100x that, it's nothing much at all.

Re:CISC vs RISC became a non-issue (3, Interesting)

hyc (241590) | more than 5 years ago | (#26870227)

But we can't stay at that 100x level, and in reality we don't need to be there all the time. Intel Atom proves that - you can get *enough* useful work done with a simpler design, and fewer transistors. Unfortunately, when you get down to the number of transistors that Atom uses, suddenly the frontend decoder *is* a significant proportion of your die real estate again. Inefficiency *always* costs you, and it's stupid to pretend that it doesn't. Atom may try to challenge ARM but it will fail, as long as it keeps the x86 ISA baggage. Efficiency *matters*.

Re:CISC vs RISC became a non-issue (1)

dfn_deux (535506) | more than 5 years ago | (#26870557)

This is a chicken-and-egg argument. x86 won't be made irrelevant by other chips until/unless software developers support other target architectures. Software developers won't target something unless they feel it will have wide enough acceptance to make it worth their development time/effort/cost. If a chip maker has to sacrifice some percentage of their die/power budget to graft on a translation layer that lets people keep using legacy software, then that layer provides enough value for most applications to negate nearly all of its performance/efficiency cost. But don't take my word for it... go right ahead and prove me wrong: point at a commercially successful x86 replacement that is making sufficient headway to be considered a truly viable alternative to x86 chips. Atom is eating ARM's lunch right now in the consumer space, and any penetration Atom sees into embedded platforms can just be considered frosting.

Re:CISC vs RISC became a non-issue (0)

Anonymous Coward | more than 5 years ago | (#26870875)

And ARM made Thumb 2 a part of the ARMv7 ISA, so it's no longer a fixed-length ISA. Okay, it's a two length ISA - 16 and 32 bit instructions, but they did up the complexity in order to increase code density whilst keeping performance the same.

Of course with 45nm processes, even ARM can afford to spend a few transistors on decoders. Then again the ARM Cortex A8 with L1 cache and L2 tags (but not L2 cache) is still under 4mm^2 on 65nm (http://www.arm.com/products/CPUs/ARM_Cortex-A8.html) so 2mm^2 on 45nm! Intel's Atom, with L2 cache, is 25mm^2 on 45nm.

Re:CISC vs RISC became a non-issue (1)

harry666t (1062422) | more than 5 years ago | (#26870263)

I think we still pay a price. My laptop can easily heat up to 60-70°C when doing CPU-intensive stuff.

Re:CISC vs RISC became a non-issue (3, Insightful)

drinkypoo (153816) | more than 5 years ago | (#26871137)

"Bloat" is not the problem with x86. The problem is that there are zero general-purpose registers - many instructions require that the operands be in specific registers, which blows the whole idea of general-purpose registers right out of the water. This is compounded by the fact that there are only four registers which you could even call general-purpose with a straight face. You can sometimes use some of the others (if you're not using them for anything else, and sometimes you have to have pointers in the pointers) to stash something but they're not useful for computation. Just taking an existing program and recompiling it from x86 to x86_64 with any kind of competent compiler will result in a significant performance improvement, often pegged around 10-15% just due to avoiding register starvation issues. While register renaming somewhat mitigates the issues with the "general" purpose registers in x86, it does not eliminate them entirely.

On the flip side, x86's variable instruction lengths result in smaller code which can improve execution time on massively superscalar processors simply by virtue of getting the instructions into the processor faster.
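A rough way to see the register-pressure point for yourself, assuming a gcc or clang toolchain with 32-bit support installed (the file name and commands below are just illustrative): compile the same function for both targets and compare the generated assembly.

    /* regs.c -- hypothetical demo file.
     * Build and compare, e.g.:
     *   gcc -O2 -m32 -S regs.c -o regs32.s
     *   gcc -O2 -m64 -S regs.c -o regs64.s
     * With only EAX/EBX/ECX/EDX/ESI/EDI to work with, the 32-bit build will
     * typically spill some of these live values to the stack; the 64-bit
     * build can keep more of them in the extra R8-R15 registers. */
    long mix(long a, long b, long c, long d, long e, long f, long g, long h)
    {
        long t1 = a * b + c;
        long t2 = d * e + f;
        long t3 = g * h + a;
        long t4 = b * d + e;
        /* All four temporaries (plus several arguments) are live here. */
        return (t1 ^ t2) + (t3 ^ t4) - (t1 * t4);
    }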

Transmeta Crusoe? (4, Insightful)

Jeppe Salvesen (101622) | more than 5 years ago | (#26870127)

That definitely belongs in there. Sorry, Linus.

Re:Transmeta Crusoe? (2, Insightful)

paul248 (536459) | more than 5 years ago | (#26870259)

From what I've heard, Transmeta was creating some pretty remarkable CPU technology; they just made a series of awful business decisions.

unpublished disaster (2, Interesting)

ILuvRamen (1026668) | more than 5 years ago | (#26870133)

AMD still won't openly admit this, but there's a timing problem with all, or at least most, of their Athlon X2s where the cores' clocks get out of sync with each other. That causes major graphics problems in games that rely on them, like RuneScape and Halo 2. It also causes really strange side effects where the computer basically gets slower and less responsive over time until you restart it. I never knew what was wrong with my computer and assumed it was inefficient software, but then I heard about this and OMFG was I mad! They even have a program on their website that fixes some mysterious, unnamed problem with X2s and graphics, and as soon as I installed it, it worked. Yet they still won't admit to the public how badly they screwed up! I didn't even see the story on Slashdot, but it's all over the web.
Also, they should add to the list of major screw-ups the entire naming system used by Intel. Centrino sounds like Celeron, and they brought back Pentiums, but the Pentium Ds and Pentium Dual Cores are different, and then there was Core Duo and Core 2 Duo, which are easy to mix up. Ugh, it's just stupid!

Re:unpublished disaster (5, Informative)

Rockoon (1252108) | more than 5 years ago | (#26870345)

You are uninformed. The AMD multi-core "problem" is a software problem.

People who programmed for single-core systems assumed that the processor's internal tick count, called the timestamp counter (read with the RDTSC instruction), would be monotonically increasing. The fact is that each core can have its own timestamp counter, and if a process is migrated to another core by the OS scheduler, the monotonically-increasing assumption falls flat (time can appear to run backwards). This is true for AMD multi-core processors as well as ALL (AMD and Intel) multi-processor setups.

The AMD patch does several things, one of which is to instruct Windows not to use the timestamp counter for its own time-keeping. Windows XP defaulted to using the timestamp counter for timing, because dual-core and multi-CPU systems were essentially nonexistent in the consumer space when it was released. This is accomplished by a simple alteration to boot.ini telling Windows to use PMTIMER instead of its default.

Any modern games that are not fixed by the above patch were programmed by stupid people. That's right... stupid. They are accessing hardware directly rather than going through a standardized time-keeping layer. Their assumptions about time are wrong when using RDTSC, because it isn't a time-keeper. It's a tick counter specifically associated with a CPU (Intel/AMD) or core (AMD).
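For what it's worth, here is a small C sketch of the distinction being made (assumes gcc or clang on x86 and a POSIX system; it is not the AMD utility itself): RDTSC is a per-core cycle counter, while the OS's monotonic clock is the standardized time-keeping layer games should be using.

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>
    #include <x86intrin.h>   /* __rdtsc() */

    int main(void)
    {
        /* Raw tick count of whichever core this thread happens to be on.
         * If the scheduler migrates the thread to another core whose counter
         * differs (as on the Athlon X2s discussed above), deltas between two
         * reads are meaningless and can even appear to go backwards. */
        uint64_t ticks = __rdtsc();

        /* The OS-provided monotonic clock is consistent across cores and is
         * what a standardized time-keeping layer is built on. */
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);

        printf("tsc=%llu  monotonic=%lld.%09ld s\n",
               (unsigned long long)ticks, (long long)ts.tv_sec, ts.tv_nsec);
        return 0;
    }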

Re:unpublished disaster (3, Interesting)

TheThiefMaster (992038) | more than 5 years ago | (#26870657)

A slight correction: multi-processor systems had existed for a while, but dynamic clock-speed scaling was new, and it was THAT that threw out the use of RDTSC as a timer. The problem just got more obvious when multi-core chips that could change speed independently were introduced.

With a single chip that could adjust its clock speed dynamically (based on load), the problem with using RDTSC wasn't too bad, because most games were (and still are) written to thrash a CPU core to 100% load anyway. However, with two cores in a system, one core could slow down while the other was running full-tilt. When this happened the tick counts would get out of sync. If the program using RDTSC then got scheduled onto the other core, it would see time as having jumped forwards or backwards.

It's worth noting that running different-speed CPUs in a dual-socket board was possible before dynamic frequency scaling, as long as the FSBs matched. I accidentally had a 2GHz and a 600MHz CPU (133MHz FSB, IIRC) in a dual Socket A board at the same time once, and aside from horrifically confusing the dedicated server I was running on it, it ran fine. Not only were the RDTSC readings out of sync, causing it to keep thinking it had jumped into the past or future, but they were advancing at significantly different rates, causing it to keep switching between real-time and slo-mo or super-speed!

Re:unpublished disaster (1)

Waccoon (1186667) | more than 5 years ago | (#26870443)

It also causes really strange side effects where basically the computer gets slower and less responsive over time until you restart it.

I've had this problem with Intel systems too, ever since I started working with Core Duo machines. I can't offer any insight, but I noticed right away that, on multiple computers, the GUI of every program on Windows XP noticeably slows down over the course of an hour. Benchmark performance doesn't seem to be affected, but responsiveness slows down a LOT. The good news is that restarting the affected program fixes the problem; a Windows restart isn't needed. I've only seen this on dual-core systems.

I've been wondering for a while whether this would have been a problem if I had bought an AMD system. I use Linux on my old single-core computer, and my Mac is PPC, so I haven't tested anything other than Windows so far.

Re:unpublished disaster (1)

wisty (1335733) | more than 5 years ago | (#26871275)

You can probably play around with the affinity so that the process only runs on one core.
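A minimal sketch of that workaround using the Windows API (the call itself is real; pinning to core 0 is just an example choice): restrict the process to a single core so RDTSC readings always come from the same counter. The same thing can be done by hand from Task Manager's "Set Affinity" menu.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Bit 0 set: the process may run only on the first logical CPU. */
        if (!SetProcessAffinityMask(GetCurrentProcess(), 1)) {
            printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
            return 1;
        }
        printf("Process pinned to CPU 0.\n");
        return 0;
    }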

Itanic (0)

Anonymous Coward | more than 5 years ago | (#26870157)

Still one of the best Register 'articles': http://www.theregister.co.uk/2006/02/17/itanic_oracle_idc/

I worked on I-Tanic: Why it failed (5, Interesting)

Anonymous Coward | more than 5 years ago | (#26870323)

I worked on Itanium/Merced. Keep in mind I was mid-level (not high enough to see the good political fights first hand, only getting the after effects). Below is my opinion from what information I saw or collected at the time. Take it or leave it as you will.

Itanium (or I-Tanic) was supposed to be the P7, back when Intel still used P-numbers for chips. The Pentium 4 was never supposed to exist. Basically, Itanium was so bad that the Portland design teams came in and ate the Santa Clara team's lunch.

The biggest problem for I-Tanic was management, on many levels.
1) No good top guy
The main and original project lead was more focused on marketing and "the platform" than actually making the chip. So, there was no top leadership at the CPU design level. This allowed the "lieutenants" to squabble among themselves (more later).
They finally got a good guy in (whose name I hate to say I forget; it was a long time ago). I believe he had done Klamath. The project was in a never-ending redesign spin at this point. When he was there you knew there was a captain of the ship. You weren't 100% sure he was sailing in the right direction, but you felt things were moving... finally. He lasted about 3 months, until his wife (supposedly) gave him the "me or CPU design" ultimatum. He then moved up to start the Intel DuPont site (which was supposed to be as big as the Portland site). That didn't work out so well for him.
His hand-picked successor lasted about 1 week before "family reasons" caused his resignation. I assume he looked at the state of the now 2 year delayed chip and ran.

2) Dot.com boom & Silicon Valley
The "lieutenants" didn't give a rat's ass about the project. It was mostly a "pump and dump". Being the Dot.com boom and in Silicon Valley, their main concerns were taking over ownership of a "cluster" (State sized chuck of the chip), getting the ownership on their resume, finding a new non-Intel job, and splitting.
So, every part of the chip got a new guy every 9-12 months who blamed everything on the previous owner, forced a re-design on the part (which may have been needed, but seemed to be needed an awful lot), and then left (forcing the cycle to repeat).

3) Constant Re-Design
Look, I know redesign is part of engineering. But perpetual, hamster-wheel-like redesign is not good. Nothing got finished!!!! No specification was stable (not just the written specs; even the verbal ones). You'd ask people (and this was years, years into the project) about your interface to their part of the chip and they wouldn't have coded it up yet. So who knows what the Hell the timing issues would be. "Can I move a flip-flop to your unit?" "Go fish. I haven't coded that."
Let us also remember that back then (I doubt they still do this) you coded in iHDL (not VHDL or Verilog) using macros for AND & OR gates. So you're basically doing stencil EE work using a programming language. You want an IF-THEN construct? Well, break out the K-maps, because you'll need them.

4) Morale
After the chip had slipped 2+ years, no one wanted to work on this thing anymore. They had to freeze internal transfers. You had to threaten to quit to get out: "I am leaving Itanium. Are you going to make me leave Intel to do it?"

Itanium and Flops ? (1)

eulernet (1132389) | more than 5 years ago | (#26870375)

Why stop at flops?
Itanium easily qualifies as a megaflop!

Re:Itanium and Flops ? (0)

Anonymous Coward | more than 5 years ago | (#26870885)

They forgot to mention Munchos, or any potato chip for that matter, but you don't see me crying.

They clubbed folks over the head with Itanium... (4, Insightful)

JakiChan (141719) | more than 5 years ago | (#26870413)

Itanium did one thing well... it killed a lot of other chips. The threat of it killed MIPS's post-R12K plans, and the Alpha and PA-RISC architectures as well.

I remember how SGI kept around the team that was going to work on their next-gen processor while they were negotiating with Intel. These guys had no work; they just played a lot of foosball in good old Building 40 (yeah, Google, you weren't nearly cool enough to build that campus). Then once SGI had sold its soul, they axed the project (and the team). That was a sad day...

Re:They clubbed folks over the head with Itanium.. (4, Interesting)

anss123 (985305) | more than 5 years ago | (#26870477)

Itanium did one thing well...it killed a lot of other chips. The threat of it killed MIPS post-R12K plans - and the Alpha, and PA-RISC architectures as well.

Here's an idea: let's throw out years of proven engineering in favor of an architecture that has yet to hit silicon. That way we can fire our engineers and pocket the change. What could possibly go wrong?

I feel a big bonus coming up, and just to be safe, let's add a parachute too.

Re:They clubbed folks over the head with Itanium.. (1)

JakiChan (141719) | more than 5 years ago | (#26870669)

Here's an idea: Let's throw out years of proven engineering in favor of an architecture that has yet hit silicon. That way we can fire our engineers and pocket the change. What could possibly go wrong?

You must have been there...Belluzzobub, is that you?

Re:They clubbed folks over the head with Itanium.. (1)

drinkypoo (153816) | more than 5 years ago | (#26871179)

Alpha was starting to hit clock speed limits, or at least, DEC wasn't able to increase them (shock amazement.) PA-RISC = garbage, at least compared to the modern competition. MIPS is still around as an embedded core - it wasn't keeping up with x86 either, which is why SGI tried to make x86 machines. All of these processors have basically no reason to exist whatsoever now that Hammer is around, with superior TDP and unparalleled ease of SMP. Then again, the same is true of iTanic :)

Re:They clubbed folks over the head with Itanium.. (1)

TheGratefulNet (143330) | more than 5 years ago | (#26871191)

I was at SGI (mtn view) during that time, also. we called the intel chip and system 'IBT' (intel box thing) ;)

it killed MIPS and was helping to kill IRIX, too (IRIX has little relevance outside of the MIPS CPU).

it was truly the beginning of the end for SGI. I watched SGI disappear before my eyes. very sad.

SGI was dying anyway, but this chip really did put the nail in the coffin.

Re:They clubbed folks over the head with Itanium.. (1)

TheGratefulNet (143330) | more than 5 years ago | (#26871221)

Google, you weren't nearly cool enough to build that campus

little-known fact: SGI was in its last days when it built the 'Charleston buildings' (the ones very close to Shoreline Park). I was on the site I/S team doing the building planning, network planning and the whole add/move/change work.

what struck me as 'interesting' was that we designed those 3-floor buildings FOR US, but intended *eventually* to lease them out to multiple unrelated companies. that makes designing your network infrastructure a bit more interesting, since you have to design in (physical) security so that you can break one building into three or more later on, even for competing occupants.

it's almost like building a house knowing you'll only live there a year and then rent it out. that was our mode for the new SGI buildings (back in the 1998 timeframe).

google basically just inherited SGI's buildings, probably for pennies on the dollar, no doubt. but some of them were meant for multiple companies. kind of funny that ONE company bought our *multiple* buildings rather than multiple companies living in a *single* building.

POWER and PowerPC? (5, Insightful)

dlundh (158421) | more than 5 years ago | (#26870441)

Why is that even in there? It "only" powers all three current game consoles and IBM's Power Systems server lines (i and p).

If that's a failure, I hope IBM has many more failures in the future.

TFA misrepresents PowerPC (2, Interesting)

OrangeTide (124937) | more than 5 years ago | (#26870761)

We probably have as many PowerPC chips in our homes than x86 these days. How many people own two of the following game consoles but only have 1 PC in their home? GameCube, Wii, xbox360, PS3?

It's true that Apple killed PowerPC on the desktop and it will probably never come back. And ARM and Atom will fight over the mobile and netbook market.

The article doesn't mention POWER, so I think we have to assume it only considers PowerPC a failure (which is wrong, of course). POWER and PowerPC are almost the same thing, but they aren't the same thing. Governments and corporations are still ordering iSeries systems, and IBM is still making plenty of money off them (although I bet they sell fewer than 100 a year).

Re:TFA misrepresents PowerPC (1)

drinkypoo (153816) | more than 5 years ago | (#26871159)

PowerPC is a massive failure on the desktop, and everyone who invested in a PowerPC-based desktop computer got burned. End of story! I might add that it was a technical failure as a desktop processor as well. It was the most powerful thing going twice, for about fifteen seconds each time: with the G3 (which was NOT faster on all workloads) and the G5 (which was about the most expensive thing Apple ever kicked out the door). Everyone stuck with a PPC Mac right now has been enjoying a reduced level of support and compatibility since Apple went x86, and is going to have to abandon the platform soon, as there will be no further support, even if the machine is doing their work just fine.

Of course, Apple went PPC for "backwards compatibility" reasons... And for the most part it was a working strategy. But while the PPC might have been slightly better suited to executing translated 68k code, the x86 platform would have been a much smarter choice even then.

Re:TFA misrepresents PowerPC (1)

unfunk (804468) | more than 5 years ago | (#26871263)

Agreed. Wikipedia [wikipedia.org] also points out that the PPC architecture has been the polar opposite of a "flop". No, it didn't take over the Desktop Computing world, but it sure made an impression just about everywhere else.

What about Motorola 88000 and Intel i860 (4, Informative)

thbb (200684) | more than 5 years ago | (#26870447)

Commenters seem very young today. Does no one remember the failures of Intel's and Motorola's first attempts at RISC designs? Both the Motorola 88000 [wikipedia.org] and the Intel i860 [wikipedia.org] were great designs that failed.

Re:What about Motorola 88000 and Intel i860 (1, Interesting)

Anonymous Coward | more than 5 years ago | (#26870527)

The 88000 was weird, and even more annoying than the SPARC and the MIPS.

The problem is exposing the pipeline explicitly to the software. In the SPARC and MIPS case, this is done partly through branch delay slots (the SPARC is even more annoying because it also has register windows; they make assembly easy to read, but they are the source of numerous extremely hard-to-find bugs). The 88000 exposed the pipeline even more: on a context switch, not only did the program counters and register state have to be saved, but other pipeline state as well.

The point is that pipeline state and pipeline specifics should not be exposed to the programmer; it is better to handle them in hardware with out-of-order execution, because the pipeline will very likely need to be redesigned later. Exposing it means you either end up with hard-to-maintain software tied to the old pipeline, or you have to keep presenting the original pipeline model to the programmer even after the real pipeline has been completely redesigned.
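
To make the delay-slot point concrete, here is a toy sketch in Python (a simplified illustration of the idea only; the li/add/beqz ops and the step/run helpers are made up for the sketch, not real MIPS semantics): the instruction sitting in the slot right after a branch executes whether or not the branch is taken, because it is already in the pipeline when the branch resolves.

    def step(instr, regs):
        # execute one non-branch instruction
        op, *args = instr
        if op == "li":                      # li rd, value
            rd, value = args
            regs[rd] = value
        elif op == "add":                   # add rd, rs, rt
            rd, rs, rt = args
            regs[rd] = regs[rs] + regs[rt]
        else:
            raise ValueError("unexpected op %r" % op)

    def run(program, regs):
        pc = 0
        while pc < len(program):
            op, *args = program[pc]
            if op == "beqz":                # beqz rs, target
                rs, target = args
                step(program[pc + 1], regs) # the delay-slot instruction ALWAYS runs
                pc = target if regs[rs] == 0 else pc + 2
            else:
                step(program[pc], regs)
                pc += 1
        return regs

    prog = [("li", "r1", 0),
            ("beqz", "r1", 4),              # taken (r1 == 0), targets index 4
            ("li", "r2", 99),               # delay slot: executes anyway
            ("li", "r2", -1),               # skipped by the taken branch
            ("add", "r3", "r1", "r2")]
    print(run(prog, {"r1": 0, "r2": 0, "r3": 0}))
    # -> {'r1': 0, 'r2': 99, 'r3': 99}  (the slot ran; the instruction after it didn't)

Once the real hardware grows a deeper or differently shaped pipeline, that visible one-slot promise has to be emulated forever, which is exactly the maintenance problem described above.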

Anyway, back to work, need to prepare a lecture about something similar.

Re:What about Motorola 88000 and Intel i860 (1)

struberg (757804) | more than 5 years ago | (#26870543)

yes, and AMD also made one: the Am29000

From the 'old' RISC cores, there are only 3 really alive, and all of them mainly in the ÂC area:

.) Power (widely used in the automotive area, from Motorola)
.) MIPS (R3000 aka 3k for 32 bit and R4000 aka 4k for 64 bit)
.) Acorn (remember the Archimedes?) ARM 7 and 9 cores which are now used in most handset devices.

Re:What about Motorola 88000 and Intel i860 (1)

struberg (757804) | more than 5 years ago | (#26870605)

seems like /. is having a unicode problem? ;) ÂC -> uC -> micro controller

Re:What about Motorola 88000 and Intel i860 (1)

TheGratefulNet (143330) | more than 5 years ago | (#26871175)

i860 shows up on a LOT of hardware raid controller boards.

hardly a failure of the chip or its design. worked great and didn't need huge heatsinks.

it's not a general-purpose CPU - so what's your point?

Transmeta (1, Redundant)

tkrotchko (124118) | more than 5 years ago | (#26870481)

It probably won't be popular to say around here, but Transmeta was a fairly spectacular failure, particularly the Crusoe line.

Re:Transmeta (1)

Lisandro (799651) | more than 5 years ago | (#26871105)

It won't be popular, but it's still true. The whole idea of a cheap, low-power, code-morphing, software-upgradeable x86 CPU sounded great on paper... until actual benchmarks showed that it performed rather poorly, with only a marginal improvement in power consumption.

Another spectacular failure, IMHO, was the transputer [wikipedia.org] - an amazing concept, especially for its time.

Flip flops? (1)

amirulbahr (1216502) | more than 5 years ago | (#26870505)

Did anybody else read that as A Brief History of Flip Flops?

MAJC missing? (2, Interesting)

inkhorn (650877) | more than 5 years ago | (#26870595)

And what sort of thorough article would this be if it left out Sun Microsystems' MAJC chip from the 1990s?

Promising to accelerate Java instructions, the chip was a multithreaded, multicore design (can you say Niagara?), but Sun couldn't get it to market fast enough, and advances in general-purpose CPUs left it for dead.

Sadly, MAJC only made it into two models of Sun's own-brand graphics cards before it was dropped, though its design principles live on in Niagara and Rock.

PowerPC is alive and well. (1)

Shivetya (243324) | more than 5 years ago | (#26871005)

Just because a chip isn't available in the PC at your local Best Buy does not make it a failure.

From zSeries, iSeries, and pSeries machines, which make up a large share of the server and midrange hardware sold, to variations on the theme in some of today's popular gaming platforms, I think PowerPC as an architecture is doing just fine. The G5 is alive.

Two from the embedded world.. (3, Interesting)

gnalre (323830) | more than 5 years ago | (#26871113)

Intel's i960 was a nice chip for embedded development. One of its nicest features was the large number of individual interrupt vectors, which is really useful when you want to hang a large number of I/O devices off it. Compare that to the x86, where devices have to share interrupt vectors. For some reason, however, Intel decided to drop the whole line and move to the ARM architecture instead.

However, the second one is a what-might-have-been. During the '80s we did a lot of development using INMOS T2 and T8 transputers. They were a joy to use and made parallel programming, at both the software and the hardware level, easy and natural. The next iteration was to be the T9000. It promised a lot: much improved execution speed and faster, more flexible processor interconnects. It looked so good we had even sold our next project based on it. However, when we started getting the first samples, something was obviously wrong. Bits of the chip did not work or would fail. At the end of the day it looked like INMOS just could not deliver. The T9000 never became a reality, but anyone who used transputers knows how good they were; done right, and with enough financing, they could have fundamentally changed the computer industry.
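
For anyone who never used one: the occam model was independent processes that communicate only over explicit channels, and those channels could be mapped straight onto the hardware links between transputers. Here's a rough analogy in Python (just an illustration of the style; the squares/add_one/printer processes and the 1-slot queues are made-up stand-ins, not occam or transputer code):

    import threading
    import queue

    def squares(out_chan):
        # one "process": sends 5 values, then a sentinel, down its output channel
        for i in range(5):
            out_chan.put(i * i)
        out_chan.put(None)

    def add_one(in_chan, out_chan):
        # another process: reads from one channel, writes to another
        while True:
            v = in_chan.get()
            if v is None:
                break
            out_chan.put(v + 1)

    def printer(in_chan):
        # prints exactly the 5 values the pipeline delivers
        for _ in range(5):
            print(in_chan.get())

    # stand-ins for channels (occam's are unbuffered rendezvous; these have a 1-slot buffer)
    c1, c2 = queue.Queue(maxsize=1), queue.Queue(maxsize=1)

    procs = [threading.Thread(target=squares, args=(c1,)),
             threading.Thread(target=add_one, args=(c1, c2)),
             threading.Thread(target=printer, args=(c2,))]
    for p in procs:      # roughly occam's PAR: start all three concurrently
        p.start()
    for p in procs:
        p.join()
    # prints 1, 2, 5, 10, 17

The appeal was that the same picture scaled from concurrent processes on one chip to processes spread across a whole network of transputers wired link-to-link.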

Successful chips killed by process... (3, Interesting)

argent (18001) | more than 5 years ago | (#26871241)

... to be precise, by Intel's bankroll and investment in process.

PowerPC and Alpha were outcompeted by the fundamentally inferior x86 family not because of flaws in their designs, but because Intel spent more on improving its process than anyone else.

Both the PowerPC and the Pentium turned into furnaces: the Pentium 4 and the G5 both followed the "megahertz myth" into long pipelines to let the clock speed ramp up. Neither got the clock speeds they were hoping for. Both were too hot for mobile processors. In both cases the solution was going to be shorter pipelines, lower-clocked but more clock-efficient cores, and faster buses. The Freescale e700 was torpedoed when Apple went with Intel's Core Duo... because Intel had the resources to get its respin of the PIII out quicker than Freescale could get its respin of the G4 online.
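
(Rough arithmetic on the clock-efficiency point, with made-up numbers just to illustrate: throughput is roughly instructions-per-clock times clock rate, so a 2.0 GHz core averaging 1.5 instructions per clock retires about 3 billion instructions a second, the same as a 3.0 GHz core averaging 1.0 per clock, and the lower-clocked, shorter-pipeline design usually gets there with far less heat.)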

So now we're still using hacks upon hacks on the truly horrible x86 architecture.

Well, it could have been worse. It could have been SPARC.
