Microsoft Announces End of the Line For Itanium Support

Soulskill posted more than 4 years ago | from the development-dollars-well-spent dept.

Windows

WrongSizeGlass writes "Ars Technica is reporting that Microsoft has announced on its Windows Server blog the end of its support for Itanium. 'Windows Server 2008 R2, SQL Server 2008 R2, and Visual Studio 2010 will represent the last versions to support Intel's Itanium architecture.' Does this mean the end of Itanium? Will it be missed, or was it destined to be another DEC Alpha waiting for its last sunset?"


227 comments


Of course it means the end. (5, Funny)

John Hasler (414242) | more than 4 years ago | (#31741468)

How could anyone possibly have any use for servers that don't run Windows?

Re:Of course it means the end. (0)

Anonymous Coward | more than 4 years ago | (#31741516)

Yeah, we need all the useless brick Windows servers to build a wall to protect us from the zombie hordes.

Re:Of course it means the end. (4, Funny)

C0vardeAn0nim0 (232451) | more than 4 years ago | (#31741682)

yeah, servers with windows are like women playing soccer on high heels. nice to look at, until one of them falls and breaks an ankle.

Re:Of course it means the end. (4, Informative)

the linux geek (799780) | more than 4 years ago | (#31741712)

Exactly. Approximately 85% of Itanium servers are running HP-UX or OpenVMS. Windows and Linux are roughly split on the remaining 15%. Itanium faces challenges, but they're from POWER and SPARC, not from Microsoft killing Windows.

Re:Of course it means the end. (2, Insightful)

Third Position (1725934) | more than 4 years ago | (#31741984)

Indeed. The ultimate fate of Itanium is to wind up as HP's upgrade to PA-RISC. You have to wonder how much further interest Intel is going to have in its development. I suspect it will end up getting tossed back into HP's lap.

Re:Of course it means the end. (1)

Anpheus (908711) | more than 4 years ago | (#31742522)

HP has fabs and/or competent CPU designers?

I doubt Intel really cares who they sell it to, as long as someone keeps buying. When HP moves on from Itanium, it's done for.

Re:Of course it means the end. (4, Insightful)

jmauro (32523) | more than 4 years ago | (#31743316)

Competent CPU designers, yes. It's the only reason Itanium has lasted this long. Intel's early solo designs were less than successful. HP's designers came in, redid the whole thing, and lo and behold, it worked. HP really needs Intel to fab the chip, not design it.

Re:Of course it means the end. (2, Interesting)

Peach Rings (1782482) | more than 4 years ago | (#31744020)

I don't know, Itanium seems pretty impressive. This presentation [infoq.com] appeared on Slashdot a while ago and does a good job of giving a face to the name Itanium instead of just reading "Failed processor line that was really expensive."

The huge amount of instruction-level parallelism (dependent on a very good compiler) really seems like the best way to do things. It's too bad it doesn't work out in practice.

Re:Of course it means the end. (1)

rubycodez (864176) | more than 4 years ago | (#31743056)

or upgrade from Alpha (for VMS shops) or upgrade from MIPS for NonStop shops

Re:Of course it means the end. (4, Funny)

$RANDOMLUSER (804576) | more than 4 years ago | (#31743096)

or upgrade from Alpha (for VMS shops) or upgrade from MIPS for NonStop shops

Blasphemy!! Heretic!! Burn the witch! Burn blasphemer!! Burn!!!!

Re:Of course it means the end. (1)

diegocg (1680514) | more than 4 years ago | (#31742494)

Red Hat will not support Itanium in RHEL 6. So that 85% will become 100% in the future.

Re:Of course it means the end. (1)

rubycodez (864176) | more than 4 years ago | (#31742672)

yes, but at least Red Hat will support Enterprise Linux 5 on Itanium 2 until 2014. I work for an HP VAR, and I've *never* seen any HP Integrity run any Linux but Red Hat, though there are a few other distros out there.

Re:Of course it means the end. (1)

Macka (9388) | more than 4 years ago | (#31743386)

In reality that's only going to be of use to customers who are already running Red Hat on Itanium. No one making a decision today is going to commit to a solution that only has a 4 year shelf life. If they want Red Hat today and they're in that enterprise space they'll go Nehalem-EX for the best combination of RAS + performance + price.

Re:Of course it means the end. (1)

Curate (783077) | more than 4 years ago | (#31742890)

Citation on those statistics?

Re:Of course it means the end. (0)

Anonymous Coward | more than 4 years ago | (#31742948)

That is exactly why Itaniums are destined for the dustbin. The limited support from Windows/Linux is pretty much their death. The HP-UX and OpenVMS market for them just isn't big enough by itself to support this architecture's advancement.

No one can stop the x86 train, not even Intel. (4, Insightful)

A12m0v (1315511) | more than 4 years ago | (#31742212)

No one can stop the x86 train, not even Intel.

Re:No one can stop the x86 train, not even Intel. (5, Funny)

Anonymous Coward | more than 4 years ago | (#31742878)

No one can stop the x86 train, not even Intel.

Maybe not. But certainly some people are trying to strong-ARM the situation.

Probably not (1)

xZgf6xHx2uhoAj9D (1160707) | more than 4 years ago | (#31741494)

Were many Itanium users running Windows? My impression was that most Itanium users were running some sort of *nix. I don't think it's a huge deal for Itanium.

I also don't see Itanium going anywhere any time soon. As much as people like to talk about its demise, its numbers do grow every year. Or at least they were growing up until a couple years ago; I assume they're still growing. They're not growing very quickly, but they're still going.

It's a shame. It's a remarkably beautifully designed architecture, especially when it was first designed (1991-ish?). It's a shame no one can build a good chip for it or write a decent compiler for it :P

Re:Probably not (2, Interesting)

_merlin (160982) | more than 4 years ago | (#31741756)

Were many Itanium users running Windows? My impression was that most Itanium users were running some sort of *nix. I don't think it's a huge deal for Itanium.

The only Itanium servers I encounter regularly run OpenVMS in order to host the popular OM stock exchange platform. OM-based stock exchanges (ASX, HKFE, OMX, SGX, IDEM) all seem to be a hell of a lot more stable than the .NET-based Tradelect/Infolect system used on LSE for the last few years. I don't know why anyone would actually want to run Windows on Itanium.

Re:Probably not (1)

rickb928 (945187) | more than 4 years ago | (#31741900)

I've racked a bunch of Itanium servers running Windows Server 2003 and supporting SAP installs.

It is not unheard of. And I suspect these will migrate over to a much more desirable platform - in fact, I expect they will decommission these bad boys and I will be in line to scarf up some interesting hardware cheap.

I will not have to try and flim-flam them into a hardware swap. It's the only way they can actually do this. And I don't sell them any hardware. I'm just one of the few around here that seem to be able to work with EFI. Kinda sad, it really isn't as bad as EISA was.

Re:Probably not (1)

FuckingNickName (1362625) | more than 4 years ago | (#31742582)

What are you talking about? EISA is a bus; EFI is firmware.

Re:Probably not (1)

rickb928 (945187) | more than 4 years ago | (#31742810)

EISA setup was a lot like EFI.

Re:Probably not (1)

briantf (116180) | more than 4 years ago | (#31743970)

Uhh, ever hear of the EISA partition? Probably not.

Regards,
Brian in CA

Re:Probably not (5, Interesting)

lgw (121541) | more than 4 years ago | (#31742752)

Microsoft has had a strict policy since the dawn of Windows that Windows be built for at least two processor architectures at all times. They really worried about i386-isms creeping into the kernel. It pretty much doesn't matter which two you choose; as long as it's more than one (and they're somewhat different), it keeps the kernel devs honest. I wonder what they're doing now: perhaps they just decided that i386 and "amd64" are different enough to serve their purpose.

Re:Probably not (3, Interesting)

bhtooefr (649901) | more than 4 years ago | (#31742848)

The other thing is to keep a full build internally.

The rumor mill says that Microsoft has current versions of Windows built for ARM internally... sorta like how Apple kept x86 builds of Mac OS X internally the whole time.

Re:Probably not (0)

Anonymous Coward | more than 4 years ago | (#31743002)

The Xbox360 code base is in great part Windows for PPC.

Re:Probably not (1)

Bill, Shooter of Bul (629286) | more than 4 years ago | (#31743220)

ARM would be the most logical choice, if they decided that i386 and amd64 weren't enough.

Re:Probably not (1)

Dragoniz3r (992309) | more than 4 years ago | (#31743490)

Or if ARM netbooks really take off. It's plausible enough that I can see Microsoft not wanting to miss out on the potential money. They've gotta write for a 2nd architecture anyways, might as well make it the one that shows signs of encroaching upon the desktop environment.

Re:Probably not (1)

C0vardeAn0nim0 (232451) | more than 4 years ago | (#31743778)

i'm not an expert on this, but according to this [hoffmanlabs.com], windows so far has been built only for little-endian architectures, or chips that can change endian-ness at boot or on-the-fly. this limits MS's choice of target architectures somewhat.

i'd like to see if they're capable of building a version for big-endian chips like SPARC or latest PPCs.
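As a side note, byte order is easy to probe from software. A minimal Python sketch (illustrative only, nothing to do with how Windows itself is built): pack a known 32-bit value in native order and look at which byte lands first.

```python
import struct
import sys

# Pack 0x01020304 in native byte order and inspect the first byte:
# a little-endian machine stores the least significant byte (0x04) first,
# a big-endian machine stores the most significant byte (0x01) first.
raw = struct.pack("=I", 0x01020304)
order = "little" if raw[0] == 0x04 else "big"
assert order == sys.byteorder
```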

Re:Probably not (0)

Anonymous Coward | more than 4 years ago | (#31743922)

What about XBox 360 [msdn.com] ?

Oh Noes! (4, Insightful)

fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#31741518)

It would appear that the good ship Itanic has struck an MS Iceberg 2010 Datacenter Edition R2!

Seriously, though: is this an admission by Microsoft that HP-UX is (somehow) hanging on at the high end, despite HP's every attempt to mismanage it, or (more likely) is this a consequence of the fact that, at this point, there is nothing Itanium can do that Intel couldn't do better and cheaper just by bolting some extra cache and a few extra Itanium features onto Xeons?

Re:Oh Noes! (1)

vadim_t (324782) | more than 4 years ago | (#31741618)

is this an admission by Microsoft that HP-UX is(somehow) hanging on at the high end, despite HP's every attempt to mismanage it

Doubt it. I don't think Microsoft would give up if there was competition to drive out. They'd do like with the Xbox and keep throwing money at it until it worked. I take it this means that even if they had the marketshare, there would be no (or not enough) profit in it.

Re:Oh Noes! (1)

Matheus (586080) | more than 4 years ago | (#31742236)

That should be exactly right... their portion of that 15% market share was probably not justifying the resources needed to support the additional architecture.

I'm guessing they get to lay off some really expensive Itanium knowledge base from their core dev teams as well as all the other baggage necessary for release/support of the ports. Those guys are really hoping there's room for hire on the HP-UX team now :)

Re:Oh Noes! (1)

michael_cain (66650) | more than 4 years ago | (#31742960)

Doubt it. I don't think Microsoft would give up if there was competition to drive out.

The difficulty of driving out the competition probably matters also. I wonder how many of the non-Windows Itanium systems are running application software for which there is no drop-in replacement available for Windows? So MS would have to convince the owners that not only is Windows a better/cheaper/whatever OS, but enough so that it's worth replacing application software as well.

Re:Oh Noes! (1)

dave562 (969951) | more than 4 years ago | (#31742318)

Intel couldn't do better and cheaper just by bolting some extra cache and a few extra Itanium features onto Xeons

That is exactly what Intel is doing. They are rolling some core Itanium features into the next generation Xeon processors. There was an article on it in the Wall Street Journal last week. It came across as a marketing piece from Intel where they were attempting to reassure Itanium owners that they weren't going to be abandoned.

Re:Oh Noes! (0)

Anonymous Coward | more than 4 years ago | (#31743150)

It came across as a marketing piece from Intel where they were attempting to reassure Itanium owners that they weren't going to be abandoned.

Like Sony reassured everyone a few weeks ago that they would not drop the OtherOS feature in the PS3?

Re:Oh Noes! (1)

rubycodez (864176) | more than 4 years ago | (#31743152)

won't do any good for the Itanium 2 owners if the Xeons can't run the IA-64 instruction set. The features Intel just brought to Xeon from Itanium include MCA (machine check architecture recovery from failures), security, and virtual machine migration. But not binary compatibility. But maybe HP will port VMS, NonStop, and HP-UX to the new, improved bullet-proof x86-64 (only with appropriate supporting chipsets, of course).

The Itanic was Gandalf (3, Funny)

overshoot (39700) | more than 4 years ago | (#31741542)

With Alpha finally gone for good, its job is done and it can now sail off into the West.

Re:The Itanic was Galadriel (0)

Anonymous Coward | more than 4 years ago | (#31742338)

I think Galadriel [youtube.com] is probably a more apt comparison.

Intel also seems to be behind it... (1)

Suiggy (1544213) | more than 4 years ago | (#31741568)

Intel no longer supports Itanium in some of their own projects on Windows. For example, Intel Threading Building Blocks has x86 and x86-64 support, but lacks Itanium support on Windows. It does however support Itanium on Linux.

Sans Red Hat too (1)

Macka (9388) | more than 4 years ago | (#31741750)

It does however support Itanium on Linux

Well kind of. Red Hat recently announced [redhat.com] that they were dropping Itanium starting with Red Hat Enterprise Linux 6. How long will it be before the rest of the distro gang follow suit?

Re:Sans Red Hat too (3, Funny)

Luke has no name (1423139) | more than 4 years ago | (#31742334)

Debian 27 plans to drop support.

Still supported on real OSes like Linux and HPUX. (1)

molo (94384) | more than 4 years ago | (#31741570)

Itanium has not been worth it in terms of price/performance for a while; this just confirms the inevitable. However, people will still be running this hardware for some time, and I expect HPUX and Linux to continue to support it for the foreseeable future. Hell, Debian supports the Alpha, and the M68k was removed from official support only in the previous revision of Debian (etch), and then only because it took too long to compile and would slow down the updates of the archives.

-molo

Re:Still supported on real OSes like Linux and HPU (2, Informative)

rubycodez (864176) | more than 4 years ago | (#31742352)

Itanium has not been worth it in terms of price/performance for a while
 
Actually, in many categories it is. It depends on the work to be done. For example, HP Integrity Superdome with HP-UX leads in price/performance and in raw performance running TPC-H on 10 or 30 TB Oracle databases. Same for some numerical benchmarks that are heavily SMP.

I don't like the Itanium, but on certain database and numerical workloads it still kicks everyone else's butt.

Re:Still supported on real OSes like Linux and HPU (4, Informative)

Macka (9388) | more than 4 years ago | (#31743528)

Oh come on. It's really disingenuous to be quoting that kind of shit. Have you ever taken a really close look at the kind of hardware the vendors use to get these benchmark numbers? Database app benchmarks are almost always very sensitive to I/O, and these kinds of numbers are usually generated by systems that have their I/O card slots max'd out, with several hundred (if not thousands) of small high speed disks behind them. The cost of these solutions in real life would be crippling. Vendor quoted benchmarks should usually be taken with a generous pinch of salt.

Re:Still supported on real OSes like Linux and HPU (1, Informative)

Anonymous Coward | more than 4 years ago | (#31742560)

Yep. One of the best remaining reasons for the continued existence of Itanium is the HP NonStop servers, linear descendants of the Tandem NonStop [wikipedia.org] mainframes. HP had Intel add features to the Itanium chips that would allow them to do some of the processor pairing tricks that were used in Tandem mainframes. I have no idea how hard it would be to retrofit an equivalent feature into the x86-64 design, but that's one of the few reasons for people to still want to buy Itaniums.

The other shoe to drop would be HP-UX x64 (1)

fluffhead (32589) | more than 4 years ago | (#31741586)

Nah, the real "end" would be if HP finally bows to the inevitable and ports HPUX to x64. Don't hold your breath though....

Re:The other shoe to drop would be HP-UX x64 (0)

Anonymous Coward | more than 4 years ago | (#31742308)

HPUX (and OpenVMS) have already been ported to X64. HP decided against productizing them, because the profit margins are so much better on Integrity servers than on ProLiant.

Re:The other shoe to drop would be HP-UX x64 (1)

Macka (9388) | more than 4 years ago | (#31743632)

Hmm, I have my ear close to the ground on these things and I'm not sure I believe you. If by "ported to X64" you mean there's been a skunkworks project to get them to a point where they do a minimal boot, then possibly. But beyond that I very much doubt it. The amount of effort to port either O/S and all their associated layered products would be huge and expensive. I could see it happening for OpenVMS, as that pretty much has its own ecosystem that's separate from and sort of immune to the machinations of *ix and Windows, but it would make no sense for HP-UX. Do you really think that HP-UX on x86-64 would stand a snowball's chance in hell as a competitor to Linux? I don't. HP would be better off writing an HP-UX application compatibility layer for (for example) Red Hat Linux to allow their customers to port their apps easily. HP are a dominant player in the x86-64 server space, so they ought to be able to hang on to those customers.

DEC Alpha? (4, Insightful)

Jeff- (95113) | more than 4 years ago | (#31741614)

I am incredibly offended that you would compare this bloated, brute-force, abomination of a chip to the incredibly well designed, elegant, and efficient Alpha (may it rest in peace).

Re:DEC Alpha? (1)

xZgf6xHx2uhoAj9D (1160707) | more than 4 years ago | (#31742312)

Might I ask what about Itanium makes it bloated, brute-force or an abomination? Its circuitry is not hand-designed like the Alpha's was, but its design is really beautiful, a testament to the later Berkeley RISC philosophy. It's everything SPARC should have been, really.

Re:DEC Alpha? (1, Informative)

Anonymous Coward | more than 4 years ago | (#31742652)

The short version is that in-order architectures only perform well on workloads where the memory access pattern and the branch pattern are predictable at compile time. Unfortunately, almost all of the things that people do with computers nowadays are exactly the opposite: neither pattern is predictable at compile time, which means the compiler cannot hide the latencies, so you *must* have an out-of-order architecture to get the speeds we expect nowadays. Both Alpha and Itanium were in-order, but Alpha was a classic RISC so it could have been made out-of-order without too much trouble. Itanium, however, was a VLIW, and it is enormously harder to implement an out-of-order VLIW.

Itanium also had features (like the NaT bits and the predicate registers) that were supposed to help compilers squeeze more speed out of unpredictable code, but in practice it turned out that they were very hard for compilers to take advantage of, and even when they helped, they would wind up wasting resources on instructions fetched, decoded, executed, and then the results discarded -- so much that they didn't actually translate to better time-to-completion. Which is all you really care about in the end.

In-order VLIW does still rule for DSP-type workloads; this is why your graphics card is so much faster than your CPU at doing the things that it does. The future probably looks a lot like the Cell -- some out-of-order cores to run the OS and do the general-purpose computation, some in-order DSPs to be tasked with the crypto or the 3D rendering or the video decode.
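For what it's worth, the predication idea mentioned above is simple at heart: a compare sets a predicate bit, both arms of a branch execute anyway, and the predicate selects which result is kept, so there is no branch to mispredict. A toy Python model (illustrative only, not real IA-64 semantics):

```python
# Toy model of IA-64-style predication: both "arms" execute
# unconditionally and a predicate bit selects which result survives.
# No branch means no misprediction -- but both arms still consume
# fetch/decode/execute resources, which is the waste described above.
def predicated_abs(x):
    p = x < 0        # cmp sets predicate p
    neg = -x         # (p)  executes; result kept only if p is set
    pos = x          # (!p) executes; result kept only if p is clear
    return neg if p else pos

assert predicated_abs(-7) == 7
assert predicated_abs(3) == 3
```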

Re:DEC Alpha? (3, Informative)

mihalis (28146) | more than 4 years ago | (#31743238)

...Both Alpha and Itanium were in-order...

IIRC the Alpha 21264 was out of order actually, see http://courses.ece.illinois.edu/ece512/Papers/21264.pdf [illinois.edu]

Re:DEC Alpha? (0)

Anonymous Coward | more than 4 years ago | (#31743716)

IIRC the Alpha 21264 was out of order actually, see http://courses.ece.illinois.edu/ece512/Papers/21264.pdf [illinois.edu]

I knew that, but I had misremembered that the 264 never actually saw silicon (it was the 364 and 464 that were cancelled during development, according to wikipedia)

Sort of underlines the point, though, doesn't it? Even Intel's latest "Tukwila" rev of Itanium is still in-order as far as I can tell (it does on-chip multithreading, but that's not at all the same thing as proper out-of-order execution)

Re:DEC Alpha? (1)

EvanED (569694) | more than 4 years ago | (#31743800)

The Itanium features like predicated instructions always struck me as something that was really cool; it's too bad that they probably turned out to be too complicated for their own good.

Re:DEC Alpha? (1)

OFnow (1098151) | more than 4 years ago | (#31742664)

Beautiful? Did you ever really try to understand how the stack is managed? It's a horrible mess of daunting complexity, with hardware running asynchronously from software. And HP/Intel insisted on designing their own software stack data format, ignoring today's de facto standard (DWARF).

Calling the design of anything in Itanium beautiful seems really odd.

Re:DEC Alpha? (1)

Lemming Mark (849014) | more than 4 years ago | (#31743170)

The architecture is nice (IMHO) but the obscene amounts of cache do make it look bloated in terms of silicon required. This is partly because it's a high-end chip, of course. But perhaps Intel were also having to take a brute-force approach to performance there (throwing transistors at it) rather than an efficient solution. I'd be sorry to see IA64 go, though, I really liked the design of the instruction set.

Not Very Comparable (2, Insightful)

damn_registrars (1103043) | more than 4 years ago | (#31741670)

The DEC Alpha was a much better chip than the Intel Itanium; and not just in the way that Johnny Mathis is way better than Diet Pepsi [wikipedia.org] .

The DEC Alpha was a brilliant RISC processor that could outrun a closet full of x86 chips of the same era (or even the era after). The DEC Alpha was sold by a hardware company that distributed their own Unix-derived OS for it that had the proper compilers ready to go as soon as the system was booted. The Itanium, on the other hand, was an odd attempt by Intel to make a 64bit CPU that could - mostly - run 32bit code as well. Unfortunately by the time the Itanium was released the Intel-Microsoft pairing was well established for most consumers and people wanted it to run Windows Server; which it didn't do particularly well.

So the Itanium may end up killed by the combined factors of lack of a market, lack of consumer interest, lack of consumer knowledge, and poor deployment. The DEC Alpha, on the other hand, was killed by upper level management who didn't seem to know what they had.

Re:Not Very Comparable (4, Interesting)

_merlin (160982) | more than 4 years ago | (#31741872)

Having used Alpha workstations, I beg to differ. The Alpha was a design that managed to do the absolute minimum per clock cycle in each pipeline stage. This allowed very high clock speeds, and high theoretical peak performance with very deep pipelines. In reality, the deep pipelines' branch misprediction penalty was so bad you never got close to the theoretical peak performance, and the high clock speeds made them hot and unreliable - poor reliability was the main driving factor for switching to SPARC. Everyone should've been able to see the problems with the Pentium 4 well in advance - it was basically an Alpha with an x86 recompiler frontend, so it suffered from all the same problems.
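The misprediction cost described here follows from a standard back-of-the-envelope CPI model. A short Python sketch (the numbers below are illustrative assumptions, not measured Alpha or P4 figures):

```python
# Effective cycles-per-instruction: a mispredicted branch flushes the
# pipeline, and the flush penalty grows with pipeline depth. A deeper
# pipeline buys clock speed but pays more per miss.
def effective_cpi(base_cpi, branch_freq, miss_rate, flush_penalty):
    return base_cpi + branch_freq * miss_rate * flush_penalty

shallow = effective_cpi(1.0, 0.2, 0.05, 5)   # short pipeline flush
deep = effective_cpi(1.0, 0.2, 0.05, 20)     # deep pipeline flush
assert shallow < deep
```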

DEC Tru64 had a lot going for it - lots of good ideas in there. When DEC and HP merged, they should have taken what was worthwhile from HP-UX and integrated it into Tru64, then ported the result to HP-PA. That would've produced a system that people wanted. (HP-UX is horrible - nothing behaves quite how it should. I'd be surprised if the thing really passed POSIX conformance without some money under the table.)

Re:Not Very Comparable (0)

Jah-Wren Ryel (80510) | more than 4 years ago | (#31741962)

(HP-UX horrible - nothing behave quite how it should. I'd be surprised if the thing really passed POSIX conformance without some money under the table.)

Lol. POSIX doesn't mean crap. POSIX was just a bunch of Unix vendors who got together and wrote a 'standard' that was loose enough to cover all the idiosyncrasies of most of their current implementations, with a little horse-trading thrown in for some of the outliers. In a way it was like RFCs - implementations were used as part of the process to define the final draft. But the main difference is that a good RFC is purposely precise, while most of POSIX is purposely vague.

Re:Not Very Comparable (0, Troll)

idontgno (624372) | more than 4 years ago | (#31742052)

/nod

Here's a hint about how meaningful POSIX compliance is.

Windows Server 2K8 includes Interix 6.1. This combination is POSIX-compliant.

Re:Not Very Comparable (1)

BlackSnake112 (912158) | more than 4 years ago | (#31742244)

Wasn't Windows NT 4.0 POSIX compliant?

Re:Not Very Comparable (0)

Anonymous Coward | more than 4 years ago | (#31742666)

"yes". In as much as Microsoft ever fully follows another group's standard.

Re:Not Very Comparable (1)

Xtifr (1323) | more than 4 years ago | (#31742480)

POSIX was just a bunch of Unix vendors who got together and wrote a 'standard' that was loose enough to cover all the idiosyncrasies of most of their current implementations, with a little horse-trading thrown in for some of the outliers.

Worse than that--DEC's involvement caused some wags to quip that POSIX was DEC's attempt to prove that OpenVMS was the One True Unix. :)

Now, the Single Unix Spec (SUS) on the other hand....

Re:Not Very Comparable (4, Interesting)

damn_registrars (1103043) | more than 4 years ago | (#31742196)

The Alpha was a design that managed to do the absolute minimum per clock cycle in each pipeline stage

That is pretty much what RISC was about, in a nutshell.

and the high clock speeds made them hot and unreliable

I don't know what system you were running. I was using an AlphaServer ES40; four 667 MHz Alphas with 8 GB RAM. It was one of the most reliable systems I've ever used for HPC. There was a rack of Intel x86 systems of the same era right next to it - something like 32 Intel Xeon CPUs - and the Alpha made the rack look silly and wasteful. On BLAST, the Alpha ran circles around the Intel rack, and it became even more embarrassing for the Intel rack when the data sets got larger. That was only one example, though; on pretty much anything we could get source code for, the Alpha ran better. And that was going up against 1.8 GHz Xeons.

By comparison, the Itanium wants to run native 32-bit code (though it certainly doesn't do it well). The compilers aren't easy to set up (even on Linux) and it's hard to find a Linux distro that runs on one. I have an SGI cluster with Itanium 2 CPUs in it; I know the care and feeding of this system well.

Re:Not Very Comparable (4, Interesting)

Bert64 (520050) | more than 4 years ago | (#31742226)

The alpha didn't even attempt to do out of order execution until the EV6 chip...
The EV4 and EV5 chips were strict in-order processors.

The difference with the P4 is that it was expected to run code that was originally optimized for a 386, whereas the original Alpha had code that specifically targeted it... In-order execution works very well when you can specifically target a particular processor (see games consoles), since you can tune the code to the available resources of the processor... The compiler for the Alpha was also pretty good; it could beat gcc hands down at floating-point code, for instance.

In terms of alphas getting hot, the only workstation i remember which had heat problems was the rather poorly designed multia (which used a cut down alpha chip anyway).. other alpha systems i used were rock solid reliable and i still have several in the loft somewhere - one of which ran for 6 months after the fans failed before i noticed and shut it down...

Clock for clock the alpha was pretty quick too, unlike the p4 that was considerably slower than a p3 at the same clock...
http://forum.pcvsconsole.com/viewthread.php?tid=11606 [pcvsconsole.com] shows alphas getting specfp2000 scores higher than x86 chips running at 3x the clock rate.

A lot of people, myself included, think itanium should never have existed, and that the development effort should have been put into alpha instead - an architecture that already had a good software and user base...

Re:Not Very Comparable (4, Interesting)

epine (68316) | more than 4 years ago | (#31743368)

If the 1.8GHz Xeon was based on the Netburst architecture, first you have to multiply by 2/3rds to correct for diet Pepsi clock cycles, then if your code base is scientific, you have to divide by two for the known x86 floating point catastrophe, and finally, if your scientific application is especially large-register-set friendly, there's another factor of 0.75. So on that particular code base, a 1.8GHz Netburst is about equal to a 400MHz Alpha (I only ever worked with the in-order edition). Netburst usually had some stinking fast benchmarks to show for itself if it happened to have exactly the right SSE instructions for the task at hand. And it gained a lot of relative performance on pure integer code. BTW, were you running Xeon in 64-bit mode? That could be another factor of 0.75.

A lot of people, myself included, think itanium should never have existed, and that the development effort should have been put into alpha instead - an architecture that already had a good software and user base

Yeah, you and a lot of clear headed people with insight into the visible half of the problem space. Not good enough.

Alpha was a nice little miracle, but it fundamentally cheated in its fabrication tactics. This is a long time ago, but as I recall, in order to get single-cycle 64-bit carry propagation, they added extra metal layers for look-ahead carry generation. For a chip intended for Intel-scale mass production, this kind of thing probably makes an Intel engineer's eyebrows pop off. That chip was tuned like a Ferrari. I'm sure the Alpha was designed to scale, but almost certainly not at a cost of production that generates the fat margins Intel is accustomed to.

Around the time Itanium was first announced, I spent a week poking into transport triggered architectures. There was some kind of TTA tool download, from HP I think, and I poked my nose into a lot of the rationale and sundry documentation.

TTA actually contains a lot of valid insight into the design problem. The problem is that Intel muffed the translation, through a combination of monopolistic sugar cravings, management hubris, and cart-before-the-horse engineering objectives. I'm sure many of the Intel engineers would like to take a Mulligan on some of the original design decisions. There might have been a decent chip in there somewhere trying to get out. Itanium was never that chip.

I pretty much threw in the towel on Itanium becoming the next standard platform for scientific computing when I discovered that the instruction bundles contained three *independent* instructions. They went the wrong way right there. They could have defined the bundles to contain up to seven highly dependent instructions, something like complex number multiplication: four operands, seven operations, two results. It should have been possible to encode that in a single bundle. Either the whole bundle retires, or not at all.

Dependencies *internal* to a bundle are easy to make explicit with a clever instruction encoding format. You wouldn't need a lot of circuitry to track these local dependencies. What you gain is that you only have to perform four reads from the register file and two writes to the register file to complete up to, in this example, seven ALU operations. Ports on the register file are one of the primary bottlenecks in TTA theory.
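To make the register-port arithmetic concrete, here is a small counting sketch for a complex multiply (a+bi)(c+di). The decomposition below is the standard four-multiply form, which comes to six ALU operations (the post's count of seven depends on how the tree is drawn); the port savings are the point, not the exact op count:

```python
# Register-file traffic for a complex multiply, encoded two ways:
# as independent instructions vs. one bundle with internal dependencies.
ops = [
    ("t1", "mul", "a", "c"),
    ("t2", "mul", "b", "d"),
    ("t3", "mul", "a", "d"),
    ("t4", "mul", "b", "c"),
    ("re", "sub", "t1", "t2"),
    ("im", "add", "t3", "t4"),
]

# Independent instructions: every source is a register-file read,
# every destination a register-file write, temporaries included.
independent_reads = 2 * len(ops)    # 12
independent_writes = len(ops)       # 6

# One bundle with explicit internal dependencies: only external inputs
# are read and only architecturally visible results are written;
# t1..t4 never touch the register file.
external_inputs = {"a", "b", "c", "d"}
external_results = {"re", "im"}
bundle_reads = len(external_inputs)     # 4
bundle_writes = len(external_results)   # 2

print(independent_reads, independent_writes, bundle_reads, bundle_writes)
# 12 6 4 2
```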

What you lose is that these bundles have a very long flight time before final retirement. Using P6 latencies, it's about ten clock cycles for the complex multiplication mul/add tree in this example (not assuming a fused mul-add). This means you have to keep a lot of the complexity of the P6 on the ROB side (reorder buffer). But that also functions as a shock absorber for non-determinism, and takes a huge burden off the shoulders of the compiler writers. This was apparent to me long before the dust settled on the failure of the Itanium compiler initiative.

In my intuitively preferred approach, instructions within bundles would be tightly bound and suitable for dispatch to a single execution cluster. I would have tried to come up with rules to reduce the maximum interdependence across bundles. Not eliminate it, but constrain it to a dull roar, so that a modest investment in forwarding pathways was guaranteed to be sufficient for the generated code.

The trade-offs become complicated, and I'm not entirely qualified to compete as an armchair chip designer against the combined might of Intel. This doesn't mean I'm not right. I have good intuitions in this area, and my analysis wasn't constrained by Intel's agenda of world domination. But I do acknowledge that the devil is in the details, and that I barely scratched the surface on this.

Having chosen that starting point, it quickly becomes apparent that you can't specify six to eight operands (as well as six to eight operations) in a single 128-bit bundle *if* you encode every operand with full orthogonality (a byte for each operand, to specify any of the 256 registers). Orthogonality? Bite me. That's not orthogonality, that's ludicrous over-provisioning. We get so caught up in learning that the travelling salesman problem is NP-complete, we fail to notice that in practice, close approximations to the optimal solution can be relatively easy to find.

Wikipedia: "Even though the problem is computationally difficult, a large number of heuristics and exact methods are known, so that some instances with tens of thousands of cities can be solved."

Let's blow ludicrous orthogonality out the door. Instead, let's imagine that each operand is specified by a four bit field, giving you a choice of sixteen input registers. Let's further suppose that those sixteen inputs are some random stochastic subset of the 256 registers in the register file, different for every instruction (some weak, mediocre, shallow bit hash of the full instruction encoding). Now the compiler is going to have to do register colouring to beat the band. News flash: we already know how to code extremely effective register colouring compilers (see the not-so NP complete travelling salesman example).

There's a PhD project or two to figure out exactly how many bits to allocate for each register argument and what hash patterns to use. It would be completely impossible to program this chip by hand without every second bundle being register-register move NOOPs. A good optimizing assembler would soon fix this. Against the encoding penalties, when these prove unavoidable, you make a lot of ground up when a single bundle encodes seven full operations, instead of just three.
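As a toy illustration of the idea, each instruction could derive its own sixteen-register window from a weak hash of its own encoding. The hash, constants, and field widths below are invented purely for illustration:

```python
# Toy "stochastic register window": a 4-bit operand field indexes into
# 16 of the 256 architectural registers, with the 16-register subset
# derived from a cheap hash of the instruction encoding. The hash and
# its constants are illustrative, not a real ISA proposal.
def register_window(instr_encoding: int, num_regs: int = 256, size: int = 16):
    """Return the register subset visible to this instruction's operands."""
    regs = []
    h = instr_encoding & 0xFFFFFFFF
    while len(regs) < size:
        h = (h * 2654435761 + 0x9E3779B9) & 0xFFFFFFFF  # weak multiplicative mix
        r = h % num_regs
        if r not in regs:
            regs.append(r)
    return regs

# Different encodings see different windows, so the register allocator
# must colour values into whatever subsets its instructions happen to get.
w1 = register_window(0x1234)
w2 = register_window(0x5678)
print(len(w1), len(w2), w1 == w2)   # 16 16 False
```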

That was my personal perspective on the road not taken. I remember a lot of people were irritated by the P6's non-determinism as compared to the P5. Itanium was a massive over-reaction back to hard determinism. Instead, what they needed to do was find the right radius of non-determinism, and then mandate that limit architecturally. Cap the non-scalable term, and keep the baby.

Intel is not the only chip design house in need of taking a well deserved bow here. ARM should step right up beside them. Thumb-2 was the right answer from day one, but they were too RISC-brained to spot the balance point and ended up with 30% worse code density than they should have. This is stupid, because instruction decode has a relatively constant transistor cost as the chip scales, whereas inefficient instruction cache utilization due to a stupidly rigid instruction format bites you all the way home. Brutally obvious. Unless ARM had a large net-present-value discount on surviving long enough as a company to reach GHz clock speeds, which is chicken shit, but not stupid.

Question: the x86 instruction format, which ranges from one to fifteen bytes per instruction, is a steaming pile of doo.

Answer 1: A blended 16/32 instruction format is a fine compromise between constant depth instruction decoder trees and i-cache code density.

Answer 2: You suck, we're going as far in the other direction as humanly possible.

Amazing how often in engineering a billion dollars or ten is sunk into answer number two.
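Answer 1 can be sketched with a toy decoder. The prefix rule below is simplified and invented for illustration (real Thumb-2 carves up its 32-bit encoding space slightly differently), but it shows why decode depth stays constant: one look at the leading halfword settles the instruction length.

```python
# Toy decoder for a blended 16/32-bit instruction stream.
# Simplified rule (not exact Thumb-2): a halfword whose top three bits
# are 0b111 begins a 32-bit instruction; anything else is 16-bit.
def decode_lengths(halfwords):
    """Return the byte length of each instruction in a halfword stream."""
    lengths = []
    i = 0
    while i < len(halfwords):
        if halfwords[i] >> 13 == 0b111:   # 32-bit prefix: consume two halfwords
            lengths.append(4)
            i += 2
        else:                             # complete 16-bit instruction
            lengths.append(2)
            i += 1
    return lengths

# A 16-bit op, a 32-bit pair, then another 16-bit op.
stream = [0x2001, 0xE3A0, 0x0000, 0x4770]
print(decode_lengths(stream))   # [2, 4, 2]
```

Contrast with x86, where instruction length can only be determined by walking prefix bytes, the opcode, ModRM and so on, which is what makes x86 decoder trees variable-depth.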

ding - worse is better (5, Interesting)

epine (68316) | more than 4 years ago | (#31743912)

This is a response to my own post. Sometimes after uncorking a minor screed, I note to myself "that was more obnoxious than normal" and then my subconscious goes "ding!" and I get what's grinding me.

The secret of x86 longevity is to have been so coyote-ugly that it turns into pablum the brain of any x86-hater who tries to make a chip to rid the planet of the scourge once and for all.

For three decades right-thinking chip designers have *wanted* x86 to prove as bad in reality as ugliness ought to dictate.

Instead of having a balanced perspective on beauty, the x86-haters succumb to the rule of thumb that the less like x86, the better. And almost always, that led to a mistake, because x86 was never in fact rotten to the core. You need a big design team, and it bleeds heat, but in all other respects, it proved salvageable over and over and over again.

On the empirical evidence, high standards of beauty in CPU design are overrated. Instead, we should have been employing high standards of pragmatic compromise.

If any design team had aimed merely for "a hell of a lot less ugly", instead of becoming mired in some beauty-driven conceptual over-reaction, maybe x86 would have died already.

Maybe instruction sets aren't meant to be beautiful. Of course, viewed that way, this is an age-old debate.

The Rise of ``Worse is Better'' [mit.edu]

Empirically, x86 won.

The lingering question is this: is less worse less better, or was there a way out, and all the beauty mongers failed to find it?

Re:Not Very Comparable (0)

Anonymous Coward | more than 4 years ago | (#31742616)

Correction: Compaq bought DEC in 1997, HP/Compaq merger was 2001-2003 (another stupid deal that netted Carly Fiorina a fortune despite her being fired). After Compaq gutted DEC because they didn't know what to do with DEC, IIRC, a lot of technology and a big portion of the Alpha design team went to AMD.

Re:Not Very Comparable (1)

stevel (64802) | more than 4 years ago | (#31742882)

Correction: Compaq bought DEC in mid-1998, HP bought Compaq in late 2001. Otherwise, you are mostly correct, though the majority of the Alpha design team ended up at Intel in 2001, when Intel acquired the Alpha architecture and compiler teams. Some Alpha designers did go to AMD, and AMD licensed the Alpha EV6 bus for Opteron.

Re:Not Very Comparable (1)

C0vardeAn0nim0 (232451) | more than 4 years ago | (#31743942)

correction: EV6 was licensed for use on athlons. opteron uses hypertransport.

Re:Not Very Comparable (1)

juuri (7678) | more than 4 years ago | (#31742700)

Well, it is easy to bag on things in hindsight, but in 01/02? If you were doing something like running thousands of Monte Carlo simulations, the Alpha was untouchable for commodity hardware. I won a bittersweet war when I swore up and down, with any data I could muster, that fully populated Sun E420s couldn't even remotely touch a lowly ol' DS20 running at about 1/3 the cost. Ended up with a lot of underutilized Sun boxen.

Re:Not Very Comparable (1)

_merlin (160982) | more than 4 years ago | (#31743024)

It was definitely an ambitious design, and something that needed to be tried. It did what it promised for the first generation, but sadly it was a dead end. You're right - no-one could have known in advance that the Alpha would end up hitting insurmountable roadblocks; but Intel should have seen what was coming when they used the concept in P4 NetBurst. Hopefully, the lessons learned have influenced today's processor designs.

killed by upper level management (1)

nurb432 (527695) | more than 4 years ago | (#31741912)

Like we are going to see happen with SPARC too, I'm afraid.

Re:Not Very Comparable (1)

Bert64 (520050) | more than 4 years ago | (#31742046)

Itanium was killed primarily by closed source software...
A few years ago, an Itanium box made a very good but expensive linux box, as did alpha for that matter...

However, while windows was ported to itanium, most of the apps people wanted to run weren't, so windows was effectively useless on the itanium because it had no applications... Very few commercial software companies would write software for it because of the small number of users, and the number of users won't increase because of the lack of software. What few applications were ported were usually done because hp or intel paid for them in one way or another.

Most open source apps however, were easily recompiled for itanium making it a pretty decent architecture if you only wanted to run open source on it... Unfortunately, because such people are relatively few in number, there was never the economies of scale necessary to make itanium systems affordable...

Re:Not Very Comparable (1)

JasonStevens (1574841) | more than 4 years ago | (#31742864)

The Compiler for Windows was the *WORST* I've ever dealt with. Even the beta x86 32-bit compilers were a dream come true compared to the Alpha compilers. Even trivial programs like gzip couldn't build with /O2 flags!!! The sad part is that by the time Visual C++ 6.0 for the Alpha shipped it was finally usable... And then they killed the platform.

Now I know you want to leap right in and defend DEC from the crappy C compiler, but their name was right in the copyrights....

Microsoft (R) & Digital (TM) AXP C/C++ Optimizing Compiler Version 8.03.JFa
Copyright (C) Microsoft Corp 1984-1994.
Copyright (C) Digital Equipment Corporation 1992-1994.
All rights reserved.

And I still recall when the NT source was leaked, a lot of the swearing in the code was pointed towards the lackluster Alpha compiler.

But then I must be mad, as I keep my Alpha running NT 4.0.

Re:Not Very Comparable (0, Flamebait)

damn_registrars (1103043) | more than 4 years ago | (#31743020)

Compiler for Windows

Therein lies the problem. Why were you running an OS originally written for x86 (as in 8086) on a RISC processor? An argument could be made that Microsoft never would have written NT for Alpha had they not been paid specifically to do so. And even at that point you were running a 64-bit CPU on 32-bit extensions to a 16-bit GUI to an 8-bit OS ... you know the rest.

The Alpha was supposed to run Unix - Tru64 Unix in particular. Running in a proper 64bit environment the Alpha was an incredible chip.

But then I must be mad, as I keep my Alpha running NT 4.0.

Trying real hard to think of a good reason to do that, especially knowing that there are Linux distros that run brilliantly on the Alpha ... nope, can't think of any good reasons to run NT on an Alpha. Indeed you might be crazy.

Re:Not Very Comparable (1)

Guy Harris (3803) | more than 4 years ago | (#31743956)

Compiler for Windows

Therein lies the problem. Why were you running an OS originally written for x86 (as in 8086) on a RISC processor?

What, somebody was running MS-DOS or Windows 95 on Alpha? (Windows NT was originally written for the Intel 80860, and later MIPS, and for 32-bit x86, according to this article [winsupersite.com] .)

The Alpha was supposed to run Unix - Tru64 Unix in particular. Running in a proper 64bit environment the Alpha was an incredible chip.

Well, Unix plus OpenVMS, but they both supported a 64-bit environment.

Re:Not Very Comparable (2, Informative)

gertam (1019200) | more than 4 years ago | (#31743332)

Compaq's upper level management's arguments about Itanium's inevitability in the marketplace and economies of scale are a prime example of how you should never let management make decisions of real consequence. I listened to meetings at Compaq where not a single engineer in the crowd agreed with management, but there was nothing they could do. Everyone knew that the game was over simply because a bunch of morons with MBAs thought Intel was unbeatable and they wanted to give up.

We couldn't understand it until much later, when it turned out to be obvious that they wanted to kill Alpha so they could eventually merge with HP. Another move done for purely political reasons, not business or technology reasons.

HP, on the other hand, probably knew that Alpha was a decent chip, but figured they could reap the benefits of having Intel take over most of the Alpha engineers. Tru64 was successfully ported to Itanium, and then promptly killed. Because it wasn't HP's technology, they had no interest. They also killed the TruCluster product for political reasons, and let the technology disappear. It was all a big tragedy.

Doubt it. (5, Interesting)

Jah-Wren Ryel (80510) | more than 4 years ago | (#31741680)

Does this mean the end of Itanium? Will it be missed, or was it destined to be another DEC Alpha waiting for its last sunset?

Kinda funny to make that comparison, since the Alpha was killed to enable the Itanium. (Long story involving HP making a deal with Intel to hand over the last of PA-RISC/Itanium processor development to Intel, and Compaq killing Alpha at the same time to clear out the market, since HP was in the process of acquiring Compaq (which owned DEC), although the acquisition was not yet public at the time of the cpucide.)

But I doubt it's the end of Itanium. Itanium models have things that even the latest Xeons don't in terms of RAS. [wikipedia.org] Most customers don't care about that level of fault tolerance and reliability, but the ones who can't migrate to Linux (or Windows) because they are dependent on features of more proprietary OSes like Tandem (now HP) NonStop [wikipedia.org] do need Itanium, and their software is unlikely to be ported to x86 anytime soon (it took roughly 4 years to get NonStop ported to Itanium to begin with).

Re:Doubt it. (1)

BlackSnake112 (912158) | more than 4 years ago | (#31742356)

I thought Intel had partnered with DEC to make the Alpha chip. Also Intel held the patents on it. Intel finally decided to tell DEC sorry but we (Intel) do not want to use these (the Alpha chip designs) anymore. Or something like that anyway. Intel forced DEC to stop making the CPU which left DEC screwed. DEC's value dropped enough for HP to buy it.

Wasn't the Pentium II more like a RISC CPU with a CISC interpreter so it could run windows and the rest of the 32-bit CISC stuff? So Intel needed the Alpha to go away for the PII to happen.

Re:Doubt it. (4, Interesting)

stevel (64802) | more than 4 years ago | (#31742940)

I thought Intel had partnered with DEC to make the Alpha chip. Also Intel held the patents on it. Intel finally decided to tell DEC sorry but we (Intel) do not want to use these (the Alpha chip designs) anymore. Or something like that anyway. Intel forced DEC to stop making the CPU which left DEC screwed.

Sorry, that is not even close. DEC sued Intel over infringements of the Alpha patents in Pentium processors. One of the results of the settlement was that Intel acquired DEC's Hudson, MA fab (which still operates today). In no way were DEC and Intel partners in Alpha, though ironically, Intel ended up making Alpha chips in the Hudson fab for several years under contract to DEC. What killed Alpha was years of neglect by Bob Palmer (DEC CEO) followed by Compaq's cluelessness. HP ended up with both Alpha and Itanium and bet the farm on the latter, but by that time it probably didn't matter.

Re:Doubt it. (1)

Macka (9388) | more than 4 years ago | (#31743862)

That's how I remember it as well. But it wasn't just the Alpha chip that Intel was forced to manufacture (after being forced to buy the Hudson MA fab) but StrongARM as well. Remember that? Ultimately it was this that set the stage for the death of Alpha. After suffering years of neglect at the hands of Intel in fabrication technology advancements, and missing out on many planned die shrinks that would have kept it ahead, it finally got the axe before the EV8 variant had a chance to see the light of day. That was surely political. EV8's design was so far ahead of the competition that the first generation of Itanium would have been stillborn. Management could not allow that to happen and risk jeopardizing all the money they were going to make, so they killed it.

Re:Doubt it. (4, Informative)

dave562 (969951) | more than 4 years ago | (#31742430)

The WSJ mentioned that Intel was porting a lot of the Itanium specific fault tolerance features over to the Xeons.

Re:Doubt it. (1)

fm6 (162816) | more than 4 years ago | (#31743406)

This may or may not count as irony, but VMS (DEC's main OS) survives solely as an OS for HP's Itanium-based systems. Further weirdness: a major app for this platform is Rdb, a DBMS that Oracle bought from DEC over a decade ago. It's interesting that two companies whose mainstays are competing tech (x86 servers for HP; the Oracle DBMS and now x86 and SPARC Sun servers for Oracle) work so hard to keep this particular legacy stack alive.

Not dead yet (1, Insightful)

PoiBoy (525770) | more than 4 years ago | (#31741714)

In addition to HPUX, OpenVMS and NonStop both require Itanium hardware. Itanium might be relegated to a role like IBM's Power chips, but it's not dead yet, Jim.

They will be in millions of homes? (2, Insightful)

HannethCom (585323) | more than 4 years ago | (#31742782)

IBM's Power chips make up the main processor(s) for the Xbox360 and PS3, so are you saying that the Itanium will end up in multiple consoles found in millions of homes?

It'll change lunch time (0)

Anonymous Coward | more than 4 years ago | (#31741822)

The microwave is broken, so how am I supposed to cook lunch if they drop the Itaniums?

Re:It'll change lunch time (1)

BlackSnake112 (912158) | more than 4 years ago | (#31742392)

Pick up a pentium 4 cpu. See if you can get one of the 3.6 GHz ones. Watch the type of cooler you use. You want to cook on it not set the place on fire.

*shovels in some more troll food*

Re:It'll change lunch time (0)

Anonymous Coward | more than 4 years ago | (#31742858)

ugh I have a P4 3GHz from a baseline hp server, and yeah the massive 4-heatpipe, dual-fan heatsink can barely keep it under 60C

I bet if I put it in a micro atx case you could bake cookies in it

What does Netcraft say? (2, Funny)

idontgno (624372) | more than 4 years ago | (#31741974)

Windows on IA-64 can't be dying until Netcraft confirms it!

Re:What does Netcraft say? (1)

selven (1556643) | more than 4 years ago | (#31742518)

Netcraft is dying, Netcraft confirms it.

Every Chip is a DEC Alpha (3, Insightful)

SwedishChef (69313) | more than 4 years ago | (#31742016)

They all get outmoded.

What architectures ARE used in datacenters? (0)

Anonymous Coward | more than 4 years ago | (#31742158)

Going AC on this so as not to damage my past, present and future reputation.

I've been in a datacenter or two, and I've seen x86(64), SPARC (legacy stuff), and IBM z series for production web/Java stuff.

What other CPU architectures are used somewhat widely?

Re:What architectures ARE used in datacenters? (0)

Anonymous Coward | more than 4 years ago | (#31742422)

I've seen POWER architecture, as well.

Itanium tends not to be a "datacenter" platform. It seems to be more of a "one or two ultra-high-availability systems within the datacenter" platform.

And Microsoft has slowly been removing support for a while. Windows Server 2008 R2 only supports two "roles" on Itanium: Application Server (aka generic "server" for third-party apps) and Web Server. And, honestly, who is going to use it as a web server? (Well, other than me... I have a surplus Itanium server running in my basement doubling as a space heater and web/file server.) It doesn't even officially support the "File Server" role! (Although all the background file serving architecture is there.) Itanium seems to mostly be about third-party apps nowadays, not mainstream 'server' duties.

DEC -Alpha was the best test platform! (0)

Anonymous Coward | more than 4 years ago | (#31742248)

We used to support HP-UX, Sun/Solaris, SGI Irix, WinNT, and DEC-Alpha. We used all kinds of tools like Purify and BoundsChecker to make sure the code was robust. But the best platform to test the code on was DEC-Alpha. None of the software tools were as capable as the DEC at sniffing out bad programming errors. At the slightest whiff of an array bound violation, freeing unallocated memory, or mixing delete with delete [], the damn thing would crash. Sometimes it would crash if someone nearby sneezed. Absolutely painful thing to debug and fix the errors, but once your code ran without crashing on DEC-Alpha you didn't have to run Purify or BoundsChecker! There was one government customer who used to demand we support DEC-Alpha. We tried hard to ditch the platform. That agency would pay us two engineer-years' worth of maintenance and give us two machines to build and test on!

byebye Microsoft (-1, Troll)

Anonymous Coward | more than 4 years ago | (#31742454)

Those MOTHERFUCKERS at HP and Microsoft promised me that buying Itanium-based servers was the best migration path from DEC Alpha with Tru64 and Oracle to Windows and Microsoft SQL Server.
I should have bought an IBM zSeries and DB2, assholes!

Thank God (1)

jayhawk88 (160512) | more than 4 years ago | (#31742542)

Now I won't have to decline all those useless Itanium updates in WSUS console every month.

OS support for recent hardware (0)

Anonymous Coward | more than 4 years ago | (#31743078)

So MS is retiring support for hardware that was released this year (Itanium 9300), yet Apple gets slammed for retiring support on hardware that is 5 years old?
