
Intel Details Eight-Core Poulson Itanium Processor

Soulskill posted about 2 years ago | from the here's-a-thing-that-exists dept.


MojoKid writes "Intel has unveiled details of their new Itanium 9500 family, codenamed Poulson, and the new CPU appears to be the most significant refresh Intel has ever done to the Itanium architecture. Moving from 65nm to 32nm technology substantially reduces power consumption and increases clock speeds, but Intel has also overhauled virtually every aspect of the CPU. Poulson can issue 11 instructions per cycle compared to the previous generation Itanium's six. It adds execution units and re-balances those units to favor server workloads over HPC and workstation capabilities. Its multi-threading capabilities have been overhauled and it uses faster QPI links between processors. The L3 cache design has also changed. Previous Itanium 9300 processors had a dedicated L3 cache for each core. Poulson, in contrast, has a unified L3 that's attached to all its cores by a common ring bus. All told, the new architecture is claimed to offer more than twice the performance of the previous generation Itanium."
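As a rough sanity check on that "more than twice the performance" claim, here is a back-of-envelope peak-throughput comparison in Python. The issue widths come from the summary above; the top-bin clock speeds (1.73 GHz for the 9300 series, 2.53 GHz for the 9500 series) are my own figures, and real workloads never sustain peak issue, so treat this as a sketch, not a benchmark.

```python
# Hedged back-of-envelope: peak per-core issue throughput, old vs. new.
# Issue widths are from the article summary; clock speeds are assumed top bins.
OLD_ISSUE_WIDTH = 6        # Itanium 9300 (Tukwila): 6 instructions/cycle
NEW_ISSUE_WIDTH = 11       # Itanium 9500 (Poulson): 11 instructions/cycle
OLD_CLOCK_HZ = 1.73e9      # assumed fastest 9300-series part
NEW_CLOCK_HZ = 2.53e9      # assumed fastest 9500-series part

def peak_issue_rate(width, clock_hz):
    """Theoretical peak instructions per second for one core."""
    return width * clock_hz

speedup = (peak_issue_rate(NEW_ISSUE_WIDTH, NEW_CLOCK_HZ)
           / peak_issue_rate(OLD_ISSUE_WIDTH, OLD_CLOCK_HZ))
print(f"peak per-core speedup: {speedup:.1f}x")
```

The ~2.7x peak figure is at least consistent with the doubling claim, though peak issue rate always overstates real gains, since the compiler rarely fills all the slots.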


Failtanium (-1, Flamebait)

Anonymous Coward | about 2 years ago | (#41933179)

And all 3 Itanium users are rejoicing!

There are a hell of a lot of Itanium users (4, Funny)

attemptedgoalie (634133) | about 2 years ago | (#41933203)

I'll be buying a number of systems with these in a few months when they hit the street and the budget's ready. I'll be able to virtualize a lot of our old PA-RISC boxes into a smaller and more efficient set of systems.

But you're right, they suck because you can't play Angry Birds on it.

Re:There are a hell of a lot of Itanium users (0)

Anonymous Coward | about 2 years ago | (#41933355)

Itanium is pretty much an HP-only proc now. Sucks being anchored to one vendor, huh?
Imagine if you were able to host your applications on commodity hardware..

Re:There are a hell of a lot of Itanium users (1)

greg1104 (461138) | about 2 years ago | (#41933679)

Intel provides an Itanium reference board [intel.com] that makes it possible for other manufacturers to release OEM Itanium-based systems. As a second manufacturer example, I've used one of Bull's Novascale Bullion [bull.com] servers. It wasn't very cost-effective, but it did include 256 cores, and continued running just fine when one socket was damaged during shipping. The sort of applications that need that many cores and heavy redundancy against hardware failures exist, and no commodity hardware will satisfy them. There are just a few hundred thousand such systems sold each year.

Re:There are a hell of a lot of Itanium users (1)

Anonymous Coward | about 2 years ago | (#41933863)

Intel would have a wet dream if a few hundred thousand Itanium systems were sold each year. I'd revise that number down by a few orders of magnitude.
The number of non-HP Itanium vendors is very very very very narrow and getting smaller by the day. Niche products.
The market for 256+ core single image systems is also vanishingly small.

Re:There are a hell of a lot of Itanium users (0)

Anonymous Coward | about 2 years ago | (#41940775)

Google, Facebook etc have heavy redundancy against hardware failures.

Their approach doesn't work if your application needs "single system image" and still needs high redundancy.

I was wondering for years why Intel and the OS people didn't sit down and work out additional ways to use those extra transistors. But recently they seem to be doing something: http://arstechnica.com/business/2012/02/transactional-memory-going-mainstream-with-intel-haswell/ [arstechnica.com]

Would be good if they can also do stuff to make Single System Image and clustering easier.

Heck, just getting the time of day and getting monotonic time is harder and crappier than it has to be on x86 platforms.
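The complaint here is about the plumbing underneath timekeeping on x86 (TSC synchronization across sockets, HPET fallback, and so on). At the API level the distinction is simple; a small Python sketch of why the monotonic clock is the one you want for measuring intervals:

```python
import time

wall = time.time()       # time of day: can jump backwards when NTP steps the clock
mono = time.monotonic()  # monotonic: never goes backwards, ideal for intervals

# Measuring an interval with the monotonic clock is immune to clock steps.
start = time.monotonic()
time.sleep(0.01)
elapsed = time.monotonic() - start
print(f"elapsed: {elapsed:.4f}s")  # roughly 0.01s, never negative
```

The hard part the commenter is pointing at lives below this API: keeping those calls cheap and consistent across many sockets is where x86 platforms have historically struggled.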

Re:There are a hell of a lot of Itanium users (1)

unixisc (2429386) | about 2 years ago | (#41941625)

What I'm wondering is why Intel doesn't sit down w/ organizations that still support Itanium, such as Debian and FreeBSD, to optimize those OSs for the Itanium. Similarly, they could also work w/ the GCC and Clang guys to make sure that additional work gets done to finetune those compilers so that they can make the best use of these CPUs.

But these CPUs seem more ideal for supercomputing work, not the Google or Facebook types. Their needs, aside from Xeon or Opteron, can be met even by SPARC servers - those too can run Linux or BSD, and they can then do what they do but on more failsafe hardware.

Re:There are a hell of a lot of Itanium users (1)

Desler (1608317) | about 2 years ago | (#41944197)

What I'm wondering is why Intel doesn't sit down w/ organizations that still support Itanium, such as Debian and FreeBSD, to optimize those OSs for the Itanium.

Because it wouldn't benefit them economically?

Similarly, they could also work w/ the GCC and Clang guys to make sure that additional work gets done to finetune those compilers so that they can make the best use of these CPUs.

Why would they bother? They already dropped IA-64 support in their own compilers after version 11.1.

Re:There are a hell of a lot of Itanium users (0)

Anonymous Coward | about 2 years ago | (#41947805)

> I was wondering for years why Intel and the OS people didn't sit down and work out additional ways to use those extra transistors.

They do. They have done so for a long time. They regularly work with OS developers (Linux and Windows at least, but possibly others too), compiler and toolchain people, and important application developers. They employ large labs of Linux and Windows and compiler developers, for example.

Re:There are a hell of a lot of Itanium users (1)

Anonymous Coward | about 2 years ago | (#41933817)

With software that requires you to fill your raised floor with Itanic servers, the cost of the hardware is usually the least of your worries.

Re:There are a hell of a lot of Itanium users (0)

Anonymous Coward | about 2 years ago | (#41933549)

U mad bro? Have fun paying premium costs because of your vendor lock-in.

A project for you. (0)

attemptedgoalie (634133) | about 2 years ago | (#41934765)

Go find an open source or commodity system that can be deployed in a heavily regulated power industry with SCADA systems.

Make sure it's so cheap that the difference in cost for buying Itaniums and this software will pay the millions in training people all over the country, interfacing in the financial and billing systems, as well as covering the cost of redeveloping all of the customized code that is required to operate coal, natural gas and nuclear plants.

Please call me when you're done.

Re:There are a hell of a lot of Itanium users (0)

Anonymous Coward | about 2 years ago | (#41936691)

WOW! The new Itanium 9500 family offers half the performance of POWER7.

Re:There are a hell of a lot of Itanium users (1)

unixisc (2429386) | about 2 years ago | (#41941631)

In the meantime, how is POWER8 coming along?

Re:There are a hell of a lot of Itanium users (0)

Anonymous Coward | about 2 years ago | (#41937095)

Um...Itanium can run Windows. Windows can run Chrome. Chrome can run Angry Birds.

Re:There are a hell of a lot of Itanium users (1)

unixisc (2429386) | about 2 years ago | (#41941727)

Uh, no, Itanium could only run Windows Server 2008, and that was on the Itanium II. For the Itanium III, software would have to be recompiled, and since Microsoft has dropped support for that CPU, you can be sure that it won't happen.

Heck, even the Linuxes have dropped support for Itanium - the only exception being Debian. On the BSD side, only FBSD supported it initially, although it looks like NBSD support might arrive in version 6.0. But w/ the compatibility breakage b/w each generation, it looks like everything would have to be recompiled each and every time (unless they freeze a certain core, say the 9500, and then just improve performance by adding more cores). In which case, GNU HURD should be ported to this platform - sounds like the perfect OS for it.

Re:Failtanium (1)

rwa2 (4391) | about 2 years ago | (#41933207)

I ought itanic it n ceberg nd ank nto he ea

Re:Failtanium (4, Funny)

TheLink (130905) | about 2 years ago | (#41933239)

I think there is a world market for about 5 Itanium computers.

Re:Failtanium (0)

Anonymous Coward | about 2 years ago | (#41933661)

That's funny, because I've personally worked on hundreds of them.

Re:Failtanium (1)

tgd (2822) | about 2 years ago | (#41934581)

That's funny, because I've personally worked on hundreds of them.

*woosh*

Re:Failtanium (1)

Anonymous Coward | about 2 years ago | (#41933741)

3 Words: VMS + Fortran + Mission Critical

If it's going to cost you 250M to migrate to another platform, or 20M to buy replacement Itanic hardware,
which one are you going to do?

Companies and large institutions with these kinds of equations exist in many places.

They are the ones that actually needed computers when computers 1st came out.

Re:Failtanium (1)

Tapewolf (1639955) | about 2 years ago | (#41934261)

Balls, that was supposed to be a 'funny' mod.

Re:Failtanium (0)

Anonymous Coward | about 2 years ago | (#41938909)

What's funny is a bunch of basement jockeys thinking that they know better than the techs at Fortune 500 companies, and actually believing they could show up with some Linux boxes and take over.

Why? (3, Interesting)

PCK (4192) | about 2 years ago | (#41933201)

I was under the impression that Itanium was all but dead. I'm guessing Intel must be contract bound to bring out new versions.

Re:Why? (1)

Anonymous Coward | about 2 years ago | (#41933281)

If that was the case, why bother making performance improvements inside the core? Why not just move it to 32nm and double/triple the number of cores / socket?

Though I agree, this was likely a significant loss on Intel's books.

Re:Why? (4, Funny)

Guignol (159087) | about 2 years ago | (#41933395)

I understand
In death, an agent of project Itanium has a name
His name is Robert Poulson

ahh, beat me to it. Lol (0)

Anonymous Coward | about 2 years ago | (#41934377)

Robert 'Bob' Paulson: Go ahead, Cornelius, you can cry.

Narrator: [V.O] This is Bob. Bob had bitch tits.
[Camera pans to a REMAINING MEN TOGETHER sign]
Narrator: [V.O] This was a support group for men with testicular cancer. The big moosie slobbering all over me... that was Bob.
Robert 'Bob' Paulson: We're still men.
Narrator: [slightly muffled due to Bob's enormous breasts] Yes, we're men. Men is what we are.
Narrator: [V.O] Eight months ago, Bob's testicles were removed. Then hormone therapy. He developed bitch tits because his testosterone was too high and his body upped the estrogen. And that was where I fit...
Robert 'Bob' Paulson: They're gonna have to open my pecs again to drain the fluid.
Narrator: [V.O] Between those huge sweating tits that hung enormous, the way you'd think of God's as big.

Re:Why? (1)

Anonymous Coward | about 2 years ago | (#41935335)

I came for this joke. /. does not disappoint.

Re:Why? (2)

ewanm89 (1052822) | about 2 years ago | (#41933461)

Yeah, I'm sure there was a big argument, with Oracle threatening to sue, when Intel said they were dropping the Itanium architecture several months ago.

Re:Why? (0)

Anonymous Coward | about 2 years ago | (#41935343)

Know what you are talking about before you open your mouth

"Intel is reaffirming its commitment to the architecture and slapping down Oracle’s suggestion that Itanium is nearing the end of its life."

http://allthingsd.com/20110323/intel-to-oracle-thats-okay-well-have-a-great-itanium-party-without-you/

Re:Why? (0)

Anonymous Coward | about 2 years ago | (#41933865)

Itanium is not dead. Intel is still selling billions of dollars worth of chips. The Itanium division alone sells more chips by revenue than all of AMD's divisions combined.

Re:Why? (0)

Anonymous Coward | about 2 years ago | (#41934129)

Intel is still selling billions of dollars worths of chips.

Bullshit. If it was selling so well they wouldn't have dropped support for it from their C++ and Fortran compilers in 2011. Any version beyond 11.1 no longer supports it. Also, HP makes up around 95% of Itanium system sales and has only averaged around 2-3 billion in sales. So unless they're paying Intel the equivalent of 50-67% of their revenues, it's highly doubtful Intel is making billions off of it.

Re:Why? (0)

Anonymous Coward | about 2 years ago | (#41936837)

Then again, Intel haven't supported HP-UX, Non-stop or others with their compilers anyway.

Re:Why? (1)

hairyfeet (841228) | about 2 years ago | (#41935713)

This is what I don't get: you look up the numbers and the chip is all but a corpse. My guess is they already had these designs done before Itanic lost any chance of gaining real share, and now they're hoping to simply carve out a niche with it, something like how POWER and SPARC have their niches.

But according to Wikipedia they sell about 200k Itanium chips a year [wikipedia.org], which, while that makes it not a money loser for a giant like Intel, makes you wonder if it's worth keeping the lights on for such a small market share. All I can figure is they can keep this a step or two behind on the die shrinks and use them to fill capacity. After all, TFA says it's 32nm, and hasn't Intel already released 22nm x86 chips?

Who knows, maybe there is some niche contractor that Intel wants to keep happy. Look how they kept cranking out 486 chips until just a few years ago: they got enough sales from the US military and defense contractors for the 486 (because it's easy to harden against radiation) that even with such a tiny niche it was worth keeping the line running just to keep them happy. Last I heard they had a warehouse full of 386s and 486s where they had cranked that last run to the max so they'd have plenty of spares to keep the contractors happy for several more years.

Re:Why? (2)

unixisc (2429386) | about 2 years ago | (#41938007)

The recommended ASP is ~$4000/tray. Anyone know how many Itaniums there are in a tray? Multiply the unit price by 200k, and you'll get the cash Intel would be making on those.

But honestly, there are some markets Intel should attack w/ this CPU. For starters, supercomputers. The platform from Cray discussed yesterday - that one looks just perfect for a whole bunch of these. There are quite a few supercomputer projects in a number of countries, and Intel should target the Itanium at all of them. That alone would have a bunch of them flying off the shelves.

One thing I believe - by tossing more cores at the problem, just like w/ the i-Cores and others, Intel has possibly eliminated a major drawback that Itanium, as a VLIW based CPU had - namely a complete break in compatibility b/w generations. This was something that would have threatened to sink the platform, but since Itanium I & II didn't take off, there ain't a whole bunch of legacy software for Itanium III to support. But Itanium is the wrong platform for legacy OSs such as NonStop or OpenVMS - it's probably perfect for supercomputers, but not much more.

Re:Why? (1)

turgid (580780) | about 2 years ago | (#41942283)

But honestly, there are some markets Intel should attack w/ this CPU. For starters, supercomputers. The platform from Cray discussed yesterday - that one looks just perfect for a whole bunch of these. There are quite a few supercomputer projects in a number of countries, and Intel should target the Itanium at all of them. That alone would have a bunch of them flying off the shelves.

Er, no. Itanic is just an over-grown, over-engineered DSP. The GPUs that they use as co-processors in supercomputers these days do a much better job (orders of magnitude faster, cooler, cheaper and supported by standard - and open source - software libraries).

Re:Why? (1)

g00ey (1494205) | about 2 years ago | (#41938629)

Who is to judge whether developing and marketing the Itanium is worthwhile other than Intel themselves? Perhaps the development and marketing of these chips will give them valuable information that is useful for the development of future generation processors.

The EPIC architecture (which is looked upon as a continuation of the development of the VLIW architecture) is significantly different from other more widespread architectures, and perhaps the performance issues are there because people have not yet figured out how to fully utilize such an architecture in an efficient manner. So maybe one day, when the compiler tools get more mature, we might see EPIC CPUs with competitive price/performance in the market. But that's my two cents.

Btw, damn to the depths whatever muttonhead thought up 'all but'!

Re:Why? (1)

unixisc (2429386) | about 2 years ago | (#41941835)

RISC was actually the optimal CPU architecture. CISC had a lot of things, such as variable instruction lengths, different modes of addressing and so on, that complicated the hardware. RISC simplified some of that by reducing the number of instructions that were needed, since all the programming was done in higher level languages like C, but still kept techniques like branch prediction, speculative execution and register renaming in the CPU itself. As a result, RISC never had problems maintaining compatibility b/w different generations of CPUs. The simplified hardware allowed them to either accelerate clock speeds, or have improved performance/watt.

VLIW went a step further than RISC and moved everything to the compiler. The idea being that once very little work was left to the hardware, it could be simplified and overclocked to speeds above the fastest superpipelined RISC CPUs, such as the Alpha. It was a good theory, except for 2 things:

  • The compilers for such a platform that would do all the dynamic analysis were near impossible to write
  • Compatibility b/w generations of CPUs was automatically broken every time they would add a few registers or even a few pipelines or pipeline stages - the compiler would have to take care of that to make sure that all the pipes were utilized

Also, the amount of circuitry in RISC that did that dynamic analysis was already quite small, so the percentage of real estate saved by going to VLIW was not very much. So not much cost savings there, while the compilers didn't get much better. Meanwhile, on the RISC side, both Alpha and POWER incorporated a lot of the other MIMD techniques inherent in VLIW, making that difference even more theoretical.
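The compatibility break described above can be made concrete with a toy scheduler. This Python sketch (not real IA-64 bundling, which has its own template rules) packs independent operations into fixed-width bundles; the machine's width is baked into every binary's schedule, so a differently shaped next-generation part would need a recompile:

```python
def schedule(ops, deps, width):
    """Greedy list scheduler: fill bundles of `width` slots per cycle,
    scheduling an op only once all its inputs finished in earlier cycles."""
    bundles, done, remaining = [], set(), list(ops)
    while remaining:
        bundle = [op for op in remaining
                  if deps.get(op, set()) <= done][:width]
        if not bundle:
            raise ValueError("dependence cycle")
        remaining = [op for op in remaining if op not in bundle]
        done |= set(bundle)
        bundles.append(bundle)
    return bundles

ops = ["load_a", "load_b", "add", "store"]
deps = {"add": {"load_a", "load_b"}, "store": {"add"}}
print(schedule(ops, deps, width=2))  # [['load_a', 'load_b'], ['add'], ['store']]
print(schedule(ops, deps, width=4))  # same three bundles: deps are the limit here
```

Whenever more than `width` ops are independent, a 2-wide and a 4-wide machine get different schedules, which is exactly why code scheduled for one generation underuses (or mis-fits) the next.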

Ultimately, it was tragic that Itanium brought about the premature deaths of several far better RISC CPUs such as the Alpha, MIPS V and PA-RISC. It would have been good as a new platform from Intel targeted very specifically at scientific applications, such as supercomputers, where the effort could have gone into making exactly the sort of compilers that would have been needed. In the meantime, PA-RISC could have continued to sustain HP/UX, Alpha could have continued to sustain OVMS and MIPS V could have continued to sustain NonStop, even if SGI went under. Itanium certainly does not play in the same area as SPARC or even POWER7, and it could have been fine-tuned for what it would have been best at, instead of pretending to be a successor to the x86 platform, as Intel originally planned.

Re:Why? (1)

g00ey (1494205) | about 2 years ago | (#41948517)

Perhaps breaking of compatibility between CPU generations is not a weakness of the VLIW/EPIC architecture per se but rather a weakness in how people look at software and software distribution. First of all, why should software be distributed as pre-compiled binaries? A much better way would be to distribute the sources while maintaining a compiler/installation environment that automatically handles the software. This environment would then automatically optimize the software for the specific computer system and its particular hardware configuration during the installation process and migration of this software to newer generation systems would be a non-issue.
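That first idea is essentially the source-based distribution model (Gentoo is the best-known example). A minimal sketch of composing such an install-time build, assuming only that a C compiler exists on the installing host; `-march=native` is a real GCC/Clang flag that tunes code for the CPU doing the compiling, which is exactly the per-generation tuning a VLIW target would need:

```python
import platform

def install_time_build_cmd(cc, sources, out):
    """Compiler invocation composed at install time for this exact host.
    Hypothetical helper: a real source-based package manager adds
    dependency resolution, sandboxing, per-package flag overrides, etc."""
    flags = ["-O2"]
    if platform.machine() in ("x86_64", "i686"):
        flags.append("-march=native")  # schedule/tune for the installing CPU
    return [cc, *flags, *sources, "-o", out]

cmd = install_time_build_cmd("cc", ["app.c"], "app")
print(" ".join(cmd))
```

Under this model the "binary scheduled for the wrong generation" problem disappears, at the cost of compile time at install and a toolchain on every machine.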

Another approach would be to add an abstraction layer between the hardware and software very much like what is done with virtualization, Java, ZFS, LVM, DirectX, Crossbow et al. That would make the software more independent of the underlying hardware...

Re:Why? (1)

Tore S B (711705) | about 2 years ago | (#41954541)

Another approach would be to add an abstraction layer between the hardware and software very much like what is done with virtualization, Java, ZFS, LVM, DirectX, Crossbow et al. That would make the software more independent of the underlying hardware...

Isn't that basically how CISC works nowadays?

Re:Why? (1)

g00ey (1494205) | about 2 years ago | (#41955017)

The problem many CISC CPUs (such as x86-based CPUs) are facing today is that they are encumbered by legacy instruction sets so as to maintain backwards compatibility. I understand that there is an abstraction layer in many x86 CPUs that emulates some of these legacy instructions at the hardware level. The downside is that the die space required for this circuitry could be used for something else that would improve performance, instead of maintaining this backwards compatibility.

As an abstraction layer between hardware and software, CISC cannot be compared to the implementations I mentioned in my prior post. Assume that Intel introduces a new instruction set that would make any contemporary CPU without it pale in comparison. Let us call this instruction set SSE6. Any precompiled software will not take advantage of this new instruction set; the software has to be recompiled. In the examples I mentioned, the hardware support is determined at the driver level while the applications take advantage of whatever is available. As we all know, hardware and their drivers/compilers stick together like a horse and carriage.
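The driver-level alternative described here is run-time dispatch: ship every code path and pick the best one the hardware supports at startup. A Python sketch using the comment's hypothetical "SSE6" as a stand-in feature name; real code would query CPUID (e.g. via GCC's `__builtin_cpu_supports`), and the accelerated path here is a placeholder, not actual SIMD:

```python
def add_scalar(a, b):            # baseline path: always available
    return [x + y for x, y in zip(a, b)]

def add_sse6(a, b):              # stand-in for a hardware-accelerated path
    return [x + y for x, y in zip(a, b)]

def cpu_features():
    # Placeholder: real detection would read CPUID or /proc/cpuinfo.
    # Assume a plain machine so the example is self-contained.
    return set()

# Ordered best-first; the dispatcher falls through to the baseline.
IMPLS = [("sse6", add_sse6), ("baseline", add_scalar)]

def select_impl():
    available = cpu_features() | {"baseline"}
    return next(fn for name, fn in IMPLS if name in available)

add = select_impl()
print(add([1, 2], [3, 4]))  # [4, 6]
```

With this structure, old binaries automatically pick up a new instruction set the day the detection layer (the "driver" in the comment's analogy) learns to report it.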

Maybe the ideal CPU is EPIC based, maybe it is a CISC that is not encumbered by legacy instructions or even a RISC. We will not know until we spend time and research to find out. Most likely, what is optimal will depend on circumstances or the quantum mechanical properties of the materials used which is likely to change as newer and more efficient materials are discovered. Maybe we will see all these CPUs in one and the same system eventually as they are all good at specific tasks. So it would mean a great deal if existing software could immediately take advantage of the new hardware features and optimizations as they reach the market.

Re:Why? (1)

hairyfeet (841228) | about 2 years ago | (#41957323)

Oh please! While FOSS might be able to work in the server niche, that is pretty much the ONLY place it works, and even then not great, as the redistribution clause means you can't stop people from simply making endless copies. By distributing as a binary you make people buy the next version to get bug fixes and upgrades, which in turn pays for the bug fixes and upgrades to be written. There is a good reason why the largest FOSS company on the planet, Red Hat, wouldn't make a pimple on the ass of MSFT and Apple, and that's because you can simply take all their work for free with CentOS. The FOSS model simply cuts your own throat if you need to actually get paid; see Canonical putting ads into Ubuntu and holding out a tin cup begging for donations as an example.

Now as for why Itanic went nowhere? Simple: VLIW. With VLIW the compiler is everything; it has to be damned near perfect to get any kind of performance, and no matter how well you write it there will still be tons of corner cases that will cause it to stall. Yet in the case of Itanic, Intel had only a half-assed compiler out of the gate, and by the time they came out with a halfway decent one, AMD had forced them to go x64 and multicore, which fricking killed any advantage Itanic had. It is obvious that VLIW is a dead end; hell, even the GPUs are moving away from it, with AMD replacing VLIW with vector designs, first in their GPUs, followed by their APUs.

The simple fact is you can do everything you could on Itanic with a Tesla or FirePro card and STILL enjoy the incredible amount of software and OSes for x86. There's no point in taking on a platform with less software and more compiler issues.

Re:Why? (1)

rot26 (240034) | about 2 years ago | (#41938599)

Yep.

It's GOOD to be an important supplier to a black project with a black budget. *cough* NSA *cough*

There will be trainloads sent to Bluffdale, Utah, in boxes labeled as containing Donny and Marie CDs. I imagine much if not most of the development was done by SAIC contractors with TS clearances. There will no doubt be a few thousand crippled versions marketed through the normal channels.

Re:Why? (1)

tigersha (151319) | about 2 years ago | (#41944793)

The rest of us jerk off to pictures of girls instead of conspiracy theories. Try it one day!

Great news for Oracle databases! (2)

gtirloni (1531285) | about 2 years ago | (#41933283)

The next upgrade will surely make things fly!

What's twice a small number? (2)

davecb (6526) | about 2 years ago | (#41933287)

My leaky/biased memory says these machines were a speed disappointment. Is this doubling going to make them faster or slower than an x86?

--dave

Re:What's twice a small number? (2)

betterunixthanunix (980855) | about 2 years ago | (#41933581)

At least according to Wikipedia, Itanium's performance was disappointing compared to contemporary RISC architectures, ten years ago:

https://en.wikipedia.org/wiki/Itanium#Itanium_.28Merced.29:_2001 [wikipedia.org]

One of the traps Intel tends to fall into, at least according to someone I know who worked there during the Itanium "hype days," is that the architecture team does not communicate with the compiler team. Both Itanium and x86 fall into this trap, although x86 is far more illustrative of the problem (most compilers can only take advantage of a small fraction of the total number of x86 instructions; most instructions are too complicated, and most programming languages do not make it easy to specify when such complex instructions are advantageous). I suspect that in a few decades, compiler technology will have advanced enough that Itanium would beat the pants off x86 in typical "enterprise" applications, although by then Itanium will probably have been forgotten.

Re:What's twice a small number? (3, Interesting)

Anonymous Coward | about 2 years ago | (#41934493)

Perhaps Intel fell into the trap of not communicating with the compiler team, but HP certainly did not.

The development of the EPIC concept at HP already started in 1992 as the to-be-successor for the HP-PA architecture. Look up e.g. the many joint research papers of CPU-architecture and compiler engineers for PlayDoh (or see e.g. http://www.hpl.hp.com/techreports/93/HPL-93-80.html for an intro to PlayDoh).

The compiler technology to do well for EPIC architectures was mostly available by the time IA64 launched. Arguably it hadn't advanced enough to be considered "ready for prime time", but things like data flow analysis of predicated code (http://www.hpl.hp.com/techreports/96/HPL-96-119.html), if-conversion (various influential papers), VLIW scheduling (e.g. selective scheduling a la Moon & Ebcioğlu), interprocedural analysis and various other optimizations were available and actually implemented in the HP compiler and probably also in the Intel compiler.

(John C. Dvorak wrote "How the Itanium Killed the Computer Industry". Perhaps one day I'll write "How the Itanium Revived Compiler Scalar Optimization Research" :-)

I think a bigger problem for Itanium was the underwhelming Merced. We used them mostly to heat the SuSE Maxtor building, they weren't good for much else. Slow, power-hungry, inefficient. And by that time, the market for things like database servers and high-end engineering workstations (where HP-PA was big) was imploding due to the advances on commodity architectures like AMD64 running Linux or even just Wintel32. Merced was a disaster, the chip wasn't ready and the compiler technology was not available widely enough, and by the time Madison came along the reputation damage was too big to be undone.

It probably also didn't help that support for Itanium in Linux was never very good. The "typical hacker" didn't have access to IA64 and the major companies supporting IA64 didn't invest in Linux-for-IA64. Compare how IA64 funding for Cygnus/Red Hat got cut before binutils was complete (to this day, binutils for ia64 is still far from complete) to how AMD funded and cooperated with SuSE to get a good x86-64 Linux ecosystem even before First Iron. Also, GCC has only recently begun to catch up with the proprietary compilers of HP and Intel (and also e.g. SGI's Open64), but neither HP nor Intel ever really understood how GCC is a cornerstone for the whole GNU+Linux system. For example, bash compiled by ecc was much faster than bash compiled by gcc -- but no Linux distribution ever shipped an ecc-based complete distribution. An official LLVM port for IA64 doesn't even exist, but a port for the long-dead Alpha *does*. What does that tell you?!

Mis-management and lack of vision are as much a cause for Itanium's failure as the technology itself....

Re:What's twice a small number? (1)

fatphil (181876) | about 2 years ago | (#41935831)

I understand that for "compatibility" they squeezed a little 386-compatible core in the corner of the chip. It was also my understanding that some people benchmarked the chip by feeding it x86 code, and saw a 10-year-old core struggle with the load. This was not good publicity for their enterprise flagship.

Might all be urban legend, or misremembered, or I might be on drugs.

Re:What's twice a small number? (2)

eabrek (880144) | about 2 years ago | (#41936319)

The early processors had a functional unit for translating x86 into Itanium (it was probably area-wise bigger than a 386, but it just read x86 opcodes and produced Itanium instructions). It was later removed, and x86 support was handled in software: http://www.xbitlabs.com/news/cpu/display/20060120105942.html [xbitlabs.com]

Re:What's twice a small number? (0)

Anonymous Coward | about 2 years ago | (#41938437)

You remember correctly. The first Itanium chips had an on-chip interpreter for x86 code, and also for HP-PAWW code although I'm not sure that was ever used by anyone.

In any case, the chip could also run in big-endian or little-endian mode. So it was really 4 different architectures on one chip.

The first Itaniums had all the hallmarks of "Designed by committee", and that bloat probably contributed to the poor Watt-per-instruction ratios these things were producing.

(At the time it was suggested Itanium would have been better off with a software interpreter, much like what Transmeta tried so *cough* successfully with its Crusoe...)

Re:What's twice a small number? (1)

TheRealMindChild (743925) | about 2 years ago | (#41936711)

The "typical hacker" didn't have access to IA64 and the major companies supporting IA64 didn't invest in Linux-for-IA64

This has nothing to do with availability, or even per-unit cost. These machines were plagued with the same problem that Alpha servers had... they weigh in excess of 200lbs. Even as a hobby, for a free machine, I'm not paying shipping on that bastard.

Re:What's twice a small number? (1)

Tore S B (711705) | about 2 years ago | (#41954561)

Just as the AlphaServer had desktop equivalents, so did the Itanium - but those were discontinued.

Re:What's twice a small number? (1)

unixisc (2429386) | about 2 years ago | (#41941963)

HP had in the 90s acquired 2 VLIW companies - Multiflow and Cydrome - and already had a lead in VLIW compiler technology. Once they made the alliance w/ Intel, they had the grand vision of replacing both the x86 and PA-RISC lines w/ the successor Merced. As I pointed out above, leading RISC CPUs were already adopting MIMD techniques intrinsic to VLIW, while moving the dynamic analysis from the CPU to the compiler didn't save much CPU real estate, since they weren't using much to begin w/.

Yeah, Intel and HP were both short-sighted in not recognizing the importance of both Linux and BSD to the future success of Itanium. Initially, it was all focused on various projects, such as HP-UX, OSF/1, Solaris and so on. Oh, and then there was that SCO-led Project Monterey, which was supposed to merge UnixWare, AIX and some other Unixes into a single Unix for the Itanium. It never materialized. Linux would have been pretty happy to go on the Itanium had Intel and HP pushed it that way. As it is, when SGI moved from MIPS to Itanium, they simultaneously moved from Irix to Linux. This was the perfect opportunity to unify several Unixes so that the only differences would be in licensing - say, Monterey for proprietary software, FreeBSD and Linux for open source - and the CPU would have had a simple array of choices and taken off.

On the compiler front too, I agree w/ you. Intel should have worked w/ GCC, and much later, Clang, to ensure that more active development on their compilers happened outside the company. That would have enabled both Itanium and the various OSs to work synergistically w/ each other.

OVMS in the meantime could have continued to live on the Alpha, NonStop on MIPS, XP on x64 and Solaris on SPARC. Itanium need not have been targeted at platforms where it was ill suited.

Re:What's twice a small number? (1)

Anonymous Coward | about 2 years ago | (#41933973)

The current Xeon E5-2670 (8 core, 2.6GHz, 2012) can do roughly 4x the performance of the previous Itanium 9350 chip (4 core, 1.73GHz, 2010), according to spec.org CPU2006 benchmark results. I think Itaniums do slightly better with FP, the Xeons win with INT.

But that's per chip, and the Itanium systems are going up to 8 chips, so a single 8 socket Itanium system was getting roughly the same performance (in 2010) as a 2-socket E5-2670 in 2012. I don't think the Xeons go up to 8-sockets.

Re:What's twice a small number? (1)

petermgreen (876956) | about 2 years ago | (#41937417)

I don't think the Xeons go up to 8-sockets.

Intel do have Xeon processors that support 8-socket systems, and afaict at least HP and Supermicro make 8-socket Xeon solutions (I think HP sell them as fully-built servers while Supermicro sell them as a "barebones" system to which you add processors and drives yourself).

However afaict the processors that support 8-socket setups are both underwhelming (high core counts but low clock speeds, and still on Nehalem-era technology) and expensive compared to those for 2-socket systems.

Re:What's twice a small number? (1)

TemporalBeing (803363) | about 2 years ago | (#41936621)

My leaky/biased memory says these machines were a speed disappointment. Is this doubling going to make them faster or slower than an x86?

--dave

The big issue, IIRC, is that Itanium was dead slow at x86 emulation in the first few rounds. Intel's idea initially was to emulate x86 in software so that Itanium wouldn't lose them the x86 market and they could switch everyone over. They later went back, removed the software emulation, and put x86 hardware on the die to do the work in order to make it faster.

In native mode, I've never heard a complaint about Itanium and speed - only its x86 support mode.

Re:What's twice a small number? (1)

davecb (6526) | about 2 years ago | (#41945039)

I looked at some older TPC results, and see the previous Itanium delivering 4/7 the speed of the T5440, one of Sun's oldest threads-not-clock-speed boxes. Compared to IBM Power 7, Itanium delivered 4/10, so the doubling should bring it up to 80% of the IBM.

Not to be sneezed at! Nevertheless, not competitive with Power, Fujitsu (Sun) M series or even the new Sun T4 boxes.

--dave

Re:What's twice a small number? (1)

TemporalBeing (803363) | about 2 years ago | (#41956189)

I looked at some older TPC results, and see the previous Itanium delivering 4/7 the speed of the T5440, one of Sun's oldest threads-not-clock-speed boxes. Compared to IBM Power 7, Itanium delivered 4/10, so the doubling should bring it up to 80% of the IBM.

Not to be sneezed at! Nevertheless, not competitive with Power, Fujitsu (Sun) M series or even the new Sun T4 boxes.

One question this begs: were those TPC tests run on code optimized well enough for Itanium? Or was there another bottleneck besides the processor?

One of the early on issues with Itanium was that it was hard for the optimizers to get right. I think they solved that, but I don't know when.

And of course it is hard to make apples-to-apples comparisons between architectures unless you have a reference system where the only thing you change is the processor, and you verify that the code running on top is equally well optimized for it and for the rest of the reference hardware. There are many factors - something as simple as a bad network card could throw off the results if the test had to go over the network. I'm not familiar with the TPC tests, but I would assume they exercise processor, memory, network, and storage - not simply the processor - given that they are trying to measure the transactional performance of a system, which has little to do with the processor itself and more to do with the network, storage, and application being tested.

Re:What's twice a small number? (1)

davecb (6526) | about 2 years ago | (#41956319)

They're TPC results, so they are from the vendors, and optimized up the gazoo (:-)) --dave

Why waste a product run? (1)

Anonymous Coward | about 2 years ago | (#41933341)

From Intel's view as an innovation company, it kind of makes sense to try out new stuff on a platform that doesn't matter that much.
And since they know HP will buy them, Intel knows they will be field-tested.

The article misquotes facts (0)

Anonymous Coward | about 2 years ago | (#41933403)

If OOOE is out-of-order execution, Itanium does OOOE fine. It just expects the compiler to spell more of it out up front.

Re:The article misquotes facts (1)

Anonymous Coward | about 2 years ago | (#41933469)

By 'more', don't you mean 'everything'? I'm no Itanium expert, but I was under the impression that the compiler had to tell it precisely which instructions it could execute in parallel in any clock cycle.

We learned more than a decade ago that relying on the compiler to tell the CPU how to work was insane. I have very not fond memories of early RISC CPUs which didn't have any instruction interlocks so you had to order instructions to ensure a calculation would be complete by the time you read the result.
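To make that concrete, here is a toy sketch (Python, purely illustrative - real IA-64 bundles have templates, per-slot instruction-type constraints, and stop bits that can fall mid-bundle) of what "the compiler tells the CPU everything" amounts to: statically packing independent instructions into fixed-width bundles and forcing a new bundle whenever a data hazard appears.

```python
# Toy static bundler in the spirit of EPIC/VLIW scheduling.
# Each instruction is (destination register, list of source registers).
def bundle(instrs, width=3):
    bundles, current, dests = [], [], set()
    for dest, srcs in instrs:
        # A RAW or WAW hazard against an earlier instruction in this
        # bundle forces a "stop": close the bundle and start a new one.
        if len(current) == width or dests & (set(srcs) | {dest}):
            bundles.append(current)
            current, dests = [], set()
        current.append((dest, srcs))
        dests.add(dest)
    if current:
        bundles.append(current)
    return bundles

prog = [
    ("r1", ["r8"]),        # independent
    ("r2", ["r9"]),        # independent
    ("r3", ["r1", "r2"]),  # reads r1, r2 -> forces a new bundle
    ("r4", ["r10"]),
]
print(bundle(prog))
```

The hardware never re-derives these dependencies at run time; whatever parallelism the bundler failed to find is simply lost, which is the crux of the complaint above.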

Re:The article misquotes facts (1)

Dogtanian (588974) | about 2 years ago | (#41934771)

If OOOE is out-of-order execution, Itanium does OOOE fine. It just expects the compiler to spell more of it out up front.

We learned more than a decade ago that relying on the compiler to tell the CPU how to work was insane.

Correct me if I'm wrong, but IIRC one very major problem with Itanium was that Intel, having designed it around this philosophy, never properly implemented (or were able to implement) the compilers it relied on to do this.

Re:The article misquotes facts (1)

wisty (1335733) | about 2 years ago | (#41938407)

It would be nice for virtual machines. Branch prediction is where VMs shine. But I can't see Sun (the Java people) having a big interest in it, and LLVM wasn't so widely used when Itanium had mindshare.

Re:The article misquotes facts (0)

Anonymous Coward | about 2 years ago | (#41938557)

To make good use of EPIC, the compiler has to produce bundles of data-independent instructions. For Itanium pre-9500 that was 2 bundles of 3 instructions. You can add "stop bits" between instructions in a bundle that do have a data dependence, but that stalls at least one of the functional units (and usually more than that).

But this is actually not hard for a modern compiler to do, and it's no different from compiling for, say, a VLIW DSP. The problem is finding enough instructions to fill the bundles.

For the "typical" VLIW DSP this is not a big deal because of the application domain, which usually involves kernels with loops that can be unrolled and scheduled. For a general-purpose architecture it's not so easy. There was a paper once (I can't find it right now) showing that for a typical set of programs, the average number of instructions per basic block in the control-flow graph was about 3. Itanium's predicated execution model is the work-around for this: turn control dependencies into data dependencies to create longer basic blocks. And to avoid long latencies for loads and stores, memory ops can be speculated as well.

When you look at the ideas behind EPIC, they really make sense, and it's not difficult to understand why everyone was looking at it as the next great thing (people sometimes forget that IBM, SGI, Sequent, and many other computer manufacturers were all on board the Itanic in the late 1990s).

Really, the "expose everything" isn't Itanium's problem. The problem is finding enough instruction level parallelism to fill the bundles.
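The if-conversion trick described above is easy to sketch in Python (illustrative only - on real Itanium this is a cmp writing predicate registers plus predicated operations, not arithmetic): compute both arms unconditionally and let a predicate select the result, so the branch disappears and only a data dependence remains.

```python
# Branchy version: a control dependence -- nothing after the "if" can be
# scheduled until the branch direction is known.
def clamp_branchy(x, lo):
    if x < lo:
        x = lo
    return x

# If-converted version: both arms execute unconditionally and a predicate
# selects the result, leaving only a data dependence on p. This is the
# Python analogue of IA-64's compare + predicated moves.
def clamp_predicated(x, lo):
    p = x < lo                       # compare sets a "predicate"
    return p * lo + (not p) * x      # select: no branch anywhere

for x in (-5, 0, 7):
    assert clamp_branchy(x, 0) == clamp_predicated(x, 0)
```

The cost, of course, is that both arms always consume execution slots - which is exactly why you need wide issue and plenty of units to make it pay off.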

Re:The article misquotes facts (1)

turgid (580780) | about 2 years ago | (#41936611)

If OOOE is out-of-order execution, Itanium does OOOE fine. It just expects the compiler to spell more of it out up front.

They have clairvoyant [skepdic.com] compilers now, do they?

His name is Robert... (0)

Anonymous Coward | about 2 years ago | (#41933435)

Poulson.

Sorry just had to make that joke :)

Whatever happened to the shared Xeon/Itanium sockets anyway? I want to buy one of those secondhand and throw it in a Xeon board so I can start porting videogames to 'em :D

Thank you HP? (3, Interesting)

jandrese (485) | about 2 years ago | (#41933515)

I guess all of that money that HP has been dumping into Itanium development is finally paying off. Everybody else assumed Intel was just going to discontinue the product for obvious reasons, but here they are releasing a major upgrade to the core architecture. It still makes me wonder what HP sees in Itanium that makes them so gung ho about it though. Is it the vendor lock in? Is this upgrade enough to finally push Itanium past x86 based processors in some performance metric?

Re:Thank you HP? (2)

Desler (1608317) | about 2 years ago | (#41933745)

It's because they spent a shit ton of money porting software to it. They don't want to have to incur that cost again to port away.

Re:Thank you HP? (2)

Billly Gates (198444) | about 2 years ago | (#41933993)

Still who is going to buy it now?

Remember the Alpha? Slashdot ran on Alphas for 5 years. They had a new version out and it didn't matter. HP wanted Itanium, purposely made sure people wouldn't buy Alpha, and crippled the product line in favor of the inferior Itanium. Makes you wonder why they bought it?

After Windows 2000 dropped support in RC 3, it didn't matter. Who in their right mind would invest in a dead platform?

This new chip could be 20x faster than a Xeon and use 1/10th the power, and it still wouldn't matter. No one wants to invest in it and be dumped later by the likes of Oracle.

Re:Thank you HP? (1)

TheRaven64 (641858) | about 2 years ago | (#41937205)

OpenVMS and NonStop effectively only run on Alpha and a surprising number of companies have mission-critical software that works on one of these two platforms.

Re:Thank you HP? (1)

Billly Gates (198444) | about 2 years ago | (#41937421)

I wonder what management is going to do, or is doing already? I expect they are already underway replacing them. I doubt HP is porting them to x86 or ARM, as it may be too late for those that are retiring these in favor of Win32 or Linux equivalents - different applications that do the same tasks. It is not like you can get an emulator for these, but these are systems I would not want to invest a penny into anymore, as it would be a penny lost 3 years down the road when Intel stops production and I can no longer even get motherboards if one server fails.

HP has their heads up their ass if they do not have a migration plan at least so these can be run under virtual machines for decades to come.

Re:Thank you HP? (1)

tigersha (151319) | about 2 years ago | (#41944735)

If you think some brand name beige Linux box is going to replace a nonstop system do yourself a favour and come out of mom's basement.

Nonstop actually means what they say.

No. Stops.
Period.

Re:Thank you HP? (1)

unixisc (2429386) | about 2 years ago | (#41941981)

NonStop on MIPS, not Alpha

Re:Thank you HP? (1)

TheRaven64 (641858) | about 2 years ago | (#41942693)

Uh, I meant Itanium. Freud got me again - I'm still bitter about it killing a superior architecture through employing better sales drones.

Re:Thank you HP? (1)

unixisc (2429386) | about 2 years ago | (#41943589)

True, and then not being able to replace it. Now, that was lame!

Re:Thank you HP? (2)

Abalamahalamatandra (639919) | about 2 years ago | (#41934021)

Hell, on the OpenVMS side, it wouldn't shock me a bit to find out that they don't even HAVE a team any more that's capable of porting it to other architectures. They likely say they do, to fulfill government contracts that specify that OpenVMS can't be orphaned, but I wonder what the reality is.

Re:Thank you HP? (0)

Anonymous Coward | about 2 years ago | (#41935703)

2013 will be the year of the itanium server.

Re:Thank you HP? (1)

TheRealMindChild (743925) | about 2 years ago | (#41936727)

It still makes me wonder what HP sees in Itanium that makes them so gung ho about it though

The same thing Apple saw in MIPS

Re:Thank you HP? (1)

linatux (63153) | about 2 years ago | (#41940907)

If HP ditch Itanium, they effectively ditch HP-UX. They can (have?) ported HP-UX to x86, but why would anyone pay top $ for HP-UX on x86 - they would just use Linux instead. Without HP-UX, they don't have a tier 1 platform & will be drowned by Red Hat & SuSE.

Meanwhile, Intel is busy building the RAS features of Itanium into x86 - as these get implemented in Linux, HP-UX will become irrelevant anyway.

IBM & Power have a little more headroom - be interesting to see how long it lasts.

Too bad (1)

Billly Gates (198444) | about 2 years ago | (#41933965)

We already switched. ... ok a former customer I worked with already switched.

Thank you Oracle for convincing us that it is dead.

No one will touch it with a 10-foot pole. I hope HP wins the lawsuit against them, and that Intel also sues Oracle for damages. When Oracle violated that contract, it hurt a lot of people who had invested so much in Itanium.

Now it doesn't matter as no one will touch it.

The most important improvement... (4, Funny)

eap (91469) | about 2 years ago | (#41934003)

From TFA:

Poulson can issue 11 instructions per cycle compared to Tukwila's six.

These go to eleven.

You can still buy Itanium?!? (3, Funny)

vinn (4370) | about 2 years ago | (#41934397)

You can still buy Itanium chips? Holy crap. Are they found on the same aisle of the department store as the iceboxes and cotton gins?

Re:You can still buy Itanium?!? (0)

Anonymous Coward | about 2 years ago | (#41935217)

Yes. They are hard to spot though. They are on the left, just above the 22.5 and 67.5 volt batteries.

It's simple: (1)

Type44Q (1233630) | about 2 years ago | (#41934607)

This is just Intel putting on a show of competing with themselves so that they don't get accused of monopolistic behavior... :p

Behind x86 in process (1)

erice (13380) | about 2 years ago | (#41934897)

This is an announcement for a 32nm Itanium. Intel has been shipping 22nm x86 since spring.

Re:Behind x86 in process (0)

Anonymous Coward | about 2 years ago | (#41935745)

Notice that there still aren't any high-end 22nm Xeons. The only 22nm Xeon you can get is the Ivy Bridge E3-1230v2.

The high-end Intel chips don't appear to follow the same tick-tock cycle as Intel's desktop/laptop chips. Likewise for the really low-end, like Atom. Intel is still selling many of those at like 45nm, but they'll soon jump to 22nm.

Re:Behind x86 in process (1)

petermgreen (876956) | about 2 years ago | (#41937623)

Intel has been shipping 22nm x86 since spring.

It seems that in the Intel x86 world, the higher you move up the product line, the older the technology gets.

Intel's x86 processors right now are best grouped by the sockets they use. There are basically four "current" (that is, not yet replaced by a newer socket) sockets.

LGA1155 is the mainstream desktop and low-end single-socket server socket. This is the only socket for which 22nm parts are currently available.
LGA1356 is intended for low-end dual-socket systems, but I get the impression it didn't really catch on (Newegg lists 10 dual-LGA1356 boards and 34 dual-LGA2011 boards under "server motherboards"). Afaict it is currently using 32nm Sandy Bridge-based parts.
LGA2011 is used for high-end desktop parts and 1-4 socket servers. It is currently using 32nm Sandy Bridge-based parts.
LGA1567 is used for systems that need 8 sockets or insane amounts of RAM. It's also 32nm, but uses the older Westmere microarchitecture.

Re:Behind x86 in process (1)

unixisc (2429386) | about 2 years ago | (#41942001)

Probably want to fill up fab utilization.

It's easy to criticize Itanium in hindsight (1)

Larry_Dillon (20347) | about 2 years ago | (#41935183)

If it hadn't been for AMD's 64-bit extensions, we'd all be running Itanium servers right now. AMD forced Intel to release a 64-bit x86. If AMD hadn't, all of the effort that is being put into Intel's current 64-bit chips would have gone into Itanium, and it would be a very strong platform. The alternative, PAE, sucked.

Re: It's easy to criticize Itanium in hindsight (1)

Desler (1608317) | about 2 years ago | (#41935733)

Well, as long as you ignore that all the legacy x86 software that is still running today wasn't going to be ported to Itanium. People would have just stuck with x86 rather than spending billions on porting and re-buying working software.

Re: It's easy to criticize Itanium in hindsight (1)

TheRaven64 (641858) | about 2 years ago | (#41937217)

It was easy to criticise Itanium at the time, in comparison to Alpha, or PowerPC too. If we'd somehow all been forced to rewrite all of our legacy x86 code, either of these would have been a better choice. In fact, emulating x86 on PowerPC is a lot easier than on Itanium, so it would have been a more natural path if Intel had managed to kill x86. Lucky for them, they failed...

Sounds good. (0)

Anonymous Coward | about 2 years ago | (#41935639)

Sounds good.

The Itanium is still around? (1)

davydagger (2566757) | about 2 years ago | (#41935685)

https://www.youtube.com/watch?v=HLgQMtquS6Y

Do these chips make the user's face glow blue ? (1)

vikingpower (768921) | about 2 years ago | (#41936417)

I remember that in the "Hyperion" space opera, there is a "Poulsen" anti-age treatment. It has only one drawback: repeated applications of it make the beneficiary's face glow ever bluer. I wonder about these ones...

hothardware eh? (1)

turgid (580780) | about 2 years ago | (#41936511)

Sounds like the itanic all right.

They had one at LinuxExpo once, back in the day, allegedly running DeadRat, but we couldn't see it because it had overheated and they took it away.

Its name was Itanic Poulson... (1)

Nexion (1064) | about 2 years ago | (#41936819)

its name was Itanic Poulson... its name was...

Meanwhile Oracle... (0)

Anonymous Coward | about 2 years ago | (#41937565)

In Oracle's position, I'd continue to use the compiler they had and never bother to upgrade it. Static scheduling of instructions can offer extremely good performance, but if their current compiler assumes it can only execute 6 instructions at once, performance dies an agonizing death. This of course is assuming HP does get a judge to tell Oracle to continue Itanium support. I think this would be poetic justice, and would do a very good job of demonstrating why the lack of an on-chip scheduler is a Bad Idea(tm).

I keep an Itanium CPU at my desk (0)

Anonymous Coward | about 2 years ago | (#41940481)

..As a symbol of a $10 Billion investment loss. It throws some nice perspective on the occasional monetary loss around the office.
