
160 comments

Awesome! (3, Funny)

Wakko Warner (324) | more than 11 years ago | (#3532240)

Now I, and the other two IA64 users, will have some programs to run on our Linux-64 boxes!

Can someone please port nethack for us?

- A.P.

Re:Awesome! (0)

morbid (4258) | more than 11 years ago | (#3532263)

Poor you. Maybe you can swap it with someone for an electric storage heater or a gas central heating boiler? The latter would be cheaper to run.

64 bitness (-1)

AnonymousCowheard (239159) | more than 11 years ago | (#3532588)

As the owner of a DEC Alpha 21264B-based computer system, I can say that the programming methodology enjoyed on the 64-bit DEC Alpha platforms applies to Intel's IA64. In respect to the IA64 and the Alpha platform, it is generally not necessary for so many different programming methodologies to split the computer programming world. Rather, the programming environment should once again abstract the hardware and the underlying kernel software, in order for programs to remain portable to other platforms and operating systems in general. Splitting the programming industry by product, rather than by platform, is suicide for competitors, as the only companies able to survive are the ones that band together (i.e. Microsoft and Intel and Blizzard). At the same time, some companies take a very financially unethical approach to supporting other platforms in respect to supporting a given platform's adoption (id Software porting/releasing programs on Linux, Solaris, Alpha, et al).

So, the valid conclusion is that the only chance of industry adoption of Intel's IA64 platform is whichever market share supports it first (Linux!) or whichever company creates yet another hardware abstraction layer (Microsoft) to allow compatibility with software of previous platforms (8086, 80286, 80386/ia32, win16, win32, ia64, et al).

So, in respect to the DOJ's ability to squander its poor legal efforts, Intel's IA64 will prove another success of how Microsoft may monopolize an industry based on its position to make decisions that utterly control the success of other companies and corporations. I rest my case.

Re:Awesome! (1)

conway (536486) | more than 11 years ago | (#3532989)

This might be a huge surprise to you, but a very large percentage of Linux apps (over 90%) port to Linux/IA-64 without any modifications.
This is largely thanks to the fact that Linux already runs on 64-bit architectures -- Alpha, SPARC, etc. -- and most apps have been adapted to that already. There's not much conceptual difference, in the high-level programmer's view, between IA64 and any other 64-bit Linux platform.

damn skippy (-1, Offtopic)

Anonymous Coward | more than 11 years ago | (#3532246)

seconds

I'm a wacked out dude (0, Insightful)

Anonymous Coward | more than 11 years ago | (#3532258)

AND IT FEELS GOOD!

Major difference (1, Funny)

Anonymous Coward | more than 11 years ago | (#3532259)

The major difference between IA32 and IA64 is price.

There are a number of adjustments to make. (1, Funny)

Anonymous Coward | more than 11 years ago | (#3532267)

IA64 is twice as wide as IA32. Therefore, it will be necessary to remember to halve the size of all variables to compensate in your programs. Additionally, we now have to type twice as much for each command or function. It really sucks that we will no longer see Ms. Portman onscreen in Star Wars anymore. So, in conclusion, nuts to IA64: I'm sticking with my Athlon, thank you very much.

MOSIX + porting (3, Funny)

Ed Avis (5917) | more than 11 years ago | (#3532302)

Well obviously what we'll see next is a kernel extension that dynamically 'ports' all your applications to IA-64 and transparently migrates them to IA-64 machines elsewhere in the cluster. When Intel's next Great Leap Forward is released, you'll be able to transparently migrate to that as well. In fact it will be so transparent, you won't notice any difference and you can continue working at your 80286-based machine without any interruption.

Re:MOSIX + porting (2, Funny)

morbid (4258) | more than 11 years ago | (#3532322)

Last summer at the London Linux Expo I asked this HP reseller (who had a big itanic display) "what about legacy code?"

He replied, "16-bit code?"

I sighed and moved on...

Re:MOSIX + porting (2)

gr (4059) | more than 11 years ago | (#3532683)

Unlikely.

Migration of a running process, even when going between identical processors, is expensive. Going even to a similar processor would be more so. (And going from, say, a Sparc to a m68k is totally out of the question, not that you're suggesting that.)

It's *really* hard to justify a policy of process migration in a cluster except with extremely long-running, massively-parallel jobs. For most stuff, you'll waste less time just letting it finish. (GLUnix [berkeley.edu] does do process migration. Note that when you come back to a workstation that's been horfed by GLUnix, you'll be waiting about two minutes before you get your UI back.)

As for *starting* IA32 binaries on an IA64 processor, that's doable, but most cross-platform clustering systems function by keeping binaries for all their constituent processor types and having a hacked shell to convert PATH to the architecture-dependent path. (And by "most cross-platform clustering systems", I mean most that have been designed, since I know of none that work.)

Key difference (0)

Anonymous Coward | more than 11 years ago | (#3532310)

The key difference between IA32 and IA64 is not 32- or 64-bit technology. It is price/performance, as Intel is sadly aware.

Just learning assembly now (1)

El_Nofx (514455) | more than 11 years ago | (#3532313)

I figured this would be coming. I just started my first Assembly class as a CS undergrad: a whole new group of registers to memorise!

Re:Just learning assembly now (1)

Tower (37395) | more than 11 years ago | (#3532438)

I sincerely hope that your assembly class won't be using x86 (or any derivative thereof)... a nice 6811/68332/PowerPC would be far more useful as a learning tool without the cruft... PowerPC assembly is actually fun...

Re:Just learning assembly now (1)

El_Nofx (514455) | more than 11 years ago | (#3532494)

The instructor actually brought that up the first day. He said in the past there has been demand for a PPC version of the class, but since each platform has its own unique instruction set there would be no overlap, and you would just have to learn the language all over again for Intel coding. They mostly go on the demand of the market (they just switched their main taught language from C/C++ to Java), so they pretty much just teach the x86 version now.

I would have to agree with them that there would be a lot more demand for someone programming assembly on an Intel box than on a Mac.

I would say what we have done so far is fun, though. Any programming can be if you make it.

Re:Just learning assembly now (1, Funny)

Anonymous Coward | more than 11 years ago | (#3532658)

So, you learn x86 assembly and Java. I guess you'll do XOR in the long run, eh?

Re:Just learning assembly now (1)

Tower (37395) | more than 11 years ago | (#3532661)

True, it can all be fun. I think the register set of the PPC lends itself to some more creative solutions to some problems, and when you look at low level programming in assembly, much of the work is in the embedded space, where there are a *ton* of PowerPCs (and Motorola chips). I wasn't thinking as much about the PC/Mac situation.

Re:Just learning assembly now (2, Insightful)

Wildcat J (552122) | more than 11 years ago | (#3532666)

When I was in college, the only assembly programming we did was for MIPS. For our compiler project, we originally put out MIPS assembly and then retargeted it for the Sparc. I never once had to do any x86 assembly in school.

There's really not that much demand for any assembly in the industry at large. Even microcode is being done in high-level languages these days. I would wager that most of the people doing assembly coding now are in highly specialized fields, especially embedded programming. So, there isn't necessarily any more demand for x86 assembly programmers than for any other (possibly non-standard) architecture. In my opinion (and this is only opinion), while you should learn an assembly language in school to understand the basic building blocks, the choice of architecture isn't crucial. However, since it's not crucial to learn one or the other, I think they should stick with a simple one. x86 is kind of a mess; MIPS was easy to learn. As far as access to the hardware goes, there are simulators for most processors, which is sufficient for education.

-J

Re:Just learning assembly now (1)

drewness (85694) | more than 11 years ago | (#3532764)

I'm taking an assembler class myself right now. The real point of taking an assembler class anymore is to help you understand how computers work at a lower level, so you make better decisions programming in a higher-level language. Very few programs should need to have asm anymore. Even Linux kernel drivers are mostly written in C.

The x86 is an odd choice if that's the goal, because it's just kludge upon kludge trying to make an 8-bit processor be 16-bit, then 32, and now 64. I don't know any x86 asm, but I am told it is rather wonky and makes you jump through some hoops.

At OSU we are learning SPARC asm. When Sun went from 32 to 64 bit, I think that for the most part they just had to change all the register sizes to 64 bit, because it was designed with the future a little bit more in mind than the x86. I'm just taking a really basic class (it's actually called "Introduction to Computer Systems"), so we aren't going to deal with things like the differences between a SPARC and UltraSPARC, but like I said it is apparently an easy transition. I'd imagine that the PPC is probably easy too. (Both are 32-bit big-endian with the possibility of 64-bit in the future designed in, I think.)

What's the deal with IA64? (1)

ArchMagus (32772) | more than 11 years ago | (#3532314)

Isn't that the instruction set of the Itanium processor that isn't selling worth crap? I was under the impression that intel was going to eventually drop (or push to a back burner) support for this and go with x86-64 (the AMD 64 bit architecture being rolled out with the Opteron.)

Re:What's the deal with IA64? (1)

Cheeko (165493) | more than 11 years ago | (#3532342)

Hardly. HP and Intel are pushing full speed ahead with these. Supposedly there will be commercial systems by the end of the year. Also, if IA64 were to be pushed back, Intel would likely switch to its own 386-64 architecture, currently codenamed Yamhill, if I recall.

Re:What's the deal with IA64? (0)

Anonymous Coward | more than 11 years ago | (#3532407)

Stick a fork in it!

Intel can't stick with IA64 now that AMD is rolling out their 64bit chips. They'd just fall too far behind the curve.

After all, the IA64 chips are too expensive and too slow.

Re:What's the deal with IA64? (2, Interesting)

NanoGator (522640) | more than 11 years ago | (#3532433)

"Intel can't stick with IA64 now that AMD is rolling out their 64bit chips. They'd just fall too far behind the curve."

Yeah, I mean it's not like Intel knows how to develop chips or stay in business or anything.

Re:What's the deal with IA64? (1, Insightful)

Anonymous Coward | more than 11 years ago | (#3532899)

Look, IA64 and AMD's 64-bit instruction set are two very different things. One will succeed and one will fail; if the market doesn't dictate this, Microsoft will. The IA64 products may never reach the performance of the competing chips, and the price-to-performance ratio will NEVER touch that of the AMD 64 chips.

Give me one reason anyone will care about the IA64 chips if cheaper, faster 64-bit chips are already out.

IA64 is significantly more expensive than the problem it was trying to solve. Oops.

Re:What's the deal with IA64? (0)

Cheeko (165493) | more than 11 years ago | (#3532435)

IA64 and AMD's 386-64 don't even compete for the same market. One is a high-end chip to replace the big-iron RISC chips, while the other is a chip for low-end Intel servers that currently run on IA32 but could benefit from an increased address space.

Re:What's the deal with IA64? (1)

guacamole (24270) | more than 11 years ago | (#3532457)

The current generation of IA64 is not really meant for the general public. It is useful only for early adopters (that is, developers). We'll be able to tell whether IA64 succeeded or not a few years down the road, when it is somewhere in its third generation.

Re:What's the deal with IA64? (2, Informative)

Master Bait (115103) | more than 11 years ago | (#3532621)

Intel hasn't made any announcements about their Yamhill, and HPQ still seems to think that IA64 is a go. The new(!) Itanium II is supposed to make this pathetic architecture up to 50% faster. Then it will have integer-op performance comparable to today's fastest Celeron.

Look for Sun and/or IBM to be selling 8-way Hammer machines by this time next year, according to my Spirit Guides.

Even Better (0)

Anonymous Coward | more than 11 years ago | (#3532319)

Even more exciting is porting Linux to the N64 platform!

size_t (2, Informative)

$pacemold (248347) | more than 11 years ago | (#3532328)

Oh please.

return (char *) ((((long) cp) + 15) & ~15);

is not portable.

return (char *) ((((size_t) cp) + 15) & ~15);

is much better.

Re:size_t (1, Informative)

morbid (4258) | more than 11 years ago | (#3532426)

Sad, isn't it?
What he doesn't mention is that most Linux people have gcc, and last time I looked, the object code produced by gcc on IA64 ran at about 20% of the speed of the Intel compiler's. This isn't a criticism of gcc; it's just that the IA64 architecture is so different that you absolutely _must_ have the Intel compiler to get any performance out of it.

Re:size_t (1)

Bert64 (520050) | more than 11 years ago | (#3532715)

This is the case on every architecture; gcc massively underperforms compared to a vendor compiler. x86 is the architecture where the difference is the smallest, and it's still significant.

Re:size_t (1)

Karel Capek (409952) | more than 11 years ago | (#3532555)

Actually, you probably want to use ptrdiff_t

Re:size_t (1)

Brainchild (4234) | more than 11 years ago | (#3532789)

Actually, you probably want to use ptrdiff_t

No. ptrdiff_t is a signed type. cp is a pointer, and hence an unsigned type. size_t is the correct type to use for the typecast.

Re:size_t (0)

Anonymous Coward | more than 11 years ago | (#3532863)

There's no guarantee that intmax_t is large enough to store a pointer, much less size_t. If you want to do arithmetic on pointers to arbitrary data, use (char*).

The more things change ..... (3, Informative)

binaryDigit (557647) | more than 11 years ago | (#3532344)

Ah, porting to a homogeneous ISA but with a bigger word size. Funny how it's the same old issues over and over again. Structs change in size, bad assumptions about the size of things such as size_t, sizeof(void *) != sizeof(int) (though sizeof(void *) == sizeof(long) seems to be pretty good at holding true here), etc. Of course now there are concerns about misaligned memory accesses, which on IA32 were just a performance hit. Most IA32 types are not used to being forced to be concerned about this (of course many *NIX/RISC types are very used to this).

When things were shifting from 16 to 32 bit (seems like just yesterday, oh wait, for M$ it was just yesterday), we had pretty much the same issues. I never had to do any 8 -> 16-bit ports (since pretty much everything was either in BASIC, where it didn't matter, or assembler, which you couldn't "port" anyway).

Speaking of assembler, I guess the days of hand-crafting code in assembler are really going to take a hit if IA64 ever takes off. The assembler code would be so tied to a specific rev of EPIC that it would be hard to justify the future expense of doing so. It would be interesting to see what type of tools are available for the assembler developer. Does the chip provide any enhanced debugging capabilities (keeping writes straight at a particular point in execution; can you see speculative writes too)? It'd be cool if the assembler IDE could automagically group parallelizable (is that a word?) instructions together as you are coding.
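
A minimal sketch of the classic truncation bug described above (a hypothetical example, not from the article): on ILP32 the cast below is harmless, while on LP64 targets like Linux/IA-64 it silently drops the pointer's upper 32 bits.

#include <stdio.h>

int main(void)
{
    int x = 42;
    int *p = &x;

    int bad = (int) (long) p;    /* truncates on LP64: int is 32 bits */
    long good = (long) p;        /* long matches the pointer width here */

    printf("sizeof(int)=%zu sizeof(long)=%zu sizeof(void *)=%zu\n",
           sizeof(int), sizeof(long), sizeof(void *));
    printf("bad=%x good=%lx\n", (unsigned) bad, (unsigned long) good);
    return 0;
}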

Re:The more things change ..... (4, Informative)

CFN (114345) | more than 11 years ago | (#3532523)

Well, the days of hand-crafted assembly, except for a few special purposes, have long since passed. And no one expects assembly writers to be competitive with the compiler's ability to discover and exploit ILP.

But the example you mention won't actually cause assembly writers any problems: the code won't be tied to a specific version of EPIC.

The IA-64 assembly contains so-called "stop bits", which specify that the instruction(s) following the bit cannot be run in parallel with those before the bit.
Those bits have nothing to do with the actual number of instructions that the machine is capable of handling.
For example, if a program consisted of 100 independent instructions, the assembly would not contain any stop bits. Now the actual machine implementation might only handle 2 or 4 or 8 instructions at a time, but that does not appear anywhere in the assembly. The only requirement is that the machine respect the stop bits.

Now, you might question how it deals with load-value dependencies (i.e. load a value into a register, then use that register). Obviously, the load and use must be on different sides of a stop bit, but that alone would still not guarantee correctness. I'm not sure how IA64 actually works (and someone should reply with the real answer), but I imagine that either: a) loads have a fixed max latency, and the compiler is required to insert as many stop bits between the load and the use as needed to ensure correctness, or b) the machine will stall (like current machines).

Either way, the whole point of speculative loads is to avoid that being a problem.

Re:The more things change ..... (2)

binaryDigit (557647) | more than 11 years ago | (#3532692)

Actually my point was that coding in assembler usually implies coding for max performance, therefore you would maximize the number of parallel instructions for the particular version of EPIC you were targeting. That in turn would make your code either non-portable (going down in # of EUs) or non-optimized (going up in # of EUs).

I too would be interested in hearing about how the CPU handles the dependencies. The only modern "general purpose" CPU that I know of that _doesn't_ stall is the MIPS.

printf() (0)

Anonymous Coward | more than 11 years ago | (#3532345)


It's always bugged me that there's no portable way to print out most int-like datatypes.

I usually just cast them to long. So if I had a pid_t, I'd print it like this:

printf( "%ld\n", (long int)pid );

The way it *should* work, if I were king of the universe, would be:

printf( "%{pid}\n", pid );
printf( "%{uid_t}\n", getuid() );
etc.
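
The cast-to-long idiom above is still the pragmatic answer; C99 also added <inttypes.h> and the %j conversion for intmax_t, which covers most int-like types. A small sketch, assuming a C99 environment:

#include <inttypes.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = getpid();

    printf("%ld\n", (long) pid);               /* the classic portable idiom */
    printf("%jd\n", (intmax_t) pid);           /* C99: widest signed type */
    printf("%" PRIdMAX "\n", (intmax_t) pid);  /* same, via the PRI* macros */
    return 0;
}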

Re:printf() (1)

$pacemold (248347) | more than 11 years ago | (#3532423)


The way it *should* work, if I were king of the universe, would be:

printf( "%{pid}\n", pid );
printf( "%{uid_t}\n", getuid() );

etc.

#define MYPRINTF(fmt, var) myprintf((fmt),sizeof(var),(var))

Designing the rest of the API, writing the myprintf() function and dealing with macros with variable number of parameters is left as an exercise to the implementor.
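
One hedged sketch of that exercise, simplified to drop the fmt argument and dispatch purely on the argument's size (signedness, floating point, and strict va_arg type-correctness are deliberately ignored, so this is illustrative rather than robust):

#include <stdarg.h>
#include <stdio.h>

static void myprintf(size_t size, ...)
{
    va_list ap;
    va_start(ap, size);
    if (size <= sizeof(int))
        printf("%d", va_arg(ap, int));      /* chars and shorts promote to int */
    else if (size <= sizeof(long))
        printf("%ld", va_arg(ap, long));
    else
        printf("%lld", va_arg(ap, long long));  /* C99 long long */
    va_end(ap);
}

#define MYPRINTF(var) myprintf(sizeof(var), (var))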

Re:printf() (0)

Anonymous Coward | more than 11 years ago | (#3532877)

This is what iostreams are for. Encoding a type in a string offers nothing more than an opportunity to get it wrong.

Re:printf() (2)

descubes (35093) | more than 11 years ago | (#3533134)

The way it *should* work, if I were king of the universe, would be:

printf( "%{pid}\n", pid );
printf( "%{uid_t}\n", getuid() );
etc.


The way it *does* work in the little universe where I am the king [sf.net] is:

procedure Write(pid_t pid; others) is
    // Write "(pid) 1FEDDE" on output
    Write "(pid) ", HEX, pid as integer
    // Write other arguments
    Write others

procedure Write(uid_t uid; others) is
    // Write "(uid) 1FEDDE" on output
    Write "(uid) ", HEX, uid as integer
    // Write other arguments
    Write others

// Let's add a "WriteLn" capability
procedure WriteLn(others) is
    Write others
    Write NewLineCharacter

// And use it:
procedure Main() is
    var pid_t pid := GetPID()
    var uid_t uid := GetUID()
    WriteLn "Hello, PID=", pid, " and UID=", uid


This way is arguably better, because it's type safe, and easier on the users. Of course, since it's not Compatible With C, it will never be used by anybody :-(

The number of bits is like the length of your dick (0)

Anonymous Coward | more than 11 years ago | (#3532354)

After you reach a certain point longer isn't better, it becomes inconvenient. I'm sticking to my 32-inch architecture thank you.

Re:The number of bits is like the length of your d (0)

Anonymous Coward | more than 11 years ago | (#3532466)

I'm running on a very fat little 8-bit machine...

i386 not designed for servers? (3, Interesting)

Ed Avis (5917) | more than 11 years ago | (#3532355)

From the article:
Back in the early '80s, nobody at Intel thought their microprocessors would one day be used for servers; the inherent architecture of the i386 family shows that clearly.
That's funny, I thought that the i386 was specifically designed to run MULTICS, which was the very definition of a 'server' operating system (computing power as a utility, like water and electricity). The early 80s was the time Intel designed the i386 wasn't it?

Re:i386 not designed for servers? (0)

Anonymous Coward | more than 11 years ago | (#3532518)

By the '80s Multics was very dead. It was in the history section of my college OS textbook. Unix was very much on people's minds then. OS/2 had a bad start because its version 1 did not have a GUI, but I think that was the late '80s.
Was the 386 designed as a server chip? I doubt it. Servers were not a big thing then. I think the 386 was supposed to be a workstation chip to take on Sun and Apollo.

Re:i386 not designed for servers? (3, Interesting)

Russ Steffen (263) | more than 11 years ago | (#3532540)

What's really funny is that I have an Intel propaganda book for the "brand new 80386." It spends two whole chapters talking about how the 386 is the perfect CPU for LAN servers. Of course, it also had to spend almost that much space describing what a LAN is and what a server might do, since very few people had ever heard of a LAN at that point, much less had one.

Re:i386 not designed for servers? (0)

Anonymous Coward | more than 11 years ago | (#3532710)

Yeah! The first servers I ever saw (I saw older ones after that, but these were the first I saw) were IBM PS/2 i386+FPU machines with, I *think*, 8 or even 16MB RAM... those pretty towers.

Re:i386 not designed for servers? (1)

Ed Avis (5917) | more than 11 years ago | (#3533064)

I have one of those at home, an ex-Midland Bank fileserver; I put Linux on it. Although it now has an IBM Blue Lightning CPU instead of the original 386 processor, and more than the original 12 megs of memory.

Re:i386 not designed for servers? (2)

david duncan scott (206421) | more than 11 years ago | (#3532761)

Yeah, and the early 80's was also when Honeywell stopped Multics development. FWIW, I'd describe MULTICS as a "timesharing" OS, rather than "server", which to me implies "client".

Re:i386 not designed for servers? (3, Interesting)

hey! (33014) | more than 11 years ago | (#3533007)

386 designed for Multics? I doubt it. Running Multics on a 386 would be like scoring Beethoven's Ninth for a kazoo.

Multics was pretty much tied to its unique mainframe hardware, with loads of weird addressing and virtual memory management features that would never have fit in the paltry 275,000 transistors of the 80386. Also, at the time (1985) Multics was a legacy system; Unix was seen as the operating system of the future, in particular because it was portable to microprocessors and didn't require much special hardware.

Debian on the IA64 (5, Informative)

hereward_Cooper (520836) | more than 11 years ago | (#3532364)

Debian is already ported to the IA64 -- not sure about the number of packages ported yet, but I know they intend to release the new 3.0 (woody) with an IA64 port.

See here [debian.org] for more details

Re:Debian on the IA64 (3, Informative)

BacOs (33082) | more than 11 years ago | (#3532575)

From #debian-ia64 on irc.openprojects.net [openprojects.net]:

Topic for #debian-ia64 is 95.70% up-to-date, 96.07% if also counting uploaded pkgs

There are over 8000 packages for i386 (the most up to date architecture) - ia64 currently has about 7650 or so packages built

More stats are available at buildd.debian.org/stats/ [debian.org]

PA-RISC and IA32 Native Execution (3, Interesting)

morbid (4258) | more than 11 years ago | (#3532373)

In the article he mentions that itanic can execute IA32 code _and_ PA-RISC code natively, as well as its own, but these features will be taken away sometime in the future.
Does anyone remember the leaked benchmarks that showed the itanic executing IA32 code at roughly 10% of the speed of an equivalently-clocked PIII?
I wonder how it shapes up on PA-RISC performance?
It has to offer some sort of advantage over existing chips, or no one will buy it.
On the other hand, maybe its tremendous heat dissipation will reduce drastically when they remove all that circuitry for running IA32 and PA-RISC code.
Which leads me to think, why didn't they invest the time and money in software technology like dynamic recompilation, which Apple did very successfully when they made the transition from 69k to PPC?

Re:PA-RISC and IA32 Native Execution (0)

Anonymous Coward | more than 11 years ago | (#3532926)

You mean 68k to PPC.

Re:PA-RISC and IA32 Native Execution (0)

morbid (4258) | more than 11 years ago | (#3532979)

Indeed I do :-)
My eyesight and tryp[ing ain;t what they used to nbe :-)

Re:PA-RISC and IA32 Native Execution (2)

descubes (35093) | more than 11 years ago | (#3533050)

In the current Itanium, only user-space IA-32 instructions are implemented with hardware assistance. Since this is essentially microcode, this is not too fast. The architecture specifies how the instructions work, which IA-64 registers they use to store IA-32 registers, etc. But the whole thing can be implemented in firmware or software in future revisions of the chip.

IA-64 machines also offer firmware emulation of IA-32 system instructions. This allows you, in theory, to boot an unmodified IA-32 OS. I've never used it myself, however.

Last, the PA-RISC support is a piece of software integrated in HP-UX. There's no help from the hardware, except numerous design similarities (IA-64 began its life as HP PA-Wide Word). So you won't be able to run PA-RISC Linux binaries on IA-64 Linux any time soon...

Re:PA-RISC and IA32 Native Execution (2)

NovaX (37364) | more than 11 years ago | (#3533083)

Actually, the IA-64 instruction set is based on PA-RISC, as it is the next generation of that architecture. Various projects designing processors with high levels of ILP were conducted at HP, blooming into the partnership between HP and Intel (who had been floating around an idea of a 64-bit x86 architecture, but received a poor response) that created IA-64. HP-UX developers have stated that only minor changes must occur to port an application, and have created what equates to a shell process that converts a PA-RISC instruction directly into its IA-64 counterpart.

So, PA-RISC is native by design. The x86 instructions were tacked on; originally they were supposed to be an entire processor, but that proved to be too costly. You have to remember that x86 is hardly needed, as it's mostly important for developers porting and testing applications, and for Microsoft to run 'legacy' applications. McKinley has a newer design that should boost x86 performance substantially. If extra is needed, I'm sure something similar to Sun's x86 PCI card will be devised.

As to heat and the rest, taking out the x86 would help, of course. From what I've heard, the control logic on current IA-64 chips is actually smaller than that of the Pentium 4, which was the point of the architecture: simplify. Simplifying meant spending more time on higher-level logic rather than on OOO techniques, etc., that could be done via software. The chip is so large due to *lots* of cache.

Anyways, a few good links are:
here [209.67.253.150] and here [clemson.edu].

Why can't i386 assembler be used? (3, Insightful)

Ed Avis (5917) | more than 11 years ago | (#3532385)

From the article:
Quite obviously, inline assembly must be rewritten from scratch.

I don't see what is so obvious - isn't one of the selling points of Itanium its backward i386 compatibility? Even if running the 64-bit version of Linux it should still be possible to switch the processor into i386-compatible mode to execute some 386 opcodes and then back again. After all, the claim is that old Linux/i386 binaries will continue to work. Or is there some factor that means the choice of 32 bit vs 64 bit code must be made process-by-process?

Interesting question: which would run faster, hand-optimized i386 code running under emulation on an Itanium, or native IA-64 code produced by gcc? They say that writing a decent IA-64 compiler is difficult, and I'm sure Intel has put a lot of work into making the backwards compatibility perform at a reasonable speed (if not quite as fast as a P4 at the same clock).

Re:Why can't i386 assembler be used? (4, Interesting)

NanoGator (522640) | more than 11 years ago | (#3532471)

" isn't one of the selling points of Itanium its backward i386 compatibility?"

If I remember clearly, the 386 instructions are interpreted instead of being on the chip. That means that those instructions will execute a lot slower. It would work, but it wouldn't work well. It's nice because you could transition to IA64 now and wait for the new software to arrive.

Personally, I don't think that selling point is that worthwhile, but I'll let Intel do their marketing without me.

Re:Why can't i386 assembler be used? (1)

$pacemold (248347) | more than 11 years ago | (#3532543)

> I'm sure Intel has put a lot of work into making the backwards compatibility perform at a reasonable speed

:)

Look up what happened when:

1. 80286 was emulating 8086 in protected mode
2. Pentium Pro was running 16-bit code

Re:Why can't i386 assembler be used? (2)

Ed Avis (5917) | more than 11 years ago | (#3533089)

The PPro sucked at running 16-bit code - because at the time it was designed, Intel didn't anticipate that people would _still_ be running 16-bit stuff in the mid-90s - but the next iteration, the Pentium II, was better. I wonder if McKinley is expected to give a boost to legacy code compared to Itanic.

Re:Sparc and Alpha ahead of Itanium (0)

Anonymous Coward | more than 11 years ago | (#3532618)

If you want a pure 64-bit environment, then go with SPARC or Alpha. AMD has a 64-bit version too. Intel has an advantage in keeping the i386 opcodes: it makes it easier to continue using 32-bit code, but there is a performance hit. If you need to use 32-bit, then go with Itanium or AMD; if you need the raw power of 64-bit, then go with SPARC or Alpha. I cannot see any reason to code 32-bit when the 64-bit Itanium is released to the desktop, unless you have legacy hardware that needs the support. My two cents: pure 64-bit would be best served on your database servers that need the raw power to crunch all that data. You can still run your office apps off a 32-bit server, as there would be little performance gain from pure 64-bit. Remember, it's performance that matters, and what you are going to use the server or workstation for figures into the hardware. CAD developers would love pure 64-bit, and they have been using 64-bit SPARC for years, because Intel bites on performance when crunching data like floating point, etc.

Re:Why can't i386 assembler be used? (1)

Slashamatic (553801) | more than 11 years ago | (#3532696)

It is an interesting comparison to look at what Digital did to get people from the VAX to the Alpha. They had a sophisticated binary translator, and for low-level code where you had the source, VAX assembler could be compiled.

The end result is that it wasn't too difficult to move architectures, even though the Alpha does not know the VAX instruction set and no interpreter was provided.

The only gotcha is that Digital had to provide some special extra instructions to implement some primitives used by the OS, such as interlocked queues.

Intel is primarily a hardware company, so they would tend to ignore software solutions, but the one-architecture approach kept the Alpha from getting too complicated.

Re:Why can't i386 assembler be used? (3, Informative)

iabervon (1971) | more than 11 years ago | (#3532811)

Changing modes for a single assembly block is not going to work. All of your data is in IA-64 registers, the processor pipeline is filled with IA-64 instructions, and so forth. Switching is a major slowdown (might as well be another process), and the point of having sections in assembly is to speed up critical sections.

In any case, what makes it difficult to write an IA-64 compiler is taking advantage of the things that the new instruction set lets you tell the processor. It's not hard to write code for the IA64 that's as good as some code for the i386. It's just that you won't get the benefits of the new architecture until you write better code, and the processors aren't optimized for running code that doesn't take advantage of the architecture.

Re:Why can't i386 assembler be used? (2)

Ed Avis (5917) | more than 11 years ago | (#3533076)

If the entire critical loop is in assembler (not just a small part of it) then it could be worth switching. Although based on what another poster wrote, it sounds like the emulation is so lousy that no matter how suboptimal gcc's code generation is, native code would still win.

Re:Why can't i386 assembler be used? (4, Informative)

Chris Burke (6130) | more than 11 years ago | (#3532838)

I don't see what is so obvious - isn't one of the selling points of Itanium its backward i386 compatibility?

Yes. Compatibility. Nothing more. Your old apps will run, but not fast. It's basically a bullet point to try to make the transition to Itanium sound more palatable.

Or is there some factor that means the choice of 32 bit vs 64 bit code must be made process-by-process?

It is highly likely that the procedure to change from 64 to 32 bit mode is a privileged operation, meaning you need operating system intervention. Which means the operating system would have to provide an interface for user code to switch modes, just so a small block of inline assembly can be executed. I highly doubt such an interface exists (ick... IA-64 specific syscalls).

Interesting question: which would run faster, hand-optimized i386 code running under emulation on an Itanium, or native IA-64 code produced by gcc?

An interesting question, but one for which the answer is clear: gcc will be faster, and by a lot. Itanium is horrible at 32-bit code. It isn't designed for it, it has to emulate it, and it stinks a lot at it.

They say that writing a decent IA-64 compiler is difficult, and I'm sure Intel has put a lot of work into making the backwards compatibility perform at a reasonable speed (if not quite as fast as a P4 at the same clock).

Writing the compiler is difficult, but a surmountable task. And your surety does not enhance IA-64 32-bit support in any way. It is quite poor, well behind a P4 at the same clock, and of course at a much lower clock. Even with a highly sub-optimal compiler and the top-notch x86 assembly, you're better off going native on Itanium.

Re:64 BIT Assembler Project (0)

Anonymous Coward | more than 11 years ago | (#3533051)

Gotcha! Coders, instead of pissing about how hard it is to write a good compiler, why do we not start the 64-bit Assembler Project? It would not take much time to get some code out for testing. SourceForge could host this project, and perhaps IBM, Sun, Intel, HP, AMD, Red Hat, SuSE, Turbo, Mandrake, Caldera, etc. would provide support for such a project. This way there would be a good 64-bit compiler that had some agreed standards that would allow porting of code.

Re:Why can't i386 assembler be used? (1)

n0ano (148272) | more than 11 years ago | (#3533054)

isn't one of the selling points of Itanium its backward i386 compatibility

The article was referring to inline assembly in the kernel code. The IA32 compatibility built into the IA64 CPU is strictly for user mode; all system functions are executed in IA64 mode. Although it would be technically possible to enter kernel mode, switch to the IA32 instruction set, exec some IA32 code and then switch back, in practice this is infeasible. The IA32 code would be using different data structures, and it couldn't call any of the kernel internal routines without somehow finding a way to switch from IA32 to IA64 mode and back on each subroutine call.

The problems of mixing IA32 and IA64 code, especially inside the kernel, are just too difficult and provide little benefit. For these reasons the Linux/IA64 team decided not to support this.

Has anyone thought of... (0, Offtopic)

Ben Edwards (579847) | more than 11 years ago | (#3532492)

It's an old frustration I've had with Windows having to do with the time it takes to boot. Why can't they put Windows on an EPROM chip (perhaps on the motherboard, perhaps on a card) so that the OS is all in hardware? Booting would be so much faster.

Has anyone thought of doing this with Linux?

Re:Has anyone thought of... (-1)

AnonymousCowheard (239159) | more than 11 years ago | (#3532778)

WTF are you tempting Microsoft into doing? Do you actually want to have embedded Microsoft Windows XP? Maybe Microsoft could embed Windows XP in a toaster oven, where normal operation means that inserting bread causes Windows XP to load, cook your bread, and finally eject your toast; constituting a true MS blue-screen-of-death system hang. True from Bill Gates' mouth, and I quote,

"Who uses their toaster oven longer than 25 days anyway?"

Well, actually you have quite a good idea. Linux is stable after it ejects your toast, so maybe it's better the operating system just hangs itself, i.e. turns off, instead of wasting all that time with uptime. Then again, the same could be said about websites that don't use their bandwidth because of non-popularity, yet run Linux and provide 99.9% uptime. Wow, I never imagined saying this... by Microsoft Windows crashing every week or two, Microsoft is in fact saving people money on their electricity bill.

Great Idea!!

Re:Has anyone thought of... (1)

ocelotbob (173602) | more than 11 years ago | (#3532965)

Has anyone thought of doing this with Linux?
Of course. [lanl.gov] They're mostly used in clustering situations, but they are definitely out there.

NULL barfage (3, Informative)

dark-nl (568618) | more than 11 years ago | (#3532507)

The examples he gives for usage of null pointers are both wrong. When a null pointer (whether written as 0 or NULL) is passed to a varargs function, it should be cast to a pointer of the appropriate type. See the comp.lang.c faq [eskimo.com] for details. The relevant questions are 5.4 and 5.6. But feel free to read them all!
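
Concretely, this is the execl() case from the FAQ: execl() takes a NULL terminator in its variable argument list, and if NULL is defined as a plain 0, the terminator may be passed as an int rather than as a pointer on platforms where the two differ in size, such as IA-64. The cast makes it safe everywhere:

#include <unistd.h>

/* Wrong on some 64-bit ABIs if NULL expands to plain 0:
 *     execl("/bin/sh", "sh", "-c", "date", NULL);
 * Always correct: the cast forces a pointer-sized argument. */
int run_date(void)
{
    return execl("/bin/sh", "sh", "-c", "date", (char *) NULL);
}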

How is that different from a PPC? (3, Interesting)

jmv (93421) | more than 11 years ago | (#3532531)

A while ago, I tried compiling and running my program (http://freespeech.sourceforge.net/overflow.html) on a Linux PPC machine and (to my surprise) everything went fine. Does that mean that it should work on ia64 too since (AFAIK) both are big-endian 64-bit architectures?

Re:How is that different from a PPC? (2)

Gothmolly (148874) | more than 11 years ago | (#3532586)

No, because as he says in the article, IA64 is little endian.

Re:How is that different from a PPC? (0)

Anonymous Coward | more than 11 years ago | (#3532713)

No, IA64 is technically bi-endian. Although it wouldn't surprise me if Intel boxes only really work as little-endian.

HP-UX wants to be big-endian, and that's why Itanium is not a fixed-endian system.

Tom

Re:How is that different from a PPC? (2)

gr (4059) | more than 11 years ago | (#3532628)

Whatchew talkin' 'bout, Willis?

PowerPC is 32-bit and IA64 is little endian.

Duh?

Re:How is that different from a PPC? (2)

jmv (93421) | more than 11 years ago | (#3532740)

PowerPC is 32-bit and IA64 is little endian.

(After a quick check) It does seem like the PowerPC is a 64-bit chip (though maybe Linux uses it as 32-bit for some operations). Also, both PPC and Itanium can act as big-endian or little-endian.

Re:How is that different from a PPC? (1)

sagi (314445) | more than 11 years ago | (#3532784)

Actually, the poster was right - the regular PowerPC is 32-bit.

From http://penguinppc.org/intro.shtml:
There are actually two separate ports of Linux to PowerPC: 32-bit and 64-bit. Most PowerPC cpus are 32-bit processors and thus run the 32-bit PowerPC/Linux kernel. 64-bit PowerPC cpus are currently only found in IBM's eServer pSeries and iSeries machines. The smaller 64-bit pSeries and iSeries machines can run the 32-bit kernel, using the cpu in 32-bit mode. This web page concentrates primarily on the 32-bit kernel. See the ppc64 site for details of the 64-bit kernel port.

Re:How is that different from a PPC? (2)

gr (4059) | more than 11 years ago | (#3532944)

Extremely few PowerPC processors are 64-bit.

Certainly none you're likely to be compiling software on with any kind of regularity. (By which I mean: Apple's never sold a 64-bit processor. ;^>)

Star Wars post (0)

Anonymous Coward | more than 11 years ago | (#3532545)

But I was going to Tosche Station to pick up some power converters!

IASixtyTroll (0)

Anonymous Coward | more than 11 years ago | (#3532562)

This is a troll post. There are thousands more like it, but this one is mine.

No FP in kernel? (1)

d-rock (113041) | more than 11 years ago | (#3532595)

When I was reading the article, the part about no floating point in the kernel stuck out for me. Is this an absolute, or a "don't do it, it's bad"? I looked at the Mosberger presentation on the IA-64 kernel, and it looked like they were using some of the FP registers for internal state, but it didn't look like all of them.

Derek

Re:No FP in kernel? (1)

T-Punkt (90023) | more than 11 years ago | (#3532886)

It's for performance reasons, I guess; NetBSD does the same for quite a few of its ports (e.g. the m68k and PowerPC ones). The kernel does nearly no floating-point calculations, and if you do the few the kernel does need with soft-float, avoiding the floating-point instructions, you manage to keep the contents of the FP registers unchanged. So there's no need to save and restore them when the CPU switches between user and kernel mode (syscalls etc.). Storing/loading n floating-point registers to memory for every syscall is quite expensive, you know.

This of course is not necessary if the CPU has two (at least partly) different sets of FP registers for kernel (supervisor, privileged, ...) and user (unprivileged) mode, or an instruction to quickly exchange (parts of) the FP register sets. (SPARCs have this to some degree with their concept of register windows.)
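
The usual way kernel code sidesteps the FPU entirely is fixed-point integer arithmetic; the Linux load average is computed this way (see FSHIFT/FIXED_1 in the kernel headers). A minimal sketch of the idea, with illustrative names that are not actual kernel API:

/* 1.0 represented as an integer with 11 fractional bits. */
#define FSHIFT  11
#define FIXED_1 (1UL << FSHIFT)

/* used/total as a fixed-point fraction: integer ops only, no FPU.
 * Assumes used is small enough that used * FIXED_1 doesn't overflow. */
static unsigned long fixed_ratio(unsigned long used, unsigned long total)
{
    return (used * FIXED_1) / total;
}

/* For display, split into whole part and hundredths:
 *   whole = r >> FSHIFT;
 *   frac  = ((r & (FIXED_1 - 1)) * 100) >> FSHIFT;
 */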

Re:No FP in kernel? (4, Informative)

descubes (35093) | more than 11 years ago | (#3532917)

There are two reasons:

1/ The massive amount of FP state in IA-64 (128 FP registers). So the Linux kernel is compiled in such a way that only some FP registers can be used by the compiler. This means that on kernel entry and exit, only those FP registers need to be saved/restored. Also, by software convention these FP registers are "scratch" (modified by a call), so the kernel need not save/restore them on a system call (which is seen as a call by the user code).

2/ The "software assist" for some FP operations. For instance, FP divide and square root are not completely implemented in hardware (it's actually dependent on the particular IA-64 implementation, so future chips may implement them). For corner cases such as overflow, underflow, infinities, etc., the processor traps ("floating-point software assist" or FPSWA trap). The IA-64 Linux kernel designers decided not to support FPSWA from the kernel itself, which means that you can't do an FP divide in the kernel. I suspect this is what is most problematic for the application in question (a load balancer doing FP computations probably has some divides in there...)

XL: Programming in the large [sf.net]

Re:No FP in kernel? (2)

plastik55 (218435) | more than 11 years ago | (#3533094)

The kernel interrupt handlers don't bother to save the state of the FP registers, mainly for performance reasons. That means if you use FP in the kernel you'll probably fubar any user-space process that's using the FPU.

It's not specific to IA64 or Linux; PPC and IA32 also work this way, and Windows does the same thing. You can get around it, possibly, by inlining some assembly which saves and restores the FP registers before and after you use them. You need to be careful that the kernel won't switch context or go back to userland while you're using FP registers; preemptive kernels make this much harder.


However, there really aren't many reasons why you would want to use FP in the kernel in the first place. Real-time data acquisition and signal processing is the only example that comes to mind, but you'd be better off using something like RTLinux in that case.

Power consumption with no IA32 & PA support (0)

Anonymous Coward | more than 11 years ago | (#3532600)

The Itanium consumes quite some power compared to, say, the Hitachi SH4, ARM/Intel XScale, etc.

In the link, Moshe Bar writes that Itanium has hardware support for both the IA32 legacy (I knew that) and HP's PA-RISC architecture (new to me). Does anyone know how much less power the Itanium would consume if those were dropped?

Since the IA32 core in Itanium is slow anyway, how much slower would it be to use software emulation like Apple did (i.e. emulating an MC680x0 with the PowerPC CPU)?

See, Linux is not dying (0)

Anonymous Coward | more than 11 years ago | (#3532687)

Even though Linux is blamed for setting back the state of computing by 10 years, reinventing the VM and networking stack which work much better in FreeBSD than in Linux, Linux is still being ported to new platforms. Linux is like cockroaches -- once you get them, you never get rid of them.

Will 64 bit chips ever make it? (3, Interesting)

00_NOP (559413) | more than 11 years ago | (#3532735)

When I started messing about with computers, 8-bit chips were standard on the desktop and 4-bit in the embedded sphere.

Within four years 16-bit was the emerging standard for the desktop, and four more years after that, 32-bit was emerging.

In the 12 years since then, well...

32-bit rules in both the desktop world and in the embedded world. Can someone tell me why we aren't on 128-bit chips or more by now? Why do 64-bit chips not make it? Is this a problem of the physics of mobos, or what?

Re:Will 64 bit chips ever make it? (5, Insightful)

Chris Burke (6130) | more than 11 years ago | (#3533114)

It's really not that complicated.

While 4-bit and 8-bit chips were cool and all, no one really thought they were -sufficient-. The limitations of an 8-bit machine hit you in the face, even if you're coding fairly simple stuff. 16 bits was better but, despite an oft-quoted presumption suggesting otherwise, that as well was clearly not going to work for too long.

Then, 32 bits came around. With 32-bit machines, it was natural to work with up to around 4 GB of memory without any crude hacks. Doing arithmetic on fairly large numbers wasn't difficult either. The limitations of the machine were suddenly a lot farther away. Thus it took longer for those limitations to become a problem. You'll notice that for those spaces where 4GB was a limiting factor the switch to 64 bits happened a long time ago. The reason we are hearing so much about 64 bits now is that the "low end" servers that run on the commodity x86 architecture are getting to the point where 4GB isn't enough anymore. Eventually I imagine desktops will want 64 bits as well. I've already got 1.5GB in the workstation I'm typing this on.

When will 128 bit chips come about? I don't know, but I'm sure it will take longer than it will take for 64 bits to become mainstream. The reason is simple: Exponential growth. Super-exponential, in a way. 64 bits isn't twice as big as 32 bits, it's 2^32 times bigger. While 2^32 was quite a bit of ram, 2^64 is really, really huge. I won't say that we'll never need more than 2^64 bytes of memory, but I feel confident it won't be any time soon.

An interesting end to this: At some point, there -is- a maximum bit size. For some generation n with a bit size 2^n and a maximum memory space of 2^2^n you have reached the point where you could use the quantum state of every particle in the universe to store your data, and still have more than enough bits to address it. Though this won't hold true if, say, we discover that there are an infinite number of universes (that we can use to store more data). Heh.
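
To put a hedged number on that endpoint: a common back-of-the-envelope figure for the particles in the observable universe is around 10^80, which is roughly 2^266. A machine of generation n can address 2^(2^n) bytes, and that first clears 2^266 at 2^n = 512, since 2^512 is about 10^154 while 2^256 is only about 10^77. So by this argument the progression 32, 64, 128, 256, ... would top out around the 512-bit generation.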

Porting applications. (1)

Bert64 (520050) | more than 11 years ago | (#3532746)

Surely porting applications written in C and other high-level languages shouldn't be difficult. In that respect Itanium is nothing new: a 64-bit little-endian architecture... Alpha, anyone?
64-bit machines have been commercially available for at least 10 years; you'd think coders would have got used to writing 64-bit-clean software by now.

It's pretty cool... (2)

Time Doctor (79352) | more than 11 years ago | (#3532797)

nvidia [nvidia.com] already has drivers out [nvidia.com] for Linux/IA64 with some of their higher end cards (quadro line).

Re:It's pretty cool... (0)

Anonymous Coward | more than 11 years ago | (#3532908)

Note that you're probably screwed if you bought an affordable NVIDIA card. And you deserve it for knowingly doing business with a vendor that ships proprietary drivers instead of supporting (or at least documenting) what you bought.

Re:It's pretty cool... (2)

Time Doctor (79352) | more than 11 years ago | (#3532999)

Not that any commercial games are compiled for IA64 yet.

You might try getting to know the facts before posting, even as a coward.

IA64 to be a success? (-1, Troll)

dtjohnson (102237) | more than 11 years ago | (#3533055)

--from the article...
"Although the initial acceptance of Itanium-based servers and workstations has been slow, there is little doubt that it will eventually succeed in becoming the next-generation platform."

...as soon as it overcomes poor performance, high cost, poor initial acceptance, Sledgehammer, a lack of applications, and a very strange name.