Slashdot: News for Nerds

Despite Aging Design, x86 Still in Charge

Zonk posted more than 7 years ago | from the king-of-the-hill dept.

Technology 475

An anonymous reader writes "The x86 chip architecture is still kicking, almost 30 years after it was first introduced. A News.com article looks into the reasons why we're not likely to see it phased out any time soon, and the history of a well-known instruction set architecture. 'Every time [there is a dramatic new requirement or change in the marketplace], whether it's the invention of the browser or low-cost network computers that were supposed to make PCs go away, the engineers behind x86 find a way to make it adapt to the situation. Is that a problem? Critics say x86 is saddled with the burden of supporting outdated features and software, and that improvements in energy efficiency and software development have been sacrificed to its legacy. And a comedian would say it all depends on what you think about disco.'"


475 comments

English is 700 years old (4, Funny)

athloi (1075845) | more than 7 years ago | (#18587741)

It should be replaced with Esperanto when we all upgrade to Vista.

Re:English is 700 years old (-1, Offtopic)

Hoi Polloi (522990) | more than 7 years ago | (#18587799)

The horse and buggy was good enough for me and my grandpappy so it should be good enough for you!

Re:English is 700 years old (4, Funny)

bWareiWare.co.uk (660144) | more than 7 years ago | (#18588477)

If 8086 was a horse, then x86_64 would have sixteen legs and be capable of mach 3.

Re:English is 700 years old (0, Offtopic)

Anonymous Coward | more than 7 years ago | (#18587933)

The difference is English is actually _somewhat_ sensible, with at least the basics of grammar that even a child can learn in school.

X86, by contrast, is nonsensical instruction decoding baggage on top of a RISC these days. It's wasting silicon space, adding cost, wasting power, hurting performance (that's why there's an instruction decoding _cache_ these days). Why can't compilers just go straight to the RISC microcode level?

unfortunately (1)

game kid (805301) | more than 7 years ago | (#18588033)

Multilingual User Interface packs only come with Vista Ultimate. Oh, how I hate when a language strengthens monopoly power through such evil, costly means!

Re:English is 700 years old (2, Funny)

SighKoPath (956085) | more than 7 years ago | (#18588065)

when we all upgrade to Vista
So you mean... never?

Re:English is 700 years old (-1, Flamebait)

MarkByers (770551) | more than 7 years ago | (#18588351)

What's wrong with Vista?

I hate all the Linux fanboys on Slashdot. Just try Vista. It's not that expensive. If you can afford a computer, you can afford a copy of Vista.

Re:English is 700 years old (1)

Rosonowski (250492) | more than 7 years ago | (#18588443)

Not true. I paid extremely little for my hardware. Most of it was broken when I bought it, but I'm patient, and at least half-decent with a soldering iron when I need to be. (Capacitor replacements being the biggest one)

Re:English is 700 years old (0, Redundant)

Xabraxas (654195) | more than 7 years ago | (#18588555)

I'm a Linux user and I have used Vista, and I don't like it. There are definitely some improvements over XP, but it is buggier than XP and lacks compatibility with a lot of software and hardware. When Vista stabilizes in a year or two, and drivers and software are more abundant, it will be a better operating system than XP - but I still won't use it over Linux. I haven't seen anything that would make me switch.

Re:English is 700 years old (4, Informative)

Yst (936212) | more than 7 years ago | (#18588331)

Modern English is about 750 years old. English is at least 1550 years old. Tradition is to trace the English presence in Britain to the quasi-historical Anglo-Saxon incursions of the mid-5th century, but migration almost certainly preceded military confrontation. The starting point for the English language (and the Old English era) is the introduction of a continuous Anglic presence to Britain. And that linguistic heritage, termed English, begins at least 1550 years ago.

Re:English is 700 years old (0)

Anonymous Coward | more than 7 years ago | (#18588627)

And you started a sentence with a conjunction. That's just not good English!
People like you make Nazis look good.

Re:Engrish English (1)

micromuncher (171881) | more than 7 years ago | (#18588553)

www.engrish.com

'nuff said.

Let me guess... (4, Insightful)

Anonymous Brave Guy (457657) | more than 7 years ago | (#18587763)

A News.com article looks into the reasons why we're not likely to see it phased out any time soon

I'm going to go with:

  1. Installed base.
  2. Installed base.
  3. Installed base.

Did I miss anything?

Re:Let me guess... (5, Funny)

Half a dent (952274) | more than 7 years ago | (#18587793)

4. ???
5. Profit

Re:Let me guess... (4, Funny)

morgan_greywolf (835522) | more than 7 years ago | (#18587821)

Did I miss anything?


I think you forgot to mention installed base.

Re:Let me guess... (5, Funny)

precize (83096) | more than 7 years ago | (#18588013)

The one time "All your base are belong to us" is actually an on-topic comment

Weeellll there's also: (5, Insightful)

anss123 (985305) | more than 7 years ago | (#18588219)

4. Price / performance. A segment the x86 has done well in.
5. Security. Will my x86 progs be supported in 20 years? The answer: yes.
6. Availability. Hmm... Intel, I'd like to order 1,000,000 CPUs. Intel: Sure thing.
7. Good will. What should we buy, Intel or PPC? PPC? What's that? Go Intel! Yes boss. (Just look how far Itanium got on Intel's name alone.)

:D

5 and 6 are (1)

wiredog (43288) | more than 7 years ago | (#18588613)

Installed Base...

Re:Let me guess... (4, Informative)

leuk_he (194174) | more than 7 years ago | (#18588323)

"There's no reason why they couldn't ditch 60 percent of the transistors on the chip, most of which are for legacy modes."

I think 50% of the transistors on a modern CPU are cache; you could call that legacy stuff. But the 60% figure makes no sense. The real, seldom-used legacy instructions are implemented in microcode [wikipedia.org] and get little optimization effort. And the microcode does not take THAT much space on a CPU.

Some sources:
Cpu die picture, est 50% = cache [hexus.net]
P6 takes ~ 40% for compatibility reasons [arstechnica.com] . And as the total grows, the percentage should DECREASE, not INCREASE. If the amount grows it is for performance reasons, not compatibility reasons.

However, when you consider that the source is XenSource's Chief Technology Officer, it is not surprising that backwards compatibility gets that much attention. A main reason virtualization exists is to keep older platforms running compatibly.

Re:Let me guess... (1)

rolfwind (528248) | more than 7 years ago | (#18588431)

If Linux goes mainstream (and this is a great possibility in many countries outside Europe/US), there is less to tie it to the x86 family, as many things can just be recompiled.

But I don't really bet on the x86 being supplanted soon - even Intel couldn't do it. However, I don't see it lasting forever either.

When the gains from other designs are really an order of magnitude greater than the current design, people will migrate. So far, other prospects were better, but only on the same scale - nothing outrageously better.

Re:Let me guess... (2, Insightful)

ooze (307871) | more than 7 years ago | (#18588505)

The x86 dominance is basically a result of two crooked architectures holding each other up: if MS-DOS weren't so crappy that it depends on x86, then the processor could be changed. If x86 weren't too crappy to emulate properly, then MS-DOS or its successors could be changed. As it is, we are stuck with both, because no one wants to change both at the same time, and you cannot really change each independently.
There is something I hope for:
Vista tanks mightily, and OS X and its successors become the dominant OS in 10 years. OS X is instruction-set agnostic, has been proven to run on multiple platforms with pretty little effort, and at the moment runs on three different instruction sets: x86, POWER, and ARM on the iPhone. Linux runs everywhere too. And as soon as you have this, there is no reason not to drop the most expensive-to-develop-for and least efficient architecture.
But as long as people still use MS operating systems, we will be stuck with x86 and have to pay the price ... the energy price.

The X86 is a pig. (2, Insightful)

LWATCDR (28044) | more than 7 years ago | (#18587785)

The X86 ISA is a mess. It is a total pig. It is short on registers and it was just an unpleasant ISA to use from day one.
The problem is that it is a bloody fast and cheap pig that runs a ton of software and has billions or trillions of dollars invested in keeping it useful. I am afraid we are stuck with it. At least the X86-64 is a little better.

Re:The X86 is a pig. (2, Interesting)

Hoi Polloi (522990) | more than 7 years ago | (#18588061)

I don't know squat about processor design and I'm risking abuse but anyway...

In this day and age of multi-core CPUs, why not have a processor with a X64 ISA core and a core with the desired architecture. Let them run in parallel like 32/64 bit compatible CPUs. Old software would run on the X64 cpu and newer software or updated versions could run on the newer core. Maybe this could provide a crutch for the PC world to modernize over time.

Re:The X86 is a pig. (5, Insightful)

fitten (521191) | more than 7 years ago | (#18588311)

Already been done, didn't catch on (see Itanium).

Because there is such a massive amount of installed x86 software base that you'd be throwing away silicon. To be sure that software ran on the most systems possible, software would still be written for x86 and not the 'desired' architecture.

That being said, OSS tends to have good inroads in that you get all the source so can recompile to whatever architecture you want. However, since x86 is still the huge marketshare, other architectures get less attention. Also, all of the JIT languages (Java, C#, etc.) make transitioning easier IF you can get the frameworks ported to a stable environment on the 'desired' architecture.

The main problem is that there is *so* much legacy code in binary (EXE) format only (the source code for much of it has literally been lost) that can be directly tied to money. There are systems with so much momentum that changing platforms would require extreme amounts of money: reverse-engineering the current system, complete with its quirks and oddities, rewriting it, and (here is a big part that many people fail to add in) retesting and revalidating it. Many companies don't want to spend that kind of money to replace something that 'works'.

There's so much work/time/effort invested in x86 now that it's hard to jump off that train. AMD's x86-64 is a good approach in that you can run all the old stuff and develop on the new at the same time with few performance penalties. However, I don't know if we'll ever be able to shrug off the burden of x86.... at least not for a long time to come. It'd take something truly disruptive to divert from it (and what people are currently envisioning as quantum computing is not that disruption).

Re:The X86 is a pig. (3, Interesting)

kestasjk (933987) | more than 7 years ago | (#18588499)

In this day and age of multi-core CPUs, why not have a processor with a X64 ISA core and a core with the desired architecture. Let them run in parallel like 32/64 bit compatible CPUs.
Because that would use very valuable die real estate. These days x86 is already converted into micro-ops - effectively another instruction set altogether, one that can be more easily reordered to be made more efficient.

Basically x86 isn't a perfect instruction set for today's landscape, but then again UNIX isn't a perfect operating system for today's landscape; that doesn't mean it's not still very good and we shouldn't praise those who have made it so good.
Some say plan9 has a better design than Linux, some say that PPC has a better design than x86, but apparently design isn't everything.

Lots of things could be better if we could get everyone to migrate from what they currently use, but would it be worth it in this case? I don't think so, at least not until we reach the limits that better design & hardware can do.
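The micro-op conversion mentioned above can be sketched as a toy decoder. This is purely illustrative (a made-up three-tuple instruction form, nothing like real hardware): a read-modify-write instruction such as `add [counter], rax` gets cracked into simpler load/op/store micro-ops that an out-of-order core could schedule independently.

```python
# Toy sketch of CISC -> micro-op "cracking". Illustrative only; the
# instruction representation and micro-op names are invented for this example.
def crack(instruction):
    """Split a memory-destination instruction into load/op/store micro-ops."""
    op, dst, src = instruction
    if dst.startswith("["):              # memory destination: read-modify-write
        addr = dst.strip("[]")
        return [
            ("load", "tmp", addr),       # tmp <- mem[addr]
            (op, "tmp", src),            # tmp <- tmp OP src
            ("store", addr, "tmp"),      # mem[addr] <- tmp
        ]
    return [instruction]                 # register-only ops pass through as-is

uops = crack(("add", "[counter]", "rax"))
print(uops)  # [('load', 'tmp', 'counter'), ('add', 'tmp', 'rax'), ('store', 'counter', 'tmp')]
```

Once everything is in this uniform micro-op form, the scheduler no longer cares how ugly the original x86 encoding was.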

Re:The X86 is a pig. (0)

Anonymous Coward | more than 7 years ago | (#18588519)

Multicore is only easy if the cores are identical.

Re:The X86 is a pig. (2, Interesting)

phunctor (964194) | more than 7 years ago | (#18588091)

Yabbut... the ISA gets turned into a plasma of pico-ops, which then dispatch, somewhat out of order, on the Real ISA (which changes from each "x86" to the next "x86"). It doesn't really matter how fugly the ISA *was* as long as the Real ISA is apt for keeping the ALUs well fed.

It's convenient to have a consistent interface layer, and the gate count cost of the translation is asymptotically zero. It makes writing good optimizing compilers for "generic x86" all but impossible, but fortunately the final levels of optimization are done in real time in the plasma processor. It's actually a pretty cool approach to squeezing as much parallelism as possible out of non-parallel code, given a transistor budget in the neighborhood of 1e8.

--
phunctor
+/- epsilon on the details...

Re:The X86 is a pig. (1)

$RANDOMLUSER (804576) | more than 7 years ago | (#18588233)

Yabbut....

push bp          ; save caller's frame pointer
mov bp,sp        ; establish this function's stack frame
<function body>
pop bp           ; restore caller's frame pointer
ret              ; return to caller

Has to be fetched from main memory, decoded and executed, no matter what happens internally to the CPU.

Re:The X86 is a pig. (1)

gr8_phk (621180) | more than 7 years ago | (#18588321)

Agreed. And a bunch of idiots are going to point out that nobody actually implements it directly. x86 instructions are "translated" on the fly to whatever RISC type processor is actually doing the work - or some such. They'll claim it doesn't matter what the ISA is any more because of this capability. There are two problems with these arguments. 1) it takes circuitry and power to break down crappy instructions into nice ones. 2) the inefficient encoding takes more space - this requires extra unwanted instruction cache (circuitry and power).

I'm not so sure... (1)

anss123 (985305) | more than 7 years ago | (#18588457)

Even RISC processors, like the fabled G5, have decode stages these days (i.e. translating instructions). I speculate that separating the inner workings of the CPU from the ISA simplifies the design somewhat.

Re:The X86 is a pig. (4, Interesting)

afidel (530433) | more than 7 years ago | (#18588597)

Actually the encoding is VERY efficient where it matters most: cache density and limiting the number of calls to main memory. Having complex instructions helps in the areas where real-world performance is most hurt, and that is why we have a CISC front end to an efficient RISC back end. This balance was reached even in the "RISC" camp; look at the PPC970, with its more complex instructions that get broken down into uops and dispatched to execution units - very similar in many ways to how modern x86 processors work. The translation layer is less than one percent of die space and probably a much lower percentage of power usage on modern x86 chips.

Re:The X86 is a pig. (1)

parvenu74 (310712) | more than 7 years ago | (#18588355)

My understanding is that modern processors don't run x86 natively either, but are doing highly optimized translations of x86 instructions on the fly. The path for this way of doing things was blazed by the likes of Transmeta and HP. Read Ars Technica's CPU theory and praxis articles [arstechnica.com] for more information.

Re:The X86 is a pig. (0)

Anonymous Coward | more than 7 years ago | (#18588439)

Intel/AMD could create a new ISA to be run in parallel with x86 decoders. The payoff in performance or marketability is obviously not there or they'd already have done it.

x86 is the VHS of computing.

lock in (4, Insightful)

J.R. Random (801334) | more than 7 years ago | (#18587817)

The x86 instruction set will be retired in the same year as the QWERTY keyboard layout.

Re:lock in (1)

softwave (145750) | more than 7 years ago | (#18588103)

I'm using azerty keyboard layout, you insensitive clod!

It's too damn economical to stop (0, Redundant)

MasterGwaha (1033282) | more than 7 years ago | (#18587877)

...using it right now.

Simple! (4, Insightful)

VincenzoRomano (881055) | more than 7 years ago | (#18587887)

Just like the four-stroke engine: it's not the best one, and it can be greatly enhanced and made better, but it's still here.
And just like the four-stroke engine, modern engines still burn gasoline and push the car forward. That is where the similarity with the original engines ends.

Re:Simple! (5, Insightful)

Wite_Noiz (887188) | more than 7 years ago | (#18588075)

I've heard loads of metaphors about why x86 will be around for years to come, but none of them really hold.
An engine is a black box - petrol in, kinetic energy out (simply) - whereas the architecture of a processor is not.

AMD and Intel can make as many additions to x86 as they like, but if they stop supporting the existing instruction set, they'll sell nothing.

I'm sure Linux would be compiled on to a new architecture overnight, but I doubt MS would move any time soon - and their opinion holds a lot of weight on the desktop.

RISC ftw!

Re:Simple! (1)

tom17 (659054) | more than 7 years ago | (#18588113)

And just like the four-stroke engine, modern engines still burn gasoline and push the car forward. That is where the similarity with the original engines ends.
Am I reading you wrong? Most modern engines *are* 4-stroke engines...

Re:Simple! (2, Funny)

Quasicorps (897116) | more than 7 years ago | (#18588437)

I think he means one of those newfangled three-stroke engines that are all the rage.

Re:Simple! (5, Insightful)

smenor (905244) | more than 7 years ago | (#18588677)

Am I reading you wrong? Most modern engines *are* 4-stroke engines...

I think that's the point, actually.

If we were going to start over and design the best way to extract usable power from gasoline from the ground up, we could probably do better than the 4-stroke, just like we could do better than the x86 ISA, and just like we could do better than LCDs for flat panel displays.

The problem is that, if you take an intrinsically inferior technology, and spend years upon years optimizing it, it will have such a head start that it is almost impossible for a newer, 'better', technology to compete.

Does it matter? (4, Interesting)

MBCook (132727) | more than 7 years ago | (#18587927)

At this point, does it matter as much? As we move on, the future is clearly x86-64, which is MASSIVELY cleaned up compared to x86. Sure, at this point we still boot into 8086 mode and have to switch up to x86-64, but that's not that important; it only lasts a short while.

As we move off of x86 onto -64, are things really still that bad? Memory isn't segmented, you have like 32 different registers, and operands aren't tied to particular registers (all add instructions must use AX, or something like that) the way they were for some 16/32-bit instructions.

Of course, we should have used a nice clean architecture like 68k from the start, but that wasn't what was in the first IBM.... and we all know how things went from there.
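The register tying mentioned above can be illustrated with a sketch in NASM-style syntax (`num` is a hypothetical variable; this is a rough illustration, not production code):

```nasm
; 16-bit x86: several instructions are hard-wired to particular registers.
mov ax, [num]      ; the dividend must sit in AX
cwd                ; sign-extend AX into DX:AX -- DX is implicit
idiv bx            ; quotient is forced into AX, remainder into DX

; x86-64: most ALU operations accept any of the 16 general-purpose registers.
mov r10, rdi
imul r10, rsi      ; two-operand imul works on an arbitrary register pair
```

The 64-bit extensions kept the implicit-register forms for compatibility, but compilers mostly generate the flexible encodings.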

Re:Does it matter? (1, Funny)

Anonymous Coward | more than 7 years ago | (#18588069)

Funny, my old Alpha from 1998 had all that.

I wonder why it took you Intel lovers nearly 10 years to catch up to what I was using 10 years ago?

Re:Does it matter? (1)

swb (14022) | more than 7 years ago | (#18588211)

Funny, my old Alpha from 1998 had all that.

I wonder why it took you Intel lovers nearly 10 years to catch up to what I was using 10 years ago?
If it was so great, why is HPaq phasing them out?

Re:Does it matter? (0, Redundant)

Zo0ok (209803) | more than 7 years ago | (#18588249)

Considering your low Slashdot-ID you should know ;)

Re:Does it matter? (1)

Lockejaw (955650) | more than 7 years ago | (#18588619)

If it was so great, why is HPaq phasing them out?
Because it doesn't run Windows?

Re:Does it matter? (1)

prefect42 (141309) | more than 7 years ago | (#18588471)

Don't forget MIPS.
R4000, 64bit 100MHz in 1991, and with oodles of registers (32?).

Re:Does it matter? (1)

mgiuca (1040724) | more than 7 years ago | (#18588513)

I don't think we (the population of the Earth at large) can be considered "Intel lovers". We're just using the only viable architecture.

The funnyman above correctly pointed out that it's all about installed base. The current engineers are doing their best to migrate this architecture up to a clean implementation. While it's true that Alpha had all of this 10 years ago, there hasn't yet been a 64-bit architecture WITH the installed base. It's very important.

Re:Does it matter? (3, Insightful)

Zo0ok (209803) | more than 7 years ago | (#18588161)

And since the 386 consisted of 275,000 transistors while modern CPUs have more than 200 million transistors, the cost/waste of backwards compatibility with the 386 is very little.
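A back-of-envelope check of that point, using the transistor counts quoted above (the modern figure is rough):

```python
# Rough estimate: the fraction of a modern die an entire 386 core would occupy.
# Counts are the approximate figures quoted in the comment above.
i386_transistors = 275_000          # Intel 386 (1985)
modern_transistors = 200_000_000    # a mid-2000s desktop CPU, roughly

legacy_fraction = i386_transistors / modern_transistors
print(f"{legacy_fraction:.4%}")     # prints 0.1375%
```

Even if legacy support cost several times a whole 386, it would still be well under one percent of the transistor budget.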

Re:Does it matter? (1)

MBCook (132727) | more than 7 years ago | (#18588469)

Yes but it is being used less and less. No one really uses the 16 bit support and such in Linux. In the future even the 32 bit support will be used less. When MS drops compatibility at some point (they can't keep going forever) they can put in a software emulation layer. The demand to run 8 and 16 bit DOS programs won't keep being worth it forever. When that happens, after a few years it will be possible to start dropping those portions of the chip since they are so little used and we have emulators at this point (like BOCHS) that could take over for running 286 code.

Re:Does it matter? (1)

renoX (11677) | more than 7 years ago | (#18588345)

>you have like 32 different registers,

16 integer registers, not 32!

Re:Does it matter? (0)

Anonymous Coward | more than 7 years ago | (#18588539)

16 integer registers + 16 floating point registers = 32 registers.

Re:Does it matter? (1)

TomRC (231027) | more than 7 years ago | (#18588551)

R0 - R15, xmm0 - xmm15 Yep, that's 16 registers!
(FP stack doesn't count, MMX is dead)

Re:Does it matter? (0)

Anonymous Coward | more than 7 years ago | (#18588417)

You claim that x86-64 is "massively cleaned up" compared to x86 but aren't sure of how many registers there are or that "operands aren't tied to registers, or something like that"?

However, I agree about the 68k (it will always have a warm spot in my heart) :)

Re:Does it matter? (1)

MBCook (132727) | more than 7 years ago | (#18588703)

I don't do assembly programming as a job. I've read about all the architectures and their internal structures, the improvements that x86-64 brought, etc. But I code Java for a living. I just have no need to remember those exact numbers. I know from experience and reading that -64 increased the number of registers. I remember trying my hand at assembly programming YEARS and YEARS ago and there were certain instructions that could only operate on certain registers, you couldn't use whatever you wanted (like with a 68k). All registers were not equal, where I believe with the -64 they mostly are (for integer stuff).

Re:Does it matter? Less than it did (3, Interesting)

RetiredMidn (441788) | more than 7 years ago | (#18588565)

Good points all.

I would add to this that ISA mattered a lot more when I wrote code in assembly language. For a clean (and simple) instruction set architecture, I fondly remember the PDP-11 [wikipedia.org] . Later on, the 680x0 offered more powerful addressing modes for less simplicity (and consistency). Compared to both, the x86 was infuriating to work with.

ISAs still mattered, but less, in my early "C" days when source-level debugging was less robust - even just to understand what the compiler was turning my code into so I could figure out where to optimize.

Today, it hardly matters at all. Looking at generated code tells me little about how the processor with multiple execution units is going to process it; it is necessary to trust the compiler and its optimization strategy. It matters even less with interpreted or JIT'd languages, where the work eventually performed by the processor is far removed from my code. Knowing what's happening at runtime involves much more important factors than the ISA.

Aging? (0, Redundant)

nurb432 (527695) | more than 7 years ago | (#18587951)

The architecture sucked when it was first introduced.

Just shows you what good marketing can accomplish with garbage.

If it ain't broke, don't fix it (5, Insightful)

InsaneProcessor (869563) | more than 7 years ago | (#18587987)

Yes, the instruction set is old, but it does still work. As a consumer, why should I have to re-invest in software that I purchased and that does the job, just because my hardware failed or faster hardware became available and I upgraded? Apple bit that one some time ago. Last year, I had an investment of $4000.00 in software when Intel came out with a significantly faster part that was dropping in price. Just by upgrading my hardware (cost: $800), my investment improved significantly. Spending $4800.00 would not have justified the upgrade, but the low cost of hardware alone did. Also, there was no learning curve involved.

You don't buy a new car just because the tires need replacing (well, some people do, but that is rarely the fiscally responsible thing).

If it ain't broke, it doesn't need fixing.

Re:If it ain't broke, don't fix it (2, Funny)

richdun (672214) | more than 7 years ago | (#18588217)

You don't buy a new car just becuase the tires need replaceing (well some people do, but that is rarely the fiscally responsible thing).

I hate to use a car analogy, but yeah. Cars have changed tremendously over the past 50+ years, but all in all, they're still four tires attached to two axles, with a transmission converting power from the engine into rotational energy in the axles, and a cabin on top with seats and a single driver's wheel, pedals, and control area. All of those components have seen upgrades, but the "basic architecture" has remained the same. Sure, there might be a better way to do a car, and concept vehicles look nice and all, but if you radically change the car, no matter how great and "better" it is, what kind of market share would Apple and the PowerPC^H^H^H^H those new "better" cars get? People will resist, not because what they have is best or even better, but because it is different; economically, marginal upgrades in each generation are far cheaper than one giant upgrade in a single generation.

ESR may disagree.. (0)

jcarter (726183) | more than 7 years ago | (#18587991)

Come the revolution, the x86 will be the first against the wall!! Eh.. maybe.

Here's a paper by Eric S. Raymond describing his (and a couple of friends') reasons for believing that there very much is a revolution in hardware coming soon to a technological infrastructure near you.. as soon as next year.

http://www.catb.org/~esr/writings/world-domination/world-domination-201.html [catb.org]

Re:ESR may disagree.. (1)

jcarter (726183) | more than 7 years ago | (#18588031)

Erk.. irrelevant. My bad.

Re:ESR may disagree.. (0)

kad77 (805601) | more than 7 years ago | (#18588313)

Your link was pure drivel, written by children. "Punctuated Equilibrium: Stability through network effects"? Linux must dominate the world OS market by 2008-- because the next platform transition won't be until 2050?

These people need treatment for the abuse of crack-cocaine, or lack thereof.

News Flash (-1, Troll)

Anonymous Coward | more than 7 years ago | (#18588009)

*** NEWS FLASH ***

People Still Using x86 Processors !!

WOW !! Thank goodness I obsessively refresh Slashdot every 10 seconds, otherwise I may not have known this.

Anything 10 times better? (3, Insightful)

PineHall (206441) | more than 7 years ago | (#18588021)

It has been said that people will not change unless something is perceived to be 10 times better. The problem is that nothing has been perceived to be that much better, so people stay with what they know.
Paul

It's hairy to emulate, too (5, Interesting)

kabdib (81955) | more than 7 years ago | (#18588041)

Things would be a lot easier if the darned thing wasn't so bloody complex to emulate. I mean if we were "stuck" with (say) an ARM or even a 68K we'd be able to use virtual machines to dig ourselves out of a similar architectural hole (though with an ARM we'd be unlikely to want to).

The x86 has so many modes of operation (SMM, real/protected, lots of choices for vectorizing instructions, 16/32/64 bit modes) and special cases that it's a pretty big project to get emulation working correctly (much less fast). You're pretty much stuck with a 10x reduction clock-for-clock on a host. Making an emulated environment secure is hard, too; you don't necessarily need specialized hardware here (e.g., specialized MMU mapping modes), but it helps.

And now, with transistor speeds bottoming-out, they want to go multicore and make *more* of the things, which is exactly the opposite direction that I want to go in... :-)

Re:It's hairy to emulate, too (1)

InsaneProcessor (869563) | more than 7 years ago | (#18588137)

Blame Micro$oft.

Re:It's hairy to emulate, too (1)

Zo0ok (209803) | more than 7 years ago | (#18588337)

How do you mean? Virtual PC for Apple/PPC emulated x86 quite well. I think a 500MHz PPC processor was roughly able to emulate a 350MHz Pentium-equivalent processor.

Emulating a CISC architecture on a RISC architecture is not that hard. The other way around is much harder - you can't very well emulate a PPC/SPARC/MIPS on an x86 computer. There you would suffer the 10x clock-for-clock reduction.

Re:It's hairy to emulate, too (1)

John Nowak (872479) | more than 7 years ago | (#18588535)

The other way around is much harder - you cant very well emulate a PPC/SPARC/MIPS on a x86-computer. Then you would suffer 10x clock-for-clock reduction.

Except I'm doing it now on OS X and it works fine. 60% speed penalty at most.

This says it all for me: (3, Insightful)

FredDC (1048502) | more than 7 years ago | (#18588077)

If a chipmaker declared its chip could run only software written past some date such as 1990 or 1995, you would see a dramatic decrease in cost and power consumption, Crosby said. The problem is that deep inside Windows is code taken from the MS-DOS operating system of the early 1980s, and that code looks for certain instructions when it boots.
 
Even new software might (and often does) use the so-called old instructions. If you want to completely redesign the hardware you would also have to completely rewrite the software from scratch as you would not be able to rely on previously written code and libraries. This is simply not feasible on a global scale...

Re:This says it all for me: (2, Insightful)

stevey (64018) | more than 7 years ago | (#18588405)

That isn't entirely true. Sure, code might exist in the wild which uses old instructions, but it wouldn't need to be rewritten - just recompiled with a suitable compiler. (Ignoring people who hand-roll assembly, of course! And whether the source still exists is an entirely separate issue.)

However, with all the microcode on board chips these days, it should be possible to emulate older instructions. Provided Intel can persuade compiler writers to deprecate certain opcodes, the situation should essentially resolve itself in a few years.

Re:This says it all for me: (0)

Anonymous Coward | more than 7 years ago | (#18588451)

Only if you used assembly language. The vast majority of software would just need a recompile, not a rewrite.

Re:This says it all for me: (1)

je ne sais quoi (987177) | more than 7 years ago | (#18588511)

I beg to differ. As an example of how things should go, I hold up Apple, who switched their OS relatively seamlessly when it became apparent there was a better chip to use. If these old archaic instructions are still in Vista, it's proof that monopolistic practices really do hold back progress. If the marketplace were functioning as it should, such a hideous beast of a program as Windows would have been replaced long ago.

Re:This says it all for me: (1)

mlk (18543) | more than 7 years ago | (#18588683)

You would have to rewrite applications written in machine code, and compilers. Not many modern applications are written in machine code.

60% of transistors used for legacy modes? (5, Interesting)

trigeek (662294) | more than 7 years ago | (#18588079)

"There's no reason whatsoever why the Intel architecture remains so complex," said XenSource Chief Technology Officer Simon Crosby. "There's no reason why they couldn't ditch 60 percent of the transistors on the chip, most of which are for legacy modes."

Who is this guy and what is he smoking? Over half of a modern processor is cache. The instruction decoding and address decoding are a small fraction of the remainder. Where does he get the 60% from?
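For a rough sanity check on the "over half is cache" point, here's a back-of-envelope sketch. The figures are assumptions from memory (a Core 2 Duo has roughly 291 million transistors and 4 MiB of shared L2), and the classic 6-transistor SRAM cell count ignores tags and ECC:

```python
# Rough check that cache dominates the transistor budget.
CACHE_BYTES = 4 * 1024 * 1024     # assumed 4 MiB L2
TRANSISTORS_PER_BIT = 6           # classic 6T SRAM cell, ignoring tags/ECC
TOTAL_TRANSISTORS = 291_000_000   # approximate published Core 2 Duo figure

cache_transistors = CACHE_BYTES * 8 * TRANSISTORS_PER_BIT
print(cache_transistors)                                 # 201326592
print(round(cache_transistors / TOTAL_TRANSISTORS, 2))   # 0.69
```

Even with those loose assumptions, the L2 alone eats around two thirds of the transistors, which leaves nowhere near 60% for legacy decode.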

Re:60% of transistors used for legacy modes? (4, Funny)

TodMinuit (1026042) | more than 7 years ago | (#18588125)

He only used 60% of his brain when writing the article. Sadly, he collected 100% of his paycheck.

(Obl: 43% of people know that all statistics are made up.)

Excuse to Sell you Crap (2)

queenb**ch (446380) | more than 7 years ago | (#18588259)

Actually, I suspect that this has far *more* to do with money and far *less* to do with technology. Commodity hardware is available to the home consumer for the first time ever. A quick jaunt out to some of the parts-pricing web sites shows RAM (PC2-8000) at 18 cents per MB and SATA II hard drives at 2 cents per MB of storage. Motherboards are cheap. Cases are cheap. However, if they start changing the system architecture they can talk all of us into buying new, high-priced performance parts.

2 cents,

QueenB

I think I know... (5, Funny)

TheVelvetFlamebait (986083) | more than 7 years ago | (#18588489)

Where does he get the 60% from?
His ass looks suspiciously spacious...

Re:I think I know... (1)

Hanners1979 (959741) | more than 7 years ago | (#18588671)

That's because he's removed all of the useless, unnecessary legacy parts from his body, giving himself 60% more space! *

* On a completely unrelated note, he died shortly after giving that quote.

Windows (1)

Dancindan84 (1056246) | more than 7 years ago | (#18588155)

The problem is that deep inside Windows is code taken from the MS-DOS operating system of the early 1980s, and that code looks for certain instructions when it boots.
Does anyone know if that code still exists in Vista? Does Linux/Unix have similar holdout code from its roots? I'd be interested to know if the article is correct that Microsoft's use of legacy code is the only thing holding back the efficiency/power of CPUs.

Not Windows or Linux per se _but_... (4, Informative)

burnttoy (754394) | more than 7 years ago | (#18588385)

Boot loaders tend to be 16-bit, segmented 8086-model code; at the least they contain enough code to get into 32-bit mode. The BIOS is 16-bit legacy code too, at least in part, since an x86 PC still boots in Real Mode (there is a 386 embedded variant that doesn't). The Windows 9x series is _RIDDLED_ with 16-bit code, especially the display drivers; although many of these switch to 32-bit mode ASAP, the entry points are 16-bit code. Any attempt at killing off 16-bit code would stop any 9x system from running.

For WinNT and variants (2K, XP) I don't know how much 16-bit code is in there. I've written drivers for 2K/XP and could not find a single 16-bit-style instruction; however, even the NT series on x86 uses segments: FS is used for process & thread info. IIRC even AMD64 long mode keeps FS & GS to make OS porting easier.

Lastly, 16-bit operations (instructions operating on 16 bits of a 32-bit register) are trivial in 32-bit mode: all you have to do is precede an instruction with an 0x66 (operand-size) and/or 0x67 (address-size) prefix byte to switch a 32-bit instruction to its 16-bit form.
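To illustrate the prefix trick, here's a toy Python sketch - a hypothetical two-instruction decoder, not how real hardware does it. In a 32-bit code segment, opcode B8 is MOV EAX, imm32; the same opcode preceded by 66 becomes MOV AX, imm16:

```python
# Toy decoder for the 0x66 operand-size override prefix (32-bit code segment).
def decode_mov_acc(code: bytes):
    """Decode MOV EAX, imm32 or (with a 66 prefix) MOV AX, imm16."""
    size, i = 32, 0
    if code[i] == 0x66:           # operand-size override prefix byte
        size, i = 16, 1
    assert code[i] == 0xB8        # MOV eAX, imm opcode
    n = size // 8                 # immediate width in bytes
    imm = int.from_bytes(code[i + 1:i + 1 + n], "little")
    return size, imm

# B8 78 56 34 12        -> MOV EAX, 0x12345678
assert decode_mov_acc(bytes([0xB8, 0x78, 0x56, 0x34, 0x12])) == (32, 0x12345678)
# 66 B8 34 12           -> MOV AX, 0x1234
assert decode_mov_acc(bytes([0x66, 0xB8, 0x34, 0x12])) == (16, 0x1234)
```

Same opcode byte, different operand width - which is why 16-bit operations cost the hardware almost nothing in 32-bit mode.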

The problem transcends MS-DOS and goes to the BIOS and the boot sequence itself. Intel tried to address this with EFI, but that seems to be slow to gain traction - probably because of backwards compatibility.

Re:Windows (1, Interesting)

Anonymous Coward | more than 7 years ago | (#18588537)

NT has no MS-DOS code in it. The only big change in NT 6.0 is the removal of the MS-DOS emulation (NTVDM.EXE) from the install DVD. It's true that the NT bootloader depends on an IBM PC-compatible BIOS to boot. But in NT 6.0 the EFI support means that you will (at long last) be able to boot on non-PC-compatible machines as soon as you have one. And then you will be able to say goodbye to the 8088 legacy.

But the real point is: since you don't have any choice (this is a monopoly), what is the point of asking such questions? I mean, if NT were in fact just a bunch of MS-DOS scripts pretending to be a multiuser kernel, would you be able to complain about it? Would you be able to move to any other system anyway?

Re:Windows (1)

mlk (18543) | more than 7 years ago | (#18588579)

I would not have thought it exists in NT (so 2000 and Vista), as NT existed on PPC & Alpha. I would be surprised if Vista could not be compiled for, say, PPC machines.

Word, VB and the like are a different matter.

Re:Windows (1)

mlk (18543) | more than 7 years ago | (#18588629)

Wow, I should not post when knackered.

Windows NT (which is what both 2000 & Vista build on) came in PPC and Alpha flavours. As such, I would be surprised if MS had added x86-specific stuff to the boot. I would actually expect Microsoft to make sure Windows NT (and above) still compiles for at least one other platform, as other processors might be better suited for certain tasks - take handheld PCs and game consoles, for example.

Re:Windows (1)

kad77 (805601) | more than 7 years ago | (#18588603)

Pretty naive, buddy. Try answering your own question with examples of other microprocessor architectures that aren't bound to Microsoft. Look where they are: some better than Intel, some not. If a company could produce a general-purpose CPU that was a quantum leap forward in power efficiency, and still as powerful computation-wise, they would.

PA Semiconductor is trying as much with the PowerPC platform, as an example.

Legacy Support Drives It (4, Interesting)

WED Fan (911325) | more than 7 years ago | (#18588175)

I know we all bitch about old designs and legacy support for outdated features, but one of the things that keeps people from moving from one OS to another is an "existing base of installed software" and "knowledge of existing software". Like it or not, the major player is Microsoft. No matter how much a geek says MS UIs suck, people are comfy with them. If alternative OSes had the same software offerings with the same UI, people would be able to move to them. The same holds true for processors.

No matter how well a processor performs, if there is no application base for it, no one is going to buy a machine with that processor. In this case, perception is reality. You walk into a software store, you see 16 rows of Windows applications, half a row of Linux, and 5 rows of Apple.

What processor family runs each of these? Guess who has moved to the dominant processor?

The only way to build a software base is to build in legacy support. Then start weaning users away from the legacy features, get programmers to stop using those features (mainly those building the compilers that developers use), and move towards the more advanced features.

x86 rules for a reason. Microsoft rules for a reason. The customer is comfortable with them, and their perception is reinforced every time they go to the store.

I remember the good old days (-1, Troll)

stratjakt (596332) | more than 7 years ago | (#18588197)

Mac zealots would constantly talk trash about their superior mega PowerPC processors which are "faster than light".

Funny, for some reason or another, they've all shut the f*ck up about it. I don't have to hear any crap about "contiguous memory addresses" ooo.. I gots me a bonar.

It must be a crappo architecture, I only see Microsoft using it in any serious way with the 360.

Re:I remember the good old days (1, Insightful)

Anonymous Coward | more than 7 years ago | (#18588509)

And the Playstation 3, and the Wii, and your fridge...

Emulation (1, Interesting)

Anonymous Coward | more than 7 years ago | (#18588205)

Since the legacy (DOS, 16-bit Windows) applications were designed to run on much slower computers, aren't we at the point where we can simply use software emulation of the CPU for those applications? Of course, for this to be commercially viable, Microsoft would need to do some substantial work and provide it as a free update for XP and Vista.

Unfortunately, that would only eliminate a small fraction of the baggage. And I can't honestly say I'd trust Microsoft to do it right. If I depended on legacy apps for my business I would probably want to stick with the hardware implementation.

Nevermind.


Judging by the title... (1)

TheVelvetFlamebait (986083) | more than 7 years ago | (#18588347)

... this is OLD news!

Disco Stu on x86 (2, Funny)

andy314159pi (787550) | more than 7 years ago | (#18588375)

And a comedian would say it all depends on what you think about disco.
Disco Stu does the x86 boogaloo

60% (2, Informative)

anss123 (985305) | more than 7 years ago | (#18588383)

From the article:
"There's no reason whatsoever why the Intel architecture remains so complex," said XenSource Chief Technology Officer Simon Crosby. "There's no reason why they couldn't ditch 60 percent of the transistors on the chip, most of which are for legacy modes."
(Emphasis mine)

Heh, according to the latest in-depth articles, the legacy cruft takes less than 10% of the chip. A far cry from Crosby's claim of 60 percent, and that from a Chief Technology Officer no less :p

Being mostly compatible doesn't pay (4, Insightful)

scgops (598104) | more than 7 years ago | (#18588415)

Computer manufacturers have tried making non-compatible machines. Commodore 64, VIC 20, Coleco Adam, Atari ST. They all had their place in time and their niche in the market before fading out.

Something they all had in common, though, is that they sold better than IBM's mostly-compatible PCjr. I attribute that difference to software and compatibility problems. Because of BIOS differences, a number of programs written for the PC couldn't run on the PCjr. That led to a fragmentation of shelf space at software retailers and confusion among retail customers, and led to customers avoiding the platform in favor of easier-to-understand options.

I would expect something similar to happen if Intel, AMD, or anyone else started making mostly-compatible x86 processors. It wouldn't sell unless all of the software people are used to running still worked. Sure, someone could take Transmeta's approach and emulate little-used functionality in firmware rather than continuing to implement everything in silicon, but it all pretty much needs to keep working, so why bother?

Seriously, why would anyone undertake the effort and expense needed to slim-down x86 processors when the potential gains are small and the market risk is pretty huge? No chip manufacturer wants to replace the math-challenged Pentium as the most recent mass-market processor to demonstrably not work right.

Pundits and nerds can talk all they want about why the x86 architecture should be put out to pasture, but it won't happen until a successor is available that can run Windows, OS X, and virtually all current software titles at acceptable speeds. And that seems pretty unlikely to happen on anything other than yet another generation of x86 chips.

Is x86 _really_ in charge? (0, Troll)

burnttoy (754394) | more than 7 years ago | (#18588459)

In terms of volume shipments, ARM and even the Z80 sell _BILLIONS_. However, margins are lower and the devices are either embedded or software compatibility is simply not an issue - e.g. mobile phones, where one uses the JVM or data is provided to an app and apps are recompiled per platform.

In terms of the PC - well, x86 is in charge and always will be. Without x86 a PC wouldn't really be a PC. Can one emulate/simulate x86? Yup, it's been done - especially well with FX!32. Is it more cost effective than just using x86? Not really.

Same Reason (1)

TheLoneWolf071 (1063682) | more than 7 years ago | (#18588523)

It's the same reason people don't switch to Mac or Linux: their existing code will not work on something new. Almost all popular software is written for the x86 arch, so if we upgraded to, say, PPC, all the devs would have to rewrite their code, etc.

Proprietary software locks us in (3, Insightful)

astrashe (7452) | more than 7 years ago | (#18588547)

If free software ever goes truly mainstream, and the stacks people use are free from top to bottom, lock in goes away in general. Even hardware lock in.

A couple of years ago, I was shifting some stuff around and I needed to clean off my main desktop machine, an x86 box. I installed the same linux distro on a G4 mac and just copied my home directory over. Everything was exactly the same -- my browser bookmarks and stored passwords, my email, my office docs, etc.

A lot of people take Apple's jump from PowerPC to x86 as a sign that x86 is unstoppable. But I'd argue that the comparative ease with which the migration took place shows how weak processor lock in is becoming. The shift from PPC to x86 was nothing compared to the jump from MacOS Classic to OS X.

The real reason x86 won't go away any time soon is that MS has decided that's the only thing it's going to support, and MS powers most of the computers in the world. Windows is closed, so MS's decision on this is final, and impossible to appeal.

Die pictures (1)

kestasjk (933987) | more than 7 years ago | (#18588571)

A little off-topic:
I've had a picture of a die for my desktop wallpaper for a while now, and I think it works well. I'd really like some larger pictures of the dies they give here [com.com] . Does anyone know where I would find larger ones?

Versatility vs. Lack of Vision (2, Insightful)

Vexler (127353) | more than 7 years ago | (#18588583)

As part of an operating systems course I am currently taking, we watched a video of a presenter from Intel who lectured on the changes associated with the Itanium processor. In his presentation (see the video at http://online.stanford.edu/courses/ee380/040218-ee380-100.asx [stanford.edu] ), he pointed out that Intel has gone from having one or two major ideas to drive chip design to having fifteen or twenty minor ideas that they can cram in. The thinking is that if they can amass enough of these "little ideas" together, they can probably cobble together enough performance enhancement to justify production and sales of these chips. Part of the issue is that, as the author of this article also admits, there are currently no "big ideas" coming around the bend in terms of truly revolutionary performance increases.

The problem, though, is that when you introduce many smaller features, you cannot always anticipate how those features will interact with one another. This is why it is counterintuitive to many people that "new and improved" is not always so, and that you actually risk introducing design bugs more subtle than you can detect. That, combined with the continuing support for legacy code, means that complexity (and power consumption) goes through the roof with each iteration. While it is a testament to the robustness and versatility of the x86 architecture that it has survived thus far, one could argue that the architecture *had* to survive because we couldn't come up with the next paradigm shift.

The good news is that there are solutions to this situation. The bad news is that all of the solutions involve massive change in the way the software industry clings to the tried-and-true, or truly revolutionary innovation in chip re-architecture, or billions of dollars, etc. As the article points out, experience with EPIC has demonstrated how NOT to introduce a completely new architecture. There is no easy way out, but there are several possible paths.

Need for 8086 and real mode? (2, Informative)

tji (74570) | more than 7 years ago | (#18588615)

The article claims that Windows still requires the old compatibility modes to boot. Is this true? I could see how Win95-like OSes could, because they basically boot on DOS. But for NT and beyond, wouldn't they be fine with removing those old legacy capabilities?

The question that leads to is: what is gained by removing the legacy junk? The guy from XenSource in the article claimed "There's no reason why they couldn't ditch 60 percent of the transistors on the chip, most of which are for legacy modes." Which seems ridiculous. Maybe he's talking about 60% of the silicon in a certain subsystem of the CPU, because legacy modes certainly can't account for 60% of the total transistors.

If the savings are minimal, and those modes don't affect anything once you've switched to 32- or 64-bit protected mode, then maybe it's a moot point.

To really shift the instruction set, you obviously have to do it in an evolutionary way - such as allowing access to the lower-level IS (i.e. the instructions that x86 gets translated into) in a virtual machine environment. So you could have a more efficient Linux OS running in a VM, and if the benefits of that were substantial, more people might use that mode for the host OS (which could then run x86 VMs for legacy). It's easy to see that being used for Linux and even Mac OS, as their portability is already proven and they began as modern OSes, working only in protected mode.