
Why Do We Use x86 CPUs?

Cliff posted more than 7 years ago | from the because-they're-cheap dept.


bluefoxlucid asks: "With Apple having now switched to x86 CPUs, I've been wondering for a while why we use the x86 architecture at all. The Power architecture was known for its better performance per clock, and other RISC architectures such as the various ARM models provide very high performance per clock as well as reduced power usage, opening some potential for low-power laptops. Compilers can also deal with optimization in RISC architectures more easily, since the instruction set is smaller and the possible scheduling arrangements are thus reduced greatly. With Just-in-Time compilation, legacy x86 programs could be painlessly run on ARM/PPC by translating them dynamically at run time, similar to how CIL and Java work. So really, what do you all think about our choice of primary CPU architecture? Are x86 and x86_64 a good choice, or should we have shot for PPC64 or a 64-bit ARM solution?" The problem right now is that if we were going to try to "vote with our wallets" for computing architecture, the only vote would be x86. How long do you see Intel maintaining its dominance in the home PC market?

552 comments

Easy (5, Insightful)

Short Circuit (52384) | more than 7 years ago | (#17458188)

Until someone replaces the PC.

PC architecture sits in a local minimum where the fastest route to greater profits lies in improving existing designs, rather than developing new approaches.

The reason "We" use x86 is because "we" use PCs, where x86 technology is dominant and obvious. However, "we" also use PDAs, cell phones, TiVos and even game console systems. As the functions of those devices melt into a new class of unified devices, other architectures will advance.

The real irony is that, for most of these other devices, the underlying architecture is invisible. Few know that Palm switched processors a few years back. Fewer still know what kind of CPU powers their cell phone.

Re:Easy (1)

polar red (215081) | more than 7 years ago | (#17458772)

I think the moment all applications could be run virtually, within the browser, and I mean ALL applications, the underlying processor (+OS) wouldn't matter so much anymore. So, start pushing all your applications onto a web server.

Re:Easy (4, Interesting)

HappySqurriel (1010623) | more than 7 years ago | (#17458842)

The reason "We" use x86 is because "we" use PCs, where x86 technology is dominant and obvious. However, "we" also use PDAs, cell phones, TiVos and even game console systems. As the functions of those devices melt into a new class of unified devices, other architectures will advance.

Honestly, I think it is much simpler than that ...

The problem has very little to do with the processors that are used and is entirely related to the software that we run. Even in the '80s/'90s it would have been completely possible for Microsoft to support a wide range of processors (if their OS was designed correctly) and produce OS-related libraries which abstracted software development from needing to directly access the underlying hardware; on install, necessary files would be re-compiled and all off-the-shelf software could run on any architecture that Windows/DOS supported. In general, the concept is combining standard C/C++ with libraries (like OpenGL) and recompiling to ensure that no one is tied to a particular hardware architecture.

Just think of how many different architectures Linux has been ported to; if DOS/Windows had been built in a similar way, you'd be able to choose any architecture you wanted and still be able to run any program you wanted.

Re:Easy (2, Insightful)

Canthros (5769) | more than 7 years ago | (#17458954)

I think you severely overestimate computing power in the 80s and 90s. I last compiled XFree86 around 2000. Took a bit over a day.

I really wouldn't have wanted to wait that long to install Office or DevStudio.

Re:Easy (1)

HappySqurriel (1010623) | more than 7 years ago | (#17459238)

Well ...

Compiling from source could (potentially) have taken too long for most setups, but that doesn't mean the general idea was impossible. If you started from an intermediate step in the compile process you could (hypothetically) greatly reduce the compile time, or (after '95 or so, when CD-ROM drives were available on most PCs) you could have included dozens of binaries for the most common setups plus the source code in case someone wanted to run the program on hardware that wasn't currently supported.
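To make that "dozens of binaries plus source" idea concrete, here is a minimal sketch of an installer that picks a prebuilt binary by machine type and falls back to building from source. It uses the POSIX uname(2) call purely for illustration, and install_prebuilt()/build_from_source() are hypothetical placeholders, not any real installer's API.

    /* Sketch: an installer that ships several prebuilt binaries plus source.
     * uname(2) reports the machine type; install_prebuilt() and
     * build_from_source() are hypothetical placeholders. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/utsname.h>

    static const char *prebuilt[] = { "i386", "i686", "x86_64", "ppc", NULL };

    int main(void)
    {
        struct utsname u;
        if (uname(&u) != 0) {
            perror("uname");
            return 1;
        }
        for (int i = 0; prebuilt[i] != NULL; i++) {
            if (strcmp(u.machine, prebuilt[i]) == 0) {
                printf("installing prebuilt binary for %s\n", u.machine);
                return 0;              /* would call install_prebuilt(u.machine) */
            }
        }
        printf("no prebuilt binary for %s, building from source\n", u.machine);
        return 0;                      /* would call build_from_source() */
    }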

Re:Easy (4, Insightful)

Marillion (33728) | more than 7 years ago | (#17459128)

Windows NT was designed to run on i386, MIPS, PPC and Alpha. Over the years, Microsoft discontinued support for the various platforms -- IIRC, MIPS and Alpha ended with NT 3.51 and PPC ended with NT 4. NT 5 (aka Windows 2000) was the first i386-only version of NT.

Re:Easy (0)

Anonymous Coward | more than 7 years ago | (#17459430)

WRONG. I've seen Windows 2000 running on an AXP workstation.

God knows why anyone would want to waste a nice alpha machine on windows, but hey, whatever floats your boat.

Re:Easy (0, Redundant)

m50d (797211) | more than 7 years ago | (#17459148)

Windows was ported to different architectures; there was Windows NT on Alpha, which was quite possibly the fastest per-clock architecture around at the time.

Re:Easy - yeah, right (0)

Anonymous Coward | more than 7 years ago | (#17459426)

Over 10 years ago, Microsoft supported PPC, Alpha, and MIPS in NT 3.51 and early versions of NT 4.0. I remember a COMDEX where NEC was showing off dual-CPU MIPS boxes that were running NT 4 and running circles around the fastest x86 at the time. About the same time, the IBM/Apple/Motorola PPC tent was the first thing you saw coming into the show, and it was to be the NEXT BIG THING (I even saw one of the two 500 MHz PPCs in existence, when the fastest x86 was about 100 MHz).

However, if you wanted to get a PPC to play with, it was quite expensive and not readily available. IMHO, it was always more expensive than an inexpensive PC. Apple was the only reasonably-priced PPC (and even then it was at a premium over x86 for similar capabilities). We had an Alpha that we ran an early 64-bit Linux on (RH 4 or such), but it was just a test machine (too expensive, too hard to use since it had no standard BIOS, limited, and not the customer's standard -- which is now Dell).

When MS decided to stop supporting NT on other architectures (including anything 64-bit...), about 10% of NT was supposedly on MIPS in Asia. They basically killed any chance for non-x86 architectures in the mainstream other than embedded, server, or engineering applications. Killing off non-x86 was a marketing decision by MS and most likely saved them a bundle in R&D, hence contributing to the bottom line, while not having much of a long-term impact on the market (those using non-x86 NT HAD to migrate, adding more sales for both Intel and MS).

They did (0, Redundant)

wiredog (43288) | more than 7 years ago | (#17459452)

IIRC, DEC Alpha, SPARC, PPC, and x86 were all supported by WinNT. No one bought WinNT for anything other than x86, however.

easy (5, Funny)

exspecto (513607) | more than 7 years ago | (#17458204)

because they don't cost an ARM and a leg and they don't pose as much of a RISC

Re:easy (1, Funny)

Anonymous Coward | more than 7 years ago | (#17458858)

Best
Post
Evar.

+5 Funny and +5 Insightful.

momentum (5, Informative)

nuggz (69912) | more than 7 years ago | (#17458226)

Change is expensive.
So don't change unless there is a compelling reason.

Hard to optimize? You only have to optimize the compiler once; spread over millions of devices, this cost is small.

With runtime interpreters/compilers, you lose the speed advantage.

Volume and competition make x86-series products cheap.

Re:momentum (1, Informative)

jZnat (793348) | more than 7 years ago | (#17458258)

GCC is already architected such that it's trivial to optimise the compiled code for any architecture, new or old. Parent's idea is pretty much wrong.

Re:momentum (-1, Flamebait)

Anonymous Coward | more than 7 years ago | (#17458358)

Well asshole, not everyone uses GCC.

Re:momentum (4, Interesting)

Frumious Wombat (845680) | more than 7 years ago | (#17458430)

If you're dead-set on using GCC, yes. Alternately, if you use the native compilers which only have to support a fairly narrow architecture, you can get much higher performance. XLF on RS/6K and Macs was one example (capable of halving your run-time in some cases), IFORT/ICC on x86 and up, or FORT/CCC on DEC/Compaq/HP Alpha-Unix and Alpha-Linux were others. Currently GCC is not bad on almost everything, but native-mode compilers will still tend to dust it, especially for numeric work.

Which brings back the other problem; not only are x86 chips cheap to make, but we have 30 years of practice optimizing for them. Their replacement is going to have to be enough better to overcome those two factors.

Re:momentum (2, Informative)

UtucXul (658400) | more than 7 years ago | (#17458534)

IFORT/ICC on x86 and up
Funny thing about IFORT is that while in simple tests it always outperforms g77 (I've since switched to gfortran, but haven't tested it too well yet), for complex things (a few thousand lines of FORTRAN 77 using MPI) it is very unpredictable. I have lots of cases where g77 outperforms ifort in real-world cases (as real-world as astronomy gets anyway) and cases where ifort wins. It just seems to me that either ifort is not the best compiler, or optimizing for x86 involves more funny business than it seems (or there is some other variable I'm missing, which is always possible).

Re:momentum (1)

morgan_greywolf (835522) | more than 7 years ago | (#17458866)

Actually, I've got to say that while IFORT/ICC do well on simple tests, it's not always cut and dried in the real world. gcc/g77 can do just as well as IFORT/ICC on more complex programs, and sometimes they do better. Other times, IFORT/ICC do better. The bottom line is that optimizing for the legacy cruft that is the x86 architecture just isn't very straightforward. There's a lot of voodoo involved, as any x86 assembly language programmer worth their salt will tell you.

Re:momentum (2, Informative)

Short Circuit (52384) | more than 7 years ago | (#17458448)

GCC 4.x is designed to enable optimizations that will work across architectures, by providing an intermediate code layer for compiler hackers to work with.

There are still optimizations possible at the assembly level for each architecture that depend on the quirks and features of those architectures and even their specific implementations.

The intermediate level optimizations are intended to reduce code duplication by allowing optimizations common across all architectures to be applied to a common intermediate architecture.
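As a concrete illustration (assuming a reasonably recent GCC; the exact dump file name varies by version), you can look at that architecture-independent intermediate form yourself by dumping the GIMPLE for a trivial function:

    /* add.c -- compile with:  gcc -O2 -fdump-tree-gimple -c add.c
     * then read the generated add.c.*.gimple dump file. The same
     * architecture-neutral form is produced whether the target is
     * x86, PPC or ARM; only later passes pick machine instructions. */
    int add_scaled(int a, int b)
    {
        return a + 4 * b;   /* lowered to a few GIMPLE statements before
                               any target-specific instruction selection */
    }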

Why Apple moved to x86 (5, Informative)

plambert (16507) | more than 7 years ago | (#17458250)

The reason given, which people seem to keep forgetting, was pretty simple and believable:

Performance per watt.

The PPC architecture was not improving _at all_ in performance per watt. Apple's market was growing fastest in the portable space, but it was becoming impossible to keep temperatures and power consumption down with PPC processors.

And IBM's future plans for the product line were focusing on the Power series (for high-end servers) and the Core processors (for Xbox 360's) and not on the PowerPCs themselves.

While I've never had any particular love for the x86 instruction sets, I, for one, enjoy the performance of my Macbook Pro Core 2 Duo, and the fact that it doesn't burn my lap off, like a PowerPC G5-based laptop would.

Re:Why Apple moved to x86 (1)

Corporate Troll (537873) | more than 7 years ago | (#17458374)

And IBM's future plans for the product line were focusing on the Power series (for high-end servers) and the Core processors (for Xbox 360's)

You're confusing Core with Cell... Core is an Intel product.

Re:Why Apple moved to x86 (1)

jamesbulman (103594) | more than 7 years ago | (#17458508)

I think that was a miscapitalisation of the word "core". The Xbox 350 is powered by an IBM PowerPC-derived processor. http://en.wikipedia.org/wiki/Xbox_360#Central_processing_unit [wikipedia.org]

Re:Why Apple moved to x86 (1)

Corporate Troll (537873) | more than 7 years ago | (#17458702)

Sorry, I thought it was powered by (one or more) Cells, but that is the PS3. As for the XBox 350, I don't know that thing ;-))

Re:Why Apple moved to x86 (1)

Short Circuit (52384) | more than 7 years ago | (#17458512)

And Cell was intended for Sony's PS3... IBM's Xenon [wikipedia.org] was used in the XBox 360.

Interestingly enough, both the Cell and Xenon are PPC-based.

Re:Why Apple moved to x86 (1)

Corporate Troll (537873) | more than 7 years ago | (#17458740)

Yes, sorry, I thought that the XBox 360 was also using Cell. Still, using "Core" wasn't right in either case, except if it was a bad capitalization as your sibling post said. The sentence doesn't make much sense in that case though.

Re:Why Apple moved to x86 (3, Interesting)

RAMMS+EIN (578166) | more than 7 years ago | (#17458468)

``Performance per watt.''

Not as I remember. As I remember, the PPW of PowerPC CPUs was pretty good, and getting better thanks to Freescale, but the problem was that Freescale's CPUs didn't have enough raw performance, and IBM's didn't have low enough power consumption. Freescale was committed to the mobile platform and thus was only focusing on PPW, whereas IBM was focusing on the server market, and thus favored performance over low power consumption. Seeing that the situation wasn't likely to improve anytime soon, Apple switched to Intel.

Re:Why Apple moved to x86 (2, Insightful)

loony (37622) | more than 7 years ago | (#17458640)

> The PPC architecture was not improving _at all_ in performance per watt. Apple's market was growing fastest in the portable space, but it was becoming impossible to keep temperatures and power consumption down with PPC processors.

Dual core 2GHz PPC below 25W isn't an improvement I guess? Look at PA-Semi...

Seriously, this has nothing at all to do with it.. What home user really cares if their PC takes 150W or 180W ? Nobody...

Peter.

Re:Why Apple moved to x86 (1, Insightful)

Anonymous Coward | more than 7 years ago | (#17458828)

What home user really cares if their PC takes 150W or 180W ? Nobody...
If you leave your home PC powered on 24x365, your 30 watt delta is about $50 in electricity per year. It adds up.

Re:Why Apple moved to x86 (1)

heinousjay (683506) | more than 7 years ago | (#17458846)

I care. Power consumption is one of my top criteria when buying any electronic equipment. Maybe I'm alone out here, but somehow, I doubt it.

Re:Why Apple moved to x86 (1)

Idaho (12907) | more than 7 years ago | (#17458998)

Seriously, this has nothing at all to do with it.. What home user really cares if their PC takes 150W or 180W ? Nobody...

In addition to the fact that some people *do* actually care about the power savings, even if you don't, you should still care because most of that power is turned into heat, which the processor has to somehow get rid of. So you need larger (heavier) heatsinks, CPU coolers, etc., not to mention that high power usage often means that increasing the size of the die and/or the clock frequency runs into limits where it is simply not possible to get rid of all that heat fast enough. Hence PPC development was getting "stuck". This also happened to Intel and AMD, btw, which is why they're now trying to make processors faster by putting multiple processors on one die. At least they seem to manage better than the PPC at keeping power consumption down, anyway...

Re:Why Apple moved to x86 (4, Insightful)

Wdomburg (141264) | more than 7 years ago | (#17459232)

Dual core 2GHz PPC below 25W isn't an improvement I guess? Look at PA-Semi...

You mean a processor from a fabless company announced six months after Apple announced the switch to Intel, which wasn't expected to sample until nearly a year after the first Intel Macintosh shipped? It's an interesting product, particularly if the performance of the cores is any good (hard to say, since there doesn't seem to be much in the way of benchmarks), but it didn't exist as a product until recently. Even if it had, there's the significant question of whether they could secure the fab capacity to supply a major customer like Apple.

What home user really cares if their PC takes 150W or 180W ? Nobody...

Desktops aren't the dealbreaker here. Try asking "who cares if their laptop runs for 5 hours or 3 hours?" or maybe "who cares if their laptop can be used comfortably in their lap?" or perhaps "who cares if they can get reasonable performance for photo-editing, video-editing and what not in a portable form factor?".

Cast an eye toward the business market and performance per watt on the desktop is important. You may not care about a 30W savings, but a company with 500 seats may well care about 28,800 kWh in savings per year (assuming 240 eight-hour work days a year after factoring out weekends, holidays and vacation).
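The 28,800 kWh figure does check out; a quick sketch of the arithmetic, using the numbers from the post above (30 W per seat, 500 seats, 240 eight-hour days):

    #include <stdio.h>

    int main(void)
    {
        double watts_saved = 30.0;          /* per-seat difference, from the post */
        double seats       = 500.0;
        double hours       = 240.0 * 8.0;   /* 240 working days of 8 hours */

        double kwh = watts_saved * seats * hours / 1000.0;
        printf("%.0f kWh saved per year\n", kwh);   /* prints 28800 */
        return 0;
    }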

Re:Why Apple moved to x86 (1, Informative)

Anonymous Coward | more than 7 years ago | (#17458768)

Apple switched to x86 because they wanted to; performance per watt (PPW) was a red herring. For years Apple said the PPC -- whether it be Motorola's (now Freescale's) implementation or IBM's -- was the better-performing machine, and that was the marketing line. Without a reason, people would have started to question the validity of Apple's prior statements. So was the G5 a laptop-ready chip? Heck no, it wasn't designed to be, but there were others available that Apple chose not to use, because they wanted to be on x86. When Apple released its PPW numbers it was total FUD: they compared a just-released Intel part against a 2+ year old IBM part. Now why would Apple want Intel?
#1 Economies of scale: Intel offered a lower cost per chip, so Apple can increase profit while holding the price of the product the same.
#2 Ability to upgrade more frequently: Intel releases newer, faster versions of processors much more rapidly than can be done with a custom processor.
#3 It's the software. The Achilles' heel for Apple was always "look at how little software it runs" or the ever-popular "but I want to run games." Granted, there were ways to try to get some programs working, but it was tedious and an undertaking 95%+ of the population probably wouldn't take. Now, by being on Intel, they greatly simplify the porting of software.

It's all about the software now. While certain people may drool over hardware -- I'm one of them, I love hardware -- it's the software that sells, it's the software that defines a box. Apple has essentially simplified the battlefield, taking away its weakness to concentrate more on MSFT. Look, we run on the same hardware they do; look, we run the same programs they do; but wait, we do have that pesky MSFT OS.

Anyway, my 2 cents.

Good question... (4, Informative)

kotj.mf (645325) | more than 7 years ago | (#17458254)

I just got done reading about the PWRficient [realworldtech.com] (via Ars):
  • Two 64-bit, superscalar, out-of-order PowerPC processor cores with Altivec/VMX
  • Two DDR2 memory controllers (one per core!)
  • 2MB shared L2 cache
  • I/O unit that has support for: eight PCIe controllers, two 10 Gigabit Ethernet controllers, four Gigabit Ethernet controllers
  • 65nm process
  • 5-13 watts typical @ 2GHz, depending on the application

Now I have to wait for the boner this gave me to go away before I can get up and walk around the office.

Maybe Apple could have put off the Switch after all...

Re:Good question... (1)

jpietrzak (143114) | more than 7 years ago | (#17458934)

Most Apple software today is being produced as "Universal" binaries, meaning it is built to run on both Intel and PPC architectures, and that'll be the case for some time (until they officially drop support for the PPC Macs). As such, it wouldn't seem to be all that hard for them to experiment with adding a PPC-based system back to their lineup, if they wanted...

Re:Good question... (2, Funny)

the_humeister (922869) | more than 7 years ago | (#17459018)

Now I have to wait for the boner this gave me to go away before I can get up and walk around the office.

It's called priapism [wikipedia.org]; you might want to mosey on over to the emergency room quickly!

And the answer is: (-1, Offtopic)

JamesP (688957) | more than 7 years ago | (#17458276)

Pipoul are dumb

Really, it's been what, 5 years since AMD64 was introduced, and still people don't know if it runs 32-bit software on a 64-bit OS.

That and other PHB ideas (I wanna stab the next jerk who complains that their Clipper application is running slow when they've already bought a very expensive P4 to run it).

People want computers to compensate for their lack of dick size, hence PHB executives buying powerful notebooks to play solitaire and use MS Office

Re:And the answer is: (1)

east coast (590680) | more than 7 years ago | (#17458798)

People want computers to compensate for their lack of dick size, hence PHB executives buying powerful notebooks to play solitaire and use MS Office

Actually, my guess is it's because they don't know any better.

Most CEO Joes are probably more likely to remember the cost of their latest laptop instead of the processor speed, memory size or HD capacity. So in some cases they would like to use it to brag about their 3K USD laptop. But aside from that the same guy who can't tell you if he has a P3 or P4 under the hood of his ThinkPad probably isn't going to understand that a 350-P2 is going to run Office just as well as his new Intel Duo 2 rig.

As for dick size? That's what the car is all about. Most CEO Joes understand all the base stats about their car. They understand what a 5 litre engine is, they know the impression bullshit like trailer hitches and fog lamps leave on others as they cruise along in their Navigator. They know that people are going to be more impressed by a 120K USD Humvee over a 3K USD laptop.

The only people who really think that people are impressed by their laptop are geeks and wanna-bes (wanna-bes moreso).

Why do we ... (5, Insightful)

jbeaupre (752124) | more than 7 years ago | (#17458294)

Why do we drive on the right side of the road in some places, left in others?
Why do most screws tighten clockwise?
Why do we use a 7-day calendar, 60-second minutes, 60-minute hours, and a 24-hour clock like the Sumerians instead of base 10?
Why do we count in base 10 instead of binary, hex, base 12?
Why don't we all switch to Esperanto or some other idealized language?
Or if you're familiar with the story: Why are the Space Shuttle boosters the size they are?

Because sometimes it's easier to stick with a standard.

There. Question answered. Next article please.

Re:Why do we ... (1)

eln (21727) | more than 7 years ago | (#17458566)

Why is this a troll? The momentum behind the standard is clearly the primary reason x86 architecture is still so dominant in computing these days.

Re:Why do we ... (1)

jbeaupre (752124) | more than 7 years ago | (#17458668)

I think I pissed off someone without a car, a watch, nuts, or the ability to speak, who has never learned to count.

I'll give them the benefit of the doubt on the space shuttle.

Re:Why do we ... (1)

ToxikFetus (925966) | more than 7 years ago | (#17458666)

Mod parent up. Just because something *newer* and *better* comes along doesn't mean it is enough to overcome the inertia of the legacy product. Hell, the US is still using the Imperial System because we're all too lazy to learn kilometers and change the damn road signs. A new product is not only fighting its competitor, but also its competitor's years of accumulated history.

But you do use the metric system (2, Informative)

OzPeter (195038) | more than 7 years ago | (#17458840)

Obligatory wiki page [wikipedia.org]

And while you seem to be holding out, I did see one website that suggested less than 7% of the world's population doesn't use the metric system... and the US is 80% of that 7%.

Re:Why do we ... (-1, Flamebait)

Anonymous Coward | more than 7 years ago | (#17458708)

"Why do we drive on the right side of the road in some places, left in others?" Where do we drive on the left in the U.S.? We changed shortly after landing on this rock so we could be different from mother England. Horse drawn carriages back then. :) "Why do most screws tighten clockwise?" NIST standard, I think. It stated back in the 1800's when there was a huge fire in Baltimore. They called in help from Chicago, but when the Chicago fire fighters got there, they couldn't hook up their hoses. Thus the standards board was created and the rest is, shall we say 'history'. "Why do we use a 7 day calender, 60 second minutes, 60 minute hours, and a 24 hour clock like the Sumerians instead of base 10?" Thank the Catholic church for this one. "Why do we count in base 10 instead of binary, hex, base 12?" Easier to teach to grade school kids. And more universally accepted. "Why don't we all switch to Esperanto or some other idealized language?" What's the point if no one else will switch w/ us? "Or if you're familiar with the story: Why are the Space Shuttle boosters the size they are?" I'm not really famaliar w/ the story. Got a link?

Re:Why do we ... (1)

jbeaupre (752124) | more than 7 years ago | (#17459158)

I put in the left-side reference because some readers are from other countries. There are standards for left-hand threads; I've had to specify them for some applications. Very annoying. Divisions of time were widely used around the world before the Catholic church existed. Very interesting history. I agree on the counting thing with kids, but it's been pointed out that some bases are actually easier to work with in the long run. Base 12 is easily divisible by 2, 3, 4, 6. The Esperanto analogy may have been my only one. http://www.succulent-plant.com/ephemera01.html [succulent-plant.com]

Re:Why do we ... (1)

Merkwurdigeliebe (1046824) | more than 7 years ago | (#17458756)

Parent is not a troll. Parent is simply making rhetorical comparisons to obviate the initial question. To label parent troll is to be lazy and easily annoyed and shows dislike for dissonance.

Chicken and Egg (3, Interesting)

RAMMS+EIN (578166) | more than 7 years ago | (#17458324)

I think it's a chicken and egg proposition. We use x86 because we use it. Historically, this is because of the popularity of the PC. A lot of people bought them. A lot of software was written for them. Other architectures did not succeed in displacing the PC, because of the reluctance of people to abandon their software. Now, with years and years of this happening, the PC has actually become the most performant platform in its price class, while simultaneously becoming powerful enough that it could rival Real computers.

Slowly, other architectures became more like PCs: Alphas got PCI buses, Power Macs got PCI buses, Sun workstations got PCI buses, etc. Eventually, the same happened to the CPUs: the Alpha line was discontinued, Sun started shipping x86-64 systems, and Apple started shipping x86 systems. The reason this happened is that most of the action was in the PC world; other platforms just couldn't keep up, in price or performance.

Apple Didn't 'Switch', They Got Dumped By IBM (0, Informative)

Anonymous Coward | more than 7 years ago | (#17458330)

Gotta hand it to Jobs' ability to spread bullshit, but no one honestly believes the damage-control story that Apple ever wanted to land in x86 land.

After years of chip-order games and Apple being an all-around pain-in-the-ass company to work with, IBM -- having recently locked up all three major console manufacturers -- decided Apple was no longer worth the measly four percent of their chip business for the major hassle it was to deal with them. So IBM decided to dump Apple as a customer and not make a mobile version of the G5.

Jobs in a panic ran to PA Semi to bail Apple out and was turned away.

And AMD didn't have the capacity to sell to Apple.

So Apple was left with only Intel - as their 'first choice'

Bravo Jobs!

Re:Apple Didn't 'Switch', They Got Dumped By IBM (0)

Anonymous Coward | more than 7 years ago | (#17458964)

Not saying I don't believe you; however, do you have any sources to back up your claims?

Re:Apple Didn't 'Switch', They Got Dumped By IBM (1)

HappySqurriel (1010623) | more than 7 years ago | (#17459026)

I think you're a little delusional if you believe that ...

I know of a few people who recently bought Macs because Apple switched to Intel-based processors, and because Apple was smart enough to realize why most people were buying Windows PCs rather than Macs; people buy Windows-based PCs because they believe it would be expensive to replace their software library by switching operating systems (I say believe because 90% of the software you own will likely be upgraded before you use it again, meaning you'd replace the software anyway). Being able to buy a Mac and run OS X, Windows, and Linux gives people flexibility they have never had before.

I wouldn't be surprised if you were right to a certain extent, though. I expect that with IBM working on the Wii, XBox 360 and PS3 processors, the resources devoted to Apple were probably being reduced, leaving Apple unhappy; IBM may have ignored complaints from Apple, given that even the worst-performing of the 'next generation' systems would likely sell more processors than Apple would have.

Why do we use x86 CPUs? (1)

neiljt (238527) | more than 7 years ago | (#17458334)

The same reason we use Windows.

It's not that we are particularly in love with either, but because they represent the well-trod path. Speaking for myself, I take some small comfort from knowing that my problem may already have been someone else's problem, and that the answer may very well therefore be available with a little research.

Having (privately) purchased AMD processors for the past 5 years or so, I have recently switched back to Intel -- sorry, Mr Dell -- but wish AMD and others a long and healthy life to keep both development and pricing competitive!

Not a technical reason (4, Interesting)

ivan256 (17499) | more than 7 years ago | (#17458342)

The reason is that intel provides better infrastructure and services than any other high-performance microprocessor vendor in the industry. When Motorola or IBM tried to make a sale, intel would swoop in and offer to develop the customer's entire board for them. The variety of intel reference designs is unmatched. Intel not only provides every chip you need for a full solution, but they do it for more possible solution sets than you can imagine. Intel will manufacture your entire product, including chassis and bezel. Nobody even comes close to intel's infrastructure services. That is why, even when other vendors have had superior processors for periods of time over the years, intel has held on to market leadership. There may be other reasons too, but there don't have to be. That one alone is sufficient.

The other answer, of course, is that we don't always... ARM/xScale has become *very* widely used, but that is still coming from intel. There are also probably more MIPS processors in people's homes than x86 processors since the cores are embedded in everything.

Re:Not a technical reason (2, Informative)

Cassini2 (956052) | more than 7 years ago | (#17459132)

Intel's support infrastructure also includes some of the best semiconductor fabrication facilities in the business. Intel has consistently held a significant process advantage at its fabs (fabrication facilities) over the life of the x86 architecture. Essentially, no one else can deliver the volume and performance of chips that Intel can. Even AMD is struggling to compete against Intel (90 nm vs 65 nm).

The process advantage means Intel can get a horrible architecture (x86) to perform acceptably at a decent price/performance point. RISC chips, while faster, require different software. People aren't going to change their software unless a good reason exists. Intel's process advantage means that Intel can sell good processors at a reasonable price. Given that, why switch? The x86 is even clobbering Intel's own Itanium (Itanic) architecture in terms of sales.

Other hardware vendors are competitive in market segments that place very high values on particular system metrics. For instance, the ARM processor is very competitive for low power dissipations and 32-bit applications. The 8-bit embedded microcontrollers (PIC, 8051) are really cheap. RISC chips still dominate the high performance computing market.

x86 != Intel (1)

RAMMS+EIN (578166) | more than 7 years ago | (#17458360)

``The problem right now is that if we were going to try to "vote with our wallets" for computing architecture, the only vote would be x86. How long do you see Intel maintaining its dominance in the home PC market? ''

These two things have little to do with one another. Intel isn't the only company making x86 CPUs. It's entirely possible for x86 to stick around while Intel is displaced.

Re:x86 != Intel (0)

Anonymous Coward | more than 7 years ago | (#17458444)

Intel isn't the only company making x86 CPUs

I think that was entirely the point - you vote with your wallet and you are just voting for a different x86 implementation.

Re:x86 != Intel (1)

RAMMS+EIN (578166) | more than 7 years ago | (#17458638)

``I think that was entirely the point - you vote with your wallet and you are just voting for a different x86 implementation.''

But that can make a world of difference. Prescott, Nehemiah, Dothan, Crusoe, Venice, Merom and Brisbane are all x86 cores, but they're very different from one another.

Problems with this question (1)

M1rth (790840) | more than 7 years ago | (#17458388)

#1 - Power architecture got more performance per clock? Fine. IBM couldn't get it to ramp up clock speed, though, so it couldn't keep up with higher-clocked Intels that were still faster in absolute terms. That's why Apple dumped them in the first place.

#2 - Just-in-time x86 emulation... ahh, you mean Itanic! We all remember how well THAT little experiment on Intel's part went over; you had a 1.5+ GHz processor that ran x86 apps about as well as a 386SX.

#3 - Compilers deal with RISC more easily because the instruction set is smaller... but it takes you more instructions (and more clock cycles) to get the same operation done if the instruction isn't in the set. See also cache misses and performance degradation. RISC architecture is fine when you can accurately predict what the most-used instructions are and be sure they will be included, but if you miss one, performance hurts. Sure, in a single-purpose scenario like a PDA, RISC might work well, but on the scale of a desktop, you never know when someone will come up with a program that uses a heck of a lot of instructions that aren't in your RISC set, and that's where CISC will blow your little RISC chip out of the water.

Side note: this is why the major gaming platforms use the PPC architecture; because when you're writing games, the vast majority of programming options are known quantities and you can just make sure they are in your instruction set.

#4 - BACKWARDS COMPATIBILITY. I recognize you tried to include this with your quip about just-in-time compilation, but no emulation or alternative compilation is ever flawless (as Microsoft is discovering the hard way and emulation programmers have known for years). If you're running similar architecture, you've got a better chance of older programs still working.

Code Size is the answer (0)

Anonymous Coward | more than 7 years ago | (#17458390)

In order to run lots of instructions per second you must have the memory bandwidth to move those instructions from main memory into L1 cache. You must also have the memory bandwidth for whatever processing you're doing. Because of this Von Neumann bottleneck, your program must compete with your data for memory bandwidth. As it turns out, there is also a disk bottleneck. Every 4k page of code has to come from a disk which requires a few ms to seek and read the data. Thus, your program will run faster if it is compiled for x86 because it requires half as many disk seeks when it's demand-paged in.

RISC CPUs with 4-byte instructions that don't do very much require lots of memory bandwidth to execute. The x86 instruction set has lots of 1-byte instructions and multi-byte instructions that do a lot. In other words, x86 is really just a compression scheme for instruction sets.

Modern x86 CPUs take the instruction stream and convert it into RISC-like chunks that actually get executed, so moving to x86 isn't a move away from RISC, it's a move away from verbose instruction encoding.

dom
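One way to test the "compression scheme" claim rather than argue about it is to compile the same C for two ISAs and compare the .text sizes. A rough sketch, assuming an ARM cross-compiler such as arm-linux-gnueabihf-gcc is installed (results swing a lot with -Os, Thumb mode and coding style, so treat any single number with suspicion):

    /* density.c -- compare code density across ISAs, e.g.:
     *   gcc -O2 -c density.c                      && size density.o
     *   arm-linux-gnueabihf-gcc -O2 -c density.c  && size density.o
     * then compare the reported .text sizes. */
    unsigned checksum(const unsigned char *buf, unsigned len)
    {
        unsigned sum = 0;
        for (unsigned i = 0; i < len; i++)
            sum = (sum << 1) + buf[i];   /* a mix of loads, shifts and adds */
        return sum;
    }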

Re:Code Size is the answer (3, Interesting)

AndrewHowe (60826) | more than 7 years ago | (#17459254)

I can see where you're going with this... But... Well, not so much.

RISC CPUs with 4-byte instructions that don't do very much require lots of memory bandwidth to execute.

Well, I'm currently working on ARM, and stuff almost always ends up smaller than x86 code. Those 4-byte instructions actually do quite a lot. Oh, and that's with straight ARM code, not Thumb or Thumb-2.

The x86 instruction set has lots of 1-byte instructions

Not so many actually, and the ones it does have are mostly totally useless these days!

and multi-byte instructions that do a lot.

Well, you get to do fancy addressing modes on the rare occasions that you need them... But not too fancy, no pre/post increment/decrement etc.

In other words, x86 is really just a compression scheme for instruction sets.

Sort of, except that it was never designed to be one, and it's not very good at it at all.
Well, you could say that it was an OK (but not great) encoding for 8086, but it's totally unsuited to encoding the instructions that modern software actually uses.

Where The Money Is (5, Insightful)

spoonboy42 (146048) | more than 7 years ago | (#17458470)

There's no doubt that x86 is an ugly, hacked-together architecture whose life has been extended far beyond reason by various extensions which were hobbled by having to maintain backwards compatibility. x86 was designed nearly 30 years ago as an entry level processor for the technology of the day. It was originally built as a 16-bit architecture, then extended to 32-bit, and recently 64-bit (compare to PowerPC, designed for 64-bit and, for the earlier models, scaled back to 32-bit with forward-looking design features). Even the major x86 hardware vendors, Intel and AMD, have long since stopped implementing x86 in hardware, choosing instead to design decoders which rapidly translate x86 instructions to the native RISC instruction set used by the cores.

So why the hell do we use x86? A major reason is inertia. The PC is centered around the x86, and there are mountains and mountains of legacy software in use that depend on it. For those of us in the open-source world, it's not too difficult to recompile and maintain a new binary architecture, but for all of the software out there that's only available in binary form, emulation remains the only option. And although binary emulation of x86 is always improving, it remains much slower than native code, even with translation caches. Emulation is, at this point, fine for applications that aren't computationally intensive, but the overhead is such that the clocks-per-instruction and performance-per-watt advantages of better-designed architectures disappear.

A side effect of the enormous inertia behind x86 is that a vast volume of sales goes to Intel and AMD, which in turn funds massive engineering projects to improve x86. All things being equal, the same investment of engineer man-hours would bear more performance fruit on MIPS, SPARC, POWER, ARM, Alpha, or any of a number of other more modern architectures, but because of the huge volumes the x86 manufacturers deal in, they can afford to spend the extra effort improving the x86. Nowadays, x86 has gotten fast enough that there are basically only 2 competing architectures left for general-purpose computing (the embedded space is another matter, though): SPARC and POWER. SPARC, in the form of the Niagara, has a very high-throughput multithreaded processor design great for server work, but it's very lackluster for low-latency and floating-point workloads. POWER has some extremely nice designs powering next-generation consoles (Xenon and the even more impressive Cell), but the Cell in particular is so radically different from a standard processor design that it requires changes in coding practice to really take advantage of it. So, even though the Cell can mop the floor with a Core 2 or an Opteron when fully optimized code is used, it's easier (right now at least) to develop code that uses an x86 well than code which fully utilizes the Cell.

Re:Where The Money Is (0)

Anonymous Coward | more than 7 years ago | (#17459002)

...and we have a correct answer. Moderators: please mod parent up. Others: move on, there's nothing more to add =)

Parent nails it (2, Insightful)

metamatic (202216) | more than 7 years ago | (#17459342)

OK, parent is the first post I've seen that explains the real reason why the x86 has become basically the only instruction set in mainstream computing.

There's no technical advantage to x86. In fact, IBM picked it specifically because it sucked--they didn't want the PC to compete with their professional workstations. Grafted-on sets of extensions (SSE, MMX, etc.) have just made x86 more baroque over the years, and backward compatibility requirements have prevented cleaning away crap like segmented memory.

However, once a big enough chunk of the market got behind x86, it became impossible for any other design to keep up in R&D across all segments (mobile, desktop, server etc). Intel collects truckloads of cash, so they can spend more on engineering and make up for x86's deficiencies. IBM can compete with Intel, but even IBM decided it wasn't financially viable to be competitive in all segments, and basically dropped desktop PowerPC to focus on embedded (game consoles) and servers, hence Apple's switch to Intel. Similarly, AMD can compete, but only in desktop and servers. VIA compete, but only in embedded and low-end desktop.

The interesting question is whether the same thing will happen with operating systems. We're now basically down to Windows and Unix, plus a few niche OSs for embedded systems and high end servers. Microsoft finds itself in the awkward position of having to compete against most of the rest of the computing industry, including Sun, IBM, Apple, HP... At the same time they have certainly the biggest--and likely the cruftiest--codebase in the history of computing.

Ten years ago they were able to deliver technology before the competition, albeit not original technology--DDE and OLE shipped in usable form before the Publish-and-Subscribe and OpenDoc they were copied from. Now things are different: Microsoft is struggling to keep up with Apple. It seems they can't copy Apple's new technology as fast as Apple can invent more new stuff. And at the same time, they're trying to fight two more wars in the embedded space with Xbox and Windows CE. Hence 5 years between desktop OS releases, while Apple has a release every 18 months.

Color me wrong, but... (1)

chewedtoothpick (564184) | more than 7 years ago | (#17458474)

If I remember right, weren't some Cyrix processors based on RISC architecture using a solid-state translation system a while ago? Whoever it was, I remember that already happening but the translation made it about half the speed of competing systems.

I think with virtualization growing, that a future architecture change is almost unavoidable. What we should focus on is developing virtualization to the point where we can switch to RISC or ARM and then run a virtualized system for any x86 based compatibility.

It's superior, of course! (0)

Anonymous Coward | more than 7 years ago | (#17458492)

<sarcasm>

Why bother with all the trouble of teaching your compiler to optimize with register coloring when you can just have a processor that has a couple registers and does the register renaming for you! And everyone knows that more instructions make a more powerful processor. To say nothing of the quantum leap that is stack-based floating point math, because there's nothing more powerful than an HP calculator.

And little endian... it's so great! In C, if you have unsigned x and you want to put the bottom 8 bits into unsigned char y, both y = (unsigned char)x; and y = *(unsigned char*)&x; result in the same, minimal code! (As compared with big-endian, where pointer math must be done in at least the latter case, wasting precious cycles!) One of these days, Motorola, the Internet, and the Arabic number system will understand the foolishness of putting most-significant digits first in numbers.

</sarcasm>
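Sarcasm aside, the little-endian point above is easy to demonstrate with a tiny test program; on a little-endian machine both expressions yield the low byte, while on a big-endian one the pointer cast reads the first (high) byte in memory instead:

    #include <stdio.h>

    int main(void)
    {
        unsigned x = 0x11223344;

        unsigned char by_cast = (unsigned char)x;      /* always the low byte, 0x44 */
        unsigned char by_ptr  = *(unsigned char *)&x;  /* first byte in memory:
                                                          0x44 little-endian,
                                                          0x11 big-endian */
        printf("cast: %02x  pointer: %02x\n", by_cast, by_ptr);
        return 0;
    }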

Seriously, though, it's just inertia. Most people don't like to reinvent wheels. Nothing more.

I'm just getting old, and sick of seeing the worst available solutions to problems getting standardized.

Dynamic binary translation (x86 - ARM etc) (1, Interesting)

Anonymous Coward | more than 7 years ago | (#17458528)

Dynamic translation through JIT optimisation isn't really that efficient for the general case, at least from an x86 source*.

To see a good example of this, look at Transmeta Crusoe, which appeared to be an x86-compatible device, but was actually a 2-issue VLIW core running a software x86 emulator with a JIT compiler. Crusoe was really efficient at benchmarks, but its performance for "real world" applications was not so good. The simplest methods of optimisation - cache, branch prediction table, superscalar issue unit - seem to be more effective than complex optimisations involving recompilation.

In Crusoe, the processor was specifically designed to operate by dynamic translation. It had hardware support for some things, like undoing speculatively executed instructions. If you pick a random ARM or PPC processor as your target, you don't get this, so performance will be even worse.

If your source language isn't x86 code, you can clearly do more. If, for example, your source is written in C, you can do as much as an ordinary compiler. If your source is an intermediate register-transfer language, you can do almost as much. But x86 code doesn't provide much information to facilitate recompilation.

* disclaimer: my PhD is in this subject area but I've not finished it yet.
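For readers who haven't seen one, the core of such a dynamic translator is a dispatch loop over a translation cache. A heavily simplified sketch follows, where lookup_block(), translate_block() and the guest/host types are hypothetical placeholders rather than any real translator's API:

    /* Sketch of a dynamic binary translator's main loop.
     * lookup_block() and translate_block() are hypothetical placeholders. */
    #include <stddef.h>
    #include <stdint.h>

    typedef uint32_t guest_addr;               /* guest (e.g. x86) code address    */
    typedef guest_addr (*host_block)(void);    /* translated code; returns next PC */

    host_block lookup_block(guest_addr pc);    /* translation-cache lookup         */
    host_block translate_block(guest_addr pc); /* JIT-compile one guest basic block
                                                  and insert it into the cache     */

    void run_guest(guest_addr entry)
    {
        guest_addr pc = entry;
        for (;;) {
            host_block blk = lookup_block(pc);  /* cache hit: reuse old translation */
            if (blk == NULL)
                blk = translate_block(pc);      /* cache miss: translate and cache  */
            pc = blk();                         /* run the block; it reports where
                                                   the guest jumped next            */
        }
    }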

Re:Dynamic binary translation (x86 - ARM etc) (0)

Anonymous Coward | more than 7 years ago | (#17459200)

But it worked well for HP with Dynamo, right? Do you know if there's a reason that makes PA-RISC more suited to software translation than x86?

There are no really significant issues with x86 (0)

Anonymous Coward | more than 7 years ago | (#17458542)

There are no non-x86 companies that can afford the required investment to develop and mass-produce a suitably competitive architecture at a reasonable unit price. I guess the one exception to this is POWER, the last of the remaining high-performance RISC architectures. It is a big risk to launch a completely new architecture that no one may want to buy, unless that architecture offers massive improvements over anything that currently exists. Intel should remember this, and start making Itaniums on a current production process, or soon desktop chips will be sporting enough cache, and will be able to address enough memory, to make Itanium irrelevant (in my experience, the only programs that perform better on Itanium are those that utilise massive amounts of cache effectively).
x86 is not so bad these days. The AMD64 extensions negate some of the traditional problems (lack of registers), the modern FP instructions are somewhat better than the original crappy FPU, and the variable-length instructions often act as a crude hardware compression scheme for code being read across the bus.
Incidentally, the instruction set for RISC chips is not necessarily 'smaller'. It is, however, more regular, and it is easier to choose the correct instruction sequence for a particular purpose.

The Ugly Architecture Runs Well (5, Informative)

forkazoo (138186) | more than 7 years ago | (#17458558)

One perspective on the question:

Non-x86 architectures are certainly not inherently better clock for clock. That's a matter of specific chip designs more than anything else. The P4 was a fairly fast chip, but miserable clock for clock against a G4. An Athlon, however, was much closer to a G4. (Remember kids, not all code takes advantage of SIMD like AltiVec!) And the G4 wasn't very easy to bring to super high clock rates. The whole argument of architectural elegance no longer applies.

The RISC Revolution started at a time when decoding an ugly architecture like VAX or x86 would require a significant portion of the available chip area. The legacy modes of x86 significantly held back performance because the 8086 and 80286 compatibility areas took up space that could have been used for cache or floating point hardware, or whatever. Then, transistor budgets grew. People stopped manually placing individual transistors, and then they stopped manually fiddling with individual gates for the most part. Chips grew in transistor count to the point where basically, nobody knew what to do with all the extra space. When that happened, x86 instruction decoding became a tiny area of the chip. Removing legacy cruft from x86 really wouldn't have been a significant design win after about P6/K7.

Instead of being a design win, the fixed instruction length of the RISC architectures no longer meant improved performance through simple decoding. It meant that even simple instructions took as much space as average instructions. Really complex instructions weren't allowed, so they had to be implemented as multiple instructions. Something that was one byte on x86 was always exactly 4 bytes on MIPS. Something that was 12 bytes on x86 might be done as four instructions on MIPS, and thus take 16 bytes. So, effective instruction cache sizes and effective instruction fetch bandwidth grew on x86 compared to purer RISC architectures.

At the same time, the gap between compute performance and memory bandwidth on all architectures was widening. Instruction fetch bandwidth was irrelevant in the time of the PC XT, because RAM fetches could actually be done in something like a single cycle -- less than it takes to get to SRAM on-chip caches today. But, as time went on, memory accesses became more and more costly. So, if a MIPS machine was in a super tight loop that ran in L1 cache, it might be okay. But if it was just going balls to the wall through sequential instructions, or a loop that was much larger than the cache, then it didn't matter how fast it could compute the instructions if it couldn't fetch them quickly enough to keep the processor fed. But x86's absurdly ugly instruction encoding acted like a sort of compression, meaning that a loop was more likely to fit in a particularly sized cache, and that better use of instruction fetch bandwidth was made.

Also, people had software that ran on x86, so they bought 9000 bazillion chips to run it all. The money spent on those 9000 bazillion chips got invested in building better chips. If somebody else had the sort of financial resources that Intel has to build a better chip, and they shipped it in that sort of volume, we might well see an extremely competitive desktop SPARC or ARM chip.

What are you on? (4, Insightful)

Chibi Merrow (226057) | more than 7 years ago | (#17458612)

With Just-in-Time compilation, legacy x86 programs could be painlessly run on ARM/PPC by translating them dynamically at run time, similar to how CIL and Java work.

Do you really believe that? If so, how does one get to this fantasy land you live in? This may be true sometime in the future, but that day is not today.

I happen to own a PowerBook G4. I like it very much. I love nice little purpose-designed chips based on POWER, like the Gekko in the GameCube and its successor in the Wii. But until we're at a point where you can effortlessly and flawlessly run everything from fifteen-year-old accounting packages to the latest thing to come off the shelf WITHOUT some PHB type knowing any funny business is going on behind the scenes, x86 is here to stay.

Plus, RISC has its own problems. It's not the second coming. It's nice, but not for everyone.

Re:What are you on? (1)

jimicus (737525) | more than 7 years ago | (#17458880)

IBM managed it when they migrated the architecture of their AS/400 (now iSeries) machines from a custom CISC chip to a POWER-based platform.

But to return to the original topic, I'm given to understand that way back in the mid '90s (back when there were a lot of architectures), Intel announced that they were working on their own "next generation" chip which would replace x86 and ultimately hammer everything else into the ground -- the Itanium. Back then the x86 wasn't much, and it was easily beaten by SPARC, MIPS, Alpha and almost anything else you can think of.

But Intel had a lot of money, and the companies behind many of these chips started shaking in their little silicon socks. What's the point of continuing to develop a MIPS workstation processor (CPU development is very expensive) when Intel is going to roundly thrash them in the marketplace within 2 years? Far better to spend the money on developing the platform, where there was still some competition.

Intel's processor was delayed and delayed. When it eventually showed up, it was pretty poor. But by then it was far too late - many of these companies had essentially stopped developing their architectures and the x86 had caught up. Because of the economies of scale, suddenly it was possible to build a serious server on an x86 platform and it would be a lot cheaper than anything else on the market.

The rest, as they say, is history.

We don't (3, Insightful)

BenjyD (316700) | more than 7 years ago | (#17458616)

We don't really use x86 CPUs, they're all RISC with an x86->RISC decode stage at the front of the pipeline. As far as I understand it, we use the x86 ISA because there has always been too much x86 specific code around for people to switch easily, which gave Intel huge amounts of money to spend on research and fabs.

Unfair business practices? (1)

plopez (54068) | more than 7 years ago | (#17458620)

If you google 'Intel Business Practices' you will find a number of probes into Intel, its monopoly status, using that monopoly status to keep competitors down, dumping chips to depress prices for competitors, locking AMD out by restrictive licensing, etc.

AMD may be a victim; IBM and the PPC chip may also be victims in all this. Also, the 'Itanic' may be a huge loser of a chip, but it served its purpose: it killed off the Alpha (a damn good chip) and HP's RISC line, and created FUD about the viability of other RISC chips. The 'Itanic' will probably eventually go down, but Intel will still win.

You can argue specs and technical merits as much as you want. But the real reason Intel dominates owes more to its business model than to technical merits. Which is pretty common in IT in general, software as well as hardware.

Duh (1)

The MAZZTer (911996) | more than 7 years ago | (#17458686)

Because all our favorite programs run on x86, and not on whatever other alternative we would choose otherwise. And then we make more programs for x86, ensuring we will continue to use it.

Speak for yourself (1)

Anne Thwacks (531696) | more than 7 years ago | (#17458690)

Not everyone uses x86 (i386). Some of us use UltraSPARC instead.

Perhaps more would if Sun supported FreeBSD better.

Market share and economies of scale (1)

Tester (591) | more than 7 years ago | (#17458734)

The reason x86 is better is that it's more popular, and therefore Intel and AMD have more money to pour into R&D than anyone else -- or I would probably even say everyone else combined. So they make better chips, which sell more, and the cycle continues. The other reason is that most PC software currently used is non-Free and runs on Windows, and Windows runs on x86. Obviously here we are only talking about PCs. Embedded is a completely different story, where x86 is marginal, but the performance requirements are completely different and most of the time much lower or very specialised.

Like it or not, x86 is the portable ISA (5, Interesting)

swillden (191260) | more than 7 years ago | (#17458760)

The x86 ISA hasn't been bound to Intel for some time now. There are currently at least three manufacturers making processors that implement the ISA, and of course there is a vast number of companies making software that runs on that ISA. Not only that, Intel isn't even the source of all of the changes/enhancements in their own ISA -- see AMD64.

With all of that momentum, it's hard to see how any other ISA could make as much practical sense.

And it's not like the ISA actually constrains the processor design much, either. NONE of the current x86 implementations actually execute the x86 instructions directly. x86 is basically a portable bytecode which gets translated by the processor into the RISC-like instruction set that *really* gets executed. You can almost think of x86 as a macro language.

For very small processors, perhaps the additional overhead of translating the x86 instructions into whatever internal microcode will actually be executed isn't acceptable. But in the desktop and even laptop space, modern CPUs pack so many millions of transistors that the cost of the additional translation is trivial, at least in terms of silicon real estate.

From the perspective of performance, that same overhead is a long term advantage because it allows generations of processors from different vendors to decouple the internal architecture from the external instruction set. Since it's not feasible, at least in the closed source world, for every processor generation from every vendor to use a different ISA, coupling the ISA to the internal architecture would constrain the performance improvements that CPU designers could make. Taking a 1% performance hit from the translation (and it's probably not that large) enables chipmakers to stay close to the performance improvement curve suggested by Moore's law[*], without requiring software vendors to support a half dozen ISAs.

In short, x86 may not be the best ISA ever designed from a theoretical standpoint, but it does the job and it provides a well-known standard around which both the software and hardware worlds can build and compete.

It's not going anywhere anytime soon.


[*] Yes, I know Moore's law is about transistor counts, not performance.

It's largely a Microsoft thing (0)

Anonymous Coward | more than 7 years ago | (#17458794)

The reason that we use the x86 at all is because that is what IBM used for the PC. Prior to the PC, there were lots of different computers using many different chips. The PC was, in a sense, open source because IBM published the Technical Reference. That made it possible for anyone to build plug-in boards, and not long after, clones. All the other computers disappeared fairly quickly. Apple had been burned by Apple II clones. They prevented that from happening with the Mac. If they hadn't been so successful, we might all be using 68xxx chips.

Microsoft runs on x86 and they haven't seen any reason to diverge. Just as IBM had become the de facto standard, Microsoft is now the de facto standard. Things probably won't change while that is the case. We should also note that a few years ago there were different chips used in servers. The Alpha comes to mind. Those have pretty much disappeared because x86 performance has increased to the point where the other chips provide no advantage and x86 is a lot cheaper.

Given that Linux can be made to run on just about anything from small embedded systems to supercomputers, there's nothing technical that prevents one from using a non-x86 CPU. Given that our computers are becoming very power-hungry, it seems reasonable that someone will come up with a better architecture. The basic technology has to change to make that likely, though. A CPU and memory that natively communicate optically might be such a change. It would be a chance for everything to start fresh, because much of the old infrastructure would be obsoleted and there would be no reason to stick with it.

Re:It's largely a Microsoft thing (1)

NullProg (70833) | more than 7 years ago | (#17459262)

Apple had been burned by Apple II clones. They prevented that from happening with the Mac. If they hadn't been so successful, we might all be using 68xxx chips.
You seem to be confusing ROM/firmware chips with CPUs.

Microsoft runs on x86 and they haven't seen any reason to diverge.
Microsoft has written software for the 6502/PowerPC/68xxx/Alpha and now the POWER5 processors.

Just as IBM had become the de facto standard, Microsoft is now the de facto standard.
Microsoft software isn't run on 98 percent of the world's CPUs. Think embedded systems.

Given that Linux can be made to run on just about anything
Yeah, I'm still waiting on my 64K Apple IIe Linux distro that will fit on a 128K single-sided floppy.

You're confusing CPU instructions with Operating System services.

Enjoy,

CISC (x86) vs RISC (2, Informative)

Spazmania (174582) | more than 7 years ago | (#17458868)

These days there is little difference under the hood between a CISC processor like the x86 series and a RISC processor. They're mostly RISC internally, but a CPU like the x86 has a layer of microcode embedded in the processor which implements the complex instructions.

http://www.heyrick.co.uk/assembler/riscvcisc.html [heyrick.co.uk]

But we did (4, Insightful)

teflaime (738532) | more than 7 years ago | (#17458882)

vote with our wallets. The x86 architecture was cheaper than PPC, so that's what consumers chose. It is still consistently cheaper than other architectures. That's ultimately why Apple is moving to it too; they weren't selling enough product. (Yes, not being able to put their best chip in their laptops hurt, but most people were asking why they should pay $1000 more for a Mac when they could get almost everything they wanted from a PC.)

Translation in software or hardware (0)

Anonymous Coward | more than 7 years ago | (#17458884)

With Just-in-Time compilation, legacy x86 programs could be painlessly run on ARM/PPC by translating them dynamically at run time, similar to how CIL and Java work.

Well, that's what Transmeta thought. They even had special-purpose hardware to assist the software in the translation. It didn't work out for them. It appears that dedicated hardware translating x86 into whatever internal instruction set the processor uses is the way to go (it's what Intel has done since the Pentium Pro and AMD since it bought NexGen, for instance).
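
Whether the translation is done in software, as Transmeta did, or in hardware, the heart of it is the same loop: translate each block of guest code once, cache the result, and reuse it on the next visit. Below is a minimal sketch of that idea; the guest "bytecode", the block granularity, and the pre-written host routines standing in for generated code are all invented for illustration and don't correspond to any real product.

    /* Toy dynamic-translation loop with a translation cache.
     * "Translation" here just picks a pre-compiled host routine;
     * a real translator decodes guest instructions and emits host code. */
    #include <stdio.h>
    #include <stdint.h>

    #define CACHE_SIZE 64
    #define GUEST_EXIT 0xFFFFFFFFu

    typedef uint32_t guest_addr;
    typedef guest_addr (*host_block)(void);  /* runs one guest block, returns next guest PC */

    static guest_addr block_at_0(void)  { puts("guest block @0");  return 16; }
    static guest_addr block_at_16(void) { puts("guest block @16"); return GUEST_EXIT; }

    /* tiny direct-mapped translation cache: guest PC -> host code */
    static struct { guest_addr pc; host_block code; } cache[CACHE_SIZE];

    static host_block translate(guest_addr pc)
    {
        return pc == 0 ? block_at_0 : block_at_16;  /* stand-in for real code generation */
    }

    static host_block lookup_or_translate(guest_addr pc)
    {
        unsigned slot = pc % CACHE_SIZE;
        if (cache[slot].code == NULL || cache[slot].pc != pc) {
            cache[slot].pc   = pc;
            cache[slot].code = translate(pc);  /* translate once, reuse afterwards */
        }
        return cache[slot].code;
    }

    int main(void)
    {
        guest_addr pc = 0;
        while (pc != GUEST_EXIT)        /* keep dispatching translated blocks */
            pc = lookup_or_translate(pc)();
        return 0;
    }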

Why do consumers love x86 CPUs (1)

supabeast! (84658) | more than 7 years ago | (#17458988)

We use x86 CPUs because they're cheap, versatile, and run all of our old software. All of the little things the OP complains about might matter to a seriously nerdy programmer, but to 99% of the people using computers, those words are just gibberish. Something else to keep in mind about non-x86 CPUs is that yes, they may be faster than x86 at task X or cheaper for task Z, but that's because most of them aren't really designed for general use; if they were used by everybody, the architectures would change to reflect that, and those chips would quickly become less nerd-friendly.

Performance per Time (1)

netpixie (155816) | more than 7 years ago | (#17459034)

The magic number isn't "performance per clock" or "performance per watt" but "performance per unit of time spent writing the program".

x86 is still with us because its architecture fits the way that humans write code.

Cause of Bill G's mom! (0)

Anonymous Coward | more than 7 years ago | (#17459054)

It's because Bill Gates's mom was on the United Way board with the guy responsible for the IBM PC. I thought everyone knew that!

Software vs. Hardware (1)

LibertineR (591918) | more than 7 years ago | (#17459122)

Because back in the days when Hardware dictated Software, Software generally sucked for the end-user and was wholly unapproachable by anyone without geek-cred. Think COBOL. Hardware has never dictated Software success in the marketplace, but the reverse is true.

DOS and Windows MADE the market for x86 machines, just as Apple made the market for the Motorola 68000 series. Companies will purchase whatever hardware is necessary to run their preferred apps. Almost never will you see an organization purchase particular software just so that it can use a particular Hardware platform. That died out after the Notes-on-Unix experience.

performance isn't the only factor (4, Insightful)

briancnorton (586947) | more than 7 years ago | (#17459216)

Why don't we all drive Formula 1 cars? Why not hybrids? Why not motorcycles or electric scooters? The reason is that there is an infrastructure built around supporting a platform that is reliable, robust, and surprisingly extensible (MMX, SSE, 64-bit, etc.). Intel asked the same question and came up with the Itanium. It is fast, efficient, and well understood. This is the same big reason that people don't use Linux: it's hard to switch for minimal tangible benefit. (Not a flame, just an observation.)

Cost effective (1)

Lord Apathy (584315) | more than 7 years ago | (#17459242)

The reason we now use Intel for most general-purpose computing is simply that it's cost-effective. The Intel-based architecture may not be the best in the world, but it's good enough. There are 20 years of development behind the processor, so it's well known what it can do.

There is no reason to go out and develop a proprietary processor when an Intel-based chip will do the job off the shelf. The processor wars are over and, sadly or not, Intel won. They have a cheap processor that works for 99.9% of all computing applications.

This doesn't mean that proprietary processors are completely dead. There are some areas where a general-purpose x86 isn't a good fit; the processor in a PDA or a cell phone would be a good example. But for general-purpose computing, the x86 design is good enough.

Old, old argument (0)

Anonymous Coward | more than 7 years ago | (#17459292)

The Power architecture was known for its better performance per clock; and still other RISC architectures such as the various ARM models provide very high performance per clock as well as reduced power usage...

This is the old argument about RISC vs CISC. The very first thing to ask is: better performance per clock how? When a RISC architecture is implemented to do simple things on every clock in order to be able to run at much higher clock rates, you end up using more clocks to do anything useful. In addition, you pay a penalty in memory efficiency because many more instructions (all doing simple little things) are required to do things that CISC processors can do in one or two instructions. Needing more instructions than a CISC architecture does leads to larger caches to be able to contain typical loops, thereby negating some of the power efficiencies you speak of. Fully half of the power expended in modern processors is burned by the cache, not the actual CPU.

The only RISC architectures that actually managed decent performance used all the tricks in the book: pipelines, branch prediction, caching, etc. Do any of these things sound familiar? They are all things that give the present x86 architecture its performance. By the time you add all these things to a RISC architecture to gain performance, a CISC architecture becomes viable again because of memory-density tradeoffs, cache sizing and power dissipation. There aren't any really simple answers.
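
To make the code-density side of that tradeoff concrete, here is a rough illustration; the instruction sequences in the comments are typical of what a compiler might emit for this kind of statement (written from memory, not taken from any particular toolchain), and the byte counts are approximate.

    /* One C statement, two very different instruction counts. */
    void bump(int *counts, int i)
    {
        counts[i]++;
        /* x86 (CISC): roughly one read-modify-write instruction, about 4 bytes:
         *     addl $1, (%eax,%edx,4)
         *
         * A MIPS-like load/store RISC: five fixed-size instructions, 20 bytes:
         *     sll   t0, a1, 2       # scale the index by 4
         *     addu  t0, a0, t0      # compute the element address
         *     lw    t1, 0(t0)       # load the element
         *     addiu t1, t1, 1       # add 1
         *     sw    t1, 0(t0)       # store it back
         *
         * More, smaller instructions mean a bigger instruction-cache
         * footprint, which is the memory-density point made above. */
    }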

Asking myself as well... (1)

DarkDust (239124) | more than 7 years ago | (#17459378)

I was asking myself that very same question for several years now...

Assembler-wise, I know ARMv5, Motorola 68k and Intel x86. Compared to the former two, x86 is not just plain ugly, it's primitive and dumb. For example, since there are no real all-purpose registers (every register is eventually required by some instruction to be used in a special way), you always have to fall back on the stack or memory. Even though stack access is now quite efficient and cached by the CPU, AFAIK it's still no match for having a few more general-purpose registers.

And the legacy of the 8086 (which was a hack to get to market quickly with a 16-bit processor) and then the 80386 is still with us, and I'm pretty sure today's processors could be faster and/or more efficient if things had been designed better back then. Even Intel seems to think it's a bad design and AFAIK has tried to replace it several times (iAPX 432, maybe i960, Itanium), but they failed horribly because those CPUs were too slow or too late (the market penetration of x86 was too huge by then).

Oh well, there's a saying: "Programming is like sex: one mistake and you have to support it for the rest of your life". The same seems to be true for hardware. That's why we still have this 1980s-ish BIOS and boot process and other stuff that were mistakes from day one.

The reason they're still here is that back then the solution wasn't so horrible and was only meant to stay for a few years, not decades. If the people at Intel and IBM had known that their stuff would stick with us for this long, they would have done a lot of things differently, I'm sure. But to everyone's surprise the 8086 and IBM PC were big successes, and once you've got a certain market penetration you can survive even when there are better alternatives... history has shown this several times already :-/

Why do we use VHS? (1)

iminplaya (723125) | more than 7 years ago | (#17459388)

Why do we use VHS when it is understood that Beta is a better technology?

Marketing.
The herding instinct. We go with the flow.
IP law. And PPC isn't the only superior tech being locked down. The Alpha chip rots on the shelf while we clunk along in our model Ts. *sigh* Let's hope this "electronic paper" thing can kill off the hardware monopolies.

Because ISA doesn't matter (3, Insightful)

Erich (151) | more than 7 years ago | (#17459396)

It's because if you're willing to throw some effort at the problem, the instruction set doesn't matter.

Intel chips haven't really executed x86 directly since the original Pentium. They translate the instructions into a more convenient form and execute those. They do analysis in hardware and find all sorts of opportunities to make code run faster.

As it turns out, you have to do most of the same analysis to get good performance on a RISC too. So you have a bit of extra decoding work, which you might not have to do on a MIPS or something, but you gain some flexibility in what lies underneath. And if you're producing 10x as many processors as Freescale, you can make up for whatever marginal increase in cost the extra complexity adds.

Also, don't buy into the hype. A good ISA doesn't buy you that much on high-end processors. Look at the SPEC numbers for the Core 2 Duo vs. anyone else if you don't believe me. IA64 was supposed to be the greatest thing ever because compilers could do all the work at compile time. There's almost every instruction-set hook imaginable in IA64. And look how that architecture has turned out.

We use x86 because instruction translation is pretty easy and very effective... the same reason why Java algorithms perform pretty well, Transmeta had a halfway decent chip, Alpha could execute x86 code pretty well, and Apple can run PPC apps pretty well on x86. It's not bad enough to be completely broken, and we can engineer our way out of the problems in the architecture.

Of course, if you're counting transistors and joules, some of this breaks down... that's why ARM and DSPs have been effective at the low end.

it also meant an implicit OS choice (0)

Anonymous Coward | more than 7 years ago | (#17459420)

"Voting with your wallet" for PowerPC meant implicitly choosing Apple hardware, Mac OS and the option to install Linux. That was/is not the mainstream choice... seems to me that the prevalence of the Intel x86 architecture has more to do with the fact that Windows runs on on Intel.

Architecture is meaningless for the end user (2, Insightful)

ebunga (95613) | more than 7 years ago | (#17459442)

Architecture is meaningless for the end user and almost meaningless for the application developer. The preferences of OS designers and compiler writers are meaningless unless they can somehow Make Things Better for the end user.

Microsoft Windows (2)

quarrel (194077) | more than 7 years ago | (#17459456)

There are lots of posts already outlining the technical aspects of why (Speed/Power/Momentum/whatever), and while they are certainly important, I think they miss the crux entirely.

x86 is dominant because Microsoft Windows has a monopoly on the desktop computer market with an operating system that runs on x86. Intel and Microsoft have massive synergies: Intel gets dominance of the CPU market because it has Microsoft Windows, and so it can spend massive amounts on R&D and win the speed/power/technical-merit wars (sometimes, or at least often enough), and this massive amount of CPU power allows Microsoft to bring us amazing breakthroughs like the Aero interface (and the new Office ribbon!)...

Why Microsoft got that monopoly, and why it does/doesn't deserve to keep it, gives us endless comments on slashdot already, so no real point in going in to it here.

(Yes, I'm aware there has been Windows for other architectures, but the massive backlog of x86 software that runs on Windows and won't be recompiled for something else is HUGELY important)

--Q