
x86 Evolution Still Driving the Revolution

kdawson posted more than 6 years ago | from the what's-a-few-nanometers-among-friends dept.

Intel 82

An anonymous reader writes "The x86 instruction set may be ancient, in technology terms, but that doesn't mean it's not exciting or innovative. In fact the future of x86 is looking brighter than it has in years. Geek.com has an article pointing out how at 30 years old x86 is still a moving force in technological advancement and, despite calls for change and numerous alternatives, it will still be the technology that gets us where we want to go. Quoting: 'As far as the world of the x86 goes, the future is very bright. There are so many new markets that 45nm products enable. Intel has really nailed the future with this goal. And in the future when they produce 32nm, and underclock their existing processors to allow the extremely low power requirements of cell phones and other items, then the x86 will be the power-house for our home computers, our notebooks, our cell phones, our MIDs and other unrealized devices today.'"


82 comments


technology that gets us where we want to go... (-1, Flamebait)

Anonymous Coward | more than 6 years ago | (#23349034)

... or technology that gets us closer to hell, who knows?

Re:technology that gets us where we want to go... (1, Funny)

Anonymous Coward | more than 6 years ago | (#23349958)

It only depends on the OS you install.

To rehash the same old story (5, Interesting)

Kjella (173770) | more than 6 years ago | (#23349040)

x86 processors aren't x86 processors, and haven't been for many years. They all decode the x86 instruction set into microops which they execute internally. The x86 instruction decoder doesn't take up any significant space, and if there really were an advantage to direct microop code, producers would have offered a "native" microop mode long ago. SSE instructions have provided a lot of the explicit parallelism without touching the standard x86 set. The mathematical complexity doesn't get less than an ADD or MUL anyway, so it would have been all about arranging the queue inside the CPU. So yeah, ADD and MUL survive, but like in mathematics it's just the symbols; in implementation it can be done with anything from microops to an abacus.
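The decode-to-microops scheme described above can be sketched as a toy (the instruction tuples and microop names here are made up for illustration; real decoders emit an undocumented, implementation-specific internal format):

```python
# Toy sketch: cracking CISC-style x86 instructions into RISC-like microops.
# The instruction and microop names are invented for illustration; real
# decoders emit an implementation-specific internal format.

def crack(insn):
    """Split one x86-style instruction into a list of simple microops."""
    op, dst, src = insn
    uops = []
    if dst.startswith("["):          # memory destination: load/modify/store
        addr = dst.strip("[]")
        uops.append(("load",  "tmp", addr))
        uops.append((op,      "tmp", src))
        uops.append(("store", addr,  "tmp"))
    else:                            # register destination: one microop
        uops.append((op, dst, src))
    return uops

print(crack(("add", "eax", "ebx")))    # register form -> 1 microop
print(crack(("add", "[mem0]", "ecx"))) # memory form   -> load/add/store
```

The register-to-register ADD passes through essentially unchanged, which is the poster's point: the symbol survives even though the execution substrate is something else entirely.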

Re:To rehash the same old story (0)

Anonymous Coward | more than 6 years ago | (#23349274)

Boring.

Re:To rehash the same old story (1, Informative)

dreamchaser (49529) | more than 6 years ago | (#23349526)

I respectfully disagree. An x86 processor is any processor that can execute x86 instructions. The underlying architecture (RISC vs CISC, etc.) is irrelevant.

Re:To rehash the same old story (1)

MBGMorden (803437) | more than 6 years ago | (#23350578)

I respectfully disagree. An x86 processor is any processor that can execute x86 instructions. The underlying architecture (RISC vs CISC, etc.) is irrelevant.
RISC and CISC describe instruction sets though, which is what x86 IS. So the underlying architecture can't really be RISC or CISC; x86 itself is CISC by definition. A RISC instruction set is "smaller" (though I've never seen a fixed mark for how small one needs to be, x86 most certainly doesn't qualify), but more specifically, RISC instruction sets have fixed-length instructions. x86 uses variable-length instructions, and that explicitly makes it CISC.
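The fixed- vs variable-length distinction is easy to see with a toy decoder (the byte counts below are hypothetical, though real x86 instructions do range from 1 to 15 bytes while classic 32-bit ARM instructions are always 4):

```python
# Toy contrast between variable-length and fixed-length instruction streams.
# The x86 byte counts here are hypothetical; real instructions run 1-15 bytes.

x86_stream = [1, 2, 5, 3, 1, 6]   # lengths of six hypothetical x86 insns
arm_insn_size = 4                  # classic 32-bit ARM: always 4 bytes

def x86_offsets(lengths):
    """Find each instruction's start offset by decoding sequentially --
    you can't know where insn N starts without decoding insns 0..N-1."""
    offsets, pos = [], 0
    for n in lengths:
        offsets.append(pos)
        pos += n
    return offsets

def arm_offset(i):
    """Fixed-length ISA: any instruction's offset is a simple multiply."""
    return i * arm_insn_size

print(x86_offsets(x86_stream))  # [0, 1, 3, 8, 11, 12]
print(arm_offset(5))            # 20
```

This sequential dependence is a big part of why wide parallel decode is harder on a variable-length ISA.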

Re:To rehash the same old story (3, Informative)

mikael (484) | more than 6 years ago | (#23351018)

Ars Technica has a good article on this debate

RISC vs. CISC - the Post-RISC Era [arstechnica.com] , and Bibliography [arstechnica.com]

In defence of RISC [embedded.com]

The majority of software written for any chip is compiled by a relatively small number of compilers, and those compilers tend to use pretty much the same subset of instructions. The UNIX portable C compiler for example used less than 30% of the Motorola 68000 instruction set.

Re:To rehash the same old story (1)

dreamchaser (49529) | more than 6 years ago | (#23351282)

I used the RISC vs CISC example because, starting with the original Pentium, x86 processors use a very RISC-like internal architecture and microops. What you say is very true though.

Re:To rehash the same old story (0)

Anonymous Coward | more than 6 years ago | (#23354176)

No, it was the Pentium Pro (which became the Pentium II). The original Pentium was just a highly-optimised CISC.

Re:To rehash the same old story (0)

Anonymous Coward | more than 6 years ago | (#23354346)

Bzzzzzzzzt. Try again after you take a look at the Pentium technical docs. The inner, superscalar core used RISC like microoperations.

Re:To rehash the same old story (0)

Anonymous Coward | more than 6 years ago | (#23356234)

Revisionist history appears again. That's the first time I've ever seen that claim. The Pentium didn't have any RISC in it at all. It did have highly optimised microcode, with many simple instructions executing in one clock cycle, and a second integer pipeline which could execute simple one-instruction-per-clock instructions in parallel if the planets were aligned. That isn't RISC. It's just cleverly-optimised CISC.

AMD and Cyrix had the K5 and M1 which were RISC internally at about the same time.

Re:To rehash the same old story (1)

dreamchaser (49529) | more than 6 years ago | (#23354542)

I am sorry to have to argue with you, but you are not quite right here. The original Pentium was more of a 'hybrid' architecture. Under the hood it took advantage of a lot of RISCy architectural features, and its microcode was very RISC-like as well. The Pentium Pro took it a step further: aside from the outwardly exposed instruction set, the internals looked much like any other RISC processor, albeit with fewer registers.

There is really no such thing as RISC and CISC anymore. Again, I was just using that as an example.

Re:To rehash the same old story (1, Informative)

Anonymous Coward | more than 6 years ago | (#23349820)

They are x86 processors. Maybe you don't know that there's more to an ISA than simply how instructions are encoded?

x86 comes from a time when transistors weren't essentially free, so while its design might have made sense in the microcoded-machine era, x86 processors now have a lot of cruft they have to deal with.

Its performance is limited by some of this cruft. For example, x86 has a hardcoded page table structure, and its TLB entries carry no address-space identifiers. Context switches become much more expensive on x86 compared to some other architectures, because on x86 one must flush every entry in the TLB before proceeding.
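The cost of flushing versus tagging can be sketched with a toy TLB model (all names and numbers here are invented for illustration, not a model of any real CPU):

```python
# Toy model of why TLB flushes make context switches expensive.
# The working-set size is invented; the point is only that a tagged TLB
# keeps entries warm across switches while an untagged one loses them all.

def misses_after_run(working_set, tlb, asid=None, tagged=False):
    """Count TLB misses when a process touches its working set."""
    misses = 0
    for page in working_set:
        key = (asid, page) if tagged else page
        if key not in tlb:
            misses += 1
            tlb.add(key)
    return misses

pages_a = range(32)  # process A's working set: 32 pages

# Untagged TLB: a context switch must flush everything, so process A
# misses on its whole working set when it runs again.
tlb = set()
misses_after_run(pages_a, tlb)          # A warms the TLB
tlb.clear()                             # context switch: full flush
print(misses_after_run(pages_a, tlb))   # 32 - every warm entry was lost

# Tagged TLB: A's entries survive the switch under A's identifier.
tlb = set()
misses_after_run(pages_a, tlb, asid="A", tagged=True)
print(misses_after_run(pages_a, tlb, asid="A", tagged=True))  # 0
```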

Re:To rehash the same old story (2, Interesting)

renoX (11677) | more than 6 years ago | (#23351430)

>x86 processors aren't x86 processors, and haven't been for many years. They all decode the x86 instruction set to microops which they execute internally.

Wrong: even the early x86 processors were microcoded, so all x86 CPUs have had this decoding phase, just with a varying proportion of instructions decoded to microops rather than executed directly.
All these CPUs are x86 CPUs, because they're *designed* to run x86 instructions *efficiently*, whatever the implementation details; so an 80286 and a Core 2 Duo are both x86 CPUs, but an Abacus or an Alpha isn't.

That said, I wonder how much the braindead x86 ISA design costs in terms of performance compared to a reasonable ISA like, say, ARM (with the Thumb-2 extension it gets code density close to x86). Of course it depends on the code, but I remember that the change from 8 to 16 integer registers in the migration from x86 to x86-64 could bring up to 20% improvement, which is huge!

Of course there's also design cost: the x86 ISA is so bizarre that it's quite difficult to implement, but the payoff is huge given that x86 is nearly everywhere.
And yes, I agree with the article that it could become a serious competitor to ARM in the future (but not with the Atom).
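The 8-to-16 register gain mentioned above comes largely from spill code: values that don't fit in registers must be shuttled through memory. A deliberately naive sketch (real compilers allocate by live range; this just counts the overflow):

```python
# Naive sketch of register pressure: how many simultaneously live values
# overflow the architectural register file and must be spilled to memory.
# Real register allocators use live ranges and graph coloring; the loop
# size here is a hypothetical example, not a measurement.

def spills(live_values, num_regs):
    """Values that don't fit in registers must live in memory (spill)."""
    return max(0, live_values - num_regs)

hot_loop_live_values = 12   # a hypothetical loop keeping 12 values live
print(spills(hot_loop_live_values, 8))   # 4 spilled with x86's 8 GPRs
print(spills(hot_loop_live_values, 16))  # 0 spilled with x86-64's 16 GPRs
```

Every avoided spill removes a load/store pair from the hot path, which is where improvements of the order the poster remembers can come from.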

Re:To rehash the same old story (1)

Stradivarius (7490) | more than 6 years ago | (#23353008)

At least when I was still in school (2001-2002), the logic to decode x86 into the native micro-ops was actually a very sizable fraction of the chip area (almost half IIRC).

That's a large part of how Transmeta was able to get such insane power reductions with their Crusoe CPUs - they offloaded the x86-to-VLIW-micro-op translation step into software, rather than do it in circuitry. That caused a performance hit but saved a LOT of power.

The power of competition... (4, Interesting)

Bert64 (520050) | more than 6 years ago | (#23349068)

It just goes to show what can be achieved in an open market with multiple competitors (Intel, AMD, Cyrix, VIA, IDT etc), as opposed to a stifled closed market with one party or a small number of collaborators (Alpha, HPPA, IA64)...

A few years ago, x86 was utter garbage compared to virtually every other architecture out there... But the size and competitiveness of the x86-compatible market has forced companies to invest lots of money in improving their products, to the point that x86 is now ahead of most if not all of its proprietary counterparts.

The sooner Microsoft's stranglehold on the industry is broken, the better, so that the software world can start providing the benefits we got from the x86-compatible hardware market.

Re:The power of competition... (3, Interesting)

moderatorrater (1095745) | more than 6 years ago | (#23349582)

The sooner microsoft's strangle hold on the industry is broken, the better
It's interesting that you should say that considering everything that's going on. Ubuntu's the friendliest desktop distro to come around ever as far as most people are concerned. Apple keeps gaining market share, slowly but surely eating away at Microsoft. Vista came out and it included things that Macs and Linux have had for years, including a 3d desktop and something akin to sudo. In the desktop market, the pressure's building.

In the server market Windows has always had much more competition, and it's not getting any smaller. Solaris has ZFS, which is creating a lot of buzz; I remember when WinFS sounded cool, but now it sounds like it would be an incremental upgrade in the face of the ZFS revolution. It wasn't even a year ago that the story came out about the Microsoft sysadmins who had to switch from Linux to Windows Server and hated it, prompting Microsoft to look into more configuration in text files.

In the browser market, Microsoft has finally started seeing that they can't rely on IE6 forever, and now they've got IE7 out with IE8 in the works. They're moving closer to standards compliance, although they're taking their sweet time to do it and they're not taking a direct route. Safari's generating buzz, especially on the iphone, opera's dominating the embedded market and they're still the browser of choice for those who like to feel superior, and firefox is spreading like fire as swift as a fox! (it was a stretch, I know, but I couldn't resist)

The point is that Microsoft is feeling the pinch. Vista came out and showed everyone that they were wounded, and now all the little guys are running up and taking bites out of their markets before Microsoft can respond. They'll come back with efforts to maintain market share, but the competition is heating up and Microsoft can't (and doesn't) ignore it any longer.

Re:The power of competition... (1)

Bert64 (520050) | more than 6 years ago | (#23349866)

Yes the situation is improving, but microsoft are still powerful enough to make it very difficult to run anything else... Once those barriers are gone, the situation should change very rapidly.

Re:The power of competition... (1)

FiestaFan (1258734) | more than 6 years ago | (#23354822)

Yes the situation is improving, but microsoft are still powerful enough to make it very difficult to run anything else... Once those barriers are gone, the situation should change very rapidly.
Problem is a lot of software, especially specialty software, is Windows only. How is that going to change any other way than very slowly? Wine?

Re:The power of competition... (1)

nullchar (446050) | more than 6 years ago | (#23356252)

Unfortunately, Microsoft's strong-arm tactics of "encouraging" Windows on mobile devices (like the eeePC) are keeping them on top.

The "shrunken" PC and the "enlarged" mobile device will converge soon and that's where the market is at.

If linux can be on top in the growing mobile market, it will succeed. Otherwise, it will be an even longer battle.

Re:The power of competition... (4, Interesting)

dpilot (134227) | more than 6 years ago | (#23349832)

> stifled closed market ... (alpha, hppa, ia64)....

Into this thought we have to insert IA64, and I'm not sure how the heck we do. In any discussion of IA64, competition and the closed market have to come up. IA64 was designed first and foremost to be a closed market, utterly unclonable. Though it's an Intel/HP joint venture, neither company owns any of the IP related to IA64. Instead the IP is owned by a separate company, and Intel and HP license the IP from that company. That way, the IA64 IP is protected from any cross-licensing agreements that Intel or HP may have made, or may make in the future, since they don't have the rights to make any such agreements.

IA64 is closed as no architecture ever has been before. But it has been practical matters preventing its widespread adoption, not the competition-proof IP bomb that is its basic nature.

Oh yeah, IANAL.

Re:The power of competition... (2, Informative)

NotBornYesterday (1093817) | more than 6 years ago | (#23350146)

I agree with your point about competition being good, but technically, Intel tried to keep x86 closed and proprietary. Competition from AMD and others grew despite the spec not being open.

x86 (0, Redundant)

kueball (248452) | more than 6 years ago | (#23349104)

I for one welcome our x86 overlords

Baloney (3, Informative)

LizardKing (5245) | more than 6 years ago | (#23349116)

The article appears to be written from the perspective of someone who knows fuck all about the embedded market. The majority of embedded products with something more sophisticated than an 8-bit processor use Motorola M68K, ARM or MIPS derivatives. That's likely to stay that way, as x86 processors tend to be large, comparatively power hungry and focused on high clock speeds - especially the ones from Intel and AMD. In fact, the only vaguely embedded device I've come across with an x86 chip was using a 486 clone (from Cyrix I think).

Re:Baloney (1)

Bandman (86149) | more than 6 years ago | (#23349352)

Right, but what they're talking about is making x86 chips small enough, and less power hungry, to take the place of less powerful chips in embedded devices.

Re:Baloney (2, Informative)

LizardKing (5245) | more than 6 years ago | (#23349494)

Yup, but the author's argument that familiarity with development tools for x86 (and what seems like an assumption that those don't exist for other architectures) is going to be appealing also shows he's clueless. There are already excellent suites of tools for embedded development; in fact most of them are the same as you'd use for desktop or server development - particularly gcc, gdb and so forth targeted for your particular architecture, along with IDEs and emulators you can run on a typical PC. If the author thinks something like Visual Studio is going to appeal for embedded programming then he's even more of a nitwit.

Re:Baloney (1)

Goaway (82658) | more than 6 years ago | (#23349578)

That would require completely new x86 chips. You can't just re-use desktop processors for embedded systems; there's far too much support circuitry required. Embedded processors need to be highly integrated, with lots of circuitry on-chip.

And if you need new chips for that, why use x86 for those when you can use ARM?

Re:Baloney (1)

getnate (518090) | more than 6 years ago | (#23350846)

Intel is working on Atom processors for this purpose.

Re:Baloney (1)

RupW (515653) | more than 6 years ago | (#23349558)

In fact, the only vaguely embedded device I've come across with an x86 chip was using a 486 clone (from Cyrix I think).
The Madge MkIII token ring network card was built around a lower-power stripped-down x86-clone core. They chose it, IIRC, for the programming tools available. Alas I can't find any more details :-/ and the chip package just says "K2 Ringrunner".

Re:Baloney (1)

Moridineas (213502) | more than 6 years ago | (#23349750)

Depends on what exactly the definition of embedded device is, but Soekris (http://www.soekris.com/ [soekris.com] ) and a number of competitors are quite popular. Very cool products, all of them.

I'm currently designing a system using one to monitor weather + soil conditions in my garden.

Still driving the revolution (1)

$RANDOMLUSER (804576) | more than 6 years ago | (#23349144)

Because, like Robespierre, it (and the "inevitability" of Itanic) has killed off all the possible rivals. MIPS, Alpha, PA-RISC, SPARC, PPC, take your choice.

Misspelt `inertia'. (0, Interesting)

Anonymous Coward | more than 6 years ago | (#23349160)

Innovation is bringing something to the market that wasn't there before. Most innovation isn't about inventions, but about tweaks to business models, minor changes to product lines, or even just a revamp of the sales material.

This silly blog post looks back at x86 and only x86, fails to put it in perspective, and is otherwise not well researched. In short, a contribution in the best web 2.0 tradition.

As to x86, the major software vendor's complete failure to move platforms (something which that other, different company managed twice), along with its proposed johnny-come-lately successor, is a major failure. x86-64 isn't from Intel, but Intel finds itself forced to run along with it.

Sometimes I wonder what would have happened if IBM had done the right thing from a technical perspective and chosen anything else, like the m68k or the Zilog Z8000, for their PC. Much less braindamage to millions of programmers, for one.

The whole thing lives by inertia and collective mediocrity. IBM, Microsoft, and Intel: this is the best they could come up with?

Re:Misspelt `inertia'. (1)

Uncle Focker (1277658) | more than 6 years ago | (#23351334)

This silly blog post looks back at x86 and only x86
Wait. A blog post about x86 only talks about x86? OMG HOW SCANDALOUS!!!

Re:Misspelt `inertia'. (2, Interesting)

drsmithy (35869) | more than 6 years ago | (#23353120)

As to x86, the major software vendor's complete failure to move platforms (something which that other, different, company managed twice) [...]

What an idiotic comparison. What would the business benefit of moving to another architecture have been ?

(We'll ignore for a second that the "major software vendor's" product has been sold for five or six different architectures (depending on how you count) and internally ported to several others.)

Mobile phones + x86 ... again! (4, Interesting)

bestinshow (985111) | more than 6 years ago | (#23349210)

I think that ARM will be rather more tenacious than this guy thinks. 32nm will not be a miracle thing that somehow magically drops x86 (even Atom) down into a mobile phone friendly CPU in terms of power consumption and size (never mind the supporting chipset). Companies with years of ARM code will not suddenly decide to port to x86 on the off-chance that x86 will get more than a tiny proportion of the mobile phone market.

ARM in a CPU costs under a dollar to license. Those ARM SoCs probably cost under $20 each, and they're tiny and have everything you need on them. Intel would have to provide a dozen Atom variants (in terms of features and size, not clock speeds and number of cores) to even gain the interest of this marketplace. That's why 3 billion ARM based cores are created every year. There's a huge variety of options available in a truly competitive market.

Re:Mobile phones + x86 ... again! (1)

Bandman (86149) | more than 6 years ago | (#23349370)

Companies with years of ARM code will not suddenly decide to port to x86 on the off-chance that x86 will get more than a tiny proportion of the mobile phone market.

They said the same things about Apple and moto chips.

Of course, in that case, there was a single controlling power that told people how it would be. There's no "Steve Jobs" of the embedded market.

Re:Mobile phones + x86 ... again! (1)

LizardKing (5245) | more than 6 years ago | (#23349580)

They said the same things about Apple and moto chips.

Yes, but the article discusses processors for embedded devices. What do you think's inside an iPod or iPhone? An ARM processor.

Re:Mobile phones + x86 ... again! (1)

ajlitt (19055) | more than 6 years ago | (#23354622)

Writing an emulation layer is fine if you're Apple. It's not fine if you're a 10k unit/year medical equipment vendor with hundreds of thousands of dollars spent on qualifying your product for clinical use. It's not fine in the low-margin consumer electronics market where you buy most of your software components, often tied to one architecture or another, to save on development costs.

Re:Mobile phones + x86 ... again! (1)

4D6963 (933028) | more than 6 years ago | (#23355458)

Writing an emulation layer is fine if you're Apple

Actually they pretty much just bought Rosetta from whichever company independently made it. Also, if I can add my two cents on the subject, I think ARM has pretty much won the embedded market. Maybe not forever, and maybe it has some serious competition out there, but I don't think anyone has to worry about their dominant position for a while.

Re:Mobile phones + x86 ... again! (1)

ajlitt (19055) | more than 6 years ago | (#23355526)

Apple did buy Rosetta, but I was thinking about their 68k-PPC transition for some reason. They wrote that one in-house.

ARM is pretty much the winner in the 32-bit embedded world, though MIPS has a hold in video apps.

Re:Mobile phones + x86 ... again! (4, Insightful)

ajlitt (19055) | more than 6 years ago | (#23349588)

Right on. Besides, the mobile market is fueled by the further integration of peripherals into SOCs. Performance and power aside: if I were going to design a smartphone, I wouldn't want to go with a three-piece cpu and chipset, not to mention licensing and development for BIOS on a new platform. And that's before including special ASICs for functionality not built into the chipset (3D accel, radio interfaces, LCD & touch panel). And then I'd be stuck with one of the few vendors who make modern embedded x86 chips.

If I go with ARM instead, I get a wide choice of SOCs from which I can pick and choose the built-in features (including the ones mentioned above). Bootloaders are generally included as part of the BSP for any given embedded OS, and if I don't like that there's always redboot or uboot (probably more too, I haven't been in the embedded world in a few years). If I don't want to use vendor A's product on revision 2 of the product, then I choose from one of the many remaining products out there, and my code ports over cleanly.

are there any... (1)

zogger (617870) | more than 6 years ago | (#23349616)

...normal desktops or laptops that use that ARM?

Re:are there any... (1)

LizardKing (5245) | more than 6 years ago | (#23349804)

Not sure whether it counts as a laptop, but despite the size the Sharp Zaurus CL3200 had all the features of one and used an ARM processor. As for desktops, the Archimedes had an ARM processor (in fact the processor was invented for it) and was an amazing machine in its day. Nowadays, you can get an ARM based desktop machine from Iyonix [iyonix.com] but they're a very niche product.

Re:are there any... (0)

Anonymous Coward | more than 6 years ago | (#23349940)

Not since the mid-1990s.


Acorn Archimedes [wikipedia.org]

Re:Mobile phones + x86 ... again! (1)

renoX (11677) | more than 6 years ago | (#23359788)

Nobody has said that the replacement of ARM by x86 would be done in one day, but still, Intel has a huge investment in fabs; remember how Intel beat the RISCs in PCs and servers?

By putting more transistors into x86 CPUs (which gave adequate performance) at a lower price than the competitors.

Sure, using more transistors usually means consuming more power, which is a disadvantage in the embedded market, but if Intel can come up with a better low-power process than the competitors, then it's possible that x86 could beat the ARM CPUs even with the burden of the complex x86 decoder.
So in fact x86 vs ARM is a competition between Intel's fabs and TSMC's fabs (and the others), and usually Intel has the better process. Enough to beat ARM? I don't know.

Re:Mobile phones + x86 ... again! (1)

pslam (97660) | more than 6 years ago | (#23361108)

So in fact x86 vs ARM is a competition between Intel fab and TSMC fab (and the other) and usually Intel has better process, enough to beat ARM? I don't know..

Absolutely not. No amount of process refinement is going to push x86 to the same power consumption as ARM. Atom is about 10-100 times the power consumption per MHz of current mobile ARMs. It's orders of magnitude short.

The mobile and low-power embedded industries found long ago that they don't need to stick to one architecture. In fact, the desktop industries are starting to realise this too (e.g. Apple). You can compile any standards-compliant C/C++ code for another architecture without a problem these days. When you can do that, why would you pick x86? The answer: you don't. You pick ARM, as there's a huge number of low-power, highly integrated SoCs available. Or you pick MIPS for set-top boxes, because there's lots of choice of video-processing cores for it. Nobody in their right mind picks x86.

Intel needs to make Atom about 100 times better, and until then they'll be laughed out of every mobile phone business visit within minutes.

Re:Mobile phones + x86 ... again! (0)

Anonymous Coward | more than 6 years ago | (#23361712)

Intel needs to make Atom about 100 times

Rubbish. Intel would need to make a much larger number of Atom chips in order to amortize fab costs.

Anyway, Atom is more powerful because of multiplier pipeline and built-in registers. Also ARM is really slow at running open-source software. And when you use ARM you need a MEMORY BUS which takes up loads of power and makes the overall solution more inefficient than Atom.

Re:Mobile phones + x86 ... again! (1)

renoX (11677) | more than 6 years ago | (#23362406)

> Atom is about 10-100 times the power consumption per MHz of current mobile ARMs. It's orders of magnitude short.

That's because Intel didn't target the same power envelope for the Atom as ARM does:
the Atom targets the OLPC and the Eee - ultra-mobile PCs, not phones, that's all.
BUT Intel has announced that they're going to build a CPU in the same 'power envelope' as ARM; this will be the real competition for ARM, not the Atom.

As you said, the embedded industry is not tied to a given architecture; they'll choose whatever has the best performance/power (depending on price too, of course), so if Intel's lead in fabs produces the best CPU (which isn't automatic, of course - it could be another lemon like the P4), they'll use it even if it has an x86 ISA.

As for the desktop, Apple has switched from PPC to x86, so it's not a good example!

>Nobody in their right mind picks x86.

For now sure, much like x86 weren't competitive with RISC at one time, in the future, we'll see.

Sure, but... (4, Insightful)

MostAwesomeDude (980382) | more than 6 years ago | (#23349282)

Although it's true that we have been forced to use x86 for quite a while, and as a result have gotten quite good at using it, that doesn't mean it's an optimal instruction set. amd64 is an ugly hack, as is PAE, and although they work, they don't change the fact that x86 was never intended to handle 64-bit address spaces.

Consider the various POWER arches, and the ridiculously powerful ARM arch. ARM, for example, has an SIMD extension called Neon, which makes audio decoding possible at something like 15 MHz. These are very cool and potentially powerful architectures that have never been fully explored due to Microsoft's monopoly in the nineties.

(To be fair, Microsoft couldn't have forced adoption of another arch even if they wanted to; they homogenized the market way too far.)

Re:Sure, but... (4, Interesting)

Moridineas (213502) | more than 6 years ago | (#23349794)

Although it's true that we have been forced to use x86 for quite a while, and as a result have gotten quite good at using it, that doesn't mean that it is an optimal instruction set. amd64 is an ugly hack, as is PAE, and although they do work, they don't change the fact that x86 was never intended to handle 64-bit spaces.
The point is, who cares one iota if x86 is an "ugly" architecture? It gets the job done, and it hands-down beats most of the competition in what matters most of the time: speed. Saying something like "amd64 is an ugly hack" is just completely irrelevant. If you're one of the very few programmers in the world who regularly write assembly-level code, you might have a valid complaint. If you're a more typical developer or an end user, the ancestral design of your CPU couldn't be less important.

Re:Sure, but... (2, Insightful)

LizardKing (5245) | more than 6 years ago | (#23350038)

Speed comes much further down the list of priorities in most embedded applications. Size, power consumption, heat dissipation and even code size matter more - and code size is related to instruction set. Even when it comes to performance, x86 is relatively inferior compared to something like an ARM processor - it's mostly the higher clock speed and Intel's ability to build new fabs faster than anyone else that's kept them in the game.

Re:Sure, but... (1)

Moridineas (213502) | more than 6 years ago | (#23350500)

I'm not arguing the case of embedded applications--though I WOULD point out my other post to this article which mentions devices like Soekris http://www.soekris.com/ [soekris.com] which are x86, powerful, and small.

No doubt some/many embedded devices benefit greatly from non-x86. X86 is very steadily improving. Part of this is for sure because of Intel+AMD research divisions and fabs. What I'm saying is, the "why" is irrelevant.

How can you say that x86 is relatively inferior to ARM, performance-wise? Show me an ARM that competes with the latest offerings from AMD or Intel. It's all theory! Incidentally, I've read papers analyzing relative performance that suggest the modern ia32/ia64 architectures actually benefit from their hybrid RISC/CISC design in terms of optimizing the flow of microops.

Re:Sure, but... (2, Interesting)

Waffle Iron (339739) | more than 6 years ago | (#23350884)

Even when it comes to performance, x86 is relatively inferior compared to something like an ARM processor - it's mostly the higher clock speed

I don't believe that. I got a Compaq iPaq PDA a few years back so I could play around with it. I was excited that it had a 200MHz ARM CPU, and I was expecting that it would run with similar performance to a 200MHz Pentium.

I loaded Linux on to the thing and compiled a few test programs. I was highly disappointed to find out that the CPU actually ran with a performance level closer to a 66MHz 486. Live and learn. Well, it turns out that that's the price you pay for having almost no cache and a single ALU with in-order execution. This CPU certainly wasn't defeated by Intel's high clock speeds.

Re:Sure, but... (1)

LizardKing (5245) | more than 6 years ago | (#23353770)

The 200MHz Pentium would be roughly four or five times the transistor budget of the ARM found in an iPaq, so something had to give - and that was the cache and some complexity. The ARM chip also ran much cooler and with lower power consumption than the Pentium, which needed a fan and a sizable heat sink. My point about an x86 processor being inferior is that it's crippled by the instruction set, which requires a lot of decoding before the RISC-like core can actually do its work. There are diagrams that show how much real estate on a number of x86 processors from Intel is taken up by the decoder, and it's considerably more than on a processor with a more efficient and elegant instruction set.

Re:Sure, but... (1)

Moridineas (213502) | more than 6 years ago | (#23354008)

There are diagrams that show how much real estate on a number of x86 processors from Intel is taken up by the decoder, and it's considerably more than on a processor with a more efficient and elegant instruction set.
And the decoder also allows for efficient instruction reordering, etc. This is not nearly so 1-dimensional an issue as you make it seem!

Re:Sure, but... (2, Interesting)

Waffle Iron (339739) | more than 6 years ago | (#23354284)

The size of the x86 decoder as a percentage of die area has been decreasing ever since the days of the 386. It's now pretty negligible. In return for that, you get a very compact instruction set coding that saves on cache space, thus cutting down on the largest single consumer of real estate on the die.

I notice that the ARM has added a whole alternative instruction set to save on code size, too. So the idea must have some merit.

Re:Sure, but... (2, Interesting)

Skapare (16644) | more than 6 years ago | (#23353874)

If all the effort that has been put into x86 had instead been put into another architecture that was cleaner to begin with, and designed specifically for being able to migrate to 64 bit, who's to say we wouldn't be even better off than we are now with the x86 ancestry?

Sure, I agree, we've made x86 work well. But we are comparing a processor that has had a tremendous focus to a few alternatives that have had much less focus in terms of bringing them up to speed.

There is what I refer to as "the x86 cost". The "ugly" architecture often means that people have to spend more time figuring out things in it due to the many layers of improvements that have been done. It makes life far more complicated for those that have to work at the "bare metal" level.

One excuse often made to justify upgrading an architecture is the need for compatibility. The suggestion I made back when 64-bit was being talked about was to go with a dual processor compatibility setup, where you have both the old 32-bit processor, and an all new 64-bit processor, side-by-side in the same machine. The oldest operating systems would just ignore the new processor. New operating systems would run on the new processor and manage the old processor like a virtual machine (so you can still run the old OS at the same time). As this migration proceeds, hardware vendors would begin making the machines with the 32-bit CPU optional. Eventually, it would be gone and the space used for a 2nd 64-bit processor. There's your compatibility.

Re:Sure, but... (1)

Moridineas (213502) | more than 6 years ago | (#23354212)

If all the effort that has been put into x86 had instead been put into another architecture that was cleaner to begin with, and designed specifically for being able to migrate to 64 bit, who's to say we wouldn't be even better off than we are now with the x86 ancestry?
This is very possible. We've certainly seen some promising architectures that have fizzled due to market share or scalability. On the other hand, a lot of architectures designed from scratch to be clean and forward-looking have virtually never left the launch pad (think Itanium). And then there's x86, which has been consistently cheap and is consistently able to scale. Can we really be doing that much better than x86? I'm not sure.

And on the other hand, as I mentioned in another post, the hybrid design of decoding ia32/64 instructions into micro-ops offers some great opportunities for hardware optimization. It's not quite so simple as it seems.

The "ugly" architecture often means that people have to spend more time figuring out things in it due to the many layers of improvements that have been done. It makes life far more complicated for those that have to work at the "bare metal" level
And for people at the bare metal level, yeah, x86 is cruftier looking than MIPS or something else. ia32/64 also has some great compilers and a lot of people with a lot of experience.

Compatibility -- imho, with the growth of managed+interpreted languages, along with the rise of virtualization and perhaps even more importantly multiple cores, this is going to become less and less of an issue.

Re:Sure, but... (4, Insightful)

the_humeister (922869) | more than 6 years ago | (#23350128)

These are very cool and potentially powerful architectures that have never been fully explored due to Microsoft's monopoly in the nineties.
How exactly is an ISA monoculture Microsoft's fault? Microsoft did make Windows for multiple CPU architectures. Guess which ones people bought? The x86 version because the hardware is a lot less expensive. If there's any entity to blame, it's IBM, HP, DEC, Sun etc for not bringing down the prices of their architectures.

Re:Sure, but... (1)

higuita (129722) | more than 6 years ago | (#23364926)

I actually agree with you that the x86 monoculture wasn't MS's fault, but MS didn't help either... they only offered Windows for Alpha, and even then it had many problems and was quickly dropped.
Alpha didn't last either, and all the other CPUs settled into their niches. The only "old" CPUs still around outside niche markets are MIPS (mostly scaled down to embedded hardware) and PowerPC (in the Mac market, but now that that's lost, the PowerPC desktop is a severe minority).

IMHO, only PowerPC had a chance to fight x86, but they would have had to convince MS to port Windows to it and/or aggressively push OS/2 (or even Linux, although that was a little early) to the market, and do it extremely cheaply (via OEM and enterprise licenses and, unofficially, easy-to-pirate floppies/CDs)... and by market, I mean BOTH the home and company ones. Supporting PowerPC growth with only the Mac (and servers) was too little.

Re:Sure, but... (1)

ichigo 2.0 (900288) | more than 6 years ago | (#23350600)

ARM, for example, has an SIMD extension called Neon, which makes audio decoding possible at something like 15 MHz.
What, a specialized processor is able to do a task in fewer cycles than a more general processor? You must be joking!

The instruction set doesn't dictate how the hardware is built. I could design an ARM processor completely unsuitable for audio decoding which needs 1 GHz to do it in real time. Does that mean the ARM instruction set sucks? No, it just means that my glorious processor is not designed for that purpose.

Re:Sure, but... (1, Insightful)

Anonymous Coward | more than 6 years ago | (#23350678)

Microsoft didn't homogenize anything; it was the hardware manufacturers who did. MS wrote DOS for the IBM PC and when other people copied the PC, they licensed DOS for it. MS wrote software for other personal computers (Apple being the best example), but it was the PC clones that took over the marketplace. Indeed, you could argue that if the market did not become homogenized, home computers would not be the ubiquitous devices they are today. Computers would instead be cheap toys for hobbyists or expensive tools for scientists and engineers.

Keep in mind that MS has had Windows available for i860 (codenamed N-Ten, the source of the NT moniker), MIPS, x86, Alpha, PPC, IA64, and x86-64. Also, Windows did not become popular until version 3.0, OS/2 never really took off, and Windows NT didn't become pervasive until XP. When you look at things this way, it is pretty clear that MS has almost no control over the market.

dom

Re:Sure, but... (2, Informative)

QX-Mat (460729) | more than 6 years ago | (#23350810)

ARM, for example, has an SIMD extension called Neon, which makes audio decoding possible at something like 15 MHz.
ARM is a heavily pipelined architecture with a reduced instruction set designed to perform specific tasks like decoding. It takes a lot of silicon to allow a pipeline to decode things outside a traditional math/vector unit. Rarely is there any kind of crossover or feedback late in the execution stage, which makes pipelines less predictable. To make things worse, they're hard to fence, which makes pipelined operations awkward to preempt.

I don't think it's been an abuse of position (in the vertical monopolistic sense), but rather the development of a technology that created a parallel effect on the market winner/common supplier. I believe there are more benefits from using general purpose CPUs. If MS had taken the pure RISC route we'd have co-processors for everything now.

The future will lie in complex instruction sets that are incrementally updated with very long instruction word "feature pipelines". Transmeta had a point with VLIW CPUs, but suffered because they tried to use the tech to emulate general-purpose functionality, rather than have legacy fetch-decode-execute silicon do the mundane stuff and offload to VLIW for bespoke applications.

Your comments on PAE are spot on, btw: it is an ugly hack, but so are most methods of indirect access. I can't see translation going away any time soon. We do it everywhere - protected state, dynamic linking, mmio... everywhere. Unless CPU manufacturers start providing wider internal archs that aren't linked to the width of the address bus, we're not going to see that (multiplexing is expensive!!)

Matt

Re:Sure, but... (2, Informative)

argent (18001) | more than 6 years ago | (#23354140)

Pipelines are an implementation technique, not part of an architecture. Some architectures make it easier to take advantage of pipelining than others, but that doesn't mean they're pipelined architectures. Hell, the intel x86-family processors have had longer pipelines than just about anything else for at least a decade. P4 family chips had up to 33 pipeline stages, neatly beating the profligate G5's max-23-stage pipeline.

The Core 2 still has 14 stages in its pipeline.

As for the ARM, the XScale has 5 stages, other arm implementations have had up to 8.

Re:Sure, but... (1)

QX-Mat (460729) | more than 6 years ago | (#23356530)

Pipelines are an implementation technique, not part of an architecture
I disagree with you somewhat. Pipelines are integral to foundation of the processing of the execution of the architecture and not simply an implementation technique.

I'm happy to admit that the modern demand on data flow gives the effect that pipelines are a method of implementation (take vector units and the need to poll them), but if you ignore data bottlenecks, you'll still find that a von Neumann CPU will be a pipelined machine with more alternative accumulators, execution paths and multipliers than a similarly functional Harvard CPU, which will employ limited execution paths and fewer accumulators/multipliers/adders, instead favouring raw clock rate.

I am of course referring the execute stage - not the instruction pipeline: this can be confusing, as many CPUs have multiple state pipelines so that they may perform ground work such as fetch, decode and execute simultaneously - but this is not a heavily pipelined mechanism unless they can accumulate on the same clock. Heavily pipelined architectures will allow simultaneous execution and then accumulate, whereas general purpose machines will accumulate on the next clock.

Matt

Re:Sure, but... (1)

argent (18001) | more than 6 years ago | (#23356946)

Pipelines are integral to foundation of the processing of the execution of the architecture and not simply an implementation technique.

I can't parse that.

Re:Sure, but... (3, Interesting)

edwdig (47888) | more than 6 years ago | (#23351682)

Although it's true that we have been forced to use x86 for quite a while, and as a result have gotten quite good at using it, that doesn't mean that it is an optimal instruction set. amd64 is an ugly hack, as is PAE, and although they do work, they don't change the fact that x86 was never intended to handle 64-bit spaces.

x86 wasn't intended to handle 32-bit either. But when it made that jump, they actually cleaned things up and made the instruction set nicer. There are a lot fewer weird limitations on the instruction set in 32-bit mode than in 16-bit mode. The jump to 64-bit mode cleaned things up even further and actually makes things rather nice. It's not an ugly hack in any way; it's actually quite elegantly done.

PAE, yeah, that's an ugly hack, but it's really all you can do if people are demanding > 4 GB memory on a 32 bit processor. You could do things nicer if you used segmentation, but most people developed a hatred of it due to the weird way it was implemented on the 8086 and refused to consider it ever since.

Re:Sure, but... (0)

Anonymous Coward | more than 6 years ago | (#23352336)

Did you ever check the differences between x86 64-bit and 32-bit opcodes?

When you talk about ugly hacks, it doesn't look to me like you've read a single line about them.

A single prefix controls whether you use the additional registers and the 64-bit-wide registers.

With no prefix, the opcode is virtually the same as in 32-bit mode (with a few exceptions, like jmp and call).

Stack usage is now always 16-byte aligned.

Some really unneeded stuff got thrown out, and other useful things were added (like the NX bit).

It didn't make x86 opcodes any uglier; from my point of view, it cleaned them up a lot.

Re:Sure, but... (0)

Anonymous Coward | more than 6 years ago | (#23352704)

Although it's true that we have been forced to use x86 for quite a while
I feel your pain. Suffer, malcontent.

doesn't mean that it is an optimal instruction set.
Whatever. Been saying 'whatever' to that for about 20 years. Keep yapping about it.

amd64 is an ugly hack, as is PAE, and although they do work, they don't change the fact that x86 was never intended to handle 64-bit spaces.
For every one 64 bit instruction executed on POWER or whatever favorite boutique CPU you care to bring up, a couple million execute on x86-64. Only you malcontents are complaining about it.

Consider the various POWER arches
No. You can eat your $5000 POWER CPUs with your whine and cheese.

Re:Sure, but... (1)

jamesh (87723) | more than 6 years ago | (#23358622)

To be fair, Microsoft couldn't have forced adoption of another arch even if they wanted to

Do you remember Windows NT on Alpha? MIPS? PowerPC? SPARC? i860/i960?

I think Microsoft had dropped everything except Alpha by SP6a, though.

So they actually did try and capture some other markets, although you're right in that they would never have gotten people off of x86.

Is there any reason for PPC any more? (1)

r_jensen11 (598210) | more than 6 years ago | (#23349590)

I know that several of the cores in the Cell resemble PPC's and I seem to recall an association of PPC's and one of the X-Boxes.

Is there any reason to use a PPC these days? At least, for desktop usage?

Re:Is there any reason for PPC any more? (1)

the_humeister (922869) | more than 6 years ago | (#23350188)

For desktop usage? No. You use it to be different (hence the raging Mac vs. PC wars back in the day).

Re:Is there any reason for PPC any more? (1)

Detritus (11846) | more than 6 years ago | (#23350624)

They were designed into a whole bunch of digital cameras. That's an application that requires low power and high speed.

Re:Is there any reason for PPC any more? (0)

Anonymous Coward | more than 6 years ago | (#23351034)

Close but no cigar. The Cell is basically one PPC core and a bunch of underpowered RISCs. 360's CPU is a triple core PPC.

Re:Is there any reason for PPC any more? (1)

pstorry (47673) | more than 6 years ago | (#23351352)

The PowerPC's desktop presence was pretty much killed when Apple switched.

I don't think IBM makes any workstations that use the PPC chips anymore - but they still use the related POWER architecture in their higher-end servers.

So on the desktop, it's dead.

In the device and embedded market, however, it's quite popular. It has an unusual niche "above" ARM and "below" x86, so to speak.

This is because it has higher performance capabilities and better integration with commodity computing hardware than most ARM chips can provide, whilst having lower power requirements and higher per-watt performance than X86 chips.

This article from IBM's developerWorks has two sections in which PPC is compared with X86 and ARM:
http://www-128.ibm.com/developerworks/library/pa-migrate/ [ibm.com]
It's not as biased as its IBM provenance might make you think, and provides a nice summary of the differences in real world usage.

As for where PPC is being used - well, you probably own a device with a PPC chip in it, and just don't know it.
They're used in vehicle management systems by Ford, they're in a wide variety of laser printers, they're used in some network/NAS devices.

Oh, and they're also used in all the current generation consoles, of course - so maybe you do own a PPC processor and knew it after all! ;-)

Re:Is there any reason for PPC any more? (0)

Anonymous Coward | more than 6 years ago | (#23352428)

I don't think IBM makes an workstations that use the PPC chips anymore


Nope, they still sell intellistation power desktops [ibm.com] . Also, you can get both the 520 [ibm.com] and 550 [ibm.com] servers in "deskside" style cases.

Re:Is there any reason for PPC any more? (0)

Anonymous Coward | more than 6 years ago | (#23353130)

Not really.

• Since Macs dropped the processor, support for Linux on PPC has been falling behind SPARC on the whole.
(Ubuntu supports x86, x86_64, sparc)
(Flash has been released for SPARC on Linux over PPC)

• The CISC vs. RISC battle of ideologies doesn't exist anymore... RISC quietly won.

• PPC hasn't had a real speed advantage over x86 chips for a while.

you mean.... (1)

pxuongl (758399) | more than 6 years ago | (#23350440)

you mean x86 Intelligent Design is Still Driving the Revolution. Evolution is a theory, not a fact.

Re:you mean.... (1)

Vectronic (1221470) | more than 6 years ago | (#23356218)

Hmm... I think a more appropriate correction would be: "The x86 Revolution Is Still Driving The Evolution"

Because a "Revolution" is a change of ideas, while an "Evolution" is a change of fact.

Evolution, as far as passing or discarding various mutations of the parent animal onto its children goes, may be a "theory" (to some).

But the evolution of processors is a fact, because exactly what changed, how it changed, and why it changed has been entirely documented by hundreds if not thousands of individuals.

"revolution" (2, Interesting)

nguy (1207026) | more than 6 years ago | (#23355958)

That's "revolution" as in "spinning in place"? :-)

Seriously, x86 these days is just a compression format for a kind of RISC processor. It's probably not a very good compression format, but that probably also doesn't make a big difference.

Just another blind x86 enthusiast.... (1)

jozmala (101511) | more than 6 years ago | (#23360044)

The article is rather too enthusiastic about x86 pushing into other domains.

Let's make it clear: a modern x86 decoder is more complex than an entire simple single-issue RISC processor, and it consumes more power.
Yes, that's a SINGLE unit in the front end of the pipeline that RISC processors do not need.

About those "RISC ops" that x86 instructions are translated into: they are HUGE compared to RISC instructions, roughly 4x as large, since they need to map the worst-case size for each field of every instruction. Those are bits that need to be moved and processed in several pipeline stages before the execution stage, and they take power there.

Then there are extra local memory operations due to two-operand instructions and register spills.

All this ugliness creates bad power efficiency in low-power applications.

x86 has won on the desktop through the sheer amount of engineering resources put into its development, and the engineering resources to build a superior manufacturing process. Pushing upwards to the higher end is possible.

x86 is an elephant, and you can make an elephant fly if you apply enough force, as Intel does. But there is serious doubt that any company can create enough force to push it through a keyhole.