
RISC Vs. CISC In Mobile Computing

kdawson posted more than 5 years ago | from the comin'-around-again-on-the-guitar dept.

Hardware

eldavojohn writes "For the processor geeks here, Jon Stokes has a thoughtful article up at Ars Technica analyzing RISC vs. CISC in mobile phones (Wikipedia on Reduced Instruction Set Computers and Complex Instruction Set Computers). He wraps it up with two questions: 'How much is the legacy x86 code base really worth for mobile and ultramobile devices? The consensus seems to be "not much," and I vacillate on this question quite a bit. This question merits an entire article of its own, though,' and 'Will Intel retain its process leadership vs. foundries like TSMC, which are rapidly catching up to it in their timetables for process transitions? ARM, MIPS, and other players in the mobile device space that I haven't mentioned, like NVIDIA, AMD/ATI, VIA, and PowerVR, all depend on these foundries to get their chips to market, so being one process node behind hurts them. But if these RISC and mobile graphics products can compete with Intel's offerings on feature size, then that will neutralize Intel's considerable process advantage.'"


CISC is dead (5, Insightful)

jmv (93421) | more than 5 years ago | (#23469032)

There are no CISC CPUs anymore. There are RISC CPUs with RISC instruction sets (e.g. ARM) and there are RISC CPUs with CISC instruction sets (e.g. x86). The cores are mostly the same, except that the chips with CISC instructions need to do a little more work in the decoder. It requires a few extra transistors and a bit more power, but it's not a huge deal for PCs and servers. Of course, for embedded applications it makes a difference, and for those it makes sense to have more "specialised" architectures (from microcontrollers to DSPs, ARM and all kinds of hybrids).

long live CISC (-1, Flamebait)

Anonymous Coward | more than 5 years ago | (#23469182)

nt (sucks)

Re:CISC is dead (1)

gangien (151940) | more than 5 years ago | (#23469192)

I believe the hardware that mainframes run on would be classified as CISC (ESA/390, I think?), which isn't dead. Not by a long shot, unfortunately.

Re:CISC is dead (3, Informative)

Anonymous Coward | more than 5 years ago | (#23469254)

The CPUs in today's IBM mainframes are based on the POWER architecture. That makes them technically RISC processors. You're a bit behind the times.

Re:CISC is dead (1)

gangien (151940) | more than 5 years ago | (#23469516)

OK, but aren't there still tons of old mainframes running?

Re:CISC is dead (3, Informative)

hey! (33014) | more than 5 years ago | (#23469796)

OK, but aren't there still tons of old mainframes running?


No, there are not lots of old mainframes still running. But there are probably more new mainframes running than when computers were exclusively located in data centers. Back in the day, your chances of working directly with a mainframe, given that you worked with computers, were 1.0; now it's probably more like 0.001. But there are a lot more people working with computers.

Re:CISC is dead (1)

Guy Harris (3803) | more than 5 years ago | (#23469734)

The CPUs in today's IBM mainframes are based on the POWER architecture.

Not according to an IBM slide presentation on the Z6 microprocessor [ibm.com], which is the latest processor for the z/Architecture machines (the machines that are descendants of the System/360 mainframes).

The AS/400^WiSeries^WSystem i midrange machines do use PowerPC processors, but they're different from the z/Architecture machines.

Re:CISC is dead (1)

crunchy_one (1047426) | more than 4 years ago | (#23470972)

The third slide in the presentation clearly states that the Z6 is a sibling of the Power6 and that the Z6 uses many of the functional units of the Power6. Further along in the presentation, slide 14 talks about the use of multiple passes and millicode to handle CISC ops.

Clearly the Z6 is exquisitely optimized to execute the z/Architecture instruction set efficiently. It is also clear that it is part of the Power6 family.

Re:CISC is dead (3, Informative)

Guy Harris (3803) | more than 4 years ago | (#23472176)

The third slide in the presentation clearly states that the Z6 is a sibling of the Power6

As that slide says, "Siblings, not identical twins", "Different personalities", and "Very different ISAs=> very different cores".

Further along in the presentation, slide 14 talks about the use of multiple-passes and millicode to handle CISC ops.

To be precise, it says "Multi-pass handling of special cases" and "Leverage millicode for complex operations"; that means "complex instructions trap to millicode", where "millicode" is similar to, for example, PALcode in Alpha processors - it's z/Architecture machine code plus some special millicode-mode-only instructions to, for example, manipulate internal status registers. See, for example, "Millicode in an IBM zSeries processor" [ibm.com].

Clearly the Z6 is exquisitely optimized to execute the z/Architecture instruction set efficiently. It is also clear that it is part of the Power6 family.

It's clear that, as the third slide says, the Z6 "share[s] lots of DNA" with the Power6, i.e. it shares the fab technology, some low-level "design building blocks", large portions of some functional units, the pipeline design style, and many of the designers.

It's not at all clear, however, that it would belong to a family with "Power" in its name, given that it does not implement the Power ISA. If it's a sibling of the Power6, that makes it, in a sense, a member of a family that includes the Power6, but, given that its native instruction set is different from the Power instruction set, it makes no sense to give that family a name such as "Power6" or anything else involving "Power" - and it certainly makes no sense to assert that it's "based on the POWER architecture", as the person to whom I was responding asserted.

Re:CISC is dead (1)

GlobalMind (597374) | more than 4 years ago | (#23473676)

The Power Systems line comprises servers that run IBM i, AIX, and Linux on Power. This includes the JS12 & JS22 POWER6 blades.

There is effectively no more System p or System i, just Power Systems.

System z isn't part of that.

Re:CISC is dead (5, Informative)

RecessionCone (1062552) | more than 5 years ago | (#23469238)

Actually, have you heard of micro-op and macro-op fusion? Intel is touting them as a big plus for their Core microarchitecture: basically, they take RISC internal instructions and fuse them into CISC internal instructions (micro-op fusion) and also take sets of CISC external instructions and fuse them into CISC internal instructions (macro-op fusion).

So basically, things are so much more complicated these days that you can't even call x86 chips RISC CPUs with CISC instruction sets.

We're in a post-RISC era.

Re:CISC is dead (1)

AKAImBatman (238306) | more than 5 years ago | (#23469266)

We're in a post-RISC era.
You know you've been on Slashdot too long when you see a statement like this and immediately try to make a "post-Columbine era" joke out of it.

CISC is alive and well and so is RISC (5, Interesting)

erice (13380) | more than 5 years ago | (#23469268)

They just aren't very important distinctions anymore.
Both refer to the instruction sets, not the internal workings. x86 was CISC in 1978 and it's still CISC in 2008. ARM was RISC in 1988 and still RISC in 2008. AMD64 is a borderline case.

People get confused by the way current x86s break instructions apart into micro-ops. That doesn't make it RISC. That just makes it microcoded. That's how most CISC processors work. RISC processors rarely use anything like microcode, and when they do, it is looked upon as very unRISCy.

Today, the internals of RISC and CISC processors are so complex that the almighty instruction set processing is barely a shim. There are still some advantages to RISC but they are dwarfed by out-of-order execution, vector extensions, branch prediction and other enormously complex features of modern processors.

Re:CISC is alive and well and so is RISC (4, Informative)

Waffle Iron (339739) | more than 5 years ago | (#23469580)

People get confused by the way current x86s break instructions apart into micro-ops. That doesn't make it RISC. That just makes it microcoded.

That most certainly does not make it microcoded. Microcode is a set of words encoded in ROM memory that are read out one per clock, whose bits directly control the logic units of a processor. Microcode usually runs sequentially, in a fixed order, may contain subroutines, and is usually not very efficient.

Modern CISC CPUs translate the incoming instructions into a different set of hardware instructions. These instructions are not coded in a ROM, and they can run independently, out of order and concurrently. They are much closer to RISC instructions than to any microcode.

The X86 still contains real microcode to handle the stupid complex instructions from the 80286 era that nobody uses anymore. They usually take many clocks per instruction, and using them is not recommended.

Looks like microcode, smells like microcode,... (2, Interesting)

MarkusQ (450076) | more than 4 years ago | (#23470990)

That most certainly does not make it microcoded. Microcode is a set of words encoded in ROM memory that are read out one per clock, whose bits directly control the logic units of a processor. Microcode usually runs sequentially, in a fixed order, may contain subroutines, and is usually not very efficient.

Modern CISC CPUs translate the incoming instructions into a different set of hardware instructions. These instructions are not coded in a ROM, and they can run independently, out of order and concurrently. They are much closer to RISC instructions than to any microcode.

The distinction you seem to be trying to draw here is not very sound. Modern CPUs "translating instructions into hardware instructions" with a gate maze is essentially the same thing as pulling a wide microcode word from ROM whose bits directly control the logic units. In both cases you put some bits in to start the process off, and you get a larger number of bits as a wide bus of signals out, which are used to direct traffic inside the CPU. The picture only looks different on the surface.

Specifically, the different parts of each microcode instruction executed in parallel then, just as they do now, though out-of-order execution was much rarer (some DSPs had it, IIRC). This was not because microcode as it was then conceived couldn't handle it, but because the in-CPU hardware to support it wasn't there. There's no point going through gymnastics to feed your ALU if you've only got one and it's an order of magnitude slower than the circuit that feeds it.

One of the biggest annoyances of staying in any one field for too long is having to watch some technology following the logical path from conception to fruition go through an endless series of renaming (AKA jargon upgrades) that add nothing but confusion and pomposity to the field.

--MarkusQ

Re:Looks like microcode, smells like microcode,... (3, Informative)

Waffle Iron (339739) | more than 4 years ago | (#23472124)

Modern CPUs "translating instructions into hardware instructions" with a gate maze is essentially the same thing as pulling a wide microcode word from ROM whose bits directly control the logic units.

Only if you ignore the mechanism of how it's done. However, the term "microcode" was created to describe the mechanism, not the result.

Under your definition, it would appear any division of an instruction into multiple suboperations would qualify as microcode. That would presumably include the old-time CPUs that used state machine sequencers made from random flip flops and gates to run multi-step operations.

The end result of those state machines was the same as microcode, and the microcode ROM (which included the next ROM address as part of the word) was logically a form of state machine. However, the word microcode was used to differentiate a specific type of state machine, where the logic functions were encoded in a regular grid-shaped ROM array, from other types of state machines. Modern CISC code translation does not involve ROM encoding, and is not this type of state machine.

Re:CISC is dead (1, Insightful)

Anonymous Coward | more than 5 years ago | (#23469494)

CPU architecture not a huge deal?

"Neither Intel nor Motorola nor any other chip company understands the first thing about why that architecture [Burroughs B5000] was a good idea.

Just as an aside, to give you an interesting benchmark -- on roughly the same system, roughly optimized the same way, a benchmark from 1979 at Xerox PARC runs only 50 times faster today. Moore's law has given us somewhere between 40,000 and 60,000 times improvement in that time. So there's approximately a factor of 1,000 in efficiency that has been lost by bad CPU architectures.

The myth that it doesn't matter what your processor architecture is -- that Moore's law will take care of you -- is totally false." --Alan Kay

I'm not saying (nor is Alan, I suspect) that RISC is better, or even that RISC versus CISC matters, but architecture certainly does. If we didn't have x86 compatibility as a goal, do you think your CPU would look anything like the Core2Duo? Have you ever built a large system where initial architectural assumptions did *not* significantly affect the final performance?

Re:CISC is dead (1)

exley (221867) | more than 5 years ago | (#23469520)

There are no CISC CPUs anymore.
For cutting-edge processors that is definitely the case. CISC really doesn't lend itself well to techniques like pipelining, so Intel takes its "complicated" legacy CISC instructions and breaks them down into several smaller RISC operations (much like parent was saying).

I believe there are still a lot of older processors out there in use that would be considered CISC CPUs, though. This would be more true in spaces like embedded systems, where they don't need the latest and greatest to accomplish what they need to do.

Re:CISC is dead (2, Interesting)

Darinbob (1142669) | more than 5 years ago | (#23469910)

There are no CISC CPUs anymore.
One big difference between CISC and RISC was philosophy. CISC didn't really have a philosophy, though; it was just the default. The RISC philosophy was to trim out the fat: speed up the processing and make efficient use of chip resources, even if it makes the assembler code uglier. I.e., toss out the middleman that is the microcode engine, moving some of its work down to hardware and some up to the programmer's level. Then use the savings for more registers, concurrency, etc.

The new x86 Intel CPUs don't really have that philosophy. They use many techniques pioneered on RISC CPUs, but they haven't disposed of the instruction set. Compilers are still stuck trying to optimize at the CISC level. The microcode engine is still there in some sense, converting high level x86 code to internal micro operations. Intel keeps CISC working by pouring huge amounts of resources into the design.

Of course Intel is in a bind here. They can't dump the x86, it's their bread, butter, and dessert. They have to make CISC fast because their enormous customer base demands it. They're forever stuck trying to keep an instruction set from 1978 going strong since they can't just toss it all out and make something simpler.

Re:CISC is dead (1)

marxmarv (30295) | more than 5 years ago | (#23470410)

One big difference between CISC and RISC was philosophy. CISC didn't really have a philosophy, though; it was just the default.
Insofar as there was a philosophy, it was to make best use of the scarce resources of the time: memory (so pack as much functionality as possible into an instruction byte) and programmer labor (so make the assembly language as versatile to hand-code as possible).

Of course Intel is in a bind here. They can't dump the x86, it's their bread, butter, and dessert. They have to make CISC fast because their enormous customer base demands it. They're forever stuck trying to keep an instruction set from 1978 going strong since they can't just toss it all out and make something simpler.
I wouldn't be quite so sympathetic toward them. They do it because it's a good barrier to entry and they already own most of the ways around it. Besides, people just won't buy a desktop processor that doesn't natively run x86 code, so if you want the world to spend fewer cycles running x86 code you either have to run x86 code plus a RISCier instruction set (and maybe ARM would be nice and license Thumb patents toward that end), OR dump the legacy x86 code entirely through a platform/perspective/way of life shift, as seems to be happening in mobile platforms.

Re:CISC is dead (1)

jmv (93421) | more than 5 years ago | (#23470448)

Compilers are still stuck trying to optimize at the CISC level.

No. Compilers are fully aware of what's happening and take into account what happens after CISC instructions are broken down. Of course there is some gymnastics and overhead involved in translating CISC instructions into RISC instructions, but it's not as bad as you make it sound. What's really complex in modern general-purpose CPUs is all the stuff related to superscalar execution: dependencies, out-of-order execution, branch prediction. Those are still there on a Power chip (except the Cell, which is special) or other high-performance RISC CPUs. Personally, I would love to see x86 go away (we have the source code so who cares!), but I don't think we'd see huge gains either.

Re:CISC is dead (2, Informative)

level_headed_midwest (888889) | more than 4 years ago | (#23470936)

They tried to throw x86 out with the Itanium. That initially went over about as well as selling ice to Eskimos in December but IA64 has started to get a little more traction in the huge-iron arena as of late. While it would be nice to be done with x86, IA64 isn't where it's at as Intel owns the ISA licenses lock, stock, and barrel. This means it's back to the Bad Old Days when chips cost a fortune and performance increases were small and infrequent. Also, IA64's EPIC model sucks on most code as it's strictly in-order.

Re:CISC is dead (3, Insightful)

RzUpAnmsCwrds (262647) | more than 4 years ago | (#23470982)

People don't get RISC, and they don't get CISC.

The defining characteristic of CISC is that it assumes that the fetch part of the fetch/execute cycle is expensive. Therefore, instructions are designed to do as much as possible so you need to use as few as possible.

The defining characteristic of RISC is pipelining. RISC assumes that fetches are cheap (because of caches) and thus higher instruction throughput is the goal.

The KEY difference between RISC and CISC isn't the number of instructions or how "complex" they are.

RISC instructions are fixed-size (usually 32 or 64 bits). CISC instructions tend to vary in size, with added words for immediate data and other trimmings.

CISC has lots of addressing modes, RISC tends to have very few.

CISC allows memory access with most instructions. Most RISC instructions operate only on registers.

CISC has few registers. RISC has many registers.
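
To make the memory-access difference above concrete, here is a minimal C sketch; the function name is mine, and the commented instruction sequences are illustrative only, not any particular compiler's output:

/* Sketch of the load/store vs. read-modify-write point.
   The C is real; the commented instruction sequences are only
   illustrative. */
void add_to_memory(int *p, int x)
{
    /* CISC (x86): one read-modify-write instruction can do this,
       roughly "add [p], x".
       RISC (ARM/MIPS/POWER): typically three instructions: load the
       word into a register, add, store it back. */
    *p += x;
}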

Arguing about whether CISC or RISC is faster is moot. Core 2 isn't fast because it's "CISC" or "RISC", it's fast because it's a very well designed architecture. The fact is, designing ANY competitive CPU today is extraordinarily difficult. RISC made a difference in the early 90s when CISC designs were microcoded and RISC could be pipelined. But most performance CPUs today are vastly different internally.

Re:CISC is dead (1)

jhol13 (1087781) | more than 4 years ago | (#23471122)

When the RISC was "invented" the CPU was the bottleneck. Therefore making it simpler could make it faster (MHz, pipeline, ...) and that way speed up the computation.

This is no longer true, now the bottleneck is memory.

Re:CISC is dead (1)

arktemplar (1060050) | more than 4 years ago | (#23473830)

Actually, the superscalar nature of the Intel processors lends itself to some interesting low-level optimisations that the hardware itself does with the instructions: out-of-order execution, pipelined branching, etc. (Although it sounds like jargon, it isn't once you have worked on it; when optimising along the lines of the BLAS libraries, you end up having to deal with this stuff.)

To reiterate: compilers are sticking to optimising the CISC code, true.
However, that is because:
The superscalar architecture causes very low-level optimisation to take place 'automatically' (the hardware takes care of that).

Re:CISC is dead (2, Informative)

phantomfive (622387) | more than 5 years ago | (#23469952)

Don't know if you read the article, but the author goes into great detail about the advantages of RISC over CISC. While you are right, that Intel has managed to play some tricks to get CISC running really fast, it has been at the cost of other things. Imagine if all that space on the die used for transistors to do microcode translation had been used for cache, instead. Also, as you mention, it takes more power. This is extremely important in the embedded area, and is becoming more important in the server room as well.

Some more advantages of RISC over CISC: it is easier to work with, giving designers more time to optimize other areas of the chip. AMD and Intel have spent a bundle of cash to get the old x86 to run decently.
RISC is easier for compiler writers. In the x86, there are so many instructions, the chip designers don't optimize all of them equally. If you want maximum efficiency, you will need to use the correct instruction, and it may vary from chip to chip. Whereas with a RISC architecture, it's a lot easier to guess which instruction to use (there may be only one).

There really is no advantage to CISC, other than the backwards compatibility of the x86 architecture.

Re:CISC is dead (1)

jmv (93421) | more than 5 years ago | (#23470516)

If you had bothered to read my post, you'd see that's essentially what I'm saying. There *are* penalties to CISC, but they're not that big on *non*-embedded CPUs. And I consider x86 CPUs to be mostly RISC chips with a CISC front-end. They've been like that since the Pentium Pro (hence why I said CISC is dead).

In the x86, there are so many instructions, the chip designers don't optimize all of them equally. If you want maximum efficiency, you will need to use the correct instruction, and it may vary from chip to chip. Whereas with a RISC architecture, it's a lot easier to guess which instruction to use (there may be only one).

The way things work now, it's hard to optimise regardless of whether it's RISC or CISC because the problems have changed. It's no longer about knowing how many cycles each instruction takes. Now, it's about latencies, pipelines, dependencies, cache, branch prediction, and so on. You end up with the same problems on any super-scalar architecture, not just the ones with CISC front-ends.

There really is no advantage to CISC, other than the backwards compatibility of the x86 architecture.

Hence the subject of my original post.

What the Heck? (5, Insightful)

AKAImBatman (238306) | more than 5 years ago | (#23469086)

RISC vs. CISC? What is this, the early 90's? There are no RISC chips anymore, except as product lines that were originally developed with the RISC methodology in mind. Similarly, true CISC doesn't exist either. Microcode has done wonders in turning complex instructions into a series of simpler instructions like one would find on a RISC processor.

The author's real point appears to be: x86 vs. Other Embedded Architectures. Without even looking at the article (which I did do), it's not hard to answer that one: There is no need for x86 code in a mobile platform. The hardware is going to be different than a PC, the interface is going to be different than a PC, and the usage of the device is going to be different than a PC. Providing x86 compatibility thus offers few, if any, real advantages over an ARM or other mobile chip.

If Intel's ATOM takes off, it will be on the merits of the processor and not on its x86 compatibility. Besides, x86 was a terrible architecture from the get-go. There's something mildly hilarious about the fact that it became the dominant instruction set in Desktop PCs across the world.

Re:What the Heck? (1)

cyberbill79 (1268994) | more than 5 years ago | (#23469214)

Reminds me of a 90s movie:

It's a P6 chip. Triple the speed of the Pentium
Yeah. It's not just the chip, it has a PCI bus
RISC architecture is gonna change everything

Yes, I know. Not technically correct. But had to do it.

Re:What the Heck? (1)

billcopc (196330) | more than 5 years ago | (#23469414)

Perhaps I hold a fonder memory of that film than most, but I thought the P6 line was a nice little "what if" moment. Sure, they got a lot of the tech wrong, but isn't it funny how 10 years later, Macs run on Intel? Might it be that the movie's technical consultant dreamed of a faster Intel-based Mac? The whole point of that scene was to convey the fact that the kids had bleeding-edge hardware.

*sigh* Despite its flaws, it was a fun, shiny movie. Nobody would pay $10 to see what geeks REALLY do. Not even Darren Aronofsky could instill wonder in long pan shots of an E16 desktop with a dozen transparent terminal windows running ntop, tcpdump and nethack.

Re:What the Heck? (1)

cyberbill79 (1268994) | more than 5 years ago | (#23469548)

Despite what most think, I still enjoy this movie. I actually have it running now thanks to the inspiration this article gave me. Still have my red book somewhere. ;)
It really was 'bleeding edge' tech (in theory) when it came out. We've come so far so quickly. A friend and I still joke about the '28.8bps' modem (I had the Zoom modem featured in the film).
Now on the same note, the virus attack scene at the end was repeated today at my place of work. I haven't seen bugz crawling across the screen literally eating it for so long, I couldn't stop laughing. :)
They don't like to pay me for my tech work, so had them call the local exterminator. Too bad. Last place I ran the show, did too good of a job, didn't have a single virus call. ;)

There are very few RISC, but there are some (4, Informative)

EmbeddedJanitor (597831) | more than 5 years ago | (#23469308)

Mostly little 8-bitters (PIC and AVR), but there are many processors that tend towards the RISC end of the spectrum (ARM, MIPS, etc.) which clearly have RISC roots. ARM, MIPS, etc. dominate in the mobile space because they switch fewer transistors to achieve the same function (one of the goals of RISC design) and thus use less power.

The only real point in x86 is Windows compatibility. Linux runs fine on ARM and many other architectures. There are probably more ARM Linux systems than x86-based Linux systems (all those Linux cellphones run ARM).

Apart from some very low level stuff, modern code tends to be very CPU agnostic.

Re:There are very few RISC, but there are some (1)

Salgat (1098063) | more than 5 years ago | (#23469532)

The AVR32 is a RISC processor that is competing for the mobile embedded market.

Re:There are very few RISC, but there are some (1)

NuShrike (561140) | more than 5 years ago | (#23470712)

What is 'Windows compatibility'?

Currently deployed Windows Mobile is a working subset of the Win32 api used on PCs, uses the same 'standards compliant' protocols to communicate with other computers, and the only difference is it was compiled for the ARM.

With such a blistering example that the x86 instruction set is unnecessary to achieve platform portability, what advantage can the Atom bring to the table? Out-of-order execution isn't really a big advantage, especially when ARM is already sampling such CPUs.

The current embedded SoC deployments are converging with the PC's destiny, and what will really come to matter is the packaging prettiness (iPhone) and the implementation of the system and ISA (unlike HTC's and Qualcomm's inability to ship working video drivers for the Kaiser/Tilt), more than how fast and fancy the CPU is.

The last point is especially important when we want a subset of the desktop/laptop functionality with all the expected modern polish without destroying the technology we still can't improve - the battery.

Re:What the Heck? (0)

Anonymous Coward | more than 5 years ago | (#23469650)

There are no RISC chips anymore

Weird... this guy [slashdot.org] disagrees with you.

Re:What the Heck? (3, Informative)

AKAImBatman (238306) | more than 5 years ago | (#23469874)

Weird. Half the responders disagreed with him and you didn't notice?

RISC design was really, really attractive from an architectural standpoint. It simplified the hardware to such a great degree that it was completely worth the pain and suffering it put compiler writers through. With microcode, even stupid CISC architectures like x86 were able to run on a RISC CPU.

But here's the rub: It is always slower to use multiple instructions to complete a task that could be completed in a single instruction with dedicated silicon.

With that simple fact in mind, it didn't take long for CISC-style instructions to start reappearing in the silicon designs. Especially once the fab technologies improved enough to negate the speed advantages in early RISC chips. (e.g. Alpha seriously kicked ass back in the day.) Chip designers like Intel took note of what instructions were slowing things down and began adding them back into the silicon.

Thus the bar moved. Rather than trying to keep the silicon clean, the next arms race began over who could put fancier vector instructions into their CPUs. Thus began the war over SIMD instructions. (Which, honestly, not that many people cared about. They are cool instructions, though, and can be blazingly fast when used appropriately.)

An interesting trend you'll notice is that instructions take more or fewer cycles to execute between revisions of processors. (Especially with x86.) Part of this is definitely changes in the microcode and CPU design. But part of it is a re-evaluation of silicon usage. Some instructions which used to be fast thus become slow when they move to microcode, and some instructions that were slow become fast when they move to silicon.

Rather interesting to watch in action. :-)

Re:What the Heck? (2, Interesting)

eyal0 (912653) | more than 4 years ago | (#23472818)

I didn't think about it that way, but you're right, it's true. If you don't care how hot or big your chip gets, give your user as many instructions as you can. Having a bunch of little instructions means that they all take as long as the slowest one, even if most of them don't need a full clock cycle.

The interesting part of the article is about the process. Intel's domination has been in their process, always a few steps ahead of the competition (maybe just a half step ahead of TSMC). Newer processes have always yielded faster, smaller, and cooler chips. Not anymore. 65nm didn't make chips use less power, and 45nm doesn't help either.

In a sense, one dimension of the playing-field has become level for Intel and the custom fabs. And that's the level in which embedded plays.

Re:What the Heck? (2, Interesting)

Darinbob (1142669) | more than 5 years ago | (#23469948)

Microcode has done wonders in turning complex instructions into a series of simpler instructions like one would find on a RISC processor.
But that's exactly what most CISC-style computers were doing when RISC was first thought about. This is the classic CISC computer design model, such as with the VAX: high-level instructions with complex addressing modes, all handled by a microcode engine that had its own programming with a simpler and finer-grained instruction set (some had a VLIW-like microcode, some were more RISC-like).

Microsoft dependency or lack of... (1)

DrYak (748999) | more than 5 years ago | (#23470572)

The author's real point appears to be: x86 vs. Other Embedded Architectures. Without even looking at the article (which I did do), it's not hard to answer that one: There is no need for x86 code in a mobile platform. The hardware is going to be different than a PC, the interface is going to be different than a PC, and the usage of the device is going to be different than a PC.[...] Besides, x86 was a terrible architecture from the get-go. There's something mildly hilarious about the fact that it became the dominant instruction set in Desktop PCs across the world.
Back in the 90s, Intel's x86 ISA managed to take over the desktop not so much because of its inherent benefits, but because of software.
The market for workstations was increasingly flooded with users who were looking for the only thing they recognized: a machine running Windows with its associated software.
Given that Microsoft's presence and quality on non-Intel architectures was a joke at best, all of these newcomers gravitated to Windows-capable architectures. Then, with market economics doing their work, the price of x86 hardware fell enough to interest the rest of the workstation crowd too: an off-the-shelf x86 processor running Linux became a cheaper alternative to some proprietary UNIX running on some obscure RISC architecture.

Nowadays the situation is radically different. What users of those ultra-light machines expect isn't "taking Microsoft Vista and TurboTax" in their pocket, but rather having a pocketable internet, which doesn't depend that much on any specific software vendor. That is well illustrated by the success that Linux-based subnotebooks such as the Eee PC are enjoying. And Linux has the big advantage of being very easy to compile cross-platform, particularly the kernel, which has already seen massive use in embedded devices such as routers, modems, etc. powered by MIPS and ARM.

Thus small devices can very well accommodate ARM. Intel has no real advantage, except maybe for a couple of key technologies from vendors too lazy or too talentless to port their code to other architectures (like Flash). Intel's only hope is that Flash is such a killer feature that it will require the x86 ISA.

Microsoft being desperate to push Windows XP for subnotebooks, even though it is going to be deprecated soon, is a clear indicator of this tendency.

Intel themselves got caught by their own ISA lock-in when they tried to launch Itanium. It mostly tanked because it lacked ports of key software (and because non-ported code ran very slowly).

Re:What the Heck? (2, Insightful)

benhattman (1258918) | more than 4 years ago | (#23470762)

If Intel's ATOM takes off, it will be on the merits of the processor and not on its x86 compatibility. Besides, x86 was a terrible architecture from the get-go. There's something mildly hilarious about the fact that it became the dominant instruction set in Desktop PCs across the world.
I for one think this might be an excellent migration path for the industry. Let the mobile industry settle on a non-x86 processor. Then develop all the necessary software for that processor (lightweight OS, web browser, etc). Then produce an amped up version of that chip for laptops/desktops. Voila, we bootstrap the software that is needed to sell a chip, and we end up with a significantly more efficient platform than anything we'll ever see with x86.

A guy can dream can't he?

Re:What the Heck? (0)

Anonymous Coward | more than 4 years ago | (#23473214)

Actually, this might be what Apple is up to, since they bought PA Semi.

PA Semi had a chip that crushed Intel's offerings in performance per watt, and it was designed for what Intel spends on new x86 architecture design in a week or so (much less than a month, for sure).

IMHO Dobberpuhl and co would not have accepted being bought by Apple for anything else. This guy and his team know how to design processors as a function of their optimization goal (Alpha was top performance, ARM was low power, and PA6T was a great compromise between the two).

However, now that they are working for Apple, their next product will (very likely) only be available with Apple's sexy packaging tax.

Re:What the Heck? (0)

Anonymous Coward | more than 4 years ago | (#23472902)

The author's real point appears to be: x86 vs. Other Embedded Architectures. [...] The hardware is going to be different than a PC, the interface is going to be different than a PC, and the usage of the device is going to be different than a PC. Providing x86 compatibility thus offers few, if any, real advantages over an ARM or other mobile chip.
Certainly, for my mobile phone with its 2-inch-diagonal screen, being binary-compatible with Adobe Photoshop would not be that useful.

However, I've heard a bit on Slashdot recently about the sub-notebook form factor - you know, the eee pc, OLPC, and suchlike. Personally the eee form factor seems a bit strange to me, but some people seem to think it's going to be the next big thing. A form factor bigger than my 2" screen phone but smaller than my 13" screen laptop.

I can believe that x86 compatibility would be a pretty big benefit for something like the eee pc - in that if you want to install OpenOffice, it's more user-friendly if you can just apt-get (or whatever) a binary package, compared to compiling from source.

Who cares with the iPhone out? (0, Funny)

Anonymous Coward | more than 5 years ago | (#23469114)

The iPhone makes any other mobile device pointless. Why even bring this topic up?

Re:Who cares with the iPhone out? (1)

Pengo (28814) | more than 5 years ago | (#23469244)


Having an iPhone but being stuck on ATT's network is like having a supermodel girlfriend that refuses to put out.

I actually paid the $170 to cancel my AT&T contract and get back onto Verizon; AT&T was that terrible for me.

How much is the legacy x86 code base really worth? (1)

madsenj37 (612413) | more than 5 years ago | (#23469118)

Is not the only question. What about how much the Intel and AMD brand names are worth?

Completely pointless (5, Interesting)

El Cabri (13930) | more than 5 years ago | (#23469122)

RISC vs CISC was the architecture flamewar of the late 1980s. Welcome to the 21st century, you'll like it here. It's a world where, since the late 90s, the ISA (instruction set architecture) is so abstracted away from the actual micro-architecture of the microprocessor as to make it completely pointless to distinguish between the two. Modern processors are RISC, they are CISC, they are vector machines, they're everything you want them to be. Move on; the modern problems are now in multi-core architectures and their issues of memory coherence, cache sharing, memory bandwidth, interlocking mechanisms, uniform vs non-uniform, etc. The "pure RISC" standard bearers of yore have disappeared or have been expelled from the personal computing sphere (remember Apple ditching PowerPC? Alpha, anyone? Where have those shiny MIPS-based SGIs gone?). Even Intel couldn't impose a new ISA on its own (poor adoption of IA-64). The only RISC ISA that has any presence in the personal computing arena, including mobile, is ARM, but precisely: it does only mobile. There's really no reason at all to build any device on which you plan to run generic OSes and a rich computing experience on anything other than x86 or x86-64 machines.

Re:Completely pointless (1)

noidentity (188756) | more than 5 years ago | (#23469316)

The "pure RISC" standard bearers of yore have disappeared or have been expelled from the personnal computing sphere (remember Apple ditching PowerPC ? Alpha anyone ? Where are those shiny MIPS-based SGIs gone?). Even Intel couldn't impose a new ISA on its own (poor adoption of IA-64).

Expelled simply because backwards compatibility is important and performance comparisons between different architectures were difficult. Everywhere else, good design/low power is what selects which are used most.

Re:Completely pointless (3, Insightful)

vought (160908) | more than 5 years ago | (#23469762)

Not in disagreement, but Apple didn't ditch PowerPC because RISC offered no performance advantage; indeed, the G5 at lower clock speeds marginally outperformed the first Intel-based Macs at the same price points.

Apple got rid of PowerPC because Motorola and IBM had no incentive to innovate and develop competitive processors in the mid-range; RISC was most worthwhile in the high-end big iron IBM machines using POWER and the low end embedded market served by Motorola/Freescale.

Re:Completely pointless (3, Interesting)

prockcore (543967) | more than 5 years ago | (#23470374)

The PowerPC is nothing without the AltiVec vector unit, which is a decidedly CISC concept.

Re:Completely pointless (1)

renoX (11677) | more than 4 years ago | (#23473486)

>The PowerPC is nothing without the AltiVec vector unit, which is a decidedly CISC concept.

Ahem, the first PowerPC didn't have the AltiVec vector unit...

As for being a CISC concept, in some ways it's true that they are complex instructions (division is also a complex instruction provided by many RISCs), but they also have a fixed length, register-register operations only with separate load-store, etc., so they're also RISC in many ways...

Re:Completely pointless (1)

Cyberax (705495) | more than 5 years ago | (#23469332)

To be fair, there's also MIPS (on almost all wireless home routers) and SH.

Personally, I prefer MIPS for my embedded devices. It's cleaner than ARM and dev-boards are easier to use.

Re:Completely pointless (1)

Lost Engineer (459920) | more than 4 years ago | (#23471280)

Many of the MIPS dev boards are actually FPGA's. Therefore if you need to switch to a different variant you just reprogram the FPGA. Very cool.

Re:Completely pointless (0, Troll)

TheRealMindChild (743925) | more than 5 years ago | (#23469362)

they are vector machines

Yeah, I like vectors. Takes all of the hard work out of having a dynamic-sized array of thingies.

Re:Completely pointless (2, Informative)

LWATCDR (28044) | more than 5 years ago | (#23469466)

SPARC? POWER?
Both of those are actually popular when it comes to big iron. Yes, Intel is it on the desktop, but for big honking servers it is just so-so. For small, lower-power devices it is pretty lame. There is no reason why a small, light mobile device has to be an x86.

Intel have a poor track record... (2, Interesting)

serviscope_minor (664417) | more than 5 years ago | (#23469128)

Intel successfully killed the high-end CPU manufacturers. However, recently they have had poor results in the very-low-power arena. Their main offering (XScale, until they sold it) was poor compared to the competition. Compare the Intel PXA27x to the Philips LPC3180. The Philips chip has about the same instruction rate for integer instructions (at half the clock rate), hardware floating point (so it's about 5x as fast at that), and draws about 1/5 of the power. I know which one I prefer...

Unlike the old RISC workstation manufacturers, which relied on a small market of high-margin machines, the current embedded CPU manufacturers operate in a huge, cut-throat world where they need to squeeze the price/performance ratio as high as possible to maintain a lead. I think this market will be somewhat tougher to crack than the workstation market, since Intel does not have what they had before: an advantage in volume shipped.

Insane mods. (1)

serviscope_minor (664417) | more than 5 years ago | (#23470012)

How is the parent post offtopic?

The article mentions that Intel killed the old workstation RISC vendors. The parent post suggests why this will not be so easy for Intel this time.

And further, how is Intel having a poor track record for embedded processors compared to other manufacturers offtopic for an article about Intel producing embedded processors?

What value? (1)

Microlith (54737) | more than 5 years ago | (#23469144)

The only "benefit" that has come of having x86 processors in MIDs so far has been seeing the developers cram Vista on an already slow device, making it crawl even worse. Or they stick XP on it, packing an OS completely not designed for MID use on it.

Using ARM on mobile platforms at least offers some hope of making a clean break from all the backwards-compatibility cruft that x86 has dragged along with it for decades now.

ARM is RISC in name only (2, Insightful)

RecessionCone (1062552) | more than 5 years ago | (#23469218)

The RISC philosophy was to have every instruction be as simple as possible, so that the execution of each instruction could be as efficient as possible. The idea was that even though you might have to execute more instructions to get the job done, the speed you gained from the simple instruction set would compensate.

I've had to work with the ARM ISA in the past (I was studying its implementation as a soft core on an FPGA), and I can tell you it doesn't follow the RISC philosophy well, if at all.

One very non-RISC thing ARM did was move the shift instructions into every arithmetic instruction. That's right: there are no dedicated shift instructions. When you need a shift instruction, you have to encode it as part of a move operation or an add. In effect, every add, and, or, sub, etc. is actually an add+shift, and+shift, or+shift, etc. This is the opposite of the RISC philosophy, and it significantly complicates the hardware, since a variable shifter has to be on the ALU critical path.
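
A small C sketch of the kind of expression that this folded shift is aimed at; the function name is mine and the commented instruction is only illustrative, since actual code generation depends on the compiler:

#include <stdint.h>

/* On ARM a compiler can usually emit this as a single data-processing
   instruction with a shifted second operand (something like
   "ADD r0, r0, r1, LSL #2"): one instruction, no separate shift.
   Illustrative only; the actual output depends on the compiler. */
uint32_t scale_add(uint32_t acc, uint32_t x)
{
    return acc + (x << 2);   /* acc + x*4 */
}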

Other non-RISC things ARM did include the Java instruction set extensions, the Thumb instruction set extensions (to further reduce code size), vector & media instructions, etc.

I think calling ARM "RISC" is a marketing decision only, done for historical reasons. It doesn't have much to do with the technical reality, IMO. Jon Stokes would have done better to say ARM vs. x86, instead of RISC vs. CISC, which is an outdated idea back from the 80s & 90s.

Re:ARM is RISC in name only (1)

plalonde2 (527372) | more than 5 years ago | (#23469428)

The shifter doesn't have to be on the ALU path as such; it can be wired to the write-back register bus at relatively low cost. You can set up the variable shift in parallel with the ALU work. I don't know ARM's instruction encoding, but throwing 5 bits at that shift field seems like it might not be a bad tradeoff, considering how nice it is to have power-of-two manipulations for free.

Re:ARM is RISC in name only (1)

marxmarv (30295) | more than 5 years ago | (#23470640)

You can only shift one of the source registers, right? So think of the shifter as an extension of the register number and it fits neatly into a RISC architecture, and into the register fetch stage where you probably have time to burn.

Re:ARM is RISC in name only (2, Interesting)

Chris Burke (6130) | more than 5 years ago | (#23469432)

That's pretty standard in a lot of "RISCy" architectures, though. The POWER instruction set has a lot of ALU instructions that look like multiple operations jammed together. It has one particularly complicated shifting and masking instruction that makes me think that they decided to add programmatic access to the load data aligner in the data cache. I've always wondered if they regretted that as they changed the micro-architecture, and most likely the DC ended up being farther away from the integer scheduler. Maybe a similar motivation is behind the shifting on every alu op in ARM; I don't really know.

Ultimately, though, I think "RISC" is still a pretty valid description. Sure the complexity of some instructions strains the ideals behind RISC philosophy, but it certainly has what I consider the most important aspects of a RISC ISA:
1) Fixed instruction width. Makes superscalar instruction fetch and decode a breeze.
2) Pure load/store design. Instructions are -either- a load, a store, or an operation on registers. This makes dispatch and scheduling simpler.

These I consider critical to being "RISC", and they're also solid and easily definable characteristics. "Complexity of instructions" is subjective. Personally if I had to draw a hard and fast line, I'd say any ISA that can be completely implemented without microcode, and still follows the above two rules, qualifies as not being "too complex". I mean, it's relative, right? And since some x86 instructions get decoded into hundreds of micro-ops, I don't think a mere conjoining of two alu operations is all that bad.

Re:ARM is RISC in name only (1)

cbrocious (764766) | more than 5 years ago | (#23469480)

It has one particularly complicated shifting and masking instruction that makes me think that they decided to add programmatic access to the load data aligner in the data cache.
Ugh, PPC is full of shit like that. I'm implementing a PPC core as part of my emulation platform ( http://sourceforge.net/projects/ironbabel/ [sourceforge.net] ) right now, and instructions like rlwinmx, srawx, srwx, slwx, cntlzwx, crxor are just painful. PPC/Power is really well designed, but god it's painful to deal with.
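
For readers unfamiliar with rlwinm, here is a hedged C sketch of its semantics (rotate left word immediate, then AND with mask), assuming the usual PowerPC bit numbering where bit 0 is the most significant bit; this is meant to show why one instruction can feel like several operations, not to be a drop-in emulator routine:

#include <stdint.h>

/* Sketch of rlwinm rA,rS,SH,MB,ME: rotate rS left by SH, then keep only
   the bits from MB through ME (PowerPC numbering: bit 0 = MSB).
   A wrap-around mask (MB > ME) is also legal. */
static uint32_t rlwinm(uint32_t rs, unsigned sh, unsigned mb, unsigned me)
{
    uint32_t rotated = (rs << sh) | (rs >> ((32 - sh) & 31));

    uint32_t from_mb = 0xFFFFFFFFu >> mb;        /* 1s in PPC bits mb..31 */
    uint32_t to_me   = 0xFFFFFFFFu << (31 - me); /* 1s in PPC bits 0..me  */
    uint32_t mask    = (mb <= me) ? (from_mb & to_me) : (from_mb | to_me);

    return rotated & mask;
}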

Re:ARM is RISC in name only (1)

cbrocious (764766) | more than 5 years ago | (#23469442)

Also can't forget the conditional prefixes on instructions. Makes hacking on ARM code easy, but damn it makes things more complex.

Re:ARM is RISC in name only (1)

Aardpig (622459) | more than 5 years ago | (#23470286)

About your last remark: when ARM was originally developed by Acorn, it stood for "Acorn RISC Machine". It was one of the first 32-bit RISC CPUs available in the personal computing arena. Having written assembly language for the ARM back then, I can say it most certainly was a RISC architecture.

RISC is good (1)

bluefoxlucid (723572) | more than 5 years ago | (#23469282)

RISC architecture, interestingly, makes things hella fast. The decoder stage follows an easy path; conditionals occur as prefixes, just like cmov in i686 land. This means when the chip makes an instruction fetch, it does pretty much no extraneous work; it just wanders through a couple of paths (a decision to execute or skip, and then into execution or back to fetch) in one go. Modern CPUs do a hell of a lot of work just to decide how to handle an instruction.
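
For what it's worth, a branchless select in C is the kind of thing that tends to end up as a conditional instruction; the function name is mine, and whether it actually becomes a CMOVcc (x86) or a predicated MOV (classic ARM) is up to the compiler:

/* A compiler is free to lower this ternary to a conditional move
   (CMOVcc on i686-class x86, a predicated MOV on classic ARM) instead
   of a compare-and-branch; which form you get depends on the compiler
   and target. */
int select_max(int a, int b)
{
    return (a > b) ? a : b;
}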

There is no RISC vs CISC any more (2, Informative)

m.dillon (147925) | more than 5 years ago | (#23469402)

There's no distinction between the two any more, and there hasn't been for a long time. The whole point of RISC was to simplify the instruction format and pipeline.

The problem these days is that it doesn't actually cost anything to have a complex instruction format. It's such a tiny, isolated piece of the chip that it doesn't count for anything, it doesn't even slow the chip down because the chip is decoding from a wide cache line (or multiple wide cache lines) anyway.

So what does that leave us with? A load-store instruction architecture versus a read-modify-write instruction architecture? Completely irrelevant now that all modern processors have write buffer pipelines. And, it turns out, you need to have an RMW-style instruction anyway, even if you are RISC, if you want to have any hope of operating in an SMP environment. And regardless of the distinction, CPU architectures already have to optimize across multiple instructions, so again the concept devolves into trivialities.

Power savings are certainly a function of the design principles used in creating the architecture, but it has nothing whatsoever to do with the high level concept of 'RISC' vs 'CISC'. Not any more.

So what does that leave us with? Nothing.

-Matt

RTFA much? (4, Informative)

tepples (727027) | more than 5 years ago | (#23469714)

The problem these days is that it doesn't actually cost anything to have a complex instruction format. It's such a tiny, isolated piece of the chip that it doesn't count for anything
Did you understand the article? Page 2 [arstechnica.com] is entirely about how the decoder on Atom isn't "such a tiny, isolated piece of the chip that it doesn't count for anything".

And, it turns out, you need to have a RMW style instruction anyway, even if you are RISC, if you want to have any hope of operating in a SMP environment.
But if only one instruction is an atomic swap, that means it doesn't need to be on the critical path, right?

Re:There is no RISC vs CISC any more (2, Interesting)

Darinbob (1142669) | more than 5 years ago | (#23470014)

And, it turns out, you need to have a RMW style instruction anyway, even if you are RISC, if you want to have any hope of operating in a SMP environment.
PowerPC manages without that. It does have to use one special load and one special store instruction, but it has no read-modify-write or test-and-set instructions.

Re:There is no RISC vs CISC any more (0)

Anonymous Coward | more than 4 years ago | (#23473116)

Actually at least MIPS and Alpha (don't know about Sparc and ARM) also use the same method as PPC. A store linked to a load whose address is watched to check that nobody else accesses it while the data is being modified in the registers between the load and the store.

Even Linus described it as a superior solution, oh, about a decade ago. It needs less hardware, lets you skip the write if you don't need it (therefore not forcibly dirtying a cache line in all cases), and allows more operations than the limited set of x86 atomic operations (basically you can perform any transformation on the data except nesting another locking load/store pair). So you can, for example, atomically set and clear any subset of bits (great for flag words) or even have atomic floating point operations (god forbid).
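
A hedged C11 sketch of the "set and clear any subset of bits" case; the function names are mine, and how the builtins are lowered is up to the compiler: on LL/SC machines (PowerPC lwarx/stwcx., ARM ldrex/strex) they typically become a load-reserved/store-conditional retry loop, while on x86 they become locked instructions or a cmpxchg loop.

#include <stdatomic.h>
#include <stdint.h>

/* Atomically set or clear arbitrary bits in a shared flag word. */
void set_flags(_Atomic uint32_t *flags, uint32_t mask)
{
    atomic_fetch_or_explicit(flags, mask, memory_order_acq_rel);
}

void clear_flags(_Atomic uint32_t *flags, uint32_t mask)
{
    atomic_fetch_and_explicit(flags, ~mask, memory_order_acq_rel);
}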

You all miss the point. (4, Informative)

Anonymous Coward | more than 5 years ago | (#23470320)

""
The problem these days is that it doesn't actually cost anything to have a complex instruction format. It's such a tiny, isolated piece of the chip that it doesn't count for anything, it doesn't even slow the chip down because the chip is decoding from a wide cache line (or multiple wide cache lines) anyway.
""

The problem with your assumption is that it's _wrong_.

It does cost something. The WHOLE ARTICLE explains in very good detail the type of overhead that goes into supporting x86 processors.

The whole point of ATOM is Intel's attempt to make the ancient CISC _instruction_set_ work on an embedded-style processor with the performance to handle multimedia and limited gaming.

The overhead of CISC is the complex arrangement that takes the x86 ISA and translates it to the RISC-like chip that Intel uses to do the actual processing.

When you're dealing with a huge chip like a Xeon or Core 2 Duo with a huge battery, or connected directly to the wall, then it doesn't matter. You're taking a chip that would use 80 watts TDP and going to 96.

But with the ARM platform you not only have to make it so small that it can fit in your pocket, you have to make the battery last at least 8-10 _hours_.

This is a hell of a lot easier when you can deal with an instruction set that is designed specifically to be stuck in a tiny space.

If you don't understand this, you know NOTHING about hardware or processors.

Re:There is no RISC vs CISC any more (1)

squizzar (1031726) | more than 4 years ago | (#23473124)

I've been looking for a good point to jump into this, and this is as good as I've found, so sorry if I'm a little off the track of your previous post. Anyways...

There are a vast number of applications where it does cost a huge amount to have complex instruction format. I'm thinking of the realms of DSP and real-time embedded systems, where the performance must be completely predictable. This does not necessarily imply it has to be lightning fast, but that code executes in the exact same amount of time, every time. These are processors that sometimes don't have cache, and almost certainly don't have microcode so that the behaviour can be predicted.

For example: you have an audio DSP running a filter with incoming samples at 48 kHz, giving you approx 20.8 us to process a sample. With a 120 MHz processor you have 2500 cycles for each sample. The complex approach is to have an interrupt triggered when a sample arrives, etc. However, to minimise hardware resources and maximise the processing ability, it is possible just to write the filter as a loop that takes 2500 cycles to complete. Run your CPU clock synchronous to the audio clock (or vice versa as appropriate) and you have your system with a lot less complexity.

This of course only works if the instruction accesses are entirely predictable. If you have a microcode-based processor then it is possible that in certain specific cases it will need to load a microcode instruction (an unusual case in floating-point arithmetic is an example). This can cause instructions to take hundreds of times longer than they should, and will completely break your system.

If you look at the ARM instruction set it is specifically designed with lots of features to allow this kind of functionality - e.g. conditional execution of instructions - for a bit of code that reads like:

while (1)
    if (i = 0)
        j = j+1
    else
        j = j-1
        k = k+1
    end if
end while

Depending on the value of i, the loop may take longer to execute. Using the conditional instructions (the exact syntax escapes me) you'd have something that will resemble the following (in assembly):

while (1)
    if (i = 0) j = j+1
    if (i != 0) j = j-1
    if (i != 0) k = k+1
end while

When executed each instruction will take the same amount of time regardless of the condition, allowing the behaviour of the loop to be characterised exactly. There are also loads of other benefits due to jump instructions being avoided etc.
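For reference, the predicated version looks roughly like this; the C is just the logic, and the ARM mnemonics in the comment (with an invented register assignment) are the kind of thing a compiler emits when it uses conditional execution instead of a branch:

    /* Reference logic for one pass of the loop body above. */
    void step(int i, int *j, int *k)
    {
        if (i == 0) {
            (*j)++;
        } else {
            (*j)--;
            (*k)++;
        }
        /* With ARM conditional execution this can compile, branch-free, to
         * something like (register assignment invented for the example):
         *
         *     CMP   r0, #0        ; set flags from i
         *     ADDEQ r1, r1, #1    ; j++  only when i == 0
         *     SUBNE r1, r1, #1    ; j--  only when i != 0
         *     ADDNE r2, r2, #1    ; k++  only when i != 0
         *
         * Every instruction occupies its slot whether or not its condition
         * holds, so both paths take the same time and there is no branch to
         * mispredict. */
    }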

The number of embedded processors in the world outweighs the number of desktop/server style processors by quite a significant margin, so it seems a bit wrong to say that there is no distinction between the two. I'd be more inclined to say there is a wide range of processors from very simple RISC processors up to the very complex (IA-64) and that there the correct choice is highly dependent on the application.

I think people are missing the point (2, Interesting)

mykepredko (40154) | more than 5 years ago | (#23469486)

As I read the previous posts, it seems like the focus is on RISC vs CISC, but I think the real question is whether there is any value-add for designers in having an x86-compatible embedded microcontroller.

People (and not just us) should be asking whether end customers would find it useful to be able to run their PC apps on their mobile devices. Current mobile devices typically have PowerPoint and Word readers (with maybe some editing capabilities), but would users find it worthwhile to be able to load apps onto their mobile devices from the same CDs/DVDs that were used to load them onto their PCs?

If end customers do find this attractive, would they be willing to pay the extra money for the chips (the Atom looks to require considerably more gates than a comparable ARM) as well as for the extra memory (Flash, RAM & Disk) that would be required to support PC OSes and apps? Even if end customers found this approach attractive, I think OEMs are going to have a long, hard think about whether or not they want to port their radio code to the x86 with Windows/Linux when they already have infrastructures built up with the processors and tools they are currently using.

The whole thing doesn't really make sense to me, because if Intel wanted to be in the MCU business, then why did they sell that business off to Marvell (a deal which also included the mobile WiFi technology)?

The whole thing seems like a significant gamble: that customers will want products built from this chip, and that it is worth Intel and OEMs recreating the infrastructure they already have for existing chips (i.e. ARM) for the x86 Atom.

myke

Marvell Commics? (1)

tepples (727027) | more than 5 years ago | (#23469722)

The whole thing doesn't really make sense to me, because if Intel wanted to be in the MCU business, then why did they sell that business off to Marvell (a deal which also included the mobile WiFi technology)?
So people could make jokes about Spiderr-Mann ;-)

Re:I think people are missing the point (2, Insightful)

billnapier (33763) | more than 5 years ago | (#23470056)

Huh? It takes more than just the same processor to be able to run the same apps. You gotta have the same operating system. And running Vista on a cell phone doesn't sound like a good idea to me (the mouse is a poor interface for a cell phone).

Almost all application code written today is written in some portable manner. Writing custom assembly specific to a processor in an application is only done for certain performance-critical things (ffmpeg, anyone?). This is one of the reasons that Apple was so easily able to move from PPC to x86.

Re:I think people are missing the point (2, Informative)

Game_Ender (815505) | more than 4 years ago | (#23471520)

Except for VBA in Microsoft Office. It's implemented in tens of thousands of lines of assembly (on both Windows and Mac), using specific knowledge of how the compiler lays out the virtual function tables of C++ classes. Even the x86 assembly makes calls into the Windows API, so it's not even portable to other x86 platforms (like Intel Macs). In Excel, the floating-point number formatting routines are hand-coded in assembly. I assume the Office team does this to keep these apps nice and snappy under a large workload, but it certainly doesn't help ISA switches.

Most video/audio apps have significant features that depend on well-tuned, hand-written assembly. That is why it took Adobe so long to port Photoshop: they had to recode all their PPC-optimized processing routines. The same reason is why VBA was dropped from Mac Office 2008.

Old DOS games? (0)

Anonymous Coward | more than 5 years ago | (#23470096)

The biggest advantage might be to make it easier to use old applications, and of those old apps the most desirable would seem to be the mountains of VGA-or-lower-resolution DOS games (because the screens on mobile devices are smaller, so high res is not as usable). I assume that if they ran some version of an x86 chip, then creating a VMware-like emulation would be simpler, and the manufacturer could relicense/resell a ton of older games to a generation that had never played them.
On the other hand, why haven't wearable, glasses-style displays taken off for these things? I would love to have that for playing games or movies on plane trips.

RISC on a PC doesn't make sense anymore (1)

billcopc (196330) | more than 5 years ago | (#23469610)

I never delved too far into the RISC vs CISC debate, but my understanding is that RISC uses a small number of simple, generic instructions that execute very quickly, and the compiler builds functionality upon those tiny building blocks. CISC uses a larger number of specialized instructions, each one doing a larger amount of "work" as one black box, where the RISC chip would break that down into several smaller tasks. Since RISC executes faster, overall performance is still good.
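A concrete (if simplified) illustration of that split, using a memory increment; the assembly in the comments shows a typical lowering, not anything specific to the article:

    /* Add x to a counter that lives in memory. */
    void bump(int *counter, int x)
    {
        *counter += x;
        /* A CISC ISA like x86-64 can express this as one read-modify-write
         * instruction:
         *     add  dword ptr [rdi], esi
         * A load/store RISC such as ARM breaks it into three simpler steps:
         *     ldr  r2, [r0]
         *     add  r2, r2, r1
         *     str  r2, [r0]
         * Each small step pipelines trivially; the single x86 instruction gets
         * cracked into similar micro-ops inside the core anyway. */
    }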

So my point is: if RISC needs more instructions to do the same work, does it require a higher clock frequency to achieve similar performance to a CISC chip? Since clock speeds do not scale to infinity, this implies that a RISC chip will hit the frequency wall sooner, thus limiting its maximum speed.

Much of the work done by a modern Intel CPU involves clever decoding, caching and scheduling, to extract as much parallelism as possible from the x86 instruction set. If you were to somehow disable all the prefetching, hyper-threading, predictive branching and all the other bullshit that isn't directly tied to x86 decoding and execution, that Core 2 chip would be no better than a superclocked 386. That "bullshit" works hard to alleviate or outright negate many of CISC's weaknesses.

The simplicity of a RISC design leads to excellent production cost advantages and remarkable power efficiency, because there's a lot less "bullshit" on the die. Low cost + moderate performance + high efficiency = embedded nirvana. That's why we see them in cell phones, RAID controllers, microwave ovens, TVs, etc.

Meanwhile, in PC land, things are more expensive, and performance is king. Nobody wants a slow laptop, because we have work to do, otherwise we wouldn't buy the stupid laptop in the first place. We also want the laptop to sync with our desktop, run the same apps and hook up to the office network. It's bad enough that we have to work through the flight (or bus ride), we don't want to run (and have to learn) heterogeneous platforms.

RISC will continue to reign in small, cheap, battery-powered gadgets. That's what it does best, by design and in practice. That's its turf, where big bad CISC will not dare tread, not even their redheaded stepchild Atom.

Re:RISC on a PC doesn't make sense anymore (1)

cbrocious (764766) | more than 5 years ago | (#23469670)

Perhaps doing a bit more research before posting is a Good Thing (TM).

RISC processors usually (always, in practice) have higher performance for the same clock speed when compared to CISC processors. Although they require multiple instructions to do things, these are almost always 1 or 2 cycles each. That means that although it may have to execute 3 instructions to do the same as 1 CISC instruction, it's often done it in half the clock cycles.

Re:RISC on a PC doesn't make sense anymore (2, Interesting)

vought (160908) | more than 5 years ago | (#23469812)

Although they require multiple instructions to do things, these are almost always 1 or 2 cycles each. That means that although it may have to execute 3 instructions to do the same as 1 CISC instruction, it's often done it in half the clock cycles.
Unfortunately, marketing rules the day in the mind of consumers, so AltiVec/VMX and Apple's PowerPC ISA advantages were lost on consumers looking for the "fastest" machines in the consumer space.

Until recently, there were still speed advantages to using a four core multi-processor G5 for some operations over the 3.0GHz eight-core Xeon Mac Pros because of VMX.

It is somewhat ironic that the Core architecture chips now used by Apple in all but the Mac Pros are all below the 3GHz clock "wall" that was never overcome by the G5, but the Intel name seems to have gone a long way in assuaging consumer doubts about buying a Mac.

Re:RISC on a PC doesn't make sense anymore (1)

Guy Harris (3803) | more than 5 years ago | (#23469836)

It is somewhat ironic that the Core architecture chips now used by Apple in all but the Mac Pros are all below the 3GHz clock "wall" that was never overcome by the G5

Mac Pros and top-of-the-line iMacs, as per the iMac technical specifications [apple.com].

Re:RISC on a PC doesn't make sense anymore (1)

Sentry21 (8183) | more than 5 years ago | (#23470332)

but the Intel name seems to have gone a long way in assuaging consumer doubts about buying a Mac.
Or, more likely, the ability to say 'I hate OS X, I'm going back to Windows' without having wasted a thousand dollars has provided a comfortable fallback so people aren't out as much if they decide they didn't like the choice they made.

Oh, and being able to run Windows at the same time as OS X is a pretty nice touch too.

Re:RISC on a PC doesn't make sense anymore (1)

fishbowl (7759) | more than 5 years ago | (#23469822)


>So my point is: if RISC needs more instructions to do the same work, does it require a higher clock frequency to
>achieve similar performance to a CISC chip ? Since clock speeds do not scale to infinity, this implies that a
>RISC chip will hit the frequency wall sooner, thus limiting its maximum speed.

Learn about things like pipelining and forwarding, out-of-order execution, etc. A non-trivial amount of work is involved in decoding machine instructions, and this is far simpler on a RISC machine. Register architecture has tended to be simpler, which gives compilers a simpler job. RISC machines have fewer addressing modes, which also simplifies things, especially in a pipeline.

Benefits to CISC have been mainly for (us) assembly programmers.

RISC architectures can do things with the instruction stream (deep pipelining, aggressive reordering) that would be far too complicated to manage directly on a CISC architecture.

You probably don't realize how "RISC-ish" x86 has been for several generations now. Legacy instructions still worked; they just caused pipeline stalls. Now they get split up into separate micro-operations that can be pipelined. Do not underestimate the magnitude of the gains that have been made by pipelining and forwarding, and this is MUCH easier to implement with a small, very consistent instruction set. So easy, in fact, that implementing it (coding a simulator for the MIPS architecture, or writing a compiler targeting MIPS) is pretty standard second-year undergrad material.
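To give a feel for why fixed-width RISC decode is undergrad material, here's a minimal sketch of decoding a MIPS R-type instruction in C (the field layout and the ADD/SUB function codes are the standard MIPS ones; everything else about the snippet is simplified):

    #include <stdint.h>
    #include <stdio.h>

    /* Decode one 32-bit MIPS R-type instruction: every instruction is the same
     * width, and the fields sit at fixed bit positions, so decode is a handful
     * of shifts and masks -- no length-finding, no prefix handling. */
    void decode_rtype(uint32_t insn)
    {
        uint32_t opcode = (insn >> 26) & 0x3f;
        uint32_t rs     = (insn >> 21) & 0x1f;
        uint32_t rt     = (insn >> 16) & 0x1f;
        uint32_t rd     = (insn >> 11) & 0x1f;
        uint32_t shamt  = (insn >>  6) & 0x1f;
        uint32_t funct  =  insn        & 0x3f;

        if (opcode == 0 && funct == 0x20)        /* ADD rd, rs, rt */
            printf("add $%u, $%u, $%u\n", rd, rs, rt);
        else if (opcode == 0 && funct == 0x22)   /* SUB rd, rs, rt */
            printf("sub $%u, $%u, $%u\n", rd, rs, rt);
        else
            printf("(not handled in this sketch: opcode=%u funct=%u shamt=%u)\n",
                   opcode, funct, shamt);
    }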

A very good book on the subject, very accessible:

http://www.amazon.com/Computer-Organization-Design-Hardware-Interface/dp/0123706068/ref=sr_1_6?ie=UTF8&s=books&qid=1211247399&sr=1-6 [amazon.com]

Re:RISC on a PC doesn't make sense anymore (1)

Darinbob (1142669) | more than 5 years ago | (#23470134)

Since clock speeds do not scale to infinity, this implies that a RISC chip will hit the frequency wall sooner, thus limiting its maximum speed.
This is why you have concurrency. Ie, long pipelines, multiple arithmetic units, etc. You should never use clock speed to figure out how fast a CPU is (no matter what the marketing says).

Even in the CISC world the same thing holds. They're limited by an internal clock speed. Just because they may have an instruction that does "indexed load, indexed load, add, indexed store" does not mean they can do that in a single clock cycle. Internally they need parallelism to get the most speed out of the silicon, just like RISC.

The biggest difference between CISC and RISC today is whether you're optimizing the code in hardware or with the compiler. RISC is about the philosophy of "keep it simple", whereas CISC is about "keep it compatible". Ie, do you spend your design resources on keeping things compatible, or do you spend them on making things fast? (Intel manages to do both for desktop CPUs, since it has enormous design resources)

RISC-like vs. CISC-like (1)

Chris Snook (872473) | more than 5 years ago | (#23469890)

As many other people have noted, the classical RISC vs. CISC debate is moot, since all modern processors have elements of both. The real fight in the mobile space now is between general-purpose low-power processors, which may consume more power when doing certain computationally expensive tasks, and processors with specialized acceleration units that optimize certain tasks, at the expense of performance and power efficiency for general-purpose work.

I suspect that in the near future, the more interesting fight will be between chips with media acceleration features, and separate offload chips that allow the device to handle media decode with a very low-power general-purpose chip that otherwise wouldn't be up to the task.

DSPs keep gaining ground in the mobile world... (1)

syn1kk (1082305) | more than 5 years ago | (#23470044)

The mobile world does lots of wireless communication (my cellphone alone does Bluetooth v1/2, IR, GSM and WiFi). The chips best suited for these wireless communication tasks are DSP/CISC chips. Given that there are lots of wireless communication tasks, I foresee DSP/CISC chips keeping a significant percentage of them.

BUT the DSP/CISC chips won't ever REPLACE the general-purpose processors / RISC chips. Most cellphones today are actually a combo platform, with a general-purpose processor that handles the user interface (and other related tasks) and a DSP (to do the wireless communication and other heavy-lifting tasks).

Re:DSPs keep gaining ground in the mobile world... (1)

syn1kk (1082305) | more than 5 years ago | (#23470126)

"DSP Or A GPP? You Decide" , http://electronicdesign.com/Articles/Index.cfm?AD=1&ArticleID=7722 [electronicdesign.com] --> "Solutions that mix a GPP and a DSP in one chip have many advantages, but they significantly increase the complexity of the underlying system."

"Lost cost General Purpose Process vs low cost DSP" , http://www.bdti.com/articles/evolution/sld024.htm [bdti.com] , GPP gets score 7 vs DSP gets score of 10.

-----

FYI, BDTI is a company that specializes in designing benchmarks for GPP and DSP chips. Their benchmarks are widely used around the world when designers need to compare the power usage, performance, and memory usage of chips.

Re:DSPs keep gaining ground in the mobile world... (1)

Darinbob (1142669) | more than 5 years ago | (#23470232)

BUT the DSPs / CISC chips won't ever REPLACE the General Purpose Processors / RISC chips.
I always considered most DSPs to be somewhat RISC-like. Just because they have a lot of instructions doesn't make them CISC. They don't have a lot of unnecessary instructions, lots of addressing modes, etc. The "R" in RISC means "reduced", not "minimal".

Instruction compression, not complexity... (1, Interesting)

Anonymous Coward | more than 5 years ago | (#23470580)

The real question is whether ARM thumb instructions have higher code density than x86 instructions for infrequently executed code. Instruction bandwidth is far more precious than the execution complexity on-die (chip IOs toggling far outweigh any decoder logic), so for mobile it's really about how efficiently you can compress the instructions, not what kind of architecture they are based on. I'm guessing ARM still comes out ahead, but it would be an interesting experiment to run...

Fixed- vs. variable-length instruction encoding (1)

AcidPenguin9873 (911493) | more than 5 years ago | (#23470630)

x86, which is the classic CISC, is also a variable-length ISA. That means certain instructions take just a single byte to encode, compared with a fixed 4 bytes on the most common RISCs. This can be a factor in instruction cache size/effectiveness: fewer bytes per instruction == more instructions fit in the ICache == the ICache is more effective. I don't have any numbers, but I would expect the average instruction length on CISC to be tens of percent smaller than on RISC. That means either greater performance, or lower power, or both; perhaps enough of each to offset the greater power/die area required to decode these variable-length instructions.

It's not a big factor, but combined with the other points that have been made in this thread (how CISC translates to RISC-like micro-ops internally, the advantages of a very mature x86 OS/software stack, and so on), there are a number of reasons why an x86 embedded processor might be good.

Spotting those who RTFA'd (4, Insightful)

Jacques Chester (151652) | more than 4 years ago | (#23470748)

Every 4+ comment has the same "RISC|CISC is dead" comment talking about how x86 chips break down that massive, warty ISA into a series of RISC-like micro-ops for internal consumption. And that this has been the case since at least the Pentium Pro.

Read the article. Jon Stokes makes that point, but he also points out that in embedded processors it does matter, because the transistor budget is much, much smaller than for a modern desktop CPU. A few generations of feature-size shrinks from now we may arrive at the same situation as on the desktop, with the ISA becoming irrelevant, but for the moment in the embedded space it does matter that you have to give up a few million transistors to buffering, chopping up and reissuing instructions, compared to just reading and running them.

Remember, this is Jon Stokes we're talking about: he's the guy that taught most Slashdotters what they know about CISC and RISC as it is.

The concept of risc never made much sense to me. (2, Interesting)

bill_kress (99356) | more than 4 years ago | (#23470944)

I understand the theory: you simplify the instructions so that you can do things to speed up the processor, then keep optimizing the processor to run as fast as you can.

In other words, you are designing your instruction set to your hardware.

Now, assuming that you are going to have close to infinite investment in speeding up the CPU, it seems that if you are going to fix an instruction set across that development time, you want the instruction set to be the smallest and most powerful you can get it.

That way, in the same cycle, instead of executing one simple instruction you are executing one powerful one (that does, say, 5x more than the simple one).

At first the more powerful instruction will take longer than the simple one, but as the silicon gets better, the hardware designers will find a way to make it take only 2x as long as the simple one. Then less.

I guess I mean that you will get more relative benefit tweaking the performance of a hard instruction than an easy one.

Also, at some point the Memory to CPU channel will be the limit.

I'd kinda like to see Intel take on an instruction set designed for the compiler rather than the CPU (like Java Bytecode). Bytecode tends to be MUCH smaller--and a quad-core system that directly executes bytecode, once fully optimized, should blow away anything we have now in terms of overall speed.
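As a toy illustration of how dense a compiler-oriented, stack-based encoding can get (the opcode set below is invented, not Java's): each operation is a single byte with no register fields to encode, so "(2 + 3) * 4" plus a print fits in ten bytes.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy stack machine: one-byte opcodes, operands live on an implicit stack. */
    enum { OP_HALT = 0x00, OP_PUSH = 0x01, OP_ADD = 0x02, OP_MUL = 0x03, OP_PRINT = 0x04 };

    static void run(const uint8_t *pc)
    {
        int32_t stack[64];
        int sp = 0;

        for (;;) {
            switch (*pc++) {
            case OP_PUSH:  stack[sp++] = (int8_t)*pc++;         break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp];    break;
            case OP_MUL:   sp--; stack[sp - 1] *= stack[sp];    break;
            case OP_PRINT: printf("%d\n", stack[--sp]);         break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void)
    {
        /* (2 + 3) * 4, encoded in ten bytes of "program". */
        static const uint8_t prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                                        OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT };
        run(prog);
        return 0;
    }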

Annoying advertisement 99% CPU (0)

Anonymous Coward | more than 4 years ago | (#23471376)

Ars Technica has a Flash advertisement that consumes 99% CPU time.

brainfuck (1)

bugs2squash (1132591) | more than 4 years ago | (#23471684)

If your device is essentially an FPGA, then implementing the app-specific stuff in programmable logic and throwing in an ARM IP core keeps the component count down.

I don't think you can get "Atom" as a VHDL IP core.

You only need a half dozen or so instructions to program the whole thing in brainfuck anyway.

Poorly named article (1)

donatzsky (91033) | more than 4 years ago | (#23473420)

It should have been called something like "Atom architecture overview, its future, and how it compares to ARM".
And to all those that rant how RISC is dead: Did you actually RTFA?