
The Linux-Proof Processor That Nobody Wants

samzenpus posted about 2 years ago | from the last-kid-picked dept.

Intel

Bruce Perens writes "Clover Trail, Intel's newly announced 'Linux proof' processor, is already a dead end for technical and business reasons. Clover Trail is said to include power-management that will make the Atom run longer under Windows. It had better, since Atom currently provides about 1/4 of the power efficiency of the ARM processors that run iOS and Android devices. The details of Clover Trail's power management won't be disclosed to Linux developers. Power management isn't magic, though — there is no great secret about shutting down hardware that isn't being used. Other CPU manufacturers, and Intel itself, will provide similar power management to Linux on later chips. Why has Atom lagged so far behind ARM? Simply because ARM requires fewer transistors to do the same job. Atom and most of Intel's line are based on the ia32 architecture. ia32 dates back to the 1970s and is the last bastion of CISC, Complex Instruction Set Computing. ARM and all later architectures are based on RISC, Reduced Instruction Set Computing, which provides very simple instructions that run fast. RISC chips allow the language compilers to perform complex tasks by combining instructions, rather than by selecting a single complex instruction that's 'perfect' for the task. As it happens, compilers are more likely to get optimal performance with a number of RISC instructions than with a few big instructions that are over-generalized or don't do exactly what the compiler requires. RISC instructions are much more likely to run in a single processor cycle than complex ones. So, ARM ends up being several times more efficient than Intel."
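To make the summary's compiler point concrete, here is a minimal sketch. It is not from the submission; the function name is hypothetical and the instruction sequences in the comments are representative hand-written examples, not actual compiler output.

    /* The same C statement lowered two ways.  The assembly in the comments
     * is illustrative and hand-written, not exact compiler output. */
    void bump(int *a, int i)
    {
        a[i] += 1;

        /* CISC (ia32): the compiler can select one "perfect" instruction
         * that loads, adds, and stores in a single variable-length encoding:
         *
         *     add dword ptr [eax + ebx*4], 1
         *
         * RISC (ARM): the compiler combines simple fixed-length instructions
         * instead, each easy to pipeline and likely to complete in one cycle:
         *
         *     ldr r2, [r0, r1, lsl #2]   @ load  a[i]
         *     add r2, r2, #1             @ add 1
         *     str r2, [r0, r1, lsl #2]   @ store a[i]
         */
    }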


The Year of Linux on Desktop Is Now (-1, Troll)

Makoska (2731039) | about 2 years ago | (#41352307)

For me, the year of linux on desktops is now. With Steam coming to Linux [steampowered.com] , along with Crossover and pure Linux-ported games, the inevitable has happened. I'm glad Visual Studio [microsoft.com] also runs perfectly on Wine (I'm also making sure to have a party with my friends on Visual Studio 2012 Virtual Launch Party, where thousands of geeks around the globe connect together to party the release of latest Visual Studio).

Everything I need works in Linux.

Re:The Year of Linux on Desktop Is Now (0, Troll)

Anonymous Coward | about 2 years ago | (#41352387)

Except it doesn't matter and hasn't for years. The world's most popular applications run in browsers, not desktops.

HTML Media Capture is not widely supported (1)

tepples (727027) | about 2 years ago | (#41352487)

The world's most popular applications run in browsers, not desktops.

So if I want to include microphone, camera, or gamepad support in something that I intend to become one of "[t]he world's most popular applications", what API should I use? Among desktop browsers, neither IE nor Firefox nor Safari supports HTML Media Capture, and nothing mobile supports it at all.

Re:HTML Media Capture is not widely supported (0)

Anonymous Coward | about 2 years ago | (#41352537)

http://dev.w3.org/2011/webrtc/editor/getusermedia.html
https://dvcs.w3.org/hg/gamepad/raw-file/default/gamepad.html

Those APIs.

Re:HTML Media Capture is not widely supported (0)

Anonymous Coward | about 2 years ago | (#41352583)

It is coming, hold on to your butt.

The Hardware API is on its way too. It was working with controllers last I checked.
The File API? I don't know, actually; I haven't checked it in forever.

Media Capture [html5rocks.com]

Yuck (0)

Anonymous Coward | about 2 years ago | (#41352617)

Bad grammar is to Slashdot as weeds are to lawns.

Visual Studio is great, but what about MyCleanPC? (4, Funny)

CRCulver (715279) | about 2 years ago | (#41352423)

I'm glad Visual Studio also runs perfectly on Wine (I'm also making sure to have a party with my friends on Visual Studio 2012 Virtual Launch Party, where thousands of geeks around the globe connect together to party the release of latest Visual Studio).

I'm happy for you that you can develop more efficiently with Visual Studio, but I'm piffed that MyCleanPC [slashdot.org] still isn't available for Linux. I mean, I'm looking at my friend on his Windows box, and ever since he installed MyCleanPC [slashdot.org] , his gigabits are running faster than ever!

Plus, MyCleanPC [slashdot.org] completely eradicated any viruses on his computer, sped up his internet connection and gave him some peace of mind! We desperately need a Linux port of such outstanding software as MyCleanPC [slashdot.org] !

Re:Visual Studio is great, but what about MyCleanP (3, Funny)

MysteriousPreacher (702266) | about 2 years ago | (#41352917)

This is better:

http://fix-kit.com/Explosive-diarrhea/repair/ [fix-kit.com]
http://fix-kit.com/Assassination-of-reigning-monarch/repair/ [fix-kit.com]

Finally, downloadable software for Windows that'll cure just about anything!

I imagine /. readers are savvy enough to realise that the site is a scam, and that downloading their software is akin to having unprotected sex with third-world prostitutes.

Re:The Year of Linux on Desktop Is Now (1)

Goodyob (2445598) | about 2 years ago | (#41352461)

I wish I could agree with you, but from what I can tell, Gaben has stalled developing the Linux port in favor of making hardware [extremetech.com] .

Re:The Year of Linux on Desktop Is Now (0)

Anonymous Coward | about 2 years ago | (#41352559)

That's even better. The hardware would be running Linux.

Re:The Year of Linux on Desktop Is Now (1)

MindlessAutomata (1282944) | about 2 years ago | (#41352975)

What? Valve has multiple teams on different projects. I can't believe you would even post this--hardware people aren't necessarily software people. And if they're doing hardware, how's it gonna run? Oh right, you need software as well. Durr.

Steam the new Unreal Tournament?? (1)

Andy Prough (2730467) | about 2 years ago | (#41352609)

Is this like the year that we got Unreal Tournament and Doom on Linux, and we were supposed to take 50% market share from Windows??? Yeah, that worked out well. Although, UT2003 was a lot of fun...

Re:Steam the new Unreal Tournament?? (2, Informative)

Anonymous Coward | about 2 years ago | (#41352789)

Just wait until these people see what "supporting Linux" means to Valve too. I run Steam on OS X and it's not the games fest that they make it out to be. Oh, to be sure, there are a few great games there. But aside from Civ V (which had a native OS X version before Steam), just about every non-Valve game isn't supported, except for a handful of "indie" games. The other day I was going down the upcoming release list and not a single major title was slated for OS X.

Oh, and wait until one of them has a Steam-centric problem with their system. Steam is a bunch of sweethearts on supporting that too.

Steam may get a few current Linux users to stop using Windows but it's not going to make anyone switch.

Re:The Year of Linux on Desktop Is Now (3, Insightful)

Alex Belits (437) | about 2 years ago | (#41352693)

Visual Studio

Please, please, please, stay on Windows, we don't need your Microsoft-infected minds spreading their diseases to other systems.

Re:The Year of Linux on Desktop Is Now (0)

Anonymous Coward | about 2 years ago | (#41352721)

Wow, more VS shilling. Surprised only that you've been modded up.

Re:The Year of Linux on Desktop Is Now (1)

thegreatemu (1457577) | about 2 years ago | (#41352785)

Except MS office, which most of the corporate/academic world still uses for everything. Yes, Libre Office can do everything as well or in many cases better, but that doesn't matter when someone sends you a pptx file that Impress mangles into an unreadable smear.

Re:The Year of Linux on Desktop Is Now (5, Insightful)

ColdWetDog (752185) | about 2 years ago | (#41352943)

So does it matter when someone sends you a .pptx file that Office 2003 freezes on? Yeah, yeah, I'm pretty sure you can get a converter, but I like telling people that if their file has an 'x' in the extension it means that it's 'experimental' and they shouldn't send it to others. They need to send the version without the 'x'.

Re:The Year of Linux on Desktop Is Now (2)

Vince6791 (2639183) | about 2 years ago | (#41353047)

I had the same issue with Office 2010 when people were sending us Excel, Word, and PowerPoint files in Office 2003 format. Yup: some text was tabbed to the right, bullets were different sizes, lines appeared as thick doubles in Excel, and drawn lines in Word sat halfway down one page while the end of the line showed up on the next page. Word had the most issues compared to Excel and PowerPoint. I imported these 2003-format files into LibreOffice and it was not that bad compared to Office 2010 -- just two lines of text tabbed over.

Re:The Year of Linux on Desktop Is Now (1)

Vince6791 (2639183) | about 2 years ago | (#41352933)

Are you talking about Visual Studio 6.0 - 6.1? Visual C++ works so far, but Visual Basic crashes every time I type MsgBox(. Yup, it crashed at the "(". I can run Steam on Wine right now in Ubuntu 12.04.1 without issues, but it does not run any game. I have CoD4 MW, and from what I heard it runs fine on non-PunkBuster servers. I will try this today. If Netflix came over to Linux I would be even happier, but since they are assholes and it runs only on Chrome OS and Android devices, I have to run it in VirtualBox, which has video tearing issues. Linux is not bad. I had to install Ubuntu 12.04.1 on an Atom 1.3GHz netbook with a GMA 500 and 2GB RAM because Windows 7 was running dog slow when web browsing or watching videos. Actually, the Poulsbo drivers run better on Linux than the native Intel GMA 500 drivers on Windows 7.

Blast in time (0)

Anonymous Coward | about 2 years ago | (#41352359)

Hey, I remember reading this in 1987 [wikipedia.org] !

Re:Blast in time (4, Informative)

Pseudonym (62607) | about 2 years ago | (#41352445)

Hell, I remember using an Archimedes in 1988. Odd to think that my phone now has four of them.

Back to the topic, the border between RISC and CISC is a bit fuzzy these days. Every modern CISC chip is basically a dynamic translator on top of a RISC core. But even high-end ARM chips can do some of this with Jazelle.

To be fair, CISC does have a few performance advantages when power consumption isn't (as big) an issue. The code density is better on x86 (yes, even with Thumb), which does mean they tend to use instruction cache more effectively. ARM chips generally don't do out-of-order scheduling and retirement; that uses a lot of power, and is the main architectural difference between laptop-grade and desktop/server-grade x86en.

I'd like to see what a mobile-grade Alpha processor looks like. But I never will.

Re:Blast in time (4, Informative)

TheRaven64 (641858) | about 2 years ago | (#41352545)

Every modern CISC chip is basically a dynamic translator on top of a RISC core.

And that's the problem for power consumption. You can cut power to execution units that are not being used. You can never turn off the decoder (except on Xeons, where you do in loops, but you leave the micro-op decoder on, which uses as much power as an ARM decoder), because every instruction needs decoding.

But even high-end ARM chips can do some of this with Jazelle.

Jazelle has been gone for years. None of the Cortex series include it. It gave worse performance than a modern JIT, but in a lower memory footprint. It's only useful when you want to run Java apps in 4MB of RAM.

The code density is better on x86 (yes, even with Thumb), which does mean they tend to use instruction cache more effectively

That's not what my tests show, in either compiled code or hand-written assembly.

Re:Blast in time (1)

Truekaiser (724672) | about 2 years ago | (#41352867)

Seems you have not actually been keeping up with the ARM architecture. They have had out-of-order execution since the debut of the Cortex A9.

RISC is not the silver bullet (3, Interesting)

wvmarle (1070040) | about 2 years ago | (#41352413)

Nice advertisement for RISC architecture.

Sure it has advantages, but obviously it's not all that great. After all Apple ditched the RISC-type PowerPC for CISC-type Intel chips a while back, and they don't seem to be in any hurry to move back. It seems no-one can beat the price/performance of the CISC-based x86 chips...

Re:RISC is not the silver bullet (5, Insightful)

Anonymous Coward | about 2 years ago | (#41352491)

Like I posted elsewhere, intel hasn't made real CISC processors for years, and I don't think anyone has.
Modern Intel processors are just RISC with a decoder to the old CISC instruction set.
RISC beats CISC in price performance trade-off, but backwards compatibility keeps the interface the same.

Re:RISC is not the silver bullet (3, Interesting)

vovick (1397387) | about 2 years ago | (#41352591)

The question is, how much can the hardware optimize the decoded RISC microcode? Or does the optimization not matter much at this point?

Re:RISC is not the silver bullet (-1)

Anonymous Coward | about 2 years ago | (#41352683)

Like I posted elsewhere, intel hasn't made real CISC processors for years

The "IS" in RISC/CISC stands for Instruction Set. The instruction set (at least that one that's exposed to developers/compilers) that modern x86 processors support is still x86, which is a CISC instruction set, therefore they are CISC processors.

Misleading slant on mention of Atom's RISC core (5, Informative)

Dogtanian (588974) | about 2 years ago | (#41352881)

Like I posted elsewhere, intel hasn't made real CISC processors for years, and I don't think anyone has. Modern Intel processors are just RISC with a decoder to the old CISC instruction set.

Exactly. Intel has been doing this ever since the Pentium Pro and Pentium II came out in the 1990s. Anyone who knows much at all about x86 CPUs is aware of this, and Perens certainly will be. That's why I'm surprised that the article misleadingly states:

So, we start with the fact that Atom isn't really the right architecture for portable devices (*) with limited power budgets. Intel has tried to address this by building a hidden core within the chip that actually runs RISC instructions, while providing the CISC instruction set that ia32 programs like Microsoft Windows expect.

The "hidden core" bit is, of course, correct, but the way it's stated here implies that this is (a) something new and (b) something that Intel have done to mitigate performance issues on such devices, when in fact it's the way that all Intel's "x86" processors have been designed for the past 15 years!

Perhaps I'm misinterpreting or misunderstanding the article, and he's saying that- unlike previous CPUs- the new Atom chips have their "internal" RISC instruction set directly accessible to the outside world. But I don't think that's what was meant.

(*) This is in the context of having explained why IA32 is a legacy architecture not suited to portable devices and presented Atom as an example of this.

Re:RISC is not the silver bullet (0)

Anonymous Coward | about 2 years ago | (#41352501)

That's not a RISC vs CISC thing. Intel has just put tons of money into making CISC not suck. They may have also done that to make Windows work on Macs, or to relieve the burden on software developers who add assembly to their code so they only have to write it once (e.g. SSE3 instructions in Photoshop).

Re:RISC is not the silver bullet (2, Informative)

stripes (3681) | about 2 years ago | (#41352503)

Apple ditched the RISC-type PowerPC for CISC-type Intel chips a while back, and they don't seem to be in any hurry to move back

FYI, all of Apple's iOS devices have ARM CPUs, which are RISC CPUs. So I'm not so sure your "don't seem to be in any hurry to move back" bit is all that accurate. In fact looking at Apple's major successful product lines we have:

  1. Apple I/Apple ][ on a 6502 (largely classed as CISC)
  2. Mac on 680x0 (CISC) then PPC (RISC), then x86 (CISC) and x86_64 (also CISC)
  3. iPod on ARM (RISC), I'm sure the first iPod was an ARM, I'm not positive about the rest of them, but I think they were as well
  4. iPhone/iPod Touch/iPad all on ARM (RISC)

So a pretty mixed bag. Neither a condemnation of CISC nor a ringing endorsement of it.

Re:RISC is not the silver bullet (1)

wvmarle (1070040) | about 2 years ago | (#41352589)

iPhone and iPad are not known as powerful devices; computing power lags far behind a typical desktop at double the price. Form factor (and the touch screens) add a lot of cost.

So far RISC is only found in low-power applications (when it comes to consumer devices at least).

Re:RISC is not the silver bullet (2)

stripes (3681) | about 2 years ago | (#41352783)

So far RISC is only found in low-power applications (when it comes to consumer devices at least).

Plus printers (or at least last I checked), and game consoles (the original Xbox was the only console in the last 2-3 generations not to use a RISC CPU). Many of IBM's mainframes are RISCs these days. In fact, I think the desktop market is the only place you can randomly pick a product and have a near certainty that it is a CISC CPU. Servers are a mixed bag. Network infrastructure is a mixed bag. Embedded devices used to be CISC, but now that varies a lot: lower-cost embedded devices (under $10) tend to be CISC, while those over $10 tend to be RISC.

Ah! You might find CISC dominant in radiation hard environments. There is a MIPS R2000-based silicon on sapphire design in that space, but pretty much everything else is CISC (I haven't looked in a while, but that is a very slow moving market).

Re:RISC is not the silver bullet (0)

Anonymous Coward | about 2 years ago | (#41352547)

RISC has similar problems to IPv6...
Everyone knows it is better, but business thinking that stifles investment and progress, aka "buy the cheapest", will always keep an older, inferior system alive as long as possible.
I bet that Apple did not make the decision based on technical grounds; it was probably a business decision.
Even at the end of the 1980s there were big articles about how much better RISC is.
We've seen how well it can perform in Sun SPARC and PowerPC machines.

And I think we are finally watching RISC gain momentum, via mobile phones, tablets, etc.
Even the "big" player Microsoft is releasing binaries for the ARM (RISC) architecture.
There are rumors about servers with ARM instead of Intel/AMD.

The interesting thing will be to see if they "lift off" before the larger introduction of cell processors, which seem to be the next step in processor architecture...

Re:RISC is not the silver bullet (1)

drinkypoo (153816) | about 2 years ago | (#41352631)

The summary is outright incorrect. First, RISC instructions complete in one cycle. If you have multi-cycle instructions, you're not RISC. Second, x86 processors are internally RISCy and x86 is decomposed into multiple micro-ops. Third, RISC may mean less gates for the same tasks, but it also means that some tasks get broken up into multiple tasks.

ARM doesn't scale as far as x86, so in the high end you need more cores and some tasks are less parallelizable than others. ARM should be recognized as the current killer in the low end, while x86 (well, amd64) is clearly still the best price-performance option in the high end.

Re:RISC is not the silver bullet (2)

slashping (2674483) | about 2 years ago | (#41352725)

The single-cycle rule is bogus. Plenty of ARM instructions (branches, multiply, load/store multiple) take more than 1 cycle, and plenty of x86 instructions only take 1.

Re:RISC is not the silver bullet (5, Interesting)

stripes (3681) | about 2 years ago | (#41352873)

First, RISC instructions complete in one cycle. If you have multi-cycle instructions, you're not RISC

LOAD and STORE aren't single cycle instructions on any RISC I know of. Lots of RISC designs also have multicycle floating point instructions. A lot of second or third generation RISCs added a MULTIPLY instruction and they were multiple cycle.

There are not a lot of hard and fast rules about what makes things RISCy, mostly just "they tend to do this" and "tend not to do that". Like "tend to have very simple addressing modes" (most have register+constant displacement -- but the AMD29k had an adder before you could get the register data out, so R[n+C1]+C2, which is more complex than the norm). Also "no more than two source registers and one destination register per instruction" (I think the PPC breaks this) -- oh, and "no condition register", but the PPC breaks that.

Second, x86 processors are internally RISCy and x86 is decomposed into multiple micro-ops.

Yeah, Intel invented microcode again, or a new marketing term for it. It doesn't make the x86 any more a RISC than the VAX was, though. (For anyone too young to remember, the VAX was the poster child for big fast CISC before the x86 became the big deal it is today.)

Re:RISC is not the silver bullet (0)

Anonymous Coward | about 2 years ago | (#41352647)

The reason Apple left for it was to compete with Windows more.

IBM were lagging behind in Power because they, well, history already happened. They gave it up.
Intel, however, were going full speed into multicore processors, whereas IBM with Sony and Toshiba made the experimental Cell BE, which could have gone places if they had never given up on it. But three companies working on one thing like that was destined to fail.

If Apple switched to the same architecture that Windows used, they could get programmers making the same native apps on their OS without much of a hassle in porting them.
Power was what gave Apple that advantage for media creation, up until Intel went multicore. At that point it was more or less the same, until Intel eventually overtook IBM. Still makes me laugh that this is still trotted out by Apple fanboys today, though; that advantage is long gone.

Re:RISC is not the silver bullet (5, Informative)

UnknowingFool (672806) | about 2 years ago | (#41352695)

I would argue the problem for Apple wasn't about performance but about updates, mobile, and logistics. PowerPC originally held promise as a collaboration between Motorola, IBM, and Apple. IBM got much out of it, as their current line of servers and workstations runs on it. Apple's needs were different than IBM's. Apple needed new processors every year or so to keep up with Moore's law. Apple needed more power-efficient mobile processors. Also, Apple needed a stable supply of the processors.

Despite ordering millions of chips a year, Apple was never going to be a big customer for Motorola or IBM. Their chips would be highly customized in ways that none of the other customers needed or wanted, and Apple needed updates every year. So neither Motorola nor IBM could dedicate huge resources to a small order of chips when they could make millions more for other customers. PowerPC might have eventually come up with a mobile G5 that could rival Intel, but it would have taken many years and lots of R&D. IBM and Motorola didn't want to invest that kind of effort (again, for one customer). So every year Apple would order the chips they thought they needed. If they were short, they would have to order more. Now, Motorola and IBM, like most manufacturers (including Apple), do not like carrying excess inventory. So they were never able to keep up with Apple's orders, as their other customers had steadier and larger chip orders.

So what was Apple to do? Intel represented the best option. Intel's mobile x86 chips were more power efficient than PowerPC versions. Intel would keep up the yearly updates of their chips. If Apple increased their orders from Intel, Intel could handle it because if Apple wasn't ordering a custom part, they were ordering more of a stock part. There are some cases where Apple has Intel design custom chips for them, mostly on the lower power side; however, Intel still can sell these to their other customers.

As a side note, for a contrast with the IBM-Apple relationship, look at the relationship between MS and IBM for the Xbox 360 Xenon chip [wikipedia.org]. This was a custom design by IBM for MS, but the basic chip design hasn't changed in seven years. As such, chip manufacturing has been able to move the chip to smaller lithographies (90nm --> 45nm in 2008), both increasing yield and lowering cost.

Re:RISC is not the silver bullet (1)

Anonymous Coward | about 2 years ago | (#41352743)

Nice strawman:

Apple ditched PowerPC because IBM was better at getting performance rather than power management. This isn't a fault of the RISC architecture just like a Pinto's issues are not due to having four wheels.

Apple also knew that x86 is cheaper and that people would buy Macs to run Windows. Being able to use PC hardware cut their manufacturing costs by a significant margin.

No matter how much state-of-the-art lipstick Intel puts on it, there is still a 1970s-era pig underneath.

Yes, the RISC/CISC battle went Intel's way, but it was not because of the CPU architecture. It was Intel's ability to keep wrapping a shell around the antediluvian x86 instruction set, then add the 64-bit extensions after AMD added theirs.

ARM may not be as fast, but because it doesn't have to have the exotic microcode the x86 descendants have, it can perform fewer transistor switches per calculation, and thus consumes less power.

Re:RISC is not the silver bullet (1)

Anonymous Coward | about 2 years ago | (#41352749)

That was in part because IBM was taking the PPC places that Apple was not interested in, and other suppliers had no product ready within reasonable time that could keep up with x86. The problem was not RISC, it was down to the business plans of the PPC producers at that time. Now Apple do their own chips, in the form of an ARM variant. Likely they have some variant in the lab that is set up and clock more like a desktop CPU than a mobile SoC. And if Intel ever gets uppity about Apple demands, well hey here is a ARM based Mac and the tools to yet again produce fat binaries for the transition. Or Apple slowly exits the traditional desktop market by slowly importing more and more design concepts from iOS into OSX.

Re:RISC is not the silver bullet (1)

gweihir (88907) | about 2 years ago | (#41352813)

Unless you have energy constraints, that is. Then RISC architecture rules. Given that most computers today are smartphones (and most run Linux, some run iOS), and many other CPUs are in data centers where energy consumption also matters very much, I think discounting RISC this way does not reflect reality. Sure, enough people still run full-sized computers with wired network and power-grid access at home, and these will remain enough to keep that model alive, but RISC won the battle for supremacy a while ago.

As to Linux: it is already dominant in the server space. The year of "Linux in the pocket" was a while ago. On the desktop? That will come, as soon as MS customers realize that MS cannot innovate away from what works well. The ribbon, their desktop-unfriendly new window manager, etc., are all attempts to be different, even if it degrades usability. At that time, the alternatives will catch up and "Linux or Windows" will not matter that much anymore. On the plus side, that will also be the time when Windows finally catches up and becomes a modern, reasonable OS. Might still take a while, though, and require significant depletion of the MS war chest. But the atrocious Win8 is a step in the right direction: nobody in their right mind will want it either on the desktop or on a tablet or smartphone. Forget about enterprise users as well; they are currently either thinking about whether all the changes in Win7 are too much to migrate away from XP, or have just migrated to Win7. I also know some pretty large enterprises that are still on Office 2003, because they do not want to retrain their people for the Ribbon.

Re:RISC is not the silver bullet (0)

Anonymous Coward | about 2 years ago | (#41353103)

They moved most likely because it was, and still is, cheaper to go with x86 chips. IBM was the one making their PowerPC chips, correct? While every other chip maker for desktops is making x86 chips... ARM and most of the RISC-type chipsets are for mobile devices, not the desktop, though I bet some of these high-end mobile chips would be good enough that you probably couldn't tell a difference until you got to the GPU.

Also, not as much software runs on RISC compared to x86. I mean, yeah, Linux can support it, and Windows could if they compiled a version for it, but all the other apps, games, and other programs already out will not run on RISC unless it can emulate x86 or x86-64. It wouldn't be quite as much of an issue for new releases, as they can just compile for whatever the new arch is.

Though it does pose an interesting area to explore. Now, with multi-core being the norm even for x86 CISC chips, imagine how many of these small RISC cores you could pack into the same space. Would be neat to try.

Thanks (0)

Anonymous Coward | about 2 years ago | (#41352417)

Thanks for that quick history lesson in computer architecture.
BTW, intel processors haven't been CISC for years.
They're all RISC with a component that translates from the CISC instructions to RISC.

Re:Thanks (2)

stripes (3681) | about 2 years ago | (#41352573)

BTW, intel processors haven't been CISC for years. They're all RISC with a component that translates from the CISC instructions to RISC

Nice marketing talk. So was the VAX (most of them, anyway - I think the VAX 9000 was a notable exception). I mean, it had this hardware instruction decoder, and it did simple instructions in hardware, and then it slopped all the complex stuff over onto microcode. In fact most CISC CPUs work that way - in the past all of the "cheap" ones did, and now pretty much all of them do. So if you call any CPU that executes only simple instructions directly and translates the rest a RISC, it is hard to find any non-RISC CPU. Of course, internally they aren't so much "RISCy" as "VLIWy"...

The x86 is still the poster boy for CISC. (And hey, CISC isn't all bad; pick up a copy of Hennessy and Patterson and read up on the relevant topics.)

Re:Thanks (1)

Anonymous Coward | about 2 years ago | (#41352771)

Yes. I consider all these words including RISC, CISC, VLIW, etc to be completely useless for today's processors.
They are only useful for a historical understanding for why things are done a certain way, and how the techniques were developed.
Any kind of serious commercial processor will combine elements from these different architectures.

BTW, your argument that Intel processors are somehow CISC because other processors use microcode as well doesn't make sense.
It just means that they are both not CISC.

Re:Thanks (1)

stripes (3681) | about 2 years ago | (#41353099)

That situation existed in the early 1990s/late 1980s when the terms CISC and RISC were invented. The x86 existed and was CISCy on the outside and microcoded inside. The VAX was the same. The arguments were never "you can't implement CISC internally the same as a RISC" because they were all already done that way. It was "if you avoid X, Y and Z in your programmer visible instruction set you don't need all that cruft in the chip". What makes something RISC or CISC was originally all about the instruction set, and I see nothing that has changed in the last 20 years that makes it useful to change the definitions.

Collapsing two useful words into one useless meaning doesn't add value to the language, it destroys it (well, not the whole language, just those two words). So why do it? If the new meanings actually had some value, sure I can see adopting new usage, but why switch to something worse?

oversimplified (5, Insightful)

kenorland (2691677) | about 2 years ago | (#41352425)

ia32 dates back to the 1970's and is the last bastion of CISC,

The x86 instruction set is pretty awful and Atom is a pretty lousy processor. But that's probably not due to RISC vs. CISC. IA32 today is little more than an encoding for a sequence of RISC instructions, and the decoder takes up very little silicon. If there really were large intrinsic performance differences, companies like Apple wouldn't have switched to x86 and RISC would have won in the desktop and workstation markets, both of which are performance sensitive.

I'd like to see a well-founded analysis of the differences between Atom and ARM, but superficial statements like "CISC is bad" don't cut it.

Re:oversimplified (-1, Troll)

gadzook33 (740455) | about 2 years ago | (#41352533)

This is just a rant session about Atom. Someday linux devs will resign themselves to the fact that linux is (somewhat) great for servers and terrible for almost everything else. This will probably get modded as trolling but if I said the opposite thing about MS it would be insightful. In my opinion this entire article is trolling.

Re:oversimplified (2)

aaron552 (1621603) | about 2 years ago | (#41352603)

Someday linux devs will resign themselves to the fact that linux is (somewhat) great for servers and terrible for almost everything else.

Remind me how being the core of the most-used mobile operating system is "terrible for almost everything else"

Re:oversimplified (2)

gadzook33 (740455) | about 2 years ago | (#41352727)

Well, it's a matter of personal preference, but personally I think Android is pretty lousy and suffers from many of the same issues that desktop Linux does, such as lack of standardization and a somewhat clunky interface.

Re:oversimplified (0, Troll)

Anonymous Coward | about 2 years ago | (#41352889)

Well, it's a matter of personal preference, but personally I think Android is pretty lousy and suffers from many of the same issues that desktop Linux does, such as lack of standardization and a somewhat clunky interface.

This. Android's smartphone marketshare reminds me of Windows' desktop marketshare. Just because it's ubiquitous doesn't mean it's any good. Android is free/cheap, and "good enough." But jesus, do I cringe whenever I see someone on the bus scrolling through a list of contacts or panning/zooming on a webpage on their Android phone.

Re:oversimplified (1, Informative)

MindlessAutomata (1282944) | about 2 years ago | (#41352993)

How is an iPhone better? Having used both an iPhone and iPad, I was far from impressed. The thing I was most impressed by was that I had great difficulty trying to use it, because it takes a while to figure out that you actually can't do a lot of very basic computer operations, as it's very dumbed down (file system access, for one -- what the hell?)

Re:oversimplified (1)

postbigbang (761081) | about 2 years ago | (#41352635)

I don't think you're trolling, just living in the early 2000s. If Intel doesn't release ATOM power settings, no big deal. Easily avoided. Atom 64-bit is still a few years away until practical implementations are made. Atom is their answer to a bridge between ARM and fat Xeons. As a design, it's a compromise, and is unlikely to live long.

That Intel doesn't release specs is a questionable assertion. That we care is also questionable. It's more of the Intel-Microsoft boogie-man propaganda meme that's rooted in some false premise. ARM is really good and graftable -- in low-power apps in a 32-bit space. Currently, that's a sweet spot for entrant/basic computing devices. Intel (and to an extent, AMD) can't write a sufficient amount of the basics that would allow operating systems to use all the cores they can slab into a socket, so virtualization has become the strength of multi-core devices, and there's plenty of current popularity in *that*.

Atom is fun, but isn't a big deal, and IMHO, won't be around very long. Getting one's feathers ruffled over it is nihilistic and distracts from other, deeper rivalries between F/OSS and Microsoft and Google's models.

Re:oversimplified (1)

postbigbang (761081) | about 2 years ago | (#41352653)

Let me correct: ARM 64-bit is still a few years away.

Re:oversimplified (1)

gadzook33 (740455) | about 2 years ago | (#41352711)

Ok oddly enough this I agree with.

Re:oversimplified (3, Interesting)

RabidReindeer (2625839) | about 2 years ago | (#41352761)

This is just a rant session about Atom. Someday linux devs will resign themselves to the fact that linux is (somewhat) great for servers and terrible for almost everything else. This will probably get modded as trolling but if I said the opposite thing about MS it would be insightful. In my opinion this entire article is trolling.

Well, excuse me for living.

I boot up Windows for 3 reasons:

1. Tax preparation
2. Diagnosing IE-specific issues
3. Flight Simulator (Yes, I know, there's a flight simulator for Linux, but I like the MS one OK)

Mostly the Windows box is powered off, because those are infrequent occasions and I'd rather not add another 500 watts to the A/C load. All of the day-to-day stuff I do to make a living runs on Linux, if for no other reason than the fact that I'd have to take out a second mortgage to pay for all the Windows equivalents of the databases, software development, and text-processing tools that come free with Linux. Or in some cases, free for Linux.

If you said "Linux is terrible for almost everything else" and gave specific examples, you'd be insightful. Given however, that I'm quite happy with at least 2 of the "everything else"s (desktop and Android), lack of specific illustrations makes you a troll.

Re:oversimplified (1)

postbigbang (761081) | about 2 years ago | (#41352807)

I can't vouch for the GPU or hefty graphics card you're using, but you could do the same thing with lots of heft for under 150watts, including the freaking LCD monitor. There are a number of nice, high stroke CPUs out there that don't need a 500w supply.

Yes, Flight Simulator is a pig. But it seems to like AMD's math over AMDs, and in terms of graphics, I haven't kept up, so can't address that particular piece. Nonetheless, 500w can power a decent electric scooter.

Re:oversimplified (1)

RabidReindeer (2625839) | about 2 years ago | (#41352965)

I can't vouch for the GPU or hefty graphics card you're using, but you could do the same thing with lots of heft for under 150watts, including the freaking LCD monitor. There are a number of nice, high stroke CPUs out there that don't need a 500w supply.

Yes, Flight Simulator is a pig. But it seems to like AMD's math over AMDs, and in terms of graphics, I haven't kept up, so can't address that particular piece. Nonetheless, 500w can power a decent electric scooter.

Forgive me. I haven't actually measured the true wattage. I don't have gamer hardware on this box - it's just a bog-standard small form-factor desktop with a mobo graphics card that's so crappy that I've been known to use Remote Desktop in from the Linux box just to preserve my eyes.

All I can say for certain is that it can make the office nice and toasty when I've already got my regular equipment running. Good for January. Not so good for July.

I'd probably do better making a VM out of it and running it on the Linux box, but one of the things that Linux is exactly the opposite of "terrible" for, versus Windows, is that I cannot just throw Windows VM images around with wild abandon the way I can with Linux, BSD, OpenSolaris, etc. Windows has to be licensed and blessed by Redmond.

Re:oversimplified (4, Informative)

Zero__Kelvin (151819) | about 2 years ago | (#41353051)

"Someday linux devs will resign themselves to the fact that linux is (somewhat) great for servers and terrible for almost everything else"

You don't know anything about Linux. It powers all RISC/ARM-based Android smartphones. It also runs on more than 33 different CPU architectures. A huge number of those platforms are embedded systems that are probably sitting in your living room and enabling you to watch TV, DVDs, Blu-ray, etc., as well as listen to it all in Surround Sound [wikipedia.org].

" In my opinion this entire article is trolling."

To be blatantly honest, I haven't quite figured out if it is you that is trolling, or you are really just that ignorant of the facts.

Re:oversimplified (1)

PCK (4192) | about 2 years ago | (#41352587)

Well, there has never been a CISC-to-RISC instruction conversion in an Intel processor; AFAIK the AMD K5 was the only processor to do so, and the original Pentiums pretty much outperformed it.

Out-of-order Intel processors since the Pentium Pro have converted instructions into very simple uOPs; in fact, many RISC processors do the same thing.

Re:oversimplified (3, Interesting)

Anonymous Coward | about 2 years ago | (#41352679)

What really kills x86's performance/power ratio is that it has to maintain compatibility with ancient implementations. When x86 was designed, things like caches and page tables didn't exist; they got tacked on later. Today's x86 CPUs are forced to use optimizations such as caches (because it's the only way to get any performance) while still maintaining the illusion that they don't, as far as software is concerned. For example, x86 has to implement memory snooping on page tables to automatically invalidate TLBs when the page table entry is modified by software, because there is no architectural requirement that software invalidate TLBs (and in fact no instructions to individually invalidate TLB entries, IIRC). Similarly, x86 requires data and instruction cache coherency, so there has to be a bunch of logic snooping on one cache and invalidating the other.

RISC CPUs were originally designed with performance in mind: instead of having the CPU include all that logic to deal with the performance optimizations and make them transparent, the software has to handle them explicitly (e.g. TLB flushes, cache flushes and invalidation, explicit memory barriers, etc.). As it turns out, it's much easier for software to do that job (because the programmer knows when they are needed, instead of having the CPU check every single instruction for these issues), and it makes the CPU much more efficient.

Re:oversimplified (3, Insightful)

stripes (3681) | about 2 years ago | (#41352687)

I'd say the x86 being the dominant CPU on the desktop has given Intel the R&D budget to overcome the disadvantages of being a 1970s instruction set. Anything they lose by not being able to wipe the slate clean (complex addressing modes in the critical data path, and complex instruction decoders, for example), they get to offset by pouring tons of R&D into either finding a way to "do the inefficient, efficiently", or finding another area they can make fast enough to offset the slowness they can't fix.

The x86 is inelegant, and nothing will ever fix that, but if you want to bang some numbers around, well, the inelegance isn't slowing it down this decade.

P.S.:

IA32 today is little more than an encoding for a sequence of RISC instructions

That was true of many CPUs over the years, even when RISC was new. In fact even before RISC existed as a concept. One of the "RISC sucks, it'll never take off" complaints was "if I wanted to write microcode I would have gotten onto the VAX design team". While the instruction set matters, it isn't the only thing. RISCs have very very simple addressing modes (sometimes no addressing modes) which means they can get some of the advantages of OOO without any hardware OOE support. When they get hardware OOE support nothing has to fuse results back together and so on. There are tons of things like that, but pretty much all of them can be combated with enough cleverness and die area. (but since die area tends to contribute to power usage, it'll be interesting to see if power efficiency is forever out of x86's reach, or if that too will eventually fall -- Intel seems to be doing a nice job chipping away at it)

Re:oversimplified (0)

Anonymous Coward | about 2 years ago | (#41352799)

Yep, Intel is offsetting design inefficiencies by having some of the most advanced production processes.

Intel is already pushing 22nm while the competition is still on 32 or even 45.

Re:oversimplified (1)

Bert64 (520050) | about 2 years ago | (#41352777)

Performance hasn't got a lot to do with it... Backwards compatibility is what matters, closely followed by price and availability.
While they were being actively developed and promoted, RISC architectures were beating x86 quite heavily on performance. However Intel had economies of scale on their side, they were able to sell millions of x86 chips and therefore outspend the RISC designers quite heavily.

Intel tried to move on from x86 too, with IA64... They failed, largely because of a lack of backwards compatibility...

Re:oversimplified (5, Insightful)

lkcl (517947) | about 2 years ago | (#41352871)

I'd like to see a well-founded analysis of the differences between Atom and ARM, but superficial statements like "CISC is bad" don't cut it.

i've covered this a couple of times on slashdot: simply put it's down to the differences in execution speed vs the storage size of those instructions. slightly interfering with that is of course the sizes of the L1 and L2 caches, but that's another story.

in essence: the x86 instruction set is *extremely* efficiently memory-packed. it was designed when memory was at a premium. each new revision added extra "escape codes" which kept the compactness but increased the complexity. by contrast, RISC instructions consume quite a lot more memory as they waste quite a few bits. in some cases *double* the amount of memory is required to store the instructions for a given program [hence where the L1 and L2 cache problem starts to come into play, but leaving that aside for now...]
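(A hedged aside on the density claim above, not from the parent comment: the byte counts below are for standard ia32 encodings of a trivial, hypothetical function and can be checked with any assembler; the comparison is illustrative only, not a benchmark.)

    /* Illustrative only: the same trivial function, with representative
     * instruction encodings and their sizes in the comments. */
    int increment(int x)
    {
        return x + 1;

        /* ia32 (cdecl, argument on the stack):
         *     mov eax, [esp+4]   ; 8B 44 24 04  (4 bytes)
         *     inc eax            ; 40           (1 byte)
         *     ret                ; C3           (1 byte)   total: 6 bytes
         *
         * ARM A32 (AAPCS, argument in r0), fixed 4 bytes per instruction:
         *     add r0, r0, #1     ; 4 bytes
         *     bx  lr             ; 4 bytes                 total: 8 bytes
         */
    }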

so what that means is that *regardless* of the fact that CISC instructions are translated into RISC ones, the main part of the CPU has to run at a *much* faster clock rate than an equivalent RISC processor, just to keep up with decode rate. we've seen this clearly in an "empirical observable" way in the demo by ARM last year, of a 500mhz Dual-Core ARM Cortex A9 clearly keeping up with a 1.6ghz Intel Atom in side-by-side running of a web browser, which you can find on youtube.

now, as we well know, power consumption is a square law of the clock rate. so in a rough comparison, in the same geometry (e.g. 45nm), that 1.6ghz CPU is going to be roughly TEN times the power consumption of that dual-core ARM Cortex A9. e.g. that 500mhz dual-core Cortex A9 is going to be about 0.5 watts (roughly true) and the 1.6ghz Intel Atom is going to be about 5 watts (roughly true).
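(Editorial aside, hedged: the standard CMOS dynamic-power relation only gives a square-or-worse scaling with clock rate if the supply voltage also has to rise with frequency; that assumption is mine, not the parent's. Taking the parent's square-law shortcut at face value, the arithmetic is:)

    % CMOS dynamic power: activity factor \alpha, switched capacitance C,
    % supply voltage V, clock frequency f.
    P_{\mathrm{dyn}} \;\approx\; \alpha\, C\, V^{2} f
    % With V held constant, power grows only linearly with f; the
    % faster-than-linear scaling comes from V having to rise with f.
    % The parent's rough square-law estimate:
    \left(\frac{1.6\ \mathrm{GHz}}{0.5\ \mathrm{GHz}}\right)^{2} \;\approx\; 10.2
    % i.e. roughly 0.5 W for the 500 MHz part versus roughly 5 W for the
    % 1.6 GHz part, which is where the "TEN times" figure comes from.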

what that means is that x86 is basically onto a losing game... period. the only way to "win" is for Intel and AMD to have access to geometries that are at least 2x better than anything else available in the world. each new geometry that comes out is not going to *stay* 2x better for very long. when everyone has access to 45nm, intel and AMD have to have access to 22nm or better... *at the same time*. not "in 6-12 months time", but *at the same time*. when everyone else has access to 28nm, intel and AMD have to have access to 14nm or better.

intel know this, and AMD don't. it's why intel will sell their fab R&D plant when hell freezes over. AMD have a slight advantage in that they've added in parallel execution which *just* keeps them in the game i.e. their CPUs have always run at a clock rate that's *lower* than an intel CPU, forcing them to publish "equivalent clock rate" numbers in order to not appear to be behind intel. this trick - of doing more at a lower speed - will keep them in the game for a while.

but, if intel and AMD don't come out with a RISC-based (or VLIW or other parallel-instruction) processor soon, they'll pay the price. intel bought up that company that did the x86-to-DEC-Alpha JIT assembly translation stuff (back in the 1990s) so i know that they have the technology to keep things "x86-like".

Re:oversimplified (0)

Anonymous Coward | about 2 years ago | (#41352903)

This half assed meme about CISC vs RISC and how the ARM is magically more efficient than stupid old x86 has to die. It's wrong and technically illiterate.

ARM has been hugely successful because THEY SELL DESIGNS. They sell them to system-on-a-chip designers who compete to make the best ones, and those are then built into products. The SoCs are low power, reliable, and pack masses of features into one chip... and Chinese manufacturers compete to crank out products that use them, as they don't really care about really high-end performance.

It's great... but it's not magic. Intel won't license their chip designs, and neither will AMD (partly through the contracts they have with Intel). So no one gets the kind of flexibility and competition that's coming with ARM.

Good on the ARM guys for hitting the mark... so Intel and AMD are competing by being ever more hidden and controlling, and getting further into bed with Microsoft and trusted computing (see UEFI).

Wow (0, Troll)

Anonymous Coward | about 2 years ago | (#41352459)

The author, a well-known Linux fanboy, is butthurt because Intel won't share their toys with his favorite operating system in the whole world. News at eleven.

Windows developers (1)

sunderland56 (621843) | about 2 years ago | (#41352467)

The details of Clover Trail's power management won't be disclosed to Linux developers.

So sign up as a Windows developer, get the info, and use it to improve Linux.

Re:Windows developers (2)

Alex Belits (437) | about 2 years ago | (#41352709)

The "info" is "just use it on Windows 8 with our great modified kernel".

Short term vs Long term thinking (4, Interesting)

UnknowingFool (672806) | about 2 years ago | (#41352485)

Some here were immediately crying anti-trust and not understanding why Intel won't support Linux for Clover Trail. It's not an easy answer, but power efficiency has been Intel's weakness against ARM. If consumers had a choice between ARM-based Android or Intel-based Android, the Intel one might be slightly more powerful in computing, but that comes at the cost of battery life. For how most consumers use tablets, the increase in computing isn't worth the decrease in battery life. For geeks, it's worth it, but general consumers don't see the value. Now if the tablet used a desktop OS like Windows or Linux, then the advantages are more apparent; however, the numbers favor Windows, as there are more likely to be desktop Windows users with an Intel tablet than desktop Linux users with an Intel tablet. As a short-term strategy, it makes sense.

Long term, I would say Intel isn't paying attention. Considering how MS have treated past partners, Intel is being short-sighted if they want to bet their mobile computing hopes on MS. Also have they seen Windows 8? Intel based tablets might appeal to businesses but Win 8 is a consumer OS. So consumers aren't going to buy it; businesses aren't going to buy it. Intel may have bet on the wrong horse.

Re:Short term vs Long term thinking (1)

Anonymous Coward | about 2 years ago | (#41352977)

That isn't the issue.

The issue is that Intel isn't providing the drivers or even the documentation so that Linux CAN support these features.

Sorry Bruce, but that is total nonsense. (5, Insightful)

guidryp (702488) | about 2 years ago | (#41352495)

"ARM ends up being several times more efficient than Intel"

Wow. Someone suffered a flashback to the ancient CISC vs RISC wars.

This is really totally out to lunch. Seek out some analysis from actual CPU designers on the topic. What I read generally pegs the x86 CISC overhead at maybe 10%, not several times.

While I do feel it is annoying that Intel is pushing an Anti-Linux platform, it doesn't make sense to trot out ancient CISC/RISC myths to attack it.

Intel Chips have lagged because they were targeting much different performance envelopes. But now the performance envelopes are converging and so are the power envelopes.

Medfield has already been demonstrated at a competitive power envelope in smartphones.

http://www.anandtech.com/show/5770/lava-xolo-x900-review-the-first-intel-medfield-phone/6 [anandtech.com]

Again we see reasonable numbers for the X900 but nothing stellar. The good news is that the whole "x86 can't be power efficient" argument appears to be completely debunked with the release of a single device.

Re:Sorry Bruce, but that is total nonsense. (2)

CajunArson (465943) | about 2 years ago | (#41352741)

If it were possible to give a +6, then your post would deserve one...

One other thing about the pro-ARM propaganda on this site practically every day: how come the exact same people throwing a hissy fit over Clover Trail never make a peep when ARM bends over backwards to cooperate with companies like Nokia & Apple, whose ARM chips don't work with Linux in the slightest? By comparison, making a few tweaks to turn on Clover Trail's power-saving features will be trivial compared to trying to get Linux running on an iPhone 5's A6 SoC...

Re:Sorry Bruce, but that is total nonsense. (3, Informative)

Truekaiser (724672) | about 2 years ago | (#41353021)

ARM does not make their own chips. They design the instruction sets and the silicon photomasks (look up how chips are made), but other companies make the actual physical silicon product. Those companies can pick and choose what parts of the CPU they want to use and what instruction sets they want in it.

To use food as an analogy: Intel is every store or restaurant where you can buy food pre-made and ready to eat; ARM would be like someone selling a recipe to you. It's up to you to make it, and what you put into it.

So it's not ARM's fault for not supporting Linux on the Nokia and Apple variants of the ARMv7 instruction set; it's those respective companies'. So if you had enough money and access to rent or own a CPU fab plant, you too could make your own version of an ARM chip and make it only supported on Haiku OS, for example.

Re:Sorry Bruce, but that is total nonsense. (1)

recoiledsnake (879048) | about 2 years ago | (#41352791)

Thanks for posting that. The article felt like nothing but a hit piece against all things Intel and AMD just because they're not officially supporting one processor on Linux at the time of release. Intel is very good at releasing Linux drivers for their GPUs etc. compared to others. I think they figure that not many Linux folks will be falling over themselves buying Windows 8 touch tablets and running Ubuntu on them. The Slashdot consensus seems to be that Windows 8 tablets suck and will be a massive failure, so why even bother at this point about Clover Trail?

From TFA:

>AMD's "Hondo" processor is being marketed as "Windows Only" too. Microsoft must be paying both manufacturers a lot for this

I don't think Microsoft really cares about the 1% Linux PC market enough to spend any money on exclusivity. They know hobbyists will add support regardless and I don't think they care one bit. Their real target is the iPad here, whether their strategy succeeds or not is a different issue, but this sure feels like a sour grapes hit piece written for the Slashdot audience.

Re:Sorry Bruce, but that is total nonsense. (1)

pissoncutler (66050) | about 2 years ago | (#41353035)

Thank you for adding some facts to this Intel-bash-fest. The CloverTrail processor was designed with Microsoft specifically for Windows 8. That's like saying that if Logitech makes a left-handed trackball, they are anti right hand.

Intel isn't anti-Linux. Intel has been one of the biggest contributors to Linux for the past decade.

"The top 10 organizations sponsoring Linux kernel development since the last report (or Linux kernel 2.6.36) are Red Hat, Intel, Novell, IBM, Texas Instruments, Broadcom, Nokia" ref: http://www.linuxfoundation.org/news-media/announcements/2012/04/linux-foundation-releases-annual-linux-development-report [linuxfoundation.org]

x86 to blame? (4, Insightful)

leromarinvit (1462031) | about 2 years ago | (#41352499)

Is it really true that x86 is necessarily (substantially) less efficient than ARM? x86 instruction decoding has been a tiny part of the chip area for many years now. While it's probably relatively more on smaller processors like Atom, it's still small. The rest of the architecture is already RISC. Atom might still be a bad architecture, but I don't think it's fair to say x86 always causes that.

Also, there is exactly one x86 Android phone that I know of, and while its power efficiency isn't stellar, the difference is nowhere near 4x. From the benchmarks I've seen, it seems to be right in the middle of the pack. I'd really like to see the source for that claim.

Re:x86 to blame? (0)

Anonymous Coward | about 2 years ago | (#41352627)

The decoder logic becomes a larger part of the equation when you move to less powerful but more power-efficient cores.

For an Android phone, the display and the radios tend to dominate power consumption, because the CPUs are generally efficient enough not to be in the way.

As for x86 vs. ARM: for all these years, Intel worked to make x86 processors the fastest on the planet, while ARM worked to make its processors the most power-efficient and embeddable on the planet. ARM designs generally have most of their supporting circuitry on die, while Atom continues to need more support chips. More chips generally means more power.

In short, Intel comes to the race for the best embedded processor 20 years late and with a handicap.

Re:x86 to blame? (1)

leromarinvit (1462031) | about 2 years ago | (#41352877)

Yeah, for really embedded stuff ARM is much better suited, because it is a lot simpler. I don't think it would be possible, or would make sense, to squeeze the whole x86 legacy baggage into, say, a tiny microcontroller with a few KB of SRAM and still get decent performance and features. But I don't think that's the market Intel is aiming at.

Re:x86 to blame? (1)

StripedCow (776465) | about 2 years ago | (#41352763)

I don't understand why people put so much weight on instruction-level compatibility, as if compiler technology did not exist. Heck, even today compilers can translate efficiently from one instruction set to another (see, e.g., virtual machines, emulators, etc.).
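
As a toy-sized illustration of running one instruction set on top of another, a minimal interpreter loop might look like the sketch below; the opcodes, encoding, and sample program are invented purely for the example.

    #include <stdint.h>
    #include <stdio.h>

    enum { OP_HALT = 0, OP_LOADI = 1, OP_ADD = 2, OP_PRINT = 3 };

    int main(void)
    {
        /* "Guest" program: r0 = 2; r1 = 3; r0 = r0 + r1; print r0; halt */
        uint8_t code[] = { OP_LOADI, 0, 2, OP_LOADI, 1, 3,
                           OP_ADD, 0, 1, OP_PRINT, 0, OP_HALT };
        int32_t reg[4] = { 0 };
        size_t pc = 0;

        for (;;) {
            switch (code[pc]) {                       /* decode guest opcode */
            case OP_LOADI: reg[code[pc + 1]] = code[pc + 2];    pc += 3; break;
            case OP_ADD:   reg[code[pc + 1]] += reg[code[pc + 2]]; pc += 3; break;
            case OP_PRINT: printf("%d\n", reg[code[pc + 1]]);   pc += 2; break;
            case OP_HALT:  return 0;
            }
        }
    }

A real JIT or binary translator would cache translated blocks of host code instead of re-decoding each instruction, but the principle is the same.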

Granted, there will always be some code (the "innermost loops") that needs to be handcrafted to be as efficient as possible, but I don't believe that is important enough for a semiconductor design house to base its whole roadmap on.

Linux-proof? Challenge Accepted! (2)

nopainogain (1091795) | about 2 years ago | (#41352517)

Just send me the hardware.

Re:Linux-proof? Challenge Accepted! (1)

Teresita (982888) | about 2 years ago | (#41352567)

Windows 3.1's AARD code made it DR-DOS-proof, but undocumented opcodes are about the only way to do this sort of thing in hardware.
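
In practice, this kind of gating is usually done from the software side: the code probes the CPU and refuses to run if it doesn't like what it finds. A minimal sketch using GCC/Clang's <cpuid.h> on x86 (my own illustration, not related to the actual AARD check or to anything Intel has published about Clover Trail):

    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        char vendor[13];

        if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
            return 1;                        /* CPUID not available */

        /* The 12-byte vendor string lives in EBX, EDX, ECX, in that order. */
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);
        vendor[12] = '\0';

        printf("CPU vendor: %s\n", vendor);
        return strcmp(vendor, "GenuineIntel") != 0;  /* e.g. gate on vendor */
    }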

Re:Linux-proof? Challenge Accepted! (1)

Anonymous Coward | about 2 years ago | (#41352623)

Reading between the lines, Intel probably decided that Clover Trail is a dog compared to ARM, but by specializing in Windows 8 they can grab a large enough piece of the market to buy time to retool their manufacturing process. Also, there are probably power-management features that the Linux kernel doesn't support. Just guessing, but that may include disabling obsolete portions of the microcode, like 286 protected mode and some of the good ol' complex instructions from the original 8086, such as string instructions with the REP prefix.
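
For readers who never wrote 16-bit assembly, a rough C equivalent of the REP MOVSB string instruction mentioned above might look like this. It is a sketch of the semantics only (one instruction that loops internally, copying CX bytes from one pointer to another), not of how any silicon implements it.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Approximates "rep movsb": copy cx bytes from si to di. */
    void rep_movsb(uint8_t *di, const uint8_t *si, size_t cx)
    {
        while (cx--)        /* the REP prefix repeats until CX reaches 0   */
            *di++ = *si++;  /* MOVSB copies one byte, advances both pointers */
    }

    int main(void)
    {
        uint8_t src[] = "hello", dst[sizeof src];
        rep_movsb(dst, src, sizeof src);
        printf("%s\n", (char *)dst);
        return 0;
    }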

Intel having to work backwards (1)

PCK (4192) | about 2 years ago | (#41352529)

Intel has always designed its processors for performance first, whereas ARM designed for power consumption; heck, only recently did ARM get a hardware integer divide instruction. x86 instruction decode is not so complicated that it requires four times the power; if it did, Intel would never be able to produce high-performance chips.
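
As an aside, "no hardware integer divide" means the compiler has to emit, or call as a library routine, something like the shift-and-subtract loop below. This is an illustrative sketch of the unsigned case only, assuming a non-zero divisor; real runtime routines are more heavily optimized.

    #include <stdint.h>
    #include <stdio.h>

    /* Restoring long division: one quotient bit per iteration. */
    uint32_t soft_udiv(uint32_t num, uint32_t den)
    {
        uint32_t quot = 0, rem = 0;
        for (int bit = 31; bit >= 0; bit--) {
            rem = (rem << 1) | ((num >> bit) & 1);  /* bring down next bit */
            if (rem >= den) {
                rem -= den;
                quot |= 1u << bit;
            }
        }
        return quot;
    }

    int main(void)
    {
        printf("%u\n", soft_udiv(100, 7));  /* prints 14 */
        return 0;
    }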

The CISC/RISC debate is pretty much a red herring, but it keeps coming up over and over again. As you increase performance, instruction decode becomes a smaller part of the processor; that's why some Cortex-A9 designs add a fifth companion core, engineered for low power, just for stand-by.

It isn't the 80s and 90s any more.

Intel will fail at mobile (3, Interesting)

leathered (780018) | about 2 years ago | (#41352601)

... and the reason is not efficiency or performance. Intel enjoys huge (50%+) margins on x86 CPUs that simply will not be tolerated by tablet or mobile device vendors. Contrast this with the pennies that ARM and its fab partners make on each unit sold. Even Intel's excellent process tech can't save them cost-wise when you can get a complete ARM SoC with integrated GPU for $7. [rhombus-tech.net]

Intel... pfffft !! (0)

Anonymous Coward | about 2 years ago | (#41352641)

Nothing but a kludgy Zenith knockoff. Garbage since day one. But... the price was right, and that's all that counts.

ia32 dates back to the 1970's -- B.S. (-1)

Anonymous Coward | about 2 years ago | (#41352651)

Say again? Are you telling me they had a 32-bit architecture in the 1970s...? I call BS.
His whole argument is BS.

RISC used to be faster, and you would not have found a bigger fan of it than me. But CISC has caught up and passes it in most cases. Instructions that used to take many cycles on CISC have been sped up because more silicon can be thrown at the task; instruction scheduling, pipelining, and overlapping execution, to name a few, make CISC much more attractive than RISC.

Modern compilers, including GCC, know how to take advantage of these features in their optimizations.

The biggest performance issue is memory speed and the bus to memory. Those will always be a limiting/cost factor for performance.

Re: ia32 dates back to the 1970's -- B.S. (2)

stripes (3681) | about 2 years ago | (#41352987)

Say again? Are you telling me they had a 32-bit architecture in the 1970s...? I call BS.

No, but the fact that ia32 is binary compatible with the 16-bit x86 code of the 1970s makes it relevant. You still have to handle AL and AH as aliases into AX. Ask Transmeta how much of a pain that was (hint: it's a big part of why their x86 CPU ran Windows like a dog; the other part being that they benchmarked Windows apps too late in the game to bring to market something that handled the register aliases efficiently). If ia32 were a fully distinct mode that ditched anything from the past that Intel decided made things slow, then yes, we would be talking about ia32 as a 1980s architecture.
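
A quick sketch of what that aliasing means, modelled in C with a union: on x86, AL and AH are the low and high bytes of the 16-bit AX register, so a write to either one has to be merged back into the full register. This assumes a little-endian layout and is an illustration of the semantics only, not of how Transmeta or Intel implement it.

    #include <stdint.h>
    #include <stdio.h>

    union ax_reg {
        uint16_t ax;
        struct { uint8_t al, ah; } parts;  /* AL = low byte, AH = high byte */
    };

    int main(void)
    {
        union ax_reg r = { .ax = 0x1234 };
        r.parts.al = 0xFF;               /* like "mov al, 0xff"          */
        printf("AX = 0x%04X\n", r.ax);   /* prints 0x12FF: AH preserved  */
        return 0;
    }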

Backwards Compatibility (0)

Anonymous Coward | about 2 years ago | (#41352753)

The x86 architecture has maintained backwards compatibility at the binary level for decades.

There are billions of applications for the x86 architecture, while the ARM cores have only just begun to dig themselves out of obscurity.

If Intel were to suddenly get rid of, say, x86-16 and x86-32 modes, the CPUs would be much smaller and much more power-efficient, but the loss of all those applications would put Intel back into the dark ages.

There is a reason why everyone still uses an x86/32/64 desktop and not ARM.

RISC vs CISC, really? (4, Informative)

fermion (181285) | about 2 years ago | (#41352795)

Most 70's-era microprocessors had around 50 opcodes and a few registers. It was possible to memorize them all and decompile from hex in your head; I never had the mental acuity to do so, but many of my friends in high school could. By the 1980s there was a lot of big iron that used RISC, though as I recall those machines had more opcodes than, say, a 6502, and RISC does not just mean a reduced number of instructions; it is a simplified instruction set. Right now I think we have a lot of hybrid chips on the market, and the war between CISC and RISC has settled into a place where both are used as needed. In the x86 space, legacy is an issue: MS has not done what Apple does, which is to support a machine for 3-5 years and then develop something that meets current demands. The common person would not even see a RISC processor until Apple switched to the PowerPC, which brought the conflict between CISC and RISC to the public. It is interesting to have this conversation now, because this is exactly what was said back then: RISC is more efficient, so the chip can be clocked about half as fast and still be as fast as the CISC chip.
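
For anyone who never did the hex-in-your-head exercise, here is a toy decoder for a handful of documented 6502 opcodes, showing how small the opcode map was. The opcode subset and the sample bytes are my own illustration, not anything from the post.

    #include <stdint.h>
    #include <stdio.h>

    static const struct { uint8_t op; const char *name; int operand_bytes; } table[] = {
        { 0xA9, "LDA #imm", 1 }, { 0xA2, "LDX #imm", 1 },
        { 0x8D, "STA abs",  2 }, { 0x4C, "JMP abs",  2 },
        { 0xE8, "INX",      0 }, { 0xEA, "NOP",      0 },
        { 0x00, "BRK",      0 },
    };

    int main(void)
    {
        /* LDA #$01; STA $0200; NOP; BRK */
        uint8_t code[] = { 0xA9, 0x01, 0x8D, 0x00, 0x02, 0xEA, 0x00 };
        size_t pc = 0;

        while (pc < sizeof code) {
            size_t i;
            for (i = 0; i < sizeof table / sizeof table[0]; i++) {
                if (table[i].op == code[pc]) {
                    printf("%04zX  %s\n", pc, table[i].name);
                    pc += 1 + table[i].operand_bytes;
                    break;
                }
            }
            if (i == sizeof table / sizeof table[0]) {  /* unknown opcode */
                printf("%04zX  .byte $%02X\n", pc, code[pc]);
                pc++;
            }
        }
        return 0;
    }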

So this OS-specific chip is nothing new, and *nix exclusion is not new either. Many microcomputers could not run *nix because they did not have a PMMU; the AT&T UNIX PC ran a 68K processor with a custom PMMU. Over the past 10 years there have been Windows-only printers and cameras which offloaded work to the computer to make the peripheral cheaper.

Which is to say that there are clearly benefits to both RISC and CISC. MS built an empire on CISC and clearly intends to continue doing so, moving to RISC only on a limited basis for high-end, highly efficient devices. As for a tablet for the rest of us: if they can ship Windows 8 on a $400 device that runs just like a laptop, they will do so. If efficiency were the only issue, we would all be running Apple-type hardware, which, I guess, on the tablet we are. But while 50 million tablets are sold, MS wants the other 100 million laptop users who don't have a tablet yet, because the tablets out there don't run Windows.

Come on Bruce (1)

Anonymous Coward | about 2 years ago | (#41352801)

Can you please refrain from insulting our intelligence with your baby-talk summary of RISC vs. CISC? Furthermore, it has jack squat to do with why Clover Trail is Win8-only.

In other words... (1)

StripedCow (776465) | about 2 years ago | (#41352827)

In other words, Intel is saying they failed to hide their power-consumption details behind the API (the instruction set).

Intel has to go mobile (1)

bigt405 (2705553) | about 2 years ago | (#41352829)

Come on, look at the shift your average user has taken. Grandmothers aren't using eMachines any more; they prefer iPads. My laptop is heavy, so I stick to my Atrix as often as possible when I'm out and about.

It'd be stupid for Intel to ignore this market, especially with Windows RT not supporting the "full desktop" experience we know and love.

Only benefits.. (2)

Bert64 (520050) | about 2 years ago | (#41352853)

The only advantages x86 has over ARM are performance and the ability to run closed-source x86-only binaries...

Performance is generally less important than power consumption in an embedded device, and this CPU is clearly designed for low power use, so it may not be much faster than comparable ARM designs...

And when it comes to x86-only binaries, there is very little Linux software that is x86-only, and even less for Android... Conversely, there are a lot of closed-source Android applications which are ARM-only... So at best you have a Linux device which offers no advantages over ARM; at worst you have an Android device which cannot run large numbers of Android apps while costing more, being slower, and having inferior battery life.

Windows, on the other hand, does have huge numbers of apps that are tied to x86, which for some users may outweigh any other downsides. Then again, most Windows apps are not designed for a touchscreen interface and might not be very usable on tablets, and any new apps designed for such devices might well be ported to ARM too.

Re:Only benefits.. (1)

Dwedit (232252) | about 2 years ago | (#41352921)

You want a good x86-only Linux program? Wine. There's a good one for you.

Reality check (3, Interesting)

shutdown -p now (807394) | about 2 years ago | (#41352869)

If nobody wants it and it's a dead end for technical and business reasons, then how come there is a slew of x86 Win8 devices announced by different manufacturers, including the likes of Samsung, who have no problem earning boatloads of money on Android today?

Heck, it's even funnier than that - what about Android devices already running Medfield?

hah (1)

smash (1351) | about 2 years ago | (#41352979)

No wireless. Less space than a nomad. Lame.

I predict Clover Trail will be a roaring success.

Don't bet on it being so much more efficient (2)

samuisan (142967) | about 2 years ago | (#41352981)

> It had better, since Atom currently provides about 1/4 of the power efficiency of the ARM processors that run iOS and Android devices.

Don't bet on it. The ARM design in itself is more efficient, for sure, but Intel is frankly well ahead of everyone else in actual manufacturing.

If they decide to build these with their FinFETs on their latest process node, then the gap between Intel Atoms and ARMs made at Samsung, TSMC, or anyone else won't be so noticeable; the Atoms might even pull ahead.

It will mean Intel using its latest, most expensive fabs for Atoms, though, rather than for server or high-end desktop chips.

backroom agreement? (2)

zman58 (1753390) | about 2 years ago | (#41353067)

"The details of Clover Trail's power management won't be disclosed to Linux developers." ...Perhaps this is because Microsoft is helping to fund development of the Intel solution behind the scenes? Perhaps they have worked out an agreement of some sort to prevent Linux from finding its way onto the chip.

I would like to know why any information would be withheld from Linux developers; the only reason I can imagine is to help Microsoft build a head start in the use of the chip. I can think of no good reason for Intel not to reveal to Linux developers how the chip works. Providing the information openly would only increase interest, and an Android or other Linux-based solution could bring Intel additional revenue. Looks like the same old gaming of the system here: good old buddies.
