
Intel Medfield SoC Specs Leak

Soulskill posted more than 2 years ago | from the just-over-the-horizon dept.

Cellphones 164

MrSeb writes "Specifications and benchmarks of Intel's 32nm Medfield platform — Chipzilla's latest iteration of Atom and first real system-on-a-chip oriented for smartphones and tablets — have leaked. The tablet reference platform is reported to be a 1.6GHz x86 CPU coupled with 1GB of DDR2 RAM, Wi-Fi, Bluetooth, and FM radios, and an as-yet-unknown GPU. The smartphone version will probably be clocked a bit slower, but otherwise the same. Benchmark-wise, Medfield seems to beat the ARM competition from Samsung, Qualcomm, and Nvidia — and, perhaps most importantly, it's also in line with ARM power consumption, with an idle TDP of around 2 watts and load around 3W."


frist piss (-1)

Anonymous Coward | more than 2 years ago | (#38510960)

frist

Dubious (1)

A12m0v (1315511) | more than 2 years ago | (#38511842)

Intel will need to bend the laws of physics before their power-hungry chips can match the energy efficiency of ARM SoCs, but if anyone can do it, it'll be Intel. Intel took x86 to workstations and supercomputers killing many RISC processors in the process. It'll be fun to see them pull it off again against ARM.

Re:Dubious (4, Insightful)

ArcherB (796902) | more than 2 years ago | (#38512132)

Intel took x86 to workstations and supercomputers killing many RISC processors in the process. It'll be fun to see them pull it off again against ARM.

No, it wouldn't. RISC is a superior instruction set. x86 only beat RISC because it was really the only game in town if you want to run Windows, which every non-mac user did. At the time, the desktop was king and made Intel lots and lots of money, which they used to beef up their server offerings. Now we are stuck with x86 with RISC being used only in "closed" architectures like smart phones, consoles and big-iron servers.

I like competition. I'd rather see ARM make gobs of money off designing chips that everyone can improve on than Intel make gobs and more gobs of money selling desktop, server and mobile chips that only they may design, produce and sell.

The final processor line that Intel makes will be the one they are producing when they become the only game in town.

Re:Dubious (2, Informative)

the linux geek (799780) | more than 2 years ago | (#38512322)

Every Windows release from the NT line since NT 3.1 has run on at least one RISC architecture.

Re:Dubious (3, Insightful)

Henriok (6762) | more than 2 years ago | (#38512838)

What RISC platform did XP, Vista and Windows 7 run on? XP had support for Itanium, but that's not a RISC platform. Vista and Win7 only support 32- and 64-bit x86. So.. It seems you are wrong in your statement.

Re:Dubious (1)

GennarinoParsifalle (714027) | more than 2 years ago | (#38512858)

Windows NT used to support the MIPS R3000 and R4000 and Alpha processors from Digital (both RISC). The Wikipedia page also claims PowerPC and other RISC support, but I never personally saw any of those ports.

Re:Dubious (0)

Anonymous Coward | more than 2 years ago | (#38512972)

Microsoft's Xbox 360 runs a Win NT kernel, and the Xbox 360 uses a triple-core PowerPC processor, so Microsoft is still developing a PowerPC build of their software.

Re:Dubious (0)

Anonymous Coward | more than 2 years ago | (#38513364)

Yes, but the apps were mostly x86, run with FX!32, running slower than on cheaper x86.

Re:Dubious (1)

Anonymous Coward | more than 2 years ago | (#38512740)

> Now we are stuck with x86 with RISC being used only in "closed" architectures
> like smart phones, consoles and big-iron servers.

Every x86 chip since the Pentium Pro has had a RISC core.

The x86 CISC is just a wrapper around that.

Re:Dubious (4, Interesting)

SpinyNorman (33776) | more than 2 years ago | (#38512804)

RISC isn't an instruction set - it's a design strategy.

RISC = reduced instruction set computing
CISC = complex instruction set computing

The idea of RISC (have a small, highly regular/orthogonal instruction set) goes back to the early days of computing, when chip design and compiler design weren't what they are today. The idea was that a small, simple instruction set would correspond to a simpler chip design that could be clocked faster than a CISC design, while at the same time being easier to generate optimized code for.

Nowadays advances in chip design and compiler code generation/optimization have essentially undone these benefits of RISC. The remaining benefits are that RISC chips have small die sizes, hence low power requirements, high production yields and low cost, and these are the real reasons ARM is so successful, not the fact that the instruction set is "better".

Re:Dubious (1)

mozumder (178398) | more than 2 years ago | (#38512998)

No, it wouldn't. RISC is a superior instruction set

Everyone says this, but really, CISC is more efficient. CISC code is more compact than RISC code, which helps cache hit rates. Additionally, the most used opcodes tend to be the shortest.

The only thing RISC does is make some parts of the instruction pipeline (mostly decode) easier.

Sometimes you don't need a bank of 32 registers when 3 or so will do.

CISC was the best choice for small-transistor-count CPUs back in the '70s, and it's the best choice now, where a small transistor count = less power.

ARM is about to get owned by Intel, especially with all their patented secret CPU sauce.

Re:Dubious (3, Interesting)

abainbridge (2177664) | more than 2 years ago | (#38513152)

> RISC is a superior instruction set. x86 only beat RISC because it was really the only game in town if you want to run Windows

Modern ARM processors aren't pure RISC processors. Most ARM code is written in Thumb-2, which is a variable-length instruction encoding just like x86. Back in the 90s when transistor budgets were tiny, RISC was a win. When you only have a hundred thousand gates to play with, you're best off spending them on a simple pipelined execution unit. The downsides of RISC have always been the increased size of the program code and reduced freedom to access data efficiently (ie with unaligned accesses, byte addressing and powerful address offset instructions). With modern transistor budgets it is worth spending some gates to make the processor understand a compact and powerful instruction set. That way you save more gates in the rest of the computer than you spend doing this (ie in the caches, databuses and RAMs).

As a result of all this, in some ways, ARM chips are evolving to look more and more like an Intel x86 design. I'm still a big fan of ARM though. Intel will have a long way to go to compete on price, even if they can compete on power.

Re:Dubious (4, Interesting)

Runaway1956 (1322357) | more than 2 years ago | (#38512414)

Bloodthirsty bastard, aren't you? Killing off the competition is fun?

I haven't liked Intel very much since I read the first story of unethical business practices. Intel doesn't rank as highly on my shitlist as Microsoft, but they are on it.

One benchmark (2, Insightful)

teh31337one (1590023) | more than 2 years ago | (#38510988)

It beats the current crop of dual core ARM processors (Exynos, snapdragon s3 and Tegra 2) in one benchmark that "leaked".

Nothing fishy about that at all.

Re:One benchmark (4, Informative)

icebike (68054) | more than 2 years ago | (#38511084)

It beats the current crop of dual core ARM processors (Exynos, snapdragon s3 and Tegra 2) in one benchmark that "leaked".

Nothing fishy about that at all.

Quote Vrzone:

Intel Medfield 1.6GHz currently scores around 10,500 in Caffeinemark 3. For comparison, NVIDIA Tegra 2 scores around 7500, while Qualcomm Snapdragon MSM8260 scores 8000. Samsung Exynos is the current king of the crop, scoring 8500. True - we're waiting for the first Tegra 3 results to come through.

But the same paragraph says

Benchmark data is useless in the absence of real-world, hands-on testing,

If the performance figures are realistic this is one fast processor, and it appears to be a single-core chip (or at least I saw nothing to the contrary). That's impressive.

Single cores can get busy handling games or complex screen movements, leading to a laggy UI. If they put a good strong GPU on this thing you might never see any lag.

Re:One benchmark (1, Redundant)

teh31337one (1590023) | more than 2 years ago | (#38511178)

Sure, it's great, and compares well to this gen of processors, which are all on a 45/40nm process (the Exynos 4210 is 45nm, not 32nm as vr-zone incorrectly states).
But what about the next gen chips that will be on a 32/28nm process?

And how will it compare to the quad-core A9 processors (Tegra 3), higher-clocked dual-core A9 processors (Exynos 4212) or A15-based designs (Exynos 5250 and Krait/Snapdragon S4)?

Re:One benchmark (2)

icebike (68054) | more than 2 years ago | (#38511254)

At the stated benchmark scores, Medfield IS THE NEXT GEN chip.
And it's single core. When they do a dual-core version, the others will still be trying hard to catch up.

Re:One benchmark (1)

teh31337one (1590023) | more than 2 years ago | (#38511284)

Great. How about comparing it to a NEXT GEN chip then? Not to mention power consumption - which is its Achilles heel - it still can't compare to THE LAST GEN ARM CHIP.

Re:One benchmark (1)

icebike (68054) | more than 2 years ago | (#38511386)

Actually go read the story. It specifically states that the power consumption is pretty close to the ARM chips.

Re:One benchmark (4, Informative)

teh31337one (1590023) | more than 2 years ago | (#38511486)

Yeah... no.

vr-zone [vr-zone.com]

As it stands right now, the prototype version is consuming 2.6W in idle with the target being 2W, while the worst case scenarios are video playback: watching the video at 720p in Adobe Flash format will consume 3.6W, while the target for shipping parts should be 1W less (2.6W)

extremeTech [extremetech.com]

The final chips, which ship early next year, aim to cut this down to 2W and 2.6W respectively. This is in-line with the latest ARM chips, though again, we’ll need to get our hands on some production silicon to see how Medfield really performs.

Re:One benchmark (5, Insightful)

Anonymous Coward | more than 2 years ago | (#38511620)

Yeah... no.

vr-zone [vr-zone.com]

As it stands right now, the prototype version is consuming 2.6W in idle with the target being 2W, while the worst case scenarios are video playback: watching the video at 720p in Adobe Flash format will consume 3.6W, while the target for shipping parts should be 1W less (2.6W)

extremeTech [extremetech.com]

The final chips, which ship early next year, aim to cut this down to 2W and 2.6W respectively. This is in-line with the latest ARM chips, though again, we’ll need to get our hands on some production silicon to see how Medfield really performs.

And which ARM SoCs idle at 2W? That's at least an order of magnitude greater than any ARM SoC - those typically idle at a few tens or hundreds of milliamps. ARM's big.LITTLE architectures will bring that down even further.
So, Medfield may be competitive on speed and TDP at full load, but if you are a mobile device maker, would you care? You would probably be more interested in eking out more uptime from your tiny battery.

Re:One benchmark (1)

gl4ss (559668) | more than 2 years ago | (#38512538)

it would be usable for tablets and media consumption devices, and devices which can hibernate quickly.

of course there's this one nagging thing about using them for phones, which is.. well, fuck, they'd still need an arm chip or two there for the radios.

Re:One benchmark (4, Interesting)

LordLimecat (1103839) | more than 2 years ago | (#38511900)

According to what I could dig up (memory, and corroboration here [blogspot.com]), Snapdragons use about 500mW at idle. That's one quarter to one sixth the power consumption of Intel's offering.

Doing some research, it looks like Tegra 3s use about 0.5W per core as well. Again, Intel is pretty far back if they're throwing out a single core and hitting 2-3 watts.

Re:One benchmark (1)

Anonymous Coward | more than 2 years ago | (#38513142)

...

Doing some research, it looks like Tegra 3s use about 0.5W per core as well. Again, Intel is pretty far back if they're throwing out a single core and hitting 2-3 watts.

Not to forget that Tegra 3 should use its special power-efficient 500MHz core for idle tasks...

Re:One benchmark (1)

Anne Thwacks (531696) | more than 2 years ago | (#38513172)

I think there is a confusion between m and u here, or you are not talking about idle.

Re:One benchmark (5, Insightful)

Anonymous Coward | more than 2 years ago | (#38511490)

I did read the story - but did you? Its idle TDP stands at 2.6W. A 1700mAh battery (typical in a cell phone) @ 3.6V = 6.12 watt-hours. So, you'll get around 2.5 hrs of uptime under idle conditions, assuming the battery is new. Good luck trying to charge that monster every 2 hrs!
Who cares about performance when your phone will be dead before making a single call? Not much better in tablets either!
So, what is this chip competing against? Other laptop chips from Intel?
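For reference, the arithmetic above works out as follows (a minimal sketch in C; the 1700mAh, 3.6V and 2.6W figures are the ones quoted in this comment, not vendor numbers):

    #include <stdio.h>

    int main(void) {
        /* Figures quoted in the comment above, not from any datasheet. */
        double capacity_mah = 1700.0;  /* typical phone battery */
        double voltage_v    = 3.6;     /* nominal cell voltage */
        double idle_w       = 2.6;     /* Medfield prototype idle, per VR-Zone */

        double energy_wh = capacity_mah / 1000.0 * voltage_v;  /* 6.12 Wh */
        double hours     = energy_wh / idle_w;                 /* ~2.4 h */

        printf("Battery energy: %.2f Wh\n", energy_wh);
        printf("Idle runtime:   %.1f hours\n", hours);
        return 0;
    }

Run as-is it prints roughly 2.4 hours, which matches the "around 2.5 hrs" estimate above once you allow for rounding.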

Re:One benchmark (1)

SuricouRaven (1897204) | more than 2 years ago | (#38512944)

That's just powering the processor. You've got all the support chips, radio and such too. The single biggest power draw in most phones and tablets is the screen, when it's on - a good deal of power management goes into just trying to keep it turned off as much as possible.

whoosh (4, Insightful)

decora (1710862) | more than 2 years ago | (#38511186)

teh31337one's point, if I may, was that this 'leak' was actually a 'plant', a PR move by Intel to get people posting ridiculous speculative nonsense - like, exactly the stuff you posted in your comment.

"if this is realistic, intel has an awesome CPU" etc etc etc.

Does anyone care if it's realistic? Intel sure doesn't; it just wants people to speculate that it might be realistic, and then talk about Intel, and how awesome Intel is.

But of course, it might be a load of crap... when the actual numbers come out, who knows what they will say? And when real programs hit the thing, who knows what it will do?

That's why Intel is 'leaking' it. On purpose. So they can have 'plausible deniability'. They can churn the rumor mill, get their product mentioned in the 24-hour ADHD cycle of tech news, get people posting on Slashdot, etc., but Intel itself never has to sully its good name by engaging in outright pushing of vapor-ware.

If only the guys at Duke Nukem had been smart enough to 'leak' stuff 'anonymously' to the press, instead of giving out press releases...

Of course, another way to look at it is this: it's yet another example of the corporate philosophy that is drowning our civilization in garbage and awful values. Never say anything directly, never take responsibility for your words or actions, never be straight with people, and hide everything you are doing in layers and layers of techno-jargon, babble, and nonsense.

Re:whoosh (3, Insightful)

Jeremi (14640) | more than 2 years ago | (#38511366)

Does anyone care if it's realistic? Intel sure doesn't

Intel will care if the leaks create unrealistic expectations that their product can't meet. The result could be consumer rejection of an otherwise respectable product, because the public had been (mis)led to expect more than the product could actually deliver. (see: Itanium as replacement for x86)

So the "secret Intel propaganda strategy" only works if Intel actually has a reasonable chance of living up to their own unofficial hype. And based on their recent track record, they probably do.

Re:whoosh (4, Informative)

Svartalf (2997) | more than 2 years ago | (#38511944)

Recent track record... Yeah, sure...

http://www.pcper.com/reviews/Graphics-Cards/Larrabee-canceled-Intel-concedes-discrete-graphics-NVIDIA-AMDfor-now [pcper.com]

There are a few others like this one. This includes the GMA stuff, where they claimed the Xy000 series of GMAs were capable of playing games, etc. They're better than their last passes at IGPs, but compared to AMD's lineup in that same space they're sub-par. Chipzilla rolls out stuff like this all the time. They've been doing it for years now.

Larrabee.
Sandy Bridge (at its beginnings...).
GMA X-series.
Pentium 4's NetBurst.
iAPX 432.

There's a past track record that implies your faith in this is a bit misplaced at this time.

Re:whoosh (0)

Anonymous Coward | more than 2 years ago | (#38512124)

Does anyone care if it's realistic? Intel sure doesn't

Intel will care if the leaks create unrealistic expectations that their product can't meet.

Bullshit. If this were true, why would they be hyping Medfield as a serious competitor to ARM (in real press releases, not just possibly-deliberate leaks) when they "hope" to get the idle power down to two fucking watts?! How much more unrealistic can you ask for?

To Intel, perception is everything, reality is nothing -- as proven by their continuous predominance on the desktop despite AMD's frequent performance-per-dollar and performance-per-watt lead, and occasional absolute performance lead. We can only hope this bean-counter's strategy fails at breaking into new markets as spectacularly as it has succeeded in defending entrenched ones.

Re:whoosh (3, Informative)

0123456 (636235) | more than 2 years ago | (#38512180)

To Intel, perception is everything, reality is nothing -- as proven by their continuous predominance on the desktop despite AMD's frequent performance-per-dollar and performance-per-watt lead, and occasional absolute performance lead.

Ah, yes. No-one ever buys Intel chips because they're the best option, poor old AMD keep building the best x86 chips on the planet but stoopid consumers keep buying Intel anyway.

Back in the real world, at the time when AMD were the best choice you could hardly find anyone at all knowledgeable who was recommending Intel Pentium-4 space-heaters, and now that Intel is the best choice for desktop systems the only people recommending AMD CPUs are the dedicated fanboys. And in the low-power space, no-one uses Intel x86 CPUs because that would be absurd; even a 2W CPU can't compete against ARM.

Re:whoosh (1)

Xeranar (2029624) | more than 2 years ago | (#38512304)

Intel kept AMD down by signing huge discounted contracts with OEM manufacturers. That's the reality of how Intel won. The consumer had basically a plethora of names and configurations when the PC came around as a major player, but 90% of on-the-shelf boxes were Intel inside. AMD was never able to sign a huge contract and thus was kept out of the office and off the shelf. It wasn't like Intel had vastly better chips until the i-series, and even now under the $220 range they aren't really that much better, especially considering the average PC is using an i3 chip. It's a game of Intel leveraging vast resources and thin margins to marginalize other players. ARM manufacturers are too large to undercut substantially, though, and will not take Intel lying down. This Medfield processor feels more suited to a semi-stationary position like a light notebook or tablet rather than a mobile phone. But who knows? Maybe Intel will get its TDP down and do a dual-core running at half that wattage...

Re:whoosh (0)

Anonymous Coward | more than 2 years ago | (#38512974)

Yes because I am sure Samsung, Apple, Nokia, and other OEMs are freaking out right now over this. "OMG look at that power consumption, stop buying the ARMS at once!!!!1"

I have yet to run across someone who gives a shit which manufacturer's CPU is powering their phone.

Re:One benchmark (1)

TheCouchPotatoFamine (628797) | more than 2 years ago | (#38511218)

What does the number of cores have to do with a "good, strong" GPU, or lag in the UI? Lag is usually interrupt-based (think network or disk access), or the result of software, when operations are poorly ordered and/or uncached lookups happen often... therefore your benchmark has a rabbit with a pancake on its head -

Re:One benchmark (1)

icebike (68054) | more than 2 years ago | (#38511296)

If you have a good GPU, you can offload a lot of processing that might otherwise have to be done on the CPU. Shift enough work, and you can do with a single core what others might do with a dual. That's all I was trying to point out.

Re:One benchmark (3, Informative)

Tr3vin (1220548) | more than 2 years ago | (#38512012)

UI lag is almost exclusively limited to fill-rate on mobile devices. This is a problem on Android, since it is hard for them to optimize it for all of the various chipsets. If the GPU cannot quickly fill pixels, more of the preparation of a frame has to be offloaded to the CPU. For modern GUIs, each pixel can be touched several times, so without a good fill rate, more heavy lifting is required from the CPU. Multiple cores can help, since more processing power can be dedicated to quickly updating the UI.

Re:One benchmark (1)

Nom du Keyboard (633989) | more than 2 years ago | (#38511150)

It beats the current crop of dual core ARM processors (Exynos, snapdragon s3 and Tegra 2) in one benchmark that "leaked".

Nothing fishy about that at all.

So it beats (maybe) the ARMs of the moment. But what about the ARMs when Medfield actually ships? And the quad-core ARMs now?

Re:One benchmark (0)

teh31337one (1590023) | more than 2 years ago | (#38511204)

Exactly. And the current crop mostly use a 45/40nm process. How will it compare to the 32/28nm, quad-core and A15-based ARM processors? Will Intel still be trying to get the power consumption down to acceptable levels then?

Re:One benchmark (2)

Dr Max (1696200) | more than 2 years ago | (#38511400)

I have no doubt in my mind that Intel can beat chips that have been on the market for close to 12 months, but the next crop is what they are competing against. For example, Texas Instruments' OMAP 5430 will be coming out a bit after this one, sporting 2x 2GHz A15 cores, USB 3, SATA 2, support for 4GB of RAM per core, capable of running 4 screens at 1080p, and it's printed on 28nm - that has to give old Intel a run for their money. That said, 2013, when Intel moves it down to the 22nm tri-gate process and makes it multi-core, is going to be interesting.

Still need to wait for more figures... (1)

inflex (123318) | more than 2 years ago | (#38510992)

Atoms have always had reasonable core power consumption... but it's the external chipsets that ruin the system power consumption figure. I'll be curious to see what the total system consumption is going to be with these new ones; maybe it's time to look toward a nice replacement for my Asus EeeBox B202s for the desktop.

Re:Still need to wait for more figures... (1)

the linux geek (799780) | more than 2 years ago | (#38511034)

It's a SoC. No external chipset necessary.

Re:Still need to wait for more figures... (2)

inflex (123318) | more than 2 years ago | (#38511174)

For sure, yes, it's a SoC, but I'm still going to wait for a complete "on the shelf" system to make an appearance before holding my hopes too high. Leaked releases are about as useful as "New solar cell technology yields 50% more efficiency" announcements.

What is interesting is that they only mention the elevated power consumption in relation to video playback (720p) which is something that'd likely be handed off to a dedicated section of silicon, not something done in the general purpose CPU core. Hopefully we can get some more comprehensive data soon so we can all stop speculating.

Re:Still need to wait for more figures... (1)

timeOday (582209) | more than 2 years ago | (#38511102)

The point of Medfield is to integrate the external chipsets. They aren't there any more, so I don't see where system power consumption would be at any disadvantage to an ARM chip.

It's hard to imagine how Intel could not win this, now or soon. They have the best chip designers and process.

Re:Still need to wait for more figures... (2)

darronb (217897) | more than 2 years ago | (#38511564)

Well, the limiting factor is quite certainly backwards compatibility.

The architecture itself very possibly cannot compete with ARM on low power... no matter what the "best chip designers and process" can bring to the table.

I think it's getting to be time to finally retire x86. It'll be hell to bring a new architecture to market... but what's the alternative? Microsoft is dying. Apple is starting to make their own chips.

They probably do have the best people and starting fresh they could very likely do amazing things.

Re:Still need to wait for more figures... (1)

Belial6 (794905) | more than 2 years ago | (#38512090)

The solution is actually pretty straight forward. Multi-core is the present. To make a complete clean sweep, the solution is for Intel to make a processor that has say, 4 x86 cores, and 2 xWayBetterThan86 cores. They then design the system to allow code that is written for the xWBT86 cores to run in parallel with the x86 cores. Get MS to port Windows to use the xWBT86 cores for the OS, and advocate to the developers. Basically turn the old x86 into a legacy compatibility co-processor. They would want to contribute the code necessary to make this happen on Linux while they were at it.

Since in theory, the xWBT86 cores would be Way Better Than the x86 cores, software developers would want to write code that runs on them, and the x86 legacy cores would become less and less necessary. As demand dictates, Intel could then start reducing the number of x86 cores, and increasing the number of xWBT86 cores. When demand for x86 gets low enough, they could switch to emulation.

If they really wanted to make it slick, they could design the chips so that the cores could be changed from x86 to xWBT86 by swapping out the microcode. The x86 command set is basically emulated now anyway.

Re:Still need to wait for more figures... (1)

Anonymous Coward | more than 2 years ago | (#38512280)

Didn't Transmeta design a xWBT86 architecture?

Re:Still need to wait for more figures... (0)

Anonymous Coward | more than 2 years ago | (#38512456)

So just like Itanium then? Where the x86 mode was so slow you could do it faster in software? Or the i860 before it, which could take up to somewhere around 2000 cycles to refresh its multiple pipelines? (Not to mention both Itanium and i860 were VLIW, so you couldn't practically do any optimization to take advantage of the super-scalar pipeline.) Or the i960 before that, where they got sued halfway through and decided not to market the whole processor in exchange for a settlement which gave them the StrongARM? Or the iAPX 432 before that, which was so big it had to be split across two chips because they decided to make an object-oriented processor?

It's not that Intel is having problems moving away from x86, it's that Intel can't make anything better to move towards. x86 was just a nice little fluke which got them enough design wins to make it their breadwinner. Whenever they do try to make a better ISA, they wind up jumping on the latest fad without having any idea of how to properly implement it or even if it is a good idea in the first place.

Re:Still need to wait for more figures... (0)

Anonymous Coward | more than 2 years ago | (#38512578)

I'm not quite sure where you got the "Microsoft is dying" part from. They are making record profits. They are doing the opposite of dying.

Re:Still need to wait for more figures... (1)

SuricouRaven (1897204) | more than 2 years ago | (#38513012)

I think it's more a matter of not growing. They have a slight problem. Their main cash cows are desktop operating systems and office software, and when you have nearly 100% of the market, the only way to grow is to grow the market. That takes a long time. Microsoft is thriving, but they can't give the massive year-on-year growth of their earlier success any more. Their efforts to enter new markets have been less than successful... the xbox is a respectable console, but the Zune was a joke, and Windows Mobile is barely even seen.

Re:Still need to wait for more figures... (1, Insightful)

mollymoo (202721) | more than 2 years ago | (#38511704)

x86 is a huge, complex instruction set. All else being equal, implementing it costs more silicon and more power than ARM architectures. Intel's great engineers and unmatched process can make up for this somewhat, but it would be a good effort for them just to achieve parity with ARM. To do so they're likely going to need to stay one process step ahead of the competition, which has cost implications.

Re:Still need to wait for more figures... (1)

timeOday (582209) | more than 2 years ago | (#38512228)

The first x86 processor, the 8086, only had 29,000 transistors total, whereas this new chip uses over 34,000 times that many (a billion) just for DRAM, so how much complexity can x86 really be adding? Too many other architectures have come and gone - 68K, PowerPC, SPARC, ARM on the desktop... whatever advantage they gained from more elegant architecture wasn't enough to overcome x86's 30 years of refinement and Intel's lead in process and design. With Itanium, even Intel itself massively blundered by over-estimating what they could get from a redesign.

Re:Still need to wait for more figures... (2)

makomk (752139) | more than 2 years ago | (#38513182)

The first x86 processor, the 8086, only had 29,000 transistors total, whereas this new chip uses over 34,000 times that many (a billion) just for DRAM, so how much complexity can x86 really be adding?

The 8086 was a 16-bit processor that could only address 1 MB of RAM (split up into 64k segments) with no support for virtual memory, didn't have any floating point hardware let alone stuff like SSE, and took an awfully large number of clock cycles to execute each instruction by modern standards. If you want something capable of actually running modern applications, you're looking at a lot more complexity.

Re:Still need to wait for more figures... (1)

jones_supa (887896) | more than 2 years ago | (#38512546)

Atoms have always had reasonable core power consumption... but it's the external chipsets that ruin the system power consumption figure. I'll be curious to see what the total system consumption is going to be with these new ones; maybe it's time to look toward a nice replacement for my Asus EeeBox B202s for the desktop.

I thought that problem was only with the early Atoms paired with the 945GSE, and that the later Pine Trail included the more power-appropriate NM10 chipset.

2 watts idle? (5, Funny)

viperidaenz (2515578) | more than 2 years ago | (#38510996)

Awesome, with smartphones these days containing 6 watt-hour batteries you'll get 3 hours standby time! That's nearly as much as an iPhone 4S.

It's not just a phone, it's a handwarmer (1)

Anonymous Coward | more than 2 years ago | (#38511108)

It's a feature!

2W idle + 3W active?!!!

Not in line with the power consumption of any ARM board I've played with -- more like 4-6 times the active consumption and an order of magnitude+ higher idle.

But, if they can pull off decent marketing of a portable hand warmer that doubles as a phone for a couple hours a day between charges, maybe they have a winner.

Re:2 watts idle? (1)

Ark42 (522144) | more than 2 years ago | (#38511608)

I think batteries are rated in mAh, not mWh. For example, my Netbook has a 6600mAh battery that outputs 11.1v. I think that's basically 73.26Wh, or 36 hours of idle time at 2W. I don't have any smartphone to compare, nor do I know the actual idle usage of the Atom CPU in my netbook, or the other components in it.

Re:2 watts idle? (0)

Anonymous Coward | more than 2 years ago | (#38511722)

my evo4g battery shows 5.55 watt hours on its label.

Re:2 watts idle? (0)

Anonymous Coward | more than 2 years ago | (#38511920)

I think batteries are rated in mAh, not mWh.

Either/both; given a nominal voltage, they're practically equivalent.

For example, my Netbook has a 6600mAh battery that outputs 11.1v. I think that's basically 73.26Wh, or 36 hours of idle time at 2W.

So I guess you knew that after all...
(And you do realize your 9-cell netbook battery is an order of magnitude bigger than a phone battery, yes?)

I don't have any smartphone to compare, nor do I know the actual idle usage of the Atom CPU in my netbook, or the other components in it.

So why even post? FYI, they're all 3.7V, and mostly 1.2-2.4Ah, or 4-8 Wh -- so GGP was right on, and the only thing you've added to the discussion is a vigorous display of your own ignorance.

Re:2 watts idle? (1)

Ark42 (522144) | more than 2 years ago | (#38512366)

I guess I figured smartphones might have a bigger battery than my RAZR's 3.7v 900mAh.

1200-2400mAh for something that does all kinds of applications and web access seems pretty small if 900mAh is what they give phones to people who don't even care about texting.

2W idle power consumption! (2, Interesting)

Anonymous Coward | more than 2 years ago | (#38511016)

That just doesn't cut it. Based on that, I'd assume the mobile version of the chip to consume at least 1W at idle loads. That _still_ doesn't cut it.

Re:2W idle power consumption! (1)

Baloroth (2370816) | more than 2 years ago | (#38511272)

It gets worse: the summary is actually misleading. They benchmarked it around 2.6W idle and 3.6W active ("around", right), and it "aims" to get down to 2W idle and 2.6W active by next year.

Re:2W idle power consumption! (2)

Baloroth (2370816) | more than 2 years ago | (#38511294)

Oops, my first "around" should have been "at". As in, they benchmarked it exactly at 2.6W and 3.6W idle and active, respectively (and rounded it down - way down - in the summary)

Re:2W idle power consumption! (4, Insightful)

mirix (1649853) | more than 2 years ago | (#38512358)

Bingo. My ageing Nokia, while lacking in horsepower, has excellent battery life... It has a 600MHz ARM and a 3.2Wh battery. It manages to idle for a week at least; I'm sure it's hit 10 days before, but let's say 7, to be safe.

3.2Wh / (7 × 24 h) ≈ 20mW idle. Two fucking orders of magnitude better than their *target* (not to mention this includes the entire phone, not just the core, in real life).

I presume the more powerful Android rigs still keep it within 100mW for the whole phone, idling - that would give you roughly two days of idle on a decent-sized phone battery (5Wh). That's still more than an order of magnitude difference.
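The same back-of-the-envelope arithmetic, going the other way (a sketch; the 3.2Wh capacity and 7-day standby figure are the poster's own estimates, not measurements):

    #include <stdio.h>

    int main(void) {
        double battery_wh = 3.2;  /* poster's Nokia battery */
        double idle_days  = 7.0;  /* conservative standby estimate */

        /* Average draw = energy / time: 3.2 Wh over 168 h is ~19 mW. */
        double idle_mw = battery_wh / (idle_days * 24.0) * 1000.0;

        printf("Average whole-phone idle draw: ~%.0f mW\n", idle_mw);
        return 0;
    }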

Looks cool (1)

symbolset (646467) | more than 2 years ago | (#38511056)

I think the tablet will be interesting.

beat ARM on what, 45nm? (3, Interesting)

Locutus (9039) | more than 2 years ago | (#38511112)

Come on, when comparing embedded SoCs is it really fair to say a new die-shrunk version of one architecture bests another that uses a much larger process?

So here we have Intel putting their low cost product on their high cost process and claiming a victory? I don't buy it but since Intel is going to be selling these things at deep discounts, I might buy a product or two. I don't think in the long run they can continue this game but it's fun to see them attempting it.

LoB

Re:beat ARM on what, 45nm? (0)

Anonymous Coward | more than 2 years ago | (#38511360)

Intel is ahead of the pack in manufacturing technology, so yes it is fair as far as the market is concerned.

It's not Intel's high cost process (4, Informative)

Sycraft-fu (314770) | more than 2 years ago | (#38511382)

These days 32nm is their main process. They still use 45nm, but not for a ton of stuff; almost all their chips have moved over. Heck, they have 22nm online now and chips for it will be coming out rather soon (full retail availability in April).

One of Intel's advantages is that they invest massive R&D in fabrication and thus are usually a node ahead of everyone else. They don't outsource fabbing the chips, and they pour billions into keeping on top of new fabrication tech.

So while 32nm is new to many places (or in some cases 28nm - places like TSMC skipped the 32nm node and instead did the 28nm half node), Intel has been doing 32nm for almost 2 years now (first commercial chips were out in January 2010).

Re:beat ARM on what, 45nm? (2)

Kjella (173770) | more than 2 years ago | (#38511612)

So here we have Intel putting their low cost product on their high cost process and claiming a victory?

Developing the process is ungodly expensive, pushing out chips is not. Why wouldn't Intel use their state of the art process? It's not like it would be cheaper to produce on 40/45nm, far from it.

I don't think in the long run they can continue this game but it's fun to see them attempting it.

Well, it's the game Intel's been running since the 1980s and has kept AMD a distant second even when their chip designers have been smoking crack. Smaller process = more dies/wafer and so higher margins and more money to funnel back into R&D.

Do I believe everything Intel says? Hell no. But their tick-tocks have been steady as clockwork, while for example graphics cards are now finally starting to move off 40nm with the HD7970. Intel's never had to cancel a process to my knowledge, like TSMC had to scrap their 32nm process. And recently GloFo was in the news because AMD had to scrap their 28nm process and start over at TSMC with gate-last. That burns vast amounts of cash and delays your products for a double kick in the balls. Meanwhile apparently Intel got a die shrink to 22nm and 3D transistors ready to go, since specs and prices for Ivy Bridge have been leaking all over the place. Can Intel do low-power design? Yes. No. Maybe. But if they can keep their process improvements coming on time and with decent yields, they'll be hell to compete with anyway.

Re:beat ARM on what, 45nm? (1)

darkmeridian (119044) | more than 2 years ago | (#38512086)

High cost process? The more you shrink the die, the cheaper it gets to produce. Once the R&D and fabs have been done, you want to move everything to the smaller process if the yields are okay. Smaller dies mean that you get more chips per wafer. That means lower costs.

Re:beat ARM on what, 45nm? (0)

Anonymous Coward | more than 2 years ago | (#38512544)

High cost process... Heh. Below 40nm is high cost to the ARM manufacturers. Intel process tech is in a different league entirely.

Re:beat ARM on what, 45nm? (1)

Anonymous Coward | more than 2 years ago | (#38512756)

Huh? Intel's certainly got state of the art manufacturing facilities, but the list of ARM licensees also reads like a "who's who" of the IC manufacturing industry:

http://www.arm.com/products/processors/licensees.php [arm.com]

It's also hard to see Intel wanting to use all their capacity churning out smartphone CPUs that sell for a fraction (1/10?) of the price of desktop chips. The volume of ARM-based chips sold annually is staggering - billions - they are in essentially EVERY consumer electronic device other than desktop/laptop computers.

Re:beat ARM on what, 45nm? (1)

RightSaidFred99 (874576) | more than 2 years ago | (#38512732)

"Fair"? Huh? You compare products against each other based on performance and price. If one product is on a superior manufacturing process that is a differentiating advantage. Your post makes absolutely no sense.

Besides, some ARM implementations will be on 32nm soon, if they aren't already. Really, Medfield is a stop-gap. The really interesting time will be when/if Intel puts the Atom chips on first-class manufacturing capacity. Tri-gate 22nm Atom chips would be very competitive.

better be able to add more (external) RAM (0)

Anonymous Coward | more than 2 years ago | (#38511118)

> The tablet reference platform is reported to be a 1.6GHz x86 CPU coupled with 1GB of DDR2 RAM,

For tablets / netbooks, you better be able to add more RAM. My current tablet came with 2GB, but supports up to 4, which is what I have it at now (cheap upgrade). RAM often makes more difference to system responsiveness than CPU speed. 1GB is fine for phones but just isn't much for tablets and netbooks.

Re:better be able to add more (external) RAM (1)

Kjella (173770) | more than 2 years ago | (#38511816)

1GB is fine for phones but just isn't much for tablets and netbooks.

From "640kb is enough for everybody" (okay, maybe he didn't say it) to "1GB is fine for phones" in 30 years. I so badly want to go back with a time machine and show them one, and after they're amazed and all that and ask what kind of important things we use it for I'll just say "well, mostly we use it to play Angry Birds".

no AM? (2)

decora (1710862) | more than 2 years ago | (#38511144)

think of all those amplitudes not being modulated.

this is a terrible, terrible loss for America.

Re:no AM? (0)

Anonymous Coward | more than 2 years ago | (#38511626)

He's on FM as well, you know.

apples and oranges? (3, Interesting)

viperidaenz (2515578) | more than 2 years ago | (#38511244)

It looks like CaffeineMark 3 is single-threaded. At least the online version is, anyway.
How can you compare a 1.6GHz, presumably single-core, chip against dual-core CPUs on a single-threaded benchmark?

I just compared my laptop, which is a 2.2GHz dual core, with my desktop, a 3GHz single core. The laptop gets 16,000, the desktop gets 24,000. The laptop was at 50% CPU, the desktop was at 100%.
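To illustrate the point (a toy sketch, not CaffeineMark itself): a single-threaded scoring loop like the one below can only ever occupy one core, so a dual-core machine shows ~50% total CPU while a single-core machine shows 100%, and the score ends up measuring per-core speed rather than the whole chip.

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        volatile unsigned long work = 0;
        clock_t start = clock();
        for (unsigned long i = 0; i < 100000000UL; i++)
            work += i;  /* busy work, all on a single thread */
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("score: %.0f iterations/sec\n", 100000000.0 / secs);
        return 0;
    }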

Why is the processor named "Medfield" (1)

roarkarchitect (2540406) | more than 2 years ago | (#38511374)

I was hoping someone would know ?

Re:Why is the processor named "Medfield" (0)

Anonymous Coward | more than 2 years ago | (#38511530)

It's just a code name. Don't worry about it too much.

Re:Why is the processor named "Medfield" (1)

QuantumRiff (120817) | more than 2 years ago | (#38511838)

http://en.wikipedia.org/wiki/List_of_Intel_codenames [wikipedia.org]

Most are named after bodies of water, and they used to all be rivers in the Pacific Northwest of the US, but they ran out, (and moved some development out of Portland, OR)

Just let x86 die, please. (3, Interesting)

VortexCortex (1117377) | more than 2 years ago | (#38511718)

It's bloated. It had its time. I LOVED writing in assembly on my 80286, the rich instruction set made quick work of even the most complex of paddle ball games...

However, that was when I was still a child. Now I'm grown, it's time to put away childish things. It's time to actually be platform independent and cross platform, like all of my C software is. It's time to get even better performance and power consumption with a leaner or newer instruction set while shrinking the die.

Please, just let all that legacy instructions' microcode go. You can't write truly cross-platform code in assembly. It's time to INNOVATE AGAIN. Perhaps create an instruction set that lets you get more out of your MFG process; maybe one that's cross-platform (like ARM is). Let software emulation provide legacy support. Let's get software vendors used to releasing source code, or compiling for multiple architectures and platforms. Let's look at THAT problem and solve it with perhaps a new type of linker that turns object code into the proper machine code for the system during installation (sort of like how Android does). DO ANYTHING other than the same old: same inefficient design made more efficient via shrinking.

Intel, it's time to let x86 go.

Re:Just let x86 die, please. (0)

Anonymous Coward | more than 2 years ago | (#38511914)

I mostly agree with you, but the trouble for Intel with emulation (which is slow already) is that it's quite slow *before* you add in a dramatic scale-down in form factor (from desktop/laptop to tablet/cell phone). Emulated x86 apps written for the w32api are not going to get any kind of useful performance on an ARM emulating an x86, period.

The past stays with us (1)

Weaselmancer (533834) | more than 2 years ago | (#38512152)

We still use Imperial units in the US. And do you remember the shortage of competent Cobol programmers back when Y2K was the big worry?

The world does indeed move on, but the past stays with us for a long while. A low power x86 SOC is still a useful and a wonderful thing.

My first thought when I read the article was "Cool! You'll be able to get a notepad to run WINE and native Windows XP now!" I can see TONS of uses for this, even with the lousy power specs. Industrial/business types don't like to let go of legacy systems. I just know I'm going to get some work someday that this chip will be the perfect fit for. I'm thrilled. Excellent work Intel.

Is ARM better? In this marketspace, absolutely. It's the best forward thinking decision you could make. But not everyone looks forward or is willing to spend cash on rewriting some legacy system. This chip fits a need perfectly.

Re:Just let x86 die, please. (1)

the_humeister (922869) | more than 2 years ago | (#38512164)

Hahahaha... oh wait, you're serious. OK, so what high-performance, low-cost, and power-efficient processor should we all migrate to then? And will you pay to have all my software ported over to this new ISA?

Re:Just let x86 die, please. (0)

Anonymous Coward | more than 2 years ago | (#38512212)

About the only thing CISC about x86 is its front end. The rest is an out-of-order, speculative, 3-4 instructions at a time, RISC machine.

The CISC vs RISC debate is moot at this point. ARM is better on power draw because of one thing: it has about half the transistor count (less heat, less current leakage). It is also a slower processor because of it. Intel is going to beat ARM at its own game. It even has a new '3D' sort of transistor to do it with. TSMC does not have that tech.

Also the more layers you add in the more latency you add. All the 'cool' performance critical apps on android use the NDK not the SDK.

I used to think as you do. That MIPS was going to whip the snot out of everyone. Didn't happen. Then it was PowerPC, then it was... see a pattern?

Re:Just let x86 die, please. (4, Interesting)

AcidPenguin9873 (911493) | more than 2 years ago | (#38512242)

I scoured your post for one actual reason why you think x86 is an inferior ISA, but I couldn't find any. I'll give you a couple of reasons why it is superior to, or at least on par with, any given RISC ISA, on its own merits, not taking into account any backwards compatibility issues:

  • Variable length instruction encoding makes more efficient use of the instruction cache. It is basically code compression, and as such it gives a larger effective ICache size than a fixed length instruction encoding. Even if you have to add marker bits to determine instruction boundaries, it's still a win or at least a wash.
  • x86 has load-op instructions. Load-op is a very, very common programming idiom both for hand written assembly and for compiler generated code. ARM and other RISC ISAs require two instructions to accomplish the same thing (a small C sketch follows this comment).
  • AVX, the new encoding from Intel and AMD, gives you true RISC-like two source, one non-destructive dest instructions.
  • Dedicated stack pointer register allows for push/pop/call/return optimizations to unlink dependence chains from unrelated functions. With a GPR-based stack, RISC has false dependence problems for similar code sequences that they can't really optimize,
  • AMD64 got rid of cruft, added more GPRs, and added modern features like PC-relative addressing modes, removing that advantage from RISC too.
  • ARM's 64 bit extensions were just announced and won't be shipping until 2014. x86 has been 64 bit for 8 years.

x86 should be able to compete quite well with any RISC ISA on its own merits today.
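As a concrete illustration of the load-op point above (a hypothetical C function; the instruction sequences in the comments are typical compiler output, not taken from any particular toolchain):

    /* On x86-64 the addition below can compile to a single load-op
     * instruction, e.g. "add eax, DWORD PTR [rsi]", while a classic
     * RISC such as ARM needs a separate load then an add,
     * e.g. "ldr r2, [r1]" followed by "add r0, r0, r2". */
    int add_from_memory(int acc, const int *p) {
        return acc + *p;
    }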

Re:Just let x86 die, please. (2, Informative)

Anonymous Coward | more than 2 years ago | (#38512654)

1. Having variable-length instructions complicates instruction decoding, which costs die space and cycles (once for actual decoding and once for instructions spanning fetch boundaries). Also, several processor architectures have 16-bit instructions (ARM, SH, MIPS, TI C6x off the top of my head) while still having access to 16 general-purpose registers, as x86-64 does with its up-to-16-byte insns.

2. Load-op insns and many others are split up internally into smaller micro-ops. They are about as useful as assembler macros. Load-op insns also hurt performance - for example, on Intel processors a load-op is split into two µops, one of which is dispatched to port 2, which means that two load-ops cannot be dispatched in the same cycle, whereas up to three simple ops can be dispatched in one cycle.

3. AVX is good, having the same style for general purpose insns is better.

4. Dedicated SP engine is a solution to a problem, which does not exist on common RISC architectures anyway. The dependency, which is eliminated by the stack pointer tracker is the dependency of a push/pop insn on that value of SP, which is a result of a previous push/pop. There's no such dependency if simple moves to/from memory (e.g. `movq %rbx, 10(%rsp)') are used as in typical RISC (or in x86 too). Also ARM (and THUMB) can save/restore multiple registers on stack with a single insn, so no dependency there either.

5. The advantage of 64-bit address space for an architecture, traditionally targeted at embedded and mobile applications is quite dubious.

x86 has no merits, just age-old quirks which are solved by throwing in a ton of additional logic and gigahertz. Make no mistake, x86-64 CPUs are good because the manufacturing process is good - not because of, but despite, the ISA.

Re:Just let x86 die, please. (0)

Anonymous Coward | more than 2 years ago | (#38512702)

Variable length instruction encoding makes more efficient use of the instruction cache. It is basically code compression, and as such it gives a larger effective ICache size than a fixed length instruction encoding. Even if you have to add marker bits to determine instruction boundaries, it's still a win or at least a wash.

Modern x86 processors decode instructions before they are cached. i.e. the ICache holds fixed-width instructions like a RISC processor.
I'll give you the storage benefit on disk and in RAM but, as the meme goes, memory is now very cheap so the compression is unnecessary.

x86 has load-op instructions. Load-op is a very, very common programming idiom both for hand written assembly and for compiler generated code. ARM and other RISC ISAs require two instructions to accomplish the same thing.

This isn't a justification, just stating a feature. The all powerful "add eax, [mem_addr]" is a 2-clock instruction that executes as a "mov secret_reg, [mem_addr]; add eax, secret_reg" anyway. There is no benefit to this beyond the previously mentioned "compression". BTW, those directly-from-memory instructions are only important because the x86 has so few registers and, worst of all, those registers are not truly general purpose (look at mul, div, movsb).

AVX, the new encoding from Intel and AMD, gives you true RISC-like two source, one non-destructive dest instructions.

This works against you since Atom processors tend to cut back on the big stuff to reduce the die-size. What exactly are you going to do about all the existing software? Or is there a transcoder?
You're basically saying x86 is superior because Intel is desperately trying to get rid of x86 and replace it with a RISC recoding from the inside out. An efficient RISC architecture which can emulate x86 efficiently would work just as well (IA-64 was both crap and had poor emulation).

Dedicated stack pointer register allows for push/pop/call/return optimizations to unlink dependence chains from unrelated functions. With a GPR-based stack, RISC has false dependence problems for similar code sequences that they can't really optimize,

I won't argue the concept but I will point out that SP/ESP/RSP is NOT special. The call/push/pop/leave/ret instructions implicitly reference it, yes, but you can emulate the behavior with other instructions to use any register. That's basically calling convention standardization, unless you're referring to the hardware call rewind stack?

AMD64 got rid of cruft, added more GPRs, and added modern features like PC-relative addressing modes, removing that advantage from RISC too.

True. Except layering all that crap on top of the existing crap is a problem, because Atom (due to the size-proportional-to-power-used problem) inherently makes it uncompetitive; most Atoms don't even have the 64-bit mode at all for this reason. If you are suggesting that the Atom chips should just boot straight into EM64T and remove the 16-bit instruction set then I'll buy into that.

ARM's 64 bit extensions were just announced and won't be shipping until 2014. x86 has been 64 bit for 8 years.

This is an argument about ARM, not RISC in general. ARM is an architecture that is customised by many licensors, x86 is Intel/AMD. Also, ARM is mostly embedded where 64bit memory addressing is unnecessary, it's only as smartphones have become confused about what a phone is supposed to do that large amounts of RAM are pushing the limits.
If you want to argue about RISC64, talk about how AMD64 compares to Power (PPC64).

---
If you want a blunt summary of the problems with x86:

  • It's too big, there are too many features, too many different modes of operation, modes that can be recursively active inside other modes
  • It's redundant, there are many instructions that do the same thing. This is a problem for picking the fastest combination and for hardware designers making the instructions faster
  • It's overly complex, large obtuse features that are almost completely unused: TSS, segmentation
  • It's inconsistent, General purpose registers that aren't really general, special purpose registers that aren't really special
  • The memory model is nice to program but causing serious problems as the number of cores increases (CPUs get stuck busy-waiting on the cache coherence protocol instead of doing anything useful)

Re:Just let x86 die, please. (1)

TheDarkMaster (1292526) | more than 2 years ago | (#38513368)

Well... I would like (and buy) a tablet where I can install and use Paint Shop Pro 7 (yep, it's old) without emulation, can use my finger or a stylus to draw, and can use Miranda IM without ugly hacks. Applications, my friend, applications.

P.S.: I know that I can have similar applications on Android or an iPad... but I really, really do not like the idea of a "happy walled garden" with pseudo-applications ("apps" in HTML/JavaScript? WTF), sorry.

Re:Just let x86 die, please. (1)

PolygamousRanchKid (1290638) | more than 2 years ago | (#38513416)

However, that was when I was still a child. Now I'm grown, it's time to put away childish things.

There is IBM Mainframe System 360 code from the '60s still running on current zEnterprise systems today. That code was probably written while you were still swimming around in your dad's balls (no offense intended; it's just an amusing expression).

This will also be the case with x86. It will stay around forever, because it has been around forever. Tautology intended.

However, supporting legacy stuff does not necessarily preclude innovation.

Re:Just let x86 die, please. (0)

Anonymous Coward | more than 2 years ago | (#38513446)

I doubt this will succeed anyway, because:

1. x86 has traditionally dominated because of compatibility with old software. Microsoft could never switch to ARM or none of the existing Windows software would run. Apple managed it (PowerPC->x86), but they don't care about backwards compatibility as much as Microsoft, and they also didn't have much to lose since few people used macs at the time.

2. Lots of Android software, and all iPhone software is compiled ARM code. Anyone choosing x86 for their Android device would have to accept that lots of software (mostly games) wouldn't run on it.

Snore... Ketchup. (0)

Anonymous Coward | more than 2 years ago | (#38511810)

Wake me when Intel releases a product I want. This part is going to be obsolete by the time the next iPad comes out.

What Intel needs to figure out is how to put 8GB of RAM, x86-64 and quad cores together and then make it use 1 watt. A bonus would be figuring out how to turn OFF unused RAM by coming up with a memory process that doesn't need to be refreshed. Then we'll see a chip that is competitive with ARM designs.
Intel's failure is that they aren't reducing TDP in any of their chips; rather, it keeps increasing, and that unfortunately has the problem of hitting a wall. Already you can only put 8 CPUs in a 15A rack. The TDP needs to come down for real, not just in marketing.

I don't know what Intel is shooting for here. If they are shooting for the tablet, it will be obsolete shortly with those specs. Current desktops are 8GB RAM, 64-bit and quad-core as standard. Putting anything out there that is less, even for a "tablet", is just admitting they can't put a desktop experience in a tablet. This is where Apple figured out the right thing to do... pick a better power-conserving CPU and then don't try to put the desktop on the tablet. Sure, it is essentially the same OS, but they didn't put the Mac OS UI on it and instead came up with something more stripped down that doesn't rely on unused libraries (think about why Windows is bloated... the same OS is used for servers and desktops), and there are no development tools on the device itself (see *nix land).

Microsoft's entire screwup dates back to trying to integrate MSIE into the system too rigidly. Had they not tied it to the OS and instead broken it across components that could be swapped out, we might not be where we are today with 2.5 competing alternatives. So the server OS (NT) got spit-polished and became Windows 2000, XP, Vista and 7, heading clear in the opposite direction from the lightweight GUI on top of DOS that ended with Windows ME. Meanwhile Microsoft botched Windows CE/Mobile/Phone, several times. CE should have started out as a stripped-down NT (as in figuring out how much to strip out of NT to make it work on embedded devices). Instead Microsoft had to duplicate efforts countless times, ending with everyone dumping their PDAs (the predecessor to the tablet).

We have already seen Microsoft fail in the tablet environment, and we've seen why (OEMs produce throwaway, expensive crap). Exactly what is happening on Android. Android's future is Windows Mobile 5's past already. The OEMs need to shape up and produce better hardware with the same user experience on the same OS. We've seen this game before with the CD-i, 3DO, Windows CE/Mobile, and even Palm. The OEMs produce inferior hardware and nobody wants to develop for it (except for some hardcore nerds); the end users end up stuck with the bloatware that comes with the phones and tablets and never buy anything from the market because their brand-new device doesn't run the current version of the OS.

Only a mobile phone company would have the gall to force its users to throw away their product and purchase a new one within the warranty period to get the latest software. This is what the Windows Mobile OEMs were doing before. I've been burned there; I'm not getting burned again with Android. When I see an OEM support their tablet for 6 years (like a game console) I'll consider that brand as good. Till then, screw them all and let the grandmas use them as portable throwaway web browsers.

Re:Snore... Ketchup. (1)

oakgrove (845019) | more than 2 years ago | (#38512118)

Exactly what is happening on Android. Android's future is Windows Mobile 5's past already.

As a current user of Ice Cream Sandwich on a Motorola Xoom, you sir are quite wrong. What Android tablets needed was good software. That software is here.

Huh? (0)

Anonymous Coward | more than 2 years ago | (#38511818)

Typical ARM-based SoCs have idle power around 50-200mW; 2W ain't going to cut it.

I'll believe it when I see it (1)

peppepz (1311345) | more than 2 years ago | (#38512940)

We already had sensationalist press reports about how Intel would displace ARM thanks to the Moorestown chips, that would let us run the unmodified World of Warcraft on our phones, and so on. Result in reality: zero devices using Moorestown shipped.

Somebody will also remember how Larrabee was meant to smother the video card market, the rumors that Sony was going to build the PS4 upon it, etc. Result in reality: Larrabee shelved.

It's easy for Intel to beat ARM in performance benchmarks - they have been doing this since the beginning. The point is, that they have to beat ARM in power efficiency, and it looks like Medfield isn't quite there yet.

Intel's process tech is the best (1)

Dr. Spork (142693) | more than 2 years ago | (#38512976)

If this thing can compete on performance with ARM chips, it's only because Intel can make miracles happen in silicon. Of course, we might never know what would happen if Intel used their latest process technology to print ARM chips like Apple's A6 or whatever the next generation will be. But it would be very good.

Meego lives? (0)

Anonymous Coward | more than 2 years ago | (#38513132)

I wonder if this means Meego will finally get the last push it needs to be allowed onto the market. The chips, I could not care less about.

"Benchmark-wise"? (0)

Anonymous Coward | more than 2 years ago | (#38513240)

I Think Not. Really, this shouldn't need to be said, but apparently the submitter is clueless here. You can't presume proof before you have it, thanks. That means no claims about benchmarks without the benchmark numbers (and all the details, dammit; the numbers are useless enough as it is) in hand. Say the specs make you expect it will out-perform the competition, fine. But putting it like this is vague handwaving with extreme presumption. Techies really ought to know better than deploying marketeers' weasel-word salad, thanks.
