
Taiwanese OEMs Consider ARM Products For Windows 8

timothy posted more than 3 years ago | from the just-being-polite dept.


siliconbits writes "At CeBIT 2011, we went around the stands of some of the biggest component manufacturers in the world and asked them a simple question: would you consider bringing out ARM products (barebones, laptops, tablets, motherboards) for Windows 8? The answer was a unanimous yes; like Microsoft, the same firms that have been faithful Intel and AMD partners for years are prepared to explore other territories as soon as Windows 8 goes live."

167 comments

Dumb question - of course they'll say yes. (5, Insightful)

tomhudson (43916) | more than 3 years ago | (#35397890)

What did you expect them to say - "No, we won't - we'll cede that market to our competitors, because our customers prefer products with crappy battery life"?

Re:Dumb question - of course they'll say yes. (3, Insightful)

RobertM1968 (951074) | more than 3 years ago | (#35398198)

What did you expect them to say - "No, we won't - we'll cede that market to our competitors, because our customers prefer products with crappy battery life"?

Parent is correct... and for even more reasons than indicated. (No, this next section is not a slam against MS... read through the whole thing.) Sure, Win8 may bomb on such devices (pick a reason: no interest, Microsoft yet again not fulfilling their promise to have something actually suitable for such devices, Win8's requirements being too absurd for such "minimalist" hardware, whatever)... but the simple fact is, it may gain traction and take off. Given that possibility, there isn't one OEM with half a brain that would say "no, we aren't doing this" at this point in time. When the time comes to make a real decision, they'll choose to (a) release some test-bed units, (b) dive in fully or (c) look away from Win8 and concentrate on other things - but now is definitely not the time for them to say no, especially since they may need Microsoft's goodwill in the future (assuming Win8 proves suitable and desired on such devices).

Re:Dumb question - of course they'll say yes. (0)

Anonymous Coward | more than 3 years ago | (#35398650)

Also, it's not a lot of work to load an OS onto your device if the OS is designed for it. No skin off their noses.

Re:Dumb question - of course they'll say yes. (1)

poetmatt (793785) | more than 3 years ago | (#35398960)

These companies are already making ARM products; it's just that they'll be making them for the next Windows version when it comes out. So this is a change of nothing.

Hint: it's not going to be called Windows 8.

Good (3, Funny)

betterunixthanunix (980855) | more than 3 years ago | (#35397904)

OK, finally we are moving away from x86 and toward RISC. We are only 20 years behind schedule, but hey, better late than never.

Re:Good (0)

Anonymous Coward | more than 3 years ago | (#35397936)

And since it's "move to RISC if Windows moves to RISC too", we'll be trapped with RISC for another two decades even if something better comes along.

Re:Good (3, Informative)

hittman007 (206669) | more than 3 years ago | (#35398046)

As I recall, Windows NT 4.0 was independent of hardware. It had a concept called the HAL, which handled all communication with the hardware. You had an Alpha chip? No problem, just get an Alpha HAL. I have personally seen Windows NT 4.0 running on other architectures, including Alphas and the Apples of the day (long before they switched to Intel equipment).

I'm guessing they dropped this capability with one of the newer incarnations...

Re:Good (2)

RobertM1968 (951074) | more than 3 years ago | (#35398240)

As I recall, Windows NT 4.0 was independent of hardware. It had a concept called the HAL, which handled all communication with the hardware. You had an Alpha chip? No problem, just get an Alpha HAL. I have personally seen Windows NT 4.0 running on other architectures, including Alphas and the Apples of the day (long before they switched to Intel equipment).

I'm guessing they dropped this capability with one of the newer incarnations...

Ummm... yeah, kinda... other than the massive portions that are monolithic and definitely tied to the hardware in NT 4. XP has a HAL as well... guess which other versions do also? As a matter of fact, most OSes have a device abstraction layer. That alone does not make a multi-platform OS, as both IBM (OS/2 PPC) and MS (Windows NT for certain RISC architectures) can tell you via the problems they had getting their operating systems to run on non-Intel-compatible hardware. Even with how modular OS/2 was (in comparison to Windows), there were still various problems due to... rewriting and/or recompiling the rest of the OS.

Re:Good (1)

perpenso (1613749) | more than 3 years ago | (#35398256)

As I recall Windows NT 4.0 was independent of hardware ... I'm guessing they dropped this capability with one of the newer incarnations...

They dropped it only in the sense that they no longer offered it to the public. Recall that Windows NT was started on MIPS, x86 came later. The goal was to make sure the code was portable between architectures. My understanding is that internally MS kept building NT (XP, Vista, 7, ...) on non-x86 platforms to maintain/verify portability.

Re:Good (1)

peragrin (659227) | more than 3 years ago | (#35398314)

x86 on MIPS required a secondary x86 coprocessor that changed the instruction set.

So if it started on MIPS then it did so very poorly.

Re:Good (1)

bhtooefr (649901) | more than 3 years ago | (#35398456)

No, it didn't require a secondary x86 coprocessor.

It just ran on MIPS.

The problem is, x86 userland software didn't run without being recompiled (except for a badly emulated DOS environment), so if your software vendor didn't compile for MIPS, you were screwed.

On Alpha, DEC did a profiling recompiler for NT (FX!32), and due to the speed of the Alpha, that approach nearly worked - except Compaq got distracted by Itanium and cancelled the Alpha effort just as it was picking up steam going into Windows 2000.

Re:Good (1)

LO0G (606364) | more than 3 years ago | (#35398330)

NT has always supported RISC architectures. Even today, Itanium is supported on Windows Server 2008 R2.

In fact, NT was first developed on MIPS and i860 RISC chips; the x86 port came later (seriously). Back in the 1990s, there were active NT versions for MIPS, PPC and Alpha, and later Itanium. There were even rumours of a SPARC port.

The reason that most of those ports aren't alive today is that the hardware manufacturers told MSFT that they weren't interested in supporting NT for their platform any more.

Re:Good (2)

spiffmastercow (1001386) | more than 3 years ago | (#35397946)

We moved away from x86 internally over a decade ago. The x86 instruction set is translated in hardware. But yeah, hopefully this will mean at least a move to a sane assembly language. Not that anyone even uses assembly anymore...

Re:Good (2)

betterunixthanunix (980855) | more than 3 years ago | (#35398004)

The microarchitecture argument is not very convincing -- yes, the instruction set is translated, but in the end, you still expose the x86 instruction set. As an example, we still have to deal with floating point registers that are arranged as a stack, although some compilers (GCC for example) have an option to use SSE registers and instructions as an alternative. In general, x86 inherits a lot of very outdated designs, which can be very annoying when you are forced to deal with them (or which just waste space on the die if nobody uses them anymore).

Re:Good (5, Informative)

Rockoon (1252108) | more than 3 years ago | (#35398296)

Most (all?) 64-bit compilers produce SSE single-precision and double-precision code by default. It is the x87 stack that is the odd man out, contrary to what you are making it sound like.

All x64 CPUs support both single- and double-precision SSE, which is why it's the default for 64-bit targets. If you are targeting a 32-bit OS, then a 32-bit binary cannot simply assume that single-precision SSE is available, let alone the later double-precision support of SSE2.

Also, the x87 FPU performs calculations in 80-bit precision, so it is not directly comparable to SSE's single- and double-precision features.

Finally, it is not "some compilers"... it's ALL THE MAJOR ONES, both 32-bit and 64-bit.

Re:Good (1)

Svartalf (2997) | more than 3 years ago | (#35398032)

Depends on whether you're doing kernel code or similar. The device driver and OS dev crowd still uses it...

Re:Good (2)

Sponge Bath (413667) | more than 3 years ago | (#35398174)

It's been a long, long time since I've used assembler for a driver in Windows - the Win 3.1/95 era - and I've never needed it for Linux. Even for core kernel devs it is more the exception than the rule. These days, the ultimate language of all time, C, rightfully rules for these tasks.

Re:Good (0)

Anonymous Coward | more than 3 years ago | (#35399234)

you realize it's "assembly" and not "assembler" right?

Re:Good (2)

partyguerrilla (1597357) | more than 3 years ago | (#35398018)

RISC architecture is going to change everything.

Re:Good (1)

Anonymous Coward | more than 3 years ago | (#35398020)

OK, finally we are moving away from x86 and toward RISC. We are only 20 years behind schedule, but hey, better late than never.

MS-DOS was running on ARM's x86 emulator in 1987 [chriswhy.co.uk] .

Re:Good (1)

RobertM1968 (951074) | more than 3 years ago | (#35398270)

OK, finally we are moving away from x86 and toward RISC. We are only 20 years behind schedule, but hey, better late than never.

MS-DOS was running on ARM's x86 emulator in 1987 [chriswhy.co.uk] .

Well, that's not quite the same as running natively on it... even though somewhat similar to how things run on today's 64bit CPUs.

Re:Good (1)

Bert64 (520050) | more than 3 years ago | (#35398404)

You used to be able to emulate x86 on the Amiga too, using applications such as PC-Task or PCX...

Re:Good (0, Flamebait)

Desler (1608317) | more than 3 years ago | (#35398034)

A modern "x86" chip is RISC tardo. Way to be a decade or more behind the times when it comes to Intel chip designs.

Re:Good (1, Insightful)

the linux geek (799780) | more than 3 years ago | (#35398120)

No it isn't. Way to lose the whole point, which is predictability of how many cycles an instruction takes.

Not that it matters at this point. VLIW, like in high-performance DSPs and certain niche processors, is the future.

Re:Good (2)

Sponge Bath (413667) | more than 3 years ago | (#35398226)

"VLIW, like in high-performance DSP's and certain niche processors, is the future."

I want an Itanium in my iPhone!

Re:Good (0)

Anonymous Coward | more than 3 years ago | (#35398322)

Most VLIW is RISC, not CISC. Also, Intel was not behind anything; the developers were: http://en.wikipedia.org/wiki/Intel_i860.

Re:Good (2)

turgid (580780) | more than 3 years ago | (#35399924)

Not that it matters at this point. VLIW, like in high-performance DSP's and certain niche processors, is the future.

Yes, VLIW has been the future since the 1970s.

Re:Good (5, Insightful)

Kjella (173770) | more than 3 years ago | (#35398096)

RISC won 20 years ago; all x86 processors decode to some internal instruction set. I am certain the engineers at Intel and AMD have tested exposing the native instructions, and if it could perform much faster than x86, I'm sure they'd enable applications to bypass the hardware decoder and send micro-ops directly. While they still process the full instruction set, the really obscure instructions live in microcode instead of hardware, x86_64 adjusted the number of registers, etc., so most things have been tweaked. I don't need to remind you that the last attempt to do better was the Itanic...

Re:Good (1)

Bert64 (520050) | more than 3 years ago | (#35398412)

Exposing the micro-ops would mean they have to keep some compatibility, keeping them hidden behind x86 means they can change the micro-op functions all they like without impacting compatibility.

Re:Good (1)

Rockoon (1252108) | more than 3 years ago | (#35398426)

RISC won 20 years ago, all x86 processors decode to some internal instruction set. I am certain the engineers at Intel and AMD have tested exposing the native instructions and if it could perform much faster than x86 I'm sure they'd enable applications to bypass the hardware decoder and send micro-ops directly.

No need, as they are already exposed directly. Plenty of instructions emit a single micro-op... for example, most of AMD's DirectPath instructions emit a single micro-op and, in fact, 100% of AMD's micro-ops can be found in the set of DirectPath instructions.

Re:Good (1)

DarkOx (621550) | more than 3 years ago | (#35399318)

I am not really sure what your point about Itanium is exactly. We were discussing, for the most part, the technical realities of RISC, CISC, and microcode. Itanium is, first off, still in production - in fact they just released new models - and second, if it's a failure, it is so in the marketing sense more than the technical sense. The chip performs well.

MS Windows supported RISC 15 years ago ... (2)

perpenso (1613749) | more than 3 years ago | (#35398194)

OK, finally we are moving away from x86 and toward RISC. We are only 20 years behind schedule, but hey, better late than never.

MS Windows NT 4 supported RISC 15 years ago, in 1996(*): DEC Alpha, IBM/Motorola PowerPC and MIPS. All on the standard Win NT 4 retail CD. Consumer-oriented PowerPC machines were available; I recall Byte magazine comparing dual PowerPC and dual x86 systems. Alpha machines were available for the more serious users. Despite better computational performance on the RISC-based machines, x86 won due to price and software availability. ARM could fail as well. ARM may have better battery performance, but is it so much better that it will outweigh the software availability issue?

Also, as others have pointed out, the x86 has a RISC core. x86 instructions are converted to RISC-like instructions on the fly, scheduled and executed. The "problem" is that we do not have direct access to this core and must go through the x86 facade.

(*) OK, you can argue 1993, day 1 for Win NT, since MIPS was supported. However, I don't think there was any real push towards a consumer MIPS machine. The motivation was more internal: making sure Win NT was portable to other architectures.

Re:MS Windows supported RISC 15 years ago ... (2)

the linux geek (799780) | more than 3 years ago | (#35398236)

Actually, Microsoft pushed for a MIPS reference architecture (ARCS) to be the successor to the PC architecture. They had some substantial support onboard, but it ended up breaking up due to DEC and a couple of other manufacturers pushing for Alpha to be the processor used, and then Compaq leaving and returning support to PC-compatibles.

Is ARM the new ACE? (1)

erice (13380) | more than 3 years ago | (#35399226)

(*) OK you can argue 1993, day 1 for Win NT, since MIPS was supported. However I don't think there was any real push towards a consumer MIPS machine. The motivation was more internal, making sure Win NT was portable to other architectures.

On the contrary, there was a major push by the ACE consortium [wikipedia.org] to replace the x86 PC with a common platform built around MIPS and Windows NT. Unfortunately, it was mostly industry hype with very little product appearing in the retail channel before the whole thing was discarded.

Re:MS Windows supported RISC 15 years ago ... (2)

DarkOx (621550) | more than 3 years ago | (#35399414)

The software issue might not be as big this time around. Netbooks aside, ARM is driving today's tablets, cellphones, and embedded devices. That seems to be where computing is going in general. It simply won't make sense to bring most of the existing PC software into that world. The software that does make sense to bring is cross-platform already. Hell, I am running ArmedSlack on my GuruPlug and it's, package for package, almost exactly the same as the x86 Slackware versions.

So people are really not going to be looking to move that many legacy apps over in the first place, and few applications people are really using will prove to be prohibitively difficult to port. It's not 1985 any more and we are not worried about running our DOS apps much; few applications written in the past 10 to 15 years have blobs of assembly sprinkled in, and probably fewer make assumptions about PC hardware.

Re:Good (1)

WrongSizeGlass (838941) | more than 3 years ago | (#35398258)

OK, finally we are moving away from x86 and toward RISC. We are only 20 years behind schedule, but hey, better late than never.

Does this mean I'll finally be able to use my books on CHRP [wikipedia.org] ?

Re:Good (1)

lkcl (517947) | more than 3 years ago | (#35398678)

I've been trying to get an article through Slashdot submission which describes exactly this; perhaps this article, which has been accepted, will trigger people to realise what I'm on about. If you put multiple RISC cores into 28nm or below, they SCREAM along at such unbelievably fast speeds that pure economics dictates that it is insane to ignore them. LEON4 by gaisler.com can do up to 8 cores, each at 1.5GHz, in 30nm. The size of the chip is so small that you can fit, I believe, around 10,000 processors onto a single 12in wafer. Each wafer is $10k, meaning that each IC is $1. Add $1 for plastic packaging; add $1.50 for running test vectors at the plant, and you have a grand total of $3.50 for the manufacturing cost of each CPU, in mass volume. Of course, you have to amortise the NREs, which are somewhere in the insane range of $5 million, but if you sell 5 million processors, that's only $1 per processor!

And so that's what... $5 or thereabouts... for an 8-core 1.5GHz processor, with 1.7 DMIPS/MHz performance (roughly the same as an ARM Cortex A9 or the MIPS 1074K). And, because it will be an "integrated" System-on-a-Chip, it will have an on-board DDR2 or DDR3 RAM controller, HDMI, SATA-II, USB2, PCIe, Gigabit Ethernet - everything that is listed in the article presented by the OP - so you could have an unbelievably powerful desktop or server system, consuming only about 4 watts of power for the complete system, with 12GHz of processing power and 2GB of RAM, costing only around $50 in parts.

So I have to ask: at what point does the economics become so blatantly in favour of RISC cores that people simply realise it is truly "Game Over" for Microsoft? What's it really going to take? Do we _have_ to get down to 22nm or below, where 1.5GHz becomes 2.5 to 3GHz, and 10,000 cores becomes 20,000 cores on a single 12in wafer, and the price for 20GHz of processing power is $3 per CPU? Really - what am I missing? I just don't get it.

Re:Good (0)

Anonymous Coward | more than 3 years ago | (#35399110)

I'd like to see evidence for that "1.7 DMIPS/MHz performance", because usually when you look closely you find out that in real-world usage the truth is closer to 0.17 DMIPS/MHz.
Particularly since the numbers obviously don't include any cache beyond "joke" size. Caches tend to make up 50% of a chip's area, so you can't really save much by simplifying the control structures - doubly so when you need _more_ (instruction) cache because your instructions are coded inefficiently.

Opportunities (2)

Nerdfest (867930) | more than 3 years ago | (#35397916)

I'd actually prefer they didn't. Joke as you will, it's an excellent opportunity for Linux to make inroads to the more casual user. The last one (netbooks) didn't get much time before Microsoft jumped in with XP netbooks.

Re:Opportunities (0)

Anonymous Coward | more than 3 years ago | (#35397984)

Genisi has a nice $200 ARM laptop - well, I think they do. I'll find out later today.

Re:Opportunities (2)

Desler (1608317) | more than 3 years ago | (#35398040)

Yeah, I also prefer they don't cater to the wants of their customers rather than the wants of the minuscule minority that are fighting some petty OS war.

Re:Opportunities (0)

braeldiil (1349569) | more than 3 years ago | (#35398062)

Nice to see that even the Linux partisans know that the only way for Linux to have desktop success is to have no competition. Given a choice, almost no one would choose linux.

Re:Opportunities (0)

Anonymous Coward | more than 3 years ago | (#35398114)

Yeah, I know, right? Linux has no chance of ever succeeding when faced with any competition at all. Oh, wait... [starkinsider.com]

Re:Opportunities (1)

the linux geek (799780) | more than 3 years ago | (#35398242)

Android isn't really Linux. Linux is basically a bootloader for it. With minimal porting, you could run it on top of FreeBSD or Mach.

Re:Opportunities (1)

Desler (1608317) | more than 3 years ago | (#35398384)

Android is Linux when it benefits the people pushing market share numbers that are positive to Linux. When any negative news comes out about Android it no longer is Linux and people make sure to point out that Linux is just the kernel that Android uses.

Re:Opportunities (0)

Anonymous Coward | more than 3 years ago | (#35398814)

Yeah, because every person that has an opinion on either of those things is in complete and perfect lockstep. Or maybe you just need a little more straw. I'll let you know.

Re:Opportunities (0)

Anonymous Coward | more than 3 years ago | (#35398504)

Android isn't really Linux.

Bullshit. What API does Dalvik use? What manages memory on an Android device? What manages the hardware on an Android device? When I open a terminal on my Droid and start rtorrent or vi or python or sshd, what API do those applications use?

Oh, and +0.1 troll points for your nick. I bet you fool a few people with it.

Re:Opportunities (0)

Anonymous Coward | more than 3 years ago | (#35399416)

You're a fucking retard. Everyone customizes the Linux kernel they put in their products... fuck there's a GUI that comes with the kernel to do just that.

Re:Opportunities (1)

turgid (580780) | more than 3 years ago | (#35399952)

Android isn't really Linux. Linux is basically a bootloader for it.

That's one mighty large, over-engineered bootloader! Why didn't they just use U-Boot?

Re:Opportunities (1)

Kjella (173770) | more than 3 years ago | (#35398370)

Nice to see that even the Linux partisans know that the only way for Linux to have desktop success is to have no competition. Given a choice, almost no one would choose linux.

Migrate to Linux? To get people to switch to something - anything - it can't just be "as good as". There has to be some specific reason to do it that you're not getting anywhere else, unless you're dealing with very idealistic or cost-averse people. Running on ARM while Windows didn't could be one such thing. In fact, I suspect it will be one such thing, because most apps will only be x86 no matter what Microsoft does.

Personally, it was the pre-SP Vista that did it; I was like "fuck, if this is the way forward I'd rather grit my teeth on Linux". I stuck with it for 3.5 years too, because even though it had many quirks I kinda got used to them. But in the end I migrated back to Win7 because Microsoft fixed things while Gnome/KDE didn't. And my Windows games are now just a click rather than a reboot away.

Re:Opportunities (1)

pmontra (738736) | more than 3 years ago | (#35398068)

Linux is already making inroads into casual users' phones (the Android kernel). Not that they know or care about it. As a Linux desktop user I'm fine with that, and I'm just happy that almost all the web applications I've worked on in recent years are running on Debian servers or Debian derivatives.

I just don't believe that casual users will ever massively switch to Linux. Maybe they'll start to use Chrome OS tablets, but they won't know about them being Linux inside, just like almost all iPhone and iPad users don't know that those devices are Unix inside. We might end up living in a world dominated by Unix derivatives, but only a few techies like us will know it.

Re:Opportunities (1)

RobertM1968 (951074) | more than 3 years ago | (#35398342)

I'd actually prefer they didn't. Joke as you will, it's an excellent opportunity for Linux to make inroads to the more casual user. The last one (netbooks) didn't get much time before Microsoft jumped in with XP netbooks.

If Microsoft's track record is a good indication, I would be happy if they DID go for it... can anyone count how many versions of Windows were targeted at tablets - and failed to get anywhere except niche markets like Home Depot's inventory carts? Heck, even skip the WinMo crap that was never suited for touchscreens.

"Everyone" wants a tablet nowadays. Apple and the various OEMs that build on Android are doing a phenomenal job. A blunder like taking on iOS and Android in markets they were designed for would do nothing except push interest even further away from any Microsoft offering. No bashing there - just a fact. Something similar happened in the smartphone arena... too little, too late, after too many broken promises of innovation and (planned) leadership in the market. If enough OEMs take the dive, then if (when?) Microsoft fails in this market again, it'll mean a bunch of vendors will be refitting overpowered devices with Android or some other Linux-based option, providing us with higher-end tabs at cheaper prices.

Re:Opportunities (1)

cyber-vandal (148830) | more than 3 years ago | (#35398938)

Who is "everyone"? Not me or most of the people I know.

Re:Opportunities (1)

RobertM1968 (951074) | more than 3 years ago | (#35399950)

Who is "everyone"? Not me or most of the people I know.

You didn't note the use of quotation marks in my post? Nor understand their meaning?

Re:Opportunities (2)

UnknowingFool (672806) | more than 3 years ago | (#35399050)

If Microsoft's track record is a good indication, I would be happy if they DID go for it... can anyone count how many versions of Windows were targeted at tablets - and failed to get anywhere except niche markets like Home Depot's inventory carts? Heck, even skip the WinMo crap that was never suited for touchscreen.

The reason MS failed at tablets was that all they did was shove Windows into a different form factor and then call it done. Other than having a touchscreen and using a stylus instead of a mouse, Windows tablets were just very expensive laptops. In the many years MS pushed tablets, they only wrote one application that truly used touch; they left the rest of the OS very keyboard/mouse-centric. So why would the average consumer buy an expensive touchscreen laptop that gave them no real advantage over a cheaper laptop?

Re:Opportunities (1)

LO0G (606364) | more than 3 years ago | (#35399926)

Actually until the iPhone came out, touch was considered uninteresting in consumer devices. All the tablets I've ever seen were designed around stylus input, not touch.

When the iPhone came out it was a game changer in more ways than one - touch became the norm, capacitive screens instead of resistive ones, etc.

ARM Windows (5, Insightful)

devent (1627873) | more than 3 years ago | (#35397988)

How are they going to explain to the millions of Windows users that no application they know will work on ARM Windows? It's the same as with 64-bit Windows, and why we didn't see much of it despite RAM prices being very low. I guess with Windows 7 developers finally released some software for 64-bit. That's what, like 9 to 10 years since AMD came out with the amd64 architecture?

Well, at least I can then finally buy some ARM notebooks and put a decent Linux distribution on it.

Re:ARM Windows (2)

The O Rly Factor (1977536) | more than 3 years ago | (#35398024)

How are they going to explain to the millions of Windows users that no application they know will work on ARM Windows?

Clever marketing that appeals to yuppies.

"Don't be left behind with slow, stupid x86 Microsoft Office - upgrade to the new, better, more powerful Microsoft ARM Office today. It's newer, so you know it's better, and come on, it has the word "Arm" in it, which means powerful, duh!"

Re:ARM Windows (1)

Microlith (54737) | more than 3 years ago | (#35398088)

Well, at least I can then finally buy some ARM notebooks and put a decent Linux distribution on it.

And I expect the market for ARM-based Windows 8 devices to be just as horrible, in terms of replacing the OS, as it is now for tablets and phones: lack of drivers, binary-only video drivers, and lockdown to prevent people from actually removing the OS.

And here I was hoping that the transition to ARM would get us away from Microsoft's domination. Now it could very well be enforced in hardware.

Re:ARM Windows (0)

Anonymous Coward | more than 3 years ago | (#35398154)

Emulation still works.
Most of the popular programs people will want to use - browsers, media players, stuff like that - will be ported pretty quickly.
The others won't need much in terms of resources, so emulation could be done without much overhead.

The one huge problem will be games, of course.
Yeah, I can't think of a way around that either.
Some older games could be emulated, but the transition to ARM will be pretty painful for those who prefer working close to the metal.

Re:ARM Windows (2)

Bert64 (520050) | more than 3 years ago | (#35398450)

64-bit Windows is different: most 32-bit applications run on it just fine, and the 32-bit consumer versions are crippled (i.e. they won't support more than 4GB of address space, even though the hardware is capable of it using PAE)...

Windows on ARM won't run x86 applications natively, and if they provide an emulation option it will almost certainly be extremely slow.

Re:ARM Windows (1)

whizzter (592586) | more than 3 years ago | (#35398576)

Actually, I think you can enable PAE with a bit of hacking.

There are, however, a few big problems with PAE.

1: Pages in memory are 4MB instead of 4KB; some programs make silly assumptions about page size, and that decreases compatibility.
2: Even more severe, many third-party drivers do the same. Thus PAE mode would induce a whole world of hurt in terms of compatibility and system crashes.
3: Bloat... since most programs have code, constant, stack and data areas that usually end up on separate pages, every minor app will require something like 16MB of memory. Not a big problem for a server with lots of memory and few programs, but worse in desktop settings.

So... by the time you go to PAE you might as well just jump to 64-bit because of the driver and app issues, and that will also be more efficient, since it's really only the kernel that will consume more memory due to larger pointers.

The apps are still 32-bit for the most part but can be put into separate areas with virtual mappings at a small (4K) granularity that doesn't induce bloat.

Re:ARM Windows (1)

dave420 (699308) | more than 3 years ago | (#35398826)

You can enable PAE rather easily [tipandtrick.net] .

Re:ARM Windows (3, Interesting)

UnknowingFool (672806) | more than 3 years ago | (#35398454)

It's the same as with Windows 64 bit and why we didn't saw much of it despite the prices for RAM are very low. I guess with Windows 7 the developers finally released some software for 64 bit. That's what, like 9 to 10 years since AMD came with the amd64 architecture?

The reason 64-bit wasn't adopted quickly was more about need vs. features. The model MS chose for their 64-bit migration (LLP64) meant that 32-bit programs were backwards compatible. So there was no need for a consumer to get 64-bit Office because 32-bit Office would work fine on 64-bit Windows. If all the 32-bit programs worked either way on a 64-bit or 32-bit OS, there wasn't much of a push to migrate. Unfortunately, 64-bit Windows would often require new drivers. So there were more negatives to moving to 64-bit on Windows unless the consumer had a specific need like more memory addressing. For the most part, businesses were more open to using 64-bit Windows Server, as there was a need in many cases to access more than 3GB of RAM.

Software companies that wanted to take advantage of 64-bit for Windows had to maintain separate 32-bit and 64-bit binary and source-code versions during the migration. While the 32-bit version would work on either Windows flavor, the 64-bit version would not work on a 32-bit OS. Many companies were reluctant to maintain two versions, especially if moving to 64-bit provided no real advantage.

The Linux/Unix/OS X model (LP64) took a different approach, as that model focused more on forward compatibility. A 32-bit program could be made into a 64-bit program with a recompile and testing to ensure there were no special scenarios that required 32-bit addresses, etc. Software companies would have to maintain two binary versions but for the most part could maintain one version of the source code. With Linux/Unix/OS X, a great deal of software was open source, so it was far easier to make this migration.

Re:ARM Windows (1)

EvanED (569694) | more than 3 years ago | (#35398630)

Software companies that wanted to take advantage of 64-bit for Windows had to maintain separate 32-bit and 64-bit binary and source code versions during the migration. ... A 32-bit program could be made into a 64-bit program with a recompile and testing to ensure there were no special scenarios that required 32-bit addresses, etc. Software companies would have to maintain two binary versions but for the most part could maintain one version of source code.

Uh what? I'm curious what characteristics of the Windows model meant that the similar "recompile" model wouldn't work on Windows. Because I don't know of any.

Re:ARM Windows (1)

EvanED (569694) | more than 3 years ago | (#35398682)

Uh what? I'm curious what characteristics of the Windows model meant that the similar "recompile" model wouldn't work on Windows. Because I don't know of any.

Actually that's not quite true; I can think of at least one difference: if you use native C types and use a long to store a pointer value. With the typical Windows model, that would result in a truncation; with the typical Linux model, that would continue to work.

That said, that code is latently broken anyway; in some sense it doesn't deserve to work in the first place. But that doesn't mean that the Windows crew has to maintain two source trees; that's ridiculous. What it means instead is that if you have an 'int' variant that you want to hold an address, you should use 'intptr_t' instead. Which is what the Linux software should be doing anyway.

Re:ARM Windows (1)

UnknowingFool (672806) | more than 3 years ago | (#35398958)

That said, that code is latently broken anyway; in some sense it doesn't deserve to work in the first place. But that doesn't mean that the Windows crew has to maintain two source trees; that's ridiculous. What it means instead is that if you have an 'int' variant that you want to hold an address, you should use 'intptr_t' instead. Which is what the Linux software should be doing anyway.

Yes, they could avoid the problem if everyone wrote in standard ANSI C99, but as you are no doubt aware, C varies when factoring in compiler and hardware differences. Add to that, software companies coding for Windows may not code in standard C but in Microsoft C++, or on one of the MS programming platforms like .NET, where they use MS data types instead of the low-level "intptr_t".

Re:ARM Windows (2)

UnknowingFool (672806) | more than 3 years ago | (#35398844)

Read about LLP vs LP [wikipedia.org] . MS chose LLP, which introduced a new 64-bit data type: long long (that's not a typo), and kept a 32-bit type: long. In terms of backwards compatibility, LLP means you don't have to recompile or do anything to have your 32-bit program still work. However, you cannot simply recompile a 32-bit program to take advantage of 64-bit; you actually had to change source code and recompile. In some cases, it was going to be a simple find and replace; in other cases, it wasn't that easy. This means that going forward, you would have to maintain two different source code trees and thus two binary versions.

The LP model redefines "long" to be 64-bit. Unless there was some weird code that blew up if "long" went above 32-bit, all that would be required was a recompile. You would have to maintain two binary versions but you could maintain one version of source code.

Re:ARM Windows (1)

EvanED (569694) | more than 3 years ago | (#35398902)

See my reply to myself [slashdot.org] that I wrote almost immediately.

I was off in some fantasy world where code was actually generally well-written and didn't make architecture-specific assumptions about the size of integers.

That said, you still have a couple things wrong. First, LLP doesn't really "introduce" long long; that's been usable even in 32-bit software for ages. Second, while you do have to do some rewriting to get your code 64-bit clean if you aren't in my magical fantasy world, it's absolutely wrong to say you then need to maintain two source trees.

Re:ARM Windows (1)

UnknowingFool (672806) | more than 3 years ago | (#35399320)

That said, you still have a couple things wrong. First, LLP doesn't really "introduce" long long; that's been usable even in 32-bit software for ages.

Not according to unix.org [unix.org] . What you are talking about is the C99 standard, which isn't a data model but a programming language specification. I don't have a definitive history of LP64 vs LLP64, but the paper from unix.org suggests that it predates C99 by at least a year. Also, as you are no doubt aware, not every compiler follows the C99 standard fully. GCC supports it mostly, and some compilers use the older C89 standard. And if you are on Windows you are not using a C99 compiler; you are using the MS Visual C++ compiler, which is not compliant.

Second, while you do have to do some rewriting to get your code 64-bit clean if you aren't in my magical fantasy world, it's absolutely wrong to say you then need to maintain two source trees.

If you want to develop and maintain your own 64-bit data structures and coding to ensure a 32-bit OS handles them correctly, then no, you don't need to maintain two source trees. However, you would have to maintain those data structures forever instead of relying on MS data types. Or you could do the very messy task of putting #IFDEFs everywhere to separate your 64-bit/32-bit parts if you wanted to use the MS data types but keep one version of the source code. You could do all of that. Or you could maintain two versions. I would think it's far easier to maintain two versions.

Re:ARM Windows (1)

EvanED (569694) | more than 3 years ago | (#35399584)

Not according to unix.org. What you are talking about is the C99 standard which isn't about data models but a specific programming language specification.

'long long' was a common compiler extension well before C99. It was available by at least GCC 2.7.2.3, which was released Aug 1997. That's the earliest version I have access to without compiling stuff.

And to some extent you're right about the disconnect between the data model and the C language -- but at the same time, IMO they're so tightly coupled in Unix that I also think it's reasonable to talk about what was going on in the world of C.

However you would have to maintain those data structures forever instead of relying on MS data types. Or you could the very messy task of putting in #IFDEF everywhere to separate your 64-bit/32-bit parts if you wanted to use the MS data types but keep one version of source code. You could do all of that. Or you could maintain two versions. I would think it's far easier to maintain two versions.

While I'll admit that I don't have a ton of experience with coding in "the Windows way" using all their DWORD and LPVOID jazz, and I don't know exactly what effect the 64-bit switch has on them, I'm still not really seeing the problem. C provides 'intptr_t' for an address-sized integer, and Windows provides ULONG_PTR and DWORD_PTR. All of these types behave like 'long' does under Linux: it's 32-bit when compiling for a 32-bit target and 64-bit when compiling for a 64-bit target.

Re:ARM Windows (0)

Anonymous Coward | more than 3 years ago | (#35399562)

I work for a company that maintains and sells 32 and 64 bit software for Windows, Linux, Mac OS and other platforms from common source, and has been for years. I really feel like you're overblowing the significance of LP vs. LLP - anybody seriously in the business of cross-platform development should have a clean enough code base, and mature enough development practices, where this isn't really an issue at all.

Re:ARM Windows (1)

UnknowingFool (672806) | more than 3 years ago | (#35399824)

Not everyone who wrote a program for Windows considered 64-bit migration when they wrote their code. Nor would they need to consider it if their code never needed to take advantage of 64-bit data structures. If they wanted to take advantage of it, there was going to be a debate about the best approach going forward and how to support both 32-bit and 64-bit Windows. The effort wasn't exactly zero.

Re:ARM Windows (1)

cyber-vandal (148830) | more than 3 years ago | (#35398978)

The lack of a 64-bit version of Office was probably an issue too. Although Office 2010 is now 64-bit, none of the previous versions were.

Re:ARM Windows (0)

Anonymous Coward | more than 3 years ago | (#35398478)

The 1st version of 64-bit Windows that is attractive to corporations is Windows 7 SP1, which is only about a month old. Many companies wait until at least SP1 before upgrading to a major new version of Windows.

I don't recall Windows XP 64-bit being marketed or bundled with new PCs like Windows 7, so XP 64-bit was an experiment. And Vista? Heck... even the 32-bit version was, IMHO, beta quality, so Vista is also ruled out as a viable 64-bit Windows OS.

So that leaves us with Windows 7 64-bit as the first viable 64-bit version of Windows. Before SP1, about half of all copies of Windows 7 sold/bundled were 64-bit. Starting this year, the 32-bit version of Windows 7 will be in the minority for new systems. When the average PC ships with more than 4 GB of RAM, 32-bit market share will start falling off a cliff instead of today's gradual decline.

Re:ARM Windows (1)

CreateWindowEx (630955) | more than 3 years ago | (#35398534)

It's actually easier to recompile existing 32-bit x86 code for 32-bit ARM than for 64-bit x64, especially if Microsoft released an ARM backend for the Visual C compiler. As long as Windows-for-ARM came out before too many applications transitioned to 64-bit only, it's easy to imagine it could succeed.
If they're aiming at the tablet/netbook market, then the lack of hardware drivers won't be a problem; they just need to support the on-board hardware and a few key applications (IE, Office, Flash). Ironically, if Apple's AirPrint takes off, they won't even need printer drivers. And if they were able to run .NET, that would give them a lot of compatibility for free, even for in-house corporate apps.

Re:ARM Windows (1)

EvanED (569694) | more than 3 years ago | (#35398604)

I guess with Windows 7 the developers finally released some software for 64-bit. That's what, like 9 to 10 years since AMD came out with the amd64 architecture?

XP 64 got off to a bit of a bad start, but I haven't run anything but 64-bit Windows on my home system for years; that goes back to the pre-Win 7 era (I was running 64-bit Server 2008 for a year or so).

And even the XP case is overstated, at least nowadays. Until fairly recently I was running 64-bit XP at work (I've switched to Linux); several other people in my group still are. It works fine.

Re:ARM Windows (1)

RyuuzakiTetsuya (195424) | more than 3 years ago | (#35398694)

By releasing prototype hardware to devs before going to launch so apps do exist for the platform?

We are no longer in the paradigm of "will my apps run?" but "will there be an app that lets me do $task with $data?"

Re:ARM Windows (0)

Anonymous Coward | more than 3 years ago | (#35398924)

No. They will just run slowly in emulation mode.

ARM Windows but not on desktop PC (1)

erice (13380) | more than 3 years ago | (#35399360)

I don't think desktop machines will move, or at least not move easily. However, unlike 1993, desktop machines aren't quite the PC universe anymore. On the top, we have legions of rack mounted servers. Coming up from the bottom are smart phones and tablets. Neither of these segments is as tightly wedded to Windows as the desktop. Tablets today already run ARM and don't run Windows. For Microsoft, this must be very disturbing.

With servers, the move hasn't happened yet, but data centers are seriously looking at ARM. Microsoft is trying to make sure their OS and applications don't get dumped along with the power-hungry x86 servers they run on.

Duh (1)

c (8461) | more than 3 years ago | (#35397998)

Stupid question.

Of course any system builder will tell you they'd "consider" ARM for Windows 8. They'd also "consider" building 9.6GHz 8088 systems running MS-DOS powered by the blood of virgins if that's where it looked like the market might go.

Re:Duh (1)

RobertM1968 (951074) | more than 3 years ago | (#35398360)

Stupid question.

They'd also "consider" building 9.6GHz 8088 systems running MS-DOS powered by the blood of virgins if that's where it looked like the market might go.

Is there a website I can pre-order on?

Humongous Dud (0)

Anonymous Coward | more than 3 years ago | (#35398014)

Windows for ARM is going to be a humongous dud. The whole point of using Windows is backwards compatibility. Microsoft would have to use dynamic translation techniques similar to those used by Apple in the move from 68k to PowerPC to x86/x64, but ARM cores aren't faster than the Intel chips they're replacing and any power consumption advantage that ARM may have would be more than eliminated.

On the plus side, if manufacturers actually fall for it (as opposed to just using it as a bluff to put pressure on Intel), then we'll see more devices on which we can put Linux. I love my ARM based servers.

Re:Humongous Dud (1)

EvanED (569694) | more than 3 years ago | (#35398838)

You make a lot of assumptions here, and I don't think that they necessarily pan out in practice.

The whole point of using Windows is backwards compatibility.

That's part of it, but it's not the whole story. For instance, I use Windows on my home box partially because I'm one of those strange people who actually like it. (Actually that's not quite true; in reality, I tend to like Linux and Windows about the same, and I dislike both. But the important thing is that Linux and Windows annoy me in different ways, and I use Linux at work, so by running Windows at home I get some variety in my frustration instead of it always being focused on one thing.)

Now I do have some Windows-only software that I use (a couple games and Adobe Lightroom), but 90% of the time I've just got stuff that has easy cross-platform replacements.

Which brings me to my next point:

Microsoft would have to use dynamic translation techniques ... and any power consumption advantage that ARM may have would be more than eliminated.

But what if they only have to use those dynamic translation techniques 5% or 10% of the time? There are a lot of people whose use probably falls under that: most of the time they're using software they could get ARM versions of easily enough, or change to a slightly different program if they couldn't, and then once every few days they'd use a program that would need binary translation.

Now, does that apply to everyone? No. But I think it would apply to most people. (Look at how many people have switched to doing a lot of stuff on their iPhones and iPads.) The bigger question is how savvy you'd need to be to pick up the fact that you should and could get a new version of the software. (And here again I think it'd be reasonably easy to do that.)

Finally, who says that ARM needs to be confined to systems where battery life is important? I'm waiting with somewhat bated breath for more information about nVidia's Project Denver. Those are chips meant for desktops and servers. Will they be good enough to pick over whatever Intel and AMD have out at the time? I dunno... quite possibly not. But they may well be. And that would be very exciting.

Well ... (0)

Anonymous Coward | more than 3 years ago | (#35398016)

I just hope it won't be as bad as the current netbooks with Windows 7 Starter.
I had to deal with those as part of a software quality assurance project, and if I never have to deal with crap like that again it'll still be too soon.

What kind of devices? (1)

gmuslera (3436) | more than 3 years ago | (#35398106)

Windows won't have an interface meant for, e.g., tablets till late next year. If they want an OS for a full range of devices, they should go one way or another with Linux, be it Android, WebOS, MeeGo, or even a normal distribution like Ubuntu with the right desktop environment. Even Maemo would be a better alternative if it didn't have so many closed components. Not sure which other alternatives are around: iOS? The PlayBook's OS? Apple and RIM won't license their OSes to others; they want to sell the devices and keep the ecosystem for themselves.

Doesnt make sense. (0)

Anonymous Coward | more than 3 years ago | (#35398126)

The whole article simply doesn't make sense.

Manufacturers manufacture hardware; it's up to users to decide what OS it will run (or sales figures will decline if that choice isn't there). M$ made the decision to have their future OSes be compatible with ARM. Now what? Will manufacturers purposely create devices that are not Windows compatible?

Quality, as per the flame wars in here, has nothing to do with an OS being supported or not by hardware vendors.

Re:Doesnt make sense. (1)

UnknowingFool (672806) | more than 3 years ago | (#35398566)

Manufacturers manufacture hardware, it's up to users to decide what OS it will be on(or decline sales figures if such choice is not there). M$ made decision to have their future OS's to be compatible with ARM. Now what? Manufacturers will be on purpose create devices that are not Windows compatible?

For the most part, consumers don't care what OS they use. They just want their devices to work. Manufacturers may not care about developing software for their hardware, but they care whether their product has software. The lack of software would mean far fewer sales to consumers. In the past, if Asus wanted to make an ARM-based laptop, they had fewer choices of OS and thus of software. Asus could either use Linux or BSD or come up with their own OS. And how many software makers would write software for their laptop? So Asus would have to develop some of their own software just to make it usable to consumers. As you would no doubt agree, Asus doesn't want to develop software at all. They would rather install an OS like ARM-based Windows and have MS worry about the software side of things. All of this was before Android, which has provided hardware makers with another choice.

So a hardware maker either has to develop an ecosystem for their ARM-based hardware or simply not make the device. Apple and RIM have gone to the trouble of developing the entire ecosystem. Such a task is neither easy nor without risk. For the most part, Apple has succeeded by slowly building their ecosystem over several years. The reason Android represents a larger threat to MS than Apple does is that Apple competes with MS for consumer usage indirectly; Android competes with MS directly for OEM partnerships.

Editors, please, don't allow publish such articles (1)

Pecisk (688001) | more than 3 years ago | (#35398128)

What's the aim of this article? What's the reasoning to begin with? Right, ARM is the next hot cake, and Microsoft has no presence whatsoever on this platform. Therefore it must fall back on PR companies which try to push articles like "Waiting for Windows 8", "ARM will be supported in Windows 8", "Hey, did you know Windows 8 is the next best thing?" on portals like Slashdot.

Of course manufacturers will try to support any major operating system in the market - that includes Windows - if suddenly full-blown Windows on ARM becomes reality. So is this worth a separate article on Slashdot?

Not gonna happen (0)

JamesP (688957) | more than 3 years ago | (#35398162)

MS needed to have woken up some 10 years ago.

Do they have a Windows version running on HW other than x86? Apart from the XBOX 360, of course not.

They used to, but they believed their own crap about Wintel blah blah blah.

Granted, they did a version for the Itanic.

But MS are the ones who were ultra-sluggish with AMD64.

Apple had something like 3 versions of OS X on x86 before switching to Intel

And then people will buy and complain their 5 year old program doesn't work anymore.

Re:Not gonna happen (1)

the linux geek (799780) | more than 3 years ago | (#35398290)

Every version of Windows since NT 3.1 has run on architectures other than x86. 3.1 ran on MIPS, Alpha, and x86. 3.5 added PPC. 2000 killed those except Alpha (which was internal-only) and added IA64. XP added AMD64. Win8 is killing IA64 and adding ARM.

WinCE + (1)

nurb432 (527695) | more than 3 years ago | (#35398750)

Don't forget the embedded devices running Windows too.

Re:Not gonna happen (0)

Anonymous Coward | more than 3 years ago | (#35399184)

The problem is, the applications didn't. What good is Windows if none of the applications work, neither the in-house built ones nor the ones on the market?

Two differences are the net and the .NET (1)

tepples (727027) | more than 3 years ago | (#35399554)

One difference between Windows NT 4 and Windows 8 is that the latter has the .NET Framework. Once Microsoft ports the CLR and the UI toolkit, all fully managed applications are automatically ported. This includes any Silverlight application and any XNA game.

Another difference is that since the NT 4 days, home Internet access has become ubiquitous, mobile Internet access has become practical even if at a luxury price, efficient techniques for interpreting dynamic languages such as JavaScript have become known, and even APIs to let a web application run with an intermittent connection have been introduced. Web applications have begun to take advantage of these.

Hope the taiwanese got a big warehouse (1)

SmallFurryCreature (593017) | more than 3 years ago | (#35398292)

MS has in the past had its problems with delivering on time, and companies have gotten burned if they planned a product release that needed an unreleased MS product while MS was dragging its feet. Early Win95 games come to mind. There was a reason Quake was a DOS game. Blue Byte took a hit on making their next Battle Isle game require Win95.

I would be very hesitant to plan hardware yet on a completely unproven platform from a company that has never cared a tiny bit about its customers. See MS and the long, long delay with 64-bit support until Intel and Dell were ready, basically screwing AMD out of its lead advantage.

Just be careful, Taiwan; you don't want to end up with a stack of hardware getting outdated while MS delays the release month after month.

Re:Hope the taiwanese got a big warehouse (1)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#35399142)

I strongly suspect that(in addition to the usual "people will tell annoying pollsters whatever they want to hear" effect), the OEM/ODM guys are not going to be taking any major risks on this one:

There is already a reasonably steady market for ARM-based android widgets, NAS devices, etc, etc. Microsoft will, presumably, have their own set of special requirements(as with tablet PCs needing a ctrl+alt+del key) and some sort of minimum spec floor; but the basic nature of the ARM SoC market means that they won't really have an option other than choosing one or more, likely the higher specced ones, as blessed platforms. At that point, producing a "Windows 8" variant will likely require nothing much more than a button layout and bootloader change...

Economics rule this out. ARM/MIPS Laptops... (1)

lkcl (517947) | more than 3 years ago | (#35398524)

the issue that you've got is that a) microsoft is not going to have windows for ARM until 2013, and even then it is impossible to get third-party developers to do total rewrites of drivers; b) emulation of x86, even with hardware assistance (similar to jazelle), only provides something like 30% equivalent performance. so you have a great processor, maybe a 2ghz dual-core if you get the one from nufront, you smash its capabilities down to a staggeringly mundane 700mhz, and you can only get up to about 1.5gb of RAM because you need at least some memory for the host OS.

now, yes you could instead use the ICT's "Godson" upcoming GS464V Quad-Core MIPS processor, which will have over 200+ hardware-accelerated assistance emulation instructions, but this CPU is designed to target the Chinese Government's desire to have the fastest supercomputer in the world - it would just also so happen to make a great Desktop / Server product, too, and the target power consumption is just a tad higher than any ARM processor.

overall, then, this is, very unfortunately, just pure wishful thinking on the part of every single taiwanese manufacturer. it's quite simple: to emulate another OS, the performance hit is so high that to compensate you might as well stick with x86 processors; even if the higher-performance ARM or MIPS processors were available, they would actually be significantly more expensive.

so instead, why not accept the fact that much much cheaper systems can be made, based around such low-power high-integrated embedded ARM and MIPS processors, and let the buyers decide?

http://lkcl.net/laptop.html [lkcl.net]

Re:Economics rule this out. ARM/MIPS Laptops... (1)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#35399206)

Unless MS is playing their classic "attempt to scuttle competitor's existing product with reports of what they will have Real Soon Now(tm)" game, or isn't going about this very cleverly(either is definitely possible); I would expect any push into non-x86 architecture to make heavy use of their .NET CLR stuff.

Virtualizing any classic win32 x86 binaries on ARM is going to suck so much, in terms of performance, that they might as well not bother. By the time Windows 8 actually makes it out the door, Intel will have something that may not beat ARM in the low-power game; but will curb-stomp ARM-emulating-x86. However, if Microsoft has an ARM CLR up and going, all the outfits that have been drinking the kool-aid for the past few years should need to do little more than drop their x86 installer packages in order to be fully compatible(and even if some x86 installshield package needs to be emulated long enough for it to copy over the .NET components, that won't be the end of the world)...

Re:Economics rule this out. ARM/MIPS Laptops... (1)

lkcl (517947) | more than 3 years ago | (#35399536)

i did hear that ARM has a jazelle-like acceleration for CLR. it is not well-understood, and, crucially as you point out, there isn't much call for it because you can't run silverlight on a non-existent OS! :)

logo? (1)

gbjbaanb (229885) | more than 3 years ago | (#35398752)

If this is an ARM story, why is the logo set to AMD's? They haven't bought them out yet, have they?
