


Intel Confronts a Big Mobile Challenge: Native Compatibility

Soulskill posted about 6 months ago | from the write-once-run-nowhere dept.


smaxp writes: "Intel has solved the problem of ARM-native incompatibility. But will developers bite? App developers now frequently bypass Android's Dalvik VM for some parts of their apps in favor of faster native C code. According to Intel, two-thirds of the top 2,000 apps in the Google Play Store use natively compiled C code, the same language in which Android, the Dalvik VM, and the Android libraries are mostly written.

The natively compiled apps run faster and more efficiently, but at the cost of compatibility. The compiled code targets a particular processor core's instruction set, and in the Android universe that instruction set is almost always ARM. This is a compatibility problem for Intel, because its Atom mobile processors use the x86 instruction set."


Ha ha (1, Interesting)

Anonymous Coward | about 6 months ago | (#47179231)

Can I just say that I remember people moaning about the ARM not being x86 compatible when it came out in '87 or thereabouts.

(Yes, I am that old.)

Fsck x86 (4, Insightful)

Anonymous Coward | about 6 months ago | (#47179233)

I like compatibility, but I've had it with x86. Let ARM hog the limelight for a while; no reason it shouldn't have its fifteen minutes. And let x86 die; it's way past its BBE date and has outstayed its welcome by several generations.

Re:Fsck x86 (5, Insightful)

Anonymous Coward | about 6 months ago | (#47179507)

This person is likely in their 20s, I am assuming early 20s. With that said, I am in my 30s, somewhat early. My first PC was an 8088 and I've deep dived into every modern processor since then. Even with the debacle that was Windows 7 and 8, I am still going to stand behind x86 as a great architecture that can stand the sands of time.

Scalability: What other architecture has scaled so far that it completely decimated two competing architectures from the past and the future at the same time? The original 8088/86 was 3 MHz; the latest x86 offering is 4 GHz.

Popularity: Both Apple and Sun saw the writing on the wall, Sun saw it too late, Apple saw it early (or saw what happened to Sun). They both shifted from a proprietary processor and chipset to a more common and popular platform. Both platforms had specific benefits over x86 until x86 scaled far and beyond what they both offered.

Backwards Compatibility: I know my x86 processor is still going to start in 8-bit mode and I know that I can put it in 16-bit mode and run my 1992 applications. But to that extent, x86-64 just extends the instruction set, e.g. ARM32 does not play on ARM64.

Let's face it. I witnessed Y2K. I witnessed every weak architecture under the sun get wiped out because it had shortcomings. Intel designed the best architecture with x86 and naysayers generally harp because it's "too big". I, for one, plan to teach my children x86 ASM so they understand the basics.. then let them find MIPS or ARM or whatever-fad-arch-is-current so they too can appreciate the design of x86.

Re:Fsck x86 (5, Funny)

lagomorpha2 (1376475) | about 6 months ago | (#47179557)

I, for one, plan to teach my children x86 ASM so they understand the basics.. then let them find MIPS or ARM or whatever-fad-arch-is-current so they too can appreciate the design of x86.

This is just a guess but you don't actually have children yet, do you?

Re:Fsck x86 (0)

Anonymous Coward | about 6 months ago | (#47179785)

I do (I am the person you are replying to). I have a two-year-old and another on the way. Yes, I know my children may not find programming computers to be interesting... but it is a geek father's hope, isn't it?

Re:Fsck x86 (1)

Anonymous Coward | about 6 months ago | (#47179605)

Backwards Compatibility: I know my x86 processor is still going to start in 8-bit mode

x86 never had an 8-bit mode.

But to that extent, x86-64 just extends the instruction set. eg ARM32 does not play on ARM64.

All the information I've seen says that 64-bit ARM chips can run 32-bit ARM code, at least in user mode (not sure about kernel mode).

Re:Fsck x86 (0)

Anonymous Coward | about 6 months ago | (#47179611)

>What other architecture has scaled so far that it completely decimated two competing architectures from the past and the future at the same time?

Doesn't mean the architecture isn't a sprawling pile of shit with shit tacked on of other shit on top of older shit.

Re:Fsck x86 (0)

Anonymous Coward | about 6 months ago | (#47179813)

Aye, and ARM is new, hip and fresh without all the cruft, right? ARM also doesn't scale nicely, so we should use ARM until we have something else that is better than ARM, and then we migrate everything to that platform, right? Because everyone will jump ship at the same time and we will never need to support multiple platforms. I mean, it worked for Amiga so it should work for ARM, right? Oh wait...

What can we replace x86 with, today, that won't also turn into a sprawling pile of backwards compatible shit?

Re:Fsck x86 (1)

K. S. Kyosuke (729550) | about 6 months ago | (#47179887)

ARM also doesn't scale nicely

You figured that out...how exactly? It's a RISC, RISCs have historically scaled very, very well.

Re:Fsck x86 (4, Interesting)

drinkypoo (153816) | about 6 months ago | (#47179967)

You figured that out...how exactly? It's a RISC, RISCs have historically scaled very, very well.

What does that even mean any more? Visibly-CISC processors are now internally-RISCy (All of them since Am586) and there is actually a benefit to be derived from variable-length instructions and the x86 decoder is a small portion of the modern CPU. But ARM cores have never gotten up into the big, big clock rates because they've never been designed for them, instead targeting efficiency. That's a much easier goal to reach than bigger shinier if you're on a constrained budget, and it certainly has paid off for them now that we care about power budgets, but they're still having trouble scaling.

I'd sure like to hear anything insightful anyone has to say about XScale. It was the fastest ARM implementation of its day, but it was also the most power-hungry, and AFAICT Intel never really managed to scale either performance up or power consumption down after their initial release, and then dropped it. Is that how it played out?

Re:Fsck x86 (1)

K. S. Kyosuke (729550) | about 6 months ago | (#47180343)

What does that even mean any more? Visibly-CISC processors are now internally-RISCy (All of them since Am586)

How is that relevant to the issue I was commenting on? That doesn't make ARM scale worse, that makes x86 scale better, doesn't it?

and there is actually a benefit to be derived from variable-length instructions and the x86 decoder is a small portion of the modern CPU

Sure, if you're cranking out single thread performance. Which may not be the best thing to do in all application areas, though. Especially if you're doing something like ARM's big.LITTLE configuration, where you may not want the LITTLE cores to have complicated instruction decoders - to keep them actually simple - but you still need them to support the same ISA, otherwise you'd have to use fat binaries and you wouldn't be able to move threads between the cores (not without huge hassles at least).

Re:Fsck x86 (3, Interesting)

gbjbaanb (229885) | about 6 months ago | (#47179767)

x86 is good in the same way that a modern police baton is good - it's still a stick you hit people with, and serves its purpose. But there are better weapons available.

So saying that x86 is great because technology has had to improve to make up for its deficiencies is just stupid. x86 isn't some wonderful architecture, putting 4 cores on a single die isn't anything that x86 made happen that others couldn't do, fabrication techniques that shrunk the die size isn't anything to do with x86 either.

Consider that the Motorola 68000, way back in the day, was better than the old 286s it compared to. Imagine that the 68000 took off instead of the 286 - if MS and IBM had built DOS for 68000 instead of x86... today we'd be in pretty much the same position but with a different chipset. But it would be faster and cheaper and more efficient.

BTW x86 32-bit doesn't run on x86_64 either. The software and chips have emulation routines that allow it to happen. The same as happens with A64 that allows old A32 and T32 instructions to still run on the same chip.

Re:Fsck x86 (1)

K. S. Kyosuke (729550) | about 6 months ago | (#47179907)

x86 is good in the same way that a modern police baton is good - it's still a stick you hit people with, and serves its purpose. But there are better weapons available.

So Intel is essentially crying "Don't ARM me, bro!"...? ;-)

Re:Fsck x86 (0)

Anonymous Coward | about 6 months ago | (#47180355)

Imagine that the 68000 took off instead of the 286 - if MS and IBM had built DOS for 68000 instead of x86... today we'd be in pretty much the same position but with a different chipset. But it would be faster and cheaper and more efficient.

[Citation needed]

Re:Fsck x86 (1)

Cornelius the Great (555189) | about 6 months ago | (#47180643)

BTW x86 32-bit doesn't run on x86_64 either. The software and chips have emulation routines that allow it to happen. The same as happens with A64 that allows old A32 and T32 instructions to still run on the same chip.

Disregarding built-in microcode that converts CISC instructions into simpler RISC-like operations, this statement is not accurate. All x86-64 processors have the same native 32-bit registers and instructions that the original 386 had (some may be deprecated, but IIRC there is 100% compatibility). No hardware emulation is being done.

You may be confusing the virtual memory translation scheme (Wow64) that Windows uses to run 32-bit processes in Windows x64. Yes, there is some slight overhead, but it isn't considered to be emulation.

Re:Fsck x86 (2)

Type44Q (1233630) | about 6 months ago | (#47180651)

BTW x86 32-bit doesn't run on x86_64 either. The software and chips have emulation routines that allow it to happen.

Unless I'm mistaken, that's completely incorrect.

Re:Fsck x86 (2)

dave420 (699308) | about 6 months ago | (#47179775)

Windows 7 was not a debacle. ME & Vista, fair enough, but not 7.

Re:Fsck x86 (5, Interesting)

OneAhead (1495535) | about 6 months ago | (#47180161)

First of all, at this point it is misguided to talk about x86 as an architecture; there is generally little or no architectural overlap between two x86 processors that are a few generations apart. x86 is an instruction set or, more correctly, a family of instruction sets. The distinction is important, especially in the age of complex instruction decoders; a lot of the more complex x86 instructions are internally decoded into smaller pieces, and one could say that the CPU internally runs its own, different instruction set. Supporting a certain instruction set nowadays says almost nothing about the underlying architecture.

So we're talking about an instruction set. One that was conceived in an age where manual coding in machine language was far more common than it is today; the original x86 instruction set was designed to be easy for a human bitpusher to handle, whereas newer instruction sets like ARM are more geared to get the most out of a decent optimizing compiler. What followed for x86 was extension upon extension, and the instruction set is now so byzantine that x86 is a very difficult market to break into; the design complexity of the decoder can probably be overcome, but all the cross-licensing between Intel and AMD cannot. The complex, organically-grown instruction set also leads to some waste of silicon in having to support all those instructions, and waste of performance/energy efficiency in that the instruction set is not designed from the ground up towards efficiency. People on the x86 side make a compelling argument that this has become negligible, but the fact remains that I'm still not seeing any x86 processor getting (unbiased) performance/W scores that are close to common ARM processors.

My first computer contained a Z80, a true 8-bit processor (your claim that x86 has an 8-bit mode is false; the lowest common denominator for x86 is the 16-bit 8086, which you're probably confusing with the 8-bit 8080, which is not x86 compatible). More relevantly, I also have experience running scientific workloads on a whole zoo of processors. I have particularly fond memories of the later Alphas, which wiped the floor with everything up until and including the Pentium 4, and were very competitive even against the Athlon 64 and Core 2 performance-wise (but not price-wise). Repeat after me: x86 has zero inherent architectural advantage! The big advantages it has are (1) economies of scale and (2) the higher profits of a mass market, which generate more revenue to be pumped into R&D. There hasn't been a kid on the block that could compete on these fronts - not until ARM came around.

While Intel is sitting on an impressive pile of cash and R&D potential, their attempts to match ARM in performance/W have so far been unsuccessful when looking at non-biased benchmark results, and ARM has profited from this in establishing a mass market of its own. Things are about to get interesting from here onwards. I can't predict whether x86 or ARM will win. The outcome might be coexistence, with x86 keeping its dominance in the server room, workstation, office and hard-core gaming, and everything else becoming ARM. Whatever the outcome might be, I am firmly rooting for ARM. Technically, its leaner, more rationally designed instruction set is more frugal with resources (die size, cache and memory usage) and better reflects present-day usage cases. And the fact that it's relatively straightforward for a newcomer to license it and start making chips will bring some healthy competition onto the stage.

Re:Fsck x86 (1)

the_B0fh (208483) | about 6 months ago | (#47180375)

Wow. A factual followup. Wish I had mod points.

Re:Fsck x86 (3, Insightful)

ezelkow1 (693205) | about 6 months ago | (#47180279)

then let them find MIPS or ARM or whatever-fad-arch-is-current so they too can appreciate the design of x86.

MIPS and ARM as fads? You do realize MIPS has been around almost as long as x86 has, and is still widely used. People all too often forget that the majority of devices out there are not full-fledged computers; they are embedded devices, a space which MIPS and ARM own. This is exactly why MIPS is still widely taught in colleges: it is readily accessible, open, and still used in the industry. It also gives a good foundation to build on when looking at other ISAs.

Re: Fsck x86 (0)

Anonymous Coward | about 6 months ago | (#47180695)

MIPS is taught in schools because the instruction set has one particularly useful feature: sanity. I've written in MIPS and TI C6X and when I look at x86 code, my head begins to turn inside out. It's a complete fucking mess.

Re:Fsck x86 (1)

the_B0fh (208483) | about 6 months ago | (#47180323)

What kind of nonsense are you spouting? Every processor out there with the exception of that one open sourced one, *IS* proprietary! *YOU* go try and make an x86 and see if Intel sues the pants off you.

You also don't understand what scalability and all the other big words mean. You don't even know that your Intel processor is now effectively a RISC core with a CISC layer of microcode on top.

And who the hell cares if you can run 8bit or 16 mode shit? Those were all design inefficiencies and now you are fucking bragging about it? What an idiot.

Re:Fsck x86 (1)

sexconker (1179573) | about 6 months ago | (#47180621)

This person is likely in their 20s, I am assuming early 20s. With that said, I am in my 30s, somewhat early. My first PC was an 8088 and I've deep dived into every modern processor since then. Even with the debacle that was Windows 7 and 8, I am still going to stand behind x86 as a great architecture that can stand the sands of time.

I doubt your age and I doubt your "deep diving" into "every modern processor". What does Windows 7 or 8 have to do with x86 as an architecture? In what way was Windows 7 a debacle? How is Windows 8 a debacle? The overblown bullshit about MS locking it down? The raving lunacy about the new Start menu? Seems to me the media has the problem, not the OS.

Scalability: What other architecture has scaled so far that it completely decimated two competing architectures from the past and the future at the same time? The original 8088/86 was 3 MHz; the latest x86 offering is 4 GHz.

Decimate means to reduce by one tenth. WTF does the clock speed increase over decades have to do with anything?

Popularity: Both Apple and Sun saw the writing on the wall, Sun saw it too late, Apple saw it early (or saw what happened to Sun). They both shifted from a proprietary processor and chipset to a more common and popular platform. Both platforms had specific benefits over x86 until x86 scaled far and beyond what they both offered.

x86 is hardly any less proprietary than PowerPC or SPARC. You've got Intel and AMD at the helm. VIA walked the plank ages ago.
Apple ditched PowerPC because Apple's market share was so fucking low that the only company compiling for PowerPC was Adobe. The decision to drop PowerPC had to do with market share and cost, not the architecture itself.

Backwards Compatibility: I know my x86 processor is still going to start in 8-bit mode and I know that I can put it in 16-bit mode and run my 1992 applications. But to that extent, x86-64 just extends the instruction set, e.g. ARM32 does not play on ARM64.

x86 CPUs don't start in an 8-bit mode. They start in Real Mode. x86 CPUs are 16, 32, and 64 bits, not 8 bits. The 8088 simply has an 8-bit data bus. Registers are 16-bit.

Let's face it. I witnessed Y2K. I witnessed every weak architecture under the sun get wiped out because it had shortcomings. Intel designed the best architecture with x86 and naysayers generally harp because it's "too big". I, for one, plan to teach my children x86 ASM so they understand the basics.. then let them find MIPS or ARM or whatever-fad-arch-is-current so they too can appreciate the design of x86.

Let's face it, you're an AC posting malarkey.

Re:Fsck x86 (0)

Anonymous Coward | about 6 months ago | (#47180631)

Backwards Compatibility: I know my x86 processor is still going to start in 8-bit mode

Maybe the 8008, but (IIRC) the 8086+ starts in 16-bit real mode.

Re:Fsck x86 (2)

JohnFen (1641097) | about 6 months ago | (#47180645)

Since we're all playing the age card, I'm 50 and have been actively developing software since I was 12 (using punch cards and the ultra-fast and modern paper tape!)

The x86 is a fine architecture, despite its numerous warts. However, so is ARM. Each has distinct advantages and disadvantages -- and being able to operate in power-starved situations (such as with smartphones) is one of the main strengths of ARM and one of the main weaknesses of x86. If my experience has taught me anything, it's that there's no such thing as a single approach that works well in every situation. If all you have is a hammer...

"Intel designed the best architecture"

What is "best" depends on what it's used for. There's no such thing as a single architecture that is the "best" in every situation. Arm is hardly the "current fad". It's been around and going strong for a long, long time -- and it's being used in smart phones because it happens to be the best solution for that application currently on the market.

Re:Fsck x86 (0)

PhrostyMcByte (589271) | about 6 months ago | (#47179571)

ARM has already had its 15 minutes, just like AMD's Athlon did.

There's a good possibility that Intel will wipe the floor with all the ARM offerings. Maybe not with this generation of CPUs, maybe not the one following it, but they've got the best fab in the world and extremely smart people using it.

They've been actively focusing on increasing power efficiency for a number of years now, so I have no doubt they'll be able to bring strong competition.

Re:Fsck x86 (3, Insightful)

aethelrick (926305) | about 6 months ago | (#47180001)

ARM is massively dominant in the embedded and mobile markets. These markets make up a vast quantity of electronics gear. Intel (the x86 pushers) even make ARM chips. ARM is starting to make in-roads into larger devices and encroach on traditional Intel/x86 stomping grounds. ARM have plans for servers and PCs running with their chips. They are low cost, low power and quite good at what they do. Admittedly they won't be replacing your PC gaming rig any time soon, but they're not chasing that market (yet). You are sadly unaware of just what ARM is if you think it's had its "15 minutes"; it's just getting started at the edges of the PC market, it's backed by many vendors, and I for one think it'll be around for a while yet. Look at the market share tablets have stolen from the PC; they are mostly ARM powered. Sure, netbooks seem a little crusty and haven't had the uptake their manufacturers were hoping for, but ARM server gear is taking off. Also the IT nippers are playing with ARM with their Arduino and Raspberry Pi gear. I wouldn't count ARM out just yet. Hell, I just replaced my decade-old trusty Linux server at home with a Wandboard Quad running Arch Linux for ARM. Guess what: it works really well as a Samba, backup and email server for the family, and I'm not even an ARM enthusiast; it was just MUCH cheaper than a regular Intel or AMD replacement and perfectly up to the task.

Re:Fsck x86 (1)

PhrostyMcByte (589271) | about 6 months ago | (#47180527)

I'm hardly counting ARM out. I doubt Intel will ever try to apply themselves to all the areas ARM is in. For phones and tablets, though? There is no doubt that ARM will have some very serious competition in the near future.

I realize we like to root for the underdog here, but realistically, Intel's got a leg up in the long run.

intel and power efficiency (2)

Mspangler (770054) | about 6 months ago | (#47180145)

"They've been actively focusing on increasing power efficiency for a number of years now, so I have no doubt they'll be able to bring strong competition."

If Intel wants to, they can bring strong competition. They used to have their own ARM variant, but sold it off. They decided that there was no future in low power. Oops.

When they do get a low power chip they seem to lose interest, and then crank up its performance, and its power budget. Then Steve Jobs would yell at them, and they would produce another low power chip. Then repeat the cycle. Now that Steve is gone, will they go back to thinking a 135W CPU is acceptable?

In Intel's world, Grand Coulee dam exists to power their CPU, and the rest of the hydropower on the Columbia is to run the cooling system for that chip. Institutionally they haven't figured out that we have all the cycles per second we need, and battery life is now the critical parameter. Obviously if your dream PC has a 1000 W power supply on a dedicated circuit you will not care about power the same way you will if your phone keeps going dead every time you need it.

As is often the case, the problem is Management, not Engineering.

For the record, I'm using a 2.5 GHz Core 2 Duo P8700. It's 6-year-old technology and entirely fast enough. It has a 25 W power budget. The "ultra-low power" 2-core Haswell has a 35 W power budget. So they have gone backwards. Remember, I don't need more speed, so I don't care if the Haswell CPU is faster.

The question is does Intel get this point? If they say "you are not our target demographic" then fine, and I'll pay them just as much attention as I pay to Miley Cyrus. Which is to say none.

Re:Fsck x86 (2, Insightful)

Anonymous Coward | about 6 months ago | (#47179579)

You have no concept. Many companies, including Intel, have tried to move away from x86 - the market won't let them. There is too much software out there written to the x86 architecture to move away from it. You are completely underestimating the market forces behind x86.

Staying with x86 is *not* Intel's choice, or even their desire (they have tried to shift the market off x86). This is where real-world forces/issues trump ivory-tower technical perspectives.

Re:Fsck x86 (1)

ericloewe (2129490) | about 6 months ago | (#47179795)

Blind hate for an instruction set. Brilliant. /s

Re:Fsck x86 (0)

Anonymous Coward | about 6 months ago | (#47179905)

As always, it's more complicated than that.

The zealous ARMists are mostly those people who bought AMD during the speed-wars of the 90's. Intel won, and they can't stand that. ARM is their next hope for trying to get revenge on Intel (even though Intel is more than capable of optimizing ARM designs and fabricating them at half the size of any competitor). The minor fact that Intel's Atom line had surpassed ARM at every benchmark except the "power consumption when doing nothing" in 2012 [tomshardware.com] does not in any way alter their firmly held belief that ARM is infinitely efficient and the future of computing.

Re:Fsck x86 (1)

AcidPenguin9873 (911493) | about 6 months ago | (#47180045)

Can you name several reasons why the x86 ISA has a negative impact on your computing experience?

Re:Fsck x86 (1)

BitZtream (692029) | about 6 months ago | (#47180439)

Why?

LONG mode x86-64 works out most of the major 'problems' with x86 from a programmers perspective. Thanks AMD! ARM has its own set of silliness that devs at the assembly level have to deal with (For reference, I fluently speak x86, ARM, and ATmega ASM).

Furthermore, x86-64 is a language that rides on top of the core; the core doesn't actually speak x86 in pretty much any x86 processor. It has a translation unit in front of it that breaks the CISC instructions up into more RISC-like ones for the core.

I wouldn't put it past Intel to put an ARM translation unit in front of their cores with little effort; performance wouldn't be 100% identical, but it already isn't across different ARM cores, so that's probably not really that large of an issue.

Other than 'it's old', do you have an actual insightful reason that x86 needs to die, or are you just spouting something you heard someone else say a long time ago without understanding at all why they said it?

Native (-1, Offtopic)

rossdee (243626) | about 6 months ago | (#47179241)

All I know is that the native language of most app developers is not English.
Have you seen the descriptions in the app store (especially Amazon's, which is where I get stuff)?

Re:Native (0)

Anonymous Coward | about 6 months ago | (#47179453)

You has a probrem with dat?


Intel has no Android phone market share (1)

Anonymous Coward | about 6 months ago | (#47179277)

Even according to their own developer website, only three phones on the entire market use x86. [intel.com]

This is a problem for tablets, then. But wait! This could all be solved by Google simply filtering their store offerings by what's available to x86 tablets vs ARM phones, which would be a real kick in the nuts for Intel.

Re:Intel has no Android phone market share (1)

EasyTarget (43516) | about 6 months ago | (#47179527)

I think they do that already; filter apps you see via a device compatibility matrix. Or is this only at a crude level to distinguish (say) Tablets from Phones, etc.

Price discrimination (1)

tepples (727027) | about 6 months ago | (#47179575)

I think they do that already; filter apps you see via a device compatibility matrix.

The challenge is to make the filtered list other-than-empty without having to convince developers to recompile. I imagine that some developers are likely to refuse to recompile in order to price discriminate between the desktop market and the mobile market, where people aren't willing to pay as much per app as in the desktop market.

Re:Price discrimination (1)

HiThere (15173) | about 6 months ago | (#47179707)

More to the point, you are asking them to cross-compile rather than just recompile. And if you don't have the hardware, you can't test the result.

Re:Price discrimination (1)

lister king of smeg (2481612) | about 6 months ago | (#47180655)

More to the point, you are asking them to cross-compile rather than just recompile. And if you don't have the hardware, you can't test the result.

Because no dev anywhere has any x86 or x86_64 units under their desk. And it's not like we have dozens of emulators and virtual machines for every instruction set under the sun. And it's absolutely impossible to buy a touch screen after Windows 8 was released... oh wait, we have all of those, more so than ever.

Re:Intel has no Android phone market share (1)

drinkypoo (153816) | about 6 months ago | (#47179939)

I don't know how it works precisely, but I do know from experience that selecting a different model can permit you to install an app when you were disqualified for all manner of reasons, including such details as display resolution. Finless ROMs for the MK908 show up as a Samsung device or something, because otherwise lots of apps which work fine don't install.

"newsy" bits (5, Informative)

bill_mcgonigle (4333) | about 6 months ago | (#47179287)

Somehow missing from TFS...

Intel has released a beta of its native development environment, the Intel Integrated Native Developer Experience (INDE), and has written plugins for Eclipse, the IDE most Android developers use, so that apps can be made x86-compatible and execute efficiently on Atom processor-based hardware.
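Independent of Intel's INDE bundle, the stock NDK route to shipping both ABIs is a one-line list in Application.mk; a minimal sketch of the standard ndk-build convention (library name and paths are illustrative):

```make
# jni/Application.mk -- build the same native sources once per ABI.
# The packaged APK then carries one copy of each .so per architecture,
# under lib/armeabi-v7a/ and lib/x86/, and the device's loader picks
# the matching one at install time.
APP_ABI := armeabi-v7a x86
```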

Not useful to me, but I'll support Intel anyway. (5, Interesting)

Dr. Manhattan (29720) | about 6 months ago | (#47179509)

I made an app for Android - ported an emulator written in C++. (Link in sig, if you're interested, but this isn't a slashvertisement.)

So the core of the app, the 'engine', is in C++ and must be natively compiled, while the UI and such is in Java. Naturally, the binary's compiled for ARM first. This actually runs on a lot of Intel Android tablets because they have ARM emulators. But, thanks to a user finally asking, I put in some time and now I can make an Intel version. (Heck, the original source was written for Intel anyway, so it wasn't a big stretch.) The existing tools are sufficient for my purposes. And it runs dramatically faster now on Intel.

However, for the developer it's mildly painful. The main issue is that you have a choice to make, with drawbacks no matter which way you go. You can include native libraries for multiple platforms, but that makes the APK larger - and according to a Google dev video I saw, users tend to uninstall larger apps first. In my case, it'd nearly double the size. So instead I'm putting together multiple APKs, so that ARM users get the ARM version and Intel users get the Intel version - but only Google Play supports that, not third-party app stores. I haven't looked into other app stores, and now it's less likely I will.

Note that native development can be important to apps for a non-technical reason: preventing piracy. An app written purely in Java is relatively easy to decompile and analyze, and pirates have a lot of techniques for removing or disabling licensing code. Adding a native component makes the app much harder to reverse-engineer, at least delaying the day that your app appears on pirate sites.

Re:Not useful to me, but I'll support Intel anyway (1)

gbjbaanb (229885) | about 6 months ago | (#47179923)

what would be even better is if you could submit your source code to the Google store and it would compile it for you on a server farm and produce APKs optimised for each chipset they support.

i remember the days when sourceforge had such a thing, you supplied your code and it got built for all manner of Linux and (IIRC) windows architectures/platforms.

Not very well written then (1)

guruevi (827432) | about 6 months ago | (#47179297)

Well-written C can be cross-platform compatible. It's all in how you write things (or the libraries you use).

Re:Not very well written then (3, Informative)

Atzanteol (99067) | about 6 months ago | (#47179357)

A compiled binary doesn't care how well-written your C is if you are running it on the wrong platform.

Re:Not very well written then (0)

Anonymous Coward | about 6 months ago | (#47179403)

No but a system for recompiling ARM code to x86 does. RTFA.

Re:Not very well written then (1)

TheRaven64 (641858) | about 6 months ago | (#47179415)

Most people get Android apps from an App Store though, and it can easily select the correct version. This happens already for MIPS-based devices, so there's no reason why it wouldn't work for x86, if it were worth the effort for developers to provide them.

Re:Not very well written then (0)

Anonymous Coward | about 6 months ago | (#47179461)

So app developers can create multiple compilations of the same app, and the app store will automatically give the user the one most compatible to their device?


Re:Not very well written then (1)

tepples (727027) | about 6 months ago | (#47179603)

Yes, as I understand it. Perhaps the biggest things preventing it are 1. developer apathy and 2. use of third-party closed-source native libraries.

Re:Not very well written then (2)

Mad Bad Rabbit (539142) | about 6 months ago | (#47179441)

Clearly we just need a small set of POSIX apps to do 'git', 'make' and 'gcc' on your phone.
Download the signed source code from the app store.

Re:Not very well written then (1)

RyuuzakiTetsuya (195424) | about 6 months ago | (#47180637)

For Android, I'm shocked this isn't part of the install process. Either done server side and cached(Compile once, then cache the stored binary) or done on the phone. If compilation fails for the target, the app dev is notified and made unavailable for those on that platform.

Re:Not very well written then (1)

lister king of smeg (2481612) | about 6 months ago | (#47180687)

Clearly we just need a small set of POSIX apps to do 'git', 'make' and 'gcc' on your phone.
Download the signed source code from the app store.

like a gentoodroid

Re:Not very well written then (1)

AuMatar (183847) | about 6 months ago | (#47179667)

In fact one of the main reasons to write your logic in mobile in C is that it will run on any platform- then you only have to rewrite your UI layer. But this isn't what they're talking about- they're talking about multiple processors. However Android allows for fat binary apks with multiple versions of libraries, so it isn't that big a deal.

Intel once made ARM processors... (1)

supersat (639745) | about 6 months ago | (#47179345)

For a while they had their XScale line of ARM processors and SoCs. I think one of the dumber moves they've made was to sell that line of business off to Marvell in 2006 and go "all-in" on x86 before they were ready.

Re:Intel once made ARM processors... (1)

alen (225700) | about 6 months ago | (#47179381)

i've read that they kept their ARM architecture license. one of the few good licenses where they can make their own CPU and instruction set as long as it's compatible with ARM. like apple and qualcomm. not like the regular licenses where you simply manufacture whatever ARM designs with minor changes

Re:Intel once made ARM processors... (2)

TheRaven64 (641858) | about 6 months ago | (#47179477)

With ARMv8, a lot of companies have this kind of license. There are six independent ARMv8 implementations that I'm aware of, but there may be more. Well, I say independent - they all had engineers from ARM consult on the design, but they're quite different in pipeline structure. This is the problem Intel is going to face over the next few years. They could compete with AMD by outspending them on R&D: Intel could afford to design 10 processors and only bring 3 to market depending on what customers wanted; AMD couldn't afford to throw away that much investment. This is how the Pentium M happened: they rushed to market one of their back-burner designs. Now, however, they're competing with half a dozen companies, all of whom have ISA-compatible chips and all of whom are content to heavily optimise their designs for a particular market segment.

Re:Intel once made ARM processors... (1)

bluefoxlucid (723572) | about 6 months ago | (#47179655)

XScale and StrongARM.

Apple did this when they switched to PPC. (3)

hey! (33014) | about 6 months ago | (#47179353)

It worked amazingly well, but it still sucked.

Re:Apple did this when they switched to PPC. (0)

Anonymous Coward | about 6 months ago | (#47179491)

Apple came up with a far better solution [wikipedia.org] .

Re:Apple did this when they switched to PPC. (3, Interesting)

Trepidity (597) | about 6 months ago | (#47179555)

They had fat binaries for apps compiled to both PPC and x86, but that wasn't the only solution, since with just that you wouldn't be able to run apps until the developer recompiled and shipped a new version. They also had a binary translator [wikipedia.org] to run unmodified PPC binaries on x86.

Re:Apple did this when they switched to PPC. (1)

tepples (727027) | about 6 months ago | (#47179635)

Fat binaries were fine when software was shipped on CD-ROM or over unmetered wired broadband. It's less fine when software is shipped over a cellular connection that's billed by the bit. It's also less fine when people routinely uninstall large applications from a device's single-digit GB storage to free space for other things. And without an emulator for legacy applications, it's also less fine for people who want to continue running applications that haven't been recompiled.

Re:Apple did this when they switched to PPC. (1)

0123456 (636235) | about 6 months ago | (#47179779)

Out of interest, what apps are big enough for doubling their size to matter, yet most of their storage usage isn't data of some kind that can be shared between both versions?

Games, for example, might be a few megabytes of code with tens or hundreds of megabytes of game data.

Re:Apple did this when they switched to PPC. (1)

ericloewe (2129490) | about 6 months ago | (#47179849)

The binaries are a small part of the whole package. Besides, you don't have to download all the binaries.

Re:Apple did this when they switched to PPC. (1)

BronsCon (927697) | about 6 months ago | (#47180247)

You do if the APK is signed. Removing files from a signed APK changes the checksum and invalidates the signature. However, there's a solution for that [slashdot.org] , though it doesn't solve the data transfer issue.

Re:Apple did this when they switched to PPC. (1)

ericloewe (2129490) | about 6 months ago | (#47180633)

The solution is easy: provide signatures for the various download options.

Re:Apple did this when they switched to PPC. (1)

NJRoadfan (1254248) | about 6 months ago | (#47180635)

Back in the 68k days, there were tools to strip the un-needed binaries from fat applications depending on the machine you had. The forked files used by classic Mac OS were an advantage: you could store the common resources in the resource fork of the file for both PPC and 68k.

Re:Apple did this when they switched to PPC. (1)

BronsCon (927697) | about 6 months ago | (#47180233)

So, then, why not have the device be able to unpack the APK (oh look, it already can!), strip out the incompatible binaries, repack the APK with the remaining bits and pieces, and sign it with its own key? I know that doesn't solve the data transfer problem, but it does (and securely so) solve the "users uninstall largest apps first" problem. Hell, it would even be possible to use a different signing system for repacked APKs (making it obvious) and have the APK subsystem refuse to install repacked APKs from other devices, or at least devices not associated with your Google account (so you know the APK you're installing is from an official source, or one that was repacked by one of your own devices).

Any reason this wouldn't work?

Re:Apple did this when they switched to PPC. (1)

petermgreen (876956) | about 6 months ago | (#47179773)

Yeah it's nothing new to put such emulation in place, apple did it twice when they switched to powerpc and when they switched to intel. DEC did it for windows NT on alpha. Intel did it for windows and linux on itanium (the itanium originally had hardware x86 support but it sucked so much that software emulation was faster and it was removed in later versions). qemu can do it for linux binaries across a wide range of cpu architecture combinations.

It's doable but there is a significant performance penalty. That's tolerable if your new CPU is significantly better than your old one/competitor's one, but if your new chip is only slightly better than your old one or your competitor's one then it's going to suck badly.

Re:Apple did this when they switched to PPC. (1)

phantomfive (622387) | about 6 months ago | (#47180287)

What sucked is when Apple removed compatibility for PPC and all your applications (some of which were rather expensive) stopped working.

not like intel hasn't done this itself (1)

alen (225700) | about 6 months ago | (#47179393)

taking x64 except for one or two instructions to hurt AMD
or their SSE extensions

Old News (0)

Anonymous Coward | about 6 months ago | (#47179413)

This isn't news. Anyone who's been watching the Android space even casually for the past few years has seen this coming. ARM gets a strong foothold and a huge, predominant market share on the hardware side of Android. Intel then decides they want to shoehorn x86 into the mobile space, and they like Android. Chaos ensues.

A more interesting study would be to see what percentage of those top 2000 apps have dual-arch natives depending on your platform. MX Player, for instance, ships a bunch of different native codec packs (separate "apps" you install from the Play Store), compiled for different versions of ARM. Given their build process for this setup, they could probably add x86 (if they haven't already) very easily. There are production x86 Android devices on the market right now, so this needs to happen.

Bigger problem than Intel admits (5, Informative)

edxwelch (600979) | about 6 months ago | (#47179445)

"ARM ran a survey of the top 500 Android apps in the market and found that only 20% are pure Java, 30% are native x86, 42% require binary translation and 6% do not work at all on Intel's platform. To make matters worse the level of compatibility is falling. They also found that running an app in binary translation mode takes a huge performance hit."
http://www.theregister.co.uk/2... [theregister.co.uk]

Re:Bigger problem than Intel admits (1)

Anonymous Coward | about 6 months ago | (#47180157)

Just for balance ....
http://www.theregister.co.uk/2014/06/05/intel_disputes_arms_claims_of_android_superiority/

Re:Bigger problem than Intel admits (1)

phantomfive (622387) | about 6 months ago | (#47180327)

I can't speak for all app developers, but the first time I got an x86 device on my desk at work, I ran machine code for several hours before even realizing it wasn't an ARM device. It was somewhat shocking when I finally ran 'cat /proc/cpuinfo'.

Thank You Google, you were Wrong. (0)

goruka (1721094) | about 6 months ago | (#47179459)

> two thirds of the top 2,000 apps in the Google Play Store use natively compiled C code

Of course, how else would one make code portable between platforms? Yet their support for using their native Java API from C or C++ is horrible. JNI is unsafe and crash-prone, and NativeActivity is so limited that barely anything can be made with it.

Actually, it needn't be a technical issue. (1)

Dr. Manhattan (29720) | about 6 months ago | (#47179559)

As I just got done saying in a comment above: Note that native development can be important to apps for a non-technical reason: preventing piracy. An app written purely in Java is relatively easy to decompile and analyze, and pirates have a lot of techniques for removing or disabling licensing code. Adding a native component makes the app much harder to reverse-engineer, at least delaying the day that your app appears on pirate sites.

Though I do agree that JNI is a serious pain. On the other hand, I've developed for Netware and Palm OS, so my standards for pain are probably artificially high.

Re:Thank You Google, you were Wrong. (1)

tepples (727027) | about 6 months ago | (#47179657)

Of course, how else would one make code portable between platforms?

By writing the app in JS instead of Java or C++.

Symptom of a much bigger problem (1, Insightful)

zerofoo (262795) | about 6 months ago | (#47179659)

Microsoft and Intel spent 20 years building bigger. Intel made bigger more complex silicon and Microsoft bloat happily expanded to fill that bigger silicon.

I remember times in the 90s where I was upgrading CPUs for clients that were 6 months old - crazy.

These two companies were wholly unprepared for the mobile revolution that required small and efficient. Neither company could shrink their offerings down fast enough. Unix on ARM was there to fill the need.

I say to both companies - tough cookies. Had they had an eye toward efficiency instead of bloat from the very beginning, they would have been much better prepared for the mobile/app revolution.

Re:Symptom of a much bigger problem (1)

ericloewe (2129490) | about 6 months ago | (#47179891)

Are you kidding?

Atom is now competitive on phones, better than ARM on tablets and Haswell destroys ARM on larger tablets and everything above.

The Windows NT kernel runs smoothly on hardware that would choke on Android.

Don't forget that 90s processors were slower than current mobile processors. Good luck using a Pentium (Pro/II/III) for anything useful these days, regardless of the OS.

LLVM byte code (3, Interesting)

reg (5428) | about 6 months ago | (#47179687)

I still don't understand why APKs are not just pure LLVM byte code, with either the store or the phone finishing the compile from byte code to native, including the final optimization passes...

Regards,
-Jeremy

Re:LLVM byte code (0)

Anonymous Coward | about 6 months ago | (#47179727)

Because LLVM bytecode is a moving target.

Re:LLVM byte code (2)

outlaw (59202) | about 6 months ago | (#47179925)

Those who don't remember the past are doomed to repeat it...

http://en.wikipedia.org/wiki/A... [wikipedia.org] (one of the earlier UNCOL)

I'll go back to my cave now

Re:LLVM byte code (1)

phantomfive (622387) | about 6 months ago | (#47180377)

The article explains what ANDF is, but it doesn't say what was wrong with it. What was wrong with ANDF?

I'll go back to my cave now

I feel that way all the time at work now, every time my manager tells me about some programming technology a company 'invented.'

Re:LLVM byte code (0)

Anonymous Coward | about 6 months ago | (#47179945)

Bad idea for god knows how many reasons.

http://lists.cs.uiuc.edu/pipermail/llvmdev/2011-October/043719.html

Re:LLVM byte code (1)

Anonymous Coward | about 6 months ago | (#47180155)

Write once, debug everywhere...

Re:LLVM byte code (0)

Anonymous Coward | about 6 months ago | (#47180215)

Although LLVM bitcode can in theory be translated to native code on a variety of platforms, in practice it is at the wrong level of abstraction for this to work well: by the time you've run your source through Clang, you've already pulled in your platform's C header files, with all of their platform-specific bits and bobs, compile-time endianness detection, etc., and baked some of the calling convention into the bitcode instructions. You often need to return to the source to produce bitcode that will run well (or at all) on a different architecture.

C was designed to deal with compatibility at the source level, not at the binary level... LLVM managed to shift that abstraction a little, but it's not a magic bullet.

There are a few different LLVM mailing list discussions about this area, but I found one that seems particularly pertinent to your question [google.com] .

Of course (as alluded to by that thread) the PNaCL project is attempting to add an additional layer of abstraction to solve this problem. They use a mixture of techniques, including defining an idealized "platform" for their toolchain to target, subsetting the LLVM bitcode, and adding a bunch of special intrinsics that their specialized compiler is able to use. It definitely has promise, but I don't think it's really been proven as the ultimate solution just yet.

well... (1)

buddyglass (925859) | about 6 months ago | (#47179903)

Unless the native code includes ARM-specific inline assembly or uses ARM-specific processor features, then the lack of x86 libs is just due to laziness on the part of developers. All the dev would need to do is compile his native code for x86 and include it in the APK. Devs feel free to be "lazy" in this way because ARM is so prevalent relative to x86.

Re:well... (1)

iggymanz (596061) | about 6 months ago | (#47180005)

Wrong, different architectures can and do cause problems for pure C code too. Here's the tip of the iceberg for you: find out about the various ARM model endian modes.

Re:well... (1)

buddyglass (925859) | about 6 months ago | (#47180143)

"...or uses ARM-specific processor features..."

I'll count byte order as a processor feature.

Basically there's C code and then there's architecture-optimized C code. The former should be easily ported between architectures. So, if an app's native code is architecture-agnostic and the dev doesn't include x86 (and MIPS, for that matter) versions then he's just being lazy.

Re:well... (0)

Anonymous Coward | about 6 months ago | (#47180325)

What current ARM SoC use BIG endian?

No, they have not (0)

Anonymous Coward | about 6 months ago | (#47180043)

In fact, this is the opposite of a solution, it is a capitulation. A solution was part of the AZ210 phone by Intel, which had an ARM to Intel translator. This phone was quickly abandoned by Intel, and it is now irrelevant, stuck on Android 4.0, and it only supports ARMv6 code. Adobe AIR still does not work, although Flash did at some point.

Fat binaries just shift the blame to the developers. And with the track record that Intel has in abandoning mobile solutions, I doubt anybody will take it seriously. Yes, there is the Galaxy Tab series, but it is a budget series - not where the money is. Otherwise mobile Intel is pretty much dead in Android land.

No love for Intel XScale? (1)

jendral_hxr (2263458) | about 6 months ago | (#47180183)

Welcome to Linux on PXA! We are just repeating more-or-less the same thing, at least it was good... very good.

Andriod vs iOS development (1)

fulldecent (598482) | about 6 months ago | (#47180255)

For a minute I thought we had it bad because Apple is now creating a brand new language we have to learn just for "Apple development".

But actually it seems...

You're the ones getting fucked.

old news: Lenovo K800 had this 2yr ago (0)

Anonymous Coward | about 6 months ago | (#47180295)

I think some Intel PR wanker is recycling this story now that the launch of their K800 phone in India is two years old, and was updated by the K900 a year later. I guess they don't have a phone this year so they're making a news cycle instead. Intel wrote libraries for this phone to JIT-compile arm to x86, like Rosetta on Mac OS X. The libraries made their way into androvm as warez and have been there ever since.

two YEARS old, guys.

Speed is NOT the primary reason for native code (1)

FryingLizard (512858) | about 6 months ago | (#47180407)

It's not (just) for speed, if at all, it's BECAUSE YOU HAVE PORTABILITY WITH iOS (and other platforms)
If you use Java you're hosed. If you use regular C you can compile on both platforms, with a shim to interface to either iOS or Android as required.
The GLES code can easily be compatible. The UI stuff not so much but (high end) games generally implement their own UI in GL for specifically this reason.

It's not pretty but it's what most pro game developers have been doing since at least 2010, and it's a _hell_ of a lot better than having to totally rewrite your Java app in ObjectiveC or vice versa.

The extra performance is sometimes useful in some places but it's almost always about compatibility with iOS or (more rarely) with existing C libraries e.g. video encoding or whatever.

x86 using the NDK is simple (0)

Anonymous Coward | about 6 months ago | (#47180667)

When building with the Android NDK you already specify ABI(s). Newer NDKs are already x86 compatible and have been for some time. So your line like this:
APP_ABI := armeabi armeabi-v7a

Becomes this:
APP_ABI := armeabi armeabi-v7a x86

And voila! Your code is now built for all three.

This is really not an issue.

Load More Comments
Slashdot Login

Need an Account?

Forgot your password?