
ARM Is a Promising Platform But Needs To Learn From the PC

Unknown Lamer posted more than 3 years ago | from the becoming-model-citizens dept.


jbrodkin writes "Linux and ARM developers have clashed over what's been described as a 'United Nations-level complexity of the forks in the ARM section of the Linux kernel.' Linus Torvalds addressed the issue at LinuxCon this week on the 20th anniversary of Linux, saying the ARM platform has a lot to learn from the PC. While Torvalds noted that 'a lot of people love to hate the PC,' the fact that Intel, AMD, and hardware makers worked on building a common infrastructure 'made it very efficient and easy to support.' ARM, on the other hand, 'is missing it completely,' Torvalds said. 'ARM is this hodgepodge of five or six major companies and tens of minor companies making random pieces of hardware, and it looks like they're taking hardware and throwing it at a wall and seeing where it sticks, and making a chip out of what's stuck on the wall.'"


167 comments


ARM == shit. (-1)

Anonymous Coward | more than 3 years ago | (#37132128)

ARM == shit.

Re:ARM == shit. (0)

Anonymous Coward | more than 3 years ago | (#37132160)

Don't stick it so far in next time.

Re:ARM == shit. (2)

AnujMore (2009920) | more than 3 years ago | (#37132178)

returns value "false"

Wait, what? (1)

MrEricSir (398214) | more than 3 years ago | (#37132166)

"...tens of minor companies making random pieces of hardware..."

Has this guy never seen the PC hardware section at Fry's?

Re:Wait, what? (1, Flamebait)

Desler (1608317) | more than 3 years ago | (#37132174)

He's talking about CPUs, moron.

Re:Wait, what? (5, Insightful)

thsths (31372) | more than 3 years ago | (#37132272)

What is a desktop in the PC world is your SoC in the embedded world. It even comes with RAM and flash (not on chip, but on package), if you want.

The difference is that the PC environment has, over a long time, filtered down to a few typical devices for each type. Your network hardware is probably Realtek, or maybe Intel or an embedded AMD chip. Your graphics card is NVidia, AMD, or Intel. Your mouse doesn't matter, because it always talks USB HID, etc.

In the ARM world, you also have standard components, but every integrator makes tiny (and usually pointless) changes that render them incompatible on the software level. Linus is right - this is neither necessary nor sustainable. It is one of the reasons that you can get software updates for a 5 year old PC, but not for a 6 months old smartphone.

Re:Wait, what? (3, Informative)

petermgreen (876956) | more than 3 years ago | (#37132488)

The difference is that the PC environment has, over a long time, filtered down to a few typical devices for each type. Your network hardware is probably Realtek, or maybe Intel or an embedded AMD chip. Your graphics card is NVidia, AMD, or Intel. Your mouse doesn't matter, because it always talks USB HID, etc.

And perhaps most importantly, your main system bus is either PCI or something that looks like PCI to software, and by accessing the configuration space of that bus you can read the device IDs of everything on it, whereas with ARM the software is expected to know the complete hardware setup in advance.

Quite agreeable (1)

symbolset (646467) | more than 3 years ago | (#37132560)

I wish I could agree. But ARM is following the same diversity explosion and Darwinian selection as FOSS, and for the same reasons. Out of this chaos comes the wonderful bounty of choices in our modern digital buffet. PCs have become stagnant. If you want one, they are still available, but they don't really do anything more than they did 15 years ago.

Re:Quite agreeable (1)

houstonbofh (602064) | more than 3 years ago | (#37132674)

Yet if you look at the FOSS projects with any real market penetration (outside the FOSS world) they are all the market leaders. Firefox, Apache, MySQL, Open Office, and so on. Yes, KOffice exists on Windows, but show me one non-linux type running it...

Right now ARM is a bunch of FOSS projects with no clear leader. Once there is one, it will get the mindshare, and hence the support. Then others will be compatible so they can use the ecosystem, and things will get better. But right now, it is Linux 1999.

Re:Quite agreeable (1)

symbolset (646467) | more than 3 years ago | (#37133064)

In 1999 linux platforms for personal use weren't moving 650,000 units a day. They are now, taking nearly half of the global smartphone market share of sales. ARM is in the same boat, putting a PC in every pocket at the fore of the mobile revolution, bringing all-day tablets to the masses and driving the only growth in the IT industry at unheard-of rates. You point and stare, calling out "You're doing it wrong!" as the platform is taking over the world.

Don't you think that's silly? Surely there's some other platform that needs some helpful guidance more than ARM.

Re:Quite agreeable (1)

dave420 (699308) | more than 3 years ago | (#37133382)

Just think how well they'd be doing if they actually had some coherency of design. Just because they're doing awesomely (and I agree - they are) doesn't mean they're doing as well as they could be.

Re:Wait, what? (3, Informative)

TheRaven64 (641858) | more than 3 years ago | (#37132608)

You're missing the point. He's not talking about add-ons like network adaptors, he's talking about fundamental core bits of hardware, like interrupt and DMA controllers, which need to be configured by the kernel before it can even bring things like serial ports online for a console.

Every PC, except some early Intel Macs, is capable of booting PC-DOS 1.0. It has interrupt controllers and device I/O configured in the same way and accessible via the standard BIOS interface. You don't get great performance if you use these, but you can bring up a system with them and then load better drivers if you have them. With ARM, every SoC needs its own set of drivers for core functionality long before you get to things like video or network adaptors. Oh, and on the subject of video, you don't even get something like the PC's text console as standard, let alone a framebuffer (via VESA).

Re:Wait, what? (3, Insightful)

kbolino (920292) | more than 3 years ago | (#37132250)

All of which is, more or less, interchangeable. The Intel x86/IBM PC platform, despite its many flaws, has reached a stable point where there are well accepted and commonly implemented standards for the boot process, the storage formats, the hardware interfaces, etc. ARM, despite a "purer" and "simpler" instruction architecture, lacks much of this common surrounding infrastructure.

Re:Wait, what? (0, Insightful)

Anonymous Coward | more than 3 years ago | (#37132296)

All of which is, more or less, interchangeable. The Intel x86/IBM PC platform, despite its many flaws, has reached a stable point where there are well accepted and commonly implemented standards for the boot process, the storage formats, the hardware interfaces, etc. ARM, despite a "purer" and "simpler" instruction architecture, lacks much of this common surrounding infrastructure.

Basically, ARM is to CPUs what Linux is to software.

Re:Wait, what? (-1)

Anonymous Coward | more than 3 years ago | (#37132418)

Basically, ARM is to CPUs what Linux is to software.

Mod parent +1, Nice burn.

Re:Wait, what? (1)

houstonbofh (602064) | more than 3 years ago | (#37132710)

Not really. If I take a hard drive from my Ubuntu PC and stick it in a totally different PC, it will boot. And even X may come up. (And to cover the other level of your analogy, I can copy my "Crayon Physics Deluxe" folder from my Ubuntu system to a Red Hat system and it will run.) If I take a boot image from one ARM device and stick it in another, it will hang.

Flawed analogies. You never find PCs in trashcans. (-1)

Anonymous Coward | more than 3 years ago | (#37133880)

The PC is a dead architecture. It died when Packard Bell brought one of the first Pentium chips (70MHz) to consumers and, with the help of the microprocessor fabs, isolated the PC from its stable legacy conduct into the new bussing and expansion interconnects.

The PC was an open standard only during the 286, 386, and 486 eras, when Microsoft actually had its own Unix flavor called XENIX. Seriously, there is no analogy to make: the PC is a general-purpose architecture that was made into something else by Intel. Everyone else just followed Intel's lead in hopes their chips and hardware could work in parallel.

Anyone productive in computing used either a DEC Alpha or an Amiga. You only find so-called desktops of Intel or AMD build in the trash -- never do you find an actual computer in the trash.

Re:Wait, what? (1)

Anonymous Coward | more than 3 years ago | (#37132748)

Linux follows the standards for the boot process, power management, suspend & resume, plug and play, disk formats, etc., and Windows uses voodoo to make it all work. Nice try, buddy. The reason a lot of that stuff took a while to work well in Linux is that most of it was implemented according to standards, standards which hardware vendors would ignore and mess around with (take a look at suspend/resume methods); these things worked fine in Windows because its drivers would use kludges to force them to work (and not without a bunch of headaches).

Re:Wait, what? (2)

LWATCDR (28044) | more than 3 years ago | (#37133626)

And that is called innovation.
The original PC "standard" sucked.
You had to assign memory spaces, interrupts, and I/O ports when you added cards. Not every card worked with every PC.
PC compatibility was hit or miss. The magazines would use Lotus 1-2-3 and Microsoft Flight Simulator as the benchmarks. If both of those ran, then it was PC compatible. Of course, if you bought anything but a real IBM PC or AT, you could still find software that didn't run.
Then you had the x86 CPU, which was also terrible. Segmented memory was with us until the 386, and even that was register starved. The 68k line of CPUs was much better.
Then you had the companies that dared to make better computers than IBM's. Both the Zenith Z-100 and the Tandy 2000 were much better computers than the standard PCs of the time. They used the x86 but with better graphics. Thing is, they were not PC compatible.
And then you had to hope that your software would support your printer and video card if you got a better card than an MDA or CGA card. Hercules was a pretty safe bet.
We were stuck in PC hell for years, even when better solutions were available like the Mac, Atari ST, and Amiga. The reason was simple: you developed software for seats, and more people who bought software bought PCs.
We can use the same solution that finally made the PC less of a steaming pile of dung. It is called an OS. You make a board and put an OS on it.
To make Linux better at embedded, I would suggest that standards need to be developed for GPIO, SPI, I2C (sort of have that now), and CAN. That would solve so many issues on the applications side it wouldn't be funny.
For goodness' sake, do not trap us into the Lowest Common Denominator hell that was the PC for way too long! USE THE OS!

Re:Wait, what? (2)

Osgeld (1900440) | more than 3 years ago | (#37132270)

Yeah, and that PC hardware uses the same CPU platform.

With ARM, well, shit, there's TI's flavor, which doesn't play well with ST's version, and let's not even get into the "ARM-based" stuff like PIC32.

It is a mess, much like PCs in the late '70s and early '80s: they all have BASIC, but are totally incompatible.

Re:Wait, what? (0)

Anonymous Coward | more than 3 years ago | (#37132502)

PIC32 is MIPS based.

Re:Wait, what? (3, Insightful)

jedidiah (1196) | more than 3 years ago | (#37132698)

It is NOTHING like computers in the 70s and 80s.

In the 80s, you had machines made out of standard 3rd party components. Your CPU was the same as the next guy even if he got his computer from a competing brand. This is why an Atari could emulate a Mac. The actual CPU was a particular part that everyone bought from the same place. This is why you can have versions of Linux targeting those 80s/90s era machines. A 68000 in one machine is the same as the next, or a 6502, or a 68030.

The old home computer landscape seems positively orderly by comparison.

Re:Wait, what? (3, Insightful)

JDG1980 (2438906) | more than 3 years ago | (#37133162)

The CPUs were standard, but little else was. Sure, the C-64 and Atari 800 both had a 6502-based CPU, but they also had different video chips, different sound chips, different and mutually incompatible disk drive formats and serial communications protocols, etc. One nice thing was that even though each company used their proprietary chips, they didn't feel the need to hide implementation details from users. If you wanted to know exactly what each register in the VIC-II chip did, it was right there in the manual.

Re:Wait, what? (1)

jedidiah (1196) | more than 3 years ago | (#37133988)

Apple was the only one that had a "mutually incompatible" format. The rest, not so much.

While there were a lot of custom chips, there were also a good number of stock parts as well. This included floppy controllers, IO controllers, and sound chips.

Now the bit about everything being documented is a good point. This is how it is that I am still somewhat familiar with the parts that were in my old machine. This probably made the 030 Linux versions a lot easier to deal with.

Re:Wait, what? (1)

Megane (129182) | more than 3 years ago | (#37134210)

The CPUs were standard, but little else was. Sure, the C-64 and Atari 800 both had a 6502-based CPU, but they also had different video chips, different sound chips, different and mutually incompatible disk drive formats and serial communications protocols, etc

And that is probably the best analogy for the situation. The ARM CPU cores, while having quite a few differences from version to version, tend to be identical within a version, and are licensed as an entire unit from ARM Ltd. Very few companies (specifically Apple, thanks to the Newton era) have a license that allows them to mess with the core -- and even then they might not want to.

There is other stuff, at least in the current Cortex versions, such as basic interrupt control and probably the MMU, that is also part of that core (I work with Cortex M3 and no OS these days, so I really don't know about MMUs). But everything else is unique to the chip maker, even for the same type of interface. ST's I2C controller will be completely different from Cirrus's I2C controller, etc.

And FYI, the only reason an Atari ST could run MacOS was because the only assumption of MacOS in those days with respect to graphics was a 1-bit deep, 8 pixels per byte, bitmapped display. The Amiga also used a 68000, but it had a rocket science blitter thingy, which is why there wasn't a similar ability to run Amiga software on an ST.

Re:Wait, what? (1)

skids (119237) | more than 3 years ago | (#37133278)

It's a complete mess and currently a huge barrier to development. You don't even have to get into coding for the kernel -- just getting a toolchain for your particular flavor of ARM is enough to turn away lots of developers. We're talking several DAYS spent figuring out how to produce a goddamn libgcc.a that has the correct endianness, MMU-or-not, and doesn't hose the system because it uses an undefined instruction to implement prefetch()... and then another night trying to figure out how to get that libgcc.a to also contain the symbols you need, and fondle elf2flt correctly for your binary format, if you are no-MMU.

Now, if distros and gcc were to collaborate to clean that up to the point where there are prepackaged multilib cross compilers for aspiring embedded coders, then we'd have significantly more ARM developers.

Oh, and if you think demo boards have strange and unusual diversity for their SoCs wait till you see the inside of an inkjet printer OS. Try a flash card reader implemented with a sea-of-gates chip. Standard SDIO register set? Hah! You're lucky if a given device isn't hung off a gpio using a 32-bit register set that's hung off an 8-bit indexed register set that's accessed using a 16-bit access-width where the registers are spaced 64 bytes apart. Supposedly that's to shave pennies off the hardware cost, but I bet many more pennies were wasted on the development side trying to get the thing to work.

Wait (-1, Troll)

Intron (870560) | more than 3 years ago | (#37132228)

So Linus is saying that they should have one central authority decide what's right for everyone? ** head asplodes **

Re:Wait (0)

Anonymous Coward | more than 3 years ago | (#37132274)

No, but Slashdot is saying you should practice reading comprehension.

Re:Wait (1)

denis-The-menace (471988) | more than 3 years ago | (#37132410)

Maybe some coordination to make sure that similar instructions work the same way across all ARM CPUs.

Re:Wait (4, Insightful)

west (39918) | more than 3 years ago | (#37132448)

I'm pretty certain he'd prefer a consortium that produces a common set of standards, but he raises an important point.

Choice costs.

It's wonderful that you have a massively wide variety of choices, unconstrained by a central authority, but don't forget that the cost of having that choice is going to be significant. There's a reason that almost all lines of business tend towards either a few big winners or, if the product is essentially identical, commoditization.

It's why I often wonder why Linux users dream about taking over the desktop. If that did occur, it would mean a drive to lower cost that would result, almost inevitably, in the wholesale adoption of a single choice, reducing all the other choices to total irrelevance.

Re:Wait (1)

houstonbofh (602064) | more than 3 years ago | (#37132756)

It's why I often wonder why Linux users dream about taking over the desktop. If that did occur, it would mean a drive to lower cost that would result, almost inevitably, in the wholesale adoption of a single choice, reducing all the other choices to total irrelevance.

But when that choice goes goofy, you can change it quickly. Like the exodus from KDE for a while. Next it will be from Unity and GNOME Shell, for a while. Then the leaders either shape up or fall aside, like XFree86 (http://en.wikipedia.org/wiki/XFree86). You can have a market leader (a good thing for standards) and still have choice (a good thing for freedom).

Re:Wait (1)

west (39918) | more than 3 years ago | (#37133096)

I'm not so certain. If the business community settles in on a standard, instead of the Linux community being composed of a dozen different distributions, all of which have roughly equal mindshare among contributors, you end up with only one to which you contribute if you want to be at all relevant, which means the alternatives wither from lack of customer and eventually programmer interest.

My thesis (speculation to be sure, but built on observation) is that you *cannot* sustain that level of choice in a market that is in any way mature.

Re:Wait (1)

symbolset (646467) | more than 3 years ago | (#37132866)

ARM chips sell over 10x IA chip volumes. More volume supports more choices. Linux runs on a good fraction and I'm writing this on one. It appears that the cost to migrate to a new chip is so low that a gang of volunteers can keep up with hundreds of phone and tablet models and deliver rapid platform updates quite swiftly. I don't see a problem.

This is the "beware fragmentation" pitch we laughed at in distros, in Linux apps, in Android phones and tablets. This is absurd and deserves ridicule. Fragmentation is choice. Do we need to cringe in peril when we discover that our corner convenience store's water market is so fragmented that it offers 17 choices in simple, unflavored, uncolored water? No. We should ask why the water often costs more than gasoline, but that is a separate issue.

Re:Wait (1)

nschubach (922175) | more than 3 years ago | (#37134088)

It's why I often wonder why Linux users dream about taking over the desktop. If that did occur, it would mean a drive to lower cost that would result, almost inevitably, in the wholesale adoption of a single choice, reducing all the other choices to total irrelevance.

I don't understand this logic. If the hardware were standardized, anyone could make the chip, and someone would find a way to compete (speed improvements, power consumption, ...).

The whole deal with ARM standards is probably going to be solved by Windows 8 (unfortunately), if it sticks to the promise of running on ARM. Microsoft will step in and say "Here is what we will support," and the chip shops will fall in step.

Re:Wait (1)

west (39918) | more than 3 years ago | (#37134266)

I don't understand this logic. If the hardware were standardized, anyone could make the chip, and someone would find a way to compete (speed improvements, power consumption, ...).

My comment about the desktop was unrelated to ARM. I was trying to point out that if you become a significant player in a market where cost rather than flexibility is the main factor (i.e. the mainstream desktop), you are likely to *lose* a lot of your current choice.

Your point about Windows 8 is a very good one. We may lose a lot of choice because of it. On the other hand, much reduced costs to enter a now much larger market may well boost participation significantly.

Re:Wait (1)

obarthelemy (160321) | more than 3 years ago | (#37132808)

That's "Do as I say, not as I do" at its finest. Meanwhile, on the Linux front, choice is great, if you're not happy roll your own, and uniformization is death.

Re:Wait (1)

sjames (1099) | more than 3 years ago | (#37133220)

More like wouldn't it be nice if they would at least occasionally meet, talk shop, and perhaps agree voluntarily to be a bit more compatible. That and don't go making changes for the sake of changes. Pick a design that works and stick with it.

That's the trouble with a monolithic kernel (0, Troll)

Animats (122034) | more than 3 years ago | (#37132244)

The embedded world doesn't have much trouble with this. For QNX, there's the kernel, which is the same for all CPUs with the same instruction set, and a "board support package", which has the driver programs for a given board or variant.

Linux is a monolithic kernel, and so it has to be hacked all over the place to deal with architecture variations. Linux lacks a clean conceptual model of operating system vs. board support.

Re:That's the trouble with a monolithic kernel (2)

denis-The-menace (471988) | more than 3 years ago | (#37132266)

You mean like a HAL in Windows NT.

Now I know where MS got the idea from.

Re:That's the trouble with a monolithic kernel (2)

Yvan256 (722131) | more than 3 years ago | (#37132492)

And bad news for Windows NT users named Dave.

Re:That's the trouble with a monolithic kernel (0)

Anonymous Coward | more than 3 years ago | (#37133800)

You mean Dave Cutler?

Re:That's the trouble with a monolithic kernel (1)

Yvan256 (722131) | more than 3 years ago | (#37133862)

Nope, Dr. Dave Bowman.

Re:That's the trouble with a monolithic kernel (1)

Osgeld (1900440) | more than 3 years ago | (#37132308)

I have 2 arms sitting on the side of the bench now that are incompatible instruction sets ... so back to square one I guess

Re:That's the trouble with a monolithic kernel (0)

Anonymous Coward | more than 3 years ago | (#37132406)

I kinda wish MIPS had done a better job at pushing their embedded cores. If MIPS had won we would already have a full, stable, 64-bit ISA from which to build servers.

Re:That's the trouble with a monolithic kernel (0)

Anonymous Coward | more than 3 years ago | (#37132630)

I have 2 arms sitting on the side of the bench now that are incompatible instruction sets ... so back to square one I guess

Funny, my two arms seem to have incompatible instruction sets too...especially when I try to be the drummer in RockBand.... ;)

Re:That's the trouble with a monolithic kernel (1)

Megane (129182) | more than 3 years ago | (#37134282)

My arms are incompatible because they have thumbs on the opposite sides of the hand. I have to wear a different glove for each one.

Re:That's the trouble with a monolithic kernel (4, Interesting)

Anonymous Coward | more than 3 years ago | (#37132372)

The problem is that microkernels have always been harder to develop and slower (if not done carefully). And not all "board features" can be separated/exposed from/to the kernel easily when done externally.

For instance, paging and memory management is usually something that would go in the kernel, even a microkernel. Do you know how many different ARM MMU interfaces there are, and how many ARM processors don't have an MMU, or implement only a subset of some other MMU? And then there are the dual-core processors now as well. I wonder how many different interfaces there are for controlling multiple ARM cores or ARM processors.

Basically, ARM is a cluster fuck for OS development. They need some form of standardization if they ever hope to get widespread OS support. Linux is probably only supported by most boards because the board manufacturers submit patches to the Linux project. By widespread, I mean each board supporting a minimum of 3 different operating systems, for instance Windows, Linux, and something proprietary or a BSD.

Re:That's the trouble with a monolithic kernel (1)

Relayman (1068986) | more than 3 years ago | (#37132464)

Based on your comment, Linux should add an abstraction layer, resolved at compile time (for optimum performance), that isolates the various flavors of ARM from the rest of the kernel.

You make it sound like the best and brightest computer jocks aren't working on Linux for free. Imagine that...

Re:That's the trouble with a monolithic kernel (0)

Anonymous Coward | more than 3 years ago | (#37133750)

Nope, even the kernel's core assembly instructions for the various ARM instruction sets have to be different. You're talking out of your ass; your employer has the "best and brightest" shortage.

Re:That's the trouble with a monolithic kernel (2)

Jonner (189691) | more than 3 years ago | (#37132490)

The embedded world doesn't have much trouble with this. For QNX, there's the kernel, which is the same for all CPUs with the same instruction set, and a "board support package", which has the driver programs for a given board or variant.

Linux is a monolithic kernel, and so it has to be hacked all over the place to deal with architecture variations. Linux lacks a clean conceptual model of operating system vs. board support.

Linux supported many architectures before ARM, so Linus's complaints don't come from a purely PC mindset. You also seem to be ignoring the fact that Linux is, and has long been, a major part of the embedded world. How many smartphones run QNX?

Re:That's the trouble with a monolithic kernel (5, Informative)

Entrope (68843) | more than 3 years ago | (#37132640)

Microkernel versus monolithic kernel has nothing to do with board support packages.

Linux has the equivalent of "board support packages" -- they can be as small as one file, but are more often just a handful: a C file that describes memory and I/O mappings and other peripherals that cannot be safely detected at runtime, sometimes a default configuration (defconfig) file, and maybe some other pretty small driver-like files that manage some of the mess that Linus was talking about. (For example, the BeagleBoard has three C files: one to define the board, one to manage LCD video configuration, and one for audio setup; it shares a defconfig with every other board using an OMAP2/3/4 CPU.)

That is in sharp contrast to my experience with commercial RTOSes, where a BSP might consist of a dozen C source and header files, plus another half-dozen configuration files. For the boards I have used, Linux has the smallest set of board-specific files, a microkernel RTOS has the next smallest, and a Unix-based RTOS has the largest. Linux doesn't call its board-specific file sets BSPs because they are (a) too small to really call a "package" and (b) not controlled and shipped separately. (Linux is not about locking down what the end user can do, so there would be no point in having BSPs for officially supported boards.)

Re:That's the trouble with a monolithic kernel (5, Informative)

FrangoAssado (561740) | more than 3 years ago | (#37132894)

Exactly.

The problem is not that adding support for a new board to Linux is too hard; in fact, it's almost the opposite. There are already tens of slightly incompatible boards to support, and every time a company makes a new one, they don't even try to stick to any standard (not that a real standard even *exists*), since it's very easy to just add new code to Linux. See this LKML thread [gmane.org] for Linus's description of the problem from some time ago.

Using a microkernel doesn't help at all; you still have to code for all of the slight incompatibilities, regardless of any differences in logical organization.

Re:That's the trouble with a monolithic kernel (2, Informative)

GooberToo (74388) | more than 3 years ago | (#37133050)

and so it has to be hacked all over the place to deal with architecture variations.

Bullshit. Linux abstracts such details through various standardized functions and macros. If you bothered to pull your head from your ass and take even a quick look at the Linux source tree, you could clearly see that the architecture variants are cleanly broken out.

Not only is your post NOT "Interesting", as was modded, it is factually, "Troll".

Different for embedded rigs than PCs (5, Insightful)

Anonymous Coward | more than 3 years ago | (#37132278)

They're not trying to cut corners for the hell of it, but for performance, power usage, and other actual engineering reasons.

You just can't build smartphones and tablets with that same common architecture, or else you're adding too many chips and circuits you don't need.

It's no big deal that PCs ship with empty PCI slots and huge chunks of the BIOS and chipset that are rarely if ever used (onboard RAID, ECC codes, so on and so on), but when you're trying to put together a device as trim and minimalist as possible, you're going to end up with something slightly different for each use case.

Re:Different for embedded rigs than PCs (1)

hedwards (940851) | more than 3 years ago | (#37132520)

He's acknowledging that, but at the same time discounting the advantages of having a minimalist option. I don't see any problem with having a heavier-duty ARM available, but suggesting that there's no value in having chips with just the necessary circuits is silly.

Re:Different for embedded rigs than PCs (3, Insightful)

RobertLTux (260313) | more than 3 years ago | (#37132598)

There is a difference between %feature% being present/absent and %feature% having 30 different implementations (of which 12 are actually hostile to the others).

When you have to have a Venn diagram with PLAID as one of the circles, you are in trouble.

Re:Different for embedded rigs than PCs (2)

UnknowingFool (672806) | more than 3 years ago | (#37132770)

Unfortunately, one of the advantages of ARM is that the chip maker can heavily customize what is on the SoC. Most of them don't mess with the core. I don't think the different makers intend to have hostile features, but given time constraints for development, they can't check with other companies (some of them competitors) to see if their optimization hurts others.

Re:Different for embedded rigs than PCs (1)

NoNonAlphaCharsHere (2201864) | more than 3 years ago | (#37132616)

This whole article is bullshit. Is everyone forgetting the varying instruction sets of the 386, 486, Pentium, Pentium 2-4, Xeon, x86-64, etc.? Plus all the millions of Northbridge and Southbridge chipsets from Intel, Via, etc., plus all the different busses through the ages, plus 92 different kinds of temperature monitoring, USB, ATAPI, ACPI...

And we're badmouthing ARM for being a constantly moving target? And that manufacturers are throwing shit at the wall? Huh???

Re:Different for embedded rigs than PCs (3, Insightful)

Pentium100 (1240090) | more than 3 years ago | (#37133006)

And yet, you can run, say, DOS on all of those computers. Critical devices support a "generic" interface. Any VGA card will support the standard VGA registers, and disk drives can be accessed using the standard IDE interface (SATA controllers can emulate it). SCSI drives can be accessed using INT 13h; the controller BIOS takes care of it. Keyboard/mouse use one of a few standard interfaces (and USB can emulate PS/2).

Now, when you get the basic system running, you can load drivers to access all of the features of the hardware (for example, different resolutions of the VGA card).

For ARM you have to recompile the kernel for most of the chips and boards for it to even boot. So, how would you create a way to install an operating system from media without using another PC?

Note to self: (3, Funny)

sgt scrub (869860) | more than 3 years ago | (#37132358)

Goals for Friday.
1) play all pink floyd albums in a continuous loop.
2) make bubbly gurgle sounds with my "sandwich".
3) contemplate "making a chip out of what sticks on the wall".

Sounds like Linux (0)

Anonymous Coward | more than 3 years ago | (#37132364)

This actually sounds a lot like the comparison between Linux (ARM) and Mac OS X / Windows (Intel CPU).

Companies trying to support Linux with closed-source software struggle with a hodgepodge of distributions, kernels, and compilers.

IOW (1)

CheerfulMacFanboy (1900788) | more than 3 years ago | (#37132396)

"ARM should be more like my previous employer Transmeta".

more like Transmeta? (1)

Lead Butthead (321013) | more than 3 years ago | (#37132602)

"ARM should be more like my previous employer Transmeta".

I hope by that he doesn't mean "unprofitable and get bought for pennies on the dollar" like Transmeta.

Openness? (2, Insightful)

Baloroth (2370816) | more than 3 years ago | (#37132426)

Is Linus Torvalds (implicitly, at least) criticizing ARM because it is open and therefore anyone can create their own version of it? As opposed to x86, which has a restricted licensing set (AMD/Intel/Via... Via still exists, right?)? Because that is, AFAICT, exactly why ARM is so varied: anyone can roll their own. With the result that many do.

Kinda ironic (and I do mean *ironic*) that the creator of Linux would be complaining about this. I guess he is finally discovering why, in some cases, a regulated and restricted environment can be good (note: if x86 was a monopoly, I would not be saying that. But AMD and Intel are fierce competitors, so it isn't at all monopolistic). Open environments often become "hodgepodges" and lend themselves to non-standardization. Closed ones don't (well, they can, but generally they don't, and definitely not as fast as open ones) and can be easily standardized (witness how Intel accepted AMD's x86-64 set for consumers over their own IA-64 system). The result is, in the case of CPUs, good for consumers.

Note: I am not proclaiming the virtues of proprietary systems, or claiming they are better than free and open ones. Just pointing out the irony of the situation.

Re:Openness? (0)

RyuuzakiTetsuya (195424) | more than 3 years ago | (#37132450)

Linus doesn't have the RMS/ESR stick up his ass about "open." Linux was built out of necessity because no good x86-based *NIX or BSD was available. If HURD had gotten off the ground, Linus wouldn't have bothered with Linux.

Re:Openness? (0)

Anonymous Coward | more than 3 years ago | (#37132734)

Actually, the stick up Stallman's ass is about "free", not "open".

Re:Openness? (4, Insightful)

Jonner (189691) | more than 3 years ago | (#37132660)

Is Linus Torvalds (implicitly, at least) criticizing ARM because it is open and therefore anyone can create their own version of it? As opposed to x86, which has a restricted licensing set (AMD/Intel/Via... Via still exists, right?)? Because that is, AFAICT, exactly why ARM is so varied: anyone can roll their own. With the result that many do.

ARM is not any more "open" than x86. To sell chips implementing modern versions of either instruction set, you must obtain a license from at least one company, and nothing prevents you from extending that instruction set. Many companies have implemented (and often extended) each set over the years, though there are fewer implementing x86 now than ARM. There are probably fewer implementors of x86 because it is much more complex.

I think Linus is criticizing the lack of a common platform surrounding ARM rather than the instructions themselves. The instruction set of x86 chips has grown a lot, especially with x86_64, but the way you boot a PC hasn't changed much for example.

Re:Openness? (1)

yuhong (1378501) | more than 3 years ago | (#37133524)

ARM is not any more "open" than x86. To sell chips implementing modern versions of either instruction set, you must obtain a license from at least one company and nothing prevents you from extending that instruction set

Yes, but I think ARM is much easier to license than x86.

Re:Openness? (0)

Anonymous Coward | more than 3 years ago | (#37134336)

Indeed; the ARM was originally designed precisely because Intel *wouldn't* license their processor designs for modification in the way Acorn wanted.

Re:Openness? (1)

Matt_Bennett (79107) | more than 3 years ago | (#37132784)

Open? How is ARM open? ARM is a very popular but *licensed* core that you must pay a good deal of money to license. According to the Wikipedia article on ARM, in 2006 it cost about $1,800,000 per license.

Re:Openness? (1)

gabebear (251933) | more than 3 years ago | (#37133076)

Licensing ARM is trivial, x86 is more or less impossible to license. The cost you are quoting is the average license cost, Wikipedia also breaks it down to a per-device cost of $0.11 per device.

Re:Openness? (1)

Matt_Bennett (79107) | more than 3 years ago | (#37133960)

I'm 100% certain that if you ask ARM to license 10, 100, or even 1000 cores at $0.11 per core, they won't even talk to you. Developing a device around an ARM core is expensive and has high start-up costs. Remember that $1.8M is the average cost of a license; some people pay more, some less, but ARM Holdings is a for-profit company, not a charity. They are out to make money. It is not in their business interest to license the core to you if they aren't going to make money off of it, and on average they made $1.8M per license (in 2006).

Re:Openness? (1)

GooberToo (74388) | more than 3 years ago | (#37133092)

Open? How is ARM open?

Probably because there are royalty-free ARM designs freely available for use by anyone. They're not ARM's leading-edge designs, but ARM is freely available.

Re:Openness? (1)

zixxt (1547061) | more than 3 years ago | (#37133534)

Open? How is ARM open?

Probably because there are royalty-free ARM designs freely available for use by anyone. They're not ARM's leading-edge designs, but ARM is freely available.

Until ARM is like, say, OpenSPARC or LEON, where you can get the source code of the chips and logic for free under the GPL or some other open source license without paying any fee, ARM is anything but freely available.

Re:Openness? (1)

Arlet (29997) | more than 3 years ago | (#37133586)

Or, more likely, the $1 million+ license fees, and the royalties per core are not a big obstacle for dozens of different licensees.

In return for the license, you get a high quality core for your ASIC, so that's worth it for a lot of customers.

Re:Openness? (1)

Matt_Bennett (79107) | more than 3 years ago | (#37133604)

There are? OpenCores has one beta VHDL implementation (it hasn't been updated since December 2009) that I can find with a quick search- everything else I find leads to a dead-end. I don't see any ARM cores listed on opencores that have been ASIC proven.

While there may be some designs available, I don't think any of the ARM implementations that are in the Linux kernel are based on an open core. If you are aware of an open core that can run Linux, I would appreciate a pointer.

Beyond anything else, ARM is a trademark used to refer to one of a bunch of cores that the ARM Holdings company has made. Saying you have an open ARM core is only scratching the surface of what the part actually does; for example, the first ARM core (ARMv1) had no cache, no MMU, and ran (typically) at less than 2 DMIPS, not something you'd really have a hope of running the Linux kernel on.

Re:Openness? (1)

Seyedkevin (1633117) | more than 3 years ago | (#37132976)

I don't understand why people keep assuming that "open" means "incompatible is good".

Yes, there are people who make new standards, perhaps because they felt it logical, found it impractical to adhere to pre-existing standards, or simply made a mistake. But the vast majority of open source is about implementing standards in different ways.

When open source applications *do* make new standards, it's very common for the standard to come with a nice little library so that anyone else can reimplement it. This isn't like proprietary software, where someone has to reverse engineer the program and fight through obfuscation (see Skype) for another application to be able to communicate and, in other words, contribute to the universality of the standard.

But here, we're talking about hardware. You can't change hardware. You could say that every ARM chip is like a different standard which Linux is supposed to abstract away, which is redundant effort and a hassle for everyone. Openness, I think, is about allowing others to contribute, and standardization helps greatly to let this happen.

Heck, look at POSIX. It's done some of the most good for FLOSS, since it allowed all compliant operating systems to contribute to one another.

It's *not* ironic, because openness thrives on standards.

Re:Openness? (2)

JackDW (904211) | more than 3 years ago | (#37133042)

Actually it is the other way around. The x86 platform is mostly based on open standards. There are more 486-compatible clones than you may realise. ARM, on the other hand, is strongly proprietary. There are no clones at all. The ARM fragmentation has occurred because of a lack of open standards - while the PC guys were standardising PCI, USB and VGA, every ARM licensee was reinventing the wheel to give their own SoC the features that nobody else had. While the core ISA is always the same, the system architecture is not.

When ARM CPUs were only used for embedded systems, this was fine, because each vendor could provide a BSP for each supported OS. Now that ARM CPUs are being used in general-purpose computers like Windows Phone 7 and Android handsets, the fragmentation has become an issue preventing users from loading alternative firmware. Clearly, this is a concern for Linus Torvalds (and Linux supporters who understand the issue) as it causes pain for kernel development and makes it essentially impossible to produce a single OS that could be installed (say) on any ARM-based smartphone.

Re:Openness? (1)

zixxt (1547061) | more than 3 years ago | (#37133694)

x86 is not open at all. It is one of the most closed archs still around; fewer than a handful of companies hold licenses to make x86 CPUs, and Intel is never going to sell you one. One cannot compare the x86 arch to other arches such as Power(PC) or SPARC and say that x86 is open, unless by "open" you mean closed. http://en.wikipedia.org/wiki/Comparison_of_CPU_architectures [wikipedia.org] - look at the table for open and/or royalty-free architectures.

Missing the point (2)

Weaselmancer (533834) | more than 3 years ago | (#37132452)

The reason why x86 is so unified is because they're all in PCs. You only have the one form factor to shoot at. So of course the CPUs will be highly similar.

ARM fills a different niche. You see ARM chips in tablets, phones, industrial control, routers...all over the place. Of course ARM chips will vary more wildly. They're trying to hit more targets. And those targets have unique and tightly defined parameters. That will put them at odds with other designs.

I mean hell, if the x86 has it all figured out so well, then why isn't your cellphone using one?

Re:Missing the point (0)

Anonymous Coward | more than 3 years ago | (#37132600)

My cellphone [aavamobile.com] is using one you insensitive clod!

Re:Missing the point (1)

gman003 (1693318) | more than 3 years ago | (#37132876)

Uh, x86 is everywhere. PCs. Supercomputers. Microcontrollers. Embedded systems (you can still buy i386 chips because a lot of embedded systems like traffic-light controllers use them). There have even been a few game consoles using it (the original Xbox and the WonderSwan series). Quite a few of them don't follow the PC standard, and that's fine. But there should still be a standard for common uses - even just covering smartphones, tablets, and netbooks would be a major improvement over the current chaos.

Re:Missing the point (1)

Svartalf (2997) | more than 3 years ago | (#37134372)

ARM's everywhere. Look at most of your consumer electronics... Odds on, you're looking at an ARM in most of them. There's at least 1-2 ARMs in your X86 machines as well, doing tasks you wouldn't relegate to the X86.

Re:Missing the point (1)

agent_vee (1801664) | more than 3 years ago | (#37133166)

The reason phones use ARM chips is because ARM focused on providing low power consumption solutions. Intel/AMD basically ignored the mobile phone sector for a long time.

ALSO SPRACH LINUX DUDE !! (0)

Anonymous Coward | more than 3 years ago | (#37132500)

Yawn

More coding, less jabbering, linux dude !!

Diversity - Good/Bad? (0)

Anonymous Coward | more than 3 years ago | (#37132506)

There are upsides and downsides. Diversity brings resilience in the face of software threats, since you don't have homogeneous systems capable of executing the same malware payload code. The downside is exactly per the article: support becomes much harder. I'd like to think that if we had one CPU in the world it would be rock-solid and extremely well tested, because more eyes would be looking at it. On the other hand, it's a business objective to "get it to 75% and ship it", so that one processor would probably suck as much as any other hardware/software system out there but would cost 20x as much. You can't just say "open-source it" either, because we all know how they love to fork everything to death. It's just the price of progress. Look on the bright side though: at least ASCII hasn't changed, so our 7th grade poetry still loads even on these new-fangled iPads.

No, no no no NO! (1)

AdrianKemp (1988748) | more than 3 years ago | (#37132588)

You know what we got with a single consistent architecture for PC CPUs? The need to use ARM chips.

For years there have been more promising options for instruction sets, and even the basic design of the chips. None of them were taken seriously because we were stuck with a standard.

Now, let's keep our standard; it's good for many things. But ARM is meant to solve some of the problems that standard created, so why the hell would we want to give it the same problems?

Pot calling tea kettle black (0)

Anonymous Coward | more than 3 years ago | (#37132590)

throwing it at a wall and seeing where it sticks, and making a chip out of what's stuck on the wall.

... is this different than how Linux and OSS in general have progressed?

They can't do that (1)

SnarfQuest (469614) | more than 3 years ago | (#37132620)

it looks like they're taking hardware and throwing it at a wall and seeing where it sticks, and making a chip out of what's stuck on the wall.

They can't do that! I have the patent!

Let us not forget Transmeta... (1)

synthesizerpatel (1210598) | more than 3 years ago | (#37132654)

One of the reasons ARM has succeeded over Intel in the embedded space is exactly because it's a hodgepodge in terms of implementation. ARM just designs the chip; they don't make it. They leave that to others, who in turn support their own chips by providing kernel patches, which has been amazingly successful for Linux (and incidentally the non-Linuxy iPhone as well).

Not to talk trash (he definitely understands the kernel and software), but the nuances of hardware development and what makes hardware successful or unsuccessful aren't in his core skill set. After all, back when he could have picked any position anywhere, he picked Transmeta.

A lot has changed since then, but ARM has done nothing but help Linux. If your chip vendor has a poopy Linux implementation, they'll sell less. If they have a great one (and great documentation), they'll sell more. TI is a pretty good example of an awesome ARM/Linux implementation, and... there are less awesome examples.

Re:Let us not forget Transmeta... (1)

Jonner (189691) | more than 3 years ago | (#37132762)

A lot has changed since then but ARM has done nothing but help Linux. If your chip vendor has a poopy Linux implementation they'll sell less. If they have a great one (and great documentation) they'll sell more. TI's a pretty good example of an awesome ARM / Linux implementation, and.. there are less awesome examples..

How do you define "help Linux"? The popularity of Linux on ARM has produced a giant, acrimonious fork which is not helpful to the community in general. Obviously, this wouldn't have happened in the first place if Linux and ARM weren't good for each other, but for the community to function well, things need to change. Linus is hopeful that this will be resolved in four or five years as a result of his and others' efforts to fix the very problems he's complaining about. The problem is not so much "poopy" Linux forks on ARM devices as it is the fact that there are many good forks that don't fit together very well.

Microsoft to the rescue (0)

Anonymous Coward | more than 3 years ago | (#37132736)

What this needs is for Microsoft to work with an ARM SoC vendor to come up with a platform architecture for an ARM-based server running Windows. It's in Microsoft's best interests (being that Windows is a pay-for OS) to take the lead here and I expect they would do a damn fine job.

With an architecture in place and a single vendor building to that specification, other ARM SoC manufacturers who want to play at the Windows table will have to conform. This would be the start of a common ARM platform. It may not be perfect, but it would be common.

Hate on MS all you like, but they could own the shit out of this.

That would be great. (1)

symbolset (646467) | more than 3 years ago | (#37133178)

That should sell in the dozens.

Features of desktop vs mobile ARM (1)

Twinbee (767046) | more than 3 years ago | (#37132750)

Can someone give me an example of the kind of non-compatible functionality you'd get with a desktop ARM versus a mobile version?

It seems to me that they should implement all the features for great cross-compatibility, but just make them slower if need be. I doubt it would take up much more die space...

I really dislike fragmented environments, and at most, we should have 2 ARM versions, preferably one, and not for example 20000.

Re:Features of desktop vs mobile ARM (1)

Matt_Bennett (79107) | more than 3 years ago | (#37132968)

It's not desktop vs. mobile, it is manufacturer X vs. manufacturer Y. ARM is just the core- the company doesn't make chips. They license their core to people who design with it. What is fragmented is everything outside the core- that is, the value that each licensee adds to the core to make their own product. They're embedded processors- they get surrounded by many peripherals such as analog to digital converters, interrupt controllers, serial ports, memory interfaces ... the list goes on and on.

To me, it brings back memories of the early PC era, where you may have had a game with *awesome* graphics and sound, but you had to spend hours fiddling with the IRQs and other settings to get the program to work reliably.

Re:Features of desktop vs mobile ARM (1)

Twinbee (767046) | more than 3 years ago | (#37133222)

Thanks that helps clear things up.

But apart from maybe "memory interfaces", the other things you mentioned (like analog to digital converters) wouldn't be of concern to the average programmer, who would still maintain cross-compatibility across devices, assuming he didn't write for special things like cameras, sound recorders, or networking.

what he means is - (0)

Anonymous Coward | more than 3 years ago | (#37132974)

Translation: "I'm irked that Apple has the best ARM silicon, followed closely by the Tegra and the Snapdragon, and all the other players are mostly just experiments which will fade away, which means the majority of the ARM market is not reachable with the Linux kernel."

Suggestion: If your interest is mass market appeal, then just focus on the 2 or 3 best pieces of silicon, and forget about the rest. Optimize the kernel for performance on those.

In other markets, ARM chips are widely differentiated for a reason: they target countless embedded systems, something the "PC" has never been good at. Either do the work to support all the different flavors, or stop worrying about it. I think ARM is successful specifically because it's the anti-Intel platform.

sounds very similar, but what are the solutions? (1)

lkcl (517947) | more than 3 years ago | (#37133174)

linus's views sound very similar to what i've written about, at some length on this subject: https://lkml.org/lkml/2011/7/1/473 [lkml.org]

the thing is that absolutely nobody has come up with any solutions. the only solution i've heard is the one that i recommended, and there's been no reaction or response to it, as of yet.

the problem is the sheer overwhelming diversity. therefore, the solution is to prioritise linux kernel patches that come from hardware syndicates or specifications that cover more than just one hardware device. for example, a patch to add ARM USB3 support would instantly be accepted, because it covers multiple hardware devices. likewise, a reference board would be instantly accepted, because it allows companies plural to develop platforms based around it.

what people are forgetting is that there is no BIOS in the ARM world and no "common hardware platform" (PCI, PCIe, Northbridge, Southbridge - in most cases all of those things are gone; ARM CPUs with PCIe are exceptionally rare). often there are massive differences between CPUs even on a minor upgrade from the same manufacturer, so each hardware device has to have a custom-tailored device layout, and that means a custom-tailored linux kernel.

arm vs x86 (1)

yupa (751893) | more than 3 years ago | (#37133392)

First, ARM only does the CPU; everything around it (timer, interrupt controller, memory controller, UART, ...) can be anything.

And there is no easy way to discover those peripherals (no equivalent of the PCI bus, ACPI, ...). That's what device tree is meant to fix.
But that won't really solve the code size problem. Embedded companies want to reduce cost and often design simple SoC blocks (GPIO, UART, ...), and this makes the number of drivers very big.
Also, there are no standard controller interfaces like EHCI or AHCI.
Even the same vendor can change the controller across chip generations.

So I think it will be very difficult to unify things. That's the advantage of ARM for SoC makers: they are free to do what they want.
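The discovery gap described above is exactly what device tree addresses: since most ARM peripherals can't be enumerated the way PCI or ACPI devices can, each board ships a static description that the kernel parses at boot. A minimal, hypothetical fragment (the vendor name, addresses, and `compatible` strings here are made up purely for illustration) might look like this:

```dts
/dts-v1/;

/ {
	model = "Acme Example Board";        /* hypothetical board */
	compatible = "acme,example-board";

	/* The kernel cannot probe for this UART; the device tree
	   must state its MMIO address, IRQ, and clock explicitly. */
	serial@101f1000 {
		compatible = "acme,example-uart";
		reg = <0x101f1000 0x1000>;   /* base address, size */
		interrupts = <12>;
		clock-frequency = <24000000>;
	};
};
```

A PC driver can find its hardware through bus enumeration; an ARM driver instead matches on the `compatible` string, so every new SoC or board needs a new description (and, before device tree, a new board file compiled into the kernel).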
