
The Ugly State of ARM Support On Linux

Soulskill posted more than 2 years ago | from the penguins-have-weak-arms dept.


jfruhlinger writes "Power-efficient ARM processors are moving up the food chain, to the extent that even Windows will soon see an ARM port. Linux, which has long been cross-platform, should have a long head start in this niche, right? Well, blogger Brian Proffitt explains just how messy the state of Linux support for ARM is right now, partially as a result of mutually conflicting kernel hacks from ARM manufacturers who just wanted to get their products out the door and weren't necessarily abiding by the GPL obligations to release code. Things are improving now, not least because Linus is taking a personal hand in things, but sorting the mess out will take time."

94 comments

NSLU2 (0)

taktoa (1995544) | more than 2 years ago | (#36506224)

From my experience running Debian with Linux 2.4 on an NSLU2 overclocked to 266MHz (XScale processor), the problem with Linux on ARM is not really stability but speed.

Re:NSLU2 (2)

larry bagina (561269) | more than 2 years ago | (#36506486)

I didn't have a speed problem with my NSLU2 (not overclocked), but the memory (32M) seemed insufficient. It doesn't matter how fast your processor is if you spend 90% of your time swapping.

Re:NSLU2 (2)

Runaway1956 (1322357) | more than 2 years ago | (#36508572)

It doesn't matter how fast your processor is if you spend 90% of your time swapping.

THAT needs to be publicized prominently on all computer vendors' sites, so that even the most feeble-minded consumer comes to understand it. I was involved in a discussion recently, among GAMERS of all people. A guy on a budget needed to be convinced that he would do better with an older, slower processor and > 4 GB of memory than he would with the fastest CPU he could afford but only 2 GB of memory. Assuming Windows 7 was to be installed, along with a long list of "networking" apps (TeamViewer among others), those 2 GB of memory would have been woefully inadequate.

Any vendor today who sells a 64 bit computer with less than 4 GB installed memory should be sued for misrepresentation.

Re:NSLU2 (5, Interesting)

petermgreen (876956) | more than 2 years ago | (#36506874)

That is because the slug is old hardware, wasn't exactly high end when it was released, and was bought in large numbers by Linux hobbyists. So it's well-known but slow. The shortage of RAM doesn't exactly help either (it's possible to upgrade it, but it's not for the faint-hearted). Modern ARM hardware is faster, though there are speed issues caused by the floating point mess.

AIUI the big issue on ARM is lack of a standard platform.

On a PC you can assume you have a BIOS that can load stuff from HDD and execute it in an environment with basic disk access services. You can assume the addresses of most of the basic hardware (real-time clock, interrupt controllers, etc). You can generally assume there is a PCI bus for auto-configuration of other devices, and that the PCI bus has its configuration space mapped to the processor in a standard way. There is a standard way of reading out how much RAM there is and how it's mapped, and so on. These things mean you can build one kernel and use it with one bootloader on pretty much any PC.

On ARM, afaict, there is no standard platform. Therefore each ARM processor, and sometimes each ARM board, needs specific support to tell the kernel things like how to find out where stuff is mapped in the processor's address space, how to find out how much RAM there is, and all the other quirks of the new system. Often these things are hacked up as quickly as possible by vendors who want to get a working system out, which appears to be what is pissing Linus off*.
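
To make that concrete, here is roughly the shape of a per-board "board file" from arch/arm of this era. All names here are invented, and the exact machine-descriptor fields vary by kernel version; this is a sketch of the idiom, not any real board's code:

    /* arch/arm/mach-acme/board-widget.c -- hypothetical board file */
    #include <linux/init.h>
    #include <asm/mach/arch.h>

    static void __init widget_map_io(void)
    {
        /* set up static iomem mappings for this SoC's peripherals */
    }

    static void __init widget_init_irq(void)
    {
        /* program this SoC's (non-standard) interrupt controller */
    }

    static void __init widget_init(void)
    {
        /* register the devices soldered onto this particular board */
    }

    MACHINE_START(WIDGET, "Acme Widget Board")
        .boot_params  = 0x80000100,     /* where the bootloader leaves ATAGs */
        .map_io       = widget_map_io,
        .init_irq     = widget_init_irq,
        .init_machine = widget_init,
    MACHINE_END

The bootloader passes a machine number, the kernel matches it against these descriptors, and everything the PC world discovers at runtime is hardcoded here instead.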

There is also the floating point mess. ARM has been used with many floating point units over the years. Right now there is one that is most common, and Debian at least seems to have decided that the way to go is to build two ports: armel for systems without FPUs (or systems with unsupported FPUs) and armhf for systems with VFP. But if VFP falls out of favour, they will be left with either adding yet another port or trying to hack something up. Also, afaict there is no easy way to migrate between different Debian ARM ports without reinstalling.

* and afaict pissing Linus off is bad, because if he doesn't merge code then it tends to bitrot unless it has very active maintainers.

Re:NSLU2 (2)

JackDW (904211) | more than 2 years ago | (#36507936)

Quite right. Good explanation of the issues. Fans of Linux and ARM often have difficulty grasping why support for the CPU architecture is not enough. Linux must support the system architecture as well, and if every SoC has a different architecture (and they do), then that's going to be really messy.

Re:NSLU2 (1)

LWATCDR (28044) | more than 2 years ago | (#36508168)

Last time I looked there was also no real standard SPI device, and no standard for A2Ds or GPIO. I2C does at least have a standard system, and com ports are well supported. I have no idea how well the CAN bus is supported, but up till now embedded ARM seems to have been well on the back burner after servers, HPC, and the desktop.

Re:NSLU2 (4, Informative)

EETech1 (1179269) | more than 2 years ago | (#36509062)

I imagine it's very similar to what I find rewriting libraries for microcontrollers from various vendors, and even for different micros from the same vendor. While they all have similar hardware, e.g. a CAN interface, there is no standard way of configuring it: bit timing, message IDs, acceptance masks and filters, the number of available mailboxes and their functionality, message TX/RX signaling, interrupt types, error reporting, register descriptions; it's all different! ADCs are the same way: timing, triggering, re-triggering, addressing, configuring, accessing, input scaling, reference source, result scaling, register access, all different for what is essentially the same hardware (e.g. a 10-bit successive-approximation ADC).

Every single one of the various little tidbits of IP that gets added is different from each and every manufacturer!
No two vendors do anything the same, and one would probably be sued by the other if they did. We had to get special approval from Motorola to have Infineon replicate similar functionality in one of their DSPs to allow us to use the same code output from Simulink across multiple ECU families.

You have to be different to be better, and all these vendors implement features attempting to be the best so you have a reason to purchase their device over the other 10 that are essentially just like it.

It makes it very difficult for the person developing the API to keep consistency across multiple platforms without either dumbing it down to a common feature set (losing features along the way), shipping slightly different APIs or slightly different usage per micro, or designing the API around one application and hiding much of the other functionality.
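
To make that concrete, here's a hypothetical sketch of two vendors' interfaces to what is functionally the same 10-bit SAR ADC. Every register name, address, and bit position below is invented:

    #include <stdint.h>

    /* Vendor A: one control register; the start bit self-clears when done. */
    #define A_ADC_CTRL (*(volatile uint32_t *)0x40010000)  /* invented */
    #define A_ADC_DATA (*(volatile uint32_t *)0x40010004)

    static uint16_t vendor_a_read(unsigned ch)
    {
        A_ADC_CTRL = (ch << 4) | 0x1;       /* channel select + start */
        while (A_ADC_CTRL & 0x1)
            ;                               /* busy-wait on the start bit */
        return A_ADC_DATA & 0x3FF;          /* right-aligned 10-bit result */
    }

    /* Vendor B: separate mux/start/status registers; left-aligned result. */
    #define B_ADC_MUX   (*(volatile uint32_t *)0x50002000)  /* invented */
    #define B_ADC_START (*(volatile uint32_t *)0x50002004)
    #define B_ADC_STAT  (*(volatile uint32_t *)0x50002008)
    #define B_ADC_RES   (*(volatile uint32_t *)0x5000200C)

    static uint16_t vendor_b_read(unsigned ch)
    {
        B_ADC_MUX = ch;
        B_ADC_START = 1;
        while (!(B_ADC_STAT & (1u << 7)))
            ;                               /* wait for the "done" flag */
        return (B_ADC_RES >> 6) & 0x3FF;    /* left-aligned 10-bit result */
    }

Same silicon function, two register maps, two init sequences, and any common API has to paper over both.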

Cheers!

Re:NSLU2 (1)

LWATCDR (28044) | more than 2 years ago | (#36517508)

That is why we have operating systems: to hide the differences. As an application developer I don't care if I am writing to a SCSI, SATA, or USB flash drive.
I would love to see standard interfaces for things like GPIO, A2D, CAN, SPI, and I2C. It would be great to have the option to just use, say, a printer port on a PC to act as GPIO and SPI, write your code on the PC and debug it, then throw it on a Linux SBC and have it run. You kind of can do it now.
Yes, I know that you will still be dealing with things at a lower level than the average applications programmer, but some standard APIs would be nice to use.
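
For what it's worth, the kernel's gpiolib layer is a step in this direction: in-kernel code can use one API no matter which SoC's GPIO controller sits underneath. A minimal sketch (the pin number and label are made up):

    #include <linux/gpio.h>

    #define LED_GPIO 42   /* hypothetical global GPIO number for this board */

    static int led_init(void)
    {
        int err = gpio_request(LED_GPIO, "status-led");
        if (err)
            return err;
        return gpio_direction_output(LED_GPIO, 0);  /* start driven low */
    }

    static void led_set(int on)
    {
        gpio_set_value(LED_GPIO, on);  /* same call on any supported SoC */
    }

Userspace got a similar, if clunky, portable interface via /sys/class/gpio in the same era.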

Re:NSLU2 (2)

Nursie (632944) | more than 2 years ago | (#36508798)

I had a couple of NSLU2s. One ran a simple web server, IMAP and POP3 mail servers (with all sorts of spam filtering), and an ssh server. The other ran MediaTomb and TorrentFlux.

The mail server had a 4GB USB stick as its main drive for several years; the other had a hard drive in a USB caddy, which made a hell of a speed difference.

They weren't exactly quick, but they were cheap and low-powered. If you cut out everything you didn't need (and I mean everything) from the standard Debian distro, you could get an acceptable system running. Then I bought a SheevaPlug and the difference was incredible. Modern processor, modern speeds, comparatively lots of RAM...

I've also been making kernel alterations for the WD Sharespace, and have dug into the arch/arm source tree. It's not *that* messy, it just has loads of different pieces of initialisation code depending on processor variants, board types, attached devices etc etc

Screw ARM (-1)

Anonymous Coward | more than 2 years ago | (#36506240)

Need to support even more CPUs!

Problem is simple (4, Insightful)

JamesP (688957) | more than 2 years ago | (#36506314)

ARM manufacturers are idiots

Intel gets open source, most ARM manufacturers don't.

Hence, most BSPs rely on proprietary drivers, don't have up-to-date support for devices in the mainline kernel, etc.

Also, there's a lack of a 'standard platform', even though ARM is pretty much homogeneous

Things are beginning to change, still. And ARM is still miles ahead of SH, embedded MIPS, etc

Re:Problem is simple (4, Interesting)

Microlith (54737) | more than 2 years ago | (#36506478)

most BSPs rely on proprietary drivers

Not true. Almost every device released today has full driver support in the kernel sources that are dropped. Userspace components notwithstanding, the kernels released are fully capable of supporting other OSes when recompiled (assuming the device will boot them.)

What does happen, however, as I stated elsewhere, is that the drivers are released ONLY in those tarballs, with no revision history, full of Android-specific code, and are never merged upstream into the kernel. This makes porting newer kernels to the device even harder, which you can see in the 2.6.36 and 2.6.37 changeup in how some sound drivers are structured. As a result, you've got tons of drivers for hardware sitting, and rotting, in obscure folders on corporate websites.

And all this mess is before the schism created in the userspace by Android.

Re:Problem is simple (3, Informative)

JamesP (688957) | more than 2 years ago | (#36506540)

I wasn't talking about Android, but the point stands.

If wireless controllers on Android devices don't depend on proprietary drivers, great! That's a start

But try HW-accelerated video playback, 3D drivers, etc.

And some products absolutely depend on those. Think set-top-boxes, multimedia players, etc.

Re:Problem is simple (4, Informative)

Microlith (54737) | more than 2 years ago | (#36506616)

But try HW-accelerated video playback, 3D drivers, etc.

Working on MeeGo makes me all too keenly aware of that mess. None of it really applies to the kernel though, since all interesting bits are in userspace. And the graphics core IP vendors (Qualcomm most notably) have already been refused entry into the kernel because of this.

Re:Problem is simple (1)

Kagetsuki (1620613) | more than 2 years ago | (#36508028)

I totally agree with you, but would reword one point: this has been happening since before Android, but Android has easily made the situation much worse.

Re:Problem is simple (3, Interesting)

serviscope_minor (664417) | more than 2 years ago | (#36506588)

Also, there's a lack of a 'standard platform', even though ARM is pretty much homogeneous.

Kind of. Actually things are not that bad. There are a lot of SoCs out there which bundle an ARM core with a few other cores (ethernet, USB, etc). There are actually staggeringly few vendors for the peripheral cores. The SoC vendors don't generally mention who the core vendor is, but they provide a datasheet and stick the core at some random place in the address space.

As a result, there are a lot of reimplementations of the same drivers. This has been recognised, and people are now trying to spot duplicate drivers and replace them with a single unified one.

Re:Problem is simple (3, Interesting)

Sun (104778) | more than 2 years ago | (#36509384)

Here's my experience. I did a project for a company that was producing an SoC themselves. We were using the DesignWare SPI peripheral. We wrote the driver ourselves (I don't remember right now why; the dw_spi module was not for the right chip or something along those lines. I didn't do the original development).

Turns out this chip doesn't have proper peripheral support. No NAND controller and no integrated MAC, so we used SPI for both persistent storage and for networking. Except the chip isn't fast enough to service the "SPI queue is almost empty" interrupt, despite the DesignWare having a huge queue (256 bytes), no matter how high we place the watermark. So we did some serious trickery to get things working (in essence, routing the SPI chip select to a GPIO and manually controlling activation and deactivation). Poor SPI throughput. Worse, the driver is now unsubmittable, as it contains hacks which really only make sense for this particular chip.
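
The trick described, as a rough sketch rather than the actual driver (the pin number is invented):

    #include <linux/gpio.h>
    #include <linux/spi/spi.h>

    #define FLASH_CS_GPIO 17   /* chip select rerouted to a GPIO, active low */

    static int xfer_with_manual_cs(struct spi_device *spi,
                                   struct spi_message *msg)
    {
        int ret;

        gpio_set_value(FLASH_CS_GPIO, 0);  /* assert CS ourselves... */
        ret = spi_sync(spi, msg);          /* ...so the device stays selected
                                              even if the FIFO runs dry
                                              mid-transfer */
        gpio_set_value(FLASH_CS_GPIO, 1);  /* deassert after the whole message */
        return ret;
    }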

So I come along and suggest hooking the SPI driver to the existing on-board DMA controller: get whole buffers through without the CPU needing to do anything. A bit of hard work, and the DMA is working (not improving performance, but that's another story). Except neither the DMA infrastructure nor the actual hardware is generic enough to let me not care which DMA controller is hooked into the SPI controller. So, more hacks. In theory, I could rework the infrastructure so that it is more generic, but that's a project that would cost (in man-hours) about as much as the original SPI driver rewrite.

The project wound up being canceled, so things never progressed any further, but you can understand that none of that code was ever released. This is not due to the client's desire not to release; search for Baruch Siach's contributions in the enc286 code for an example of vanilla integrated code that was done on that client's dime and with their consent. It's just that there is a limit to how much time a company can authorize merely so that the code is generic enough to go into mainline.

Shachar

Re:Problem is simple (1)

samjam (256347) | more than 2 years ago | (#36509934)

Often the business case is reached long before the project maintainers' standards are met.

The result is that the code cannot be accepted into the project and often isn't even submitted.

Re:Problem is simple (1)

JamesP (688957) | more than 2 years ago | (#36510688)

Yes

If your customer doesn't know how to do proper hardware, it's difficult to do proper software

Been there, done that (but not at such a low level of hw changes)

Still, for example, SPI wouldn't work on PowerQUICC processors (good thing it wasn't essential to the project)

Re:Problem is simple (1)

bgat (123664) | more than 2 years ago | (#36507080)

Anyone who thinks Intel "gets" open source has never worked with them on a technical level. I'm looking at you, Poulsbo.

Re:Problem is simple (1)

JamesP (688957) | more than 2 years ago | (#36507120)

Poulsbo was not done by Intel, IIRC; it's third-party (PowerVR) IP (yes, I got bitten by that) - http://en.wikipedia.org/wiki/System_Controller_Hub [wikipedia.org]

Apart from that, they are one of the least-bad vendors.

Re:Problem is simple (0)

bgat (123664) | more than 2 years ago | (#36507172)

Apart from that, they are one of the least-bad vendors.

Point is, any differences between the top-tier vendors where open source is concerned are mostly splitting hairs. :)

Re:Problem is simple (2)

Microlith (54737) | more than 2 years ago | (#36507824)

Poulsbo is simply the same problem that every ARM device with a PowerVR graphics core has. The company refuses to release sources for the userspace driver, the kernel space stubs are not in the mainline, and they absolutely don't work with end-users.

You're an Idiot (0)

Anonymous Coward | more than 2 years ago | (#36508530)

There is a non-profit consortium called Linaro that comprises ARM players such as Freescale, IBM, Samsung, ST-Ericsson, Texas Instruments, and ARM itself, who do get open source.

Re:You're an Idiot (1)

JamesP (688957) | more than 2 years ago | (#36511560)

Very brave of you posting as Anonymous Coward

You've obviously never worked with a product from these companies either.

The idiot is clearly you

The GPL remarks in the article are nonsense. (4, Informative)

MatanZ (4571) | more than 2 years ago | (#36506336)

The ARM vendors (TI, Samsung, etc.) do release their kernel changes. What they do not do is work with Linus and RMK on getting their code merged upstream. The GPL does not require that they do that.

Re:The GPL remarks in the article are nonsense (2)

10101001 10101001 (732688) | more than 2 years ago | (#36507900)

The ARM vendors (TI, Samsung, etc.) do release their kernel changes. What they do not do is work with Linus and RMK on getting their code merged upstream. The GPL does not require that they do that.

I think you're missing two points. One, except for the claim that some smaller ARM vendors might not be so diligent about releasing their kernel source changes, the article points out that bigger vendors (presumably including TI, Samsung, etc) are complying properly with the GPL. Two, the article/blog was going out of its way to explain that some ARM vendors not working to get their code merged upstream is a bad thing, and that it might result in some vendors' code never being merged into the mainline kernel.

As much as nothing about the GPL requires that vendors try to merge code with the mainline of an open source project, it's equally true that nothing about the existence of roadways/railways/waterways requires anyone to use them. It's just generally stupid for most companies to outright avoid them, especially when it comes to building the smaller pieces that bridge to their front door, or to effectively damage those transport pathways heavily in use. If an ARM vendor wants to reinvent the wheel while many other ARM vendors are cooperating upstream, they'll likely end up producing duplicated code in the short term, increased code-management issues in the long term, and lengthy rewrites/patches/merges in their own forks if they ever choose to try to realign with the mainline kernel in the future.

In short, the issue isn't very much about what GPL requires legally. It's that there's a synergy in cooperation that licenses like the GPL were meant to embody and are frequently used for, where many people can benefit from working together and share the fruits of that effort. Any vendor can always choose to "go it alone", but except in some circumstances it really doesn't make sense for the long term.

Re:The GPL remarks in the article are nonsense (1)

Andy Dodd (701) | more than 2 years ago | (#36511826)

Great clarification there.

As far as really nasty offenders:

Huawei is one of those "smaller" (at least in terms of the US market for mobile devices) vendors that isn't cooperative. They don't release kernel sources until legally threatened to do so.

HTC seems to think it's OK to wait 30-90 days to release kernel sources.

Re:The GPL remarks in the article are nonsense (1)

bryanbuckley (1989454) | more than 2 years ago | (#36522650)

Some employees try to upstream some patches. They are free to do so, since the vendors' trees are public anyway (you can go look at what the big boys are doing to get 3.0 running right now). Most would probably LOVE to see their stuff merged upstream (either to Linux mainline or at least to Google's) so that they don't have to port patches to new kernel versions.

A few reasons you don't see more upstreaming:
    - Some of the code is chip specific (errata, custom IP)
    - The market demands fast iteration, so having an engineer work to get his patches merged upstream is quite a thorn in the side when considering the time frame that can take.
    - A lot of the code that is submitted upstream is flat-out denied already (for expected reasons: not elegant enough, too "my way or the highway" on the part of the vendor)

It does look like Linus really wants to be able to build a generic kernel that can boot on any ARM. That seems to be what Linus and company are getting at, because currently you have to build the kernel with a specific configuration to boot a specific SoC.

Really a lot of the problem lies in the make process for the kernel... there is just no good dependency handling right now...

AMEN (5, Informative)

synthesizerpatel (1210598) | more than 2 years ago | (#36506338)

Having worked on bring-up for three custom ARM projects, I can personally attest to how gnarly it can be. But it's not necessarily something that Linus, or the Linux kernel community at large, will be able to fix.

The main problem is the custom board support. Even though the source code is GPL, vendors give you full source code and even submit it back into the ecosystem; it's just haphazard code that was pushed out the door too quickly. Linus can't stop people from writing bad kernel code; he can stop them from getting it into the mainline, but that's kind of what we have right now. If your code isn't up to snuff it doesn't make it into mainline. That doesn't stop them from shipping a product and giving that code to customers.

In one case, the documentation for the ARM chip I received was a password-protected PDF that you can't even cut text out of, describing how to use the features by writing your own device driver. In that case, they had minimal Linux support, but for all the bells and whistles you had to do it yourself.

The problem is as dense and layered as the chips themselves. What really needs to happen is a standardized method for publishing SoC features in a structured format (XML?) where common features (FIFO registers with a bytes_remaining field? write-only configuration registers, read-only configuration registers, etc.) could be defined and the code could in many cases just be automatically generated.

Need to set reg A to all f's, reg B to all zeros, flip bit 12 of reg C and then your PHY is configured - done.
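
In other words, much of the generated code could boil down to a data table plus a tiny interpreter. A hypothetical sketch, with all addresses and names invented:

    #include <stddef.h>
    #include <stdint.h>

    #define REG_A ((volatile uint32_t *)0x40001000)  /* addresses invented */
    #define REG_B ((volatile uint32_t *)0x40001004)
    #define REG_C ((volatile uint32_t *)0x40001008)

    enum reg_op { REG_SET, REG_CLEAR, REG_TOGGLE };

    struct reg_step {
        volatile uint32_t *reg;
        enum reg_op op;
        uint32_t mask;
    };

    /* This table is what a generator would emit from the SoC description. */
    static const struct reg_step phy_init[] = {
        { REG_A, REG_SET,    0xFFFFFFFFu }, /* reg A -> all f's */
        { REG_B, REG_CLEAR,  0xFFFFFFFFu }, /* reg B -> all zeros */
        { REG_C, REG_TOGGLE, 1u << 12    }, /* flip bit 12 of reg C */
    };

    static void run_steps(const struct reg_step *s, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            switch (s[i].op) {
            case REG_SET:    *s[i].reg |= s[i].mask;  break;
            case REG_CLEAR:  *s[i].reg &= ~s[i].mask; break;
            case REG_TOGGLE: *s[i].reg ^= s[i].mask;  break;
            }
        }
    }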

More complex interlocking mechanisms would be difficult or impossible to express in a cure-all DSL, but even if you could only eliminate 80% of the problems, that'd be great.

Which brings me to the other problem: a lot of what you do to get ARM systems up and running happens way before you run Linux, in U-Boot/RedBoot or whatever else is out there. And that's a whole other kettle of fish.

Re:AMEN (4, Interesting)

bgat (123664) | more than 2 years ago | (#36507030)

You think it's gnarly now? You should have seen it a couple of years ago! Things have improved by light-years since then.

It's true that ARM isn't as cleanly supported as, say, x86. But the simple explanation is that there is significantly more diversity in the ARM world than in the x86 world, so comparisons between the two are a bit like comparing apples to orangutans.

There are limits to what can be done to address the problem. I prefer having a diversity of ARM chips to having a BIOS, and a BIOS would be the only way to tame this beast long-term. I think most platform developers (those who do both hardware and software) would agree with me: it's easier to port Linux to a good chip for your end application than it is to use a less-than-ideal chip in the platform just because it has a mature Linux port. So while we should continue refactoring Linux on ARM, we should also accept that things will never be as clean as they are on x86. It isn't in anyone's best interest to even strive for that goal.

In parallel with all of this, we must be careful not to kill the goose that lays the golden eggs. ARM is the singular reason why Linux owns the embedded space for 32-bit CPUs that run OSes. Nobody else is even close. So despite all of Linux's warts on ARM, it still works really, really, REALLY well. Vendors of ARM SoCs should recognize this, and pony up some funding to clean up the mess as an investment in their futures.

Re:AMEN (0)

Anonymous Coward | more than 2 years ago | (#36511716)

I prefer having a diversity of ARM chips to having a BIOS, and a BIOS would be the only way to tame this beast long-term

This is a problem that they are looking to solve with device trees (FDT). The idea is to let the firmware (OpenFirmware preferably) inform the kernel about the platform through a structured blob. No code/interfaces, just a data structure.
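
The practical effect is that a driver stops hardcoding per-board facts and asks the tree instead. A minimal sketch using the kernel's OF API (the "acme,uart" compatible string and its property are invented):

    #include <linux/errno.h>
    #include <linux/of.h>

    static int uart_probe_from_dt(void)
    {
        struct device_node *np;
        u32 freq;

        /* find the node the firmware described, by compatible string */
        np = of_find_compatible_node(NULL, NULL, "acme,uart");
        if (!np)
            return -ENODEV;

        /* read board-specific data from the blob, not from a board file */
        if (of_property_read_u32(np, "clock-frequency", &freq))
            freq = 24000000;  /* sensible default if the property is absent */

        of_node_put(np);  /* drop the reference taken by the lookup */
        return 0;
    }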

it's easier to port Linux to a good chip for your end application than it is to use a less-than-ideal chip in the platform just because it has a mature Linux port. So while we should continue refactoring Linux on ARM, we should also accept that things will never be as clean as they are on x86.

That's because there is a standard on x86: Wintel. No vendor is stupid enough to design their hardware (i.e. hook up their SoCs) in such a non-standard way that Windows won't run on it. In contrast, ARM vendors appear to be very happy to all use the same IP cores but in slightly incompatible configurations (buffered vs non-buffered, memory-addressed vs register-based, AMBA vs direct-connect, etc). This is why there are so many duplicate drivers, and this is a problem that cannot be solved in software.

Long term, I'd want Linus to just flat out refuse duplicate drivers: if a new platform introduces a new driver for an existing SoC/core, it had better deprecate the old driver and incorporate its functionality, or provide a forward migration path. This will never happen, not least because core drivers are (almost) never contributed by the manufacturer: they are written and submitted by their clients, the system integrators, so you always end up with a partial driver that just happens to work for this particular instantiation but with no guarantees about unlocking the full potential of the core. And if they get too much opposition submitting their driver, these downstream vendors will simply obscure the origin of their cores (or not submit at all).

Incidentally, this infrastructure fragmentation is why I expect Microsoft to partner with a single vendor for Windows 8 (winvidia, anyone?): they will (try to) dictate a platform that way. From a software point of view, there are many benefits to be had, but from a consumer point of view I have yet to see Microsoft standardize anything without stifling innovation.

In parallel with all of this, we must be careful not to kill the goose that lays the golden eggs. ARM is the singular reason why Linux owns the embedded space for 32-bit CPUs that run OSes. Nobody else is even close. So despite all of Linux's warts on ARM, it still works really, really, REALLY well. Vendors of ARM SoCs should recognize this, and pony up some funding to clean up the mess as an investment in their futures.

As above, for the ARM hardware platform to become sustainable, the hardware manufacturers need to start working with upstream themselves instead of leaving the software design to their downstream vendors. Right now, TI is the only one I know of that does.

Re:AMEN (3, Interesting)

JackDW (904211) | more than 2 years ago | (#36507840)

I don't know if such a standard SoC description format could ever exist in a useful form. Anything even moderately complex would require driver code, not just descriptive data. Descriptions produced by vendors would inevitably be buggy, like ACPI data. This solution would probably just make the problem worse.

It would be much better to simply standardise the SoC, so that every ARM system has the same basic elements. Just like a PC, where the interrupt controller and memory are always in the same place, and the timer always has the same register map.

I assume that SoC vendors do not do this because (1) they don't need to, (2) they want to have "value-added" features like their own custom power management subsystem, and (3) the diversity makes it harder to use a different SoC as a drop-in replacement.

But they should standardise. There's no advantage to the user, the OEM, or the OS developers in having so many different SoCs.

Re:AMEN (1)

uiuyhn8i8 (1547077) | more than 2 years ago | (#36511656)

>Anything even moderately complex would require driver code, not just descriptive data.

Our chips, with 3K pages of documentation and no driver code, disagree. And they run Linux in millions of products. It really depends on the quality of the documentation, which is generally pretty bad for most chips I have looked at.

>But they should standardise. There's no advantage to the user, the OEM, or the OS developers in having so many different SoCs.

So you don't want any technical development? Or you think that ARM in a monopoly situation will still develop the core functionality at the same pace as all other chip design companies?

 

Re:AMEN (1)

JackDW (904211) | more than 2 years ago | (#36519844)

No, I want a situation like the one on the PC, where the basic architecture is completely standard, and "technical development" builds on top of that. This is best for users and OEMs and OS developers. You are, I am sure, not about to tell me that there have been no innovations on the PC architecture as a result of its continued compatibility with the original 1980s design.

The problem, as I see it, is that SoC designers are behaving as if their products were for embedded systems only. For products of this sort it is usual to have custom BSPs and custom Linux kernels. But these days the products are used in devices which are computers rather than embedded systems. Here, custom BSPs and custom kernels are an inconvenience that prevents end users and OEMs from using off-the-shelf OSes. The mainline Linux kernel struggles to accommodate all of the SoC variants. There are too many and they are too different. If only the basic stuff was standard.

Re:AMEN (0)

Anonymous Coward | more than 2 years ago | (#36509778)

>> what really needs to happen is a standardized method for publishing SoC features in a structured format (XML?) where common features (FIFO registers with a bytes_remaining field? Write only configuration registers, Read only configuration register.. etc) could be defined and the code could in many cases just be automatically generated.

You read my mind. Scary. I have been saying this for years. And the driver code that is needed could just be included in the XML file as C source code and compiled as part of the auto-generated code.

Re:AMEN (1)

eknagy (1056622) | more than 2 years ago | (#36510506)

... was a password protected PDF that you can't even cut text out of ...

Not even after pdf2ps and ps2pdf? ;)

Ran WinMo 2003 on an ARM processor years ago (1)

Anonymous Coward | more than 2 years ago | (#36506352)

I don't know why people are still acting surprised by Microsoft's support for ARM processors. I had an old Garmin handheld that used an Intel PXA272 XScale processor and ran WinMo 2003. Maybe full-blown Windows isn't ARM-friendly, but Microsoft has supported ARM in the past.
 
I guess some people refuse to look into this and see that Microsoft on ARM is nothing new.

Re:Ran WinMo 2003 on an ARM processor years ago (2)

dagamer34 (1012833) | more than 2 years ago | (#36506498)

The Windows Mobile and Windows NT kernel are not the same thing.

Re:Ran WinMo 2003 on an ARM processor years ago (1)

nomadic (141991) | more than 2 years ago | (#36506596)

They're not both made by Microsoft?

Re:Ran WinMo 2003 on an ARM processor years ago (1)

houstonbofh (602064) | more than 2 years ago | (#36506702)

Yes. Just like the F-150, the Escape and the GT-40 are all made by Ford. Same thing, right?

Re:Ran WinMo 2003 on an ARM processor years ago (1)

nomadic (141991) | more than 2 years ago | (#36506740)

Huh? What does that have to do with anything? The point raised was, Microsoft has previously supported the ARM architecture. Someone else then argues a non sequitur that one Microsoft platform is not the same as another. If the point is that the COMPANY obviously has no problem with supporting the ARM architecture, merely pointing out that not everything the company builds supports it does not refute the central point. A more appropriate analogy would be: "why should you be surprised that Ford is using Part X in the F-150? They've used it before in the Escape."

Re:Ran WinMo 2003 on an ARM processor years ago (1)

houstonbofh (602064) | more than 2 years ago | (#36506852)

Perhaps because the summary and the world at large were talking about Windows, not Windows Mobile. Windows and the NT kernel have never supported ARM.

Re:Ran WinMo 2003 on an ARM processor years ago (0)

Anonymous Coward | more than 2 years ago | (#36510204)

Windows and the NT kernel have never supported ARM.

LOL.. yeah.. like they just got up and rewrote the entire several-hundred-million-line codebase of Windows 8 just for ARM. You nerds have funny defective brains. NT has always been cross-platform, from day 1. Ask any kernel developer on the Windows team; they always had the code working on multiple platforms. But yeah, that's too much work; speculation is fun.

Re:Ran WinMo 2003 on an ARM processor years ago (1)

houstonbofh (602064) | more than 2 years ago | (#36512046)

Yep. And there is no difference between internal development code and production code. Ok, maybe that was a poor example...

Re:Ran WinMo 2003 on an ARM processor years ago (1)

jeremyp (130771) | more than 2 years ago | (#36514882)

When Windows NT was first released, it officially supported three processor architectures: x86, DEC Alpha, and MIPS. None of those is ARM, but the design of the kernel included a hardware abstraction layer that makes it relatively straightforward to port it to new architectures.

Re:Ran WinMo 2003 on an ARM processor years ago (1)

houstonbofh (602064) | more than 2 years ago | (#36519154)

It shared design goals with the Mach kernel. http://en.wikipedia.org/wiki/Hybrid_kernel [wikipedia.org] "Other design goals shared with Mach included support for diverse architectures, a kernel with abstractions general enough to allow multiple operating system personalities to be implemented on top of it and an object-oriented organisation.[2][3]" After just three versions, that support was dropped. Wonder how many versions ARM will do?

Re:Ran WinMo 2003 on an ARM processor years ago (1)

Richard_at_work (517087) | more than 2 years ago | (#36510502)

Never publicly supported....

Remember that - there were rumours of a maintained x86 version of OS X around for years before the PPC-to-Intel switch was ever seriously considered by anyone following Apple. And lo and behold, Apple released a fully workable x86 build very quickly.

It's the same here: no public support, but for a major OS developer it makes sense to maintain a low-key, low-resource port just in case.

Re:Ran WinMo 2003 on an ARM processor years ago (1)

CompMD (522020) | more than 2 years ago | (#36506504)

Hey, cool, you're the guy who bought the iQue m5! You really made my day when I heard someone bought one.

I have an iQue m5 and iQue 3600 in the "museum" on my desk. :)

Get Radical: Raise Social Security (-1)

Anonymous Coward | more than 2 years ago | (#36506354)

As a labor lawyer I cringe when Democrats talk of “saving” Social Security. We should not “save” it but raise it. Right now Social Security pays out 39 percent of the average worker’s preretirement earnings. While jaws may drop inside the Beltway, we could raise that to 50 percent. We’d still be near the bottom of the league of the world’s richest countries — but at least it would be a basement with some food and air. We have elderly people living on less than $10,000 a year. Is that what Democrats want to “save”?

Re:Get Radical: Raise Social Security (-1)

Anonymous Coward | more than 2 years ago | (#36506606)

Are you wanting the US to go broke even faster? The mandatory-spending part of the federal budget pie is 2/3 of what the Feds spend, and it is what is driving the deficits. In order to raise the payout to 50%, payroll taxes would have to be increased significantly.

Re:Get Radical: Raise Social Security (-1, Offtopic)

Hazel Bergeron (2015538) | more than 2 years ago | (#36506774)

The US is going broke because we are sufficiently technologically advanced not to need 40+ hour work weeks to the age of 60-65 (and could instead aim for 4 day weeks to 70+); because asymmetric tax and worker protections make it favourable to offshore; because the modern education and propaganda/advertising systems breed dependent consuming idiots; and because of the amount of money funneled by government to private corporations - mostly through wars from which those corporations benefit.

Maintaining or increasing poverty among the weak won't solve any problem at all, except that of satisfying sadists and fascists.

Re:Get Radical: Raise Social Security (-1, Offtopic)

The Dawn Of Time (2115350) | more than 2 years ago | (#36507782)

The government is going broke because it is spending more than it takes in, period. Giving the incapable more comfortable lives won't fix that at all, it'll just make people like you feel better. You are, of course, free to donate all of your money to that cause. You're just not free to donate mine.

Re:Get Radical: Raise Social Security (1)

Hazel Bergeron (2015538) | more than 2 years ago | (#36509880)

The government is going broke because it is spending more than it takes in, period.

And I listed some reasons why.

Giving the incapable more comfortable lives won't fix that at all

Giving people more comfortable lives is pretty much the goal of society. How you define "people" in this sentence is up for debate: some have excluded blacks, or Jews, or homosexuals, or cripples, etc. You appear to exclude at least cripples. Try harder.

Oh, and of course it'll improve it - though more is needed to "fix it". You give people help when they're incapable and they maintain a certain standard of health which makes them less burdensome to everyone from family up to a national level. Many people can be productive despite chronic illnesses after a sustained period of rehabilitation. Their carers can often be productive outside their caring role providing they're given appropriate support. Many social welfare systems understand the need for the state to have a relationship with a family which supports dependents: it's the choice between an entirely destitute family and a responsible family which takes the burden off the state and which can, as a whole, continue contributing to society at large.

You are, of course, free to donate all of your money to that cause. You're just not free to donate mine.

You have money because the people around you let you have it. This is how all property works in reality: certain property rights are protected because people as a whole regard them as beneficial on balance. If you stop doing what society demands in exchange then society will turn against you and stop offering you protection, and you'll lose what you consider to be "your money". (If you want to see this illustrated, stop paying tax.)

Re:Get Radical: Raise Social Security (0)

Homburg (213427) | more than 2 years ago | (#36507116)

Social security is paid for specifically out of the social security trust fund, which currently has a surplus. Social security has nothing to do with the deficit.

Re:Get Radical: Raise Social Security (1, Informative)

WorBlux (1751716) | more than 2 years ago | (#36507480)

If you want to consider that fund as real, then you also have to add a couple trillion dollars to the current national debt figures. The fund is just an accounting fiction, money owed by one department of the U.S. government to another.

weak ARM support is not surprising (3, Interesting)

Anonymous Coward | more than 2 years ago | (#36506426)

Weak ARM support is very much related to the constantly moving target of ARM hardware. There are several series of ARM CPUs in use today, and as soon as one becomes commonplace, it is phased out in favor of a "cheaper and better" CPU, sometimes in the same series, sometimes not.

This phenomenon is related to wireless providers having an economy of scale that doesn't make sense in an end-user context. For them, having a team of skilled programmers that costs > USD 10 mln / yr is nothing, and they leverage the hell out of this fact. Expect this sort of stuff to continue despite ARM CPUs comprising the majority of CPUs on the planet.

Re:weak ARM support is not surprising (1)

bgat (123664) | more than 2 years ago | (#36507064)

I wouldn't call it "weak" support at all. Rather, it's a challenge to keep Linux abreast of the rapid pace of ARM development, both on the CPU side and on the platform side.

Linux is an incredibly strong OS for ARM. If you want support for the bleeding-edge CPUs and SoCs, it can be pretty painful, but if you step back even a little from that edge, Linux is solid.

Netwinder anyone ... 1999? (0)

Billly Gates (198444) | more than 2 years ago | (#36506506)

Anyone remember it?

Remember during the days of kernel 2.0 or 2.2, a decade ago, you could buy a NetWinder appliance that came with Red Hat Linux? Corel even shipped WordPerfect for Unix on it, and I remember reading a commentator in Linux Magazine who used it.

ARM has been supported in Linux for a very long time. This story is pure FUD.

Re:Netwinder anyone ... 1999? (1)

Lunix Nutcase (1092239) | more than 2 years ago | (#36506600)

It's a good thing you didn't actually read the article, right? Or, hell, even read the summary, as what you are arguing against is not what either the article or the summary was talking about.

Re:Netwinder anyone ... 1999? (2)

Jahava (946858) | more than 2 years ago | (#36506608)

Anyone remember it?

Remember during the days of kernel 2.0 or 2.2, a decade ago, you could buy a NetWinder appliance that came with Red Hat Linux? Corel even shipped WordPerfect for Unix on it, and I remember reading a commentator in Linux Magazine who used it.

ARM has been supported in Linux for a very long time. This story is pure FUD.

I know this is Slashdot and reading the article is sacrilegious, but you could at least read the summary!

Well, blogger Brian Proffitt explains just how messy the state of Linux support for ARM is right now, partially as a result of mutually conflicting kernel hacks from ARM manufacturers who just wanted to get their products out the door and weren't necessarily abiding by the GPL obligations to release code. Things are improving now, not least because Linus is taking a personal hand in things, but sorting the mess out will take time."

Nobody's challenging that ARM is supported by Linux. This article is about how Linux's ARM support is poorly coded and internally inconsistent. The problem is that the ARM code is neither scalable nor maintainable. This is critical, as both the Linux kernel and the number of ARM systems supported continue to grow, which they almost certainly will.

It is pretty foolish to blow a Linux kernel issue off as "FUD" when the maintainer of said kernel himself is taking action to address it.

Re:Netwinder anyone ... 1999? (1)

Aardpig (622459) | more than 2 years ago | (#36508906)

I remember being the first person in the world to run Linux on an ARM 250, back in the mid-1990s.

Wow some errors in this article (3, Insightful)

Anonymous Coward | more than 2 years ago | (#36506656)

>> a threat that could effect dozens of companies' livelihoods
A lot of semiconductor companies were releasing Linux-based SoCs way before the mainline kernel started consolidating code from vendors. If Linus stopped pulling ARM code, no business would shut down. I personally don't know any companies that rely on Linus' tree to ship to their customers.

>> To make matters worse, even though the GPL v2 license on the Linux kernel requires these changes to be released back upstream to the main Linux kernel, often they were not.
This doesn't make any sense to me. GPL requires the changes to be released to the person who purchases your device/code. The vendors have zero responsibility to the mainline.

>> ...this is entirely the reason why the non-profit Linaro consortium (...) was put together...
One thing I wonder about Linaro is how they are going to be the leader and not play catch up. There are a lot of board-specific drivers they can consolidate, but as they consolidate, the vendors are coming out with even more.

>> [a]s an indication of the scale of this problem, each new kernel release sees about 70,000 new lines of ARM code, whereas there's roughly 5,000 lines of new x86 code added."
I find this comparison very unfair. Yes, that 70K number could be more like 20-25K, but the devices with ARM processors have very different structures, designs, and end goals. One codebase can't fit them all. On the flip side, most x86 implementations are on either the desktop or the server side.

I'm surprised Likely didn't talk about the device-tree support for the ARM tree. I've implemented a few (ppc-based) boards with device trees. The initial learning curve was a bit painful, but once you understand it, it enables a lot of common code and cuts down development time too. synthesizerpatel mentioned "a standardized method for publishing SoC features in a structured format" above, and device trees are exactly that (except they're not XML! So, even better!)

My preference, as a lowly bring-up guy, would be for the desktop/server kernel to split off from the embedded kernel completely. Embedded kernel devs could then emphasize what's important to them (cutting development time, wide device support, aggressive power mgmt) while the desktop/server devs focus on their stuff.

Re:Wow some errors in this article (1)

gcl (1070302) | more than 2 years ago | (#36517948)

We did talk about device tree, but I didn't dwell on it because it is just another piece of the puzzle in cleaning up ARM. Device tree support for ARM is now in mainline, and it is full steam ahead to adapt many of the current ARM subarchitectures.

The Ugly State of ARM Support on GCC (3, Informative)

Suiggy (1544213) | more than 2 years ago | (#36507312)

The kernel isn't the only thing suffering from shoddy support. The ARM backend and code generator for GCC is suboptimal. The GCC __sync_* builtin functions for atomic memory access are unoptimized and call into kernel functions, which isn't always necessary; hopefully this will be fixed with the new C1x/C++0x atomics and memory model. And then the ARM NEON intrinsics/builtins implementation is in an absolutely horrendous state; I'm surprised the NEON register allocator is even functional.

I'd fix it myself, but then I'd have to spend 2 months learning how to make changes to GCC, and wait another 6 months for my patches to be accepted.
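
For reference, the builtins in question look like this; as I understand it, on older ARM targets (say, ARMv5) GCC expands them into calls through a kernel-provided cmpxchg helper instead of inline LDREX/STREX, which is the overhead complained about above:

    #include <stdio.h>

    static int counter;

    int main(void)
    {
        /* atomic read-modify-write; inline LDREX/STREX on ARMv6+,
           a call through the kernel's cmpxchg helper on older cores */
        __sync_fetch_and_add(&counter, 1);

        /* compare-and-swap: set counter to 5 only if it is still 1 */
        int swapped = __sync_bool_compare_and_swap(&counter, 1, 5);

        printf("counter=%d swapped=%d\n", counter, swapped);
        return 0;
    }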

Re:The Ugly State of ARM Support on GCC (0)

Anonymous Coward | more than 2 years ago | (#36507462)

This is a cogent and relevant post.

If you are serious about fixing it, start a project.

You would get help quicker than you think.

I am not a big ARM developer, but my latest project (an embedded project on Gumstix COMs running pretty much the stock Angstrom build) has led me to agree with you about gcc on ARM (for different reasons).

Re:The Ugly State of ARM Support on GCC (1)

arglebargle_xiv (2212710) | more than 2 years ago | (#36509984)

This is a cogent and relevant post. If you are serious about fixing it, start a project.

Already been done, for more than a year [llvm.org] (actually the project has been ongoing for more than a year, but the NEON support that the OP mentioned was done about a year ago. Works quite well too).

Re:The Ugly State of ARM Support on GCC (1)

Kagetsuki (1620613) | more than 2 years ago | (#36508062)

Oh, you forgot to mention how char is unsigned on ARM, as well as other bizarre anomalies. As someone who was tasked with making some very complex code very efficient on an ARM board, I can personally tell you nobody is getting anywhere until the problems Suiggy pointed out, and quite a few more, are actually fixed in GCC/G++/Binutils.
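
The classic way the char signedness bites, since plain char is unsigned in the ARM ABI but signed on x86 (GCC's -fsigned-char can paper over it at a cost):

    #include <stdio.h>

    int main(void)
    {
        char c;  /* bug: should be int */

        /* getchar() returns an int; EOF is -1. Stuffed into an unsigned
           char it becomes 255, so on ARM this comparison is never true
           and the loop never terminates. */
        while ((c = getchar()) != EOF)
            putchar(c);

        return 0;
    }

On x86 this code is merely subtly wrong (a 0xFF data byte reads as EOF); on ARM it loops forever, which is how these anomalies tend to surface.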

Re:The Ugly State of ARM Support on GCC (2)

shutdown -p now (807394) | more than 2 years ago | (#36508202)

But these days, we also have Clang. Is it any better as far as ARM code generation goes? Can it be realistically used instead of gcc?

Re:The Ugly State of ARM Support on GCC (1)

KiloByte (825081) | more than 2 years ago | (#36510360)

As far as C++ goes, clang is a bad joke. It mostly works on C and ObjC, but then, it takes a week for a high school student with no prior knowledge of grammar parsers to write a C compiler (ok, with hardly any optimizations...).

That's the front end; with the clang back end being behind gcc on x86 as well, I wouldn't hold my breath.

Re:The Ugly State of ARM Support on GCC (1)

KiloByte (825081) | more than 2 years ago | (#36510704)

To provide some specific numbers on efficiency of ARM code:

clang trunk 28.493s
gcc 4.6 13.613s
(clang 2.9 fails to compile the code in question at all)

Pretty crushing, I'd say.

Re:The Ugly State of ARM Support on GCC (1)

shutdown -p now (807394) | more than 2 years ago | (#36511762)

Wow, that's seriously bad. Why is Apple pushing for Clang as the default compiler on their systems (isn't it already the default in the most recent version of Xcode?) with numbers like that?

Would also be interesting to see what VC++ can do on ARM.

Re:The Ugly State of ARM Support on GCC (1)

Suiggy (1544213) | more than 2 years ago | (#36515448)

Because GCC adopted GPLv3 with GCC 4.3 and later, and Apple doesn't want to use it. Clang/LLVM trunk is actually starting to get pretty decent at x86/x86-64, but ARM is still pretty crappy.

Thing is, Clang has come a very long way in just a couple of years; if they keep up the same pace of development, it will most likely surpass GCC either next year or the year after.

Not just Linux.... (2)

gatkinso (15975) | more than 2 years ago | (#36507494)

it is OSS in general with respect to ARM support.

My God.... the state of Angstrom, BitBake, OpenEmbedded... while the maintainers are doing great work, they are not nearly as stable and mature as the tooling for established architectures.

Seriously, check out OpenEmbedded and try to roll the latest Gumstix omap-console-image. Count the number of things that are broken. It is a travesty!

This kind of shite would NEVER be accepted in a mature x86-based project.

Re:Not just Linux.... (1)

MtHuurne (602934) | more than 2 years ago | (#36508522)

This kind of shite would NEVER be accepted in a mature x86 based project.

Many of the problems that embedded Linux projects have come from packages that are mature on x86 but not on embedded architectures. One reason is that x86 is not strict about memory alignment, but on, for example, PPC or MIPS (I don't know about ARM) you get hit with a SIGBUS if you break alignment rules. But the main issue is that cross-compilation is broken in many packages.
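
On the alignment point, the usual offender looks something like this; x86 quietly tolerates the first version, while strict-alignment targets deliver a SIGBUS:

    #include <stdint.h>
    #include <string.h>

    /* Parsing a byte stream by casting: traps on strict-alignment
       targets whenever p isn't 4-byte aligned. */
    uint32_t read_u32_bad(const uint8_t *p)
    {
        return *(const uint32_t *)p;
    }

    /* Portable version: memcpy lets the compiler emit safe byte loads
       (or a single word load when it can prove alignment). */
    uint32_t read_u32_ok(const uint8_t *p)
    {
        uint32_t v;
        memcpy(&v, p, sizeof v);
        return v;
    }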

Some try to run the compiled binaries as part of the build process. Some detect cross compilation and then use different implementations of some routines, which are often broken because they are not re-tested as the code around them changes. Some detect cross compilation and then preemptively disable features that they fear might be broken, so you'll end up with a working application lacking a feature you want.

The article was about the kernel, but in my opinion the user space is in even worse shape than the kernel.

Re:Not just Linux.... (1)

daid303 (843777) | more than 2 years ago | (#36510194)

You sir, hit the nail on the head.

The kernel is fine. Yes, patches are lagging behind a bit, but in general they are stable and work great.

On userland it's a whole different level. Automake just needs to die; it's like the 7th circle of hell for cross-compilation. I need to build for x86, ARM and PPC (ppc860 in my case), and getting configure to cooperate is just a disaster every time. Anyone who says otherwise should try to cross-compile "tcpdump" with configure. I never got it to build with configure, and I got it to build easily without it.

(Oh, and on ARM you can enable the automatic alignment fixup to fix the SIGBUS; it will hit performance, but it will work. See /proc/cpu/alignment.)

Re:Not just Linux.... (1)

KiloByte (825081) | more than 2 years ago | (#36510480)

Have you tried cross-compiling projects that use autotools compared to those with hand-written makefiles? There's a world of difference in favour of the former. Unless there are some utterly broken m4 macros mispasted by retarded monkeys (which, I admit, happens way too often), it just works.

Hand-written makefiles, on the other hand, support only systems they were explicitly ported to, with inevitable bit rot setting in, and if cross-compilation happens to work, it's only for a certain pair of host and target arch, usually only with a single compiler version as well.

For SIGBUS errors, they tend to be trivial to fix. These are consistent errors in the same place; just fire up gdb and you get the culprit instantly. In the worst case, with a pointer passed around a long way between where it is created and where it is used, you can add a few alignment assertions along the way.
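
Something as small as this, sprinkled wherever the pointer changes hands, makes the first misaligned use fail loudly at the assertion instead of with a SIGBUS far from the cause (a sketch):

    #include <assert.h>
    #include <stdint.h>

    /* Fails at the assertion, close to where the pointer went bad,
       rather than at the eventual dereference. */
    #define ASSERT_ALIGNED(p, n) \
        assert(((uintptr_t)(p) % (n)) == 0)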

Linus (2)

mug funky (910186) | more than 2 years ago | (#36507522)

I know this article is slightly trollish, but it did make me wonder.

What's going to happen when Linus finally retires? Will there be strong enough leadership in the ensuing vacuum? In spite of open-source philosophy, will Linux remain Linus' brainchild?

Re:Linus (0)

Anonymous Coward | more than 2 years ago | (#36509130)

Thankfully, there are a lot of wonderful people underneath Linus.

If you don't follow the architectures and their leads, you'd never know their names, but, case in point, there are people who can step up when the time comes. Moreover, people who have already spent a fair amount of time adjudicating developer disputes. ;)

Re:Linus (1)

Kjella (173770) | more than 2 years ago | (#36509652)

There's only one benevolent dictator for life, and that is Linus. One of the subsystem maintainers would probably take over the role of project leader after a vote, but he'd not have nearly the same authority. I don't see the project overall going anywhere, though; the entire structure, team and commercial backing would still be there. Would Windows or OS X collapse if the lead guy disappeared? By the GPL there are not many other ways to run it than as an open source collaboration.

The only thing I see is an increased risk of forks. Right now you'd have to have balls the size of a small planet to try a true fork: not merely a branch to ship a product or try some experimental feature, but one that tries taking over Linux development. It could fracture like the BSDs, but I doubt it. Driver developers and such want one target, not many, so I think it'd quickly gravitate back towards one dominant version where most of the work happens.

How big are Google's "little witnesses"? (1)

tepples (727027) | more than 2 years ago | (#36518978)

right now you'd have to have balls the size of a small planet to try a true fork

How big are Google's "little witnesses"? It already maintains a fork of Linux for its Android OS.

Re:Linus (1)

NullProg (70833) | more than 2 years ago | (#36515692)

what's going to happen when Linus finally retires?

I hear he will live on as a kernel module.

insmod linus

insmod: can't read 'linus':

Again my suggestions in the past (1)

dayton967 (647640) | more than 2 years ago | (#36508448)

My suggestion from well before the kernel-modules era: one thing Linux needs is a separation of the drivers from the base of the kernel. To do this, create a standard interface for the kernel and drivers to communicate, probably through the use of stub drivers in the kernel. This would allow some level of standardization of kernels, would allow for 3rd-party drivers (I know I will get killed for this one), and, more importantly, urgent driver fixes could be pushed ahead of the kernel. It might also ultimately reduce the size of distributions, as they would only have to push the drivers that have changed.

Re:Again my suggestions in the past (1)

Alex Belits (437) | more than 2 years ago | (#36508896)

Oh, for fuck's sake!

A driver interface does exist. It just isn't, and is not supposed to be, something that caters to proprietary drivers that exist only in the form of binary blobs. Not that this prevents such things from being developed whenever a hardware vendor wants to do so (by adding a driver-specific wrapper), but it's a stupid idea, and that is why it is not done.

This also has absolutely nothing to do with ARM -- on ARM, most problems are related to configuration that Linux reads or detects at initialization, or somehow stores in itself: ad-hoc BSPs vs. device tree vs. PCI and other bus-specific auto-configuration mechanisms. And this has less to do with ARM itself and more with the variety of bus architectures used with ARM and the long history of the platform.
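For readers unfamiliar with the device tree approach: instead of hard-coding board details in an ad-hoc BSP, the hardware is described in a data file that the kernel parses at boot. A made-up fragment might look like this (the board compatible string, node name and addresses are purely illustrative, not from a real board):

    / {
        compatible = "acme,example-board";

        serial@101f0000 {
            compatible = "ns16550a";
            reg = <0x101f0000 0x1000>;
            interrupts = <12>;
        };
    };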

ARM and x86 Products are Fundamentally Different (1)

svnt (697929) | more than 2 years ago | (#36508740)

There are basically three x86 processor manufacturers. The two smaller players work hard to stay compatible because their livelihood depends on it. Most of the interface functionality is off-chip.

There are many well-known ARM processor licensees. They all strive to differentiate their product offerings. In the majority of cases all of the major peripherals (which are one of the primary opportunities for differentiation) are on-chip.

As such, where minimizing per-processor differences was clean and relatively straightforward for x86, expecting it to continue to work well for ARM is nonsensical. I really think Linus is missing the forest for the trees on this one.

Re:ARM and x86 Products are Fundamentally Differen (1)

gcl (1070302) | more than 2 years ago | (#36509786)

As such, where minimizing differences by processor was clean and relatively straightforward for x86, expecting it to continue to work well for ARM is nonsensical. I really think Linus is missing the forest on this one.

Not true. Pretty much all of us maintainers agree that the duplication of code and infrastructure in arch/arm is ridiculous. It has to be fixed, and we're actively working on it. Linus was perfectly correct in his statements.

Really hard (1)

nukem996 (624036) | more than 2 years ago | (#36509900)

I worked with Marvell extensively under Linux, and while the hardware was good, all driver support lived in their own patch set, which didn't integrate well with the kernel. Parts of it conflicted, so I had to make sure certain standard features were turned off. Even userland tools were changed: I had to use a custom version of the U-Boot tools. It was a mess. What they don't understand is that they end up spending more time maintaining their patches than they would just getting them right and submitting them upstream.
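For comparison, with the stock U-Boot tools, wrapping a kernel for booting is normally a single mkimage call; something like the following, where the load and entry addresses are board-specific placeholders:

    mkimage -A arm -O linux -T kernel -C none \
            -a 0x00008000 -e 0x00008000 \
            -n "Linux" -d zImage uImage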

Boot loader (0)

Anonymous Coward | more than 2 years ago | (#36510270)

One area that is really going to hinder ARM adoption is the boot loader. Most devices use an essentially open source boot loader like U-Boot, yet you can never compile it yourself without bricking your device. There's nothing like the BIOS on PCs that makes it essentially trivial to boot off a CD or USB stick.

Most devices have some way to be rescued, presumably because the developers need a way to test updated firmware themselves, but there's no standardisation. I've got a NAS that lets me ssh in during a brief window at boot to get a RedBoot prompt, and a netbook that can boot off a memory card provided it has suitably contrived contents.

The trouble is, if you can't be sure of being able to update the operating system, then the device has a limited lifetime. Manufacturers are just worrying about delivering a product, and most buyers are happy with it this way, but it'd be better if you could easily take your pick of Android, MeeGo, Windows CE (or 8), Debian, etc. and install it yourself by simply downloading one file to a USB stick or memory card.
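To make the lack of standardisation concrete: on a board where U-Boot does give you a prompt, a USB boot is typically a hand-typed sequence like the one below. The load address, partition layout and console arguments are guesses that vary from board to board.

    usb start
    fatload usb 0:1 0x800000 uImage
    setenv bootargs console=ttyS0,115200 root=/dev/sda2 rootwait
    bootm 0x800000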

Food chain? Is it the 19fucking80s again? (1)

Hognoxious (631665) | more than 2 years ago | (#36510312)

Power-efficient ARM processors are moving up the food chain

They've changed from herbivores into carnivores?

It was always a stupid expression, and I hoped it had become extinct. I am disappointed to a literally exponential degree.

Interesting (1)

Pop69 (700500) | more than 2 years ago | (#36510714)

Windows 8 is supposed to be moving to support ARM, and now there's an article suggesting that Linux support for ARM is a buggy collection of hacks and tricks. I'd bet money we'll see more of the same from wide-ranging sources. When the Muellertroll writes about it, we'll know for certain where it's coming from.

Makes you wonder if you should be paranoid or not?

Advanced RISC Machine is ARM. (1)

bobs666 (146801) | more than 2 years ago | (#36511588)


RISC is old news; Linux was ported to many RISC platforms long ago. Anything named "advanced" makes me wonder whether it really is advanced. When ARM is ready for real computing, it will be ported to real computers.

It's only news for Windows, not Linux, since ARM will break all backwards compatibility. I wonder if there will be a VM compatibility option? Probably not, since Steve Ballmer wants you to buy all new software. We can only hope this will be the final nail in the coffin for Windows.

No cr*p, I spent the last week fscking with this. (1)

bored (40072) | more than 2 years ago | (#36520850)

I have a couple of GuruPlugs and OpenRDs. One of the OpenRDs is my NAS/DHCP/etc. server. I just spent a week trying to get a newer U-Boot to boot from USB with some consistency. Probably 3/4 of my time was spent fighting to get a U-Boot and Linux build that worked properly together; the remaining 1/4 was actually fixing the problem.

The list of sins on these devices is _VERY_ long. A partial list includes:

  • First, they are sold as "open" devices, but the PDF from Marvell only looks like it contains the tech docs. Probably 40% of the chip's peripherals are hidden behind an NDA wall. Want to know how to disable a device for power-saving purposes? That section is completely missing from the public manual.
  • U-Boot for the devices is a complete tree fork, and GlobalScale/Marvell conveniently forget to provide all the patches against their tree needed to get a particular version to build. Only after people point out that some function that works in the shipped U-Boot doesn't work with the shipped/public code do they provide a patch. Even when things "work" they rarely work 100%: the USB support has been unable to boot consistently from USB in any of the public trees. I have a fix, but I'm not sure why it works, because, again, the docs for the near-EHCI controller don't exist without an NDA.
  • The Linux kernel is in the same mess: probably 50% of the device support isn't in any of the mainline trees, so you have to patch in 10k+ lines of code to get things like the graphics subsystem on the OpenRD to work.
  • GlobalScale releases things without consulting the U-Boot or LKML folks, so things like the machine IDs are completely wrong and keep you from upgrading to a new kernel without upgrading U-Boot, or the reverse (new U-Boots actually have an undocumented "machid" environment variable for GlobalScale devices; see the sketch after this list). It's an all-or-nothing proposition: you either upgrade both U-Boot and the Linux kernel, or you upgrade neither.
  • The devices themselves often have hardware problems. My NAS's OpenRD has a problem with the JTAG (it only works in certain undocumented ways), and the GuruPlugs are well documented to have short lives due to heat problems.
  • Good luck getting a response from GlobalScale or Marvell about anything, including RMAs for failed devices.
  • Just about the only good thing about the devices is that the community is strong enough to badger GlobalScale into releasing necessary information, and there are often people who have hit the same problems. The problem is that no one has consolidated all the fixes and instructions: my fixes for the SATA port only exist in some forum postings, so anyone with more than 4 ports has to search, get lucky enough to find them, and then roll them into their own tree.
  • As the article suggests, one of the primary problems is that there are 600+ variations of USB controllers, so every board/chip needs a custom hacked driver. What ARM Corp needs to do is define some basic peripheral interface standards for PCIe/USB/Ethernet/SATA/etc. and require that vendors making ARM devices stick to them, so that a common set of drivers can be created.
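As promised in the list above, the machid workaround on these plug devices amounts to setting the undocumented variable by hand at the U-Boot prompt. A sketch follows; the value is given in hex and the one shown here is purely illustrative, so look up your board's actual machine ID first.

    setenv machid a63
    saveenv
    reset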