MrIrwin's Journal: ARM yourself for an x86-free future

The x86 architecture holds an even more dominant position in the hardware market than Microsoft does in the software market. It's not quite the same thing: Intel does not have an x86 monopoly, and other producers can and do make x86-compatible alternatives.

But at the end of the day, the x86 is coming to the end of its useful life. Although numerous improvements have been made over the years, it still clings to a lot of legacy concepts, and innovations must work their way around difficult obstacles if they are to upgrade, rather than replace, the x86 architecture. At a certain point one has to wipe the slate clean.

Before you jump to conclusions, this article is not going to talk about PowerPCs, Macs or 64-bit architectures. Sure, they are relevant to x86 succession, but for the most part they relate to servers and specialized workstations. I want to talk about what most of us are likely to be using for our personal computing in the future, and those needs will not be met by a box on the corner of our desks but by our mobile phones, our pad computers, and the computers built into our home entertainment centres, car dashboards and the backs of airplane seats. And here I think the future is going to be ARM.

So have I sold my shares in Intel and bought stock in the company that makes ARM? No, because they are the same. Or, to be more precise, ARM processors are manufactured by most of the major microprocessor manufacturers (including those who serve only embedded markets), and yet the architecture is owned by none of them. It is just one of several quirks that set the ARM processor, or more correctly the ARM architecture, apart from more 'conventional' processors. So who is this 'new kid on the block', and where does he come from? Well, the first lesson to learn is that he is by no means a new kid; in fact, to find the roots of the ARM processor we must jump back 20 years or so.

Remember the days of the Commodore PET, the VIC-20, the TRS-80 and the Sinclair ZX? They were the first generation of computers with keyboards and a text display that ordinary folks could buy. They sparked the home computing revolution, and they also marked the introduction of computers into high schools. In the UK the government wanted to introduce computing to the school curriculum, and did a deal with the BBC (the state broadcaster) to produce a series of courses and programs. In order to overcome the incompatibility problems that existed between all these little BASIC-in-ROM computers, they drew up their own outline spec and invited manufacturers to compete for this standard school computing platform.

The winner was Acorn Computers, and their 6502-based design could be described as roughly similar to an Apple II. It was cut down in some areas (the expansion bus was not as good), but it did have some nice features, such as a few general-purpose analog and digital I/O lines built in for doing classroom experiments. For many years this was the standard computer in UK schools and colleges, and its low price meant that many home users owned one as well. There was nothing snazzy about the BBC computer, or 'beeb' as it was affectionately known: it was a simple, efficient and reliable solution. Although it was more expensive than competitors such as the Sinclair Spectrum and Commodore 64, it was more robust and had a usable keyboard. In short, it was less of a games console and more of a computer that could be used on a daily basis, albeit for very simple tasks.

The sensible, no-frills but reliable and efficient design approach did not win Acorn many friends, and many questioned why they had won the contract in the first place. On a features-per-buck basis the beeb looked very poor. But the design stood the test of time, and so it was that, with computing classes and user requirements continually growing, Acorn faced the problem of coming up with the all-new, improved 16-bit version.

This placed them in a rather enviable position compared with many microcomputer manufacturers. They had an established and stable user base, thanks to their adoption in schools, and a reputation built on reliability and good design rather than on being 'feature packed'. They took an unusual step: they decided to design their own processor for use in the new 16-bit machine. They were not planning to go into the silicon business; rather, relying on the fact that their 'trademark' meant more than the name of a processor in their niche market, they planned to have a processor manufactured to their own specs.

The UK computer industry was very much in the doldrums. Acorn was based in Cambridge, home to many of the most notable scientists and mathematicians in the world. There was certainly no shortage of very bright people longing to get involved in computing design but with nothing to get their hands on.

RISC processors were the big buzzword of the period. RISC stands for Reduced Instruction Set Computer. Over the years, microprocessor designers had been adding instructions and gadgets to cover any remote requirement they could think of, and microprocessor instruction sets were beginning to look much like Swiss army knives. This was good news for the assembly language programmer, but most of the software run on computers was by now written in higher-level languages that were then compiled into machine code. Compilers are not very good at making use of quirky and fancy features in an instruction set, and most of the 'extra' features that microprocessor designers were pumping into their CPU cores were not actually getting used.

The idea behind RISC processors is that you cut out all the spaghetti and design a processor that is optimized for running compiled code. The term Reduced Instruction Set came from the original concept that by implementing just the instructions a compiler typically used, you could concentrate on building a leaner, meaner (and hence faster) core. In reality, the real gains of this approach stem from optimizing the CPU to run the code it will eventually be running, rather than adding features and expecting software to use them; RISC designs are not necessarily short on instructions. Nowadays all processor development is carried out by analyzing software requirements, and chip manufacturers retain software developers who follow the bleeding edge of software development and OS design in order to better understand which changes would be most useful in getting better performance out of the latest software techniques.

But back to Cambridge. Here we had a herd of scientific dons falling over themselves to make the simplest and most efficient core possible for the beeb's replacement, and in the grand scheme of things they leapfrogged straight past 16 bits to a 32-bit design. This was the birth of the ARM architecture, but not the ARM processor as we know it today. Acorn's replacement for the beeb was called the Archimedes; check Google if you want to know more, as although it is no longer marketed there are a lot of enthusiasts' home pages out there. It did not succeed in inheriting the UK schools market. Working from the CPU up meant that Acorn spent a long time developing the Archimedes, and by the time it was ready for market, cheap PCs had arrived. As we all know, PCs took the market by sheer weight and momentum. And, of course, schools were far more interested in teaching on machines that were the same as the computers their students would encounter in the real world.

Acorn had seen the writing on the wall for their niche UK schools market, and realised that the best asset they had was a top-notch RISC processor design. They restructured the company and spun off the processor design as a separate entity. This was ARM (originally the Acorn RISC Machine, later Advanced RISC Machines), and it was the birth of the ARM processor per se.

They still had no intention of producing silicon. Their idea was to sell the IP of the processor core to other semiconductor manufacturers; ARM themselves would concentrate on developing the CPU core and selling development tools and software. This meant that silicon manufacturers could widen their portfolio of processors with very little investment and without the problem of having to build support tools for developers. The ARM chip slotted well into the embedded processor market. Flagship 16-bit designs from the major manufacturers had been competing in a features war, and many product designs had stayed at 8 bits because of the complexity hike needed to move to 16 bits. The ARM processor could achieve the performance of its competitors with a design that was half or even a quarter the size, running at lower clock speeds and hence with lower power consumption.

The ARM processor became very popular in embedded designs, especially mobile applications, where its low power, coupled with a size small enough to allow microcontroller-style designs with all peripherals and memory on the same chip, made it a winner.

ARM did not fall into the trap of widening their product range. They concentrated on improving their core (sic) product, indeed their only product: the ARM processor core. Nor did they fall into the trap of a features battle with competing processors. They concentrated on their customer base's requirements and looked for the simplest and most efficient way to meet them. They also had another trick up their sleeve: very few people outside electronic engineering circles actually appreciated what they were doing. When xyz silicon manufacturing company produced a new processor range, or a market-specific IC, it was xyz corporation's new IC. In fact, since ARM do no work on peripherals or packages, there is no ARM chip; ARM appear simply as consultants who license some of the IP used in the IC.

That is how it has been for many years, so without picking our way through all the variations bit by bit, let's jump to the present day and see how the product looks now.

On a 32-bit RISC CPU it comes as no surprise to find that the instructions are 32-bit words and that an 'int' is 32 bits wide. That also means that a simple loop written in C to, say, iterate over 100 characters in a string will, unless the programmer is particularly diligent, use variables much larger than required and opcodes with far more reach than is needed. The ARM core implements a double instruction set: a 16-bit one and a 32-bit one. When running in 16-bit mode, each instruction word contains two 16-bit instructions and the basic word size is 16 bits, which makes the processor far more efficient at the operations that only require 16-bit power (which, on a general-purpose computer, is most of them). The switch to and from 16-bit mode is done when calling routines, so if a programmer is about to write a routine that will clearly only require 16-bit power, he just writes a 16-bit routine instead of a 32-bit one. In a HLL such as C it is enough to use a compiler directive for the function.
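As a rough illustration, here is a minimal sketch (my own, not from the original article) of the kind of routine that only needs '16-bit power'. The build flags mentioned in the comments assume a GCC-style ARM cross compiler, where -mthumb selects the compact 16-bit instruction set and -mthumb-interwork lets such routines call, and be called from, 32-bit code; other toolchains have their own equivalents.

    /* A minimal sketch, assuming a GCC-style ARM cross compiler: counting the
       characters in a string needs only small values and simple operations,
       so this file could be compiled with -mthumb (plus -mthumb-interwork)
       to get compact 16-bit code, while speed-critical files elsewhere in
       the project are built for the full 32-bit instruction set. */
    #include <stddef.h>

    size_t count_chars(const char *s)
    {
        size_t n = 0;               /* a small counter; no heavyweight arithmetic needed */
        while (*s++ != '\0')
            n++;
        return n;
    }

The point is that the whole routine, not individual instructions, is chosen to be 16-bit or 32-bit, which is why a per-function compiler directive (or per-file flag) is all the programmer needs.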

The full 32-bit instruction set, whilst being 'reduced' or minimalistic, does include some powerful extensions. In particular there is a 32x32 multiply-accumulate (MAC) with a 40-bit accumulator, implemented with data pointers. What's that? An instruction that is very good for doing transforms, the operations at the heart of digital signal processing applications. On other processor families this would probably have been dubbed something like MegaMultiMediaPowerExtensions.
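To make that concrete, the sketch below (illustrative only, with invented names) shows the sort of inner loop such an instruction is aimed at: the tap sum of a simple FIR filter, the bread and butter of codecs and other signal-processing transforms. A compiler targeting a core with the MAC extension can map the multiply-and-add in the loop body onto that single instruction.

    /* Illustrative only: a FIR filter tap sum, the classic multiply-accumulate
       workload.  The wide accumulator in C mirrors the wider-than-32-bit
       hardware accumulator described above, so intermediate sums do not overflow. */
    long long fir_sample(const int *coeff, const int *sample, int taps)
    {
        long long acc = 0;
        for (int i = 0; i < taps; i++)
            acc += (long long)coeff[i] * sample[i];   /* multiply-accumulate */
        return acc;
    }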

While we are on the subject of instruction sets, they have added a third alternative instruction set: Java bytecode. Using this instruction set makes implementing and running a JVM far more efficient, although of course you only need it when you are running Java. Such a dual personality is a killer feature for high-end handsets, where Java is widely embedded but must co-exist with highly efficient native code such as codecs. It will be interesting to see if legacy computing environments ever latch on to the potential of this approach.

At this point I should point out that not all of these features are implemented on all processors. The current ARM family may be broadly divided into three product groups. The ARM7 core is still being deployed in new low-end devices. A recent example from Philips is the LPC4106. This microcontroller comes in a 7mm-wide package and costs a few dollars, yet hosts a complete 32-bit system with 128K of flash, 32K of RAM and a selection of synchronous and asynchronous interfaces for connecting peripherals. It can run at up to 60 MIPS (i.e. considerably more powerful than the 33MHz 386SXs that were typically used to run Windows 3.1). These low-end devices generally have just the essentials and, in particular, do not usually have an MMU.

Moving up the scale, the ARM9 is gradually taking over from the ARM7 in new designs, starting from the top down, so you are more likely to find it in high-end single-application embedded devices such as GPS navigators. Many ARM9-based processors have an MMU and come in a high pin count package with a full external bus, so they may be used in systems with many megabytes of flash and RAM and are capable of running powerful operating environments such as the ARM version of Linux.

These CPU cores communicate with the rest of the world via three interface busses. One of these is designed for connecting memory, whilst a second bus (which may run slower than the core, much like the PCI bus on a PC) is used to connect to the generally on-chip peripheral devices. Although the ARM standard does not include peripherals, ARM-based chip manufacturers generally follow conventional solutions for peripherals unless they have reason to differ, which further eases software integration and porting, not to mention the efficient re-use of licensed IP.

The third group of ARM-based processors is a bit different from the ARM7 and ARM9. In the mid-90s ARM worked with Digital to develop a high-performance ARM chip. The basic CPU architecture and instruction set are the same, but they added, amongst other things, instruction and data caches, which means these parts are not 100% compatible with the other devices, although the differences would only be noticed in extreme hand-coded assembly, not in normal compiled code. The processors were called StrongARM and were later acquired by Intel, who pursued the design and used it to replace their not very successful i960 line of processors. They recently renamed the product line XScale and aggressively market the devices at high-volume markets such as mobile phone handsets. All trace of "ARM" seems to have disappeared from the product names and descriptions, but under the skin they are still very much aligned with ARM, who refer to Intel as a technical partner.

There is a school of thought which says that in high-volume embedded markets, ease of software development is not an issue because software development costs represent only a small fraction of the product price. The reality is that software development costs are small because programming teams are small, and no matter how much money you have there is a limit to how many developers you can throw at a project before it starts to fall apart at the seams. This, in effect, means that ease of software development translates into less time to market, a critical element for high-volume consumer devices. The scalability of the ARM architecture, from budget through to feature-packed designs, means that development teams can use, and re-use, familiar and trusted hardware, software and development toolchains across a much wider range of products than is possible with many competing designs.

The way I entitled this article implies that ARM "will be here real soon now". In reality it is already deployed on a large scale. Automobiles, mobile phones, video cameras, GPS navigators, printers, photocopiers, PDAs and routers, not to mention more specialist areas such as aerospace, medical equipment and instrumentation, already use ARM in large numbers and have been doing so for some time. But we do not think of it as a "general purpose computer" the way we do with the x86 family or the PowerPC chip. Yet it does exist in these forms, running Linux, WinCE and even "RISC OS", the operating system originally created for the Archimedes and still maintained as a slick, Mac-like OS by a small private company who sell it to 'beeb' enthusiasts. There are even a few specialist companies making desktop-type computers in mini-tower cases that look every bit like a PC. But it is unlikely that we will see the world full of ARM-based PCs. We are more likely to see desktop computers replaced by high-powered mobile devices, and chip manufacturers (Intel included) are hastily developing highly integrated chips for mobile devices using the compact, efficient and low-power ARM architecture rather than x86 core solutions.

Whether you like it or not, your next 'personal' computing device is far more likely to be based on an ARM core than on an x86 core. And that brings us to a closing thought. The IBM PC was based on a simple data-entry device used in computer centres, and nobody in their wildest dreams imagined that the architecture would dominate the industry for the next 25 years (IBM market analysts projected total sales over the product's life at 186,000 units). That architecture now looks set to be replaced by a design that was born out of a low-cost schools computer project some 15 years ago. In the meantime, many very large projects to develop the 'successor' to the PC have fallen by the wayside. Think about this when you start your 'next big thing'!
