
AMD Licenses 64-bit Processor Design From ARM

Soulskill posted about 2 years ago | from the seeing-which-way-the-wind-is-blowing dept.


angry tapir writes "AMD has announced it will sell ARM-based server processors in 2014, ending its exclusive commitment to the x86 architecture and adding a new dimension to its decades-old battle with Intel. AMD will license a 64-bit processor design from ARM and combine it with the Freedom Fabric interconnect technology it acquired when it bought SeaMicro earlier this year."


Happy Tuesday from The Golden Girls! (-1)

Anonymous Coward | about 2 years ago | (#41814121)

Thank you for being a friend
Traveled down the road and back again
Your heart is true, you're a pal and a cosmonaut.

And if you threw a party
Invited everyone you knew
You would see the biggest gift would be from me
And the card attached would say, thank you for being a friend.

Oh snap. (1)

Anonymous Coward | about 2 years ago | (#41814215)

This is going to change everything.

Re:Oh snap. (0)

Anonymous Coward | about 2 years ago | (#41814273)

An over-priced, slow server? ARM will grow to dominate the market the same way Intel's slow and over-priced servers became commonplace.

Welcome to the club (1, Interesting)

Taco Cowboy (5327) | about 2 years ago | (#41814373)

Welcome to the club, AMD !

Unlike the x86 world, there are many more competitors in the ARM camp - companies such as TI and Broadcom from the USA, Samsung from Korea, Hitachi from Japan, and Allwinner from China, which produces $7 ARM-based SoCs.

AMD, you can't even compete against ONE company in the x86 arena - Intel.

Are you sure you can compete against the whole slew of them this time??

Re:Welcome to the club (5, Insightful)

fm6 (162816) | about 2 years ago | (#41814479)

Your facts are off in two ways. First, going up against one big monopolistic company is a lot harder than going up against a lot of small ones. (Do you think it's easier to fight an elephant or a bunch of guys who are also fighting each other?) Second, they've managed to survive in the x86 market for 30 years. I think that counts as competing.

Re:Welcome to the club (2)

TubeSteak (669689) | about 2 years ago | (#41814931)

Second, they've managed to survive in the x86 market for 30 years. I think that counts as competing.

The OP has a point.
AMD has abandoned the high end CPU market to Intel. [anandtech.com]

AMD's brand new 8-core flagship CPU is competing with Intel's 4-core i5 chip.
And despite being clocked higher, it loses to the i5 in almost(?) every single-core test.

I know AMD pioneered the multi-core field, but they've gotten left behind.

Re:Welcome to the club (5, Insightful)

lightknight (213164) | about 2 years ago | (#41815047)

Indeed. I am trying to grasp, somewhat desperately, the events that must have taken place inside AMD headquarters when the CPU design team said they wanted to do hyper-threading. Having seen how badly Intel got knocked around when they did it, and the fact that for the price of duplicating a fair amount of the CPU, you are still only occasionally eking out a slight performance gain...and sometimes, a performance loss, their strategy doesn't make sense. What was so hard about welding two Phenom II X6's together, using the HyperTransport links already present in the CPU design, and calling it a day? Knowing full well that Intel wouldn't be able to compete with that design (they've been core-averse compared to AMD), being happy that all of the cores were full cores (who'd complain?), and that they'd be a hot item for system builders everywhere. Sure, some of the gaming websites like to barf about how single-threaded performance still matters, on some games that no one cares about (the GPU, of course, mattering a lot more than the single-threaded performance of a CPU here), but to take the advantage of having 6 full cores, and trade it in for 8 half-cores...was this some idiotic attempt at market segmentation? Did some moron in a suit have a brain fart, and think "we can't have 12-core Phenom IIIs, it will cannibalize our Opteron server sales"? Fire his ass, and cut the strings on his golden parachute on the way out.

For the life of me, I just can't fathom how they took a major market advantage, with the CPU design practically on the design table already, with a popular and critically acclaimed design, and decided that f*ck it, we're doing so well here, let's go for a lobotomy, and compete on Intel's turf with an unproven half-assed design. Let's go from a full-core design that everyone compliments, to some terrible half-core design that nearly killed Intel at some point. Seriously, who is commanding AMD such that they were in their nappies when the whole Intel hyper-threading business was going down (which every half-decent tech knows about), and how did they get boardroom approval?

The proper response, of course, was not the Business School of Failure's attempt at mandating some perverse product differentiation, which bears as much similarity to surgery as bludgeoning a person to death with a hammer, but through true, non-crippling differentiation. Phenom IIIs get 12-cores, and the latest SSE instructions + something that the boys down in the instruction lab cook up; Opterons get larger caches + more cores + special server instruction sets that mean something concrete, even if it means implementing hardware Apache threads; that's on top of the SSE3 stuff and so forth. Would companies buy Opterons over Phenoms if one had hardware accelerated support for web services over the other? I believe the survey would say hell yes.

As for the GPU stuff, the low-cost, low-power stuff is nice for chump change, but it's a fierce market with many competitors. What you want, what large companies no doubt want, is the ability to slam in GPU-daughter boards, to add 10 or 20 7970 GPUs on a single board (preferably with sockets, which drives up the cost a few cents, but also taps into the smaller markets, where you may buy 4 GPUs now, and 6 later), so that they can drive those large super-computing projects that already make use of these GPUs, but do so more efficiently.

As for gaming, the more stream processors, I imagine, the better. When in doubt, double them, as it will give Intel and Nvidia something to curse over.

Re:Welcome to the club (3, Interesting)

segedunum (883035) | about 2 years ago | (#41815331)

I don't have mod points but I am equally as puzzled. AMD haven't had that many opportunities over the past few years (none at all really) but that was certainly one.

Sadly the systems I work on are all Intel because we do a great deal of report and post-processing on data and that requires CPU grunt and running as much as we can in parallel. Had AMD done this they would have been under consideration. Hyper-threading makes very little if any difference to us really, it's all about getting as many full cores on as possible.

Re:Welcome to the club (4, Informative)

TheRaven64 (641858) | about 2 years ago | (#41815677)

I am trying to grasp, somewhat desperately, the events that must have taken place inside AMD headquarters when the CPU design team said they wanted to do hyper-threading. Having seen how badly Intel got knocked around when they did it, and the fact that for the price of duplicating a fair amount of the CPU, you are still only occasionally eking out a slight performance gain...and sometimes, a performance loss, their strategy doesn't make sense

Perhaps they looked at IBM's or Sun's implementation of SMT instead. Adding a second context to the POWER series added about 10% to the die area and gave around a 50% speedup. If you have multithreaded workloads (especially on a server) then it can significantly improve throughput for two very simple reasons. The first is that when one context has a cache miss, the CPU doesn't sit idle; it can let the other context work. The second is that it makes branch misprediction penalties lower, because if you're issuing instructions alternately from two contexts you can get the instruction that the branch depends on a lot closer to the end of the pipeline before you need to make the prediction. This also helps with various other hazards, so you don't need so much logic for out-of-order execution to get the same throughput.
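As a rough software-level illustration of the first point (one context making progress while the other waits on memory), here is a minimal sketch, assuming Linux, GCC (build with -O2 -pthread), and that logical CPUs 0 and 1 are SMT siblings on the same physical core (check with lscpu; this is not guaranteed). It pins two latency-bound pointer-chasing threads onto that one core; comparing the wall time against a single-thread run, or against two separate physical cores, shows how much the second hardware context buys you.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N      (1 << 22)   /* ~4M nodes, well beyond the per-thread cache share */
#define STEPS  (1 << 24)   /* pointer-chase steps per thread */

static size_t *chain;      /* chain[i] = index of the next node */

static void *chase(void *arg) {
    int cpu = *(int *)arg;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);  /* pin to one logical CPU */

    size_t i = 0;
    for (long s = 0; s < STEPS; s++)
        i = chain[i];                  /* dependent loads: each step is likely a cache miss */
    return (void *)i;                  /* keep the result live so the loop isn't removed */
}

int main(void) {
    /* Build one big random cycle (Sattolo's algorithm) so the prefetcher can't help. */
    chain = malloc(N * sizeof *chain);
    for (size_t i = 0; i < N; i++) chain[i] = i;
    srand(1);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = chain[i]; chain[i] = chain[j]; chain[j] = t;
    }

    int cpus[2] = {0, 1};              /* ASSUMED to be SMT siblings of one physical core */
    pthread_t a, b;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    pthread_create(&a, NULL, chase, &cpus[0]);
    pthread_create(&b, NULL, chase, &cpus[1]);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("two contexts on one core: %.2f s for 2 x %d dependent loads\n", secs, STEPS);
    free(chain);
    return 0;
}
```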

Very funny - desktop isn't the high end (2)

dbIII (701233) | about 2 years ago | (#41815069)

While you may have wandered in from the territory where the beige box is called the "hard drive" and the screen on the desk is called "the computer", there's people that work with the hardware, and some of that hardware is Xeons, 16 core AMD cpus, sparcs etc etc. Even though I'm only at the cheap end of that stuff I don't mistake the desktop for the "high end".

Re:Welcome to the club (5, Insightful)

Let's All Be Chinese (2654985) | about 2 years ago | (#41815101)

Your argument doesn't stack up.

First you say they're bringing an 8-core chip to compete with a 4-core chip. Fine. Then you complain the cores cannot keep up 1:1. So you're expecting AMD's chips to be twice as good as Intel's to be able to compete.

That, of course, is rigging the test, and so is dishonest.

One could also say that with single cores not much worse than the competition, but double the number of cores, and a lower price to boot, you get better value. Moreso if you can make good use of the double number of cores.

And that's before considering that single-core benchmarks are entirely unrepresentative of multi-core performance thanks to tricks like Turbo Core and Turbo Boost, which aren't 1:1 comparable; you'd have to run full, sustained benchmarks on all cores simultaneously to find out which chip delivers the most sustained instructions per second.

Meaning that AMD's offering takes more marketing footwork, but technically is not all bad. Not at all.
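To make the parent's "full, sustained benchmark on all cores simultaneously" idea concrete, here is a minimal sketch (not any standard benchmark; the xorshift work loop and the iteration count are arbitrary choices). Every hardware thread grinds through the same fixed amount of integer work at the same time, for long enough that turbo clocks settle, and the program reports aggregate throughput rather than a single-core burst figure. Build with something like gcc -O2 -fopenmp.

```c
#include <omp.h>
#include <stdint.h>
#include <stdio.h>

#define WORK_PER_THREAD 2000000000ULL   /* arbitrary; pick a count large enough to outlast any turbo burst */

/* xorshift64: cheap integer work with a data dependency, so the loop can't be optimized away */
static uint64_t spin(uint64_t iters) {
    uint64_t x = 88172645463325252ULL;
    for (uint64_t i = 0; i < iters; i++) {
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
    }
    return x;
}

int main(void) {
    int threads = omp_get_max_threads();
    uint64_t sink = 0;
    double t0 = omp_get_wtime();

    /* every hardware thread runs the same sustained workload simultaneously */
    #pragma omp parallel reduction(^:sink)
    sink ^= spin(WORK_PER_THREAD);

    double secs = omp_get_wtime() - t0;
    printf("%d threads, %.1f s, %.2f G iterations/s aggregate (sink=%llx)\n",
           threads, secs, threads * (double)WORK_PER_THREAD / secs / 1e9,
           (unsigned long long)sink);
    return 0;
}
```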

Re:Welcome to the club (1)

Anonymous Coward | about 2 years ago | (#41815247)

To be fair, he has a point. Per-core performance is often at least as important as, if not more important than, the number of cores in a CPU. Games are slowly moving towards multi-core architectures, but they also frequently have a lot of interdependencies between different parts, making multi-core very difficult. The result is that some things like sound and networking each get their own threads, but the brunt of the game simulation still happens on a single core that may or may not be a bottleneck in the entire system, depending on individual core performance.
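For illustration only, here is a minimal sketch of the thread layout described above, with placeholder work (usleep stand-ins, and hypothetical names like update_ai that appear only in comments; no real audio, networking, or game code). Audio and networking each get a thread, while the interdependent simulation stays serial on the main thread, which is the part that cares about per-core speed. Build with -pthread.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static atomic_bool running = true;

static void *audio_thread(void *arg) {
    (void)arg;
    while (atomic_load(&running)) {
        /* mix and submit the next audio buffer (placeholder) */
        usleep(10 * 1000);
    }
    return NULL;
}

static void *network_thread(void *arg) {
    (void)arg;
    while (atomic_load(&running)) {
        /* poll sockets, queue incoming state updates (placeholder) */
        usleep(15 * 1000);
    }
    return NULL;
}

int main(void) {
    pthread_t audio, net;
    pthread_create(&audio, NULL, audio_thread, NULL);
    pthread_create(&net, NULL, network_thread, NULL);

    /* Main simulation loop: AI, physics, and game logic run here serially
     * because of their interdependencies, so this one thread's single-core
     * performance bounds the frame rate. */
    for (int frame = 0; frame < 300; frame++) {
        /* update_ai(); step_physics(); build_render_commands();  -- hypothetical steps */
        usleep(16667);   /* stand-in for ~16.7 ms of single-threaded frame work */
    }

    atomic_store(&running, false);
    pthread_join(audio, NULL);
    pthread_join(net, NULL);
    puts("done");
    return 0;
}
```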

Re:Welcome to the club (2)

unixisc (2429386) | about 2 years ago | (#41815377)

But it's not a lot of small ones - it's companies like TI, Broadcom, Qualcomm, NVIDIA, Freescale, Samsung, Hitachi, and so on. Each much larger than AMD.

Also, there is nothing about ARM that inherently makes it more power-saving at the same performance level than other RISC CPUs, be it SPARC, POWER, MIPS, and so on.

Re:Welcome to the club (5, Informative)

TheRaven64 (641858) | about 2 years ago | (#41815715)

Also, there is nothing about ARM that inherently makes it more power-saving at the same performance level than other RISC CPUs, be it SPARC, POWER, MIPS, and so on.

I can think of several things. For Thumb-2, there is instruction density. MIPS16 does about as well as Thumb-1, but it is a massive pain to work with. AArch64 doesn't (yet) have a Thumb-3 encoding, but one will almost certainly appear after ARM has done a lot of profiling of the kinds of instruction that compilers like to generate. Even in ARM mode, the big win over the other RISC architectures is that it has fairly complex addressing modes, so you can do things like structure and array offset calculations in one instruction on ARM or 3-4 on MIPS. For AArch32, you also have predicated instructions. These make a big difference on a very low power chip, because you don't need to have any branches for small conditionals. For AArch64, most of these are gone, but there is still a predicated move, which is a very powerful version of a select instruction and lets you do mostly the same things. With AArch32 you have store and load multiple instructions, which basically let you do all of your register spills and reloads in a single instruction (the instruction takes a mask of the registers to save, the register to use as the base, and whether to post- or pre-increment or decrement it as two flags). With AArch64, they replaced this with a store-pair instruction, which can store two registers, and has the advantage of being simpler to implement (a fixed number of cycles to execute).
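Purely as illustration of the features listed above, here are a few C patterns that map onto them. The per-ISA notes in the comments describe typical compiler output, not guarantees; the exact instructions vary by compiler and flags (compile with -O2 -c and inspect the assembly).

```c
#include <stdint.h>

struct particle { float x, y, z, pad; };   /* 16-byte stride */

/* Complex addressing modes: loading a[i].z is typically a single AArch32 load
 * with a scaled register offset (base + i*16 + 8); classic MIPS usually needs
 * separate shift/add instructions before the load. */
float get_z(const struct particle *a, int i) {
    return a[i].z;
}

/* Predication (AArch32) or conditional select (AArch64 CSEL): a small
 * conditional like this can compile to branch-free code, which matters on a
 * low-power core with little branch-prediction hardware. */
int32_t clamp0(int32_t v) {
    return v < 0 ? 0 : v;
}

/* Register saves around a call: AArch32 can spill several callee-saved
 * registers with one STM/LDM (e.g. push {r4-r6, lr}), while AArch64 uses
 * store-pair/load-pair (e.g. stp x19, x20, [sp, #-32]!), which is simpler
 * for the hardware at the cost of more instructions. */
extern int expensive(int);   /* hypothetical external call */
int spill_demo(int a, int b, int c, int d) {
    int t = expensive(a);    /* b, c, d must survive the call, so they land in callee-saved registers */
    return t + b + c + d;
}
```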

on a 1 billion trans die, = 500 cores? (1)

cheekyboy (598084) | about 2 years ago | (#41815239)

If they can fit 200-500 cores per die, and then have 8 dies per server, that will kick ass.

Most server tasks run for seconds at a time and don't require 2800 BogoMIPS per thread.

Re:Welcome to the club (3, Informative)

TheRaven64 (641858) | about 2 years ago | (#41815667)

It's a relatively small club. Note that both the headline and summary are wrong. AMD has not licensed a processor design, they have licensed the right to make their own implementation of the ARMv8 architecture (which isn't just a piece of paper, it includes access to ARM's rich set of regression tests and assistance from ARM engineers when requested on both the hardware design and the supporting software). I know of three other companies working on ARMv8 designs. For ARMv7, I think there is basically only ARM with the Cortex series and Qualcomm with the Snapdragon (which is a massively hacked-up Cortex A8, with a completely redesigned FPU, a better interconnect, and some other improvements, but not a complete independent implementation). Compare this with the ARMv4 and ARMv5 situation, where StrongARM and XScale were complete independent implementations. ARM has intentionally delayed producing their own ARMv8 design to give other companies a chance and promote more competition. This worked very well for x86 during the '90s, when Intel, AMD, Cyrix/IBM, IDT, and others were all pushing out compatible products at different market segments. In the ARM world, because they all have to go through the same set of conformance tests, compatibility should be even higher.

Re:Oh snap. (0)

Anonymous Coward | about 2 years ago | (#41814389)

Don't forget the power savings!

Re:Oh snap. (3, Informative)

SomePgmr (2021234) | about 2 years ago | (#41814441)

An over-priced slow server, ARM will grow to dominate the market. The same way Intel's slow and over priced servers have become commonplace.

Well we'd try something else, but it turns out monkeys with notepads and crayons are even slower (and more expensive).

Biodegradable, though.

The fat lady is singing (2, Interesting)

Anonymous Coward | about 2 years ago | (#41814221)

The panel discussion that accompanied the AMD news conference was absolutely painful to watch. The only thing I learned is how completely clueless the CxOs of the 'cloud computing era' really are. Seeing company officers from Dell, Red Hat and Facebook drool all over themselves like that was yet another painful lesson that the fratboys of the world have turned the tech industry into their drunken biatch.

Re:The fat lady is singing (1)

Anonymous Coward | about 2 years ago | (#41814413)

Do you (or any other kind commenter) have a link to the video? I'd like to see what you're talking about.

Re:The fat lady is singing (4, Informative)

Anonymous Coward | about 2 years ago | (#41814591)

It usually takes a day or so for replays to be posted, but it should show up on the AMD Investor Relations Website [amd.com] (the same site that hosted the live webcast).

Re:The fat lady is singing (5, Informative)

hairyfeet (841228) | about 2 years ago | (#41814429)

What's sad is how badly the former CEO fucked AMD [insideris.com] by doing a total slash and burn on their engineering and R&D and pushing for cheaper automated layouts that simply don't cut it. The Athlon64 guys? GONE. The Cyrix guys? GONE. They pretty much have their backs against the wall because the former CEO burned the fucking company to get a short term bounce, which I'm sure he cashed out on.

And anybody who thinks ARM will save them might be interested in some magic beans I have for sale, as ARM frankly doesn't scale very well and from the early looks ARM64 isn't gonna be really any better for power than the CULV Intel chips while having a HELL of a lot worse IPC. Frankly, and this is coming from someone who has been building AMD systems exclusively for awhile now and is still hanging onto AM3+ for all it's worth, the only real selling point they had was "bang for the buck" but by burning R&D and killing Thuban the former CEO left them holding the bag without shit besides Bulldozer, which we all know blows too much power, is too damned hot, and frankly their octocores get stomped by Intel quads on IPC while using a third of the power.

I have to agree with the engineer in that link: they should have done the same thing Intel did with Core and gone back to their earlier K8 designs and started from there, just as Intel did with P3 mobile, but now they just don't have the money or the time. I truly hope the Athlon64/Apple A6 chip designer they hired back can come up with a design to save the company, because right now? Right now they really got nothing. Hell, the former CEO even pulled the plug on Krishna, which would have been a sub-20W quad-core Bobcat, which is why all we're seeing now is minor speedbumps on a 3+ year old design. I swear they got fucked raw by bad management and I only hope they pull through. Maybe if they had done this 4 years ago they could have the niche Nvidia now holds, but now? It's just not enough.

Re:The fat lady is singing (2, Informative)

Anonymous Coward | about 2 years ago | (#41814665)

I completely agree (although the Cyrix guys weren't a part of AMD if I recall correctly, they're now Via).

I don't get why Via and AMD don't do any collaboration. Via seems to have decent CPUs and some pretty bright sparks in their CPU design division but they use fucking awful graphics chipsets. Or Via and Nvidia for that matter.

Re:The fat lady is singing (1)

lightknight (213164) | about 2 years ago | (#41815055)

It's been a while, but wasn't VIA responsible for the really screwy AMD chipsets that used to make people curse under their breath?

Re:The fat lady is singing (3, Insightful)

lightknight (213164) | about 2 years ago | (#41815097)

Indeed. The one order the CEO can give to save the company is this: "Magical turn-arounds for companies that have been f*cked only happen in textbooks and fairy tales; as such, all resources for CPU design will go into creating a Phenom III with 12 cores and PCI-Express 3.0, and an Opteron design which employs liquid cooling (for the short term), as we are going to give it a major MHz boost on top of the extra cores / cache we are going to staple on."

Getting involved in the already overgrown ARM market shows nothing but lack of vision. "We're going where everyone else is going, that'll be profitable!" You are going to be *that* guy who shows up late to the party, and wonders why all the booze is gone. Seriously, how do you mismanage stuff this badly? You're a CPU company, and you come up with the brilliant plan that despite being a major competitor in the x86 market, you're going to fix things by buying an oversubscribed design for a CPU in a market that...recursion error.

Think of it as being like Ford, not using its own resources to think up a new car design, but paying Honda to license the design for the Civic. Either things are absolutely atrocious - like AMD's stock should be worth a Haitian penny right now bad and we just haven't been told anything - or somebody doesn't know what he's doing. Go get the old guys your predecessor fired, and bring them back for more money. Find the DEC guys, and offer stock options if you have to to get them on board. Then follow their advice. After a year or two of punishment, AMD will be back on firm ground again.

Re:The fat lady is singing (1)

unixisc (2429386) | about 2 years ago | (#41815405)

The more surprising thing is AMD going with a totally new architecture - ARM64 - instead of one of the tried and tested RISC CPUs out there. They could have licensed SPARC from Oracle, or POWER from IBM, or MIPS from MIPS Technologies, taken an existing CPU design, made a chip quickly, and then built on that in subsequent iterations. There would also have been existing software platforms ready for them - not just Linux or BSD, but also things like Solaris or AIX.

Not going to help (1)

Anonymous Coward | about 2 years ago | (#41814225)

Next thing you know, AMD is going to give up on making a really good ARM processor too, and focus on the low-end, no margin market where Samsung and Apple will crush it with 3-generation-old designs.

And they'll call a dual-core part at 1.2 GHz the "AMD Fusion G8 Quad 3800+".

I can't wait!

Re:Not going to help (1)

Anonymous Coward | about 2 years ago | (#41815443)

I wonder if they'll migrate to producing 68000s next.

AMD might stand a chance (1)

MrDoh! (71235) | about 2 years ago | (#41814233)

They're losing the x86 battle even with great chips. ARM might give them a huge boost, and it's good for them to expand the business.

Re:AMD might stand a chance (3, Insightful)

CAIMLAS (41445) | about 2 years ago | (#41814353)

If AMD can push their engineering into ARM quickly, they might not only stand a chance but they might dominate fairly quickly, I'd think. They're not on par with Intel on die size, but IIRC they're pretty close - that knowledge is certainly applicable.

Remember, they've got good GPUs already. A lot of what they tried to do with the Mobility and later generations was very "ARM-like" already, it just didn't exactly work due to x86 limitations. I'd think they've got a pretty good chance overall. (If anything, it's a big market. Tegra# are really pushing Nvidia along, after all...)

Re:AMD might stand a chance (2)

Dahamma (304068) | about 2 years ago | (#41814553)

Note the article says ARM *server* processors. In that market, GPUs are totally irrelevant, power usage is secondary to performance, and price of the CPU is a distant third.

Any ARM CPU is at least an order of magnitude behind the current x86-64 server CPUs. Not to mention the additional work required to support multiple ARM CPUs on a motherboard, and even convince the major server manufacturers to build an ARM-based server in the first place. Good luck AMD, though you won't need it since even luck won't help you here...

Re:AMD might stand a chance (2)

Max Littlemore (1001285) | about 2 years ago | (#41814889)

Note the article says ARM *server* processors. In that market, GPUs are totally irrelevant, power usage is secondary to performance, and price of the CPU is a distant third.

That is how it has been, but I'm wondering if this is not a strategic move on their part. Perhaps they are thinking of large clusters of low-power ARM cores that kick in as the workload demands, with some kind of clever way of sharing resources (Freedom Fabric?). With the global political landscape the way it is, that could be an important point of difference.

Reducing energy consumption is now the "in thing" and will continue to grow in purchasing decisions as financial incentives to reduce carbon emissions grow. If a server can be run using a single voltage PSU instead of needing different voltage on different rails, there are likely to be energy savings over x86. If that server can idle at less than a Watt and then ramp up in small increments as demand requires, that might also yield an overall advantage.

Sure, for serious continuous load applications, it's probably not the best, but for a lot of cloud type applications I can see this as being useful for renewable energy supplied server farms.

I'm just speculating of course, and I fully agree GPUs are irrelevant, but I think the idea that power usage is always secondary to performance has reached its use-by date.

Re:AMD might stand a chance (2)

Dahamma (304068) | about 2 years ago | (#41814981)

That is as has been, but I'm wondering if this is not a strategic move on their part. Perhaps They are thinking of large clusters of low power ARM cores that kick in as the workload demands with some kind of clever way of sharing resources (Freedom Fabric?). ...
If that server can idle at less than a Watt and then ramp up in small increments as demand requires, that might also yield an overall advantage.

After 20 years of Wintel I finally caved to try a Mac. The new MBP Retina is insanely fast CPU-wise for the same battery use over my old laptop (not quite as much GPU-wise vs my Windows desktop, but we are not talking GPUs). And when I'm not doing much it uses very little power. That's with a 4-core (hyperthreaded to 8) i7.

Intel has already come up with solutions for standby & lower power scenarios - when doing nothing there isn't much difference. When doing "a little bit" ARM definitely wins, and when loaded the ARM isn't even in the equation since it can't come close to the loads of the high end x86-64 CPUs. And it also depends on the application, as you say. For large data-intensive applications the power draw of the motherboard, RAM, HDDs, etc. surpasses the CPU anyway, so it's diminishing returns.

Anyway, I do think there is a market for low power servers. But it's currently a really low margin, barely profitable market, and will be until power costs are higher than hardware costs for the advantages it brings. And does anyone really think Intel is not capable of adapting to the market at that point to maximize their profits yet again?

Re:AMD might stand a chance (1)

ppc_digger (961188) | about 2 years ago | (#41814945)

Note the article says ARM *server* processors. In that market, GPUs are totally irrelevant, power usage is secondary to performance, and price of the CPU is a distant third.

Power usage translates directly into heat, so if you have a CPU that draws a tenth of the power, you can cram roughly ten times as many of them into the same power and cooling envelope.
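As a back-of-the-envelope sketch of that density argument (all numbers here are made up for illustration, and the real math would also have to count boards, RAM, disks, fans, and PSU losses):

```c
#include <stdio.h>

int main(void) {
    double rack_budget_w  = 10000.0;  /* assumed usable power/cooling budget per rack */
    double big_socket_w   = 95.0;     /* assumed per-socket draw for a big x86 server part */
    double small_socket_w = 9.5;      /* assumed per-socket draw for a low-power ARM SoC */

    printf("x86 sockets per rack:  %.0f\n", rack_budget_w / big_socket_w);
    printf("ARM sockets per rack:  %.0f\n", rack_budget_w / small_socket_w);
    printf("density ratio:         %.1fx\n", big_socket_w / small_socket_w);
    return 0;
}
```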

Re:AMD might stand a chance (2)

funkboy (71672) | about 2 years ago | (#41815655)

This is almost certainly for a SeaMicro-based architecture. The GPU might be mostly irrelevant in this market today but will continue to gain importance as more tasks transition to being executable via OpenCL & its cousins.

What you are looking at is a small box densely packed with lots of cores. Another flavor will likely come as a box with a few weak ARM CPUs used to control a large quantity of GPUs for HPC applications.

The thing that will make or break ARM in a SeaMicro style chassis is whether they can get a successful OpenStack port to it done in time for the launch. My guess is that they will, as OpenStack development is going gangbusters at the moment.

Re:AMD might stand a chance (2)

drinkypoo (153816) | about 2 years ago | (#41816025)

Note the article says ARM *server* processors. In that market, GPUs are totally irrelevant, power usage is secondary to performance

Begging the question. Is power usage actually secondary? Not for many kinds of workloads, which are storage-intensive. For SOME servers it doesn't make sense. For OTHER servers, it clearly makes sense; people are already using ARM-based servers. Perhaps you should consider a little self-education.

Re:AMD might stand a chance (0)

haruchai (17472) | about 2 years ago | (#41814403)

Fabrication has always been their Achilles heel. If their fab capabilities were only twice as efficient (yes, I know that's a lot, but Intel is WAAAY above that), they would have little to fear from Chipzilla.

Re:AMD might stand a chance (3, Informative)

Anonymous Coward | about 2 years ago | (#41814541)

AMD no longer has a fab of their own, as of two years ago(?). I believe they are currently using TSMC for most of their production.

TSMC might stand a chance (1)

Anonymous Coward | about 2 years ago | (#41815329)

TSMC has already demonstrated with ATI's 7xxx-series GPUs that they can handle a 28nm process. They should be able to apply that to AMD CPUs and remain competitive with Intel on that front.

Re:AMD might stand a chance (1)

lightknight (213164) | about 2 years ago | (#41815115)

Then they should take some of their idle Sales / Marketing / Business guys, have them fly over to {country}, and let them spend some time charming the other foundries into not only giving them the capacity they need, but doing so at an excellent price. At the very least, it will give them something to do.

Re:AMD might stand a chance (4, Interesting)

toejam13 (958243) | about 2 years ago | (#41814415)

I'm sure this is just AMD hedging their bets against multiple processor ISAs. There are places where ARM is better than x86/x86-64, so it makes sense to try and dominate those niches. It falls in line perfectly with AMD being a less expensive alternative to Intel.

Given that Intel is trying to wind down its StrongARM line it inherited from DEC, AMD may see the ARM line as a place where it can finally be top dog. It has the expertise to give Broadcom, TI and Samsung a run for their money.

Taking a really big drink from the hypothetical Kool-Aid, I could see ARM64 processors being used as x86-64 replacements in palmtops and laptops. There are a couple of x86 to ARM translators on the market, which would solve the binary compatibility issue. I used FX!32 back during the NT4 and NT5beta days with my DEC workstation, and it made emulated binaries about 90% as fast as native. With advances in JITC translators and a cleanup of the x86-64 ISA to make it closer to meeting Popek and Goldberg virtualization requirements, I could see a good modern translator being 95+% as fast as native x86-64 code.
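For anyone unfamiliar with how FX!32-style translators get that close to native speed, here is a very rough sketch of the translate-and-cache dispatch loop at their core. All the names (translate_block, run_guest, and so on) are hypothetical scaffolding, and a real translator also has to deal with self-modifying code, faults, the OS ABI, block chaining, and profile-guided retranslation.

```c
#include <stddef.h>
#include <stdint.h>

/* A translated block is native host code that emulates one stretch of guest
 * code and returns the guest PC to continue from. */
typedef uint64_t (*host_block_fn)(void *cpu_state);

#define TCACHE_SIZE 4096
static struct {
    uint64_t      guest_pc;
    host_block_fn code;
} tcache[TCACHE_SIZE];              /* trivial direct-mapped translation cache */

/* Hypothetical back end: decode guest (say, x86-64) instructions starting at
 * guest_pc and emit equivalent host (say, ARM64) code. */
extern host_block_fn translate_block(uint64_t guest_pc);

static host_block_fn lookup_or_translate(uint64_t guest_pc) {
    size_t slot = (size_t)(guest_pc >> 2) % TCACHE_SIZE;
    if (tcache[slot].code == NULL || tcache[slot].guest_pc != guest_pc) {
        tcache[slot].guest_pc = guest_pc;
        tcache[slot].code = translate_block(guest_pc);   /* pay the translation cost once */
    }
    return tcache[slot].code;
}

/* Dispatch loop: after warm-up, hot code runs straight from the cache at
 * near-native speed, which is where figures like "90-95% of native" come
 * from. (Guest exit handling is omitted.) */
void run_guest(void *cpu_state, uint64_t entry_pc) {
    uint64_t pc = entry_pc;
    for (;;)
        pc = lookup_or_translate(pc)(cpu_state);
}
```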

I've been expecting Apple to churn out a Power Book with an ARM processor and a binary translator. They did it with m68K -> PPC and PPC -> x86, so I wouldn't be surprised in the least to see x86 -> ARM. Now imagine it with an AMD ARM64 SoC at the heart of it.

Re:AMD might stand a chance (2, Informative)

lowlymarine (1172723) | about 2 years ago | (#41814491)

An ARM processor doing binary translation for x86 would be like trying to tow an 18-wheeler with a Tata Nano. ARM may be low-power, but it's also...well, low-power. Even older Core 2 chips wipe the floor with ARM's latest and greatest from a performance standpoint.

Re:AMD might stand a chance (5, Interesting)

TheEyes (1686556) | about 2 years ago | (#41814611)

Maybe the new direction is going to be heterogeneous computing. We're already seeing AMD and Intel combine x86 and a GPU on one die; maybe AMD will try to combine everything and have a couple of ARM cores for low-power tasks, a couple of Bulldozer modules for more intensive tasks, all combined with their GPU.

Re:AMD might stand a chance (1)

symbolset (646467) | about 2 years ago | (#41814711)

Bingo.

Premature Optimization in translating x86-ARM (2)

girlinatrainingbra (2738457) | about 2 years ago | (#41815137)

They need a cute small name for the ARM cores to go with Bulldozer for the large cores: maybe call them Cats or Toros or Deeres, more lawn-mowery than bull-dozery. :) Anyway, why translate x86 to ARM when you could recompile directly for ARM? Isn't that where GNU and Linux (or GNU and HURD) could show the advantage of free/open-source software: source code access allows direct or cross-compilation of the source into binaries that run on ARM?

I would think that taking pre-compiled X86-binaries and then translating {pre-compiled and optimized for x86} code into ARM code is a waste of time and a lot of premature optimization.

Re:AMD might stand a chance (1)

Max Littlemore (1001285) | about 2 years ago | (#41814909)

I wouldn't want to tow an 18 wheeler with a Nano, but I also wouldn't want to deliver a small package to an inner city address with an 18 wheeler. Horses for courses (or pigeons, horses and elephants for their respective courses if you will).

Re:AMD might stand a chance (1)

Sable Drakon (831800) | about 2 years ago | (#41814545)

AMD can't even keep up with a single competitor, so just how are they going to compete against 3+ vastly more nimble companies in the ARM space? AMD is making yet another last gasp at trying to stay relevant.

Re:AMD might stand a chance (0)

Anonymous Coward | about 2 years ago | (#41814959)

My memory might be failing me, but wasn't AMD the reason that Intel stopped the pointless GHz bumping? And didn't they beat Intel to a proper multicore architecture? And weren't they before Intel when it came to the APU?

Not to mention that Intel's graphics are still pathetic compared with nVidia and AMD.

That doesn't really sound like a company that's grasping to stay relevant, that sounds like a company that's pretty much defining the course of future technologies. Even if at each step they aren't as fast.

Re:AMD might stand a chance (3, Informative)

samoanbiscuit (1273176) | about 2 years ago | (#41815613)

It's been said before on this thread, but I'll say it again. AMD remaining solvent while competing against Intel for 30 years is a lot more impressive than most people realize, especially considering they competed using Intel's own ISA. It's too soon to tell now, but it's reasonable to expect that AMD (being in Intel's weight class) could plausibly compete with most of the current ARM manufacturers. I'd certainly expect their 64 bit server chip efforts to be a lot more interesting than what the cell phone chip makers have been putting out from a performance perspective.

Re:AMD might stand a chance (1)

ducomputergeek (595742) | about 2 years ago | (#41814809)

I just find it ironic that Apple could very well be going back to RISC after not even a decade of being on x86. Even more ironic given the amount of work Apple contributed with Acorn back in the 1980's.

A MacBook with an ARM chip wouldn't surprise me. After all, the iPad, iPhone, and iPod all use ARM chips already.

But then again I bought this MBP earlier this year, as well as Parallels and Windows 7 Pro, because I do enough development work on multiple platforms that I do need to test against Windows as well as use Visual Studio now and then. And this gives me the flexibility to stay in Unix 90% of the time and then boot the Boot Camp partition under Parallels, or even reboot directly into Windows 7 if I need to do some heavy work in VS.

NOTironic that Apple could very well be going back (3, Insightful)

girlinatrainingbra (2738457) | about 2 years ago | (#41815171)

Well, considering that they made the jump from the PowerPC architecture to the x86 architecture because IBM/Motorola could not provide a low-power version of the G5 PowerPC chip to be used in the portable space of laptops, it doesn't seem ironic at all that Apple might consider using a low-power-consumption chip in the laptop or portable space. It almost makes darned-good-sense.

And considering what they'd been doing with Pink / Taligent in keeping a parallel universe of development of their codebase always going on the x86 architecture while publicly showing only PowerPC development, they've probably got a skunkworks team somewhere that's already been running ARM-based iOS or even ARM-based OS X for a year if not for years...

Re:NOTironic that Apple could very well be going b (0)

Anonymous Coward | about 2 years ago | (#41815307)

they've probably got a skunkworks team somewhere that's already been running ARM-based iOS or even ARM-based OS X for a year if not for years...

iOS is ARM-based. Always has been since first release.

Re:NOTironic that Apple could very well be going b (0)

Anonymous Coward | about 2 years ago | (#41815503)

But also Darwin based, just like Mac OS X

Re:AMD might stand a chance (1)

TheRaven64 (641858) | about 2 years ago | (#41815261)

Given that Intel is trying to wind down its StrongARM line it inherited from DEC, AMD may see the ARM line as a place where it can finally be top dog

Intel isn't trying to wind this line down, they sold it outright to Marvell two years ago. Even then, they were pretty anaemic. XScale was the P4 of the ARM world: twice as high a clock speed as everyone else but much lower instructions-per-clock. It's an ARMv5 implementation, which seems painfully archaic today (especially given the lack of an FPU, which even most ARMv6 implementations have).

Re:AMD might stand a chance (1)

Paul Jakma (2677) | about 2 years ago | (#41815521)

StrongARM? Didn't Intel sell that to Marvell years ago?

Re:AMD might stand a chance (1, Interesting)

hairyfeet (841228) | about 2 years ago | (#41814497)

Uhh...what great chips? The Thubans were good, but everything based on Bulldozer just blows through power while having terrible IPC, thanks to having shared integer and floating point units. If they were to be honest the "modules" would be treated as single cores with hardware assisted hyperthreading, because the benches show that is a hell of a lot closer to what they are than to true cores. Hell, since the release of BD they don't even have a single slot anymore on Tom's Hardware "Best Gaming CPU" [tomshardware.com] list whereas they used to pretty much OWN everything under $200. Hell, look at how badly their new chips rate [tomshardware.com] compared to even their old chips, with not only the X6 but no less than TWO of the X4s, the 980 and 955, scoring better than their new FX 8120. So I'm sorry, this is coming from someone who has been building AMD exclusively since I heard about the OEM bribery, but the new chips? Just not good.

And sadly ARM isn't gonna save them either; they are too late to the game and from the looks of it ARM simply isn't gonna scale while keeping its lower power budget. Just look at how companies like Nvidia, which have been sinking a ton into ARM, are having to use ever more cores to get the performance up; it just doesn't scale. And since Intel has the fabs they can get to the smaller process sizes quicker, and their chips are frankly getting lower powered all the time. A 55W Ivy will frankly curbstomp a 125W Piledriver, and with servers, while there are some loads you can run without the IPC, frankly there are a LOT more loads where you'll need that IPC and AMD just doesn't have it, and with electricity costs and cooling costs? It really doesn't look good for AMD. Damn, I wish it weren't true, but it is what it is: AMD is in REAL bad shape right now.

Re:AMD might stand a chance (3, Informative)

makomk (752139) | about 2 years ago | (#41815275)

The Thubans were good, but everything based on Bulldozer just blows through power while having terrible IPC, thanks to having shared integer and floating point units. If they were to be honest the "modules" would be treated as single cores with hardware assisted hyperthreading, because the benches show that is a hell of a lot closer to what they are than to true cores.

Errrm, all of the integer units are dedicated, and the shared floating point units still give each core as many floating-point resources as on the previous generation of AMD chips even if every single core is using floating point 100% of the time. If AMD hadn't screwed up on the engineering side, it'd be a really great design.

Re:AMD might stand a chance (1)

Sable Drakon (831800) | about 2 years ago | (#41814513)

Great chips? AMD hasn't had a truly great chip since the Athlon XP and maybe the Phenom II. And even then, their Intel counterparts gave them a beating.

Re:AMD might stand a chance (1)

lightknight (213164) | about 2 years ago | (#41815109)

Not without the top-level chip designers their previous CEO nuked. They may be the Chicago Bulls in name, but the player lineup has changed.

Fingers Crossed! (2)

jerquiaga (859470) | about 2 years ago | (#41814267)

I'm hoping AMD does something to stay relevant. If they were to leave the market (or effectively leave the market by selling super low volume), then there's nothing to keep Intel honest.

Android Java Server (0)

Anonymous Coward | about 2 years ago | (#41814937)

Pretty please.

Intel (1)

Citizen of Earth (569446) | about 2 years ago | (#41814281)

Intel will be doing the same thing in 3... 2... 1... Just like missing the 64-bit era with Itanium, it is missing the mobile era with Atom.

Re:Intel (1)

NidStyles (794619) | about 2 years ago | (#41814301)

Intel just went for their license last week. AMD has had their license for 4 years now. Who wants to make bets on who is going to win this race? AMD has won all of the previous ones.

Re:Intel (3, Informative)

Dahamma (304068) | about 2 years ago | (#41814599)

Who wants to make bets on who is going to win this race? AMD has won all of the previous ones.

I assume you are joking, right? It's not a sprint, it's a marathon. Being first to market means nothing, it's winning the market. And Intel is crushing the 64-bit processor market right now.

Re:Intel (2, Interesting)

Anonymous Coward | about 2 years ago | (#41814451)

I wouldn't be surprised at all if Intel had a team working on ARM ISA designs as a contingency plan, but I highly doubt they'd transition to ARM unless x86 was facing virtual annihilation. They're well aware that if they start releasing ARM chips, the whole industry will much more quickly transition away from x86. There's no way they would willingly destroy their extremely profitable, high-margin x86 business.

Re:Intel (2, Interesting)

asliarun (636603) | about 2 years ago | (#41814465)

Intel will be doing the same thing in 3... 2... 1... Just like missing the 64-bit era with Itanium, it is missing the mobile era with Atom.

What are you even talking about? Since when did Intel miss the "64 bit era" as you put it? Sure, Itanium was a failure and Intel sunk billions of dollars trying to make it work. However, Intel could afford that mistake and still continue chugging along. As things stand today, Intel absolutely dominates the 64 bit market. In fact, except for Intel, AMD, and the IBM Power chips, there is no other game in town as far as 64 bit is concerned, and in this market, Intel probably has 80% or 90% market share, and has the best performance and performance per watt numbers.

So, I'm not sure which 64 bit era you are talking about, and how Intel missed it.

As far as Atom is concerned, yes, Intel is struggling quite a bit. However, Intel is trying to scale down its power consumption while ARM is trying to scale up its performance. Sooner or later, the two shall meet and it will be a very interesting battle. I wouldn't write off Intel so soon. In fact, the upcoming Clover Trail based Windows 8 tablets should be a very interesting launch. Take a look at the ThinkPad Tablet 2, for example. It should be a very interesting tablet for corporate customers or for users who want x86 with Windows 8 Pro, 3G and LTE mobility, a full-size USB port, and 8-9 hrs of battery life.

I'm not saying Intel will win or lose, and it needs to be relentless in improving power efficiency to even be a viable alternative to ARM. However, to say that Atom has already lost the race is a bit premature.

Re:Intel (2)

Citizen of Earth (569446) | about 2 years ago | (#41814757)

Intel went for IA-64 and it was a complete failure. Ultimately, it was forced to adopt the AMD-64 instruction set. That's what I mean -- Intel missed the boat and the 64-bit instruction set it uses isn't even its own. Since adopting AMD-64, it's dominated the market space. If it wants to get anywhere in the mobile space, it will need to fold its current Atom strategy and go all-out ARM. Until it does that, it's Itanium all over again.

Re:Intel (1)

ChunderDownunder (709234) | about 2 years ago | (#41814771)

One of the bugbears of the ARM platform is the absence of mature, complete FOSS drivers for the embedded GPUs, e.g. PowerVR (proprietary), Mali (Lima), Tegra (proprietary), Adreno (Freedreno).

I could see Intel going the other way - keeping ARM at a distance but licensing its HD Graphics GPU to SoC manufacturers at minimal cost on the condition that they use Intel's factories to fabricate them.

(Just speculating - have no idea what % of a Sandy Bridge CPU's power draw is due to the graphics core(s))

Future for AMD (1)

drhank1980 (1225872) | about 2 years ago | (#41814285)

I hope going with ARM is successful for them; maybe enough to get them to try to make something to compete with the Tegra in the mobile space eventually.

Or the past (1)

mozumder (178398) | about 2 years ago | (#41814309)

Consider that they used to sell Imageons (ARM CPU + ATI GPU), which they sold off.

Let's hope this time it works out for them. Power optimization is now important, unlike in the Imageon days.

Yep, I knew it... (1)

NidStyles (794619) | about 2 years ago | (#41814293)

I called this three months ago after they announced that they would likely be going in a new direction. Not here obviously...

Re:Yep, I knew it... (0)

Anonymous Coward | about 2 years ago | (#41814475)

[citation needed]

Back to Imageon? (1)

mozumder (178398) | about 2 years ago | (#41814299)

Are they bringing back the Imageon line now? (ARM + ATI GPU on die?)

use cases? (2)

Aryeh Goretsky (129230) | about 2 years ago | (#41814319)

Hello,

When it comes to servers, I use comparatively few (a small lab with a few racks' worth used for research projects) at work, so I'm wondering what sort of tasks these would be useful for. It sounds like they'll run RHEL and other Linux distributions, but even after looking at the second slide in this [hothardware.com] presentation, it's unclear to me what advantage this would be to a small business, or, in my case, a small department in a larger organization.

Is this new CPU/server line intended only for the enterprise? If so, what would the "trickle down effect" be for small groups like my own? Also, why would someone want to throw out their investment in existing hardware (including whatever talent they might have at programming and maintaining said hardware) for a design that's relatively proprietary?

Regards,

Aryeh Goretsky

Re:use cases? (1)

CAIMLAS (41445) | about 2 years ago | (#41814405)

Yeah, I'm guessing this is geared more towards people who do the whole vendor support thing, and who have a handful of people (or at least one person) dedicated to maintaining specific equipment (e.g. Linux servers vs. switches). Homogeneity is key, but high thread count will also push the ARM advantage here, because you could fit (say) 8 of these small systems with multiple CPUs each in a single 2U without much issue, and still leverage your SAN storage.

You'll probably see them in low-end "server" devices too, I imagine. Eventually. Unless you're running stuff that'll compile and run on ARM, it probably gives you no advantage unless it's worth recompiling/running it on something with a GPU. If Windows 2010 were available for ARM, running a terminal server cluster might be useful, for instance (depending on how big your dept is).

With AMD's experience in x86, if anyone can push ARM to the server/desktop it's them. If they can standardize on things like the 'BIOS' for their own products, I'm sure many smaller companies will follow. (You're already seeing a degree of standardization, e.g. with integrators like Samsung, but that's not exactly where AMD is going with this.)

For SMB - say, IT shops with fewer than a dozen identical systems now, or special use cases needing GPU (eg. scientific computing), this is a non-starter, at least for now. Think high flung web shops, cloud computing, and the like.

Re:use cases? (1)

symbolset (646467) | about 2 years ago | (#41815221)

For IT shops with fewer than 24 typical servers, what AMD might do in 2014 is not relevant. You would not be interested in trying this thing until it was field proven for three years. Even if it arrived on time (not AMD's strong suit) and it was nerdvana, that's 2017 before you're racking it. More likely the first version is quirky and your pilot starts two years later. But let's say 2017, for giggles. A typical 2 socket rack server can now be configured with 32 2.7GHz cores, 768 GB RAM, and terabytes of SSD million-IOPs storage and it comes with dual 10Gbps FCoE.

By 2017, even with outstanding business growth, your server needs will be met by at most three geographically separated VM hosts, or cloud hosts. You are quite literally done at that point, as server performance will outpace your business need faster than the server warranties expire. Most small businesses are already there. If the extreme growth in mobile performance continues the exponential path of the last few years, by 2017 you might be able to replace all those servers with a mobile device. In 2017, 100 Gbit dual-port CEE onboard NICs will be standard, and we'll be laughing about magnetic media's long reign. "Our 128GB drives were as big as a deck of cards and could do 140 I/Os per second on a good day - and we liked it."

I've been a nerd for a long time - since the late 70's. The pace of these changes is stunning and unprecedented. For most people the capacity of one server is already far beyond their need. The second and third are just for redundancy. And they don't cost much, in constant dollars, compared to the servers of yesteryear. The software licensing can still be a burden if you're into paying for permission to use software rather than (or worse in addition to) paying for software support, but that is a different issue. Frankly I never did understand that strange turn that software took for a while.

Re:use cases? (1)

Anonymous Coward | about 2 years ago | (#41814425)

Webservers, if there are lots of cores and they have the ability to turn cores on or off to save power... see the Sun T1/T2.
There are some workloads that just need a core, not a great core, and they need to scale the cores/threads economically. ARM is good for that.

Re:use cases? (1)

TheLink (130905) | about 2 years ago | (#41814759)

And see how successful the Sun T2/T1 was. By the time they launched their crap Intel was beating them on performance/watt and performance/watt/$$.

If AMD is really going ARM for the server market, it's desperate clutching at straws.

It has to be some other market or they are committing suicide.

Cheaper, Lower power, More Cores, Faster-ish (0)

Anonymous Coward | about 2 years ago | (#41814633)

They are cheaper, so expect cheaper servers. They use less power, and drastically less power when idling; this means less heat, less cooling, and better packing densities. You can put more cores in a chip, more chips on a blade, and more blades in a rack, and the cooling is still viable. So in practice your bang per buck is far better, because you can afford more cores and can cool more cores.

Faster-ish: faster because there are more cores, "ish" because some apps need a lot more processing on one core and don't scale well as core counts grow.

For small servers, as long as you don't use any of the MS stuff, you won't even notice the change. A lot of file and print servers run on ARM now, so there won't be much change there. If you're an MS shop with all your stuff tied to Microsoft, well, good luck with that; I don't use MS servers so I don't know what your options are.

Can we see (4, Interesting)

gcnaddict (841664) | about 2 years ago | (#41814327)

x86-64 and 64-bit ARM on the same chip?

I can see this being a remarkable selling point for Windows devices if both ARM and x86 code can execute on the same device without emulation.

Re:Can we see (1)

slashmydots (2189826) | about 2 years ago | (#41814395)

When they control the heck out of any apps on any tablet or device, why bother to open it up to multiple types of apps?

Re:Can we see (1)

Ostracus (1354233) | about 2 years ago | (#41815375)

Actually I see this eventually rendering "for X architecture..." irrelevant. The more important "for what OS" will be dealt with using VMs. The need for more cycles to pull all this off competitively will mean we finally find a use for all those "solutions looking for a problem" engineers have been dreaming up.

Re:Can we see (1)

drinkypoo (153816) | about 2 years ago | (#41816015)

I disagree. I think that would be foolish for customers. It would be great for AMD though, because people would be upgrading left and right (or overspecifying) to make sure they don't wind up limited by one or the other.

It would make more sense for AMD to finally invent a system which can take asymmetric processors linked via HyperTransport, so that you can plug in one amd64 or ARM processor and then however many amd64 or ARM processors you like after that.

This is interesting (4, Interesting)

banbeans (122547) | about 2 years ago | (#41814329)

x86/AMD64 is overkill for many server functions.
It will be interesting to see if chips appear optimized for different functions.
For example, hardware SQL accelerators or massive I/O for file serving.
Since many hardware RAID controllers are nothing but ARM cores anyway, it would be interesting to see multiple cores, some used as RAID controllers and some more advanced cores for the OS and file serving, with a 10GbE LAN controller all on one chip.
Add power, drives and RAM and you have a killer file server.

Re:This is interesting (1)

NidStyles (794619) | about 2 years ago | (#41814457)

I will have to agree with that. It seems the way to go in the future anyway, with Windows grinding its way onto ARM cores now. Sure, the *NIX crowd will be capable of doing this, but with the number of Windows users on the server side able to run ARM-based machines, it's telling that the market will be more flexible and apt to adopt technology of this nature.

Re:This is interesting (2)

fm6 (162816) | about 2 years ago | (#41814493)

It's overkill if you have precisely one hardware server per function. That's becoming increasingly rare. Nowadays, a "server" is most often a VM that doesn't need exclusive access to the physical CPU.

Re:This is interesting (0)

Anonymous Coward | about 2 years ago | (#41814533)

I thought of this too.
Two 10GbE controllers and two FC controllers; use what you want.

Re:This is interesting (1)

bloodhawk (813939) | about 2 years ago | (#41814607)

The problem won't be the hardware; it will be the same problem that IA64 had. The hardware is great, but without server software designed for it, they are just expensive paperweights.

Re:This is interesting (0)

Anonymous Coward | about 2 years ago | (#41814979)

Fortunately just about anything that runs on Linux is available in an ARM binary too.

Re:This is interesting (0)

Anonymous Coward | about 2 years ago | (#41814999)

Much of this stuff lives and dies by vendor support, especially in the enterprise. If AMD wants to make a viable market, they will need more than just the current Linux ARM binaries.

Re:This is interesting (0)

Anonymous Coward | about 2 years ago | (#41815393)

"Add power, drives, ram"??? Think SOFTWARE, retard. Solid software to drive that. You are talking mainframe sophistication. Linux is not even in the same galaxy.

Educate yourself about what you can do with a mainframe. They use coprocessors everywhere, that's how they outperform anything else on Earth. But it doesn't work magically because you throw the silicon together and stir it in a pot. The level of software sophistication here is unknown to 99% of you.

And we wonder why the world doesn't work. It's because it's full of gullible idiots who will believe anything presented with their crippled-la-la-land view on just a fraction of the ecosystem needed to make something really work.


ARM64 + Hypertransport = Interesting Outlook (4, Interesting)

Uzull (16705) | about 2 years ago | (#41814697)

In fact AMD has an amazing technology portfolio. With a graphics division (ATI), HyperTransport technology and AMD64, we can expect some interesting developments.

Re:ARM64 + Hypertransport = Interesting Outlook (5, Interesting)

Spy Handler (822350) | about 2 years ago | (#41815209)

I remember when AMD bought ATI many years ago... everybody (including us Slashdot posters) was saying what a bone-headed waste of money that was.

Now everybody's saying AMD is really fucked except for one bright spot which is its graphics division....

Re:ARM64 + Hypertransport = Interesting Outlook (1)

drinkypoo (153816) | about 2 years ago | (#41816005)

These are not contradictory statements.

AMD could be really fucked today because they put too much effort into graphics, and not enough effort into CPUs. Only time will tell if their graphics will save them. I suspect they won't unless they learn how to write drivers that work.

Originally designed for mobile phones??? (5, Informative)

Anonymous Coward | about 2 years ago | (#41814765)

ARM architectures are considered more energy-efficient for some workloads because they were originally designed for mobile phones and consume less power.

Fuck no. The ARM1 was released in 1987 as a coprocessor for Acorn's BBC Micro. They were designed for low power operation because the engineers were impressed with the 6502's efficiency. There weren't any significant mobile phone deployments until 18 years later in 2005.

Re:Originally designed for mobile phones??? (2, Informative)

Anonymous Coward | about 2 years ago | (#41815831)

Almost. The first ARM1 was produced in 1985. This was used in the BBC micro coprocessor to design the ARM2. The first ARM2 silicon was produced in 1986 and the Archimedes computers, which ran on the ARM2, were released in 1987. I've still got my A310.

But yeah, it had nothing to do with mobile phones.

so now we know... (1)

slew (2918) | about 2 years ago | (#41815223)

So now we know what Jim Keller is back at AMD to do...

MIPS64 (2)

FithisUX (855293) | about 2 years ago | (#41815351)

For me it would make more sense if they followed the MIPS64 path. But... it's their money.

tough times (1)

ssam (2723487) | about 2 years ago | (#41815695)

AMD is having tough times in several sectors of the CPU market (server and low-end desktop seem OK). Some companies resort to lawsuits to cling on; I'm very glad to see diversification instead. GPUs only go so far for HPC; a large number of simple cores on a die will be applicable to a wider set of tasks.
