
AMD Says It's 'Ambidextrous,' Hints It May Offer ARM Chips

timothy posted more than 2 years ago | from the raise-your-arm-shyly dept.

AMD 140

J. Dzhugashvili writes "Today at its Financial Analyst Day, AMD made statements that strongly suggest it plans to offer ARM-based chips alongside its x86 CPUs and APUs. According to coverage of the event, top executives including CEO Rory Read talked up an 'ambidextrous' approach to instruction-set architectures. One executive went even further: 'She said AMD will not be "religious" about architectures and touted AMD's "flexibility" as one of its key strategic advantages for the future.' The roadmaps the execs showed focused on x86 offerings, but it seems AMD is overtly setting the stage for a collaboration with ARM."


let's hope that... (2, Interesting)

w.hamra1987 (1193987) | more than 2 years ago | (#38907537)

This means less Intel in the market and more AMD!

Seriously though, how good is the ARM architecture today? I haven't tried it yet; does it provide comparable performance to an Intel processor with a similar price tag?

Re:let's hope that... (4, Informative)

the linux geek (799780) | more than 2 years ago | (#38907623)

It's a tough question. The Intel Atom has an edge on ARM, but it's not a big one, and while a high-performance ARM chip costs below $20, the Atom is significantly more. On the other hand, right now there are no ARM implementations that are really competitive on the PC front, and probably won't be any until ARMv8 (64-bit) chips arrive, or at least until Cortex-A15. A15 chips will probably come out in late 2012 and be a bit faster than the Atom, but a long way from Sandy Bridge and the other current Intel designs.

Re:let's hope that... (5, Interesting)

Anonymous Coward | more than 2 years ago | (#38907755)

It's also worth noting that ARM was never about performance until the fairly recent smartphone (mobile computing) surge. Even today, performance takes a back seat to power consumption, and it is here that ARM has always led the way. ARM vs. Intel: ARM provides a better price, better power consumption, and very competitive (if second-place) performance. But given the market ARM is primarily focused on, ARM easily scores the win, in spite of Intel's best efforts.

For those doing more traditional embedded development, Intel's offerings are likely front-runners. For those in the mobile computing segment, ARM is by far the clear winner.

Re:let's hope that... (2)

hitmark (640295) | more than 2 years ago | (#38908639)

It seems some are working on bringing ARM into the server rack, and you can see why when you read about the kinds of power and cooling issues there are around some of the larger server farms.

Re:let's hope that... (2)

nschubach (922175) | more than 2 years ago | (#38908993)

I never understood why file servers didn't use low power processors. Recently we've seen more and more ARM NAS devices, but I figured FTP servers and such would use these "lower end" processors simply because they only need to perform minimal computation to validate users and serve files.

Re:let's hope that... (2)

ppanon (16583) | more than 2 years ago | (#38912367)

Well there's always encryption, but they could probably integrate an on-chip co-proc for those functions.

Re:let's hope that... (1)

evilviper (135110) | more than 2 years ago | (#38912397)

I figured FTP servers and such would use these "lower end" processors simply because they only need to perform minimal computation to validate users and serve files.

ARM NAS boxes are a nightmare. Slow as all hell. That's not the kind of performance you want from your FTP server. And FTP servers have generally been replaced by HTTP servers, and a lot of dynamic pages which use up lots of CPU time. But even if that wasn't the case, it's only in Windows that there's a drive to single-task. On any Unix server, you'd just keep throwing more functions on the box if it has spare resources. No reason your FTP server can't be doing the job of SMTP server, running spamassassin, etc.

Re:let's hope that... (1)

Anonymous Coward | more than 2 years ago | (#38912431)

They tend to save in the wrong places. The 400MHz ARM9 CPU in my lspro could easily saturate a gigabit link, but it won't because of the interrupt overhead. Tweaking it to use jumbo frames boosts throughput fivefold, but the rest of my network doesn't work well with those.
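The jumbo-frame gain squares with simple per-frame interrupt arithmetic. A quick sketch, with illustrative numbers (assuming one interrupt per frame and ignoring header overhead and interrupt coalescing, so this is an upper-bound model, not a measurement of the box above):

```python
# Rough interrupt-rate arithmetic for saturating a gigabit link.
# One interrupt per frame is assumed; real NICs coalesce interrupts.

LINK_BPS = 1_000_000_000  # 1 Gbit/s

def frames_per_second(mtu_bytes: int) -> float:
    """Frames needed per second to fill the link at a given MTU."""
    return LINK_BPS / (mtu_bytes * 8)

standard = frames_per_second(1500)  # ~83,333 frames/s
jumbo = frames_per_second(9000)     # ~13,889 frames/s

print(f"standard MTU: {standard:,.0f} interrupts/s")
print(f"jumbo MTU:    {jumbo:,.0f} interrupts/s")
print(f"reduction:    {standard / jumbo:.0f}x")  # 6x fewer interrupts
```

A 6x drop in per-frame interrupt work is the right order of magnitude for the fivefold throughput boost reported, since a slow CPU spends most of its time in interrupt handling at standard MTU.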

Re:let's hope that... (2, Informative)

Anonymous Coward | more than 2 years ago | (#38909477)

It's also worth noting that ARM was never about performance until the fairly recent smartphone (mobile computing) surge. Even today, performance takes a back seat to power consumption.

It was a long time ago [wikipedia.org], but not "never": there was a time when ARM was about performance, running circles around the 80286 and 68000 CPUs.

Re:let's hope that... (4, Insightful)

hairyfeet (841228) | more than 2 years ago | (#38909743)

The problem with ARM is that there are literally millions of x86 programs that have become an integral part of people's lives; this is also why, even though Linux has been getting better each year, it fails to make any real gains. Everything from the photo software that came with the camera your Aunt Sue loves, to Corel and Photoshop, to that bane of Linux geeks, MS Office, to Quickbooks/Quicken, which is God in small business and rightly so.

The reason ARM is able to gain so much in mobile is that, frankly, geeks have never understood how normal users think. As someone who has to understand their needs or go out of business, I think I can shed some light. You see, to a geek that Droid or iPhone is a general computing device; to a normal user it doesn't even have an OS. It's just "a screen with buttons I can Google and play games on, that I'll chuck when the contract is up," and that's it. They have been conditioned that nothing is compatible, so they assume that when they chuck the phone the only thing they'll keep is the SIM card, and that's that. Creates a lot of waste, but it's great for the carrier. A tablet, to the consumer, is the same: a large, mostly disposable flatscreen TV that lets them Google. There is no real attachment there, no real desire by the majority to develop a long-term rapport with programs. This is why ARM netbooks went nowhere: to them a netbook is NOT just a general computing device, it's a "baby laptop that should do everything my big laptop does, only slower, because babies are smaller than grownups." See how that works?

I think where AMD is on the right track and has a real shot is Fusion. Three years ago I could walk into the local Walmart or Staples and I'd be lucky to find a single AMD machine, usually the cheapest machine in the house. Now I see AMD Fusion netbooks, laptops, all-in-ones, and even desktops, some going up to nearly $1000, and talking to some of the guys I know working there, they are brisk sellers. More and more, the PC is not only the office machine but also an entertainment center. With the AMD Fusion chips you not only get great battery life and lower electric bills, like my Eee E350 that gets 6 hours playing 720p and lets me HDMI into any 1080p set and watch videos, you also get to keep all the programs you know and are familiar with, for which frankly there is often no FOSS equivalent and probably never will be. There is no FOSS software that matches the features of Quickbooks or Photoshop, and certainly nothing like the little quilting app I installed the other day for a customer on her new Acer AMD C-60 netbook. While FOSS users would probably think it's stupid and not worth the time, for her it's a "must have" because it helps her work up the patterns she is gonna use on her next quilt and visualize what it will look like.

So I think the future is bright IF, and that's a BIG IF, AMD continues to play it smart. The new vector-based GPUs will lower the power footprint even further while letting the APU use the GPU cores like a super-fast floating-point unit, which will give any program using floating point a nice kick in the ass. And considering they've had to lower desktop output to keep up with all the orders for the Bobcat chips, the OEMs clearly think it's the right path too. You can now get those chips in every form factor you can name, from HTPC to iMac-style to netbooks and laptops. While I'm sure AMD never considered it a desktop chip, the OEMs found that it's more than good enough for the average user, and it's selling quite briskly, so they made a good call there.

Finally, there is one place where AMD has already fucked up, and that's the recent killing of the entire AM3 [softpedia.com] line. While consolidating to a few chips would have been smart, IMHO killing the AM3 Stars chips when Bulldozer has neither the yields nor the performance to take their place was just stupid. If you have an AM3 board, I'd suggest you pop over to TigerDirect, where they are selling Thuban 1035T chips at $105; they won't last long, and once they are gone, they are gone.

ARM is a line where the margins are extremely low and the competition extremely high; not a good market for AMD to try to get into, IMHO. Not to mention that with Intel cutting Nvidia out of the chipset business, ARM is pretty much Nvidia's last hope of being anything more than a discrete GPU shop, so they would be ultra-aggressive toward any AMD attempt to worm its way in. If they made me CEO tomorrow, I'd continue 4 chips in the AM3 line: the 1035T, because it's 95W and fits the most boards; the 1100T for the overclockers; the Zosma-based quad to take care of any bad chips in the Thuban line; and finally a Thuban-based Athlon quad to take care of those with a bad L3 cache. That would maximize yields while giving the engineers time to work out the bugs in Bulldozer, and would let them keep the entire low-to-middle range competitive. Then I would contact TSMC and see how quickly they could get up to speed on the E series as a second source alongside GloFo, since it's obvious the E series is a big hit with OEMs and the last thing they need is shortages. This would give them the E series for netbooks and basic laptops, the A series for mainstream laptops and desktops, the AM3 line to lock up the DIY and OC crowds, and finally the higher-end Bulldozers for those who want more cores above all.

But they have some really good products. It's a shame they killed the AM3 line, as frankly it was great for entry-level and gamer PCs, but the E series and A series both have a bright future in netbooks and laptops. Stay the course, AMD: get the vector-based GPUs integrated into the E and A series and keep working with the OEMs to give them plenty of the chips that sell the most. All those who say "AMD is toast" frankly haven't walked into a B&M lately, as that is ALL you see on the shelves. Intel may be faster, but the higher costs of its chips and boards have priced it right out of several markets. AMD should keep its grip on those markets while chipping away at the mainstream with Piledriver. But don't go into a totally new arch like ARM, where it's too crowded with too many players; that's not a good move, AMD. You already have the C series down to just 9W for a full APU and the E series down to less than 18W under load. Keep it up.

Re:let's hope that... (4, Insightful)

LWATCDR (28044) | more than 2 years ago | (#38910093)

"The problem with ARM is there are literally millions of x86 programs that have become an integral part of peoples lives"
Not really. There are many ARM programs that have become an integral part of people's lives; Android and iOS are two big examples, not to mention the apps that run on them.
Software is not as locked to an ISA as it once was. Microsoft and Apple have shown that with the move of Windows to ARM and the move of OS X to x86.
Applications are not written in assembly anymore; they are written in C++ or another high-level language. Take your example of Photoshop: moving Photoshop from Windows to Windows on ARM is probably a much simpler project than maintaining both Windows and OS X versions. The same is true of Office.

I do think that AMD's Fusion is interesting, but your reasoning on why people will keep using x86 is not valid. They will only keep using x86 for as long as it is the best solution. IMHO x86 is in danger of being the next PDP-11 or VAX unless it can scale down to mobile, and fast.

Re:let's hope that... (1)

Nursie (632944) | more than 2 years ago | (#38911045)

The problem with ARM is there are literally millions of x86 programs that have become an integral part of peoples lives, this is also why even though Linux has been getting better each year it fails to find any real gains.

I'll just stop you there.

The problem with ARM is that it's not x86.
Yet Linux is x86 and it's not making any gains.

I think you might be trying to say that anything that's not Windows on x86 is going to be a failure?

I wonder, do you have the same attitude to Windows 8's much touted ARM version?

Re:let's hope that... (0)

WaywardGeek (1480513) | more than 2 years ago | (#38911313)

First of all, Atom has an edge? Really? My dual-core ARM smartphone has more juice than my Atom-based netbook. What I've been wondering for about 15 years is why the heck doesn't Intel buy ARM? It's the no-brainer way to protect your #1 status: buy out all competitors that show any signs of being a threat, and do it before they're big enough to attract any antitrust scrutiny. Intel is at least 10 years overdue here. Are they being incredibly, super stupid, or is there a valid reason ARM is still independent? ARM has been the biggest threat to Intel for many years. What's up?

Re:let's hope that... (2)

the linux geek (799780) | more than 2 years ago | (#38911697)

Published benchmarks disagree with your assessment of ARM.

Re:let's hope that... (0)

Anonymous Coward | more than 2 years ago | (#38911743)

> What I've been wondering for about 15 years is why the heck doesn't Intel buy Arm?

Because ARM doesn't want to be bought.

Intel can offer all the money in its coffers, and a lot more, but the ARM executives can simply say "no thank you".

Re:let's hope that... (2)

evilviper (135110) | more than 2 years ago | (#38912447)

What I've been wondering for about 15 years is why the heck doesn't Intel buy Arm? It's the no-brainer way to protect your #1 status - buy out all competitors that show any signs of being a threat

First off, Intel was selling ARM chips up until a few years ago. They snagged the famous "StrongArm" series off of DEC and rebranded it "XScale".

Second, ARM only recently established itself as THE x86 competitor. Go look up all the RISC architectures that were competing for dominance. If you needed high-performance embedded, PowerPC was long the way to go. SPARC has been competing in the embedded space. Hitachi made a go of it with their SuperH chips (e.g. SH3).

Last but not least, MIPS is the old man of the bunch, and the one with the most fight left in it. It was always faster than ARM, even powering high-end (SGI) workstations and supercomputers in the past, while still being competitive on the low end. With China throwing its weight behind MIPS as the basis of its domestic CPU development effort (Loongson/Dragon Chip), it's both advancing nicely and being produced extremely inexpensively, which gives us things like the infamous $100 ICS Android tablet running on MIPS.

But more than that, ARM doesn't fit Intel's model. ARM just licenses the IP/cores and lets others fab them; Intel wants the whole pie. Even if they bought ARM, they couldn't stop existing licensees from continuing as before, and if they didn't keep ARM producing what customers wanted, switching over to MIPS wouldn't be that hard.

Re:let's hope that... (1)

stms (1132653) | more than 2 years ago | (#38907675)

I am not an expert but from what I hear ARM has much more speed per dollar. Though ARM can't match x86 in parallelism.

Re:let's hope that... (1)

migla (1099771) | more than 2 years ago | (#38907711)

This means less Intel in the market and more AMD!

Seriously though, how good is the ARM architecture today? I haven't tried it yet; does it provide comparable performance to an Intel processor with a similar price tag?

The appeal of ARM is not measured in performance/$, it's about flipflops/wigwam.

Re:let's hope that... (5, Informative)

Guspaz (556486) | more than 2 years ago | (#38907773)

The price tag is directly comparable, because ARM doesn't make processors, they sell licenses to designs. The only relevant metric is really performance at a given power point.

The closest competitor is Intel's Atom chips. At comparable power points, the current ARM chips seem to substantially outperform Atom chips, and the ARM chips scale far lower than Intel's do. It becomes a bit murkier at higher power levels, since until recently nobody was really making ARM chips that fast, but we'll see a lot more competition in this field in the future with the ARM Cortex-A15, which is intended to be a lot more scalable. The current design is planned to go from a 1.0GHz single-core part up to a 2.5GHz eight-core one, depending on what the integrator wants. On top of that, they've got the new Cortex-A7, which they've designed as an ultra-low-power chip with a much simpler microarchitecture that's still ISA-compatible with the A15. The intention is actually to put an A7 and an A15 in the same SoC, so that the SoC can turn the A15 cores off entirely when only low performance is needed (like playing audio or video, since that's done almost entirely on a DSP). This is similar to what nVidia did with the Tegra 3, just taken even farther.

Re:let's hope that... (3, Informative)

Andy Dodd (701) | more than 2 years ago | (#38908085)

Much of this is a change of focus... instead of beefy desktop CPUs running bloated OSes, the focus is shifting to portable devices.

Basically, this is "We're hanging in there in the desktop/laptop market, but rather than hang on to our piece of a shrinking pie, we want to get in on the pie that's getting bigger".

ARM is superior in low-power applications. Its highest-end CPUs maybe match the Intel Atom, but often have far more peripherals (such as a fairly decent GPU and 1080p multi-format video decoding, all on a tiny chip about the size of your thumbnail. Seriously: I can almost completely cover an OMAP4 with my thumb.)

PowerPC (1, Funny)

MightyYar (622222) | more than 2 years ago | (#38907545)

Apparently they are bringing back the PowerPC for the new Amiga.

Re:PowerPC (3, Informative)

the linux geek (799780) | more than 2 years ago | (#38907649)

The most powerful general-purpose processor in the world (Power7) is a huge seller for IBM, and is a PowerPC implementation. PPC is also big in telecom applications, and Freescale does a number of fairly high-performance designs for that market.

The PPC used in the AmigaOne X1000 is a PA Semi PA6T - not very fast, designed as a low-power chip, and long-dead. Apple bought the company a few years ago, and I'm pretty sure new PA6T's are not being made. I suppose that speaks volumes about how many X1000's they reasonably expect to sell...

Re:PowerPC (3, Insightful)

PatDev (1344467) | more than 2 years ago | (#38907725)

Not technically true. Power7 belongs to the same family of architectures as PowerPC, but it's not really appropriate to say that Power7 is a PowerPC implementation. You might say that PowerPC is an uncle of Power7.

Re:PowerPC (3, Informative)

the linux geek (799780) | more than 2 years ago | (#38908139)

Power7 is fully compatible with the PPC 2.06 spec. How is it not a PPC ISA implementation?

Re:PowerPC (1)

Anonymous Coward | more than 2 years ago | (#38908795)

This is true. The PowerPC ISA is a subset of the POWER ISA.

RAD6000 / RSC / POWER1 (3, Insightful)

OrangeTide (124937) | more than 2 years ago | (#38911711)

RSC(POWER1) is the most popular CPU architecture on Mars, and possibly in the solar system outside of Earth.

Well. (0)

Anonymous Coward | more than 2 years ago | (#38907553)

I don't care what they say. I'll keep buying and recommending AMD until they screw me over like Intel did. Once.

Bonus: I don't have to help pay for stupid commercials.

option d, all of the above (-1, Flamebait)

Anonymous Coward | more than 2 years ago | (#38907565)

OMG! x86 is dying
OMG! AMD is dying
OMG! AMD is winning
OMG! ARM is winning

Re:option d, all of the above (0)

Anonymous Coward | more than 2 years ago | (#38908389)

This story having been submitted by Stalin himself, x86 may be dying in a Gulag, and AMD and ARM may be Soviet spies. My breath freezes from the implications.

Could we have a hybrid? (3, Interesting)

mehrotra.akash (1539473) | more than 2 years ago | (#38907593)

A PC (or laptop) running Windows 8 (or any OS that supports both x86 and ARM), powered by a processor with full x86-64 support plus a low-power ARM core with a GPU capable of basic stuff like browsing and media playback.
So, when you switch to a high-requirement program (gaming, encoding, VS, etc.), the x86 cores turn on like a coprocessor and the work is handed to them.
The ARM handles the UI and everything else.

Re:Could we have a hybrid? (3, Informative)

scheme (19778) | more than 2 years ago | (#38907693)

That's tough enough to do when all the processors use the same instruction set, but if the system has processors with different instruction sets, it makes it much harder to have the OS/system switch from a lower powered mode where it's running on the ARM processors to a high performance mode where it's running on the x86 processors. It's not impossible, it's just very complicated and I don't see companies lining up to do the work to implement something like that.

Re:Could we have a hybrid? (1)

mehrotra.akash (1539473) | more than 2 years ago | (#38907897)

We manage to do it for graphics in laptops (like Nvidia Optimus, which shifts to the dedicated GPU when required and the integrated one otherwise).

Re:Could we have a hybrid? (2)

jeffmeden (135043) | more than 2 years ago | (#38908177)

We manage to do it for graphics in laptops (like Nvidia Optimus, which shifts to the dedicated GPU when required and the integrated one otherwise).

That is for just one app, with one bit of specialized code that runs better on the GPU. And it's to do just one thing (arithmetic that the GPU is good at). Finding what operations work most efficiently on ARM vs x86 would be a whole project in itself.

You would basically need to convince Microsoft (or whoever is the prevalent OS vendor in this fantasy) along with ALL of their partners, to switch to ARM as the primary architecture, and THEN convince them to include additional code types if their apps want to run faster than a crawl (i.e. move off the ARM chip and onto the x86 chip). It's a complete chicken and egg problem, you would need a very well built development studio to manage all the differences in a way that didn't completely cripple developers with the work needed to make their code run in two places at once, and you would need the CPU hybrid vendor to get out in front with a hardware platform that was appealing to the masses.

We have worked hard enough to get to where we are with x86 (something like 30 years now) so while I like to think long term, I believe the practicality of this idea is probably on the low side. But since we are in fantasy land, you might as well propose that a machine be built with the ability to "dock" a smartphone through some sort of hyperbus, run all apps off of the smartphone as the primary CPU and then when an app needs more resources it can add on x86 or even multi-core or ramped-clock ARM CPU resources in the docking station (like the Motorola LapDock idea, except extended greatly.) But good luck getting a standard that all handset makers can get behind, and THEN a standard that all (or a majority of) app makers can get behind, and THEN finding people to buy the whole rig.

Re:Could we have a hybrid? (1)

fast turtle (1118037) | more than 2 years ago | (#38911205)

I'll put my $1M in on this one.

What you need is the ARM core providing the hypervisor/UEFI/BIOS layer, with the x86 cores running as VMs. You then get the best of both worlds and can easily ensure that the best chip handles the appropriate load. Audio and video get handled by the ARM core and its DSPs, while the x86 cores handle all of the x86-based software.

Re:Could we have a hybrid? (1)

Belial6 (794905) | more than 2 years ago | (#38912175)

Exactly. It really is a trivial problem to solve. I am kind of surprised that we have not seen it yet. That would be one place that AMD could really make a splash. Imagine a laptop that had an APU with x86, ARM and graphics on a single chip.

Re:Could we have a hybrid? (1)

gQuigs (913879) | more than 2 years ago | (#38907761)

Sorta like this [cnet.com] ?

It's not currently available though, and I'm not sure how long it was really available for...

Re:Could we have a hybrid? (1)

mehrotra.akash (1539473) | more than 2 years ago | (#38907815)

Not really, since it requires a reboot to go into full-power mode and doesn't do it transparently.

Re:Could we have a hybrid? (1)

calibre-not-output (1736770) | more than 2 years ago | (#38907777)

Performance-wise, what advantage does this offer over just having a faster x86-64 CPU? I don't see it.

Re:Could we have a hybrid? (1)

mehrotra.akash (1539473) | more than 2 years ago | (#38907843)

The advantage would be in terms of battery life

Re:Could we have a hybrid? (2)

Microlith (54737) | more than 2 years ago | (#38907917)

But which of Microsoft's divergent, self-serving rules regarding Secure Boot apply to a hybrid x86_64/ARM system?

Re:Could we have a hybrid? (-1)

Anonymous Coward | more than 2 years ago | (#38908145)

Are you LOOKING for flamebait,troll and offtopic mods?

Re:Could we have a hybrid? (1)

aepurniet (995777) | more than 2 years ago | (#38908257)

It's a fair question; I don't think that was ever addressed. If plopping in an Atom chip can get rid of the requirement for Secure Boot, it would be huge. Not to mention this whole top-level thread is completely off-topic, and the type of idle speculation that is usually reserved for cable news.

Re:Could we have a hybrid? (0)

Microlith (54737) | more than 2 years ago | (#38908361)

Ooh, someone's hurt because I brought up what would be an entirely valid question in the environment the GP quoted, and it put MS in a bad light!

StrongARM (0)

Anonymous Coward | more than 2 years ago | (#38907613)

They probably should have thought of this while they had the StrongARM team in house (~2001) ... before they forced them all to quit or move to x86.

Re:StrongARM (3, Informative)

lostmongoose (1094523) | more than 2 years ago | (#38908289)

You're confusing AMD and Intel. StrongARM was bought by Intel, not AMD.

sub-45nm ARM? (2)

lobiusmoop (305328) | more than 2 years ago | (#38907713)

Wondering if a big state-of-the-art chip fab like AMD getting into ARM processors might make sub-45nm ARM processors a possibility? AFAIK, only x86 chips are made at those nodes just now. It could lead to fantastic performance-per-watt chips coming off the line.

Re:sub-45nm ARM? (4, Informative)

Btarlinian (922732) | more than 2 years ago | (#38907767)

AMD lost its fabs a while ago. (Their fabs are part of GlobalFoundries now, and they're a bit ahead of TSMC, but not anywhere close to Intel in terms of process capabilities.)

Re:sub-45nm ARM? (2)

Btarlinian (922732) | more than 2 years ago | (#38907813)

And being ahead of TSMC is arguable in any case.

Re:sub-45nm ARM? (2)

the linux geek (799780) | more than 2 years ago | (#38908149)

28nm TSMC ARM is likely this year.

Ambidextrous? (5, Funny)

mark-t (151149) | more than 2 years ago | (#38907759)

Does that mean it's using two ARMs at once?

(duck)

Re:Ambidextrous? (1)

cmburns69 (169686) | more than 2 years ago | (#38908391)

Wouldn't that be ARMbidextrous?

Re:Ambidextrous? (1)

Anonymous Coward | more than 2 years ago | (#38908861)

As a lefthander, I prefer the term "ambisinister"

Re:Ambidextrous? (0)

Anonymous Coward | more than 2 years ago | (#38909089)

I'd give my right ARM to be ambidextrous.

Re:Ambidextrous? (1)

eulernet (1132389) | more than 2 years ago | (#38909587)

Yes, but they cost an arm and a leg.

Re:Ambidextrous? (1)

shadowofwind (1209890) | more than 2 years ago | (#38911947)

It means they mix big-endian and little-endian in the same architecture.
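For anyone fuzzy on the endianness joke: ARM cores are configurable for either byte order, though in practice they almost always run little-endian like x86. A quick illustration of what the two orders actually look like in memory:

```python
import struct

# The same 32-bit value laid out in both byte orders.
value = 0x12345678

little = struct.pack("<I", value)  # how ARM and x86 store it in practice
big = struct.pack(">I", value)     # network byte order, classic PowerPC

print(little.hex())  # 78563412 (least-significant byte first)
print(big.hex())     # 12345678 (most-significant byte first)

# Reading data written in one convention as if it were the other
# yields a byte-swapped value -- the classic porting bug:
(swapped,) = struct.unpack(">I", little)
print(hex(swapped))  # 0x78563412
```

This is also why high-level code that never touches raw byte layouts ports between ISAs so easily, while file-format and network code needs explicit byte-order handling.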

As a complete NOOB on the subject... (2)

wisebabo (638845) | more than 2 years ago | (#38907771)

... would it be possible (or I guess more importantly) worthwhile to put x86 cores WITH ARM cores on a single chip?

In addition to offering dual boot capabilities, it might be useful to run "Virtual" (or sort of virtual) machines at full speed. I've often thought it would be nice to run some of the thousands(!) of cellphone Apps that I have on my laptop. Although it might be tricky to implement multi-touch correctly, still I'd think there might be some utility.

Or maybe all CPUs today are very generalized RISCy architectures with everything taken care of in microcode (or maybe nowadays it's nanocode)? That would make it (comparatively) really easy to do, right?

Re:As a complete NOOB on the subject... (0)

Anonymous Coward | more than 2 years ago | (#38908557)

Not cores on one die, as that would require major redesigns of the architecture for something pointless (no speed boost or feature that benefits everyone), but they could put two CPUs into one silicon package. There's no point, though, as the benefits are niche, and dual booting can be had by putting the ARM processor on the motherboard in its own chip or in the chipset.

Re:As a complete NOOB on the subject... (3, Informative)

Koen Lefever (2543028) | more than 2 years ago | (#38909631)

Or maybe all CPUs today are very generalized RISCy architectures with everything taken care of in microcode (or maybe nowadays it's nanocode)? That would make it (comparatively) really easy to do, right?

Sounds like you are reinventing the Crusoe processor [wikipedia.org] .

Re:As a complete NOOB on the subject... (1)

CharlyFoxtrot (1607527) | more than 2 years ago | (#38910211)

It'd be far easier to do Apple-style "universal binaries" [wikipedia.org] (bundles that contain executables for more than one architecture) than to create this kind of hybrid hardware. Apple could already create iOS/OS X universal binaries in Xcode if they wanted to, since it already compiles for both x86 and ARM, for the "emulator" and device respectively. The biggest hurdle is that the main control interface (the touchscreen) is missing on the desktop.
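The universal-binary idea is just a container holding one executable per architecture plus a table saying where each slice lives; the loader picks the slice matching the running CPU. A simplified toy version of that container (the `build_fat`/`extract` helpers and the layout are invented for illustration; this is not Apple's actual fat Mach-O format):

```python
import struct

# Toy "fat binary": a 4-byte slice count, then fixed-size table
# entries of (16-byte arch name, offset, size), then the payloads.

ENTRY = ">16sII"  # big-endian, like Apple's real fat header fields
ENTRY_SIZE = struct.calcsize(ENTRY)  # 24 bytes

def build_fat(slices: dict) -> bytes:
    """Pack {arch_name: machine_code} into one container blob."""
    header = struct.pack(">I", len(slices))
    table, payload = b"", b""
    offset = 4 + len(slices) * ENTRY_SIZE  # payloads start after table
    for name, code in slices.items():
        table += struct.pack(ENTRY, name.encode(), offset, len(code))
        payload += code
        offset += len(code)
    return header + table + payload

def extract(fat: bytes, arch: str) -> bytes:
    """Return the slice for one architecture, as a loader would."""
    (count,) = struct.unpack_from(">I", fat, 0)
    for i in range(count):
        name, off, size = struct.unpack_from(ENTRY, fat, 4 + i * ENTRY_SIZE)
        if name.rstrip(b"\0").decode() == arch:
            return fat[off:off + size]
    raise KeyError(arch)

fat = build_fat({"x86_64": b"\x90\x90", "arm": b"\x00\xf0\x20\xe3"})
print(extract(fat, "arm").hex())  # 00f020e3
```

The cost is the obvious one the GP alludes to: every app ships N copies of its code, which is cheap on disk but still requires the developer to compile and test for every architecture in the bundle.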

Re:As a complete NOOB on the subject... (1)

Belial6 (794905) | more than 2 years ago | (#38912183)

The missing touchscreen is a temporary setback. As is the missing mouse pointer on the portable devices.

Wish Android had universal binaries (0)

Anonymous Coward | more than 2 years ago | (#38912261)

Google could learn a thing from Apple's universal binaries. I currently have an Android tablet running a MIPS processor. There is a severe shortage of apps available for it.

Re:Wish Android had universal binaries (1)

evilviper (135110) | more than 2 years ago | (#38912353)

I currently have an Android tablet running a MIPS processor. There is a severe shortage of apps available for it.

What the hell are you talking about? Android is a slightly modified Java-based platform. 95% of the Android apps out there are completely CPU-agnostic, because that's just the default state of affairs.

The exceptions are the CPU-intensive multimedia apps... Adobe Flash, video players, and some games. Actually Firefox is on the list as well for no good reason.

You can't get Flash for any other mobile platform, and it hasn't even been ported to ICS, so it's almost a niche product...

Android comes with built-in audio and video players, which support a decent number of formats. They're not polished, but it's not like you're left without a music/video player. They're part of the base system and will be ported to your MIPS chip with the rest of it.

So it looks like you're JUST talking about (high-end) games, really...

Didn't they try this already before? (0)

Anonymous Coward | more than 2 years ago | (#38907811)

AMD used to make a SoC based on MIPS arch. It must not have worked out for them because they didn't really do a lot with it and finally got rid of it.

OT: Does AMD still have the same CEO and officers it did when the stock melted down a few years back? If so, the best thing AMD could do would be to fire the lot of them. There have to be competent officers at the helm for a company to succeed.

Re:Didn't they try this already before? (0)

Anonymous Coward | more than 2 years ago | (#38910803)

They bought Alchemy Semi, the company that did MIPS SoCs. Alchemy was founded by the vast majority of the DEC StrongARM team. AMD didn't know what to do with Alchemy (even though they were profitable!) and decided to be x86-only, and sold them off. Shortly after, Intel decided to be x86-only "even harder" and got rid of the XScale/StrongARM stuff it grabbed from DEC.

Funny, I bet Intel and AMD would be much further ahead if they held onto their non-x86 people. Let's not even mention AMD's 29K or Intel i960...

A new high end CPU (0)

Anonymous Coward | more than 2 years ago | (#38907823)

I'd love to see an AMD high-end CPU using the ARM instruction set, full 64 bits, etc. This could be interesting for servers.
But I'm dreaming.

Where does AMD come into the picture? (0)

Anonymous Coward | more than 2 years ago | (#38907987)

AMD and ARM are both design and marketing companies that don't actually make chips (external foundries do the manufacturing). ARM already has the ARM cpu design pretty well figured out, so what would AMD bring to the table in an AMD-ARM alliance? Just branding, or what?

Re:Where does AMD come into the picture? (4, Informative)

petermgreen (876956) | more than 2 years ago | (#38908209)

AIUI ARM do HDL design of processor cores, then they pass that HDL on to other companies who make complete chip designs based on it. Those companies in turn pass the designs onto fabs (which may be in-house or external) for manufacture. IIRC some vendors also do their own HDL work and only license the basic architectural design from ARM.

Re:Where does AMD come into the picture? (2)

Nikker (749551) | more than 2 years ago | (#38908769)

Licensing fees.

AMD pays licensing money to Intel for the x86 instruction set. At one point this was mandatory, since all the programs that comparatively inexperienced computer users could run were written for the computers they could find at Radio Shack et al. Now that Microsoft and Google are popular and platform-agnostic (Linux/Android vs. Win8), AMD has a window of opportunity to start from scratch and just offer a kernel patch to have your apps run on its chips. This new direction is going to be interesting to watch unfold. Intel was and is the gatekeeper of the consumer PC space, but that position is no longer so stable. Android can be and already has been ported to x86, MIPS, and a whole slew of ARM variants. To top it off, millions of people use and enjoy Android, distributors like that they can make products cheaply and must stay with the platform to protect their purchases/investments, and lastly carriers love it because they can lock you in for three years at premium rates.

It looks like this time Intel might have to tighten its belt for a change.

Re:Where does AMD come into the picture? (1)

Locutus (9039) | more than 2 years ago | (#38909311)

ARM does do most of the design work, but I think there's still lots of integration and other optimization done by the licensees. And not all licenses are the same, so some licensees are allowed to tweak and others are not.

So what does AMD bring to the table in the ARM game? They have a pretty nice GPU, and they have some familiarity with optimization, not to mention the ability to merge x86 with ARM if they want to: say, two x86 cores and two ARM cores, so you could have blazing performance at the cost of power, or boot the ARM cores for power-sipping usage, all in one package. Just throwing it out there with off-the-top-of-the-head comments. Hopefully some chip design geeks chime in with more complete examples of where this could work for AMD and possibly the general customer base. And hey, maybe that'll mean ARM devices without bootloader lockouts via MS requirements.

LoB

Re:Where does AMD come into the picture? (1)

symbolset (646467) | more than 2 years ago | (#38909697)

The leap from "not x86" to "ARM" involves a large unfounded assumption.

ambidextrous (0)

trb (8509) | more than 2 years ago | (#38908039)

How will I decide whether I want a right ARM or a left ARM? If they are ambidextrous, will it matter?

Re:ambidextrous (0)

Anonymous Coward | more than 2 years ago | (#38908185)

The real question is: Will the left hand know what the right hand is doing?

Re:ambidextrous (2)

K. S. Kyosuke (729550) | more than 2 years ago | (#38908503)

The real question is: Will the left hand know what the right hand is doing?

Modern architectures usually don't do that. There is a solution to this problem, but it's kind of MESI.

Re:ambidextrous (0)

Anonymous Coward | more than 2 years ago | (#38908303)

It depends on the chirality of your code.

choice of words (3, Funny)

moco (222985) | more than 2 years ago | (#38908233)

Since they have no products using that other architecture I think the word they were looking for is "Bicurious".

Why it's called "trinity" (1)

gr8_phk (621180) | more than 2 years ago | (#38908537)

I'm shocked that the press hasn't gone wild with speculation on the name "trinity" which implies 3 of something. My guesses are as follows:

1) They integrate CPU, GPU, and "system" on a chip - not really worthy of the name
2) They integrate 3 distinct CPU architectures in APUs. Bulldozer, Bobcat, Power. Or x86, Power, ARM.
3) They are aiming for PC, Apple, and Console markets with the stuff in #2 (consoles require Power arch for backward compatibility).

My bet is that Wii U will have an IBM CPU and AMD GPU on the same die manufactured at GF. The only thing not official there is the integration.

It's also insane for Apple not to go with Trinity, and there have been rumors. AMD has canceled products and delayed (public) availability of Trinity even though it claimed Trinity was ramping and on track (last fall) for early 2012. This suggests they're stockpiling for a large customer.

That's just my speculation based on Googling, of course. So they either have something big and have kept it very quiet, or they just suck.

Re:Why it's called "trinity" (1)

Anonymous Coward | more than 2 years ago | (#38908717)

Sorry to burst your bubble, but Trinity is the name of a river in Texas, which is the theme for the APU naming (e.g., Brazos, Sabine, Lynx).

Re:Why it's called "trinity" (1)

the linux geek (799780) | more than 2 years ago | (#38909083)

There is exactly zero chance of Trinity being anything other than what it's been announced and demo'd as from day one: L3-less Bulldozer (well, technically Piledriver) with a GPU on-chip. In other words, an incremental successor to Llano.

Re:Why it's called "trinity" (1)

gr8_phk (621180) | more than 2 years ago | (#38910229)

Agreed. For the desktop PC part. I just figured that was part of a larger picture. The other poster saying it's named after a river in Texas really deflated my hope too.

Competitor for Tegra? (3, Informative)

Tapewolf (1639955) | more than 2 years ago | (#38908559)

The Tegra is basically an ARM SoC with an nVidia video system. Maybe they're looking at doing an ARM SoC with the ATI video core...

Re:Competitor for Tegra? (0)

Anonymous Coward | more than 2 years ago | (#38910275)

Maybe they're looking at doing an ARM SoC with the ATI video core...

Whoever sold their mobile business [digitaltrends.com] to Qualcomm has probably been promoted as a visionary...

Re:Competitor for Tegra? (0)

Anonymous Coward | more than 2 years ago | (#38910855)

Don't forget, another part went to the all evil Broadcom, from which they now have some advanced 3D graphics capabilities...

What I really want to know is if they can ... (1)

Skapare (16644) | more than 2 years ago | (#38908893)

... upgrade the ARM architecture to 64 bit (hopefully, they have some experience in that), put 64 cores of it on one die, and crank the speed up to 4 GHz.

Re:What I really want to know is if they can ... (1)

nschubach (922175) | more than 2 years ago | (#38909413)

64 cores at 6.4GHz running 64-bit code... we'll call it the AMD 262144 processor

Re:What I really want to know is if they can ... (1)

LWATCDR (28044) | more than 2 years ago | (#38910123)

For many tasks, 64-bit is overrated. Unless you are doing something that needs a HUGE memory space and/or 64-bit ints, 64-bit code takes up more room and is slower than 32-bit code... if the ISA isn't brain-dead and starved for GP registers in 32-bit mode.

Re:What I really want to know is if they can ... (1)

ThePeices (635180) | more than 2 years ago | (#38911275)

"For many tasks, 64-bit is overrated."
And as time goes on, that 'many' turns into 'some' and eventually into 'once in a blue moon'. That's the nature of progress.

The thing is, many of us actually do need 'HUGE' memory space and/or 64-bit ints.
It's 2012, and I just need more than 4GB of RAM in my computer.
  - My flight combat simulator gobbles RAM like a crack whore gobbles crack. (DCS A-10) 4GB is simply not enough for this one application.
  - Photoshop CS5 / Lightroom just run better natively in 64-bit, most especially when working with multiple 18MP RAW-format photos.

The performance difference between 32-bit and 64-bit is very minor, and the extra storage/memory cost is again a minor thing. If your OS is 64-bit, there is no reason not to run the 64-bit version of an app instead of the 32-bit one if you have the choice between the two.

Did you believe Bill Gates when he said that 640k of RAM is plenty enough for anybody?

Idea... (2)

tgetzoya (827201) | more than 2 years ago | (#38909059)

AMD builds a hybrid chip. It uses the ARM core for everyday tasks and then the x86 core when power is necessary, kind of like the companion core in NVIDIA's five-core Tegra 3. Add in an AMD graphics core and that would bring some power.
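The scheduling idea behind such a hybrid can be sketched as a toy model: route work to the cheapest core that can keep up, and wake the fast core only past a threshold. The core names, performance ratios, and wattages below are made up for illustration, not anything AMD has announced.

```python
# Hypothetical core catalog: throughput relative to the little core,
# and an illustrative power draw. These numbers are invented.
CORES = {
    "arm_little": {"perf": 1.0, "watts": 0.5},
    "x86_big":    {"perf": 4.0, "watts": 15.0},
}

def pick_core(load):
    """Pick the lowest-power core whose throughput covers `load`
    (measured in units of the little core's throughput)."""
    for name, core in sorted(CORES.items(), key=lambda kv: kv[1]["watts"]):
        if core["perf"] >= load:
            return name
    return "x86_big"  # saturated: fall back to the fastest core

print(pick_core(0.3))  # light background task -> arm_little
print(pick_core(3.5))  # heavy task -> x86_big
```

Real implementations (e.g. ARM's big.LITTLE, announced around this time) also have to migrate state between cores and share a coherent cache view, which is the hard part this sketch ignores.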

Works for me. (1)

Daniel Phillips (238627) | more than 2 years ago | (#38909153)

I will so buy a bagfull of these chips if AMD follows through on this smart thing. 28 nm multicore ARMs. Booya! Also looking forward to the integrated low power GPU.

a*r*mbidextrous (1)

adavies42 (746183) | more than 2 years ago | (#38909453)

shirley they meant armbidextrous....

at last! (0)

lkcl (517947) | more than 2 years ago | (#38909741)

the problem with the x86 architecture is that it was designed to be compact and space-saving: the escape-sequences that go up and up and up from 8086 to 80186 to 286 to 386 to 486 to 586 to 64-bit are incredibly efficiently encoded. *BUT*, there comes a massive performance penalty which is that the clock rate now has to be twice as fast as a RISC processor in order to achieve the same results. RISC processors, with the exception possibly of the Xtensa (which kinda cheats by allowing VLIW as well), tend to substitute larger memory requirements for less compact instructions; ARM cheats by actually compressing the instructions! (thumb).

so it's all quite horrendously complex, but the kicker is that power goes up in a square law with processor speed. double the speed you need FOUR times the power. so, if an x86 processor has to run twice as fast to achieve the same results as a lowly RISC core which is eating twice the amount of RAM as an x86, the x86 is using FOUR times the power in order to keep up with the RISC processor.

it's never that simple, though: ARM and other RISC cores also trade higher latency for lower power. _and_ there is the issue of trying to run a 64-bit or a 128-bit memory bus, off to a separate "Northbridge" chip: these RISC all-in-one SoCs with embedded GPUs and integrated I/O just don't have that problem, and they're not trying to drive massive amounts of external lines just to get access to memory [through a silly "Northbridge" chip].

but that's not the end of the story. with the advent of DDR3 and the introduction of 28nm, RISC cores are going to eat x86 for breakfast, lunch and dinner, as RISC CPU designs can easily run at 2 to 2.5ghz in 28nm... and still only use 1 to 1.5 watts! and with DDR3 RAM being so fast, the "problem" of latency for RISC CPUs is *also* going away.

if AMD tells you they can do a 2 watt x86 CPU (like in TFA), they aren't exactly lying, but they're sailing pretty close to the wind.

bottom line is: if they're saying that they're architecture-agnostic, that basically saves their bacon. let's hope that they do a decent job, eh? it would be absolutely cool for them to put an ATI-based "Open" GPU with full and complete GPL'd source code along-side an ARM or any other CPU core.
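The power argument above leans on the standard first-order model of CMOS dynamic power, P ≈ C·V²·f. Strictly, at a fixed voltage power grows only linearly with clock speed; the "four times the power" figure comes from the fact that higher clocks usually demand higher voltage, so the effective growth is superlinear (roughly cubic if V tracks f). The numbers below are illustrative, not measurements of any real chip:

```python
# First-order CMOS dynamic power model: P ~ C * V**2 * f.
def dynamic_power(c, v, f):
    return c * v**2 * f

base = dynamic_power(1.0, 1.0, 1.0)

# At fixed voltage, doubling the clock only doubles dynamic power:
print(dynamic_power(1.0, 1.0, 2.0) / base)  # 2.0

# But if voltage must scale with frequency, doubling the clock costs
# 2**3 = 8x the power: worse than the parent's square-law estimate.
print(dynamic_power(1.0, 2.0, 2.0) / base)  # 8.0
```

Either way the direction of the argument holds: needing a higher clock to match a competitor's throughput is disproportionately expensive in watts.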

Re:at last! (3, Insightful)

Chris Burke (6130) | more than 2 years ago | (#38909977)

*BUT*, there comes a massive performance penalty which is that the clock rate now has to be twice as fast as a RISC processor in order to achieve the same results.

That's just complete bollocks.

A modern x86 processor (meaning... since the Pentium Pro in the mid 90s) is, internally, a RISC-like core with full OoO execution and so on and so forth.

Variable instruction decode is a pain in the ass and does add latency in the front end. This isn't great, but it is nowhere near a 50% reduction in IPC. Try more like 1-2% (measured via correlated cycle-accurate performance simulator), depending on how clever you get and in any case easily made up for by a clever widget or two.

Basically predictions of RISC eating x86 for breakfast were made over 15 years ago and never came to pass. Mostly by x86 morphing so that the difference was essentially irrelevant.

Your talk about northbridges sounds woefully out of date, too. This has nothing to do with ISA, and both major x86 vendors now have integrated northbridges.

You're closer to reality when talking about power. Regardless of the small IPC penalty, those decoders burn up a lot of power. There are ways to get around this, too, and for moderate perf moderate low power x86 does just fine. At the very low end of power, though, going to something like ARM makes sense.

ARM suffers from the same problem all riscs do (1)

luminousone11 (2472748) | more than 2 years ago | (#38911183)

ARM, like any RISC chip, is great for a particular level of complexity, for a particular application or situation.

Basically predictions of RISC eating x86 for breakfast were made over 15 years ago and never came to pass. Mostly by x86 morphing so that the difference was essentially irrelevant.

Exactly. x86 might be a pain to decode, but the fact that you can replace the back-end architecture that actually does all the work with one that fits the particular level of complication desired means that x86, unlike ARM (or any RISC, for that matter), can scale from a simple 8086 with 29,000 transistors to a Westmere-EX with 2,600,000,000, and go from 16 bits to 64 bits, or 256 bits with SIMD. When they added large caches, they threw in instructions for cache control/hinting. What is really needed is a fixed-instruction-length CISC arch with an opcode address space large enough for future expansion, a means to deprecate old instructions, x86 addressing kept (the 64-bit model, that is), and an ISA that is designed to be easily decoded into whatever the chip is really running.

Re:ARM suffers from the same problem all riscs do (0)

Anonymous Coward | more than 2 years ago | (#38911299)

Fixed-length CISC instructions would be a terrible waste of memory. Suppose you had a fixed instruction width. If you want to do everything x86 can do, each instruction would have to be at least 16 bytes long (to hold two 64-bit immediate values). But an instruction that only references two registers requires a lot less than that, since it only takes a few bits to represent a register.

RISC ISAs are able to have fixed-size instructions because all of the work is done in registers. There are a few bits for the opcode and a few bits to specify the registers. And they allow only small immediate values (for example, the PowerPC ISA, even though it is a 64-bit architecture, only allows 16-bit immediate values).
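The 16-bit-immediate limitation means a fixed-width RISC builds larger constants from two instructions, one carrying the upper half and one the lower (the PowerPC `lis`/`ori` idiom, or ARM's later `movw`/`movt`). A simplified Python model of the encoding split, ignoring sign-extension subtleties of the real instructions:

```python
def split_immediate(value):
    """Split a 32-bit constant into the two 16-bit halves that fit in
    a fixed-width RISC encoding (e.g. PowerPC lis + ori)."""
    assert 0 <= value < 2**32
    high = value >> 16          # carried by the "load upper immediate" op
    low = value & 0xFFFF        # OR'd in by the second instruction
    return high, low

def rebuild(high, low):
    # What the two-instruction sequence computes at run time.
    return (high << 16) | low

hi, lo = split_immediate(0xDEADBEEF)
print(hex(hi), hex(lo))       # 0xdead 0xbeef
print(hex(rebuild(hi, lo)))   # 0xdeadbeef
```

So the "waste" cuts both ways: fixed-width CISC pads small instructions, while fixed-width RISC spends extra instructions on large constants.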

Re:ARM suffers from the same problem all riscs do (1)

luminousone11 (2472748) | more than 2 years ago | (#38912041)

http://www.sandpile.org/x86/opc_enc.htm [sandpile.org] It's a mess: with 64-bit we have a prefix byte, 1 to 4 bytes for the op, a mod byte, and on top of that with AVX we have the VEX prefix, potentially an imm byte, etc. I see no reason why immediates can't occupy the space the next instruction would have and be aligned to the size of the fixed instruction length. That one exception to the fixed-size rule shouldn't cause that much trouble for decoding of instructions.

Re:ARM suffers from the same problem all riscs do (0)

Anonymous Coward | more than 2 years ago | (#38912285)

You can replace a back-end arch that actually does all the work with one that fits the particular level of complication desired regardless of what the front-end arch looks like. In fact, if the front-end arch is neat and regular like ARM rather than crufty and horrid like x86, it's easier.

x86 and GPU, not x86 and ARM (2)

Pulzar (81031) | more than 2 years ago | (#38910795)

AMD is clearly talking about using both x86 and the GPU for compute work rather than focusing on x86 only... the ARM thing is just wild speculation, or wishful thinking.

comrade! (0)

Anonymous Coward | more than 2 years ago | (#38911123)

Was this posted by Stalin?

Tards (1)

Life2Death (801594) | more than 2 years ago | (#38911147)

I think if anything, they were commenting on the OpenCL and x86-64 architectures in their pipeline, i.e. x86+GPU processing, but I guess that goes over everyone's heads. Bobcat was geared towards phones and low-compute devices, anything but gaming and server-level applications.
