
AMD Reveals Plans to Move Beyond the Core Race

CowboyNeal posted more than 7 years ago | from the eye-to-the-future dept.


J. Dzhugashvili writes "The Tech Report has caught wind of AMD's plans for processors over the coming years. Intel may be counting on cramming 'tens to hundreds' of cores in future CPUs, but AMD thinks the core race is just a repeat of the megahertz race that took place a few years ago. Instead, AMD is counting on Accelerated Processing Units, chips that mix and match general-purpose CPU cores with dedicated application processors for graphics and other tasks. In the meantime, AMD is cooking up some new desktop and mobile processors that it hopes will give Intel a run for its money."

227 comments

yeah but (-1)

Anonymous Coward | more than 7 years ago | (#17248140)

imagine a beowulf cluster of these little bad boys

Same old. (5, Insightful)

sam991 (995040) | more than 7 years ago | (#17248184)

Intel pushes the 'more power! faster!' philosophy while AMD just redesigns the architecture and it takes Intel a few years to catch up. Not much has changed since 2000.

Re:Same old. (5, Interesting)

TrisexualPuppy (976893) | more than 7 years ago | (#17248236)

Intel pushes the 'more power! faster!' philosophy while AMD just redesigns the architecture and it takes Intel a few years to catch up. Not much has changed since 2000.
Correct. Intel still has the lion's share of the market, and they want to keep it that way. It's interesting how they "cheat" and lock two dies together and call it dual-core or quad-core just to come out with the technology "first" to keep the investors happy.

AMD is obviously smaller, so it has fewer resources...but with those Alpha scientists, they're going to keep pushing hard. With business directives like this, it's just a matter of time before AMD takes over. They've had some really cool ideas...and with a few more over the next few years, the innovators may win. And no, I'm not an AMD fanboi, but I have talked to some architects from IBM and Intel, and they concur.

hyper transport (4, Informative)

Joe The Dragon (967727) | more than 7 years ago | (#17248356)

Intel would be better off if they were to start using HyperTransport. Even having two cores on the same die, linked to each other by HyperTransport with one link to the chipset, is better than two cores sharing the FSB.

What is the point of having 32 cores with only one link to the chipset?

Even with the new Xeons there's still only one link per CPU, and the CPUs need to use it to get to RAM.

AMD chips right now have up to 3 links; newer ones will have up to 5.

Re: Alternative CPU vendors (0, Offtopic)

TaoPhoenix (980487) | more than 7 years ago | (#17248870)

I am due to get my Last of the Windows line XP machine real soon. (I saw an announcement about Intel's successor to Kentsfield, so I have to get the specs on that.) My purchase timeframe for this is rather tight.

This will be the last Windows machine I will buy, and I plan to let it do its thing in the corner running specialized apps forever until the circuits fuse.

I will be going into a purchase freeze to let "everything else" sort itself out to see "who's on top in 2010 and beyond". I suspect by that point AMD will have caught up to their 2006 projection, and whatever that machine becomes, it will likely have an AMD chip.

I will shortly become a Thundering Newbie with my first Linux box. This will be dirt cheap so I can proceed to make mistakes with less fear of risking serious money. I'm sure someone will howl at some of my posts.

Re: Alternative CPU vendors (0, Offtopic)

westlake (615356) | more than 7 years ago | (#17249138)

I will be going into a purchase freeze to let "everything else" sort itself out to see "who's on top in 2010 and beyond"

Buy now and the Vista upgrade will be free. You may want that DVD.

I'll take the odds that in 2010 Vista will be dominant in the home market and growing very strong elsewhere.

That anything in free and open source of interest to end users will be ported to Vista or begin as native Vista applications. That anything in hardware of interest to end users will be shaped by Vista.

Re: Alternative CPU vendors (0, Flamebait)

satirenine (941898) | more than 7 years ago | (#17250076)

Why would you want Vista?

Re: Alternative CPU vendors (1)

snarfbot (1036906) | more than 7 years ago | (#17250398)

Well, since it will come preloaded on every computer sold by major retailers, most people won't have a choice, and since DX10 is locked onto it, if you game you will want it.

If none of the above, then the only reason would involve poor decision-making skills.

Re: Alternative CPU vendors (1)

russ1337 (938915) | more than 7 years ago | (#17249272)

Good on you, dude. While you might have a little trouble when you first go to use the command line, once you're semi-proficient you will feel the freedom from MSFT.... it totally rocks.

Put 'em together... (1)

TheSHAD0W (258774) | more than 7 years ago | (#17249234)

Heads up, Nvidia. There will soon be a market for chipsets that support both CPU lines SIMULTANEOUSLY. Get the best of both worlds, literally.

Obviously a good thing. (1)

CrackedButter (646746) | more than 7 years ago | (#17248194)

Two different methods that achieve the same result are better than one.

NOT (0)

Anonymous Coward | more than 7 years ago | (#17248404)

If manufacturers compete on cores, instructions-per-clock, and such, then consumers come out ahead. If AMD goes off on a tangent pulling random application-specific "accelerator" cores out of its ass, only products that specifically support the proprietary cores will benefit. It will be a huge hassle for developers, and a large portion of the marketplace will see no benefits.

Multiple general-purpose cores are the way forward. If AMD goes through with this strategy, both the company and its users will pay dearly.

Re:NOT (0, Troll)

Frozen Void (831218) | more than 7 years ago | (#17248832)

Unless that specific application is Windoze/DirectX.

Re:NOT (1)

Oddscurity (1035974) | more than 7 years ago | (#17249528)

Or text processing instructions?
  • Two asciiz pointers (haystack, needle) in, match location out
  • Another instruction to continue this search (haystack, needle, offset)
  • Sort

Instructions like that could speed up compression.
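For illustration, a rough C sketch of what such a "continue this search" operation looks like when done in software today; the function name and interface are invented here, and a dedicated instruction or core would simply replace this byte-by-byte loop:

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical software equivalent of the proposed instruction:
     * two NUL-terminated strings in, match location out, with an offset
     * so the search can be continued after a previous match. */
    long find_from(const char *haystack, const char *needle, size_t offset)
    {
        size_t hlen = strlen(haystack);
        size_t nlen = strlen(needle);

        if (nlen == 0 || offset >= hlen)
            return -1;

        for (size_t i = offset; i + nlen <= hlen; i++) {
            if (memcmp(haystack + i, needle, nlen) == 0)
                return (long)i;    /* match location out */
        }
        return -1;                 /* no match */
    }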

Re:NOT necessarily (0)

Anonymous Coward | more than 7 years ago | (#17248898)

I would welcome application-specific processors. For example, I primarily just use a web browser and the internet. If they had such a chip, that's what I would buy. The guy next door primarily does gaming; he wants something else. The dude down the street uses one at the office for text processing and some spreadsheets; he could use something else. My other next-door neighbor is a movie freak; he wants a media PC to show high-def movies every evening.

What you want (here is bad car analogy time) is some vehicle that is a truck, no it's a sportscar, no it's a bus, no it's an off-roader, nope it's a dragster, no it's a commuter car. What we have now is one size that fits no one really well, whereas application-specific makes a lot more sense, especially if it is modularized and pluggable into the mobo. That "general" PC is (for conversational purposes) what we have now, and obviously we are having problems with it; just throwing horsepower in general at a problem doesn't get you a bus, or a truck, or a sportscar. You have to look at the whole thing and design it for what you want to do. We are starting to get diversification, but it is still more a general PC than not, and I think having dedicated machines would make a lot more sense and actually save some cents down the road, as you could get what YOU need and want as opposed to some generic thing that only half-way does what you (you being joe random consumer) want no matter how much cash you throw at it.

Re:NOT necessarily (1)

GeffDE (712146) | more than 7 years ago | (#17249394)

But these "application-specific processors" arent "processors for one application." They are processors that do a specific type of calculation; for instance, a TCP/IP stack implemented in hardware, or a physics unit, or a graphics chip. In order to make a web-browsing "chip," you would need a bunch of these specialized chips, which is why most applications/programs/binaries are run on a general processing unit. In terms of you not-so-bad car analogy, for a browser, you need a car that is a bus, and an off-roader and a commuter car (or whatever). It needs an "SUV" of a computer chip. So if you primarily use a computer for web browsing, then any CPU is going to be fine because browsers aren't computationally intensive (although if you use Firefox maybe grab some extra memory???); meanwhile, the gamer might like that physics coprocessor because that would speed up games that support the processor and the media guy might like an operating system that supports a lot of offloading of audio/video computations (*cough* OS X *cough*) to the graphics card AND a really nice graphics card. There already are specialized processors in a computer these days, but there are only so many processes that can be specialized simply*.



*Doubtlessly, this is what AMD is researching: trying to make everyday tasks into some special form that can be run through some super-extra-optimized pipeline...I am just unsure of how that can be done in some/a lot of cases.

Free Enterprise (1)

mordors9 (665662) | more than 7 years ago | (#17248200)

This is the type of innovation that usually comes when there is true competition in the market. Imagine how much better the OS market would be with similar competition.

Re:Free Enterprise (4, Funny)

Moridineas (213502) | more than 7 years ago | (#17248328)

Like if there were hypothetical competitive operating systems like Mac OSX, Linux (and the competition therein--Ubuntu, Fedora, etc), FreeBSD, OpenBSD, Solaris, etc?

Yes, if those were competitors. (2, Insightful)

Somatic (888514) | more than 7 years ago | (#17250210)

The thing is, I like to be able to play games on my home computer. Any old game I see at the store. I want to be able to play it. A home computer is half about entertainment. Windows has no competition in that area. They just don't.

I use Linux daily at work, but I have no driving need to have a Linux box at home. I don't do that much worky stuff at home. I'm already burned out after doing it all day at work (and if I need to do more work, I can ssh in with Cygwin from home). And you really can't game on Linux. Yes, I'm sure some games are compatible with various Linuxes (the only one that comes to mind is Puzzle Pirates, and that's because it's written in Java), and I guess there's Windows emulation. But let's be honest. You're not gaming on Linux.

Mac, well. I can honestly never see myself owning one. If I was going to go another OS, it would be Linux for its flexibility. Apple just makes me nervous with all its proprietary stuff. I know lots of people own Macs and are happy with them. But the number of programs in general that are written for Mac is too tiny for me, games especially.

I have a box of junk in the corner with a bunch of games in it, and out of curiosity, I pulled them out while writing this, just to see which were compatible with Mac right out of the box... not with some lameass windows emulation that will run it at 1/10 speed, but actual Mac compatibility. Everquest (and about 7 expansions): Windows. Yeah, there was a Mac version a few years ago, where you get to play on your very own Mac server. Good luck with that. GTA: Vice City: Windows. Knights of the Old Republic: Windows. Dark Age of Camelot: Windows. There are plenty more in the closet. In fact, I'm pretty sure that the only one in my entire collection that is compatible with Mac right out of the box is World of Warcraft.

But I hate World of Warcraft.

Sorry, but my home box is my entertainment. And Microsoft knows it too. They aggressively pursue the gaming angle with developer tools. Now they've got XNA, with the goal being to write a game once and have it play on both Xbox and Windows with a minimum of development fuss.

They know where their dollars are coming from. They court the market, because they know geeks like me would flee if Linux could entertain me even 1/100th as much. So what I'm saying is no, no there is no Windows competition, not in my market.

Free Reliant (2, Funny)

Anonymous Coward | more than 7 years ago | (#17248476)

This is the type of innovation that usually comes when there is true competition in the market. Imagine how much better the OS market would be with similar competition.
This is the type of innovation that happens when you abandon a crew on Ceti Alpha V. Imagine how much better the OS market would be with similar KHHHAAAAAAANNNNNN!!!

MODERATORS!!! (0)

Anonymous Coward | more than 7 years ago | (#17249870)

Mod this guy funny. I haven't laughed so hard in days...

Re:Free Enterprise (3, Insightful)

badboy_tw2002 (524611) | more than 7 years ago | (#17248838)

Of course, for the kind of competition that exists between Intel and AMD or AMD/Nvidia, you need a common standard to compete on. If all apps ran on the same OS/GUI API then you'd have a true choice in operating systems (this one is more secure, this one faster, this one runs Word twice as fast and handles more DB load, etc.). CPUs have x86, GPUs have DirectX/OpenGL; OSs need a standard application interface commonly accepted by software developers. Otherwise you're comparing not just the OS but all the stuff that goes with it (skins, music players, etc., etc., etc.).

better for some things, and not for others (1)

ZahnRosen (1040004) | more than 7 years ago | (#17248226)

By going in slightly different directions AMD differentiates itself from Intel. There will be things it does better and it will prosper in those areas...

Integrated graphics.. (2, Interesting)

mr_stinky_britches (926212) | more than 7 years ago | (#17248240)

That sounds great and all, and the AI that the article mentions really does sound interesting...but I am not clear on how a processing unit for extremely specialized tasks is going to translate into significant performance gains. Is the current generation of CPU not optimized for mathematical operations? That seems the most direct way to get the best all-around performance, to me. Also, isn't it kind of sucky to make the processor only good at a few things rather than fast in a diverse range of applications?

If anyone can give me any insight here...please speak up.

Thanks

- I post interesting things or short articles I write here [wi-fizzle.com]

Re:Integrated graphics.. (1)

Jah-Wren Ryel (80510) | more than 7 years ago | (#17248360)

Is the current generation of CPU not optimized for mathematical operations?

You name the operations, and I'll tell you if it is optimized for it or not.

Or, in other words, that's the difference between general-purpose and special-purpose.

Re:Integrated graphics.. (4, Insightful)

SQL Error (16383) | more than 7 years ago | (#17248364)

Is the current generation of CPU not optimized for mathematical operations?

What do you want to run on a computer that isn't "mathematical operations"?

More specifically:

Are current CPUs optimised for physics simulations? No.
For image processing? No.
For data compression? No.
For encryption? No.

These are all areas where custom cores can provide enormous performance benefits (both in absolute terms, and in terms of performance per watt) over current CPUs, which are general purpose.

Re:Integrated graphics.. (1)

JPriest (547211) | more than 7 years ago | (#17248596)

You could probably add network processing, data compression, and a speech engine to the list also.

Re:Integrated graphics.. (1)

Watson Ladd (955755) | more than 7 years ago | (#17248638)

Physics simulation, image processing, and data compression all use math. For physics and images the math is matrix math. Look at what most benchmarks are made of, and you will see it's the same stuff.

Re:Integrated graphics.. (1)

mabinogi (74033) | more than 7 years ago | (#17249112)

Saying that a CPU does "maths" and is therefore specialised for any mathematical task is like saying that someone who knows "IT" is therefore qualified to fix the printer.

It entirely depends on _which_ mathematical operations you're talking about.

Re:Integrated graphics.. (3, Interesting)

megaditto (982598) | more than 7 years ago | (#17248660)

What I'd like to see is a couple of those field-programmable thingie cores that can reconfigure their circuits to a specialized calculation a program is doing... Wishful thinking but still...

Transputer (3, Informative)

temojen (678985) | more than 7 years ago | (#17248904)

Compaq used to sell those; they're called transputers and came as a PCI card with 4 FPGAs, some RAM, and a PowerPC CPU.

Re:Integrated graphics.. (1, Interesting)

Anonymous Coward | more than 7 years ago | (#17248950)

First thing we learned when doing FPGAs: great for prototyping, slower than ASIC, fairly expensive, but in low volumes better than ASIC. When going into mass production, ASICs are faster and significantly cheaper in large volumes (think 10,000s of units instead of 100s) due to economy of scale. So while FPGAs might be cool, they're horribly impractical, and it would make more sense to have ASIC daughter cards like the PhysX.

Also, the only reason you'd want an FPGA is if application developers wanted to program it specifically for their algorithms (which is added, usually unneeded, development time).

On a side note, it might be cool to get those self-configuring CPUs which optimize their own codepath for the application & data (although I haven't heard anything about them recently).

FPGA in Opteron Socket (0)

Anonymous Coward | more than 7 years ago | (#17249134)

It's already been done. There are several companies that produce a board with an FPGA and the necessary logic to plug it into a CPU socket on a multi-opteron (hello AMD) system board.

Check out the Xtreme Data xd1000 [xtremedatainc.com] for a device that looks interesting. It sits on the HyperTransport bus and can bridge to others (useful if you had a few extra sockets to spare).

There are other devices and toolkits like this - check out Google for more data.

And no - I don't own one - but I wish I had the money to buy one ;-)

Re:Integrated graphics.. (1)

KliX (164895) | more than 7 years ago | (#17249490)

Not so wishful - the latest Nvidia card is verging on that kind of functionality.

No bloody way we'll ever get low level access though.

Re:Integrated graphics.. (1)

Rakishi (759894) | more than 7 years ago | (#17249668)

As I understand it the problem is that FPGAs have a limited number of changes/uses/whatever before they crap out. My EE friends keep complaining about how the ones in their classes are barely working even when they're relatively new (but used so much that they're already way over the recommended number of uses).

Re:Integrated graphics.. (2, Interesting)

blahplusplus (757119) | more than 7 years ago | (#17250042)

"What I'd like to see is a couple of those field-programmable thingie cores that can reconfigure their circuits to a specialized calculation a program is doing... Wishful thinking but still..."

This is what human minds do, but CPUs are far from this goal, not to mention the nightmare of managing it as complexity increases.

Re:Integrated graphics.. (1)

WhoBeDaPlaya (984958) | more than 7 years ago | (#17250354)

High performance FPGAs get wtf-pw3ned by ASICs any day of the week ;) They're to ASICs what general purpose CPUs are to specialized processors.

Re:Integrated graphics.. (1)

eddy (18759) | more than 7 years ago | (#17248664)

>For encryption? No.

"Yes" if you have a VIA CPU with PadLock.

Shame on Intel and AMD for not following suit there. They deserve to be shamed in benchmarking, but no site seems to have the know-how to run PadLock-enabled benchmarks (OpenSSL, Loop-AES, etc.).

# cat /proc/cpuinfo

[..]
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge cmov pat clflush acpi mmx fxsr sse sse2 tm nx pni est tm2 rng rng_en ace ace_en ace2 ace2_en phe phe_en pmm pmm_en
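For what it's worth, a minimal C sketch (assuming the Linux /proc/cpuinfo format shown above) of how a program might check for the PadLock "ace" (AES Crypto Engine) flag before choosing a hardware AES path:

    #include <stdio.h>
    #include <string.h>

    /* Scan /proc/cpuinfo for the VIA PadLock "ace" flag. */
    int has_padlock_ace(void)
    {
        char line[4096];
        int found = 0;
        FILE *f = fopen("/proc/cpuinfo", "r");
        if (!f)
            return 0;
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, "flags", 5) == 0 && strstr(line, " ace "))
                found = 1;
        }
        fclose(f);
        return found;
    }

    int main(void)
    {
        printf("PadLock ACE present: %s\n", has_padlock_ace() ? "yes" : "no");
        return 0;
    }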

Re:Integrated graphics.. (-1, Flamebait)

Frozen Void (831218) | more than 7 years ago | (#17248874)

I see cpuinfo is extremely user-friendly. I should install lunix just to check it out.

Re:Integrated graphics.. (1)

Eideewt (603267) | more than 7 years ago | (#17249192)

I hate to tell you, but those acronyms would be just as mysterious if they were spelled out and put in a pretty GUI.

Re:Integrated graphics.. (1)

nr (27070) | more than 7 years ago | (#17248696)

And what value does that provide to Cray and other supercomputer/cluster builders? They don't have any use for graphics/encryption/compression stuff put into the generic processors they use to build their supercomputers; they want only floating-point operations and nothing else.

Re:Integrated graphics.. (2, Insightful)

Deflatamouse! (132424) | more than 7 years ago | (#17248938)

Remember, a large percentage of the time the entire system (CPUs, chipsets, memory, disks, etc.) is just pushing data around without performing any calculations. We could all gain from better performance of these operations as well.

Re:Integrated graphics.. (2, Insightful)

GeffDE (712146) | more than 7 years ago | (#17249486)

Physics simulations and image processing can be (and are) done on GPUs. Same for any hardcore math stuff, like Folding Proteins [stanford.edu]. The problem with the AMD approach is that there are only so many (and I don't think it is many, but I really don't know, so if you do, please let me know) different kinds of operations. Like I said, the physics simulations and image processing are the same type of problem and also conveniently tackled very proficiently by graphics hardware.

Re:Integrated graphics.. (2, Interesting)

stigmato (843667) | more than 7 years ago | (#17249928)

Like I said, the physics simulations and image processing are the same type of problem and also conveniently tackled very proficiently by graphics hardware.
Perhaps that's why AMD merged with ATI? Just a thought.

Re:Integrated graphics.. (1)

zippthorne (748122) | more than 7 years ago | (#17250276)

No, that's not a problem. That's actually the key strength. It means that a relatively small menu of super optimized special processors will be sufficient to satisfy most of their customer's needs at ludicrous speed.

Re:Integrated graphics.. (1)

Alef (605149) | more than 7 years ago | (#17249574)

Are current CPUs optimised for physics simulations? No.
For image processing? No.
For data compression? No.
For encryption? No.

Maybe not, but if you have specialized cores for each of these, you will have 4 cores idling when you don't do any of that. The alternative would be to have 5 general purpose cores. Each single one would be slower at a specific task, but the symmetric design would give better flexibility allowing all cores to operate all the time. It isn't a clear cut case which approach is the better, and the general consensus seems to vary periodically over time (see the wheel of reincarnation [catb.org]).

Re:Integrated graphics.. (1)

Jeff DeMaagd (2015) | more than 7 years ago | (#17248390)

Much in the same way that a dedicated graphics chip can render so much quicker than a general purpose CPU running many times the speed. The problem is, the specialized units for certain problems may or may not be needed a year from now when a better way to solve a problem is found. Or new problems arise that don't fit the existing specialized units. General purpose CPUs may be slower at a given task, but they can perform a much greater range of tasks. As such, I'm not calling a winner here.

Re:Integrated graphics.. (4, Insightful)

CAIMLAS (41445) | more than 7 years ago | (#17248516)

Imagine a processor with special circuitry routines which will speed up the operation of the following by a significant percent:
- database servers
- web servers
- CAD and 3d programs (rendering)

Basically, it's not much different than MMX or any other extension to a processor. The programmers can still code for the x86 (or whatever) architecture and the same operating system, but then shortcut those instructions when the additional instructions are found to be available. Or maybe they can work it transparently so programmers don't have to do anything additional - it'll optimize on the fly (provided they can figure out how to do that). Overall, I think the software headache will be worth it to companies, as they will be able to have substantial gains in performance in the hardware department, cutting cost while gaining performance. What datacenter wouldn't love to use half as many machines to provide access to the same amount of information; what animator wouldn't love to have their workstation be able to render things at twice the speed?
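A minimal sketch of that kind of runtime dispatch, using GCC's __builtin_cpu_supports as a stand-in for detecting whatever extra instructions or accelerator the chip exposes (the "accelerated" path here is just a placeholder, not anything AMD has announced):

    /* Generic fallback that any x86 can run. */
    static void vec_add_generic(const float *a, const float *b, float *out, int n)
    {
        for (int i = 0; i < n; i++)
            out[i] = a[i] + b[i];
    }

    /* Placeholder for a path built on the extra instructions; a real
     * implementation would use the accelerator-specific code here. */
    static void vec_add_accelerated(const float *a, const float *b, float *out, int n)
    {
        vec_add_generic(a, b, out, n);
    }

    void vec_add(const float *a, const float *b, float *out, int n)
    {
        /* GCC builtin: check a CPU feature at run time and pick a code path.
         * "sse2" stands in for whatever extension the fast path needs. */
        if (__builtin_cpu_supports("sse2"))
            vec_add_accelerated(a, b, out, n);
        else
            vec_add_generic(a, b, out, n);
    }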

Re:Integrated graphics.. (1)

Cyberax (705495) | more than 7 years ago | (#17249374)

There is such a computer for hobbyists: http://www.eng.petersplus.ru/sprinter/ [petersplus.ru] - it is a clone of the 8-bit Z80 with an FPGA. People were able to port Doom to this computer - the FPGA was used as a rendering accelerator.

You could even reprogram it on the fly - I remember writing accelerated floating-point computations just for fun.

Re:Integrated graphics.. (2, Insightful)

kestasjk (933987) | more than 7 years ago | (#17250184)

As a database guy I really don't think processors would make a bit of difference to database speed in the vast majority of cases. Database design is usually what's at fault when you're shown a slow database, followed closely by query design, followed by memory, followed by hard disks, followed by processors. The same sort of thing applies to web servers; the bottleneck is never the processor.

As for CAD, well I think that would be quite a waste. Remember that processor designers only have so many transistors to use, and they have to make the most of them. It would be a waste of die real estate, chip designer time, CAD software writer time, etc, just to get a slight performance boost. I sure wouldn't want to pay for CAD specific instructions when I don't use any CAD tools.

I think general purpose processors should leave as much as possible up to software; optimize the general purpose stuff as much as possible so that everything runs faster, and if a user needs some extra fast processing capability for a specific task they can get an extra processor for that purpose.
You can buy external graphics, crypto, and physics processors; if there was enough demand there would also be external database, web server, and CAD processors.

Re:Integrated graphics.. (2, Interesting)

mo (2873) | more than 7 years ago | (#17248542)

The thing is, there's a huge number of applications that do the same basic computations over and over again.
Just as the floating point coprocessor became the FPU section of the processor, it makes sense to give future processors the ability to do the common operations that are now done by graphics cards.
Things like matrix multiplications (which will actually be a single processor operation in SSE3) are used all over the place in graphics, sound, and, well, virtually anything that eats up CPU power these days. Doing this stuff serially in a traditional general-purpose CPU takes forever, but it's blazing fast if you do it in parallel on specially designed hardware.

You might think that having hardware that just does matrix multiplications limits your processor to only certain domains, but it makes sense to have a desktop processor that's fast at desktop tasks (playing games, ripping/encoding video/audio, running Skype, raytracing, and using Photoshop). There's still going to be the server-class processors that are good at general-purpose non-mathy things like serving databases, but it just doesn't make sense to use a Xeon in a desktop when an AMD/ATI integrated chip will do all the stuff you want faster and cheaper.
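As a caveat, SSE3 doesn't actually collapse a whole matrix multiply into one instruction, but its horizontal adds do show the flavor of doing several element operations per instruction. A small sketch (compile with -msse3):

    #include <pmmintrin.h>    /* SSE3 intrinsics */

    /* Four-element dot product, the building block of small matrix multiplies,
     * done as four multiplies at once plus two horizontal adds. */
    float dot4_sse3(const float *a, const float *b)
    {
        __m128 prod = _mm_mul_ps(_mm_loadu_ps(a), _mm_loadu_ps(b));
        prod = _mm_hadd_ps(prod, prod);    /* pairwise sums (SSE3) */
        prod = _mm_hadd_ps(prod, prod);
        return _mm_cvtss_f32(prod);
    }

    /* The same work done serially, one element at a time. */
    float dot4_scalar(const float *a, const float *b)
    {
        float sum = 0.0f;
        for (int i = 0; i < 4; i++)
            sum += a[i] * b[i];
        return sum;
    }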

Well... (1)

jd (1658) | more than 7 years ago | (#17248738)

The short answer is no. The longer answer is that the current architectures are designed to solve the more common simple cases at a reasonable speed. They are not designed for complex common operations (which is why we use libraries like ATLAS and FFTW, rather than a simple opcode, for so many fundamental operations). They are totally incapable of rarer complex operations, which is why research facilities pour literally millions of dollars into developing high-performance maths toolkits, and hundreds of millions into the truly heavy-duty stuff (3D FFTs, Navier-Stokes, gas prices...)


There is no theoretical reason why you could not build special-purpose processor units for different types of special-interest operations. There are so many that real-estate becomes an issue, but if you're willing to put up with Yet Another Socket Type, you could easily see AMD replace a sizable chunk of the maths libraries in Linux or Windows with hard-coded implementations hundreds of times faster than software is capable of.
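As a concrete example of leaning on a tuned library rather than a simple opcode, a minimal FFTW call (FFTW is a real library; treat the snippet as a sketch and link with -lfftw3):

    #include <fftw3.h>

    /* One forward real-to-complex FFT of length n: the heavy lifting lives
     * inside the library, which is tuned far beyond what a compiler's
     * generic FPU code path would produce. */
    void forward_fft(double *in, fftw_complex *out, int n)
    {
        fftw_plan plan = fftw_plan_dft_r2c_1d(n, in, out, FFTW_ESTIMATE);
        fftw_execute(plan);
        fftw_destroy_plan(plan);
    }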

Re:Integrated graphics.. (1)

kRutOn (28796) | more than 7 years ago | (#17249906)

Is the current generation of CPU not optimized for mathematical operations? This seems the most direct way to get the best all-around performance, to me.

There are a few operations that are significantly slower on a regular CPU as compared to a special purpose processor like an FPGA. For instance, FPGAs usually have quite a few 18x18 multipliers. These multipliers can be used effectively for many algorithms such as an inverse discrete cosine transform (iDCT).

That could mean, for example, less power used to render MPEG-2 video; effectively delivering HDTV to notebook users without draining the battery.

CPUs and GPUs (1)

blantonl (784786) | more than 7 years ago | (#17248264)

For the same reason that ATI has been so successful in the graphics market, AMD will dominate the CPU market. I think it is apparent to everyone that neither AMD Athlons nor Intel Core Duos run on video cards -- and because video processing requires different architectures to optimize performance for different applications (like video, sound, math, etc.) - AMD will be a clear winner in the overall processor space because of this direction.

This is great news for AMD, and clearly shows innovation vs. the status quo.

Re:CPUs and GPUs (2, Insightful)

Bright Apollo (988736) | more than 7 years ago | (#17248336)

This is expected news, if you step back objectively.

AMD loses and will continue to lose the manufacturing race with Intel. Intel will likely continue to develop smaller and smaller dies, and AMD could never hope to leapfrog them for lack of cash to do so. Of course, give Intel their due: they employ some pretty smart people as well.

Ultimately, making your CPU do more specialized tasking, or capable of programmatic specialized tasking (think FPGA), is the right kind of innovation for them. I would also look to see more RISC-based operations, and wouldn't be at all surprised if they went off in that direction in some way. If they do, IBM has something to worry about... which brings me to the POWER CPUs. Where I work, I can architect a solution in a variety of ways, and currently I choose to build p550s with POWER5s (later POWER6s) with all the nice dynamic partitioning and micro-partitioning that you cannot get (at that level) from anyone else. I wonder how comfortable IBM would be feeling if they saw AMD start to offer the same kinds of partitioning elements in their CPUs and architectures?

This is all good news for me.

-BA

np-hard optimization board (2)

GrEp (89884) | more than 7 years ago | (#17248294)

Nvidia, please make a board for solving small instances of NP-complete problems. Mainly max-clique and graph coloring :)

Amiga? (2, Informative)

vjl (40603) | more than 7 years ago | (#17248384)

That sounds like the Amiga's way of doing things...over 20 years ago! I'm glad it's catching on, and I'm glad AMD is doing it; AMD usually gets things right, and makes their products a lot more affordable than Intel...

/vjl/

Re:Amiga? (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#17248468)

I heard the Amiga was invented by Chuck Norris.

Re:Amiga? (1)

Dun Malg (230075) | more than 7 years ago | (#17248526)

That sounds like the Amiga's way of doing things...over 20 years ago! I'm glad it's catching on, and I'm glad AMD is doing it; AMD usually gets things right, and makes their products a lot more affordable than Intel...

/vjl/
Actually, this is simply the latest iteration of a well-documented pattern going back forty-odd years known as the Cycle of Reincarnation [jargon.net].

Re:Amiga? (1)

Mspangler (770054) | more than 7 years ago | (#17249060)

And Apple. In 1993 or so, Apple shipped two 68040 Macs with a Motorola DSP chip on the motherboard. (Codename Tsunami?) As I remember, no one but Apple ever wrote software that used it.

Intel had the same idea with MMX. And how long did it take for anyone to use MMX? Years. Unless AMD sticks with it for years, at least until the first models that have it are obsolete, few people will use it because it's not the least common denominator.

Dedicated processors for "other" tasks (2, Interesting)

Gothmolly (148874) | more than 7 years ago | (#17248442)

Sounds like OS-dependency and driver hell to me. Imagine if you had an MP3 decoding co-processor, an MPEG-2 encoding co-processor, an Excel co-processor, a GCC co-processor... getting it to all work seamlessly would make today's 200MB video card drivers pale in comparison. So you install WMP version 42 and you have to check "use dedicated MP3 coprocessor" in Tools->Options? The whole point of CPUs is that they are general purpose.

Re:Dedicated processors for "other" tasks (1)

kryptkpr (180196) | more than 7 years ago | (#17248562)

I was thinking more along the lines of how MMX/3DNow are implemented.. extra instructions.

And maybe not such specific tasks like "mp3 decode".. but what about an FFT/IFFT instruction set extension? A matrix-multiply or matrix-inversion instruction set extension? The operating system could see these instructions and ensure they're executed on the correct processing unit (fast interconnects are of course needed here, which I believe is what HT3 is all about!)

Hardware acceleration of these tasks would greatly speed up many applications (specifically codecs of all kinds).

Re:Dedicated processors for "other" tasks (4, Informative)

Anonymous Coward | more than 7 years ago | (#17248646)

Don't forget crypto; the hardware AES on a 1GHz VIA C3 runs circles around an A64 X2 4800+ doing the same in software, at something like 10 vs. 80W power consumption.

Re:Dedicated processors for "other" tasks (1)

Ignis Flatus (689403) | more than 7 years ago | (#17248814)

Well, see, now what you're talking about is a digital signal processor. That's what those things are optimized for, and that's about all they do well. Would be interesting to have a couple on board a PC, though.

Re:Dedicated processors for "other" tasks (1)

peragrin (659227) | more than 7 years ago | (#17248680)

Yeah, you had better turn in that 3D video card you have there now. It just won't catch on.

Consider this: a smart version of IBM's Cell chip, with the other cores designed for one task each, and two of the cores being generic CPUs.

Re:Dedicated processors for "other" tasks (0)

Anonymous Coward | more than 7 years ago | (#17248726)

This process has been done before. Sun SPARC and Motorola 6800-series CPUs had a large number of general-purpose registers and a simple instruction set. This is like a CPU with many cores, fast and general-purpose. Get software to make use of the flexibility. Clever programming can make an application fast.

AMD are pursuing many cores, but some of those cores will be dedicated and optimised for different things. This will be more like a CISC CPU like the x86, with fewer general-purpose registers but more complex codes and instructions for doing routine tasks in specific registers optimised for the job. Easier to program for, as many common operations will have a defined API, and performance will come through changes to the hardware and instruction sets.

Good that there are two approaches, but if they won't run the same programs, then one will stagnate and die and the other approach will live on.
x86 won this race, as a dual-core 3GHz chip can still run the same programs that worked on a 386, 486, Pentium, etc., and the CPUs were cheap.

Naturally... not all processes are (2, Interesting)

mseidl (828824) | more than 7 years ago | (#17248568)

infinitely parallel. Gaming, on one hand, can be very much parallelized, with physics, an ever-increasing number of vertices to transform, AI to calculate, and in general crap to render.

A lot of other software is not, such as office productivity and operating systems... (these can benefit, but ultimately they'll reach a limit).

The other question is, when you put hundreds of cores on a chip, how do you handle the logistics of accessing cache? Or cache coherency (not always required)? Maybe it'll go up to 16 or so cores before they run into some cache latency issues.

I think the other question is... how long till software catches up? We're at a point where hardware has been carrying software. Software is, for the most part, coded pretty crappily (thanks to out-of-order cores). When are the software designers going to get with the program and leverage hardware more? I know hardware is very dynamic. But now we're seeing hardware reach its limit, and multiple cores don't do anything unless some key multi-threaded apps are running.

PPC called and wants its AltiVec back? (2, Insightful)

AHuxley (892839) | more than 7 years ago | (#17248582)

Where will all the optimised code come from?
What will the cost be in making it all work 'just' for AMD?
How locked in would any code be?
Over the life of a project, will it be worth 'porting' code to AMD?

Re:PPC called and wants its AltiVec back? (2, Insightful)

elhedran (768858) | more than 7 years ago | (#17249874)

Where will all the optimised code come from?

Believe it or not, 100 cores require optimized code as well. Programs don't magically become multi-threaded; a developer has to work out how to split the work up into 2/4/100 threads without losing performance to locking and thread communication (a rough sketch of that kind of split follows this comment).

What will the cost be in making it all work 'just' for AMD?

Probably about the same as making it work for a new graphics card

How locked in would any code be?

It sounds to me like they are talking optimization; hence it would run on an Intel, just slower.

Over the life of a project, will it be worth 'porting' code to AMD?

Ah, I'm not going to try and answer this one. But it is an excellent question and should be asked (and answered) by AMD.
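For the point above about splitting work across threads without losing performance to locking: a rough pthreads sketch in which each thread sums only its own slice into its own slot, so the only coordination is the final join (compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define N (1 << 20)

    static double data[N];

    struct job { int start, end; double partial; };

    /* Each thread touches only its own slice and its own 'partial' field,
     * so no locks are needed until the results are combined. */
    static void *sum_slice(void *arg)
    {
        struct job *j = arg;
        double s = 0.0;
        for (int i = j->start; i < j->end; i++)
            s += data[i];
        j->partial = s;
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];
        struct job jobs[NTHREADS];
        double total = 0.0;

        for (int i = 0; i < N; i++)
            data[i] = 1.0;

        for (int t = 0; t < NTHREADS; t++) {
            jobs[t].start = t * (N / NTHREADS);
            jobs[t].end   = (t + 1) * (N / NTHREADS);
            pthread_create(&tid[t], NULL, sum_slice, &jobs[t]);
        }
        for (int t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += jobs[t].partial;
        }
        printf("total = %f\n", total);
        return 0;
    }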

Good, time for mainboard makers to......... (0)

Anonymous Coward | more than 7 years ago | (#17248636)

Make boards that support both chipsets, with AMD doing its thing and Intel doing its thing on the same board.
Apps take advantage of whichever processor performs better,
and/or graphics/buses get dedicated to one CPU while the other tries to figure out why Vista has duplicate code.

I Love Competition (1)

Reed Solomon (897367) | more than 7 years ago | (#17248644)

Yay! I dislike Intel (I get a negative vibe from them, man) and generally like AMD. I hope AMD gets some real success as Intel always seems to outplay them in terms of getting their product into our machines.

Hybrid Graphics & the Cell roadmap. (5, Interesting)

lightversusdark (922292) | more than 7 years ago | (#17248668)

The article is a bit light on detail; there's a webcast of the presentation on AMD's Investor Relations site [corporate-ir.net] (needs a login; BugMeNot doesn't work), and it's WMP or Real only. And it's apparently four hours long.
The most interesting thing for me was the mention of "Hybrid Graphics":
According to AMD, notebooks with hybrid graphics will include both discrete and integrated graphics processors. When such notebooks are unplugged, their integrated graphics will kick in and disable the discrete GPU. As soon as the notebook is plugged back into a power source, the discrete GPU will be switched on again, apparently without the need to reboot. AMD says this technology will enable notebooks to provide the "best of both worlds" in terms of performance and battery life.
It also looks like they're extending the Fusion concept along Cell-like lines, with additional cores for non-CPU or GPU purposes.
Their road map through 2008 only talks about up to quad core, although I assume this means CPU cores (I'm not sure that I would accept a CPU+GPU on a single die branded as a 'dual-core' chip). I think the Cell has eight cores, but due to yield issues not all are enabled in a PS3, and they are not all functionally equivalent. I don't know if this is the case for the Cell-based IBM blades, though.
The roadmap basically looks like periodic refreshing of the product line reducing power consumption with each iteration, which is where I think Intel have got a head-start on AMD. However, if AMD can sort out the yield issues, and compilers and developers begin to take advantage of these "associate" cores in Cell and future AMD architectures, then maybe Intel will have turned out to have missed a trick, as they did with x86-64.

Re:Hybrid Graphics & the Cell roadmap. (2, Informative)

Watson Ladd (955755) | more than 7 years ago | (#17248800)

They wouldn't need to work on compilers, and developers wouldn't need to rewrite code if they encouraged people to use BLAS and then optimize BLAS. I think that a lot of this multi-core stuff will end up being matrix and vector math units with some kind of MIMD based on GPU style masking branches. If they wrap it in a special-purpose API, they only end up hurting their benchmark scores.
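A sketch of what "code against BLAS and let the vendor optimize underneath" looks like in practice; the call below uses the standard CBLAS interface, so the same source runs against a reference BLAS, ATLAS, or a hypothetical accelerator-backed BLAS without changes:

    #include <cblas.h>    /* link against whatever optimized BLAS is installed */

    /* C = A * B for n-by-n row-major matrices, expressed as one dgemm call. */
    void matmul(int n, const double *A, const double *B, double *C)
    {
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n,
                    1.0, A, n,
                         B, n,
                    0.0, C, n);
    }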

Re:Hybrid Graphics & the Cell roadmap. (1)

3choTh1s (972379) | more than 7 years ago | (#17249230)

The Cell has as many "cores" as the device requires. It's an expandable CPU, and that is why it's so intriguing. It has one main "core" which is really the brains of the operation, and several coprocessing units that do very simple operations very, very fast. But the Cell DOES NOT have a set number of these units. It just so happens to have 8 in the PS3: 6 used for general-purpose game/entertainment stuff, 1 for OS security, and 1 disabled for better yields.

You have to be able to imagine that this processor is supposed to be in all sorts of other devices. Say a cell phone with 1-2 coprocessing units because it has to be stingy with power. Or a render farm with 16 SPUs per Cell per computer to speed up video processing. Expandability is the key to the Cell architecture.

Intel has done heterogenous multicore for years (1)

Lemming Mark (849014) | more than 7 years ago | (#17248782)

Worth noting that this strategy may well have at least occurred to Intel already.

It's not a new idea to mix lots of kinds of cores on one die: Intel's IXP network processors have been available a number of years now. These combine an Xscale (StrongARM) core with a number of specialised network processing-oriented microengines. The Xscale can run Linux and acts as a supervisor to the microengines, which do the fast path work of actually processing the data. The microengines are streamlined to be able to do this job quickly, meanwhile the Xscale is able to run control plane and management code efficiently - because that's what it's designed to do.

It'll be interesting to see if Intel also use this strategy in their future desktop and server CPUs - it certainly makes good sense, and it's an approach they've already productised in other areas.

Re:Intel has done heterogenous multicore for years (0)

Anonymous Coward | more than 7 years ago | (#17250390)

No no no... Only AMD can "innovate", nobody's going to believe this claptrap about Intel supposedly thinking along the same lines.

It's ironic that a cheap ass clone chip maker has gotten the reputation of being innovative because they capitalized on the Intel mistake of not using x86-64 extensions. I mean, I'm sure it never occurred to Intel to just add 64-bit extensions, it's _sooo_ unlike what they did on moving from 16 to 32 bit.

What's amusing is that even AMD's innovation claim to fame, 64-bit extensions, is just a copy of what Intel did when moving from 16-bit to 32-bit processors.

Cue : (5, Funny)

CODiNE (27417) | more than 7 years ago | (#17248786)

20 people asking "Why would anyone need this?"
50 people replying "I encode video"
45 people replying "Games"
10 replying "Babes of course"
1 karma whore incapable of making a decent top 10 list. :)

On the Clock (1)

Doc Ruby (173196) | more than 7 years ago | (#17248796)

Does that mean they're finally going to hire some Pacific Islanders, Basque, Thai, Pygmies, or Mayans?

My experience has been... (0)

int21hex (923711) | more than 7 years ago | (#17248812)

My experience has been that Intel does provide a more durable chip while AMD is always a budget chip. I would rather find a way to overclock an Intel than deal with AMD's already-pushed values. Quality control of the two is way different.

Re:My experience has been... (1)

Maurice (114520) | more than 7 years ago | (#17249582)


By experience, do you mean that you have actually had an AMD chip fail or are you just guessing?

Personally, I have used AMD chips since the days of the Am386 and have not had a problem with them. Also, my laptop with a 2.2 GHz Athlon 64 runs circles around my 3.4GHz dual core office desktop in terms of performance. Don't ask me why, since my answer would be that the Pentium sucks.

Uhh... I think we forgot an obvious possibility... (1)

Caspian (99221) | more than 7 years ago | (#17248954)

AMD thinks the core race is just a repeat of the megahertz race that took place a few years ago. Instead, AMD is counting on Accelerated Processing Units, chips that mix and match general-purpose CPU cores with dedicated application processors for graphics and other tasks.

So Intel is betting on more and more cores, and AMD is betting on more and more integrated units. Wait... why not, y'know, BOTH? A 16-core CPU with an integrated high-end GPU sounds sweet.

The end is nigh (1)

redblue (943665) | more than 7 years ago | (#17249106)

Personally I can't imagine taking on more than 4x4 PrOn action in a day. Maybe 16x16. Anything more than that, I get cognitive dissonance. Islam has a limit of 4. How wise.

Definition needed.... (1)

quizzicus (891184) | more than 7 years ago | (#17249160)

Are there any big architectural differences between multiple, specialized cores within a die and multiple, specialized units within a core? If so, what are they?

I don't believe it! (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#17249212)

Do you trust Stalin to give you the straight dope on the microprocessor industry? I don't. Trotsky is the true heir to the legacy of Marx, Engels, and Lenin.

IEEE Talk said just this (0)

Anonymous Coward | more than 7 years ago | (#17249478)

Went to a talk in Fort Collins (AMD opened a design shop here) where the speaker said just this: that Amdahl's law applies, and no matter how good the hardware is, there is always some serial portion which will not give the performance boost you want. And parallelism is hard to implement. So basically, he broke down AMD's strategy into a die that has specialized accelerators, one big CPU, and multiple smaller CPUs plus cache, etc. So the stuff which can be parallel has multiple cores, and the stuff which is not has cores which are less power hungry. Interesting stuff.

most of them idle most of the time? (2, Interesting)

bcrowell (177657) | more than 7 years ago | (#17249568)

The two big obstacles to getting better performance from parallelization are that (1) some problems aren't parallelizable, and (2) programmers, languages, and development tools are still stuck in the world of non-parallel programming. So from that point of view, this might make more sense than simply making a computer with a gazillion identical, general-purpose CPUs.

On the other hand, I'd imagine that most of these processors would sit idle most of the time. For instance, right now I'm typing this slashdot post. If I had a video card with a fancy GPU (which I don't), it would still be drawing current, but sitting idle 99.99% of the time, since displaying characters on the screen as the user types is something that could be done back in the days of 1 MHz CPUs. Suppose I have a special-purpose physics processor. It's also drawing current right now, but not doing anything useful. Ditto for the speech-recognition processor, the artificial intelligence processor, the crypto processor, ...

There are also a lot of applications that don't lend themselves to either multiple general-purpose processors or multiple special-purpose CPUs. One example that comes to mind is compiling.

On a server, you're probably either I/O bound, or you're running a bunch of CGI scripts simultaneously, in which case multiple general-purpose processors are what you need.

For almost all desktop applications except gaming, performance is a software issue, not a hardware issue. I was word-processing in 1982 with a TRS-80, and it wasn't any less responsive than Abiword on my current computer. Since I'm not into gaming, my priorities would be (1) to have a CPU that draws a low amount of power, and (2) to have Linux do a better job of cooperating with my hardware on power management. I would also like to have parallelized versions of certain software, but that's going to take a lot of work. For example, the most common CPU-heavy thing I do is compiling long books in LaTeX; a massively parallel version of LaTeX would be very cool, but I'm not holding my breath.

Re:most of them idle most of the time? (1)

Eideewt (603267) | more than 7 years ago | (#17250108)

I'm with you almost all the way, except on compiling not benefiting from parallelization. If that were the case, programs like distcc would be pointless.

Re:most of them idle most of the time? (1)

bcrowell (177657) | more than 7 years ago | (#17250326)

It probably depends on what kind of compile you're doing. If you're changing one line and recompiling, I don't think parallelization helps. If you're compiling a large app from scratch, it probably does.

Re:most of them idle most of the time? (1)

Coryoth (254751) | more than 7 years ago | (#17250358)

The two big obstacles to getting better performance from parallelization are that (1) some problems aren't parallelizable, and (2) programmers, languages, and development tools are still stuck in the world of non-parallel programming.
Programmers might be stuck in the world of non-parallel programming, but there are plenty of languages that aren't: Ada, AliceML, Concurrent Clean, E, Eiffel + SCOOP, Erlang, Occam, Oz, and Pict all do concurrency remarkably well. Several of those, Ada, Eiffel, and Erlang in particular, are certainly ready and capable for serious industrial programming.

Intel may be early (2, Interesting)

Eideewt (603267) | more than 7 years ago | (#17249738)

I do have my doubts about Intel's "more cores than you can shake a stick at" approach. I can't see the use in more than a few full-speed cores. They all have to be able to get at instructions quickly or most will just spin their wheels, so hundreds of cores are a big challenge beyond just making them fit and operate together. How much can we parallelize before most of the cores are doing little to nothing because their caches are empty? For that matter, the average user doesn't usually utilize one CPU core fully. Even on dual-core (including actual dual-CPU) desktop machines, both cores are rarely needed for a responsive computer.

Intel's standpoint seems to be that there's a world of data crunching lurking in all our computers (automated photo sorting, face recognition, and photo-realistic rendering), but none of these strike me as killer apps waiting to happen. All are things we could get used to and come to depend on, but I don't think any of them are being held back just because of our computing capacity, although photo-realistic rendering may be close. I'm pretty sure these aren't solved problems yet. Even if we were itching to do all this, one can only sort so many photos. It seems a bit wasteful to have all that power waiting around most of the time. Are we really nearly living in a world in which computing power is so plentiful that we can have that kind of ability even though we hardly ever use it?

On the other hand, AMD's approach seems to have more immediate application. Video/audio encoding and other parallel processes are things that many of us do do frequently. A couple hundred cores could be pressed into use for this, but that seems much less elegant than purpose-built hardware.

I don't know which approach will be best in the long run. Probably both. It does seem to me that Intel is at best a few years too early to be hyping large numbers of cores.

First... (1)

teoryn (801633) | more than 7 years ago | (#17249802)

First we made specific hardware for the task at hand.
Then we made more general hardware and specialized with software.
Now we're making specific hardware again...

AMD C/Fortran compiler (1)

AFairlyNormalPerson (721898) | more than 7 years ago | (#17250204)

If they came out with something like this, it would be nice if they offered C and Fortran compilers that were specifically tuned to take advantage of their new technology. Maybe I'm just too picky, but I don't think it's unreasonable to think that a chip maker should provide offerings tuned specifically for their chips.