
AMD Fusion System Architecture Detailed

timothy posted more than 3 years ago | from the why-when-I-was-a-boy dept.

AMD

Vigile writes "At the first AMD Fusion Developer Summit near Seattle this week, AMD revealed quite a bit of information about its next-generation GPU architecture and the eventual goals it has for the CPU/GPU combinations known as APUs. The company is finally moving away from a VLIW architecture and instead is integrating a vector+scalar design that allows for higher utilization of compute units and easier hardware scheduling. AMD laid out a 3-year plan to offer features like unified address space and fully coherent memory for the CPU and GPU that have the potential to dramatically alter current programming models. We will start seeing these features in GPUs released later in 2011."
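For a rough sense of what that programming-model change means in code, here is a minimal OpenCL host-side sketch of today's discrete-memory model (illustrative only: no error checking, first GPU taken blindly). The unified address space and fully coherent memory described above would let a kernel consume the host array directly, removing the explicit staging copies marked below.

```c
/* Today's model: host and GPU live in separate address spaces, so data is
 * staged through explicit buffer copies.  A fully coherent, unified address
 * space would let the kernel dereference "data" itself. */
#include <CL/cl.h>
#include <stdio.h>

static const char *src =
    "__kernel void scale(__global float *v) {"
    "    size_t i = get_global_id(0);"
    "    v[i] *= 2.0f;"
    "}";

int main(void)
{
    enum { N = 1024 };
    float data[N];
    for (int i = 0; i < N; i++) data[i] = (float)i;

    cl_platform_id plat;  cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Separate device-side allocation: "buf" is not a host pointer. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, sizeof data, NULL, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);
    clSetKernelArg(k, 0, sizeof buf, &buf);

    /* Explicit staging copies -- the part coherent shared memory removes. */
    size_t gsz = N;
    clEnqueueWriteBuffer(q, buf, CL_TRUE, 0, sizeof data, data, 0, NULL, NULL);
    clEnqueueNDRangeKernel(q, k, 1, NULL, &gsz, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof data, data, 0, NULL, NULL);

    printf("data[2] = %f\n", data[2]);   /* expect 4.0 */

    clReleaseKernel(k); clReleaseProgram(prog); clReleaseMemObject(buf);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}
```

Builds against the OpenCL headers and ICD loader (e.g. gcc ... -lOpenCL).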


first (-1)

Anonymous Coward | more than 3 years ago | (#36472080)

1st!

Long time coming (1)

Kuruk (631552) | more than 3 years ago | (#36472086)

What's wrong with hardware!

Humans are too stupid to program it.

Not sure what the fix is, but it isn't hardware: hardware keeps exploding in capability while we're stuck with Windows 7 (lol, 8) or the cats (lol, Lion).

Re:Long time coming (3, Interesting)

noname444 (1182107) | more than 3 years ago | (#36472146)

Integrating the CPU and GPU and unifying the memory address space will probably make things easier for programmers, so hopefully it'll help them utilize the hardware better.

Re:Long time coming (0)

Anonymous Coward | more than 3 years ago | (#36472222)

I prefer the way Intel is handling the new paradigm with its Knights Corner 50-core CPU. These will be much easier to utilise with existing programming languages like OCaml, Erlang, and MPI.

Re:Long time coming (1)

ElusiveJoe (1716808) | more than 3 years ago | (#36472978)

First, this system has distributed memory, which is much harder to program than the shared memory in AMD's case.

Second, MPI is not a language, it's a standard. But in terms of parallel languages it's more like an assembler: data locality and synchronization have to be managed by the programmer. So it cannot substitute for high-level languages.
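For instance, a minimal MPI sketch of the kind of hand-managed data movement and synchronization being described (rank count, buffer size and tag are illustrative assumptions):

```c
/* The programmer decides where the data lives and when it moves; the matched
 * MPI_Send/MPI_Recv pair is both the transfer and the synchronization point.
 * Nothing is inferred by the runtime. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double chunk[256] = {0};

    if (rank == 0 && size > 1) {
        for (int i = 0; i < 256; i++) chunk[i] = i;
        /* Explicit placement: rank 0 hands the block to rank 1. */
        MPI_Send(chunk, 256, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Explicit synchronization: rank 1 blocks until the data arrives. */
        MPI_Recv(chunk, 256, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 got chunk[10] = %g\n", chunk[10]);
    }

    MPI_Finalize();
    return 0;
}
```

(Compile with mpicc, run with mpirun -np 2.)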

As to functional languages, I have no idea how information about the data locality could be extracted from the program. It's not that I doubt them, I really don't know, but would like to learn.

Re:Long time coming (1)

Anonymous Coward | more than 3 years ago | (#36472232)

Nvidia already offers this with CUDA.

Re:Long time coming (2)

eugene2k (1213062) | more than 3 years ago | (#36472392)

No it doesn't. Like OpenCL, CUDA basically means you're sending instructions to the GPU by writing data to a mapped memory region. Sharing an address space is not possible at that level; it's only possible with integration at the CPU level.
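To make that concrete, a minimal CUDA runtime-API sketch (host code only, error checking and the kernel launch elided) of the separate-address-space model described above:

```c
/* d_buf is an address in the GPU's own address space: dereferencing it on
 * the CPU is invalid, and handing a plain host pointer to the device is
 * equally invalid -- hence the explicit cudaMemcpy staging that a shared,
 * coherent address space would make unnecessary. */
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    enum { N = 1024 };
    float h_buf[N];
    for (int i = 0; i < N; i++) h_buf[i] = (float)i;

    float *d_buf = NULL;
    cudaMalloc((void **)&d_buf, sizeof h_buf);          /* device allocation */

    /* Two address spaces, two explicit copies. */
    cudaMemcpy(d_buf, h_buf, sizeof h_buf, cudaMemcpyHostToDevice);
    /* ... kernels that read/write d_buf would be launched here ... */
    cudaMemcpy(h_buf, d_buf, sizeof h_buf, cudaMemcpyDeviceToHost);

    printf("round trip ok, h_buf[3] = %f\n", h_buf[3]);
    cudaFree(d_buf);
    return 0;
}
```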

Re:Long time coming (0)

Anonymous Coward | more than 3 years ago | (#36473992)

OpenCL is not tied to targeting the GPU; unlike with CUDA, using the CPU in OpenCL is not considered to be only an emulation.

Re:Long time coming (4, Insightful)

TheRaven64 (641858) | more than 3 years ago | (#36472346)

It's not that difficult to write code that takes full advantage of modern hardware. The limitation is need. Every 18 months, we get a new generation of processors that can easily do everything that the previous generation could just about manage. Something like an IBM 1401 took a weekend to run all of the payroll calculations for a medium sized company in 1960, using heavily optimised FORTRAN (back when Fortran was written in all caps). Now, the same calculations written in interpreted VBA in a spreadsheet on a cheap laptop will run in under a second.

It would be naive to say that computers are fast enough - that's been said every year for the last 30 or so, and been wrong every time - but the number of problems for which efficient use of computational resources is no longer important grows constantly. Look at the number of applications written in languages like Python and Ruby and then run in primitive AST interpreters. A decent compiler could run them 10-100x faster, but there's no need because they're already running much faster than required. I work on compiler optimisations, and it's slightly disheartening when you realise that the difference that your latest improvements make is not a change from infeasible to feasible, it's a change from using 10% of the CPU to using 5%.

Re:Long time coming (2)

noname444 (1182107) | more than 3 years ago | (#36472530)

While I agree with you regarding application programming, need, etc., I should clarify that I was talking about graphics/game applications that require the hardware's full potential.

If you compare this new architecture with an arguably over-complicated one like the PlayStation 3's, I'd argue that writing software that utilizes the hardware to its full potential is indeed hard. And in this context, a more elegant, integrated GPU/CPU will make the lives of us poor indie game programmers a bit easier.

Re:Long time coming (2)

Bert64 (520050) | more than 3 years ago | (#36472662)

The current trend seems to be towards more power efficient hardware and virtualization (and dynamic scaling etc), rather than ever faster hardware...
So while your interpreted spreadsheet may be able to compute payroll calculations in a second, your hardware will consume more power doing it that way than using an optimized implementation... Also, with sub-optimal code you won't be able to run as many instances on a single piece of hardware, and will thus require more hardware.

Re:Long time coming (0)

Anonymous Coward | more than 3 years ago | (#36472666)

it's a change from using 10% of the CPU to using 5%

In other words: from serving 10 concurrent clients to serving 20. Much appreciated I would say.

Re:Long time coming (4, Interesting)

TheRaven64 (641858) | more than 3 years ago | (#36472706)

Not really. Now the CPU spends 95% of its time waiting for data from the network or disk instead of 90%, but the CPU is rarely the bottleneck these days.

Around the time of the Pentium II, Intel did some simulations where they increased the (simulated) speed of the CPU running typical applications and measured performance. They found that, if the speed of other components didn't change, an infinitely fast CPU (i.e. all CPU operations took 0 simulation time) ran about twice as fast as the ones that they were shipping. It doesn't take much of an improvement in CPU speed before the CPU just isn't the bottleneck anymore, even in processor intensive tasks. RAM and disk bandwidth and latency quickly take over. This was one of the problems Apple had with the PowerPC G4 - the RAM wasn't fast enough to supply it with data as fast as it could process it, so it rarely came close to its theoretical maximum speed.
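Back-of-the-envelope, that "about twice as fast" result is just Amdahl's law with the CPU accounting for roughly half the wall-clock time (the 50% split is an assumption inferred from the result, not Intel's published number):

```c
/* Amdahl's law: if the CPU is busy for fraction p of the wall-clock time and
 * RAM/disk/network take the rest, speeding the CPU up by a factor s gives an
 * overall speedup of 1 / ((1 - p) + p / s).  With p = 0.5, even an
 * "infinitely" fast CPU only doubles throughput. */
#include <stdio.h>

static double amdahl(double p, double s) { return 1.0 / ((1.0 - p) + p / s); }

int main(void)
{
    double s[] = { 2.0, 10.0, 1e9 };   /* 2x, 10x, effectively infinite */
    for (int i = 0; i < 3; i++)
        printf("CPU %gx faster -> overall %.2fx\n", s[i], amdahl(0.5, s[i]));
    return 0;   /* prints 1.33x, 1.82x, 2.00x */
}
```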

Re:Long time coming (4, Interesting)

pandrijeczko (588093) | more than 3 years ago | (#36472858)

I think what is going to be really interesting is to see what this does to PC gaming from the perspective of non-Windows operating systems.

APUs are clearly a step forward in the direction of putting powerful graphics processing on portable devices, an area where Microsoft and Windows have very little market share at the moment.

Therefore, this surely must bring DirectX's domination in the PC gaming market into question - will this therefore result in more commercial games being developed around OpenGL, thus making cross-platform games much easier to develop?

Re:Long time coming (2)

hairyfeet (841228) | more than 3 years ago | (#36475038)

Frankly I really don't see how much better GPUs can get picture-wise myself. Hell, my HD4850, which my GF got me for my BDay, cranks the living hell out of the purty, so much that I have to be careful not to be distracted by the purty and get my ass blown off! And maybe it is different with CUDA, but the only thing I've seen come out for Stream is a video transcoder that frankly doesn't give you as good a result as a plain-Jane CPU-only transcode, and the time savings aren't worth the picture hit.

So while I'm sure this will make programmers happy, I really don't see how it will make much of a difference to Joe User. Hell, even the sub-$150 GPUs that are the biggest market have so much purty being thrown on the screen it is truly insane; I never thought I'd see the day that human faces and movements would get THAT realistic!

And finally there is that bloated stinking dead elephant in the room no one mentions: I'm of course talking about the craptastic consoles that everyone is writing the games for. While I like the fact that the vast majority of games will run at native resolution with lots of bling even on my three-generations-old HD4850, I'll be the first to admit PCs aren't the main target market anymore. Hell, the new Nintendo is gonna have the HD4xxx series, which like mine is already three generations behind, and it ain't even come out yet!

So I honestly don't see how all this extra goodness is gonna make much of a diff. The developers write to the consoles first, the consoles don't have these features, therefore nobody writes to them. Hell, look at how few DX10 and DX11 games are out, simply because the consoles are DX9. If the other consoles follow Nintendo then we'll be seeing DX10 in late 2012, so maybe this cutting-edge stuff will get used by the majority of games around 2022, when you can pick up these chips at a yard sale for $5. Depressing, but that is life.

AMD's next strike against intel (0)

Anonymous Coward | more than 3 years ago | (#36472106)

...and Intel will either buy Nvidia or attempt to do the same thing; I'm betting on the former.

Re:AMD's next strike against intel (1)

loufoque (1400831) | more than 3 years ago | (#36472130)

They already have Larrabee, which is pretty much the same thing but far better.

Re:AMD's next strike against intel (2)

chaboud (231590) | more than 3 years ago | (#36472224)

Dead. Project.

Larrabee proved to have a few fundamental flaws, last I checked.

Re:AMD's next strike against intel (0)

Anonymous Coward | more than 3 years ago | (#36472248)

Check again. The "cloud on a chip" (already being used for ray tracing) and Knights Corner (a 50-core CPU currently being tested by a number of chosen universities and software developers) are based on Larrabee, and doing very well.

Re:AMD's next strike against intel (4, Insightful)

myurr (468709) | more than 3 years ago | (#36472250)

Except Larrabee failed because performance didn't live up to expectations and was a generation behind the best from AMD and nVidia. What this development from AMD allows is much more efficient interaction and sharing of data between a traditional CPU and an on-die GPU through updates to the memory architecture. These memory changes will also allow the parts to take advantage of the very fastest DDR3 memory that current CPUs struggle to fully utilise.

The two most obvious scenarios for this technology are for accelerating traditional problems that take advantage of the existing vector units (SSE, etc.) by utilising the integrated GPU to massively accelerate these programs, and in gaming rigs where there is a discrete GPU the new architecture allows the integrated GPU to share some of the workload. The example given, and one that is increasingly relevant as all games now have physics engines, is for the discrete GPU to concentrate on pushing pixels to the screen and the integrated GPU to be used to accelerate the physics engine.
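As a sketch of how an engine might steer work that way today, here is a small OpenCL program that enumerates GPUs and uses the CL_DEVICE_HOST_UNIFIED_MEMORY query as a rough "is this the integrated part?" test. The heuristic and the dispatch policy are illustrative assumptions, not AMD's documented mechanism.

```c
/* Prefer the GPU that reports host-unified memory (typically the integrated
 * APU graphics) for physics-style GPGPU work, leaving the discrete card free
 * to push pixels. */
#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    cl_platform_id plats[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, plats, &nplat);
    if (nplat > 8) nplat = 8;

    for (cl_uint p = 0; p < nplat; p++) {
        cl_device_id devs[8];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU, 8, devs, &ndev) != CL_SUCCESS)
            continue;
        if (ndev > 8) ndev = 8;

        for (cl_uint d = 0; d < ndev; d++) {
            char name[256] = "";
            cl_bool unified = CL_FALSE;
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof name, name, NULL);
            clGetDeviceInfo(devs[d], CL_DEVICE_HOST_UNIFIED_MEMORY,
                            sizeof unified, &unified, NULL);
            printf("%s: %s\n", name,
                   unified ? "integrated-style -> physics queue"
                           : "discrete-style -> rendering");
        }
    }
    return 0;
}
```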

Is it a game changer? Probably not in the first couple of generations, although it would be a very welcome boost to AMD's platform that could get them back in the game as the preferred CPU maker. But long term Intel will have to come up with an answer to this in some form as programmers get ever more adept at exploiting the GPU for general purpose computing, and changes like those AMD are incorporating into their designs make these techniques ever more powerful and relevant to wider ranges of problems. Adding more x86 cores won't necessarily be the answer.

Re:AMD's next strike against intel (0)

Anonymous Coward | more than 3 years ago | (#36472486)

The two most obvious scenarios for this technology are for accelerating traditional problems that take advantage of the existing vector units (SSE, etc.) by utilising the integrated GPU to massively accelerate these programs, and in gaming rigs where there is a discrete GPU the new architecture allows the integrated GPU to share some of the workload.

Interesting. I hadn't realized that the Fusion units could replace the SSE vector units in the CPU, but it makes sense. Do you know if AMD actually has plans in that direction (seamless thread migration between CPU/GPU)?

I'm very excited about having an actual MMU for the GPU. Combined with unified memory addressing and protection, this could prove to be a real boost for parallel workloads. Now if only we could have a sane programming environment for heterogeneous systems that doesn't depend on DirectX...

Re:AMD's next strike against intel (1)

fast turtle (1118037) | more than 3 years ago | (#36474294)

Intel appears to be following a discrete-core design, while AMD with Fusion is following an all-in-one design. From looking at what AMD has released of their roadmap, it appears that unlike Intel, the APU will become the math core (FPU) of the chip, with the CPU core becoming even smaller. This appears to be planned for either the 2nd or 3rd generation of the chips.

Although we're seeing continual die shrinkage by Intel, I suspect that AMD's integration will result in far better energy savings than what Intel gains from die shrinkage. From a performance stance, the APU already beats Intel's GPU by a large margin, and looking at the power consumption graphs from http://www.tomshardware.co.uk/a8-3500m-llano-apu,review-32207-22.html [tomshardware.co.uk] we're already seeing a more stable draw from the Fusion design compared to the i3. Yes, the Intel design does drop into a far lower power state, but with proper emphasis on the rest of the system chips, AMD should be able to cut power even further while retaining performance.

Re:AMD's next strike against intel (1)

loufoque (1400831) | more than 3 years ago | (#36472546)

Except Larrabee failed because performance didn't live up to expectations and was a generation behind the best from AMD and nVidia.

The original plan was to release a 32-core Larrabee in 2009, with a maximum theoretical performance of 2 TFlops. That's more than the most powerful Nvidia card available today.
And unlike a GPU, you could actually reach that performance, since it's a real x86-compatible CPU you have full access to, with intrinsics similar to those of SSE (Larrabee is pretty much the ideal SIMD ISA -- much better than SSE or AVX) available in regular compilers.
It also doesn't contain hardcoded fixed-function pipelines, which is a good thing.

What this development from AMD allows is much more efficient interaction and sharing of data between a traditional CPU and an on-die GPU through updates to the memory architecture. These memory changes will also allow the parts to take advantage of the very fastest DDR3 memory that current CPUs struggle to fully utilise.

Larrabee uses a high-bandwidth ring bus to communicate between cores, like the Cell architecture; that has been proven to be a very good design, and Intel adds cache-coherency hierarchy on top of it so that all cores see the same shared memory.

Re:AMD's next strike against intel (2)

Chris Mattern (191822) | more than 3 years ago | (#36472774)

The original plan was to release a 32-core Larrabee in 2009, with a maximum theoretical performance of 2 TFlops.

But since they couldn't do it, the original plan does mean much, now does it?

Larrabee (1)

toastar (573882) | more than 3 years ago | (#36473674)

Believe it or not, making a chip the size of a football field isn't really the best idea.

Re:AMD's next strike against intel (1)

Chris Mattern (191822) | more than 3 years ago | (#36473690)

Argh. *doesn't* mean much.

Re:AMD's next strike against intel (1)

Dr. Spork (142693) | more than 3 years ago | (#36473734)

I think about it like this: What are some computational problems which today justify a home user in buying an expensive machine rather than a cheap one? Not browsing or productivity or whatever else my mom does. It's media encoding, media processing, rendering and gaming. All of these could be radically sped up when programs effectively make use of the GPU as a supercharged vector unit extension of the CPU. Then there are computer functions like web hosting and compiling that won't benefit from this, but not that many computers do this. So this sort of thing will make a real difference to many real users.

CONCLUSION: AMD IS DEAD !! (-1)

Anonymous Coward | more than 3 years ago | (#36472116)

Again !!

2005 was a very good year !! Then it died, just like Frank !!

The first problem that comes to mind.. (1)

Anonymous Coward | more than 3 years ago | (#36472128)

Is that the modular nature of current components allows for relatively easy upgrading and a comparatively low cost. Buying a new graphics card that has the price of a GPU and dedicated video RAM is reasonable. Having to buy a new CPU every time you want to upgrade your GPU could get unreasonably expensive fast.

Re:The first problem that comes to mind.. (4, Insightful)

Rosco P. Coltrane (209368) | more than 3 years ago | (#36472154)

I think only a small number of computer users upgrade components these days - gamers and power users. But the majority of people these days buy a beige box or a laptop and never ever open them. From a business point of view, combining the GPU and the CPU makes sense. Heck, nobody cried when separate math coprocessors disappeared.

Re:The first problem that comes to mind.. (1)

kevinmenzel (1403457) | more than 3 years ago | (#36472234)

That may be the case, but the boxes they buy benefit from the economy of scale offered by being able to separate those components. Every time I go to a computer store, I'd say there's a wide variety of CPUs and GPUs in the boxes people can buy, in many combinations. This allows customers to buy what they need. For some, that's a moderate processor with moderate graphics; for others, it's a moderate processor with relatively decent graphics (to play Blu-ray discs or 1080p Flash videos); gamers want specific GPUs mixed with specific CPUs to give them the best performance in the games they care about. In professional workstations, you want a workstation GPU that's going to have similarities to consumer GPUs, but will focus on different tasks. Home recording enthousiasts, as they delve deeper into the field, need to have control over those elements in order to avoid potential conflicts with audio hardware. Some people need to be able to support more than one monitor, but others only need one. Some people need to be able to output to S-Video to connect to an old projector - but might not need that feature in a year, when the projector is set to be upgraded to HDMI, at which point the IT team will want to replace some graphics cards.

Essentially, there are damned good reasons to have things separate, because computers, as much as they are general-purpose machines, aren't actually so generalized that one can say "You only need 3 different kinds of computer." If that were ACTUALLY true, Apple would be doing a lot better than they are in computer sales.

But it's not true. So hopefully Intel WON'T push the same thing, because then pretty much every application that matters will still have to support the current model, with all of its difficulties but likewise all of its benefits.

Re:The first problem that comes to mind.. (2)

TheThiefMaster (992038) | more than 3 years ago | (#36472350)

I would imagine that you'll likely still be able to upgrade by adding a discrete graphics card for quite some time.

Re:The first problem that comes to mind.. (2)

Tapewolf (1639955) | more than 3 years ago | (#36472450)

Since this design seems to be about using the APU for non-graphics things as well, you could probably stick an nVidia card in the PCI-E slot for better video and continue to use the Fusion APU for OpenCL (or whatever) at the same time.

Re:The first problem that comes to mind.. (3, Informative)

Targon (17348) | more than 3 years ago | (#36472358)

There will still be that same ability to get separate components, but the GPU element is being moved from the chipset onto the CPU (now called an APU).

There really have been only three general configurations:
1: CPU with integrated graphics on the motherboard
2: CPU with integrated graphics on the motherboard PLUS a discrete video card/GPU.
3: CPU without integrated graphics on the motherboard with ONLY one or more video cards.

So, what this does is to update 1 and 2, since you can still add a discrete video card. Since the graphics portion of Fusion is better than what Intel offers, this isn't a bad setup. There will also be the option to swap the APU with a faster version that has both a faster CPU core as well as faster GPU core in most motherboards.

Yes, there are certain advantages offered by the APU design, but it isn't an "all or nothing" offering. AMD will continue to offer straight CPUs (with Bulldozer being the next core design), and if you think about it, AMD may go to a tick-tock design like Intel has, but rather than it being based on core design and fab process technology going back and forth, we may see AMD going CPU core design, GPU design, and then APU to combine the latest CPU and GPU designs.

Right now, many are waiting for AMD to release its first all new core design since 2003, since that will hopefully get AMD the better CPU core performance that many have been waiting for.

Re:The first problem that comes to mind.. (2)

TheRaven64 (641858) | more than 3 years ago | (#36472386)

Laptop sales passed desktop sales a couple of years ago. Anyone buying a desktop is now in the minority. With laptops, the constraints are different. Having the CPU and GPU in separate chips complicates the board design, which adds to the cost. With integrated CPU and GPU designs, you can have a simple board design and just pop a faster chip in the top of the line models.

Upgrading your GPU separately? My first PC had a slot for installing an FPU. You could get one from Intel, but you could get faster ones from AMD. Then Intel integrated their inferior FPU into the die with the 486. How many people now complain about not being able to replace their FPU with a faster third-party one?

Re:The first problem that comes to mind.. (5, Insightful)

MrHanky (141717) | more than 3 years ago | (#36472506)

One reason why laptop sales passed desktop sales is of course that desktops last longer, due to their upgradeability.

Re:The first problem that comes to mind.. (0)

Anonymous Coward | more than 3 years ago | (#36472698)

Very insightful indeed. Too bad I'm out of mod points right now.

Re:The first problem that comes to mind.. (1)

drinkypoo (153816) | more than 3 years ago | (#36472848)

Who upgrades desktop machines? Most desktops go through their entire life without a single upgrade. Most users will pitch them and buy another computer if they develop a problem they don't know how to fix, let alone if the machine is too slow. Remember, we live in a disposable culture. It's interesting in that the Native Americans were big on throwing stuff into big piles too, but of course nothing they were working with was leaving a toxic debt.

Re:The first problem that comes to mind.. (0)

Anonymous Coward | more than 3 years ago | (#36472886)

Most companies will make do with very old desktop systems internally (old desktops stay hidden, where equally old laptops would embarrass the company image), and many companies will upgrade those desktops at least with more memory to get a few more years out of them before scrapping.

Re:The first problem that comes to mind.. (1)

nonicknameavailable (1495435) | more than 3 years ago | (#36473056)

I upgraded my Scaleo X with a new motherboard, hard drive, CPU, RAM, and GPU.

Re:The first problem that comes to mind.. (1, Flamebait)

gad_zuki! (70830) | more than 3 years ago | (#36473728)

Cheap people, gamers, power users, and businesses do. That's probably a good chunk of the desktop market right now.

> Remember, we live in a disposable culture.

Would you like some cheese with your whine?

Re:The first problem that comes to mind.. (0)

drinkypoo (153816) | more than 3 years ago | (#36474722)

Would you like some cheese with your whine?

Would you like some Flaturin with your trash culture of slavery?

Re:The first problem that comes to mind.. (1)

RobbieThe1st (1977364) | more than 3 years ago | (#36473218)

And they're less likely to fail due to less movement etc.
There are *still* P4s in use, though they are finally being phased out -- and then only (likely) because the PSUs' caps are failing. Same with other hardware from the same vintage, like screens.

Re:The first problem that comes to mind.. (0)

Anonymous Coward | more than 3 years ago | (#36472442)

Video cards have been included on motherboards for years now, and it hasn't hurt the high-end gaming video card market much at all. I can only imagine that drivers able to utilize both GPUs would be a fantastic way to add even more horsepower as a future upgrade.

Re:The first problem that comes to mind.. (0)

Anonymous Coward | more than 3 years ago | (#36472854)

enthousiasts

Ye olde users?

Re:The first problem that comes to mind.. (1)

obarthelemy (160321) | more than 3 years ago | (#36473712)

Yes and no. Most customers (I'd guess 80%) actually don't care at all about performance (neither CPU nor GPU) because whatever's current nowadays is good enough for them. For those, an APU means cheaper prices and more hardware/software reliability.

The rest will indeed need more CPU and/or GPU power, and neither Llano nor its successor will be for them, because the CPUs are lackluster, the GPU is OK but not great (equivalent to an entry-level discrete card), and, on top of that, CPU and GPU have to fight for RAM bandwidth, which becomes a major bottleneck.

Again, the vast majority of the market, the ones looking at a Core i3 and a sub-$75 vidcard, should look at Llano.

Re:The first problem that comes to mind.. (2)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#36472504)

Also, nothing about AMD's new design precludes discrete GPUs more or less similar to today's models, it is just an effort to make the (economically inevitable) integrated GPU more useful by virtue of its close integration with the system, rather than simply cheaper as integrated GPUs are today.

Expansion will be slightly trickier than today's Crossfire/SLI, because certain GPU elements (while comparatively few) will enjoy much faster access to the CPU and main memory, while the expansion GPU(s) will presumably have many more elements, and their own pool of RAM, but be a PCIe bus away from the CPU. I'm sure that the beta drivers and the edge cases will be pretty dire; but it will eventually be worked out.

Re:The first problem that comes to mind.. (1)

GreatBunzinni (642500) | more than 3 years ago | (#36472812)

The reason nobody cried when separate math coprocessors disappeared is that math coprocessors didn't actually disappear, and neither did separate ones.

Back in those days, you needed a math coprocessor because more often than not the CPU didn't offer any support for basic features such as floating point arithmetic, which happens to be of fundamental importance. Yet even after that support moved directly onto the CPU, and even after vectorized versions of it became a standard feature, you still have a considerable number of people spending ungodly amounts of money on separate math coprocessors, which are more commonly known as... graphics cards. If there is any doubt that a graphics card is nothing more than a glorified math coprocessor, then learning about OpenCL and CUDA should be enough to dispel it. And you know what? Nowadays people spend more money on those graphics coprocessors than on CPUs.

Now, I don't know how the majority of the people you know decide to lead their lives, but I do believe that if someone decides to take away their ability to switch graphics cards, or even in some cases install multiple graphics cards on a computer, then they will proverbially cry. Or at the very least be extremely pissed. Adding to this, from the consumer standpoint the ability to choose and to upgrade their graphics cards is one of the things that still make desktop computers as relevant as ever. Just because nowadays we have practically disposable computers lying around, such as cheap netbooks, and just because people purchase those portable computers to do other trivial stuff such as communicating and browsing simple sites, it doesn't mean that everyone suddenly stopped needing a graphics card which can be upgraded.

CPU, FPU, GPU, ALU, control unit, packaging (3, Interesting)

DragonHawk (21256) | more than 3 years ago | (#36473128)

A "math coprocessor" is just the FPU (Floating Point Unit) of a particular era of microcomputers. The FPU implements machine instructions for floating point math. Before the microcomputer, when machines filled cabinets, you might have an FPU (on one or more circuit boards), you might not. Same with the early micros. Eventually they built the FPU into the same die as the CPU, so no need for a separate chip. The FPU is always tightly coupled to the CPU because it shares the same control unit as the CPU. (A CPU consists of a control unit plus an arithmetic/logic unit.) You can't change the design of one without changing the other.

A GPU is different from an FPU. It doesn't process CPU instructions -- it has its own control unit. GPUs operate independently of the CPU.

Building a GPU into the same die or IC package as the CPU won't prevent you from installing a discrete graphics card. No need to get all upset about it.

Although the tech may eventually get to the point where you won't bother with a discrete graphics card. I suspect we'll eventually see a large package containing CPU, GPU and memory, for performance reasons. One will upgrade them all together.

Before you panic about that: In the early days of minicomputers, CPUs were implemented as many boards containing lots of discrete logic and small scale integration. It was possible to do things like change how the adder was implemented, how memory was accessed, or add whole new machine instructions. You could "upgrade" at that level. That capability was lost with the move to (very) large scale integration. However, things are so much cheaper and faster with (V)LSI that it's worth it.

So if $100 will bring you a new CPU, GPU, and RAM, running 10x faster than what you had before, then yah, I can see it happening, and being a win.

Re:The first problem that comes to mind.. (0)

ledow (319597) | more than 3 years ago | (#36472242)

I have to say - I can't remember the last time I upgraded a video card (it may have been the AGP era), and I play 20+ hours a week just on Steam games.

Since we hit the CPU speed limits, and software authors can't just make you upgrade, there comes a point where a computer is "good enough" for the vast majority of games for almost its entire usable life. By the time it comes to upgrades, it's usually cheaper to just buy a new computer with the components you want than trying to force your motherboard into CPU upgrades, RAM upgrades, GPU upgrades, PSU upgrades to cope with the above, etc.

I was one of the first people with a 3DFX card, back in the day, and it was worth it then - it literally made the impossible possible. Now a new graphics card might let you get a handful more FPS, or move to a slightly higher res, but pretty much they all do what you want and the difference between the most expensive and the cheapest doesn't really justify the price difference. If you do upgrade, you're usually looking at higher specs across the board rather than a super-expensive graphics card in a machine without enough RAM / cores to actually fill it up with data.

I also manage networks for a living - I haven't upgraded an internal card in nearly a decade on that side, except on servers (RAID cards, network cards, etc.). My work laptop was bought by my workplace as the cheapest thing we could find which had an Intel chip and certain other unrelated specifications (e.g. number pad, etc.). It has a GT130M in it and - although I don't play the very-latest-and-greatest - there's nothing I've thrown at it yet where it's really struggled. Especially in the laptop arena, a combined CPU/GPU is a brilliant idea because it forgoes that long journey over the bus to an under-funded mobile graphics chip that you can't upgrade anyway, but the heat would be a worry. I haven't ever used the ExpressCard slot in my laptop for anything.

Everybody moaned about combining the FPU "because it just bumps up the price and we won't use it". Everybody moaned about integrated sound - but I haven't bothered to buy a sound card in over a decade. At the moment, the bugbear is integrated graphics (which have been around for a LONG time) but nobody except hardcore gamers are really noticing a problem and the situation is improving all the time.

I don't think it's a trend you'll be able to stop, and I think the separate graphics card market has been dropping off in recent years in favour of integrated chipsets precisely because nobody is really willing to pay hundreds in order to barely notice a difference in their graphics capabilities (gamers would notice, business users, laptop users, ordinary families, etc. wouldn't).

Re:The first problem that comes to mind.. (1)

MareLooke (1003332) | more than 3 years ago | (#36472290)

You forget the fact that the majority of PC games are created with console hardware in mind and as such use only a fraction of what a modern GPU is capable of.

That said, the "good enough" argument does fly for desktop users. I expect graphics boards to become like sound cards, they will be useful for specific applications (musicians come to mind for sound cards) and the people that need them will buy them.

Re:The first problem that comes to mind.. (1)

Smirker (695167) | more than 3 years ago | (#36472292)

Heck, I can't even notice the difference these days to SCREEN 13.

Re:The first problem that comes to mind.. (1)

cynyr (703126) | more than 3 years ago | (#36472736)

Just upgraded my desktop, and by that I mean bought a new one, and moved the server into the old one. At some point in the future, this new desktop will become the family computer, I'll have a new desktop, and the server will still be humming along.

As for "bumping up the price" give me tools to use it for GPGPU while using the PCIe card for video and I'm sold.

I think some business users will notice, I have a nvidia Quadro in my work laptop for a reason.

Well, CAD and using the GPU as a CPU are still there (1)

Joe_Dragon (2206452) | more than 3 years ago | (#36473412)

Well, CAD and using the GPU as a CPU are still there. OpenCL makes the video card into a high-end FPU that can do stuff the main CPU sucks at.

Anyway, a video card still has faster RAM that isn't shared with system RAM. Onboard video on some boards has a max of 2 displays (some boards force one to be analog). Now if ATI/AMD can do onboard video with DisplayPort then you can do more. But I think if you need 3-4+ screens, an add-in video card may be better and save you the RAM hit.

Re:The first problem that comes to mind.. (0)

Anonymous Coward | more than 3 years ago | (#36472266)

You can also add an external GPU via PCIe and disable the integrated one, at least on the current AMD Fusion platform

Re:The first problem that comes to mind.. (1)

Targon (17348) | more than 3 years ago | (#36472320)

AMD will still make straight CPUs as well as GPUs, but for the low end of the market that was already going to use integrated graphics, the APU makes more sense. You can also add a video card to a desktop, or possibly some laptops, that have a Fusion APU. As it stands now, Llano is still going to be using CPU cores that are based on the current Athlon II/Phenom II cores. Bulldozer is the next core design from AMD and will have CPU-only implementations first, and then later we will see new Fusion APUs that use that new processor core design.

Think of it the way you do computers today, you have your low end with integrated graphics that NEVER gets updated, then you have your mid-range, and then you have the high end. For MOST users, there is virtually no need for a machine that is more powerful than what Llano offers, but there are still a good number who want or need more.

Re:The first problem that comes to mind.. (1)

hedwards (940851) | more than 3 years ago | (#36474324)

Perhaps for most folks around here it's low end, but I recently got one, and I've been shocked at how well it performs. You're not going to be playing games that were made in the last few years, but it does a really good job at the sorts of things that people typically do. I needed something portable, durable and power efficient, and it does that quite well. I'm really curious to see what the new tool kits are going to be able to provide.

Re:The first problem that comes to mind.. (1)

hairyfeet (841228) | more than 3 years ago | (#36475546)

Uhhh... didn't read TFA, did you, Mr. AC? It says plainly that in cases where there is a dedicated GPU, the integrated one will take over the physics and leave the dGPU to push the graphics. Think of it as the biggest, baddest FPU ever created; sure it'll do graphics, and if all your friends want to do is play WoW it'll work fine for that, but it isn't gonna stomp some GDDR5, 800-stream-processor beast.

But the nice thing about this design is that, unlike today, all those IGPs won't just be turned off if you have a real card; they'll be doing physics and other number crunching, thus making your monster GPU even more insanely powerful because it isn't having to do both graphics and physics anymore. Frankly, from the sounds of it, it will be pretty sweet. Just give it about 2 years for everything to get integrated, which will be right about the time I'll be ready to move this AMD quad into the background and build me a new box. Go AMD!

Prepare for the rage. (-1)

Anonymous Coward | more than 3 years ago | (#36472174)

That distant thunder coming in low on the horizon is the angry, angst-filled Crysis and CoD players who will shit their pants when they wake up. Right now let us rationally view what this may or may not do for the average consumer, how it may or may not make a huge impact on the PC landscape. For tomorrow the geeks will be in a thrall and none of our polite conversation shall be sacrosanct. They will call forth for blood. For vengeance. For $800 video cards. And neither you nor I will quench their bloodlust.

-Shoe

I like the idea, but have concerns (5, Interesting)

Sycraft-fu (314770) | more than 3 years ago | (#36472230)

One concern of mine is simply performance with unified memory. The reason is that memory bandwidth is a big factor in 3D performance. The kind of math you have to do just needs a shitload of memory access. This is why GPUs have such insane memory configurations. They have massively wide controllers, special high performance ram (GDDR5 is based on DDR3, but higher performance) and so on. That's wonderful, but also expensive.
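A rough back-of-the-envelope of the traffic involved (every number below is an illustrative assumption):

```c
/* Even simple per-pixel work at 1080p/60 adds up to a few GB/s before
 * texture-heavy shaders, multiple render targets and post-processing
 * multiply it.  Dual-channel DDR3-1333 peaks around 21 GB/s and is shared
 * with the CPU, while high-end discrete cards pair wide buses with GDDR5
 * for well over 100 GB/s. */
#include <stdio.h>

int main(void)
{
    const double pixels       = 1920.0 * 1080.0;
    const double bytes_per_px = 4.0;   /* 32-bit colour */
    const double touches      = 8.0;   /* colour + depth + texture + overdraw (assumed) */
    const double fps          = 60.0;

    printf("approx framebuffer/texture traffic: %.1f GB/s\n",
           pixels * bytes_per_px * touches * fps / 1e9);   /* ~4.0 GB/s */
    return 0;
}
```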

So it seems to me that you run into a situation where either you are talking about needing much more expensive memory for a computer, possibly with additional constraints (at high speeds memory on a stick isn't feasible; electrical issues are such that you have to solder it to the board), or a system where your performance suffers because it is starved for memory bandwidth. Please remember that it would also have to share memory with the CPU.

Perhaps they've found a way to overcome this, but I'm skeptical.

I also worry this could lead to fragmentation of the market. What I mean is right now we have a pretty nice unified situation from a developer perspective. AMD and Intel have all kinds of cross licensing agreements with regards to instruction sets. So the instructions for one are the instructions for the other. While there are special cases, like 3DNow that only AMD does, or AVX which Intel has and AMD has yet to implement, by and large you have no problems supporting both with a very similar, or dead identical, codebase.

Likewise GPUs are unified from an app perspective. You talk to them with DirectX or OpenGL. The details of how AMD or nVidia do things aren't so important; that's handled. You use one interface to talk to whatever card the user has. Not saying there can't be issues, but by and large it is the same deal.

Well this could change that. APUs might need a drastically different development structure. Ok fine, except AMD might be the only company that has them. Intel doesn't seem to be going down this road right now, and nVidia doesn't have a CPU division. So then as a developer you could have a problem where something that works well for traditional CPU/GPU doesn't work well, or maybe at all, for an APU.

That could lead to a choice of three situations, none that good:

1) You develop for traditional architectures. That's great for the majority of people, who are Intel owners (and people who own what is now current AMD stuff) but screws over this new, perhaps better, way of doing things.

2) You develop for the APU. That is nice for the people who have it but it screws over the mass market.

3) You develop two versions, one for each. Everyone is happy but your costs go way up from having more to maintain.

Of course even if everything goes APU it could be problematic if AMD and Intel have very different ways of doing things. Their cross licensing does not extend to this sort of thing, and I could see them deciding to try and fight it out.

So neat idea, but I'm not really sure it is a good one at this point.

Re:I like the idea, but have concerns (2)

Joce640k (829181) | more than 3 years ago | (#36472332)

This is why GPUs have such insane memory configurations. [...] wonderful, but also expensive.

Have you seen what sub-$100 graphics cards can do these days?

This sort of integration could save enough money at the manufacturing end to make that level of graphics almost free to the end user, especially in laptops. It's a huge win.

Re:I like the idea, but have concerns (1)

LWATCDR (28044) | more than 3 years ago | (#36475014)

Yes
http://www.tomshardware.com/reviews/best-graphics-card-game-performance-radeon-hd-6670,2935-2.html [tomshardware.com]
For $65 you can get a card with great 1680x1050 performance in most games.
In other words, good enough for most people.
If they can get APUs up to that level, which sounds possible, it really will be great.

"Intel doesn't seem to be going down this road... (0)

Anonymous Coward | more than 3 years ago | (#36472344)

Intel doesn't seem to be going down this road right now

What about the GPU-equipped Sandy Bridge CPUs Intel has out now, and had out before Fusion APUs were on the market?

Re:"Intel doesn't seem to be going down this road. (1)

Rockoon (1252108) | more than 3 years ago | (#36472948)

To quote AnandTech: "On average the A8-3850 [GPU] is 58% faster than the Core i5 2500K [GPU]. If we look at peak performance in games like Modern Warfare 2, Llano delivers over twice the frame rate of Sandy Bridge. This is what processor graphics should look like."

This is comparing AMD's flagship APU @ $170 vs Intel's mid-range Sandy @ $220.

The road Intel is going down is the same road it's always gone down: delivering sub-par graphics performance to a crowd that isn't going to notice.

but better video at a lower cost is something to (1)

Joe_Dragon (2206452) | more than 3 years ago | (#36473460)

But better video at a lower cost is something to keep in mind.

Apple had better look out: a low-end Mini with an i3 and onboard video at $700+ will be a joke next to what AMD will have, plus it will have something like 8-16 unused PCIe lanes. Apple had better put a video chip on x8 PCIe and two Thunderbolt ports on the other x8 PCIe.

Re:but better video at a lower cost is something t (1)

Rockoon (1252108) | more than 3 years ago | (#36473618)

The Mac Mini uses an nVidia 320M, which benchmarks at about half of the AMD 6550 Llano.

Re:"Intel doesn't seem to be going down this road. (1)

Targon (17348) | more than 3 years ago | (#36472976)

Intel GPU technology is so far behind AMD/ATI and NVIDIA, it makes sense that it has not drawn as much attention. The graphics side of Fusion is far more advanced than the integrated graphics we have seen on motherboards to this point as well.

Re:"Intel doesn't seem to be going down this road. (1)

Bert64 (520050) | more than 3 years ago | (#36473130)

Intel GPUs only really target the lowend, they are pretty weak compared to the offerings from ATI/AMD and nVidia...

Re:I like the idea, but have concerns (5, Interesting)

YoopDaDum (1998474) | more than 3 years ago | (#36472414)

Unified memory is an implementation option, but not the only one. It definitely makes sense when price matters more than performance, but for a higher-end part you could have separate memories. Look at AMD multi-core CPUs: they're already NUMA (light) from the start. Each core has a directly attached bank with minimum latency, and can access the other cores' memory banks with a (small) additional latency. Extended here, the GPU could have dedicated higher-performance GDDR5 memory directly attached, but accessible from the CPU side (and similarly the GPU could access all the system memory). It's a NUMA extension for a hybrid architecture, if you wish. It needs support from the OS/drivers to handle this in a transparent way, but NUMA is not new, so existing know-how could be reused.
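A minimal sketch of that NUMA-style view, using libnuma on Linux as a stand-in (node numbers and sizes are illustrative; a GPU-attached GDDR5 pool would simply show up as another node under this scheme):

```c
/* One address space, but pages can be placed on a specific node; placement
 * changes latency and bandwidth, not visibility.  Link with -lnuma. */
#include <numa.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this machine\n");
        return 1;
    }
    printf("nodes: 0..%d\n", numa_max_node());

    /* 1 MB backed by node 0 -- "near" memory for whatever sits there.
     * A hypothetical GPU-local pool would just be a higher node number. */
    size_t sz = 1 << 20;
    char *near = numa_alloc_onnode(sz, 0);
    if (!near) { perror("numa_alloc_onnode"); return 1; }

    near[0] = 42;   /* any core (or, in the hybrid scheme, the GPU) can still touch it */

    numa_free(near, sz);
    return 0;
}
```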

Regarding performance, in principle an integrated solution can do better by offering tighter integration and more efficient exchanges between CPU and GPU than going through a lower-speed / higher-latency external bus as with a discrete GPU. We shouldn't judge the principle by today's implementations, as they target the low end (Bobcat-based) and middle (Llano) only, not yet the high end.
The con of integration is that you lose the flexibility of choosing the CPU and GPU separately, and upgrading them separately, but as others have pointed out most people do not care about this nor use it in practice.

As for fragmentation, it's the usual situation. You can hide the differences using things like OpenCL, but you'll sacrifice some performance initially compared to a targeted implementation. Most should target this when the tools become sufficiently mature. But if you want to extract all the juice you will have to be target dependent, and face this fragmentation indeed. Still, over time we can expect some convergence (the good ideas will become clearer, and be adopted). So with time the generic approach (OpenCL or like) will become better and better, and less and less people will develop for a target as the decreasing performance advantage won't justify the cost. This process will not necessarily be fast ;) and we're just starting.

Re:I like the idea, but have concerns (1)

Rockoon (1252108) | more than 3 years ago | (#36473028)

Regarding performance, in principle an integrated solution can do better by offering tighter integration and more efficient exchanges between CPU and GPU than going through a lower-speed / higher-latency external bus as with a discrete GPU.

This isn't quite right. In principle, a discrete solution doesn't have to compromise with the low-latency, random-access memory performance demands of the CPU, while an integrated solution does. For raw compute performance, the discrete solutions are starting out in a much better position.

The latency savings only manifest as a win for small workloads, but small workloads ultimately don't matter (blink of an eye vs half a blink of an eye).

Re:I like the idea, but have concerns (1)

WaroDaBeast (1211048) | more than 3 years ago | (#36472826)

Well, we could always have memory right on the motherboard, à la Sideport. Of course, more memory, such as 512 MB of GDDR5, would be better than today's Sideport memory's specifications (which is 1333 MHz DDR3, I think). But anyway, comparing [wikipedia.org] HD 6xxx integrated GPUs to their non-integrated counterparts, I find the memory bandwidth not to be so bad.

Any sub €60 graphics card I can buy comes with, at best, 1333-1400 MHz DDR3 memory anyway...

Re:I like the idea, but have concerns (1)

drinkypoo (153816) | more than 3 years ago | (#36472828)

They have massively wide controllers, special high performance ram (GDDR5 is based on DDR3, but higher performance) and so on.

I have a GT 240. It has 3/4 the functional units of the GTS 250, GDDR3 instead of GDDR5 (you can get a GDDR5 model now, but you couldn't when I bought it) and yet provides 3/4 the performance of the GTS. The memory bandwidth is clearly only an issue when you actually need that much bandwidth, which you don't if you're pushing slightly less polys etc. As long as the connection to memory is wide enough it won't be a problem for the low- to mid-range market they're aiming for.

I also worry this could lead to fragmentation of the market. [...] Well this could change that. APUs might need a drastically different development structure.

They might?

Re:I like the idea, but have concerns (1)

obarthelemy (160321) | more than 3 years ago | (#36473786)

I don't see why APUs need to be seen differently from discrete cards, from a software point of view. AMD has made it abundantly clear that Llano is using a variant of their current Radeon architecture, and all the hardware is and will remain abstracted anyway (through DirectX mainly).

I'm sure there are specificities to an APU, and that apps would benefit, possibly greatly, from addressing them in a more "native" way. But the same can surely be said of the discrete AMD and nVidia cards, and nobody is interested. Such is the dominance of DirectX that graphics chip designers actually target DirectX support at the design stage of their chips. The same will go for APUs.

Re:I like the idea, but have concerns (1)

LWATCDR (28044) | more than 3 years ago | (#36474950)

They do address this, but I suspect that there will always be room for high-end GPUs, or at least there will be for a long time. APUs are going to target the good-enough category first. If they are good enough for 1080p video and gaming, they will be good enough for 90+% of the market. This will hopefully raise the bar on integrated graphics up to a usable level. For high-end users the APU could be used for things like transcoding, physics modeling, and other GPU-friendly tasks, while the graphics card handles the display. In theory the APU will be good enough for even light CAD work and non-enthusiast gaming. There is always the option to add GDDRx to the system for the APU as well if more performance is needed.
 

Great move for laptops (0)

Anonymous Coward | more than 3 years ago | (#36472244)

I think it will be great to buy a reasonably priced laptop knowing I can later upgrade the CPU, and doing so will upgrade the graphics chip at the same time! So 3-5 years down the line a simple upgrade may breathe new life into the thing, assuming faster chips will be available within the same thermal envelope.

Re:Great move for laptops (1)

pandrijeczko (588093) | more than 3 years ago | (#36472816)

It's probably a little dangerous to make that assumption because whenever I've looked inside a laptop, the CPU is soldered to the motherboard, not plugged into a socket as in a desktop.

Besides which, inside a laptop you have much less free space for heat dissipation, and many of them already run reasonably hot - giving you the option of plugging in a faster CPU that generates more heat may end up frying some of the other internal components, which brings things like manufacturer warranties into question.

APUs are a next logical step in portability and compactness. I like desktop PCs as much as the next guy, but with APU technology, desktops are one step closer to their eventual demise.

Lots of laptops have CPU sockets, not the Apple ones (1)

Joe_Dragon (2206452) | more than 3 years ago | (#36473476)

Lots of laptops have CPU sockets; not the Apple ones, but lots of other ones.

Re:Great move for laptops (0)

Anonymous Coward | more than 3 years ago | (#36473598)

plugging in a faster CPU that generates more heat

One of the joyful things about the current trend is that the newer/faster stuff generates less heat. An awesome CPU draws 140 watts -- no, I mean 90 watts -- no wait, now it's down to a little under 70 watts, no wait, that's last year's model, because now it's 45 watts.

Damn I love what's been happening in the last 4-5 years.

Imagine (-1)

Anonymous Coward | more than 3 years ago | (#36472274)

we can expect to see multiple terabytes per second of bandwidth between the caches

Imagine a beowulf cluster of those.

1996 called ... (2)

psergiu (67614) | more than 3 years ago | (#36472302)

... and congratulated AMD for rediscovering SGI's O2 [wikipedia.org] Unified Memory Architecture [wikipedia.org].

PS: The IBM PCjr (1984) & Commodore Amiga (1985) were actually the first ones to use UMA. Could this mean we will have "Chip RAM" & "Fast RAM" again? :)

Re:1996 called ... (0)

Anonymous Coward | more than 3 years ago | (#36472370)

And did you warn them about september 11 2001?

Re:1996 called ... (1)

BiggerIsBetter (682164) | more than 3 years ago | (#36472480)

Could this mean we will have "Chip RAM" & "Fast RAM" again? :)

That would actually make sense, given the current difference in graphics card RAM speed/cost vs system RAM speed/cost.

Re:1996 called ... (1)

Anonymous Coward | more than 3 years ago | (#36472664)

What? The BBC Micro (1981) had shared graphics memory as did many of its contemporaries (e.g. Vic 20, ZX81, Spectrum). I believe the Acorn Atom (1980) also did.

Re:1996 called ... (0)

Anonymous Coward | more than 3 years ago | (#36472900)

So did the Apple I.

Re:1996 called ... (0)

Anonymous Coward | more than 3 years ago | (#36473724)

"Chip RAM vs Fast RAM" is about having both shared and unshared memory, actually having two buses. You can see something like this coming, except with the "chip" RAM being the faster/expensive stuff optimized for the vect-- um I mean -- GPU, and the "fast" RAM being the more conventional (e.g. DDR3) cheaper stuff -- but all in the same address space. And all of it allocated and managed by a NUMA-aware kernel.

Re:1996 called ... (1)

bhtooefr (649901) | more than 3 years ago | (#36474582)

And the Apple II had it in 1977.

Bitcoin mining? (0)

ribuck (943217) | more than 3 years ago | (#36472422)

But how good will the new architecture be for processing Bitcoin blocks?

That's why some of AMD's current high-end GPUs are hard to find.

Re:Bitcoin mining? (0)

Anonymous Coward | more than 3 years ago | (#36472464)

AMD's advantage in mining is its VLIW design (many, many simple processors) instead of Nvidia's more complex (but fewer) processors. Moving away from VLIW is a disadvantage. However, assuming they're not gonna throw out the rotate instruction, we miners should still be OK.

WebGL support? (1)

nikanth (1066242) | more than 3 years ago | (#36472626)

Does it have WebGL support? i.e., address space protection and preemption support/kernel mode for shader programs?

unified space (1)

sam0737 (648914) | more than 3 years ago | (#36472638)

Maybe someone who read TFA could chime in. TFS mentioned a unified address space, but not necessarily unified memory access, right? It could be just another virtual memory paging mechanism...

Will it run Linux? (4, Informative)

vigour (846429) | more than 3 years ago | (#36472782)

Will it run Linux?

I'm not being facetious; I got stung by the lack of support [phoronix.com] from Nvidia for their Optimus [nvidia.com] graphics cards on my ASUS U30JC.

Thankfully Martin Juhl [martin-juhl.dk] has been working on a solution using VirtualGL, which gives us the use of our Nvidia cards under Linux [github.com].

Re:Will it run Linux? (4, Interesting)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#36472960)

I would (given ATI's historically somewhat weak driver team) be wholly unsurprised to see some rather messy teething pains; but (given AMD's historical friendliness, and the long-term trajectory of this plan), I suspect that it will actually be a boon to Linux and similar.

The long term plan, it appears, would be to integrate the GPU sufficiently tightly with the CPU that it becomes, in effect, an instruction-set extension specialized for certain tasks, like SSE on steroids. If they reach that point, you'll basically have a CPU where running OpenGL "in software" is the same as running it "in hardware" on the embedded graphics board, because the embedded graphics board is simply the hardware implementation of some of the available CPU instructions, along with a few displayport interfaces and some monitor-management housekeeping.
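For the "SSE on steroids" analogy, this is what using an instruction-set extension already looks like from plain C; the endgame described above would make the on-die GPU reachable in a similarly direct way (sketch, illustrative only):

```c
/* SSE intrinsics compile straight to CPU vector instructions -- no driver,
 * no separate device.  One _mm_add_ps adds four floats at once. */
#include <xmmintrin.h>
#include <stdio.h>

int main(void)
{
    float a[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    float b[4] = { 10.0f, 20.0f, 30.0f, 40.0f };
    float r[4];

    __m128 va = _mm_loadu_ps(a);     /* load 4 floats into a 128-bit register */
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(r, _mm_add_ps(va, vb));

    printf("%g %g %g %g\n", r[0], r[1], r[2], r[3]);   /* 11 22 33 44 */
    return 0;
}
```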

I'd be unsurprised, as with Optimus, to see some laptops released with an embedded/discrete GPU combination that is fucked in one way or another under anything that isn't the latest Windows, possibly making the discrete invisible, possibly forcing you to run the discrete all the time, or some other dysfunctional situation; but I'd tend to be optimistic about the long term: GPU driver support has always been a sore spot. Compiler support for CPU instructions, on the other hand, has generally been pretty good.

Re:Will it run Linux? (1)

vigour (846429) | more than 3 years ago | (#36473220)

...but I'd tend to be optimistic about the long term: GPU driver support has always been a sore spot. Compiler support for CPU instructions, on the other hand, has generally been pretty good.

Excellent point!

Re:Will it run Linux? (0)

Anonymous Coward | more than 3 years ago | (#36473080)

Yes, with the proprietary drivers there have even been some benchmarks; see
http://www.phoronix.com/scan.php?page=article&item=amd_fusion_e350
Open source drivers may take a while to become usable, but the time from release to functional looks to be shrinking as they catch up with their old hardware on the new Gallium architecture. You can get regular news from the same site as the benchmark, but they do tend to try and make a mountain out of a molehill sometimes.

Pendulum swinging back-n-forth (0)

Anonymous Coward | more than 3 years ago | (#36473308)

Step 1: invent chip that does "x". Step 2: "x" chip now asked to do "y" task. Step 3: Kludge together ways to accelerate "x" chip doing "y" task. Step 4: invent "y" chip to sit next to "x". Step 5: combine "x" and "y" chip into "z" chip.
Rinse
Repeat

Re:Pendulum swinging back-n-forth (1)

marquis111 (94760) | more than 3 years ago | (#36473362)

You say that like it's a bad thing.

What APU! (0)

Anonymous Coward | more than 3 years ago | (#36473828)

How much poo would APU poo, if APU could poo poo?
-- Apu from the Simpsons

Preemptive Multitasking? (1)

Twinbee (767046) | more than 3 years ago | (#36474368)

Does this Fusion APU multitask so that it can run 2 or more kernels at once (with no worries of the watchdog kicking in and stopping >5 sec kernels)?

Bandwidth Limited (0)

Anonymous Coward | more than 3 years ago | (#36475368)

According to Anand's reviews, the GPU on current Llano APUs is extremely bandwidth limited. Llano feels like a testbed product to me, like a sort of beta moved out into production as proof of concept (and AMD's pre-release demo of the next-gen Trinity kind of confirms that). The CPU is pretty weak; apparently the highest-end A8 APU is JUST competitive with the old i5 520M. Graphically, though, it kicks 31 flavors of crap out of anything not a $100+ discrete GPU. It's also apparently extremely energy-efficient.

If AMD were smart they'd make LV versions of these and have them compete with the i3 and the LV i* parts from Intel in the thin-and-light market. Imagine a 12" machine with the ULV equivalent of the high-end A8 APU and a 1600x900 screen at, say, a $450 price point. This would absolutely MURDER Intel in that entire market segment.
