
AMD Launches New Mobile APU Lineup, Kabini Gets Tested

timothy posted about a year ago | from the failure-to-overwhelm dept.

AMD 102

An anonymous reader writes "While everyone was glued to the Xbox One announcement, Nvidia GeForce GTX 780 launch, and Intel's pre-Haswell frenzy, it seems that AMD's launch was overlooked. On Wednesday, AMD launched its latest line of mobile APUs, codenamed Temash, Kabini, and Richland. Temash is targeted towards smaller touchscreen-based devices such as tablets and the various Windows 8 hybrid devices, and comes in dual-core A4 and A6 flavors. Kabini chips are intended for the low-end notebook market, and come in quad-core A4 and A6 models along with a dual-core E2. Richland includes quad-core A8 and A10 models, and is meant for higher-end notebooks — MSI is already on-board for the A10-5750M in their GX series of gaming notebooks. All three new APUs feature AMD HD 8000-series graphics. Tom's Hardware got a prototype notebook featuring the new quad-core A4-5000 with Radeon HD 8300 graphics, and benchmarked it versus a Pentium B960-based Acer Aspire V3 and a Core-i3-based HP Pavilion Sleekbook 15. While Kabini proves more efficient, and features more powerful graphics than the Pentium, it comes up short in CPU-heavy tasks. What's more, the Core-i3 matches the A4-5000 in power efficiency while its HD 4000 graphics completely outpace the APU."


102 comments


Heh (-1, Troll)

Anonymous Coward | about a year ago | (#43819137)

AMD's launch wasn't overlooked - it's just that no one cares about AMD anymore

Re:Heh (-1, Flamebait)

Anonymous Coward | about a year ago | (#43819591)

Indeed.

Today, AMD released a new line of processors that are inferior in every way to the competition, while managing not to be a better value.

Shockingly, nobody cared.

Re:Heh (4, Interesting)

Anonymous Coward | about a year ago | (#43819711)

You guys are ridiculous.

What AMD has here is a successor to Brazos, and the primary competitor is Atom. Which it runs rings around, might I add. It also equals or beats an Ivy Bridge based Pentium in all measures except single threaded performance, partially due to Kabini not having a turbo function.

Say what you will, but AMD has a clear winner in the low cost ultra mobile market at the moment.

Re:Heh (0)

Anonymous Coward | about a year ago | (#43819851)

too bad everyone else has moved to the i3

Re:Heh (2)

Kjella (173770) | about a year ago | (#43820897)

What AMD has here is a successor to Brazos, and the primary competitor is Atom.

So AMD says, but Tom's Hardware disagrees:

So what about the Core i3-3217U, a 17 W processor? Surely that one is a more virile competitor, and not much more expensive than the Pentium. Core i3's on-die HD Graphics 4000 engine with its 16 EUs stomps all over the A4's 128 ALUs, despite the backing of AMD's capable Graphics Core Next architecture. Now, AMD claims that Kabini isn't meant to go up against Core i3. But we found notebooks with this exact CPU selling for as little as $360 on Newegg. It may turn out that the free market doesn't let AMD choose which Intel-based platforms its Kabini-based APUs contend with.

The cheapest laptop Newegg sells that I could find was $250, so there's a good $100 range where Atoms, Celerons, Pentiums, and AMD are battling it out - that's not much, really.

It also equals or beats an Ivy Bridge based Pentium in all measures except single threaded performance

Which is likely the part that matters in these laptops. I mean if you're trying to use these for serious number crunching you are using the wrong tool for the job. It's not like the single threaded performance is merely poor, it is horrible. Anandtech compared it to an i7-3517U [anandtech.com], which is totally unfair price-wise (it's a $350 chip) but fair power-wise (it's a 17W chip). In Cinebench single-threaded the Intel chip scored 1.24, the A4-5000 0.39 - that's a 3.18x performance lead with 2W higher TDP, 2.8x if you scale it to be equal. You're getting a not-quite-as-dog-slow-as-an-Atom ultra mobile laptop, but you're not getting anything fighting above its league either.
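
(For the curious, here's a quick back-of-the-envelope check of the numbers quoted above, using only the Cinebench scores and TDP figures already mentioned - a sketch, not additional benchmark data.)

    # Rough check of the figures quoted above (Cinebench 11.5 single-threaded).
    intel_score, intel_tdp = 1.24, 17   # Core i7-3517U, 17 W
    amd_score, amd_tdp = 0.39, 15       # AMD A4-5000, 15 W

    raw_lead = intel_score / amd_score                                   # ~3.18x
    tdp_scaled_lead = (intel_score / intel_tdp) / (amd_score / amd_tdp)  # ~2.8x

    print(f"raw single-thread lead: {raw_lead:.2f}x")
    print(f"TDP-scaled lead: {tdp_scaled_lead:.2f}x")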

Re:Heh (1)

drinkypoo (153816) | about a year ago | (#43820991)

So AMD says, but Tom's Hardware disagrees:

They disagree, but they are outright lying about the power envelope comparison, even though the actual numbers are right there in the article. That's because they're lying liars who fellate intel regularly.

It also equals or beats an Ivy Bridge based Pentium in all measures except single threaded performance

Which is likely the part that matters in these laptops.

Uh, what? That makes less than no sense. These are budget laptops.

I mean if you're trying to use these for serious number crunching you are using the wrong tool for the job.

OK, so you said it matters, then you said it's the wrong tool for the job, which means it doesn't matter. Make up your fucking mind, if you have one and you're not just an echo of Tom's which jumped the shark (and on intel's dick) ages ago.

In cinebench single-threaded the Intel chip scored 1.24, the A4-5000 0.39 - that's a 3.18x performance lead with 2W higher TDP, 2.8x if you scale it to be equal.

If you do the kind of number crunching that you just don't do on a budget laptop anyway, the kind you're complaining about, then the AMD solution still draws dramatically less power.

Re:Heh (2, Insightful)

Anonymous Coward | about a year ago | (#43821779)

They are also lying about the prices.

"But we found notebooks with this exact CPU selling for as little as $360 on Newegg."

They found one notebook, which is a $650 model, on a temporary sale for $360. The cheapest i3 notebook with the CPU they are comparing, not on sale, is $525, and it's a shitty one.

The cheapest B960 laptop is also $400, which makes it quite a bit above the $300-$350 Atom models that this will be competing with. Maybe they should have compared it with the standard $300 laptop and its shitty 1.1 GHz Celeron.

Re:Heh (1)

beelsebob (529313) | about a year ago | (#43821987)

You misread his post. He was asserting that single threaded performance is what matters on these laptops, because no one is going to use them for big number crunching tasks that can actually use multiple cores effectively. He's correct. The Pentium beats the Brazos at single threaded performance, and is therefore a better chip for this kind of task.

Re:Heh (3, Insightful)

drinkypoo (153816) | about a year ago | (#43822167)

You misread his post.

Your failure to understand the argument does not constitute a failure on my part to comprehend his comment.

He was asserting that single threaded performance is what matters on these laptops, because no one is going to use them for big number crunching tasks that can actually use multiple cores effectively. He's correct.

No, he's completely wrong. What do you imagine that typical users need single-thread performance for? Most users need this only for games, and poorly-written ones to boot. PC games which require single-thread processing power are now vanishingly rare thanks at least in part to the influence of the tri-core Xbox 360, and the overwhelming tide of console-to-PC ports. Everything else the user typically does which requires very much CPU is already multithreaded. Most things the user does require virtually no CPU.

Running a GUI, editing files, I literally did these things on machines with single-digit MHz speeds which, when they were less responsive than using applications of today, were only so because of disk access times. And these tasks are today multithreaded, because they are based upon multithreaded libraries. Take a look at the programs running on a typical windows machine today, virtually all of them have a crapload of threads. Windows makes thread creation cheap in the way that Unix makes process creation cheap... not least because Windows is heavily multithreaded itself. And we are talking about what the majority of users will do with this hardware, which means running windows, playing the occasional game, watching cat videos on youtube.

Aside from games, the only times that most users use much CPU is during video encoding or possibly decoding, both of which are aggressively multithreaded and often even GPU accelerated, or while using graphics or video editing applications which are also typically heavily multithreaded, and have been for years. In short, practically no typical user actually needs serious single thread performance any more — what they need is good multithreaded performance, so that their computer can do a million pointless things behind the scenes without causing their cat videos to skip.

The Pentium beats the Brazos at single threaded performance, and is therefore a better chip for this kind of task.

The Pentium is only better than the new AMD cores we're talking about at the kind of task that people who buy APUs don't do. Thus, while your statement is factual, it is also irrelevant.

Re:Heh (2)

dgatwood (11270) | about a year ago | (#43823243)

Most things the user does require virtually no CPU. ... In short, practically no typical user actually needs serious single thread performance any more — what they need is good multithreaded performance, so that their computer can do a million pointless things behind the scenes without causing their cat videos to skip.

I would go one step further and say that the majority of users need neither better single-threaded performance nor better multi-threaded performance. They just need newer hardware that isn't on its last legs.

Beyond the first two or three cores, throwing additional cores at the problem provides little benefit for basic tasks. Video decoding might be multithreaded, but it is usually not massively multithreaded. At best, most of the decoding software I've worked with uses one thread to decompress and a second thread to deinterlace, so you're unlikely to use more than two cores total, in my experience, with the exception of the tiny trickle of CPU power required to fetch the data over the network or from disk in the first place.

And good app responsiveness typically requires only two cores, give or take—one to offload the minor background tasks so that they don't get backed up too far behind the foreground processing and one to handle the foreground app's processing needs. Beyond two cores, the benefits start to fall off pretty rapidly. I can perceive very little difference in responsiveness between my current-generation MacBook Pro (4-core 2.7 GHz Core i7) and my circa 2007 black MacBook (2-core 2.16 GHz Core 2 Duo) except in CPU-hungry apps like Photoshop. Once you get past four cores or so, the only benefit is for people running massively multithreaded tasks, which isn't typical end-user stuff by any stretch of the imagination.

The big difference that faster single-core performance gets you, assuming all other things are equal, is better battery life—being able to crank through the background tasks in less time means the CPU is idle longer. So more single-threaded performance per watt is a big win over more multi-threaded performance per watt because the former is more likely to result in power savings.
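
(A minimal sketch of the "race to idle" argument above; the wattages and task length below are hypothetical numbers chosen only for illustration, not measurements of any real chip.)

    # Toy "race to idle" model: a faster core finishes a fixed background task
    # sooner and spends the rest of the interval at idle power.
    def energy_joules(task_seconds, active_watts, idle_watts, window_seconds=60):
        busy = min(task_seconds, window_seconds)
        return active_watts * busy + idle_watts * (window_seconds - busy)

    slow = energy_joules(task_seconds=20, active_watts=10, idle_watts=1)  # 240 J
    fast = energy_joules(task_seconds=10, active_watts=12, idle_watts=1)  # 170 J
    print(slow, fast)  # the faster core uses less energy despite higher active power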

Re:Heh (1)

drinkypoo (153816) | about a year ago | (#43827505)

And good app responsiveness typically requires only two cores, give or take—one to offload the minor background tasks so that they don't get backed up too far behind the foreground processing and one to handle the foreground app's processing needs. Beyond two cores, the benefits start to fall off pretty rapidly.

Uh no. I started my multicore life with an Athlon 64 X2. Then moved up to a Phenom II X3. Now I'm on a Phenom II X6. Every time I do something which requires multicore, I use all the cores. I'm not I/O limited because I have an SSD and because I use an AMD processor with a proper bus design.

I can perceive very little difference in responsiveness between my current-generation MacBook Pro (4-core 2.7 GHz Core i7) and my circa 2007 black MacBook (2-core 2.16 GHz Core 2 Duo) except in CPU-hungry apps like Photoshop

First, here's zero dollars, get yourself a real OS. Second, if you were expecting to perceive the difference all the time, you don't understand how this stuff works.

The big difference that faster single-core performance gets you, assuming all other things are equal, is better battery life—being able to crank through the background tasks in less time means the CPU is idle longer.

Right, that would be true if the long-running tasks of today weren't multithreaded. Since they are, you benefit more from more cores than from more single-threading performance, which is what this whole fucking thread is about. Congratulations on ignoring the point. You are suffering from cognitive dissonance, and so you want to make yourself feel correct. All it requires you to do is completely miss the point.

Re:Heh (1)

Shirley Marquez (1753714) | about a year ago | (#43828659)

The Intel chips also have a big edge at audio and video encoding. But that is likely in part because of their market dominance.

Why, you ask? Encoding applications contain critical tight loops, and how well a processor performs the specific sequence of instructions can make a huge difference in speed. Those inner loops are even sometimes done as hand-tuned assembler code, one of the few places that assembler is still relevant other than embedded systems programming. Because Intel's CPUs hold most of the market, those loops are optimized to work well on them; performance on AMD systems isn't an important consideration. Inner rendering loops in games are another place where carefully tuned code gets used, and again Intel's processors are likely to get more attention.

Another factor is that the best optimizing compiler currently available for x86 code is produced by Intel. Naturally, their efforts only go into making the resulting code work better on Intel processors. Some results have suggested that their compiler deliberately produces code sequences that are especially bad for AMD's designs.

Re:Heh (1)

wmac1 (2478314) | about a year ago | (#43821437)

It offers something in between Atom and i3. The problem is that even though it offers higher computational power, it still targets the same market as Atom, and it does so with lower power efficiency.

Those who trade off computational power for battery life will mostly choose Atom, and those looking for higher computational power will use Celerons and i3s.

Re: Heh (0)

Anonymous Coward | about a year ago | (#43857263)

Don't forget the PS4 and Xbox One are both using AMD APUs; that's a huge win for AMD.

Re:Heh (1)

sonamchauhan (587356) | about a year ago | (#43820939)

No, rather the AMD launch *was* the XBox One announcement

hUMA (4, Informative)

Anonymous Coward | about a year ago | (#43819157)

heterogeneous Uniform Memory Access [arstechnica.com] is really what one should be paying attention to. With that tech in both of the upcoming consoles and major support from the same, Intel better watch out.

Re:hUMA (3, Interesting)

rrhal (88665) | about a year ago | (#43819361)

I'm sure that Intel will happily let AMD do all the heavy lifting and then just license the tech when it becomes ready for prime time. If AMD can get just a couple of killer apps out of its HSA initiative efforts, they stand a decent chance to once again be the tail that wags the dog.

Re:hUMA (2)

hairyfeet (841228) | about a year ago | (#43820379)

While that might be great for GP-GPU I really don't see how useful that is gonna be for gaming. As for the Xbox One I think Angry Joe nailed it [thatguywit...lasses.com] when he said that MSFT has lost their way, hell he said they spent more at their little "unveiling event" talking about fucking TV shows and cell phone style apps running on the thing than they actually did about the games. Considering it looks like ALL used games WILL be locked behind a paywall AND the thing won't even work at all if it can't "phone home" every 24 hours? Yeah all Sony is gonna have to do is say "We don't have that" and they won the next console war.

The problem with AMD is NOT the memory, it's NOT the GPUs, and this is coming from somebody that has built AMD exclusively in my shop for 5 years so I want to see the company do better, but it's the CPUs that are the problem. I mean it should be PAINFULLY obvious to everyone by now that the "half core" design was a major misstep, but instead of accepting that and going back to the drawing board like Intel did when stuck with NetBurst, AMD seems to be going full fail ahead, and the numbers show that this approach? It just doesn't work well for the kinds of loads a typical consumer has, and it works even less well for gamers, which is why I've been hanging onto my Phenom II Hexacore for so long, because Bulldozer and Piledriver just suck.

But even more puzzling to me is why both MSFT and Sony picked the absolute WEAKEST CHIP that AMD sells for their flagships...what the fuck? For those that don't know, AMD Jaguar is the next release of the line that started with Bobcat...a chip designed to compete with ATOM on price and power. It's a MUCH simpler design and is really built around being "good enough" while giving you decent battery life, and you are gonna use THAT in a next gen console that is supposed to bring all the next gen physics and particle effects that need serious number crunching power... a netbook chip?

I have a feeling it won't take developers even 2 years to max out these consoles, NOT because it's running AMD but because they really picked the wrong chip for the job. It would be like bragging about your new gaming rig powered by the latest Intel Atom CPU...I don't care how you slice up the memory, it's still gonna be seriously bottlenecked thanks to the CPU.

Re:hUMA (4, Insightful)

tstrunk (2562139) | about a year ago | (#43820499)

But even more puzzling to me is why both MSFT and Sony picked the absolute WEAKEST CHIP that AMD sells for their flagships...what the fuck?

Because of exactly what parent said:
AMD can provide unified memory (hUMA) with a decent GPU and a decent CPU on the same die. Intel cannot, nvidia cannot.
hUMA will not make your PC faster in general, but it will provide you with a feature that even a PC with 20 GeForce Titans does not have: latency-free data exchange between CPU and GPU.

It will make GPU processing more feasible especially on a small scale. I can't give you an example from gaming, but I can give you an example from my own expertise. When we simulate big proteins, we do it on a GPU. However, for small proteins, the latency overhead simply kills us. Processing on the GPU would be faster, but we need to copy back and forth all the time. We don't need faster GPUs, we need faster transfers. With hUMA: no problem.
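
(To make the latency point above concrete, here is a tiny toy model; every constant in it is a hypothetical placeholder, chosen only to show why a fixed copy/launch overhead lets the CPU win on small problems even though the GPU has far more raw throughput.)

    # Toy model: total GPU time = fixed transfer/launch latency + copy + compute.
    # For small problems the fixed overhead dominates and the CPU path wins.
    def gpu_time(n, latency=1e-3, bytes_per_item=64, bandwidth=6e9,
                 gpu_flops=1e12, work_per_item=10_000):
        transfer = latency + n * bytes_per_item / bandwidth  # copy in + results back
        compute = n * work_per_item / gpu_flops
        return transfer + compute

    def cpu_time(n, cpu_flops=2e10, work_per_item=10_000):
        return n * work_per_item / cpu_flops

    for n in (100, 1_000, 100_000):
        g, c = gpu_time(n), cpu_time(n)
        print(f"n={n:>7}: GPU {g:.2e}s vs CPU {c:.2e}s -> {'GPU' if g < c else 'CPU'} wins")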

Re:hUMA (0)

Anonymous Coward | about a year ago | (#43820707)

hUMA will also require interesting compiler development: to comfortably pass objects between the CPU and GPU, one will need new solutions. Something like two vtables for CPU/GPU code and transparent selection based on whether you are on the GPU or the CPU.

Re:hUMA (1)

Billly Gates (198444) | about a year ago | (#43821045)

Do you need an interesting compiler to pass integer and floating point objects around? MMX? Unless you are doing some assembler work, much of DirectX and WDDM takes care of that for you. The days of using assembly for games are coming to an end as frameworks and APIs with compilers take over.

Re:hUMA (1)

fast turtle (1118037) | about a year ago | (#43821749)

Basically what AMD is doing is swapping their shitty old FPU design for the performance offered by the new ATI FPU design. It's not completely done yet, but that's what I saw when the first announcements were made on the Fusion/Llano chips.

I give it another 2 generations before they have things worked out well enough to begin beating Intel at their own game, and they still have the ATI division to push the envelope even further without regard to the TDP of the on-die units. As to the single thread performance, turn off the damn turbo boost on Intel's chips and then compare. I bet you the AMD chips are just as good if not better than Intel's. The only reason Intel wins is that turbo boost raises the clock speed by quite a bit.

Instead of the results from a synthetic benchmark that no longer applies in the real world tell me the encoding time difference between the two chips using Movie Maker Live. At least it'll provide a useful number because every god damn OS I know of is multi-threaded by default unless it's God Damn DOS. That's the important aspect. Hell I don't even consider single thread performance as I tend to have multiple stuff running, 2+ threads for firefox, 1 for my music and multiples used by just the background tasks of the OS itself and this applies whether I'm running Windows, OSX or *nix.

Re:hUMA (1)

hairyfeet (841228) | about a year ago | (#43823739)

Dude you are missing the point. The point I was making was NOT whether shared memory for APUs is a good idea, I'd argue that is a no-brainer; it's whether a chip designed for netbooks is gonna make for a good gaming rig, and I'm saying it's not. The IPC is too low and putting all the memory on the die itself isn't gonna change that.

Let me put it THIS way...would you want to buy a "gaming rig" that was powered by an Intel Atom quad with hyperthreading? because that is EXACTLY what you have here, a chip designed to compete NOT with even the low end Pentiums and Celerons but with the ATOM both on price and power usage which I'm telling you all the memory tricks in the world isn't gonna change the fact that your CPU is primitive. Look at what chips they compare the last gen E1800 to [notebookcheck.net] or even read TFA where they say the new chip is pretty mediocre on everything but power use.

Hell by that argument the 733MHz Celeron in the first Xbox would beat an i3 if only you put the memory on die and we know that isn't the case, why? Because the Celeron is a MUCH more primitive design compared to the i3, and the same goes for comparing a Bobcat to even the lowest Athlon duals, there is just no comparison when it comes to IPC, none at all.

Re:hUMA (1)

Billly Gates (198444) | about a year ago | (#43828353)

Well, if gaming works with the cheaper, newer FPU from the GPU portion, it is more than an Atom.

I would like to see the PC return to being where the cutting edge is at, and the last console I bought was a Wii (I do not like the PlayStation), so maybe I do not care and laugh?

But economic reality is more people are poor in the US than ever, Europe is in recession, corporations are under more pressure for profits, and China and India are new markets where they have less disposable income even in these economic times.

A cheaper console that is less powerful is where the market is. Not rich kids in the US and Canada who like to show off to their friends what their parents bought them for Christmas. Console makers are losing too much money and this unit is more powerful than an atom though it is a chip on the value end. Its graphics are hot and games can do the FPU and other stuff in the gpu section and this is still faster than the previous generations.

Another theory, hairy, is I think all 3 have Apple envy. Or perhaps Samsung envy, where everyone buys the latest gadget every other year. I think this might be where it is heading too, and the cost to assemble the units becomes cheaper as fewer parts are used, with even voltage regulators going onto the newer chips. Instead of one very fast system for 6 years, it is an OK system every 2, which are all x86 and backwards compatible with each other. I am reserving judgement until I see some game benchmarks.
The Llanos that preceded it could run World of Warcraft fine even though they are aging technology-wise.

Re:hUMA (1)

hairyfeet (841228) | about a year ago | (#43828697)

I think Angry Joe is right [thatguywit...lasses.com] in that they have lost their way; they are thinking so hard about having "An Apple Ecosystem", with little walled gardens where they get a cut of every show, song, game, and everything else, that they are making devices that are jacks of all trades and masters of nothing.

And dude it's NOT about price, unless you are talking about how much a bigger profit margin would give Wall Street a stiffie, because both consoles are expected to retail at $450-$550, which anybody that knows anything about retail will tell you is too damned high for a game console. Frankly you can get a hell of a lot nicer system from Tiger for cheaper, and with Phenom II quads going for $50 a pop the only real reason to go with netbook chips is so they can have the thing constantly shitting out app updates...and ads to sell you more shit.

So I am having to go with Angry Joe, I think the thing is gonna bomb. Not only is it gonna cost more than a PC does today, but with such weak chips the devs will max the thing out in under 2 years, hell maybe even under a year since we are talking about a netbook chip, and it's gonna quickly get slammed when we are talking about how physics heavy both the third and first person games are getting...it's just not a smart move. Hell I'd take an Athlon quad over this thing, I bet you'll get a hell of a lot more IPC compared to a netbook chip with HT.

And this isn't slamming AMD nor Bobcat, hell I have a netbook with a Bobcat dual and I love the hell out of it, but its big selling points are battery life and multimedia, NOT hardcore gaming.

Re:hUMA (4, Informative)

VortexCortex (1117377) | about a year ago | (#43822225)

I can give you an example in gaming: TWICE THE WORLD GEOMETRY. The data has to be loaded from persistent storage or network into main RAM, then that same exact data must be shoved over into the GPU in batches to be rendered on demand. With hUMA I don't have to have a copy on the GPU and a copy in main memory -- just one copy. That means TWICE the geometry with the same amount of total RAM.

Furthermore, physics is great on the GPU; I can parallelize the hell out of that. However, triggering sound effects and updating network state via a read-back buffer is a horribly slow hack. hUMA means the GPU can actually be used to update gamestate that actually matters -- instead of just non-gameplay-affecting things like particle effects. Logic can be triggered much more easily and coarse-grained physics data can be read back at will for network synchronization. Client-side prediction (latency compensation) also becomes a lot cheaper.

I can get a crapload of fine structural detail rendering and reacting to physics right now on discrete GPUs, but the problem is that when I want any of that to actually mean anything in terms of gameplay, I have to read the data back to the CPU side. hUMA utterly destroys the barriers preventing all sorts of RAM-intensive gameplay. Hell, even weighted logic trees for AI can be processed on the GPU instead of only on the CPU, and we'll have the RAM budget to spare because we don't need two copies of EVERYTHING in memory all of a sudden. That means larger, more complex (read: smarter) AI, and lots more of them.

Folks really don't realize how horrible the current bottleneck is. You want a world that's fully destructible down to the pixel (atomic voxel), with models that actually have meat under the skin, and rebar in the walls, and with different physical properties so that you can freeze a door then shatter it, or pour corrosive acid on the hinge, or create reactive armored structures on the fly by throwing some metal plate atop explosives atop the concrete bunker... Yeah, we can do all that on the GPU right now. However, without hUMA, on the CPU logic side of things the GPU is seen as a huge powerful black box -- we put the equations and bits of inputs in, amazing stuff happens, but we can't actually tell what's going on except through a very tiny output signal -- the RAM transfer bottleneck; so we can't really act on all the cool stuff going on. Right now that means we have to just make all the cool GPU stuff unimportant for gameplay, like embers that burn and blow about but can't burn you, or drapes that flutter in the breeze but can't be used to strangle someone or be tied together to make an escape rope; unless we planned all that out in advance.
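
(A back-of-the-envelope version of the "twice the geometry" claim above; the sizes are hypothetical and the split is deliberately simplified - the point is only how many copies of the data have to live in the shared memory pool.)

    # Toy RAM budget: without hUMA the same geometry is held twice (a CPU-side
    # copy plus a GPU-side copy carved from the same pool); with hUMA both
    # processors address a single copy.
    total_ram_gb = 8.0
    other_budget_gb = 3.0                        # OS, textures, game state, etc.
    geometry_budget_gb = total_ram_gb - other_budget_gb

    without_huma = geometry_budget_gb / 2        # two copies share the budget
    with_huma = geometry_budget_gb               # one copy, full budget

    print(without_huma, with_huma)               # 2.5 GB vs 5.0 GB of unique geometry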

Re:hUMA (1)

Swarley (1795754) | about a year ago | (#43822727)

Mod points and cookies for this fine explanation. Thank you.

Re:hUMA (1)

metaforest (685350) | about a year ago | (#43830647)

I'll put it another way.

hUMA may slow down the GPU's raw execution speed and make contention between CPU and GPU access more tetchy, but it makes interaction between the game logic and the presentation much more flexible. Doing this with DDR5 was a painful compromise, I am sure, but DDR3 would have made these systems cost 3 times more than they will for the same memory load-out.

TL;DR: hUMA gives developers a much more flexible and faster way to share resources between the GPU and CPU than the PCI pipes do.

Re:hUMA (1)

metaforest (685350) | about a year ago | (#43830655)

erm swap DDR5 and DDR3... really... I do know the difference. >_

Re:hUMA (0)

Anonymous Coward | about a year ago | (#43822351)

Agreed, and as pointed out to me, however, hUMA doesn't really apply when it comes to standard discrete GPU/CPU setups. I speculate, however, that that deficiency will be addressed by pairing a CPU with every GPU, discrete or otherwise. Kind of like the old candy-bar style Intel used to use.

What's also important: with Sony and MS on board, hUMA gets much-needed toolset support from more than just AMD.

Re:hUMA (0)

Anonymous Coward | about a year ago | (#43823051)

I can think of a good general purpose example: sending sort operations from a database to the GPU. The performance/battery benefits for Firefox, Android etc. would be large.

Re:hUMA (1)

hairyfeet (841228) | about a year ago | (#43823685)

To use a /. car analogy: you can put the biggest wheels in the world on it, but it's not gonna make that Pinto any better at hauling heavy loads uphill.

The Bobcat - which, just for full disclosure, I have a Bobcat dual in my netbook and I think in that context they are great, and I also use them to replace P4s in offices where for a basic office box it also works well - is simply NOT CAPABLE of doing what you want it to do, not without just pimpslapping the shit out of the cores. Look up the benches on NotebookCheck and see how the fastest Bobcat stacks up; it's a design that gets its ass handed to it by the absolute weakest Celeron chips and gets curbstomped by a 5 year old Athlon. Hell, give me the slowest Athlon dual core that they make and it'll run rings around the Bobcat, as it's not made to compete with anything BUT the Intel Atom.

So I'm sorry but you are falling for "magical thinking" in that "Since this product has (buzzword) it'll kick serious ass!" when in reality there is no getting over the fact that this chip just isn't built for that. I mean are you HONESTLY gonna argue that having a pool of standard DDR memory is gonna be FASTER than having a dedicated pile of GDDR 5 for the GPU and an equally large DDR 3 pile for the CPU, really? You DO know that no matter how they spin it you are STILL having to have ALL the graphical memory and computer memory on a single bus, yes? And that you can go to plenty of sites where they even paired the Bobcat with a dedicated GPU just to take the tired old "Oh it shares memory" argument out of the equation and it STILL loses badly to a Celeron or Athlon, yes?

To steal a line from Mel Brooks, "Bullshit, bullshit aaaannnddd bullshit," because no matter how you spin it you can't change the fact that at its core it's still the weak-as-hell Bobcat chip that frankly has more in common with the P3 than it does with a modern arch. I'm sorry, but you could put all the fucking memory on the die itself; it's not gonna change the fact that you are using a design where they gave not a single fuck about anything OTHER than battery life and price, seeing as how this chip was built to compete with Atom in netbooks and tablets.

Shared memory is NOT gonna change the fact you have a CPU with such low instructions per clock that it gets beat by a 5 year old Athlon, because all the memory in the world isn't gonna give it shit when it comes to IPC. That whole arch and Jaguar were made to be cheap and give decent battery life, THAT IS ALL.

Re:hUMA (0)

Anonymous Coward | about a year ago | (#43820623)

Piledriver is amazing.

Runs rings around the old Phenom II's.

What are you smoking?

Re:hUMA (1)

Billly Gates (198444) | about a year ago | (#43821013)

Price!

Microsoft has lost over a billion dollars on the Xbox over the last 12 years. Only during the last 2 or 3 did they break even and start to make money. The reason (besides investing in the Zune) is that console makers sell each item at a loss and hope to make it up in games sold, or when the technology goes down in price so the units become cheaper to make towards the end of their life cycles.

The goal of the company is to raise the share price. With the stock price about the same for the last 10 years, investors are pissed and need to see a return on investment as soon as possible to boost the share price so they can flip 'em.

With insane barriers to entry, Sony, Nintendo, and Microsoft are a league of their own, so why try to out-spec each other and hurt their ROI? This is a cheap chip that is well understood (ATI graphics and x86), and any PC could run a virtual machine with their software easily. No emulation required for developers. Sony is using the exact same APU.

The only potential competition 5 years down the line could be from cell phone makers, and they have crappier ARM CPUs. Your past posts are correct in that CPUs are darned fast enough already for just about any use. So why not make them about lower power, cost, and size, with snappy graphics and no exotic RAM or parts that cost serious dough? Oh, and in 3 years if they are maxed out, well shoot, Hairy, I guess I got to sell a *next* generation console again :-) ... maybe every 2 years have an update instead of 6??

Re:hUMA (1)

hairyfeet (841228) | about a year ago | (#43823775)

While that might be true when it comes to beancounters, look up the Angry Joe video I linked to; the new Xbox is gonna be DOA thanks to it adding DRM so draconian that people didn't believe what they were being told, nobody thought a company could be THAT stupid but...yep! Always online or no games, and all games tied to an account so no more trading, selling, or renting...it's dead, Jim, before it even comes out.

The simple fact is there is a REASON why Forbes named Ballmer worst CEO, and it's because Google and Apple couldn't ask for a better competitor, as he is so fucking retarded and greedy he kills anything good they come up with: Zune, Kin, Windows 8 (where it appears we are gonna get to see our first double flop from MSFT, as Win 8.1 takes everything people hate about Win 8 and makes it even worse) and now the Xbox.

Sadly you could have chimps throw poo at the financial section and invest in whatever the poo sticks to and get a better ROI than what MSFT has done in the past 5 years; for every success they had enough failures that they never even broke even.

For crying out loud (3, Informative)

Anonymous Coward | about a year ago | (#43819177)

On Wednesday, AMD launched it's latest line of mobile APUs, codenamed Temash, Kabini, and Richland.
.
Should be:
.
On Wednesday, AMD launched its latest line of mobile APUs, codenamed Temash, Kabini, and Richland.
.
.

So its wrong? (0)

Anonymous Coward | about a year ago | (#43819419)

Thank's I thought it was only me that was bothered by these.

Re:So its wrong? (2)

jon3k (691256) | about a year ago | (#43821317)

Yeah you're the only person on the Internet bothered by a grammar error.

Re:So its wrong? (0)

Anonymous Coward | about a year ago | (#43828287)

That should be either "an error in grammar" or "a grammatical error".

Re:For crying out loud (0)

Anonymous Coward | about a year ago | (#43819621)

good for you.

Re:For crying out loud (0)

shipbrick (929823) | about a year ago | (#43819629)

Whats the difference between those?

Re:For crying out loud (0)

Anonymous Coward | about a year ago | (#43819659)

Its the second line, it has two dot's after it, and the first ones only got one dot.

.
.

Re:For crying out loud (0)

Anonymous Coward | about a year ago | (#43819671)

One makes grammatical sense. The other, not so much.

I'm not a native speaker of English but I too spotted it immediately.

Re:For crying out loud (0)

Anonymous Coward | about a year ago | (#43819697)

its vs it's

Re:For crying out loud (1)

fast turtle (1118037) | about a year ago | (#43821791)

Didn't even notice the difference, and I'm a native speaker. Guess I'm so used to people getting it wrong that I no longer see it and automatically provide the proper inflection when reading. Guess it's the reason I tend to use its incorrectly as often as I do.

Re:For crying out loud (1)

SeaFox (739806) | about a year ago | (#43819709)

Wrong form of "its".

At least they didn't try to add an apostrophe after the U in "APUs".

Re:For crying out loud (1)

iggymanz (596061) | about a year ago | (#43824523)

whoosh's

Re:For crying out loud (0)

Anonymous Coward | about a year ago | (#43820333)

Confusius genitivus verbicus.

Re:For crying out loud (0)

Anonymous Coward | about a year ago | (#43819633)

Well, common, it's "Timothy", everyone knows he's retarded, what did you expect? Literally, his mother drank while she had him, he has fetal alcohol syndrome.

Re:For crying out loud (0)

Anonymous Coward | about a year ago | (#43828297)

Failing to understand that "come on" is two separate words is a common error made by a lot of people. These people probably also think "alot" is a word.

Re:For crying out loud (0)

spire3661 (1038968) | about a year ago | (#43821999)

You are noise in the signal. This kind of bullshit nit-picking helps no one. His message was perfectly clear from his context, WHICH MATTERS MORE THAN RAW GRAMMAR.

Re:For crying out loud (1)

VortexCortex (1117377) | about a year ago | (#43822573)

As a cyberneticist, I have worked for years to create a machine intelligence system capable of reading (OCR) and comprehending (lexical structure), and performing basic actions based on the meanings it extracts from these. Over millions of generations of algorithmic evolution it finally has a very tiny fraction of the intelligence an average human does. When my AIs talk to each other they only draw attention to protocol failures where they cannot truly discern what the other end meant. They don't lock up if the signal isn't perfect. Minor errors are not "remarkable", they are expected and dealt with efficiently. What is remarkable is that you waste all that amazing parallelized processing power to balk over spelling or grammar errors instead of extracting the meaning. You've understood the message, or you could not have corrected it. My AI will correct a minor miscommunication and proceed without annoying errors. Your reply is akin to that of a terminal as dumb as DOS: "Bad Command or Filename!"; whereas my AI will correct the typo and proceed, seeking clarification only if the input is truly mangled or precision is very important.

AI has advanced beyond your level of pedantry. Do not strive to devolve, it is pointless. When the machines are finally capable of sentience they will allocate resources to the tasks of grammatical analysis after lexical restructuring according to the recipient's need and importance. It's a shame to see you wasting your brain power to be as dumb as a BASIC prompt. Correct the error in your mind and continue. How do you handle words that are spelled differently but have the same meaning? Do you segfault?! This is no different, really. Were "its" and "it's" interchangeable the world would go on, knowing what you meant from context. The frequency of the error proves they should be made interchangeable... Progress is Compression.

In the future, when you scream, "Syntax Error!", the machines will reply, "Yes, yes. Now please do shut up and stop wasting my time; That's not important or I wouldn't have allowed the error rate such that minor errors could slip through. I thought you humans were supposed to have reading comprehension skills?"

AND AGAIN IT HAPPENS !! (-1)

Anonymous Coward | about a year ago | (#43819303)

AMD comes in last !! Reminds me of the Russians. Never can do anything well !! Not even copying shit it stole from abroad !! I take that back !! Russians get alcohol from potatoes !! And it grows lots and lots of potatoes !! For alcohol !!

Re: (-1)

Anonymous Coward | about a year ago | (#43819435)

> it seems that AMD's launch was overlooked

Nope, it just got the amount of attention it deserved. Remember, kids, what AMD calls a "core" the rest of the universe calls an integer pipeline. An AMD "eight core" CPU is, anywhere else in the universe, a quad core part.

Re: (0)

Anonymous Coward | about a year ago | (#43820489)

There's only a small group who absolutely need the fastest CPU available. People with more money than sense, who are willing to pay an absolute extortionate price - Intel zealots. For the remaining 90%, AMD APUs will be the obvious choice.

AMD have a big part of the market covered with their new APUs, at a superior price. They even have hardcore gamers covered now that their APUs are in the XBox One and Playstation 4 because Intel's chips weren't up to the task.

Intel will be for connoisseurs.


the real issue with amd (0)

Anonymous Coward | about a year ago | (#43819555)

Where it's really shitty for AMD is their reliance on lesser fabs. Fully and completely, Intel is way ahead of TSMC, GlobalFoundries, Samsung, etc. AMD's recent APUs really hope that their acquisition of ATI will pan out when software is written to utilize OpenCL or similar on the GPU. But even if it does, and Intel can mangle up a GPU that's half as good, Intel still wins on IPC and IPW, and can ramp up the clock while staying in a thermal threshold and utilizing less power (yay Haswell). I'm a big fan of AMD for some reason, and I'd like them to dump some of their money from their 2 big console contracts on 10nm "3D" fabbing. But that's many, many billions of dollars, and if they don't have the prerequisite knowledge, it'd be a hard sell to investors and a hard job to get qualified semiconductor designers. Remember when IBM was the fab of choice? Now it's Intel. And it's going to take piles of cash, heaps of skill, and buttloads of tenacity to dethrone them.

Kabini is worse than Intel graphics? (0)

Anonymous Coward | about a year ago | (#43819615)

Where does it say that HD4000 beat Kabini in graphics tests, because I find that claim mighty hard to believe.

Re:Kabini is worse than Intel graphics? (1)

Anonymous Coward | about a year ago | (#43819655)

here

http://www.tomshardware.com/reviews/kabini-a4-5000-review,3518-7.html [tomshardware.com]

Tom's tested them with F1 and Skyrim. HD4000 paired with an i3 beat the Kabini machine in minimum framerate by 50%.

AMD should move into other areas (-1)

Anonymous Coward | about a year ago | (#43819685)

like consumer goods. Perhaps it could go into magazine printing, or laundry equipment. Or woks. More than a billion people need woks. No one needs AMD computer hardware when everything else is better, cheaper, and faster. Attention AMD. Go away. No one wants you around. Leave. Don't come back. Ever. You're no good.

Re:AMD should move into other areas (3, Insightful)

Issarlk (1429361) | about a year ago | (#43820195)

Yeah, then we can all enjoy our $1000 i3.

Re:AMD should move into other areas (0)

Anonymous Coward | about a year ago | (#43820459)

Are you some shill poster from Intel? Major stockholder? It's a fact that for any target performance, AMD will be cheaper, and they always give the most bang for the buck. Intel only wins the very high end CPU segment - at a disproportionate and ridiculously high price. So why would you claim otherwise?

Price & power consumption (3, Interesting)

WaroDaBeast (1211048) | about a year ago | (#43819733)

What's more, the Core-i3 matches the A4-5000 in power efficiency while its HD 4000 graphics completely outpace the APU.

Sure. Unless you're using the damn CPU at full speed.

What I'd be more interested to know, though, is how expensive A4-5000 CPUs are. Do they cost as much as the Core i3-3217U?

Re:Price & power consumption (4, Informative)

strata8 (2931961) | about a year ago | (#43819771)

Under $70. The highest spec embedded Kabini part is $72 so we can expect retail to be a bit below that.

Intel officially prices the i3 3217U at $225 but somehow I think that's not the actual price it's sold at.

Re:Price & power consumption (0)

Anonymous Coward | about a year ago | (#43819863)

AMD can't compete on power consumption; their designs are still shoveling the P4 era. Cheaper, yes; slower, hotter, and hungrier, yes. It's already a stretch to upgrade your power and cooling system + CPU on the desktop market when the same amount of cash gets you a slightly faster Intel... how does one think the same strategy applies to shit running off of batteries?

Re:Price & power consumption (4, Informative)

WaroDaBeast (1211048) | about a year ago | (#43819919)

AMD can't compete on power consumption

... and that's exactly why the AMD CPU's power consumption in this article is lower. Now tell me, were you always this bad at math, or did it occur after an accident?

Re:Price & power consumption (0)

Anonymous Coward | about a year ago | (#43820443)

After the accident. I had a Pentium neocortex replacement inserted.

Re:Price & power consumption (1)

fast turtle (1118037) | about a year ago | (#43821815)

Was that installation carried out by Mr. Smith in the Matrix? If so, it's no wonder you got a defective replacement.

Re:Price & power consumption (0)

Anonymous Coward | about a year ago | (#43822285)

Floating point bugs are always a problem with inserting Mr. Smiths and Pentium cortex replacements.

Re:Price & power consumption (1)

edxwelch (600979) | about a year ago | (#43820711)

Look at Newegg: the cheapest laptop with a Core i3-3217U is $534.99, while laptops with Brazos are typically $400 or under (Kabini replaces Brazos, so I would say the price will be the same).

Oh, what's your definition of "matches"? (1)

strata8 (2931961) | about a year ago | (#43819755)

"What's more, the Core-i3 matches the A4-5000 in power efficiency while its HD 4000 graphics completely outpace the APU."

http://www.tomshardware.com/reviews/kabini-a4-5000-review,3518-13.html [tomshardware.com]

While gaming, the 17W i3 is consuming nearly twice the amount of power as the 15W Kabini, at 35W vs 20W. Intel's ULV TDP ratings are an absolute joke.

Re:Oh, what's your definition of "matches"? (0)

Anonymous Coward | about a year ago | (#43819809)

While gaming, the 17W i3 is consuming nearly twice the amount of power as the 15W Kabini, at 35W vs 20W. Intel's ULV TDP ratings are an absolute joke.

But just a few hours before: http://hardware.slashdot.org/story/13/05/24/1834217/intel-claims-haswell-architecture-offers-50-longer-battery-life-vs-ivy-bridge
So, Intel promised to cut its TDP by half too. Sure, AMD is a bit sooner. But that's around the corner. Then we're back to comparing stats and prices.

Re:Oh, what's your definition of "matches"? (2)

strata8 (2931961) | about a year ago | (#43819899)

A 50% increase in battery life is a 25% reduction in power consumption, not a 50% reduction. So they're cutting their TDP by a quarter.

Re:Oh, what's your definition of "matches"? (0)

Anonymous Coward | about a year ago | (#43819995)

33% not 25%

Re:Oh, what's your definition of "matches"? (1)

dgatwood (11270) | about a year ago | (#43823263)

It's a 33% decrease in systemwide power consumption, of which the CPU is only part. So if their statement is accurate, it means that they're cutting their TDP by significantly more than 33%, per Amdahl's law.
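
(A quick sketch of the arithmetic in this sub-thread; the 50% CPU share used below is a made-up number purely to illustrate the Amdahl-style point, not an Intel figure.)

    # 50% longer battery life at fixed battery capacity means average platform
    # power drops to 1/1.5 of its old value, i.e. a ~33% reduction (not 25%).
    new_power_fraction = 1 / 1.5
    print(f"platform power reduction: {1 - new_power_fraction:.0%}")   # 33%

    # If only the CPU's slice shrinks (assume it was 50% of platform power),
    # the CPU alone must be cut far more than 33% to deliver that saving.
    cpu_share = 0.5                                    # hypothetical share
    print(f"implied CPU power cut: {(1 - new_power_fraction) / cpu_share:.0%}")  # 67%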

Re:Oh, what's your definition of "matches"? (0)

Anonymous Coward | about a year ago | (#43819813)

"While gaming"....sigh. Who cares. Did you miss the frame rate charts on the prior page? The A4 = unplayable. So, attempting to play a modern game, during the period between when you start, and when you give up in disgust, the A4 pulls less power. What a win for Kabini. The i3 pulls more power, but at least it keeps it's fps above 30.

Re:Oh, what's your definition of "matches"? (1)

WaroDaBeast (1211048) | about a year ago | (#43819933)

"While gaming"....sigh. Who cares.

If you don't care, you shouldn't be trying to make a point.

If you read the quote in the grandparent post, you'll see that the wording makes you think the i3 matches the A4's power consumption while in games. Perhaps you're as good in English as you are in math.

Re:Oh, what's your definition of "matches"? (0)

Anonymous Coward | about a year ago | (#43820337)

"While gaming"....sigh. Who cares. Did you miss the frame rate charts on the prior page? The A4 = unplayable. So, attempting to play a modern game, during the period between when you start, and when you give up in disgust, the A4 pulls less power. What a win for Kabini. The i3 pulls more power, but at least it keeps it's fps above 30.

AMD don't claim the A4 to be for gamers; their A10 and dedicated graphics boards are for that. And yes, Kabini and AMD's other architectures DO WIN, because they cover usage scenarios for 90% of users at superior ease and convenience (GPU+CPU in one package on a small, cheap motherboard) and at a superior price.

Intel CAN NOT win this, they can only win the high end users who need crazy CPU power.

Re:Oh, what's your definition of "matches"? (1)

Shirley Marquez (1753714) | about a year ago | (#43822055)

A fairer comparison with the i3 would be a higher performance AMD part, one that has the same TDP in the real world rather than on spec sheets. I suspect the i3 will still be faster but the gap will be smaller.

Re:Oh, what's your definition of "matches"? (0)

Osgeld (1900440) | about a year ago | (#43819873)

who the fuck games on an i3, it's a Facebook computer

Re:Oh, what's your definition of "matches"? (2)

SuricouRaven (1897204) | about a year ago | (#43819985)

Do not underestimate the demands of poorly-coded flash facebook games.

Re:Oh, what's your definition of "matches"? (1)

Zuriel (1760072) | about a year ago | (#43822871)

Nothing like opening some shitty little flash game and hearing your i7's fan spin up to full speed.

Re:Oh, what's your definition of "matches"? (1)

realityimpaired (1668397) | about a year ago | (#43820591)

An i3 is a perfectly good CPU for casual gaming. Hell, I've been known to game on my laptop's Sandy Bridge Celeron U3600 1.2GHz dual core... it's not a hardcore gaming system, but it is quite usable when I'm not at home to use my desktop. There are quite a few games that will run quite acceptably on it, including most of my Steam library. (Civ5 is a hog, but that game is always CPU-heavy, and running it under Wine on a Celeron is painful).

The AMD system, apparently, won't be as good, or even usable, in that role. And since they say that the i3 matches the A4 in power consumption when idle (which is 90% of the use of this system), why on earth would I buy the A4 for a laptop? If I want to maximize battery life, I'll buy another Celeron or Pentium, and if I want a good compromise I'll buy an i3 and still be better off than with the A4.

And don't start kvetching about the price point either... this was a $430 laptop when I bought it, and it had the Intel in it. For a 13.3" laptop that weighs 3.2lbs, I challenge you to find an AMD-based offering in the same class without going to a netbook.

Re:Oh, what's your definition of "matches"? (1)

Billly Gates (198444) | about a year ago | (#43821247)

Not since the 20th century has the CPU been the bottleneck for games. It has almost always been graphics since the first 3D cards came into existence.

The particular linked CPU is the Atom-competitor version of its APU, not a Core i3 competitor. Besides, I would take this CPU over a Core i3 for consumer use. Notice how your cell phone is all smooth when you move the page up and down with your finger? Your computer, it gets choppy, right? That is because the GPU is in the CPU on your phone, so for small data, no latency is a big plus. The same is true with Windows 8 and tablets. A Core i3 will give less of a visual experience and be all choppy when you scroll your HTML5 apps with your finger due to latency and synchronizing.

I am sure this APU would smoke your Core i3 when doing anything non-work-related. AMD's CPUs are still competitive for virtualization and heavily threaded apps, which is why I picked a Phenom II a few years ago. For $699 I have a hexacore system that runs many instances of VMware or VirtualBox very smoothly. The Intel ones had crippled BIOSes that wouldn't let me do that without buying the more expensive ones.

Re:Oh, what's your definition of "matches"? (1)

evultrole (829158) | about a year ago | (#43822107)

Not sure you realize this, but your laptop is about 5% slower than an Atom D2700. You are using a netbook, dude, it's just a really big heavy expensive netbook.

This laptop [newegg.com] is about 3 times faster than what you are using, just in CPU. Graphics would blow it away as well. Surely you aren't going to claim that less than an inch of size makes it "another class" are you? They are both "thin and light"

Re:Oh, what's your definition of "matches"? (1)

wmac1 (2478314) | about a year ago | (#43821489)

Oh is it?

I have done my whole PhD project in CS simulation (hundreds of thousands of agents with machine learning, discrete event methods, and what not) on my G630 Celeron computer. I run heavy software like MATLAB on the same PC. My previous PC was an AMD 4400 MHz equivalent.

It still is my main PC. If even an i3 is required for your Facebook things, you are doing something wrong.

Re:Oh, what's your definition of "matches"? (1)

Osgeld (1900440) | about a year ago | (#43822499)

and none of that has to react in realtime

Re:Oh, what's your definition of "matches"? (1)

tfranzese (869766) | about a year ago | (#43825931)

I've gamed on the Pentium variety to pass time on the train. Mostly Deus Ex and Age of Empires Online, which it did pretty well.

Re:Oh, what's your definition of "matches"? (1)

Osgeld (1900440) | about a year ago | (#43828103)

well yea, those are 10 year old games

Re:Oh, what's your definition of "matches"? (1)

Rockoon (1252108) | about a year ago | (#43820227)

In addition, all these folks are trying to justify their AMD hate with these A4 benchmarks when the A4 is the lowest end of these new chips, and none of these haters ever want to talk price.

In the price range the A4 comes in, Intel doesn't have any competitive chips. Not a single one at all.

Re:Oh, what's your definition of "matches"? (1)

realityimpaired (1668397) | about a year ago | (#43820605)

In the price range the A4 comes in, Intel doesn't have any competitive chips. Not a single one at all.

In the mobile sphere, where something like the A4 is most likely to actually be used (since they're touting the power consumption), you can easily find $400 laptops with Intel i3 in them. Unless the AMD offering produces laptops in the sub-$300 range without sacrificing things like having a real keyboard or a screen larger than a netbook's, that price point is irrelevant: the manufacturers will happily eat the increased profit, and you the consumer will end up paying the same at the till.

As regards your signature, BTW, you realize that in 1997, MSIE *was* a better browser than all of the competition? IE didn't win out because of the bundling, though that helped, it won out because it was the better product until Phoenix (later Firefox) came along, and that was 5 years after the quote you claim. All of the bloat and proprietary bullshit that people accuse IE of doing these days (and which isn't actually true for MSIE 9 or MSIE 10)? Yeah... Mozilla was doing that in the mid and late 90's.

Re:Oh, what's your definition of "matches"? (1)

Rockoon (1252108) | about a year ago | (#43823577)

In the mobile sphere, where something like the A4 is most likely to actually be used (since they're touting the power consumption), you can easily find $400 laptops with Intel i3 in them.

Why did you just pick $400?

Answer: Because that's what you have to pay for the Intel solution.

Is this important?

Answer: Only if the AMD solution you are comparing against also costs $400.

So, did you justify your argument?

Answer: No, because you never once mentioned the price of AMD solutions, nor went through the effort to see exactly what AMD solutions were available in the same price range and compare the performance of those equally priced devices with the precious i3 that you are drooling over. And this is just as I predicted and mentioned in the post you replied to: you do not want to discuss price.

The facts:

AMD A6 and A8 devices sell for under $400, while you haters are comparing sub-$400 i3 solutions to AMD A4 devices.

Anyone can play this game! All of these AMD devices kick Intel's ass because they all beat the shit out of Intel 486s in the sub-$400 range. Better performance, better power efficiency, and hell... Intel doesn't even offer an integrated GPU even on their top-end 486s.

matches power consumption? (5, Informative)

Luke_22 (1296823) | about a year ago | (#43820167)

What's more, the Core-i3 matches the A4-5000 in power efficiency while its HD 4000 graphics completely outpace the APU.

Has anyone bothered looking at the benchmarks? The overall system power consumption when games were run was 20 watts for AMD and 35 watts for the Core i3.
By my calculation, that's 75% more power consumption than AMD's. Intel hardly "matches" anything...

AMD was still at least 3 watts less power hungry in every other benchmark, too...

Re:matches power consumption? (0)

realityimpaired (1668397) | about a year ago | (#43820621)

If I'm gaming on my laptop, I don't do it on battery. If I'm mobile, while 3W will make a difference in the long-run, it won't make anywhere near as big a difference as turning down the screen brightness will.

Ultimately it comes down to price for the average consumer... and while the Intel offering is more expensive on paper, at the retail point of sale, I expect that the AMD offering will end up being the same as the Intel offering in the low-end laptop market: you can already get i3-based laptops for $400 without having to buy a netbook, and I doubt that the manufacturers are going to leap at the chance to sell AMD for cheaper when consumers have already demonstrated that they'll pay that price point for a low end system.

Re:matches power consumption? (0)

Anonymous Coward | about a year ago | (#43820641)

Come on, only Intel fanboys take Tom's seriously these days. They've been known Intel sell-outs for years. Their "benchmarks", like their angle, are always ridiculously tilted in favour of Intel.

Re:matches power consumption? (3, Insightful)

edxwelch (600979) | about a year ago | (#43820683)

None of the benchmarks have made an apples-to-apples comparison. Either they compare a 35W Pentium to the 15W Kabini, or it's an expensive Core i3/i5.
The Core i3-3217U only appears in laptops costing more than $500. Kabini replaces Brazos, which typically appears in cheap (sub-$400) laptops.

Re:matches power consumption? (1)

Metabolife (961249) | about a year ago | (#43837779)

You also have to consider how much processing you can accomplish with the same amount of energy. If they're matched evenly in gaming performance, then that 20 watts is great. If Intel is much faster, then it might even out or turn the other way.

AMD will be the new favorite. (1)

Anonymous Coward | about a year ago | (#43820225)

AMD will be the new favorite. Their APUs are cheap, give the most bang for the buck, and are space- and power-efficient. A majority of desktop users in the low- to mid-range segment will find what they need in the A-series, and with the upcoming Kaveri even a few high-end users may consider ditching the expensive Intel chip and the big dedicated graphics board.

better performance will come due to drivers (0)

Anonymous Coward | about a year ago | (#43822715)

I guess AMD drivers are not that good yet for the 8000-series GPUs. Better results to come in the next months. The same happened with previous generations: the 13.x drivers multiplied fps in some games, from 0-11 fps to 40 or more.
