Slashdot: News for Nerds

AMD Cancels 28nm APUs, Starts From Scratch At TSMC

Soulskill posted more than 2 years ago | from the changing-horses dept.

AMD 149

MrSeb writes "According to multiple independent sources, AMD has canned its 28nm Brazos-based Krishna and Wichita designs that were meant to replace Ontario and Zacate in the second half of 2012. The company will likely announce a new set of 28nm APUs at its Financial Analyst Day in February — and the new chips will be manufactured by TSMC, rather than its long-time partner GlobalFoundries. The implications and financial repercussions could be enormous. Moving 28nm APUs from GloFo to TSMC means scrapping the existing designs and laying out new parts using gate-last rather than gate-first manufacturing. AMD may try to mitigate the damage by doing a straightforward 28nm die shrink of existing Ontario/Zacate products, but that's unlikely to fend off increasing competition from Intel and ARM in the mobile space."

149 comments

Good or Bad? (1)

Anonymous Coward | more than 2 years ago | (#38140422)

With all the issues at GloFo, this might be a good thing. But it looks like too little, too late.

Old news? (0)

Anonymous Coward | more than 2 years ago | (#38140430)

I thought this was all covered last week some time...

Maybe it's a dupe.

Take your time, let software catch up. (4, Insightful)

Kenja (541830) | more than 2 years ago | (#38140436)

So far I have been totally unable to tax my current CPU past 40% utilization. I think we can take a break and let software catch up and older systems fall off the support map before the next generation of CPUs hit.

Re:Take your time, let software catch up. (3, Funny)

Dunbal (464142) | more than 2 years ago | (#38140498)

Don't worry, the next OS version should do it...

Re:Take your time, let software catch up. (0, Informative)

Anonymous Coward | more than 2 years ago | (#38140504)

You should do more than make slashdot comments and watch porn.

Re:Take your time, let software catch up. (0)

Anonymous Coward | more than 2 years ago | (#38140522)

uhh, just because *you* don't use cpu demanding applications doesn't mean the rest of us don't.

Re:Take your time, let software catch up. (5, Insightful)

CSMoran (1577071) | more than 2 years ago | (#38140556)

So far I have been totally unable to tax my current CPU past 40% utilization. I think we can take a break and let software catch up and older systems fall off the support map before the next generation of CPUs hit.

Just because your usage scenario is not CPU-bound does not mean everyone else's is.

Re:Take your time, let software catch up. (0)

Anonymous Coward | more than 2 years ago | (#38140588)

I work with big data and I have an i7-975 for work which is overclocked to 4.2GHz. I have been thinking about upgrading.

Re:Take your time, let software catch up. (5, Funny)

Anonymous Coward | more than 2 years ago | (#38140680)

I salute you, mythical IT-worker who manages to get an overclocked computer work-approved.

Re:Take your time, let software catch up. (4, Informative)

0123456 (636235) | more than 2 years ago | (#38140734)

I salute you, mythical IT-worker who manages to get an overclocked computer work-approved.

Who said it was approved? At a previous job, a friend inherited a computer from someone who'd left. He never understood why it would crash every few days and hit bugs that no one else seemed to see, until he looked in the BIOS and discovered the previous user had overclocked it.

Re:Take your time, let software catch up. (2, Insightful)

Skarecrow77 (1714214) | more than 2 years ago | (#38141058)

... ok. I'll bite.

If you -know- that it's not stable, why didn't you clock it back down to spec, or at least down to where you can be sure it is truly 100% stable? Aren't you losing more time by doing multiple redundancy checks on your resultant data sets than you're gaining by the few extra clock cycles?

you are doing random spot checks on your data, right?

As anybody who has lived with an -almost- stable overclock for long periods of time knows, if it's not 100% stable, you're getting little computational data errors here and there that are going to add up long term to "omfg my data is borked and has been for 6 months and I didn't even realize".

Re:Take your time, let software catch up. (1)

Skarecrow77 (1714214) | more than 2 years ago | (#38141238)

re-reading, I apologize. I confused you and GGP as the same poster, and thought you were getting errors on a system you were keeping overclocked. My mistake.

Re:Take your time, let software catch up. (1)

Rudeboy777 (214749) | more than 2 years ago | (#38141334)

Did everything work as expected once you set it back to stock speeds in the BIOS?

Re:Take your time, let software catch up. (4, Insightful)

marcosdumay (620877) | more than 2 years ago | (#38140634)

The change in feature size won't just be useful for getting faster processors (although servers could use some of those); it's also important for reducing the power footprint of the chips (this being AMD, that means both the CPU and GPU will use less power) and for reducing their price.

Re:Take your time, let software catch up. (1)

Anonymous Coward | more than 2 years ago | (#38140794)

I think we can take a break

Who is "we"? Oh right, it's everyone who buys microprocessors, because we're all running the same software and doing the exact same things with our computers.

Re:Take your time, let software catch up. (4, Interesting)

gstoddart (321705) | more than 2 years ago | (#38140828)

So far I have been totally unable to tax my current CPU past 40% utilization.

Well, DfrgNtfs.exe is using 25% of my quad-core, and I'm not doing much else. I've gone well into 70% or more at times if I'm actually doing something intensive.

I'm using 7GB out of 8GB of RAM, and if I had 16GB I could probably put a hell of a dent in it too.

I don't even consider what I'm doing to be much of a load, and in the past I've been on machines where something literally was CPU bound for as much as an hour and I needed to walk away.

I don't even find it tough to use up that much resources ... hell, I stopped using Mozilla because it would expand to well over 1GB of RAM overnight (with the same # of windows and tabs that used to fit in 300MB).

I think the software has already caught up ... especially if you're like me and open something and leave it open.

Re:Take your time, let software catch up. (1)

Anonymous Coward | more than 2 years ago | (#38140980)

Defrag? 1995 called and wants its file systems back. News flash to the rest of the world: using (almost) all your RAM is a Good Thing. Can you say RAMdisk?

Oh, for a few 10s of GB of RAM, and an SSD array to fill it.

Re:Take your time, let software catch up. (1)

Lunix Nutcase (1092239) | more than 2 years ago | (#38141026)

1995 called? So ext4 is from 1995? It has an online defrag utility, you know.

Re:Take your time, let software catch up. (1, Flamebait)

gstoddart (321705) | more than 2 years ago | (#38141106)

1995 called? So ext4 is from 1995? It has an online defrag utility, you know.

2009 called ... I'm running Vista. My Linux boxes are all now VMs ... I've no interest in running Linux as my primary box anymore.

But, I see you're living up to your nick.

Re:Take your time, let software catch up. (3, Informative)

Joce640k (829181) | more than 2 years ago | (#38141248)

Vista? Ack.

At least have the decency to install Windows 7.

Re:Take your time, let software catch up. (2)

badran (973386) | more than 2 years ago | (#38141748)

And in what meaningful way would that be different than an up to date Vista?

Re:Take your time, let software catch up. (0)

DiabolicallyRandom (2449482) | more than 2 years ago | (#38141886)

About a billion and one ways.

Re:Take your time, let software catch up. (0)

Anonymous Coward | more than 2 years ago | (#38142082)

He doesn't know. :-\

Re:Take your time, let software catch up. (2)

hairyfeet (841228) | more than 2 years ago | (#38142884)

Oh please, Win 7 is better at RAM management, better about UAC (and doesn't bug the fuck out of you for something simple like throwing crap in the trash), better because of libraries, better taskbar, better with Devices and Printers, better by having an Action Center that doesn't bug you with pop-ups, Aero Snap and Shake, hell, it's better in just about every way! I ran Vista up to SP1 and frankly it was a turd: buggy, memory hogging, lousy with shares. It sucked the big wet titty and I sure as hell wouldn't use the piggie for VMs!

As for TFA... man, this really sucks. I knew AMD was having serious trouble with GloFo (reports were Llano was getting less than 40% good chips per wafer), but damn. Maybe it'll turn out to be a good thing they sold GloFo in the first place. Of course the MAJOR downside is they're gonna have to compete with all of TSMC's other customers, and that could seriously hurt yields and couldn't come at a worse time, with Brazos chips selling as fast as they can crank them in everything from netbooks to all-in-ones to HTPCs.

I just hope they have GloFo keep cranking out the Thuban and Zacate chips until they can get TSMC up to speed. I'm sure that GloFo will need the business, and AMD sure as hell needs the chips. This would royally suck if we had an AMD chip shortage to go with the HDD shortage... I wonder if I should be upping my timeframe on snatching a Thuban?

Re:Take your time, let software catch up. (1)

Carnildo (712617) | more than 2 years ago | (#38142928)

News flash to the rest of the world: using (almost) all your RAM is a Good Thing.

Not really. On my system, performance starts to suffer once applications are taking up all but 1 GB or so; if non-app memory drops below 50 MB, the system becomes unusable.

Re:Take your time, let software catch up. (1)

garyebickford (222422) | more than 2 years ago | (#38143366)

This was a while back, but I once ran a ray tracing project that ran nonstop for two weeks, essentially 100% CPU the whole time. In fact it didn't even finish - it was 2/3 done when someone else pulled the plug on it accidentally. Fortunately the data for that much of the picture was saved to a file as it went. Nowadays the same project would probably take 10 minutes, but hey.

Re:Take your time, let software catch up. (1)

drinkypoo (153816) | more than 2 years ago | (#38143542)

Firefox has allocated 628MB on my 8GB system after running for days. That's still a lot of RAM (although I have the memory cache turned up pretty high on this system) but it's not a gigabyte overnight. I think you were running crappy extensions.

Re:Take your time, let software catch up. (-1, Flamebait)

Anonymous Coward | more than 2 years ago | (#38140830)

anecdotal evidence means you're a moron

slashdot must be quite a change from your pencil pushing life. go play some bingo, old man

Re:Take your time, let software catch up. (0)

Anonymous Coward | more than 2 years ago | (#38141548)

Cool ageism bro. Go play some Pokemon and learn how to write properly.

Are you also a racist, sexist, or homophobe?

Re:Take your time, let software catch up. (5, Informative)

Bengie (1121981) | more than 2 years ago | (#38140844)

With multi-core CPUs, just because you can't reach 100% usage doesn't mean you're not CPU limited.
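The parent's point is simple arithmetic, sketched here in plain Python (the `apparent_utilization` helper is hypothetical, just for illustration): one fully pegged thread on an N-core machine shows up in a task manager as only 100/N percent, even though the program is completely CPU-bound.

```python
def apparent_utilization(busy_threads: int, cores: int) -> float:
    """Total CPU % a task manager would report when each busy thread
    fully pegs one core (capped at the number of cores)."""
    return min(busy_threads, cores) / cores * 100

# A single CPU-bound thread on a quad-core reads as just 25% overall:
print(apparent_utilization(1, 4))   # 25.0
# Only a workload with one busy thread per core shows 100%:
print(apparent_utilization(4, 4))   # 100.0
```

So a single-threaded bottleneck on a quad-core looks like "only 25% used" even while the program can't go any faster.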

Re:Take your time, let software catch up. (4, Interesting)

Skarecrow77 (1714214) | more than 2 years ago | (#38141160)

Exactly. Too bad I already posted in the thread and can't mod you up anymore.

Nobody pays much attention to single-core performance anymore, and I have no idea why. There are tons of programs that people use on a regular basis that are single-core limited.

Intel has made only modest gains in performance per clock cycle since the Core 2 Duo. AMD, I'm pretty sure, is actually going backwards, if I'm correctly remembering some of the Bulldozer vs. Thuban reviews.

Looking at forthcoming offerings, AMD especially seems to be assuming that we're all constantly using our CPUs to run handbrake 24/7 or batch encode a couple hundred wavs to mp3 at a time, and thus would love 12 cores.

Re:Take your time, let software catch up. (2)

ob0 (1612201) | more than 2 years ago | (#38141488)

Nobody pays much attention to single-core performance anymore, and I have no idea why. There are tons of programs that people use on a regular basis that are single-core limited.

Have you seen the Bulldozer reviews? They've been hitting AMD over the head due to its poor single-thread performance (amongst other things...)

Re:Take your time, let software catch up. (4, Informative)

Anonymous Coward | more than 2 years ago | (#38141504)

Nobody pays much attention to single-core performance anymore, and I have no idea why. There are tons of programs that people use on a regular basis that are single-core limited.

There's a very simple reason: physical limitations. The current processor technology is more or less maxed out for single-thread performance. There's probably some gains available by completely changing the instruction set or completely giving up on multi-thread performance, but nothing that Intel can put into a chip they can sell. They can't up clock speed anymore due to the speed of light (except a little bit when doing a die shrink). The obsession with multi-core isn't because Intel and AMD think everyone wants to run more threads; software is moving towards using more threads because Intel and AMD simply can't improve single-thread performance but they, at least for a little while longer, can keep adding more cores.
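The trade-off the AC describes is usually quantified with Amdahl's law (a standard formula, not something from the comment itself): if only a fraction of a program parallelizes, piling on cores hits diminishing returns fast. A minimal sketch:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: overall speedup when only `parallel_fraction`
    of the work can be spread across `cores`; the rest stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even a program that is 95% parallel gains well under 12x on 12 cores,
# which is why single-thread performance still matters.
print(round(amdahl_speedup(0.95, 12), 2))   # 7.74
print(round(amdahl_speedup(0.50, 12), 2))   # 1.85
```

The serial fraction dominates as core counts grow, which is exactly the tension between "can't improve single-thread performance" and "can keep adding cores."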

Re:Take your time, let software catch up. (3, Insightful)

Kjella (173770) | more than 2 years ago | (#38141758)

Looking at forthcoming offerings, AMD especially seems to be assuming that we're all constantly using our CPUs to run handbrake 24/7 or batch encode a couple hundred wavs to mp3 at a time, and thus would love 12 cores.

I think it's quite obvious that AMD didn't have the resources to hit many targets, so they picked two:

1) Laptops/Low-end PCs with Bobcat cores (Fusion/Llano APUs)
2) Servers with Bulldozer cores (Valencia/Interlagos)

Sadly the latter seems to have misfired a bit even in the server arena, but there's no question IMHO that the high-end desktop market was intentionally abandoned. Either that or they missed their design targets by many miles; they can't have been that far off on single-core performance. I can sort of understand it: Intel was already dominating, the Atom threatened their low end (remember, CPU designs have a 2-3 year lead time), and they couldn't afford to lose their bread-and-butter machines. So they aimed Bobcat low (power), Bulldozer wide (cores) and left Intel to compete with themselves. Not to be too much of a cynic, but it's better for AMD to win some markets than to be a loser in all of them.

Re:Take your time, let software catch up. (1)

PRMan (959735) | more than 2 years ago | (#38142222)

Actually, the changes to the core in Windows 7 mean that most situations are nearly evenly split across processors anyway.

I had a batch file at a previous company calling all "single-threaded" applications and during the entire run of the batch, all 4 CPUs were within 5% of each other. Bring up your Task Manager Performance tab someday and leave it up all day at work. You might be surprised.

Re:Take your time, let software catch up. (1)

Anonymous Coward | more than 2 years ago | (#38142678)

Just because Windows continually switches the load from one core to another doesn't mean you're getting performance gains. Got four cores? Notice the load across the box is pinned at 25%? You're burning one core, no more; Windows is just juggling it for some reason (context switches, load leveling, who knows?).

Re:Take your time, let software catch up. (1)

Anonymous Coward | more than 2 years ago | (#38141538)

With multi-core CPUs, just because you can't reach 100% usage doesn't mean your not CPU limited

Actually, it means exactly that. Any software that cannot utilize more than one core, yet hits the limit of any currently marketed x86 processor, is inefficient.

It is embarrassingly easy to create multithreaded software these days. Not taking advantage of this is stupid. And yes, I'm looking at you, Python and Ruby!
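For what it's worth, the jab at Python has a concrete basis: CPython's GIL serializes bytecode execution, so threads don't speed up CPU-bound work there, while `multiprocessing` (one interpreter per worker) does. A minimal sketch:

```python
from multiprocessing import Pool

def count_down(n: int) -> int:
    # Pure CPU work: under CPython's GIL, running this in threads gains
    # nothing, but separate worker processes run truly in parallel.
    while n > 0:
        n -= 1
    return n

if __name__ == "__main__":
    with Pool(4) as pool:
        # Four worker processes, each free to peg its own core.
        print(pool.map(count_down, [1_000_000] * 4))
```

This sidesteps the GIL at the cost of per-process memory and pickling overhead for arguments and results.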

Antivirus? (1)

grimJester (890090) | more than 2 years ago | (#38140866)

You really should install an antivirus program.

Re:Take your time, let software catch up. (1)

Anonymous Coward | more than 2 years ago | (#38140966)

That's easy!
I just start a thread with an infinite loop for every cpu core.

Kids these days...
Can't code themselves out of a wet paper bag to save their lives...

Re:Take your time, let software catch up. (1)

lightknight (213164) | more than 2 years ago | (#38141018)

Hehe. Install Ad-Aware, and run a full system scan. Watch those cores get used...

Re:Take your time, let software catch up. (4, Insightful)

PRMan (959735) | more than 2 years ago | (#38141338)

Seriously, this.

In building computers for my wife and my brother, I just went with lower-end i3 and Phenom X2(4) processors. Why? Because the effective performance difference between the two for the applications they are running is .001%. And the price difference between those and, say, an i7 is 1000%.

But I made sure to get both systems SSD drives. Price difference? About 200% (500GB HDD $60 vs 128GB SSD $125). But the performance difference is about 700%.

Re:Take your time, let software catch up. (3, Informative)

jd (1658) | more than 2 years ago | (#38141390)

Software isn't the bottleneck. Caches are *tiny* compared to the size of even single functions in modern programs, which means they get flooded repeatedly, which in turn means you're pulling from main memory a lot more than you'd like. Multi-core CPUs aren't (as a rule) fully independent - they share caches and share I/O lines, which in turn means the effective capacity is slashed as a function of the number of active cores. Cheaper ones even share(d) the FPU, which was stupid.

The bottleneck problem is typically solved by increasing the size of the on-chip caches OR by adding an external cache between main memory and the CPU. After that, it depends on whether the bottleneck is caused by bus contention or by slow RAM. Bus contention would require memory to be banked, with each bank on an independent local bus. Slow RAM would require either faster RAM or smarter (PIM) RAM. (Smart RAM is RAM that is capable of performing very common operations internally without requiring the CPU. It's unpopular with manufacturers because they like cheap interchangeable parts, and smart RAM is neither cheap nor interchangeable.)

Really, the entire notion of a CPU - or indeed a GPU - is getting tiresome. I liked the Transputer way of doing things (System-on-a-Chip architecture) and I still like that way of doing things. The Transputer had some excellent ideas - it's a shame it took Inmos so long to design an FPU (and a crappy one at that) and given that the T400 had a 20MHz bus at a time most CPUs were running at 4MHz, it's a damn shame they failed to keep that lead through to the T9000.

What I'd like to see is a SoC where instead of discrete cores (uck!) you have banks of independent registers, pools of compute elements and hyperthreading such that the software can dynamically configure how to divide up the resources. There's nothing to stop you moving all the GPU logic you like into such a system. It's merely more pools of compute elements. Microcode is already in use and microcode is nothing more than software binding of compute elements to form instructions. (Hell, microcode was already common on some architectures back in the 80s and was available for microprocessors within a decade of their being invented.) There's nothing that says microcode HAS to be closed firmware from the manufacturer - let the OS do the linking. It's the OS' job to partition resources and it can do so on-the-fly as needs dictate - something a manufacturer firmware blob can't do. Put the first 4 gigs onto the SoC and have one MMU per core plus one spare, so that each core can independently access memory (provided they don't try to access the same page). The spare is for direct access to memory from the main bus without going through any CPU (required for RDMA, which most peripherals should be capable of these days).

Such a design, where the OS converts the true primitives into the primitives (ie: instruction set) useful for the tasks being performed, would allow you to add in any number of other true primitives. Since any microcode-driven CPU is essentially a software processor anyway, you can afford to put extra compute elements out there. Any element not needed would not be routed to. Real-estate isn't nearly as expensive as is claimed, as evidenced by the number of artistic designs chip manufacturers etch in. Those designs are dead space that can magically be afforded, but there's nothing to stop you from replacing them with the necessary inter-primitive buffering to build ever-more complex instructions from primitives without loss of performance. I'm willing to bet HPC would look a whole lot more impressive if BLAS and LAPACK functions were specifically in hardware rather than being hacked via a GPU.

Of course, SoC means larger chips. So? Intel was talking about wafer-scale processors several years back (remember their 80-core boast?) and production has only improved since then. The yield is high enough quality that this is practical and since the idea is to software-wire the internals it becomes trivial to bypass defects. That means you don't have to send signals anything like as far, you don't have all those low-grade connectors and you don't have the agony of modern pin layouts (you need to be able to connect to the bus and each reference voltage including hard ground but that's it since everything else is on-board).

Disk is another slow area. An intelligent drive should be able to have one filesystem dynamically loaded onto it with anything not using that FS being passed back raw as it currently is. By having the dominant FS handled by the drive, you eliminate lots of bus chatter and kernel blocking. Linux wouldn't care, it just sees the virtual FS layer anyway - it doesn't matter if the filesystem itself is physically on the CPU or physically on the drive. There's no direct dependency because there's no tight coupling. Windows shouldn't care as it also has abstraction layers. Why should it care if what lies behind the abstraction is run on one chip or another?

Lastly, compilers are often god-awful bad at adding in parallel processing. Not that they should have to -- the programmer is SUPPOSED to be competent at this. Parallel programming has only been standard CS material since 1978! If programmers aren't capable of writing efficient parallel programs by now, they need to be dropped off a cliff and replaced with programmers who can write. However, there ARE some good compilers out there. Cilk++ is essentially a rewrite of G++ with code-block-level parallel processing. (It's also open-source now, so I hope G++ soon gets some of the modifications ported in.) Beyond that, there are PLENTY of parallel programming languages which are extremely good at this kind of work: Occam is superb, UPC is said to be highly respectable, and the last time parallel compilers were mentioned here, Haskell, Erlang and other such languages were also named as very good.

Several HPC apps - I believe ATLAS is one - have tried some of the parallel processing extensions to regular C but dumped them due to lack of performance. They're more often to be found using Pth, OpenThreads or other standard threading libraries. What matters, though, is that high performance IS achieved by people who bother. If a given programmer can't achieve the same results, it is because they can't be bothered. For all the problems with compilers, I refuse to blame the available technology for the incompetence of code monkeys.

Re:Take your time, let software catch up. (3, Informative)

hkultala (69204) | more than 2 years ago | (#38142406)

Software isn't the bottleneck. Caches are *tiny* compared to the size of even single functions in modern programs, which means they get flooded repeatedly, which in turn means that you're pulling from main memory a lot more than you'd like.

Wrong.

The code size of an average function is much smaller than the instruction cache of any modern processor. And then there are the L2 and L3 caches.

Instruction fetches needing to go to main memory are quite rare.

As for data... that depends totally on what the program does.

Multi-core CPUs aren't (as a rule) fully independent - they share caches and share I/O lines, which in turn means that the effective capacity is slashed as a function of the number of active cores. Cheaper ones even share(d) the FPU, which was stupid.

None of the CPUs that share an FPU between multiple HW threads are cheap.

Sun's Niagara I had a slow shared FPU, but the chip was not cheap.

AMD's Bulldozer, which usually has sucky performance, sucks less on code which uses the shared FPU.

FPU operations just have long latencies, and there are always lots of data dependencies, so in practice you cannot utilize the FPU well from one thread; you need to feed it instructions from multiple threads.

Intel uses HyperThreading for this; AMD's Bulldozer has its CMT/shared-FPU modules. GPUs are barrel processors for the same reason.

The bottleneck problem is typically solved by increasing the size of the on-chip caches OR by adding an external cache between main memory and the CPU.

Much more often the bottleneck is between the levels of the chip's caches. The big outer-level caches are slow, and processors quite often spend small amounts of time waiting for data coming from them. And if you increase the size of the last-level caches, you make them even slower.

One of the reasons for Bulldozer's sucky performance is that it has small L1 caches (so it needs to fetch data from the L2 cache often) but a big, slow L2 cache. So that relatively long L2 latency happens quite often.

External cache... has not been used by Intel or AMD for about 10 years. It's either slow or expensive, and usually both. Now that even internal caches can easily be made with sizes over 10 megabytes, an external cache would have to be very expensive to compete, and even then it would only make sense on some server workloads.

After that, it depends on whether the bottleneck is caused by bus contention or by slow RAM. Bus contention would require memory to be banked with each bank on an independent local bus. Slow RAM would require either faster RAM or smarter (PIM) RAM. (Smart RAM is RAM that is capable of performing very common operations internally without requiring the CPU. It's unpopular with manufacturers because they like cheap interchangeable parts and smart RAM is neither cheap nor interchangeable.)

Smart RAM is a dream, and a research topic in universities. It's uncommon because it does not (yet) exist.

And most problems/algorithms are not solvable by "simple" smart RAM that can only operate on data near each other. And if you try to make it smarter still, you end up making it costlier and slower; it just becomes a chip with a multicore processor and memory on the same die.

There are some computational tasks where smart RAM would improve performance by a great magnitude, but for the >90% of other problems it has quite little use.

Re:Take your time, let software catch up. (1)

Kjella (173770) | more than 2 years ago | (#38142782)

Lastly, compilers are often god-awful bad at adding in parallel processing. Not that they should have to -- the programmer is SUPPOSED to be competent at this. Parallel programming has only been standard CS material since 1978! If programmers aren't capable of writing efficient parallel programs by now, they need to be dropped off a cliff and replaced with programmers who can write. (...) What matters, though, is that high performance IS achieved by people who bother. If a given programmer can't achieve the same results, it is because they can't be bothered. For all the problems with compilers, I refuse to blame the available technology for the incompetence of code monkeys.

So what? Mathematicians have had number and field theory for centuries; that doesn't make them easier to understand. Recipe-programming is easy to understand: there are no dependency issues, no resource contention, just a simple start-to-finish sequence of events. Simple interactions like worker threads and resource pools are easy to work out; just mutex them so you don't grab the same work packet or resource.

Truly parallel programming is to me like having 20 chefs in my house cooking a meal, all using limited utensils and all being completely brain dead. I have to make sure they don't end up in a race condition grabbing the same utensils, deadlock at the stove or one chef pouring something into another chef's casserole. And instead of doing this like a recipe with threads and resource locks, I have to come up with some kind of parallel execution plan. That's what it feels like to me at least.

That's complicated. Not just a little bit complicated, but extremely messy complicated. I just want to hand out a bunch of recipes, set the chefs off, and have simple rules that mean they can't block -- for example, "acquire items in alphabetical order", so if both need a fork and a knife it can never happen that one holds the fork and the other the knife, each blocking the other. I don't have to explicitly lay out the parallelism; it just runs in parallel until it hits a blocker, and then that blocker is resolved by simple rules with a deterministic answer.

Parallel languages turn this upside down, if I want all the chefs to start in parallel I have to declare that. But then I also have to declare all the exceptions to the rule. That no, there's only four plates on the stove, there's one oven, five kitchen knives and so on. I guess in this case with static recipes it's rather simple. But throw in a lot of branching and function calling and it becomes a complete mess trying to figure out if it's safe to declare something a parallel section or not.

If I lose to a chess program it's not because I'm lazy, it's because the computer can check millions of moves more than me. Resource locks lets threads block on demand as needed. Parallel programming puts the problem in your lap. The more complicated the system gets, the better to let the system deal with it than you. If you haven't experienced it that way, you haven't worked on a system complicated enough to overwhelm you. Massive, simple parallelism? Sure. Complex parallelism? I'd do threads any day.
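The "acquire items in alphabetical order" rule above is classic lock ordering, and it's easy to sketch (plain Python; the utensil names and `use` helper are hypothetical): if every thread acquires its locks in one global order, the fork-and-knife deadlock can't occur.

```python
import threading

# Named "utensils" as locks. Sorting the names before acquiring means no
# two threads can each hold one item while waiting on the other's.
utensils = {name: threading.Lock() for name in ("fork", "knife", "spoon")}

def use(*needed: str) -> list[str]:
    ordered = sorted(needed)            # the single global acquisition order
    for name in ordered:
        utensils[name].acquire()
    try:
        return ordered                  # "cook" while holding everything
    finally:
        for name in reversed(ordered):
            utensils[name].release()

# Two threads requesting the same items in opposite order cannot deadlock,
# because both actually lock fork-then-knife:
t1 = threading.Thread(target=use, args=("knife", "fork"))
t2 = threading.Thread(target=use, args=("fork", "knife"))
t1.start(); t2.start(); t1.join(); t2.join()
print("no deadlock")
```

The thread-and-lock style and the "simple deterministic rule" style aren't really opposites: the rule is just a discipline imposed on the locks.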

Re:Take your time, let software catch up. (1)

jd (1658) | more than 2 years ago | (#38143448)

Doesn't matter if the chess program can look at a million more moves or a billion. Chess Grand Masters look at patterns and compute which patterns are better than other patterns, which means that the pattern itself is a function. The better the Grand Master, the better the evaluation function. You need only have a function that evaluates the permutation of pieces on the board to a degree that is greater than the computer's evaluation of the permutation of a billion moves. Since Chess is a Full Information Game, it is provable that, for one of the sides, an evaluation exists such that looking a single move ahead will guarantee a win or a draw against any defense and that the other side can always force a draw if a single error is made.

So, yes, it is because you're lazy. Have you tried to produce a superior evaluation method? Have you really analyzed the maths? No? Then you haven't applied the effort needed. It is not the computer that has won, it is you who has lost.

20 chefs? Critical Path Analysis will show you every single scheduling conflict: when it will occur, where it will occur, and how it will occur. You can timetable everything to that, with the breaks needed to synchronize neatly plotted out for you. 20 threads on a CPA is trivial; a major project might easily have a hundred. A parallel execution plan? Why? CPA will tell you the scheduling. OK, how to divide resources? Well, it's a linear problem, so Operational Research works fine. You want to recombine the elements to be efficient, and the tools for that exist and are taught to first-years.

It IS still just recipes, but if N recipes call for egg yolks to be mixed well, you might as well have one chef mix everything the recipes require and then divide the mix into the right proportions, rather than have 20 chefs doing exactly the same thing. It's not hard. You make it complicated by believing it's complicated. It isn't. What is complicated is changing your mindset from serialized Mrs Beeton cookery to parallelized cooking of the kind that was actually commonplace in early cultures. (There was a fascinating find recently in Mesoamerica of gigantic cookware used to prepare dishes using this form of parallelism.)
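The egg-yolk point, that shared prep should be done once rather than by 20 chefs in parallel, is just hoisting common work out of the parallel section. A sketch (the recipes and yolk counts are invented):

```python
from concurrent.futures import ThreadPoolExecutor

# Invented recipe data: how many mixed yolks each dish needs.
recipes = {"custard": 3, "carbonara": 2, "mayonnaise": 4}

# Hoist the shared step: mix all the yolks once, up front...
yolk_bowl = sum(recipes.values())

def plate(item):
    name, share = item
    return f"{name}: {share} of {yolk_bowl} yolks"

# ...then fan the independent per-recipe work out in parallel.
with ThreadPoolExecutor(max_workers=3) as pool:
    dishes = sorted(pool.map(plate, recipes.items()))
print(dishes)
```

One pass over the shared work, then embarrassingly parallel per-recipe steps: the same shape as one chef mixing for twenty.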

Re:Take your time, let software catch up. (1)

Anonymous Coward | more than 2 years ago | (#38141742)

Go to blender.org, download a copy, run it, press F12. Congratulations: you've taxed your CPU to 100% for a split second. Make a more complex scene and you can tax it for a little longer.

Re:Take your time, let software catch up. (1)

kermidge (2221646) | more than 2 years ago | (#38142362)

Dunno, man, but my CPU is running 98-100% as I write this.

Re:Take your time, let software catch up. (0)

Anonymous Coward | more than 2 years ago | (#38142388)

I have 6 HT cores and 24 GB of RAM. Yes, I max them both out and have been wanting primarily more cores.

Re:Take your time, let software catch up. (3, Funny)

bill_mcgonigle (4333) | more than 2 years ago | (#38143358)

So far I have been totally unable to tax my current CPU past 40% utilization.

Oh, you should try Firefox sometime!

PROOFREAD please (1)

EmagGeek (574360) | more than 2 years ago | (#38140500)

It's TSMC, not TMSC.

Thank you.

Competition ? (4, Informative)

unity100 (970058) | more than 2 years ago | (#38140544)

AMD has no competition in APU arena. It is dominating it.

http://techreport.com/articles.x/21730/8 [techreport.com]

It's actually possible to game with acceptable detail and FPS on entry-to-mid-level laptops without paying a fortune now.

Re:Competition ? (1)

Desler (1608317) | more than 2 years ago | (#38140732)

You misinterpreted the statement to be about APUs whilst the statement was about the CPU market in general.

Re:Competition ? (0)

unity100 (970058) | more than 2 years ago | (#38140774)

it says APU in the article though.

Re:Competition ? (1)

Lunix Nutcase (1092239) | more than 2 years ago | (#38140802)

Yeah and? You do realize that sentences can have different contexts than the one before them, right?

Re:Competition ? (1)

Anonymous Coward | more than 2 years ago | (#38141180)

They should have the same context. That's what paragraphs are for.

Re:Competition ? (2)

edxwelch (600979) | more than 2 years ago | (#38140750)

Very true - AMD compete well against Intel in entry-mid laptops.
Unfortunately, it's a rather narrow segment.

Re:Competition ? (1)

Nadaka (224565) | more than 2 years ago | (#38141000)

I believe that it is the widest consumer segment actually. Desktop usage is shrinking and gaming has been held back by consoles.

Re:Competition ? (1)

edxwelch (600979) | more than 2 years ago | (#38141324)

Desktop share *was* shrinking a couple of years ago, but it leveled off. It's now 50-50.

Re:Competition ? (1)

Bengie (1121981) | more than 2 years ago | (#38140880)

APU market is small, desktop market is big. AMD's APUs compete in both markets.

Pretend you went back 15 years and tried selling a dual-core desktop CPU. You could claim you're doing well in the multi-core desktop market.

Re:Competition ? (1)

Anonymous Coward | more than 2 years ago | (#38142042)

Yeah, the A8 is awesome.

Too bad their new CPU has failed along with the planned APU based off of them.

But then again, a die shrink for the A8, whoohoo, that makes it a tablet or fanless chip. I can't wait to have a proper chip in these silent systems, as the A8 still requires a small fan. That thing would be able to play all current-generation "developed for Xbox" PC games on the go. I don't need fast 8-core systems for non-existent software; I just want more than the future ARM dual and quad cores can provide, and would love to have a silent A8 Llano APU with 4 or 8 GB RAM in every device in the house and on the go. I missed out on the current-gen games and would love to pick them up without DRM for a few bucks on GOG.

AMD = Stagnated. (0)

Anonymous Coward | more than 2 years ago | (#38140640)

AMD = Stagnated. Take it out back and shoot it to put it out of its misery.

Re:AMD = Stagnated. (0)

Anonymous Coward | more than 2 years ago | (#38140716)

I hope you like $500 celerons...

Re:AMD = Stagnated. (4, Interesting)

dc29A (636871) | more than 2 years ago | (#38140768)

I hope you like $500 celerons...

If this was 1995, I'd believe it. In 2011, Intel competes with itself. If they drive up CPU prices, they won't be able to make more and more profits because people do *NOT* need to upgrade. The vast majority of the population is doing fine on a dual core 4+ year old CPU running a browser and IM program and watching videos. Since people do not need to upgrade, but Intel has to sell more and more CPUs, their profits would collapse and then the stock and then ... hilarity ensues.

Re:AMD = Stagnated. (1)

Dcnjoe60 (682885) | more than 2 years ago | (#38140904)

I hope you like $500 celerons...

If this was 1995, I'd believe it. In 2011, Intel competes with itself. If they drive up CPU prices, they won't be able to make more and more profits because people do *NOT* need to upgrade. The vast majority of the population is doing fine on a dual core 4+ year old CPU running a browser and IM program and watching videos. Since people do not need to upgrade, but Intel has to sell more and more CPUs, their profits would collapse and then the stock and then ... hilarity ensues.

Actually, the vast majority of people worldwide don't even have access to a computer. For those who do, the vast majority probably don't even need a dual-core CPU for most of what is done. Engineers, graphic designers, and gamers would be the exceptions to that statement.

Re:AMD = Stagnated. (1)

afabbro (33948) | more than 2 years ago | (#38141046)

Actually, the vast majority of people world wide don't even have access to a computer. For those who do, the vast majority probably don't even need a dual core cpu, for most of what is done.

The vast majority probably don't need more than dial-up connectivity, though it sure is much more pleasant when you have broadband.

Dual core is the same way. It's much more pleasant to work on a dual-core machine than a single core, because most people multitask (listen to music, watch video, browse sites with many tabs, plus OS and antivirus and Dropbox sync and blah blah in the background).

Your point has more validity above dual core. Certainly engineers, graphic designers, and gamers have a better chance of needing 4/6/8/12 cores than the typical "I surf, read email, and watch YouTube" user.

Re:AMD = Stagnated. (2)

tiffany352 (2485630) | more than 2 years ago | (#38141006)

Wow... um. I'm currently running a 4-year-old hand-me-down computer with a Pentium D. I have a browser running, XChat, gedit, and I'm listening to Pandora. The only things I would need a new CPU for are: a) watching 720p HTML5 video, or b) compiling GCC in a fraction of the time. However, if I had to guess, the vast majority of the population only uses their computer for a web browser containing Facebook and YouTube. I know people who /only/ use their computer for Facebook (and that's when they're not using their phone for it).

Re:AMD = Stagnated. (5, Insightful)

GreatBunzinni (642500) | more than 2 years ago | (#38141024)

Your assumption that you can simply ignore AMD's influence in the CPU market and still end up with a relevant model to explain and predict its outcome is both naive and disingenuous. AMD does have products which outperform equivalent Intel products, even when not accounting for Intel shenanigans such as relying on funny compiler tricks, and AMD happens to price them quite attractively. If you haven't considered any AMD offering at any budget for any serious desktop and instead opted to rely only on Intel products, then you are both clueless and economically challenged.

Re:AMD = Stagnated. (1)

Lunix Nutcase (1092239) | more than 2 years ago | (#38141132)

AMD does have products which outperform equivalent Intel products,

Such as?

Re:AMD = Stagnated. (1)

GreatBunzinni (642500) | more than 2 years ago | (#38141378)

If you are really interested to know, then you should simply pick up any random benchmark from the web and compare prices. For example, in some benchmarks [cpubenchmark.net] the AMD FX-8150 processor, which goes for about 220 euros, outperforms Intel Core i7-2860QM systems, which sell for around 500 euros. And in the nearest mom&pop store, an AMD Phenom II X6 1100T goes for 178 euros while the Intel Core i7 870 goes for 240 euros.

But seriously, pull up any random benchmark between recent Intel and AMD processors and compare their performance and their price. You will notice that AMD either comes out ahead or is head-to-head.

Re:AMD = Stagnated. (1)

Lunix Nutcase (1092239) | more than 2 years ago | (#38141978)

So, a disingenuous comparison! Why use the 2860QM to compare to the 8150 when you could compare to the cheaper i7-2600, which is only $30 more and has a 30W lower TDP while still outperforming Bulldozer? Or why not compare that 1100T to the i5-2500, which is way more performant, again with a 30W lower TDP and only $35 more? Oh right, because that doesn't create as insane a price gap.

Re:AMD = Stagnated. (1)

Lunix Nutcase (1092239) | more than 2 years ago | (#38142068)

Correction: the i5-2500 is only $20 more. The i5-2500K, which is even faster, is the one that's $35 more than the 1100T.

AMD = Important. (2)

Mojo66 (1131579) | more than 2 years ago | (#38141234)

Whether you buy AMD products or not, you can't ignore the fact that AMD is an important counter-balance to Intel. Without AMD, Intel would have a monopoly in CPUs which would bring prices up and innovation down until other competitors, like ARM, would fill in the gap, which could take some time.

Re:AMD = Stagnated. (0)

Anonymous Coward | more than 2 years ago | (#38141294)

I carefully checked the numbers last year when I built a new desktop.
AMD's performance per dollar, and even more so performance per watt, was shithouse compared to Intel.
I ended up with a dual-core i5 661. The integrated "GPU" kept me going until I upgraded to a Radeon.

In short: AMD good for graphics, Intel good for CPUs.
If only Intel bought Nvidia and applied some of their low-power wizardry to the domestic heaters they sell as graphics cards...

Re:AMD = Stagnated. (4, Informative)

GreatBunzinni (642500) | more than 2 years ago | (#38141782)

Intel i5 661: http://www.newegg.com/Product/Product.aspx?Item=N82E16819115217&Tpk=i5%20661 [newegg.com]
According to these benchmarks [cpubenchmark.net] , we have:

  • AMD Phenom II X4 965 4,291 $129.99*
  • Intel Core i5 661 @ 3.33GHz 3,286 $175.66*

And this doesn't account for the money spent on a motherboard, which adds a hefty price to any intel offering.

So, looks like you botched your careful number check.

Re:AMD = Stagnated. (3, Informative)

mrchaotica (681592) | more than 2 years ago | (#38142632)

I got an AMD Phenom II X4 840 for $59.99 a few days ago (at Microcenter); I'm sure it's more than half as fast as a 965, so it's an even better value. I got a new motherboard (AMD 760G chipset) with it too; it was also $59.99. Not bad, I think -- would I have been able to find an Intel solution for that price/performance?

Re:AMD = Stagnated. (4, Interesting)

Guppy (12314) | more than 2 years ago | (#38141156)

In 2011, Intel competes with itself.

That's part of the problem. One of the speculated reasons the Atom processor is so far behind, is that Intel was afraid it would cannibalize more profitable segments of its mobile CPU market. As a result, they launched it with a bunch of contractual restrictions on it (customers had to agree not to use it in any notebook larger than 10"-form factor), while using pricing models that discouraged 3rd party graphics (Atoms bundled with Intel's chipset were sometimes actually cheaper than solo Atoms, making nVidia ION combos uneconomical).

Since AMD had no strong CPUs in the netbook segment, everyone simply had to accept these restrictions at first, until AMD introduced its Ontario and Zacate series.

Re:AMD = Stagnated. (2)

Kjella (173770) | more than 2 years ago | (#38141264)

Oh, they can go slower. The world market is still expanding both in size and average price they can afford, companies will still buy them for their X years of support, laptops break down and so on. Intel wouldn't drive prices up as such, they'd bring costs down. Sell 22nm processors at same prices as 32nm processors, does that sound massively profitable to you? It does to me. In the end they'll sell you something that costs like an Atom for the price of a 2600K. Or maybe just slow down their tick-tocks, let each generation soak up twice the profits. I doubt Intel would let AMD die though, that'd bring too much anti-trust scrutiny on their total domination of the world's computers. At death's door would be just fine though.

In any case, I find this news unlikely. TSMC has a crap record for delivering on time with decent yields; their 32nm process was so bad it got scrapped, and the 28nm process is still struggling from what I gather. The only reason they've not been slain in the market for that is that both AMD and nVidia depend on them now, so the graphics market just took a timeout. If Intel had a real graphics division they'd be eating them for lunch by now. GlobalFoundries is what used to be AMD proper; if they aren't able to do 28nm, then they've got a total of zero reliable production facilities, if you ask me. And Intel's already doing volume production on 22nm....

Re:AMD = Stagnated. (0)

Anonymous Coward | more than 2 years ago | (#38140778)

Hope you like power-hungry crap processors like Bulldozer. It's really sad when the six-core Phenom II Black Edition chips are less power-hungry than Bulldozer chips.

Global Foundries (5, Informative)

Anonymous Coward | more than 2 years ago | (#38140684)

The description is somewhat misleading in that Global Foundries is not a "long-time partner," but what were AMD's own internal wafer fabs until Global Foundries was spun out as a separate company in 2009.

TSMC (2)

pavon (30274) | more than 2 years ago | (#38143138)

Yeah, and TSMC is the foundry that ATI has used for years (and still does). The plan with the APUs has always been to move ATI's GPU to AMD's^W Global Foundry's process. They have given up on that and decided to move AMD's CPU to the TSMC process instead. It's a pretty big turn of events.

Extremely useful summary (2, Insightful)

bigredradio (631970) | more than 2 years ago | (#38140686)

Moving 28nm APUs from GloFo to TSMC means scrapping the existing designs and laying out new parts using gate-last rather than gate-first manufacturing. AMD may try to mitigate the damage by doing a straightforward 28nm die shrink of existing Ontario/Zacate products, but that's unlikely to fend off increasing competition from Intel and ARM in the mobile space

After reading the summary (a few times), I came to the conclusion that I know nothing about this topic. Thanks for the heads-up, so that I was not burdened with reading an article that only a select few might understand or care about.

waaait a minute (2)

markhahn (122033) | more than 2 years ago | (#38140696)

So far, all Bobcat-based chips have been made at TSMC, haven't they? So is this really news?

Re:waaait a minute (0)

Anonymous Coward | more than 2 years ago | (#38143180)

Yes, they have. They are all fully synthesizable cores, and the TSMC 28nm process was known from the beginning as the next step. Probably the news here is that the GF 28nm bulk process is late to the party. Otherwise this is all FUD. Didn't read the article, of course.

Apologies to Herman's Hermits (-1)

Anonymous Coward | more than 2 years ago | (#38140714)

I'm the American Voter, I am
The American Voter, I am, I am
Oswald shot a Kennedy from the seventh floor
They've been shot at many times before

And if ev'ry one shot a Kennedy (Kennedy!)
We wouldn't have a worry or a sound (No sound!)
'Cuz there's one good kind of Kennedy
And that's a Kennedy we're certain is dead!

Second verse, same as the first!

I'm the American Voter, I am
The American Voter, I am, I am
Oswald shot a Kennedy from the seventh floor
They've been shot at many times before

And if ev'ry one shot a Kennedy (Kennedy!)
We wouldn't have a worry or a sound (No sound!)
'Cuz there's one good kind of Kennedy
And that's a Kennedy we're certain is dead!

Happy November 22nd!

Bulldozer Impact (1)

andy9o (1235174) | more than 2 years ago | (#38140742)

Hopefully Global Foundries' issues don't impact Bulldozer, or AMD will fall even further behind in the performance desktop arena.

Re:Bulldozer Impact (1)

Lunix Nutcase (1092239) | more than 2 years ago | (#38140840)

Bulldozer is a fail architecture. Lower performance and higher power draw than their own lower-priced chips like the Phenom II six-cores.

Re:Bulldozer Impact (0)

Anonymous Coward | more than 2 years ago | (#38141022)

They already did. Bulldozer was intended to ship at 4+ GHz but Global Foundries couldn't make it. They still have a chance to fix it on the next iteration in 2012, but I'm not holding my breath.

Long-time partner? Really? (4, Informative)

WilliamBaughman (1312511) | more than 2 years ago | (#38140812)

Calling Global Foundries AMD's "long-time partner" really dates "MrSeb"; he must have started reporting tech news in the last three years. Global Foundries isn't just a "partner" to AMD: it's part-owned by AMD, and was spun out of AMD's manufacturing operations and merged with Chartered Semiconductor.

Re:Long-time partner? Really? (1)

ericloewe (2129490) | more than 2 years ago | (#38140852)

How bad is it when what used to be your in-house fab merits a last-minute change to a competitor's relatively different process?

Re:Long-time partner? Really? (0)

Anonymous Coward | more than 2 years ago | (#38140944)

Maybe this was a reason to spin it off; it was limiting their options.

Re:Long-time partner? Really? (5, Interesting)

confused one (671304) | more than 2 years ago | (#38141112)

All true; but they're down to 9% ownership and, according to the articles, no longer have the right to appoint someone to the GloFo board. Looks like the relationship is becoming increasingly sour.

Time to wise up, Amd (0)

Anonymous Coward | more than 2 years ago | (#38140992)

Too bad AMD doesn't do something sensible for a change, like cancelling Bulldozer and releasing a real 8-core Phenom II instead...

Re:Time to wise up, Amd (1)

lightknight (213164) | more than 2 years ago | (#38141116)

How about a Phenom that can be used in a multi-socket motherboard? It might destroy their Opteron market share, but they would own the desktop + server market.

5 x ~$200 Phenom II X6s...30 cores for $1000.

Re:Time to wise up, Amd (2)

Lunix Nutcase (1092239) | more than 2 years ago | (#38141296)

Or get an i5-2500K, which is faster than a lot of the X6s, for only like 20 bucks more.

Re:Time to wise up, Amd (1)

Lunix Nutcase (1092239) | more than 2 years ago | (#38141318)

And a 30W lower TDP.

Re:Time to wise up, Amd (1)

Cassini2 (956052) | more than 2 years ago | (#38143996)

AMD tried this once before with their AMD QuadFX 4x4 concept [wikipedia.org] . It didn't go anywhere.

The problem is that most games are insufficiently multi-threaded to take advantage of a dual-processor architecture. A hard-core group of gamers exists that would purchase dual- and quad-processor Opteron and Xeon motherboards if it resulted in increased game performance. Unfortunately, the best game performance is often obtained from single-processor desktop chips.

Bottom line: Games often struggle at keeping more than 2 to 3 cores busy. As a result, better performance for the dollar is obtained by purchasing better video cards.

The Maturation of the American Economy (1)

icongorilla (2452494) | more than 2 years ago | (#38141522)

Only one x86 CPU manufacturer can and should survive, apparently. Maybe Intel or Microsoft or Apple will buy them out to put them out of their misery. The quicker customers can box themselves in, the better. Choice is fleeting and, obviously, choosing the current "best" processor is always in your "best" interest, with no thought of the long term. But maybe ARM really is meant to eventually replace the x86 architecture.

Meaningless given the Atom problems for Intel.... (1)

bingbangboom (2457958) | more than 2 years ago | (#38141706)

Intel has the Atom line (the current generation is garbage) and the i3-23X7M (at a $100-$200 premium) competing with AMD on the low end.

Intel's next-generation Atom has no 64-bit drivers or DirectX 10 for its PowerVR chipset:
http://news.softpedia.com/news/Intel-Cedar-Trail-Atom-Won-t-Receive-64-bit-Graphics-or-DirectX-10-1-Driver-232915.shtml [softpedia.com]

__________________


Fusion "2.0" was already in the works:
http://www.xbitlabs.com/news/mobile/display/20111121213529_AMD_Readies_Brazos_2_0_as_Krishna_Wichita_Get_Delayed.html [xbitlabs.com]

IIRC, these were scrapped because OEMs weren't going to design products around a 6-month lifecycle, hence they are skipping a generation.

You have to silently face East at 11am EST (2, Funny)

PopeRatzo (965947) | more than 2 years ago | (#38142232)

Financial Analyst Day in February

Oh my god, there's less than 70 shopping days left!

It's tradition in my house that on Financial Analyst Day, or FAD as we call it, we make spiced wine and spike it with DMT, then sit around singing appropriate songs, such as "Money" by Pink Floyd, "Money (That's What I Want)" by the Beatles and "Gimme da Loot" by Biggie Smalls.

Then, sitting in a circle, we pass around a revolver with only one shell loaded and spinning the cylinder, we point at the person to the left and pull the trigger.

It's by far my favorite holiday.

AMD APU graphics make big difference (2, Interesting)

Anonymous Coward | more than 2 years ago | (#38143608)

APU unlikely to fend off increasing competition from Intel? Most Intel Atom-based netbooks/tablets/whatever that I know of have the GMA 3150, which runs at 200 MHz max and has 2 shader units. The C-50 has 80 unified shaders running at 280 MHz (yes, again low, but I'm guessing 80 things working in parallel make up for it; please correct me if I'm wrong), supporting DX11, OpenGL 4.1 and UVD 3. Way better than Intel graphics. True, the CPU isn't very fast, but for things like video playback, 2D/3D games, and other applications? It beats Intel hands down. I love Intel for their Linux support, but they just don't make graphics hardware for gaming.
