
AMD Unveils Barcelona Quad-Core Details

kdawson posted more than 7 years ago | from the four-in-hand dept.


mikemuch writes, "At today's Microprocessor Forum, Intel's Ben Sander laid out architecture details of the number-two CPU maker's upcoming quad-core Opterons. The processors will feature sped-up floating-point operations, improvements to IPC, more memory bandwidth, and improved power management. In his analysis on ExtremeTech, Loyd Case considers that the shift isn't as major as Intel's move from NetBurst to Core 2, but AMD claims that its quad core is true quad core, while Intel's is two dual-cores grafted together."

206 comments

Intel's Ben Sandler? (1, Funny)

TheLink (130905) | more than 7 years ago | (#16389447)

"Intel's Ben Sander laid out architecture details of the number-two CPU maker's upcoming quad-core Opterons."

Talk about dual processing being grafted on... ;)

Re:Intel's Ben Sandler? (1, Offtopic)

leonmergen (807379) | more than 7 years ago | (#16389525)

I hate your sig.. it's so early.. how dare you.. I hate your sig, need coffee.. aahh

Re:Intel's Ben Sandler? (0, Offtopic)

MarkRose (820682) | more than 7 years ago | (#16389745)

I hate your sig.. it's so early.. how dare you.. I hate your sig, need coffee.. aahh

I think you need more than coffee. Slashdot hasn't had links like that for a while now!


(tagging beta) [slashdot.org]

Re:Intel's Ben Sandler? (3, Funny)

mattmacf (901678) | more than 7 years ago | (#16389813)

Ok guys, CUT THE SHIT!! It's four in the morning and the last thing I want to do is request a new password just to finish posting a comment and come back to find new comments that prompt me to need to go back to my email and login AGAIN

See, some of us just don't ever log out, and every time I come back to my computer Slashdot is waiting happily for me to return. But you couldn't just let that be, could you? nooooooooooo... every JACKASS WITH AN AGENDA and a COMPLETELY UNFUNNY SIG has to dick me around tonight instead of just letting me post in peace.

My apologies, I seriously need some sleep.


Memory Controllers (5, Funny)

ExploHD (888637) | more than 7 years ago | (#16389479)

the memory controllers now support full 48-bit hardware addressing, which theoretically allows for 256 terabytes of physical memory.
256 terabytes should be enough for anybody.
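The arithmetic behind that ceiling is easy to check: 48 address bits give 2^48 addressable bytes, which comes out to exactly 256 TiB. A quick sanity check:

```python
# 48-bit physical addressing: each extra address bit doubles the space.
address_bits = 48
addressable_bytes = 2 ** address_bits

TiB = 2 ** 40  # one tebibyte
print(addressable_bytes // TiB)  # → 256
```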

Re:Memory Controllers (1)

edwardpickman (965122) | more than 7 years ago | (#16389571)

12 billion polygons and 25 terabytes of texture maps in Maya will max that out easily. Why such a low ceiling? They should have gone for at least 512 terabytes of physical memory. By the time they release Duke Nukem Forever it'll exceed that for sure.

Re:Memory Controllers (0)

sacx13 (1011547) | more than 7 years ago | (#16389673)

In 1981 Bill Gate's delcared: ``640K of memory should be enough for anybody.'' Regards

Re:Memory Controllers (2, Informative)

aussie_a (778472) | more than 7 years ago | (#16389743)

No he didn't. But please, don't let that stop you claiming otherwise.

Re:Memory Controllers (0)

Anonymous Coward | more than 7 years ago | (#16389925)

Of course he said it. Just because he later denies it doesn't mean he isn't lying. As the head of the world's biggest software company, would you admit to making such a terribly wrong prediction about the future of computing? No. Actually, you would deny it as strongly as possible.

Re:Memory Controllers (0)

x2A (858210) | more than 7 years ago | (#16390753)

"Of course he said it"

There's no "of course" about it. The visionary who was laughed at when he said there would be a computer in every home couldn't see a use for more than 640K? You really believe that? Whatever you can say about Bill Gates, being unable to imagine people having uses for a computer (and a more powerful computer) is not one of his failings.

Re:Memory Controllers (1, Insightful)

Anonymous Coward | more than 7 years ago | (#16391383)

Like how he missed the word processing, file/print sharing, directory/identity services, internet, and internet search booms? The reason they're ahead in those fields is that they used (questionable) business tactics to claw their way to the front.

The "computer in every home" thing wasn't a profound prediction: there was a
big dollar sign pointing there from his business model.

Aside from the fact that there isn't really a (PC) computer in every USA home,
if we do generously concede Bill that one prediction, he has missed a great
deal more to really be considered a computer visionary.

A business visionary, perhaps.

Re:Memory Controllers (1, Funny)

Anonymous Coward | more than 7 years ago | (#16389975)

Please read parent more carefully.

Clearly "Bill Gate's" is shorthand for "Bill Gate has", not referring to Microsoft's Bill Gates at all, and you can't actually *prove* he didn't delcare that in 1981, nobody knowing what "delcare" means.

The joy and marvel of logic at play!

Re:Memory Controllers (0)

Anonymous Coward | more than 7 years ago | (#16390233)

<rant>

Screw you and your little slack-jawed, post-modernist nonsense.

Historical events happen, and then some limp-wristed twit has to come along and try to inject ambiguity.

"What is truth? We can't prove anything, so truth must be whatever our degenerate overlords tell us to feel at the moment."

May you be violently ignored.

</rant>

Re:Memory Controllers (0)

Anonymous Coward | more than 7 years ago | (#16390959)

In the undying (rumored) words of the master Bill Gates: "640KB ought to be enough for anyone"

wha? (2, Funny)

macadamia_harold (947445) | more than 7 years ago | (#16389483)

AMD claims that its quad core is true quad core, while Intel's is two dual-cores grafted together.

So Intel's Ben Sander claims that AMD's claim is that Intel claims that their dual-cores grafted together qualify as quad-core technology? That's not confusing at all.

Re:wha? (1, Insightful)

Anonymous Coward | more than 7 years ago | (#16389511)

AMD is limited to claims nowadays!

On snap! (5, Funny)

joe_cot (1011355) | more than 7 years ago | (#16389489)

"In his analysis on ExtremeTech, Loyd Case considers that the shift isn't as major as Intel's move from NetBurst to Core 2, but AMD claims that its quad core is true quad core, while Intel's is two dual-cores grafted together."
BUUUUUUUUUURNED
Next week: Intel responds by telling us how fat AMD's mother is.

Mine goes to eleven (2, Funny)

Anonymous Coward | more than 7 years ago | (#16389569)

Quad core? Bah! That's only 4.
Wake me up when they have a processor that goes to eleven.

Re:Mine goes to eleven (-1, Troll)

10Ghz (453478) | more than 7 years ago | (#16390971)

Who cares if it has zillion cores and threads, if it still sucks?

eh? (1)

dexomn (147950) | more than 7 years ago | (#16389577)

I thought Ben Sander worked for AMD.

Re:eh? (1)

thoryorak (302686) | more than 7 years ago | (#16389593)

Ben Sander does work for AMD. It's pretty obvious in the original Extremetech article. Unless AMD is letting Intel people work on their new projects.

Re:eh? (1)

mr_matticus (928346) | more than 7 years ago | (#16389623)

Why not? Microsoft's Windows division lets Apple do their work.

Sorry, couldn't resist. Of course Ben works for AMD. This is just yet another flawed Slashdot summary (are there ANY accurate and well-written summaries on Slashdot these days?).

Could I still count??? (1, Interesting)

RuBLed (995686) | more than 7 years ago | (#16389611)

"AMD claims that its quad core is true quad core, while Intel's is two dual-cores grafted together."

So a Siamese twin is not really a true twin because they are two persons grafted together? :)

Re:Could I still count??? (-1, Flamebait)

Anonymous Coward | more than 7 years ago | (#16389689)

No, but your mom's a conjoined twin.

Re:Could I still count??? (2, Insightful)

Mikachu (972457) | more than 7 years ago | (#16389695)

So a Siamese twin is not really a true twin because they are two persons grafted together? :)

No, actually, it's more like saying that siamese twins are not actually two people in the same body because they're grafted together.

Re:Could I still count??? (1)

Tsagadai (922574) | more than 7 years ago | (#16389973)

Actually, as opposed to creating Siamese twins, AMD claims to have created one twin with 4 split personalities.

Re:Could I still count??? (1)

Derosian (943622) | more than 7 years ago | (#16390337)

This is like an AMD processor at, say, 1.8GHz processing something just as fast as an Intel at 2.6GHz.

Why? Because Intel will look for shortcuts to a goal. Yes, they have quad-core processors, but the dies weren't made for that purpose; they will, however, most likely run fast. AMD, on the other hand, makes a processor for the specific purpose of being quad-core.
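The clock-speed comparison the parent is reaching for boils down to a first-order model: delivered throughput is roughly instructions-per-clock times frequency, so a lower-clocked chip with higher IPC can keep pace. The IPC figures below are invented for illustration only, not measured values for either vendor:

```python
# Rough first-order model: throughput ≈ IPC × clock (GHz),
# giving billions of instructions retired per second.
# Both IPC values are hypothetical, chosen only to show how a
# 1.8GHz part can match a 2.6GHz part.
def throughput_gips(ipc, clock_ghz):
    return ipc * clock_ghz

high_ipc_chip   = throughput_gips(ipc=2.9, clock_ghz=1.8)
high_clock_chip = throughput_gips(ipc=2.0, clock_ghz=2.6)

print(round(high_ipc_chip, 2), round(high_clock_chip, 2))  # → 5.22 5.2
```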

Re:Could I still count??? (3, Funny)

MiniMike (234881) | more than 7 years ago | (#16391681)

No, it's more like comparing Siamese Quadruplets against two sets of Siamese twins stapled together.

There's a nice image to drink your coffee to...

How obscure is this? (1)

MobileTatsu-NJG (946591) | more than 7 years ago | (#16389635)

"AMD Unveils Barcelona Quad-Core Details"

It's the processor that runs like a dog with no nose!

Re:How obscure is this? (0)

Anonymous Coward | more than 7 years ago | (#16390171)

It's gonna be... fantastic!

(re: how obscure is "processor that runs like a dog with no nose")

Once again... (5, Insightful)

tygerstripes (832644) | more than 7 years ago | (#16389647)

Firstly, can I just say that stating that "the shift isn't as major as Intel's move from NetBurst to Core 2" is like... er... comparing a decent incremental car improvement with swapping a bicycle for a car. Or something. I'm not saying Core2Duo isn't great tech, but look; NetBurst was shit. Everyone knows it. They flogged that horse for far too long, so comparing on the grounds of the proportional improvement is not a useful comment. It's like when the thick kid in school got the "most improved" award, and everyone sat there and went "Well yeah, but what was his alternative?".

As for the quad-core thing, it's the same story all over again. Intel rush out a solder-together-two-chips job to beat the competition to market, and then the actual innovators come out with something coherent that works more efficiently etc.

I'm not saying the AMD will necessarily be better. What I'm saying is I don't care who gets to market 2 months earlier. I want the better chip, and I can live with the mystery for a few weeks.

Although, frankly, I can barely afford to eat having just built a decent Core2Duo rig, so I won't be investing either way just yet...

Re:Once again... (1)

Calinous (985536) | more than 7 years ago | (#16389847)

NetBurst was a good architecture - the only problems with it were total heat and hotspots inside the processor. This kept it from reaching its expected 10GHz (though it was able to run at 6GHz on liquid nitrogen cooling). Now, if a 3GHz P4 is underwhelming, you couldn't say the same about a 6GHz one.

Re:Once again... (3, Informative)

tygerstripes (832644) | more than 7 years ago | (#16389891)

It wasn't that good. AMD came out with an architecture which, in practical terms, was better designed, while Intel just kept trying to push the envelope with this very hot chip, and steadily lost market share as a result. Core2Duo is fantastic, relatively speaking, but it was a very long time coming...

Re:Once again... (1)

masklinn (823351) | more than 7 years ago | (#16390213)

NetBurst was a good architecture - the only problem with it was total heat, and hotspots inside the processor. This kept it from reaching its expected 10GHz

... and NetBurst was underperforming at the speeds it could reach. In a word, no, it wasn't a good architecture because it only worked well in FantasyDreamLand where heat dissipation doesn't exist.

Re:Once again... (3, Informative)

LaughingCoder (914424) | more than 7 years ago | (#16390995)

Netburst was designed for a market that touted clock speed as the performance measure for CPUs. AMD, with a big helping hand from the gamers, changed the game into rewarding true benchmark/performance rather than simple clock speed. I suppose if Intel had managed to achieve 10GHz clocks their performance would have been top notch, though one wonders how long those instruction pipelines would have to be ... and how much power they would have burned.

Now Intel has out-benchmarked AMD, and is attempting to change the rules again to performance-per-watt. This next wave should be interesting to watch.
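The "performance-per-watt" metric the parent mentions is just benchmark score divided by sustained power draw. Every number in this sketch is invented for illustration; none are measured figures for any real chip:

```python
# Performance-per-watt: benchmark score / sustained power draw.
# All values below are hypothetical, purely to show how the metric
# can reorder a ranking based on raw score alone.
chips = {
    "high-clock design": {"score": 5200, "watts": 130},
    "efficient design":  {"score": 4800, "watts": 65},
}

for name, c in chips.items():
    print(name, round(c["score"] / c["watts"], 1))
# The slower chip wins once power enters the denominator.
```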

Re:Once again... (1)

eebra82 (907996) | more than 7 years ago | (#16390009)

Firstly, can I just say that stating that "the shift isn't as major as Intel's move from NetBurst to Core 2" is like... er... comparing a decent incremental car improvement with swapping a bicylce for a car. Or something. I'm not saying Core2Duo isn't great tech, but look; Netburst was shit. Everyone knows it.

We all know that already. The point is that AMD needs that kind of jump to get ahead of the competition like it was half a year ago.

I would say that if the writer's point is that AMD needs an equally big advancement in tech, he's dead-on right. Why? Simply because the lead AMD had has been almost entirely erased, and AMD will need just as much to recover if it wants the top position back.

Re:Once again... (0)

Anonymous Coward | more than 7 years ago | (#16390057)

In some cases two dual-core dies on a chip is better for the manufacturer: better yield and easier validation lead to cheaper parts that are easier to clock higher. IBM has been doing this for a long time. What kind of implementation the chip has may be of interest to some, but is of no interest to many. What matters is performance, price/performance, and power consumption.

Note to AMD: We don't care (4, Insightful)

cperciva (102828) | more than 7 years ago | (#16389649)

AMD claims that its quad core is true quad core, while Intel's is two dual-cores grafted together

Note to AMD: We don't care about the implementation details. We care about performance, cost, and power consumption; the clock speed, cache sizes, and how cores talk to each other is irrelevant.

For all I care, Intel's "quad core" processor could be using a team of psychic circus midgets.

Some of us do. (1)

the_hoser (938824) | more than 7 years ago | (#16390041)

I care. I think this kind of tech is cool. If you read into it a bit, it gives you a decent enough idea of how much better it will perform. It's not always accurate, but it's usually correct.

And, according to the article, this was all coming from the mouth of an Intel person, not AMD.

Re:Some of us do. (0)

Anonymous Coward | more than 7 years ago | (#16391641)

The article is ok, the announcement is _wrong_: the guy is from AMD.

Re:Note to AMD: We don't care (1)

ilmdba (84076) | more than 7 years ago | (#16390051)

We don't care about the implementation details.
I think you mean you don't care. The rest of us, who use processors by the thousands, will be over here caring about the implementation details.

Re:Note to AMD: We don't care (1)

Jeppe Salvesen (101622) | more than 7 years ago | (#16390291)

You are badly confused.

The end is a well-performing processor that doesn't produce too much heat or cost too much.

The means is technology. The implementation details may be fascinating, but what matters is benchmarks vs total power usage.

Re:Note to AMD: We don't care (3, Interesting)

aminorex (141494) | more than 7 years ago | (#16390571)

"...what matters is benchmarks..."

Lies, damned lies, and statistics. I both agree and disagree. Throughput on applications is what matters to end users. Synthetic benchmarks are useful (and so matter) in as much as they identify specific architectural performance characteristics for a given implementation. They are less than useful (and do not matter) when they do not correspond in a predictable way to throughput results.

"...vs total power usage..."

For your application, perhaps. Most home and office users don't care about the power dissipation of their CPU, as long as the cooling rig is zero-maintenance. GPUs completely overwhelm small variations in CPU for gamers these days. For high-throughput computing systems, there is a major shared/distributed memory split. For shared memory systems (i.e. capable of scaling throughput on multithreaded applications by increasing CPU counts), interconnect scalability matters more than any thing else, and AMD wins handily. For distributed memory systems, blade farms, etc, scalability and rank density will be determined by power dissipation, and there, finally, I can agree with your comment, and Intel may have a (very small) lead. It's a rather small slice of a diverse market, however.

Re:Note to AMD: We don't care (5, Informative)

Anonymous Coward | more than 7 years ago | (#16390099)

Some of us do care. Some for work, some for fun. AMD's "designed as quad-core" approach has some notable consequences, especially in the cache layout that (on paper, of course) seems very well suited to virtualization -- much more so than the Intel solution in TFA.

AMD: a shared L3 feeding core-specific L2 caches. Intel: each core-pair sharing a L2 cache. AMD's approach better avoids threads competing for the same data (thanks to copying it from L3 to every L2 that needs it), while keeping access latencies more uniform and predictable (thus better optimizable).

Other AMD enhancements look more like catch-up to Core 2: SSE [and it's "Extensions", dammit, not "Enhancements"] paths from 64bit to 128bit, more advanced memory handling (out-of-order loads versus Intel's disambiguation et al.), more instructions per clock by beefier decoding (more x86 ops through fast path instead of microcode) and more "free" ops (where Intel added way more discrete execution units from Core to Core 2).

If AMD's quad manages to be better due to better memory bandwidth and latency (in practice), then they were quite right about "true quad-core" :)
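The two layouts described above can be sketched as simple topology maps. This is illustrative only, with no claim about real cache sizes or latencies; the names are invented labels:

```python
# "True quad-core" layout: one shared L3 feeding a private L2 per core.
shared_l3 = {
    "L3": ["L2_core0", "L2_core1", "L2_core2", "L2_core3"],
}

# MCM-style layout: two dual-core dies, each core pair sharing one L2.
paired_l2 = {
    "L2_die0": ["core0", "core1"],
    "L2_die1": ["core2", "core3"],
}

def share_a_cache(topology, a, b):
    # True if a and b sit under the same cache entry in this sketch.
    return any(a in clients and b in clients for clients in topology.values())

# On the shared-L3 layout every core is one hop from the same L3, so
# latency is uniform; on the MCM layout core0 and core2 share nothing
# on-package and must communicate off-die.
print(share_a_cache(paired_l2, "core0", "core1"))  # → True
print(share_a_cache(paired_l2, "core0", "core2"))  # → False
```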

Re:Note to AMD: We don't care (0)

Anonymous Coward | more than 7 years ago | (#16390305)

that's the first useful comment I've read on this story, mod insightful please.

Re:Note to AMD: We don't care (2, Insightful)

tygerstripes (832644) | more than 7 years ago | (#16390103)

Is that meant to be sarcastic?

You don't care because you don't understand. Performance, cost and power consumption are directly affected by such things as clock-speed, cache, core integration, architecture etc, and different aspects offer different advantages for different uses.

If it were that easy to put a reliable figure on Performance, the Megahurtz shambles would never have happened.

Re:Note to AMD: We don't care (1)

mabinogi (74033) | more than 7 years ago | (#16390461)

> You don't care because you don't understand. Performance, cost and power consumption are directly affected by such things as clock-speed, cache, core integration, architecture etc.

Yes, but saying "Oooh, our chip is true quad core and yours isn't" doesn't on its own say anything about final cost or performance. They might as well have said "But ours are red".

It's a clue that it might perform better with all other factors being equal, which they're not, so it's still a useless statement. Until the benchmarks are out, it's all just speculation.

Re:Note to AMD: We don't care (1)

Kijori (897770) | more than 7 years ago | (#16391547)

They might as well have said "But ours are red".

I think you chose a bad example. Everyone knows red ones go faster!

Re:Note to AMD: We don't care (0)

Anonymous Coward | more than 7 years ago | (#16390637)

Note to Slashdot user "cperciva": Actually, some of us do care about processor architecture.

Question (0)

Anonymous Coward | more than 7 years ago | (#16390803)

It's clear both AMD and Intel are speeding up their processors, but didn't they both say that they were going to focus more on features like Vanderpool and Pacifica virtualization? Whatever happened to Vanderpool/Pacifica for normal chips? Have they given up plans to produce those chips or are they going to be shipped "real soon now"?

Re:Note to AMD: We don't care (1)

name*censored* (884880) | more than 7 years ago | (#16390899)

Well maybe YOU don't care about how psychic circus midgets consume power, but uh....

Wait what was the question?

Re:Note to AMD: We don't care (4, Insightful)

Visaris (553352) | more than 7 years ago | (#16391133)

Note to AMD: We don't care about the implementation details. We care about performance, cost, and power consumption; the clock speed, cache sizes, and how cores talk to each other is irrelevant.

AMD is taking the route that will give better performance. I hear you saying that soldering some copper pipes with rubber bands would be fine as long as it performed. The point is that it will work... just not very well.

If you don't think I'm right, look at Intel's own product roadmap. They plan to release a new version of Kentsfield that has all four cores on one piece of Si, with a shared cache, just like AMD is about to do... only later in 2007, after AMD's version comes out. When the two major chip companies move in the same direction, it usually means it's the right one. The only difference is that AMD is going to get there sooner, because they didn't bother playing around with this MCM (Multi-Chip-Module) junk. Intel just wants to get to market first; they don't seem to put quality first.

Re:Note to AMD: We don't care (1)

andr0meda (167375) | more than 7 years ago | (#16391199)



AMD is taking the route that will give better performance. I hear you saying that soldering some copper pipes with rubber bands would be fine as long as it performed. The point is that it will work... just not very well.

If you don't think I'm right, look at Intel's own product roadmap. They plan to release a new version of Kentsfield that has all four cores on one piece of Si, with a shared cache, just like AMD is about to do... only later in 2007, after AMD's version comes out. When the two major chip companies move in the same direction, it usually means it's the right one. The only difference is that AMD is going to get there sooner, because they didn't bother playing around with this MCM (Multi-Chip-Module) junk. Intel just wants to get to market first; they don't seem to put quality first.


This sounds very XP. I like XP. Yet, someone please explain why all managers seem to believe that the copper pipes soldered together will hold and earn them fortunes? Is it the attractiveness of the gamble in and of itself?

Re:Note to AMD: We don't care (1)

Visaris (553352) | more than 7 years ago | (#16391419)

Yet, someone please explain why all managers seem to believe that the copper pipes soldered together will hold and earn them fortunes? Is it the attractiveness of the gamble in and of itself?

Intel will be first to market with QC chips. At that time, Intel's product will be better... because AMD won't have one yet. That is why Intel is taking the route it is. Good for business? Maybe... it just puts Intel that much further behind in the long run, only for some short-term gains... I don't have a business degree, but I don't think it's worth it.

Barcelona (0)

Anonymous Coward | more than 7 years ago | (#16389675)

Barcelona is the second largest city in Spain, capital city of Catalonia and of the province with the same name. It is located in the comarca of Barcelonès, along the Mediterranean coast (41°23′N 2°11′E) between the mouths of the rivers Llobregat and Besòs. As capital city of Catalonia, Barcelona houses the seat of the Generalitat de Catalunya and its Conselleries, the Parliament of Catalonia and the Supreme Court of Catalonia. Barcelona has a Mediterranean climate, with mild winters and hot, dry summers. January and February are the coldest months, averaging temperatures of 10 °C. July and August are the hottest months, averaging temperatures of 25 °C. Barcelona, Wikipedia [wikipedia.org]

Someone at AMD was thinking about his summer vacation?

Re:Barcelona (2, Funny)

Plammox (717738) | more than 7 years ago | (#16389731)

If anybody at AMD had watched Fawlty Towers, maybe they would have opted for Madrid instead.

(Manuel with thick Spanish accent:) Mr. Fawlty! I'm from Barcelona, I know *notheeeng*!

Re:Barcelona (0, Offtopic)

elbonian (950259) | more than 7 years ago | (#16389865)

Well, in fact, Manuel is more a Madrid name than a Barcelona name. In Barcelona they speak the Catalan language, and the Catalan equivalent of the Spanish name Manuel is Manel.

Re:Barcelona (1)

Plammox (717738) | more than 7 years ago | (#16390277)

In fact, I'm told that in the Catalan translation of Fawlty Towers Man(u)el says he's from Madrid.

But since this is an early eighties UK tv production, they surely didn't get all the Spanish vs. Catalan cultural differences right.

I'm sorry, but.. (1)

n1hilist (997601) | more than 7 years ago | (#16389729)

I find it very hard to get excited about AMD's developments. I love them, I love their CPUs, but the chipsets on the motherboards I've previously owned have always been dodgy and hard to get working in Linux. Maybe I just have bad luck?

Amazing analysis (4, Funny)

nanoakron (234907) | more than 7 years ago | (#16389781)

AMD: 4=4
Intel: 4=2x2

Where do they hire these guys?

-Nano.

Re:Amazing analysis (1)

l0cust (992700) | more than 7 years ago | (#16389917)

Oh I know I know ! Since:
AMD: 4=4 Intel: 4=2x2

=> A = D = I = '4', M = n = '=', t = l = '2' and e = 'x'
=> They hire these guys at Hideous Cryptography Inc.

Now where is my cookie !

werty (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#16389921)

Faberware made in Thailand ..... I lived in Thailand 5 years ,
you cant get good deals on pans there , tax is too high ,

  but in the US there is no tax on Chineese , so Walmart does
  cheap but very high quality non stick pans for $9 !
    Food in Asia is poisoned with formaldehyde .
  Millions die every year , i was poisoned several times and i knew
  all the places and foods to expect F' !
    Foodland on LatPrao 99 had Australian cheddar . But the
  178 th time i bought it was the killer ! They washed it in F'
  to make it last longer !

    Idiot Asians dont know cheddar lasts long time without help !
    Thais are really dumb . They can not be taught . Boiled pork ?!!
  I ate at friends , and he boiled the pork !

    There is no law nor courts nor justice in Thailand , no law suits .....hmmmmm
  no justice ? USA has no justice , but it has law and courts .

Socket consideration (1, Insightful)

wysiwia (932559) | more than 7 years ago | (#16390043)

I won't buy any AMD processors anymore until AMD clears up its socket plans and guarantees a minimum of 3 years' availability for processors on a socket. See also http://hardware.slashdot.org/comments.pl?sid=198215&cid=16242757 [slashdot.org].

O. Wyss

Re:Socket consideration (3, Insightful)

somersault (912633) | more than 7 years ago | (#16390215)

Why - do you think today's processors won't still be useful in 3 years? Most games don't take advantage of current technology for a year or more, I'd say, and your office applications/OS will run fine on any of today's decently specced systems (3000+, 3GHz Pentium; it doesn't even matter if they only have one core). The only people who can truly make use of multicore chips are scientists and people who do other kinds of intensive parallel processing, like graphics rendering. In 3 years you'll probably want a new mobo anyway to take advantage of whatever new-fangled technology has come out. I guess you could say I'm becoming less of a geek these days even though I'm an IT manager, but if my computer works and plays the games I like sufficiently (say 1280x1024@60fps with details maxed out), I don't see the need to upgrade my processor (I'd upgrade my graphics card before anything else, since graphics cards come out more often and usually have a larger effect on performance from one generation to the next).

Since most of the chipset is becoming integrated into the processor these days, your argument will make more sense over time. But if you were more patient and waited for things to come down in price, as they always do (and rather quicker than I expect, sometimes), you'd be able to buy a new mobo, RAM, and processor for the same price the new processor alone would have cost 6 months earlier (not meant to be a perfect example, I haven't been following prices since I built my last system a couple of years ago, but the idea is sound :p )

Re:Socket consideration (5, Informative)

wolrahnaes (632574) | more than 7 years ago | (#16390223)

As the person who responded to your last post explained, that's just not possible with the K8 architecture as it is. The memory controller is on-die and memory technology is evolving, therefore the interface between the processor (where the controller is) and motherboard (where the DIMMs are) must also change.

The closest to a solution we have would be going back to Pentium 2/3 style processor-on-a-card designs which would move the memory slots to an expansion card shared with the processor which would then have a HyperTransport interface to the motherboard.

This works, as some motherboard manufacturers (ASRock on the 939DUAL for one) have implemented something along these lines for AM2 expandability. The problem lies in laying out the circuitry for this new slot, not to mention the incompatibility with many of the large coolers we often use today. It also would become even more complex when faced with another one or two extra HyperTransport lanes as found on Opteron 2xx and 8xx chips, respectively.

AMD made a compromise when they designed K8. On the one hand, the on-die memory controller improves latency by a huge amount and scales much better by completely eliminating the memory and FSB bottlenecks that Intel chips get in a multiprocessor environment. On the other hand, new memory interface = new socket, no way around it.

From what I understand, the upcoming Socket F Opterons will have over 1200 pins in their socket so as to allow both a direct DDR2 interface and FB-DIMM. If I understand FB-DIMM technology correctly, it should end this issue by providing a standard interface to the DIMM which is then translated for whatever type of memory is in use. Logically this will trickle down to the consumers in another generation. For the time being however, AMD has stated that the upcoming "AM3" processors will still work in AM2 motherboards, as they will have both DDR2 and DDR3 controllers.

Re:Socket consideration (1)

beezly (197427) | more than 7 years ago | (#16390361)

Indeed, Socket F has 1207 pins. There are some snippets of information and some more links available at http://en.wikipedia.org/wiki/Socket_F [wikipedia.org]. We're delaying the upgrade of our cluster to wait for Socket F systems to become available (so we can compare them against Intel's latest offering at that point).

Re:Socket consideration (1)

wysiwia (932559) | more than 7 years ago | (#16390839)

As the person who responded to your last post explained, that's just not possible with the K8 architecture as it is. The memory controller is on-die and memory technology is evolving, therefore the interface between the processor (where the controller is) and motherboard (where the DIMMs are) must also change.

Yet that matters no more than it did the last time you responded. It's no problem to merge a new core (or multiple cores) with a memory controller for the 939 socket. It's not even a big problem to put several memory controllers on the same die and connect only the one that fits the socket, so a single die would suffice. There are no technical obstacles to AMD providing new cores with old interfaces; it's just a marketing consideration.

Besides, I'm sure AMD is already thinking about how to fix Socket AM2's low acceptance. It's a fact that customers are annoyed with AMD because of AM2, and its market share will shrink unless AMD comes up with a solution fast. IMO the 939 is probably the cheapest, at least in the short term.

O. Wyss

Re:Socket consideration (1)

tomstdenis (446163) | more than 7 years ago | (#16391033)

First off, that isn't true. Things like Vcc sources have to move around to accommodate new designs. You're also disregarding the move to DDR2, which has a different interface as well.

You've been able to get 939 and 940 pin boards for a LONG WHILE [even now given AM2 is out]. Sure 754-pin has disappeared but AMD doesn't even sell 754-pin desktop processors anymore [laptops being the exception].

You might as well bitch out Intel for not being able to get Super Socket 7 motherboards anymore for your P54C processor.

Tom

Stable hardware platform (1)

beezly (197427) | more than 7 years ago | (#16390261)

You want the latest and greatest features, but you aren't willing to cope with changing your hardware to keep up?

CPU manufacturers don't change interface designs for fun. It costs them time and money to design a new interface. They do it because the market demands new technology.

Besides, looking at recent history, Sockets A, 940 and 939 have each had roughly three years. Socket 754 was a red herring that no one in their right mind should have bought if they were looking for platform longevity.

If you compare AMD's socket strategy to Intel's (http://www.tom.womack.net/x86FAQ/faq_time.html), AMD looks pretty good at developing platforms with good "socket longevity".

The MARKET demands? You're joking, right? (1)

PopeRatzo (965947) | more than 7 years ago | (#16390671)

CPU manufacturers don't change interface designs for fun. It costs them time and money to design a new interface. They do it because the market demands new technology.

Show of hands: Who's been demanding new CPU technology? What percentage of the "market" has already gone to dual-core, and is clamoring for quad-core to run their apps?

You don't think maybe a manufacturer would push new technologies out the door just to get new sales, do you? "...the market demands..." my ass.

I guess you won't buy Intel either... (4, Informative)

Visaris (553352) | more than 7 years ago | (#16391055)

I won't buy any AMD processors anymore until AMD clarifies its socket plans and guarantees a minimum of three years' availability for processors on a socket.

I suppose that means you won't buy an Intel chip either. Look at what happened with Conroe. Core 2 Duo uses a socket with the same name as the P4 socket, with the same number of pins too. But guess what? When Conroe came out, fewer than a handful of the hundreds of board models out there would actually support it. The voltage requirements changed slightly, the BIOS requirements changed, and the end result was that upgrading to Conroe on a given board was hit or miss. I fail to see how Intel's motherboard-upgrade situation is any better than AMD's. It sounds to me like you're falling for Intel's game: "We kept the socket name and number of pins the same, so that means we have better socket longevity." Sorry, but I'm not falling for it. I've read too many horror stories on the forums from Conroe upgraders who thought they could use their current P4 boards.

Don't get me started on Intel's TDP scam either (AMD's = max, Intel's = average). AMD may not always have the best tech, but I find them to be a much more straightforward company, with fewer sneaky games designed to trick customers.

And why are we posting a story about AMD's tech presented by an Intel employee? Sounds to me like it was biased before it even started.

Reminiscent of the early dual-cores? (1)

TheThiefMaster (992038) | more than 7 years ago | (#16390119)

"but AMD claims that its quad core is true quad core, while Intel's is two dual-cores grafted together."
I'm sure that this was true of Intel's early dual-cores; if they've done it again, then we can expect some truly awful performance from their quad-cores...

Re:Reminiscent of the early dual-cores? (2, Interesting)

masklinn (823351) | more than 7 years ago | (#16390231)

It was indeed; Intel didn't have integrated ("true") dual-core (AMD-style) before the Core architecture. The Pentium D's two cores, for example, had to use the FSB to communicate with one another; they didn't have a dedicated, fast, core-to-core bus. In the end, they were no better than a regular dual-processor setup, except that you only needed a single socket.

Re:Reminiscent of the early dual-cores? (1)

deadline (14171) | more than 7 years ago | (#16391013)

And in some cases the Pentium D performs quite well [clustermonkey.net].

We have been using dual-processor systems for a while. No one really complained that they were not "dual core," and we got quite a lot of work out of them. Gluing two cores together was quicker and easier than a true dual core, and if it's the same price as a single core, I'll take it.

Quad-core vs. dual-dual-core? (2, Interesting)

Kopretinka (97408) | more than 7 years ago | (#16390245)

Can anyone please shed some light on the difference (for the user) between a true quad-core and a dual dual-core processor? I expect a quad-core can be cheaper because it is more integrated, but is that it?

Re:Quad-core vs. dual-dual-core? (1)

glwtta (532858) | more than 7 years ago | (#16390341)

Probably has to do with memory/cache access and total available bandwidth between the cores. Memory architecture seems to be the one area where the Core still can't touch the Opteron.

Of course, I'm just guessing.

Re:Quad-core vs. dual-dual-core? (4, Informative)

Phleg (523632) | more than 7 years ago | (#16390677)

A "true" quad-core means that all of the cores share the same L2 cache, AFAIK. Basically, performance benefits because they can all use the same high-speed cache for L1 misses. This is also extremely useful in the case of multiple processes that aren't bound to a CPU. If process A is scheduled on processor 1, then 2, then 3, then 4, there are going to be a lot of cache misses (since it's in no CPU's L1 cache). With two dual-cores bolted onto each other, processes switching from processors 1-2 to 3-4 will incur severe performance penalties as any relevant memory is fetched over the memory bus from RAM.
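The migration penalty described above can be sketched with a toy model (a made-up per-core LRU cache, not a real CPU simulator; the cache size and workload are invented for illustration): a process pinned to one core pays its cold misses only once, while one that hops across cores starts cold on every hop.

```python
# Toy model: each core gets its own tiny LRU cache. A process touching
# the same working set stays "warm" if it keeps running on one core, but
# misses everywhere if the scheduler bounces it across cores.
from collections import OrderedDict

class CoreCache:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.lines = OrderedDict()

    def access(self, addr):
        """Return True on hit, False on miss; fill on miss with LRU eviction."""
        if addr in self.lines:
            self.lines.move_to_end(addr)
            return True
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)
        self.lines[addr] = True
        return False

def run(schedule, working_set, caches):
    """schedule: which core runs each timeslice; returns total misses."""
    misses = 0
    for core in schedule:
        for addr in working_set:
            if not caches[core].access(addr):
                misses += 1
    return misses

ws = [0, 1, 2]                                        # 3-line working set
pinned    = run([0, 0, 0, 0], ws, [CoreCache() for _ in range(4)])
migrating = run([0, 1, 2, 3], ws, [CoreCache() for _ in range(4)])
# pinned: only the first timeslice misses (3 cold misses);
# migrating: every core starts cold, so all 4 timeslices miss (12 misses).
```

A shared last-level cache narrows exactly this gap: the migrated process finds its data one level down instead of all the way out in RAM.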

Re:Quad-core vs. dual-dual-core? (1)

Phleg (523632) | more than 7 years ago | (#16390689)

As a silly analogy, imagine two cars strapped together versus a single car with dual engines and components shared wherever sharing makes sense. The one that actually had some engineering and design behind it will likely make better use of its resources than the ad-hoc, bolted-together solution.

Re:Quad-core vs. dual-dual-core? (2, Informative)

DohnJoe (900898) | more than 7 years ago | (#16391093)

Actually, there are three levels of cache in the new Opteron: L1 and L2 are bound to each core, while L3 is shared.
They claim that this improves performance with virtualization.


From the article:
Barcelona uses a three-stage cache architecture. The L1 cache is 64KB, the L2 cache is 512KB and the L3 cache is 2MB. The L1 and L2 caches are dedicated to a particular core, while the L3 cache is shared among all cores. Note that the L3 cache has been engineered to be variable in size, so that different products may offer different L3 cache sizes. The L1 and L2 caches are exclusive, as with current Opterons and Athlon 64s. This means that the L1 and L2 cache don't hold copies of the same data.

Re:Quad-core vs. dual-dual-core? (4, Informative)

tomstdenis (446163) | more than 7 years ago | (#16391103)

As others pointed out, inter-core communication has to hit the FSB. That makes things like owning/modifying cache lines slower, since you have to communicate outside the chip.

There are also process challenges. Two dies take more space than four cores on one die, since you have replicated some of the circuitry [the FSB interface drivers, for instance]. Space == money, therefore it's more costly.

If one dual-core takes 65W [the current C2D rating], then two of them will take at least 130W [Intel's ratings are not maximums]. AMD plans on fitting their quad-core within the 95W envelope. Given that this also includes the memory controller, you're saving an additional 20W or so. In theory you could save ~55W going the AMD route.
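The arithmetic in that paragraph, spelled out (every wattage here is the poster's estimate, not a measured figure):

```python
# Back-of-the-envelope comparison using the figures quoted above; all of
# these numbers are estimates from the post, not measurements.
mcm_dies    = 2 * 65                  # two 65W dual-core dies in one package
northbridge = 20                      # rough cost of an external memory controller
intel_route = mcm_dies + northbridge  # ~150W total
amd_route   = 95                      # claimed envelope, controller included
savings     = intel_route - amd_route
print(savings)                        # -> 55, matching the "~55W" estimate
```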

Also, current C2D processors have weak power-saving features: you can only step between two modes [at least on the E6300], and the change is processor-wide. The quad-core from AMD will allow PER-CORE frequency changes [and with more precision than before], meaning that when the chip isn't under full load you can save quite a bit. For instance, the Opteron 885 [dual-core 2.6GHz] is rated at about 32W at idle, down from 95W at full load. I imagine the quad-core will have a similar idle rating.
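The per-core versus processor-wide difference can be put in rough numbers too. This toy model splits the Opteron 885's quoted 95W load / 32W idle figures evenly across four hypothetical cores; that even split is a simplifying assumption, not anything AMD has specified.

```python
# Toy DVFS model: with processor-wide scaling, one busy core drags the
# whole package up to the loaded operating point; with per-core scaling,
# idle cores stay at their idle power. The even per-core power split is
# an assumption for illustration only.
CORES  = 4
load_w = 95 / CORES            # ~23.75W per fully loaded core (assumed split)
idle_w = 32 / CORES            # ~8W per idle core (assumed split)

def package_power(busy_cores, per_core_scaling=True):
    if per_core_scaling:
        return busy_cores * load_w + (CORES - busy_cores) * idle_w
    # Processor-wide: any busy core forces every core to full power.
    return load_w * CORES if busy_cores else idle_w * CORES

chip_wide = package_power(1, per_core_scaling=False)   # 95.0W
per_core  = package_power(1)                           # 47.75W
```

Under this (admittedly crude) model, a single-threaded workload draws roughly half the power on the per-core-scaling chip.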

Tom

Re:Quad-core vs. dual-dual-core? (0)

Anonymous Coward | more than 7 years ago | (#16391121)

A true quad-core is certainly not cheaper to produce; a dual-core die has much better yields.

True QC versus MCM: (5, Informative)

Visaris (553352) | more than 7 years ago | (#16391255)

Intel's QC is really an MCM, or multi-chip module. That means they have literally grabbed two Conroe (Core 2 Duo) chips off the assembly line and mounted them in a single package. From the outside it looks like a single chip, but inside there are two separate pieces of silicon, connected over the FSB. That is the problem: the two chips are connected to the same bus. A single chip presents one electrical load on the bus; two chips present two loads. This means the speed of the bus needs to be dropped, which is why Kentsfield will have a slower bus speed than normal chips. If you think about it, this is the exact opposite of the situation you want: you have just added cores, so it would be nice to add more bus bandwidth. Instead, the Intel solution lowers the overall bus bandwidth, not to mention that it is a shared bus. The two dies fight each other over a very slow external bus, and this creates a performance bottleneck.

When all four cores are on a single piece of silicon, all sharing an L3 cache, the chips don't need to fight over the external bus as much. The cores can share information internally, and do not need to touch the slow external bus for cache coherency and other synchronization. Also, a true QC chip presents one load to the outside bus, so the bus speed does not need to drop because of electrical load.

There are many people who don't care how the cores are connected as long as the package works. The point is that the way the cores are connected has a direct impact on performance. We'll be talking about Intel vs. AMD cache hierarchy in 2007, when AMD uses dedicated L2 and shared L3 while Intel uses only shared L2. Expect cache thrashing on Intel's true QC chips under heavily threaded loads when they come out. Next I'll hear people say that the cache doesn't matter as long as it works. As long as it works for what? Single-threaded, tiny-footprint benchmarks like SuperPi or Prime95? How about a fully threaded and loaded database, or any other app that will actually stress more than the execution units?


1+1+1+1==2+2==4 (0)

jscotta44 (881299) | more than 7 years ago | (#16390831)

Hey, I thought dual core was better than single core. So shouldn't two duals be better than four single cores? Is AMD shooting its own marketing foot? ;-)

Re:1+1+1+1==2+2==4 (1)

automattic (623690) | more than 7 years ago | (#16391687)

Think of it in terms of a hand of poker.

4 of a kind clearly beats 2 pair.............

I'm sure someone at AMD thought this up on poker night.

Intel obviously likes to keep things in pairs......like boobies, and AMD has decided to steer clear of grafting on another pair of mammary cores, er, glands.