
Quick and Dirty Penryn Benchmarks

kdawson posted more than 6 years ago | from the they-don't-remember-quick-they-just-remember-dirty dept.


An anonymous reader writes "So Intel has their quad-core Penryn processors all set and ready to launch in November. There are benchmarks for the dual-core Wolfdale all over the place, but this seems to be the first article to put the quad-core Yorkfield to the test. It looks like the Yorkfield is only about 7-8% faster than the Kentsfield with similar clock speeds and front-side bus."

90 comments

Hey, gamers! (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#20352981)

Strafe does not [reference.com] mean the same thing as sidestep. We at Wikipedia would appreciate [wikipedia.org] any insight you can offer as to the origins of your illiteracy.

Re:Hey, gamers! (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#20352995)

Ever seen a man strafing? It looks fucking weird. Only Jim Carrey can do a good strafe.

Wall-running is an entirely different proposition. That shit is for freaks.

What's in a name (0, Offtopic)

BillGatesLoveChild (1046184) | more than 6 years ago | (#20352997)

Penryn? Wolfdale? Yorkfield? I wonder if Intel hasn't run out of names and started naming their processors after English Teddy Bears.

*Only* 7-8% ? (-1, Flamebait)

Anonymous Coward | more than 6 years ago | (#20353053)

Who fucking cares anyway? All it shows is just how quickly AMD is disappearing into irrelevant-ville, just like Linux.

Re:*Only* 7-8% ? (0)

eebra82 (907996) | more than 6 years ago | (#20353123)

"Who fucking cares anyway? All it shows is just how quickly AMD is disappearing into irrelevant-ville, just like Linux."

You shouldn't rule out the company that forced Intel into this pace. After all, it's remarkable what they did with the Athlon series when Intel's Netburst architecture proved inefficient. I agree that they are in trouble, but at least don't make quick assumptions before we get to see Barcelona.

Re:*Only* 7-8% ? (0, Offtopic)

ILuvRamen (1026668) | more than 6 years ago | (#20353161)

A fake quote from nowhere about Linux that's supposed to get good ratings, and an ad in your sig for a poker site? How stupid do you think we are?

Re:*Only* 7-8% ? (1)

Eco-Mono (978899) | more than 6 years ago | (#20353639)

Not fake. Browse at -1 and you'd see the post he responded to.

Re:*Only* 7-8% ? (0)

Anonymous Coward | more than 6 years ago | (#20353963)

Dude, the guy's got an ad in his sig. Where do you think the AC 'quote' came from?

Re:*Only* 7-8% ? (1)

cdw38 (1001587) | more than 6 years ago | (#20353897)

You can't rule AMD out but Intel sure has done just about EVERYTHING right since Conroe. They've been more open with the press (both financial and enthusiast) and have (for lack of a better expression) engineered the shit out of AMD. That said, it looked like Intel would never catch up during the prime of Athlon 64...of course Intel's resources and size give it an astronomical advantage. AMD will again have to be innovative in some way if it ever wants to catch up...Fusion maybe?

Re:What's in a name (1)

Smoodo (614153) | more than 6 years ago | (#20358171)

Just a totally random comment, but Penryn sounds a lot like "Pianren" in Chinese, which means to cheat people.

Re:What's in a name (1)

BillGatesLoveChild (1046184) | more than 6 years ago | (#20360007)

That's pretty funny. Hope the name catches on. It'd certainly explain poor sales in China :-)

Oh wait, here comes another humor-impaired mod to further 'Offtopic' us. :-/

Any AMD Barcelona Benchmarks? (1)

tjstork (137384) | more than 6 years ago | (#20353107)

I would think that AMD would be providing Barcelona benchmarks hand over fist, at this point, if they had something...

Re:Any AMD Barcelona Benchmarks? (4, Insightful)

eebra82 (907996) | more than 6 years ago | (#20353151)

I would think that AMD would be providing Barcelona benchmarks hand over fist, at this point, if they had something...

There are two possible situations here:

a) Barcelona is faster than Intel's current line-up, and AMD does not want to spur Intel to up the pace by releasing such numbers.
b) Barcelona is slower than Intel's current line-up, and AMD does not want its shares to hit a new low, or perhaps wants to buy some time to speed it up.

Re:Any AMD Barcelona Benchmarks? (0)

Anonymous Coward | more than 6 years ago | (#20353571)

If AMD had Barcelonas ready at 2.5 GHz or more, then we would surely have seen benchmarks already.

Re:Any AMD Barcelona Benchmarks? (3, Interesting)

CajunArson (465943) | more than 6 years ago | (#20353625)

Barcelona is faster than Intel's current line-up, and AMD does not want to spur Intel to up the pace by releasing such numbers.

That may have been true 6 months ago, but the K10 is supposed to be officially announced in about 16 days, on September 10 (and since AMD claims not to do paper launches, it is supposed to be widely available then too... ymmv). AMD is not going to be able to stop benchmarks after it is released, and while Intel can adapt quickly, it can't turn on a dime in two weeks' time. AMD has not been doing well in the PR and benchmarking battles since Core 2 came out; if K10 really were that amazing, you would be seeing all the usual suspects putting out full reviews right now in order to generate hype. I'm leaning towards your second theory, and most analysts are too.

Re:Any AMD Barcelona Benchmarks? (0)

Anonymous Coward | more than 6 years ago | (#20353269)

AMD hasn't released benchmarks on pre-release silicon before. Why would they start now?

End of CPU hype .. for now (0, Flamebait)

postmortem (906676) | more than 6 years ago | (#20353111)

No great improvements in speed in the next few years... remember the Pentium 4 stagnation (never got to 4 GHz, didn't release more than one model in 6 months, etc.).

And of course, a 4-core CPU has no use at home unless you are a content creator. I'm a software engineer, and I don't think that any of the colleagues I work with knows how to write an app that will take advantage of 2 cores, let alone 4.

Conclusion? 4 cores right now need much software support.

Re:End of CPU hype .. for now (2, Informative)

swb (14022) | more than 6 years ago | (#20353193)

Depends on what you do "at home". Grandma who only sends email and orders flowers will see zero benefits.

But the rest of "normal" home users who own things like camcorders, make DVDs, rip movies, etc. all see a huge benefit. I just put together a Q6600 system and couldn't be happier, but I've been a dual-CPU workstation user since the PII days.

Re:End of CPU hype .. for now (1)

postmortem (906676) | more than 6 years ago | (#20353249)

I was debating between the Q6600 (2.4 GHz, 4 cores) and the E6700 (2.67 GHz, 2 cores), and I chose the second option because of the limited advantage of more cores and the always-present advantage of higher clock speed.

Re:End of CPU hype .. for now (1)

swb (14022) | more than 6 years ago | (#20353289)

I read a comparison/benchmark someplace (Ars? Who can remember..) that showed the E6700 only a touch better at a narrow range of applications and getting its hat handed to it on media encoding applications, so I went with the Q6600 since that accounts for my "heavy" computing.

I see MPEG-2 renders running better than real time on single pass encodes in TMPGEnc.

Re:End of CPU hype .. for now (1)

blackicye (760472) | more than 6 years ago | (#20354943)

I am running a Q6600 on an eVGA 680i motherboard; you should have just gone with the Q6600. I have this sucker clocked at 3.3 GHz and am pushing for 3.4, but I can't get it stable yet.

Re:End of CPU hype .. for now (1)

eebra82 (907996) | more than 6 years ago | (#20353227)

"And of course.. 4 core CPU has no use at homes unless you are content creator. I'm software engineer, I don't think that any of my colleagues I work with knows how to write app that will take advantage of 2 cores; let alone 4.

Conclusion? 4 cores right now need much software support."


Well, you're talking about cutting-edge CPU:s which typically co-exist with cutting-edge software. If you're getting a quad core setup, it's probably because you're going beyond Word processing.

Of course quad cores will need more software support before they become a more viable option, but it's hardly a bad thing to prepare for the future when you're purchasing a computer.

Re:End of CPU hype .. for now (1)

dfgchgfxrjtdhgh.jjhv (951946) | more than 6 years ago | (#20353281)

'it's hardly a bad thing to prepare for the future when you're purchasing a computer.'

Yes it is: it costs you extra money, and hardware comes down in price quickly. If you buy a high-end CPU now that you won't use for another year, you're wasting your money. In a year's time your high-end CPU will be mid-range and a lot cheaper, so it'd probably be cheaper to buy a mid-range or budget CPU now and another one in a year's time. Then you can get a bit of money back for the old one on eBay too.

Re:End of CPU hype .. for now (1)

eebra82 (907996) | more than 6 years ago | (#20353409)

You're turning a discussion about buying high-end hardware into "best bang for the buck". Where in my quoted statement can you see me saying anything about buying the top of the line hardware and that it is the best choice?

Re:End of CPU hype .. for now (1)

Spokehedz (599285) | more than 6 years ago | (#20353353)

Or if you use Linux... because that support has been standard for quite some time now. The scheduler even rotates which CPU gets priority so that heat and usage get distributed evenly.

Yay for processes that make sense!

Ummm, not necessarily (3, Informative)

Sycraft-fu (314770) | more than 6 years ago | (#20353765)

Intel tends to do a release of a new architecture, then some refinements on that. While it would be cool to do a whole new architecture each time around, there's just not really money for that. This is one of the refinements. The chips are not likely to be all that much faster than their previous chips at the same clock speed because they are largely the same architecture. Mostly they are just a die shrink (which means lower power and probably better scaling and cost) plus some new instructions that aren't really used yet. They are still Core 2s.

However that doesn't mean that the next generation will be the same. Indeed, if Intel keeps with their plans it will be a new architecture and thus hopefully bring new speed increases.

As to using multiple cores, well, if you don't know how, perhaps you'd best learn, then? You not knowing how doesn't mean it can't be done; indeed, it can be done and IS being done. Multi-core is just the way things are going, at least for now. Not only are desktops and servers headed that way, but even things like the Xbox 360 and PS3 are as well. It's simply time to start thinking about software in a different way. No longer is a big while loop the way to go.

Already that's happening. The number of games (and games are interesting to watch since they often ride the leading edge in terms of requirements) that make use of two cores has risen dramatically. We are also seeing a couple of games, with more on the horizon, that will support 4 cores. Things like AI and physics get executed in parallel, which makes it possible for them to be much more complex.

Finally, there HAVE been some cool developments on processors, just not ones that most hardware sites like to cover. Some time back Intel introduced a technology they call VT, which is basically a set of instructions to allow you to virtualize the protection rings on a processor. It's supposed to make for faster VMs. Currently the implementation is somewhat lacking: VMware claims it is slower than a well-optimised software solution, though others dispute that claim (Xen likes VT). The new 45nm Core 2s add to the existing VT technology with what Intel calls VT-d. Basically the idea is to allow VM software to pass DMA access to its guests, but in a safe manner that can't hurt the host. This may not be exciting to everyone, but these advances are worthwhile, given that virtual computing is getting more and more use.

Processors may not be getting huge gains in single thread performance any more, but that doesn't mean they aren't advancing.
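To make "thinking about software in a different way" concrete, here is a minimal sketch in C of splitting per-frame physics work across POSIX threads; the body arrays, slice sizes and integration step are made-up placeholders rather than any particular engine's code. Build with something like gcc -pthread.

[C]

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4
#define NUM_BODIES  4096

static float positions[NUM_BODIES];
static float velocities[NUM_BODIES];

struct slice { int start; int end; };

/* Each worker integrates one slice of the physics state for a single frame. */
static void *integrate_slice(void *arg)
{
    struct slice *s = arg;
    const float dt = 1.0f / 60.0f;
    for (int i = s->start; i < s->end; i++)
        positions[i] += velocities[i] * dt;
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    struct slice slices[NUM_THREADS];
    int chunk = NUM_BODIES / NUM_THREADS;

    for (int t = 0; t < NUM_THREADS; t++) {
        slices[t].start = t * chunk;
        slices[t].end = (t == NUM_THREADS - 1) ? NUM_BODIES : (t + 1) * chunk;
        pthread_create(&threads[t], NULL, integrate_slice, &slices[t]);
    }
    for (int t = 0; t < NUM_THREADS; t++)
        pthread_join(threads[t], NULL);

    printf("first body at %f\n", positions[0]);
    return 0;
}

The same pattern applies to AI updates or any other loop whose iterations are independent; the hard part in a real game is the work that isn't.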

Re:Ummm, not necessarily (1)

Repossessed (1117929) | more than 6 years ago | (#20356629)

Not only is multi-core the way to go, shortly there will be very little option for 90% of users. Vista without a dual core is... painful. In fact, it's getting difficult to buy a pre-built machine with a single core at all.

Re:End of CPU hype .. for now (1)

asm2750 (1124425) | more than 6 years ago | (#20354027)

You hit the nail on the head. Parallel programming is now required if you want to take advantage of multiple cores.

*NEVER* underestimate the viruses (2, Funny)

DrYak (748999) | more than 6 years ago | (#20354191)

NEVER underestimate the huge number of viruses, trojans, spyware and pop-up-generating crapware that are running in parallel on average joe's computer.

Just think about the number of users who come into stores to buy "faster computers because the old one is getting too slow" when the old computer is crawling under an impressive amount of crapware.
They are the perfect target for those new multi-core processors:
- 1 core for running the OS, Internet Explorer and Microsoft Word.
- All the other cores for running spam-spitting zombies.

Now, if you add Vista in the equation...

Re:*NEVER* underestimate the viruses (2, Funny)

Bill Dog (726542) | more than 6 years ago | (#20355993)

Also never underestimate the huge number of anti-virus / anti-trojan / anti-spyware and pop-up-blocking crapware programs that are running in parallel on average joe's computer. My folks still use AOL. ("Security Edition".) Their computer is basically locked up whenever one of the several types of scans or automatic checks for updates auto-launches. They need 3 cores for all those kinds of horribly-written craplets, and 1 to play Minesweeper.

Re:End of CPU hype .. for now (1)

blahplusplus (757119) | more than 6 years ago | (#20354317)

"Conclusion? 4 cores right now need much software support."

It goes beyond just that, IMHO. Right now the PC industry needs to get its act together as a PLATFORM, and also needs applications that don't break. One of the big things that is pissing me off right now is closed-source programs whose compatibility breaks, and because they're closed source no one can fix or update them to get them running again when OSes and other technologies change. I think there really needs to be a legal framework for people (end users) who own software, and for the open-source community, to access closed-source code (that they technically own, have invested in, or have some ownership stake in), especially when those applications are long past their sell-by date, so they can be fixed and kept up and running. You don't buy a car and expect to be prevented from fixing it when something goes wrong.

Next, growth IMHO for certain industries like the game industry is being held back by not subsidizing the cost of some kind of mid-range performance standard graphics *for everyone*. I find it ironic that companies like Nintendo, Sony, and MS can subsidize their consoles, but when it comes to the PC, MS just sits there.

I think one of the big reasons PC gaming is flagging is in large part due to the incessant march of the graphics card industry. Starcraft and Diablo 1 & 2 were both 2D games; it makes sense that these games got as widespread as they did because they'd run everywhere.

Modded -1, New Here (1)

Doctor Memory (6336) | more than 6 years ago | (#20361417)

I think there really needs to be a legal framework for people (end users) who own software
You don't own software, you license it. Unless you contracted a company to write something for you and you explicitly retained the rights (and the source).

Next, growth IMHO for certain industries like the game industry is being held back by not subsidizing the cost of some kind of mid-range performance standard graphics *for everyone*.
You can get a DX10 graphics card for US$100 [newegg.com]. Or are you still using an AGP motherboard?

I find it ironic that companies like Nintendo, Sony, and MS can subsidize their consoles, but when it comes to the PC, MS just sits there.
MS doesn't make PCs.

I think one of the big reasons PC gaming is flagging is in large part due to the incessant march of the graphics card industry.
Are you suggesting that game companies can't handle the increased power of new graphics cards?

Starcraft and Diablo 1 & 2 were both 2D games; it makes sense that these games got as widespread as they did because they'd run everywhere.
So, basically you just want a line of Cheap Bastard(TM) games? Why not just haunt the used games stores? I'm sure you can find something there that'll run on your 850MHz P-III.

Parallelization is easy (3, Informative)

mi (197448) | more than 6 years ago | (#20354413)

A 4-core CPU has no use at home unless you are a content creator. I'm a software engineer, and I don't think that any of the colleagues I work with knows how to write an app that will take advantage of 2 cores, let alone 4.

Well, fortunately, some of this software has already been written just for you and your colleagues. Check out the make(1) manual page — look for the -j option...

And no, it is not only for software engineering either. Every time I come back from vacation, I use make [algebra.com] to convert my digital pictures from the camera's lossless "raw" format to lower-resolution JPEGs for web pages. Having four CPUs makes that process four times faster. Great idea, uhm?..

Your colleagues may be doofusen, but the people who will finally bring us reliable speech generation and parsing (as an example) will certainly be smart enough to take full advantage of the multiple processors.

Meanwhile, you can schedule a meeting to discuss using OpenMP [openmp.org] in your company's software... Compilers (including Visual Studio's and gcc) have been supporting this standard for some years now.
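As a concrete illustration of the OpenMP suggestion, here is a minimal sketch in C; convert_image() is a hypothetical stand-in for whatever per-file work (such as the raw-to-JPEG conversion above) each iteration does. Built with a flag like gcc's -fopenmp, the loop iterations are spread across the available cores, much as make -j spreads independent targets across jobs.

[C]

#include <omp.h>
#include <stdio.h>

/* Hypothetical per-file work, e.g. one raw-to-JPEG conversion. */
static void convert_image(int index)
{
    printf("converting image %d on thread %d\n", index, omp_get_thread_num());
}

int main(void)
{
    const int num_images = 100;

    /* Each iteration is independent, so OpenMP can hand them to different cores. */
    #pragma omp parallel for
    for (int i = 0; i < num_images; i++)
        convert_image(i);

    return 0;
}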

Re:Parallelization is easy (0)

Anonymous Coward | more than 6 years ago | (#20355427)

[C#]

// Create a worker thread that runs thrdFunction, then start it.
Thread thrd = new Thread(new ThreadStart(thrdFunction));
thrd.Start();

[VB.NET]

' Create a worker thread that runs thrdFunction, then start it.
Dim thrd As New Thread(New ThreadStart(AddressOf thrdFunction))
thrd.Start()

Re:End of CPU hype .. for now (1)

HandsOnFire (1059486) | more than 6 years ago | (#20362967)

"Conclusion? 4 cores right now need much software support."

But shouldn't it be improvements in hardware that make software run faster, as opposed to the other way around? For instance, my 3-year-old 2 GHz Athlon 64 is way faster than my 1.6 GHz dual-core Athlon 64 for all the games I play. Why is it that something that uses twice as much space, is on a smaller process node (90nm), and has twice the memory channel width (dual channel vs. single channel) runs slower? It's newer hardware, and it's running my old software slower. That is destroying the value of the product to customers. Something is wrong here. I bet you a 3 GHz Core 2 Duo would run circles around a 2.4 GHz Penryn in any game, despite the Penryn having 2x the potential processing power.

What about true multithread performance (2, Informative)

DamonHD (794830) | more than 6 years ago | (#20353115)

My recent experience with quad-CPU Xeon machines is that multithread performance for a single app is VERY poor, even with great care in coding, presumably because of cache-sloshing between these physically-separate CPUs dropped onto one die.

(I compare with Niagara and even Core Duo which seem much better for threaded apps.)

Has anyone else tested threadability of these CPUs, and power efficiency, sleep states, etc?

Rgds

Damon
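One plausible, though unconfirmed, explanation for this kind of slowdown is false sharing: two threads writing to variables that happen to sit on the same cache line force that line to bounce between the caches of the separate dies. A minimal sketch in C, assuming 64-byte cache lines; time it with and without the padding. Build with gcc -pthread.

[C]

#include <pthread.h>
#include <stdio.h>

#define ITERS 100000000L

/* Two counters kept on different cache lines by the padding; remove the
   pad[] member to provoke false sharing and compare the run times. */
struct counters {
    volatile long a;
    char pad[64];
    volatile long b;
};

static struct counters c;

static void *bump_a(void *arg) { (void)arg; for (long i = 0; i < ITERS; i++) c.a++; return NULL; }
static void *bump_b(void *arg) { (void)arg; for (long i = 0; i < ITERS; i++) c.b++; return NULL; }

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, bump_a, NULL);
    pthread_create(&t2, NULL, bump_b, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("a=%ld b=%ld\n", c.a, c.b);
    return 0;
}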

Making better use of the die space (1)

Skapare (16644) | more than 6 years ago | (#20353207)

They could probably make better use of the die space of the 4th, 3rd, or even 2nd CPU core by putting things like cache there instead. And in another direction, go with SoC (system on a chip) or certain subsets thereof. Combined with serialized bus technologies, this should work while also reducing pin counts.

Re:Making better use of the die space (1)

DamonHD (794830) | more than 6 years ago | (#20353325)

Well, what's nice about my Niagara T1000 box is that everything is on one chip, and the outermost level of cache serves all CPUs, so even a nominal cache flush for volatile/synchronized never needs to leave the chip and hit real RAM.

I'm just concerned that threading seems poor when you really do have to go to memory to get data between CPUs, and your idea of giving up some individual cache for some shared cache would be quite right if Intel had the engineering time to do it.

For my latest nasty performance surprise on the Xeons I had to run only a couple of threads rather than 4 (one per CPU), even for entirely CPU-bound work not sharing any significant writable state, I guess to keep everything on one chip and away from main memory.

Very frustrating, but maybe just getting me in practice for real NUMA/threading a few years down the road... B^>

Rgds

Damon

Re:Making better use of the die space (1)

John Betonschaar (178617) | more than 6 years ago | (#20353563)

They could probably make better use of the die space of the 4th, 3rd, or even 2nd CPU core by putting things like cache there instead.

The benefits of having extra cache drop off very quickly above certain cache sizes (depending on the addressable RAM the cache is indexing). A lot more is involved with improving level-0/1/2 cache performance than just upping the cache size.

I'd expect greater benefits from moving dedicated (but programmable) VLIW units into the CPU to increase instruction-level parallelism, for example for efficient video encoding/decoding, image and audio processing etc. You could create a very versatile CPU that doesn't need a big GPU or dedicated audio hardware, and still be very usable for both workstation as well as multimedia/video editing tasks.

Re:Making better use of the die space (1)

Colin Smith (2679) | more than 6 years ago | (#20353691)

They could probably make better use of the die space of the 4th, 3rd, or even 2nd CPU core by putting things like cache there instead.
Except you won't pay the price.

They can charge you more for a 4-core CPU with shit amounts of cache than they can for a dual core with shed loads. People are stupid. They assume more megahurts means more fast and more cores means more fast... whether or not the additional cores are actually doing anything at all.

With business CPUs it's a different matter: they actually benchmark their apps and, yup, buy the CPUs with loads of cache when they're faster.

And really, the best thing they could do is add an FPGA.
 

Re:Making better use of the die space (1)

DamonHD (794830) | more than 6 years ago | (#20355499)

Hmm, looked at FPGAs too. Not generally worth the highly-specialised one-off development even when writing your own code.

Much nicer to have something portable which next year will just run faster without your doing much because of an improved compiler, runtime, CPU, cache, bus, kernel, whatever... Usually...

Rgds

Damon

Re:Making better use of the die space (1)

rbanffy (584143) | more than 6 years ago | (#20356595)

For multi-threading apps, instead of multiple cores (where nothing except caches is shared between complete CPU cores), it makes a lot of sense to have an HT-like architecture (multiple context stores, shared execution elements) that reduces the time it takes to do a context switch. It would also help a lot to have a context-aware cache system where a swapped-in context would not wake up having to read every instruction from main memory.

Since not all threads will be runnable at any given time, having more cores instead of bigger caches could hurt performance instead of helping it. A bunch of context stores and a couple of independent CPU elements will not give you the same performance boost as a second core will, but they take up a lot less silicon and may spare you many trips to main memory.

Re:What about true multithread performance (2, Informative)

bjackson1 (953136) | more than 6 years ago | (#20353889)

Intel's Core Microarchitecture is not currently available in a quad-CPU platform. It is understandable that the multithreaded performance would be poor, then.

The current quad-CPU architecture is based on Tulsa, which is a 65nm shrink of Paxville, which is essentially a Pentium 4 Smithfield, or two Prescotts shoved onto one chip. Basically, it's two-year-old technology. The new Tigerton chip will be Core based; however, it's not out yet.

Re:What about true multithread performance (1)

Wavicle (181176) | more than 6 years ago | (#20362797)

How did this get modded informative?

Intel's Core Microarchitecture is not currently available in a quad-CPU platform.

Incorrect. Intel's "Core Microarchitecture" is marketed under the name "Core 2." The "Core 2 Quad" processors use the Core Microarchitecture. See Intel's product brief [intel.com] on the subject.

It is understandable that the multithreaded performance would be poor, then.

The single threaded performance of quad core is similar to the single threaded performance of dual core, clock for clock. This should have tipped you off to the fact that quad core is using the new microarchitecture.

The current quad-CPU architecture is based on Tulsa, which is a 65nm shrink of Paxville, which is essentially a Pentium 4 Smithfield, or two Prescotts shoved onto one chip.

Also incorrect. All of those processors use NetBurst. None of the new quad cores do. Your problem seems to be that you are reading dated information on server-class chips and assuming all "Quad Cores" are server class. (Although the new quad-core Xeons are Core as well.)

Basically, it's two-year-old technology. The new Tigerton chip will be Core based; however, it's not out yet.

Incorrect again. Although you are correct about Tigerton being a Core part.

Re:What about true multithread performance (1)

bjackson1 (953136) | more than 6 years ago | (#20363053)

Incorrect. Intel's "Core Microarchitecture" is marketed under the name "Core 2." The "Core 2 Quad" processors use the Core Microarchitecture. See Intel's product brief on the subject.

I said quad CPU, not quad core. Socket 771 Core 2 Quads or quad Xeons can only be used in pairs.

Basically, the answer to all of your arguments is that I said "quad CPU", not "quad core". You should know there is a difference.

Re:What about true multithread performance (1)

Thundersnatch (671481) | more than 6 years ago | (#20354363)

Your experience isn't shared by me, or by most other benchmarkers. Take a look at multi-threaded SPEC benchmarks for the Xeon 5300 series. SPECint_rate 2006, SPECjbb2005, etc., all show the Xeon 5300 as the clear per-socket performance leader for x86 systems. The quad-core Xeons are only bested by the IBM POWER6, and by Niagara in the Java benchmarks.

See the SPECint_rate 2006 [spec.org] results page, and filter on two-chip systems.

Perhaps your particular application is a degenerate case for the 5300's cache architecture, but I have to tell you our Xeon X5355-based Dells are (by almost a factor of two) the fastest dual-socket application servers we have, much faster than the dual-core Opteron systems we have that run the same apps and are just a few months older.

Re:What about true multithread performance (1)

DamonHD (794830) | more than 6 years ago | (#20354423)

It's clear that the nature of my app (which is in Java, BTW) is going to make a difference, and I've not seen quite this effect before in Java or C++ threading over 10+ years where I've had to run well short of a thread per CPU to maximise throughput or at least throughput per CPU. The threads are moderately tightly coupled but, as I say, rarely sharing mutable state. Usually running with a few too many threads to allow for a little parallel slackness is a better bet.

Part of the problem in my particular app is more subtle in that the nature of the app is forcing more GC than I normally allow, and that *is* much less paralleliseable in practice, even on JDK 6u2 with -Xconcgc...

I have spent a long time looking for my screw-up in this app, I just haven't found it yet!

Rgds

Damon

Re:What about true multithread performance (1)

be-fan (61476) | more than 6 years ago | (#20354457)

The SPEC benchmarks are _almost_ perfectly parallelizable. They are just multiple instances of a single-threaded benchmark, and as such don't really test all the things that arise in true multi-threaded programs (cache line bouncing, etc).

Re:What about true multithread performance (1)

Thundersnatch (671481) | more than 6 years ago | (#20359103)

Take a look at SPECjbb2005 or TPC-C, which resemble "real" applications a lot more than SPECint_rate. The Quad-core Xeons are 70-100% faster than the fastest dual-core Opteron systems.

As much as I wish it weren't so, AMD has been toasted in the two-socket server space, which is the largest part of the server market. Barcelona probably won't change that, as Penryn will arrive at the same time.

Can you SSE Me Now? (2, Interesting)

Clear Monkey (945568) | more than 6 years ago | (#20353173)

"Intel expects SSE4 optimizations to deliver performance improvements in video authoring, imaging, graphics, video search, off-chip accelerators, gaming and physics applications. Early benchmarks with an SSE4 optimized version of DivX 6.6 Alpha yielded a 116 percent performance improvement due to SSE4 optimizations." Not bad...

Re:Can you SSE Me Now? (1)

Slashcrap (869349) | more than 6 years ago | (#20355487)

"Intel expects SSE4 optimizations to deliver performance improvements in video authoring, imaging, graphics, video search, off-chip accelerators, gaming and physics applications. Early benchmarks with an SSE4 optimized version of DivX 6.6 Alpha yielded a 116 percent performance improvement due to SSE4 optimizations." Not bad...

Also, Intel have introduced a new instruction for adding sixteen to fourteen and dividing the result by two (ADDFTNSTNDIV2). This has produced a performance increase of up to 12,000% in applications and benchmarks which mainly add sixteen to fourteen and divide the result by two.

The performance gains for more general applications are expected to be slightly lower, although Intel are not releasing any official benchmarks at this time.

All joking aside, DivX always shows massive performance improvements whenever Intel add some new SSE instructions. I can't help wondering if this has less to do with the efficacy of the new instructions and more to do with them not using the existing instructions very well. Let me know when Xvid/x264/FFMPEG release an SSE4 version that's twice as fast and I'll go out and buy one.
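For what it's worth, one plausible source of the big encoder gains is SSE4.1's MPSADBW instruction, which evaluates several sum-of-absolute-differences candidates at once; SAD is the core operation of block-matching motion estimation. A rough sketch using the intrinsic (the pixel values are made up, and this is not DivX's actual code); build with something like gcc -msse4.1.

[C]

#include <smmintrin.h>   /* SSE4.1 intrinsics */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 16 reference pixels and 16 current-block pixels (8-bit luma samples). */
    uint8_t ref[16] = { 10, 12, 14, 16, 18, 20, 22, 24,
                        26, 28, 30, 32, 34, 36, 38, 40 };
    uint8_t cur[16] = { 11, 11, 15, 15, 19, 19, 23, 23,
                        27, 27, 31, 31, 35, 35, 39, 39 };

    __m128i refv = _mm_loadu_si128((const __m128i *)ref);
    __m128i curv = _mm_loadu_si128((const __m128i *)cur);

    /* MPSADBW compares the first 4 pixels of cur against eight consecutive
       4-pixel windows of ref, producing eight 16-bit SADs in one instruction. */
    __m128i sads = _mm_mpsadbw_epu8(refv, curv, 0);

    uint16_t out[8];
    _mm_storeu_si128((__m128i *)out, sads);
    for (int i = 0; i < 8; i++)
        printf("SAD at offset %d: %u\n", i, (unsigned)out[i]);
    return 0;
}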

Vote SSE for more glorious vendor lock-in! (0)

Anonymous Coward | more than 6 years ago | (#20356637)

There are certainly advantages to be had by implementing SIMD hardware, but Intel's been doing it in counterproductive and selfish ways since the start.

Remember MMX? Those instructions used the FP registers, so you couldn't do FP simultaneously; more important than performance was that Intel's competition couldn't run your code if you used MMX.

Remember SSE/2/3? Intel robbed silicon from the P4's FP capability, so the P4 and derivatives gave good math performance if you wrote with (Intel only!) SSE_latest_version and poor performance if you wrote easy, portable, STANDARD code (did nothing special). Intel dangled the carrot with slightly increased performance if you wrote Intel-only code, and broke thumbs by hurting your performance if you opted for portable, maintainable, silicon- (and thus vendor-)neutral code.

If you're willing to trade vendor lock-in and high priced hardware for performance, the answer has been and still remains PPC/Power. Altivec performance has been much higher than contemporary x86/x87, IA64, and AMD64 parts' since it was introduced in 1999, and remains so today. And if you prefer general purpose parallelism to SIMD, look at Tilera and Niagara!

Show me ONE application running much faster with YET ANOTHER revision to Intel's vendor-locking SSE, and I'll show you MANY applications that abstain in order to maintain good performance on more hardware. Those that value performance over portability would be better off at least obtaining the best performance available since they're willing to pay that price in the first place, and that performance is not available on x86-64, or anything else made by Intel or AMD.

Altivec pisses me off because it's proprietary, but at least performance is very good.
SSE pisses me off because it's proprietary, and it doesn't even offer much of a performance incentive.

Nothing to SSE here, move along. (0)

Anonymous Coward | more than 6 years ago | (#20357089)

You're an idiot.

Re:Vote SSE for more glorious vendor lock-in! (1)

gfody (514448) | more than 6 years ago | (#20358107)

Who makes an SSE version of a function w/o a regular x86 version to fall back on when SSE isn't available?
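In practice the usual answer is runtime dispatch: check the CPU once and fall back to plain x86 when SSE4.1 is absent. A minimal sketch in C using GCC/Clang's __builtin_cpu_supports; the sad_* functions are hypothetical placeholders, and a real SSE4.1 version would live in a separately compiled file built with -msse4.1.

[C]

#include <stdio.h>

/* Plain scalar fallback that runs on any x86. */
static int sad_scalar(const unsigned char *a, const unsigned char *b, int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += a[i] > b[i] ? a[i] - b[i] : b[i] - a[i];
    return sum;
}

/* Stand-in for a real SSE4.1 implementation (same interface, faster inside). */
static int sad_sse41(const unsigned char *a, const unsigned char *b, int n)
{
    return sad_scalar(a, b, n);
}

typedef int (*sad_fn)(const unsigned char *, const unsigned char *, int);

int main(void)
{
    /* Compiler builtin backed by CPUID: pick the best version at runtime. */
    sad_fn sad = __builtin_cpu_supports("sse4.1") ? sad_sse41 : sad_scalar;

    unsigned char a[4] = { 1, 2, 3, 4 }, b[4] = { 4, 3, 2, 1 };
    printf("SAD = %d (SSE4.1 %savailable)\n", sad(a, b, 4),
           __builtin_cpu_supports("sse4.1") ? "" : "not ");
    return 0;
}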

If I install Linux on it (1)

zukinux (1094199) | more than 6 years ago | (#20353261)

Seriously (partly, at least): how many penguins will I see during boot-up? 4?

Re:If I install Linux on it (1)

EvilRyry (1025309) | more than 6 years ago | (#20353541)

Ever see a PS3 booting the Linux kernel? Muchos penguins!

But yes, you would get 4 penguins.

Re:If I install Linux on it (1)

zukinux (1094199) | more than 6 years ago | (#20353709)

Ever see a PS3 booting the Linux kernel? Muchos penguins!

Since the PS3 hasn't got multiple CPUs (or more than 2), it shouldn't have more than 2 penguins, so is it a bug, or have I understood wrong? As far as I know, the number of penguins indicates the number of CPUs the kernel thinks the computer has.

Re:If I install Linux on it (1)

EvilRyry (1025309) | more than 6 years ago | (#20355251)

There are two big penguins, then a small fleet of little penguins underneath them with 'SPE' tattooed on their chests in red letters. Like so: http://www.kernel.org/pub/linux/kernel/people/geoff/cell/debian-penguin-shot.png/ [kernel.org]

Re:If I install Linux on it (1)

zukinux (1094199) | more than 6 years ago | (#20356645)

I don't get why there are 2 penguins and 6 penguins under them in the PS3 boot-up... what do the 6 underneath represent? (Search youtube.com for the PS3 Linux boot-up.)

Re:If I install Linux on it (2, Funny)

Anpheus (908711) | more than 6 years ago | (#20353917)

Does Linux even boot on Blue Gene, or does it hang while trying to draw over one hundred thousand penguins?

AMD Is Dead If They Don't Change The Game (2, Interesting)

osewa77 (603622) | more than 6 years ago | (#20353341)

AMD rose to this position primarily because they didn't make Intel's mistakes: trying to force a new CPU architecture on the market (Itanium) instead of incrementally developing the x86 line, and focusing on clock speed (P4) at the expense of performance per watt. Now that Intel is focused on performance per watt, AMD needs to find a new differentiator for their chips.

Perhaps they should start thinking about how to integrate a high-quality Vista-capable GPU into their processors? (After all, they acquired ATI.) How about sound cards, USB ports, et cetera? If they can fit 90% of a typical motherboard into the processor and usher in a new era of affordable and efficient computers while Intel is busy playing with 64-core chips, why not?

Re:AMD Is Dead If They Don't Change The Game (1)

epiphani (254981) | more than 6 years ago | (#20353525)

They are doing exactly that.

AMD is going the route of a true native quad core with Barcelona, coming out in September. They have the desktop version of that, Phenom, coming out closer to Christmas. Intel is taking the quick and dirty route to quad core - smash two dual core CPUs onto the same die. AMD is actually doing a proper quad core architecture.

They have in their roadmap a GPGPU (general purpose graphics processing unit) for late 2008 or early 2009. I'm personally still trying to understand what this means, but my impression is that it's going to be huge.

Re:AMD Is Dead If They Don't Change The Game (1)

CajunArson (465943) | more than 6 years ago | (#20353677)

Intel is taking the quick and dirty route to quad core - smash two dual core CPUs onto the same die. AMD is actually doing a proper quad core architecture.

A 'smashed' Xeon runs much better than an AMD CPU that I can't buy. If I said AMD sucked because they took the 'quick and dirty' route with the K10's shared L3 victim cache, limited memory prefetching, and limited incomplete subset of SSE4, you'd probably just say those are buzzwords.

Re:AMD Is Dead If They Don't Change The Game (1)

init100 (915886) | more than 6 years ago | (#20354015)

Intel is taking the quick and dirty route to quad core - smash two dual core CPUs onto the same die. AMD is actually doing a proper quad core architecture.

Do you think that the fact that the Intel method is cheaper due to higher yield is irrelevant? With a single-die quadcore, the entire processor needs to be discarded if just one core is broken. With dual-die quadcores, you only need to discard one half of the processor. This increases yield and lowers costs, and I cannot see what is so bad about that. Performance isn't everything, and it isn't like it suffers greatly from the dual-die design. I'd guess that it suffers more from the shared FSB design.
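To put rough numbers on the yield argument, here is a back-of-the-envelope sketch using a simple Poisson defect model; the A D = 0.2 figure is made up purely for illustration. With die area A and defect density D, the fraction of good dies is:

Y = e^{-AD}
Y_{dual-core die} = e^{-0.2} \approx 0.82
Y_{monolithic quad die} = e^{-2 \cdot 0.2} = e^{-0.4} \approx 0.67

Any two good dual-core dies can be paired into a quad-core package, while a single defect on a monolithic quad-core die scraps all four cores at once, so the two-die approach wastes less silicon per good quad-core part.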

Re:AMD Is Dead If They Don't Change The Game (1)

asm2750 (1124425) | more than 6 years ago | (#20354083)

It's cheaper to just squish two dual cores together and have two dies, but performance takes a hit if a processor is made that way. I think AMD was going for scalability with the quad core they designed, probably in such a way that they won't have to make major redesigns until they are ready with their Fusion line. It's funny that a lot of people in the IT world say these are AMD's final years because they can't break X clock speed, but you have to remember that Intel has a lot more fabs than AMD and a lot more money for R&D on new fab processes like 45nm, so it can achieve X clock speed and the morons in IT can be happy. AMD is trying to counter this by doing that kind of R&D with IBM to save on costs and focus more on product design. Besides, what's better than having Big Blue as a research partner? They make processors for every console now, and they build supercomputers for a bunch of major national laboratories as well.

Re:AMD Is Dead If They Don't Change The Game (1)

gnasher719 (869701) | more than 6 years ago | (#20355159)

I don't think having two dual-cores in a package instead of four cores combined is necessarily a disadvantage. To compare these properly, you would have to assume the same quality of implementation. So Intel could have gone for one unified 12MB L2 cache with four access paths instead of two 6MB L2 caches with two access paths each. With the same quality of implementation, the four access paths will be slower because you have to cope with four processors accessing the cache at the same time instead of two. So each access will be slower. Larger caches are always slower anyway, so 12 MB is slower than 6 MB again. On the other hand, 12 MB unified is better if you have only one thread that is cache intensive, the unified cache is better if more than two threads are communicating, and you don't need to worry about allocating threads to the right processors in pairs. Communicating between L2 cache and main memory might be faster for 2x2 cores because you have two access paths; on the other hand, making them fast is harder.

So all in all, the speed advantage could go either way, and will depend on the code that you run. For example, if you run four video encoders simultaneously, all the advantages of 1x4 cores don't help, but the higher speed of 2x2 does. Other tasks will be different.

Re:AMD Is Dead If They Don't Change The Game (0)

Anonymous Coward | more than 6 years ago | (#20354725)

Don't forget that there is still a market for dual core. I could see AMD selling their broken quad-core chips as low-power, high-performance dual-core chips.

Re:AMD Is Dead If They Don't Change The Game (0)

Anonymous Coward | more than 6 years ago | (#20355403)

Or triple core chips!

Re:AMD Is Dead If They Don't Change The Game (1)

Laxator2 (973549) | more than 6 years ago | (#20353941)

AMD did try to play a different game when they announced Fusion and Torrenza. Intel played dirty by turning back the clock to a time when people were addicted to single-core benchmarks (i.e. framerates), and chose the perfect timing: AMD's cash reserves were at their lowest after the ATI merger. Intel are hard-pressed to kill AMD now, before AMD opens its new fabs (Malta, NY?) and becomes capable of meeting demand. And maybe that anti-trust lawsuit has some real basis; otherwise Dell would certainly not bother to sell AMD chips, especially now that they are no longer the fastest performers.

Re:AMD Is Dead If They Don't Change The Game (1)

Jeff DeMaagd (2015) | more than 6 years ago | (#20354319)

AMD still seems to be doing good design, but their fabbing lags Intel by a year. I think it's Intel's fab technology that carried them through despite their other technology misdirections. I hope that the results of the ATI merger become a long-term positive; it seems to be holding them down in the short term. Betting on an on-die GPU is quite a serious bet, quite a bit more serious than an on-die memory controller in my opinion, especially when they go into major debt to acquire another large company just to pull that off. I hope it works out for them.

Re:AMD Is Dead If They Don't Change The Game (0)

Anonymous Coward | more than 6 years ago | (#20355375)

> AMD still seems to be doing good design but their fabbing lags Intel by a year.

Their fab capacity is waaaay below Intel's. Pretty much everybody is a shadow of Intel's capacity, who can pretty much turn a knob and flood the market with CPUs. They do have a SOI process that Intel doesn't, and they're talking about moving to GaAs while Intel isn't. Of course Intel just doesn't like to talk all that much -- they have information control that would make Apple envious.

I wouldn't count AMD out yet. I'm waiting for Phenom before making that call.

Meaningless comparison (0)

Anonymous Coward | more than 6 years ago | (#20353453)

"Yorkfield is only about 7-8% faster than the Kentsfield with similar clock speeds"
One of the main features of the Penryn family is the capacity for high clock speeds, so comparing with the same clock is meaningless.

And although the processor has other performance boosts, I suspect many of them come from SSE4, and nowhere in TFA have I seen any indication that the benchmarks were compiled with it in mind.

Less power or not? (1)

timeOday (582209) | more than 6 years ago | (#20353491)

Although Yorkfield uses a 45nm fab process and consumes less power, Intel plans to stick to its existing 95 Watt and 130 Watt thermal design power ratings.
I don't get it, does it use less power or not? Or does this mean it uses less power per cycle, thus allowing them to ramp up the clock until it's back up to 130 watts?

Re:Less power or not? (1)

Wesley Felter (138342) | more than 6 years ago | (#20355101)

Or does this mean it uses less power per cycle, thus allowing them to ramp up the clock until it's back up to 130 watts?

Yes, they are increasing the clock to maintain the same TDP.
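The first-order arithmetic behind that answer, ignoring leakage (which is a real factor at 45nm), is the usual dynamic power relation:

P_{dyn} \approx \alpha C V^2 f

The die shrink lowers the switched capacitance C and permits a somewhat lower supply voltage V, so the clock f can be raised until dynamic power climbs back up to the same 95 W or 130 W TDP envelope.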

People expect too much. (0)

Anonymous Coward | more than 6 years ago | (#20353503)

Even though the processor itself is only slightly faster, that doesn't mean the market overall doesn't gain from this. Sure, there's no big leap in performance here (it's pretty much a die shrink), but in the end 45nm generates better economies of scale, and the average person ends up with a lot more than a 7-8% improvement as they move up from their old P4 systems. More speed, less heat, better prices. Tech marches forth.

OH! I se (-1, Offtopic)

drDugan (219551) | more than 6 years ago | (#20353581)

Wel that would explin why I keep dopping packets and my cll phone has been beepng that my sevrs can't png each oter. Monopolies. *sigh*

So don't wait, go buy those Kentsfield's! (0)

Anonymous Coward | more than 6 years ago | (#20354595)

Gee, this is awfully convenient. "Hey folks! There's no point in waiting! Come on down and buy!" Just like the car dealers pushing out 2007 models with summer sales.

I expect a noticable boost for high-end HD-TV/DVD (3, Informative)

Terje Mathisen (128806) | more than 6 years ago | (#20355363)

When decoding "full HD" H.264, i.e. 40 Mbit/s Blu-ray or 30 Mbit/s HD DVD at 1080p resolution, current CPUs start to thrash the L2 cache:

Each 1080p frame consists of approximately 2 M pixels, which means that the luminance info will need 2 MB, right?

Since the normal way to encode most of the frames is to have two source frames and one target, motion compensation (which can access any 4x4, 8x8 or 16x16 sub-block from either or both of the source frames) will need up to 2+2+2 = 6 MB as the working set.

Terje
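Spelling out the arithmetic, assuming 8-bit luma samples:

1920 \times 1080 = 2{,}073{,}600 \text{ pixels} \approx 2 \text{ MB of luma per frame}
2 \text{ MB} + 2 \text{ MB} + 2 \text{ MB} = 6 \text{ MB working set}

That is more than the 4 MB of L2 per die on today's Kentsfield quad cores, but it fits within Yorkfield's 6 MB per die, which is presumably where the expected boost comes from.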

Re:I expect a noticable boost for high-end HD-TV/D (0)

Anonymous Coward | more than 6 years ago | (#20360513)

Of course this isn't exactly random access (at least if you organize things right) and simply moving around 6MB times 50fps isn't a big deal anymore.