Intel Haswell CPUs Debut, Put To the Test

timothy posted about 10 months ago | from the good-to-live-when-advances-are-boring dept.


jjslash writes "Intel's Haswell architecture is finally available in the flagship Core i7-4770K and Core i7-4950HQ processors. This is a very volatile time for Intel. In an ARM-less vacuum, Intel's Haswell architecture would likely be the most amazing thing to happen to the tech industry in years. Haswell mobile processors are slated to bring about the single largest improvement in battery life in Intel history. In graphics, Haswell completely redefines the expectations for processor graphics. On the desktop however, Haswell is just a bit more efficient, but no longer much faster when going from one generation to another." Reader wesbascas puts some numbers on what "just a bit" means here: "Just as leaked copies of the chip have already shown, the i7-4770K only presents an incremental ~10% performance increase over the Ivy Bridge-based Core i7-3770K. Overclocking potential also remains in the same 4.3 GHz to 4.6 GHz ballpark."


189 comments

Transactional Memory support (5, Insightful)

rev0lt (1950662) | about 10 months ago | (#43883133)

For me, this is by far the biggest architectural improvement in this line of processors (check http://en.wikipedia.org/wiki/Transactional_Synchronization_Extensions [wikipedia.org] and http://software.intel.com/sites/default/files/m/9/2/3/41604 [intel.com] for more information). If it sticks, it will help solve a lot of multi-core shared-memory software development issues.
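
If it does stick, the programming model is simple. A minimal sketch of the RTM flavor of TSX using GCC's _xbegin/_xend intrinsics (compile with -mrtm; the shared counter and fallback mutex are invented for the example):

<ecode>
#include <immintrin.h>   /* _xbegin, _xend, _XBEGIN_STARTED; compile with -mrtm */
#include <pthread.h>

static long counter;     /* shared state, invented for the example */
static pthread_mutex_t fallback = PTHREAD_MUTEX_INITIALIZER;

void increment(void)
{
    if (_xbegin() == _XBEGIN_STARTED) {
        counter++;                        /* runs as a hardware transaction */
        _xend();                          /* commit; an abort rolls back here */
    } else {
        /* Aborted (conflict, capacity, interrupt...): take a real lock.
           Here the fallback's write to counter conflicts with, and so
           aborts, any in-flight transaction touching the same line. */
        pthread_mutex_lock(&fallback);
        counter++;
        pthread_mutex_unlock(&fallback);
    }
}
</ecode>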

Re:Transactional Memory support (4, Interesting)

Z00L00K (682162) | about 10 months ago | (#43883353)

It's an interesting addition which can be useful for some.

But when it comes to general performance improvement it's rather disappointing. It looks like they have fine-tuned the current architecture without actually adding something that increases performance at the rate we have seen in past decades. To some extent it looks like we have hit a ceiling with the current overall computer architecture, and new approaches are needed. The clock frequency is basically the same as for the decade-old P4, and the number of cores on a chip seems limited too, at least compared to other architectures.

One interesting path for improving performance that may be useful is what Xilinx has done with their Zynq-7000, which combines ARM cores with an FPGA, but it will require a change in the way computers are designed.

Re:Transactional Memory support (2)

Hamsterdan (815291) | about 10 months ago | (#43883935)

I'm pretty sure that if AMD brought out something competitive, Intel would find a way to come out with faster processors, just like they did when the original Athlon kicked ass. Suddenly they were able to manufacture processors that weren't just 33 MHz faster than the previous ones.

Re:Transactional Memory support (1)

cnettel (836611) | about 10 months ago | (#43883955)

But when it comes to general performance improvement it's rather disappointing. It looks like they have fine-tuned the current architecture without actually adding something that increases performance at the rate we have seen in past decades. To some extent it looks like we have hit a ceiling with the current overall computer architecture, and new approaches are needed. The clock frequency is basically the same as for the decade-old P4, and the number of cores on a chip seems limited too, at least compared to other architectures.

Even single-threaded performance, normalized for identical frequencies, has increased since Core 2. That it has increased since the Pentium 4 goes without saying. We do not see the same increases in instructions per second that we used to, but we still see increases. This page [anandtech.com] (and the next) from AnandTech was quite illuminating.

Re:Transactional Memory support (1)

bored (40072) | about 10 months ago | (#43884451)

You should look at that page again; I'm not sure his benchmarking is "fair". If he wants to compare the older CPUs, it might have been amusing to use the 5-year-old version of the code too. That way, you're not seeing the effects of code optimized for the latest CPUs at the expense of the old ones.

Ignoring things that use SSE modes not available on the Core 2 (Cinebench, etc.).

The i7-4770 is actually clocked at 3.9 GHz when running a single core. So for something like 7-Zip, 4807/3.9 = 1232 units/GHz, vs. the Core 2 at 2519/2.5 = 1007 units/GHz. The numbers don't look that great.

We are talking about a CPU with more cache, faster RAM, faster PCIe... and it's only getting a puny 20% single-threaded performance improvement in 7-Zip over nearly 5 years!

Now of course that is the pessimistic view; the optimistic one is that if your application can take advantage of 256-bit AVX, or a dozen cores, then the CPUs have gotten significantly faster.

But I've seen this a few times: someone goes out and purchases a new machine to replace an old high-end machine, and the new machine is actually slower at some number of given tasks. This happened where I worked; we replaced a couple of 3.2 GHz Opteron machines with some nice new 2.0 GHz Xeons, and the latency of our application went up ~25% while the throughput went down ~10%. I'm sure if we had purchased better Xeons that wouldn't have happened. But the guys in purchasing assumed that spending the same amount of money on a new machine as they spent on the old ones 5 years ago would yield a better machine. They were wrong!

Re:Transactional Memory support (2)

bored (40072) | about 10 months ago | (#43884509)

Just to reply again: some of these benchmarks are obvious bullshit. Like the AES one; the new CPUs have AES-NI instructions for accelerating AES.

So yes, if you happen to be doing AES, and you're running code that can take advantage of AES-NI, then the new CPUs are going to fly. But the whole benchmark is so tilted it's not even funny. Why not use a benchmark that renders some SSL-encrypted web pages? Because that benchmark would be bottlenecked by the network stack and the rendering engine, not by the tiny percentage of time the CPU spends doing AES.

He really should have broken the benchmarks into two camps: those actually running the same code on both sets of CPUs, and those that can leverage new instructions available only on the new CPUs.
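
For reference, the accelerated path such a benchmark exercises looks roughly like this: a sketch of AES-128 block encryption with the AES-NI intrinsics (key expansion omitted, the pre-expanded round-key array rk[] is assumed; compile with -maes):

<ecode>
#include <wmmintrin.h>   /* AES-NI intrinsics; compile with -maes */

/* Encrypt one 16-byte block. rk[] holds the 11 round keys of an
   already-expanded AES-128 key schedule (expansion omitted here). */
__m128i aes128_encrypt_block(__m128i block, const __m128i rk[11])
{
    block = _mm_xor_si128(block, rk[0]);        /* initial AddRoundKey */
    for (int i = 1; i < 10; i++)
        block = _mm_aesenc_si128(block, rk[i]); /* one full round per instruction */
    return _mm_aesenclast_si128(block, rk[10]); /* last round skips MixColumns */
}
</ecode>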

Re:Transactional Memory support (3, Insightful)

TheRaven64 (641858) | about 10 months ago | (#43884139)

The hardware lock elision stuff is going to be more than just a little bit useful. It means that software that uses coarse-grained locking can get the same sort of performance as software using fine-grained locking, and close to the performance of software written specifically to support transactional memory. It will be interesting to see whether Intel's cross-licensing agreements with other chip makers include the relevant patents. If it's something that is widely adopted, it is likely to change how we write parallel software. If not, it will just make certain categories of code significantly more scalable on Intel than on other CPUs.

Re:Transactional Memory support (0)

Anonymous Coward | about 10 months ago | (#43884199)

The issue is that ARM's marketing and datacenter power bills both point to power reduction as a must, so Intel has to try to improve performance and power consumption at the same time. You complain about performance gains while the rest of the world is complaining about power consumption.

Re:Transactional Memory support (1)

bhinesley (1853036) | about 10 months ago | (#43884211)

The clock frequency is basically the same as for the decade-old P4

The P4 had an absurdly long 31-stage pipeline, compared to Haswell's 14 stages. More stages mean each stage is shorter, and thus more cycles per second. So, sure, the clock frequency is "basically the same", but clock frequency alone doesn't mean a whole lot when you're comparing completely different architectures.
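
The arithmetic behind that point is just performance = IPC x clock, so equal clocks can hide a large gap. A toy illustration (the IPC figures are invented, not measured):

<ecode>
#include <stdio.h>

/* Throughput = IPC x clock. The IPC values below are invented round
   numbers, purely to illustrate why equal clocks != equal speed. */
int main(void)
{
    double p4_ipc = 1.0, haswell_ipc = 3.0;   /* hypothetical IPC */
    double ghz = 3.8;                         /* same clock for both */

    printf("P4-ish:      %.1f G instr/s\n", p4_ipc * ghz);
    printf("Haswell-ish: %.1f G instr/s\n", haswell_ipc * ghz);
    return 0;
}
</ecode>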

Re:Transactional Memory support (0)

Anonymous Coward | about 10 months ago | (#43884457)

Which is why the Wii U is more powerful than people give it credit for.

Re:Transactional Memory support (4, Interesting)

PhrostyMcByte (589271) | about 10 months ago | (#43883915)

A more immediately useful feature is backwards-compatible hardware lock elision. Before taking a lock, you emit an instruction which is a NOP for older CPUs but causes Haswell to ignore the lock and create an implicit transaction. Instant scalability improvement to just about every app out there with contention, without having to distribute Haswell-specific binaries.
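
Concretely, the elision hint rides on the locking instruction itself as a prefix that older CPUs decode as a no-op. A sketch of an elided spinlock using GCC's HLE-flagged atomic builtins (GCC 4.8+, compile with -mhle; the lock variable and spin loop are invented for the example):

<ecode>
/* Spinlock whose acquire/release carry XACQUIRE/XRELEASE prefixes.
   Pre-Haswell CPUs treat the prefixes as no-ops and just take the
   lock, so the same binary runs everywhere. */
static int lock;   /* 0 = free, 1 = held */

void hle_lock(void)
{
    /* XACQUIRE xchg: Haswell elides the write and starts a transaction */
    while (__atomic_exchange_n(&lock, 1,
                               __ATOMIC_ACQUIRE | __ATOMIC_HLE_ACQUIRE))
        while (__atomic_load_n(&lock, __ATOMIC_RELAXED))
            __builtin_ia32_pause();   /* spin politely until it looks free */
}

void hle_unlock(void)
{
    /* XRELEASE store: commits the elided critical section */
    __atomic_store_n(&lock, 0, __ATOMIC_RELEASE | __ATOMIC_HLE_RELEASE);
}
</ecode>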

My favorite feature, though, is scatter/gather support for SIMD. It's very important because, up until now, loading memory from several locations for SIMD use has been a pain in the ass involving costly shuffles, and it often required you to load more than you actually wanted, possibly forcing you to spill registers. It's really not something you want to do, but sometimes there are no good alternatives. I'll be super interested to see benchmarks taking this into account.
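
(Strictly speaking, AVX2 brings gather only; hardware scatter arrived later.) The gain is that eight scalar loads plus insert/shuffle work collapse into one intrinsic. A sketch (compile with -mavx2; the table and index arrays are assumptions of the example):

<ecode>
#include <immintrin.h>   /* AVX2; compile with -mavx2 */

/* Load table[idx[0..7]] into one 256-bit register in a single
   gather, instead of eight scalar loads plus shuffles. */
__m256 gather8(const float *table, const int *idx)
{
    __m256i vidx = _mm256_loadu_si256((const __m256i *)idx);
    return _mm256_i32gather_ps(table, vidx, 4);  /* scale 4 = sizeof(float) */
}
</ecode>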

Re:Transactional Memory support (1)

Anonymous Coward | about 10 months ago | (#43884431)

Instant scalability improvement to just about every app out there with contention, without having to distribute Haswell-specific binaries.

Sorry, that's not completely true. Any app that has LOCK contention but little DATA contention will improve. Most highly contended locks naturally have some degree of shared data, and if the CPUs collide on modifications to those cache lines then the transaction fails. Performance *can* actually suffer. Any contended lock with very little data contention should just be broken up into smaller locks (or have some parts done outside the lock).

Optimized for Macbook Air (-1)

Anonymous Coward | about 10 months ago | (#43883139)

That's my quick reading. Maybe I'm wrong

Re:Optimized for Macbook Air (-1)

Anonymous Coward | about 10 months ago | (#43883287)

Wow... I didn't think Apple fanboys were so sensitive as to mark an innocuous and possibly true comment as a troll. Wow.

Re:Optimized for Macbook Air (0, Insightful)

Anonymous Coward | about 10 months ago | (#43883379)

You'll need the electricity savings to offset the absurd cost of the MacBook Air [amazon.com].

If it saves you $50 in electricity per month over your present system, all other factors being equal, in 3 and a half years you'll be saving money. People don't buy luxury to save money. Relevant car analogy, not a single 2013 Ferrari model gets 20 MPG or better. [fueleconomy.gov] By contrast, most Kia vehicles get 30 MPG or better. [fueleconomy.gov]

The moral of this story? Poor people care about economy, wealthy people care about performance, and business people look for a happy medium.

Re:Optimized for Macbook Air (0)

Anonymous Coward | about 10 months ago | (#43884101)

And then there's no-poor-but-not-rich-either people who just want a system that fucking works.

Performance per Watt (4, Insightful)

Technician (215283) | about 10 months ago | (#43883141)

Hmm, performance per watt seems to have been glossed over.

The possibility of a fanless media center PC, the ability of a server farm to eliminate over half its cooling cost, and long battery life in a gaming-class laptop seem not to be the focus of the article.

Gee, it's only X percent faster...

Re:Performance per Watt (1)

gl4ss (559668) | about 10 months ago | (#43883151)

Hmm, performance per watt seems to have been glossed over.

The possibility of a fanless media center PC, the ability of a server farm to eliminate over half its cooling cost, and long battery life in a gaming-class laptop seem not to be the focus of the article.

Gee, it's only X percent faster...

Still uses plenty of juice when gaming...

Re:Performance per Watt (0)

Anonymous Coward | about 10 months ago | (#43883277)

The post is referring to the power drain while under load, which the quote in the summary talks about. "single largest improvement in battery life in Intel history" is quite the claim to not support.

Re:Performance per Watt (-1)

Anonymous Coward | about 10 months ago | (#43883241)

Hmm, performance per watt seems to have been glossed over.

The possibility of a fanless media center PC, the ability of a server farm to eliminate over half its cooling cost, and long battery life in a gaming-class laptop seem not to be the focus of the article.

Gee, it's only X percent faster...

We're not going back to fanless anything, my friend. Tablets were the last bastion of fanless computing, and even that is now crumbling as Windows 8 (not RT) tablets become more popular. Sure, these overheating piles of crap may not be very popular around here (/.), but we are not the ones who drive the market.

Fact is, the emphasis is still on processing power over performance per watt. Sure, Intel gives it lip service, but not much beyond that. Because, again, the market is not necessarily interested in fanless, cool-running devices. The last fanless GPU cards had half a kilogram of heatsink and *still* needed an external fan (come to think of it, these shouldn't have been called fanless).

Re:Performance per Watt (0)

Anonymous Coward | about 10 months ago | (#43883263)

I don't think ANYONE in his right mind will do ANYTHING even remotely CPU intensive on a laptop that is NOT plugged in.

Performance per watt is relevant only for situations requiring either very large (supercomputers) or very small (appliances) amounts of performance. Gaming, for instance, is not one of these.

Re:Performance per Watt (0)

Anonymous Coward | about 10 months ago | (#43883439)

Speak for yourself. The main reason PCs are still not the main media centre / gaming unit in households is the ridiculous noise they make. The PSU has a fan, the GPU has a fan, the CPU has a fan, the chassis has 2 to 8 fans, and some motherboards have small fans. Even going water cooling means you have bloody annoying pumps.

Using less wattage means less cooling required. A little above you, perhaps?

Re:Performance per Watt (1)

Gadget_Guy (627405) | about 10 months ago | (#43883729)

I don't think ANYONE in his right mind will do ANYTHING even remotely CPU intensive on a laptop that is NOT plugged in.

What nonsense. Gaming-capable notebook computers are quite affordable these days. I have one to play games on my long train trips to work. Not only do more efficient computers mean that we can play on batteries for longer, they also stop our genitals being fried from CPU/GPUs under heavy load.

Similarly, my main gaming PC is pretty low-spec solely because I don't want a furnace sitting at my feet, especially in summer; with my previous system I used to have to stop playing games when it got too hot. More efficient processors mean that I don't have to settle for such a weak system just because I want cool & quiet gaming.

Re:Performance per Watt (1)

aliquis (678370) | about 10 months ago | (#43884395)

our genitals being fried from CPU/GPUs under heavy load.

There must be a joke in there waiting to be found but I don't know what it is :)

Re:Performance per Watt (1)

Molochi (555357) | about 10 months ago | (#43883985)

Yeah, they only tested the power consumption of the new unlocked/enthusiast desktop CPU (4770K) with an 84 W TDP.

Toward the beginning of the article they say that there will be a 35 W i5 Haswell for socket 1150. No mention of specs, though.

Progress (1)

Anonymous Coward | about 10 months ago | (#43883155)

Intel's Haswell architecture would likely be the most amazing thing to happen to the tech industry in years.

Seems I've been hearing this about each of their "tocks": Nehalem, then Sandy Bridge, and now again with Haswell.

Their process is good, and that kind of advertising may even be warranted, but the hype is really getting repetitive.

Software killed the PC, not hardware (4, Insightful)

Anonymous Coward | about 10 months ago | (#43883183)

The lack of phenomenal hardware improvements may annoy the nerds, but the mass market PC is killed by the abysmal software environment. People are fleeing to tablets and phones and with that the cloud because maintaining a PC has become just about impossible for laymen. The slowness of a desktop that hasn't seen professional maintenance is astonishing, if it is still working at all. Viruses and trojans aside, every bit of software comes with its own updater, many of which are poorly implemented resource and attention hogs. If the updater doesn't do you in, it's the bundled adware, sometimes installed by the update "service". The PC is stiff and stone cold, a host overwhelmed and killed by its parasites. Time to put it 6ft underground.

Re: Software killed the PC, not hardware (0)

Anonymous Coward | about 10 months ago | (#43883417)

Couldn't agree more. This needs to be upvoted to 5 and then given more for good measure. The PC's days on the desktop are numbered. When mobile platforms have their GUIs optimised for the desktop, that will be it for PCs. Only desktops needed for specialist apps (those without a non-PC equivalent) and servers will survive.

Servers will be next, though. Soon we will have hundreds of tiny ARM-based systems set up as a software cluster, in the same physical space as one x86-based server, drawing a fraction of the power but with performance that is magnitudes better.

Re:Software killed the PC, not hardware (1)

Anonymous Coward | about 10 months ago | (#43883465)

Absolute bollocks. There's nothing special about tablets that makes them immune. What? Exchange the power to install what I want for a crApp Store? No thanks. Fuck the cloud too. I'm pulling all my data off Flickr after their latest fiasco. Paying to play in somebody's walled garden is no better than paying a subscription fee for AV that slows down your machine. Long live the PC. Fuck the tablets. People will get sick of paying for stuff that should be free. People will get sick of the cloud when it costs them money, and not just nerds. I know a "layman" who lost a cloud app when the OS updated because the stupid thing didn't keep track of the fact that he had paid for it. Their only "solution" was for him to pay again. Shit like that will kill the cloud of tablets (sounds like a bad acid trip), and good riddance.

Re:Software killed the PC, not hardware (1)

TheRaven64 (641858) | about 10 months ago | (#43884159)

There's nothing special about tablets, except for one thing: users don't expect backwards compatibility. You can make significant improvements if your customers don't expect to be able to use your product as a drop-in replacement for something else.

Re:Software killed the PC, not hardware (0)

Anonymous Coward | about 10 months ago | (#43883745)

Oh wow, some ignorant ass can't keep his P4 XP machine running because he hasn't installed a single update on the thing since it came from the factory!

Dude, stop living in 2005.

Re:Software killed the PC, not hardware (-1)

Anonymous Coward | about 10 months ago | (#43883839)

No, I can keep a PC running, and supposedly you can too. We're not the mass-market consumers, though. They can't maintain their PCs, and then they call me to fix the broken pieces of shit (software-wise; the hardware is often more current and better than what I use). I think that with your attitude you're digging the PC's grave. The industry is so full of it. Between the anti-malware outfits latching onto every CPU cycle and I/O transaction available, updaters screaming for attention every day, and people like you who proclaim that anyone who can't keep a PC clean must be a doofus, it's no wonder that users flee in droves. Don't you understand how horrible their PC experience must be when they leave it for Android or iOS?

Re:Software killed the PC, not hardware (1)

lightknight (213164) | about 10 months ago | (#43884595)

You're right. The PC industry needs to come out with maintenance-free PCs, for the common people!

We will take the automotive industry's example of oil-free cars, and run with it. No longer will you need to stop by Goodyear / Jiffy Lube / etc. or (shh, it's forbidden) change your oil yourself: this is the year 2013, right? WTF do they mean, read the car manual and perform maintenance? Don't they know that no one has time for that? Just make cars / trucks / boats / planes maintenance-free, and your problem is solved! Besides, those car mechanics probably steal from you when you bring your car in... or break something so that you need to bring it back in... and $40 for an oil change? It's a rip-off!

Hell, we should replace the keyboard with 'The Facepalmer (TM),' because the computers these days are smarter than the end users, so they should know what they want to do, right? It's not that hard! Just have the user smash their face into 'The Facepalmer (TM),' and it will reorganize their iTunes collection / send an email for them / or browse the web for them! It's that easy!

Re:Software killed the PC, not hardware (4, Interesting)

PopeRatzo (965947) | about 10 months ago | (#43884341)

Time to put it 6ft underground.

This was the giveaway.

What do you care if there are still people who would rather use a desktop PC that's not behind a garden wall and actually get work done? Why do you insist that the PC platform has to be killed off? Isn't there room in the world for more than one set of computing needs?

This notion, that only the most popular form of anything should exist, pops up strangely often around here. The iPad is phenomenal, so Android tablets should just disappear from the market. The iPhone is popular, so no Windows phones can be allowed. That sort of thing.

Friend, I can understand that you'd rather work on a tablet and have someone else make decisions about what you can and cannot have, what you can and cannot do, but why in the world are you so insistent that no one else be able to make their own choice?

I don't get you.

Re:Software killed the PC, not hardware (2)

etash (1907284) | about 10 months ago | (#43884375)

Do you know how many people have declared the PC dead? It's usually either people who have "better" solutions to offer, or the useful idiots who believe them. Ever tried doing some real work on a tablet? Like video editing, image editing, or mundane tasks like Excel and Word editing? How about video games?

Re:Software killed the PC, not hardware (1)

drinkypoo (153816) | about 10 months ago | (#43884487)

The workstation isn't going to die any time soon, but it is being marginalized by game consoles on the one hand and portable devices on the other. I perform more and more tasks on my phone because it is close to hand and because there's an app for that. I can now reasonably get some information on an address faster by firing up maps on my phone (which is a 2011 model, and not exactly hot shit) than by walking into the next room, or look up a Wikipedia article on the same basis.

The hobbyist computer is being replaced by inexpensive SBCs, so only diehard gamers and various professionals actually need big ugly desktop computers any more. I've still got one and I'm not planning on giving it up any time soon, but if some non-technical type asked me what they should buy, I'd ask what they wanted to do with it and probably wind up recommending a tablet, or a phone upgrade plus some STB-class computer, a game console, or literally an STB.

How does this compare (4, Interesting)

maroberts (15852) | about 10 months ago | (#43883253)

With AMD's CPU/GPU solutions?

Re:How does this compare (1)

Anonymous Coward | about 10 months ago | (#43883321)

It should probably have said 'In graphics, Haswell completely redefines the expectations for Intel processor graphics.'

Re:How does this compare (0)

Anonymous Coward | about 10 months ago | (#43883347)

Same thing, just slightly more advanced and more expensive.

Re:How does this compare (0)

Anonymous Coward | about 10 months ago | (#43883357)

CPU: Intel's still way ahead for anything constrained by Amdahl's law; AMD's probably a bit faster when something can actually use all those cores.

GPU: Intel's winning for once, but that may not hold up under examination of the price ranges. AMD puts their good GPUs in low-end CPUs, Intel puts them in high-end CPUs. If Intel hasn't mixed that up with this release too much (I haven't investigated), AMD may still be ahead in practice.

Power/thermals: If you care, you shouldn't even be considering AMD.

Re:How does this compare (0)

Reliable Windmill (2932227) | about 10 months ago | (#43883483)

You can't factor in external graphics boards for this comparison, and the AMD A10 still beats the new Intel on graphics.

Re:How does this compare (0)

Anonymous Coward | about 10 months ago | (#43883709)

According to Anandtech's benchmarking at least, the new Intel stuff looks to be solidly ahead of the A10. What I was referring to is that the current Haswell chips cost quite a lot more than an A10.

If I were trying to build a decent all-around rig, not specifically for gaming, I wouldn't be looking at the i7's price range. More like an A10. At the AMD A's price range, Intel has historically had worthless GPUs. They've only been stepping that up in higher end stuff, but if I'm paying enough for a CPU that Intel's throwing in decent graphics, I'm going to want a real GPU to go with it, rendering it a moot point.

Re:How does this compare (1)

Reliable Windmill (2932227) | about 10 months ago | (#43883853)

It's in part 11 - IGP Performance: AMD A10 (Radeon HD 7660D) is up to 20% ahead of the new Intel (Intel HD 4600) in graphics performance. When you factor in external graphics boards, or tests which are CPU-bound, you get different results.

Re:How does this compare (1)

beelsebob (529313) | about 10 months ago | (#43884115)

The point being that the HD 4600 is the slowest Haswell IGP. Look at the Iris 5200 instead in AnandTech's benchmarks. It's a good 20-30% ahead of the A10's 7660D.

Re:How does this compare (1)

Anonymous Coward | about 10 months ago | (#43884573)

It's also only in $400+ mobile i7s.

Re:How does this compare (2)

beelsebob (529313) | about 10 months ago | (#43884109)

Nope, the Iris 5200 wipes the floor with the A10. You're probably referring to the low end HD 4600 in the i7 (which they put there because they expect no one buying an i7 to be using integrated graphics).

Re:How does this compare (1)

Molochi (555357) | about 10 months ago | (#43884083)

They compared the 4770K to the A10. The A10 was still faster in games; the 4770K was faster in OpenCL.

Tom's also said Intel will have a faster integrated graphics setup (Iris Pro), but it will be exclusive to the BGA offerings and not offered on LGA1150 CPUs.

Graphics.. (0)

ottawanker (597020) | about 10 months ago | (#43883295)

Haswell completely redefines the expectations for processor graphics.

.. for Intel. It still seems to lag behind AMD's on-die graphics.

Re:Graphics.. (2, Informative)

timeOday (582209) | about 10 months ago | (#43883391)

You must have said that before looking at the benchmarks? Looks to me like AMD is toast. Intel's integrated graphics beat AMD's on every game in the AnandTech test.

The new Intel even beats the discrete mobile GPU (GeForce GT 650M) in a couple of tests. In most, the Intel is somewhat slower but uses around half the power.

Re:Graphics.. (0)

Reliable Windmill (2932227) | about 10 months ago | (#43883475)

You've misinterpreted the benchmarks, or read the wrong ones. Look at part 11 (IGP performance): the AMD A10 still beats the new Intel in graphics performance, and it does so at half the price.

Re:Graphics.. (1)

timeOday (582209) | about 10 months ago | (#43883817)

The AnandTech review I was referring to covers the HD 5200 integrated graphics in the mobile chip. The TechSpot one you are referring to is for desktops, where Intel has not included the HD 5200 but the HD 4600, so the AMD won by 7% - 24% in integrated graphics performance. I guess the HD 5200 is not coming in desktop CPUs for a few more months, but it is about 50% faster than the HD 4600 (both are on the AnandTech charts), so I hope AMD has something big up its sleeve.

As to the price difference, CPU performance has to count for something too. In media encoding [techspot.com] the Intel is more than twice the speed of that particular AMD. And with a discrete graphics card, the Intel beat every AMD in the test. And at idle, the Intel takes less than half the power of an AMD FX system.

Re:Graphics.. (1)

illaqueate (416118) | about 10 months ago | (#43883683)

That said, the only article I saw there was comparing a $650 chip vs. a $130 AMD A10 chip, and the only game where it comes close to the 650M in performance is Crysis Warhead, which is heavily CPU-limited. In other games it was only competitive where the settings were low, which not only allowed the Intel graphics to close the gap but helped the A10 as well.

The 4770K, which is the $339 part, is slower than the A10 in games, according to this:

http://www.techpowerup.com/reviews/Intel/Core_i7_4770K_Haswell_GPU/ [techpowerup.com]

benchmarks vs benchmarks (0)

Anonymous Coward | about 10 months ago | (#43884273)

Maybe he was looking at the THG test (4770K vs. A10), where Intel lost on every game.

Anand Lal Shimpi (1)

sartwell (1895728) | about 10 months ago | (#43883311)

Anand wrote that summary, not jjslash.

Re: Anand Lal Shimpi (0)

Anonymous Coward | about 10 months ago | (#43883361)

Technically, jjslash put everything Anand wrote in quotes, so the summary is a quote of a quote.

Re: Anand Lal Shimpi (0)

Anonymous Coward | about 10 months ago | (#43884113)

Package mine to take out. And add a double order of wontons too.

Long way to go (-1)

Anonymous Coward | about 10 months ago | (#43883369)

The only thing they've matched ARM on for efficiency is *idle* power draw. But a processor doing nothing really isn't a processor; it's not *processing*, for one thing!

Some way to go: an improvement, but they'll need to do a lot better to grab a share of battery-limited things like mobile phones. There's also another problem: ARM and Android grew together. Android makes it trivial to run massive numbers of threads (even my little app uses 15 AsyncTasks), and ARM has grown core count easily; they're small cores, easy to put lots of on a die.

Intel has few/fast/complex cores, and that's not a good fit for Android.

ARM is scaling to large numbers of cores (4 is quite common now, and next-generation chips are 8-16 cores), so Intel will need to catch up on power draw AND core count at the same time, and do it all while competing with the entire chip industry on tiny margins.

I think they'll quit, myself, and just make ever more expensive chips for legacy Windows boxes.

Re:Long way to go (1)

Junta (36770) | about 10 months ago | (#43883777)

I'd say the more critical thing is the fact that the most demanding Android applications are compiled ARM-specific.

Intel does have a product with a high core count; the Xeon Phi has 50+ cores, for example.

I haven't seen a lot of evidence that ARM under load offered better price/performance than Intel before. The only claim to that effect I can recall committed a grievous mistake: measuring ARM power usage but assuming the TDP value for x86 rather than measuring it. It was undeniable that under typical smartphone/tablet conditions (mostly idle, but immediately able to do things on demand) Intel did horribly, and that seems to have been the engineering focus this time; a shallower sleep state that works without screen blanking is one notable facet.

While Intel's prospects in the mobile arena are slim (Android has a lot of momentum, and that momentum is largely tied to ARM in much the same way as Windows is tied to x86), I suspect they will continue to rule the roost in datacenter and workstation-like workloads.

Need to wait a few years (4, Insightful)

gnasher719 (869701) | about 10 months ago | (#43883425)

In the last few years, Intel has been adding new instructions that will give major performance gains when they are used. For example, Haswell can do two fused multiply-adds with four double or eight single precision operands per cycle per core, but no current code will use this. We'll get the advantage when HPC code is recompiled (in a few months' time), and when general code assumes that everyone has this feature (in five years' time). But on the other hand, we _now_ get the advantages of features they added five years ago.
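
For instance, once an AXPY-style loop is recompiled to use Haswell's FMA3 instructions, each iteration fuses a multiply and an add over four doubles, and the core can issue two such operations per cycle. A sketch with the FMA intrinsics (compile with -mfma; assumes n is a multiple of 4 and 32-byte-aligned arrays):

<ecode>
#include <immintrin.h>   /* FMA3 intrinsics; compile with -mfma */

/* y[i] = a*x[i] + y[i], four doubles per fused multiply-add.
   Assumes n % 4 == 0 and 32-byte-aligned x and y. */
void axpy(double a, const double *x, double *y, int n)
{
    __m256d va = _mm256_set1_pd(a);
    for (int i = 0; i < n; i += 4) {
        __m256d vx = _mm256_load_pd(x + i);
        __m256d vy = _mm256_load_pd(y + i);
        _mm256_store_pd(y + i, _mm256_fmadd_pd(va, vx, vy)); /* one rounding step */
    }
}
</ecode>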

Re:Need to wait a few years (3, Informative)

Anonymous Coward | about 10 months ago | (#43883533)

This is absolutely the most overlooked aspect of Haswell. As an incremental improvement it's less than stellar, but in certain areas it effectively doubles performance over the previous generation. Aside from FMA for floating-point apps, the integer 256-bit SIMD pipeline is now effectively feature-complete.

Your point about waiting for recompiles is a good one: all the more reason we should be moving to adaptable metaprogramming systems for HPC, rather than constantly reworking codebases by hand. Projects like Terra (http://terralang.org) are particularly promising in this regard.

*Some* codebase will need rework. (1)

Junta (36770) | about 10 months ago | (#43883801)

For the vast majority of even HPC code, it means compiler rework and math library development. The vast majority of the benefit can be achieved by dropping in a new library without rebuilding the application. In your example it would be the interpreter, and those actually tend to be the last things to receive this attention.
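
One common shape for such a drop-in library: probe the CPU once at startup and route to the widest kernel available, so the same binary gets faster on Haswell without rebuilding the application. A sketch using GCC's CPU-feature builtins (the kernel names and stub bodies are invented):

<ecode>
#include <stdio.h>

/* Stubs standing in for real math kernels: the fast one would be
   compiled with -mavx2 -mfma, the other as a plain baseline. */
static void daxpy_fma(void)     { puts("AVX2+FMA path"); }
static void daxpy_generic(void) { puts("baseline path"); }

void (*daxpy)(void);   /* resolved once, called everywhere */

void init_dispatch(void)
{
    __builtin_cpu_init();   /* prime the feature flags (GCC) */
    daxpy = (__builtin_cpu_supports("avx2") && __builtin_cpu_supports("fma"))
          ? daxpy_fma : daxpy_generic;
}
</ecode>

A real library would run init_dispatch() from an initializer/constructor and export daxpy as its public entry point.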

Re:*Some* codebase will need rework. (0)

Anonymous Coward | about 10 months ago | (#43884019)

Terra compiles against current LLVM builds; LLVM has supported AVX2 (in some form) since 2011

Overclocking potential (1)

pellik (193063) | about 10 months ago | (#43883459)

I don't know what the OP is talking about when he says it has only the same overclocking potential.

The -K series has unlocked the base clock, allowing it to be set as high as 166 MHz without destabilizing the other controllers on the die. This should allow for considerably more fine-tuning.

Also, the theoretical maximum overclock is 8 GHz (80x100 MHz, 64x125 MHz, or 48x166 MHz). 4.6 GHz may still be a reasonable goal for an air-cooled system, but there is certainly more potential.

Re:Overclocking potential (0)

Anonymous Coward | about 10 months ago | (#43883549)

Not to mention 4.5 GHz-and-above overclocks at lower voltages than what you'd need on Sandy/Ivy.

Heat Dissipation (2)

stms (1132653) | about 10 months ago | (#43883473)

How hot do these chips get? I have to throttle my (not overclocked) 3770K when I do media encodes because it gets too damn hot. I've been meaning to get a better cooler; I just haven't gotten around to it.

Re:Heat Dissipation (1)

Anonymous Coward | about 10 months ago | (#43884121)

There's no point in throttling the CPU manually. It'll automatically start throttling itself when it gets too hot (105C or higher). The thermal shutdown temp is 130C. Nevertheless, if a non-overclocked 3770k keeps hitting 105C, you might want to check that the cooler is seated correctly and that the thermal paste has been applied properly.

Re:Heat Dissipation (1)

stms (1132653) | about 10 months ago | (#43884601)

I'm testing it right now and I'm getting 98C as my max, which is a bit more than I'm comfortable with. I think I may have seen it a bit higher before. This is the first rig I've put together by myself, so it's possible I did something wrong. I've heard that the 3770K runs hot; that's why I bring it up.

Re:Heat Dissipation (1)

DaveGod (703167) | about 10 months ago | (#43884423)

Hmm, does the Intel-supplied fan make quite a racket when it starts getting hot? When it gets hot, the fan should speed up considerably compared to idle. Also, the system should be throttling the CPU automatically if it gets too hot. If these are not happening, I suggest checking your BIOS settings (I assume you are not running any tweaking software supplied by the manufacturer, which is usually very clunky). Another possibility is that the hot air is not being exhausted.

If you end up getting a new cooler, have a look at some of the excellent reviews of the Cooler Master Hyper 212+, which is very popular and inexpensive; I can confirm the reviews as regards it being effective and quiet.

Re:Heat Dissipation (1)

stms (1132653) | about 10 months ago | (#43884613)

It gets a bit louder, though not to the point that it's too bothersome. My mobo is a GIGABYTE GA-Z77X-UD5H; all I did when I built my system was upgrade the firmware and configure boot options. Thanks for the tip on the cooler. I was considering the Noctua NH-D14, which is a bit more pricey, but if it keeps my chip from frying it'll be worth it.

AMD still a winner (0)

Reliable Windmill (2932227) | about 10 months ago | (#43883511)

Intel does have the big, raw CPU power, but I think that's not something the majority of desktop users care about. The AMD A10 still beats the new Intel on graphics, which is probably not key for most desktop users either, but the price is: 4 out of 5 desktop users have all their CPU and GPU needs covered by the AMD A-series, at half or even one third the price of the equivalent Intel. This is the key: if you go AMD you save lots of money.

Re:AMD still a winner (0)

Anonymous Coward | about 10 months ago | (#43883597)

Well, it depends, of course, since you get more per watt; for datacenters there is pretty much no competition. I heard somewhere that for anything that requires big processing power, you should pretty much replace everything every 2 years and you will save money (for the same processing power, of course).

Re:AMD still a winner (0)

Anonymous Coward | about 10 months ago | (#43884735)

Well, it depends, of course, since you get more per watt; for datacenters there is pretty much no competition. I heard somewhere that for anything that requires big processing power

It is not performance per watt. It is performance per dollar that matters. That includes hardware, software, and power (for the server and cooling). Yes, generally after 2 or 3 years, replacing at least the CPU is well worth it, unless you are running idle most of the time.

For home or office machines, performance per watt is meaningless unless you are running your CPU at 100% for some crazy reason. For home or office, it is idle power and capital costs that matter most. If a CPU is fast enough, who cares about the performance per watt or the number of concurrent threads?

I have an AMD CPU. CPU utilization is 1.5%. It is fast enough. Cost me $100 2 years ago. Spending another $100 for better performance per watt or more computing power would have been a complete waste. On the other hand, spending $50 extra on a high efficiency power supply is well justified.

AMD also has more MB choice and more PCIe lanes (1)

Joe_Dragon (2206452) | about 10 months ago | (#43883739)

AMD also has more motherboard choice and more PCIe lanes.

With Intel you need PCIe switches on the motherboard to get more than 16 PCIe lanes (not counting the 4 + DMI for the chipset link).

Re:AMD still a LOSER for me (2)

fnj (64210) | about 10 months ago | (#43883821)

For me. I couldn't care less what the majority of desktop users care about. What I care about is a CPU with high performance per watt and graphics good enough to scroll a text display (editor, Eclipse) with some graphics (browser) fast enough not to be annoying, and to watch HD video without any pauses. Sandy Bridge was more than good enough in the graphics department for anything I would ever want to do on current displays. Haswell will probably more than maintain this on the highest-resolution displays coming down the road in the next few years.

Dollar cheapness of the CPU within limits does not impress me at all. I am generally content with a system for at least 4 years to spread out that cost, and there are many other cost centers in a system besides CPU.

I have never used a single AMD system and see no reason to believe that they will ever make anything that would change my mind.

I don't really begrudge any slack jawed gamers their massive nuclear power plant busting AMD systems with the absurd overkill of space heater SLI graphics; it's just not anything that has the slightest relevance to anything I could ever care about.

Re:AMD still a LOSER for me (1)

drinkypoo (153816) | about 10 months ago | (#43884461)

Dollar cheapness of the CPU within limits does not impress me at all. I am generally content with a system for at least 4 years to spread out that cost, and there are many other cost centers in a system besides CPU.

It's nice to have the kind of money where you can just throw it away, but the majority of people on this planet do not have that kind of luxury.

I don't really begrudge any slack jawed gamers their massive nuclear power plant busting AMD systems with the absurd overkill of space heater SLI graphics; it's just not anything that has the slightest relevance to anything I could ever care about.

I don't really begrudge any slack jawed gamer-haters their massive pocketbook-busting Intel processors with the absurd price tag, but the fact is that the majority of users' needs are covered better by a cheap AMD chip than by an expensive Intel chip, because their needs include a low price.

Re:AMD still a LOSER for me (0)

Anonymous Coward | about 10 months ago | (#43884535)

What I care about is a CPU with high performance per watt

I have never used a single AMD system

Either you're lying or you used a P3 until mid 2006...

Mobile Graphics (1)

Daniel Hoffmann (2902427) | about 10 months ago | (#43883577)

It seems the graphics for the notebook chips are pretty good: http://www.notebookcheck.net/Intel-HD-Graphics-4600.86106.0.html [notebookcheck.net] One step above the Nvidia 540M (the one I have on my laptop): http://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmark-List.844.0.html [notebookcheck.net] For reference, that can play recent games on medium settings very well. For gamers who can't afford (or don't want to bother with) two computers, it seems this chip will provide enough horsepower to play most games; a throwback to the era (the 486 DOS days) when you could simply play all available PC games if your rig was no older than 2 years. These days you have to buy a dedicated (and expensive) gaming PC to play games, and I think this is what gave the console makers their edge on the market. Intel can change that.

Why would Intel care (4, Insightful)

zrelativity (963547) | about 10 months ago | (#43883677)

Why would Intel care about raw CPU performance? They have no competition from AMD in CPU performance. The GPU performance may not be as good as the A10's, but it has improved, and that's what matters for Intel.

For a little while now, Intel has correctly perceived that the risk to their business is the shift in computing to mobile devices, and they are addressing that issue. One thing Intel has always been very good at, and I'm a great admirer of them for it: when they perceive a risk, they are extremely good at steering their giant ship rapidly into the headwind and tackling the threat. Their process technology lead also gives them a huge advantage.

Over the next couple of years the battlefront will be mobile and server devices; desktop processors will become second-class citizens. Maybe this will give some lifeline to AMD, but AMD is so far behind on performance.

Re:Why would Intel care (1)

Samantha Wright (1324923) | about 10 months ago | (#43883771)

A monopoly must always strive to slightly outdo itself, so that it may motivate its captive market to continue consuming. Regardless of the technical challenges inherent in improving performance now, a 10% improvement really says "here's something we can force the next cycle of bleeding-edge adopters to take up."

Get A Clue, Intel (4, Insightful)

Jane Q. Public (1010737) | about 10 months ago | (#43883715)

While I am all for advances in CPUs, I seriously wish Intel would go back to a naming scheme for its CPUs that made any kind of sense to the average buyer (or even the technically-oriented buyer). I have grown really weary of having to look at tables of CPU specifications every time I shop around for computers.

Intel's naming scheme -- expecially in recent years -- has been a mishmash of names and numbers without any obvious coherence. Get a clue Intel. You're hurting your own market.

If I didn't have to run OS X in my business, I'd buy AMD just for that reason. Their desktop CPUs may not be quite up to the latest Intel, but they are certainly adequate and the price is better.

Re:Get A Clue, Intel (1)

XanC (644172) | about 10 months ago | (#43884125)

I thought people who said "expecially" and "excape" just had sloppy pronunciation. Is it possible that they actually think that's how those words are spelled??

Re:Get A Clue, Intel (1)

Anonymous Coward | about 10 months ago | (#43884251)

Fat fingers. S and X are right next to each other on a QWERTY keyboard.

Re:Get A Clue, Intel (-1)

Anonymous Coward | about 10 months ago | (#43884237)

Simple actually. This Haswell release is better than the previous Hasgood line.

But not faster than the Hasbro chipset, which is physically larger and specifically targeted toward African-American consumers.

The Hasbra design for the Indian market is scheduled for later this year. It will significantly increase benchmark scores by implementing "VISA" (Very Inconspicuous Score Adjustment), a method for the processor to seamlessly peek at its neighbor's page memory to avoid cache misses.

Lastly, no word on when the next major offering, codenamed Hascheezburger, will be released. That revision will render bunny jpegs 25% faster than any AMD offering, which explains why AMD's stock is being so heavily sold short.

HTH.

Haswell is NOT faster than Ivybridge (0)

Anonymous Coward | about 10 months ago | (#43884279)

Every tech site knows Haswell has completely failed to improve over Ivy Bridge, but they are hiding this fact in the 'small print'. The Ivy Bridge and Haswell chips that Intel calls 'equivalent' show a small advantage to Haswell, but look at the power consumption of the two parts. The Haswell part uses more power than the so-called equivalent Ivy Bridge when running the same code, and gets a smaller performance boost over Ivy Bridge than its increased power usage.

Modern Intel parts do NOT run at simple constant speeds under the hood. Intel's stated base and turbo clocks are a gross simplification, and do not compare generation to generation. In other words, the Ivy Bridge and Haswell parts Intel states are equivalent are NOT equivalent. The Haswell part does more work at default by clocking higher on average (hence its much higher power usage under load).

At similar power usage, Haswell seems to be somewhat slower than Ivy Bridge, proving another massive Intel failure in its FinFET project. Intel had attempted to redesign its transistor technology after the massive disappointment of Ivy Bridge FinFET performance. The truth is that FinFET concepts are under-performing horribly (for everyone) versus the early hype for this new approach to the transistor. At the same time, the vastly simpler/cheaper FD-SOI technologies are showing incredible potential at current process sizes.

FinFET has given Intel 'knee' improvements where power consumption just doesn't matter that much, namely at the low end of mains-powered desktop use (or notebooks with whopping great batteries and limited usage life per charge). On the other hand, high-performance mains-powered parts have gotten worse, and ultra-low-power mobile parts have barely improved: Intel's worst nightmare.

AMD's Steamroller cores are due to catch up with Intel's cores very soon now, and AMD is going to massively out-pace Intel on RAM bandwidth (by going to GDDR5 and then a 256-bit memory interface as with the PS4). AMD is also converting to a fully unified memory architecture (already present on the PS4 APU from AMD) by the end of 2014, and will have a mostly unified part (Kaveri) at the end of 2013.

Even Intel is switching to new memory technologies with its next-generation CPU, making Haswell an expensive dead end for ill-informed suckers. Intel is stuck between a rock and a hard place with the coming focus on integrated graphics for both games and high-performance FPU calculations. While Intel can and does improve its own dreadful integrated graphics, it always lags massively behind AMD and Nvidia, while producing solutions that are vastly more expensive than either of its competitors'.

The mobile Haswell with embedded L4 cache for the GPU is a great example of this. A mega-expensive solution from Intel with a fraction of the games performance of an equivalent solution from either Nvidia or AMD/ATI. What notebook manufacturer would be mad enough to use this part? All the expense, and none of the performance, of a mid-range gaming notebook using GPUs from AMD or Nvidia.

Brief summary (1)

JDG1980 (2438906) | about 10 months ago | (#43884447)

TL;DR: Haswell is OK on the desktop, but nothing special; roughly 5%-10% better than Ivy Bridge. If you're on Sandy Bridge or better already, it's probably not worth upgrading. This architecture was designed for laptops first and foremost. Low power consumption/TDP on mobile parts, and a better GPU, are the big selling points. Apple will get much better integrated graphics so they don't need an Nvidia chip for their top rMBP, but they'll pay through the nose for it.

This is what all the rumors and leaks over the past couple months said and it's now officially confirmed.

Re:Brief summary (0)

Anonymous Coward | about 10 months ago | (#43884563)

TL;DR: Haswell is OK on the desktop, but nothing special; roughly 5%-10% better than Ivy Bridge. If you're on Sandy Bridge or better already, it's probably not worth upgrading. This architecture was designed for laptops first and foremost. Low power consumption/TDP on mobile parts, and a better GPU, are the big selling points. Apple will get much better integrated graphics so they don't need an Nvidia chip for their top rMBP, but they'll pay through the nose for it.

This is what all the rumors and leaks over the past couple months said and it's now officially confirmed.

You mean "the consumers" will pay through the nose for it. ^_^

Need more cores, 8, 10, 12... (0)

Anonymous Coward | about 10 months ago | (#43884609)

Why waste the silicon on a GPU that is just mediocre? Yes, you can replace a discrete video card with it, but that does not need to be 1/10th as good as this.
