
Intel Removes "Free" Overclocking From Standard Haswell CPUs

timothy posted about a year ago | from the first-one's-no-longer-free dept.

Upgrades 339

crookedvulture writes "With its Sandy Bridge and Ivy Bridge processors, Intel allowed standard Core i5 and i7 CPUs to be overclocked by up to 400MHz using Turbo multipliers. Reaching for higher speeds required pricier K-series chips, but everyone got access to a little "free" clock headroom. Haswell isn't quite so accommodating. Intel has disabled limited multiplier control for non-K CPUs, effectively limiting overclocking to the Core i7-4770K and i5-4670K. Those chips cost $20-30 more than their standard counterparts, and surprisingly, they're missing a few features. The K-series parts lack support for transactional memory extensions and VT-d device virtualization included with standard Haswell CPUs. PC enthusiasts now have to choose between overclocking and support for certain features even when purchasing premium Intel processors. AMD also has overclocking-friendly K-series parts, but it offers more models at lower prices, and it doesn't remove features available on standard CPUs."
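For context, the "multiplier overclocking" the summary describes is simple arithmetic: the core clock is the base clock (BCLK, 100MHz on these platforms) times a multiplier, and the pre-Haswell "free" headroom came from letting non-K chips raise their Turbo multiplier by up to four bins. A minimal sketch (the 38x multiplier is illustrative, not the spec of any particular SKU):

```python
BCLK_MHZ = 100  # Sandy/Ivy/Haswell-era base clock

def core_clock_mhz(multiplier: int) -> int:
    """Core frequency is simply the base clock times the multiplier."""
    return BCLK_MHZ * multiplier

# A non-K chip with a hypothetical 38x Turbo multiplier could be raised
# by up to 4 bins -- the "free" 400MHz of headroom from the summary.
stock = core_clock_mhz(38)      # 3800 MHz
limit = core_clock_mhz(38 + 4)  # 4200 MHz
print(limit - stock)            # 400 MHz of headroom
```

This is why the feature is called "limited multiplier control": only the multiplier moves, and only by a few bins, while BCLK stays fixed.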

Sorry! There are no comments related to the filter you selected.

AMD plant much? (1, Funny)

Anonymous Coward | about a year ago | (#43998601)

Clear who pays your salary.

Nice biased wording there (5, Insightful)

KZigurs (638781) | about a year ago | (#43998605)

AMD also has overclocking-friendly K-series parts, but it offers more models at lower prices, and it doesn't remove features available on standard CPUs.

It is also significantly slower buck for buck in real life workloads.

Re:Nice biased wording there (5, Insightful)

Squiddie (1942230) | about a year ago | (#43998715)

I try to practice the good enough philosophy, and AMD is good enough. I don't get the whole Intel/AMD fanboyism. I certainly would feel cheated if I just had to have Intel, though.

Re:Nice biased wording there (0)

Anonymous Coward | about a year ago | (#43999351)

I practice the time is valuable philosophy. I don't want to wait on my computer any longer than absolutely necessary.

Re:Nice biased wording there (4, Insightful)

fredprado (2569351) | about a year ago | (#43999479)

I am quite sure the extra milliseconds on the operations you have to wait on your computer will be very significant for you.

Re:Nice biased wording there (4, Insightful)

localman57 (1340533) | about a year ago | (#43999821)

I practice the time is valuable philosophy. I don't want to wait on my computer any longer than absolutely necessary.

People who really think their time is valuable don't overclock. It's a hobby that tries to squeeze the most out of a given $ of hardware. But after you factor in the amount of time you spend messing around with the thing to try to eke out that additional performance, and add in the lost work time caused by unexpected crashes and instability, you're better off just buying the most expensive hardware you can, and replacing it when something better comes along.

That said, the people who do that need to be grateful to the overclocking crowd. There need to be bleeding-edge people finding out what works and what doesn't, such as the great work they've done with cooling technology. The best of what the overclockers are doing today turns into tomorrow's high-end mainstream.

Re:Nice biased wording there (0, Troll)

Anonymous Coward | about a year ago | (#43999541)

I try to practice the good enough philosophy, and AMD is good enough.

According to OP, if you take two equally-priced CPUs, one Intel and one AMD, the AMD will be significantly slower. Or put another way, take two CPUs with equivalent performance, one Intel and one AMD, and the AMD will cost more.

Is this incorrect, or do you just have a different definition of "good enough" from the rest of us?

Re:Nice biased wording there (2, Insightful)

Lumpy (12016) | about a year ago | (#43999931)

Mostly a bunch of whiny babies that actually do not do anything with their computers.

Real computer users want cores, lots of cores...

Re:Nice biased wording there (0)

Anonymous Coward | about a year ago | (#43998799)

Do AMD processors have support for transactional memory extensions and VT-d device virtualization?

Re:Nice biased wording there (4, Informative)

apexdawn (915478) | about a year ago | (#43998863)

They do have VT-d, but I believe transactional memory is Haswell-only for the moment. I have read nothing on whether AMD will implement such extensions (I could be wrong on this).

-Reed

Re:Nice biased wording there (0)

Anonymous Coward | about a year ago | (#43998921)

Yeah, but if you want ECC / VT-d (and you might as well), then it is AMD or Xeon E3.

I would never buy a K series on principle. (Maybe I would if there was a way to ungimp it).

Re:Nice biased wording there (0)

Anonymous Coward | about a year ago | (#43999017)

AMD also has overclocking-friendly K-series parts, but it offers more models at lower prices, and it doesn't remove features available on standard CPUs.

It is also significantly slower buck for buck in real life workloads.

By real life workloads, don't you mean synthetic benchmarks? In real life you probably couldn't tell or care what's under the hood, within reason. So, if you want to save some money, buy AMD. But if it makes you feel superior, buy Intel. It's your money.

Re:Nice biased wording there (0)

Anonymous Coward | about a year ago | (#43999377)

I compress video from ripped BDs and other sources; yes, I will be able to tell the difference when the processing takes hours longer. I would imagine that anyone who does CAD rendering or other similarly processor-intensive things would also notice a difference. I wonder what the speed difference would be for long-running code compiles? Yeah, I use machines with SSDs for those, so IO isn't an issue.

Sounds like perhaps your real life isn't exactly the same as everyone else's huh?

Re:Nice biased wording there (1, Insightful)

Lumpy (12016) | about a year ago | (#43999985)

"Yeah, I use machines with SSD for those so IO isn't an issue."

Yes it is... an SSD is an absolute DOG for extended writes; if you are ripping to an SSD you are being brain dead.

Re:Nice biased wording there (5, Informative)

Rockoon (1252108) | about a year ago | (#43999025)

It is also significantly slower buck for buck in real life workloads.

Buck for buck? Are you on crack?

AMD wins the price/performance comparison. Intel wins the peak performance comparison.

Looks to me like you are practicing the big lie for your masters at Intel.
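The "buck for buck" dispute above is just a ratio: divide any benchmark score by the street price and compare. A minimal sketch (the chip names, scores, and prices below are placeholders, not real benchmark results for either vendor):

```python
def perf_per_dollar(score: float, price_usd: float) -> float:
    """Higher is better: benchmark points bought per dollar spent."""
    return score / price_usd

# Hypothetical entries: (name, benchmark score, price in USD).
cpus = [
    ("chip_a", 9000, 330),  # faster but pricier
    ("chip_b", 7000, 190),  # slower but cheaper
]

# The price/performance "winner" and the peak-performance "winner"
# can easily be different chips, which is the whole argument here.
best_value = max(cpus, key=lambda c: perf_per_dollar(c[1], c[2]))
best_peak = max(cpus, key=lambda c: c[1])
print(best_value[0])  # chip_b
print(best_peak[0])   # chip_a
```

With these made-up numbers, the slower chip wins on value while the faster one wins on peak, matching the "AMD wins price/performance, Intel wins peak performance" framing.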

Re:Nice biased wording there (2, Insightful)

Anonymous Coward | about a year ago | (#43999703)

Intel also wins watts/performance.

Re:Nice biased wording there (1)

Anonymous Coward | about a year ago | (#43999105)

Significantly?

No way there mister intel shillboy...

The word you were looking for was MARGINALLY if you're really paying attention.

In the real world AMD and Intel are pretty much the same speed for speed. However, for the $ you spend at Intel, you'd get a little more right off the top from AMD. That price premium on Intel is high.

And also you have to pay for intels giant marketing campaign that pays shills like you and tv commercials and stickers and logos all over fuck...
Guess who pays for all that shit you didn't want?

That's right. Intel customers.

I'll take AMD thanks. More of my dollar went into the chip not the marketing. Way better return.

Re:Nice biased wording there (2)

Mashiki (184564) | about a year ago | (#43999791)

It is also significantly slower buck for buck in real life workloads.

Yeah... well, no [cpubenchmark.net]. You might want to look up the cost per core for AMD vs. Intel; then you'll quickly see AMD tromps all over it. And really, with the Vishera cores you're seeing a negligible loss in real-world performance. The only place where Intel beats AMD in cost-per-core is with the celery (Celeron) line.

AMD offers those things (2, Insightful)

fustakrakich (1673220) | about a year ago | (#43998609)

That's because they are not number one. Like Avis, they have to try harder.

Sales Pitch (1)

Synerg1y (2169962) | about a year ago | (#43998653)

Obvious sales pitch is obvious:

AMD also has overclocking-friendly K-series parts, but it offers more models at lower prices, and it doesn't remove features available on standard CPUs.

Feature #1, TSX: http://en.wikipedia.org/wiki/Transactional_Synchronization_Extensions [wikipedia.org] I'd imagine nobody codes for this.

Feature #2 : http://software.intel.com/en-us/articles/intel-virtualization-technology-for-directed-io-vt-d-enhancing-intel-platforms-for-efficient-virtualization-of-io-devices [intel.com]

It can still do virtualization just fine: http://forums.anandtech.com/archive/index.php/t-2133898.html [anandtech.com]

Not an Intel fanboy or anything, but they're not as arrogant as people are making them sound.

Re:Sales Pitch (1)

X0563511 (793323) | about a year ago | (#43998763)

I'd imagine nobody codes for processor features that are limited to a particular brand or model lineup...

Re:Sales Pitch (1)

Zan Lynx (87672) | about a year ago | (#43999453)

From what I've read in the last few months, the Linux kernel and glibc will both be adding transactional lock support. The performance benefits are pretty nice even when limited to backwards compatibility with existing lock methods.

Also, libraries like Intel's (of course) TBB will add support.

But all of that will be done with feature detection and fall back to using existing code.

It's like saying that nobody codes for MMX, SSE, Altivec or 3DNow. Or that nobody uses a particular Nvidia OpenGL extension only available on the newest cards. Yes, if it gains that extra 15% speed boost they will code for it.

Re:Sales Pitch (0)

Anonymous Coward | about a year ago | (#43998797)

Yeah, honestly, 99% of people who do heavy-duty virtualization will be using Xeons where they will still have these features.

The VT-d extensions are not very popular to begin with and are even more rarely used on the desktop CPU lineup.

Re:Sales Pitch (1)

Synerg1y (2169962) | about a year ago | (#43999013)

My main concern was whether it can run VMWare Workstation acceptably, and it can. Any larger VM scenarios instantly create a disk IO bottleneck on any desktop PC.

Which brings me back to my OP: Intel removed important-sounding features that are actually useless, and correctly so. They should be commended for taking the initiative; instead, everything I've read puts it in a negative, anti-consumer light.

Bringing me to conclude, if you don't know wtf you're talking about, stop posting news stories about it.

Re:Sales Pitch (-1)

Anonymous Coward | about a year ago | (#43999591)

Sir, I am appalled at your statements.

This is Intel giving you a big eff-you, and you're actually naive enough to applaud them for it.
VT features are useful for anyone who uses virtual machines a lot, so they are important to many people.
Overclocking features are important to gamers, or anyone who requires more performance.
I overclock all the time. It's very useful and makes a huge difference.
Intel is trying to punish people that do both overclocking and VT, and force them to get a crappier deal.
There is no reason for this besides marketing.
This is an obvious move by Intel to screw customers for profit, not for any engineering reason.

-HasHie @ TrYPNET.net

Re:Sales Pitch (2, Informative)

Anonymous Coward | about a year ago | (#43999717)

There's a big difference between VT and VT-d. Intel is only disabling VT-d (aka Directed IO) in the processors.

It is an I/O passthrough to a virtual machine (allowing a virtual machine to directly access the I/O bus instead of passing through the hypervisor). Most people won't use anything like this, and it's primarily found in enterprise-class bare-metal hypervisors like VMware ESXi, so it honestly doesn't have any impact on workstations running VMware Workstation in 99.99% of situations.

From Intel:

"VT-d" stands for "Intel Virtualization Technology for Directed I/O". The relationship between VT and VT-d is that the former is an "umbrella" term referring to all Intel virtualization technologies and the latter is a particular solution within a suite of solutions under this umbrella.

The overall concept behind VT-d is hardware support for isolating and restricting device accesses to the owner of the partition managing the device.

Re:Sales Pitch (0)

Anonymous Coward | about a year ago | (#43999735)

Please tell us when you used VT-d on your desktop PC last and what you used it for.

Re:Sales Pitch (3, Informative)

armanox (826486) | about a year ago | (#43999841)

Notice that VT-d is disabled, not VT. VT-d allows a hardware device (such as a video card) to be passed directly from the hypervisor to a virtual machine. This is only used in Hyper-V, Xen, and (I think) VMware ESX, none of which are desktop products. I use VMware Workstation and VirtualBox quite often (although I'm warming up to KVM) on both AMD and Intel, with no ill effects from either side. Please be informed about what you're saying Intel is screwing us on, and you'll see that 90% of the people that use these features aren't even affected.
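On Linux you can check whether VT-d (or AMD-Vi) is actually active before worrying about whether a chip has it: the kernel populates IOMMU groups in sysfs only when DMA remapping is enabled. A quick sketch (the sysfs path is standard, but an "inactive" result can also just mean the BIOS option is off):

```shell
# Populated /sys/kernel/iommu_groups means the kernel enabled an IOMMU
# (Intel VT-d or AMD-Vi), which is what device passthrough needs.
if [ -d /sys/kernel/iommu_groups ] && \
   [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
    iommu_state="active"
else
    iommu_state="inactive"
fi
echo "IOMMU: $iommu_state"
```

If the groups are empty on a CPU that advertises VT-d, the usual culprits are a disabled BIOS/UEFI setting or a missing `intel_iommu=on` kernel parameter.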

Re:Sales Pitch (1)

twistedcubic (577194) | about a year ago | (#43999727)


My main concern was whether it can run VMWare Workstation acceptably, and it can. Any larger VM scenarios instantly create a disk IO bottleneck on any desktop PC.

Oh c'mon. Just about any desktop with sufficient RAM and a modern processor can run several virtual machines with boring 7200RPM drives. My students don't complain, even in the hour just before an assignment is due when everyone is using them.

Re:Sales Pitch (0)

Anonymous Coward | about a year ago | (#44000019)

Win8 desktop will default to installing Windows 8 on Hyper-V if all of the required virtualization instructions exist. Anyone installing Win8 with hardware that supports these instructions will be using these instructions.

Re:Sales Pitch (4, Informative)

TopSpin (753) | about a year ago | (#43999107)

I'd imagine nobody codes for this. [TSX]

That is going to be an important feature when programmers eventually leverage it. Hardware assisted optimistic locking can make concurrency easier, safer and more efficient as the CPU takes care of coherency problems usually left to the programmer and CAS instructions. Imagine being able to give each of thousands or millions of actors in a simulation their own independent execution context (instruction pointer, stack, etc.,) all safely sharing state and interacting with each other using simple, bug free logic, as opposed to explicit and error prone locking and synchronization. This has been done with software transactional memory but it frequently fails to scale due to lock contention. Hardware based TM can prevent that contention by avoiding lock writes.

It is extremely cool that Intel is implementing this on x86.
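The optimistic pattern described above can be sketched in software: do the work speculatively, then commit only if nothing changed underneath you, retrying otherwise. Hardware TM does this with cache-coherency tracking rather than a version counter; this toy version-validation loop is only an illustration of the idea, not TSX itself:

```python
import threading

class TVar:
    """A transactional variable: a value plus a version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self.commit_lock = threading.Lock()

def atomically(tvar, fn):
    """Optimistically apply fn(old) -> new, retrying on conflict."""
    while True:
        seen_version = tvar.version
        new_value = fn(tvar.value)   # speculative work, no lock held
        with tvar.commit_lock:       # short commit section only
            if tvar.version == seen_version:  # nobody interfered
                tvar.value = new_value
                tvar.version += 1
                return new_value
        # Conflict: another commit won the race; retry with fresh state.

counter = TVar(0)
atomically(counter, lambda v: v + 1)
print(counter.value)  # 1
```

Note the lock is held only for the validate-and-commit step, not for `fn` itself; hardware TM goes further and avoids the lock write entirely, which is the contention the comment says software TM fails to scale past.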

Does MHz matter anymore? (3, Interesting)

Anonymous Coward | about a year ago | (#43998655)

Is there anyone besides a small group of people who benefit from higher clock rates? Most people I know would pick battery life over performance on mobile devices. Desktops have been "powerful enough" for at least the past 5 years. Is it just about bragging rights at this point?

Re:Does MHz matter anymore? (1)

Entropius (188861) | about a year ago | (#43998935)

I know it does for photo processing. I have a laptop with a dual-core i5 (something like 2.9GHz), and when I come home with a card full of RAW images it takes at least an hour to render them to JPEG in Lightroom. RawTherapee is also somewhat slow. Faster storage would help somewhat (I really need to find the right size Torx screwdriver so I can put my SSD in this laptop), but it is still rather CPU-bound.

Re:Does MHz matter anymore? (1)

crakbone (860662) | about a year ago | (#43998967)

Construction, architecture, engineering, and manufacturing companies all use 3D-heavy workstations. Faster memory, faster hard drives, and faster CPU clock speeds make a large difference in employee downtime when designing new products. So no, desktops are not powerful enough.

Re:Does MHz matter anymore? (5, Informative)

BLKMGK (34057) | about a year ago | (#43999057)

Add to the list below rendering and those of us who compress and process video - of which I am one. Faster clock speeds can save me HOURS of time, which is why I run an overclocked Sandy i7 at over 4GHz. It runs for hours at a time fully slammed with no problems.

So yeah, there are use cases for this outside of your sphere of knowledge.

Re:Does MHz matter anymore? (1)

slaker (53818) | about a year ago | (#43999079)

If you have a compute task that's not bound by I/O or RAM such as media transcoding, a faster CPU can be quite helpful. My time to reencode a BD dropped by almost 30% in a move from Lynnfield to Ivy Bridge versions of i7; that's not insignificant for a process that still takes hours. Putting aside my dubious need, we're not that far from consumer 4k video and the increased demands that will bring.

Re:Does MHz matter anymore? (1)

armanox (826486) | about a year ago | (#43999159)

Some remarks:

1 - If you're buying an i5 or i7, chances are you're using more than the average user (especially if you're going with an i7).
2 - The processors in question are desktop processors, not the mobile ones.

Re:Does MHz matter anymore? (1)

unixisc (2429386) | about a year ago | (#43999205)

That's what I wonder as well. For all CPU intensive workloads, wouldn't the extra cores do it? Also, if certain applications require faster cores, wouldn't it be better if they were multi-threaded more?

As for the engineering & video processing apps, seems to me like they could make use of something like the Itanium.

Re:Does MHz matter anymore? (1)

Anonymous Coward | about a year ago | (#43999423)

Most people I know don't overclock their mobile devices; they do, however, like higher clocks on desktops where, believe it or not, battery life is inconsequential. The question is, though, "powerful enough" for whom and for what workload? Difficult as it may seem to imagine, there actually are people out there whose day-to-day workloads are more power-intensive than surfing the web and reading e-mail.

I, as well as quite a few people I know, tend to do heavy audio processing on a daily basis (who'd have thought, musicians hang out with other musicians). Things like real-time wave shaping, synthesis, pitch bending, and really pretty much any kind of effects processing are rather costly in CPU cycles, especially when layering, and more so when layering "excessively" (which is common practice in electronic music for those fat, doofy bassy sounds we all know and love). The higher clock is what makes the difference between tens of minutes and a few minutes for post-processing the finished waveform.

A dedicated soundcard only takes you so far (and let's be honest here, soundcard tech has not advanced anywhere near as much as GPU tech), and while having more and more physical SMT threads provides the biggest performance gain, higher clocks takes you even further.

I do a fair deal of graphics processing, but that's mostly taken care of by beefier and beefier GPUs.

Now, I'm by no means the everyman, and by no means is my usual workload generally common. That's the difference between you and me: I make no insinuations to the contrary by saying things like "powerful enough", or suggesting that because I need or don't need something, nobody else does. There's a market for more power, and there are workloads where no amount of power is "enough power".

Beyond that, why fuss about it? You don't see yourself making use of higher clocks? That's great, don't buy the new, beefier processors. People who can make use of it will gladly pony up the cash for these monsters.

Re:Does MHz matter anymore? (1)

unixisc (2429386) | about a year ago | (#43999495)

But haven't multi-core architectures more than compensated for the levelling off of clock speeds?

Can't say I'm surprised (2, Interesting)

Anonymous Coward | about a year ago | (#43998659)

Now that AMD is no longer a threat to them, they can go back to their old tricks again.

Sounds like some quality MBA decision making! (1)

Anonymous Coward | about a year ago | (#43998671)

Let's give no one what they want!

That is dumb (1)

tapspace (2368622) | about a year ago | (#43998679)

Shouldn't the unlocked multiplier version be a primium product? This is annecessary step backwards. I think most people who are interested in a K-series would be more willing to pay a premium. Who in their right mind would EVER give up VT-d for an unlocked multiplier? Maybe they just want to kill the tradition once and for all.

Re:That is dumb (5, Informative)

tapspace (2368622) | about a year ago | (#43998689)

Guh. Premium, not primium! And annecessary = unnecessary. I suck.

Re:That is dumb (1)

Anonymous Coward | about a year ago | (#43998741)

+1, Insightful.

Re:That is dumb (3, Interesting)

armanox (826486) | about a year ago | (#43999171)

Well, I would, for one. Unless you're using Xen or Hyper-V, VT-d doesn't really benefit you.

Not really a big shock (3, Informative)

apexdawn (915478) | about a year ago | (#43998697)

Well, "free" clock headroom aside, Intel removing features from the K-series parts (VT-d, etc.) has been going on since Sandy Bridge, I believe. Basically, if you want the best of both worlds you will want to invest in an Extreme Edition processor. As a quick search on ARK will show, the 3770K does not have VT-d while the 3930K does.

-Reed

Re:Not really a big shock (1)

apexdawn (915478) | about a year ago | (#43998725)

I hit submit a bit too soon. I'm going to wager a guess that the Ivy Bridge-E -- if released -- will have the missing Haswell features on the 4770K. If not then getting the best of both might involve Xeon, if those chips allow for overclocking.

-Reed

Re:Not really a big shock (1)

Xenx (2211586) | about a year ago | (#43998945)

The fact that they removed OC from the current Xeons leads me to believe it won't be present in the next gen.

Re:Not really a big shock (1)

ericloewe (2129490) | about a year ago | (#43998983)

Nobody who buys a Xeon and needs it would ever overclock it. It's not worth the (minimal) risk increase.

Re:Not really a big shock (1)

BLKMGK (34057) | about a year ago | (#43999297)

Bullshit - I would! I have a Xeon now running an ESX server at home, and hell yes I would overclock it without a second thought. Good luck finding a Xeon board that supports both ESX and overclocking, though! There's nothing magical or scary about overclocking if you have half a clue and don't try to run right over the bleeding edge. I've been doing it since the 8088 days, when a damned crystal from RadioShack was required, and it's never been a problem. This is Intel screwing with the market, plain and simple.

Re:Not really a big shock (0)

Anonymous Coward | about a year ago | (#43999553)

You obviously missed the "and needs it" part of the post you responded to. No one but you is going to care if your home ESX box is a little faster or turns into slag.

This is why AMD can not die just think of what int (4, Insightful)

Joe_Dragon (2206452) | about a year ago | (#43998711)

This is why AMD cannot die. Just think of what Intel will do without AMD in the market.

Re:This is why AMD can not die just think of what (2)

nhat11 (1608159) | about a year ago | (#43998861)

They will have to deal with the ARM market then?

Re:This is why AMD can not die just think of what (1)

Joe_Dragon (2206452) | about a year ago | (#43998915)

Can that run today's x86 software?

Re:This is why AMD can not die just think of what (0)

Anonymous Coward | about a year ago | (#43999041)

Just recompile for the other architecture. (What do you mean you don't have the source code?)

Re:This is why AMD can not die just think of what (0)

Anonymous Coward | about a year ago | (#43999261)

Don't let them take our ARMS.

Re:This is why AMD can not die just think of what (0)

Lunix Nutcase (1092239) | about a year ago | (#43999491)

AMD hasn't been real competition to Intel for quite some years now. Though the fanboi butthurt would be amusing.

Meh. (5, Insightful)

nitzmahone (164842) | about a year ago | (#43998727)

I've never found overclocking to be worth the trouble. Anytime there's a stability issue with an overclocked PC, there's always that nagging doubt that all my troubleshooting is for naught, because it was a fluke bit fail due to the overclocking. Life's too short - skip the anxiety and run your processor at its rated speed.

Re:Meh. (2)

NewWorldDan (899800) | about a year ago | (#43998847)

My thoughts as well. I kind of wonder how many people out there are still overclocking. It's so rare that anything I do is CPU-bound anymore. Maybe I'm getting old because I just want things to work.

Re:Meh. (2)

Xenx (2211586) | about a year ago | (#43999035)

The biggest reasons would be encoding/decoding, gaming, and enthusiasm. Each has their reasons, and at least two of them have actual use for higher clock speeds. Why pay $1000 for a CPU when I can pay $250 and overclock? The only time I've ever had a problem was physical, and my own fault. Sometimes you have to settle for a little less clock speed, but you can test and maintain relative stability.

Re:Meh. (2)

s.petry (762400) | about a year ago | (#43999843)

<shrug> I never pay that much for a CPU, since I have had exceptional experiences with AMD CPUs. In my experience, they have always outperformed Intel's processors, and generally cost half as much. I could overclock them if I wanted, and back in the Athlon 800-ish days I did.

Re:Meh. (2)

lgw (121541) | about a year ago | (#43999101)

I do video transcoding that doesn't know how to use the GPU yet, so I overclock at home on my server. My gaming box has "everything overclocked" just because it was a fun project.

Re:Meh. (-1)

Anonymous Coward | about a year ago | (#43999113)

Overclocking is built into the soul of all nerdish adolescent males, because it's a dick-measuring contest that's easy for nerds to win. It has actually become more of a problem since CPUs stopped getting faster, because the allure of being able to say "my PC is 10% faster than yours" is very great -- especially when the kids know that there won't be a chip with a 30% higher clock rate available in a few months.

We've all been there. Just let the kids have their fun while they're young. They'll grow out of it eventually. Trying to convince them not to do it will only make them want to do it more, so maybe we old farts should start feigning interest in ricing our PCs, and say it's to improve digestion, or to help us have relations with their mothers or something.

Re:Meh. (1)

Jonah Hex (651948) | about a year ago | (#43999055)

Same here. I used to spend time squeezing every bit of speed out of my system, including overclocking; now I'm more worried about having a decent speed, multiple cores, and support for VMs for development use. It is this last one that concerns me the most: if Intel is going to make me pick and choose between processors that support extended functions for VMs, I'm not going to be very happy personally. Of course, at work the companies can afford better than I can. ;) - HEX

Re:Meh. (1)

bemymonkey (1244086) | about a year ago | (#43999063)

So troubleshoot at stock speeds, then switch back to your overclock when you've solved the problem. That also has the positive effect of actually showing you whether it's your overclock that's the issue.

Re:Meh. (1)

BLKMGK (34057) | about a year ago | (#43999183)

Then you aren't doing it right. If you set up an overclocked machine correctly and don't try to push it right to the bleeding edge, you get plenty of bang for the buck. My current Sandy machine is pushed to 4.5GHz and I save a great deal of time processing video as a result - it's an i7 3770K. It processes video for hours on end with no issues and reboots only for updates occasionally. Cooling is your biggest issue; water works best, and don't push a ton of voltage through it. Start with the basics and work up to what the CPU is capable of; it'll be stable.


makes soldered in cpus now a really bad a idea as (0)

Joe_Dragon (2206452) | about a year ago | (#43998765)

This makes soldered-in CPUs a really bad idea: if a MB maker puts a faster CPU on a board, you may not be able to get that MB with a CPU that has transactional memory extensions and VT-d device virtualization. Or OEMs will have to stock more MBs than they really want.

Re:makes soldered in cpus now a really bad a idea (1)

Lunix Nutcase (1092239) | about a year ago | (#43999513)

Since when is transactional memory a mass consumer feature? Next to no one will notice or care.

Cripple Hardware? (1)

gbkersey (649921) | about a year ago | (#43998791)

The new Celeron?

why would you OC enterprise CPU's? (1)

alen (225700) | about a year ago | (#43998815)

Those are enterprise features. Why would you OC a chip in something that brings you revenue and risk a problem?

Re:why would you OC enterprise CPU's? (1)

HellKnite (266374) | about a year ago | (#43998907)

I think this is precisely the point. This is a business decision to prevent people from buying cheap unlocked desktop CPUs with VT-d, overclocking them, and, say, using them to run their dev/test QA VM environments - hell, even production environments if you're really pinching pennies. If you want to get really "out there", it's possible that there was pressure from hypervisor vendors for Intel to lock this down so that they didn't have to support the random failures that can occur with overclocking.

Intel (backed potentially by hypervisor vendors) is basically saying: you can buy a desktop CPU and run VMs on it, but there will be no overclocking that stuff for free performance and headache-causing problems.

Re:why would you OC enterprise CPU's? (0)

Anonymous Coward | about a year ago | (#43999343)

Because it's not overclocking, but removing deliberate underclocking.

Because as a professional, you can tell when their manufacturing process is good enough that ALL their sold CPUs manage to get to the highest speed level, and they just down-labeled it anyway to not wreck their own market.

Because in that case it is exactly as safe as what Intel would otherwise do internally.
And because you can save loads of money that way, by preventing Intel from ripping you off.

That's why.

This shows what will happen in a world without AMD (2)

TheBlackMan (1458563) | about a year ago | (#43998821)

If Intel is ever allowed to become a monopoly again, it will produce extremely pricey and extremely limited processors. Everybody should love AMD, because it is the only thing stopping Intel from selling them shit wrapped in golden paper for thousands of $$.

Re:This shows what will happen in a world without (1)

gooner666 (2612117) | about a year ago | (#43998925)

Tell that to the Intel fanboys who did not buy their first PCs until the 2000s. Remember the good old days when the Pentium III 1000 was 800 bucks, until AMD came along and started spanking that ass? Let's hope they don't go away, as we don't need the evil empire to be the sole provider of chips again. Buy AMD!

Re:This shows what will happen in a world without (0)

Anonymous Coward | about a year ago | (#43999193)

I thought AMD was the first to 1GHz, and so they had started spanking that ass before the P3-1GHz came along. But the general gist of your post is true; AMD coming out of nowhere knocked the pricing shit that Intel pulled right down.

Re:This shows what will happen in a world without (2)

Holi (250190) | about a year ago | (#43999795)

AMD didn't come out of nowhere; they were making 8080 clones back in 1975.

Re:This shows what will happen in a world without (1)

Lunix Nutcase (1092239) | about a year ago | (#43999533)

You think AMD is any threat to Intel? They stopped having any real competitive pressure on Intel years ago.

Overclocking .. (1)

houbou (1097327) | about a year ago | (#43998877)

= short life span for your CPU.. so, I'm not too worried..

Re:Overclocking .. (1)

Anonymous Coward | about a year ago | (#43999097)

I have a 2003-era Athlon XP CPU still running at a 40% overclock. That CPU has more than 70,000 hours on it, going from my main PC to a NAS and then a seedbox, most of the time in the 50C range. I still don't know anyone who lost a CPU to anything other than extreme overvoltage.

Re:Overclocking .. (1)

armanox (826486) | about a year ago | (#43999987)

Lost an Athlon last year that was being used in a NAS, due to the cooling fan locking up (it was a second-gen Athlon, from before they had thermal shutdown....)

Well, you just killed it for me. (4, Insightful)

girlintraining (1395911) | about a year ago | (#43999021)

The K-series parts lack the support for transactional memory extensions and VT-d device virtualization

Yeah, well, fun fact... a lot of enthusiasts like myself use things like VMware, which depend on this kind of thing. Deleting those features from the unlocked line means I just won't buy them... one of the big drivers for overclocking is to run virtualization. You might think it's "just gamers" doing this, but a lot of us do network and system administration and deployment and like having the "lab in a box" offered by current processors. You take that away and you're going to find your bottom line hurting, possibly more than a little.

I don't know which of your marketing assclowns came up with this idea as a revenue-generating measure, but it's going to backfire in their face, and I hope when it does you fire their ass, apologize, and never try this again. You're only succeeding in driving us towards commodity hardware like AMD's offerings... All they need to capitalize on the market you've just shit on is to offer mainboards with multiple sockets for their CPUs, make the mainboards cheap and the core system very energy efficient... and not only will the enthusiasts ditch you, but so will the data centers...

You're opening a can of worms here. Bad plan, darlings.

Re:Well, you just killed it for me. (1)

BLKMGK (34057) | about a year ago | (#43999119)

This can of worms has been opened awhile, you've obviously not tried to build a K based machine running virtualization. See my post below...

Re:Well, you just killed it for me. (1)

girlintraining (1395911) | about a year ago | (#43999237)

This can of worms has been opened awhile, you've obviously not tried to build a K based machine running virtualization. See my post below...

Got one right now, actually; it's an i5-3570K. To the best of my knowledge, no features are disabled compared to other models based on this core. But VMware needs VT-d to function, and if they kill this feature off, it won't work. So, no, it hasn't been opened for "awhile", this is something that's started rolling out in the last year.

Re:Well, you just killed it for me. (1)

BLKMGK (34057) | about a year ago | (#43999471)

Go look up the spec sheets for Sandy CPUs. Or better yet Google 3570K and VT-d. Surprise! I found out the hard way myself when I built an ESX server and couldn't install, I found the feature greyed out in the BIOS. A quick Google on that model and I realized I'd been had too.

http://ark.intel.com/products/65520 [intel.com]

http://www.tomshardware.com/forum/356118-28-purchased-3570k-virtualization [tomshardware.com]

Re:Well, you just killed it for me. (1)

girlintraining (1395911) | about a year ago | (#43999907)

Go look up the spec sheets for Sandy CPUs. Or better yet Google 3570K and VT-d. Surprise!

Sorry, my bad. I confused VT-d with VT-x. Yes, you're correct -- it won't run an ESX server, but I use Workstation, so it's been fine for me. That sucks though -- I know a lot of people who build dedicated lab machines on a rack; I don't have the funds to lay out on something that complex, nor the space where I live right now, but I can see how that would screw you over... especially when VMWare's hardware requirements [vmware.com] white sheet doesn't specifically list it either. :(

This kind of CPU fragmentation is, I think, an attempt to create new markets where they can charge more, and it's frustrating because there's no technological reason for it. Where's government regulation when you really need it?

Re:Well, you just killed it for me. (2)

Lunix Nutcase (1092239) | about a year ago | (#43999655)

But vmware needs VT-d to function, and if they kill this feature off, it won't work.

Bullshit. Even ESX/ESXi can work just fine without VT-d. The only thing you lose is I/O pass-through. Cut out the hyperbole. The fact that you can explicitly disable VT-d in VMWare's settings disproves your ridiculous claims.

Re:Well, you just killed it for me. (-1)

Anonymous Coward | about a year ago | (#43999965)

Yeah, well, fun fact... a lot of enthusiasts like myself like things like VMWare, which depend on this kind of thing. Deleting those features from the unlocked line means I just won't buy them... one of the big drivers for overclocking is to run virtualization.

Sorry, but what crack are you smoking?

1) VT-d doesn't affect virtualisation performance – it affects PCI passthrough to virtualised hosts, something that only enterprise vendors are doing in general.
2) Virtualisation has absolutely 0 dependency on high clock speeds at all.
3) The two things you need for virtualisation are a metric fuck ton of RAM, and parallelisation. Given that AMD's "8 core" chips are actually 4-module chips with int/int SMT (while the i7s are 4-core chips with any op/any op SMT), that rather gives Intel another advantage here.

In what world are you living in which you think that you need a high clock speed, let alone overclocking, for running a bunch of VMs to be effective?
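For anyone tripped up by the VT-x vs. VT-d confusion in this thread, here's a minimal Linux sanity check you can run before buying into (or out of) a chip. This is a sketch, not an authoritative detector: the IOMMU-group heuristic for VT-d assumes a kernel booted with the IOMMU enabled (e.g. `intel_iommu=on`), so a `False` there can also just mean the kernel option is off.

```python
# Sanity check for the VT-x / VT-d distinction discussed above.
# VT-x (CPU virtualization) shows up as the "vmx" flag in /proc/cpuinfo;
# VT-d (directed I/O) is a chipset/IOMMU feature and does NOT appear there --
# on Linux it surfaces as populated IOMMU groups under /sys/kernel/iommu_groups.
import os


def has_vtx(cpuinfo_text: str) -> bool:
    """Return True if the 'vmx' (Intel VT-x) flag is present in the flags line."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return "vmx" in line.split(":", 1)[1].split()
    return False


def has_vtd(iommu_dir: str = "/sys/kernel/iommu_groups") -> bool:
    """Heuristic: VT-d is active if the kernel created any IOMMU groups."""
    try:
        return len(os.listdir(iommu_dir)) > 0
    except FileNotFoundError:
        return False


if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            print("VT-x:", has_vtx(f.read()), " VT-d:", has_vtd())
    except FileNotFoundError:
        print("Not Linux; /proc/cpuinfo unavailable")
```

On a 3570K this would typically report VT-x present but VT-d absent, which is exactly why Workstation-style virtualization works while ESXi device passthrough doesn't.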

Current K CPU also lose VT-d (3, Informative)

BLKMGK (34057) | about a year ago | (#43999087)

Current K-rated CPUs lose this and possibly some other features. I didn't pay attention to this and found out the hard way when I couldn't run an overclocked ESXi Sandy machine. Pissed is an understatement! There's no good reason to do this other than to screw with the marketplace.

I've switched to a Xeon CPU of Ivy heritage, and good luck finding a board for one of those that runs ESXi and can be overclocked. Nearly every machine I own is overclocked and has been for many years, and it pisses me off to get jerked around like this by Intel.

Re:Current K CPU also lose VT-d (2)

darkwing_bmf (178021) | about a year ago | (#43999245)

There's no good reason to do this other than to screw with the marketplace.

Maybe. Another possibility is that those features are heavily timing dependent and the OC chips caused more problems than they solved.

Re:Current K CPU also lose VT-d (1)

BLKMGK (34057) | about a year ago | (#43999531)

Current CPUs that aren't K-rated can be overclocked, though not to the same degree. I've never heard of issues from folks overclocking those and running them in virtual environments. Somehow I doubt that this is for our protection, but they certainly haven't said one way or the other. If I could overclock my damned Xeon, I'd sure do it.

This is just business (1)

Virtucon (127420) | about a year ago | (#43999303)

I actually think this makes sense from a business perspective, since the virtualization features would be targeted towards their Xeon line vs. the home PC market. As for overclocking, I do it moderately on both Intel and AMD systems, but this lock on Haswell reminds me of the same debates around Sandy Bridge and Ivy Bridge and... back to when they started locking the clocks on the Pentium IIs.

The advantages of overclocking aren't just about getting the most speed out of the hardware; they also let folks skip buying a K (or X) edition, which has a much higher price point than the non-K version, while still reaping *most* of the benefits of the higher-priced model. If Intel keeps the K prices competitive with the non-K editions then I don't see a big deal, but I don't think removing features is a good way to go. If you're charging me more for the K or X model, give me everything the lower-end chip has and more. I guess all of this means we'll still have folks hanging on to Ivy Bridge and Sandy Bridge processors for a bit longer? Maybe not.

Also, since people are chiming in on AMD today, I'm not sure that the new FX-9590 is all that great for the price either. The Vishera/Bulldozer line is 125W, vs. 220W TDP for the FX-9xxx series, on top of the price boost. You can easily get 4.8GHz all day long out of a Vishera FX-8350 (just run water cooling). I'm not sure what the cooling systems will need to be for the FX-9xxx, but it probably won't be as simple as an H100i.

inb4 new 'ee' chips (1)

Anonymous Coward | about a year ago | (#43999399)

give both the features and the unlocked multiplier... at a much higher price, of course.

Is it necessary these days? (5, Insightful)

TheSkepticalOptimist (898384) | about a year ago | (#43999449)

Yes, I remember the good ol' days when you could get a $100 CPU and make it work like an $800 one. I remember in particular the days of buying a cheap Celeron and having it perform like a much more expensive Pentium II or even P3.

And I also remember days of headaches with stability issues, overheating, and other stupid problems, all to squeeze a few extra FPS out of Doom.

Nobody overclocks anymore, and if they do, it's like getting a trophy for trolling a blog. It's completely unnecessary and doesn't really offer anything except a feel-good pat on your own back when you see your completely arbitrary and virtual benchmark numbers rise while you ruin your CPU.

What needs the extra performance these days? You need to tweet faster? Like on Facebook faster? Browse a website fractions of milliseconds faster?

Games used to drive overclocking, but GPUs are where game performance lies these days. Sure, maybe overclocking your CPU by 50% might offer 1% more FPS, but who the fuck really cares? Nobody with a life, that is.

Intel realizes that the enthusiast market for PC's has nose dived and its obviously cheaper to produce CPU's where you don't have to worry about the kind of performance tolerances that are required for overclocking.

And I don't think "enterprise" level developers are buying cheap computers and then overclocking to get better VM performance. I mean really? If you consider yourself an "enterprise" developer then get the "enterprise" to buy you a decent workstation or VM server. I don't think your "enterprise" wants you to spend days trying to optimize performance on your workstation, I'd fire anybody that wastes any amount of time in a BIOS.

I would say Intel should focus on offering one "enthusiast" level CPU that is completely unlocked for overclocking. I mean, if people want to burn out their CPUs repeatedly, it's more money from a market segment that is drying up, but I think in general Intel or any CPU company shouldn't have to worry about providing overclockable CPUs across their product line.

The bottom line is that benchmarks aside, if you ever looked at your Task Manager you'd probably realize that your CPU is idling at 1% usage 99% of the time, so you want to make the System Idle task run faster? I don't get it anymore.

Re:Is it necessary these days? (0)

Anonymous Coward | about a year ago | (#43999773)

I would say Intel should focus on offering one "enthusiast" level CPU that is completely unlocked for overclocking.
See also, Extreme Edition Processors.

they seem to be forgetting (0)

slashmydots (2189826) | about a year ago | (#43999801)

The Pentium G860 is fast enough to do basically anything short of intense gaming or video editing. It's faster overall than the AM3 Phenom X4 quad core chip. I think the G2120 ivy bridge even beats Phenom x6 Deneb chips. So oh no, my brand new Haswell i7 isn't overclockable. It's so darn slow I just have to overclock it. With any i7 from the Haswell series, your hard drive and RAM are the bottlenecks so if you're trying to speed yourself up by overclocking, "you're doing it wrong."

The real reason for this change is fab yields (1)

tlambert (566799) | about a year ago | (#43999867)

The real reason for this change is fab yields.

It's how they do all the other processors as well:

o manufacture
o test
o blow fuses as needed for failed tests
o bin the part as an xxxyyyzzz part

One of the reasons Apple machines tend to be more expensive is that they pay a premium for higher-performance "speed burst" relative to other laptop vendors, so the chips that rate out at supporting a higher speed-burst clocking go into the Apple bin.

Similarly, RAM chips get binned as well; those that bin out as supporting within-tolerance frequency and voltage following functions go into the Apple bin, the next best go into the Crucial bin, and the rest go into the "everybody else" bin.

By doing this, they can increase their effective fab yields per die, even if they fail to increase their yields of a particular high end chip.

This also allows re-binning: when you have a bunch of medium-end parts and get a buttload of orders for low-end parts, you don't go out and manufacture low-end parts based on the demand; instead you take a bunch of next-higher-up parts, blow their fuses to make them into lower-end parts, and re-bin them. Voila! Just-in-time "manufacturing".
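The bin/re-bin flow described above can be sketched as a toy model. Everything here is made up for illustration (the bin names, the clock thresholds, the inventory numbers); it just shows the mechanics of classifying tested dies and fusing better parts down to cover demand in a lower bin.

```python
# Toy model of binning and re-binning (hypothetical bins and thresholds).
BINS = ["high", "mid", "low"]  # ordered best -> worst


def bin_part(max_stable_ghz: float) -> str:
    """Classify a tested die by the highest clock it passed at."""
    if max_stable_ghz >= 3.9:
        return "high"
    if max_stable_ghz >= 3.5:
        return "mid"
    return "low"


def fuse_down(inventory: dict, bin_name: str, demand: int) -> None:
    """Meet demand for a lower bin by fusing down parts from better bins.

    Walks up through the better bins, re-labeling ("blowing fuses on")
    parts until the shortfall in the requested bin is covered or the
    better bins are exhausted. Mutates `inventory` in place.
    """
    shortfall = demand - inventory.get(bin_name, 0)
    idx = BINS.index(bin_name)
    # Prefer fusing down the nearest better bin first.
    for better in reversed(BINS[:idx]):
        if shortfall <= 0:
            break
        moved = min(shortfall, inventory.get(better, 0))
        inventory[better] -= moved
        inventory[bin_name] = inventory.get(bin_name, 0) + moved
        shortfall -= moved
```

With `{"high": 10, "mid": 5, "low": 2}` and an order for 8 low-end parts, the model drains the mid bin first and then takes one high-end part, which is the "admitting lower-than-expected yields" dynamic the parent describes: the classification change tells you something about what the fab is actually producing.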

By making an explicit change to a classification, they are admitting to lower than expected successful fab yields for the areas of the CPU they are fusing off.

What we should actually be asking ourselves is what this means for the future of transactional memory if they are unable to get their yields up to meet demand, which may be high in certain government and scientific applications. If demand exceeds capacity, expect either multiple fab retools, or the feature being dropped at some point in the future.

Haswell is a big disappointment (1)

edxwelch (600979) | about a year ago | (#43999887)

At least the desktop version.
It runs hotter than Ivy Bridge, has worse overclocking, and draws 10% more power for only a 13% speed increase. Every previous Intel generation delivered a speed increase with at least the same, or lower, power consumption.
