
Google Demands Higher Chip Temps From Intel

CmdrTaco posted more than 5 years ago | from the because-they-can dept.


JagsLive writes "When purchasing server processors directly from Intel, Google has insisted on a guarantee that the chips can operate at temperatures five degrees centigrade higher than their standard qualification, according to a former Google employee. This allowed the search giant to maintain higher temperatures within its data centers, the ex-employee says, and save millions of dollars each year in cooling costs."


287 comments

Is this possible? (3, Insightful)

CRCulver (715279) | more than 5 years ago | (#25382755)

Wouldn't Intel run into physical limitations that simply don't allow chips to run at that low a temperature? I'm surprised Google isn't considering moving some of its data centres to Arctic locations where you get cool temperatures year-round. We've seen reports of appealing places like that on Slashdot before. (Of course, that would just be a short-term fix before we move the Earth to a farther orbit around the sun to avoid suffocating in our own waste heat like the Puppeteers in Niven's Ringworld [amazon.com] ).

No one mentions a more obvious approach. (1, Interesting)

Anonymous Coward | more than 5 years ago | (#25382811)

Under-clocking them a bit can't be that hard to do.

Re:No one mentions a more obvious approach. (3, Insightful)

SoupIsGoodFood_42 (521389) | more than 5 years ago | (#25383159)

That means you'd need to make up for the lack of processing power with additional CPUs, which would mean more CPUs to cool.

Re:No one mentions a more obvious approach. (1)

Z00L00K (682162) | more than 5 years ago | (#25383951)

Maybe it's time to think more of the performance/power consumption ratio when designing servers.

More CPUs may actually not be that bad, because you can spread the dissipated power over a larger area. However, you will also have larger computers.

One way around it could be to locate datacenters at locations with natural cooling available, like rivers and larger lakes.

Today many cooling systems are air-cooled, but air can be a lot warmer and is not able to absorb as much heat as water.

Re:Is this possible? (5, Insightful)

Thelasko (1196535) | more than 5 years ago | (#25382995)

Wouldn't Intel run into physical limitations

Google is most likely getting the best chips out of Intel's standard production. It's akin to sorting out the best bananas at the grocery store. This sort of privilege happens when you buy enough products from a supplier.

If they were demanding much more than 5 degrees then I would say they are getting custom made chips, but I don't think that's the case.

Re:Is this possible? (3, Interesting)

Dr. Manhattan (29720) | more than 5 years ago | (#25383873)

This happens with resistors, too. If you want one within 5% of the nominal ohmage, you pay more. If you want one within 10%, you pay less, but you'll find that they're all either about 10% low or 10% high, with a 'notch' in the center of the distribution. Same production process used, but they skim off the 'good ones' and charge more for them.

Re:Is this possible? (4, Interesting)

Thelasko (1196535) | more than 5 years ago | (#25384159)

Legend has it [arstechnica.com] that the Celeron processor began its life as a way for Intel to make money off of the Pentiums that didn't pass quality control. If they sell the low performing processors at a discount, why shouldn't they sell the over performing ones at a premium?
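The binning story in the two comments above can be illustrated with a toy simulation (all thresholds and distribution parameters are invented for the example): every die comes off the same production line with a naturally varying maximum stable temperature, and sorting alone creates the premium and discount parts.

```python
import random

random.seed(42)

def bin_chips(n=10_000, mean_tmax=70.0, sd=3.0, premium_cut=75.0, reject_cut=65.0):
    """Toy model of temperature binning: every die comes off the same line,
    but natural variation lets the best ones be sold at a premium."""
    premium, standard, discount = [], [], []
    for _ in range(n):
        tmax = random.gauss(mean_tmax, sd)  # max stable temp for this die
        if tmax >= premium_cut:
            premium.append(tmax)            # e.g. chips warrantied 5 C hotter
        elif tmax >= reject_cut:
            standard.append(tmax)
        else:
            discount.append(tmax)           # e.g. Celeron-style down-binned parts
    return premium, standard, discount

premium, standard, discount = bin_chips()
```

In this model the premium bin is simply the upper tail of the same distribution everyone else buys from, which is exactly the cherry-picking point being made.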

Re:Is this possible? (4, Interesting)

onitzuka (1303967) | more than 5 years ago | (#25383013)

I'm surprised Google isn't considering moving some of its data centres to Arctic locations where you get cool temperatures year-round.

There is no reason to be surprised. It is cheaper to not move the data center to where it is colder and just make all upgraded hardware use the new chips. Google's budget calls for hardware upgrades already. Upgrading to CPUs with higher temp tolerances would mean they pay the same $X-thousand for the box they would anyway and simply turn the thermostat up.

A net savings.

Re:Is this possible? (4, Informative)

Ngarrang (1023425) | more than 5 years ago | (#25383031)

I'm surprised Google isn't considering moving some of its data centres to Arctic locations where you get cool temperatures year-round.

Don't kid yourself. They probably have. But then, who would you get to work at what would probably be a very remote location?

Additionally, such remote locations may suffer from not enough bandwidth and/or electricity.

Re:Is this possible? (2, Interesting)

shaka999 (335100) | more than 5 years ago | (#25383299)

You know, they wouldn't have to go that far. I'm in Colorado; if they put a data center in one of the higher mountain towns, I imagine they could significantly reduce their costs.

I guess the other thing they could look at is using a heat exchanger and putting that excess heat to use for hot water heating or something.

Re:Is this possible? (1, Insightful)

Anonymous Coward | more than 5 years ago | (#25383577)

I'm from Colorado, too! Unfortunately, thinner air also makes cooling harder...

Re:Is this possible? (5, Interesting)

Zocalo (252965) | more than 5 years ago | (#25383049)

Two words: "Free Cooling"

The greater the difference between your data centre's output air temperature and whatever passive external cooling system you are pumping it through, the more heat you can dump at zero cost. That's monetary cost as well as the cost to the environment through the energy "wasted" in HVAC systems and the like. Google has a PUE (Power Usage Effectiveness; the ratio of total power input to power actually used for powering production systems) approaching 1.2 vs typical values of around 2.5-3.0 - Microsoft is around 2.75 as I recall - so they are clearly doing something right.
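PUE makes the comparison concrete: it is total facility power divided by IT equipment power, so at a fixed IT load the facility overhead scales linearly with it. A rough sketch using the ratios quoted above (the 10 MW IT load is an assumed figure, purely for illustration):

```python
def facility_power_kw(it_load_kw, pue):
    """Total facility draw for a given IT load at a given PUE
    (PUE = total facility power / IT equipment power)."""
    return it_load_kw * pue

it_load = 10_000                                   # 10 MW of servers (assumed)
google_total = facility_power_kw(it_load, 1.2)     # PUE cited for Google
typical_total = facility_power_kw(it_load, 2.75)   # PUE cited for Microsoft
overhead_saved = typical_total - google_total      # non-IT power avoided
```

At these numbers the lower PUE avoids 15.5 MW of cooling and distribution overhead for the same useful compute.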

Re:Is this possible? (3, Insightful)

terraformer (617565) | more than 5 years ago | (#25383799)

Two more words: uncontrolled humidity. Yes, there are efficiency gains to be had everywhere, but none of them are free, only less costly. Bringing in moist outside air requires that air to be dehumidified, at an obvious cost. If you go with a heat exchanger instead, the amount of cooling per exchanger decreases significantly, so you need more of them, and each one DOES require energy to operate (i.e., to move air/liquid/whatever through).

Re:Is this possible? (1)

wvmarle (1070040) | more than 5 years ago | (#25384033)

This humidity: how much is this really an issue? (genuine question). I can imagine that when you reach 100% and your equipment is cooler than ambient you get condensation issues. However here we are talking about equipment that needs cooling, i.e. is well above ambient temperatures. Condensation is therefore surely not an issue.

If you would say "dust", that I can see as an issue as it clogs up ventilation openings and can put a nice blanket on equipment keeping it all nice and hot. Dust however is very easy to filter, particularly the larger particles of dust that are an issue in this kind of equipment.

And finally, if a higher or varying humidity would cause more system failures, that may not be a big issue. Considering the numbers of servers Google uses this becomes really statistical, and they have already designed the whole system with failures in mind. So the failure itself is not an issue (just rip it out and replace the failed part), and cost is easily and reliably calculated using normal statistics (or own experience).

In the case of Google, a 1-2% failure rate due to humidity issues can very well be more than offset by the savings on cooling. Cooling is very expensive after all.
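The break-even argument above is simple arithmetic: extra failures cost roughly fleet size × extra failure rate × replacement cost, while running warmer saves some fraction of the cooling bill. A sketch with invented numbers:

```python
def humidity_tradeoff(servers, extra_failure_rate, replacement_cost,
                      annual_cooling_bill, cooling_savings_frac):
    """Net annual saving from running warmer: cooling saved minus the
    cost of the additional hardware failures it causes."""
    failure_cost = servers * extra_failure_rate * replacement_cost
    cooling_saved = annual_cooling_bill * cooling_savings_frac
    return cooling_saved - failure_cost

# 100k servers, 1.5% extra annual failures at $2k each,
# vs. 20% off a $30M annual cooling bill (all figures invented)
net = humidity_tradeoff(100_000, 0.015, 2_000, 30_000_000, 0.20)
```

With these made-up inputs the warmer data center comes out $3M ahead per year, which is the commenter's point: at Google's scale, failures become a statistical cost you can trade against cooling.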

Re:Is this possible? (1)

hedwards (940851) | more than 5 years ago | (#25384223)

Humidity can kill electronics. I recall some years back a guy down in Louisiana took his new camera out and about for a couple of weeks. It was a really nice camera which he'd just spent $1500 on, and within a few weeks it failed due to corrosion.

So yes, humidity is a very serious thing and if you're not planning for it you can end up with destroyed equipment very quickly.

Re:Is this possible? (2, Insightful)

terraformer (617565) | more than 5 years ago | (#25384273)

Actually it is a very good question and no one really knows the answer. Most of our IT equipment standards are derived from telecom standards from years ago. It may very well be that the tolerances are too tight and we can move away from highly controlled environments, but not enough is known about today's equipment at high or low (static problems) humidities to understand the consequences of doing so. As for dust, it is a known issue in anything with moving parts or anything that doesn't tolerate interrupted air flow, but the tighter the filter, the higher the air pressure needs to be, and therefore the more energy it takes to move air through it.

Re:Is this possible? (5, Funny)

iminplaya (723125) | more than 5 years ago | (#25383171)

I'm surprised Google isn't considering moving some of its data centres to Arctic locations where you get cool temperatures year-round.

Oh yeah! And completely melt the ice cap.

Re:Is this possible? (3, Insightful)

thermian (1267986) | more than 5 years ago | (#25383881)

If you had, say, a town-sized data centre installation, it would probably have about the same effect as a smallish, partially active volcano, of which there are many in northern latitudes: pretty much nothing apart from local effects, which is, in spite of the green crazies' rantings, not too bad compared to the alternatives.

What you wouldn't have is as much need for additional power for cooling, which of course saves the pollution caused by its generation. You should bear in mind that the colder parts of the Earth are being far more seriously affected by pollutants in the atmosphere than by anything which is just warmer than its surroundings.

As for why I said green crazies: well, if they hadn't been so all-fired determined to put governments off nuclear power, we'd have that instead of all these coal-burning plants. Now we have massive pollution problems and a truly gigantic cost for building all those nuclear plants in a short time, instead of gradually over the last three or four decades.

Re:Is this possible? (4, Insightful)

jimicus (737525) | more than 5 years ago | (#25383585)

Wouldn't Intel run into physical limitations that simply don't allow chips to run at that low a temperature? I'm surprised Google isn't considering moving some of its data centres to Arctic locations where you get cool temperatures year-round.

Are you serious? Neither the Arctic nor the Antarctic is well known for reliable power or fast Internet connections.

Re:Is this possible? (2, Interesting)

Colonel Korn (1258968) | more than 5 years ago | (#25383985)

Wouldn't Intel run into physical limitations that simply don't allow chips to run at that low a temperature? I'm surprised Google isn't considering moving some of its data centres to Arctic locations where you get cool temperatures year-round. We've seen reports of appealing places like that on Slashdot before. (Of course, that would just be a short-term fix before we move the Earth to a farther orbit around the sun to avoid suffocating in our own waste heat like the Puppeteers in Niven's Ringworld [amazon.com] ).

I doubt anything physical is being done. Intel is very conservative in setting maximum operating temperatures. They're probably just promising Google that chips operated 5 °C hotter will still be covered under warranty. If anything is actually being done to the hardware, it's probably just raising the temperature at which throttling kicks in.

Re:Is this possible? (2, Interesting)

Surt (22457) | more than 5 years ago | (#25384163)

It's going to be far cheaper to build radiator fins extending into space than to move the earth's orbit, barring some innovative invention in the orbit moving department. Also, orbit moving has the downside of reducing the solar flux, which will be bad for our solar energy efforts.

Environmental impact (2, Interesting)

phorm (591458) | more than 5 years ago | (#25382761)

Uhhhh. Wouldn't making chips a bit more efficient be better, as opposed to making them "less likely to burn out at higher temps"?

Seems that Google's not really thinking green in this case (despite the pretension to do so in others), unless they plan on making use of the datacenter heat elsewhere.

Re:Environmental impact (4, Insightful)

LWATCDR (28044) | more than 5 years ago | (#25382857)

Not really.
No matter how cool the chips run they will put out heat. If you have two chips that run at X and use Y watts you will save power if you can run them a little hotter and use less power for cooling.

Re:Environmental impact (1)

janeuner (815461) | more than 5 years ago | (#25382883)

So, you argue that they should use nothing but mobile chips because they use so much less power and generate a lot less heat?

Never mind that you would have to build twice as many servers because of the performance penalty, or any other technical details for a distributed server. Just go with the "Google is bad" storyline - it works better.

Re:Environmental impact (3, Interesting)

Rogerborg (306625) | more than 5 years ago | (#25382915)

Ideally, yes. And ideally, I'd come home to find Alyson Hannigan oiled up and duct taped to my bed.

Pragmatically, if they can't run cool, then it's more efficient to run them hot than to spend more energy actively cooling them.

Re:Environmental impact (5, Funny)

moderatorrater (1095745) | more than 5 years ago | (#25383965)

And ideally, I'd come home to find Alyson Hannigan oiled up and duct taped to my bed

You know you're pathetic when they're even unwilling in your fantasies.

Re:Environmental impact (5, Insightful)

I.M.O.G. (811163) | more than 5 years ago | (#25382955)

I'm not sure what you're getting at - if by doing this they are saving millions in cooling expense, they are certainly using less energy. What is "going green" if it isn't energy conservation? The fact that the conservation comes from less work for their AC units rather than efficient little processors is immaterial.

Don't expect any company to do things because it's right - but good companies will find win-win situations where they cut costs and do things to "Go Green".

Re:Environmental impact (0, Flamebait)

bendodge (998616) | more than 5 years ago | (#25383601)

What is "going green" if it isn't energy conservation?

Most peoples' idea of 'going green' somehow involves the government.

But they pass it off to someone else (5, Insightful)

Moraelin (679338) | more than 5 years ago | (#25383621)

Yes, but the way I see this is:

Intel isn't arbitrarily going, "man, we could make chips that run ok 5 degrees hotter, but we're gonna piss everyone off by demanding more cooling. Just because we can." Most likely Intel is already doing the best it can, and getting a bunch of chips which vary in how good they are. And they're getting the same bunch of chips regardless of whether Google demands higher temps or not.

Google just gets a cherry-picked bunch, but the average over Intel's production is still the same. Hence everyone else is getting a worse selection: they get what remains after Google took the best.

It's a zero-sum game. The total load on the planet is the same. The same total bunch of chips exits Intel's fabs. On the total, no energy was conserved.

So Google's "going green" is at the cost of making everyone else less "green". They can willy-wave about how energy efficient they are, by simply dumping the difference on someone else.

That's not "going green", that's a predatory approach. Your computers could require on the average an extra 0.5W in cooling, so Google can willy-wave that theirs uses 1W less. They just dumped their costs and their "eco burden" to someone else.

It's akin to me willy-waving that I'm so green and produce less garbage than you... by dumping some of my garbage in random other people's garbage bins across the town. Yay, I'm so green now, you all can start worshipping me. Well, no, on the total the same amount of garbage is being produced; I just dumped it and the related costs on other people. That's not going green, that's being a predator.

I can see why a business might want to cut their own costs, and not care about yours. That's, after all, the whole "invisible hand" theory. But let me repeat it: on the whole no energy was conserved. They just passed off some of their cooling costs (energy _and_ money) to someone else.

Re:But they pass it off to someone else (5, Insightful)

Muad'Dave (255648) | more than 5 years ago | (#25383745)

I agree with most of what you said, but I think there is a _slight_ difference between Google having the higher-temp-rated chips and your average Joe user having them. Google's chips will be running full throttle/full temp 24/7; Joe user might run them full blast (and therefore full temp) 2% of the time. I bet the energy savings are not insignificant when usage patterns are taken into consideration.
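The duty-cycle point can be made concrete by weighting full-load and idle power by utilization; the 2% figure is the parent's, the wattages are invented for illustration:

```python
def avg_power_w(full_load_w, idle_w, utilization):
    """Average draw given the fraction of time spent at full load."""
    return utilization * full_load_w + (1 - utilization) * idle_w

datacenter_chip = avg_power_w(95, 25, 1.00)  # runs flat out 24/7
desktop_chip    = avg_power_w(95, 25, 0.02)  # the 2% duty cycle above
```

With these assumed figures the data-center chip averages 95 W while the desktop chip averages about 26 W, so the high-temperature bin matters far more where the chips actually run hot all day.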

Re:But they pass it off to someone else (1)

beckje01 (1216538) | more than 5 years ago | (#25384233)

I would have to agree with you; usage patterns will play into this a lot. But let's look at this another way.

Google takes the best chips, and those chips are run hotter than Intel recommends for the rest of the batch. If Google didn't do this, the cooling required would be set by Intel's recommendation; by demanding a subset of chips that can run hotter, Google creates a set of chips that require less cooling than Intel's recommendation. Thus the net cooling required is lower than if everyone just followed Intel's recommendation.

In the purest sense the GP makes sense, but in the real world, where cooling cost is based on Intel's specs rather than on what the chip can actually do, Google is lowering the overall cooling cost, assuming 100% load at all times. Factor in usage patterns and they are lowering the cost on a big segment that will run at 100% load, while most other chips will not be running at 100%.

Re:But they pass it off to someone else (1)

aniefer (910494) | more than 5 years ago | (#25383825)

It's a zero-sum game. The total load on the planet is the same. The same total bunch of chips exits Intel's fabs. On the total, no energy was conserved.

So Google's "going green" is at the cost of making everyone else less "green".

Not true.

This is only the case if you assume every other customer is also running data centers that require special cooling. If Google is using off-the-shelf components, then it is just as likely that the other chips are going to regular desktops that receive no additional cooling.

In this case, it is actually more efficient overall to give the best chips to Google and other datacenters, and leave the rest to others.

Re:But they pass it off to someone else (2, Insightful)

mabhatter654 (561290) | more than 5 years ago | (#25383979)

You miss the point entirely. Google wants Intel's processors to operate properly at a higher temperature than currently spec'd, which will allow Google to use less cooling. They want the processors to tolerate more heat, not generate more heat. This means Intel will have to make the processors slightly more beefy, but how much does that really cost across millions of units once the design work is done? A few bucks per processor, tops.

Google pays dozens of dollars a MONTH to cool each processor. Making this change may cost Intel a dozen dollars up front, one time. If Intel spends a little extra up front to make a processor that tolerates higher temps, Google will save multiple times the processor's cost in electric bills... that's REAL efficiency at work. Google wants Intel to design a chip that is $20 more expensive so they can save $1000s in energy costs.
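The economics above boil down to a payback period: a one-time per-chip premium divided by the monthly per-chip cooling saving. Both figures below are guesses in the spirit of the comment, not sourced numbers:

```python
def payback_months(premium_per_chip, cooling_saving_per_chip_month):
    """Months until a one-time chip premium is repaid by cooling savings."""
    return premium_per_chip / cooling_saving_per_chip_month

# $20 extra per chip, saving an assumed $5/month per chip in cooling
months = payback_months(20, 5)
```

Under these assumptions the premium pays for itself in four months; everything after that is pure saving for the life of the server.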

Re:But they pass it off to someone else (0)

Anonymous Coward | more than 5 years ago | (#25384029)

No, Google is not getting a cherry-picked bunch. All Google is getting is a warranty: if a chip fails while being used 0-5 °C above the normal published spec, Intel will cover it.

More than likely, the CPU temp spec is the limiting factor in their "computers" and the only thing preventing them from raising their room temp. They don't want to do that and lose coverage, so they want something from Intel that says go ahead.

For everyone making the claim of cherry-picking: can you provide ANY basis for how or why those chips are singled out, and whether this is something Intel does as a normal course of business? I've heard they put some through different clock tests, so maybe they actually do? Again, some proof, please, not speculation, and don't assume it happens just because everyone else says it does.

Re:Environmental impact (0)

Anonymous Coward | more than 5 years ago | (#25383037)

Both are good optimizations that you can do independently of the other. Efficiency is the obvious one and of course it's being worked on. I think the point here is that Intel has overlooked the second optimization up to now.

Re:Environmental impact (3, Insightful)

gstoddart (321705) | more than 5 years ago | (#25383071)

Seems that google's not really thinking green in this case (despite the pretension to do so in others), unless they plan on making use of the datacenter heat elsewhere.

The amount of energy you need to use to cool that stuff is quite significant. And, in case you haven't realized it, generating cool air also creates more warm air; it's just not in the data center. It's usually vented right outside.

If the chips could run hotter, they'd have to use less energy to cool the data center, and generate less waste heat from that very cooling in the first place.

I'm not convinced that what they're asking for isn't a good idea.

Cheers

Re:Environmental impact (1)

phorm (591458) | more than 5 years ago | (#25383263)

Uh, if the chips ran cooler, wouldn't there be less heat to dissipate in the first place?

Re:Environmental impact (2, Interesting)

gstoddart (321705) | more than 5 years ago | (#25383441)

Uh, if the chips ran cooler, wouldn't there be less heat to dissipate in the first place?

Yes, but which is easier: making the chips more efficient, or allowing them to run a little hotter without melting?

I honestly don't know. My first thought is that efficiency is harder than durability, but that's pulled completely out of my backside.

I still think they're right in asserting that if they could handle a little more heat, then data centers would spend less energy trying to cool them to their operating range.

Make them both more efficient (so they generate less heat) and run hotter (so they're less sensitive to that heat) and it seems like you win on two ends, no?

Cheers

Re:Environmental impact (1)

lorenzo.boccaccia (1263310) | more than 5 years ago | (#25383827)

Some missing math: dissipated heat for a transistor is always related to voltage and clock, roughly as C × V² × f.

To run chips cooler, you have to slow them down (do not want) or undervolt them, which works to some margin but reduces the thermal tolerance of the chips: undervolted cores need to run cooler, or they hang from the effect of temperature on the transistors.

So the other road could be to cherry-pick better batches and run them at hotter temps.
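The scaling above follows from the standard approximation for CMOS dynamic power, P ≈ C × V² × f; a small sketch of the two knobs (the capacitance, voltage, and frequency values are arbitrary illustrative numbers):

```python
def dynamic_power(c_eff, volts, freq_hz):
    """Approximate CMOS dynamic power: P ≈ C * V^2 * f."""
    return c_eff * volts ** 2 * freq_hz

base        = dynamic_power(1e-9, 1.2, 3.0e9)  # nominal operating point
underclocked = dynamic_power(1e-9, 1.2, 2.4e9)  # 20% slower -> 20% less power
undervolted  = dynamic_power(1e-9, 1.1, 3.0e9)  # same speed, ~16% less power
```

Power is linear in frequency but quadratic in voltage, which is why undervolting is the more attractive knob when the silicon tolerates it.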

Re:Environmental impact (1)

phorm (591458) | more than 5 years ago | (#25384195)

To run chips cooler, you have to slow them (do not want) or to undervolt them.

The "undervolt" option would fall in the efficiency category. The trick would be to make them perform at the same level (in terms of operations processed per unit time) while supplying less voltage. As it often seems that CPU makers tend towards "how can we crunch out more 'power' regardless of consumption before meltdown" as opposed to "how can we do the same amount of work in a similar amount of time with less power consumption," that would seem better.

I don't see how lowering the power requirements of a chip reduces thermal tolerance, though.

Re:Environmental impact (1)

SoupIsGoodFood_42 (521389) | more than 5 years ago | (#25383119)

Well, cooling them down takes energy and creates heat, so depending on how much you can save in that area vs. the extra expended with the chips, it might end up being more efficient overall.

Re:Environmental impact (0)

Anonymous Coward | more than 5 years ago | (#25383529)

Even if they use the most efficient chip out there, the chips will still generate heat.

The money is being saved by reducing expenses, because it costs a lot of money to cool those chips down by an extra 5 degrees.

Re:Environmental impact (1)

Surt (22457) | more than 5 years ago | (#25384309)

Uhhhh. Wouldn't making chips a bit more efficient be better, as opposed to making them "less likely to burn out at higher temps"?

Seems that Google's not really thinking green in this case (despite the pretension to do so in others), unless they plan on making use of the datacenter heat elsewhere.

Yes, assuming those were the tradeoffs, which they weren't.

Google is trading existing performance levels against reduced cooling, which is a pure win, just by demanding more resilient chips from Intel.

Re:Environmental impact (2, Interesting)

aperion (1022267) | more than 5 years ago | (#25384311)

It doesn't sound like Google is asking for less efficient chips; that's a nonsensical notion. Instead it sounds like Google is asking for more durable chips.

One that still operates at X% efficiency but at a higher ambient temperature, say 70 °F instead of 65 °F. I'm sure Google would like it better if the chips didn't produce any heat (i.e. 100% efficient), but that's impossible.

Still, if you want to "Go Green" and be environmentally friendly, stop viewing heat as a problem. It's better to try to reclaim some of the lost energy (heat IS energy) than to spend more money getting rid of it, i.e. using energy to get rid of energy.

Underclocking if you're poor? (3, Interesting)

Junior J. Junior III (192702) | more than 5 years ago | (#25382791)

If you don't have the clout of a Google-sized organization to buy higher-rated chips from Intel, I wonder if you can basically achieve the same thing by underclocking. An underclocked chip will run cooler, but I don't know if it'll run more stably at higher temps, although I think it would.

Does anyone have any experience with doing this?

I think it'd be interesting to see whether the cost savings in power and cooling is offset by the cost of the performance losses.
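One rough way to answer that question: underclocking pays off only if the fractional power saving outweighs the extra machines needed to recover the lost throughput. A back-of-the-envelope model with invented numbers:

```python
def underclock_net_power(servers, power_w, power_saving_frac, perf_loss_frac):
    """Total fleet power after underclocking, including the extra servers
    needed to make up the lost throughput."""
    servers_needed = servers / (1 - perf_loss_frac)
    return servers_needed * power_w * (1 - power_saving_frac)

baseline = 1_000 * 300  # 1000 servers at an assumed 300 W each
underclocked = underclock_net_power(1_000, 300, 0.25, 0.15)
```

With these assumptions (25% power saved for a 15% throughput loss, plausible given the quadratic voltage term in dynamic power) the underclocked fleet draws less total power even after adding machines; flip the two fractions and it draws more.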

Re:Underclocking if you're poor? (0)

Anonymous Coward | more than 5 years ago | (#25382967)

Underclocking would probably cause the chip to generate less heat. So while it may not tolerate higher temperatures, the server farm would be easier to cool

Re:Underclocking if you're poor? (0)

Anonymous Coward | more than 5 years ago | (#25383455)

This is true. I remember when the fan on my K6-2 350 MHz broke, so I just ran it at 300 MHz fanless.

Easily cooler than running at full speed, and likely faster than just running a slower chip at full speed.

Re:Underclocking if you're poor? (3, Informative)

LaminatorX (410794) | more than 5 years ago | (#25382985)

In the past, I've under-clocked primarily for noise benefits. Lower clock -> lower temp -> slower fan RPM -> lower dB SPL.

Re:Underclocking if you're poor? (0)

Anonymous Coward | more than 5 years ago | (#25383003)

no, it would just mean a lower heat output.

Re:Underclocking if you're poor? (1)

cduffy (652) | more than 5 years ago | (#25383631)

Several years ago I underclocked my home system (which was initially quite unreliable) and saw a substantial decrease in uncommanded reboots. The issue went away almost entirely (even at full speed) when I upgraded the power supply; I suspect that I was running near the edge of what it could handle, and underclocking (by reducing the CPU's power draw) moved its usage far away enough from that boundary to make a difference.

Re:Underclocking if you're poor? (0)

Anonymous Coward | more than 5 years ago | (#25383663)

I can't believe my eyes! Someone on Slashdot is actually UNDERclocking a chip? You're going to have to give up your Slashdot membership.

Re:Underclocking if you're poor? (1)

I.M.O.G. (811163) | more than 5 years ago | (#25383805)

As an overclocker, I can say your statement tends to be true in practice. The cooler a chip is kept, the closer you can get to its maximum overclocking frequency (the frequency beyond which the chip exhibits instability). Similarly, the lower the frequency is set, the warmer the chip can run while still operating stably. These are general trends; processors from different fabs or batches perform differently, but within the same batch of processors you can reliably test and observe these results.

Temp specs (1)

MyLongNickName (822545) | more than 5 years ago | (#25382805)

Not mentioned in the story. What CPU are they talking about, and what is the upper end Google is looking for?

(and this having to wait five minutes between posts is moronic. Look at my posting history, and all of them from the same IP address. Tell me why I have to wait this long to post.)

Waste Heat reclamation (4, Funny)

LaminatorX (410794) | more than 5 years ago | (#25382817)

When in college, I heated my crappy little shack by putting 150W bulbs in every light. It was like my own little Easy-Bake oven.

Re:Waste Heat reclamation (0)

Anonymous Coward | more than 5 years ago | (#25383225)

Funny, I just ran Folding@home on a bunch of Pentium IIIs.

Re:Waste Heat reclamation (1, Troll)

Abreu (173023) | more than 5 years ago | (#25383331)

I gather you were not paying the electricity bill...

Re:Waste Heat reclamation (1)

entgod (998805) | more than 5 years ago | (#25383551)

Probably not, but even if he were, at least he wouldn't have been paying the heating bill. And he got extra light, of course.

Re:Waste Heat reclamation (2, Informative)

LaminatorX (410794) | more than 5 years ago | (#25384193)

The place was all electric. An incandescent bulb compares favorably to many space heaters in terms of electricity-to-heat efficiency, and you get light as a bonus.

Intel says they don't (1)

the_skywise (189793) | more than 5 years ago | (#25382823)

If the chips failed prematurely at these higher temperatures, the former Googler says, Intel was obliged to replace them at no extra charge.

Intel denies this was ever the case. "This is NOT true," a company spokesman said in an email. Google declined to comment on its relationship with Intel. "Google invests heavily in technical facilities and has dozens of facilities around the world with many computers," reads a statement from the company. "However, we don't disclose details about our infrastructure or supplier relationships."

So Google claims they're more environmentally friendly... but burn through chips faster.

Not too surprising (1)

Ngarrang (1023425) | more than 5 years ago | (#25382835)

When you are a big company that spends enough money, you can ask for this sort of thing and your demand will be met.

"Guarantee us a higher temp CPU or we will switch to AMD...and tell everyone about it."

I have a feeling that the CPUs can handle a bit more temp than they are rated for anyway, as a CYA move by Intel.

Re:Not too surprising (-1)

Anonymous Coward | more than 5 years ago | (#25382945)

No. It's standard engineering practice to rate everything (temp, voltage, amperage, weight, vibration, etc.) at the absolute maximum the device will handle.

Re:Not too surprising (5, Insightful)

stilwebm (129567) | more than 5 years ago | (#25383255)

"Guarantee us a higher temp CPU or we will switch to AMD...and tell everyone about it."

That's not really how the negotiation goes in this type of situation where there are two major supplier choices (AMD and Intel) and Google is a relatively small customer when compared with Dell, HP, IBM, etc.

In all likelihood, the negotiation is more of a partnership where both parties work together to create value. Google says, "We buy thousands upon thousands of your chips, but we also pay millions of dollars annually to cool them. We'd be willing to pay a little premium and lock in more volume if you can help us save money by increasing the temperature inside our data centers." Google has done the math and comes prepared with knowledge of how much those specs can save them and a forecast of chip needs for the next 12-18 months, and the two work together to create value. For example, Google might offer to allow endorsements (as they did with Dell for their Google appliances) in exchange for favorable pricing and specifications.

The "do this or I'll switch" tactic only works well when there are many suppliers and products are relatively undifferentiated, like SDRAM or fan motors.

Re:Not too surprising (2, Interesting)

fuzzyfuzzyfungus (1223518) | more than 5 years ago | (#25383565)

It also wouldn't surprise me if Google were willing to offer something of a testbed setup. A while back, they put out that report on HDD reliability and its influences, so they are obviously watching that sort of thing. And, since their historical style has been very much about redundant masses of commodity gear, they can theoretically tolerate slightly higher failure rates if that lowers costs in other ways.

I suspect that, with negotiation to set the correct balance of pricing, warranty, access to handpicked chips, etc. both Intel and Google could easily benefit from an arrangement where Google gets to play with slightly experimental stuff, like higher temperature processors, and Intel gets field reliability data.

Re:Not too surprising (1)

freedom_india (780002) | more than 5 years ago | (#25383611)

In reality that IS true.
The Very Large bank where I was employed earlier had a special agreement with Microsoft.
They got a customized version of XP meant specially for the bank's hardened network. Yeah, I know the Admin kit allows customization, but I mean at a lower level: the NSA hookups in the system DLLs were not present!
As soon as a Dell entered the bank, it was wiped, and this OS was installed. It was a weird mix of a little Linux bootup code and XP.
You had all the rights of an admin EXCEPT when it came to modifying system32 or adding hookups to startup.
Guess they had a different code base, because my laptop came with the bank's logo'ed recovery disc, which had its own bootup code and an OS made from XP.

I hope that ends the AMD ads.. (-1, Troll)

Anonymous Coward | more than 5 years ago | (#25382865)

Saving money by saving power is a fabel.
I've always been an AMD fanboie so you can't touch me.

Re:I hope that ends the AMD ads.. (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#25383153)

I see you've never fed a big Liebert or five. And it's fable.

WTF? Lawyers as engineers, not so much (3, Informative)

Ancient_Hacker (751168) | more than 5 years ago | (#25383095)

This sounds like a scenario where lawyers are trying to act as engineers. That works about as well as you might expect.

There are these engineering things, amusingly called "shmoo plots", that map out a chip's operating envelope of voltage versus speed versus temperature. From those, an engineer can foresee how hot you can run a chip before its rise- and fall-time margins start to erode.

There is very little Intel can do to stretch things by another 5 degrees. It's not something that can be imposed by fiat. Intel engineers have already juggled all the variables to come up with the best performance possible. SOMETHING is going to have to give. Either the chips will have to be selected and graded for speed, lowering the overall envelope for the chips everyone else gets, or they'll have to fudge some other parameters, hoping nobody will notice, or, worse yet, they'll tweak some variable right to the edge of raggedness, resulting in worse reliability down the road.

Lawyers and accountants generally don't know you can't have everything. Let's hope the engineers educate them.
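For readers who haven't seen one: a shmoo plot is just a pass/fail grid swept over operating parameters. Here's a toy sketch; the linear margin model is invented purely for illustration and is not Intel's actual silicon behavior:

```python
# Toy shmoo plot: a pass/fail grid over (voltage, frequency) at a fixed
# temperature. The margin model below is a made-up illustration, not real
# silicon data: more voltage buys speed, more heat takes it away.
def max_stable_freq_ghz(voltage, temp_c):
    return 1.0 + 2.0 * (voltage - 1.0) - 0.01 * (temp_c - 25)

def shmoo(voltages, freqs, temp_c):
    # '+' = passes at that (voltage, frequency) point; '.' = fails.
    return ["".join("+" if f <= max_stable_freq_ghz(v, temp_c) else "."
                    for v in voltages)
            for f in freqs]

voltages = [1.0, 1.1, 1.2, 1.3]
freqs = [1.0, 1.2, 1.4, 1.6]
for temp in (25, 75):
    print(f"{temp} C:", shmoo(voltages, freqs, temp))
```

Running it shows the passing region shrinking at the higher temperature, which is the parent's point: a 5-degree guarantee has to come out of some margin somewhere.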

Re:WTF? Lawyers as engineers, not so much (5, Insightful)

mindstrm (20013) | more than 5 years ago | (#25383333)

Odds are this is being driven by a data-center engineering team, who are looking at the cost savings of running their data center 5 degrees hotter.

You don't get what you don't ask for.

Intel will do exactly as much engineering as necessary to keep their target market up, and no more.

If the market wants chips that operate 5 degrees hotter.. the engineers will do their job and see if it can be done. Intel will charge a premium for this.

That's business.
 

Re:WTF? Lawyers as engineers, not so much (1)

kipman725 (1248126) | more than 5 years ago | (#25383655)

Chips near the centre of the wafer are higher quality. All Google is asking for is these chips instead of a random mix from all over the wafer. This is why some chips overclock far better than others even though they were produced in the same week at the same plant. They can presumably ask for this because they are buying such large quantities. It's quite a novel way of saving money, though.

Re:WTF? Lawyers as engineers, not so much (1)

rrohbeck (944847) | more than 5 years ago | (#25383961)

Right on.
Of course, Intel will give them whatever they want because Google is such a large customer, and will then pay in terms of higher failure rates, hence warranty costs. And Google will notice the same thing, assuming they do decent data gathering on failures, and find out that this is a really bad idea because those failures cost them even more than they cost Intel.
Seems to me like bean counters are trying to beat physics.

Re:WTF? Lawyers as engineers, not so much (1)

Alereon (660683) | more than 5 years ago | (#25384287)

There is very little Intel can do to stretch thing by another 5 degrees. It's not something that can be imposed by fiat. Intel engineers have already juggled all the variables to come up with the best performance possible. SOMETHING is going to have to give. Either the chips will have to be selected and graded for speed, lowering the overall envelope for the chips everyone else gets, or they'll have to fudge some other parameters, hoping nobody will notice, or worse yet they'll tweak some variable right to the edge of raggedness, resulting in worse reliability down the road.

In the real world, processors don't fail (barring power spikes/motherboard failures that fry them). The consideration here is much more likely to involve legal concerns about the warranty or the temperature at which thermal throttling or shutdown occurs. Most likely Google and Intel were both able to confirm that the processors would not fail during their expected lifetimes in Google's datacenters even when operating continuously at this new maximum load, which is why they agreed to amend the processor specifications. I sincerely doubt these CPUs are different from others in any way other than possibly the pre-configured thermal protection setpoints.

I for one am not surprised. (1, Flamebait)

nimbius (983462) | more than 5 years ago | (#25383183)

Other businesses have this same questionable practice. For example, Wal-Mart requires special packaging from its suppliers that is not normally afforded to other retailers. Broadcom, Microsoft, and Nvidia likely have a few cozy agreements that are exclusive and hushed. A possible example here might be the ACPI standard and how it seems to "just work" in Windows but struggle in some cases with *nix.

It certainly gives Google a cost advantage, and I can imagine why they vehemently deny it in TFA as I glance over the Justice Department article. Although whatever Google gains in cooling, they may just as easily have lost to a more power-hungry architecture overall:

http://people.freebsd.org/~brooks/papers/bsdcon2003/fbsdcluster.pdf [freebsd.org] has experienced it, and his 2007 update also confirms it.

I'm left wondering what AMD might do for its biggest customers?

Re:I for one am not surprised. (0)

Anonymous Coward | more than 5 years ago | (#25383659)

I'm left wondering what AMD might do for its biggest customers?

Knowing their customer base, AMD likely sends them a baseball cap and an overclocking guide.

Re:I for one am not surprised. (1)

rrohbeck (944847) | more than 5 years ago | (#25384053)

Dell did (does?) the same thing by having higher temperature specs for their servers than the rest of the industry. Of course customers will see higher failure rates if they actually use the larger margin.
It's teh physics, stupid!

AdWords (-1, Offtopic)

rehtonAesoohC (954490) | more than 5 years ago | (#25383275)

It's a good thing Slashdot doesn't use AdWords, or the ad next to the summary would be something like:

"I just saved millions on my car insurance by switching to Geico!"

Google Runs Its Data Centers at 80 Degrees (1)

miller60 (554835) | more than 5 years ago | (#25383373)

Google said recently that it runs its data centers at 80 degrees [datacenterknowledge.com] as an energy-saving strategy, so chips that support higher temperatures would mean fewer hardware failures in their data center. Most data centers operate in a temperature range between 68 and 72 degrees, and I've been in some that are as cold as 55 degrees. Lots of companies are rethinking this. In the Intel study on air-side economizers, they cooled the data center with air as warm as 90 degrees. ASHRAE is also considering a broader temperature range for data center equipment.
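The savings from a warmer setpoint are easy to sketch. A commonly cited rule of thumb (the exact figure varies by facility; the 4%-per-degree number and the baseline bill below are assumptions for illustration, not figures from the article) is that each degree Fahrenheit of setpoint increase saves a few percent of cooling energy:

```python
# Rough estimate of cooling-energy savings from raising the setpoint.
# Assumptions (illustrative, not from the article): ~4% saving per degree F
# raised, compounding, on a hypothetical $10M/year baseline cooling bill.
def cooling_savings(baseline_cost, degrees_raised, pct_per_degree=0.04):
    remaining = baseline_cost * (1 - pct_per_degree) ** degrees_raised
    return baseline_cost - remaining

# Raising the room from 70 F to 80 F on a $10M/year cooling budget:
print(round(cooling_savings(10_000_000, 10)))  # on the order of $3.3M/year
```

Even if the per-degree figure is off by half, the absolute numbers at Google's scale explain why they'd negotiate for it.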

Commercial or industrial scale? (1)

ajb44 (638669) | more than 5 years ago | (#25383377)

Hmm, they don't say if this is commercial (0..70) or industrial (-40..85) temperature range - I guess Intel chips are normally commercial range, so they've bumped them up to 75.

Are they saving MILLIONS? (5, Interesting)

hackus (159037) | more than 5 years ago | (#25383471)

Most of the power supply systems for my servers, which are HP G3-5 systems of various U sizes, tend to waste more power as temperature goes up.

This has nothing to do with CPUs, though. It is the power supplies on the machines. As temperature goes up, efficiency goes down. At around 80 degrees I noticed a significantly larger draw on the power supply with my ammeter.

I had a gaming system with two ATI 4870's and the 800 Watt power supply would crash my machine if I did not run the air conditioner and keep the room at 70 degrees after some fairly long Supreme Commander runs.

I noticed that the amperage would go up, and the power output would go down, as the temperature went up.

I have not conducted any experiments in a lab setting with this stuff, but from experience, jacking the temperature up usually makes power supplies work harder and makes them less efficient.

-gc
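The parent's observation is consistent with how PSU losses compound: at a fixed DC load, lower efficiency means a higher wall draw, and the entire difference is dissipated as heat that the cooling plant then has to remove as well. A sketch with made-up efficiency numbers (not measurements):

```python
# Wall draw and waste heat at a fixed DC load as PSU efficiency drops.
# The 90%/85% efficiency figures are illustrative, not measured values.
def wall_draw_watts(dc_load_w, efficiency):
    return dc_load_w / efficiency

def waste_heat_watts(dc_load_w, efficiency):
    return wall_draw_watts(dc_load_w, efficiency) - dc_load_w

# A 600 W DC load: the same PSU running cool at 90% vs. hot at 85%.
print(waste_heat_watts(600, 0.90))  # ~66.7 W lost as heat when cool
print(waste_heat_watts(600, 0.85))  # ~105.9 W lost as heat when hot
```

So a hot supply both draws more from the wall and adds more heat for the HVAC to fight, a small feedback loop in the wrong direction.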

Re:Are they saving MILLIONS? (2, Informative)

Penguinoflight (517245) | more than 5 years ago | (#25383915)

Traditional 120 V AC input, multi-voltage DC output power supplies certainly suffer from lower efficiency at higher temperatures. Running two 4870s on a single 800 W power supply probably isn't a good idea, especially if you have a high-powered CPU to go with them. Most quality power supplies will be rated lower than their maximum output to allow for temperature concerns.

These things said, Google uses custom power supplies and systems that run only on 12 V. These power supplies may be easier to produce in quality batches, but will still be subject to the same efficiency curves.

Re:Are they saving MILLIONS? (1)

ChrisA90278 (905188) | more than 5 years ago | (#25384183)

Do you think maybe Google is using DC power? That way they have just a few large power supplies in another room and send DC power over large copper bus bars to the racks. These DC systems are expensive, but you make the money back in power/cooling, and you save money with the UPS too.

Re:Are they saving MILLIONS? (1)

wvmarle (1070040) | more than 5 years ago | (#25384295)

Somehow I doubt datacentres like the ones Google operates use switching power supplies located next to the hardware they power, like in your home computer. I for one would consider building a single power supply pushing a lot of amps through some fat cables that branch off to wherever power is needed.

But then I've never seen a datacentre from the inside, so I may be totally wrong.

higher chip temps??? (1)

sam_paris (919837) | more than 5 years ago | (#25383855)

I might be missing something here, but why would Google be demanding "higher" chip temps to save on cooling??

Surely they should be demanding lower chip temps.. or is it just a mistake in the headline?

Re:higher chip temps??? (4, Insightful)

OneSmartFellow (716217) | more than 5 years ago | (#25384017)

They are not asking for the chips to be made to produce more heat; they're demanding that Intel guarantee that the chips will still perform even if operated above the specified maximum operating environment temperature.

You would be forgiven for thinking it makes more sense for Google to insist that the chips produce less heat, rather than that they still operate at extreme temperatures, since the majority of the cooling cost comes from dissipating the chip heat from the enclosed space. But hey, it's Google; they do things a bit differently.

Re:higher chip temps??? (1)

sam_paris (919837) | more than 5 years ago | (#25384269)

Thanks both of you for putting me straight there. It makes complete sense now :) Although I still think the headline and summary could have been worded better...

Re:higher chip temps??? (2, Informative)

NixieBunny (859050) | more than 5 years ago | (#25384045)

They don't want the chips to get hotter than they already do. They want them to work correctly when they are run hotter. This allows them to use passive cooling in more climates, which saves big bucks on the cooling bill.

Re:higher chip temps??? (0)

Anonymous Coward | more than 5 years ago | (#25384157)

They want chips that can survive under hotter conditions so they spend less on cooling - i.e., they run the room at 80 degrees and know that the processor can handle the 80 degrees.

Typically, we would expect that you want a chip that runs at 65 degrees instead of 70 degrees while delivering the same performance - reducing total heat output.

What Google is looking for is chips that perform hotter without failure, so they can cut the cost of cooling their data centers.

Hope that helps...

They should also consider... (1)

Muad'Dave (255648) | more than 5 years ago | (#25383879)

...using more effective means to extract the waste heat from the processors they already have. Lower thermal resistance equals lower operating temperatures. With as many boxes as they have, maybe they should invest in a large-scale refrigerant-based cooling system with tiny heat exchangers for each CPU. I envision overhead refrigerant lines with taps coming down into each cabinet to serve the procs in it. Each server could have quick-disconnect lines on the back for easy removal. No need to cool all that air, and you'd get very good thermal resistance figures.

Higher temp = higher power (1)

hpa (7948) | more than 5 years ago | (#25384207)

I have to say I am a bit surprised. A CPU operating at a higher temperature will draw more power and thus produce more heat at the same performance point. This is one of many temperature dependencies in silicon circuits. Now, it's possible that Google's demand is that they can run at the same speed and power at the higher temperature, which means in reality they are underclocking a faster chip to run it warmer.
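The temperature dependence hpa mentions is dominated by subthreshold leakage, which grows roughly exponentially with temperature. A toy model of that effect (the "doubles every 10 C" interval and the 10 W baseline are illustrative assumptions, not measured Intel specs; real silicon varies by process):

```python
# Toy leakage model: subthreshold leakage power grows roughly exponentially
# with temperature. The doubling interval and baseline are assumptions for
# illustration only, not measured values for any real part.
def leakage_power_w(base_leakage_w, temp_c, base_temp_c=50, doubling_c=10):
    return base_leakage_w * 2 ** ((temp_c - base_temp_c) / doubling_c)

# A chip leaking 10 W at 50 C, run just 5 C hotter:
print(leakage_power_w(10, 55))  # ~14.1 W: noticeably more heat from a 5 C bump
```

Under this kind of model, a 5-degree bump isn't free: it feeds back as extra dissipation, which is presumably part of what any Google/Intel negotiation would have to account for.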