
Making Your Datacenter Into Less of a Rabid Zombie Power Hog

timothy posted about 10 months ago | from the watch-for-the-foaming-mouths dept.


Nerval's Lobster writes "Despite the growing list of innovative (and sometimes expensive) adaptations designed to transform datacenters into slightly-less-active power gluttons, the most effective ways to make datacenters more efficient are also the most obvious, according to researchers from Stanford, Berkeley and Northwestern. Using power-efficient hardware, turning power down (or off) when systems aren't running at high loads, and making sure air-cooling systems are pointed at hot IT equipment (rather than in a random direction) can all do far more than fancier methods for cutting datacenter power, according to Jonathan Koomey, a Stanford researcher who has been instrumental in making power use a hot topic in IT. Many of the most-publicized advances in building "green" datacenters during the past five years have focused on efforts to buy datacenter power from sources with very low carbon footprints. But "green" energy buying didn't match the impact of two very basic, obvious things: the overall energy efficiency of the individual pieces of hardware installed in a datacenter, and the level of efficiency with which those systems were configured and managed, Koomey explained in a blog post published in conjunction with his and his co-authors' paper on the subject in Nature Climate Change. (The full paper is behind a paywall, but Koomey offered to distribute copies free to those contacting him via his personal blog.)"


52 comments

Less powerconsumption = less cooling (3, Interesting)

plopez (54068) | about 10 months ago | (#44144325)

I've pointed this out a number of times, but people do not seem to "get it": if you can reduce your power consumption, there is less waste heat and therefore less cooling cost. Note too that if your applications do lots of disk reads/writes and network I/O with the CPU in a wait state, you can save power by using lower-end gear, e.g. laptop chips and slower memory vs. full-blown "Enterprise" hardware.

Re:Less powerconsumption = less cooling (4, Interesting)

TeknoHog (164938) | about 10 months ago | (#44144351)

My thoughts exactly. My first web server in 1998 was a laptop, and ever since I have wondered why 'desktop' components waste so much power compared to 'mobile' counterparts. Since 2003 my 'desktop' machines have been built with 'mobile' CPUs (Mini-ITX et al) and I keep asking this: why should a machine waste power willy-nilly just because it is plugged in? I also like the quiet of passively cooled CPUs (of course, other components like PSUs can be passively cooled).

Re:Less powerconsumption = less cooling (1)

K. S. Kyosuke (729550) | about 10 months ago | (#44144401)

Mere power consumption of individual IC packages probably isn't the overwhelming concern; TCO is.

Re: Less powerconsumption = less cooling (2)

jd2112 (1535857) | about 10 months ago | (#44145095)

If you can run on a low-power system as you describe, it is probably a good candidate for virtualization.

Re:Less powerconsumption = less cooling (1)

evilviper (135110) | about 10 months ago | (#44148895)

I have wondered why 'desktop' components waste so much power compared to 'mobile' counterparts

Performance and price...

Laptops stay a few generations behind desktops, in terms of memory and bus speeds, memory and cache sizes, CPU speeds, etc.

In addition, looking at AMD: they actually made their "mobile" CPUs by testing cores after manufacturing to see which ones could handle low-voltage operation without errors. Those were routed to the mobile CPU line, while the rest were directed to the desktop line. In short, tighter tolerances are needed for lower-voltage mobile operation. Without the desktop and server markets, where power consumption matters far less, prices for mobile components would be ASTRONOMICAL. And since the CPU is often the most expensive component in a desktop system, that huge price would be felt very, very directly.

Efficiency has trade-offs... As long as grid power stays reasonably inexpensive, the extra power consumption and cooling costs are a small drop in the bucket compared to the far higher component-manufacturing costs.

Re:Less powerconsumption = less cooling (1)

plopez (54068) | about 10 months ago | (#44149017)

"Laptops stay a few generations behind desktops, in terms of memory and bus speeds, memory and cache sizes, CPU speeds, etc."
You still don't get it. My question is: do you need the fastest "state of the art" hardware? If the answer is no, go with the lower-end gear. Who cares how fast a bus is, as long as it is as fast as it MUST be.

Re:Less powerconsumption = less cooling (1)

antdude (79039) | about 10 months ago | (#44150691)

What about performance? Desktops are faster, e.g. for gaming. :/

Re:Less powerconsumption = less cooling (4, Funny)

sjames (1099) | about 10 months ago | (#44144423)

It works the other way too. If you don't cool the servers at all, eventually they stop consuming power ;-)

Re:Less powerconsumption = less cooling (2)

ShanghaiBill (739463) | about 10 months ago | (#44144805)

It works the other way too. If you don't cool the servers at all, eventually they stop consuming power ;-)

Eventually. But not as soon as you might think. Modern servers can tolerate heat fairly well, and many data centers waste money on excessive cooling. As long as you are within the temp spec, there is little evidence that you gain reliability by additional cooling. Google has published data [google.com] on the reliability of hundreds of thousands of disk drives. They found that the reliability was actually better at the high end of the temperature range. This is one reason that Google runs "hot" datacenters today.

Re:Less powerconsumption = less cooling (1)

Narcocide (102829) | about 10 months ago | (#44145273)

While that *may* be true for some hardware (and I only say "may" because Google claims it is true, though I'm fairly certain their accounting of this has fundamental flaws), I can personally verify that *temperature change*, in the form of increases in temperature even within the stated hardware specifications, has a *HUGE* impact on the longevity of most consumer-grade hardware.

Re:Less powerconsumption = less cooling (1)

ShanghaiBill (739463) | about 10 months ago | (#44145437)

I can personally verify that *temperature change* in the form of increases in temperature, even within the stated hardware specifications has a *HUGE* impact on longevity

So I can believe Google's peer reviewed and published study of hundreds of thousands of devices, or I can accept your "personal verification". Wow, this is a tough decision.

Re:Less powerconsumption = less cooling (0)

Anonymous Coward | about 10 months ago | (#44146065)

Peer-reviewed? Bing reviews studies?

Re:Less powerconsumption = less cooling (1)

Narcocide (102829) | about 10 months ago | (#44146471)

Well, think of it this way. *I* personally have absolutely no vested interest in increasing the frequency with which your own business's in-house hardware infrastructure suffers failures. Google on the other hand...

Re:Less powerconsumption = less cooling (1)

Cramer (69040) | about 10 months ago | (#44150141)

It's not "peer reviewed". At best, it's "peer read". Google's data is only 100% valid for GOOGLE. It's their data on their infrastructure. Unless you happen to have a Google Datacenter, the results aren't that valuable to you.

I keep my DC (~800 sq. ft.) at 68F, mostly because I prefer to work in a cool space (well, cool while I'm in the cold aisle), but also because of cooling capacity: if the HVAC is off, how long does it take to reach 105F? From 68, about 15 minutes; from 82, a few minutes. However rare that may be, it's still non-zero.

I've said this before: temperature stability is the important part; the actual setpoint is of little importance. Google has been doing this on a large enough scale, for long enough, to have the data to tweak their setpoint. (I still say it's voodoo science, as they're aiming at a very mobile target -- hard drive longevity.)

Re:Less powerconsumption = less cooling (1)

sjames (1099) | about 10 months ago | (#44145451)

Absolutely. When I ran a small datacenter, I instituted the change from 68F to 75F as a standard. In spite of predictions of disaster, the only thing that changed is the power bill went down.

Re:Less powerconsumption = less cooling (1)

ShanghaiBill (739463) | about 10 months ago | (#44145679)

Absolutely. When I ran a small datacenter, I instituted the change from 68F to 75F as a standard. In spite of predictions of disaster, the only thing that changed is the power bill went down.

If you have good airflow, you can go much higher than that. The critical factor is the temp of the components, not the room temp. Dell will warranty their equipment up to 115F (45C). Google runs some of their datacenters at 80F, and others at up to 95F.

There are some drawbacks to "hot" datacenters. They are less pleasant for humans, and there is less thermal cushion in the event of a cooling system failure. But many datacenters avoid that problem by replacing chillers with 100% outside ambient temp air cooling. That wouldn't work in Las Vegas (high of 115F today), but most places it is viable.

Re:Less powerconsumption = less cooling (1)

sjames (1099) | about 10 months ago | (#44146991)

Sure, I suspect we could have gone hotter then, even with a datacenter designed for 68F, but it was a bit cutting edge at the time just to get to 75 and we would have had to alter the airflow.

Re:Less powerconsumption = less cooling (1)

Cramer (69040) | about 10 months ago | (#44150225)

This suggests your DC may be rather poorly insulated.

I don't know your environment (pre- or post-change), so I cannot say what that +7F did to the thermodynamics of your HVAC system: +7F room, +20F servers, +50F exhaust? (A greater delta-T means faster / more efficient energy transfer.) For example, if you're in AZ and your heat-rejection (cooling) coils are only reaching 120F, they aren't going to be very good at dumping heat into >100F air. (This is where water cooling should be used.)

(Note: I had that "talk" with the nuts who built our last office DC... they placed a system on the roof rated to 95F... next to RDU. Six months out of the year it was nearly useless. The system we've inherited at the current office is much better; it uses the building's chilled water during the day.)

Re:Less powerconsumption = less cooling (1)

sjames (1099) | about 10 months ago | (#44150677)

I don't see why it would suggest particularly poor insulation. Any time you move a room's temperature closer to the outside temp, you can expect the bills to go down a bit. In our case it meant that the room was a bit above the outside temp for larger parts of the day which makes a huge difference, especially when you're using outside air when conditions are favorable.

Re:Less powerconsumption = less cooling (1)

Cramer (69040) | about 10 months ago | (#44151373)

It doesn't have to be particularly poor, just not sufficient for a data center. You want the heat load in the room to be as near 100% equipment as possible -- no leaks from outside the room. You also want the cold to stay in the room -- i.e. not blowing through cracks (or holes) in the floor, wall seams, through doors, etc. It's fairly simple to test the efficiency of the room: turn off all load, and watch how much the HVAC has to work to keep it at the setpoint.

As I said, I don't know the specifics of your situation. It could simply be that the small increase inside created a significantly higher temp in the outside loop -- thus significantly improving your heat rejection. Of course, it could also be the result of the DC matching the surrounding office temp, so the HVAC isn't working to cool the rest of the building. You knowingly changed one out of a multitude of variables and concluded it "good" without a thought to the other variables. Honestly, you got the result you were looking for, so "why" becomes less important.

Re:Less powerconsumption = less cooling (1)

citizenr (871508) | about 10 months ago | (#44146323)

if your applications use lots of disk reads/writes and network IO with the cpu in a waiting state then you can save power by using a lower end gear.

or you can add a RamSan/FlashSystem and enjoy the 21st century.

Re:Less powerconsumption = less cooling (0)

Anonymous Coward | about 10 months ago | (#44147543)

In fact, in most tasks the disk is the bottleneck. Invest in fast disks and memory; get cooler, slower processors.

Re:Less powerconsumption = less cooling (2)

evilviper (135110) | about 10 months ago | (#44148859)

you can save power by using a lower end gear. E.g. laptop chips and slower memory vs full blown "Enterprise" hardware.

"Enterprise" hardware doesn't mean the fastest... In fact it's the opposite, as enterprise hardware has longer development cycles.

Enterprise gear means things like ECC memory, BMCs monitoring server health, HDDs that won't freeze up for several minutes retrying a single unreadable-block error, etc. And if you feel like skimping on it, you'll end up paying much more in the long run: a single flipped bit in your database can cost a company obscene amounts of money; RAID arrays will report disk failures far more often; you'll be paying for remote hands and waiting for on-site access far more often; and you'll have no notification nor insight into your servers as hardware silently fails, rather than alerting and halting to protect your data.

And scaling up (1)

alen (225700) | about 10 months ago | (#44144395)

Last few years we went from 30-some database servers to a dozen at most.

Modern hardware is insanely powerful, and you get a huge bang for the buck consolidating a few servers onto a single machine.

Re:And scaling up (2)

mlts (1038732) | about 10 months ago | (#44144781)

This. With the availability and reliability of SANs, virtual machine software, hypervisors, rack/blades, and such, there are a lot of tasks which are best moved to a rack/blades/SAN/VM architecture. Even high/extreme I/O can be handled by virtualization on POWER and SPARC platforms.

These days, for most tasks [1], the question is: why not a rack/blade solution? A half-rack with a blade enclosure and a drive array can often do more than 2-3 racks of 1U machines.

Security separation is getting better and better. Even Microsoft is getting a solution out there that is good enough for prime time with Hyper-V in Windows Server 2012. IBM has had top-notch separation since the days of VMs on mainframes in the 1970s; Oracle as well.

To go "green", if a data center hasn't already gone with P2V, they should. There are always exceptions, but this is something to be considered.

This also helps with the next buzzword I'm hearing bandied about from the PHB types -- the SDDC, or software-defined data center.

[1]: Ones that do not require specialized high-speed hardware like professional video capture. Of course, there are other tasks that require separation due to heavy I/O, such as NetBackup servers. Then there are servers that have to be separated for security or management reasons. For example, an SDMC for POWER boxes should be on discrete hardware for security reasons. Similarly with the VM for vCenter management, so it can be powered on and used even if the main cluster is inoperative.

Of course, having cheap, discrete hardware for large scale operations like Facebook makes rack/blades not as useful, but most data centers will benefit from the P2V move.

Re:And scaling up (2)

evilviper (135110) | about 10 months ago | (#44148913)

These days, for most tasks [1], the question is why not a rack/blade solution. A half-rack with a blade enclosure and a drive array oftentimes can do more than 2-3 racks of 1U machines.

This is complete nonsense. Blade servers are more expensive, and CAN'T outperform simple 1U servers. 1U servers are packed to the gills with the hottest components that can be kept cool given the amount of space they have to work with. Blade servers, or any other design, can't possibly pack things more densely than 1U servers have been.

And Blades can't compete with virtualization either. It's just not remotely as flexible. You can't oversubscribe the memory of a Blade server, since that memory is physically dedicated to the single OS running on it. You can't oversubscribe CPU, nor boost the CPU when you need more performance.

You get more expense, with less performance, and less flexibility. Blades need to die off already...

what about cuting down all the ac to dc to ac (2)

Joe_Dragon (2206452) | about 10 months ago | (#44144449)

Why can't there BE UPS with ATX DC out?

Re:what about cuting down all the ac to dc to ac (1)

Anonymous Coward | about 10 months ago | (#44144483)

Because you, in your infinite laziness, have chosen not to manufacture and sell us one.

Re:what about cuting down all the ac to dc to ac (0)

Anonymous Coward | about 10 months ago | (#44144501)

Why can't there BE UPS with ATX DC out?

surely you've heard of picopsu and clones

Re:what about cuting down all the ac to dc to ac (1)

gl4ss (559668) | about 10 months ago | (#44144523)

Why can't there BE UPS with ATX DC out?

surely there are DC UPS systems? or what do you call a DC system and DC->ATX PSUs, if not just that?

Re:what about cuting down all the ac to dc to ac (1)

CyprusBlue113 (1294000) | about 10 months ago | (#44144535)

Why can't there BE UPS with ATX DC out?

Because there's this thing called resistance...

Re:what about cuting down all the ac to dc to ac (1)

thegarbz (1787294) | about 10 months ago | (#44145921)

Easily solved by cable size. Run a busbar system to each rack and tap off those.

Re:what about cuting down all the ac to dc to ac (1)

mlts (1038732) | about 10 months ago | (#44144691)

I've wondered why NEBS 48-volt systems are not more common. 48 volts is high enough that it doesn't need the big fat wires that 12VDC high-amperage connections do, and computers would just need a DC-DC converter to convert the incoming voltage to the 12 and 5 volt rail voltages.

It would be nice to see a standard 48-volt connector, something other than the one used for phantom power to mics. Preferably a connector with a built-in high-amp switch (DC has no zero crossings, so DC switches have to be beefy enough to handle connects and disconnects at full amps).

A DC-DC connector is less fussy than a rectifier, and without having to re-invert to AC (which has to be done by online UPS models), it is more efficient as well.

Re:what about cuting down all the ac to dc to ac (2)

petermgreen (876956) | about 10 months ago | (#44144907)

48 volts is high enough that is doesn't need the big fat wires that 12VDC high-amperage connections do

48V, while not as bad as 12V, still means much thicker cables and/or higher cable losses (most likely some combination of both) than normal mains voltages.

Servers at full load can draw a heck of a lot of power. 500W is not unreasonable for a beefy 1U server; put 42 of those in a rack and you are looking at 21 kW.

Feed those servers with a 240V single-phase supply and you are looking at about 88A. That is high, but manageable with the sort of cable sizes you can find at most electrical wholesalers.
Feed those servers with a 240V/415V three-phase-plus-neutral system and you are looking at about 29A in each phase conductor (and ideally nothing in the neutral). Easy to deal with.
Feed them off a telco-style -48V DC system and you are looking at about 438A. That is getting into the territory of busbars.
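The three feeder currents above follow from basic I = P / V arithmetic (with a sqrt(3) factor for the three-phase case); a quick sketch, with figures rounded as in the post:

```python
import math

# Quick check of the feeder currents quoted above for a 21 kW rack
# (42 x 500 W servers). Figures are approximate.
power_w = 42 * 500  # 21 kW

i_single = power_w / 240                   # single-phase 240V AC
i_three = power_w / (math.sqrt(3) * 415)   # three-phase, per-phase current
i_dc = power_w / 48                        # telco-style -48V DC

print(f"{i_single:.0f} A single-phase, {i_three:.0f} A/phase, {i_dc:.0f} A DC")
```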

and computers would just need a DC-DC converter to convert the incoming voltage to the 12 and 5 volt rail voltages.

To prevent large currents flowing where they are not wanted you'd almost certainly want isolating DC to DC converters in the computers. An isolating DC-DC converter isn't much different from an AC-DC power supply (the main difference is that the rectifier and power factor correction stuff can be eliminated).

Re:what about cuting down all the ac to dc to ac (1)

thegarbz (1787294) | about 10 months ago | (#44145961)

Feed those servers with a 240V single phase supply and you are looking at about 88A. That is high but managable with the sort of cable sizes you can find at most electrical wholesalers.

This is a problem if you're running commodity solutions with wires everywhere. If you're going to design a DC-only datacentre, you'd likely run very high-current busbars over the aisle, and then tap busses onto each individual rack. Cables, while flexible (in more ways than one), are not really ideal from an engineering point of view.

Re:what about cuting down all the ac to dc to ac (1)

petermgreen (876956) | about 10 months ago | (#44152395)

Comparing 48V DC to 240/415V TP+N AC.

For 48V DC you have

Higher wiring costs (both materials and labour).
Higher end system costs.
More restricted choice of end systems.
Most likely higher resistive losses in wiring.
Greater difficulty installing and removing stuff*.
Higher losses in the primary side of the isolating switched mode converter in your end system.

For 240/415V TP+N AC you have.

Losses from inverters in UPS systems and rectifiers in end devices.
Vendor lock-in when parallel-running UPS units.

* A new connector as proposed in the GP post could help to some extent with this, but the characteristics of DC, the higher currents involved, and the fact that it would be a niche product mean that even if such a connector were standardised, it would be a lot bulkier and more expensive than the connectors used for hooking up servers to AC supplies.

Re:what about cuting down all the ac to dc to ac (1)

thegarbz (1787294) | about 10 months ago | (#44153009)

Part of your list of downsides takes double credit: you don't have higher resistive losses if your wiring costs more, since resistive losses are the reason you buy bigger cables. But that's the key -- I wasn't proposing a wire-based solution. Busbars are used in high-current applications specifically because of the insane cost of wiring.

Yes, that makes your system harder to implement, but that does not equate to difficulty in installation/removal. That equates to an engineering design problem, and several houses have so far shown that there are real benefits to designing your own racks from the ground up.

Of course you have restricted choice, but what if people took it up? Do you still have restricted choice? When the first iPhone came out how restrictive were the choices on quality apps?

There is a real benefit to doing this, and if possible doing it your own way. There are benefits to a DC-only datacentre. Mind you, there are workarounds to the problems too, like Google's solution of co-locating backup power on the motherboard, doing away with the UPS altogether.

Re:what about cuting down all the ac to dc to ac (1)

evilviper (135110) | about 10 months ago | (#44148831)

It's an old myth that AC-DC-AC conversion is a big loss; the % losses are in the single digits. Meanwhile, running a DC-powered datacenter is a HUGE hassle.

The idea started way the hell back before "80 Plus" power supplies, when most PSUs were 60% efficient but DC power supplies were more commonly 80%+ efficient. Now that common AC PSUs are much better, the DC advantages are long gone. There was also another class of losses from intermediate power distribution, but those can be cleaned up as well without converting to DC.

And there are options to eliminate some of that DC-AC conversion inefficiency as well. Google went with on-board batteries, so the PSU is the only AC-DC conversion step and power stays DC the rest of the way. Facebook and several others following the OpenCompute model instead went with a bank of batteries for every two server racks, and power supplies with both AC and DC inputs; the AC feed isn't UPS-backed, but the supplies start drawing DC power from the battery banks when needed, eliminating the DC-AC conversion.

And cheaper UPSes (e.g. SmartUPS) are actually more efficient, since they don't do continuous double conversion, but instead pass through utility power normally and switch over to DC inversion from the battery only in the event of a power failure.
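The single-digit-loss claim above can be sketched by chaining per-stage efficiencies; overall efficiency is just the product of the stages. The percentages below are illustrative assumptions, not measured or vendor-quoted figures:

```python
# Overall efficiency of a power chain is the product of its stages.
def chain(*stages):
    total = 1.0
    for eff in stages:
        total *= eff
    return total

# Online double-conversion UPS (AC->DC->AC) feeding a modern PSU
# (assumed 95% per conversion stage, 92% PSU)
double_conv = chain(0.95, 0.95, 0.92)

# Line-interactive UPS passing utility AC through (assumed 99%), same PSU
pass_through = chain(0.99, 0.92)

print(f"double conversion: {double_conv:.0%}, pass-through: {pass_through:.0%}")
```

Under these assumed numbers the double-conversion path gives up only a handful of percentage points relative to pass-through, which is the poster's point.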

Match hardware to the workload. (0)

Anonymous Coward | about 10 months ago | (#44144459)

Most of our workloads are light on CPU and heavy on RAM in relative terms. Our "building block" node is a low-power CPU system which takes up to 1TB of RAM, costs less in capital terms than Dell's closest system, and uses about half the energy. We keep them for 7 years under maintenance and run them "hot" in terms of utilisation, only expanding the estate when their software predicts a resource pinch in the next 6-8 weeks -- enough time for the boss to approve and me to install.

AG

Re:Match hardware to the workload. (1)

mlts (1038732) | about 10 months ago | (#44145249)

What I've wondered about is using servers designed for power requirements at different times.

For example, server or blade "A" runs an Intel Atom and is made to be slow but energy saving. Server "B" runs much faster, but takes more electricity.

Add a SAN, cluster filesystems, and something like vMotion, and VMs that see heavy usage during the day can be moved to the higher-speed servers as load permits. Then, come evening, they get moved back to the slower processors, and the faster servers suspended or powered down. Some phones do this, with separate low-speed and high-speed cores, moving tasks to a faster core as needed.
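The day/night shuffling described above could be sketched as a simple tier selector; the host names are hypothetical, and a real setup would drive an actual migration API (vMotion or similar) rather than just picking a list:

```python
from datetime import datetime

# Hypothetical host pools for the two server classes described above.
FAST_HOSTS = ["fast-01", "fast-02"]   # high-clock, high-power servers
SLOW_HOSTS = ["atom-01", "atom-02"]   # slow, energy-saving servers

def target_tier(now=None):
    """Return the host pool VMs should live on at this time of day."""
    hour = (now or datetime.now()).hour
    # Daytime load goes to the fast tier; overnight, consolidate onto
    # the low-power tier so the fast servers can be suspended.
    return FAST_HOSTS if 8 <= hour < 20 else SLOW_HOSTS
```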

Throwing hardware at problems.. (1)

digitalhermit (113459) | about 10 months ago | (#44144569)

So for years I've been hearing that it's much cheaper to throw faster hardware at a problem than to tune an application or a server. It's finally coming back to bite us. Imagine if tuning had gained a 10% or 15% improvement: how much power and how many millions of dollars does that translate to?
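A back-of-envelope sketch of that question, using purely hypothetical inputs (a 1 MW IT load, a PUE of 1.5, and $0.10/kWh):

```python
# Hypothetical datacenter figures -- adjust for your own facility.
it_load_kw = 1000.0       # 1 MW of IT load
pue = 1.5                 # total facility power / IT power
dollars_per_kwh = 0.10
hours_per_year = 24 * 365

annual_cost = it_load_kw * pue * dollars_per_kwh * hours_per_year
tuning_savings = annual_cost * 0.10  # a 10% efficiency gain from tuning

print(f"annual power bill: ${annual_cost:,.0f}; "
      f"10% tuning gain: ${tuning_savings:,.0f}/yr")
```

At that scale a 10% gain is worth well over $100k a year, which frames the labor-cost objection in the reply below.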

Re:Throwing hardware at problems.. (1)

sunking2 (521698) | about 10 months ago | (#44144647)

Compared to the labor rates to do such a thing? Electricity is cheap, and you seldom have to justify budgeting for it.

Re:Throwing hardware at problems.. (0)

Anonymous Coward | about 10 months ago | (#44145703)

Tuning is a one-time cost.
You keep buying electricity.
Tuning is probably worth it.

zombie servers (0)

Anonymous Coward | about 10 months ago | (#44144777)

Worked on a team with a huge DC install base; our biggest problem was that ops was so busy rolling out new-capacity SKUs or upgrading current SKUs that there was never any time to decommission old SKUs. Management was too afraid to just shut them down, so thousands of old-SKU machines just sat there doing nothing but sucking power. Talking to others in the industry, it sounds like this is a common issue.

Location Location Location (1)

lobiusmoop (305328) | about 10 months ago | (#44144831)

Given that data centers are basically big electric heaters doing some number crunching along the way, it might be sensible to put them in cold climates rather than hot ones, so that a) it's easier to dump all the heat generated and b) that heat has some practical uses.

Re:Location Location Location (1)

mysidia (191772) | about 10 months ago | (#44145175)

might be sensible to put them in cold climates rather than hot

People outside cold climates need servers geographically nearby too... a datacenter that is far away will have high latency: so far no one's found a way around the speed-of-light limitation.

How about burying datacenters, though -- under the ground, where the temperature is more uniform, and where you can also bury huge copper arrays and put your servers in thermal contact with the thermally conductive arrays to conduct the heat away... using the earth itself as a heat sink?

Switching off (1)

manu0601 (2221348) | about 10 months ago | (#44145223)

Switching the hardware off and on will wear it out for various reasons: power supplies are more likely to fail when switching on, and hard disks' mechanical parts suffer from hot/cold cycles. This means switching off for power saving causes hardware to be replaced more often, which also has an environmental cost. I did not read TFA, but from the summary I understand that the benefit outweighs the cost -- is that correct?

And how about software efficiency? (0)

Anonymous Coward | about 10 months ago | (#44145411)

How much power is thrown away by running server applications in slow, bloated interpreted languages instead of as native applications?
