
Open Compute Project Driving Open-Source Hardware Development

Soulskill posted about 2 years ago | from the hope-they-have-a-learner's-permit dept.


The Open Compute Project was launched by Facebook early last year to facilitate collaborative development of highly efficient computing infrastructure. They wanted to make datacenters cheaper and less energy-intensive to operate. Since then, many industry heavyweights have joined up, and the effects of the project are becoming evident in how companies buy hardware. "Instead of the traditional scenario in which the company's buying decisions are determined by what Original Equipment Manufacturers (OEMs) such as Dell, HP, and IBM are offering, open-sourcing hardware gives companies the ability to buy the exact hardware they want. Businesses are increasingly curious about open source, and many of them are already deploying open source tools and the cloud, [Dell's Joseph George said]. They are increasingly looking at open source software as a viable alternative to commercial options. This level of exploration is now moving to the infrastructure layer. 'Driving standards is what open source is about,' George added. With specifications at hand, it is possible to manufacture server and storage components that deliver consistent results regardless of who's in charge of production."


I dunno (1)

skipkent (1510) | about 2 years ago | (#40680825)

Off-the-shelf components are dirt cheap; slap Linux on them, run KVM, and a VM can run with exactly the specs you're looking for. The idea is cool, but I doubt the prices will get anywhere near what we're looking at now.
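
For what it's worth, here is a minimal sketch of that setup using the libvirt Python bindings (assuming libvirt-python is installed; the guest name and sizes are made up, and a real guest would also need disk and network devices):

import libvirt  # libvirt-python bindings

# Hypothetical minimal KVM guest definition. A real domain would also
# need <devices> entries for a disk and a network interface.
DOMAIN_XML = """
<domain type='kvm'>
  <name>commodity-guest</name>
  <memory unit='GiB'>8</memory>
  <vcpu>4</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
dom = conn.defineXML(DOMAIN_XML)       # register the guest definition
dom.create()                           # boot it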

Re:I dunno (1)

Anonymous Coward | about 2 years ago | (#40680873)

Even if the price is equivalent, the end result is 'outsourcing' decision making and deflecting responsibility when something goes wrong.

Re:I dunno (0)

Anonymous Coward | about 2 years ago | (#40680875)

That approach doesn't work when the cost of the energy consumed by the device exceeds the cost of customisation.

Re:I dunno (1)

silas_moeckel (234313) | about 2 years ago | (#40680933)

2x 2-socket motherboards in 1.5RU is not very dense; 4-socket, 32-DIMM 1RU server barebones are 1400 bucks. As to power, that's a long-term expense at worst and a non-factor at best. Most colo does not do metered power; it's usually 15, 20, or 30 amps at a given voltage, so the differential cost in power only becomes a cost or a saving if it pushes you one way or another across those per-rack thresholds.
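
To make the threshold point concrete, a rough sketch (the server wattage and the 80% breaker derating are illustrative assumptions):

import math

def servers_per_circuit(server_watts, amps=30, volts=208, derate=0.8):
    # Breakers are conventionally loaded to 80% of rated amperage.
    usable_watts = amps * volts * derate  # 4992 W for a 30A/208V circuit
    return math.floor(usable_watts / server_watts)

# A 10% per-server power saving only matters if it changes the count:
print(servers_per_circuit(450))  # 11 servers fit
print(servers_per_circuit(405))  # 12 -- the saving crossed a threshold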

Re:I dunno (0)

Anonymous Coward | about a year ago | (#40683483)

Can someone confirm this density? It does seem disappointing. Still, you haven't entirely measured it fairly: you have to measure density per floor area, not per rack unit. Your 1RU server will be a lot deeper than the Open Compute 1.5RU servers seem to be, which means they can fit more rows of servers into the same area. However, that also means more walkways between the racks. I wonder if they could do interesting things like two rows back to back with liquid cooling in the middle (so you don't need as much air space). That would seem to be a major benefit of having standardised components.
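
A quick sketch of that per-floor-area comparison (every dimension is an illustrative assumption, not a measured Open Compute spec; the 0.75U per node follows the parent's "2x 2-socket boards in 1.5RU" reading):

def servers_per_m2(rack_u, u_per_server, depth_m, aisle_m=1.2, width_m=0.6):
    # Each rack's footprint includes a share of the aisle in front of it.
    footprint_m2 = width_m * (depth_m + aisle_m)
    return int(rack_u / u_per_server) / footprint_m2

# Deep commodity 1U servers vs shallower trays at two boards per 1.5U:
print(round(servers_per_m2(42, 1.0, depth_m=1.2), 1))   # 29.2 servers/m^2
print(round(servers_per_m2(42, 0.75, depth_m=0.9), 1))  # 44.4 servers/m^2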

"Off-the-shelf" may not be the best choice (3, Insightful)

Taco Cowboy (5327) | about 2 years ago | (#40680941)

Off-the-shelf components are dirt cheap; slap Linux on them

True, off-the-shelf components are comparatively cheap

True, Linux, in principle, is free, as in Free Beer

But that does not mean the combination of off-the-shelf components and Linux is the best there is

A proprietary hardware/software combo may carry a very high price tag, but when we are talking about enterprise-level computing, or computing at the datacenter level, there are times when proprietary equipment makes more sense than off-the-shelf components, in terms of stability, performance, and/or energy efficiency

I am all for open source, but my own experience in the computing scene, especially in large-scale deployment, tells me that the best option might not be the cheapest one
 

Re:"Off-the-shelf" may not be the best choice (0)

Anonymous Coward | about 2 years ago | (#40681813)

You can build a campus mesh network out of Linksys running Linux every 2 years, or you can build it with Cisco Aironet every 10.

Re:"Off-the-shelf" may not be the best choice (0)

Anonymous Coward | about 2 years ago | (#40682147)

I've always believed that the best option will invariably be the latest tech; the cheapest option is yesterday's.

Your budget will determine what you get.

Re:"Off-the-shelf" may not be the best choice (1)

Johnny Mnemonic (176043) | about 2 years ago | (#40682227)

You don't think Facebook is "enterprise level computing"?

Re:"Off-the-shelf" may not be the best choice (0)

Anonymous Coward | about a year ago | (#40684715)

Not in the sense that they care about the specific maintenance of their hardware. Facebook has their Ops guys fix things. Those same guys know the OS inside and out and are willing to think. Many enterprise Ops teams want to make a call to the OEM and have them fix it. They are basically liaisons between the company and the hardware world. Many companies view this model as better than, or preferable to, the startup/Facebook model.

Re:"Off-the-shelf" may not be the best choice (1)

techno-vampire (666512) | about 2 years ago | (#40682473)

But that does not mean the combination of off-the-shelf components and Linux is the best there is

Especially if you pick the cheapest components you can find. And, of course, you need to customize your server installations properly. Installing your favorite distro from a LiveCD (designed for a workstation, not a server) and then tacking on whatever programs you need is probably not the best way to go. Still, if you pick your components with care and optimize your installation for what you need, you should be able to end up with something that's far, far better for you than anything bought off the shelf.

Re:"Off-the-shelf" may not be the best choice (0)

Anonymous Coward | about a year ago | (#40683379)

I am all for open source, but my own experience in the computing scene, especially in large-scale deployment, tells me that the best option might not be the cheapest one

ROFLMAO. 462 of the world's 500 fastest computers are running on Linux.

But I guess you have more experience than the companies building the world's fastest supercomputers.

Re:"Off-the-shelf" may not be the best choice (0)

Anonymous Coward | about a year ago | (#40683559)

"Enterprise level" computing is changing. The best system for a job will always be application specific, but if the application is carefully designed to work over commodity hardware without showing failures then there may be no problem running it on a completely cheap system. Note the way that Google, running on the cheapest of the cheap, is much more reliable than the Microsoft's systems like Danger and Office356 running on "Enterprise" hardware or even the Bank of Ulster running on top end high reliability hardware. In some cases the right tool is a fully paid up Red Hat Enterprise Linux, with top level support running on certified and tested hardware. This is a completely different much more serious beast than a free Ubuntu server downloaded off the internet running on an old white box in your bedroom. It's perfect that that enterprise level stability mixes with cheap bottom end hardware so you don't even have to change binary when youur application design changes to allow you to migrate from a high stability server to a much cheaper one.

When you have the right setup, it's important to remember that zero is infinitely better than n, whilst m is only n/m times better than n. In other words: if you have a cheap license for a server, you still have all the costs associated with looking after licensing. If you use a completely free Linux system, you can dynamically run as many copies as you want without even thinking about it. That flexibility and simplicity is really useful in a large enterprise setting, where even knowing how many servers you have at any given moment may be difficult.
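
The zero-versus-n point in miniature (the overhead figure is a made-up placeholder for audits, renewals, and compliance work):

def yearly_license_burden(n_servers, unit_cost, tracking_overhead=20_000):
    # Any nonzero license price drags in a fixed bookkeeping cost;
    # a genuinely free system has neither term.
    if unit_cost == 0:
        return 0
    return n_servers * unit_cost + tracking_overhead

print(yearly_license_burden(1000, 0))   # 0      -- free Linux
print(yearly_license_burden(1000, 50))  # 70000  -- cheap license, same overhead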

Re:"Off-the-shelf" may not be the best choice (0)

Anonymous Coward | about a year ago | (#40684531)

I've encountered this argument for years although it usually is applied to Brand X versus Brand Y.

In fact, regardless of your choice, you still need to purchase and manage your assets intelligently.

Everything has limitations and the limitations in commercial product can be just as pronounced.

If open source is a fit, use it, especially at the enterprise level where the resources to overcome the limitations are generally more abundant.

Re:I dunno (1)

DuckDodgers (541817) | about 2 years ago | (#40681395)

I think this is going to happen, but it will start at the big companies first and reach low end consumer hardware later. From the OpenCompute project About page, http://opencompute.org/about/ [opencompute.org] "The result is that our Prineville data center uses 38 percent less energy to do the same work as Facebook’s existing facilities, while costing 24 percent less."

Energy costs are a big concern at the major hosting, social networking, and search companies. Facebook, Google, Microsoft, IBM, Amazon, Ebay, etc... have millions of servers, so they can save at a minimum tens of millions of dollars in energy costs per year by switching their servers to more efficient designs. Eventually so many big companies will be buying this kind of hardware that it will show up on Amazon and Newegg and become the small business and home server norm too.
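
Back-of-envelope, to see where "tens of millions" comes from (fleet size, server wattage, and electricity price are assumptions; only the 38% figure comes from the OpenCompute page quoted above):

def yearly_energy_savings(servers, watts_each, reduction, usd_per_kwh=0.07):
    kwh_per_year = servers * watts_each * 8760 / 1000  # 8760 hours per year
    return kwh_per_year * reduction * usd_per_kwh

# A million 300 W servers using 38% less energy:
print(f"${yearly_energy_savings(1_000_000, 300, 0.38):,.0f}")  # $69,904,800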

On the other hand, while I'm pleased about the entire development, I think "open source hardware" is a misleading description. The external dimensions are open specifications, as are the component layouts, the power supply, the cooling system, etc., but I haven't seen anything from OpenCompute to indicate that the network cards, the processors, the graphics chips, the machine BIOS, and so on are open source and free for anyone to reproduce at will. That would be true open-source hardware, and at the rate things are going I would be surprised to see an open source processor design that matches a 2010 Core i3 in my lifetime.

Re:I dunno (0)

Anonymous Coward | about 2 years ago | (#40681627)

Well, there is always OpenCores.org

Re:I dunno (1)

DuckDodgers (541817) | about a year ago | (#40685477)

I am very excited about OpenCores.org. But I'm under the impression that their very best designs are many generations behind the latest ARM chips, let alone Intel chips, for processing power and efficiency.

Having a small group of software developers build something like Rails or uTorrent or VLC is impressive. Having a medium size group of software developers build something like the Linux kernel is incredibly impressive. Having a bunch of volunteers put together something equivalent to an Intel Core 2 processor? That may not be possible.

I would love to be totally wrong about this. I don't know anything about computer processor design processes. But I figure if Intel spends many billions of dollars per year in research and development costs, you would need a large number of the brightest human beings on the planet working in a coordinated fashion to match their designs in OpenCores or something similar.

Re:I dunno (1)

mcgrew (92797) | about a year ago | (#40690339)

TFS: "Businesses are increasingly more curious about open source"

Citation needed; if only that were true. Yeah, Apache and Linux-based servers, but little to nothing else.

Video (1)

DaMattster (977781) | about 2 years ago | (#40680989)

This might be off-topic, but as part of an open compute platform I would love to see an open source video camera that records in VP8. It would encourage more independent artists, because there are no royalties and unlimited use is granted. While VP8 is still patented by Google, its license is totally royalty-free and non-restrictive.

Re:Video (0)

Anonymous Coward | about 2 years ago | (#40681663)

Honestly, I don't see what a compression algorithm has to do with the camera; that should be a post-production issue.
But on the upside, here is an open source camera you could add it to via an FPGA: http://www3.elphel.com/index.php

Re:Video (2)

solidraven (1633185) | about a year ago | (#40683071)

The problem with making an open-source camera is actually the imaging sensor; the rest of the hardware is fairly trivial compared to that. Finding a well-documented high-resolution image sensor is hard at the best of times. Finding an affordable one is even harder.

Re:Video (1)

AndreyFilippov (550131) | about a year ago | (#40687293)

Building "open source" cameras for more than 10 years I would say that the codecs designed fro video distribution may be not the best for the cameras, where you have to preserve as much as possible of the original sensor data while having reasonable compression. It does not need to be completely lossless (as for editing) - the sensor (and just the physical world itself) has some noises, the the compression errors should be just below that. For the video distribution the task is different - reduce bandwidth while preserving _perceived_ video.

Too bespoke (-1)

Anonymous Coward | about 2 years ago | (#40681481)

Just look at Facebook's own 'open compute' datacenter [imgur.com]. It's designed more to look cool than to be useful; they even want to change the spacing on racks (without going metric!), just because.

I say ignore them, stick with the known and/or develop a _true_ Open server design.

Re:Too bespoke (2)

Guspaz (556486) | about 2 years ago | (#40681861)

Facebook and the Open Compute Project are not the only people doing this, nor are they the first (although they do go deeper than others, even designing the motherboards themselves).

Two years or so before Open Compute was founded, BackBlaze open-sourced their storage pods, which fit 45 hard disks in a custom 4U chassis (they gave away the 3D plans as well). Last year they updated it to version 2.0, which lets them store 135 TB in 4U for a total purchase cost (in 2011) of $7,384.
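
Those figures work out to roughly $55 per terabyte, using only the numbers quoted above:

pod_cost_usd = 7_384   # 2011 purchase cost of a pod 2.0
pod_capacity_tb = 135  # raw capacity in 4U
print(round(pod_cost_usd / pod_capacity_tb, 1))  # 54.7 USD/TB, chassis and drives included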

Inspired by that (as in, they say they were), Netflix did the "Open Connect" appliance, which is supposed to be open, although I can't actually find the design suitable for case manufacture.

Re:Too bespoke (0)

Anonymous Coward | about 2 years ago | (#40682027)

Thanks! That's probably the most informative (and genuinely interesting) response to a goatse bait post I've ever seen :)

Scale of the opportunity (2)

SpankyDaMonkey (1692874) | about a year ago | (#40683147)

Datacentres are currently estimated to consume between 1.1% and 1.5% of the total power generated across the world. That's bigger than almost any other industry out there; heck, that's bigger than quite a few medium-sized countries. At that sort of scale, even a small percentage gain in efficiency makes a huge difference to costs. A modern datacentre has a yearly power bill of $10 million or more, so if you can find a way of doing the same processing using 2% less power, those numbers suddenly start looking very good indeed once you realise the scale.
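
For scale (world generation of roughly 21,000 TWh/year circa 2012 is my assumption; the percentages and the $10 million bill are from the comment above):

world_twh_per_year = 21_000
# Datacentre share at 1.1% and 1.5% of world generation:
print(world_twh_per_year * 0.011, world_twh_per_year * 0.015)  # 231.0 315.0 TWh/yr

# A 2% saving on a $10M yearly power bill:
print(10_000_000 * 0.02)  # $200,000 per datacentre per year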

Personally I'm more excited about the newly announced ARM servers. For commodity-type workloads they have a real chance to be a game changer in the amount of electricity needed to perform a task, and when the new 64-bit cores arrive I can see them suddenly fitting a lot of strategies.

Disclaimer - I run a team responsible for all physical installations in a blue-chip datacentre

Scope of this project? (1)

unixisc (2429386) | about a year ago | (#40683309)

What is the scope of this Open Compute Project; how deep does it go? Is it starting from the very basic microprocessor level, say something like OpenRISC, and growing from there? Or is it somewhat higher-level, like taking the reference platforms provided by the likes of Intel, AMD, NVIDIA, and others and building from there? Is the project open sourced so that, in case any current manufacturer goes under, getting custom-made computers with almost identical specs will be doable? In other words, is it an attempt to build in anti-obsolescence features?

open compute is a fraud (1)

Gravis Zero (934156) | about a year ago | (#40685215)

I looked in on this project earlier this year, hoping to help, and found that it's 100% bullshit.

1) The Open Compute Project claims to want energy efficiency, but they are using AMD Opteron and Intel Xeon chips (so inefficient!) instead of something power-efficient like ARM chips. Why? In case you didn't notice, the project is COMPLETELY run by big businesses that have proprietary bullshit they want to run.

2) They also claim to be making low-cost machines. Those chips cost >$500 each, which is waaaaaay more than any ARM chip. Right now you can get quad-core ARM chips running at 1.2 GHz [arm.com], with 2.2 GHz in the near future [arm.com], so don't give me that "ARM isn't fast enough!" bullshit.
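
A crude perf-per-dollar sketch of that cost claim (every number is an illustrative assumption, and the metric ignores per-core IPC, which strongly favours the x86 parts in practice):

def perf_per_dollar(cores, ghz, unit_cost_usd):
    # Very crude throughput proxy: cores x clock / price.
    return cores * ghz / unit_cost_usd

print(round(perf_per_dollar(8, 2.5, 550), 3))  # 0.036 -- hypothetical Xeon
print(round(perf_per_dollar(4, 1.2, 25), 3))   # 0.192 -- hypothetical ARM SoC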

Open Compute is a fraud
