
Ask Slashdot: Building a Cheap Computing Cluster?

timothy posted about a year and a half ago | from the when-freecycle-just-doesn't-make-sense dept.

Hardware 160

New submitter jackdotwa writes "Machines in our computer lab are periodically retired, and we have decided to recycle them and put them to work on combinatorial problems. I've spent some time trawling the web (this Beowulf cluster link proved very instructive) but have a few reservations regarding the basic design and air-flow. Our goal is to do this cheaply but also to do it in a space-conserving fashion. We have 14 E8000 Core2 Duo machines that we wish to remove from their cases and place side-by-side, along with their power supply units, on rackmount trays within a 42U (19", 1000mm deep) cabinet." Read on for more details on the project, including some helpful pictures and specific questions. jackdotwa continues: "Removing them means we can fit two machines into 4U (as opposed to 5U). The cabinet has extractor fans at the top, and the PSUs and motherboard fans (which pull air off the CPU and exhaust it laterally; see images) face in the same direction. Would it be best to orient the shelves (and thus the fans) in the same direction throughout the cabinet, or to alternate the fan orientations on a shelf-by-shelf basis? Would there be electrical interference with the motherboards and CPUs exposed in this manner? We have a 2 ton (24000 BTU) air-conditioner which will be able to maintain a cool room temperature (the lab is quite small), judging by the guide in the first link. However, I've been asked to place UPSs in the bottom of the cabinet (they will likely be non-rackmount UPSs as they are considerably cheaper). Would this be, in anyone's experience, a realistic request (I'm concerned about the additional heating in the cabinet itself)? The nodes in the cabinet will be diskless and connected via a rack-mountable gigabit Ethernet switch to a master server. We are looking to purchase rack-mountable power distribution units to clean up the wiring a little. If anyone has any experience in this regard, suggestions would be most appreciated."


160 comments


Imagine (5, Funny)

BumbaCLot (472046) | about a year and a half ago | (#43150925)

A beowulf cluster of these! FP

Re:Imagine (2)

operagost (62405) | about a year and a half ago | (#43150953)

Awesome... it feels like /. circa 2000 again.

Re:Imagine (5, Interesting)

Ogi_UnixNut (916982) | about a year and a half ago | (#43151407)

Yeah, except back in the 2000s people would be thinking it's a cool idea, and there would be at least 4 other people who had recently done it and could give tips.

Now it is just people saying "Meh, throw it away and buy newer more powerful boxes". True, and the rational choice, but still rather bland...

I remember when nerds here were willing to do all kinds of crazy things, even if they were not a good long term solution. Maybe we all just grew old and crotchety or something :P

(Spoken as someone who had a lot of fun building an openmosix cluster from old AMD 1.2GHz machines my uni threw out.)

Re:Imagine (1)

Anonymous Coward | about a year and a half ago | (#43151875)

The difference is that we take the clustering part for granted now. The question wasn't something interesting like how do I do supercomputer-like parallel activities on regular PCs or solve operational issues. It was just about physically putting a bunch of random parts into a rack on a low budget.

But we won already... now, the mainstream is commodity rack parts. You should put the money towards modern 1U nodes rather than a bunch of low volume and high cost chassis parts to try to assemble your frankenrack of used equipment. You get subsidized server guts by buying a 1U server instead of just some unusual empty rack case. Even more, half the value of the rack equipment is that it has an optimized cooling plan to work in that density.

Re:Imagine (4, Insightful)

CanHasDIY (1672858) | about a year and a half ago | (#43152083)

You should put the money towards modern 1U nodes rather than a bunch of low volume and high cost chassis parts to try to assemble your frankenrack of used equipment.

Methinks you've missed the key purpose of using old equipment one already owns...

Re:Imagine (1)

Cramer (69040) | about a year and a half ago | (#43152481)

No, we haven't. What you and many others (including the poster) miss is how much time and effort -- and yes, money -- will go into building this custom, already obsolete, cluster. His first mistake is keeping Dell's heat tower and fan -- that's designed for a DESKTOP where you need a large heatsink so a slow (quiet) fan can move enough air to keep it cool; in a rack cluster, that's not even remotely a concern. (density trumps noise)

(I'm in the same boat -- as I'm sure everyone else is. I have stacks of old, obsolete machines. Difference is, *I* know they're junk which is why they're stacked in a corner... spare parts for the few we still use (read: never replaced))

Re:Imagine (2)

CanHasDIY (1672858) | about a year and a half ago | (#43152577)

Since this is obviously a 'pet' project, i.e. something he's doing just to see if it can be done, time and effort costs don't really factor in, IMO. Like when I work on my own truck, I don't say, "it cost me $300 in parts and $600 in labor to fix that!"

His first mistake is keeping Dell's heat tower and fan -- that's designed for a DESKTOP where you need a large heatsink so a slow (quiet) fan can move enough air to keep it cool; in a rack cluster, that's not even remotely a concern. (density trumps noise)

I find the idea of jury-rigging up a rackmount a bit specious myself... But again, this appears to be a 'can we do it' type project, so I don't feel compelled to criticize like I would if he were trying to do this with some mission-critical system.

Re:Imagine (0)

Anonymous Coward | about a year and a half ago | (#43152487)

The submitter was talking about racks and chassis of some kind to hold the desktop parts he removed from their original cases. I assumed he was going to be purchasing this stuff, and didn't already have it sitting around as well...

Just the ancillary equipment to assemble a cluster of already owned motherboards is significant budget, and it is budget better spent on just a few modern rack nodes in many cases.

Re:Imagine (2, Funny)

Anonymous Coward | about a year and a half ago | (#43151921)

Back then, people read slashdot at -1, nested, and laughed at the trolls. Right now, I wouldn't be surprised if I'm modded -1 within about 15 minutes by an editor with infinite mod points. Post something the group-think disagrees with, get downmodded. Post something anonymous, no one will read it. Post something mildly offensive, get downmodded.

We didn't have fucking flags back then and the editors didn't delete posts. Now they do. Fuck what this site has become.

Re:Imagine (1)

Anonymous Coward | about a year and a half ago | (#43151007)

It's been a long time since "Imagine a beowulf cluster of those!" made any degree of sense, or even appeared on /.

Natalie Portman's Hot Gritts to you!

Re:Imagine (1)

K. S. Kyosuke (729550) | about a year and a half ago | (#43151281)

Don't forget the Petrification Award for unlocking the first post achievement!

Re:Imagine (0)

Anonymous Coward | about a year and a half ago | (#43151699)

Not meaning to be crude, but...
HOW THE FUCK is this offtopic?!?

Don't do it (4, Insightful)

damn_registrars (1103043) | about a year and a half ago | (#43150975)

Seriously, it isn't worth your effort - especially if you want something reliable. People who set out to make homemade clusters find out the hard way about design issues that reduce the life expectancy of their cluster. There are professionals who can build you a proper cluster for not a lot of money if you really want your own, or even better you can rent time on someone else's cluster.

Re:Don't do it (3, Insightful)

Impy the Impiuos Imp (442658) | about a year and a half ago | (#43151111)

Get an older, CUDA-capable card and have whoever writes your code target it instead. I doubled 10 years' worth of SETI work units in just 2 weeks. A CPU is just a farmer throwing food to the racehorse nowadays.

Re:Don't do it (0)

Anonymous Coward | about a year and a half ago | (#43151173)

Running something like Hadoop would work with this kind of setup. Hadoop is designed to allow machines to break without anything happening to the map-reduce code that runs on it, other than slight delays. Configuration of a new machine should be as simple as installing a disk image with the configurations in place.

For general purpose computing, you are correct. It wouldn't be pessimistic at all to expect one machine to start malfunctioning every week. If that disrupts what you are trying to run, or if you don't have the resources to maintain the system, don't do it.
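
To make the map-reduce idea concrete, here is a minimal sketch of the pattern in plain Python, using a process pool in place of Hadoop's distributed workers. The input file names are made up for illustration; this is the shape of the computation, not Hadoop itself.

```python
# Minimal map-reduce sketch (illustration only, not Hadoop): count words across
# files by mapping each file to partial counts, then reducing the partials.
from collections import Counter
from functools import reduce
from multiprocessing import Pool

def map_count(path):
    """Map step: one file -> a Counter of word frequencies."""
    with open(path) as f:
        return Counter(f.read().split())

def reduce_counts(a, b):
    """Reduce step: merge two partial Counters."""
    a.update(b)
    return a

if __name__ == "__main__":
    files = ["part-0001.txt", "part-0002.txt", "part-0003.txt"]  # hypothetical inputs
    with Pool() as pool:
        partials = pool.map(map_count, files)        # mappers run in parallel
    totals = reduce(reduce_counts, partials, Counter())
    print(totals.most_common(10))
```

Hadoop adds the part that matters on flaky hardware: if a worker dies, its input split is simply re-run on another node.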

Re:Don't do it (4, Informative)

cbiltcliffe (186293) | about a year and a half ago | (#43151813)

For general purpose computing, you are correct. It wouldn't be pessimistic at all to expect one machine to start malfunctioning every week.

Huh? E8000 Core2 Duos are not that old. I've got a rack of a half dozen Pentium IIIs that I've run for years without problems. What kind of crap hardware do you run where you're expecting 1 failure out of 14 machines every week?

This is assuming, of course, that when you set up the cluster in the first place you check motherboards for bad caps, loose cooling fans, etc., and discard/repair anything that looks even like it might possibly fail. Considering the effort this guy seems to be going to, that's probably (but I've been wrong about that kind of thing before) a given.
From the pics, these are BTX machines, which in my experience have better cooling than ATX, and are less likely to have overheated, failing caps in the first place.

Re:Don't do it (5, Insightful)

Anonymous Coward | about a year and a half ago | (#43151277)

Seriously, it isn't worth your effort - especially if you want something reliable. People who set out to make homemade clusters find out the hard way about design issues that reduce the life expectancy of their cluster. There are professionals who can build you a proper cluster for not a lot of money if you really want your own, or even better you can rent time on someone else's cluster.

If the goal of this is reliable performance, you're absolutely right. But if the goal is to teach yourself about distributed computing, networking, diskless booting, all the issues that come up in building a cluster, on the cheap - then this is a great idea. Just don't expect much from the end product - you'll get more performance from a single modern box with tens of cores on one motherboard.

Re:Don't do it (1)

Anonymous Coward | about a year and a half ago | (#43151297)

I agree with this poster. After building a homebrew HPC environment and then working with a vendor-engineered solution, I can tell you that reusing old hardware is really not worth it other than as a learning exercise. Nevertheless, building it would be fun, just not practical. So from a learning perspective, knock yourself out.

From a pragmatic point of view, the hardware is old and not very efficient in terms of electricity. Also consider that a single Tesla card can deliver anywhere from 2 to 4 teraflops, and you would be lucky to see even 1 teraflop in this entire arrangement. However, this does not preclude you from introducing Tesla cards into the environment if you have a compliant PCI-E slot and the power to run them.

Also, it depends on what you are trying to achieve. Anyway, have fun with your engineering challenge!!

Re:Don't do it (0)

Anonymous Coward | about a year and a half ago | (#43151721)

I agree. Try to get 4 cheap 8-core boxes and use those instead of the 14-node frankenstein.
It'll save you a bundle on electricity and UPS capacity.
You can also install much more RAM since they'll take DDR3.

You're looking at $1200 or so if you go AMD.
That gets you 32 CPU cores, 4 motherboards and RAM (whatever $100 gets).
You already have PSUs - salvage those if they can handle 300w.
You don't need cases since you're going to stick it in trays.

Add stronger power supplies if you want to use GPUs.
Add Infiniband if you want a real cluster and GPU to GPU communication.

Re:Don't do it (2)

sneakyimp (1161443) | about a year and a half ago | (#43152327)

Nonsense! A home-built cluster can be cheap and very educational. http://helmer.sfe.se/ [helmer.sfe.se]

Re:Don't do it (1)

stymy (1223496) | about a year and a half ago | (#43153351)

Also, always calculate GHz/watt or the like, as newer processors are more efficient; a new processor can sometimes pay for itself pretty quickly through a lower electricity bill.
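
A rough way to check that claim is a back-of-envelope payback estimate. The sketch below uses made-up numbers throughout (node wattage, electricity price, and replacement cost are assumptions, not figures from the thread):

```python
# Back-of-envelope payback estimate: when does a more efficient replacement
# pay for itself in electricity alone?  All inputs are assumptions.
HOURS_PER_YEAR = 24 * 365

def annual_cost(watts, price_per_kwh=0.12):
    """Yearly electricity cost for a constant load, in dollars."""
    return watts / 1000 * HOURS_PER_YEAR * price_per_kwh

old = annual_cost(14 * 150)          # 14 old nodes at an assumed ~150 W each
new = annual_cost(4 * 120)           # 4 modern boxes at an assumed ~120 W each
replacement_cost = 1500.0            # assumed hardware budget

savings = old - new
print(f"old: ${old:.0f}/yr  new: ${new:.0f}/yr  savings: ${savings:.0f}/yr")
print(f"payback: {replacement_cost / savings:.1f} years")
```

Swap in your own tariff and measured wall draw; the shape of the answer is what matters, not these particular numbers.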

don't rule out (5, Insightful)

v1 (525388) | about a year and a half ago | (#43150981)

throwing gear away or giving it away. Just because you have it doesn't mean you have to, or should, use it. If energy and space efficiency are important, you need to carefully consider what you are reusing. Sure, what you have now may have already fallen off the depreciation books, but if it's going to draw twice the power and take double the space that newer used kit would, it may not be the best option, even when the other options involve purchasing new or newer-used gear.

Not saying you need to do this, just recommending you keep an open mind and don't be afraid to do what needs to be done if you find it necessary.

Re:don't rule out (4, Interesting)

eyegor (148503) | about a year and a half ago | (#43151227)

Totally agree. We had a bunch of dual dual-core server blades that were freed up, and after looking at the power requirements per core for the old systems we decided it would be cheaper in the long run to retire the old servers and buy a smaller number of higher-density servers.

The old blades drew 80 watts/core (320 watts) and the new ones, which had dual sixteen-core Opterons, drew 10 watts/core for the same overall power. That's a no-brainer when you consider that these systems run 24/7 with all CPUs pegged. More cores in production means your jobs finish faster, you'll be able to have more users and more jobs running, and you'll use much less power in the long run.
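
For what it's worth, the per-core arithmetic behind that, as a tiny sketch using the core counts and wattages quoted above:

```python
# Per-core power comparison from the figures above: same total draw,
# eight times the cores.
old_cores, old_watts = 4, 320        # dual dual-core blade
new_cores, new_watts = 32, 320       # dual sixteen-core Opteron blade

print(old_watts / old_cores, "W/core on the old blades")   # 80.0
print(new_watts / new_cores, "W/core on the new blades")   # 10.0
print(new_cores / old_cores, "x the cores for the same power")  # 8.0
```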

Re:don't rule out (4, Insightful)

nine-times (778537) | about a year and a half ago | (#43151361)

I agree. I've been doing IT for a while now, and this is the kind of thing that *sounds* good, but generally won't work out very well.

Tell me if I'm wrong here, but the thought process behind this is something like, "well we have all this hardware, so we may as well make good use out of it!" So you'll save a few hundred (or even a few thousand!) dollars by building a cluster of old machines instead of buying a server appropriate for your needs.

But let's look at the actual costs. First, let's take the costs of the additional racks, and any additional parts you'll need to buy to put things together. Then there's the work put into implementation. How much time have you spent trying to figure this out already? How many hours will you put into building it? Then troubleshooting the setup, and tweaking the cluster for performance? Now double the amount of time you expect to spend, since nothing ever works as smoothly as you'd like, and it'll take at least twice as long as you expect.

That's just startup costs. Now factor in the regular costs of additional power and AC. Then there's the additional support costs from running a complex unsupported system, which is constructed out of old unsupported computer parts with an increased chance of failure. This thing is going to break. How much time will you spend fixing it? What additional parts will you buy? Will there be any loss of productivity when you experience down-time that could have been avoided by using a new, simple, supported system? What's the cost of that lost productivity?

That's just off the top of my head. There are probably more costs than that.

So honestly, if you're doing this for fun, so that you can learn things and experiment, then by all means have at it. But if you are looking for a cost-effective solution to a real problem, try to take an expansive view of all the costs involved, and compare *all* of the costs of using old hardware vs. new hardware. Often, it's cheaper to use new hardware.

Re:don't rule out (5, Insightful)

Farmer Pete (1350093) | about a year and a half ago | (#43151521)

But you're missing the biggest reason to do this...The older hardware is already purchased. New hardware would be an additional expense that requires an approval/budgeting process. Electricity costs lots of money, but depending on the company, that probably isn't directly billed to the responsible department. Again, it's hard to go to your management and say that you want them to spend X thousand dollars so that they will save X thousand dollars that they don't think they need to spend in the first place.

Re:don't rule out (1)

Anonymous Coward | about a year and a half ago | (#43151709)

While I don't agree that this project is a GOOD idea: This! A thousand times, THIS!

I just spent 6 months convincing the management here that we can update our 7-year-old servers with 50% less equipment, save 75% on power and cooling, and pay for the project in about 18 months -- without even mentioning that our userbase/codebase has grown to the point that we are paying people to stare at screens.

Read this article about the Titan upgrade to the Oak Ridge supercomputer: http://www.anandtech.com/show/6421/inside-the-titan-supercomputer-299k-amd-x86-cores-and-186k-nvidia-gpu-cores -- FLOPS per watt has come a long way in the past 5 years. Running old hardware will cost you a lot, especially if it idles for any significant period of time.

Re:don't rule out (4, Insightful)

i.r.id10t (595143) | about a year and a half ago | (#43151553)

On the other hand, depending on what kind of courses you teach (tech school, masters-degree comp sci, etc.), keeping them around for *students* to get experience building a working cluster and then programming stuff to run parallel on it may be a good idea. Of course, this means the boxes wouldn't be running 24/7/365 (more likely 24/7 for a few weeks per term), so the power bill won't kill you, and it could provide valuable learning experience for students... especially if you have them consider the power consumption and ask them to write a recommendation for a cluster system.

Re:don't rule out (4, Interesting)

ILongForDarkness (1134931) | about a year and a half ago | (#43151373)

Great point. Back in the day I worked on an SGI Origin mini/supercomputer (not sure if it qualifies; a 32-way symmetric multiprocessor is still kind of impressive nowadays, I guess, since even a 16-way Opteron isn't symmetric, I don't think). Anyway, at the time (~2000) there were much faster cores out there. Sure, we could use this machine for free for serial load (yeah, that is a waste), but we had to wait 3-4x as long as on a modern core. You ended up having to ssh in to start new jobs in the middle of the night so you didn't waste an evening of runs, versus getting 2-3 in during the day and firing off the fourth before you go to bed. Add to that that the IT guys had to keep a relatively obscure system around and provide space and cooling for this monster, etc.; they would have been better off just buying us 10 dual-socket workstations (~1GHz at the time, I guess).

Re:don't rule out (0)

Anonymous Coward | about a year and a half ago | (#43151915)

The Origin wasn't symmetric. It was demonstrating that NUMA was good enough for many people to be fooled.

Re:don't rule out (2)

pseudofrog (570061) | about a year and a half ago | (#43152013)

Because I know I'm not the only one who is bothered by this: )

Re:don't rule out (2)

korgitser (1809018) | about a year and a half ago | (#43151481)

Agreed. Once the OP calculates the TCO of the system, it might turn out that the free stuff is not worth it. First you should find someone who has done something similar before. Then you can start from the actual bottlenecks and play out some alternative scenarios.
What requirements do your calculations have? CPU vs I/O? The TDP of an E8000, 65W, is not bad; this puts your presumed rack somewhere short of the 2kW range. How much would that electricity cost you in a year? If your calculations are I/O bound, you will have to spend on additional RAM and maybe SSDs, or the CPUs will mostly be occupied with wasting electricity/money. It might make sense to buy Atom boards instead. At the opposite end, it might make sense to buy some real cruncher CPUs or even GPUs.
You also have to account for the labor involved. Setting the system up is not too much, but maintaining it, supporting it? If your lab is 14 people, and we presume every one will ask for support once a week, you will have 3 people every day bugging you about the cluster. Add to this regular maintenance, replacing failed parts (desktop-grade hardware will fail regularly under heavy load), keeping track of the general state of your software stack upstream... You might find that you will spend most of your time on the cluster and not on your job. Which means you need to hire an extra pair of hands. It might be cheaper just to buy your slice of time on an actual (commodity) science cluster.
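
To put a number on the "how much would that electricity cost in a year" question, a sketch with assumed figures: the 65 W is CPU TDP only, so the whole-node wall draw used here, and the tariff, are guesses rather than measurements.

```python
# Rough yearly electricity cost for the proposed rack.  Assumptions: each node
# draws ~120 W at the wall (CPU TDP plus board, RAM and PSU losses), power
# costs $0.12/kWh, and the cluster runs around the clock.
nodes = 14
watts_per_node = 120        # assumed wall draw, not just the 65 W TDP
price_per_kwh = 0.12        # assumed tariff

kwh_per_year = nodes * watts_per_node / 1000 * 24 * 365
print(f"{kwh_per_year:.0f} kWh/year  ->  ${kwh_per_year * price_per_kwh:.0f}/year")
```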

Easy... (1)

Anonymous Coward | about a year and a half ago | (#43151003)

1. buy malware at a shady virus exchange to create a beowulf botnet
2. ???
3. profit!!!

Mounting these bare horizontally (0)

Anonymous Coward | about a year and a half ago | (#43151029)

Is your second mistake. How much memory is available and what will your interconnects be?

GPUs (1)

ThatsNotPudding (1045640) | about a year and a half ago | (#43151039)

I thought some folks had switched to GPUs for heavy number-crunching... though the custom hardware setups no doubt render this a moot point.

Glad I could help :\

Re:GPUs (1)

Farmer Pete (1350093) | about a year and a half ago | (#43151155)

That's because it's a hell of a lot faster to use a GPU. The problem is that a decent GPU uses a lot more power than those PSUs can probably support, but even a semi-proficient GPU may be a wise investment.

Re:GPUs (0)

Anonymous Coward | about a year and a half ago | (#43151311)

The advantage is only there if a GPU is just sitting there idle. I don't have the link at the moment, but there was a comparison a while back... CPUs do comparably well once you take equal care optimizing things for cache sizes and such.

Once you solve the hardware challenges..... (5, Informative)

eyegor (148503) | about a year and a half ago | (#43151115)

You'll need to consider how you're going to provision and maintain a collection of systems.

Our company currently uses the ROCKS cluster distribution, which is a CentOS-based distribution that provisions, monitors and manages all of the compute nodes. It's very easy to have a working cluster set up in a short amount of time, but it's somewhat quirky in that you can't fully patch all pieces of the software without breaking the cluster.

One thing that I really like about ROCKS is their provisioning tool which is called the "Avalanche Installer". It uses bittorrent to load the OS and other software on each compute node as it comes online and it's exceedingly fast.

I installed ROCKS on a head node, then was able to provision 208 HP BL480c blades within an hour and a half.

Check it out at www.rockclusters.org

Re:Once you solve the hardware challenges..... (1)

clark0r (925569) | about a year and a half ago | (#43151603)

How does this play with SGE / OGE? Can you centrally configure each node to mount a share? How about installing a custom kernel, modules, packages, an InfiniBand config and a Lustre mount? If it can do these then it's going to be useful for real clusters.

Re:Once you solve the hardware challenges..... (1)

Anonymous Coward | about a year and a half ago | (#43151935)

Correct website is -> www.rocksclusters.org

Re:Once you solve the hardware challenges..... (3, Informative)

pswPhD (1528411) | about a year and a half ago | (#43152147)

I can recommend Rocks as well, although you WILL need the slave nodes to have disks in them (you could scrounge some ancient 40GB drives from somewhere...). You seem to want hardware information, so...

First point is to have all the fans pointing the same way. Large HPC sites arrange cabinets back-to-back, so you have a 'hot' corridor and a 'cold' corridor, which enables you to access both sides of the cabinet and saves some money on cooling.
My old workplace had two clusters and various servers in an air-conditioned room, with all the nodes pointing at the back wall. Probably similar to what you have.
Don't know anything about the UPS, but I would assume having it on the floor would be OK.

Good luck with your project. Write a post in the future telling us how it goes.

Really? (5, Funny)

Russ1642 (1087959) | about a year and a half ago | (#43151119)

Slashdotters only imagine building Beowulf clusters. This is the first time anyone's been serious about it.

Re:Really? (0)

Anonymous Coward | about a year and a half ago | (#43151187)

Yes, the first time. Seriously.

Re:Really? (0)

Anonymous Coward | about a year and a half ago | (#43151325)

If I had a nickel for every wet dream I've had of a Beowulf cluster....

Re:Really? (0)

Anonymous Coward | about a year and a half ago | (#43151491)

You'd be blind?

Re:Really? (0)

Anonymous Coward | about a year and a half ago | (#43152499)

With so many beowulf cluster "experts" here, you wouldn't think the Wikipedia page [wikipedia.org] would be in need of an expert on the subject.

Don't. (0)

Anonymous Coward | about a year and a half ago | (#43151121)

Besides the cost of electricity and cooling (which you will either pay yourself or share with others), the hassle of maintaining your own cluster is not worth it. I set up a purpose-built 50-blade cluster as a grad student and it ate up my time like nothing else. Not a good idea.

Don't waste time on /. (0)

Anonymous Coward | about a year and a half ago | (#43151123)

Trust me, why not try asking in the LQ forum instead? I am sure someone will come up with something good; here on /. you'll have to filter hundreds of replies/comments to find an answer to your original question :)
Some comments will make you wonder if the poster was fucken drunk or half asleep while posting.

Re:Don't waste time on /. (0)

Anonymous Coward | about a year and a half ago | (#43152307)

Glad you threw that random smile face there to make your comment whimsical.

Probably not worth your time (5, Interesting)

MetricT (128876) | about a year and a half ago | (#43151129)

I've been working in academic HPC for over a decade. Unless you are building a simple 2-3 node cluster to learn how a cluster works (scheduler, resource broker and such things), it's not worth your time. What you save in hardware, you'll lose in lost time, electricity, cooling, etc.

If you're interested in actual research, take one computer, install an AMD 7950 for $300, and you will almost certainly blow the doors off a cluster cobbled from old Core 2 Duo's, and you'll save more than $300 in electricity.

Re:Probably not worth your time (0)

Anonymous Coward | about a year and a half ago | (#43151299)

Is OpenCL the choice for the 7950? If so, how accessible is it for learning vs CUDA?

Re:Probably not worth your time (1)

MetricT (128876) | about a year and a half ago | (#43151435)

It depends very specifically on the application. There are some fields that are currently tied to nVidia due to "legacy" code (a strange term for code that can't be more than 1-2 years old) that is written in CUDA. If so, you can buy an equivalent nVidia card.

If you're writing your own app (which if they're studying combinatorics seems likely) then rewriting the core loop in OpenCL is reasonable.

OpenCL is a higher-level abstraction, and you do lose some performance compared to CUDA, but it's worth it in my opinion simply for portability.

Re:Probably not worth your time (0)

Anonymous Coward | about a year and a half ago | (#43151377)

They probably have no budget for additional computing hardware but instead are capable of sinking electricity and facility costs into the operating costs of the institution. That said, the 28 old cores can serve as a maker movement style project for learning to build bigger things out of unreliable components in the future.

Re:Probably not worth your time (1)

ILongForDarkness (1134931) | about a year and a half ago | (#43151417)

Absolutely right about HPC users. Unless you are a gluten for punishment generally you need to get results fast before you know what is next. So users will avoid your cluster nodes because they can get 2-3X the speed from a modern desktop. What you will get is the people that have an endless queue of serial jobs (been there my last computational project was about 250,000 CPU hours of serial work) but generally you'll have a lot of idle time. People will fire off a job and it will finish part way through the night. Your system is so slow they won't bother to login to submit new jobs until the morning etc.

Re:Probably not worth your time (2, Informative)

Anonymous Coward | about a year and a half ago | (#43151683)

I'm a glutton for correcting grammar mistakes, and I believe you meant to use the word "glutton" where you used the word "gluten." Gluten is a wheat based protein, and a glutton is someone that exhibits a desire to overeat.

Re:Probably not worth your time (1)

cbiltcliffe (186293) | about a year and a half ago | (#43152153)

So what if you're a glutton for gluten?
Well, besides the Atkins Diet is probably not for you, that is....

Re:Probably not worth your time (1)

CanHasDIY (1672858) | about a year and a half ago | (#43152211)

Unless you are a gluten for punishment.

If you're anything like my wife, gluten is punishment.

Thank you, thank you, I'll be here all week! Enjoy the veal!

Re:Probably not worth your time (3, Interesting)

serviscope_minor (664417) | about a year and a half ago | (#43151991)

I've been working in academic HPC for over a decade. Unless you are building a simple 2-3 node cluster to learn how a cluster works (scheduler, resource broker and such things), it's not worth your time. What you save in hardware, you'll lose in lost time, electricity, cooling, etc.

I strongly disagree. I actually had a very similar Ask Slashdot a while back.

The cluster got built, and has been running happily since.

If you're interested in actual research, take one computer, install an AMD 7950 for $300, and you will almost certainly blow the doors off a cluster cobbled from old Core 2 Duo's, and you'll save more than $300 in electricity

Oh yuck!

But what you save in electricity, you'll lose in postdoc/developer time.

Sometimes you need results. Developing for a GPU is slow and difficult compared to (e.g.) writing prototype code in MATLAB/Octave. You save heaps of vastly more expensive person and development time by being able to run those codes on a cluster. And also, not every task out there is even easy to put on a GPU.

Re:Probably not worth your time (3, Informative)

MetricT (128876) | about a year and a half ago | (#43152105)

You *do* know that Matlab has been supporting GPU computing for some time now? We bought an entire cluster of several hundred nVidia GTX 480's for the explicit purpose of GPU computing.

Re:Probably not worth your time (2)

serviscope_minor (664417) | about a year and a half ago | (#43152833)

You *do* know that Matlab has been supporting GPU computing for some time now?

Yes, but only for specific builtins. If you want to do something a bit more custom, it goes back to being very slow.

Just use Amazon AWS (0)

Anonymous Coward | about a year and a half ago | (#43151131)

It's 2013 don't build your own cluster just use AWS EC2 spot instances.

Re:Just use Amazon AWS (5, Informative)

hawguy (1600213) | about a year and a half ago | (#43151295)

It's 2013 don't build your own cluster just use AWS EC2 spot instances.

An EC2 "High-CPU Medium" instance is probably close to his Core 2 Duos (it has 1.7GB RAM plus two cores of 2.5 EC2 compute units each, where each ECU is equivalent to a 2007-era 1.2GHz Xeon).

Current spot pricing is $0.018/hour, so a month would cost him around $12.96. (not including storage, add about a dollar for 10GB of EBS disk space).

If his computers use 150W of power each, at $0.12/kWh they'll cost exactly $0.018/hour to run -- the same price as an EC2 instance excluding storage.

However spot pricing is not guaranteed, so he'll have to be prepared to shut down his instances when the spot price rises above what he's willing to pay -- full price for the instance is $0.145/hour, but he could get that down to $0.09/hour if he's willing to pay $161 to reserve the instance for 3 years.
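The break-even arithmetic in that comparison, as a quick sketch; the wattage and tariff are the commenter's assumptions, and spot prices move around, so treat the $0.018 figure as a snapshot:

```python
# Compare the hourly electricity cost of running one old node against renting
# an EC2 spot instance, using the figures from the comment above.
node_watts = 150                 # assumed draw of one Core 2 Duo node
price_per_kwh = 0.12             # assumed electricity tariff
spot_price_per_hour = 0.018      # quoted spot price snapshot

power_cost_per_hour = node_watts / 1000 * price_per_kwh
print(f"electricity: ${power_cost_per_hour:.3f}/h  vs  spot: ${spot_price_per_hour:.3f}/h")
print(f"spot instance per month (720 h): ${spot_price_per_hour * 720:.2f}")
```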

Re:Just use Amazon AWS (2)

Guspaz (556486) | about a year and a half ago | (#43152241)

The" Cluster Compute" instances might be better suited to cluster computing, although they're not cheap. But a single one of them, a dual-CPU eight core Xeon E5-2670 (dedicated, so they don't list EC2 compute units), probably has more computing power than the entire Core 2 Duo cluster being proposed.

But as I said, not cheap. It comes out to $400 per month for a reserved instance. A spot instance could be slightly cheaper. Then again, take the 150W of power usage you specified, times 14 nodes, times 1.8 for industry-typical datacenter power usage effectiveness (which accounts for air-conditioner cooling, UPS losses, and other overhead), and we get 3,780W, which in a single month is 2721.6 kilowatt-hours; at $0.12 that amounts to $326.59 in power alone!

So, it seems that the Amazon server at $400 per month, is barely more expensive than the power required to run those 14 Core 2 machines!

Just trust me to look at your data. (-1)

Anonymous Coward | about a year and a half ago | (#43152469)

I guess you've never heard of industrial espionage!

Sounds interesting... (4, Informative)

Mysticalfruit (533341) | about a year and a half ago | (#43151189)

I'm routinely mounting things in a 42U cabinets that ought not be mounted in them, so I've got *some* insight.

The standard for airflow is front to back and upwards. Doing some sticky-note measurements, I think you could mount 5 of these vertically as a unit. I'd say get a piece of 1"-thick plywood and dado-cut 1/4" channels top and bottom to mount the motherboards. This would also give you a mounting spot where you could line up the power supplies in the back. This would also put the Ethernet ports at the back. Another thing this would allow would be easy removal of a dead board.

Going on this idea, you could also make these as "units" and install two of them two deep in the cabinet (if you used L rails).

Without doing any measuring, I'm suspecting this would get you 5 machines for 7U or 10 machines if you did 2 deep in 7U.

Re:Sounds interesting... (1)

hawguy (1600213) | about a year and a half ago | (#43151335)

The standard for airflow is front to back and upwards. Doing some sticky-note measurements, I think you could mount 5 of these vertically as a unit. I'd say get a piece of 1"-thick plywood and dado-cut 1/4" channels top and bottom to mount the motherboards. This would also give you a mounting spot where you could line up the power supplies in the back. This would also put the Ethernet ports at the back. Another thing this would allow would be easy removal of a dead board.

That sounds like a fire hazard, not to mention a source of dust - do people really put wooden shelves in their datacenters?

Re:Sounds interesting... (1)

cbiltcliffe (186293) | about a year and a half ago | (#43152389)

That sounds like a fire hazard, not to mention a source of dust - do people really put wooden shelves in their datacenters?

The autoignition temperature for generic cheapo plywood is somewhere on the order of 300 degrees C. If you went with pine, which is still pretty cheap, it goes up to 427 degrees C.

How hot do you think computers run?

The dust, I could give you, if the wood used was cheap chipboard, balsa, or something else soft. With something even moderately hard like pine it wouldn't be a problem, as long as you properly cleaned off the sawdust from the cutting process. If you went all out and used oak, it's probably harder than the circuit boards you'd be mounting in it, with the added benefit that it raises the autoignition temperature up to 482 degrees C.

Of course, you could use some edge rails in the wood, and eliminate the dust problem regardless of wood used, and they'd probably be cheap enough that you could get them through petty cash, and not need budget approvals, too.

Re:Sounds interesting... (1)

hawguy (1600213) | about a year and a half ago | (#43152625)

That sounds like a fire hazard, not to mention a source of dust - do people really put wooden shelves in their datacenters?

The autoignition temperature for generic cheapo plywood is somewhere on the order of 300 degrees C. If you went with pine, which is still pretty cheap, it goes up to 427 degrees C.

How hot do you think computers run?

It's not normal operation that would concern me with wooden rack shelves, but failures like this:

http://www.theregister.co.uk/2012/11/26/exploding_computer_vs_reg_reader/ [theregister.co.uk]
http://ronaldlan.dyndns.org/index.php [dyndns.org]
http://www.tomshardware.com/reviews/inadequate-deceptive-product-labeling,536.html [tomshardware.com]

One bad power supply could set the whole cabinet on fire -- and perhaps worse, set off the server room fire suppression system.

Re:Sounds interesting... (0)

Anonymous Coward | about a year and a half ago | (#43152253)

I've been waiting for a good day to stop reading slashdot. This is it.

Anyhow, thank you, 533341, for representing an admirable attitude in this thread. Unfortunately the place has been overrun with some pestilent breed of sophisticate.

My last slash .02 - if no single task actually requires a cluster, and you really just want to retain the computing power, consider mounting your mobos in light-box style picture frames, behind nice art or posters. With minor attention to your fan quality and mounting, and replacing platter hdds with ssds, you end up with nearly silent computational mass on your walls.

Best of luck.

Inter-node communication (4, Informative)

plus_M (1188595) | about a year and a half ago | (#43151195)

What do you intend to use for inter-node communication? Gigabit ethernet? You need to realize that latency in inter-node communication can cause *extremely* poor scaling for non-trivial parallelization. Scientific computing clusters typically use infiniband or something like it, which has extremely slow latency, but the equipment will cost you a pretty penny. If you are interested in doing computations across multiple computing nodes, you should really setup just two nodes and benchmark what kind of speed increase there is between running the job on a single node and on two nodes. My guess is that you are going to get significantly less than a 2x speedup. It is entirely possible that the calculation will be *slower* on two nodes than on just one. Of course, if you are just running a massive number of unrelated calculations, then inter-node communication becomes much less important, and this won't be an issue.
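
A back-of-envelope way to see why this matters before buying any hardware is an Amdahl-style model with a per-step communication cost bolted on. The numbers below (parallel fraction, per-step compute and communication times) are invented purely to illustrate the shape of the curve; they are not measurements of anyone's code.

```python
# Toy scaling model: ideal Amdahl speedup vs. the same workload with a fixed
# communication/synchronisation cost added per step.  All parameters are made up.
def amdahl(n, parallel_fraction):
    """Ideal speedup on n nodes with the given parallel fraction."""
    return 1.0 / ((1 - parallel_fraction) + parallel_fraction / n)

def with_comms(n, parallel_fraction, compute_s=1.0, comms_s=0.05):
    """Very crude model: per-step comms cost grows with the node count."""
    serial = (1 - parallel_fraction) * compute_s
    parallel = parallel_fraction * compute_s / n
    return compute_s / (serial + parallel + comms_s * n)

for n in (1, 2, 4, 8, 14):
    print(n, round(amdahl(n, 0.95), 2), round(with_comms(n, 0.95), 2))
```

If the measured curve for your actual code flattens out that fast on gigabit, the extra twelve nodes are mostly space heaters.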

Re:Inter-node communication (2)

plus_M (1188595) | about a year and a half ago | (#43151791)

And of course by "slow latency" I mean "low latency".

Re:Inter-node communication (1)

MerlynEmrys67 (583469) | about a year and a half ago | (#43152735)

Actually - by slow latency you mean high latency. High latency is bad like low bandwidth is bad. You want the lowest latency numbers that you can afford. I know of people that count the speed of light going down a cable in their latency calculations because it matters to them (~5 ns/m).

Re:Inter-node communication (2)

bmxeroh (1694004) | about a year and a half ago | (#43153105)

I think the point was that they had made a typo in saying "slow latency" and really meant to type "low". But thanks for explaining exactly what they didn't mean.

Re:Inter-node communication (0)

Anonymous Coward | about a year and a half ago | (#43151851)

The University of Kentucky built a cluster using a matrix switch layout that ensured there was a single hop between any two nodes. This was excellent for minimizing latency, but a tad complex to wire up.

Reliability, space, and efficiency (2)

Peter Simpson (112887) | about a year and a half ago | (#43151233)

It may initially seem like a good idea, but if the population isn't homogeneous, you could find your time eaten up looking for spares. With a single type of PC, a node can be sacrificed to keep others running. But these are systems near the end of their design lifetime (and loaded with dust -- and who knows what else?), so components (fans, HDDs, power supplies) are going to start failing more frequently. And the rats' nest of power cables! Perhaps a bunch of multiprocessor, multicore server blades would be a better choice? They go pretty cheaply, and you'd get more cores per power supply, and use less floor space to boot, by rack mounting them.

Scientific American article: http://www.scientificamerican.com/article.cfm?id=the-do-it-yourself-superc [scientificamerican.com]

So reusing old hardware (3, Insightful)

MerlynEmrys67 (583469) | about a year and a half ago | (#43151265)

There is a reason that old hardware should be gotten rid of. Depending on the exact config of the 14 servers (processor/whatever) you could probably replace them with 1, maybe 2 servers. The current generation of Jefferson Pass servers holds 4 servers in a 2U sled - so you could replace this whole thing with a 2U solution that isn't exposed to the elements like you are proposing. It would be new, under warranty and faster than all get out.

Your solution will take 14 servers, connect them with ancient 1GbE interconnect and hope for the best. The interconnect for clusters REALLY matters, many problems are network bound - and not only network bound but latency bound as well. Look at the list of fastest supercomputers and you will barely see Ethernet anymore (especially at the high end) and definitely not 1GbE. Your new boxes will probably come with 10GbE that will definitely help... Especially since there will be fewer nodes to have to talk to (only 2, maybe 4)

The other problem that you will run into is that your system will take about 20x the power and 20x the air conditioning bill (yeah - that is a LOT of power there); the modern new system will pay for itself in 9-12 months (and that doesn't include the tax deduction for donating the old systems and making them Someone Else's Problem).

Recycling old hardware always seems like fun. At the end of a piece of hardware's life cycle look at what it will actually cost to keep it in service - Just the electricity bill will bite you hard, then you have the maintenance, and fun reliability problems.

Re:So reusing old hardware (0)

Anonymous Coward | about a year and a half ago | (#43151573)

This. These old E8000 machines will get killed by 1/3 as many Ivy Bridge boxes. That alone will pay for itself very quickly.

Re:So reusing old hardware (0)

Anonymous Coward | about a year and a half ago | (#43152133)

Mind explaining to us how 1-2 high class servers will allow you to learn how a large cluster functions together?

Not performing actual work, not accomplishing any goal outside of learning, not running software, not running OSes, not learning virtualization... but learning HPC clustering specifically?

Sounds like you are attempting to get him to spend a few thousand dollars on something that will accomplish zero goals, instead of spending a few bucks on power and spending some time to accomplish all the stated goals.

Re:So reusing old hardware (1)

MerlynEmrys67 (583469) | about a year and a half ago | (#43152857)

What I am saying is that the cost of running these 14 nodes will quickly exceed the cost of running a 4-node cluster that will provide better performance. Server systems (especially OLD server systems) are real power hogs. When you are drawing close to 500W/node, that is 7kW running 24/7. All of that power adds up. On top of that, the stated goal was space saving. He is taking a whole rack to provide the compute power of ~4U. The new server approach provides a 10x savings in space, a 10x savings in power cost - and another 10x savings in cooling cost (the other big cost of running a cluster).

It is nice to put old hardware to use occasionally, but it is almost never cost effective; people don't realize that the main cost of a cluster is not acquisition cost, but cooling and power.

cabinet UPS (0)

Anonymous Coward | about a year and a half ago | (#43151291)

You're wasting your time with a UPS if you don't have a cabinet-sized supply. The MTBF, maintenance, efficiency etc. just don't make sense.
Put real money into making your cluster redundant or don't have a UPS at all.

You ought to consider what the cost of doing this with Amazon EC2 or similar services might be.
I have a feeling you don't have any specific computation goals though, so it will be difficult to measure success.

(Former builder of an 80-node, 2-way Pentium III 1GHz cluster back in the day.)
Clusters are still very valuable, but be sure to accurately describe the computational cost of what you have planned, because while you're building your cluster, prices of current tech keep getting so cheap that you might be able to just sell your equipment and lease time on someone else's HPC for half the money.

sell them and buy new.... (5, Informative)

Brit_in_the_USA (936704) | about a year and a half ago | (#43151331)

SPECfp2006 rate results:
e8600 34
i7-3770 130
x4 the performance

...sell the E8xxx-series PCs in boxes for $100 apiece with a Windows licence
and use the $1400 towards buying 4 LGA1155 motherboards (4x$80), 4 unlocked K-series i7s (4x$230), 4x8GB of DDR3 RAM (4x$40) and 4 ~300-400W budget power supplies (4x$30) = $1520.

Use a specialized clustering OS (Linux) and have a smaller, easier-to-manage system, with lots more DDR3 memory and a lower electricity (and air-conditioning electricity) bill....

Donate them to local charity (0)

Anonymous Coward | about a year and a half ago | (#43151395)

I know those machines will still work fine as Win7 desktop/office-use boxes. And I am fairly sure a local Boys & Girls Club / YMCA / charity of your choice would take them, even if for re-donation to their clients.

Not sure if .edu's need tax write offs, but at least they will go to a better use.

Then get a modern high end video card for less than this will cost to build, use it for compute, and have a faster end solution.

Not another cluster... (2)

bobbied (2522392) | about a year and a half ago | (#43151455)

Unless you have a large number of identical machines capable of PXE booting and the necessary network hardware to wire them all together, you are really just building a maintenance nightmare. It might be fun to play with a cluster, but you'd do better to buy a couple of machines with as many cores as you can. It will take less space, less power, less fumbling around with configurations, less time and likely be cheaper than trying to cram all the old stuff into some random rack space.

If you insist on doing this, I suggest the following. 1. Only use *identical* hardware. (Or at least hardware that can run on exactly the same kernel image, modules and configurations) with the maximum memory and fastest networks you can. 2. Make sure you have well engineered Power supplies and cooling. 3. PXE boot all but one machine and make sure your cluster "self configures" based on the hardware that shows up when you turn it on because you will always have something broken. 4. Don't use local storage for anything more than swap, everything comes over the network... 5. Use multiple network segments, split between storage network and operational network.

By the way... For the sake of any local radio operations, please make sure you don't just unpack all the hardware from its cases and spread it out on the work bench. Older hardware can be a really big RFI generator. Consider keeping it in a rack that offers at least some shielding.
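
On the PXE point above: if the nodes end up booting from pxelinux, one low-tech way to make them "self-configure" is to generate the per-node boot entries from a list of MAC addresses; pxelinux looks for a config file named 01-<mac-with-dashes> before falling back to default. The sketch below assumes that layout plus made-up paths and an NFS root; adjust for whatever boot loader and root filesystem you actually use.

```python
# Generate per-node pxelinux config files from a MAC list (hypothetical paths).
# pxelinux tries pxelinux.cfg/01-<mac-with-dashes> before pxelinux.cfg/default.
from pathlib import Path

NODES = {                              # hypothetical node name -> MAC address
    "node01": "00:11:22:33:44:01",
    "node02": "00:11:22:33:44:02",
}

TEMPLATE = """default cluster
label cluster
  kernel vmlinuz
  append initrd=initrd.img root=/dev/nfs nfsroot=10.0.0.1:/srv/nfsroot/{name} ip=dhcp ro
"""

cfg_dir = Path("/srv/tftp/pxelinux.cfg")   # assumed TFTP root
cfg_dir.mkdir(parents=True, exist_ok=True)
for name, mac in NODES.items():
    fname = "01-" + mac.lower().replace(":", "-")
    (cfg_dir / fname).write_text(TEMPLATE.format(name=name))
    print("wrote", cfg_dir / fname)
```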

Cold side, hot side (1)

raymorris (2726007) | about a year and a half ago | (#43151547)

Would it be best to orient the shelves (and thus the fans) in the same direction throughout the cabinet, or to alternate the fan orientations on a shelf-by-shelf basis?

Keep them all the same, so that the system works as one big fan, pulling cool air from one side of the cabinet and exhausting hot air from the other. It's easiest to visualize if you imagine the airflow with a simple scenario. Imagine you had all of the even numbered shelves facing backward, blowing hot air to the front of the rack, while all the odd numbered shelves were trying to suck cool air from the front. That would totally fail because the odd numbered shelves would be sucking in hot air blown out from the even ones and vice-versa. You'd just be blowing hot air around the rack, not moving air through the rack. The same generally applies to other less simple configurations - if different units are arranged differently, they'll work against each other to some extent, rather than working as one team.

Watts X 3 = BTU (1)

raymorris (2726007) | about a year and a half ago | (#43151595)

have a 2 ton (24000 BTU) air-conditioner which will be able to maintain a cool room temperature (the lab is quite small)

1 BTU is 0.29 watt/hour. So take your total power usage and multiply by three. That's how many BTU of heat the rack will dissipate (all power eventually turns to heat). That's how much ADDITIONAL cooling you'll need beyond what's already used to keep the room cool.
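
As a sketch of that conversion against the air-conditioner in the question (the per-node wall draw is an assumption; 1 W of continuous load is about 3.41 BTU/h):

```python
# Convert the rack's electrical draw into BTU/h of heat and compare it with
# the 24,000 BTU/h air conditioner mentioned in the question.
BTU_PER_HOUR_PER_WATT = 3.412

nodes, watts_per_node = 14, 150            # assumed wall draw per node
rack_watts = nodes * watts_per_node
rack_btu_per_hour = rack_watts * BTU_PER_HOUR_PER_WATT

print(f"rack heat load: {rack_btu_per_hour:.0f} BTU/h, "
      f"{rack_btu_per_hour / 24000:.0%} of a 24000 BTU/h air conditioner")
```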

Re:Watts X 3 = BTU (0)

Anonymous Coward | about a year and a half ago | (#43151815)

proper units nazi edit
1 Btu. = 0.29 Wh

Re:Watts X 3 = BTU (0)

Anonymous Coward | about a year and a half ago | (#43152101)

So the air-conditioner is a "6.96 kWh" air-conditioner. What does that mean? That it will remove 6.96 kWh of heat in its lifetime?

Better ways to spend your time (0)

Anonymous Coward | about a year and a half ago | (#43151687)

Don't build it, rent it. For the cluster size (number of cores) you are proposing, it will be much faster, easier, and cheaper to rent the resources you need from Amazon Web Services. Then use MIT StarCluster to build the software infrastructure, run your cluster jobs, and shut the whole thing down. If you want to learn about building small clusters, that's a fun academic exercise. If you want to get work done, rent a cluster by the hour.

Sounds like a waste of time (1)

sdguero (1112795) | about a year and a half ago | (#43151745)

Messing with old hardware to try and make it rack mountable? Pfft. Save the effort. Buy a few mid-range servers and you'll get similar compute performance compared to that energy hog of a cluster. If you really want to use that hardware, don't remount it. Just stack the servers in a corner, plug them in, and install ROCKS. It's still gonna be an energy hog and have crappy performance though.

Re:Sounds like a waste of time (0)

Anonymous Coward | about a year and a half ago | (#43152663)

Exactly. Disassembling everything from the cases is a waste of time. Seriously, OP, you think you can design better airflow for this concept than Dell already has when they made the machines? I doubt it. Plus, looking at those Optiplex workstations, which I've disassembled before, I know you're gonna have a bad time trying to mount that CPU fan. Not to mention you're going to have to somehow create a mounting mechanism with stand-offs for the motherboards, oh, and since it's Dell they won't follow any of the ATX/mATX/BTX standards for mounting holes.

tl;dr - It's not worth your time. Donate them to a school or something and just buy new systems.

It depends on the problem. (0)

Anonymous Coward | about a year and a half ago | (#43151835)

A few people are saying don't bother. I'd like to expand on that a little.

If your problem is embarrassingly parallel, you might get some good mileage out of your cluster. If not, don't bother.

cheaper and faster (0)

Anonymous Coward | about a year and a half ago | (#43151943)

it would be cheaper and faster to replace those 14 E8000s with 4 i7-3900s with DDR3 - old hardware should be retired; it's a pain to maintain, and worse yet,
no one carries those IDE/PATA drives anymore

Re:cheaper and faster (1)

CanHasDIY (1672858) | about a year and a half ago | (#43152457)

it would be cheaper and faster to replace those 14 computers you already own with 4 brand new computers whose processors alone cost more than $500 each

FTFY.

Strange idea of "cheaper" you've got there.

Raspberry Pi. (1)

faldore (221970) | about a year and a half ago | (#43152089)

Raspberry Pi.

http://www.tomshardware.com/news/Raspberry-Pi-Supercomputer-Legos-Linux,17596.html

14 cpu's from 5 years ago (1)

viperidaenz (2515578) | about a year and a half ago | (#43152127)

Why not give them away and buy 2 i7 26xx or better CPU's for the same performance? You could fit that in 1U instead of a 42U rack. No switch required, smaller UPS required, less aircon load, less electricity.

Microwulf (2)

xkrebstarx (1703372) | about a year and a half ago | (#43152179)

Check out the Microwulf work. It's not necessarily what you're looking for, but the community has produced some creative custom cases/racks. It might give you some fresh ideas.

I've built one, it works, but there are caveats (3, Interesting)

Anonymous Coward | about a year and a half ago | (#43152379)

We have a cluster at my lab that's pretty similar to what the submitter describes. Over the years, we've upgraded it (by replacing old scavenged hardware with slightly less old scavenged hardware) and it is now a very useful, reasonably reliable, but rather power-hungry tool.

Thoughts:

- 1GbE is just fine for our kind of inherently parallel problems (Monte Carlo simulations of radiation interactions). It will NOT cut it for things like CFD that require fast node-to-node communication.

- We are running a Windows environment, using Altair PBS to distribute jobs. If you have Unix/Linux skills, use that instead. (In our case, training grad students on a new OS would just be an unnecessary hurdle, so we stick with what they already know.)

- Think through the airflow. Really. For a while, ours was in a hot room with only an exhaust fan. We added a portable chiller to stop things from crashing due to overheating; a summer student had to empty its drip bucket twice a day. Moving it to a properly ventilated rack with plenty of power circuits made a HUGE improvement in reliability.

- If you pay for the electricity yourself, just pony up the cash for modern hardware, it'll pay for itself in power savings. If power doesn't show up on your own department's budget (but capital expenses do), then by all means keep the old stuff running. We've taken both approaches and while we love our Opteron 6xxx (24 cores in a single box!) we're not about to throw out the old Poweredges, or turn down less-old ones that show up on our doorstep.

- You can't use GPUs for everything. We'd love to, but a lot of our most critical code has only been validated on CPUs and is proving very difficult to port to GPU architectures.

(Posting AC because I'm here so rarely that I've never bothered to register.)
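
Picking up the first point in the comment above (1GbE being fine for inherently parallel Monte Carlo work): the reason it holds is that each node can be handed an independent chunk of samples and its own seed, and the only network traffic is the final merge. A minimal sketch of that carving-up, with a pi estimate standing in for the real simulation and made-up sample counts:

```python
# Split an embarrassingly parallel Monte Carlo job into independent per-node
# chunks: each node gets its own seed and sample count, and results are only
# combined at the end.  Estimating pi is a stand-in for the real simulation.
import random

def run_chunk(seed, samples):
    """Work one node would do: count random points landing inside the unit circle."""
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(samples))
    return hits, samples

# On a real cluster each entry would run on a different node; here we just loop.
chunks = [(node, 100_000) for node in range(14)]     # 14 nodes, distinct seeds
results = [run_chunk(seed, n) for seed, n in chunks]

hits = sum(h for h, _ in results)
total = sum(n for _, n in results)
print("pi ~=", 4 * hits / total)
```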

Summary of Responses (0)

Anonymous Coward | about a year and a half ago | (#43152537)

1. "As a commentator I reject your premise."
2. "You shouldn't want what you state you want."
3. "You should spend additional money to pay for more efficient machines rather than the computer you already have which are paid for, because money grows on trees, I place no value on your learning exercise, and I assume the electricity comes right out of your departmental budget exactly the same way purchase hardware would."
4. "I will ignore your very specific and detailed description of your setup, because screw you, that's why."

Re:Summary of Responses (1)

MerlynEmrys67 (583469) | about a year and a half ago | (#43153041)

3. "You should spend additional money to pay for more efficient machines rather than the computer you already have which are paid for, because money grows on trees, I place no value on your learning exercise, and I assume the electricity comes right out of your departmental budget exactly the same way purchase hardware would."

I always love that - I work for someone, and the goal is to get the largest value out of the money spent... regardless of whose budget it is. This is how we end up with a bureaucracy that does very stupid things like deploying old hardware that will cost more in power in 6 months than an updated environment will cost including new systems, their electricity and their cooling. The money for power does not grow on trees; it is a real cost to the whole organization.

Go ask the guys (2)

ArhcAngel (247594) | about a year and a half ago | (#43152573)

Go ask the guys over at Microwulf. [calvin.edu] They appear to have licked this particular challenge and link to others who have as well.

Racks... (1)

Hymer (856453) | about a year and a half ago | (#43153291)

Racks are built for air flow from front to back, so you'll need to turn the boards 90 degrees unless you remove the side panels... No, you do not want to alternate airflow; you want a hot side and a cool side, which makes cooling easier. If you can, try to vent the hot air out instead of cooling it down; it's cheaper. Btw, did you consider putting 4 or 5 boards vertically in 2 rows behind each other?