
Linux Clustering Hardware?

Kanagawa asks: "The last few years have seen a slew of new Linux clustering and blade-server hardware solutions; they're being offered by the likes of HP, IBM, and smaller companies like Penguin Computing. We've been using the HP gear for awhile with mixed results and have decided to re-evaluate other solutions. We can't help but notice that the Google gear in our co-lo appears to be off-the-shelf motherboards screwed to aluminum shelves. So, it's making us curious. What have Slashdot's famed readers found to be reliable and cost effective for clustering? Do you prefer blade server forms, white-box rack mount units, or high-end multi-CPU servers? And, most importantly, what do you look for when making a choice?"
  • Depends (Score:3, Insightful)

    by Anonymous Coward on Saturday May 21, 2005 @04:33PM (#12600714)
    Depends on

    (a) Your cost budget

    (b) Your work requirement: A Search engine is different from a weather forecast center.

    (c) Cost of ownership which includes maintenance etc

    • Re:Depends (Score:2, Insightful)

      by RogerWiclo ( 630886 )
      Congratulations! You get the honor of the first post AND mod points for insightful, and you didn't have any useful information.
      • > Congratulations! You get the honor of the first post AND mod points for insightful, and you didn't have any useful information.

        Congratulations!
        You get the honor of the second post AND mod points for insightful, and you didn't have any useful information.

        The grandparent was correct (and insightful, for the smart reader).
        There's no correct answer to a generic question like that.

  • when all his three-CPU XBoxes become Linux clusters!

    Bwahahahahaha!!!

    • Actually, he probably will. Wouldn't you want your console to be known as a supercomputer? It's all publicity. And a guaranteed free ad on Slashdot.
      • Not if they're making a loss on it (which I imagine they will be). If the XBox 360 does get used by a few companies as a Linux cluster - which isn't out of the question - then it'll be at MS's expense.

        And how was the grandparent post 'Offtopic'?
    • Yeah but this is the first time you get to watch them decay "live"....
  • by Fallen Kell ( 165468 ) on Saturday May 21, 2005 @04:45PM (#12600770)
    For the size and performance, they are hard to beat. A dual Opteron setup in a 1U rack case is a very powerful machine in and of itself. The bonus of using off-the-shelf components with no need for proprietary hardware or software also makes them very affordable. The added bonus is that you can simply get replacement parts from regular retailers.
    • by FuturePastNow ( 836765 ) on Saturday May 21, 2005 @04:54PM (#12600822)
      It's no dual Opteron server, but this Dan's Data [dansdata.com] article reviews what are probably the cheapest 1U servers you can buy. Definitely something to consider if you're going for cheap.
      • That's a cool looking rig, but it lists at about $850 (Australian, like $700 US) with no CPU, RAM, or drives.

        Here [dell.com] is a Dell PowerEdge 750 1U server with a P4 2.8GHz HyperThreaded, 256M and an 80G SATA (with room to add another drive and up to 4G of memory) for $499 shipped to your door. Yea, I'm a Dell fanboy, but even if I wasn't I would still see a pretty good price point in that box.

        Note that this specific box is pretty low end and could use some upgrades, but it is a complete machine ready to run, e
        • The real US price is more like $460 [newegg.com], at which the Dell is still a better deal. But that doesn't change the fact that it's a Dell ;)
        • Here is a Dell PowerEdge 750 1U server with a P4 2.8GHz HyperThreaded, 256M and an 80G SATA [...] for $499 shipped to your door.

          Sounds nice, but to upgrade that 2.8 GHz even to 3.0 GHz, Dell rips you off another $299, and so on. A 60% price increase for a 7% CPU speed increase sounds similar to what airlines are doing with their pricing. So, not a company I would trust with my business.
          • Actually the 'real' price to jump from a 2.8 to a 3.0 is $100 - they are just having a promotion where you can get the upgrade from the baseline chip (a Celeron) to the 2.8 for free - Intel is blowing out the 2.8GHz P4's so they are taking advantage of price breaks to clear the channel of remaining stock.

            Getting the 3.0 is an option, but it's a dumb option - perhaps intended to influence the purchase decisions towards the 2.8, again to clear the remaining stock. I was comparing a specific box and configur
      • And power requirements significantly increase the cost of your data centre. There's no point buying lots of cheap servers if you then have to spend a fortune on uprating your air conditioning and UPS.

    • Not to mention that if you need more power, you can drop in the new dual core Opterons without needing anything more than possibly a BIOS upgrade.
      (not to mention that the dual core Opterons actually consume less power than some of the early steppings of the single core ones)
    • Agreed, that's what my company is using. As an added bonus, their reliability is amazing as long as you go with tier 1 hardware manufacturers. We discovered after I'd installed lm_sensors on our cluster that one of the machines had a dead heatsink-fan and dead exhaust fans, and yet the CPU was only at 77 degrees C and was still running under full load. I'd like to see a Xeon do that.
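      A rough sketch of the kind of check we script around lm_sensors (illustrative only -- `sensors` output varies by chip and driver, and the temperature threshold here is made up):

        import re
        import subprocess

        TEMP_LIMIT_C = 70.0   # hypothetical alert threshold

        # `sensors` ships with the lm_sensors package; its free-form output
        # is inherently fragile to parse, so treat this as a starting point.
        out = subprocess.run(["sensors"], capture_output=True, text=True).stdout
        for line in out.splitlines():
            fan = re.match(r"\s*(fan\d+):\s*(\d+)\s*RPM", line)
            if fan and int(fan.group(2)) == 0:
                print("ALERT: %s reads 0 RPM (dead fan?)" % fan.group(1))
            temp = re.match(r"\s*(temp\d+):\s*\+?([\d.]+)", line)
            if temp and float(temp.group(2)) > TEMP_LIMIT_C:
                print("ALERT: %s at %s C" % (temp.group(1), temp.group(2)))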
    • Dual Core Dual Opterons can be had for the same or slightly less per cpu core. A fair bit less if you consider you can cut the number of interconnects needed in half. The interconnect gets expensive once you pass 48 nodes on a cluster.
  • Ammasso (Score:4, Interesting)

    by LordNimon ( 85072 ) on Saturday May 21, 2005 @04:50PM (#12600796)
    Ammasso [slashdot.org] is a startup that makes iWarp [wikipedia.org]-based RDMA [wikipedia.org] hardware that runs over gigabit ethernet. Their technology is like InfiniBand, but much cheaper and almost as fast. Their drivers and libraries also provide MPI [wikipedia.org] and DAPL [sourceforge.net] support. They only support Linux (all 2.4 and 2.6 kernels), and they're way ahead of their competition in terms of performance, product availability, and support. Once you've decided on the servers, I strongly recommend you use Ammasso's hardware for the interconnects. Your hardware vendor may even bundle it with their systems - be sure to ask about that.
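    Whichever interconnect you end up with, measure it yourself. A minimal MPI ping-pong latency sketch -- mpi4py and the 4 KB payload are my own assumptions here, not anything Ammasso ships; run it with something like `mpirun -np 2 python pingpong.py`:

      from mpi4py import MPI
      import time

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      buf = bytearray(4096)   # 4 KB payload
      reps = 1000

      comm.Barrier()
      start = time.time()
      for _ in range(reps):
          if rank == 0:
              comm.Send(buf, dest=1, tag=0)
              comm.Recv(buf, source=1, tag=0)
          elif rank == 1:
              comm.Recv(buf, source=0, tag=0)
              comm.Send(buf, dest=0, tag=0)
      if rank == 0:
          elapsed = time.time() - start
          print("avg round trip: %.1f us" % (elapsed / reps * 1e6))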
  • Check out Xserve (Score:5, Informative)

    by Twid ( 67847 ) on Saturday May 21, 2005 @04:50PM (#12600798) Homepage
    At Apple we sell the Xserve Cluster node [apple.com] which has been used for clusters as large as the 1,566 node COLSA [apple.com] cluster. We also sell it in small turn-key [apple.com] configurations.

    Probably the most interesting news lately for OS X for HPC is the inclusion of Xgrid with Tiger. Xgrid is a low-end job manager that comes built-in to Tiger Client. Tiger Server can then control up to 128 nodes in a folding@home job management style. I've seen a lot of interest from customers in using this instead of tools like Sun Grid Engine for small clusters.

    You can find some good technical info on running clustered code on OS X here [sourceforge.net].

    The advantage of the Xserve is that it is cooler and uses less power than either Itanium or Xeon, and it's usually better than Opteron depending on the system. In my experience almost all C or Fortran code runs fine on OS X straight over from Linux with minimal tweaking. The disadvantage is that you only have one choice: a dual-CPU 1U box - no blades, no 8-CPU boxes, just the one server model. So if your clustered app needs lots of CPU power it might not be a good fit. For most sci-tech apps, though, it works fine.

    If you're against OSX but still like the Xserve, Yellow Dog makes an HPC-specific [terrasoftsolutions.com] Linux distro for the Xserve.

    • Oh my, cough. All those hard hours of surfing Slashdot in search of a way to plug company hardware. First I read a post by a Dell guy and now a post by an Apple guy. Boy, it's going to be a big night in SF! - I hope I'm not being an a**, but it is worth commenting on.
      • Re:Check out Xserve (Score:3, Informative)

        by Twid ( 67847 )
        Well, you can think what you want, but I'm not a paid Apple shill surfing Slashdot for posting opportunities. I saw the story go up, and since I do cluster pre-sales at work, I thought I had some topical comments. I even mentioned running Linux on the Xserves. :)

      • I agree, I remember the day when you could depend on Slashdot to give you nothing but uninformed opinion.

        Next thing you'll see some post saying "I am a lawyer, actually."
  • by municio ( 465355 ) on Saturday May 21, 2005 @04:50PM (#12600806)
    At the current time I would choose blades based on dual core Opterons for many reasons. Some of the main ones are:

    - Price
    - Software availability
    - Power consumption
    - Density

    Brand depends on what your company is comfortable with. Some companies will want the backing of IBM, Sun or HP. Others will be quite satisfied with in-house built blades. These days it's quite easy to build your own blade; some motherboard makers (for example Tyan) take care of almost all the components and complexity. But again, maybe the PHBs at your gig will run for the hills if you so much as mention the word motherboard.
    • Blades rarely make sense. Most blades available are not interconnected in any meaningful way, so you're really looking at independent systems, just of a small form factor. Why use blades? Well, they're space efficient. Great, unless you're actually using a lot of them, and don't have too much space. Because if you don't have too much space, you probably don't have too much cooling. If you don't have too much cooling, then you've totally killed your argument for blades based upon space. A rack full of
      • Not sure I follow what you mean by "not interconnected in any meaningful way". The current line of HP blades not only offers an onboard Cisco switch that is certainly going to save ports on your main data center switches, but an onboard Brocade SAN switch has been announced as well that is going to save ports on the production SAN switches. Not to mention the cable savings of each option.
        • What I would think he means is that they are still connected using GbE. Blades give the perfect opportunity to use a higher speed interconnect since you already have backplanes that the blades connect to.
    • Reading the Google article, I figured that they'd like them too, for much the same reasons -- for them, dual core looks far nicer than hyperthreading, which in turn is better than deep pipelines (which are almost useless given the kind of software that Google uses in their boxes). Combine that with the lower power consumption (which adds up when multiplied by 15,000 processors), and you've got a nice sales pitch.

      In terms of using boards on open trays -- why not? as long as you don't have airflow proble

    • And the regular harddrive failures leave you secure in a job.
  • Pictures? (Score:1, Interesting)

    by Anonymous Coward
    I would love to see a picture of the google hardware...
    • by Anonymous Coward
      Oh yeah, show me a picture of your rack baby!
  • IBM zSeries (Score:1, Interesting)

    by Anonymous Coward
    Definitely worth checking out. It's one bad-ass Linux server -- and probably the only one to offer instruction execution integrity. That's a fancy way of saying 2+2 will always equal 4 on zSeries -- because everything is executed twice and compared at the hardware level -- or it won't execute.

    If you need this, you need it bad.
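    A software analogy of that execute-twice-and-compare idea -- this is just the concept in miniature, not how the zSeries hardware actually implements it:

      # Run the same computation twice and refuse any result that doesn't match.
      def checked(fn, *args):
          first = fn(*args)
          second = fn(*args)
          if first != second:
              raise RuntimeError("results disagree: %r vs %r" % (first, second))
          return first

      print(checked(sum, [2, 2]))   # prints 4, or raises if something flipped a bit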
    • If you need this, you need it bad.

      Conversely, if you don't need it (because you've already decided on a cluster), then you really don't need it.
    • Z-series and clusters are diametrically opposed to each other. A Z-series processor is not any faster than an Intel processor--maybe even slower. So if you've got a 32-way mainframe (which is definitely on the big side), you've got no more processing power than a 32-way PC.

      So what did you get for your $2 million? Two things: incredible reliability and amazing I/O bandwidth. You can fully saturate all 32 processors for weeks at a time if you wish, with uptime measured in *years*, 24x7. And that's gr

  • by devitto ( 230479 ) on Saturday May 21, 2005 @04:53PM (#12600818) Homepage Journal
    In the paper, it goes into tedious detail on the architecture and low-level operation of the application. Why do you think it does this? Because it is the application that *totally* dictates the solution: they chose lots of systems for reliability, and they made those systems "desktop class" because they didn't get much extra from using super-MP/MC systems.

    It's a great article; I strongly suggest you read it properly, and do what they said they did - evaluate need against what's available.
  • well... (Score:5, Informative)

    by croddy ( 659025 ) on Saturday May 21, 2005 @04:53PM (#12600820)
    at the moment we have a rack with Dell PowerEdge 1750's. They're very nice for our OpenSSI cluster, with the exception of the disk controller. Despite assurances by Dell that the MegaRAID unit is "linux supported", we're now stuck with what's got to be the worst SCSI RAID controller in the history of computing.

    we're hoping that upgrading to OpenSSI 1.9 (which uses a 2.6 kernel instead of the 2.4 kernel in the current stable release) will show better disk performance... but... yeah.
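    for what it's worth, a crude sequential-write check like the one below (path and sizes are made up) is a quick way to put a number on "better disk performance" -- a real benchmark like bonnie++ or iozone is still the right tool:

      import os, time

      PATH = "/tmp/disk_test.bin"   # hypothetical test file on the RAID volume
      BLOCK = 1024 * 1024           # 1 MB per write
      COUNT = 256                   # 256 MB total

      buf = os.urandom(BLOCK)
      start = time.time()
      with open(PATH, "wb") as f:
          for _ in range(COUNT):
              f.write(buf)
          f.flush()
          os.fsync(f.fileno())      # make sure it actually hit the disk
      elapsed = time.time() - start
      os.unlink(PATH)
      print("sequential write: %.1f MB/s" % (BLOCK * COUNT / 1e6 / elapsed))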

  • I run a 4096 node NES beowulf cluster. Works great!
  • by Anonymous Coward on Saturday May 21, 2005 @04:57PM (#12600838)
    We can't help but notice that the Google gear in our co-lo appears to be off-the-shelf motherboards screwed to aluminum shelves.

    That would be typical of a prima donna company like Google that's floating in cash from their IPO.

    Around here, we don't waste money on fancy designer metals like aluminum. Salvaged wooden shipping pallets work just fine for us; they're free. And screws!? No need to waste resources on high-end fasteners when you can pick up surplus baling wire for less than a penny per foot. A couple of loops of wire and a few twists are all you need to assemble a working server.

    The dotcom days are over. There's no reason to throw money around like there's no tomorrow.

  • by Anonymous Coward on Saturday May 21, 2005 @05:11PM (#12600899)
    My $.02 worth ...

    I build Linux and Apple clusters for biotech, pharma and academic clients. I needed to announce this because clusters designed for lifesci work tend to have different architecture priorities than, say, clusters used for CFD or weather prediction :) Suffice it to say that bioclusters are rate limited by file I/O issues and are tuned for compute farm style batch computing rather than full-on Beowulf-style parallel processing (a toy sketch of that batch pattern follows at the end of this post).

    I've used *many* different platforms to address different requirements, scale out plans and physical/environmental constraints.

    The best whitebox vendor that I have used is Rackable Systems (http://www.rackable.com/ [rackable.com]). They truly understand cooling and airflow issues, have great 1U half-depth chassis that let you get near blade density with inexpensive mass market server mainboards, and they have great DC power distribution kit for larger deployments.

    For general purpose 1U "pizza box" style rackmounts I tend to use the Sun V20z's when Opterons are called for but IBM and HP both have great dual-Xeon and dual-AMD 1U platforms. For me the Sun Opterons have tended to have the best price/performance numbers from a "big name" vendor.

    Two years ago I was building tons of clusters out of Dell hardware. Now nobody I know is even considering Dell. For me they are no longer on my radar -- their endless pretend games with "considering" AMD based solutions are getting tired, and until they start shipping some Opteron based products they're not going to be a player of any significant merit.

    The best blade systems I have seen are no longer made -- they were the systems from RLX.

    What you need to understand about blade servers is that the biggest real savings you get for the added price come from the reduction in administrative burden and ease of operation. The physical form factor and environmental savings are nice but often not as important as the operational/admin/IT savings.

    Because of this, people evaluating blade systems should place a huge priority on the quality of the management, monitoring and provisioning software provided by the blade vendor. This is why RLX blades were better than any other vendor even big players like HP, IBM and Dell.

    That said though, the quality of whitebox blade systems is usually pretty bad -- especially concerning how they handle cooling and airflow. I've seen one bad deployment where the blade rack needed 12 inch ducting brought into the base just to force enough cool air into the rack to keep the mainboards from tripping their emergency temp shutdown probes. If forced to choose a blade solution I'd first grade on the quality of the management software and then on the quality of the vendor. I am very comfortable purchasing 1U rackmounts from whitebox vendors but I'd probably not purchase a blade system from one. Interestingly enough I just got a Penguin blade chassis installed and will be playing with it next week to see how it does.

    If you don't have a datacenter, special air conditioning or a dedicated IT staff then I highly recommend checking out OrionMultisystems. They sell 12-node desktop and 96-node deskside clusters that ship from the factory fully integrated, and best of all they run off a single 110V electrical outlet. They may not win on pure performance when going head to head against dedicated 1U servers, but Orion by far wins the prize for "most amount of compute power you can squeeze out of a single electrical outlet..."

    I've written a lot about clustering for bioinformatics and life science. All of my work can be seen online here: http://bioteam.net/dag/ [bioteam.net] -- apologies for the plug but I figure this is pretty darn on-topic.

    -chris
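    A minimal sketch of the compute-farm pattern mentioned above -- independent tasks, no inter-task communication. On a real cluster this goes through a scheduler (SGE, PBS, etc.); the single-machine multiprocessing version below, with an invented stand-in task, just shows the shape of the workload:

      from multiprocessing import Pool

      def one_job(n):
          # stand-in for a real per-input job (a BLAST run, an alignment, ...)
          return n, sum(i * i for i in range(n))

      if __name__ == "__main__":
          inputs = range(100000, 100100)        # hypothetical work list
          with Pool(processes=8) as pool:       # one worker per core/node slot
              for n, result in pool.imap_unordered(one_job, inputs):
                  print(n, result)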


    • If I had mod points, this is definitely where I'd put one of them.
    • In our experience, the number of cpus per rack is limited by data center cooling. Blades are great to look at, and somewhat easier to provision, but having 60 - 80 cpus in 7U chassis isn't any better than having 60 - 80 dual cpu 1U pizza boxes. Our latest (commercial) cluster is 604 Dell boxes - PESC 1425 for the low-end jobs, 1850's for the heavy lifting (12 GB RAM).
  • by oh_the_humanity ( 883420 ) on Saturday May 21, 2005 @05:17PM (#12600920)
    When doing clustering and supercomputer work, cheap isn't always the best way to go, if you take into consideration that 5-10% of nodes will either not be functioning correctly or will have some sort of hardware failure. The more you cluster, the more manpower it takes to repair these nodes. If you buy 1000 $499 colo machines, and 50 of them are failing at any given time, it becomes very time consuming and tedious to keep the cluster going. Spending the extra bucks on high quality hardware will save you money and headache in the end. I always use this analogy when talking to older folks who want to get started in computers: spend the extra bucks to get a new machine. The extra money you spend on buying new, good equipment will more than pay for itself compared to the amount of frustration you get from buying old, used, slow computers. My $.02
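    A back-of-envelope version of that math -- the failure rate is the 5% figure above, the repair time is a made-up number:

      nodes = 1000
      unavailable_fraction = 0.05      # lower end of the 5-10% figure above
      hours_per_swap = 1.5             # hypothetical diagnose + reimage time

      down_at_once = nodes * unavailable_fraction
      print("expected nodes down at any time: %.0f" % down_at_once)
      # if each downed node needs hands-on attention roughly once a week:
      print("weekly repair labor: %.0f hours" % (down_at_once * hours_per_swap))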
    • if you buy 1000 $499 colo machines, and 50 of them are failing at any given time, it becomes very time consuming and tedious to keep the cluster going.

      Actually, the monitoring and swapout/repair of hardware is the same whether it's cheap hardware or expensive hardware. I've had a fair number of hardware failures on an IBM cluster. Once monitoring catches it and notifies me, I get the new hardware in place and use xcat and kickstart to push everything back out to the replaced node(s). The most tim

      • The difference with cheaper, slower hardware is that you need more of it than with more expensive equipment. More computers means more system administrators.

        I've seen people say "Hey, I can make a cluster out of my 16 old P3-400MHz computers, it's cheap". You can, sure, but a single 3GHz dual-xeon will both be faster and save you time, electricity, and space.

        A cheap upfront price may lead to far higher support costs.
    • The extra money you spend on buying new, good equipment will more than pay for itself compared to the amount of frustration you get from buying old, used, slow computers. My $.02

      I would agree that in many cases for clusters, you want high quality new computers. Why deal with a bunch of used P3 class computers, when a cluster of AMD64 computers could get more done with 1/4 the units.

      However, if someone wants an inexpensive computer to browse the internet and do email, I do recommend getting a used
  • What about software?

    What are people's experiences with OpenSSI vs OpenMosix?

  • Obviously (Score:5, Funny)

    by iamdrscience ( 541136 ) on Saturday May 21, 2005 @05:41PM (#12601040) Homepage
    Isn't it obvious that the best technology is blade servers? I mean, c'mon fucking BLADE servers! It's far and away got the coolest name of any of them. The only way you could beat them would be if some company came out with something cooler like ninja star servers, now that would be awesome.
    • Yeah.

      HP, instead of calling its big boxes Superdomes, should have used Thunderdome!
      • Personally, I think they should have stuck with the "Halfdome" name for the single cabinet boxes, and reserved "Superdome" for the multi-cabinet boxes.

        Put two (or more) halfdomes together, and you get a Superdome? Makes sense to me.
    • by mikeage ( 119105 ) <{slashdot} {at} {mikeage.net}> on Sunday May 22, 2005 @06:05AM (#12604008) Homepage
      Hi, this post is all about blades, REAL BLADES. This post is awesome. My name is Mikeage and I can't stop thinking about blades. These machines are cool; and by cool, I mean totally sweet.

      Facts:

      1. Blades are servers.

      2. Blades fight ALL the time.

      3. The purpose of the blade is to flip out and kill people.

      Testimonial:

      Blades can kill anyone they want! Blades cut off heads ALL the time and don't even think twice about it. These guys are so crazy and awesome that they flip out ALL the time. I heard that there was this blade who was eating at a diner. And when some dude dropped a spoon the blade killed the whole town. My friend Mark said that he saw a blade totally uppercut some kid just because the kid opened a window.

      And that's what I call REAL Ultimate Power!!!!!!!!!!!!!!!!!!
  • I think it really depends on use and in your facility. For example:
    • Do you host your servers at your own facility or do you host them in a data-center where rack space is a premium?
    • Is your application CPU intensive, disk intensive, or memory intensive (e.g. scientific application, web servers, database servers, etc.)?

    We run a very large website. We have a 1U dual Athlon MP box for administration and log processing, and a 2U dual Xeon box with a 6 disk RAID 10 solution for apache + mysql. It's a great s

  • by astrojetsonjr ( 601602 ) on Saturday May 21, 2005 @05:56PM (#12601101)
    Currently 65 AMD mobos (1 master, 64 nodes) on Ikea shelves. Cheap, easy to swap out, good air flow around the hardware. The shelves are wood, so everything just sits on them. It would be nice to find power supplies with extra connections to power more than one system.
  • by Anthony Liguori ( 820979 ) on Saturday May 21, 2005 @06:13PM (#12601191) Homepage
    It's tempting to just go buy a bunch of motherboards on ebay and some bread racks to build your cluster. It's certainly the cheapest and most flexible approach.

    However, it takes a special type of person to manage that kind of hardware. You have to deal with a high rate of failure, you have to be extra careful to avoid static problems, and you've got to really think through how you're going to wire things.

    On the other hand, if you get something like an IBM BladeCenter, you have a very similar solution that may cost a little more but is significantly more reliable. More importantly, blades are just as robust as a normal server. You don't have to worry about your PHB not grounding himself properly when he goes to look at what you've set up.

    I expect blades are going to be the standard form factor in the future. It just makes sense to centralize all of the cooling, power, and IO devices.
    • Google are doing to I.T. what Arkwright did to textiles during the Industrial Revolution and Ford did to manufacturing at the turn of the century. The vast majority of those in I.T. who don't follow their basic model are going out of business in the near future. There will only be a few niche players and a few Google like companies.

      Anyway, the magic formula for the system to use is MIPS per Watt, or MFLOPS per Watt. The power requirements and heat produced by high density computers is a real problem to dea
  • SunFire Servers (Score:5, Informative)

    by PhunkySchtuff ( 208108 ) <kai@automatic[ ]om.au ['a.c' in gap]> on Saturday May 21, 2005 @06:26PM (#12601292) Homepage
    SunFire v20z or v40z Servers.
    http://www.sun.com/servers/entry/v20z/index.jsp [sun.com]
    http://www.sun.com/servers/entry/v40z/index.jsp [sun.com]
    They're the entry-level servers from Sun, so they have great support. They're on the WHQL List, so Windows XP, 2003 Server and the forthcoming 64-bit versions all run fine.
    They also run Linux quite well, and as if that wasn't enough, they all scream along with Solaris installed.
    The v20z is a 1- or 2-way Opteron box in a 1RU case. The v40z is a two- or four-CPU box that is available with single or dual core Opterons.
    Plus, they're one of the cheapest, if not the cheapest, Tier 1 Opteron servers on the market.
    • Re:SunFire Servers (Score:3, Informative)

      by eryanv ( 719447 )
      The Sun 1U servers are great products for clusters. Last year our research group purchased around 72 v60x's (dual Xeons) and they've never given us a problem. Sun doesn't sell this model anymore as they've pretty much dropped Intel for AMD, but they run Linux with ClusterMatic just fine. Not only that, but Sun has the best technical support I've ever encountered; I don't mind when we do need to call them up to fix something.
      Besides that, we also have several clusters of midsized towers from a local company,
    • Yeah, the V20z screams alright; after boot Sun has the fans in them pegged by the BIOS at 15K RPM. The high pitched shriek they make is annoying even in the noisiest datacenter environment.
  • by MilesParker ( 572840 ) on Saturday May 21, 2005 @07:23PM (#12601729)
    We wanted to set up a small 4-8 node cluster mostly for testing and as a compute resource. For various political reasons we were looking at an IBM solution. At my urging we went for dual Opterons in the 1U format. And the price seemed right. Here's where it gets weird, *after* the IBM sales people step in. Going through it piece by piece I thought I could put a decent system together - with our substantial IBM discount -- for $14k. By the time we got the quote with all of the crap they thought we needed it was $34k! Just to give the flavor, the rack and assorted pieces were $4k. But that's not the funny part. We were like, "well, for this much money, we assume you are putting it together for us." "Um no... didn't you see the services quote that went along with this?" We hadn't -- the services/support quote came in at $60k! So at this point we asked, can't we just buy the individual pieces we need and put it together ourselves? "Well, yes, but then it won't be an IBM e1350 cluster 'solution'..." "Yeah, well, we don't really care what it's called, it'll be just as fast and 75% cheaper..." At that time they were getting rid of their 325 servers for way cheap and we actually put that system together for as cheap as a whitebox, and probably as cheap as if we'd tried to put it together ourselves. The moral, I guess, is that if you have to deal with the big vendors, have a very sharp pencil handy!
  • It Depends... (Score:4, Informative)

    by xanthan ( 83225 ) on Saturday May 21, 2005 @07:35PM (#12601814)
    I think most of my posts to Slashdot begin with "It Depends..." =)

    Answering this... It Depends...

    What is your cluster's tolerance to failure? If a node can fail, then you have the option of buying a lot of cheap hardware and replacing as necessary. This is the way that most big web farms work.

    What are your cluster machine requirements? Do you have heavy I/O? Does cache memory matter? Do you need a beefy FSB and 64G of RAM per node? You may find that spending $3000/node ends up being cheaper than buying three $1000 nodes because the $3000 node is capable of processing more per unit time than the three $1000 units are (a toy comparison follows at the end of this post).

    What is your power/rack cost constraint? Google is an invalid comparison simply because of their size. They boomed when a lot of people were busting and co-lo's were hungry for business. I'd bet they got space, power, and pipe for a song. You are not Google and I doubt you have a similar deal. Thus, you may find that there is a middle ground where it is better to get a more powerful machine to use less rack space/power.

    In the end, you have to optimize between these three variables. You'll probably find that the solution, for you specifically, is going to be unique. For example, you may find that: node failure is an option since the software will recover, power/rack costs are sufficiently high that you have to limit yourself, and CPU power with a good cache is crucial, but I/O isn't. This means getting a cheaper Athlon based motherboard with so-so I/O and cheesy video is a good choice since it frees your budget for a fast CPU. Combine with the cheapest power supply the systems can tolerate and PXE boot and you have your ideal system.

    Best of luck.
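    The toy comparison promised above -- every number here is invented purely for illustration:

      POWER_COST_PER_WATT_YEAR = 1.0    # $/W/year for power + cooling (assumed)
      RACK_COST_PER_U_YEAR = 100.0      # $/U/year of colo space (assumed)

      def cost_per_unit_work(price, watts, rack_u, relative_speed, years=3):
          # total cost of ownership divided by how much work the box does
          total = price + years * (watts * POWER_COST_PER_WATT_YEAR
                                   + rack_u * RACK_COST_PER_U_YEAR)
          return total / relative_speed

      fat = cost_per_unit_work(price=3000, watts=350, rack_u=1, relative_speed=4.0)
      cheap = cost_per_unit_work(price=1000, watts=200, rack_u=1, relative_speed=1.0)
      print("fat node:   $%.0f per unit of throughput over 3 years" % fat)
      print("cheap node: $%.0f per unit of throughput over 3 years" % cheap)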
  • by sirwired ( 27582 ) on Saturday May 21, 2005 @08:19PM (#12602070)
    Obviously, there ain't no such thing as a free lunch. It all depends on what you want.

    For sheer processor density, if you need complete servers, the IBM BladeCenter servers offer the most "Bang" (Fast), and they are fairly reliable and compact (Good). They are not cheap. They do have better density than the HP Blades. WETA Digital (Peter Jackson's FX company) uses them.

    That will get you 2 server processors, two server-class IDE drives + 2 GigE ports + all peripherals (Power, KVM, CD, Management, GigE switches, SAN switches if you want, etc.) per one-half of a rack unit. This is well over twice the density of pizza box units when you count external peripherals like the networking switch, KVM, etc.

    Google's setup is Fast and Cheap, but their hardware reliability is quite lousy. However, their clustering setup is specifically designed around expected hardware failure.

    (As a side note, Google no longer uses bare boards for their basic nodes. They use fairly small and slow nodes with a LOT of RAM from some company I can't remember. They look kind of like over-sized hard drives.)

    If you need crap-loads of raw computing power in a relatively compact, power-efficient chassis (1024 processors/rack), IBM's Blue Gene simply cannot be beat. This is Capital-F Fast, and Capital-G Good, but you certainly can't afford one. (While it provides more cycles for the watt and dollar than any other setup, it isn't exactly as simple as a Beowulf cluster.) And you would still need to buy pesky things like large GigE switches and storage. Check out the current issue of the IBM Journal of Research and Development on IBM's website (or your local university library) for all sorts of juicy details.

    [Yes, I am an IBM shill]

    So realistically, you really need to look at your application. If it can tolerate failure of any individual node on a regular basis, get the cheapest stuff you can find that will fit in your space and CPU requirements. If node reliability is important, but space is not, 1U servers from any of the three major vendors (or Apple, if that is your thing) will do the job just fine. If you need reliability and space, then honestly IBM's BladeCenter boxen are the best, as long as they fit your application. (I am not just speaking as an IBM'er here... they really are the best blades out there.)

    SirWired
  • I posted an "Ask Slashdot" question almost identical to this one about 6-months ago - but it was rejected (go figure). Regardless, I got what I wanted from this one! Thanks folks.
  • Check out VirtualIron. [virtualiron.com]

    These guys make a hypervisor that sits on top of InfiniBand between multiple blades and turns them into one big shared-memory Linux NUMA system, up to around 32 CPUs and probably even bigger eventually. Plus they can dynamically move CPUs and memory in and out of each virtual machine, split the group into multiple smaller virtual machines, etc.

    I have no connection to them, just saw them at one of the east coast linux tradeshows. I think their hypervisor will eventually be superced
  • by jimshep ( 30670 ) on Sunday May 22, 2005 @09:09AM (#12604437)
    We recently purchased a 13 node dual Opteron cluster from Penguin Computing after evaluating clusters from them, Dell, IBM, HP and single memory image machines from SGI. Penguin provided a solution with the best price/performance as well as ease of use. They let us benchmark our codes on some of their test clusters to determine whether the Opteron or Xeon based clusters would be better suited to us. My favorite aspect of their system besides price was the plug-and-play setup. The cluster was shipped to our site fully assembled, configured, and tested. All we had to do was roll it out of the crate, plug in the power and network connections, configure the network settings in the OS, and start running our simulations. All of the solutions from the other vendors would have required significant setup time on our behalf unless we spent a large amount of money for services. I also really like the Scyld operating system that was included with the cluster. It makes the cluster work almost like a single image memory machine. Scyld on the compute nodes is set up to download the kernel image and necessary libraries from the master node at boot-up, so any changes made to the master node automatically propagate to the compute nodes. After several months of running simulations, the cluster has not given us any problems. It has been very reliable (never needed a reboot). Their technical support has been very responsive about answering questions we had with the initial startup.
  • Take a look at The Value Cluster Project [clusterworld.com].
