
Mini-ITX Clustering

michael posted more than 10 years ago | from the there's-no-i-in-team dept.

Hardware

NormalVisual writes "Add this cluster to the list of fun stuff you can do with those tiny little Mini-ITX motherboards. I especially like the bit about the peak 200W power dissipation. Look Ma, no fans!! You may now begin with the obligatory Beowulf comments...."


348 comments


Imagine.. (3, Funny)

hookedup (630460) | more than 10 years ago | (#8400081)

A beowulf cluster of these? There, done... and it felt good!

Re:Imagine.. (0)

Anonymous Coward | more than 10 years ago | (#8400185)

Just a single one of these. It'd be like me on a Friday night!

Re:Imagine.. (0)

Anonymous Coward | more than 10 years ago | (#8400192)

why imagine... just make one...

you can't complain now because you don't have a vast underground bunker to build your massive cluster in.

..although I've never quite figured out just what to do with such a thing if it were to be built. Could you run seti@home on a Beowulf cluster?

Please explain this (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#8400342)

Ok. Please someone explain this [bbc.co.uk] . Has BBC really gone bonkers or is it for real?

Pentagon officials have confirmed that Guantanamo detainees may still be kept in detention, even if they are found not guilty by a military tribunal.

Re:Please explain this (-1, Offtopic)

pe1rxq (141710) | more than 10 years ago | (#8400381)

Nah... they're just reporting what the rest of the world already knew. Even in the unlikely event of a fair trial they are screwed.

Jeroen

Explanation: abandon all hope (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#8400402)

explain this

Executive Branch Out Of Control.

Nothing on CNN (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#8400451)

Not a word on CNN.

Typical BBC left-wing crap.

Re:Imagine.. (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#8400366)

Re:Imagine.. (4, Funny)

iminplaya (723125) | more than 10 years ago | (#8400373)

Too Many Users

Evidently they didn't cluster enough...

You asked for it (-1)

scumbucket (680352) | more than 10 years ago | (#8400084)

What about a Beowulf cluster of these?

Mini ITX Cluster Fuck (-1)

Anonymous Coward | more than 10 years ago | (#8400086)

If I only had a Beowolf cluster of these

If you care about this kind of thing (-1, Flamebait)

Anonymous Coward | more than 10 years ago | (#8400087)

It most likely means you are a GAY.

Wow! (-1, Redundant)

Dark Lord Seth (584963) | more than 10 years ago | (#8400089)

Imagine an OpenMOSIX cluster of these babies!

Imagine... (5, Funny)

Chmarr (18662) | more than 10 years ago | (#8400091)

... a beowulf cluster of obligatory beowulf cluster comments.

Image a Beowulf Cluster of these babies! (-1)

Can it run Linux (664464) | more than 10 years ago | (#8400098)

Could they run Linux?

Floating point performance (5, Interesting)

October_30th (531777) | more than 10 years ago | (#8400101)

I thought about this some time ago.

I decided against a mini-ITX cluster because the floating point performance (why else would you build a cluster?) of VIA CPUs is just abysmal.

Is there any reason why there are no P4 or AMD mini-ITX mobos around?

Re:Floating point performance (3, Interesting)

wed128 (722152) | more than 10 years ago | (#8400121)

I would imagine they run too hot for such a small form factor... this is just a guess, so treat it as such.

Re:Floating point performance (5, Insightful)

Short Circuit (52384) | more than 10 years ago | (#8400151)

Not to mention that mini-ITX is VIA-proprietary technology. At least, I think it is.

And VIA markets their own line of CPUs for use in that scenario.

However, I wouldn't mind seeing Pentium-M or mobile Athlons placed on mini-ITX boards.

Re:Floating point performance (1)

IWorkForMorons (679120) | more than 10 years ago | (#8400355)

I've seen AMD and P4 Shuttle systems that are that size. They may not reach the under-12 cm boards that VIA is starting to make, but mini boards are available for AMD and P4.

This [logisysus.com] was found after 2 seconds with Google.

Re:Floating point performance (1, Interesting)

Anonymous Coward | more than 10 years ago | (#8400139)

the power consumption of the desktop p4/amd chips would kind of defeat the purpose of building one from these

Re:Floating point performance (-1, Troll)

Anonymous Coward | more than 10 years ago | (#8400145)

IMAGINE A BEOWULF CLUSTER OF THESE!




Re:Floating point performance (5, Informative)

J3zmund (301962) | more than 10 years ago | (#8400196)

They might be on their way. Here's [commell.com.tw] a 1.7 GHz Pentium M.

Re:Floating point performance (0)

Anonymous Coward | more than 10 years ago | (#8400418)

But Pentium M's aren't cost-effective...

Re:Floating point performance (2, Interesting)

0x1337 (659448) | more than 10 years ago | (#8400221)

The reason you don't see any Mini-ITX mobos around the Athlon is power consumption. I recently built a mini-ATX computer around a T-Bird (1 GHz; should have picked something less of an oven), and the mini-ATX power supply crapped out on me, making me buy a REAL ATX power supply. Gah, still can't find a 300 Watt mini-ATX supply.

Btw, you're wrong - there ARE P4-based mini-ITX mobos.

Re:Floating point performance (1)

hawkbug (94280) | more than 10 years ago | (#8400380)

Using a t-bird will do that to you for sure. Consider using an XP 1500+ based on the thoroughbred core... and your energy consumption will greatly decrease.

Re:Floating point performance (1)

stratjakt (596332) | more than 10 years ago | (#8400229)

Mini-ITX really isn't THAT much smaller than some FlexATX boards, most notably Shuttle's offerings...

You just have to find a way to dissipate the heat, the heatpipe setup in Shuttle's latest line of barebones is pretty clever.

As for the Mini-ITX cluster, it's kind of a joke. You may as well just cluster old 486 boards; it'd be cheaper, since they can be had for a buck or so...

You'd probably have to cluster a dozen of them together to equal one 3.6 GHz P4.

Re:Floating point performance (1)

Unoti (731964) | more than 10 years ago | (#8400378)

For SETI, I'd need about 33 of my Pentium 133's to equal the performance of one of my AMD 2500's.

Re:Floating point performance (5, Informative)

-tji (139690) | more than 10 years ago | (#8400264)

There are P4 Mini-ITX systems available: Pentium 4 [silentpcreview.com]

But, most mini-itx systems are very small in size, and strive for quiet or silent operation. So, there are obvious problems with the P4's heat/power requirements. Perhaps a better solution is the Pentium-M in a mini-itx form factor. It has pretty good performance, at a low power/heat level: Pentium M [commell.com.tw] . But, most of the Pentium-M boards are intended for industrial or OEM use, so they are hard to find in retail, and are pretty expensive.

Re:Floating point performance (4, Informative)

niko9 (315647) | more than 10 years ago | (#8400338)

How about Fujitsu's mini-ITX form factor board for the Pentium M proc? Runs passive (huge heatsink, but passive nonetheless) and uses fewer electrons.

Couldn't find a link though, sorry.

Re:Floating point performance (2, Interesting)

October_30th (531777) | more than 10 years ago | (#8400372)

Sounds excellent.

In fact, a Pentium M platform would be a perfect choice as long as the mobile Athlon mobos are impossible to find.

Does anyone have a link?

Re:Floating point performance (1)

F34nor (321515) | more than 10 years ago | (#8400374)

HEAT.

Re:Floating point performance (2, Informative)

a20vertigo (263583) | more than 10 years ago | (#8400441)

There are supposedly some Pentium M boards around, as well as P4s... in fact, if you look at Mini-ITX.com's store, they're selling a P4 mini-ITX board. If only its one slot were AGP and not PCI, it would make a hell of a small little gaming box...

Re:Floating point performance (5, Informative)

mi (197448) | more than 10 years ago | (#8400442)

the floating point performance (why else would you build a cluster?)
  • To crack encryption?
  • To compile big projects?
  • To compress huge files?

The floating point is just a convenience. Almost any algorithm can be modified to work with fixed point precision -- and without loss of performance.

Of course, many people will insist they need FP to be able to count dollars and cents -- it doesn't even occur to them to count cents (or any other fraction of the dollar) with integers, for example.

These are usually the same people who have trouble defining a bit...
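For instance, a minimal sketch in plain sh (the price and quantity are made-up values) that keeps money in integer cents and only formats dollars when printing:

    #!/bin/sh
    # hypothetical values: $19.99 stored as an integer number of cents
    price_cents=1999
    qty=3
    total_cents=$((price_cents * qty))
    # convert to dollars and cents only at output time
    printf 'Total: $%d.%02d\n' $((total_cents / 100)) $((total_cents % 100))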

Pointy-Haired Boss (3, Funny)

Vexler (127353) | more than 10 years ago | (#8400107)

Just imagine Dilbert's boss asking him for a Beowulf cluster.

Kind of like that strip where he (the boss) wanted to have a SQL database in lime.

Re:Pointy-Haired Boss (1)

ComradeX13 (226926) | more than 10 years ago | (#8400156)

I think it was mauve. ...Yes, I am pathetic. And I have a photographic memory.

Re:Pointy-Haired Boss (2, Funny)

Magus424 (232405) | more than 10 years ago | (#8400158)

Actually, he thought mauve had the most RAM :)

Inexpensive for testing purposes, (3, Insightful)

Space cowboy (13680) | more than 10 years ago | (#8400116)

... but that's about all it'll be useful for. A Nehemiah CPU is really weedy by today's standards; even the 1 GHz one is about the same as a 600 MHz P3. So he's got 12 of them, which is probably less CPU power than an average dual P4 motherboard...

Still, you can get some stats on how the clustering works, what's the best algorithm for dispersing problems, and these boards are cheap, but that's about the only advantage I can see...

Simon

Re:Inexpensive for testing purposes, (5, Interesting)

addaon (41825) | more than 10 years ago | (#8400428)

I agree, but that's actually a very interesting use. It also lets you play around with network topologies, and interconnects, and such. And of course, these boards do have one PCI slot, as well as the standard assortment of serial and parallel, so the hardware people can have fun too. For real number crunching? Not a chance. For doing a $2000 prototype, in 15 nodes, of a $50000 50-node cluster? I can't really think of a more flexible, more convenient, or more affordable option. For doing a $1000, 6-node flexible network simulator, purely for education? Also more than worth it, with few other options around.

Re:Inexpensive for testing purposes, (3, Informative)

Pidder (736678) | more than 10 years ago | (#8400449)

There are no dual boards for normal P4s since they can't run in SMP mode. You have to buy Xeons, and they aren't exactly cheap. Dual AMD Athlons (the MP model or a modded XP) are your only option for a cheap dual desktop.

Seriously, though... (5, Interesting)

Short Circuit (52384) | more than 10 years ago | (#8400122)

All things considered, what's the cost per teraflop of that sort of system? These guys don't require as much cooling, space, or whatever else you care to think about.

Has anyone tried stuffing several into a single 1U chassis? For a sort of cluster of clusters?

Re:Seriously, though... (4, Interesting)

drinkypoo (153816) | more than 10 years ago | (#8400328)

You could get (maybe) 2-4 boards into a deep 1U box. It would be better to use a ~6U box and put lots of them on their sides. You could make a 12" deep 6U with probably 18 or so of these things in it, without having to have cables coming out the front AND back of each box.

WTF?!!!! (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#8400123)

I'm lovin' it! [i-am-asian.com]

shuttle (2, Interesting)

trmj (579410) | more than 10 years ago | (#8400126)

My favorite use for those mini-itx boards is making a nice shuttle [shuttle.com] xpc. Cheap, fast gaming computers that are quite portable as well.

The only problem I've found so far is they only come with Nvidia onboard graphics, but that's what the AGP slot is for.

Re:shuttle (0, Redundant)

trmj (579410) | more than 10 years ago | (#8400154)

bah, mod me stupid. I was thinking atx while reading/typing itx.

Michael Sims = turd (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#8400127)

The truth about Michael Sims is here:

Domain Hijacking and Moral Equivalency [spectacle.org]

Yes, but (1, Funny)

Anonymous Coward | more than 10 years ago | (#8400129)

Where did they find those telephone modems for Mini-ITX form factor?

Imagine... (5, Funny)

Anixamander (448308) | more than 10 years ago | (#8400141)

...a new, original joke. Now imagine another one, because that last one wasn't that funny.

In fact, maybe you just aren't that funny. Except in Soviet Russia.

Shit, now I'm doing it.

Re:Imagine... (2, Funny)

Anonymous Coward | more than 10 years ago | (#8400222)

John Lennon surely would not have been happy!

Re:Imagine... (0)

Anonymous Coward | more than 10 years ago | (#8400277)

A clusterKnoppix of these things...

Moshe would be shocked.

In Soviet Russia, YOU are the funny joke. (-1, Offtopic)

jamonterrell (517500) | more than 10 years ago | (#8400301)

If you're going to do the joke, at least do it right (see subject).

Re:Imagine... (1, Funny)

Anonymous Coward | more than 10 years ago | (#8400346)

...a new, original joke. Now imagine another one, because that last one wasn't that funny.

You mean... a Beowulf cluster of new, original jokes?

(ot) the sig: (1)

janbjurstrom (652025) | more than 10 years ago | (#8400369)

Hehe, nice one.

Totally unrelated: about your current .sig, "Do not taunt Happy Fun Ball(TM)". Was it you who had that awesome "The instructions SPECIFICALLY SAID ... DO NOT TAUNT HAPPY FUN BALL!" (or something to that effect) I've seen around some time ago?

Obligatory Kent Brockman... (0)

Anonymous Coward | more than 10 years ago | (#8400416)

I, for one, welcome our new humor-defining overlord.

What an easy to read page... (0)

Anonymous Coward | more than 10 years ago | (#8400142)

Sheesh, black on grey...

Imagine! (0)

omar.sahal (687649) | more than 10 years ago | (#8400144)

Imagine a beowulf cluster of these....it would gain consciousness and kill us all.

This with Chess (3, Interesting)

SamiousHaze (212418) | more than 10 years ago | (#8400150)

You know, I seriously wonder if this would be a viable option for computer chess programs (http://www.chessbase.com/newsdetail.asp?newsid=25). It certainly is getting cheap to get massive hardware processing power.

Linux support (0, Offtopic)

AmandaHugginkiss (756492) | more than 10 years ago | (#8400159)

The site appears to be down so I can't read if these clusters support Linux. Does anybody know?

Re:Linux support (-1, Flamebait)

Anonymous Coward | more than 10 years ago | (#8400219)

They run on Linux, and hence the site is down.

Re:Linux support (0)

Anonymous Coward | more than 10 years ago | (#8400424)

That would be funny, except it's too bad the site is actually running Windows 2000 [netcraft.com].

Re:Linux support (0)

Anonymous Coward | more than 10 years ago | (#8400230)

yes they do.

Re:Linux support (-1, Troll)

Anonymous Coward | more than 10 years ago | (#8400255)

Haven't read the article. Probably won't. But most likely, yes... It's x86 after all. And why would you care anyhow? Are you going to build one, or are you just asking because EVERYTHING should run Linux?? This is getting pointless.

Some preliminary performance results (5, Informative)

JimmyQS (690012) | more than 10 years ago | (#8400170)

We studied 3 mini Beowulf systems a while back here at the University of Central Florida, one of which was a mini-ITX Beowulf. Here's some info and preliminary results: http://helios.engr.ucf.edu/beowulf/miniature.phtml

Clickable link... (1, Informative)

Anonymous Coward | more than 10 years ago | (#8400403)

Would it kill ya? Clickable link [ucf.edu]

Bad page design (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#8400171)

Why did this ass-munch decide to use black text on dark gray background? I have to highlight the article text just to read it. WTF?

I built a fanless ITX system... (1, Interesting)

Kenja (541830) | more than 10 years ago | (#8400174)

I built one of these; it cost me six times as much for one third the power. Unless you NEED a quiet system, don't bother.

Re:I built a fanless ITX system... (3, Informative)

addaon (41825) | more than 10 years ago | (#8400458)

Six times as much as what? My entire mini-itx system was under $500, and most of the cost of that was a solid-state drive large enough for a decent linux distribution... and most of the rest was a touch-screen monitor.

he's apparently not running.. (0)

Anonymous Coward | more than 10 years ago | (#8400176)

..his webserver on that

Why 12 nodes? (1)

El (94934) | more than 10 years ago | (#8400186)

Why not 16 nodes, or some other power of 2?

Cool stuff ... (4, Interesting)

Lazy Jones (8403) | more than 10 years ago | (#8400191)

This rocks - we were considering something similar for our clustering-R&D needs (for trying out new network file systems, failover solutions etc.), but we decided to go with plain P4 barebones instead. They can be stacked nicely, are relatively quiet and the fast CPUs with HT come in handy when you want good latencies at CPU-intensive tasks (dynamic websites etc.).

Here's a picture [amd.co.at] of our first 4 boxes. The USB stick seen sticking out from one of the boxes is bootable and an excellent replacement for floppy disks...

more information ... (3, Informative)

Lazy Jones (8403) | more than 10 years ago | (#8400295)

Oh, I forgot: each of these boxes contains a 2.8 GHz P4 Northwood CPU (200/800 MHz FSB) and 1 GB RAM. The Shuttle barebone used is the S75G2 [shuttle.com], and one of the reasons we chose it was that it has an on-board gigabit ethernet adapter. The CPU cooler that came with it is also very interesting - it uses a rather unique design with a heatpipe ...

Re:Cool stuff ... (1)

Feyr (449684) | more than 10 years ago | (#8400335)

A bootable USB stick? I'd assume your board has BIOS support for booting from USB?

Re:Cool stuff ... (4, Informative)

Lazy Jones (8403) | more than 10 years ago | (#8400377)

> I'd assume your board has BIOS support for booting from USB?

Yes, I guess that most current BIOSes of the newer boards do, especially the consumer-ish stuff. We just used the stock Shuttle XPC with its FlexATX-board.

USB Stick (0)

Anonymous Coward | more than 10 years ago | (#8400387)

Not to be too off topic here, but what is the brand / model of the USB drive you're using to boot with?

Hmmm (5, Funny)

captain_craptacular (580116) | more than 10 years ago | (#8400200)

There was no cutting or bending involved. All metal bits were simply cut, drilled, and bolted together using 4-40 hardware.

So what was it? No cutting, or cutting?

FLASH... (2, Interesting)

Short Circuit (52384) | more than 10 years ago | (#8400205)

Ouch...He's using flash as the HD for the computing nodes. Hope they're set to be mounted read-only.

Maybe he should consider PXE instead.

Re:FLASH... (1, Informative)

Anonymous Coward | more than 10 years ago | (#8400246)

IBM MicroDrives. They have a flash-style interface (which is really just an IDE connection with a different connector), but no wear-levelling issues.

Re:FLASH... (5, Interesting)

technomancerX (86975) | more than 10 years ago | (#8400324)

"He's using flash as the HD for the computing nodes"

Actually, he's not. IBM Micro Drives are not CF, they just have a CF form factor/interface to be compatible with hand held devices. They are hard drives.

Re:FLASH... (0)

Anonymous Coward | more than 10 years ago | (#8400356)

Try READING the article.

>The computational nodes have 256 MB RAM, each and
>boot from 340 MB IBM microdrives by means of
>compact flash to IDE adapters.

"microdrives"....

Whilst not clustering... (4, Interesting)

Alioth (221270) | more than 10 years ago | (#8400206)

Whilst not clustering, a good use for these low power systems would be for web hosts or budget dedicated servers. I'm sure a server room full of these would require much less airconditioning (and power) than the typical servers. Many people require dedicated servers for security (they are the only one on the box) and don't require fast FPU performance.

page is gone (0)

Anonymous Coward | more than 10 years ago | (#8400209)

Slashdotted already. Way to go, guys!

not even a cluster can stand up to the slashdot effect!

Enough on the Web Server !!! (1)

IamGarageGuy 2 (687655) | more than 10 years ago | (#8400214)

Just hit reload! It seems to be holding up just fine, with the occasional bad hit. Gotta give 'em a break, this is /. after all.

Enough on the Web Server !!!-Group participation (0)

Anonymous Coward | more than 10 years ago | (#8400300)

"Too Many Users"

A cluster of Slashdotters has hit their website.


Silly question, I know, but... (4, Funny)

pegr (46683) | more than 10 years ago | (#8400231)

Just what do you do with such a thing? I don't mean obvious commercial uses, but as a home-bound geek, what reason can I use to justify this to my wife?

Re:Silly question, I know, but... (-1, Flamebait)

Anonymous Coward | more than 10 years ago | (#8400280)

You can justify it by saying "Bitch, get back in the kitchen!"

Test Text. (4, Informative)

F34nor (321515) | more than 10 years ago | (#8400242)

I built a Mini-ITX based massively parallel cluster named PROTEUS. I have 12 nodes using VIA EPIA V8000 800 MHz motherboards. The little machine is running FreeBSD 4.8 and MPICH 1.2.5.2. Troubles installing and configuring FreeBSD and MPICH were few. In fact, there were no major issues with either FreeBSD or MPICH.

The construction is simple and inexpensive. The motherboards were stacked using threaded aluminum standoffs and then mounted on aluminum plates. Two stacks of three motherboards were assembled into each rack. Diagonal stiffeners were fabricated from aluminum angle stock to reduce flexing of the rack assembly.

The controlling node has a 160 GB ATA-133 HDD, and the computational nodes use 340 MB IBM microdrives in compact flash to IDE adapters. For file I/O, the computational nodes mount a partition on the controlling node's hard drive by means of a network file system mount point.

Each motherboard is powered by a Morex DC-DC converter, and the entire cluster is powered by a rather large 12V DC switching power supply.

With the exception of the metalwork, power wiring, and power/reset switching, everything is off the shelf.

At present, the idle power consumption is about 140 Watts (for 12 nodes), with peaks estimated at around 200 Watts. The machine runs cool and quiet. The controlling node has 256 MB RAM and a 160 GB ATA-133 IDE hard disk drive. The computational nodes have 256 MB RAM each and boot from 340 MB IBM microdrives by means of compact flash to IDE adapters. The computational nodes mount /usr on the controlling node via NFS, for storage and to allow for a very simple configuration. No official benchmarks have been run, but for simple computational tasks the mini cluster appears to be faster than four 2.4 GHz Pentium 4 machines used in parallel, at a fraction of the cost and power use.

Power and Cooling

Mini-ITX boards have very low power dissipation compared to most motherboard/CPU combinations in popular use today. This means that a Mini-ITX cluster with as many as 16 nodes won't need special air conditioning. Low power dissipation also means low power use, so you can use a single inexpensive UPS to provide clean AC power for the nodes.

In contrast, a 12-16 node cluster built with Intel or AMD processors will generate enough heat that you will likely need heavy-duty air conditioning. Additionally, you will need adequate electrical power to deliver the 2-3 kilowatt peak load that your 12-node PC cluster will require. Plan on having higher than average utility bills if you use PCs...

Hardware Construction

The cluster is built in two nearly identical racks. Each rack has two stacks of three motherboards and dc-dc converters mounted on aluminum standoffs.

The compact flash adapters used to mount the microdrives are also in stacks of three. Each stack of boards is mounted on a 7 inch by 10 inch, 0.0625 inch thick 6061-T6 aluminum plate, as are the microdrive stacks. There are seven metal plates in all in each rack.

The top cover plate has the mounting bracket for the 6 on/off/reset switches.

The plate below it is home to the power distribution terminal block. The power delivery cable for each rack is heavy duty 14 gauge stranded wire with pvc insulation. The power cabling from the terminal strip to each of the dc-dc converters is 18 gauge stranded pvc insulated hookup wire. The wiring for the power/reset switches is 24 gauge stranded, pvc insulated wire.

The top rack houses nodes one through six (node one is the controlling node). The bottom plate of the top rack also houses the 160 GB ATA-133 hard disk drive used by the controlling node. All other nodes make use of the IBM microdrives. Node number three has a spare compact flash adapter which can be used to duplicate microdrives for easy node maintenance.

The disk drive and power cabling to the motherboards was dressed as sanely as possible on the back panel. The liberal use of nylon cable ties helps reduce the tendency of PC cabling to develop into a rat's nest.

The bottom rack houses nodes seven through 12, with one microdrive for each node mounted in an identical manner to the top rack. Other than lacking a hard drive on the bottom plate, the second rack is identical to the first. All the metalwork is fabricated by hand using 0.0625 inch aluminum plate and 3/4 inch aluminum angle stock. All of the standoffs and metal bits are attached using stainless steel 4-40 machine screws and aircraft style locknuts. Stick-on rubber feet keep the bottom plate from marring delicate surfaces.

There was no cutting or bending involved. All metal bits were simply cut, drilled, and bolted together using 4-40 hardware.

All wiring is crimped by hand using standard crimp connectors and tools available from a popular online electronics components supplier. The hand-made wiring harnesses are dressed by twisting the wires to assure low noise and then fixing the wiring in place using nylon cable ties. The power/reset switches are on-off-on, center-off, three-position momentary contact toggle switches available from most good electronics supply stores.

The wiring for these switches is hand soldered at the switch end, and standard 0.1 inch header connectors were crimped at the motherboard end to make the necessary connections.

Networking

There is nothing sacred about the networking. I used the internal fast ethernet adapters which came with my mini-itx boards. The network switch was a low cost 16 port fast ethernet switch purchased at an office supply store for about $80. The cabling was crimped by hand using good quality four twisted pair (8 conductor) cat 5 cable.

Power Considerations

The DC-DC converters require a clean, well-regulated 12 VDC source. I chose a heavy-duty 12 VDC switching power supply capable of delivering 60 amperes peak current, which I ordered from an online electronics test equipment supplier. Since badly conditioned AC power is potentially damaging to expensive computing equipment, I use a 1 kVA UPS purchased at an office supply store to make sure the cluster can't be "bumped off" by power line glitches and dropouts.

Software Configuration

The cluster consists of a controlling node, with a large capacity hard drive, and several computational nodes, each with their own hard disk drive (these hard drives can be smaller).

The software which performs the parallelization (MPI) is installed on the controlling node, and the computational nodes mount a shared directory on the controlling node via NFS.

Communications between the nodes is established via rsh by MPI, and shared files are found via the mounted NFS file system,

The networking is fast Ethernet (100 Mbit) and makes use of a fast Ethernet switch. Gigabit Ethernet is faster (and better for fast file I/O), but 100 Mbit Ethernet is quite adequate for number crunching.

The version of MPI used is MPICH 1.2.5.2.

The operating system for the controlling node and all the computational nodes is FreeBSD MINI 4.8-RELEASE.

FreeBSD has moved forward a bit since I began building my cluster, so check with freebsd.org to see what is currently available. Whatever distribution you use, you should be using RELEASE or STABLE versions.

Install and configure the controlling node

Keep it simple. Resist the temptation to add a lot of options. JUST MAKE IT WORK.

Keep all the nodes as identical as possible; they will be running code that is generated on the controlling node.

Set up a firewall between the cluster and the outside world. The cluster needs a high degree of connectivity and has rather poor security.

Assemble the nodes and test them one at a time.

Install Mini-FBSD on the controlling node first (I'm using the mini 4.8 distribution).

Use the same root password on the controlling node and on all the computational nodes.

Configure the controlling node as an NFS server and export /usr to be accessed with root privileges.
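For example (a sketch only; the 192.168.1.0/24 network is an assumption), the export can be a single line in /etc/exports, with the NFS server enabled via /etc/rc.conf:

    # /etc/exports on the controlling node
    /usr -maproot=root -network 192.168.1.0 -mask 255.255.255.0

    # /etc/rc.conf additions (FreeBSD 4.x knobs)
    portmap_enable="YES"
    nfs_server_enable="YES"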

Enable inetd, and edit /etc/inetd.conf to allow rlogin.
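On a stock FreeBSD 4.x /etc/inetd.conf this typically means uncommenting the 'login' line (and the 'shell' line, which is what rsh itself uses), then enabling inetd in /etc/rc.conf:

    login   stream  tcp     nowait  root    /usr/libexec/rlogind    rlogind
    shell   stream  tcp     nowait  root    /usr/libexec/rshd       rshd

    # /etc/rc.conf
    inetd_enable="YES"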

Set up rsh and ssh so that the controlling node and computational nodes can access each other.

Be sure to edit /etc/ssh/sshd_config to allow root login.
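In /etc/ssh/sshd_config that is a single directive:

    PermitRootLogin yes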

DO NOT allow the controlling node to rsh/ssh to itself. Doing this will not only cause security issues, but can lead to the controlling node getting saturated with rsh connections during a program run, and can cause slowness and program crashes.

Allow only essential external computers to access the controlling node by ssh. Do not allow any external computers to use rsh to access any node. Use ssh instead.

Edit /etc/rc.conf for the appropriate hostname and ip address.
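A minimal sketch of the relevant /etc/rc.conf lines (the hostname, address, and vr0 interface name are placeholders; check what your EPIA board's NIC actually shows up as):

    hostname="node01.cluster.local"
    ifconfig_vr0="inet 192.168.1.1 netmask 255.255.255.0"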

Edit /etc/hosts to include the hostnames and IPs of the controlling node, the computational nodes, and any external computers which need to access the controlling node.
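For example (hostnames and addresses are placeholders):

    # /etc/hosts
    127.0.0.1     localhost
    192.168.1.1   node01        # controlling node
    192.168.1.2   node02        # computational nodes...
    192.168.1.3   node03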

Download and install MPI. Be sure to read the documentation on the MPI web site. Install MPI in /usr/local/mpi. I built MPI to run in P_4 mode to keep things simple.

In '/root/.cshrc' add '/usr/local/mpi/bin' to the path. You might also wish to edit '/etc/skel/.cshrc' with the same value so that new users get a working MPI.
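In csh syntax that is one line, for example:

    set path = ($path /usr/local/mpi/bin)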

Install FBSD on one computation node

Configure it as an nfs client.

Enable inetd and edit /etc/inetd.conf to allow rlogin

Edit '/etc/fstab' to add the NFS mount for /usr and set the mount point as /mnt/usr. Create a symbolic link at /usr/local/mpi that points to /mnt/usr/local/mpi.
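A sketch of what that looks like, assuming the controlling node is called node01:

    # /etc/fstab entry on each computational node
    node01:/usr   /mnt/usr   nfs   rw   0   0

    # symlink so MPI lives at the same path on every node
    ln -s /mnt/usr/local/mpi /usr/local/mpi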

Add the hostnames and IP addresses of the controlling node and all the computational nodes to /etc/hosts.

Edit /etc/rc.conf for the appropriate hostname and IP address for the node.

Edit /etc/ssh/ssh_config to configure the node as an ssh client.

Use rcp/scp to copy the /etc/ssh/sshd_config file from the controlling node to the computational node.
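For example, run from the computational node (again assuming the controlling node is node01):

    scp root@node01:/etc/ssh/sshd_config /etc/ssh/sshd_config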

Create an empty file with the name of '.hushlogin' and put it in '/root'. You may wish to also put .hushlogin in /etc/skel so new users automatically get a copy of it. This inhibits motd and limits the login text to a prompt. It serves to keep mpi from complaining about getting an unexpected response when it uses rsh to connect to a node.

You may need to have a .rhosts file in /root; be sure to include all nodes in it if you use it. You might wish to put a copy of .rhosts in /etc/skel so that new users can use ssh/rsh without being root.
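The /root/.rhosts format is one "hostname user" pair per line; a sketch with placeholder hostnames:

    node01 root
    node02 root
    node03 root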

You will need to add each node to '/usr/local/mpi/share/machines.freebsd'. This file is the list of nodes usable by MPI.
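The file is just one hostname per line; here with placeholder names, and with the controlling node deliberately left out (MPI adds localhost itself, as noted below):

    node02
    node03
    node04
    node05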

Run the test script /usr/local/mpi/sbin/tstmachines with the -v option: 'sh /usr/local/mpi/sbin/tstmachines -v'. It may complain that it cannot access the controlling node (this is normal), but it should talk to all the nodes in the nodelist and run some test software to confirm that all is working. The script uses rsh to talk to all the nodes, and if the controlling node cannot rsh to itself, the script will complain. Resist the temptation to allow the controlling node to rsh to itself. MPI will run a process on localhost in addition to any nodes listed in '/usr/local/mpi/share/machines.freebsd', so even if the script complains that it can't find the controlling node, MPI will still work.

Compile and run some of the sample programs that come with mpi to confirm that all is working properly.
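For instance, with the cpi example that ships with MPICH (the examples path is an assumption; it may sit elsewhere in your install):

    cd /usr/local/mpi/examples/basic
    mpicc -o cpi cpi.c          # parallel pi estimation example from the MPICH distribution
    mpirun -np 12 cpi           # spreads 12 processes across the machines file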

Copy the newly configured node to an "empty" hard drive.

If all is well, connect an empty hard drive for the next node to the secondary controller and use dd to copy the configured hard drive to the empty one. Be sure the "empty" drive is configured as slave and does not contain a primary partition, or FreeBSD might not know what to do with two hard drives at the same time.
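A sketch of the copy, assuming the configured drive shows up as ad0 and the blank drive on the secondary controller as ad3 (check dmesg for the real device names):

    dd if=/dev/ad0 of=/dev/ad3 bs=1m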

Shut down the computer and remove the copied drive and install it in the second node. Don't forget to move the jumper from slave to master.

Configure the new node by booting it, logging in from a keyboard, and editing /etc/rc.conf for the appropriate hostname and IP address.

Add the new node to '/usr/local/mpi/share/machines.freebsd' on the controlling node.

Reboot the new node and rsh to it from the controlling node to confirm communications.

Run /usr/local/mpi/sbin/tstmachines in verbose mode to assure the new node works properly.

If the new node is working properly, use dd to install copies of the computational node on all the drives for the remaining cluster nodes.

Testing

Plan for some odd things to happen. Clustering has a way of exposing "flaky" hardware and software. Usually, if a node crashes frequently for no apparent reason, you should consider it as having potential hardware problems.

Power up the new cluster and let it idle for a day or two, and check the nodes to see if they spontaneously crash, disconnect, or otherwise misbehave. If the cluster seems stable, you need to begin writing programs designed to stress the machine so that you can expose software bugs and latent hardware issues. Work through these issues one at a time. Depending on the hardware, the size of the cluster, and its complexity, it could take from a few weeks to several months to weed out the worst of the quirks and bugs. Replace flaky hardware. One bad node in the nodelist can render a cluster useless, so don't waste your time and money trying to limp along with wounded hardware.

Operation

Power it up and leave it up. Cycle the power on a node only when you absolutely must. This reduces failures from inrush currents at power-up as well as reducing thermomechanical stresses that lead to component failures.

Development

You might wish to set aside a node for development, so you can test new kernels or software. Once you are sure your new code is stable, you can migrate it to the other nodes. Exclude this node from the nodelist so the users don't get unhappy surprises when they run their software.

Maintenance

Plan on having about ten percent of the cluster failed or failing at any given time. If you need a machine with 10 nodes operational, you had best plan on having 12 nodes and some spare parts. The larger the cluster is, the more failed hardware you can expect. Really large clusters have hardware failures on a more or less continuous basis. Alternatively, you can just build a lot of extra nodes and take bad nodes offline as the cluster "burns in" (this seems expensive and wasteful to me). Running the cluster on a good UPS is not optional. You need clean power to get good hardware life, and with this many computers the investment in a UPS will pay off in terms of longer hardware life.

Lifespan

Consumer-grade electronics are designed with an operational life of two years. Lower-quality components have an even shorter design life. This means that once you get all the bugs worked out and everything is "burned in", you can expect a year or two of fairly trouble-free service. After that, the components age sufficiently that hardware failures will rise to the point where you will probably want to consider just building a new machine.

Final words

Building a parallel computing machine is a big investment in time and money. Take your time and plan your project carefully. Make sure all of the components you plan to use are available, and will continue to be available over the several months it is likely to take you to build and test your creation. A little thought will save you a lot in terms of time, money and disappointment, and will pay big dividends in satisfaction.

Useful links

The MPI Home Page. You can download the latest distribution of MPI as well as useful documentation.

The FreeBSD Home Page. Download your favorite distribution of FreeBSD and browse online documentation.

Let's save space... (0)

ComradeX13 (226926) | more than 10 years ago | (#8400247)

I, for one, welcome a beowulf cluster of our new mini-ITX overlords in Soviet Russia.

Is that link hosted on there cluster? (0, Redundant)

unixsource (754527) | more than 10 years ago | (#8400248)

I hope not. 'There are too many connected users. Please try again later.' Sucks.

Re:Is that link hosted on there cluster? (1)

unixsource (754527) | more than 10 years ago | (#8400284)

I need to read these posts before I hit submit. Sorry for the bad grammar.

Dissipation (1, Funny)

Bleeblah (602029) | more than 10 years ago | (#8400261)

I especially like the bit about the peak 200W power dissipation.
Their web server is dissipating smoke and silicon goo!


Just because you can... (3, Insightful)

caffeinefiend (681092) | more than 10 years ago | (#8400270)

Yet another example of why you shouldn't do everything that you can do! These puppies aren't exactly famous for their flop-per-dollar ratio. Truthfully, it would be more efficient (and cost-effective) to make the cluster out of PIIIs. Anyhow, I'm off to go cluster a few toaster ovens; I hear they offer a great deliciousness-to-efficiency ratio. Chris

those could be handy for ... (1)

enrico_suave (179651) | more than 10 years ago | (#8400276)

that flash mob cluster party coming up.

"outlet? no, no thank you... I'm good for quiet a few hours on this motorcycle battery right here"

or whatnot... of course my favorite use of mini-itx boards is to build PVR's and HTPC's with them...

*shrug*

e.

slashdotted already? (5, Informative)

cetan (61150) | more than 10 years ago | (#8400292)

sheesh that didn't take long.

I managed to get it mirrored here:
page 1:
http://www.phule.net/mirrors/mini-itx-cluster.html [phule.net]
page 2:
http://www.phule.net/mirrors/mini-itx-cluster2.html [phule.net]
page 3:
http://www.phule.net/mirrors/mini-itx-cluster3.html [phule.net]


Cost Ratio (1)

thomas536 (464403) | more than 10 years ago | (#8400358)

Does anybody know the price per flop of this setup? I'm curious what types of setups have the best ratio?

Did I miss the part (0, Offtopic)

stratjakt (596332) | more than 10 years ago | (#8400379)

Where he gives some stats/benchmarks/observations as to how this thing performs, and whether it was at all worth it?

Those 800s are so gutless they can barely play back a DVD; I can't see exactly what application you would give this cluster.

heh I got obligatory for ya (2, Funny)

aztektum (170569) | more than 10 years ago | (#8400422)

They musta been runnin' their webserver on one!

*ba dum ch*

/.ed already (0)

Anonymous Coward | more than 10 years ago | (#8400425)

Ouch, slashdotted already... oh well, anyone have a copy of the text?

Mini-ITX? Bah! Nano-ITX!!! (3, Informative)

Cpt_Kirks (37296) | more than 10 years ago | (#8400455)

I can't wait for the new, smaller Nano-ITX boards to come out: 4.5" on a side, 1 GHz CPU, and draws 7 watts. I got an email from VIA claiming they will be released in April.

MB, slim DVD and laptop HD in a case the size of a large paperback book!

It will make my "K-Mart Toolbox Mini-ITX PVR" look like a full tower in comparison!
