
RPiCluster: Another Raspberry Pi Cluster, With Neat Tricks

timothy posted about a year and a half ago | from the dots-dots-blinkenlights dept.

Power 79

New submitter TheJish writes "The RPiCluster is a 33-node Beowulf cluster built using Raspberry Pis (RPis). The RPiCluster is a little side project I worked on over the last couple months as part of my dissertation work at Boise State University. I had need of a cluster to run a distributed simulator I've been developing. The RPiCluster is the result. I've written an informal document on why I built the RPiCluster, how it was built, and how it performs as compared to other platforms. I also put together a YouTube video of it running an MPI parallel program I created to demo the RGB LEDs installed on each node as part of the build. While there have certainly been larger RPi clusters put together recently, I figured the Slashdot community might be interested in this build as I believe it is a novel approach to the rack mounting and power management of RPis."
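For readers who haven't used MPI, a minimal sketch of the kind of program involved is below. This is not the author's demo code: the actual RGB LED control via GPIO is omitted (marked with a comment), and the launch command varies with the MPI distribution you use.

/* Minimal MPI "hello" sketch (not the RPiCluster demo itself).
 * Build:  mpicc -o hello hello.c
 * Run:    mpiexec -n 32 ./hello   (plus a hostfile option for your MPI) */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                    /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* this process's ID, 0..size-1 */
    MPI_Comm_size(MPI_COMM_WORLD, &size);      /* total number of processes */
    MPI_Get_processor_name(name, &name_len);   /* hostname of this node */

    printf("Rank %d of %d running on %s\n", rank, size, name);
    /* In the RPiCluster demo, per-rank GPIO/LED control would go here. */

    MPI_Barrier(MPI_COMM_WORLD);               /* wait for all ranks */
    MPI_Finalize();
    return 0;
}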


5 - Profit! (5, Funny)

wibblewibble (2766235) | about a year and a half ago | (#43760399)

Dude, you should totally mine bitcoins with that bad boy!

Re:5 - Profit! (-1)

Anonymous Coward | about a year and a half ago | (#43760565)

fuck you faggot bitch.

Re:5 - Profit! (2)

Razgorov Prikazka (1699498) | about a year and a half ago | (#43760595)

...Then you can buy more of these 'bad boys' to make even more BTC, then buy even more 'bad boys' yet... Before you know it: WORLD DOMINATION!!!
You could be driving around in one of these next week! (http://www.youtube.com/watch?v=cDoRmT0iRic)

Re: 5 - Profit! (1)

Anonymous Coward | about a year and a half ago | (#43760929)

6.) Move out of basement.

7.) Talk to a girl.

Re: 5 - Profit! (1)

jones_supa (887896) | about a year and a half ago | (#43763917)

8.) ???

http://www.linuxadvocates.com/p/support.html (-1)

Anonymous Coward | about a year and a half ago | (#43760401)

Dear Linux Advocate,

Money doesn't grow on trees. And, Linux Advocates is growing. Naturally, we anticipate operating costs and hope to be able to meet them.

But, any amount you feel you are able to donate in support of our ongoing work will be most surely appreciated and put to very good use. Your contributions keep Linux Advocates growing.

Show your support by making a donation today.

Thank you.

Dieter T. Schmitz
Linux Advocates, Owner

http://www.linuxadvocates.com/p/support.html [linuxadvocates.com]

Re:http://www.linuxadvocates.com/p/support.html (0)

mwvdlee (775178) | about a year and a half ago | (#43760507)

Since when does /. allow scam advertising within comments?

Re:http://www.linuxadvocates.com/p/support.html (-1)

Anonymous Coward | about a year and a half ago | (#43760581)

they've allowed scams since they championed linsux. those fucking faggots

Re:http://www.linuxadvocates.com/p/support.html (0)

Anonymous Coward | about a year and a half ago | (#43760935)

Feeling threatened, eh?

Advertisement (1)

For a Free Internet (1594621) | about a year and a half ago | (#43760403)

I am selling a limited edition, genuine sample of Frank's Pocket Lint for a limited time and it can be combined with other lints to create a fuzzy ball of lint.

AWESORME! Post it on Slashdort!

Hm... (5, Funny)

Anonymous Coward | about a year and a half ago | (#43760413)

A new Raspberry Pi cluster Fram Boise University, eh?

Re:Hm... (2, Insightful)

hughbar (579555) | about a year and a half ago | (#43760725)

Yes, that's funny, but not many people on here know French...

acronym for F.R.A.M. + Boise = red + sour (4, Interesting)

girlinatrainingbra (2738457) | about a year and a half ago | (#43760777)

Haha. It would have been funny (or funnier) if this guy had come up with the acronym FRAM for this project and then called the page (or overall project) FRAM-Boise, perhaps:
Facilitated
Raspberry.Pi
Architectural
Messaging

since he says in his pdf document that "My research is currently focused on developing a novel data sharing system for wireless sensor networks to facilitate in-network collaborative processing of sensor data. In the process of developing this system it became clear that perhaps the most expedient way to test many of the ideas was to create a distributed simulation rather than developing directly on the final target embedded hardware."

Re:Hm... (1)

kermidge (2221646) | about a year and a half ago | (#43763843)

Funny, I first saw it as "honni suit, qui mal y pense," but looking it up, find it's "honi soit." Guess that 8th grade French book had a few mistakes in it, back in '60. But then I don't know French, just a few bits here and there that kinda stuck. Bonne chance, and all.

Re:Hm... (1)

hughbar (579555) | about a year and a half ago | (#43782523)

Yes, my sig is a 'jeu de mots' based on 'honi soit qui mal y pense'; translated it would mean 'off we go for those who think little of it', but it sounds like the original. French speakers spend a certain amount of their lives doing this; look at Asterix in the original, where all the characters' names 'mean' something.

Re:Hm... (1)

kermidge (2221646) | about a year and a half ago | (#43790991)

ah, thanks; I get the drift, but it's over my head [grin]

Re:Hm... (0)

Anonymous Coward | about a year and a half ago | (#43763091)

Lemme guess, you're Canadian

Re:Hm... (0)

Anonymous Coward | about a year and a half ago | (#43763773)

Only on my mother's side :P

Re:Hm... (1)

Gothmolly (148874) | about a year and a half ago | (#43765631)

The Pele of Anal?

First post (-1)

Anonymous Coward | about a year and a half ago | (#43760453)

Even though it looks cool, it took ages to post first post on here using my Raspberry Pi Beowulf cluster. BRB, upgrading.

Obligatory Blake's 7 Reference (2)

Grumpinuts (1272216) | about a year and a half ago | (#43760489)

It looks like Orac.

Re:Obligatory Blake's 7 Reference (1)

RDW (41497) | about a year and a half ago | (#43761035)

"You pathetic fool. That isn't Orac! Look at it! It's just a box of flashing lights!"

Slow Pi (2, Insightful)

Anonymous Coward | about a year and a half ago | (#43760501)

Running the numbers from the paper shows the $1000 x86 compute node took 3.85 seconds on a benchmark, where the RPi cluster took (456/32) = 14.25 seconds and also cost about $1000. Thus, after porting the software, a 3.7-times slowdown was achieved over traditional methods.

While there may be some gains (GPIO and such may be useful in this context), they didn't appear to be used here.

This looks like a fun project that got research money but was not very useful for the goal the money was supposed to be spent on. I haven't looked into the details, and I expect the parts may get reused for other projects later, but still, it seems kinda silly. The RPi was not built for that; it's inefficient to use it that way.
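Working those figures through (a back-of-the-envelope check; the 456 s figure is the single-Pi runtime quoted from the paper):

\[
t_{\text{cluster}} \approx \frac{456\,\text{s}}{32} \approx 14.25\,\text{s},
\qquad
\frac{t_{\text{cluster}}}{t_{\text{x86}}} \approx \frac{14.25}{3.85} \approx 3.7
\]

So at roughly the same ~$1000 price point, the Pi cluster is about 3.7x slower on this benchmark.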

Re: Slow Pi (1, Informative)

Anonymous Coward | about a year and a half ago | (#43760561)

If the purpose was to make a fast computer, you may have a point. But the need for this project was to have a low-cost cluster to run massively parallel/distributed software. A single core, or a small number of cores (relatively speaking), may not give the solution you want. For example, if you have a fast algorithm that has to be run in order with no parallelism, it will run fast on your $1000 x86, but the only way to speed it up is to use a faster processor, so you're technology-limited. If you derive a different algorithm that may be a bit slower but allows massive parallelism, then you can make the system faster by adding more hardware. This system is not about doing things fast, it's about seeing how things run on a cluster. If you used the x86, you would just get the wrong result faster.

Re: Slow Pi (1)

gl4ss (559668) | about a year and a half ago | (#43760693)

If the purpose was to make a fast computer, you may have a point. But the need for this project was to have a low-cost cluster to run massively parallel/distributed software. A single core, or a small number of cores (relatively speaking), may not give the solution you want. For example, if you have a fast algorithm that has to be run in order with no parallelism, it will run fast on your $1000 x86, but the only way to speed it up is to use a faster processor, so you're technology-limited. If you derive a different algorithm that may be a bit slower but allows massive parallelism, then you can make the system faster by adding more hardware. This system is not about doing things fast, it's about seeing how things run on a cluster. If you used the x86, you would just get the wrong result faster.

Or, to try another example...

Ah, fuck it: the benchmark is supposed to test exactly that, so on a parallel workload the x86 node is faster than the Pi cluster, and on a single-threaded workload it would be ridiculously slower to use the Pis.

anyhow, I would wager that the point here is just to test the parallel algorithms on real hw - not to run them fast, but to prove that the basic ideas work.

Re: Slow Pi (2)

K. S. Kyosuke (729550) | about a year and a half ago | (#43761215)

anyhow, I would wager that the point here is just to test the parallel algorithms on real hw - not to run them fast, but to prove that the basic ideas work.

I guess the issue is that building this cluster for accurately testing the behavior of distributed algorithms was probably cheaper than trying to build an accurate simulator to run on a desktop workstation would have been.

Re: Slow Pi (2)

Cenan (1892902) | about a year and a half ago | (#43760809)

So you can make it faster by adding more hardware or.... adding more hardware. Parallel and distributed are two very different things, and you cannot run a distributed anything on a single cluster; if you do, it would be properly named parallel. Anyway, the comparison is still valid: the RPi cluster failed to deliver; it was slower, just as expensive as their benchmark x86 machine, and probably 1000x as complex.

You're right in what you say about algorithms, but it only holds if you already have unused cores to run the new algorithm on; the actual reason we try to derive parallelizable algorithms is that, at the moment, processors with multiple cores are cheaper than multiple processors with one core. I'm sure the researchers had fun doing this, but there is little to be gained from this paper except to conclude that if you want to build a parallel cluster, don't use RPis.

On a side note (not aimed at the above poster): if you build something like this and measure the power consumption, don't fucking add all those lights. This research looks like it would barely be worthy of a high school paper, and if that is the standard at Boise... oh.my.god. I mean, half the paper deals with installing packages on Linux. Shut the fuck up, we know this already; do some actual research.

Re: Slow Pi (1)

gatkinso (15975) | about a year and a half ago | (#43760953)

>> So you can make it faster by adding more hardware or.... adding more hardware.

Gene Amdahl says different.
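For reference, the standard form of Amdahl's law (the textbook formula, not anything from the paper): with a parallelizable fraction p of the work and N processors,

\[
S(N) = \frac{1}{(1-p) + p/N}
\]

so even with p = 0.95, the speedup on 32 nodes is only about 1/(0.05 + 0.95/32) ≈ 12.5, and it can never exceed 1/(1-p) = 20 no matter how much hardware is added.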

Re: Slow Pi (1)

SuricouRaven (1897204) | about a year and a half ago | (#43761069)

"processors with multiple cores are cheaper than mutliple processors with one core."
And both are cheaper than one really, really fast core. You can only really go up to 4Ghz with off-the-shelf parts - any higher than that and you're on to exotic cooling systems involving liquified gasses of one type or another. The record is 8.8GHz, but that took liquid nitrogen.

Re: Slow Pi (1)

julesh (229690) | about a year and a half ago | (#43762815)

You can only really go up to 4 GHz with off-the-shelf parts; any higher than that and you're on to exotic cooling systems involving liquefied gases of one type or another. The record is 8.8 GHz, but that took liquid nitrogen.

Of course, just measuring GHz isn't everything. As that's an AMD chip, you could probably get similar single-threaded performance by overclocking a recent Intel chip to about 6.6 GHz [pureoverclock.com] (consensus seems to be that in computationally intensive tasks, Sandy Bridge is about 25% faster than Bulldozer).

Re: Slow Pi (1)

jones_supa (887896) | about a year and a half ago | (#43763999)

GHz cannot be used to describe a CPU's performance anymore.

Re: Slow Pi (1)

K. S. Kyosuke (729550) | about a year and a half ago | (#43761251)

Parallel and distributed are two very different things, and you cannot run a distributed anything on a single cluster, if you do, it would be properly named parallel.

It's quite obvious that any distributed system is inherently parallel (unless you decide to do only synchronous message passing, which would be stupid). And if that cluster is composed of isolated nodes passing messages over a network, then it's a distributed system, by definition.
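In MPI terms, the synchronous/asynchronous distinction looks roughly like the sketch below (a generic illustration, not tied to the RPiCluster code): MPI_Recv blocks until the message arrives, while MPI_Isend returns immediately and lets the sender keep computing.

/* Blocking vs. non-blocking message passing between two ranks.
 * Run with: mpiexec -n 2 ./exchange */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, out, in = -1;
    MPI_Request req;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    out = rank;

    if (rank == 0) {
        /* Non-blocking send: returns at once; the sender may overlap work. */
        MPI_Isend(&out, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* ...useful computation could happen here while the message is in flight... */
        MPI_Wait(&req, &status);   /* make sure the send has completed */
    } else if (rank == 1) {
        /* Blocking receive: does not return until the message has arrived. */
        MPI_Recv(&in, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d\n", in);
    }

    MPI_Finalize();
    return 0;
}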

Re: Slow Pi (4, Informative)

TheJish (2926133) | about a year and a half ago | (#43761751)

You seem to be missing the point of this completely. ;) I needed a cluster to test some distributed programs (yes, you can test distributed programs inside a cluster). The cluster itself has nothing to do with my PhD work other than that it is a tool I created to ensure I could test the software I've been developing. As for providing a tutorial on how to do what I did, I was writing this to help freshman engineers understand what was involved in building the cluster. Not everyone knows Linux, or how simple it is to build a Beowulf cluster.

Re:Slow Pi (0)

Anonymous Coward | about a year and a half ago | (#43760635)

The goal was to have a cluster that was cheap and available for testing MPI programs on a real hardware cluster. For that, it's cost-effective.

Made for specific availability + project priority! (4, Informative)

girlinatrainingbra (2738457) | about a year and a half ago | (#43760665)

It looks like the purpose behind this project is to have an "always available" (to this Ph.D. student) 32-node cluster that is dedicated to doing the work this dissertation student needs to perform in order to complete his Ph.D., and it makes sense to be able to do this for the cost of a single Xeon node in a larger Beowulf cluster.

This lets him escape the externalities that might impinge on getting his own work done, like the big bad Beowulf cluster not being up or available when he needs it, or it being prioritized for someone else's project (say, a professor who has tenure and more funding available). Those sorts of shenanigans would delay his work. So a 1/3rd-speed cluster that's always available for your own project is a helluva good deal at 1/32 the cost of the big bad Beowulf cluster, eh? At least I think so!

Re:Made for specific availability + project priori (1)

gl4ss (559668) | about a year and a half ago | (#43760727)

But the 32 Raspberry Pis are 3 times more expensive per unit of compute speed than the Onyx node he benchmarked against.
That's to say, the $1000 (8-thread) machine is about 3 times faster than all the Raspberry Pis combined! It's a vastly superior computing solution.

It has to be for proofing some supercomputing software, and for learning, more than for anything practical.
You can't even get the Pis at a price that would get you 32 of them for a thousand bucks, though, and then add costs for cabling, power supplies, etc.

Comms and network testing needs hardware!!! (4, Informative)

girlinatrainingbra (2738457) | about a year and a half ago | (#43760761)

Right, but a "vastly superior computing solution" for CFD or linear equations is one thing. Trying to simulate network communications activity for 32 or 33 nodes on a single compute node is probably slower than actually trying out the algorithms on dedicated hardware that instantiates an actual hardware network. Thus, for a project that tries out different networking and communications algorithms, a 3 times more expensive by your calculations might actually end up being 10 times less expensive, especially considering the locking and interprocess communications required in a multi-threaded simulation on a single compute node vs. actually running it on real hardware with 32 nodes and an ethernet network linking the 32 nodes.
.
Especially considering that this system is going to be used for wireless communications protocols, the real hardware solution is IMHO the better way to go.

Re:Comms and network testing needs hardware!!! (1)

gl4ss (559668) | about a year and a half ago | (#43760803)

Yeah, for that it makes sense, as a learning/testing tool, as I said in other comments.

But you said it's 1/3rd of the power of the Beowulf cluster for 1/32 of the price, and it just doesn't go that way (if it did, it would scale for supercomputing at a vastly cheaper price than the PC nodes). The cluster is 1/3rd of the power of a single PC, for a higher price than a single PC.

Re:Comms and network testing needs hardware!!! (1)

shess (31691) | about a year and a half ago | (#43761677)

I'm sorry, but ... what? The locking and other interprocess overhead will not increase on a multi-core single-node solution; it will decrease. If your system can run lock-free on the multi-node solution, it can run lock-free on a multi-core solution. It's a fleet of processes talking to each other via TCP/IP either way (except on a single-node solution you have additional options like UNIX-domain sockets or named pipes).

The only way I could see it possibly being a win is if the system being simulated is itself composed of Raspberry Pi devices, which isn't at all clear, given that the researcher was apparently fine using a shared Xeon cluster in the first place.

Re:Comms and network testing needs hardware!!! (1)

flux (5274) | about a year and a half ago | (#43762183)

And how does a single node effortlessly simulate the data propagation delays that are inevitable in a distributed system? Do you have a solution that involves less than $1000 worth of work? (Well, I suppose building up the RPi cluster took some time as well...)

It would be a more general solution if such software were written, but I wouldn't say cheaper.
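One crude way a single-node simulator could fake propagation delay is to sleep before delivering each simulated message, as in the sketch below. The helper function and the 2 ms latency figure are made up for illustration; it also shows the point above, since real wireless links add jitter and loss that a fixed sleep does not capture, and someone has to write and validate all of that machinery.

#include <stdio.h>
#include <unistd.h>   /* usleep */

/* Hypothetical helper: deliver a simulated message after an artificial
 * one-way latency. A fixed delay ignores jitter, loss, and contention. */
static void deliver_with_latency(const char *msg, unsigned latency_us)
{
    usleep(latency_us);              /* emulate propagation delay */
    printf("delivered: %s\n", msg);  /* hand the message to the receiving node's logic */
}

int main(void)
{
    deliver_with_latency("hello from simulated node 7", 2000); /* assume a ~2 ms link */
    return 0;
}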

Re:Comms and network testing needs hardware!!! (1)

TheJish (2926133) | about a year and a half ago | (#43761769)

Exactly!

Re:Made for specific availability + project priori (1)

SuricouRaven (1897204) | about a year and a half ago | (#43761081)

32 Pis, 800 mA per Pi: 25.6 A. Call it 30 A to give some margin for error. Not exactly exotic; it should be doable for thirty quid or so.
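Checking the arithmetic at the Pi's nominal 5 V supply (the 800 mA per board is the figure assumed above):

\[
32 \times 0.8\,\text{A} = 25.6\,\text{A},
\qquad
P \approx 25.6\,\text{A} \times 5\,\text{V} = 128\,\text{W}
\]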

I've read about servers that pack hundreds or thousands of ARM or Atom chips into one enclosure, giving great performance-per-watt for heavily threaded workloads. Mostly targeted at webservers.

Re:Made for specific availability + project priori (1)

AchilleTalon (540925) | about a year and a half ago | (#43762597)

I believe you still miss the point. The performance of the cluster isn't the real issue. The benchmark was run just to show that the expected degree of parallelism was actually reached. The benchmark is in no way representative of the user requirements for the cluster itself and the tasks it is needed for; it was just run as a checkpoint to demonstrate that the cluster is working as expected.

Re:Made for specific availability + project priori (0)

Anonymous Coward | about a year and a half ago | (#43761661)

Can you please point me to the performance results? I know the microserver is a cool toy, but running on SD cards, I'd assume it would be better to actually run his real network instead of running a simulator on it. I don't know what he is running in particular, but I recall, during my dissertation, asking the department and taking over computer labs that many kids wouldn't use anyway. It was exciting because we knew someone could show up and randomly reboot one of the systems so they could use it, while professors used their own acquired hardware.

Re:Made for specific availability + project priori (1)

flyingfsck (986395) | about a year and a half ago | (#43764513)

So how about making a 32 node simulator?

Re:Slow Pi (1)

gatkinso (15975) | about a year and a half ago | (#43760957)

The RPi is cheap. Now, scale this to a bunch of PandaBoards or Gumstix running in a suitcase. Voila: luggable supercomputer.

Re:Slow Pi (1)

drinkypoo (153816) | about a year and a half ago | (#43761147)

Scale to the ODROID-U2. It only has a four-week warranty, but if you use enough of them, the presumably high failure rate might not impinge on operations. Delivered, it costs about the same as a Pi, but it's a lot more machine. It has the same problems with proprietary chips, but they're the same problems after all; it's not like the R-Pi doesn't have them.

Re:Slow Pi (1)

drinkypoo (153816) | about a year and a half ago | (#43761151)

(Er, delivered it costs four times as much as the Pi, but it has four cores and a lot more of everything else too. So what I meant to say, but didn't (in b4 correction), is that you get more for your money. The abysmally short warranty is why I don't own one already.)

Re:Slow Pi (1)

julesh (229690) | about a year and a half ago | (#43762949)

Wait a bit. See: http://olimex.wordpress.com/tag/a20/ [wordpress.com] - when these become available, they'll be about 4x the speed of a pi for about twice the money. Plus the olimex boards have a lot more GPIOs and useful stuff like that. :)

Re:Slow Pi (1)

drinkypoo (153816) | about a year and a half ago | (#43763047)

Thanks for the heads-up! I will, in fact, wait. (I am getting an Ouya for the living room, but that's something else...)

Re:Slow Pi (0)

Anonymous Coward | about a year and a half ago | (#43763893)

Except you forgot to factor in the cost of those 3.85 seconds of university cluster time, as well as the wait for it.

The pi cluster might take 14.25 seconds for the task, but he could start it NOW.

On the other cluster not owned by him, sure it would be 3.85 seconds, after your 6 week waiting period and paying $100 for the CPU time.

I'd much rather have my results in about 15 seconds, than use your "better system" which won't provide those results for over a month and a half at additional per-job expense.

I noticed you personally are not down in the university computer lab waiting in line to use their computers to post to slashdot..
No, you instead got your own computer to do what you please with it, when you please to.

Stop bitching that others just want the exact same thing.

It is the LEDs stupid (1)

flyingfsck (986395) | about a year and a half ago | (#43764459)

It is all about the RGB LEDs. Nothing else matters.

Rack mounting? (4, Insightful)

thegarbz (1787294) | about a year and a half ago | (#43760541)

Not to diminish your achievements which are otherwise quite cool, but this novel approach to rack mounting is anything but. Quite possibly the single most important feature of a rack is ease of component access. By tying all components together with PCB standoffs you basically can't remove a single RPi if there's ever a pressing need.

If anything you've shown a novel way of cramming things together without the use of a rack.

Re:Rack mounting? (1)

Neo-Rio-101 (700494) | about a year and a half ago | (#43760739)

Granted, there's not much to remove from a Pi mounted like this other than the SD card.
The only time I'd imagine you'd tamper with a Pi is when it decides to die from the overclock.

Re:Rack mounting? (0)

Anonymous Coward | about a year and a half ago | (#43761503)

There's the Pi itself, which is a throwaway component should it fail; a more modular design allowing its removal without unscrewing everything would be nice.

Re:Rack mounting? (0)

Anonymous Coward | about a year and a half ago | (#43762239)

Or just let you unplug it and stick a new one on the end. If the failure rate is not much more than 10% (or even 30%) and you are not that strapped for space, there isn't much need to remove old ones right away. Just keep adding new ones until either the whole rig is thrown out or you need to just spend a day clearing out the backlog of broken ones.

Re:Rack mounting? (1)

thegarbz (1787294) | about a year and a half ago | (#43765175)

Granted, there's not much to remove from a Pi mounted like this other than the SD card.
The only time I'd imagine you'd tamper with a Pi is when it decides to die from the overclock.

Of course, but that's the point. Racks exist to allow you to take out components to swap. Often this is damage, sometimes this is upgrades, sometimes expansion.

Of note is that there are now several variants of the RPi, including 256MB and 512MB versions. So upgrading may be a logical choice too.

Report could be improved... (1)

Anonymous Coward | about a year and a half ago | (#43760613)

Neat project, but the report really left me frustrated.
You start by comparing the price and features of the RPi to two other alternatives, e.g. an Onyx node.
Then you compare one RPi to one Onyx node. But moving on, you never do a price or performance comparison of the 32-RPi cluster against the same Onyx node, which would be the interesting thing.
Figure 5 shows something you could possibly relate to the earlier information, but only graphically. You don't state the actual numbers!

Moving on: "As discussed earlier, each RPi uses about 2W of power..." Except that you have overclocked it, so the power consumption is actually "higher" according to you. Well, what is it *exactly*, in YOUR configuration? The one absolute value you give, 2 W, does not apply.
"Figures 9 a and b show the overall power use as measured at the wall..." No, they do not. They show a pie chart of the percentage distribution of power by different components.

Make sure you include both absolute values and graphical comparisons; otherwise the reader does not get the full value out of it.

"14.6 seconds for a single thread on an Onyx node. With four threads it goes down to 3.85 seconds thanks to four processing cores. With eight threads, it goes up to a whopping 3.90 seconds."
This is interesting and good info. You have measurably proved that there is no gain from more than 4 threads on the Onyx. But what is "whopping" about it? Here you may be trying to use sarcasm to be funny, but it doesn't fit... First of all, the values are not extreme enough to make the sarcastic joke work. If you had a single, very small value, it might be funny to call it "whopping". But here, in a comparison? I mean, why is 3.85 not "whopping" also, then? It adds no value.
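For what it's worth, the quoted numbers do imply a near-ideal four-core speedup:

\[
S_4 = \frac{14.6\,\text{s}}{3.85\,\text{s}} \approx 3.8,
\qquad
E_4 = \frac{S_4}{4} \approx 95\%
\]

with essentially no change from 4 to 8 threads, which is consistent with four physical cores.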

In summary, nice project but you can improve the report. Good luck.

Re:Report could be improved... (1)

gl4ss (559668) | about a year and a half ago | (#43760749)

The lack of improvement from 4 to 8 threads is probably because it only has 4 real cores.

However, I suspect that later he doesn't do the comparison of the single Onyx node vs. his whole cluster because it would show the Pi cluster to be a pointless endeavor (it's only useful for learning parallel computing, not for executing it). His 32-Pi cluster is more expensive than the $1000 node (which certainly isn't the cheapest way to get a 3 GHz quad-core PC).

Re: Report could be improved... (0)

Anonymous Coward | about a year and a half ago | (#43762825)

Apparently I needed sarcastices ;)
http://www.collegehumor.com/article/6872071/8-new-and-necessary-punctuation-marks

should this perhaps be at RPI instead? jk! (3, Insightful)

girlinatrainingbra (2738457) | about a year and a half ago | (#43760627)

With a name like that (RPiCluster), perhaps it ought to be situated at the R.P.I. [wikipedia.org] in Troy, New York? Though for that nomenclature geographicalocalization, the Republican Party of Iowa [wikipedia.org] has as much claim to RPI [wikipedia.org] as these others do. I like the justification pointed out by the builder of this RPiCluster:
The RPi platform has to be one of the cheapest ways to create a cluster of 32 nodes. The cost for an RPi with an 8GB SD card is ~$45. For comparison, each node in the Onyx cluster was somewhere between $1,000 and $1,500. So, for near the price of one PC-based node, we can create a 32 node Raspberry Pi cluster! [from the pdf file at http://coen.boisestate.edu/ece/files/2013/05/Rasp.-Pi.pdf [boisestate.edu] ]

So the summary of the informal document is that it's cheaper to build a 32-node Rasp.-Pi cluster than to purchase even a single node of the 32-node Beowulf cluster that may or may not be available to you. And if you want to get your Ph.D. work done, I must agree that it sounds better to not be dependent upon the whims and follies of others' benevolence in having external hardware clusters available for your use. Bravo, Joshua Kiepert, I like your "informal writeup". Best wishes on your work!
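The cost arithmetic from the quoted passage, for comparison:

\[
32 \times \$45 \approx \$1440
\]

which lands the whole 32-node Pi cluster in the price range of a single Onyx node ($1,000 to $1,500), before cabling, power supplies, and the rack itself.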

Re: should this perhaps be at RPI instead? jk! (-1)

Anonymous Coward | about a year and a half ago | (#43761005)

Add another 23 units and call it a Raspberry Pi55.

How does that taste?

Re:should this perhaps be at RPI instead? jk! (1)

TheJish (2926133) | about a year and a half ago | (#43761619)

Thanks. You have accurately deduced my intent ;) I started this project when the 32-node cluster I had been using was taken offline for renovations of the lab it resides in.

Performance / Cost (-1)

Anonymous Coward | about a year and a half ago | (#43760891)

So this is a nice research project, but not really that efficient.

According to the tables on page 9, one Pi takes 450 s and 32 Pis need (450/32) s, or about 14 s. This is also the time of a single thread on the Xeon. The cost of the Pi cluster is almost $2000, while the cost of the Xeon is given at around $1000.
I know that I can get an AMD 8-core at 3000 MHz with 32 GB RAM for about 600 EUR. The AMD might also have the advantage that it has 8 real CPU cores and not shared ones with hyper-threading.

Why not virtual? (-1)

Anonymous Coward | about a year and a half ago | (#43760967)

If you really want to study clustering and it's not about going fast, you can easily buy a $600 PC to do the trick. You could even do this on a secondhand 2007-ish rack server if you wanted to, or rent a server somewhere for a few months for half the price...

If it has an up-to-date dual core, it can easily run 20 or 30 VMs that are each limited to a single-core 200 MHz CPU, 256 MB RAM, and a moderate 4 GB hard disk.

If it has a 3 GHz quad core (which isn't hard to get; CPUs of that level have been available since 2008 or earlier), you can allocate even more resources to each node than a normal Pi has and run any x86 OS you want!

That being said, I really don't see the advantage of using Pis, as you will spend a LOT more time installing, hooking up, maintaining, ... than you would in a virtual environment.

welcome to the home of jealous haters (4, Interesting)

decora (1710862) | about a year and a half ago | (#43761121)

i wish i had done this, therefore you suck.

Re:welcome to the home of jealous haters (0)

Anonymous Coward | about a year and a half ago | (#43762427)

fuck you bitch diaf ... i mean, good point

Very impressive (1)

cyberthanasis12 (926691) | about a year and a half ago | (#43761231)

Impressive and cool!

Re:Very impressive (0)

Anonymous Coward | about a year and a half ago | (#43761629)

Especially since he managed to get the buggy USB/Ethernet to work without crashing!?

Re:Very impressive (2)

Ignacio (1465) | about a year and a half ago | (#43763363)

Yes, as he threw a real power supply at it instead of using the crappiest USB adapter he could find.

Imagine a Beowulf... (0)

Anonymous Coward | about a year and a half ago | (#43762571)

nevermind

diy top500? (0)

Anonymous Coward | about a year and a half ago | (#43762799)

is there a top500 [top500.org] for diy clusters?

I am sure you are very proud (0)

Anonymous Coward | about a year and a half ago | (#43763613)

But your supervisor is a fool.

Finally, no Beowulf cluster joke. (0)

Anonymous Coward | about a year and a half ago | (#43764235)

One article that can't trigger the joke "But will it run a Beowulf cluster?"

BSU HAS EPIC GEEKS?? (0)

Anonymous Coward | about a year and a half ago | (#43765899)

If only I had technology-majoring friends at BSU, I would have known there was a decent geek community and chosen them over U-Idaho :|

overcomes 'others have priority' (1)

eionmac (949755) | about a year and a half ago | (#43768451)

The big problem in PhD studies is your own review a few weeks before submission time, when you realize the things you should have done. At that point, your own always-available 'cluster' is a "beyond price" jewel of an asset to you. Awaiting priority on faculty assets could cost you your degree.

Good luck to you. Good thinking in sorting out your priorities.

More measures & better data representation (1)

imevil (260579) | about a year and a half ago | (#43769385)

I would have preferred graphs with lines, a logarithmic scale, and a comparison with the theoretically attainable performance.

Moreover, some more popular benchmarks should be run: HPL, NERSC Trinity benchmarks, or even real applications like Quantum Espresso which has some standard benchmark tests.

Power consumption should be measured when running any benchmarks as it may vary depending on the type of application (CPU bound, memory bound).

Nice project on the electrical and electronic engineering side; it could benefit from the insight of someone from the scientific computing field.
