
Ask Slashdot: Parallel Cluster in a Box?

QuantumMist (2463834) writes | more than 2 years ago

Hardware 4

QuantumMist writes "I'm helping someone accelerate an embarrassingly parallel application. What's the best way to spend $10K to $15K to get the maximum number of simultaneous threads of execution? The focus is on threads of execution because memory requirements are fairly low, e.g. ~512 MB in memory at any given time (maybe up to 2-3x that at the very high end). I've looked at the latest Tesla card, as well as the 4-Teslas-in-a-box solutions, and am having trouble justifying the markup for what's essentially 'double-precision FP being enabled, some heat improvements, and ECC, which actually decreases available memory (I recognize ECC's advantages, though).' Is spending close to $11K for the 4 Teslas in a 1U setup really the only option at this time? GTX cards can be replaced for a fraction of the cost, so should I just stuff 4 or more of them in a box? Note: they don't have to pay the power/cooling bill. Amazon is too expensive at this level of performance, so going cloud via EC2 is out. Are there any parallel architectures at this price point, even for $5K more? Any good manycore offerings I've missed, e.g. somebody who can stuff a ton of ARM or other CPUs/GPUs in a server (cluster in a box)? It would be great if this could be addressed via PCI or another standard interface. Should I just stuff 4 GTX cards in a server and replace them as they die from heat? Any creative solutions out there? Thanks for any thoughts!"


google beowulf (1)

crutchy (1949900) | more than 2 years ago | (#38242722)

nuff said

Re:google beowulf (1)

QuantumMist (2463834) | more than 2 years ago | (#38242888)

Unfortunately, such a cluster doesn't maximize sheer performance: there's a lot of communication back and forth, no shared-memory model, etc. Granted, given the low memory requirements, the data could be replicated on each Beowulf compute node; but inter-thread communication then carries very different costs. Furthermore, I'm guessing you just mean putting a few processors in each box and getting 10 or so of them, but at that point isn't a Tesla solution better?

Re:google beowulf (1)

crutchy (1949900) | more than 2 years ago | (#38244082)

cost: beowulf
performance: cray, etc.
i guess tesla fits somewhere in the middle

i got the impression cost was a driving factor from the OP

Re:google beowulf (1)

crutchy (1949900) | more than 2 years ago | (#38244170)

maybe the OP could also have a look at the @home projects that use boinc, which lends itself to beowulf to allow for cheap distributed parallel processing

some applications would no doubt require custom software though (regardless of the platform)