
Slimming Down a Supercomputer

timothy posted more than 4 years ago | from the render-farm-porn dept.

Supercomputing

1sockchuck writes "Happy Feet animator Dr. D Studios has packed a large amount of supercomputing power into a smaller package in its new render farm in Sydney, Australia. The digital production shop has consolidated the 150 blade chassis used in the 2007 dancing penguin feature into just 24 chassis, entirely housed in a hot-aisle containment pod. The Dr. D render farm has moved from its previous home at Equinix to the E3 Pegasus data center in Sydney. ITNews has a video and photos of the E3 facility."

So instead of happy feet.... (1)

allaunjsilverfox2 (882195) | more than 4 years ago | (#31645652)

He has burning feet? :p Had to say it. It's interesting to see the reduction in space.

Re:So instead of happy feet.... (1)

Sulphur (1548251) | more than 4 years ago | (#31645752)

This is a blade.

Does it blend? (1)

TheMiddleRoad (1153113) | more than 4 years ago | (#31645658)

In slow motion, please.

already exists (0)

Anonymous Coward | more than 4 years ago | (#31645662)

But I thought they already made a slim supercomputer [myps3.com.au]

The unnamed business is: (0)

Anonymous Coward | more than 4 years ago | (#31645708)

Boronia Capital

Priceless (4, Funny)

syousef (465911) | more than 4 years ago | (#31645770)

Cost of real estate in prime metropolitan area - $15 million
Cost of state-of-the-art server racks - $30 million
Cost of flying in a cooler the size of a small bus on a 747 - $2 million
Cost of seeing data center employee's face when they realise they're on call 24/7 for no extra cash - Priceless.

Re:Priceless (1)

nacturation (646836) | more than 4 years ago | (#31646184)

Value of free publicity for Happy Feet and Hewlett Packard from the advertorial: $50,000

Re:Priceless (1)

cheater512 (783349) | more than 4 years ago | (#31646250)

A380 actually. :P

Re:Priceless (1)

Hurricane78 (562437) | more than 4 years ago | (#31649398)

Cost of seeing data center employee's face when they realise they're on call 24/7 for no extra cash - Priceless.

Well, if he agreed to give away work for free, because he thinks he’s worth that little, then that’s his own damn fault.
People who don’t learn to say no will obviously be walked all over. Your boss is only a client of yours. You can always get another client. Either he offers a good deal, or he can GTFO.

I read TFA (1)

grimdawg (954902) | more than 4 years ago | (#31645792)

It was a billion times more entertaining than Happy Feet.

Re:I read TFA (1)

TheMiddleRoad (1153113) | more than 4 years ago | (#31645800)

Happy Feet was a fun movie. How dare you?

Re:I read TFA (1)

M8e (1008767) | more than 4 years ago | (#31645874)

Happy Feet was at least more entertaining than The Lord of the Rings. There was singing, dancing and walking in Happy Feet. The walking parts were also better, as penguins do that in a more entertaining way.

Re:I read TFA (1)

Hurricane78 (562437) | more than 4 years ago | (#31649426)

We here in Europe don’t get what you Americans like about singing in movies and shows. Always the pointless singing, while we all just collectively cringe. It ruins the whole movie for us.
Not judging here. Do whatever makes you happy. :)
But we don’t get it, and we can’t stand such movies.

Re:I read TFA (0)

Anonymous Coward | more than 3 years ago | (#31663268)

You must really hate Bollywood pics out there on the Continent, then....

Re:I read TFA (1)

value_added (719364) | more than 4 years ago | (#31645962)

Indeed. It was an insult to pediphiles everywhere.

Re:I read TFA (0)

Anonymous Coward | more than 4 years ago | (#31646526)

Pedicure [wikipedia.org]

A pedicure is a way to improve the appearance of the feet and their nails. It provides a similar service to a manicure. The word pedicure comes from the Latin words pedis, which means of the ankle, and cura, which means care

Why are you hating on the people who have ankle fetishes?

I work in a national lab computer center (1)

ChuckLLNL (749838) | more than 4 years ago | (#31645808)

and I'm getting a kick out of these "claims" of supercomputer "prowess"...

Re:I work in a national lab computer center (0)

Anonymous Coward | more than 4 years ago | (#31645896)

The article said the original cluster was on the Top500 list, and it is nice to see a supercomputer put to good use, i.e. earning good money for its users.

Amazingly dependent on algorithms... (0)

Anonymous Coward | more than 4 years ago | (#31645822)

It's amazing how much such computing depends not only on the complexity of the algorithms involved but also on their implementation.

Eat that, all you PHP + SQL plumber "programmers" who have never heard of terms like, say, minimal perfect hash or Bloom filter ;)
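For anyone who hasn't run into one: a Bloom filter is just a bit array plus a handful of hash functions, giving fast membership tests with a tunable false-positive rate and no false negatives. A minimal Python sketch (sizes and hashing scheme chosen arbitrarily for illustration, not taken from any particular renderer):

    import hashlib

    class BloomFilter:
        """Tiny Bloom filter: membership tests with possible false positives,
        never false negatives."""
        def __init__(self, size_bits=1024, num_hashes=3):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(size_bits)  # one byte per bit, for clarity

        def _positions(self, item):
            # Derive k pseudo-independent positions by salting a single hash.
            for i in range(self.num_hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(digest, "big") % self.size

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos] = 1

        def __contains__(self, item):
            return all(self.bits[pos] for pos in self._positions(item))

    bf = BloomFilter()
    bf.add("frame_0042")
    print("frame_0042" in bf)   # True
    print("frame_9999" in bf)   # almost certainly False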

 

Tried to check out the E3 Networks site (1)

XCondE (615309) | more than 4 years ago | (#31645868)

But it's in Flash. And I didn't have the patience to wait for the clouds and animation to finish.

http://www.e3networks.com.au/ [e3networks.com.au]

Who is this supposed to be targeting? You have to be a class A moron to build a data centre website using Flash on the landing page.

Re:Tried to check out the E3 Networks site (4, Funny)

deniable (76198) | more than 4 years ago | (#31645904)

It's targeted at managers with money. Need I say more?

Re:Tried to check out the E3 Networks site (1)

Hurricane78 (562437) | more than 4 years ago | (#31649490)

Did you spot any pointy hair?

Re:Tried to check out the E3 Networks site (1, Interesting)

Anonymous Coward | more than 4 years ago | (#31646016)

Slightly off-topic, but...

1. The funny thing about their site design is that about 90% of it could have easily been done with mouseovers and no Flash.

2. None of the text can be highlighted. Let's say they were the solution for my business and I just needed to e-mail someone in management a snippet from their site. Too bad. No copy and paste.

It feels like it's 2001 or 2002 again.

Re:Tried to check out the E3 Networks site (1)

MrMr (219533) | more than 4 years ago | (#31646676)

class B morons, obviously.

Re:Tried to check out the E3 Networks site (1)

Macka (9388) | more than 3 years ago | (#31654286)

You don't wait for it to finish, there is no finish. You just click on the background and it loads the rest of the site.

Flash isn't just used for the landing page: it's the whole site. Every scrap of it is Flash. I feel sick!

About relocating supercomputing power to Australia (5, Funny)

hallux.sinister (1633067) | more than 4 years ago | (#31645890)

You all do realize that electrons spin backwards there, right?

Re:About relocating supercomputing power to Austra (2, Funny)

M8e (1008767) | more than 4 years ago | (#31645914)

Not only that, they are also upside down.

Re:About relocating supercomputing power to Austra (1)

Nkwe (604125) | more than 4 years ago | (#31645938)

You all do realize that electrons spin backwards there, right?

Only when you are not watching.

Re:About relocating supercomputing power to Austra (1)

Lorens (597774) | more than 4 years ago | (#31646052)

You all do realize that electrons spin backwards there, right?

Moderation +2, 100% Informative

Only on Slashdot.

Re:About relocating supercomputing power to Austra (1)

CODiNE (27417) | more than 4 years ago | (#31646678)

Dude you could at least CITE it. Hello!
http://en.wikipedia.org/wiki/Coriolis_effect [wikipedia.org]

Re:About relocating supercomputing power to Austra (1)

hallux.sinister (1633067) | more than 4 years ago | (#31667230)

Sorry, forgot. Interesting Wiki Article though. Some of it is beyond me, although it may also be that it's closing in on two in the morning, several days past my bedtime.

Re:About relocating supercomputing power to Austra (1)

CODiNE (27417) | more than 4 years ago | (#31668622)

I was joking to make your joke about electrons going backwards seem more real. But nobody modded me funny cuz they thought I was serious. No deadpan humor on the net. It's all about the voice.

Re:About relocating supercomputing power to Austra (1, Funny)

Anonymous Coward | more than 4 years ago | (#31647314)

It's ok, they just flip the servers upside-down.

What about the rest of it ? (4, Interesting)

slincolne (1111555) | more than 4 years ago | (#31645898)

Physical space is the least interesting point of this article. Other things would be:

What racks are they using (at least 42RU in height)?

How do they get power into these (4 chassis, each with 6 x 15A power inlets)?

Are they using rack top switches, or is there more equipment?

Are they using liquid-cooled doors - if so, whose?

I once tried to get answers from HP on how to power their equipment at this density - they didn't have a clue. It's worth remembering that each of these chassis has six power supplies, each rated at up to 2.2 kW. Even allowing for a 2N configuration, that's a massive amount of power, and a lot of cables.
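As a rough sanity check on those figures (my own back-of-envelope sketch, assuming four 10U chassis per 42RU rack and the 2.2 kW-per-supply number above):

    # Nameplate vs. usable power for a rack of blade chassis (assumed layout).
    psu_per_chassis = 6
    watts_per_psu = 2200        # "up to 2.2 kW" per the comment above
    chassis_per_rack = 4        # assumption: 4 x 10U chassis in a 42RU rack

    nameplate_w = psu_per_chassis * watts_per_psu * chassis_per_rack
    usable_2n_w = nameplate_w / 2   # 2N: half the supplies carry the full load

    print(f"Nameplate: {nameplate_w / 1000:.1f} kW per rack")    # 52.8 kW
    print(f"Usable (2N): {usable_2n_w / 1000:.1f} kW per rack")  # 26.4 kW

Even the 2N figure is several times what a typical general-purpose rack is provisioned for, which is the poster's point about power feeds and cabling.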

Re:What about the rest of it ? (4, Informative)

imevil (260579) | more than 4 years ago | (#31646290)

TFA says they use 48RU racks, and each cabinet draws 14.4 kW (60A), which in my opinion is not that impressive: you just need 3 phases at 20A, 240V.

As for cooling, you can easily get away with no water cooling if your hot-aisle containment is well done. From the pics it is just Dell 1U servers, and if you fill one 48U rack with those you do get to 14.4 kW. But not all racks are for number-crunching; you have racks for storage, control and network, and those draw less than 8 kW.

The problem is not powering those things so much as cooling them. With good hot-aisle or cold-aisle containment you can go up to 15 kW/rack, but depending on the air volume, you're quickly screwed if the cooling fails.
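The 14.4 kW figure falls straight out of those numbers (a quick sketch, treating each phase as an independent 240 V / 20 A feed, as the comment does):

    # Per-rack power from three single-phase 240 V / 20 A feeds.
    phases = 3
    amps_per_phase = 20
    volts = 240

    rack_kw = phases * amps_per_phase * volts / 1000
    print(rack_kw)  # 14.4 (kW), matching the 60 A / 14.4 kW quoted from TFA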

Re:What about the rest of it ? (2, Interesting)

dbIII (701233) | more than 4 years ago | (#31646748)

Dell servers? Australia is the land Dell forgot, so that's the last thing you want. From Australia you end up talking to Dell people on three different continents to get even the smallest problem solved, and the timezone difference acts as a barrier to communication, stretching into weeks things that should be solved in a couple of days. Plus there is far better gear from whitebox suppliers using SuperMicro boards, so why use Dell in the first place? Dell can't do two boards with 8 cores on each in 1U, and I've got some of those a couple of years old now.
As for power: with 3-phase, plenty of spare capacity and a good electrician to add more outlets, it's not a problem.

Re:What about the rest of it ? (1)

Anpheus (908711) | more than 4 years ago | (#31647780)

We just got a few 1U dual socket quad core servers from Dell, so I don't know why you're saying they can't do it.

Re:What about the rest of it ? (1)

dbIII (701233) | more than 4 years ago | (#31651424)

No, you've misunderstood - two servers in 1U with a power supply between the two boards. Others have had them for a couple of years.

Re:What about the rest of it ? (1)

Anpheus (908711) | more than 4 years ago | (#31651982)

Oh, they just released those. They have a 2U box with 4 servers sharing two PSUs now.

PowerEdge C6100 I think. But you're right, I remember looking at HP and the others and they did have them but they weren't the right price for us, as we didn't need density.

Re:What about the rest of it ? (1)

BitZtream (692029) | more than 4 years ago | (#31650468)

Dell can't do two boards with 8 cores on each in 1U

There comes a point where you've crammed too much heat into a case and the whole system rapidly becomes unstable.

Shoving 16 cores into a single 1U case is, without even doing the numbers, safely past any sane risk.

Great, you can get that many in one U... Dell doesn't want to deal with supporting such hardware and all the heat issues.

There's more to a data center than how much you can stuff into the racks; it actually has to work once it's in there.

Re:What about the rest of it ? (1)

dbIII (701233) | more than 4 years ago | (#31651498)

They've worked well for at least two years, and other places had them before me, so how's that for "any sane risk"?
"Doing the numbers" is called design - in those cases enough airflow does the job.
There are of course denser setups than that anyway, but that changes the price category - whereas the two-servers-in-1U box costs less than 2 x 1U Dell servers of equivalent specs. If you don't need the extra drive bays it's not worth going for Dell, especially if you are in a country where their support is almost non-existent.

Re:What about the rest of it ? (1)

drsmithy (35869) | more than 3 years ago | (#31654028)

Dell can't do two boards with 8 cores on each in 1U and I've got some of those a couple of years old now.

They're called blades.

The density isn't quite as high, but since you'll nearly always run out of power or cooling long before you run out of rack space, even with 1U boxes, there's not a lot of benefit from increasing density much past even a simple 1U pizza box. The benefits of blades are more in the management centralisation and reduced cabling, which you don't get in those servers you're talking about.

Re:What about the rest of it ? (1)

dbIII (701233) | more than 3 years ago | (#31654268)

Those particular boxes ended up being cheaper than 2 x 1U nodes with the same processing power, and five of them are cheaper than 10 equivalent blades plus a chassis.
My main point is that Dell lags well over a year behind many of the other vendors and often costs more, so if they won't give you support there is no reason to go with them.

Re:What about the rest of it ? (1)

Shawndeisi (839070) | more than 4 years ago | (#31647892)

They're likely not using top-of-rack switches, since you can pack a nutty amount of bandwidth into relatively few links with 10GbE switches a la Cisco 3120s. I would be unsurprised if they had a Nexus 7000 in the middle of it all.

The article does mention that they're using HP blade servers, not Dells as another commenter posted. In the video they showed a BL490c G6 blade, which is a dual-socket Nehalem blade at 16 per chassis. For cooling they were using water-cooled APC pods. The power isn't really the hard part; there is a version of the HP c7000 chassis that takes two three-phase plugs straight into the chassis if you don't feel like running C19/C20 PDUs on the side of the racks.

Re:What about the rest of it ? (0)

Anonymous Coward | more than 4 years ago | (#31648274)

Hi there,

We are using APC 42U racks.
We use APC PDUs to deliver the power in the chassis racks.
No top of rack switches - unnecessary. We use a single 10GbE from each C7000 back onto a Brocade core.
No liquid cooled doors. Unnecessary.
We are seeing around 6.2 kW out of each C7000 (32 nodes per 10U). Around 100A per rack.
Our suite requires prox card access so it is secure.

Hope that helps :)
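Those figures hang together: four C7000s at 6.2 kW each lands right around the quoted 100A per rack if you assume 240 V distribution (a sanity check using my own assumptions about chassis-per-rack and voltage, not official numbers):

    # Cross-checking ~100 A per rack against 6.2 kW per C7000.
    chassis_per_rack = 4     # assumption: 4 x 10U C7000 in a 42U rack
    kw_per_chassis = 6.2     # from the comment above
    volts = 240              # assumed distribution voltage

    total_kw = chassis_per_rack * kw_per_chassis      # 24.8 kW per rack
    amps = total_kw * 1000 / volts
    print(f"{amps:.0f} A per rack")                   # ~103 A, i.e. "around 100A"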

Re:What about the rest of it ? (1)

Jaime2 (824950) | more than 4 years ago | (#31652216)

HP blade chassis are easy to power. They are designed to run three power supplies to the left and three to the right. Just run two PDUs on each side of the rack (30 to 50 amps each, depending on what servers you run). Twenty-four power cords will supply about 100 devices (64 servers, 32 switches, and 8 management units). The system is designed so you will never need all six power supplies running at full tilt, as that wouldn't be fault tolerant. You can also get away with as few as four network cables for the entire rack: two 10-gig Ethernet for data plus two for out-of-band management.

However, I wouldn't stand behind the rack without lip balm on. It will feel like a windy day in the desert.
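The device and cord counts above add up if you assume four c7000-class chassis per rack, each with 16 half-height blades, 8 interconnect bays, 2 management modules and 6 power supplies (my tally, not from the article):

    # Devices served per rack versus power cords, under the assumptions above.
    chassis_per_rack = 4
    servers     = chassis_per_rack * 16   # 64 half-height blades
    switches    = chassis_per_rack * 8    # 32 interconnect modules
    mgmt        = chassis_per_rack * 2    # 8 management modules
    power_cords = chassis_per_rack * 6    # 24 cords, six supplies per chassis

    print(servers + switches + mgmt, "devices on", power_cords, "cords")
    # -> 104 devices on 24 cords, i.e. "about 100 devices"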

Headline (0)

Anonymous Coward | more than 4 years ago | (#31645900)

I read this as "simmering down", and pictured a Mac mini in a wok.

Slimming down? (1)

CrashandDie (1114135) | more than 4 years ago | (#31645940)

Somehow I don't believe the banks would be happy if we asked them to participate in "The Biggest Loser"

Seems fairly run-of-the-mill (1)

Junta (36770) | more than 4 years ago | (#31646432)

News story is that computers are faster and have more memory than they were 3 years ago, so they need fewer of them. They bought APC enclosed systems to avoid having a hot aisle open to the room air (of course, that means they paid a non-trivial amount for that).

Re:Seems fairly run-of-the-mill (1)

timeOday (582209) | more than 4 years ago | (#31647082)

I am surprised they need fewer of them, instead of making something just as big and several times faster. I guess faster computers just aren't needed for render farms any more. With 6000 cores, you could render the movie in real time (24 fps) if each core were allowed 4 minutes per frame.
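The arithmetic behind that claim, using the 6000-core figure from the comment (nothing here beyond what it states):

    # Per-frame budget if 6000 cores keep up with 24 fps playback.
    cores = 6000
    fps = 24

    seconds_per_frame = cores / fps    # with 6000 frames in flight, one must finish every 1/24 s
    print(seconds_per_frame / 60)      # ~4.2 minutes per frame, per core

As the reply below notes, production frames typically take far longer than that to render, so a farm this size is still nowhere near real time in practice.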

Re:Seems fairly run-of-the-mill (1)

glwtta (532858) | more than 4 years ago | (#31650000)

With 6000 cores, you could render the movie in real time (24 fps) if each core were allowed 4 minutes per frame.

I may be wrong, but I believe it takes way more than that to render a frame.

And (1)

sonicmerlin (1505111) | more than 4 years ago | (#31646464)

Yay for Moore's Law.

who manages their security? (1)

dropadrop (1057046) | more than 4 years ago | (#31646550)

The power cables are so easily accessible on the roof?

Strange... (1)

Disoculated (534967) | more than 4 years ago | (#31647410)

I'm really thinking that this article is leaving some very important details out... It's really strange that a money-making data center would have physical space as its primary limiting factor. Things like power, cooling, network, etc. are usually far more important than square feet of tile, especially since anyone with experience in data centers isn't going to put one in a high-value real estate market; it's going to be out in some industrial/commercial zone in the burbs where land/power/water are cheaper. It's not like the developers sending programs/renders to the cluster need to be anywhere near it physically.

I'm guessing these folks are addressing some sort of unique problem that they have to solve this way, and they don't bother to explain that to us in the article.

Re:Strange... (1)

dstates (629350) | more than 4 years ago | (#31647508)

Like the fact that the power consumption density is now so high that they need to go to a rack system with water cooling. Back to the good old days of the IBM 360.

Potts Point? No wonder movie tickets are expensive (0)

Anonymous Coward | more than 3 years ago | (#31654412)

...if the render farm is located in Potts Point, I 'spose they'll need to pass the real estate costs (whether they buy or rent) on to someone, and that's the poor old moviegoer.
For those who don't know Sydney, Potts Point is one of the most upmarket suburbs in Australia. ...sure isn't your average 'industrial estate location', noooo sireeee!

And the point is...? (1)

RBerenguel (1777762) | more than 4 years ago | (#31648480)

Is it really necessary to pack this so tightly? Is it worth the extra refrigeration overhead?

Re:And the point is...? (0)

Anonymous Coward | more than 4 years ago | (#31648778)

I'm not an expert on cooling, but since the heat is more concentrated, it may actually be more efficient.

Re:And the point is...? (1)

RBerenguel (1777762) | more than 4 years ago | (#31648906)

There is less possible airflow. Think of a radiator, where you try to have as much contact with the air as possible. You'll need to keep pumping air/water/coolant through continuously. Well, they are the experts; I guess they have it sorted out, somehow.

A renderfarm is NOT a supercomputer! (1)

halfdan the black (638018) | more than 4 years ago | (#31649150)

A renderfarm is really nothing more than a bunch of slave processors, each one rendering a separate frame. There is basically NO internode communication. A supercomputer, on the other hand, has extensive internode communication, which is why the switching fabric is so fundamentally important. So do not confuse a farm (web farm, or render farm) with a supercomputer.

So? (1)

BitZtream (692029) | more than 4 years ago | (#31650428)

If they waited another 2 years they could pack the same processing power into a desktop PC.

Why are we posting stories about companies who are just upgrading the old PCs they use for their rendering farm?

What's next? Google server farm updates? Going to start posting when Red Hat upgrades its FTP servers to faster hardware just because it's cheaper than replacing the old?

I mean seriously, all they did was upgrade, and... it wasn't even a big upgrade. I've made bigger purchases than that over the phone to Dell without even any written authorization.

It's a silly rendering farm. Not a supercomputer. It's not even an impressive rendering farm.

Wot? (1)

Ken_g6 (775014) | more than 4 years ago | (#31651358)

No GPUs?

BREAKING NEWS!!! (0)

Anonymous Coward | more than 3 years ago | (#31653182)

Area man uses newer parts with increased power to get better performance from gear used 5 years ago. Film at 11.

Opportunity... (1)

centre21 (232258) | more than 4 years ago | (#31692760)

Now let's see if they could put that technology to good use by creating a good film.
