
Inside the Lucasfilm datacenter

CmdrTaco posted more than 7 years ago | from the wrecking-childhoods-takes-bandwidth dept.

IT 137

passthecrackpipe writes "Where can you find a (rhetorical) 11.38 petabits per second bandwidth? It appears to be inside the Lucasfilm Datacenter. At least, that is the headline figure mentioned in this report on a tour of the datacenter. The story is a bit light on the down-and-dirty details, but mentions a 10 gig ethernet backbone (adding up the bandwidth of a load of network connections seems to be how they derived the 11.38 petabits p/s figure. In that case, I have a 45 gig network at home.) Power utilization is a key differentiator when buying hardware, a "legacy" cycle of a couple of months, and 300TB of storage in a 10.000 square foot datacenter. To me, the story comes across as somewhat hyped up — "look at us, we have a large datacenter" kind of thing, "look how cool we are". Over the last couple of years, I have been in many datacenters, for banks, pharma and large enterprise to name a few, that have somewhat larger and more complex setups."


Fuck twofo up again (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#17789472)

Twofo [twofo.co.uk] Is Dying

DC++ [dcpp.net] hub.twofo.co.uk:4144

It is official; Netcraft confirms: Twofo is dying

One more crippling bombshell hit the already beleaguered University of Warwick [warwick.ac.uk] filesharing community when ITS confirmed that Twofo's total share has dropped yet again, now down to less than a fraction of 1 percent of all file sharing. Coming hot on the heels of a recent Netcraft survey which plainly states that Twofo has lost more share, this news serves to reinforce what we've known all along. Twofo is collapsing in complete disarray, as fittingly exemplified by failing dead last in the recent Student comprehensive leeching test.

You don't need to be one of the Hub Operators to predict Twofo's future. The handwriting is on the toilet wall: Twofo faces a bleak future. In fact there won't be any future at all for Twofo because Twofo is dying. Things are looking very bad for Twofo. As many of us are already aware, Twofo continues to lose users. Fines and disconnections flow like a river of feces [tubgirl.com].

N00b Campus users are the most endangered of them all, having lost 93% of their total share. The sudden and unpleasant departures of long-time Twofo sharers fool_on_the_hill and Twinklefeet only serve to underscore the point more clearly. There can no longer be any doubt: Twofo is dying.

Let's keep to the facts and look at the numbers.

Sources indicate that there are at most 150 users in the hub. How many filelists have been downloaded? Let's see. 719. But 1621 IP addresses have been logged, and 1727 nicks have been sighted connecting to one user over the last term. How many searches are there? 600 searches in 3 hours. The highest sharer on campus, known as "firstchoice", or Andrew.Maddison@warwick.ac.uk in real life, was sharing over 1 TiB, despite working in ITS and not being on the resnet. He's only there so people off campus who think they're too good for bittorrent can continue to abuse the University's internet connection.

Due to troubles at the University of Warwick, lack of internet bandwidth, enforcements of Acceptable Usage Policies, abysmal sharing, retarded leechers, clueless n00bs, and ITS fining and disconnecting users, Twofo has no future. All major student surveys show that Twofo has steadily declined in file share. Twofo is very sick and its long term survival prospects are very dim. If Twofo is to survive at all it will be among p2p hardcore fuckwits, desperate to grab stuff for free off the internet. Nothing short of a miracle could save Twofo from its fate at this point in time. For all practical purposes, Twofo is dead.

Fact: Twofo is dying

Re:Fuck twofo up again (-1, Troll)

Anonymous Coward | more than 7 years ago | (#17789558)

Your 4chan troll was better.

Re:Fuck twofo up again (-1, Offtopic)

icepick72 (834363) | more than 7 years ago | (#17789616)

For all practical purposes, Twofo is dead.
Fact: Twofo is dying


That last sentence just makes it anti-climactic, because first you stated Twofo is dead, then you follow up by saying it's only just dying, contradicting your previous statement. Despite the fact you said "for all practical purposes", it's still very anti-climactic.

Friggin' noobs are ign'ant of Slashdot history (0, Offtopic)

James A. V. Joyce (798462) | more than 7 years ago | (#17790170)

Look up "BSD is dying" already, crikey.

Rendering (2, Funny)

Anonymous Coward | more than 7 years ago | (#17789496)

Only a few boxen are used for rendering and effects. The rest are there to track and calculate sales of Star Wars merchandise.

300tb (1, Insightful)

Anonymous Coward | more than 7 years ago | (#17789512)

Is that all? Most datacenters that house more than one large customer usually start at about 300TB; nothing to write home about. Most customers running SAP use a lot more.

Re:300tb (1)

morgan_greywolf (835522) | more than 7 years ago | (#17789844)

Is that all? Most datacenters that house more than one large customer usually start at about 300TB; nothing to write home about.
Yeah, I was a bit disappointed as well.

By mid-year, my pre-production lab will have 150TB. Our production datacenter, just for PLM alone, has something like half a petabyte.

Re:300tb (-1, Troll)

Anonymous Coward | more than 7 years ago | (#17789862)

But you still have a tiny penis, and that is what really matters.

Re:300tb (0)

Anonymous Coward | more than 7 years ago | (#17790910)

300TB was referring to ILM's high-speed storage; there's actually a bit more total storage than that figure.

Am I the only one who finds this funny... (1)

kalpaha (667921) | more than 7 years ago | (#17789520)

passthecrackpipe writes:

Over the last couple of years, I have been in many datacenters, for banks, pharma and large enterprise to name a few, that have somewhat larger and more complex setups."

I find it funny that Slashdot... (4, Funny)

Animaether (411575) | more than 7 years ago | (#17789596)

...would post this as a news item. Front page, too.

Let's break this submission down...

"Hi. I found this article on the web that totally didn't impress me, I think they fiddled with the numbers to make themselves look better than they are, and overall I really couldn't give a shite."

Yes. Obvious front page material for a Sunday!

Hmm? (1)

vladsinger (1049918) | more than 7 years ago | (#17789540)

I'll just assume it runs linux. Did it say in TFA?

Re:Hmm? (1)

hjf (703092) | more than 7 years ago | (#17789588)

On Oracle Magazine ( http://www.oracle.com/technology/oramag/oracle/06-may [oracle.com] ), they said they used Linux for their "render farm" (I hate the word farm, computers aren't cattle), but designs were made on other platforms (SGI, Mac...). Finally, everything, even every single rendered, uncompressed frame, is stored in an Oracle database (which runs on Linux).

Re:Hmm? (1, Interesting)

Anonymous Coward | more than 7 years ago | (#17789632)

> everything, even every single rendered, uncompressed frame is stored on an Oracle database

WTF? Jabba the blob is not impressed. Why would they do that? Wouldn't it make more sense to store metadata in the DB and the actual image data on XFS RAID?

Re:Hmm? (1)

hjf (703092) | more than 7 years ago | (#17791276)

Don't ask me. They claim to have a 300TB Oracle database. Whatever works for them, right? Anyway, check the article I posted; maybe I misunderstood it.

Re:Hmm? (4, Funny)

gEvil (beta) (945888) | more than 7 years ago | (#17789612)

Nope. They run LucasOS. It's perfect for their needs, since it's constantly being updated to suit his vision.

Re:Hmm? (1)

notoriousE (723905) | more than 7 years ago | (#17790216)

A quick scan shows that they are running Solaris 8 [netcraft.com]. Guess they are living in the past :)

Re:Hmm? (2, Funny)

ATMD (986401) | more than 7 years ago | (#17791174)

They're certainly living a long, long time ago.

Re:Hmm? (-1)

Anonymous Coward | more than 7 years ago | (#17790424)

Nope, they're running Vista. They're the only ones with the hardware for it.

Re:Hmm? (1)

jones_supa (887896) | more than 7 years ago | (#17790572)

It runs SCUMM, of course.

They run OS X. (0)

Anonymous Coward | more than 7 years ago | (#17791390)

This is LucasFilm, not "SucksBecauseWereCheapFilm", so they probably run OS X, God's own operating system. Faster and more secure than Lin-sux.

Silly ILM should Google it :-) (0)

Anonymous Coward | more than 7 years ago | (#17789548)

I don't mean use the search engine, I mean drop the expense of rack cases and server boards for their render nodes. Google just velcro a bunch of cheap hardware to a shelf. If ILM did this, they'd get 2 nodes for the price of a 19" rack case alone.

Get a clue retard (0)

Anonymous Coward | more than 7 years ago | (#17790390)

Not every application scales out as well as google, retard. Google gets millions of requests per second that take relatively little computing power per request to process. In a two hour movie there are only (24 frames/second)*(60 seconds/minute)*(60 minutes/hour)*(2 hours)=172,800 frames to render out, but at a resolution of 4096x2160 pixels, each frame takes a while to generate.

When you have to push around so much uncompressed image data, subdividing the rendering of each frame into work for several servers doesn't make sense because then you have to handle all the image data twice: once to render each piece, and again to send the pieces to another node and assemble them into a full frame. With the millions of dollars ILM has at their disposal, they probably have people on staff doing analysis more thorough than your google-fanboy handwaving. Why don't you shut your mouth unless you actually have a clue.
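To put a rough number on "so much uncompressed image data", here's a back-of-envelope sketch (Python; the 8-bit RGB assumption is mine for illustration, real film pipelines typically use 10-16 bits per channel or floating-point EXR, so the true figures are several times larger):

  # Hypothetical estimate of uncompressed frame data for a 2-hour feature at 4096x2160.
  width, height, channels, bytes_per_channel = 4096, 2160, 3, 1   # assumed 8-bit RGB
  frames = 24 * 60 * 60 * 2                                       # 24 fps for 2 hours = 172,800 frames
  frame_bytes = width * height * channels * bytes_per_channel     # bytes per frame
  total_bytes = frames * frame_bytes
  print(frame_bytes / 2**20, total_bytes / 2**40)                 # ~25.3 MiB per frame, ~4.2 TiB per pass

So even a single full-resolution pass over the film is terabytes of traffic, before counting intermediate layers and re-renders.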

Re:Get a clue retard (1)

trimbo (127919) | more than 7 years ago | (#17791106)

but at a resolution of 4096x2160 pixels, each frame takes a while to generate.

Where'd you come up with this resolution? I've never worked on a movie where the final rez was higher than 2K. I can only think of one set of elements I rendered at 4K -- a bunch of badly aliasing Mental Ray renders. A halfway decent renderer will let you get away with rendering at a lower rez than needed for the final comp.

Not to say your point isn't a good one though. Google's velcro and duct tape solution to server farms isn't really appropriate for the needs of 3D rendering. Plus, visual effects studios just don't have Google money to throw at custom farm solutions. On their scale, it's much more effective to just pay Dell or HP to take care of it.

Re:Get a clue retard (0)

Anonymous Coward | more than 7 years ago | (#17791588)

mental ray for production work is suicidal. PRMan is more than 10 times faster and the physical inaccuracy is irrelevant.

Glass

Re:Get a clue retard (1)

Mysticalfruit (533341) | more than 7 years ago | (#17791604)

All I can think of is that it's their internal standard for a wide screen HD feed.

Just an idea.

Re:Get a clue retard (1)

ATMD (986401) | more than 7 years ago | (#17791212)

Goddammit, as soon as I post in this topic I see something like this.

Someone mod this troll down, please.

Re:Get a clue retard (0)

Anonymous Coward | more than 7 years ago | (#17791756)

Who suggested that these nodes run different software than the current set-up? The comment was about hardware, about increasing capacity for less cost - the fastest hunk of junk in the Galaxy.

> Why don't you shut your mouth unless you actually have a clue.

Only a cock-smoker fails to practice what they preach.

That's really not that large (2, Interesting)

192939495969798999 (58312) | more than 7 years ago | (#17789552)

There are many corporate data centers larger and more powerful than that; it would be much more impressive if the entire thing could run one giant application. Still, I'm pretty sure that Google's new datacenter wipes its ass with a datacenter the size of this one.

Re:That's really not that large (5, Funny)

rtaylor (70602) | more than 7 years ago | (#17789574)

Still, I'm pretty sure that Google's new datacenter wipes its ass with a datacenter the size of this one.

I'm pretty sure Google's datacentre has evolved beyond the need for an ass.

Re:That's really not that large (1)

Paradise Pete (33184) | more than 7 years ago | (#17790260)

I'm pretty sure Google's datacentre has evolved beyond the need for an ass.

I think they needed some to negotiate that slippery slope in China.

That's really not that large-Viagrasize it. (0)

Anonymous Coward | more than 7 years ago | (#17789732)

"Still, I'm pretty sure that Google's new datacenter wipes its ass with a datacenter the size of this one."

And the NSA datacenter can pick its teeth with the Google datacenter.

Re:That's really not that large (1)

Zen (8377) | more than 7 years ago | (#17790402)

Yeah - that datacenter is nothing. I don't consider ours that big either, but the company I work for (a non-profit in the healthcare insurance industry) would be ranked around #40 on the global Fortune 500 list if we were for-profit.

We have a couple PB of online storage just for our mainframe, not to mention online storage for Lotus Notes, a few thousand servers of varying OSes, speeds, and feeds, a large SAN that contains online backups for all of those servers, and our tens of thousands of high-density tapes stored in silos with pseudo-online storage.

I'm not good with the actual numbers, but I know ours would blow this away without even blinking, and we're out of space and have already broken ground to double our size.

Re:That's really not that large (1)

straybullets (646076) | more than 7 years ago | (#17791692)

I know ours would blow this away without even blinking, and we're out of space and have already broken ground to double our size.
supersized as it is, one datacenter is nothing when it's alone :)

Come back when you have multi-site workload balancing coupled with a full activity recovery plan!

Re:That's really not that large (1)

sgt_doom (655561) | more than 7 years ago | (#17790918)

I'd have to agree. Ever check out Organized Crime's data centers? We're talking super huge, here.....

Re:That's really not that large (4, Funny)

geobeck (924637) | more than 7 years ago | (#17791072)

...I'm pretty sure that Google's new datacenter wipes its ass with a datacenter the size of this one.

A conversation overheard recently over the ether:

Lucas DC: Hi! I've got 11.38PB/s and 500TB!

Google DC: Hah! I've pulled bigger queries out of my back end.

...although I'm not quite sure what that says about Google's "interfacing preferences".

Meh....SDSC has 2 PetaBytes of online storage (3, Informative)

Danathar (267989) | more than 7 years ago | (#17789560)

San Diego Super Computing Center (SDSC) has 2 Petabytes of online Storage with 400TB for researchers. They have 18PB of archival tape storage.

Still....I like datacenters. The hum of equipment. 65 degree temps and lower. I once had my cube re-located to a tape library. Quiet...peaceful place

http://www.enterprisestorageforum.com/hardware/features/print.php/3634881 [enterprise...eforum.com]

Re:Meh....SDSC has 2 PetaBytes of online storage (-1, Troll)

Anonymous Coward | more than 7 years ago | (#17789696)

Anyone doing 65F in a data center is an idiot.

coldframes (0)

Anonymous Coward | more than 7 years ago | (#17790226)

Back when cavebadgers were common pets and mascots, they kept the mainframe rooms pretty chilly. All those glaciers and stuff ya'know. The electric bill was cheaper then, too.

Eh (1)

khallow (566160) | more than 7 years ago | (#17790868)

I guess that depends on what the internal temperature of the machines will be at the hottest parts of the day. If the AC and outside thermal insulation aren't up to snuff, you might need to start the day with some pretty cold temperatures just so your servers stay viable, heat-wise, by the end of the day.

Re:Meh....SDSC has 2 PetaBytes of online storage (1)

wik (10258) | more than 7 years ago | (#17790734)

Hum of equipment? Either it's not much of a datacenter or your hearing is already shot.

Meh..Cemetery has 2 PetaCoffins of offline storage (0)

Anonymous Coward | more than 7 years ago | (#17791188)

"Still....I like datacenters. The hum of equipment. 65 degree temps and lower. I once had my cube re-located to a tape library. Quiet...peaceful place"

So's a cemetery, but you don't see people willingly moving in there.

Re:Meh....SDSC has 2 PetaBytes of online storage (1)

fons (190526) | more than 7 years ago | (#17791932)

I hear you.

All the blinking lights, the spaghetti of cables. I love it.

I've actually never been in a datacenter. But I love to read articles like this one.

Hopefully one day I'll get a tour in one of these myself.

Rhetorical bandwidth? (4, Funny)

Peter Cooper (660482) | more than 7 years ago | (#17789564)

Is that the speed you can talk at?

Another interpretation: (1)

Uncle_Al (115529) | more than 7 years ago | (#17789570)

A rhetorical question does not expect an answer...

...so maybe "rhetorical bandwidth" is a nice way of saying that the data flows only in one direction? ;-)

Re:Another interpretation: (1)

ozamosi (615254) | more than 7 years ago | (#17789690)

Like half duplex or something?

Or maybe it's what the WiFi figures are. Rhetorical Bandwidth. "802.11g is 27mbps full duplex. That is 27mbps in each direction. So we have 27 rhetorical mbps times two, which sums up to *drumroll* 54! :D"

Re:Another interpretation: (1)

Barny (103770) | more than 7 years ago | (#17789712)

Nope, it's just UDP ;)

Hey, maybe we could have a new mod code, +0 Rhetorical. Making it so no-one can post a reply to it ^_^

Re:Another interpretation: (1)

mrscorpio (265337) | more than 7 years ago | (#17790016)

Yeah, I think he means "theoretical" but just wanted to sound smart.

300TB? (1)

slashmojo (818930) | more than 7 years ago | (#17789610)

300TB of storage in a 10.000 square foot datacenter

Can fit 300TB in a single rack these days.. or is that a 10 square foot datacenter?

Re:300TB? (0)

Anonymous Coward | more than 7 years ago | (#17789826)


Seagate currently do a 750GB 3.5" drive
so 300TB / 750GB = 225 drives

225 drives at 1.5" high each = 28ft

so if we laid them out at, say, 8 drives per 1U rack unit wide = cabinet 3.5ft tall

and they needed 10,000 sq ft for something that would fit under the stairs
LOL

Re:300TB? (0)

Anonymous Coward | more than 7 years ago | (#17789854)

whoops math was a bit skew

300TB/750GB = 410 drives

410 * 1.5" = 615" / 12" = 51 ft

51ft / 8 drives per rack = 6ft cabinet

so it would fit next to the stationery cupboard
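If anyone wants to redo the arithmetic with their own assumptions, here's the same back-of-envelope calculation as a quick Python sketch (the 1.5" drive height and 8 drives per row are the parent's figures, not real rack dimensions):

  # Back-of-envelope drive stack for 300TB using 750GB drives (decimal units).
  total_tb, drive_gb = 300, 750
  drive_height_in, drives_per_row = 1.5, 8               # parent's assumptions
  drives = -(-total_tb * 1000 // drive_gb)               # ceiling division -> 400 drives
  stack_height_ft = drives * drive_height_in / 12        # one tall column: ~50 ft
  cabinet_height_ft = stack_height_ft / drives_per_row   # spread over 8 columns: ~6.25 ft
  print(drives, stack_height_ft, cabinet_height_ft)

Of course none of this accounts for RAID overhead, hot spares, or controllers, and as noted above the 300TB figure is only ILM's high-speed tier.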

LOL.. sigh (0)

Anonymous Coward | more than 7 years ago | (#17791408)

Judging by your post, you have no knowledge of running anything other than your own computer. They are using NetApp storage and uptime is critical, so it matters what you choose to put in your datacenter.

Submitter (5, Insightful)

kevin_conaway (585204) | more than 7 years ago | (#17789620)

Well passthecrackpipe, if you and your vast knowledge of large scale datacenters are not impressed with the story, why the hell did you submit it?

Re:Submitter (2, Interesting)

Anonymous Coward | more than 7 years ago | (#17790018)

Nomen est Omen.

Re:Submitter (1)

Vreejack (68778) | more than 7 years ago | (#17790566)

Let's play editor and re-word this summary

Here is Nothing Interesting

This place you never heard of before is so incredibly irrelevant, it's almost surprising. Their moderate hype is somewhat misleading; if I hadn't mentioned it you might have been fooled, had you cared.

Storage vs Archiving (0)

Anonymous Coward | more than 7 years ago | (#17789672)

How do they Archive the movies? TFA mentions that Pirates of the Caribbean was 60tb.
I work with post production for short, independent movies. I have my main RAID where I keep the movies while I work.
After it's done, I archive the TIFF image sequence (which gets transferred to film) on an HD. It's probably hard to archive 60tb. Do they just throw away the digital copy?

Re:Storage vs Archiving (0)

Anonymous Coward | more than 7 years ago | (#17789748)

Read and weep. [oracle.com] Database-in-filesystem weenies rejoice; although I'm usually opposed, it's a better solution than a full-blown RDBMS.

Also worth noting is that ILM will probably have the entire Oracle source tree mirrored and termination provisions in their licensing deal. The vendor lock-in problem should not be an issue for ILM whereas a smaller house would be insane to do this.

10.000 square foot datacenter is SMALL (2, Informative)

Secrity (742221) | more than 7 years ago | (#17789702)

10.000 square feet for a datacenter is not very impressive. The datacenter that I work in did a relatively modest 100,000 square foot EXPANSION which was the result of absorbing an adjoining atrium. I suspect that the power equipment and air handlers may take up 10,000 square feet.

Re:10.000 square foot datacenter is SMALL (0)

Anonymous Coward | more than 7 years ago | (#17790998)

10.000 square feet, while small, is incredibly precise. I'd say my data centre at home is ~8 square feet, but that's an off-the-cuff estimate. To know it's 10 square feet to within 3 decimal places... wow.. just wow.

I wonder how accurate their ruler is.

-1 Facetious

All that just to make Greedo shoot first (0)

Anonymous Coward | more than 7 years ago | (#17789704)

What a fucking waste.

George Lucas has jumped the shark.

Then the shark ate him. Then shit him out. The shit washed up on a beach, dried out in the sun, and was pissed on by a dog.

Can we find the drive Jar-Jar is on (5, Funny)

$RANDOMLUSER (804576) | more than 7 years ago | (#17789724)

and format it?

OP doesn't seem very impressed... (3, Interesting)

been42 (160065) | more than 7 years ago | (#17789778)

So why submit this if you don't like it? Why not at least title it "Lucasfilm thinks it's soooo great."? I'm sure you've seen bigger data centers, and you can type 500 lines of code a minute, and maybe you defeated a ninja in hand-to-hand combat, but for the rest of us "normal" nerds it's still neat to read about the machines that get the work done in a business. Of course it's hyped up, it's a press release disguised as news. Take it for what it is, relax, and try to imagine those 2,000 servers in a secret cave under your house, manipulating the stock market in your favor. That's what I do.

RHETORICAL? (1, Interesting)

Anonymous Coward | more than 7 years ago | (#17789870)

How about theoretical? *yawn*

Penis.... er.... Data Envy? (2, Insightful)

Snydley Whiplash (1052572) | more than 7 years ago | (#17790012)

Why all the negativity toward Lucas? Jar Jar's dead, man, let it go. George said he was sorry already. I think it's a good story. It's absolutely fascinating to me to see how they make movies today, how much data gets pushed around, and how they make sure that the creative people have access to what they need, when they need it. And they do all this to support incredible time schedules, with boatloads of cash riding on every second. I don't know how anyone can say that this isn't an impressive operation. As for Lucas thinking they are so great... well, they pretty much are. I'd say that being the organization that created the special effects for tons of blockbuster movies and being nominated for several major movie industry awards pretty much gives them some bragging rights.

Re:Penis.... er.... Data Envy? (1)

linguizic (806996) | more than 7 years ago | (#17790612)

This really has nothing to do with this discussion, but I feel it must be injected somewhere into it anyway:

http://www.landoverbaptist.org/news0899/jar.html [landoverbaptist.org]

And they drive to work at 2400mph (2, Insightful)

viking80 (697716) | more than 7 years ago | (#17790048)

300TB storage and 11 petabits/s bandwidth.

This means

A) they can push their entire storage through the network in 300*8Tb/(11Pb/s)=200ms.
or
B) the article author does not have a clue.

I think an analogy would be: I drive back and forth to work every day, or 400 times a year. My speed on each trip is 60mph, so in a year my speed is 60x400 or 24000mph.
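For what it's worth, (A) checks out numerically; a quick sanity check in Python, taking the article's aggregate figure at face value:

  # Time to push the entire 300 TB store through 11.38 Pb/s of aggregate bandwidth.
  storage_bits = 300e12 * 8         # 300 TB in bits (decimal)
  aggregate_bps = 11.38e15          # the quoted 11.38 petabits per second
  print(storage_bits / aggregate_bps)   # ~0.21 s, i.e. roughly 200 ms

Which is the point of the analogy: summing link speeds gives a number nothing could ever actually feed.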

Re:And they drive to work at 2400mph (1)

Jesterthe3rd (960830) | more than 7 years ago | (#17790158)

It's more like "There are 10,000 cars in this city, driving at 30 mph, so their speed is 300,000 mph!"...

Re:And they drive to work at 2400mph (1)

jeremy_hogan (587864) | more than 7 years ago | (#17790182)

> I think an anlogy would be: I drive back and forth to work everyday, or 400 times a year. My speed on each trip is 60mph, so in a year my speed is 60x400 or 24000mph.

I think the better analogy is: "You drive to work each day at light speed, despite work being less than an hour away." In other words, their entire data array can do the Kessel Run in less than 12 parsecs. Which is good if you are creating, sharing and batch rendering massive 3D and/or compositing fx files across a network.

data center (2, Interesting)

ralph1 (900228) | more than 7 years ago | (#17790152)

Guess they have not been to a hospital data center yet. They should check out someone like Dow Chemical.

which one makes the story... i think its down. (1)

Jackie_Chan_Fan (730745) | more than 7 years ago | (#17790154)

barf

11.38 petabits? (4, Interesting)

Nighttime (231023) | more than 7 years ago | (#17790156)

As in reference to THX 1138 [imdb.com] ?

Of course, it could just be a coincidence.

Hardly a coincidence (2, Informative)

NthDegree256 (219656) | more than 7 years ago | (#17790416)

This is the Lucasfilm datacenter. That number finds its way into all sorts of Lucas-related material. [wikipedia.org]

Re:11.38 petabits? (1)

bgii_2000 (1029920) | more than 7 years ago | (#17790588)

I agree, it's just a joke.

Re:11.38 petabits? (1)

vladsinger (1049918) | more than 7 years ago | (#17791686)

And of course, the article makes the same speculation.

For all the knocks of this center (2, Insightful)

WindBourne (631190) | more than 7 years ago | (#17790190)

I have to wonder how many systems they have. They accomplish a great deal in what is a fairly small area. I would guess that each computer has major RAM and is simply NFSed back to a central server.

What I have found funny is the number of ppl who are speaking of how big their centers are. Offhand, I tend to suspect that those centers could go on a MAJOR f%^&ing diet and need to have their budgets cut to a fifth. And finally, it is time to fire a bunch of the incompetents who cannot run a tight center.

Re:For all the knocks of this center (1)

Zen (8377) | more than 7 years ago | (#17791650)

I'm not following you here. Yes, I am one of those who responded with some rough stats about the datacenter I work at. I also stated that I didn't even think mine was that great (big was the word I used). Because it's not. But it beats the crap out of the Lucas one, which is the story, so when you can relate to it and build on the topic, then it is an on-topic post and adds to the conversation.

How can you state that other companies' datacenters are too big and extremely wasteful when you have no idea what those companies do, what their legal departments say they need to keep (storage), what types/brands of big iron they use which require massive amounts of space/cooling/power/storage, etc.? How about executive-level policies that dictate that servers should not run multiple apps? Now that one surprises many of my friends at other companies, but when they think about it, they wish that their company had a policy like that so that an app crashing does not take down another app. When a single hour of downtime in your datacenter costs your company over $4,000,000, not to mention loss of brand name status, competitive edge in mergers and acquisitions, and other non-tangible costs, you tend not to sweat the 'small stuff' - like a $100M datacenter. It's definitely not overkill in the least bit with large companies. It pays for itself in 25 hours of downtime. A large company that dramatically cuts its IT budget and fires people is either one that is quickly going out of business, or one that is prepping itself to be taken over by a larger company.

Nothing compared to my Sempron rig... (2, Funny)

gatkinso (15975) | more than 7 years ago | (#17790230)

...running FC6 x64.

Why? Because my rig has never so much as contained - much less rendered - an image of Jar Jar Binks.

Pwned.

300TB in 10,000 sqft is a lot? (2, Interesting)

willith (218835) | more than 7 years ago | (#17790262)

The datacenter at one of my employer's satellite sites has four CLARiiONs, at 2 racks each, a 5-bay DMX-3, and a 4-bay XP1024, for 380TB raw, in 3,200 sqft, along with thirty racks of servers, a P595 mainframe, and several multi-rack computing clusters. There's plenty of cooling and it's really not THAT crowded. Managing to pack 10-12 racks of storage into a 10,000 sqft data center is not anything noteworthy.

Re:300TB in 10,000 sqft is a lot? (0)

Anonymous Coward | more than 7 years ago | (#17792162)

Hate to burst your bubble, but a p595 is not a mainframe; that is IBM's 64-way Unix box. I know this because I'm the guy that comes out to fix them.

Save the cheerleader, use the Force... (1)

Manchot (847225) | more than 7 years ago | (#17790374)

Now I'm disappointed. I had hoped Masi Oka [wikipedia.org] would be working there.

But? (1, Funny)

Anonymous Coward | more than 7 years ago | (#17790614)

Will it run Vista? Sounds like they might need to upgrade!

60TB a movie...300TB total? (1)

vee_anon (1056222) | more than 7 years ago | (#17790618)

Does anybody else find it questionable that he said Pirates of the Caribbean required approx. 50TB of storage, and the next one will require 25% more... but then goes on to say that there is total storage space of 300TB in the data center? That's basically enough to store six movies of equivalent size to Pirates, so where are all the rest of the movies they make stored?

Re:60TB a movie...300TB total? (1)

FlunkedFlank (737955) | more than 7 years ago | (#17791048)

You mean the rest of the movies they have ever made or the rest of the movies they are making at the same time? As soon as a movie is done all of the data is offlined to backup storage. 300TB is for the 2-6 movies they tend to be working on at a time.

Re:60TB a movie...300TB total? (0)

Anonymous Coward | more than 7 years ago | (#17791110)

Re:60TB a movie...300TB total? (1)

VENONA (902751) | more than 7 years ago | (#17791128)

Above, an AC posted a link to http://www.oracle.com/technology/oramag/oracle/06-may/o36lucas.html [oracle.com] which sez the answer to your question is 'tape'. Makes sense, I suppose. Storing old movies which require TB doesn't sound like something to keep on- or near-line. I doubt much of it is reusable, on a day-to-day basis. When you launch a major project (make Greedo shoot first or something) for it, then it's in the books, and you have a business requirement to fill on-line storage, acquire more if you need it, etc.

And it's in a national park (1, Interesting)

Animats (122034) | more than 7 years ago | (#17790634)

There's considerable unhappiness in San Francisco about Lucasfilm's operation. It's in the Presidio, which used to be a military base and is now a national park. It's the only national park which has to make a profit, due to a Bush Administration deal. Letterman Army Hospital was torn down to make room for the Lucasfilm facility. The San Francisco Bay Guardian complains about this constantly, as they try to keep the Presidio from turning into an industrial park. The Lucasfilm move to the Presidio was something of a dot-com boom excess, when people thought SF was the place to be.

Pixar, in Emeryville, Tippett, in Berkeley, and Dreamworks, in Redwood City, are the innovative animation companies in the Bay Area. And of course, there's EA, SCEA, and some other game companies. Lucasfilm doesn't seem to get much attention.

There are data centers in San Francisco proper with far more storage, too. The Internet Archive has several petabytes of storage. There's a large colocation facility at the 6th St. offramp from I-280.

Lucasfilm pay is mediocre too (2, Interesting)

rk (6314) | more than 7 years ago | (#17790686)

They wanted me to move across the continent from a place with average cost of living and a 10 minute commute to work in San Francisco (right in the city, not even an outlying area) for about a 15% increase in pay. The only way I could afford that would be to take on a 2-3 hour commute and even then I'd have to run an even tighter ship, financially speaking, than I do now.

I suppose they were counting on the "cool factor". The job was cool, but not so cool I was willing to stick a stake through the heart of my family. Right after this, I read that Lucas donated $170 million to his alma mater. Hey George, why not donate 10% less and actually pay your people something more, since you're insisting on setting up shop right in the freaking Presidio?

600 Tbyte of disk in total can't be right. I wrote an application a couple years ago that has 6 terabytes of disk allocated to it to cache its work. This was for a single app. Admittedly, we worked with fairly big data files where I was working, but I've got to think Lucasfilm's files are way larger than my 1-2 gig files.

Negotiation is important. (1)

WindBourne (631190) | more than 7 years ago | (#17791350)

We techies really suck at negotiations. Sadly, the more hardcore you are, the less business-savvy you appear to be. I have been stuck around 100K. A friend of mine with less education and experience was offered a job at MS. He was originally offered 85K (this was 8 years ago). He said no and held out for 150K, stock options, and benefits. They came around and re-offered him. I do not know exactly what it was (per contract, he was not allowed to say), but he says that it was more than what he wanted. After seeing the house that he picked up in the Seattle area, I believe him. For all I know, he had MS give him the down payment for it.

Considering the team that he was on, I was more surprised that he was offered so low at first, but that is what business ppl do. We all have to learn when and how to negotiate better. Perhaps CS/CE should take up classes on this.

Graphics processing power (1)

nicolastheadept (930317) | more than 7 years ago | (#17790720)

The majority of Lucasfilm's processing power is used for graphics generation for ILM (unlike, say, Google's). I think the hidden message of this article is that they could get a huge screen and projector, play, for example, Crysis on full settings including 64x Antialiasing and Anisotropic Filtering, at 4320p with 22.2 surround sound, and say to Sony, "That's TrueHD!"
http://en.wikipedia.org/wiki/UHDV [wikipedia.org]
http://en.wikipedia.org/wiki/22.2 [wikipedia.org]

Actually, it's impressive. Most impressive... (4, Interesting)

Boss Sauce (655550) | more than 7 years ago | (#17790960)

As somebody who (ab)uses that particular rig daily, I think the article misses the point about what's so awesome about the system.

It's a good-sized datacenter, but the impressive part is what it's able to support in processing ability, and that the fat bandwidth runs at capacity almost all of the time, driven by the demands of processing jobs. Proprietary software doles out jobs 24/7 to thousands of procs all over campus -- including artists' desktop machines -- for heavy-duty computation: rendering and simulation and whatever it takes.

I can't imagine a facility where so many people are creating and pumping so much data around.

Their datacenter has a droid! (3, Interesting)

AaronW (33736) | more than 7 years ago | (#17790966)

I toured their new facility in San Francisco. They have over 300 10Gbps ports and all PCs are connected via gigabit. Their datacenter was 2/3 full of dual-Opteron servers running SuSE Linux (though they were considering switching). Their server room was spotless. No cables were visible anywhere, but I did see a Roomba moving about the floor. The fellow who ran it said that since they're ILM, they have to have droids.

The facility was absolutely beautiful. When going between two buildings on an overhead walkway I saw the Golden Gate bridge with a nice orange sunset behind it. I wish I had my camera with me.

They said that they have many dedicated OC-48 pipes to various studios and can handle just about any format, since every studio uses their own format. They convert it to their own internal format, which I believe they open sourced.

When they moved from Skywalker Ranch, it was completely seamless. They had an OC-192 (10gbps) link running between the old and new facility as more and more equipment was migrated to the new facility but people continued to work at the old one.

-Aaron

Re:Their datacenter has a droid! (1)

kv9 (697238) | more than 7 years ago | (#17791478)

XS4ALL also has a droid [iwans.net]

Relative perspective (1)

nemesisprime (920350) | more than 7 years ago | (#17791060)

The point of the story was to display ILM data crunching power as impressive for a POST PRODUCTION house. Not "the greatest data center in the world". Compared to any other post production house, ILM is pretty darn impressive.

Hey, it's Lucasfilm, so it's automatically cool! (1)

jhylkema (545853) | more than 7 years ago | (#17791160)

Never mind the fact that there are much larger and more complex setups out there, as others have pointed out. Never mind the fact that Star Wars was a ripoff of a Japanese pulp science fiction novel.

Theoretical carrying capacity (1)

MadMagician (103678) | more than 7 years ago | (#17791166)

Theoretical bandwidth is a chimera. All the cars on Los Angeles freeways at a given time, carrying boxes of tapes -- now that's some theoretical bandwidth. What matters is achieved write and read capacity -- I believe the record [gridtoday.com] is 14.5 Gb/s sustained.

Bullshit or Calculation Error : 569 NICs/server ! (2, Informative)

wtarreau (324106) | more than 7 years ago | (#17791260)

TFA talks about 2000 servers equipped with 10 Gbps network cards. 11.38 Pbps is 11,380 Tbps, or 11,380,000 Gbps. This means that each server would need 569 network interfaces! This is total bullshit. If they had said they had 10*2000*2 = 40 Tbps, it would have been based on more real (though irrelevant) data.

I hate it when ignorant journalists post meaningless data for public consumption.

Willy
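
The arithmetic, for anyone who wants to check it (a quick sketch using the article's own figures):

  # NICs per server implied by the 11.38 Pb/s claim, and the saner alternative total above.
  claimed_bps, servers, nic_bps = 11.38e15, 2000, 10e9
  print(claimed_bps / (servers * nic_bps))   # 569 ten-gig ports per server
  print(servers * nic_bps * 2 / 1e12)        # 40 Tbps if you sum both directions of every port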

Wisdom goes only to the wise (0)

Anonymous Coward | more than 7 years ago | (#17791300)

"as somewhat hyped up -- "look at us, we have a large datacenter" kind of thing,..."

And just what kind of statement *did* you expect from a Lucas operation? Have you learned nothing from his movies, grasshopper?

major factual errors or there's a time machine too (0)

Anonymous Coward | more than 7 years ago | (#17791850)

Am I the only one catching stuff like this?

"4000 frames, each frame took 23 hours to render..."

MEANING ((4000*23)/24)/365 = 10.5 years????

Wow, they must have a time machine there too. Seeing as the film in question, Poseidon, was released last year.
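To be fair, render farms run frames in parallel, so the 10.5 years is CPU time rather than wall-clock time. A rough sketch (the node count is a made-up example, not ILM's actual farm size):

  # Serial CPU time vs. parallel wall-clock time for 4000 frames at 23 hours each.
  frames, hours_per_frame = 4000, 23
  print(frames * hours_per_frame / 24 / 365)      # ~10.5 years on one machine
  nodes = 1000                                    # hypothetical farm size
  print(frames * hours_per_frame / nodes / 24)    # ~3.8 days across 1000 nodes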

Darth Lucas (0)

Anonymous Coward | more than 7 years ago | (#17792002)

I can just see it now...

George Lucas: Any attack made by the Media against this data-center would be a useless gesture, no matter what technical data they have obtained. This station is now the ULTIMATE POWER in the universe. I suggest we use it.

Old-School Star Wars Fan: Don't be too proud of this technological terror you've constructed. The ability to destroy a prequel trilogy is insignificant next to the power of losing respect from millions of Star Wars fans.