
Intel Considering Portable Data Centers

samzenpus posted more than 6 years ago | from the put-it-anywhere dept.

miller60 writes "Intel has become the latest major tech company to express interest in using portable data centers to transform IT infrastructure. Intel says an approach using a "data center in a box" could be 30 to 50 percent cheaper than the current cost of building a data center. "The difference is so great that with this solution, brick-and-mortar data centers may become a thing of the past," an Intel exec writes. Sun and Rackable have introduced portable data centers, while Google has a patent for one and Microsoft has explored the concept. But for all the enthusiasm for data centers in shipping containers, there are few real-world deployments, which raises the question: are portable data centers just fun to speculate about, or can they be a practical solution for the current data center expansion challenges?"

120 comments

How long before scammers discover these? (5, Funny)

djl4570 (801529) | more than 6 years ago | (#21442495)

I'm sure RBN would love "Datacenter in a Box." As soon as the authorities begin sniffing around, the datacenter can be trucked somewhere else. How long before someone steals one and sells it on eBay?

It has to be more expensive (2, Insightful)

llZENll (545605) | more than 6 years ago | (#21442521)

Rule #1 in technology: anything portable is more expensive than if it were not portable. If it's so cheap to use a crate, why not just put the stuff in the crate in a warehouse instead, bypassing the crate and all of the work and design involved with shoving and fitting the stuff into the crate?

AC for Computer Room (4, Informative)

raftpeople (844215) | more than 6 years ago | (#21442679)

Rule #1 in technology, anything portable is more expensive than if it were not portable


Have you ever signed the bill for having AC installed for your computer room in an existing building? While that is just one expense of many, it makes me think rule #1 is not accurate.

If its so cheap to use a crate, why not just put the stuff in the crate in a warehouse instead


This is a good idea that I've seen used in certain situations. There are downsides, of course, but for a company on a budget or in flux w.r.t. facilities, this can be a good solution.

Re:AC for Computer Room (1)

PopeRatzo (965947) | more than 6 years ago | (#21445207)

Raftpeople, I've googled "flux w.r.t." but I'm getting too much noise. Can you (or someone) explain it to me?

Re:AC for Computer Room (1)

caluml (551744) | more than 6 years ago | (#21445235)

"In flux" means changing (in a state of flux). "w.r.t." means "with regard to."
HTH, HAND.

Re:AC for Computer Room (1)

PopeRatzo (965947) | more than 6 years ago | (#21446031)

Thanks, caluml. I was thinking it was a term of art with which I was not familiar. I'm also not used to seeing acronyms spelled with lower-case letters and periods.

Re:AC for Computer Room (1)

Jartan (219704) | more than 6 years ago | (#21446267)

Have you ever signed the bill for having AC installed for your computer room in an existing building? While that is just 1 expense of many, it makes me think rule #1 is not accurate.


That's not really an argument for portability. That's an argument for picking a better place to put your data center. If "outside in a box" is a cheaper answer, that still doesn't really have anything to do with it being portable. It's a given that if you put a data center in the middle of your climate-controlled building, it's going to be a bitch piping all that heat outside.

No matter how "blackbox" these things are though you still have to think about security and location. Seems like the real benefits are more about pre-built mass manufacturing advantages.

By your logic, (0, Troll)

geekoid (135745) | more than 6 years ago | (#21442771)

Making the iPod so big it's not portable would be cheaper to manufacture. Obviously that's not true.

Rule one is actually:
"When it comes to stating rules, Ignore IIZENII"

Re:It has to be more expensive (3, Informative)

mikael (484) | more than 6 years ago | (#21443383)

Because the location is remote and there is no time to build a normal facility. The main purpose of these data centers is to handle expansion in limited areas, or while a new data center is being upgraded.

There are other applications for keeping everything on a truck:

Valerie Walters Muscle Truck [valeriewaters.com] - a fitness centre that comes to you.

Office trailers [google.com]

Mobile kitchen trailers [aluminumtrailer.com]

Hospital trailers [hankstruckpictures.com]

Mobile retail and dwelling units [lot-ek.com] (Or shops and homes in containers).

Re:It has to be more expensive (4, Interesting)

Kadin2048 (468275) | more than 6 years ago | (#21443463)

Rule #1 in technology, anything portable is more expensive than if it were not portable. If its so cheap to use a crate, why not just put the stuff in the crate in a warehouse instead, bypassing the crate and all of the work and design involved with shoving and fitting the stuff in the crate?
Not really applicable here. The equipment is the same either way. It's not like buying a laptop versus a desktop, where one is carefully (and expensively) optimized and the other one isn't. The same pizzaboxes/blades are going in the racks either way, whether it's in a traditional datacenter or in a cargo container.

The advantage is more on the installation and infrastructure end. Think of it more as "mobile homes" versus "traditional houses." With a regular house, you have to get the plumber, electrician, HVAC guy, carpenters, etc. to your site. For a mobile home or trailer, you keep all those people in one place, and they build houses over and over and over, on an assembly line. And as a result, "manufactured homes" are a lot cheaper than regular ones.

I think that's the model that you want to apply to datacenters: get rid of all the on-site installation and configuration, all the raised flooring and cabling; just have a team of people in a factory somewhere, installing and wiring all the servers into the containers, over and over. Then you just haul the container to the customer's site and plug it in. (In fact, since it's in a shipping container already, there's no reason to do this in a place where labor is expensive; you might as well assemble them in some third-world country somewhere; it would almost assuredly be worth the small cost for sea freight -- most of a container's transportation costs are in the last few hundred miles anyway.)

The problem is mainly a chicken-and-egg one; in order to make "datacenters in a box" cheaper than traditional ones, you need to get an economy of scale going. You need to have an assembly line churning them out. If you don't have that, you're just taking the expense of a traditional data center and then adding a bunch of containerization and transportation costs to it.

It might take a very long time to catch on, because there's such an investment in traditional datacenters right now, but if I worked doing datacenter server installations, it's probably something I'd be a little concerned about. Unlike with 'manufactured homes' and regular houses, there isn't much social stigma over having your web site served from a trailer.

Re:It has to be more expensive (1)

Antique Geekmeister (740220) | more than 6 years ago | (#21444835)

No, the equipment is not identical. With the limited space and resources of a portable data center, and the lack of maneuvering room for operations like relocating racks or having a bank of projector screens to monitor large arrays of systems, you have to be very careful in what you install and why. And cooling has to be managed very carefully, along with power consumption: you can't simply put in another fan to route the cool air where you want, and you don't have floor space to disassemble equipment on site the way you would in a more standard environment.

This approach is like buying blade computers instead of 1U "pizza boxes": you sacrifice some flexibility for an overall management interface, less space, and ease of installation or replacement. If you've got a movie company that needs a render farm while you're doing Star Wars 37, and you only need the data center for the six months of final compilation, this could work much better for you than using some off-site processing center.

Re:It has to be more expensive (1)

Tim C (15259) | more than 6 years ago | (#21445101)

And as a result, "manufactured homes" are a lot cheaper than regular ones.
There's also the fact that they're generally a damn sight smaller, of course.

Re:It has to be more expensive (1)

VENONA (902751) | more than 6 years ago | (#21445331)

"Not really applicable here. The equipment is the same either way. It's not like buying a laptop versus a desktop, where one is carefully (and expensively) optimized and the other one isn't. The same pizza boxes/blades are going in the racks either way, whether it's in a traditional data center or in a cargo container."

People, and corporations, do different things with computers. I use a very carefully selected desktop. OTOH, I regard a laptop as an inherently unergonomic, flimsy, slow POS. Its only advantage is portability, which isn't that valuable to me, *for what I need to do*. As long as the drive is encrypted, I regard a laptop as disposable. My cow-orkers regard them in pretty much the same way--burn through one a year, and who freaking cares?

So far as pizza boxes and blades--yet again, there's a bit more to computing than you seem to have seen, as you seem to think that all workloads scale out. FLASH! Some workloads only economically scale up. Hence large SMP machines. If you need 64 CPUs, and an interconnect that runs on the order of 200 GB/s, to run an app that won't benefit from parallelization, what do you do? Punt? Address the need with your "carefully (and expensively) optimized" laptop?

How about hot spots, where you end up with too many people on one server, because you can't persist logins? That's a problem with some distributed enterprise apps, even where the application server layer may be ten systems. But SMP systems do dynamic load balancing, and make that problem go away.

I would truly love to see you design a data center, and "get rid of all the on-site installation and configuration, all the raised flooring and cabling." So would everyone else who's ever worked in a data center. It would be pretty sweet to never have to install and config another system. I'm not sure how you do away with cabling, though. Wouldn't that tend to leave you with a collection of standalone boxes, as in Before There Were Networks?

I'm totally at a loss about how you plan to cool the damned thing. That's probably the single trickiest bit of Sun's effort. The rest is just that cabling inconvenience you write off. Magically transport power and connectivity to the systems, and all you need on the other side of the wall is a fat network pipe, and a much fatter power pipe. Cooling, though, will be a bitch. Duct work is going to be relatively fixed, and critical. You've gotten rid of raised flooring, and I imagine dropped ceilings as well. Very cunning! Hope you have The One True Data Center Plan, and never have to change it. Because you can't readily do it.

Luckily, data centers that have never had a floor plan change are common as dirt. Of the very few that have, it's always been a cheap and easy thing to do. It's not as if systems (and their power, connectivity, and cooling requirements) ever change, after all.

It's been fun, but I have to run. A tube seems to have failed in my UNIVAC (UNIVersal Automatic Computer). http://en.wikipedia.org/wiki/UNIVAC_I [wikipedia.org] . While I'm gone, please feel free to STFU about data centers.

Trailer Homes vs Regular Homes (1)

SerpentMage (13390) | more than 6 years ago | (#21445669)

Now that is an interesting analogy. And I would argue it actually makes the argument for the GP. If mobile homes were that much better, why are they not more common? The answer is value... And I think that is the point of the GP: mobile data centers don't offer as much value as building your own data center.

Here are the issues I could see with a portable data center:

1) Heating and cooling are more extreme than in a building.
2) Space, since containers are fixed sizes, and since this requires extreme management of temperature you are losing more space.
3) Shock absorption will require special attention since a bump by anybody or anything could have ramifications.

Not to say that a portable data center would be a bad idea for certain sectors (oil drilling, etc.), though I don't see portable data centers being cheaper.

As a comparison: more people buy laptops, but I tend to buy more desktops because I need cheaper computing horsepower. I run simulations, and a laptop that can handle what I need tends to cost 2 to 4 times the price of a desktop.

Re:It has to be more expensive (1)

SmackedFly (957005) | more than 6 years ago | (#21444583)

Not true: the reason a laptop is more expensive than a normal PC is the heating issues, the battery, and the space limitations. If you remove these factors, the mass-production advantage of laptops makes them equal in price.

Here is a rule, not just for IT, but for anything to do with production:

If you can produce a complete standalone product at the factory and just ship it, with minimal need for end-user setup, it's always cheaper in the long run.

Re:It has to be more expensive (1)

Naturalis Philosopho (1160697) | more than 6 years ago | (#21445833)

If you remove the "heating issues, the battery and the space limitations" from laptops, don't you then have a desktop/"normal PC"?

Re GP: Isn't the benefit of portable data centers, like many cheap laptops, that you can deploy them where you want them, quickly, and then, due to their inexpensive (and cheap/low-value) nature, junk them in a year or two? It's not like this is a long-term deployment solution where they are "just as good as" a normal data center, right? Think satellite laptop instead of desktop-replacement laptop. Portable data centers are something to use in addition to traditional data centers, not instead of them.

Why it probably won't work (5, Insightful)

Z80xxc! (1111479) | more than 6 years ago | (#21442529)

It seems to me that there would be too many hassles for this to ever work. The equipment in a data center is expensive, and that equipment doesn't usually like being jostled around in a truck, let alone bouncing around at sea for a while. Although in theory it's a great idea, I just don't see it ever really working out. Also, what about security? Data centers need good security. If it's so easily portable, then it wouldn't be that hard for someone to just take off with one, whereas you can't exactly stick a real data center on your getaway car. TFA suggests a warehouse to store the things in to address security and such, but doesn't that sort of defeat the purpose of having them be mobile?

Re:Why it probably won't work (5, Interesting)

Feyr (449684) | more than 6 years ago | (#21442591)

Good points, and there's also the maintenance and upgrades to consider, unless you're Google and you just replace the rack when more than a certain % is defective. For the majority of places, clustering is the exception, not the norm, and you just can't leave 70% of your rack full of defective or outdated crap.

Consider minor faults too. Do you replace the whole rack because a network cable went bad? I don't think so, and I don't want to be the one crawling around that shipping container stringing Cat5.

Re:Why it probably won't work (3, Interesting)

dokebi (624663) | more than 6 years ago | (#21443393)

Google isn't doing that just because they have lots of money. No, it's actually cheaper to run things that way. And now with VMs running on clusters, the health of individual machines really doesn't matter anymore.

So, when do you think a Redundant Array of Inexpensive Datacenters will become a reality? Psst. It'll be sooner than you think.

Re:Why it probably won't work (1)

Feyr (449684) | more than 6 years ago | (#21443935)

There's a matter of scale involved. When you have 20,000 racks, having 10% defective at any one time probably won't impact you. If you run 5-20 racks, I'm pretty sure it will, and your space is probably expensive as hell as well.

Re:Why it probably won't work (1)

dokebi (624663) | more than 6 years ago | (#21444151)

Really? Let's say you have *one* rack full of drives, say holding 244 drives at 168 TB [sun.com]. Now, as 10% of the drives fail, would users notice the 10% drop in capacity? Really? Do you run your disk array at 90% capacity without expanding?

The fact is, even small clusters run at 50%-80% capacity, and if a whole datacenter is running at 80% capacity, they'll have to expand pretty soon. With these datacenters-in-a-box: snap, and it's done.
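
To make the arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The drive count and raw capacity are the figures quoted above; the utilization and failure fractions are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope: does losing 10% of drives in a sealed rack matter?
# 244 drives / 168 TB are from the comment above; utilization is an assumption.
TOTAL_DRIVES = 244
RAW_TB = 168.0
UTILIZATION = 0.80          # assume the array runs at 80% of raw capacity
FAILED_FRACTION = 0.10      # 10% of drives dead, never replaced in place

usable_tb = RAW_TB * (1 - FAILED_FRACTION)
used_tb = RAW_TB * UTILIZATION

print(f"Capacity left after failures: {usable_tb:.0f} TB")
print(f"Data actually stored:         {used_tb:.0f} TB")
print("Still fits" if used_tb <= usable_tb else "Out of space -- time for another container")
```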

Re:Why it probably won't work (1)

Mikkeles (698461) | more than 6 years ago | (#21445267)

And don't forget having to run outside every hour, 24 hours a day, to put more coins into the parking meter!

Re:Why it probably won't work (4, Informative)

drix (4602) | more than 6 years ago | (#21442649)

Dig a little deeper--you really think that large companies such as IBM, Sun, Google et al. would spend tens of millions of dollars developing these products and not give thought to the basic issues you have raised? I know, I know, this is Slashdot and this sort of armchair quarterbacking is de rigueur, but still... every one of these issues has been addressed on Jonathan Schwartz's blog, to say nothing of the myriad of technical and marketing literature which I'm sure covers it in exhaustive detail. Here's a Blackbox getting hit with a 6.7 quake [youtube.com]; here's [sun.com] where he talks about shipping it, and security as well (it comes equipped with tamper, motion and GPS sensors, to say nothing of simply hiring a night watchman to call the cops if somebody comes prowling); and the answer to your last question is no, no it does not.

Re:Why it probably won't work (2)

Z80xxc! (1111479) | more than 6 years ago | (#21442697)

You still fail to address the problem of working inside one of those. A shipping container can only be so big. As Feyr said, what do you do about upgrading or replacing stuff? There's limited room to move around. You need to be able to access all the equipment, not to mention getting wiring and all set up. Also, would you want to be the captain of a ship carrying several hundred of those? If that ship sinks, then you're in deep trouble. Pun intended. Having hundreds of mobile datacenters on the sea floor isn't going to do you much good, now is it?

Re:Why it probably won't work (1, Funny)

Anonymous Coward | more than 6 years ago | (#21442893)

You still fail to address the problem of working inside one of those. A shipping container can only be so big. As Feyr said, what do you do about upgrading or replacing stuff? There's limited room to move around. You need to be able to access all the equipment, not to mention getting wiring and all set up.

You could pretend you are in the ISS.

Also, would you want to be the captain of a ship carrying several hundred of those? If that ship sinks, then you're in deep trouble. Pun intended. Having hundreds of mobile datacenters on the sea floor isn't going to do you much good, now is it?

You could pretend you are in "Voyage to the Bottom of the Sea". Just watch out for sea monsters and such,

Re:Why it probably won't work (1)

cyphercell (843398) | more than 6 years ago | (#21443259)

You mean, of all the shit-ass cabling jobs he has, he's also gotta spend time on some f*ing boat in the middle of the ocean with some arsehole that always wants to "pretend we're on the International Space Station"? Jesus Christ, man, you are NOT selling this idea.

Re:Why it probably won't work (4, Interesting)

TheLink (130905) | more than 6 years ago | (#21443169)

"You need to be able to access all the equipment"

Why? If you're something like Google, I bet you could just RMA the containers with faulty stuff back and get new/refurbished ones already configured to your specs - all you need to do is net-boot them for an automated install. AFAIK Google doesn't fix servers once they fail, or even take them out of the rack right away; they just have someone go around once in a while to take 'em out (like "garbage collecting" instead of "malloc/free").

So for the big guys it'll be a bit like buying a prebuilt PC, only it's the size of a container.

Re:Why it probably won't work (4, Interesting)

Kadin2048 (468275) | more than 6 years ago | (#21443591)

I think the short answer is that you don't. I've seen the photos of Sun's boxes, and while the racks do pull out to let you get to the equipment if you need to, I think you basically just view each server in the rack as a small part of a bigger assembly (the box itself), and if something goes faulty in a single server, you move its workload to another machine and just turn it off and leave it there, essentially entombed in the rack. Maybe there'll be some way of easily swapping out machines, or maybe it'll just be easier to leave them there until the entire container's worth of machines is obsolete, and then just dispose of the whole thing and get a new box hauled in. (Or send it back somewhere for refurbishment, where they can strip it down completely, pull out all the machines, repair and replace, and then bring in a new one.)

We think of rack space as being precious because of the way traditional datacenters are built and designed; I'm not sure that would still be true if you had a warehouse or parking lot full of crates (especially if they're stacked 3 or 4 high) instead. If you never unseal the box, rack space isn't a concern. Heck, if you have a football field of stacked containers, you might not even want to mess around with getting a dead one out of a stack if it died completely. Just leave it there until you have some major maintenance scheduled and it's convenient to remove it.

This is getting into business models rather than the technology itself, but I could imagine a company selling or leasing boxes with a certain number of actual processing nodes and a number of hot spares, and a contract to replace the container if more than x number of nodes failed during the box's service life (5 years or so). Companies could buy them, plug them in, and basically forget about them, like the old stories about IBM mainframes. If enough units in the box failed so that it was close to running out of hot spares, then it could phone home for a replacement. As long as enough hot spares were provided so that you didn't need to do this often, it might be fairly economical.
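
As a rough illustration of that "hot spares plus phone home" model, here is a minimal Python sketch. The node names, spare counts, and threshold are hypothetical; a real system would tie this to actual health checks and a vendor contract:

```python
# Minimal sketch of the "entomb dead nodes, promote spares, phone home when spares
# run low" idea described above. All names and thresholds are made up for illustration.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    healthy: bool = True
    is_spare: bool = False

def check_container(nodes, min_spares=4):
    """Promote a spare for each failed worker; ask for a swap if spares run low."""
    spares = [n for n in nodes if n.is_spare and n.healthy]
    failed = [n for n in nodes if not n.is_spare and not n.healthy]
    for dead in failed:
        if not spares:
            break
        replacement = spares.pop()
        replacement.is_spare = False          # spare takes over; dead node stays entombed
        print(f"{dead.name} entombed, workload moved to {replacement.name}")
    if len(spares) < min_spares:
        print("Spare pool low -- phoning home for a replacement container")

nodes = [Node(f"node{i:03d}") for i in range(20)] + [Node(f"spare{i}", is_spare=True) for i in range(5)]
nodes[3].healthy = False
check_container(nodes)
```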

Re:Why it probably won't work (1)

RightSaidFred99 (874576) | more than 6 years ago | (#21443705)

You don't put them at sea. You put them somewhere where you have network and power, probably attached to an existing data center. You plug it in and run, e.g., a 100-gig network link to the container. This isn't rocket surgery. If one of the blades goes bad, you walk in, pull it out, and slide in a new blade. Not exactly difficult.

Re:Why it probably won't work (1)

quanticle (843097) | more than 6 years ago | (#21443881)

You need to be able to access all the equipment, not to mention getting wiring and all set up.

Why? I'd think that the wiring and everything would be pre-built into the container itself with standardized fasteners, so that replacing machines inside the crate would be as simple as pulling out the old box/blade and dropping in a new one. In fact, because of the standardized layout, I'd think that replacing equipment would be considerably easier. Think Lego bricks vs. jigsaw puzzles. Which are easier to put together?

Re:Why it probably won't work (1)

goodtim (458647) | more than 6 years ago | (#21444231)

Also, would you want to be the captain of a ship carrying several hundred of those? If that ship sinks, then you're in deep trouble. Pun intended. Having hundreds of mobile datacenters on the sea floor isn't going to do you much good, now is it?

At $100/barrel, a supertanker with 2 million barrels of oil is probably worth a lot more than a bunch of computers. Especially if you wreck the thing and it costs you another $100m to clean up.

Re:Why it probably won't work (1)

jo42 (227475) | more than 6 years ago | (#21442849)

every one of these issues has been addressed on Jonathan Schwartz's blog
Yes, but we still maintain that it is a solution looking for a problem.

Why it probably will work (2, Insightful)

Nefarious Wheel (628136) | more than 6 years ago | (#21443001)

Because, with virtual server architectures on the rise, a new data centre can mean one or two large and very generic servers and simplified connections. This means the configurations can be highly standardised. The real difficulty would be ensuring your network of backed-up virtual server files was configured in a portable fashion and properly documented, as in a config management database. You wouldn't need to worry about the builds so much, just the right config of virtual drives. Get it right and you'd be back up and running in an hour. Get it wrong and you'd never recover.

I guess the rules are pretty much the same as for standard data centres, but since these will be looked at as a DR solution as often as not, being able to break a standard one out of the warehouse and put it online fast -- for any number of different configs -- would put it on any IT risk manager's shopping list.
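
As a rough sketch of keeping the virtual server definitions portable and documented, the fragment below treats each VM config as plain data that can be replayed onto whatever generic hosts the container provides. All names, sizes, and the restore logic are made-up placeholders, not any particular CMDB or hypervisor API:

```python
# Sketch: portable, documented VM definitions (a stand-in for CMDB entries) that can
# be replayed onto the generic hosts inside a containerized data centre for DR.
import json

vm_definitions = [
    {"name": "web-01",  "vcpus": 2, "ram_gb": 4,  "disk_image": "web-frontend.img"},
    {"name": "db-01",   "vcpus": 8, "ram_gb": 32, "disk_image": "orders-db.img"},
    {"name": "mail-01", "vcpus": 1, "ram_gb": 2,  "disk_image": "mail-relay.img"},
]

def restore_site(definitions, hosts):
    """Round-robin the documented VM configs onto the generic hosts in the container."""
    plan = []
    for i, vm in enumerate(definitions):
        plan.append({"host": hosts[i % len(hosts)], **vm})
    return plan

if __name__ == "__main__":
    plan = restore_site(vm_definitions, hosts=["blackbox-host-a", "blackbox-host-b"])
    print(json.dumps(plan, indent=2))   # hand this plan to your hypervisor tooling / DR runbook
```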

Re:Why it probably will work (1)

Zeinfeld (263942) | more than 6 years ago | (#21443117)

Because, with virtual server architectures being on the rise, a new data centre can mean one or two large and very generic servers and simplified connections.

That's it, plus you get the whole thing built and assembled in the factory at factory labor rates rather than on site at consultancy rates.

If you have a scheme that requires a large deployment of like equipment, it could well be attractive. The key would be to build enough redundancy into the basic box that the hardware never needs to be touched after the box is closed.

Very attractive if you are doing something like installing an Internet backbone in Afghanistan. Less interesting if you are working in downtown Mountain View.

The idea of data center plus power plus cooling in a package is definitely attractive for many applications. Rig the thing in Mountain View, send it off to Niagara Falls or some other place with real cheap power to operate it.

Re:Why it probably will work (1)

Kadin2048 (468275) | more than 6 years ago | (#21443739)

The idea of data center plus power plus cooling in a package is definitely attractive for many applications. Rig the thing in Mountain View, send it off to Niagara Falls or some other place with real cheap power to operate it.
Rig it in Mountain View? Try Hong Kong. Or maybe Wuhan (where Foxconn has its megafactories). The cost to ship a container from California to New York is a substantial fraction of what it costs to ship it from China to NY; most of the cost is in the "last 500 miles" -- the leg of the trip by truck from the nearest big intermodal facility. Plus, most of the servers, cabling, and other stuff going into the container is made in the Far East anyway, so it would make sense to assemble the thing there, rather than boxing up all the servers, putting them in a container, shipping it to the U.S., only to unbox them and put them into a different container. (In other words: you're already paying for China-US shipping of the servers anyway.)

Re:Why it probably will work (2, Informative)

kent_eh (543303) | more than 6 years ago | (#21443871)

About 14 years ago, I was at Ericsson in Richardson, TX for some training. They had a cell switch installed in a set of semi trailers specifically for disaster recovery (though they did use it as a test bed when it wasn't required for DR).
If a customer lost a switch location due to fire, earthquake, or whatever, they could deploy this unit anywhere in North America within drive time plus 3-5 days for configuration.
The customer would be scrambling to get leased lines and microwave re-routed to the temporary location, but they could probably have some service restored to their customers within a week or two, especially if they had a few COWs (cells on wheels) to use.
A lot better than the 10-12 weeks it took to install a new switch from scratch.

Re:Why it probably won't work (0)

Anonymous Coward | more than 6 years ago | (#21443057)

Actually, this makes sense for CGI film production. Instead of leasing/buying space where you install a data center, you lease space where you can install one (or more) of these units. Your infrastructure costs are power and cooling, not raised floors and moving in racks. You also need the space for your production staff, but it does not have to be co-located with the data center, you just need good high speed data connections. You only get the computing resources when you need them, and at the end of the production the lease ends and you don't have to pay for stuff you are not using. When the next project starts and you repeat the cycle, you get the next generation price/performance. Also, if the production needs more resources you can get more units and expand without disturbing your current computing environment. The leasing model is the way that a lot of equipment is used in Hollywood. In fact Panavision cameras http://en.wikipedia.org/wiki/Panavision [wikipedia.org] are never sold, they are leased.

Re:Why it probably won't work (1)

Jeff DeMaagd (2015) | more than 6 years ago | (#21444477)

A shipping container isn't something that just any local four-fingered grunt can drive off with. I just don't see a whole lot of people being able to drive off with a shipping container without anyone noticing. There aren't many people with a crane, so narrowing down who took it is probably not that tough, and I don't think cranes are cheap to rent. Some could manage it, but I don't think that group is much larger than the group of people who can sneak into a fixed data center and steal stuff.

The need? (1)

AlphaDrake (1104357) | more than 6 years ago | (#21442541)

Wouldn't want to be the trucker driving that box around... that's for sure. And if Google didn't go through with it, why would anyone else? :P But why does Intel really need multiple datacentres anyways? I mean, they have to host their website and drivers and such, but what else really...

The Trucker... (1)

Junta (36770) | more than 6 years ago | (#21442763)

It's nothing new for that much money's worth of equipment to be in a single truck. Quite often, I know, trucks full of a datacenter's worth of racks drive to the destination.

That said, I wonder if the 'portable' or 'modular' aspect of it is really useful/cost-saving. "Because it's a small, contained environment, cooling costs are far less than for traditional data centers", but why is it the case that an on-site constructed datacenter *must* be larger? I look at the pictures and it seems more like the 8'-wide restriction imposed by the trailer is used as a sort of excuse for making things much harder to service. When constructing a datacenter, you could probably build an 8'x40' room and be equally inconvenient, but probably achieve the same cooling situation that benefits these installs. It also has no place to service systems, or for people to actually work. All these things should be possible outside the strict cooling zone suitable for the racks, but a lot of datacenter design likes to provide for it all in one place. Another thing is that in a traditional data center, OSHA dictates hot aisles be at least 3 feet wide, and with floor tiles being about 2 feet wide, that generally leads to 4 feet of space between the rears of racks when 2 feet would have sufficed.

So is the lesson to make the aisles between the rears of racks impossibly narrow to get around OSHA, drop the ceilings, and have narrow service aisles? And do the portable datacenters help by being so inconvenient as to force these issues?

Re:The Trucker... (2, Informative)

Thumper_SVX (239525) | more than 6 years ago | (#21446653)

It's a little bit of a conceptual shift from datacenters of old... and it's not for everyone. Having said that, this is exactly the sort of thing we've been talking about for a while where I work ever since Sun talked about their product.

Data center processing capabilities have increased dramatically over the years, but generally the problem I have seen in most datacenters these days is simply that they are not designed for the heat and power load per square foot that blades and high-density systems require. Most modern datacenters were designed and/or built in the 80s and 90s when they had very specific requirements as regards power and heat load per square foot... and that was reasonable at the time. The higher-density systems such as blades are a great idea, and provide much more processing capability per square foot than traditional racked servers... however, it has become tough to keep up with the heat output and power requirements of these on a per-rack basis. I know the datacenter where I work, which was built in 1995, has been retrofitted no less than four times in the last few years to increase cooling capacity, and we're rapidly reaching the limits of what we can do with the physically constrained space we have. At the moment, if we add a new power feed or AC unit, we will actually need to remove racks to put it in. Given our racks are currently running at an average of 85% physical capacity already, you can see where we have a problem.

These sort of portable datacenters though are only for those who design their systems correctly. Most applications these days can leverage "fat" back end systems (databases and so forth) with "thin" front-end application servers. My proposal that's going through the mill right now was to invest in one of these containers to migrate all of the front-end systems into that datacenter, leaving only the data and storage (SAN) sitting in the existing datacenter. That way, we can eliminate approximately 60% of our servers, which themselves make up about 40% of the heat and power load in our datacenter today. That way we can continue to expand the storage (which is desperately needed, we just have no more floor space for SAN) and leverage either powerful blade servers or powerful standard rack servers as consolidated database clusters and possibly virtual machine space. Where we need application-server space, we can put a server out in the "trailer" and connect it across a fat link into the existing datacenter (bonded gigabit), thereby providing incredible flexibility.

The cost may seem prohibitive, but what are our other options? Right now, our only other option is to actually build a new dedicated datacenter building. The cost of that is incredibly prohibitive, and we've been playing catchup for a long time as far as trying to meet our user demand in a rapidly growing user base while being seriously constrained on space. The cost of one of these trailers is actually an incredible bargain compared to the cost of proper design, architecture, engineering and actually constructing a new building to house our ever growing application requirements.

So what about server failures? Personally, I feel that the best way to proceed is to run up the trailer to about 85% utilized, leaving lots of idle servers in-place. Network boots and stuff like that ought to provide rapid provisioning within the trailered data center, so in the event of a failure you just use network boot to bring up another node and call for service. Hey, we already have all of our servers under maintenance with the manufacturer anyway, and most of the time this is exactly what we do. Plus, what if we grow again? Add another trailer. Simple, cost-effective and efficient.
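
A quick sketch of how that "run at ~85%, netboot spares, add a trailer when you outgrow it" policy might be expressed. The node counts and thresholds are assumptions for illustration, and the functions stand in for whatever real provisioning tooling would actually be used:

```python
# Rough sketch of the "run the trailer at ~85% and netboot spares" approach described
# above. The numbers and the provisioning calls are placeholders, not any real API.
ACTIVE_TARGET = 0.85          # fraction of nodes kept in service
TRAILER_NODES = 200           # assumed node count per container

def plan_trailers(demand_nodes):
    """Return how many trailers to deploy and how many idle nodes remain as spares."""
    capacity_per_trailer = int(TRAILER_NODES * ACTIVE_TARGET)
    trailers = -(-demand_nodes // capacity_per_trailer)   # ceiling division
    spares = trailers * TRAILER_NODES - demand_nodes
    return trailers, spares

def handle_failure(idle_nodes):
    """On a node failure, netboot an idle node into service and call for vendor repair."""
    if idle_nodes == 0:
        return idle_nodes, "no spares left -- time to order another trailer"
    return idle_nodes - 1, "spare netbooted into service; failed node flagged for vendor"

print(plan_trailers(demand_nodes=450))   # e.g. (3, 150): three trailers, 150 idle spares
print(handle_failure(idle_nodes=12))
```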

The security aspect? Leverage your already existing datacenter. Use that as your data source, leave as little actual customer data on the trailered servers as you can. If you start getting constrained on space, start moving your database servers out to the trailers as well, but connect them back to your SAN in the old DC. By doing so, your customer data is safe even if someone were to pick up the trailer and drive off with it. However, given that you know said trailer is going to remain a semi-permanent feature of your site, why not do due diligence to secure it? Cameras, fences, alarms. It's a simple matter to secure it. Hell, if your building will support it, get a crane and put the trailered DC on the roof. If your infrastructure is well designed, there's no reason for anyone to go in there unless it's your vendor coming in to replace a bad server or do maintenance.

There are a lot of naysayers here on Slashdot who claim this won't work. Well, all it takes is the ability to think outside the box and realize that if your architecture is well designed in the first place there's no reason you can't leverage solutions like this instead of constantly playing catchup with the building codes and requirements of ever increasing processing ability. If your architecture won't allow you to segregate tasks in this manner, you need some engineers to redesign your architecture ASAP or you're going to find yourself in a very painful place before too long. We're lucky; we've had some of the best at our location for a while, and they've really been able to design the architecture properly. As a result, if and when we get approval to get the first of those trailers in we can prove that this is a working solution and show that this is going to be the way forward. My current proposal is to trial this at a redundant site instead of our main production DC. We need continuity of business anyway and we were going to build out a new DC at a remote location... what better and faster way to do it than to use the relatively small DC at that location for SAN, run FC out to a secured trailer and voila... we have JIT DC capacity.

Easy to steal, easy to blow away, easy to crash... (0)

Anonymous Coward | more than 6 years ago | (#21442555)

What are we waiting for?

Not the old 19 inch 'portable' TV kind of portable (0)

Anonymous Coward | more than 6 years ago | (#21442563)

I hope their idea of portable isn't simply putting handle(s) on a 160 lb chassis...

LAN Party ! (1)

Joebert (946227) | more than 6 years ago | (#21442567)

If there were a Woodstock today, the centerpiece would be a portable data center with high-power wireless antennas mounted on the roof.

People would be paying $100 for juice, but not because they're thirsty, rather because their laptop battery is almost dead.

Re:LAN Party ! (1)

weighn (578357) | more than 6 years ago | (#21443019)

If there were to be a Woodstock today...
... there would be 100k kids on e and crystal meth bouncing around to mostly shit music punctuated by about 1 or 2 acts that are worthy of seeing.

What you are describing is cool, but more like a LUG meet on steroids :)

Re:LAN Party ! (2, Funny)

Anonymous Coward | more than 6 years ago | (#21443347)

If there were to be a Woodstock today...
... there would be 100k kids on e and crystal meth bouncing around to mostly shit music punctuated by about 1 or 2 acts that are worthy of seeing.
There has already been a Woodstock like that. It was in 1969.

Connectivity? (1)

wideBlueSkies (618979) | more than 6 years ago | (#21442603)

I don't get it. How portable could a data center be if it's dependent on a hard-wired infrastructure? Adequate power, network/WAN (fiber?) connectivity, etc.

This stuff takes time to set up...

How cost-effective would it be to have a 'portable' DC when you'd have to pay for at least one additional set of network and power connections?

Might actually be more efficient to just have two separate DCs. Like a primary/COB kind of setup...

Re:Connectivity? (2, Informative)

Nefarious Wheel (628136) | more than 6 years ago | (#21443061)

How cost effective would it be to have a 'portable' DC when you'd have to pay for at least 1 additional set of network and power connections?

(1) Microwave link or mobile repeater. Costly and needs preplanning, but no external cables. (2) "Portable" can mean "nice quiet diesel or LPG-powered generator in the back". Theoretically you could have it up and running while it's being delivered, without waiting for it to reach its destination. I think the target word is "hurry", not "cheap". Fast setup, as in fast market capture or disaster recovery, is the point. And I know there are better ways to do DR, but not all of your customers think ahead like that, do they? Only the ones who probably don't need you in the first place.

Remember, if all of your customers had perfectly-run data centres, you'd probably be out of a job.

My data in a box (4, Funny)

Anonymous Coward | more than 6 years ago | (#21442629)

> Intel says an approach using a "data center in a box" could be 30 to 50 percent cheaper.

Steps:

1. Get a box.

2. Put your junk in the box.

3. Make her access the box.

and watch the love, baby...

Re:My data in a box (2, Funny)

maz2331 (1104901) | more than 6 years ago | (#21442783)

Come on... how many slashdotters have actually accessed her box recently?

Re:My data in a box (1)

BrookHarty (9119) | more than 6 years ago | (#21445115)

You directed that at the married slashdotters right?

Re:My data in a box (0)

Anonymous Coward | more than 6 years ago | (#21446539)

Married as in Massachusetts? Please keep your infectious molecules to yourselves!

You're all missing the point (5, Funny)

Synthaxx (1138473) | more than 6 years ago | (#21442765)

This isn't about the datacenter; this is a stroke of genius.

You see, by closing the door, the actual data contained within is either there or not there.

What they've done is run a network cable to that same box to check this, thereby solving one of the most fundamental questions of the universe!

Like I said, absolute genius!

Play games with taxes and states, too (5, Insightful)

timothy (36799) | more than 6 years ago | (#21442809)

If you have a business which can be housed in a portable structure of any kind, it makes it more likely you can move it across a border (state or national) when that makes sense, or just seem inclined to do so if the local powermongers decide they want more (of your) pie.

Coal mines? Hard to do it.

Hospitals? Difficult.

Big factories? Tough.

Data centers? If built into containers or container-friendly, you can start packing now ;)

(On the other hand, it also means that data-centric companies can angle for that famous and annoying "corporate welfare" by flirting with various states and municipalities seeking better goodies like tax abatements, "free" infrastructure additions, etc.)

timothy

Re:Play games with taxes and states, too (0)

Anonymous Coward | more than 6 years ago | (#21444867)

piratebay-in-a-box, anyone ?

Re:Play games with taxes and states, too (2, Insightful)

garett_spencley (193892) | more than 6 years ago | (#21445757)

While this is probably one of many possibilities introduced, I think what most people are missing isn't that this is a 'mobile' data center... but that it's 'modular'.

In the case of Sun's Black Box project it's literally a data center in a standard shipping container. You can do almost anything with that.

Here's one scenario.

Imagine a web hosting company start-up. Their goal is to grow as large as a big server provider like The Planet, but they don't have several million to invest, and even if they did, they wouldn't have the customers yet.

What a traditional start-up might do is rent servers from a provider and resell them. But with these portable data centers you could just rent a secured warehouse somewhere (much cheaper than building a multi-million dollar data center) and then start with ONE portable datacenter. When you get enough customers you simply stick another portable data center right next to it. Or on top of it. Or whatever.

In essence you have a modular data center that will easily scale and can be put pretty much anywhere.

I think that's what most people are missing. The summary said "mobile" whereas I think the real point is "modular". You can pick it up and drop it pretty much anywhere and it can tack on to your existing infrastructure easily etc. Or, as one video clip demonstrating Sun's Black Box said "If you fill this thing with our high end servers you've got one of the world's top 200 super computers".

Point being it just opens up so many possibilities that weren't there before.

Re:Play games with taxes and states, too (1)

aynoknman (1071612) | more than 6 years ago | (#21447119)

Data centers? If built into containers or container-friendly, you can start packing now
However, you have to plug your box into two grids, the electrical and the data grid. Game playing with states most often has to do with labour costs, which aren't on the table here.

Portable is a bad word here (1)

suso (153703) | more than 6 years ago | (#21442835)

I really think portable is the wrong approach. The advantages they are seeing come from having a compact modular unit that can be plugged in. So what they need to do is develop a building with slots that these modules can plug into. Then I think it would be more attractive, and the whole wariness about it being in a storage container can go away.

I couldn't imagine any hosting provider touting the fact that they have portable data centers built out of shipping containers.

Re:Portable is a bad word here (1)

couchslug (175151) | more than 6 years ago | (#21442993)

"So what they need to do is develop a building with slots that these modules can plugin to."

Simple warehouse space with cabling and backup power would do, and a hardened data center would be especially easy to build. Military containerization by vendors like Sea Box means that there are many different styles of container to choose from.

Upgrades could be easy too. Just truck in new modules and install. Container handling equipment by companies like Tandemloc (good online catalog w. drawings) allows precise positioning of containers. Just use the caster kit, tow bar and a warehouse tug.

I saw a mock up of the SUN model... (1)

FuzzyDaddy (584528) | more than 6 years ago | (#21442941)

at a trade show recently. It was an intermodal container (like an 18 wheeler hauls around). There was a HUGE power connector, an input and output pipe for cooling water, and a network interface. I don't know about the economics one way or another, but it was cool to see. From the outside, you can't help but think that someday we'll have the same thing with a normal power cord, and no cooling water, in something the size of a shoebox. Perhaps because the network connector was no bigger than the one on my computer.

Also, I got to bring home a little foam rubber one for my daughter.

Military (3, Interesting)

SirKron (112214) | more than 6 years ago | (#21442977)

The military already uses these. The Marines use them to bring their network onto a ship during transit and then into a tent when deployed.

Re:Military (1)

dave562 (969951) | more than 6 years ago | (#21443115)

This is what I was thinking. Perhaps the real audience for this technology isn't even in the United States, or the developed world for that matter. Maybe they are planning on selling these things to people like the UN, the United States military and other similar organizations that need to quickly establish a presence in parts of the world that are not conducive to the kind of long-term investments required to build a traditional data center in a more stable part of the world. I'm sure that with the modernization of the military and all of the real-time intelligence gathering they are doing, their data storage and processing needs are growing at an exponential rate. The network links in those environments are probably rather slow (packet radio, encrypted, channel hopping, etc.). They would need something like a mobile data center in the field to send all of the information back to for analysis.

Re:Military (1)

S1mmo+61 (1125433) | more than 6 years ago | (#21443983)

The Australian military uses them as well. Heat dissipation is the biggest challenge. The ADF usually leaves them mounted on the back of a truck.

Re:Military (1)

A New Normalcy (1190543) | more than 6 years ago | (#21444139)

Doom Box! But seriously, getting a permit to install one of these in my town (Thousand Oaks, Calif) would be more difficult than poking butter up a wildcat's ass with a hot awl. ...Lorenzo

Re:Military (1)

Kjella (173770) | more than 6 years ago | (#21444535)

No doubt. The military also spends a lot of money on expensive things because they *have to*. So it doesn't necessarily follow that they're a good example to follow.

You have to consider all the costs (0)

Anonymous Coward | more than 6 years ago | (#21442981)

Putting everything in a box certainly saves some of the costs associated with installing a data center. That one thing may not make it more economical overall though.

One similar thing is house construction. For many years, people have tried to manufacture houses in factories. Making houses in a factory saves money on a whole bunch of things. You will note, however, that almost all houses are built on site. The reason is that it is much cheaper that way.

I haven't examined all the costs associated with building a data center but I'll bet that it is cheaper to build them on-site than to build them in factories.

Perhaps an indicator of what is coming? (1)

dave562 (969951) | more than 6 years ago | (#21443011)

As population density increases and the raw materials required to generate power become more difficult to obtain in the face of increased demand for them, the likelihood of brownouts and rolling blackouts becomes more and more of a reality every year. Do you think that the ability to move a data center from one location to another might have anything to do with that? Data centers suck up a lot of power. Just because a data center might be in a place where it has a favorable spot on the rolling-brownout schedule right now doesn't mean that the power company can keep providing those kinds of assurances indefinitely.

Portable Data Centers - Their Place (0)

Anonymous Coward | more than 6 years ago | (#21443039)

The places I see portable data centers being useful are in areas where the local infrastructure is damaged (earthquakes, hurricanes, etc.). Another possibility is for media and sporting events, where you don't need a building, but you do need the hardware to do it right. I see a satellite transceiver as an important component of a truly mobile datacenter.

Other than that, I just see something to talk about.

Speaking of talking about mobile datacenters: RAID 5 mobile datacenters? Huh?

Like Prefab Houses (3, Interesting)

Doc Ruby (173196) | more than 6 years ago | (#21443085)

Prefab houses are an increasingly popular method for home construction. They're not really "portable", except when they're delivered from the factory to the "installation site". They're not interesting because of their containers, but because of the economics and other efficiencies in delivering and installing them.

Instead of the house builders building each house as a completely custom job, on an unfamiliar site, in all kinds of weather, with only the tools and materials they bring to some residential area, they've got full control at the factory. They don't have to ship all the excess materials that they used to have to ship back out as garbage. They can keep a pipeline filled with houses they're building, and deliver them very shortly after they're ordered, even quicker than they actually build them. And since so much is standardized, they can mass-produce them and otherwise get scale economies that reduce costs. Since they aren't inventing a new, complex device from a new, arbitrary blueprint with every home, they are skilled in more than their tools and materials; they are skilled in producing that exact house, with solved problems yielding higher-quality homes, quicker.

All that is also true of datacenters. Weather isn't as much of a problem to avoid, because the datacenter is usually installed in an existing building anyway. But all the rest of the efficiencies are in effect. So datacenters can be cheaper, better, and deployed quicker. This trend makes a lot of sense.

Re:Like Prefab Houses - yabut (0)

Anonymous Coward | more than 6 years ago | (#21443831)

It really is a case of ymmv. There is a company, http://www.atcostructures.com/ [atcostructures.com] , who build pre-fabricated structures and ship them all over the world. In some cases, that is much cheaper than building on site.

On the other hand, pre-fab houses have been around for nearly a hundred years. In fact, Sears used to sell house kits. Even so, you will note that the vast majority of houses are site built. That's because it's usually cheaper that way.

I suspect the same thing is true of data centers. In a few cases, it is cheaper to use a data-center-in-a-box. In most cases, it is cheaper to build on site.

Re:Like Prefab Houses - yabut (1)

Doc Ruby (173196) | more than 6 years ago | (#21444999)

FWIW, ATCO builds modular structures. Like I said, prefab is enjoying a renaissance, and is increasingly giving the benefits I described: http://www.fabprefab.com/ [fabprefab.com] .

Do you work for the pre-fab industry? (0)

Anonymous Coward | more than 6 years ago | (#21445399)

This isn't a discussion of the housing industry. Houses come into it because they are an example of how the economics of datacenters-in-a-box might work.

The main point is that the economics of the situation will drive the decision to build on-site or to ship in a pre-built or semi-pre-built product. The whole life-cycle comes into the calculation.

My own experience is installing navigational aids in remote locations. No matter what we did, there was always a lot of work to be done in the field. For instance, we could install the electronics in a pre-built structure and ship it to site via highway, airlift or sea. That would mean that we didn't have to send some people into the field. We would save a bunch in airfare and hotel expenses that way. We still had to send people into the field for final adjustment and commissioning. Those were the expensive people. Their labor didn't change. In fact, the expensive labor went up because they ended up doing stuff that the cheaper labor would have done if they were there. I often ended up doing all kinds of things that I wasn't qualified to do. (ie. electrical, carpentry, concrete) because it made no sense to ship someone in to do a couple of hours of work. The only thing we saved on was travel and accommodation for the cheap workers. If the people doing the site prep made even minor mistakes, the money we saved evaporated in a puff.

I guess that what I'm saying is that the economics of the situation vary a lot. A temporary datacenter in the middle of the desert is not the same as one in the middle of New York. In the first case, the in-the-box solution is obvious and in the second, it isn't.

Security? (2, Insightful)

Billly Gates (198444) | more than 6 years ago | (#21443371)

To have tens of millions of dollars just sitting in a nice convenient portable container that can be hauled by anyone with a truck seems all too tempting.

Now, if some of the data in there included credit card numbers and maybe Social Security numbers of employees as well, then you could make money through identity theft as well.

I suppose only a minimum-wage security guard is guarding it too, so anyone with a truck, a fake uniform, and a nametag with a bogus company name can just drive in, convince the guard, and drive off with it.

Seems risky.

OMFG! Mods - Where are you? (0)

Anonymous Coward | more than 6 years ago | (#21443723)

Why is the parent post "Insightful"?

Like someone is going to leave a "tens of millions of dollars" asset sitting in an easy to haul-away place with "minimum wage" security watching it.

Gah!

Re:Security? (1)

RightSaidFred99 (874576) | more than 6 years ago | (#21443747)

Why? You think someone can just easily pull up a diesel tractor and haul it away? They have security on these things; nobody's going to steal it. As for the data, it's probably attractive to make these containers diskless. Disks fail fairly often and need more cooling. If you network boot and use, e.g., iSCSI, you can run the trailer hotter and not have as many hardware failures.
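
The comment doesn't give any numbers, but a minimal back-of-the-envelope sketch (every figure below is an assumption picked for illustration, not anything from Sun, Intel, or the article) shows why pulling local disks out of the container is attractive:

<ecode>
# Rough, illustrative estimate of yearly disk failures inside one container
# if every node carries local disks, versus zero local-disk swaps when nodes
# network-boot from remote storage (e.g. over iSCSI).

servers_per_container = 500    # assumed container capacity
disks_per_server = 2           # assumed local disks per node
annual_failure_rate = 0.03     # assumed 3% AFR per disk

expected_failures = servers_per_container * disks_per_server * annual_failure_rate
print(f"Expected disk swaps per container per year: {expected_failures:.0f}")

# With these made-up numbers that is roughly 30 hands-on visits a year just for
# dead disks; going diskless moves that churn to a storage array that can live
# somewhere easier to service, and removes one reason to keep the trailer cool.
</ecode>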

Perfect for greedy companies... (2, Insightful)

SeaFox (739806) | more than 6 years ago | (#21443617)

Large corporations will love this. Every time the property tax abatement runs out on their current data center location, they can just lay off all the employees and truck the data center to another city.

Coming soon: Portable Oil Refineries.

Portable oil refineries (1)

eknagy (1056622) | more than 6 years ago | (#21446625)

In the original "Alien" book, the spaceship was a portable oil refinery, refining the oil on the way...
And we saw what happened.

Seems Insecure (1)

VonSkippy (892467) | more than 6 years ago | (#21443735)

I'll pass.

I like my data centers to be bunker-esque, not some flimsy trailer parked in the back lot that any schmo with a pair of cable cutters can take offline, or reduce to component-level bits and pieces with a stick or two of dynamite.

Raises the question (1)

JayTech (935793) | more than 6 years ago | (#21443791)

Wow! Is it just me or did the brainpower meter at Slashdot rise a few more degrees? Finally we're not begging the question.

Arguing with the Sun (1)

Felix Da Rat (93827) | more than 6 years ago | (#21443923)

So far, from what little I've seen, Sun has this one pretty well covered. I'll admit that I haven't checked out the competition, but Sun has been promoting the BlackBox for a while now (check out this video of it under magnitude-6.7 earthquake conditions: http://sunfeedroom.sun.com/?skin=twoclip&fr_story=FEEDROOM198997&rf=ev&autoplay=true [sun.com] Project Blackbox Test).

With everything else they are doing, I think they are cornering this market. Intel getting into it is just solidifying that it's a desirable market.

This video is also neat because of the Sun SPOTs used to monitor the conditions throughout the testing.

P.S. Can anyone tell me how to get URLs to show just the link text instead of the full URL? The notes on posting don't really help.

Portable data centers in field use (2, Funny)

Anonymous Coward | more than 6 years ago | (#21444207)

2007: government worker loses unencrypted laptop
2017: government worker loses unencrypted portable data center

One possible use for these portable datacenters. (1)

naota_03 (1093393) | more than 6 years ago | (#21444695)

If you think about it, data centers as they are built and run today are secure, redundant and very pricey. Most data centers rent out space to many companies. What happens if something happens to the data center (fire, flood, earthquake, hurricane or other disaster)? The companies who are paying good money for the data center's service would be out of service and losing money. If you sold portable data centers to companies, they would buy what they need and have it shipped to their locations. They keep it secure, their IT staff keep it running, and it reduces the risk of many companies losing money due to downtime. I don't think these portable data centers are aimed at being truly portable; I think they are meant to be portable if something should go awry. For example, Company A buys a portable data center and has it installed at their home office outside of New York. Say something happens to the building where the portable data center is being run. The company could quickly move their servers and racks to another location.

Blue boxes (1)

joaommp (685612) | more than 6 years ago | (#21444823)

This kind of reminds me of the "Blue Boxes" in The Pretender series... they had a whole lot of portable storage devices spread across the USA that were all linked together and synchronized every (IIRC) Friday with "The Center"'s mainframe.

New Job Ads (0)

Anonymous Coward | more than 6 years ago | (#21444871)

You better start getting on those treadmills.

Desperately Wanted: Systems Administrator. Usual qualifications. Must be slim enough to fit between racks in a portable datacentre. Paying better than average salary.

Army (1, Insightful)

Anonymous Coward | more than 6 years ago | (#21444931)

I work IT in the Army. Portable is a bad idea because I wouldn't know what to do with my free time if I weren't constantly tearing down and setting up. Starting over every 3 months keeps me on my toes.

Not a feasible idea for most companies. (0)

Anonymous Coward | more than 6 years ago | (#21445233)

This idea would not work for most companies... here are just a few off-the-top-of-my-head questions and comments on why that is:

1) Where is the portable fiber to go with the portable data centre?

There is no such thing. Fiber takes not only time to install, but also a significant amount of money. And let's not even get into redundant fiber feeds. Fiber is cheaper per month at fixed locations (i.e., 5-year terms) because the carriers can split the cost of building that fiber infrastructure over 60 months and thus pass those costs on to the customers in small, absorbable chunks. If one's definition of a portable data centre is that it can move every 2 or 3 months... the cost of each new fiber connection after every move will likely render the project cost-ineffective for the cost of the fibers alone, never mind the reconfiguration and resources required for all the extra work. (A rough amortization sketch appears after this comment.)

2) The same could be said for power & cooling.

It costs horrendous amounts of money to buy and install data-centre-grade UPSes and ACs. Unless, of course, you are saying that inside the crate there are already a UPS and ACs; but even so, there are costs to hook these systems up to whatever new building, and further, these are the heaviest and largest items of any data centre... transporting them around is also not cost-effective compared to a permanent-location data centre.

3) What about risk of damage?

Say the boat on which you are transporting the portable data centre sinks... hope you have backups. But never mind the backups; the cost of replacement is HUGE. Are they selling portable data centre insurance yet?

4) Transportation of a Data Center = Low uptime & bad ROI

Each time a DC-in-a-box is ported around, that's time the DC cannot be considered to be in an "UP" and running state. This not only breaks the "must be up 99.9% of the time" requirement (or whatever the percentage is), but an expensive capital asset such as a data centre that cannot be used part of the time will also have low efficiency in terms of ROI. (See the downtime-budget sketch after this comment.)

I can see portable data centres being justified in military war scenarios, or crisis scenarios such as Katrina... but for everyday operations, the costs and logistics of such operations would be absurd for most enterprises.

Adeptus
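
For anyone who wants rough numbers behind points 1 and 4 above, here is a minimal sketch; every figure in it (fiber build-out cost, contract term, move frequency, hours lost per move) is an assumption chosen for illustration, not a quote from any carrier, SLA, or vendor:

<ecode>
# Point 1: a one-time fiber build-out amortized over a long-term contract
# versus paying for a fresh build-out after every relocation.
fiber_build_cost = 120_000      # assumed one-time cost to trench/splice a new lateral
contract_months = 60            # carrier spreads the cost over a 5-year term
monthly_amortized = fiber_build_cost / contract_months
moves_per_year = 4              # a "portable" DC relocating every 3 months
yearly_cost_if_moving = fiber_build_cost * moves_per_year
print(f"Fixed site: ~${monthly_amortized:,.0f}/month folded into the circuit fee")
print(f"Moving quarterly: ~${yearly_cost_if_moving:,.0f}/year in fresh build-outs")

# Point 4: how little downtime a 99.9% availability target actually allows.
hours_per_year = 365 * 24
allowed_downtime_hours = hours_per_year * (1 - 0.999)
transport_hours_per_move = 48   # assumed teardown + trucking + re-commissioning
print(f"99.9% uptime allows ~{allowed_downtime_hours:.1f} hours of downtime per year")
print(f"A single move (~{transport_hours_per_move} h assumed) overshoots that budget several times over")
</ecode>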

If they steal CDs... (1)

Donny Smith (567043) | more than 6 years ago | (#21445531)

They could also drive off with your mobile data center.
What about physical security of such outfits?

Intel IT (0)

Anonymous Coward | more than 6 years ago | (#21446121)

Posting anonymously for obvious reasons.

Intel IT is the last organization in the world that should be considering this endeavor. The organization is shielded by the company's success, but in many respects it is far from world class.

The "mom and pop" mentality outweighs the very few "world class" practices; the upper IT management is the worst among Fortune 100 companies, and their hiring practices reflect that (Intel insists on hiring recent graduates instead of mixing them with seasoned IT veterans).

This produces a culture of "let's do things the Intel way" - but while the Intel way in manufacturing and process engineering is world class, their IT leaves a lot to be desired.

I expect this to go nowhere, and some manager to claim success, get himself (yes, it's a boys' club too) a promotion and leave for another job before the real negative consequences surface.

More deployments than you think! (1)

antarctican (301636) | more than 6 years ago | (#21446935)

I strongly dispute the statement "there are few real-world deployments." From what I hear, Sun's Blackbox is flying off the shelf (figuratively speaking, of course; I'd love to see the "shelf" that could hold a few of those...).

When Blackbox was first introduced, I tried to convince a friend of mine in a position of managerial influence at Sun to lend one to my employer; we were having data centre space issues and were willing to be a poster child for the new product.

His reply was a simple no-can-do: they were already selling so quickly that Sun could barely keep up with demand.

They are popular; if they weren't, why would companies like Intel and Microsoft still be looking at joining the game months after Sun first deployed Blackbox?