
HP Claims Their Moonshot System is a 'New Style of IT' (Video)

Roblimo posted about 2 months ago | from the my-server-uses-less-power-than-yours dept.


Didn't we already have something kind of like this called a Blade server? But this is better! An HP Web page devoted to Moonshot says, 'Compared to traditional servers, up to: 89% less energy; 80% less space; 77% less cost; and 97% less complex.' If this is all true, the world of servers is now undergoing a radical change. || A quote from another Moonshot page: "The HP Moonshot 1500 Chassis has 45 hot-pluggable servers installed and fits into 4.3U. The density comes in part from the low-energy, efficient processors. The innovative chassis design supports 45 servers, 2 network switches, and supporting components." These are software-defined servers. HP claims they are the first ones ever, a claim that may depend on how you define "software-defined." And what software defines them? In this case, at Texas Linux Fest, it seems to be Ubuntu Linux. (Alternate Video Link)


68 comments


So dedicated cloud servers (1)

Anonymous Coward | about 2 months ago | (#47461083)

I can see this being useful for approximately ten percent of the market.

Re:So dedicated cloud servers (1)

mjwalshe (1680392) | about 2 months ago | (#47462385)

And not much good for HPC or Big Data/Hadoop, where the trend is away from virtualization.

Re:So dedicated cloud servers (0)

Anonymous Coward | about 2 months ago | (#47464967)

the trend is away from virtualization

So in fact, perfect for those workloads because one chassis has 45 non-virtualized servers in it?

But wait! (2, Funny)

djupedal (584558) | about 2 months ago | (#47461163)

There's more! Buy now and receive a second HP MS System for free! Just pay shipping and handling.

Not available in any store.

But Moonshot is years old (1)

mimino (1440145) | about 2 months ago | (#47461217)

But Moonshot servers are a couple of years old, with a few success stories from HP itself (www.hp.com is fully Moonshot-powered) and others. Yes, they are efficient, small, and easy to run, but they are also quite a bit less powerful than a "traditional" server. Now all they do is release new "cartridges" for the platform. Are we soon to hear about generation 2.0? Maybe at HP Discover?

Re:But Moonshot is years old (5, Funny)

Anonymous Coward | about 2 months ago | (#47461309)

Only HP would call them "server cartridges". I think their CEO cartridge is running low, they should go get a new one.

Re:But Moonshot is years old (1)

afidel (530433) | about 2 months ago | (#47463199)

Nah, HP's all about long-lived chassis; the C7000 blade enclosure is 8 years old and they're still adding new blades and I/O modules for it. The previous P-Class chassis was supported for 6 years.

Re:But Moonshot is years old (1)

amorsen (7485) | about 2 months ago | (#47464479)

www.hp.com is fully moonshot-powered

That would explain why the HP site is so ridiculously slow. Except that it has been slow for years, but maybe they were always running it on prototypes.

Re:But Moonshot is years old (1)

stiggle (649614) | about 2 months ago | (#47464859)

It's because HP customers are used to printer ink cartridges being overpriced disposable units. Their thinking is to move this model into computer components and release those as overpriced disposable units too.

4.3 U (3, Insightful)

digsbo (1292334) | about 2 months ago | (#47461297)

4.3U? They couldn't have made a reasonable tradeoff to go to an even unit size?

4.3 U (1)

Anonymous Coward | about 2 months ago | (#47461367)

4.3U? They couldn't have made a reasonable tradeoff to go to an even unit size?

Maybe they have a 0.7U add-on planned for it :-)

Re:4.3 U (1)

Burdell (228580) | about 2 months ago | (#47462103)

It is probably either 7.5 inches (4.29U) or 190 millimeters (4.27U) tall. However, I don't know why you'd make something designed to be rack-mounted that is not an integral multiple of U, unless you have something that needs cables attached to the front (in which case you still designed it poorly).
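For what it's worth, the parent's conversions check out. A quick sketch (the 7.5 in and 190 mm figures are the parent's guesses; the only assumption added here is the standard rack unit, 1U = 1.75 in = 44.45 mm):

```python
# Sanity-checking the parent's unit conversions.
# Assumes the standard rack unit: 1U = 1.75 in = 44.45 mm.
RACK_UNIT_IN = 1.75
RACK_UNIT_MM = 44.45

print(round(7.5 / RACK_UNIT_IN, 2))   # height in U if the chassis is 7.5 inches
print(round(190 / RACK_UNIT_MM, 2))  # height in U if the chassis is 190 mm
```

Both come out to the 4.29U and 4.27U the parent quotes.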

Re:4.3 U (2)

mrspoonsi (2955715) | about 2 months ago | (#47461449)

Exactly. In a cold/hot aisle rack you are left with a gap which would need plugging with something.

A 42U rack would have 7U wasted space that is almost another 2 servers...

Re:4.3 U (4, Funny)

mlts (1038732) | about 2 months ago | (#47461517)

A swimming pool noodle cut to fit works perfectly with gaps in the hot/cold aisles. Don't ask how I know...

Re:4.3 U (0)

Anonymous Coward | about 2 months ago | (#47461571)

Ummm... Fire Code?

Re:4.3 U (0)

Anonymous Coward | about 2 months ago | (#47468713)

Oh, I know that one! "Find a workaround, or get fired"

Re:4.3 U (1)

Shimbo (100005) | about 2 months ago | (#47461771)

Exactly. In a cold/hot aisle rack you are left with a gap which would need plugging with something.

A 42U rack would have 7U wasted space that is almost another 2 servers...

They will sell you a .66U spacer, or a 13U box that fits three of them. It may be a dumb idea but not that dumb.

Re:4.3 U (1)

digsbo (1292334) | about 2 months ago | (#47461941)

39U of these plus 2-3U of network equipment seems reasonably efficient. I didn't see the bit about the 13U consolidated chassis, but that is pretty sensible.

Re:4.3 U (1)

mrspoonsi (2955715) | about 2 months ago | (#47462357)

It would have been smarter not to require an additional chassis (who wants to lug an extra 13U chassis into a datacenter?). It should be done in the rail system: offset the 2nd and 3rd chassis, so you only have to fit different rails.

Re:4.3 U (0)

Anonymous Coward | about 2 months ago | (#47462329)

By choosing 4.3U a typical 43U rack can host 10 of these systems.
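A quick sketch of the packing arithmetic behind this thread (the 43U rack is this comment's assumption; the common size is 42U):

```python
# How many 4.3U Moonshot chassis fit in a rack, and what's left over.
CHASSIS_U = 4.3

def rack_fit(rack_u, chassis_u=CHASSIS_U):
    """Return (number of chassis that fit, leftover rack units)."""
    count = int(rack_u // chassis_u)
    return count, round(rack_u - count * chassis_u, 1)

print(rack_fit(42))  # the common 42U rack
print(rack_fit(43))  # the 43U rack this comment assumes
```

In a 42U rack you get 9 chassis with 3.3U left over; at 43U the chassis height divides evenly into 10.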

Re:4.3 U (0)

Anonymous Coward | about 2 months ago | (#47462835)

I am pretty sure this is a joke.
I thought the 4.3U unit was a joke too. I used to hate HP purely because of Carly. I'm guessing this is another one of those metric-to-English conversion fiascos. But we know deep down it's not.

Re:4.3 U (1)

Anonymous Coward | about 2 months ago | (#47463105)

There are three screw holes per U, so 1/3U increments make sense.

Re:4.3 U (3, Informative)

radarskiy (2874255) | about 2 months ago | (#47463473)

This is actually an established size from HP. It allows two 3.5" drives per vertical blade (cheaper than 2.5"), which would not fit inside a 4U chassis, but fits one more chassis per rack than 5U would.

Amazing. (5, Funny)

ddt (14627) | about 2 months ago | (#47461331)

"If you do algorithms, things of that nature, you can run on these systems."

Sold!

What does their website run on? (5, Informative)

guytoronto (956941) | about 2 months ago | (#47461415)

Being in IT sales, I am often required to surf HP's website. Their site is consistently painfully slow. You would think that a company like HP would make sure their servers could serve up webpages faster than a snail.

Re:What does their website run on? (1)

Anonymous Coward | about 2 months ago | (#47461663)

Couldn't agree more on this! At some point I just gave up on HP because of their website. Dell and Lenovo may have a million options and overly fancy pages too, but at least their load times are predictable. They should all take lessons from some of the newer companies like Google, who seem able to run fast sites (news.google.com, etc.).

Re:What does their website run on? (1)

duk242 (1412949) | about 2 months ago | (#47462267)

Oh man, plus one to this... Their support site is so slow that when you're logging warranty jobs, you hit the button to load the page and then alt-tab and do something else for a bit...

Re:What does their website run on? (0)

Anonymous Coward | about 2 months ago | (#47467455)

I let my certification expire so I wouldn't have to deal with that website anymore. The $10 (or whatever it was) they were paying my company to replace a motherboard wasn't worth the hassle, and I couldn't even use the page from my Linux machine because I refused to run WINE and IE. Get a real web page and real web developers, HP.

Re:What does their website run on? (-1)

Anonymous Coward | about 2 months ago | (#47462961)

They're all like that. IBM, HP, Oracle. That's what you get when you run on these shitty, bloated, god-awful java middleware piles.

Re:What does their website run on? (0)

Anonymous Coward | about 2 months ago | (#47463127)

Their site is consistently painfully slow. You would think that a company like HP would make sure their servers could serve up webpages faster than a snail.

Are you loading the site with or without NoScript? With NoScript it loads pretty fast. If I allow all scripts to run, I see (roughly) 2-3x the number of requests and it is slow. Too lazy right now to pull out specifics via Fiddler, but it may help you to install NoScript.

NEWSFLASH!! (0)

Anonymous Coward | about 2 months ago | (#47461419)

This just in! Slashdot is just another way to get you to buy shit. Who knew.

The only remaining community not trying to profit from a historical authenticity being you, some hand lotion, and a box of tissues.

Coming soon! Viagra branded right hand, sewn to the end of your wrist while you sleep!

Totally would buy (5, Interesting)

Anonymous Coward | about 2 months ago | (#47461435)

If I had the money, I'd totally buy it and avoid the cluster****ery that is cloud services.

BUT...

Notice what the average CPU is: Intel Atom-class hardware. In other words, this is designed for Dreamhost-style weak cloud VPS hosting, so while you may have 45 servers in the box, the net performance is ... well...

The Atom processor picked, the S1260 (2 cores, 4 threads @ $64.00), has a PassMark score of 916.
The highest-rated is the Intel Xeon E5-2697 v2 @ 2.70GHz, PassMark 17361.
So 19 of those Atoms (38 cores, 76 threads) = 1 E5-2697 v2 (12 cores, 24 threads @ $2614.00).
One dual-E5-2697v2 server is almost equal, and you have 24 usable cores that could be turned into weak VPS servers. Get the point I'm making?
Moonshot might be a better choice for provisioning weak dedicated hosts instead of VPSes (which are inherently weak; even when running on solid hardware, they are still subject to being oversold). The S1260 is $64 and the E5-2697 v2 is $2614, roughly the cost of 40 of the Atoms. So on paper someone might go "oh look, I can afford an entire Moonshot server for the price of a single E5-2697 v2 CPU and get twice as many cores", when the single-thread performance of the 2697 is a PassMark of 1,662 (yes, 181% of all 4 threads of the Atom combined).
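The arithmetic above, replicated as a quick sketch (the PassMark scores and list prices are the commenter's figures, not independently verified):

```python
# Replicating the commenter's arithmetic, using the PassMark scores
# and list prices quoted in the comment (not independently verified).
atom_score, atom_price = 916, 64.00        # Intel Atom S1260 (2C/4T)
xeon_score, xeon_price = 17361, 2614.00    # Intel Xeon E5-2697 v2 (12C/24T)

# How many Atoms does it take to match one Xeon on aggregate throughput?
atoms_per_xeon = xeon_score / atom_score
print(f"~{atoms_per_xeon:.0f} Atoms per E5-2697 v2")   # ~19, as the comment says

# Throughput per dollar (higher is better); the Atom wins on this metric
print(f"Atom: {atom_score / atom_price:.1f} marks/$")
print(f"Xeon: {xeon_score / xeon_price:.1f} marks/$")

# CPU cost of a full 45-node Moonshot chassis vs. a dual-Xeon box
print(f"45 Atoms: ${45 * atom_price:.2f}, 2 Xeons: ${2 * xeon_price:.2f}")
```

So per dollar the Atoms look good on aggregate throughput; the catch, as the comment notes, is single-thread performance.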

The thing is, this kind of configuration is better suited for certain tasks, such as a web server cluster front end (where it's rarely the CPU but the network infrastructure that's the bottleneck), where you can turn identical servers on or off as required, and none of them actually need hard drives connected; they can just be PXE-booted from a NAS.

Though I'm worried when I see "software defined" anywhere in marketing, as most virtualization software fails hard when under load (above ~75% CPU). So maybe a data center that is space- or power-constrained can see a use for this, but if you're running a high-usage website, you're better off optimizing the software stack (like switching to nginx, or putting Varnish in front of Apache httpd + php-fpm, instead of leaving it at the highly inefficient httpd prefork + mod_php) than deploying more inefficient servers.

IBM? (4, Informative)

s.petry (762400) | about 2 months ago | (#47461755)

The whole promotion seems to resemble everything from IBM PureServers that were introduced about 2 years ago, but of course lacking any type of performance. At least the IBM servers allowed scaling, higher performance CPUs, integrated disks, etc..

When management and marketing design computers, this is what we get. HP has not really been a technical player for a long time, at least in terms of innovation. Superdome was okay, but Sun E-class machines made them look like an old mainframe in terms of usability. Itanium flopped, and they never put much into the PA-RISC chips after that. OmniBack and NNM were great, but required manpower, and HP has despised T&M billing for as long as I've worked with them, which goes back to HP-UX 9 and VUE days. (I contracted for them in Michigan, because they would not hire direct technical people.)

Re:Totally would buy (0)

Anonymous Coward | about 2 months ago | (#47462209)

Shouldn't you look at an Atom that is not 2 years old? Like the C2750, which has a PassMark of 3797 with 8 cores.

Re:Totally would buy (3, Informative)

Maxwell (13985) | about 2 months ago | (#47462805)

Shouldn't you be telling that to HP? From the site: "The HP ProLiant Moonshot Server is available with the Intel® Atom Processor S1260...."

Re:Totally would buy (1)

radarskiy (2874255) | about 2 months ago | (#47463513)

Your limiting factor is actually cooling. For the W/ft² you can pull out of a room, you can't fill every rack to the top with Xeons.

Re:Totally would buy (1)

sdguero (1112795) | about 2 months ago | (#47464319)

Agreed! In my experience you lose performance, and therefore efficiency, with VMs when running CPU core/frequency-dependent applications. The applications we run are 60-80% faster on "bare metal" Linux than on any VM deployment we've tried so far.

Re:Totally would buy (1)

Wolfraider (1065360) | about 2 months ago | (#47466235)

Moonshot is targeted at a different workload than general computing. We are currently looking at them to replace our VDI solution. We have several pieces of software that need a better video card and CPU than what a typical VM could provide. With Moonshot we can simply install our software on the bare-metal hardware and skip the virtualization layer. The Moonshot chassis supports 45 cartridges, and you can get a cartridge that has 4 servers built in, without a hard drive of course. 45 * 4 = 180 desktops per 4.3U, with better performance CPU- and video-wise. I think Moonshot has its place in more specialized roles, but definitely not general computing. Just an HP customer.
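The density figures in the comment above multiply out like this (all numbers are the commenter's, not vendor specs; the 42U rack is an added assumption):

```python
# VDI density arithmetic from the comment above (commenter's figures,
# not vendor-verified): 45 cartridge slots, 4 nodes per cartridge, 4.3U chassis.
slots, nodes_per_cartridge, chassis_u = 45, 4, 4.3

desktops_per_chassis = slots * nodes_per_cartridge
print(desktops_per_chassis)                      # 180 desktops per 4.3U

# Scaled to a standard 42U rack (9 chassis fit, using 38.7U)
chassis_per_rack = int(42 // chassis_u)
print(chassis_per_rack * desktops_per_chassis)   # 1620 desktops per rack
```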

Not so fast (2, Interesting)

Anonymous Coward | about 2 months ago | (#47461547)

This sounds like a great idea, right? 45 servers in a single chassis, with an OA (Onboard Administrator) to allow administration of each individual blade. So about 12 months after you've implemented this monster in your production (no-downtime) environment, a blade fails. No problem, you replace the blade. But the NEW blade comes with firmware that requires an update to the OA (entire chassis), and the new OA firmware won't work with the other 44 blades until you update them also. Hmmmm... hey boss, can I get downtime for servers 1 thru 45 to update our blade chassis? No? OK, well, I guess we are hosed unless you get TWO chassis fully loaded and cluster between them.

Not so fast (0)

Anonymous Coward | about 2 months ago | (#47461971)

At $100K+ per server, and three ARP storms, they can keep their "Moonshot" off of my production environment.

Re:Not so fast (0)

Anonymous Coward | about 2 months ago | (#47462677)

Mod parent up.

Putting all your eggs in one basket, in any way/shape/form, is a bad idea. Your chassis, all the interconnects and backplanes (not the physical back-of-the-server backplane), as well as god-only-knows-how-much firmware, all become points of failure; specifically, single points of failure for 45 systems. The parent comment about replacing OA firmware and so on applies greatly, especially when it comes to HP hardware. (Also, enjoy those firmware updates that will probably take 12-18 full hours to complete on that many systems.)

This hardware makes me think of virtualization, where people are shoving 20+ servers onto a single physical system; then, when the single physical system has issues (it doesn't matter what goes wrong (really, it doesn't)), all 20 are affected. Bare-metal boxes, with a 1:1 ratio of hardware to host, are still worth the investment, as long as you have the rack space.

Re:Not so fast (0)

Anonymous Coward | about 2 months ago | (#47463027)

This hardware makes me think of virtualization, where people are shoving 20+ servers onto a single physical system; then, when the single physical system has issues (it doesn't matter what goes wrong (really, it doesn't)), all 20 are affected.

You obviously haven't seen a proper virtualized environment. The goal is to keep the hardware busy, not to overprovision to the point where you can't migrate VMs to other hosts when things get cramped or hardware issues rear their ugly heads.

What you've probably seen is a few incompetent morons do it wrong.

Bare-metal boxes, with a 1:1 ratio of hardware to host, are still worth the investment, as long as you have the rack space.

Not really. I can migrate VMs between machines with ease and not have to worry about downtime to reload OSes and drivers and restore from a backup when hardware dies. You really need to tune the virtualization environment for the intended workloads. You can get away with some overprovisioning if the VMs spend the majority of their time doing very little. It takes some skill; most MS-centric sysadmins don't have it. Scaling a VM to larger hardware takes a lot less effort as well.

I like the fact that my servers are nothing but a VM description and a disk image. Hell, if a catastrophe happens, I can even host a couple of servers on my iMac running VMware Fusion until things get repaired.

Re:Not so fast (1)

dbIII (701233) | about 2 months ago | (#47463157)

That's why I like the SuperMicro (and, I'm sure, others') way of doing a dense server. With some models, each machine in the shared case shares the power supply and that's it. You may need third-party software to wrangle the cluster, and nothing deeper than the OS level, but a failure in one bit of hardware isn't going to upset anything else.

Not so fast (0)

Anonymous Coward | about 2 months ago | (#47463587)

Ok well I guess we are hosed unless you get TWO chassis fully loaded and cluster between them.

If you're buying these, two chassis should be your minimum order. Why in the world would you buy one chassis, fill it to the max, and use all of it without buying another chassis as well?

Marketing lies (0)

Anonymous Coward | about 2 months ago | (#47461567)

In real-life scenarios you will not see "89% less energy; 80% less space; 77% less cost; and 97% less complex." They are comparing it to their own inefficient systems, not to the competition. Another thing: it's "up to", which in real life means 10x smaller; they compared it to the most power-hungry monsters. Just buy a MicroBlade server from SuperMicro and get a better system for 10 times less money. Really, people, use your own brains. HP did what everyone else in the business did some time ago; HP just calls it a revolution to get attention because of declining market share. That's all, folks.

Up to (0)

Anonymous Coward | about 2 months ago | (#47461673)

"Up to" means the same thing as "less than or equal to." So my new server line will use up to 100% less energy, take up to 100% less space, cost up to 100% less, and be up to 100% less complex.

97% less complex ???? (1)

slincolne (1111555) | about 2 months ago | (#47461791)

Wow !

Imagine if they could back-port this work to their current range of x86 blade servers !

:-)

I have a suggestion (2)

slashmydots (2189826) | about 2 months ago | (#47462025)

They forgot the golden rule of IT: if your company has the #1 worst-rated consumer customer support and the #1 least reliable laptops (eMachines beat them at desktops), don't create a brand-new technology that people will be hesitant to use. You pretty much have to be the exact opposite. Only the best company can come out with something new, claim "just trust us, it works perfectly and you should use it", and have people believe them. I really hope this finally bankrupts them so I can stop having to put out HP-induced fires at my business. I'm serious: two dc5700s lit on fire.

Re:I have a suggestion (1)

HornWumpus (783565) | about 2 months ago | (#47462321)

I thought the golden rule of IT was CYA?

Re:I have a suggestion (1)

slashmydots (2189826) | about 2 months ago | (#47480337)

It's short for "cover your ass with flame-resistant material if you have HPs on the premises."

Re:I have a suggestion (1)

cheekyboy (598084) | about 2 months ago | (#47463741)

Now, if each CPU were the size of a USB stick that plugged into a USB 3 socket (or some mini PCI Express slot) but gave the power of an Atom CPU, you could then dynamically plug CPUs in and out, like in HAL.

No Servers! (0)

Anonymous Coward | about 2 months ago | (#47462129)

They're not servers, you morons, they're workstations. You get up to 90 non-VM workstations with their own RAM, CPU, and video in a rack the size of a blade enclosure. So I don't have workstations hogging SAN capacity that the servers need.

Re:No Servers! (1)

X0563511 (793323) | about 2 months ago | (#47462659)

So, like VDI? Because that works SO WELL.

UCS what? (1)

TigerPlish (174064) | about 2 months ago | (#47462597)

Cisco got fr1st post!

Re:UCS what? (1)

prowler1 (458133) | about 2 months ago | (#47464191)

I was about to say that this sounds very much like Cisco UCS, where everything is defined in 'software'. You define the template and its components, including things like WWNs and MAC addresses, and it allows you to migrate the 'server' to different blades since it is all in 'software'.

With that said, the UCS kit we run at work doesn't have anywhere near the density claimed by HP with their Moonshot, but claiming they were the first to create a software-defined blade chassis and the like is not correct.

Re:UCS what? (0)

Anonymous Coward | about 2 months ago | (#47466525)

"Software-defined servers" is just a front-end GUI on the same management tools IT has been using for years to manage server clusters. Be it SCVMM with the Azure Pack, vCenter with vDirector (is that what does it on the VMware side?), or XenServer with the Citrix Service Provider Framework (or whatever they call it now).

Blah blah blah, marketing speak as always...

Tried with Transmeta (1)

Gothmolly (148874) | about 2 months ago | (#47462661)

We bought some Transmeta-based blades at $LARGE_US_BANK a while back, and they sucked. Hard. Like, don't-bother-running-stuff-on-them hard. They went back to HP, or in the trash, I forget, and we got real hardware. It looks like HP is reviving the concept of shitty servers for people who don't do a lot with them. Instead of 1 beefy 4U machine, you have a 45-node Beowulf cluster of suck, and most problems AREN'T trivially scalable. Or, if your app really is that scalable (or you've spent the time to make it so), then you're a big-boy company and you need real iron.
Fail.

Good idea but not new (1)

dbIII (701233) | about 2 months ago | (#47463109)

Look at a SuperMicro catalogue from around 2008 onwards or Verari from even earlier.

Re:Good idea but not new (1)

afidel (530433) | about 2 months ago | (#47463371)

What does SM have that's even remotely like Moonshot? I don't believe they have anything like 45 modules with 4x 8-core ARM processors in 5U. Verari looks interesting, but at 1,700 cores per rack it's almost 10x less dense than Moonshot.

Re:Good idea but not new (1)

dbIII (701233) | about 2 months ago | (#47463471)

I don't believe they have anything like 45 modules with 4x 8-core ARM processors in 5U

The numbers are a bit different but the "new style" is not new.

Blonde? (0)

Anonymous Coward | about 2 months ago | (#47463451)

Right, it has blonde hair and recently tried unsuccessfully to buy the governorship of the state of California. Go Muffy!

Moonshot... shot down... (1)

Anonymous Coward | about 2 months ago | (#47463751)

The company I'm at is looking at some new serious number-crunching servers. We had an HP rep come in and propose a Moonshot system. The head of IT and I looked at each other and laughed out loud... Moonshot uses Atom processors. I don't care how many of them you have; we're not using a rack of low-ball processors in our system. Moonshot is a complete joke.

I think they use Atom processors because it was the only way to get such high density and still be able to get the heat out of the system. It also may be that Intel has a boatload of Atom processors to unload after the crash of the netbook market. We told them to come back when HP can get a Xeon in each module.

Moonshot is a great concept that was implemented very poorly. I asked the rep what market Moonshot servers are targeting. He wasn't able to answer the question.

I don't get it (1)

drsmithy (35869) | about 2 months ago | (#47464465)

We got demoed this 6 months or so ago.

I still fail to see what this buys you over a bunch of regular blades or rackmounts running your virtualisation platform of choice.

Re:I don't get it (1)

neurovish (315867) | about 2 months ago | (#47466105)

We got demoed this 6 months or so ago.

I still fail to see what this buys you over a bunch of regular blades or rackmounts running your virtualisation platform of choice.

The best use-case proposal I've seen is for something like VDI. Instead of sharing the resources of one server, every desktop gets its own processor and memory.

AMD SeaMicro is a better choice (1)

Pegasus (13291) | about 2 months ago | (#47464847)

It's 10U with 64 almost-real servers (Haswell Xeons) and has integrated storage and networking. You only need to hook up some power and a beefy uplink to it and you're done. And did I mention a REST API to control it? It even works with OpenStack bare metal if you want that. Last I heard (two weeks ago), Moonshot is still CLI-only.
The Apollos, on the other hand, are worth considering. But Moonshot... too little, too late.

HW Failure (1)

albertost (1019782) | about 2 months ago | (#47465393)

It must be a nightmare if the chassis fails.

Re:HW Failure (0)

Anonymous Coward | about 2 months ago | (#47465785)

Some of the older HP equipment was good enough; however, a few years back we purchased a bunch of new HP notebooks.
All of them died shortly after the warranty.
After several calls to HP, they claimed there wasn't a defect and wanted to charge big bucks to fix them.
We went ahead and purchased another vendor's notebooks, all still functioning perfectly!
Some time later, we learned there was a defect in a graphics chip in all the HP notebooks.
We never received notice about it.
Contacting HP again was a waste of time, again because it was too late to receive the repair or replacement.
Every piece of equipment had the warranty information sent in and copies kept in our files.
We still have that box of dead HP notebooks, kept as a reminder of how much HP sucks.
Until HP replaces them, we will never purchase HP products again.
Until HP replaces them, we will continue to remind anyone considering computer equipment how much HP sucks.
HP, fuck making new questionable shit; how about taking care of your customers?


up to advertising (0)

Anonymous Coward | about 2 months ago | (#47472727)

The article uses "up to" advertising, i.e. the benefits may be 89%, but they can also be less.

For a related XKCD, see http://xkcd.com/870/ [xkcd.com].
