
HP Announces ARM-Based Server Line

Soulskill posted more than 2 years ago | from the go-small-or-go-home dept.


sammcj writes with news that HP is developing servers based on 32-bit ARM processors from Calxeda. Their current model is only a test setup, but they plan to roll out a finalized design by the middle of next year. "HP's server design packs 288 Calxeda chips into a 4U rack-mount server, or 2,800 in a full rack, with a shared power, cooling, and management infrastructure. By eliminating much of the cabling and switching devices used in traditional servers and using the low-power ARM processors, HP says it can reduce both power and space requirements dramatically. The Redstone platform uses a 4U (7-inch) rack-mount server chassis. Inside, HP has put 72 small server boards, each with four Calxeda processors, 4GB of RAM and 4MB of L2 cache. Each processor, based on the ARM Cortex-A9 design, runs at 1.4GHz and has its own 80-gigabit crossbar switch built into the chip."
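The board and chip counts in the summary multiply out as claimed; a trivial sanity check (all figures taken from the summary itself, nothing assumed):

```python
# Density figures quoted in the story summary.
boards_per_chassis = 72   # "72 small server boards"
chips_per_board = 4       # "each with four Calxeda processors"

chips_per_chassis = boards_per_chassis * chips_per_board
print(chips_per_chassis)  # 288, matching the "288 Calxeda chips" figure
```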


125 comments


Obligatory (-1)

Anonymous Coward | more than 2 years ago | (#37917426)

Make your own Beowulf cluster joke here.

Re:Obligatory (0)

gmhowell (26755) | more than 2 years ago | (#37917434)

Make your own Beowulf cluster joke here.

Where do the jokes about buying them on eBay go?

Re:Obligatory (0)

dbIII (701233) | about 2 years ago | (#37917588)

Then make a Minecraft one about Redstone.

It's the beginning of the end for x86 (hopefully) (0)

Anonymous Coward | more than 2 years ago | (#37917498)

Well, the x86 arch. is becoming more and more of a drag on the computing industry's advances: _any_ CISC CPU design eats up way more transistors and power than a RISC-based arch., and while that's obvious to most, the whole Windows/x86-compatibility baggage still keeps this power-hungry x86 mammoth alive.

Those good old days of hand-crafted, human-friendly x86 assembly coding are long over. Wake up, Intel and Microsoft: put a nice tombstone on x86 and move on.

Re:It's the beginning of the end for x86 (hopefully) (1)

staalmannen (1705340) | more than 2 years ago | (#37917536)

What I wonder is what the differences are between the PA-RISC design from HP and the various ARM chips. They are both RISC types and I am sort of surprised that HP does not go with its own CPU architecture. What is the "magic sauce" in ARM?

Re:It's the beginning of the end for x86 (hopefully) (1)

Chrisq (894406) | about 2 years ago | (#37917558)

What I wonder is what the differences are between the PA-RISC design from HP and the various ARM chips. They are both RISC types and I am sort of surprised that HP does not go with its own CPU architecture. What is the "magic sauce" in ARM?

They are probably scared of Oracle "doing an Itanium [zdnet.com]" on them.

Seriously though (1)

Chrisq (894406) | about 2 years ago | (#37917584)

What I wonder is what the differences are between the PA-RISC design from HP and the various ARM chips. They are both RISC types and I am sort of surprised that HP does not go with its own CPU architecture. What is the "magic sauce" in ARM?

HP stopped selling PA-RISC in 2008 and will end support at the end of 2013 [hp.com].

Re:It's the beginning of the end for x86 (hopefully) (2)

DarkOx (621550) | about 2 years ago | (#37917742)

As others have said, PA-RISC has been discontinued for some time, so that is one reason. The other is that I am pretty certain this thing is targeted at the Linux and [A-z]*.?BSD ecosystem, which has pretty strong support for ARM these days. The software stack for PA-RISC is just not there unless you want to run HP-UX, and the market for new HP-UX deployments is probably quite small.

80Gbps switch or not, you're probably not running your database on these things, but they sound like a perfect web-farm-in-a-box solution. The software stack on Linux and [A-z]*.?BSD is entirely there for that and is largely familiar to existing admins. Apache on Linux is still Apache on Linux, even when ARCH=armv5tel.

Re:It's the beginning of the end for x86 (hopefully) (1)

swalve (1980968) | about 2 years ago | (#37918886)

It probably works quite well for virtualization.

Not at 32-bit, which maxes out at 4GB RAM (1)

Joe_Dragon (2206452) | about 2 years ago | (#37919574)

Not at 32-bit, which maxes out at 4GB of RAM. Where is 64-bit ARM?

Re:Not at 32-bit, which maxes out at 4GB RAM (1)

headbulb (534102) | about 2 years ago | (#37920546)

The RAM maxes out at 4GB per process, not per system. LPAE allows physical memory to be addressed with up to 40 bits.
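For concreteness, the limits the comment refers to (simple arithmetic, nothing assumed beyond the comment itself):

```python
# 32-bit virtual addressing per process vs. LPAE's 40-bit physical addressing.
GiB = 2**30

per_process = 2**32       # 4 GiB of virtual address space per process
lpae_physical = 2**40     # 1 TiB of physical address space with LPAE

print(per_process // GiB)     # 4 (GiB)
print(lpae_physical // GiB)   # 1024 (GiB), i.e. 1 TiB
```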

Re:It's the beginning of the end for x86 (hopefully) (1)

Kagetsuki (1620613) | about 2 years ago | (#37918594)

Industry support and familiarity, availability of compilers, direct support by large projects (Debian, Ubuntu), and simply brand familiarity. I mean if you are going to make an argument about PA-RISC you may as well make the same argument about MIPS and whatever Motorola and IBM are calling their chips these days while you're at it.

Re:It's the beginning of the end for x86 (hopefully) (1)

ckaminski (82854) | about 2 years ago | (#37922664)

IBM POWER has a HUGE Linux following - and it's officially supported on all of IBMs machines, from the lowest eServer to the largest of their mainframes.

Re:It's the beginning of the end for x86 (hopefully) (1)

jones_supa (887896) | more than 2 years ago | (#37917550)

It's still amazing how well x86 + Windows works, taking into account all the hacks and legacy cruft involved. However, it's delightful to finally see ARM being more and more utilized outside the smartphone category, in PCs.

Re:It's the beginning of the end for x86 (hopefully) (4, Informative)

serviscope_minor (664417) | about 2 years ago | (#37917582)

It's still amazing how well x86 + Windows works, taking into account all the hacks and legacy cruft involved.

The legacy cruft is often microcoded out and runs rather slowly. The modern x64 isn't too bad.

However, it's delightful to finally see ARM being more and more utilized outside the smartphone category, in PCs.

Not just ARM. Both SPARC and MIPS (compatible but independent) have now made showings in the top 10 supercomputers.

Re:It's the beginning of the end for x86 (hopefully) (1)

zach_the_lizard (1317619) | about 2 years ago | (#37918310)

all the hacks and legacy cruft involved.

ARM has quite a bit of its own hacks to get platforms running. Linus Torvalds had his own rant against this, as he is wont to do.

Re:It's the beginning of the end for x86 (hopefully) (0)

Anonymous Coward | about 2 years ago | (#37917696)

Are there any compatible reimplementations of ARM processors that don't pay royalties to ARM? At least Intel is kept honest by having to compete with AMD, even if AMD is currently losing. On the other hand, a future where ARM and Intel compete with each other could still result in healthy competition for the benefit of their customers.

Well (1)

maroberts (15852) | about 2 years ago | (#37918184)

ARM presumably has patents on its core technologies, which are good for 15-20 years, and also its chip designs would be covered for at least 10 years, so anything compatible would have to be based on some fairly antiquated stuff.

AFAIK, royalties to ARM are not very high in the first place - even though the company effectively gets royalties from several billion ARM chips, its profits over 3 months are only about £30 million, so it is unlikely that the per chip royalty cost is that high.

Re:It's the beginning of the end for x86 (hopefully) (0)

hairyfeet (841228) | about 2 years ago | (#37922764)

Can we PLEASE stop with the frankly batshit "Hey lets kill X86 herp derp" bullshit please? AMD is cranking out sub 9w dual cores WITH GPUs built in, Intel is cranking out CULV that get similar numbers so power draw on X86? Ain't shit folks unless you are talking about cell phones which thanks to Apple and the iSliver batteries means you have to run INSANELY low power to get any battery life.

Each chip has its place: ARM for mobile and a few specialized niches (like in TFA, where I'm sure it'll be for web servers that aren't getting many hits, so the lower load makes it more advantageous to worry about power), and x86 for the big loads and number crunching. Because like it or not, cycle for cycle ARM gets royally stomped by even the lowest AMD and Intel x86 chips; go for the higher-end chips and it isn't even funny.

TANSTAAFL, folks, and ARM is for power sipping and power sipping ALONE. Ramp it up so it can even crunch the kind of numbers a Core2Duo could from two generations ago? Watch those power savings dry up and blow away like a fart in the breeze. Saying ARM can replace x86-64 is like saying "Now that we have these mopeds we don't need trucks anymore!" Different tools for different jobs, and I'd love to see you move a couch on that moped. The ONLY way ARM competes is by having tons of specialized chips, like for decoding using DSPs, which again raises the power usage, and bye-bye savings.

SATA?! (1, Insightful)

Anonymous Coward | more than 2 years ago | (#37917518)

Come on, guys, it's 2011. We're talking servers here. Forget SATA; throw in native iSCSI support (or Fibre Channel, but iSCSI would probably be significantly easier, if only because it uses standard Ethernet ports rather than needing extra protocol support), and you'll have something that's a serious contender in that space.

Think about it: with SATA, you have a bunch of hard disks, probably mostly disused, almost all of them performing atrociously (SATA is notorious for only being good with large sequential I/O). With iSCSI, you can hook up any disk array you damn well want, whatever its performance characteristics. Throw 10 Gb ethernet into the mix, and you have a winner (an expensive winner when you factor in the switch ports, but at least it gives the architect the option.)

Aggregate I/O performance (3, Informative)

Junta (36770) | about 2 years ago | (#37917962)

FC/FCoE/iSCSI all deliver much, much lower aggregate I/O performance than coordinated use of direct-attached storage. Google, Hadoop, GPFS, and Lustre all facilitate that sort of usage. With any of those remote-disk architectures you will have an I/O bottleneck somewhere along the line.

That said, I would presume netboot at least would be there, and from there you can do iSCSI in software certainly. FCoE tends to be a bit pickier, so they may not be able to do that in the network fabric provided.

On the whole, I'm still skeptical. So far ARM has proved itself where low power is critical, not performance. I'm not sure the performance per watt is going to be impressive (e.g. if it hypothetically takes 10% of the power of a competitor and gives 9% of the performance, that can work well for places like cell phones but perhaps not so much for a datacenter). ARMv8 may make things very interesting, though...

Re:Aggregate I/O performance (3, Interesting)

postbigbang (761081) | about 2 years ago | (#37918272)

You can argue, successfully, that via virtualization and multi-core relationships the ARM power argument is goofy, as the number of threads per process and virtualization favor the CISC architectures. The ARM infrastructure is, however, the foundation for a couple of decent server product lines. The architecture cited is very much like getting a bunch of ARM CPUs together to do what the more power-hungry quad/multi-core Intel and AMD chips are doing today. Remember: the ARM is 32-bit, and the number of threads is limited both by the inherent architecture and by the memory ceiling.

What's scary to me is that someone wrote that it has a crossbar switch on it without understanding what that implies in terms of inter-CPU communications, cache, cache sync/coherence, etc. A well-designed system will perform almost as well with iSCSI (on a non-blocking, switched backplane) as it will with SAS, so I/O isn't quite the issue; the claim of power savings vs. thread density per watt expended has yet to be proven.

Re:Aggregate I/O performance (1)

Lumpy (12016) | about 2 years ago | (#37919070)

BAH, why? Build a metric buttload of RAM on it and have it simply make snapshots of the ramdisks to rotating media when changes are made, using a coprocessor, letting the main process scream along. You get insane speeds, and RAM is dirt cheap. If each processor had 64 gigs of RAM, each could run 4 website VMs with plenty of memory and storage and still outperform the quad bonded OC48 connections into the server farm.

This is how Comcast's video-on-demand system runs. Main spinning-storage servers spool out to ramdisk-only servers at local headends.

Re:Aggregate I/O performance (1)

AvitarX (172628) | about 2 years ago | (#37920522)

How well does ARM break the 4GB RAM wall?

Re:Aggregate I/O performance (1)

dgatwood (11270) | about 2 years ago | (#37922358)

What an amazing coincidence [techspot.com]. A 64-bit ARM ISA was just announced last week.

I hope they continue in the tradition and call it the "leg" instruction set.

Re:Aggregate I/O performance (1)

AvitarX (172628) | about 2 years ago | (#37922654)

So the solution of

BAH, why? Build a metric buttload of RAM on it and have it simply make snapshots of the ramdisks to rotating media when changes are made, using a coprocessor, letting the main process scream along.

is not particularly effective until prototypes that will be released in 2014?

Re:SATA?! (0)

Anonymous Coward | about 2 years ago | (#37918356)

I know it's Slashdot, and this is covered in the linked articles, but since that's maybe a bit too much reading: there are four 10GbE ports on each of the four enclosures within the 4U chassis, connected to the server cards via an internal switch fabric within each enclosure.

The articles explicitly state external storage arrays - if you want internal HDD/SSD you give up some server cards to provide the space for it.

Re:SATA?! (1)

JoeMerchant (803320) | about 2 years ago | (#37918372)

Has anybody seen the Googleplex "server" spec? From what little I've read, I'd assume they're on SATA.

Re:SATA?! (1)

UnknowingFool (672806) | about 2 years ago | (#37918524)

It looks like a SATA cable [cnet.com] is used but Google doesn't mention it specifically in the article.

Re:SATA?! (1)

datavirtue (1104259) | about 2 years ago | (#37918986)

Isn't it well known that they use cheap disks?

Re:SATA?! (1)

swalve (1980968) | about 2 years ago | (#37919010)

I forget the exact specifics, but I think SAS can use the same cable. And those look like SCSI drives.

Re:SATA?! (1)

Archangel Michael (180766) | about 2 years ago | (#37920398)

The functional difference between SATA and SAS is the intelligence on the drive: SATA is dumb compared to SAS. The pinouts and cables and all that were designed to be interoperable. I think SAS drives can be dumbed down to SATA if you need it in a pinch, and SAS controllers can handle SATA drives natively (at least some can).

Re:SATA?! (1)

fuzzyfuzzyfungus (1223518) | about 2 years ago | (#37919064)

The fact that they've special-magic-backplane-fabric-ed away all the other busses while leaving each card bristling with SATA connectors seems rather weird: that's a lot of headers to bring out if nobody is going to use them, and it'll be a hell of a rat's nest if you actually try. (Could they really not have stretched their backplane fabric a little further, to include allocating direct-attached storage to nodes across it?)

The use of SATA, though, seems reasonable enough, given the low-performance, low-cost, low-energy focus of the design. It just seems really weird that the connectors are on the cards, rather than there being a few high-density SAS connectors on the back, allowing you to either use an iSCSI device over the 10GigE ports or a big SATA/SAS cage directly cabled, with disks being farmed out over the backplane rather than via internal SATA cabling...

Intel is annoyed (0)

Anonymous Coward | more than 2 years ago | (#37917526)

I don't think Intel is going to be happy with that announcement. Anyway, we will have to wait for the ARM-15 (late 2012), which has hardware virtualization and 64 bits, to really see if ARM can compete with Intel.

Re:Intel is annoyed (0)

Anonymous Coward | about 2 years ago | (#37918058)

There's no 64-bit support in the Cortex-A15. The 64 bits are for chips made to the next-gen spec, ARMv8. We'll have to wait another 3-5 years before we see those on the shelf.

Anyway, this is great news: finally a big player in the ARM/server market. And Calxeda seems to have momentum with its products. I wonder if this has something to do with the recent Ubuntu/ARM/Server 11.10 release, and the upcoming 12.04 LTS release.

Re:Intel is annoyed (1)

mevets (322601) | about 2 years ago | (#37921042)

They could have named it Iceberg to really piss em off.

How many platforms do they need? (1)

unixisc (2429386) | more than 2 years ago | (#37917542)

Let's count - they have Xeon/Opteron, Itanium, and among their dead platforms, they have PA-RISC, Alpha (DEC/Compaq) and MIPS (Tandem/Compaq). What made them pick this for servers?

Would one be right in guessing that their Itanium based Integrity servers have been a disaster?

Re:How many platforms do they need? (1)

DarwinSurvivor (1752106) | about 2 years ago | (#37917590)

Sounds like a good choice for file servers.

Re:How many platforms do they need? (0)

Anonymous Coward | about 2 years ago | (#37917602)

"Green power" and "ARM" are the buzzwords nowadays. I bet it doesnt take much to convince the business people to buy into this.

Re:How many platforms do they need? (0)

Anonymous Coward | about 2 years ago | (#37917632)

Maybe because they are a huge company that wants to serve as many customers as possible.

But it's the classic back and forth: bending over backwards to give customers what they ask for, realizing it's support hell 5 years later, then moving to a more converged approach with a single system that is just good; then, after the old lessons are forgotten, the bending over starts again.

Re:How many platforms do they need? (2)

jimicus (737525) | about 2 years ago | (#37917756)

HP's balance sheet is up and down like a whore's drawers - one quarter they make a stonking loss, the next they're making solid profits. They haven't been consistent in years.

Their core businesses are being eaten away by ever-tougher competition; the days when you could confidently recommend an HP inkjet are long gone (have you seen their software suite lately? Multi-function devices are even worse because with them you often can't install just the bare driver and have it work); I wouldn't be surprised if something similar happens to their laser printer division sooner rather than later.

Were I to hazard a guess - and I'm not a Fortune 500 CEO (if I were, I wouldn't be on /.!) - I'd say they're thrashing around looking for something - anything - to carve themselves a new niche. Something they can do better than the competition, something that differentiates them from every other manufacturer out there. Nokia have spent some time doing the same thing.

Re:How many platforms do they need? (0)

Anonymous Coward | about 2 years ago | (#37917908)

Yeah, I mean, competition between CPU manufacturers is a bad thing. Customers need less variety in the marketplace, because that's more efficient. HP should just use Intel only.

Re:How many platforms do they need? (0)

Anonymous Coward | about 2 years ago | (#37917970)

(...) What made them pick this for servers? (...)

You wouldn't expect them to pick ARM for current MS Windows PCs and workstations, now would you?

Re:How many platforms do they need? (1)

Pieroxy (222434) | about 2 years ago | (#37918180)

(...) What made them pick this for servers? (...)

You wouldn't expect them to pick ARM for current MS Windows PCs and workstations, now would you?

No, but for Android/ChromeOS/iOS/WebOS? And Apple has been known for its ability to change platforms, so an ARM version of OS X is far from impossible.

Re:How many platforms do they need? (1)

Christian Smith (3497) | about 2 years ago | (#37917990)

Let's count - they have Xeon/Opteron, Itanium, and among their dead platforms, they have PA-RISC, Alpha (DEC/Compaq) and MIPS (Tandem/Compaq). What made them pick this for servers?

You can already add ARM to the mix. Their current crop of low power thin clients are ARM based:
http://h10010.www1.hp.com/wwpc/us/en/sm/WF05a/12454-12454-321959-338927-3640405-4063703.html [hp.com] (Wow, nice memorable URL!)

Re:How many platforms do they need? (1)

fuzzyfuzzyfungus (1223518) | about 2 years ago | (#37919166)

Why would you need a URL? Just call your rep for a quote! We understand the internet!

OT: How many platforms do they need? (1)

ckaminski (82854) | about 2 years ago | (#37922772)

This is an example of how badly corporate sites fuck it up (my current employer is a perfectly good example).

The browser tells you which language is preferred - there's no need to hardcode it in a URL. And if they want to switch/override, put it in a fucking cookie.

www.hp.com/products/PRODUCTNUM. WTF is so hard about that?

Re:How many platforms do they need? (1)

fuzzyfuzzyfungus (1223518) | about 2 years ago | (#37919146)

Let's count - they have Xeon/Opteron, Itanium, and among their dead platforms, they have PA-RISC, Alpha (DEC/Compaq) and MIPS (Tandem/Compaq). What made them pick this for servers?

Would one be right in guessing that their Itanium based Integrity servers have been a disaster?

It is entirely possible that their Itanium units haven't been doing so hot (though, from what I've read, it's more a 'small number of cost-insensitive customers' situation, which is why neither HP nor Intel can just shoot the program in the head, but also why they can't seem to get it to expand and gain any economies of scale).

However, the fate of Itanium and the fate of this curious box should be almost 100% unconnected with one another: The two are about as different in design and intended workload as two servers could be.

32 bit servers in 2011? (4, Insightful)

Viol8 (599362) | about 2 years ago | (#37917600)

With the world moving to 64 bits to accommodate huge databases in memory and on disk, they must be aiming for low-hanging fruit here. Still, I'd like to get hold of one IF they ever convert it into a desktop version - it would be nice to have a Linux installation at home that doesn't pay homage to Wintel in any way.

Re:32 bit servers in 2011? (1)

unixisc (2429386) | about 2 years ago | (#37917784)

Not just that, what does ARM have that the other processors of HP don't? Even if one doesn't count PA RISC and Alpha, which are dead, HP could still use MIPS processors in their platforms. And how would Xeons be any worse?

Re:32 bit servers in 2011? (1)

Imbrondir (2367812) | about 2 years ago | (#37917910)

My guess is popularity (compared to MIPS) and power consumption (compared to Xeons).

Re:32 bit servers in 2011? (1)

drinkypoo (153816) | about 2 years ago | (#37918012)

It has yet to be demonstrated that MIPS will scale as well as ARM.

PA-RISC and Alpha should not even be mentioned, since they are dead, though everything relevant about the Alpha lives on at AMD.

Xeons are horribly power-inefficient and always have been.

Re:32 bit servers in 2011? (1)

chuckymonkey (1059244) | about 2 years ago | (#37918228)

Do you remember SGI? Their lineup was all MIPS prior to the ORIGIN 4000 and Altix lines. Those were capable of scaling up to thousands of processors.

Re:32 bit servers in 2011? (1)

drinkypoo (153816) | about 2 years ago | (#37918268)

I'm not talking about fetishistically throwing more cores at the problem, I'm talking about minimizing the number of cores in a multi core system. I'm talking not about the number of processors, but about the speed of individual processors. What's the fastest MIPS core you've ever seen? Was it a particularly good core? Yeah, OK, that's what I thought.

Re:32 bit servers in 2011? (0)

Anonymous Coward | about 2 years ago | (#37918502)

BCM1480 -- a quad core 1.2GHz MIPS64 CPU with HyperTransport and a shitload of I/O. You were saying?

Re:32 bit servers in 2011? (1)

unixisc (2429386) | about 2 years ago | (#37918514)

Not just SGI - remember Tandem's NonStop Himalaya servers? The S-series used up to the R14000 processor. Those were MIPS-based as well, and scaled pretty well, and that is a platform HP still owns. The fastest MIPS chips were the R10000 and its successors, and they were pretty competitive: when the R10000 surfaced, it was about on par with the PA-8000, though slower than the Alphas. HP already has several server models it could use, even without considering PA-RISC and Alpha.

Re:32 bit servers in 2011? (2)

necro81 (917438) | about 2 years ago | (#37918478)

Multi-source supply - ARM processors are produced by lots of companies. And although Calxeda is the only source of these new server-intended ARM processors, they are only the first.

Re:32 bit servers in 2011? (1)

Surt (22457) | about 2 years ago | (#37920628)

Cheapness in bulk.

Re:32 bit servers in 2011? (1)

janoc (699997) | about 2 years ago | (#37917854)

Easy: ARM doesn't yet have 64-bit cores available; they were only recently announced. It will take a while until the manufacturers license them and integrate them into their products, and only then can HP buy them and build a server around them.

From the looks of it, this prototype machine is unlikely to be built for databases (4GB of RAM per chip is not a lot for something like Oracle), so the 32bit limit is not really an issue. On the other hand, this screams HPC cluster/supercomputing or some other well parallelizable load, such as web servers. 32bit CPU is plenty enough for that. 64bit on a server buys you only more RAM, not much else.

It would be *very* interesting to see a performance comparison between this solution and the traditional Intel one. Even if it is only 50% as fast, it should give Intel a lot to worry about: with the higher installation density, the power savings will easily outweigh whatever raw performance advantage Intel may have.

Re:32 bit servers in 2011? (1)

unixisc (2429386) | about 2 years ago | (#37917912)

All very good - but what about the software? What software are they going to offer on ARM that's not already on Xeon (which itself comes in both 32-bit & 64-bit flavors)? And what performance advantage will ARM bring? If it's power consumption, how compelling is the argument to switch to a completely new platform w/ little supported software (no, Android apps don't count) and no performance advantages, just to lower the electric bills? HP might as well have worked w/ either Intel or AMD to get lower-powered Xeons or Opterons to market.

Re:32 bit servers in 2011? (1)

Anonymous Coward | about 2 years ago | (#37918202)

The Register had the best analysis of the sales pitch for one of these:

"The sales pitch for the Redstone systems [the HP hyperscale offering with the EnergyCore ARM boards], says Santeler, is that a half rack of Redstone machines and their external switches implementing 1,600 server nodes has 41 cables, burns 9.9 kilowatts, and costs $1.2m.

A more traditional x86-based cluster doing the same amount of work would only require 400 two-socket Xeon servers, but it would take up 10 racks of space, have 1,600 cables, burn 91 kilowatts, and cost $3.3m. The big, big caveat is, of course, that you need a workload that can scale well on a modestly clocked (1.1GHz or 1.4GHz), four-core server chip that only thinks in 32-bits and only has 4GB of memory."

That makes economic sense.
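Spelling out the ratios in the quoted Register figures (all numbers are theirs; the per-node wattage is simple division, assuming the quoted totals hold):

```python
# Figures from the Register quote above: 1,600 Redstone nodes vs.
# 400 two-socket Xeon servers doing the same work.
redstone_kw, xeon_kw = 9.9, 91.0
redstone_cost, xeon_cost = 1.2e6, 3.3e6
nodes = 1600

print(round(xeon_kw / redstone_kw, 1))       # 9.2  -> ~9x less power
print(round(xeon_cost / redstone_cost, 2))   # 2.75 -> ~2.75x cheaper
print(round(redstone_kw * 1000 / nodes, 1))  # 6.2  -> ~6.2 W per server node
```

The caveat in the quote is the whole story, though: those ratios only apply to workloads that fit in 4GB per node.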

As to software, what is the problem? I run Ubuntu on an always-on ARM box at home. Pretty much anything written for Linux can be compiled for ARM instead of x86.

Re:32 bit servers in 2011? (1)

Afell001 (961697) | about 2 years ago | (#37919058)

If... if... if... you have access to the source code, have software vendors working (or willing to work) on a recompile, or have an in-house development team that is familiar with the ARM architecture, including the best practices to get the highest performance. This is the Achilles' heel, really. Toss a stone and you will hit a halfway-competent developer who understands x86; not so easy with any of the RISC architectures, and to find efficient coders working with ARM processors you are going to have to go shopping in the mobile development market. Most businesses are conservative anyway, and won't take the extra effort or spend the extra money to switch operating platforms, especially if the ARM architecture only offers lukewarm benefits compared to staying with tried-and-true x86.

Re:32 bit servers in 2011? (1)

janoc (699997) | about 2 years ago | (#37921350)

That's a red herring. For the majority of Linux applications you *do have* source code, thanks to the OSS licensing. And you won't even have to recompile; there are distros targeting ARM already. The only exceptions are proprietary applications like Oracle, SAP or Exchange, but this machine isn't designed for such workloads (Oracle needs more memory; SAP and Exchange are Windows-only).

Regarding development - development for Linux on ARM is exactly the same as development for Linux on x86 and very similar to any other Unix. Most people do not write in assembler anymore and the platform differences from the point of view of a business application writer are negligible at best.

Re:32 bit servers in 2011? (1)

Pieroxy (222434) | about 2 years ago | (#37918248)

Linux provides good software for servers. Ubuntu has even released Ubuntu Server for ARM.

As far as performance per watt goes, that's the key point, and it is missing from the article. A pity.

That said, what makes an architecture successful? I think it's the amount of R&D that everyone puts in it. x86 has seen obscene amounts of R&D (as compared with other platforms). ARM is getting a fair share with all the smartphones and tablets nowadays. So in my view, it is much much much better to bet on ARM for the future rather than unearth a dead platform.

Re:32 bit servers in 2011? (1)

janoc (699997) | about 2 years ago | (#37921258)

FYI, ARM has been well supported by Linux for ages, not only via Android. These CPUs have been around for a very long time, probably longer than Intel's Xeon. So while you probably won't run your Exchange or IIS on such a machine in the near future, it will do just fine for everything else. There are plenty of uses for non-Windows servers...

Re:32 bit servers in 2011? (3, Insightful)

Imbrondir (2367812) | about 2 years ago | (#37917894)

In 2010 ARM announced 40 bit virtual memory extension for 32bit ARMv7. That's 1 Terabyte of RAM. Which should be enough for everybody :)

On the other hand, ARM announced the 64-bit ARMv8 a couple of days ago. But you probably can't buy one of those for 6-12 months or so. Perhaps HP is simply using the ARM chips available now as a pilot for when the knight in shining 64-bit armor comes along.

Re:32 bit servers in 2011? (2)

Bengie (1121981) | about 2 years ago | (#37919782)

40-bit addressing on a 32-bit CPU takes a hefty performance penalty when switching 4GB "views", as the CPU can still only see 4GB at a time. Since the CPU can't see the memory as one flat range, it has to waste time copying certain things between these "views". It also increases the complexity of the code, since any single app trying to use more than 4GB has to manually manage these "views". So a pointer to a memory location may be valid in one view, but not another. Fun times.
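A toy sketch of the "views" described here (purely illustrative Python, not real LPAE or OS code; the `WindowedMemory` class and the 4-unit window are invented for the example): any access outside the currently mapped window forces a costly remap, so strided access patterns pay repeatedly.

```python
WINDOW = 4  # pretend the window is 4 units instead of 4 GiB

class WindowedMemory:
    """Simulates a CPU that can only map one window of a larger
    physical space at a time."""
    def __init__(self, physical_size):
        self.physical = [0] * physical_size
        self.base = 0        # physical address the window currently maps
        self.switches = 0    # count of costly view switches

    def _ensure_mapped(self, phys_addr):
        if not (self.base <= phys_addr < self.base + WINDOW):
            self.base = (phys_addr // WINDOW) * WINDOW
            self.switches += 1   # in hardware: page-table/TLB update

    def write(self, phys_addr, value):
        self._ensure_mapped(phys_addr)
        self.physical[phys_addr] = value

mem = WindowedMemory(16)
for addr in (0, 1, 5, 6, 1, 13):   # access pattern crossing window edges
    mem.write(addr, addr)
print(mem.switches)  # 3 remaps incurred by this pattern
```

A real MMU pays this cost in page-table and TLB updates rather than Python bookkeeping, but the sensitivity to access patterns is the same.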

Re:32 bit servers in 2011? (1)

Imbrondir (2367812) | about 2 years ago | (#37920046)

Thanks. I actually hoped somebody would elaborate on the downside of the 'hacky' 40-bit solution. I knew there'd be some.

Though outside of big databases, the hacky solution still sounds useful. For example on application servers, or cheap LAMP hosting, a massive amount of cool low power cores might perform better

Re:32 bit servers in 2011? (1)

Bengie (1121981) | about 2 years ago | (#37923002)

That was based on my general understanding. I would not "quote" what I said.. :p

And yes, it definitely "works", especially with apps that don't need more than your 32-bit range. The OS can transparently handle a sum of more than 4GB of app-allocated memory. So any single app may not use more than the standard 4GB, but all of the apps together could. It wouldn't be as fast as a native 64-bit CPU, but it would be close enough.

If a single app needed more than 4GB, it would have to make use of special calls to properly let the OS know what it's trying to do.

A 64bit flat memory model is still best for large memory.

Re:32 bit servers in 2011? (1)

Archangel Michael (180766) | about 2 years ago | (#37920478)

That's 1 Terabyte of RAM. Which should be enough for everybody

I think I've heard that before. It wasn't true then, it isn't true now. I ALREADY have systems with 265 GB Ram in them, and looking to get even more.

Re:32 bit servers in 2011? (1)

Imbrondir (2367812) | about 2 years ago | (#37920918)

It was meant as a joke. Though I hope you'll share what you use more than 256 GB of RAM for.

Re:32 bit servers in 2011? (1)

bill_mcgonigle (4333) | about 2 years ago | (#37921144)

I think I've heard that before. It wasn't true then, it isn't true now.

You left off the emoticon in his quote before you deadpan refuted his sarcasm.

Re:32 bit servers in 2011? (1)

falzer (224563) | about 2 years ago | (#37922130)

265? That's an odd number.

Re:32 bit servers in 2011? (1)

raddan (519638) | about 2 years ago | (#37918110)

There are plenty of applications that don't need to be able to address 64 bits worth of memory. Think webapps. Lots of cores with fast I/O are what you want. Core speed itself is less important since you're usually I/O bound.

Re:32 bit servers in 2011? (2)

Alioth (221270) | about 2 years ago | (#37919350)

Not all servers accommodate huge databases. There are plenty of servers that have to service high numbers of users for tasks which are not computationally or memory intensive. 32-bit is likely to be better for these kinds of tasks.

So hang on... (1)

CrazyBusError (530694) | about 2 years ago | (#37917672)

Are we going back to transputers again, then?

Re:So hang on... (1)

JasterBobaMereel (1102861) | about 2 years ago | (#37918070)

Yes, but back is not the right word, since the idea (cheap, not very powerful, but many processors) never went away...

Multicore is the same idea but on one chip

Many of the world's fastest computers are based on this ... e.g. BlueGene/L: 106,496 x PowerPC 440 @ 700 MHz ...

The issues are getting the processors/cores to take the load evenly, and writing the software with parallel running in mind; many systems up until recently were bad at this ...

Re:So hang on... (1)

drinkypoo (153816) | about 2 years ago | (#37918286)

We would have to have used them to go back to them, but hardly anyone ever did, since the cost of the hardware AND the cost of the development were both staggering.

Transputers lost out to software solutions.

Like most DSLAMs (4, Informative)

La Gris (531858) | about 2 years ago | (#37917676)

This type of setup is already used in most DSLAMs: full rack, 2 PSUs, cooling, 24- or 48-port (x)DSL cards with ARM CPUs as independent servers, an internal management card and network switch. Think of blade server racks.

Just some back-of-the-envelope numbers... (5, Insightful)

bertok (226922) | about 2 years ago | (#37917806)

Those processors run at only about 1.1 GHz, and ARM isn't quite as snappy on a "per GHz" basis as a typical Intel core because of the power-vs-speed tradeoff, so I figure that a 1.1 GHz ARM quad-core chip has about the same compute power as a single ~3GHz latest-generation Intel Xeon core.

They say they can pack 288 quad-core ARM processors into 4 rack units (with no disks). For comparison, HP sells blade systems that let you pack 16 dual-socket blades into 10 rack units. Populate each socket with a 10-core Intel Xeon, and we're talking 320 cores. So for comparison, that's the equivalent of 72 cores per rack unit with ARM, vs 32 with Intel. The memory density is the other way around, with 72 GB per rack unit for ARM (4GB per four-processor board), and 614 GB with Intel.

So, if you have an embarrassingly parallel problem to solve that can fit into 4GB of memory per node, doesn't use much I/O, and can run on Linux, this might be a pretty good idea.

Re:Just some back-of-the-envelope numbers... (1)

JasterBobaMereel (1102861) | about 2 years ago | (#37918144)

BlueGene/L: each node is a dual-core processor with 4MiB of memory (M not G) and they seem to do OK... it's a case of writing the software to distribute the work correctly ... That's a single system running one application

But this is about servers, not cores ... this gives you 288 servers per 4U chassis. Your blade solution gives you far fewer independent servers, with each server having many more cores and more memory, which is not the market they are aiming at ...

Re:Just some back-of-the-envelope numbers... (1)

Archangel Michael (180766) | about 2 years ago | (#37920544)

The limitation on "servers" per rack is not processors, and hasn't been in a while, it is RAM, at least where I work. We need higher density RAM capability.

Re:Just some back-of-the-envelope numbers... (0)

Anonymous Coward | about 2 years ago | (#37920630)

No, it is G NOT M:
http://upload.wikimedia.org/wikipedia/en/2/27/BlueGeneP_schema.png

Only having 4MB of memory would have been extremely silly. Even the original architecture had 512MB per compute card (4 nodes):
http://upload.wikimedia.org/wikipedia/en/d/da/BlueGeneL_schema.png

Re:Just some back-of-the-envelope numbers... (1)

gl4ss (559668) | about 2 years ago | (#37918398)

A quad-core ARM still has nothing on a 3GHz Intel, not even in things that parallelize well, and with floating point things go even worse (per the semi-recent Tegra benches).
It's kinda sad, really. There's a lot of nifty stuff that could be done in realtime if per-core (per-thread) CPU power went up. Anyhow:
http://www.xbitlabs.com/news/mobile/display/20110921142759_Nvidia_Unwraps_Performance_Benchmarks_of_Tegra_3_Kal_El.html [xbitlabs.com]

"Smack a lot of shitty CPUs in a small case and call it a day" has been done before, "supercomputer under your desk" style; there are some applications for them.

A better comparison for this case would be if someone stuck 72 cheapo dual-core Atoms in a box, for a more meaningful x86 vs. ARM matchup.
And on the power use front I'd rather see some figures about how much power rendering some test raytrace takes on each system.

Re:Just some back-of-the-envelope numbers... (1)

bertok (226922) | about 2 years ago | (#37918740)

I was thinking renderfarm myself, but a) that's 90% about the floating point performance, not integer, and ARM isn't stellar on floating point throughput, and b) a lot of scenes these days are greater than 4GB. While it may be possible to "tile" some scenes, the most compute expensive bit (that you'd want to accelerate the most) is global illumination, which basically needs the whole scene in RAM.

Being forced to stay under some arbitrary scene complexity limit would suck, especially with tools like ZBrush that can generate billion-polygon models.

Re:Just some back-of-the-envelope numbers... (1)

gbjbaanb (229885) | about 2 years ago | (#37919438)

but it's not for a single server that runs a renderfarm, that's not the target market. (for that you'd want dedicated graphics type chips anyway)

These are to support cloud and web type servers. Note that the intent here is not to provide a single massively virtualized server that you cram hundreds of paying customers onto, but to create a single server that runs 4000 individual OSs. At 1.25W per OS that makes a huge cost saving for most datacentres that are filled with web servers that pretty much don't need CPU power anyway.

Compare to the intel version - at 100W for that 3ghz chip (plus all the other chips that are already integrated into the ARM SoC), you'd have to run 100 VMs to get the same power consumption levels, at potentially drastically reduced performance per user.

I guess you could stick a clustered OS on them all and then run it as a single OS that does parallel tasks well but single-core tasks poorly, but again - that's not the target market for these things.

Re:Just some back-of-the-envelope numbers... (1)

Anonymous Coward | about 2 years ago | (#37919050)

TFA says they run at 1.4GHz.

Re:Just some back-of-the-envelope numbers... (1)

bill_mcgonigle (4333) | about 2 years ago | (#37921174)

So, if you have an embarrassingly parallel problem to solve that can fit into 4GB of memory per node, doesn't use much I/O, and can run on Linux, this might be a pretty good idea.

I'd imagine people who do 'cloudy' things like remote voice recognition for cell phones are jumping up and down and not renewing all their rackspace commitments.

Now, let's see if HP can actually deliver or if the 6th CEO from now fails to understand how this sells ink.

Re:Just some back-of-the-envelope numbers... (1)

DerekLyons (302214) | about 2 years ago | (#37921826)

While comparing the performance specs is sexy and nerdish and l33t and all that - you leave off important bits, like power consumption and heat production. These matter in the real world of engineering data centers.

Has some similarity to Bluegene supercomputers (1)

markus_baertschi (259069) | about 2 years ago | (#37917976)

This looks to me to be similar to the Bluegene supercomputers. A Bluegene essentially consists of packaged PowerPC processors with a scalable high-performance switch interface on board. The first two Bluegene generations used 32-bit CPUs as well.

Markus

Redstone? (1)

drfishy (634081) | about 2 years ago | (#37918060)

Make a Minecraft themed one and I will find a reason to need it.

Really? (1)

Sez Zero (586611) | about 2 years ago | (#37918626)

So, HP, are you really going to do this or should I just wait a few weeks and wait for the cancellation announcement?

'Cause recently you guys have been a little wishy-washy...

Re:Really? (1)

dccase (56453) | about 2 years ago | (#37919294)

I will certainly try to get one for $99 at next year's fire sale.

Hope they won't take this any further... (0)

Anonymous Coward | about 2 years ago | (#37918828)

...because 288 cores ought to be enough for anyone.

Honestly curious (1)

bberens (965711) | about 2 years ago | (#37919234)

Where would this fit in the market? My first thought is things with a high number of threads but low compute complexity, like web servers, but Oracle essentially flopped in that arena with their UltraSPARC or whatever it was with a bunch of threads. It's possible ARM is very fast, but I'm only accustomed to seeing it in set-top boxes, phones, and such. My understanding is they're great on power consumption but not so great on compute speed...

Re:Honestly curious (1)

Amouth (879122) | about 2 years ago | (#37919716)

Oracle essentially flopped in that arena with their ultrasparc or whatever it was with a bunch of threads

It was Sun who did it before Oracle bought them - it was the Niagara CPU line. It didn't flop; for the people who needed it and were Sun customers it was wonderful, but outside of that ecosystem it had nearly zero application. Then Oracle bought Sun, and well, everything seems to have flopped from that.

H-who? (0)

Anonymous Coward | about 2 years ago | (#37919450)

what? didn't they stop selling pcs?

But... (1)

strangel (110237) | about 2 years ago | (#37920020)

...does it run Android?

They tried this with Crusoe too (1)

Gothmolly (148874) | about 2 years ago | (#37920548)

We got an enclosure full of Transmeta blades, and the performance just sucked. I could see this MAYBE for a VDI solution for a lot of low-power users, but that's it.

Applications? (1)

FunkyELF (609131) | about 2 years ago | (#37920768)

What kind of applications would this be used for? The only thing I can think of would be web hosting. Does KVM / Xen even work on ARM?

There wouldn't be any serious enterprise applications that run on ARM (right now), would there? Java?
