
Baserock Slab Server Pairs High-Density ARM Chips With Linux

timothy posted about a year and a half ago | from the no-need-only-lust dept.

Data Storage 51

Nerval's Lobster writes with a report at Slash Datacenter that a portion of the predicted low-power ARM-server future has arrived, in the form of Codethink's Baserock Slab ARM Server, which puts 32 cores into a half-depth 1U server. "As with other servers built on the ARM architecture, Codethink intends the Baserock Slab for data centers in need of extra power efficiency. The Slab supports Baserock Linux, currently in its second development release (known as 'Secret Volcano'), as well as Debian GNU/Linux. While Baserock Linux was first developed around the x86-64 platform, its developers planned the leap to the ARM platform. Each Slab CPU node consists of a Marvell quad-core 1.33-GHz Armada XP ARM chip, 2 GB of ECC RAM, a Cogent Computer Systems CSB1726 SoM, and a 30 GB solid-state drive. The nodes are connected to the high-speed network fabric, which includes two links per compute node driving 5 Gbit/s of bonded bandwidth to each CPU, with wire-speed switching and routing at up to 119 million packets per second."
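
As a quick sanity check on those figures, here is a back-of-the-envelope sketch in Python (the 2.5 Gbit/s per-link figure is inferred from the two-links, 5 Gbit/s bonded claim; the eight-node count follows from 32 cores at four per chip):

<ecode>
# Back-of-the-envelope check of the Slab's headline figures.
# Assumed: 2.5 Gbit/s per link, inferred from "two links per
# compute node driving 5 Gbit/s of bonded bandwidth".

CORES_PER_NODE = 4                 # Marvell Armada XP is quad-core
TOTAL_CORES = 32                   # per the summary
nodes = TOTAL_CORES // CORES_PER_NODE      # -> 8 compute nodes

LINKS_PER_NODE = 2
GBIT_PER_LINK = 2.5                        # assumption, see above
bonded = LINKS_PER_NODE * GBIT_PER_LINK    # -> 5.0 Gbit/s per node
fabric_ingress = nodes * bonded            # -> 40.0 Gbit/s aggregate

print(f"{nodes} nodes, {bonded} Gbit/s bonded each, "
      f"{fabric_ingress} Gbit/s aggregate into the fabric")
</ecode>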

51 comments

Slashvertisment (3, Insightful)

daniel23 (605413) | about a year and a half ago | (#41096325)

The summary is almost unreadable, too

Re:Slashvertisment (0)

Anonymous Coward | about a year and a half ago | (#41096395)

It's also nothing more than a thinly-veiled way to drive up page hits to the SlashBI site. Nerval's Lobster is the pseudonym used by the guy who writes 90% of the SlashBI stories.

Re:Slashvertisment (1)

gman003 (1693318) | about a year and a half ago | (#41096615)

To be fair, this one was actually mildly interesting compared to the inanity and insanity of most /BI posts.

Re:Slashvertisment (2, Funny)

Anonymous Coward | about a year ago | (#41097611)

Only slashdot can make "bi" posts uninteresting.

As usual the key information is missing (5, Insightful)

godrik (1287354) | about a year and a half ago | (#41096423)

The main question is how many GFLOPS per watt you get out of it, or the number of transactions per watt. Saying it is ARM so it is energy efficient is as stupid as saying it is pink so it is pretty.

Some applications are best processed (energy-wise) by a kick-ass, power-hungry GPU. Who cares if you consume a lot of electricity if you get tremendous throughput?
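
To make that concrete, a toy comparison with entirely hypothetical numbers (nothing here is a measured figure for the Slab or any particular GPU):

<ecode>
# Hypothetical perf/watt comparison -- all numbers are made-up
# placeholders, NOT measured figures for the Slab or any GPU.
systems = {
    "ARM box":  {"gflops": 50.0,   "watts": 150.0},
    "GPU node": {"gflops": 1000.0, "watts": 600.0},
}
for name, s in systems.items():
    print(f"{name}: {s['gflops'] / s['watts']:.2f} GFLOPS/W")
# The power-hungry GPU can still win on energy per unit of work:
# here 1.67 GFLOPS/W vs 0.33 GFLOPS/W for the "low-power" box.
</ecode>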

Re:As usual the key information is missing (1)

the_humeister (922869) | about a year and a half ago | (#41096517)

Also, no 64-bit, so it might be memory-constrained compared to other architectures.

Re:As usual the key information is missing (1)

fuzzyfuzzyfungus (1223518) | about a year and a half ago | (#41096641)

I suspect that it is particularly memory-constrained by there being 2 GB of RAM hard-soldered to each compute card...

I think that the Armada XPs used on these things support LPAE, so it would theoretically be possible to have more than 4 GB of RAM, though still with the 32-bit constraint on per-process addressing. For whatever reason, it looks like they went with substantially less RAM than even the 4 GB one might have expected.
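
For reference, the arithmetic behind those limits; a small sketch assuming a 32-bit per-process virtual space and LPAE's 40-bit physical space:

<ecode>
# Address-space arithmetic for 32-bit ARM with LPAE.
GiB = 2**30

virtual_bits = 32      # per-process virtual address space
physical_bits = 40     # LPAE physical address space

per_process = 2**virtual_bits / GiB        # -> 4.0 GiB per process
total_physical = 2**physical_bits / GiB    # -> 1024.0 GiB (1 TiB)
print(f"per-process limit: {per_process:.0f} GiB, "
      f"physical limit: {total_physical:.0f} GiB")
</ecode>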

Re:As usual the key information is missing (1)

exabrial (818005) | about a year and a half ago | (#41096741)

Actually, I believe current-generation ARM processors address memory using 40 bits, not 32. I'm trying to dig up a reference, though; I could be dreaming this up.

Re:As usual the key information is missing (1)

Desler (1608317) | about a year and a half ago | (#41096871)

You're thinking of the Cortex-A15 processors, which introduce 40-bit addressing; there aren't any on the market yet.

Re:As usual the key information is missing (1)

fuzzyfuzzyfungus (1223518) | about a year and a half ago | (#41096895)

I think that ARM is currently where x86 was before the 64-bit move: at least some of the classier chips have a PAE-like scheme that allows more than 4GB of address space, but with limits on how effectively any single process can access more than it would on a 32-bit system. And, also similar to PAE-era x86, it isn't terribly common to actually find ARM systems kitted out with even 4GB of RAM, especially at the price points that don't involve a visit from a sales team.

Re:As usual the key information is missing (1)

Jeremy Erwin (2054) | about a year ago | (#41097975)

A number of 64-bit chips were released with smaller address buses -- the PowerPC G5, for instance, had a 42-bit address bus. My Core 2 Duo has a 36-bit address bus. This is flat, mind you, not banked like PAE.

Re:As usual the key information is missing (3, Interesting)

exabrial (818005) | about a year and a half ago | (#41096907)

Sorry, I was wrong. Current-generation Cortex-A9 processors support up to 4 GB _per process_ using some virtualization tricks. The Cortex-A15 has 40-bit physical addressing, supporting up to 1 TB of RAM. A15 processors are just being released right now...

Re:As usual the key information is missing (1)

tgd (2822) | about a year and a half ago | (#41096745)

The main question is how many GFLOPS per watt you get out of it, or the number of transactions per watt. Saying it is ARM so it is energy efficient is as stupid as saying it is pink so it is pretty.

Some applications are best processed (energy-wise) by a kick-ass, power-hungry GPU. Who cares if you consume a lot of electricity if you get tremendous throughput?

No, all the important information for this advertisement is there -- the link to Slashdot's other site with its full-page advertisement.

Re:As usual the key information is missing (3, Insightful)

nullchar (446050) | about a year and a half ago | (#41096961)

From the fine "article":

Typical ARM cores consume just a fraction of the power of an X86-based server. While Codethink hasn’t outright disclosed the actual power needs of the Slab, its 260-watt power supply offers something of a clue. Meanwhile, the forward-compatible SOMs (server object managers) will allow operators to replace the CPUs with newer models.

First, it's like the GP said: "it's ARM, therefore it's low power," without giving any specifications. To market this, they would really need tested specs from a decent benchmark tool.

Finally, to praise the quality of the "article": I thought "SoM" meant System on Module [wikipedia.org]. A "server object manager" sounds like something running inside a Java virtual machine.

I don't understand how Geek.net thinks attaching poor-quality blog posts (they're not really articles) to the Slashdot brand will help them... Slashdotters see through those BI/Cloud/DataCenter posts every time.

Re:As usual the key information is missing (4, Interesting)

hattig (47930) | about a year and a half ago | (#41096839)

Total data centre power consumption is a major problem. We have the space in the racks for more servers, but no more power. In that case, getting (example figures) 50% of the compute performance at 25% of the power consumption is totally worth it.

The problem for these ARM servers is whether a 64-core cluster in 150W beats a quad-core low-power x86 server in 150W. "Beating" in this situation means performance, cost, or both.
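
Continuing with those example figures, a sketch of what the trade-off looks like under a fixed rack power budget (all numbers illustrative):

<ecode>
# Rack-level throughput under a fixed power budget -- illustrative
# numbers only, following the parent's "50% performance at 25% power".
RACK_POWER_W = 6000.0

x86 = {"perf": 1.00, "watts": 150.0}   # baseline quad-core x86 box
arm = {"perf": 0.50, "watts": 37.5}    # 50% of the perf at 25% power

for name, srv in (("x86", x86), ("ARM", arm)):
    n = int(RACK_POWER_W // srv["watts"])
    print(f"{name}: {n} servers, total perf {n * srv['perf']:.0f}x baseline")
# x86: 40 servers -> 40x; ARM: 160 servers -> 80x. Under a hard power
# cap the "slower" nodes deliver twice the rack-level throughput --
# assuming the workload actually scales across 4x as many boxes.
</ecode>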

Re:As usual the key information is missing (1)

godrik (1287354) | about a year ago | (#41097687)

I can understand that. But do you ACTUALLY get 50% of the computing power for 25% of the electric power? You still need disks and memory running. Less computing power means you might need to increase the number of nodes, which means more network equipment, fans, ...

Is it really worth it? Note that this is a real question, not a rhetorical one.

Floating point (1)

tepples (727027) | about a year and a half ago | (#41097255)

The main question is how many GFLOPS per watt you get out of it

Provided your workload is floating-point heavy. ARM has historically been weak at floating-point arithmetic, but I'm under the impression that ARM might do better per watt than x86 on integer workloads.

Re:Floating point (2)

Desler (1608317) | about a year ago | (#41097473)

The Cortex-A15 is, according to ARM, supposed to be much, much beefier for floating point, with better NEON performance. Plus, with 40-bit physical addressing, it could be quite an impressive competitor.

Re:Floating point (0)

Anonymous Coward | about a year ago | (#41098415)

These new kinds of servers will also have to compete with the Intel Xeon Phi [intel.com], in both performance and density.

Re:Floating point (1)

godrik (1287354) | about a year ago | (#41097871)

Any metric will be good for me. If you prefer HTTP requests per watt, I am fine with that. The performance will depend heavily on the application anyway. Without actual numbers it is difficult to know whether it is interesting.

My bleeding eyes... (2)

fuzzyfuzzyfungus (1223518) | about a year and a half ago | (#41096489)

I seriously hope that the mechanical design isn't as nasty as the rendering makes it look...

So, we've got a 260-watt PSU in a half-depth 1U. By my count, there are nine of those weedy little low-profile fans that start buzzing on cheap GPUs after about a week, plus one blower and a 40mm fan in the PSU. Also, there are air intake/exhaust slits on the front and rear of the case (which could be a problem, since the manufacturer recommends mounting them back-to-back to achieve full rack density); but none on the sides and (as best one can tell from the rendering) no obvious flow path from intake to exhaust, just a lot of churn.

I can only hope that this is a low volume product, for which doing actual case design was uneconomic...

Re:My bleeding eyes... (2)

hamjudo (64140) | about a year and a half ago | (#41096813)

It is less than half depth. There is a gap for hot air between the front and back units. In the pictures and animation on the Baserock site there are more ventilation slots. It appears that air enters each unit through the front and both sides, and exits through the back. This will produce a chimney of heat in the center of each rack.

Re:My bleeding eyes... (1)

gbjbaanb (229885) | about a year and a half ago | (#41096903)

Which is a nice idea - cables and heat in the centre of the rack rather than having a hot aisle and a cold aisle. Of course, cabling them up might be tricky, but as they're only half-depth, it should be easy to pull them out for access to the back ends.

Still, ARM SoCs aren't known for producing massive amounts of heat, so I think the cheapo fans are there for show more than anything. But I agree - a better-designed case with air flowing from front to back would be more efficient. The current design would pull air in and push warm air out in every direction; there's no flow through the case that I can see.

Re:My bleeding eyes... (0)

Anonymous Coward | about a year and a half ago | (#41097155)

Verari had the patent for vertical cooling of servers. It belongs to Cirrascale now.
Cold air intake is at the bottom and hot air exhausts from a chimney on top of the cabinet.
Otherwise, everyone would be building racks this way.

Re:My bleeding eyes... (0)

Anonymous Coward | about a year ago | (#41098331)

Have you ever touched an ARM processor running at full speed?
The fans will probably generate more heat than the processors.

Re:My bleeding eyes... (1)

fuzzyfuzzyfungus (1223518) | about a year ago | (#41099439)

Have you ever touched an ARM processor running at full speed?
The fans will probably generate more heat than the processors.

That's part of why I find the thermal design kind of horrifying. It's perfectly understandable (if annoying when you are in the same room) that power-dense servers sound like the turbojet-powered souls of the damned. 1U Xeon/Opteron boxes, especially the dual-socket ones, are absolutely rotten with screaming 40mm fans.

In this case, though, we've got maybe 250 watts total, and there are 11 fans fapping away, most of them just churning the big chaotic flow region in the middle of the box. I have no reason to doubt that it works; it just appears to be infested with unnecessary moving parts, which is not a virtue.

Re:My bleeding eyes... (0)

Anonymous Coward | about a year ago | (#41103117)

> "there are 11 fans fapping away, most of them just churning the big chaotic flow region in the middle of the box."

Watch this video [youtube.com], starting at the 36-second mark. The section of the chassis that holds the system boards is closed off and has a fan in the back that pulls (or pushes) the air from that section.

It would be nice if they provided some larger, more finished images/layouts of the design, but between reading the two links and watching the video, it appears they won't just have air swirling around in the middle of the chassis.

Re:My bleeding eyes... (0)

Anonymous Coward | about a year ago | (#41103147)

This PDF [baserock.com] shows what I'm describing pretty clearly.

As everyone else seems to be missing... (1)

Anonymous Coward | about a year and a half ago | (#41096571)

This isn't an SSI (single system image), either. The interconnects are actually 2x2.5-gigabit Ethernet links per node to a '24 port switch', with Ethernet bonding, and 2x10-gigabit outputs for interlinking modules. That's from the site.

I was kinda curious what sort of ARM chips were available with actual interconnects. Combined with the lousy 2 GB of memory per module, these things sound like a very expensive FAIL for anything other than front-end web services.
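
Taken at face value, those figures imply a 2:1 oversubscribed uplink; a quick sketch, assuming eight compute nodes per module all driving their bonded links at line rate:

<ecode>
# Oversubscription of the Slab's uplinks, per the figures above.
# Assumed: 8 compute nodes per module (32 cores / 4 per chip).
nodes = 8
ingress_per_node = 2 * 2.5          # two bonded 2.5 Gbit/s links
uplinks = 2 * 10.0                  # two 10 Gbit/s module interlinks

ingress = nodes * ingress_per_node  # -> 40.0 Gbit/s from compute nodes
ratio = ingress / uplinks           # -> 2.0
print(f"{ingress} Gbit/s in, {uplinks} Gbit/s out: "
      f"{ratio:.0f}:1 oversubscription")
</ecode>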

What workload would actually work with this? (1)

ebunga (95613) | about a year and a half ago | (#41096791)

HPC wants fast everything and tons of RAM. Virtualization wants tons of RAM and tons of I/O. Non-parallelizable workloads need fast everything, tons of RAM, and tons of I/O. As far as I can tell, this thing seems like a proof of concept more than anything.

Re:What workload would actually work with this? (2)

fuzzyfuzzyfungus (1223518) | about a year and a half ago | (#41097055)

My guess would be that this is the 'almost as good, but built out of cheap commodity stuff and therefore a lot cheaper' stab at the same niche Sun was going after with its "T1" and "T2" cores and the T1000 and successor servers based on them. I don't know how well it worked out in practice (obviously not well enough to save Sun, but this was just one product line among others); the theory was to target certain web and small-database-many-users workloads that tended to have a large number of computationally (especially floating-point) undemanding threads in flight at a time.

The Sun version had the advantages of being a single system image, plus support for various Big UNIX Vendor goodies (system partitioning, fancy memory error correction, and friends); but I doubt it had the advantage of costing as little as dinky ARM compute boards do...

Re:What workload would actually work with this? (0)

Anonymous Coward | about a year ago | (#41097807)

Cheap? Commodity? One server costs $10,000!

Re:What workload would actually work with this? (1)

gman003 (1693318) | about a year ago | (#41097769)

Mostly-static web server, maybe? Hook it up to a SAN for storage and let it cache in RAM or on the internal SSD. A large number of small cores would alleviate many of the problems in handling thousands of concurrent connections, and if none of the pages require intensive calculation, it could work pretty well.
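
As a toy illustration of that workload (not anything shipped with the Slab), a minimal pre-forking static file server that puts one worker on each core so many slow connections can be accepted in parallel:

<ecode>
# Toy pre-forking static-file server -- an illustration of the
# "many cheap cores, many slow connections" pattern, not Slab code.
# Unix-only (uses os.fork); serves the current directory on PORT.
import os
import socketserver
from http.server import SimpleHTTPRequestHandler

PORT = 8080  # hypothetical port

class Server(socketserver.TCPServer):
    allow_reuse_address = True

if __name__ == "__main__":
    httpd = Server(("", PORT), SimpleHTTPRequestHandler)
    # Pre-fork one worker per core: children inherit the bound
    # listening socket and take turns accepting connections.
    for _ in range((os.cpu_count() or 1) - 1):
        if os.fork() == 0:
            break  # child: stop forking, go serve
    httpd.serve_forever()  # parent and children all serve
</ecode>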

Re:What workload would actually work with this? (0)

Anonymous Coward | about a year ago | (#41098757)

It was designed for natively building ARM software, but it would work well for any embarrassingly parallel problem.

ARM servers... (1)

Bert64 (520050) | about a year and a half ago | (#41096991)

I've been looking for a 1U, non-x86, low-power server (i.e. designed to run 24/7, with proper cooling, GigE, multiple disks, etc.) for quite some time... I've read about various ARM servers, as well as the Chinese Loongson MIPS-based boards, regularly for a couple of years now...

And what do all these things have in common? None of them are actually available to purchase anywhere!

Re:ARM servers... (0)

Anonymous Coward | about a year ago | (#41101461)

Soekris.com designs and sells low-power (20 W) fanless servers. They're kinda pricey, and they're x86, but they're well respected and well supported by the Linux and *BSD kernels. The net6501 came out last year and is pretty nice. I have one in my office running VPN, etc., and will get a rackmount net6501-70 to replace a Sun Fire X2100. The net6501 has a tiny fraction of the performance of the X2100, but for that particular server I value longevity and robustness over everything else (I'll drop in two 20 GB SLC SSDs). And it also uses a fraction of the power -- the biggest expense at colocation facilities.

Also, the Atom E6xx in the net6501 is actually 64-bit capable, even though the Intel specs claim it's only a 32-bit part.

price? (0)

Anonymous Coward | about a year ago | (#41097555)

No mention anywhere of a price for this thing... Not sure how they expect to attract any customers without mentioning the price; even a rough estimate would help.

Woz Thinks Clouds Are Uncool, but... (1)

fm6 (162816) | about a year ago | (#41098015)

It's pretty clear that data centers are rapidly turning into service providers that sell VM time and maybe add value in the form of SaaS. That's true even for internal data centers that are used only by the companies that own them -- they just use a different billing procedure for their customers.

So, a serious developer doesn't buy a 1U server and rent colo space. He buys VM time and any other services he needs, and lets the provider worry about the hardware. Much more cost-effective, and much easier to scale up when the app becomes popular.

And providers who cater to this new paradigm are not going to bother with variant architectures. The only way they can compete is by making their infrastructure as generic as possible. This minimizes their costs and maximizes their customer base. So x86 processors have won the data center wars the same way they won the desktop wars. There are good reasons to regret this fact, but a fact it is.

Products that ignore the above trends are just wishful thinking.

Re:Woz Thinks Clouds Are Uncool, but... (1)

Kr1ll1n (579971) | about a year ago | (#41098303)

Except for the cases where someone may be doing mobile application development...

Re:Woz Thinks Clouds Are Uncool, but... (2)

fm6 (162816) | about a year ago | (#41099531)

You have a very strange idea of how mobile apps work.

Re:Woz Thinks Clouds Are Uncool, but... (0)

Anonymous Coward | about a year ago | (#41101339)

You underestimate how hard it can be to cross-build sometimes. To get something to cross-compile, you have to maintain patches that upstream doesn't want to take, so whenever you want to update to a new version you effectively have to rewrite the patch, if any significant development has happened.

So if you have to compile your app, it's handy to have something of the same architecture to do the build, and the Slab has been designed to make that fast: it has some of the fastest available ARM cores (alas, in the same speed bracket as the Atom) with a fast Ethernet interconnect, so that distcc can be used effectively.
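
A rough Amdahl-style sketch of why a farm of modest native cores plus distcc can still build quickly; all timings here are hypothetical:

<ecode>
# Amdahl-style estimate of distcc build speedup -- timings hypothetical.
serial_s = 60.0        # configure + link: cannot be distributed
compile_s = 1800.0     # compilation: distcc can fan this out

for cores in (1, 4, 32):
    total = serial_s + compile_s / cores
    print(f"{cores:>2} cores: {total / 60:.1f} min")
# 1 core: 31.0 min; 4 cores: 8.5 min; 32 cores: 1.9 min -- the serial
# fraction eventually dominates, but native building avoids the
# cross-compile patch burden described above.
</ecode>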

Re:Woz Thinks Clouds Are Uncool, but... (1)

fm6 (162816) | about a year and a half ago | (#41103855)

None of which has anything to do with data center computing.
