
12-Core ARM Cluster Beats Intel Atom, AMD Fusion

timothy posted more than 2 years ago | from the 16-pages-seriously dept.

Hardware Hacking 105

An anonymous reader writes "Phoronix constructed a low-cost, low-power 12-core ARM cluster running Ubuntu 12.04 LTS, built from six PandaBoard ES boards, each carrying a dual-core OMAP4460 ARMv7 Cortex-A9. Their results show the ARM hardware outperforming Intel Atom and AMD Fusion processors in performance-per-watt, though it loses sharply to the latest-generation Intel Ivy Bridge processors." This cluster offers a commendable re-use of kitchenware. Also, this is a good opportunity to recommend your favorite de-bursting tools for articles spread over too many pages.


Were they bored? (2)

JoeMerchant (803320) | more than 2 years ago | (#40344499)

Or, could they just not do the MIPS/Watt calculations without actually building the thing?

Re:Were they bored? (4, Informative)

smallfries (601545) | more than 2 years ago | (#40344537)

What would calculating the theoretical peak tell them about the (real) sustained performance?

Partitioning the problem into chunks that can be distributed to the nodes in the cluster adds overhead. Assembling the finished results does the same. It is hard to predict what this overhead will be, as it depends on the interconnect. In this case they used 100Mb/s Ethernet, but there was contention from running NFS over the same network. Building it and measuring it is the only way to find out what kind of performance you really get.
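The overhead argument can be put into a hedged back-of-envelope model. Everything below is illustrative: the node count matches the article's six boards, but the serial fraction and per-node communication cost are invented assumptions, not measurements from the article:

```python
# Back-of-envelope model of cluster speedup with communication overhead.
# All numeric inputs are illustrative assumptions, not measured values.

def effective_speedup(nodes, serial_fraction, comm_overhead_per_node):
    """Amdahl-style speedup, discounted by a per-node cost for
    partitioning work and reassembling results over the interconnect."""
    parallel_fraction = 1.0 - serial_fraction
    ideal = 1.0 / (serial_fraction + parallel_fraction / nodes)
    # Communication cost grows with node count; on shared 100Mb/s
    # Ethernet (with NFS contention) this fraction is hard to predict.
    overhead = comm_overhead_per_node * nodes
    return ideal / (1.0 + overhead)

# Six boards, assuming 5% serial work and 2% overhead per node:
print(round(effective_speedup(6, 0.05, 0.02), 2))  # ~4.29, not 6.0
```

The point of the sketch is only that the gap between the ideal and effective numbers depends entirely on the overhead term, which is exactly what you cannot know without building and measuring.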

Re:Were they bored? (0)

JoeMerchant (803320) | more than 2 years ago | (#40344619)

Well, it's hard to get first post and simultaneously develop a complete explanation of the concept, but...

They have provided yet another valuable data point in the theoretical-peak-vs.-sustained-performance testing set, but, again, this is widely studied, fairly well characterized, and predictable with a bit of research and thought experiment.

Reading the article (also impossible to do under first-post time constraints) reveals that they had a particular idea about using a wooden dish strainer to rack the boards... so, yeah: pile of free boards, an idea about how to rack them, lots of free time, publish an article, get it on /., profit? I bet they'll get at least $10 worth of ad-clicks.

Re:Were they bored? (-1)

Anonymous Coward | more than 2 years ago | (#40346427)

I can wham.
I have the power to wham.
I've been given a gift for whamming.
I shall wham!
I shall wham the bootysnapcheekcrackholes of all Gamemakerlessnesses!

Return, return, return, return, return to Gamemakerdoooooooooooooooooom!

Re:Were they bored? (5, Insightful)

davydagger (2566757) | more than 2 years ago | (#40344551)

Half the fun is building it; it's a good excuse to build a 12-core mini-cluster. I think this is nothing more than some nerd showing off his latest toy, which is not a bad thing. This 12-core cluster might be useful, at the very least at the proof-of-concept stage. I can imagine uses for a highly parallel mini-supercomputer on an affordable budget.

Re:Were they bored? (0)

Anonymous Coward | more than 2 years ago | (#40345739)

12 ARM boards = $2,184 (at $182 each)
12 Atom mini-ITX boards = $744 (at $62 each)

The guy that uses Atom has $1,440 left over to buy beer.

Re:Were they bored? (4, Informative)

timeOday (582209) | more than 2 years ago | (#40344557)

I think independent testing of this sort is tremendously valuable.

What I don't understand is why the summary is focused on ARM beating Atom when the overall winner - in performance, in performance per watt, and in cost - was the Intel Ivy Bridge... by a huge margin.

Re:Were they bored? (1)

Anonymous Coward | more than 2 years ago | (#40344717)

OMAP4460 45nm
Atom 45nm, or 32nm for newer models
Ivy Bridge 22nm

Re:Were they bored? (2)

TheLink (130905) | more than 2 years ago | (#40345693)

Yeah and my car is based on old tech so it has crappier performance than more modern cars.

Whatever the excuse, a loss is still a loss.

ARM is way better for low power consumption stuff, but if you want performance/watt, Intel still leads.

Re:Were they bored? (3)

Patch86 (1465427) | more than 2 years ago | (#40346717)

News is "Ford Fiesta better than Fiat Panda". News is not "Ford Fiesta worse than BMW 7 series".

Of course Ivy Bridge is better. It'd be pretty shocking if it weren't.

Re:Were they bored? (5, Insightful)

Idbar (1034346) | more than 2 years ago | (#40344721)

I think the confusion is that people think Atom is analogous to ARM. People keep confusing the fact that ARM is a processor core and Atom an SoC solution. It makes no sense comparing apples to oranges. An appropriate comparison would be an SoC from TI, Qualcomm, or Samsung.

Re:Were they bored? (1)

Anonymous Coward | more than 2 years ago | (#40344887)

Well, it was an SoC from TI, but any current-generation SoC with the same core will get reasonably similar performance for CPU-bound tasks.

And the Atom they benchmarked wasn't an SoC anyway (only Medfield is), so... enjoying your trip?

Re:Were they bored? (1)

Idbar (1034346) | more than 2 years ago | (#40345385)

I'm not against this type of benchmarking; I actually enjoy reading people's write-ups. On the other hand, I don't think it's fair to compare a cluster of laptops vs. a cluster of desktops. It's fun... but without the proper metrics, it's useless.

How many cores can you fit in a cubic meter, for example? What's the performance per watt per cubic meter? What's the performance of a solution like the Tegra? How do you account for the difference in added hardware, like radios or GPUs?

Re:Were they bored? (4, Insightful)

Dcnjoe60 (682885) | more than 2 years ago | (#40345217)

I think the confusion is that people think Atom is analogous to ARM. People keep confusing the fact that ARM is a processor core and Atom an SoC solution. It makes no sense comparing apples to oranges. An appropriate comparison would be an SoC from TI, Qualcomm, or Samsung.

But then how could they generate media hype by announcing they are outperforming Intel?

Re:Were they bored? (1)

aliquis (678370) | more than 2 years ago | (#40344777)

To get readers?

Re:Were they bored? (2)

Kjella (173770) | more than 2 years ago | (#40344955)

What I don't understand is why the summary is focused on ARM beating Atom when the overall winner - in performance, in performance per watt, and in cost - was the Intel Ivy Bridge... by a huge margin.

Because this is Slashdot, and the AMD/ARM-vs-Intel bias is almost as strong as Linux-vs-Windows? AMD's best selling point is its APUs, but in reality Intel is the one favored most by the move to decent integrated graphics: people still buy Intel, but now, instead of an AMD/nVidia entry-level card, many just stick with the integrated one, making GPU market share look more like CPU market share. And Intel is the one with a half-decent ARM competitor (Intel Medfield); AMD isn't ready to play in that arena at all. And don't get me started on Bulldozer and the high end...

Re:Were they bored? (2)

LostMyBeaver (1226054) | more than 2 years ago | (#40349503)

Agreed... and frankly, I thought the comparison was utter crap. Really, a first-generation Atom against a modern ARM? The first-generation Atom was utter crap and solved no problem other than providing a cheap Atom-based platform to play with. What about the N2600 or, even better, the Medfield (had to google for ages for that name, haha)? The Atom 330 was just not worth it.

Re:Were they bored? (1)

ddd0004 (1984672) | more than 2 years ago | (#40345257)

Actually, I bet they did it to attract women.

Re:Were they bored? (1)

hairyfeet (841228) | more than 2 years ago | (#40347343)

Not to mention, the point is... what, exactly? The whole selling point of ARM is how long it will run on a battery; plugged in, the difference between, say, 7W and 12W really isn't enough to get your panties in a twist over. And while ARM may use less power while doing work, there is no denying that Intel and AMD hold the IPS crown by a pretty wide margin, even more so if your code can use OpenCL and put both halves of a Fusion APU to useful work.

I just don't get all the "us vs. them" bullshit lately, except maybe it's good for a nice flamewar or fanboi circle jerk. Saying ARM is gonna defeat x86 is as insane as saying "this moped is gonna replace the trucking industry!" Does that mean the moped is bad? No; in fact, if you simply need to move one individual and maybe a single bag of groceries, the moped is the more cost-effective way of doing so. But with the exception of a few TINY niches, x86 and ARM really aren't in competition with each other; they just aren't. A quad Xeon will slaughter in the server space, a Fusion APU desktop or laptop will do more per cycle than a good 80%+ of the population can come up with, and ARM gets insane battery life in mobile phones and tablets. So can't we all just... get along?

Re:Were they bored? (1)

bluegreen997 (2096462) | more than 2 years ago | (#40348421)

You are thinking like an engineer not a marketeer. And any guess which side writers/editors are going to lean towards? Even in the field that writes about engineering stuff?

Re:Were they bored? (1)

hairyfeet (841228) | more than 2 years ago | (#40349073)

Well, the nice thing I've found, which is why I ignore most of the benches, is that x86 has gone so far beyond good enough, into insanely overpowered, that even a low-end system simply never gets stressed; users can't come up with enough work for the chip to do.

Last year I built my dad a Phenom I quad because I found a kit cheap. Most here would consider that a pretty weak chip; we're talking a 2.1GHz first-gen Phenom. Now guess what I found? After three months he had simply never gone above 50% CPU utilization; he didn't have enough useful work to stress even a six-year-old chip. Facebook plus chat plus Internet TV couldn't push the chip hard enough to stress the system. I myself traded in my full-size laptop for an E350 EEE netbook, simply because the work I had to do while mobile wasn't putting any stress on my old dual-core, and with the new unit I get nearly 6 hours watching 720p video and even longer just surfing.

So while I can see why the editors did it (just like SJVN and Thurrott, they're trolling for page views), what about the user? Right now, unless you are talking about a phone or tablet, even the lowest x86 will give them more cycles than they will ever really use. Hell, I still sell plenty of Pentium Ds and Athlon X2s, simply because I can get them at a good price and average web surfing can't stress even those old chips. Maybe in a few years ARM will be the same, but right now I've seen plenty of people frustrated because they've slammed their ARM phone or tablet and it has slowed to a crawl; I can't remember the last time I saw that with even a five-year-old x86 unit.

Re:Were they bored? (1)

Targon (17348) | more than 2 years ago | (#40353379)

Applications are SLOWLY making better use of multi-core machines, and that means that, as time goes on, more cores make for a better experience. The problem you are seeing, that many people are not stressing their systems, is caused by applications not making good use of system resources. In most cases, even multi-threaded apps are using what, two or three threads, when they should be using six or more for what is being done.

Basically, most developers are failing to rewrite applications to make better use of what systems can handle, and programs are designed for the lowest common denominator. Dual-core is the target, and for the most part things don't scale up to 4+ core machines. Once quad-core becomes the norm in another two to four years, we can expect better use of the hardware.

Now, you shouldn't be surprised that older users don't do much that would stress a modern system, but over time that will change.

Re:Were they bored? (1)

vivian (156520) | more than 2 years ago | (#40354909)

A lot of apps simply can't be threaded that well.
Even games, with all their graphical and sound goodness, can't use multiple cores that effectively.
You will have one heavy thread doing all the graphics. You can throw AI onto one or two threads, put sound on another and UI on another, and networking and other I/O can go on additional threads, but the graphics thread will be the really heavy one, and the rest will be very lightweight in comparison. You can't break the graphics work out across multiple threads because your 3D video card's graphics context has to be handled by a single thread.
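The thread layout described here can be sketched roughly as below. This is a hedged illustration only: the subsystem names and the trivial per-frame "work" are invented for the example, and the render loop just stands in for whichever single thread owns the graphics context:

```python
# Sketch of the layout above: one render thread that owns the graphics
# context, plus lightweight worker threads for the other subsystems.
# Subsystem names and workloads are invented for illustration.
import threading
import queue

def render_loop(frames, results):
    # All draw calls must funnel through the single thread that owns
    # the graphics context; this is the heavy thread.
    for frame in range(frames):
        results.put(("render", frame))

def worker_loop(name, frames, results):
    # AI, sound, UI: comparatively light, one thread each.
    for frame in range(frames):
        results.put((name, frame))

results = queue.Queue()
threads = [threading.Thread(target=render_loop, args=(3, results))]
threads += [threading.Thread(target=worker_loop, args=(n, 3, results))
            for n in ("ai", "sound", "ui")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results.qsize())  # 4 threads x 3 frames = 12 items
```

However many worker threads you add, the frame rate is still bounded by the one render thread, which is the scaling limit the comment describes.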

Re:Were they bored? (1)

hairyfeet (841228) | more than 2 years ago | (#40356817)

That is why I never understood AMD shipping identical cores; it seems a waste except for a few apps like video processing. That is why I snatched up a Thuban when they were cheap: at least Turbo Core will ramp up when you are only using one to three cores heavily. A better design would probably be an uber-powerful core 0, followed by decently powerful cores 1 and 2, with the cores after that being slightly more powerful than Bobcats.

Because, as you so rightly pointed out, it frankly doesn't matter how good the coder is if the program simply won't benefit from multiple threads, and many won't.

price much? (1, Interesting)

Osgeld (1900440) | more than 2 years ago | (#40344501)

I don't know the exact model used, but the first one I could find online was 182 bucks × 6; that's over a grand just to prove a point (plus other hardware). Hope it was worth it to beat a $60 Atom.

Re:price much? (0)

Anonymous Coward | more than 2 years ago | (#40344529)

They were given these. Didn't pay full price.

Re:price much? (1)

scheme (19778) | more than 2 years ago | (#40344609)

They were given these. Didn't pay full price.

Unless you're also going to get whatever deal phoronix got, you're paying close to retail. So yeah, price matters.

Re:price much? (1)

hairyfeet (841228) | more than 2 years ago | (#40348763)

If you REALLY care about bang for the buck in the low-power space, your best bet would probably be an AMD E350 kit [newegg.com], which is just $120 USD with a nice little case. I've built a few HTPCs and office boxes out of these and they are not bad little units. If you are really concerned about price, you can use OpenELEC [openelec.tv], which makes a Fusion version of their distro with the XBMC UI for a nice dirt-cheap HTPC; for offices there are several distros that support Fusion OOTB, although my customers prefer Win 7, which runs quite well on these.

So if you are REALLY worried about price and want the best bang for the buck, that would be the way to go, IMHO. To get the same performance with Atom you'd need an ION setup, and those run a good $60 higher on average, from my last price checks.

So if you wanted something similar to TFA, you are looking at around $720 for six E350 dual-core kits, cheaper of course if you simply buy the boards. You'd also have to figure in memory and some sort of storage, but since everyone has their own opinions on those, I figured it wasn't worth trying to estimate.

Re:price much? (1)

Anonymous Coward | more than 2 years ago | (#40344561)

Really? If you don't understand why this is interesting, you'd better turn in your geek card.

Re:price much? (2)

jedidiah (1196) | more than 2 years ago | (#40344949)

Nope. We just did this kind of thing back when something as powerful as that ARM hardware was considered leading edge. We also did real work with it.

12 ARMs to replace a trailing edge x86? Funny.

Re:price much? (1)

Anonymous Coward | more than 2 years ago | (#40345145)

Except they didn't need all 6. 2 Pandaboards = 1 Atom 330 nettop. (A shade less in one benchmark, a bit more in the other.)

And I'm not sure where you pulled that $60 figure from, but I haven't seen any 330 nettops that cheap. Is this that thing where you count the whole system for one side, and just the CPU for the other side?

Re:price much? (1)

reub2000 (705806) | more than 2 years ago | (#40346201)

How is that board worth $182?

Re:price much? (1)

Osgeld (1900440) | more than 2 years ago | (#40349303)

Because that's what they charge for it.

Re:price much? (1)

reub2000 (705806) | more than 2 years ago | (#40352491)

Is this a niche product that is made in small quantities?

Re:price much? (1)

Osgeld (1900440) | more than 2 years ago | (#40353811)

Probably. I dunno; I'd never heard of it until this article.

Article summary says it all (5, Insightful)

Glasswire (302197) | more than 2 years ago | (#40344565)

"Besides winning on performance and efficiency, the Core i7 3770K system would cost less than the cost of a six PandaBoard ES cluster setup."
So a single Ivy Bridge system takes up much less rack space, needs no cluster network ports, outperforms the ARM cluster, and costs less. Is that the definition of a no-brainer?

Re:Article summary says it all (5, Funny)

Noughmad (1044096) | more than 2 years ago | (#40344605)

And yet Phoronix managed to squeeze 16 pages out of it. Good job.

Re:Article summary says it all (2)

KreAture (105311) | more than 2 years ago | (#40344737)

It's called page views and ad reloads.

What I find interesting is the switch probably uses more power than the cluster.

Re:Article summary says it all (2)

Gaygirlie (1657131) | more than 2 years ago | (#40344615)

So a single Ivy Bridge system, which takes up much less rack space, no cluster network ports, outperforms and costs less than the ARM cluster. Is that the definition of a no-brainer?

No, that's the definition of "clearly not as interesting or cool a setup as a cluster of Pandaboards" ;)

Re:Article summary says it all (1)

aliquis (678370) | more than 2 years ago | (#40344723)

No, that's the definition of "clearly not as interesting or cool a setup as a cluster of Pandaboards" ;)

NEW: Biological and eco-friendly pandaboards. Reinforced with eucalyptus fiber.

Re:Article summary says it all (1)

aliquis (678370) | more than 2 years ago | (#40344733)

Complete with Lisa Simpson's face and all, directly from the recycling plants of Mr. Burns? :)

Re:Article summary says it all (2)

Glasswire (302197) | more than 2 years ago | (#40344829)

Now what WOULD BE interesting is a cluster of NUCs [liliputing.com] with Ivy Bridge Core i3s

Re:Article summary says it all (2)

kelemvor4 (1980226) | more than 2 years ago | (#40344641)

To the ARM fanboys it is, apparently. The whole exercise seems fairly pointless to me: an Intel netbook CPU outperformed by 12 competing CPUs...

That cluster would probably be worth more if you melted it down and sold the precious metals inside it.
I can't believe they bothered; I can't believe someone wrote an article about it. Somehow I can believe it would get posted to Slashdot, though.

Re:Article summary says it all (0)

Anonymous Coward | more than 2 years ago | (#40355313)

which takes up much less rack space, no cluster network ports, outperforms and costs less than the ARM cluster

Or you could buy a Calxeda EnergyCard [calxeda.com] and have the best of both worlds.

Loses to Ivy Bridge (2)

Noughmad (1044096) | more than 2 years ago | (#40344667)

I must have been under a rock for the past few years, but are Ivy Bridge processors really more power-efficient than Atoms, Fusions and even ARMs? I thought they were designed more for speed than efficiency, while the others were made for low consumption. Was I wrong? On the internet?

Re:Loses to Ivy Bridge (2)

scheme (19778) | more than 2 years ago | (#40344731)

They're more power efficient if you're looking for high performance at reasonable power levels. The ARMs might be much better for tasks that don't need much computation but if you end up needing to combine a bunch of ARM boards into a cluster to get the performance you need then there's a lot of overhead that adds to the power consumption without giving you much.

Re:Loses to Ivy Bridge (4, Interesting)

Noughmad (1044096) | more than 2 years ago | (#40344755)

With the EP.C workload on all twelve ARM cores, the average power consumption was 30.4 Watts for all six PandaBoards, which is in line with each PandaBoard burning through 5~6 Watts under load. When it comes to the performance-per-Watt, the EP.C test was yielding an average of 1.78 Mop/s per Watt, which was an increase over the single PandaBoard ES at 1.60 Mop/s per Watt.

Page 8 of TFA (yes, my quote was the entire text on that page) claims otherwise: that the efficiency of the cluster is even better than that of a single board. I really have no idea how they managed that.
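The quoted figures can be sanity-checked with a little arithmetic. Only the 30.4 W and the two Mop/s-per-watt numbers come from the article; everything derived below is a simple back-calculation, not a quoted figure:

```python
# Sanity-check the figures quoted from page 8 of TFA.
# Inputs from the article: 30.4 W for six boards, 1.78 and 1.60 Mop/s/W.
cluster_watts = 30.4                 # six PandaBoards under EP.C load
per_board_watts = cluster_watts / 6
print(round(per_board_watts, 2))     # ~5.07 W, consistent with "5~6 Watts"

cluster_mops_per_watt = 1.78         # cluster efficiency
single_mops_per_watt = 1.60          # single PandaBoard ES efficiency
cluster_mops = cluster_mops_per_watt * cluster_watts
print(round(cluster_mops, 1))        # implied ~54.1 Mop/s for the cluster
```

If a lone board also draws ~5 W, the 1.78 vs. 1.60 figures imply better-than-linear scaling, which is exactly the oddity being pointed out.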

Re:Loses to Ivy Bridge (0)

Anonymous Coward | more than 2 years ago | (#40344823)

Probably using a single power supply, so its power losses are spread across multiple boards. Mind you, I haven't RTFA.

Re:Loses to Ivy Bridge (1)

CityZen (464761) | more than 2 years ago | (#40345167)

No, each board had its own AC adapter.

Re:Loses to Ivy Bridge (1)

CityZen (464761) | more than 2 years ago | (#40345155)

I also noticed that the combined wattage requirement was less than that of a single system multiplied by the number of units. I'm guessing that their simple meter is not accounting for all the load, since there are transformers in the AC power supplies.

Re:Loses to Ivy Bridge (1)

fa2k (881632) | more than 2 years ago | (#40345441)

Maybe they count the switch in both cases.

Re:Loses to Ivy Bridge (1)

CAIMLAS (41445) | more than 2 years ago | (#40356877)

My guess is they may be using a different power supply. The PandaBoard takes 5V @ 4A; hardly anything, really. A single quality 90%-efficiency desktop PSU with six 5V rails will supply that much power and, even if not operating at peak efficiency (low-amp, high-efficiency PSUs are hard to find), it may have beaten out the common wall warts used for these devices.

Re:Loses to Ivy Bridge (2)

CajunArson (465943) | more than 2 years ago | (#40344817)

You're confusing efficiency with total power consumption. A desktop Ivy Bridge certainly pulls more watts than the E-350 or Atom boards, but the amount of work that Ivy can do for each of those watts is higher, which gives Ivy the efficiency lead but not a total power-consumption lead.

Re:Loses to Ivy Bridge (1)

CAIMLAS (41445) | more than 2 years ago | (#40356879)

I don't know about that. Our house has a last-generation Sandy Bridge i5 which runs circles around the E350 we also have. Power use is roughly on par between the two.

Re:Loses to Ivy Bridge (5, Interesting)

wbr1 (2538558) | more than 2 years ago | (#40344841)

Ivy Bridge is more efficient in work done per watt, yes, but ARM still wins for low-power devices like phones because it draws so much less power. The fact that it does less with that power is moot, because it does enough, and it lets your battery last much longer.

Re:Loses to Ivy Bridge (1)

pushing-robot (1037830) | more than 2 years ago | (#40344967)

In addition to the much-increased overhead of the cluster (all the mainboards, memory, storage, etc), the Ivy Bridge chip is on the brand-new 22nm process size while the Atom and ARM chips they tested are stuck on the old 45nm. They could have at least gone with 32nm Atoms and ARMs.

Re:Loses to Ivy Bridge (1)

phantomfive (622387) | more than 2 years ago | (#40345283)

Because they are looking at performance per watt, not "power usage during normal use." Most people think of "power usage during normal use" when they are talking on the internet, because they're thinking of power usage in their phone.

Most people don't have clusters, but in that case you are interested in the power usage of your cluster while the thing is running at full speed. It's not something you're going to put in your phone, but Intel manages some efficient processors.

Re:Loses to Ivy Bridge (-1)

Anonymous Coward | more than 2 years ago | (#40345409)

Sandy Bridge also kicked the crap out of Atom at performance/watt. Intel wants you to believe that Atom systems make small and efficient systems, but the reality is that they're just cheap and small, but not efficient (not even at idle power consumption - Sandy Bridge and Ivy Bridge have excellent clock/power gating). I came to this same conclusion when I was building my NAS/HTPC, and went with a small Sandy Bridge system instead of an Atom. I don't regret the choice one bit.

The conclusion of this experiment is exactly what one would expect: when comparing low-power systems, the good architecture (ARM; Cortex-A9) beats the bad architecture (x86; Atom) by a large margin. x86's legacy baggage makes it a horrible choice for power-optimized small systems, because there's a higher fixed cost (in both chip size and power consumption) to implementing all of that baggage, and you cannot get rid of it without hurting performance even more. When comparing the ARM cluster with a large system focused on higher overall performance (Ivy Bridge), the larger system wins at performance/watt, because there's more distributed overhead in having 12 small cores, 6 SoCs (RAM controller/bus controller/tons of embedded peripherals/...), 6 motherboards, and 6 PSUs than there is in having one motherboard, one PSU, and one big fat CPU with 4 high-performance cores.

Now, the really interesting comparison will come when (if) we get ARM processors that are in the same class as Ivy Bridge and other desktop CPUs. Unfortunately, there's no such thing yet. We already know that ARM beats the crap out of x86 at everything from tiny systems (Cortex-M0 which has more in common with 8-bit CPUs than anything else, where x86 will never work) to medium systems (tablets and high-end smartphones with Cortex-A9), but we don't know whether it will maintain that lead if it is scaled up to large systems.

Re:Loses to Ivy Bridge (0)

Anonymous Coward | more than 2 years ago | (#40345727)

When your power envelopes are static and arbitrary (45w, 60w, etc), the only possible gains you have beyond moores-law-violating technological breakthroughs are via efficiency improvements.

Re:Loses to Ivy Bridge (1)

CAIMLAS (41445) | more than 2 years ago | (#40356857)

Sandy Bridge, and now Ivy Bridge, are drastic improvements over Intel's previous architectural designs, particularly in terms of power use. An i5 at idle, for instance, is more power-efficient than the first-generation Atoms as well as the first-generation AMD Bobcat boards (e.g. Hudson), but can do a whole lot more when not idle while still maintaining relatively low power usage.

I suspect the reason we never saw the Atom SoC (Atom 2) is that the power-savings engineering went into Sandy and Ivy. For what Atom does, it does it well enough; it hits that performance mark. If Intel can push the envelope on the high end, not only do its big customers (datacenters, OEMs, etc.) save, but it can always come back later and scale those high-end designs down for embedded use.

SPIN (4, Informative)

CajunArson (465943) | more than 2 years ago | (#40344763)

I'm getting Dramamine for everyone on Slashdot to counteract the ARM FUD.

1. Look at the AMD and Intel boards for the low-end processors... notice anything? They have all of these... features, like PCIe, real memory interfaces, SATA controllers, etc. All of these features consume power. Huge amounts? Not really, but compared to both the E-350 and the Atom CPUs, the power being measured for each board includes a very large amount that has zero to do with the CPU. Guess what would happen if I took an E-350 or Atom and put it on an equivalent to the PandaBoard?

2. Apparently ARM's marketing department ran out of money to pay the poster to describe the Ivy Bridge system used in this test. Here are the short results:
      a. In the parallel benchmarks used in this test, which are a (probably unrealistically) best-case scenario for the ARM cluster, a single Ivy Bridge CPU was 5 times faster.
      b. Oh, but ARM says: so what if Ivy is faster! It's a power hog... look, it used over 100 WATTS, OMG!!!! Well, guess what? On a performance-per-watt scale, the Ivy Bridge system is THREE TIMES BETTER THAN ARM.
      c. Oh, but the ARM fanboys will say that Intel cheated by using a better lithographic process! Well, guess what: ARM loudly brags that it is better because it is an IP-only company, so you have to take the good with the bad.

4. Oh, one more thing... the Ivy Bridge system had REAL PERIPHERALS: real memory, real PCIe, a real SSD, etc., which by themselves probably used more power than at least one of the ARM boards, probably two of them. Oh, and by the way, the power used for the network fabric needed to connect those ARM boards was *NOT* included in the power consumption figures, so ARM had that as an extra advantage! So in many ways the Ivy Bridge system was intentionally disadvantaged... and was still THREE TIMES MORE EFFICIENT ON A PER-WATT BASIS THAN ARM IN A SERIES OF BENCHMARKS THAT ARE BEST-CASE-POSSIBLE SCENARIOS FOR ARM.

5. For all of those ARM fanbois who are about to say that PCIe, real RAM interfaces, real SATA support, etc. are inelegant artifacts of the stupid x86 instruction set: bite me. The last 5 years of ARM trolls, who have literally gone down the list of every feature x86 has that ARM doesn't and found a way to call each feature ARM lacks stupid and moronic (until ARM implements it years later and then claims to have come up with it first), are pissing me off.

Re:SPIN (2)

CajunArson (465943) | more than 2 years ago | (#40344799)

Oh, one more thing: the Ivy Bridge system is also cheaper, not only in up-front price but also in long-term power costs, and you don't have to worry about maintaining six sets of hardware and updating software on six different nodes in a cluster.

Re:SPIN (0)

Anonymous Coward | more than 2 years ago | (#40345433)

Regarding your point #3 --

Re:SPIN (2)

CajunArson (465943) | more than 2 years ago | (#40345449)

Point 3 was intentionally omitted and left as an exercise for the reader. If I had been using decimal points, I would have chalked it up to the FDIV bug. ;-)

Re:SPIN (0)

Tough Love (215404) | more than 2 years ago | (#40346117)

Intel spinbot much?

Re:SPIN (1)

CajunArson (465943) | more than 2 years ago | (#40346957)

Intel spinbot much?

Considering I also said that the AMD E-350 was misrepresented in these tests, and since the E-350 is a faster CPU than the Atom, I must not be a very efficient Intel spinbot...

Re:SPIN (0)

Anonymous Coward | more than 2 years ago | (#40347813)

The majority of ARM systems have a zillion GPIO pins to play with, SPI buses, I2C buses etc.. Intel compatible mother boards seem never to include these. So, good luck interfacing your intel box to stuff like relays, motors, and sensors. So, by your reasoning, Intel sucks at everything because it doesn't have GPIO pins, SPI or I2C available. Just as ARM sucks at everything, in your world, because generally, it doesn't have PCI-e available (Marvel ARM SoCs do, BTW).

Curious if Phorix had used an armhf port, how much better they would have done. Hopefully they repeat with a properly optimized software stack (armhf not only enables hardware floating point, but the distributions, like Debian, are targeting newer ARM instruction sets for armhf which also improve perf).

I like my ARM-based _always on_ server/router/multiple-radio AP box. The box is plenty powerful for all its many tasks, and draws 3.8W peak including radios. For a build farm, I'd prefer some i7s. Each has its place.

Re:SPIN (1)

WorBlux (1751716) | more than 2 years ago | (#40351079)

Intel boards do have I2C buses; they're just rarely used for anything except monitoring hardware temps.

Re:SPIN (1)

ihavnoid (749312) | more than 2 years ago | (#40347843)

Okay, I'll counteract the anti-ARM FUD.

Do you mean that the OMAP doesn't have PCIe, real memory interfaces (what do you mean by "real memory interface"? Is there such a thing as a "fake" memory interface?), SATA controllers, etc.? Sorry, but they DO HAVE THEM. Plus, the OMAP 4 series has a GPU, a video encoder/decoder, its own 2D accelerator, and whatever other interfaces it takes to build a smartphone. Guess what would happen if the OMAP lacked all that stuff?

So maybe it's an Intel vs. TI issue, but it certainly isn't an Intel vs. ARM issue. If you want to be fair, you should compare Intel Ivy Bridge with an SoC that omits the smartphone-or-tablet-or-whatever specific devices and is manufactured on a recent-enough process. Unfortunately, as of now, I can't find any SoC intended for datacenters.

Yes, you'd be an idiot to build a datacenter from an array of OMAP 4s. Maybe the Qualcomm S4 would be better (28nm process), but I don't think it's likely to beat Ivy Bridge for now (due to inefficient SoC-to-SoC interconnects; Ethernet wasn't designed for close-proximity, high-speed/low-power communication). But the claim that x86 is more power-efficient than ARM is a completely different issue that can't be resolved by comparing OMAPs and Ivy Bridges.

Re:SPIN (1)

CAIMLAS (41445) | more than 2 years ago | (#40356909)

I don't suppose it does much good mentioning at this point that the Pandaboard has what is at this point a fairly dated CPU with a fairly low clock. When it came out, it was decent, but at this point it's almost 2 years old. The Tegra 3, for instance, puts it to shame in pretty much every regard.

A link of a link (0)

Weatherlawyer (2596357) | more than 2 years ago | (#40344765)

A friend of mine said that

Is this to do with the Great Game or just the stuff the rest of us don't play in real life?

Because the IBM advert in here:

http://www.phoronix.com/scan.php?page=article&item=ubuntu_1204_omap4460&num=1 [phoronix.com] just wouldn't go away.

And the results table wasn't designed to work properly on Ubuntu. Not on my Ubuntu at least.

Or maybe it doesn't like Opera. (Who's DDoSing them by the way?

Not the Persians shirley?)

So what I was thinking is... that with a super-fast Ubuntu desktop, can Tetris work?

Or Pacman?

Anyway, now that you have scared everyone off Intel Chips and Microsoft Operating systems....

Besides nuclear bunkers, what's next to build?

16 to do the job of 1 (0)

Anonymous Coward | more than 2 years ago | (#40344789)

It took how many ARM computers to match one old Intel chip? I didn't realize ARM was that underpowered.

Are we back to Beowulf cluster comments? (1)

G3ckoG33k (647276) | more than 2 years ago | (#40344797)

any time soon?

Readability works, but no performance images (1)

mounthood (993037) | more than 2 years ago | (#40344865)

http://www.readability.com/articles/sagdka0j [readability.com]

I was impressed that it got the first 11 pages and then included a 'Next Page' link to in-line the remaining pages. The problem is it didn't get the performance images, which are in separate iframes.

Re:Readability works, but no performance images (1)

mounthood (993037) | more than 2 years ago | (#40344923)

(self-reply) I just noticed that the iframes are "image/svg+xml" so maybe it's the content-type that's the problem, not that they're in a separate iframe.

It would be nice if Readability had page numbers and links to the original page, for problems like this.

The point is wrong (0)

Anonymous Coward | more than 2 years ago | (#40344893)

The point is how many MIPS you get per watt, or fraction of a watt.

Because if you scale up, almost any "new" and more powerful CPU will beat a "cluster" of low-power ones.

They are low power for a reason: to process data without draining batteries.

If you want to go all guns blazing, then do a test with video card GPUs, because they also do MIPS, albeit specialized ones...

So, interesting project... but the conclusions are a bit useless.
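The metric this poster is arguing for is just sustained throughput divided by power draw. A minimal sketch of that comparison (all throughput and wattage figures below are hypothetical placeholders, not measurements from the article):

```python
# Hedged sketch of a performance-per-watt comparison.
# All numbers here are made-up placeholders, NOT figures from the article.

def perf_per_watt(throughput_mips, power_watts):
    """Normalize raw throughput by power draw (MIPS per watt)."""
    return throughput_mips / power_watts

# Hypothetical systems: (sustained throughput in MIPS, power draw in watts)
systems = {
    "12-core ARM cluster": (6000.0, 30.0),
    "Intel Atom box":      (3000.0, 30.0),
}

for name, (mips, watts) in systems.items():
    print(f"{name}: {perf_per_watt(mips, watts):.1f} MIPS/W")
```

On this metric a slower part can still win, as long as its power draw shrinks faster than its throughput does.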

This is fascinating (1)

ZosX (517789) | more than 2 years ago | (#40344907)

So Intel finally beats ARM in performance/watt, but a two-board cluster beats Intel's lowest-power offering. So, basically, Intel has finally eroded the advantage ARM had in servers, but ARM still maintains an edge in small, low-power devices. I love that ARM has been so competitive in certain areas. It's good to see something other than x86 everywhere. Imagine if there were no iPhone. Imagine if there were no competition and ARM were still just a slow but modern and power-efficient core. ARM has come a long, long way in the past few years. Competition drives innovation. We need more.

Did you know Atom consumes 30W? (3, Informative)

michaelmalak (91262) | more than 2 years ago | (#40344939)

So I'm asking myself how 12 ARMs equal the power consumption of one Atom. So I had to sit through all the page loads. The "Atom" is a complete off-the-shelf "Net Top" box designed to maximize performance (spinning hard drive and high-end graphics card) with the sole constraint of being noiseless -- i.e., the Atom was chosen by the Net Top manufacturer for low heat, not for low energy consumption.

OK, then for the comparison with Ivy Bridge, I wasn't surprised. I've been salivating over the low-power versions of Ivy Bridge for several months now. But this comparison wasn't even against those. They used the highest-clocked, highest-power 3770K variant, which is rated at 77W [wikipedia.org]. There is a 45W version at a somewhat lower clock speed. (BTW, Intel "produces" low-power variants the same way they "produce" high-clock variants -- they test the chips after manufacturing to see which ones draw less power.)

So, basically, the comparison is completely pointless and a waste of time.

Re:Did you know Atom consumes 30W? (1)

Nimey (114278) | more than 2 years ago | (#40345203)

To a first approximation, heat = energy consumption. You have to dissipate all the energy you use as heat, after all, and that's why lower-wattage parts always run cooler, all other things being equal.

Re:Did you know Atom consumes 30W? (1)

michaelmalak (91262) | more than 2 years ago | (#40345367)

You missed my point. The Net Top included, besides the Atom, other devices such as the hard drive and high-end graphics card, which were power-hungry but did not happen to require a fan.

Re:Did you know Atom consumes 30W? (0)

Anonymous Coward | more than 2 years ago | (#40345435)

His goal was not to comment on your general "point", but to demonstrate the absurdity of this particular sentence you wrote:

the Atom was chosen by the NetTop manufacturer for low heat, not for low energy consumption

Re:Did you know Atom consumes 30W? (1)

michaelmalak (91262) | more than 2 years ago | (#40345505)

I'll repeat my point one more time. In a Net Top environment, one has access to AC power, but one also desires quiet operation so as not to interfere with home theatre use. Quiet means no fan, which means low heat. Low energy consumption is not itself a requirement for a Net Top. Yes, I understand that low energy = low heat from an engineering standpoint, but try to understand the design requirements. And this difference in design requirements is exactly why the reviewer ended up with an apples-to-oranges comparison.

Re:Did you know Atom consumes 30W? (0)

Anonymous Coward | more than 2 years ago | (#40346019)

You fail to understand that high energy = high heat, so there's no way the Net Top could meet its low-heat requirement by incorporating a chip with high power consumption. Your argument doesn't work.

Re:Did you know Atom consumes 30W? (0)

Anonymous Coward | more than 2 years ago | (#40348407)

Seems obvious that he gets it, but that wasn't his point anyway. They weren't thinking "this needs to draw as little energy as possible!" They were thinking "this needs to stay as cool as possible!" Both might lead to the exact same choice, because the two are essentially equivalent, as you point out, but the reasoning can come from either or both directions. One might not care how much energy the box uses (if required, they'd be willing to be power-hungry) yet care a lot about whether it runs cool or hot (so as not to need a fan and still not overheat). So they choose the lower-wattage part to avoid overheating, and as a side benefit (from their perspective) it also draws less power, thanks to physics.

Re:Did you know Atom consumes 30W? (0)

Anonymous Coward | more than 2 years ago | (#40345831)

In the server benchmarks performed by the AnandTech website, the high-clock-rate versions of the previous Xeons were better in terms of performance per watt than the lower-clocked ones, even though their peak energy consumption was higher. That might be an argument for the selection of the 3770K.

Parallelizable (2)

Luthair (847766) | more than 2 years ago | (#40344953)

If you're looking at highly parallelizable workloads shouldn't the GPU in the AMD part be part of the equation?

Re:Parallelizable (0)

Anonymous Coward | more than 2 years ago | (#40346381)

Only if it's worth writing specialized code. GPGPU is still a pain to write, so if your task finishes quickly enough, it's cheaper to let the CPU handle it.

Re:Parallelizable (1)

TurtleBay (1942166) | more than 2 years ago | (#40351061)

And commodity code works on a Beowulf cluster of Pandaboards?

Re:Parallelizable (1)

CAIMLAS (41445) | more than 2 years ago | (#40356915)

It should be, and probably would be, if using the GPU for general-purpose computing the way you would a CPU were possible yet. But it isn't, so it hardly matters.

de-pagination tools (2)

burne (686114) | more than 2 years ago | (#40344995)

Safari's Reader seems to make short work of that. One long page, all the photos, and no ads.

Wrong way to test an ARM cluster (0)

Anonymous Coward | more than 2 years ago | (#40345031)

IMO they did two basic things wrong:
1. They tested the raw CPU power, when it's common knowledge ARMs aren't meant for heavy computations.
2. Using 6 PandaBoards is probably the least cost-effective way to get a 12-core ARM cluster (but the only way available off-the-shelf right now)

To address 1, they should change the benchmark to something IO-bound instead of CPU-bound, say a database or a static-file webserver. For 2, they'd have to wait for some ARM server boards, which combine all the CPUs on a single board.

Wow! (1)

Anonymous Coward | more than 2 years ago | (#40345051)

Wow! Imagine a Beowulf cluster of these!

Re:Wow! (0)

Anonymous Coward | more than 2 years ago | (#40345529)

Wow! Imagine a Bavarian custard with cheese!

Obligatory retro post.. (2)

scsirob (246572) | more than 2 years ago | (#40345661)

.. Imagine a Beowulf cluster of those!!

Re:Obligatory retro post.. (0)

Anonymous Coward | more than 2 years ago | (#40357179)

.. Imagine a Beowulf cluster of those!!

Wow. Haven't heard that in a while. What happened to Beowulf clusters? Are they dead? What does Netcraft say?

The test in cost is bogus (0)

Anonymous Coward | more than 2 years ago | (#40345687)

What they did not mention is:

The PandaBoard costs $182 at Digi-Key. For that same $182 per board, one can buy an Intel Atom mini-ITX board for $62 (search for "Intel Desktop Board D425KT Innovation Series - motherboard - mini ITX - Intel Atom D425", or http://www.google.com/products/catalog?q=intel+atom+mini+itx&oe=utf-8&rls=org.mozilla:en-US:official&client=firefox-a&um=1&ie=UTF-8&tbm=shop&cid=182897328813518723&sa=X&ei=idTcT9fgF6GO2AXl6-WTDA&ved=0CNEBEPMCMAE#scoring=tp).

So for $182 plus $4 you can buy three Intel Atom mini-ITX boards and still beat your ARM cluster on cost and performance. And when the whole performance test is finished, you can hook up those Atoms, boot up Windows XP, and run some useful OS/application like Photoshop.

Re:The test in cost is bogus (0)

Anonymous Coward | more than 2 years ago | (#40345811)

Right, so electricity is free in your data center?

Re:The test in cost is bogus (0)

Anonymous Coward | more than 2 years ago | (#40346215)

Right, so why are you running boatloads of weak-ass Atoms or ARM cores when an E3 Xeon plus virtualization beats the living crap out of them on any important metric?

Cortex-A15? (1)

OrangeTide (124937) | more than 2 years ago | (#40345775)

The quad-core Cortex-A15s have even better perf/watt, with a better cache architecture and support for 40-bit physical addressing. ARM is quickly catching up to Atom and Fusion in terms of performance.

Does ARM support ECC? (1)

JDG1980 (2438906) | more than 2 years ago | (#40348659)

Real servers need ECC RAM. I'd be reluctant to even run a home file server without it, if that server contains critical data.

Does ARM support ECC? If not, then it can be ruled out on that basis alone. Atom and Bobcat can also be ruled out at this time, since neither supports ECC RAM.

A while back Intel announced a 2-core, 1.2 GHz Sandy Bridge "Pentium 350" that has a max TDP of 15W and has the standard server chip package, including ECC support. This would be nice for small, low-power servers. But for some reason, I haven't been able to find them on sale anywhere (online or off), even though Intel's site says it was launched Q4 2011.
