Intel Announces Atom S1200 SoC For High Density Servers

Unknown Lamer posted about 2 years ago | from the race-to-the-bottom dept.

MojoKid writes "Intel has been promising it for months, and now the company has officially announced the Intel Atom S1200 SoC. The ultra-low-power chip is aimed at the datacenter, providing a high-density solution intended to lower TCO and improve scalability. The 64-bit, dual-core (four total threads with Hyper-Threading technology) Atom S1200 underpins the third generation of Intel's commercial microservers and features a mere 6W TDP that allows a density of over 1,000 nodes per rack. The chip also includes ECC and supports Intel Virtualization technology. Intel saw a need for a processor that can handle many simultaneous lightweight workloads, such as dedicated web hosting for sites that individually have minimal requirements, basic L2 switching, and low-end storage needs. Intel did not divulge pricing, but regardless, this device will provide direct competition for AMD's SeaMicro server platform." Amazing that it supports ECC since Intel seems committed to making you pay through the nose for stuff like that.

Economies of scale (1)

Anonymous Coward | about 2 years ago | (#42262845)

How can lots of slow processors be better than a few fast ones with virtualization on top?

Re:Economies of scale (4, Insightful)

TechyImmigrant (175943) | about 2 years ago | (#42262949)

>How can lots of slow processors be better than a few fast ones with virtualization on top?

More physical contexts => less context switch overhead => can handle multiple simultaneous sessions more efficiently provided that those sessions are not individually compute or memory intensive.

Re:Economies of scale (4, Informative)

godrik (1287354) | about 2 years ago | (#42262997)

Well, that's the difference between scale-up and scale-out in parallel computing. Throughput typically comes from many simple processing units; low latency typically comes from a few highly specialized, fast processing units.

If it is throughput you care about, simple is the way to go.
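
A toy model of that trade-off (all figures below are made-up assumptions for illustration, not benchmarks of any real part):

    # Scale-out vs scale-up as a toy throughput/latency model (Python).
    # Hypothetical numbers: a "big" core serves each request ~4x faster than
    # an Atom-class core, but far fewer big cores fit in a rack.
    big_cores, big_req_per_sec = 500, 400        # assumed Xeon-class rack
    small_cores, small_req_per_sec = 2000, 120   # assumed Atom-class rack

    print("big-core rack throughput:  ", big_cores * big_req_per_sec, "req/s")
    print("small-core rack throughput:", small_cores * small_req_per_sec, "req/s")

    # Per-request latency tracks single-core speed, not core count:
    print("big-core latency:   %.1f ms" % (1000.0 / big_req_per_sec))
    print("small-core latency: %.1f ms" % (1000.0 / small_req_per_sec))

The rack of simple cores wins on aggregate requests per second while losing on how long any single request takes, which is exactly the throughput-versus-latency split described above.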

Re:Economies of scale (1)

Anonymous Coward | about 2 years ago | (#42263271)

If you are a large corporation with a WAN that handles emails, database, file retention/storage, project management, etc. BUT does not do any rendering or accurate computations, then this is the ideal server. Very little wasted CPU computing potential.

Re:Economies of scale (1)

davester666 (731373) | about 2 years ago | (#42264045)

And really, who needs accurate computations?

Re:Economies of scale (0)

Anonymous Coward | about 2 years ago | (#42264653)

You float my boat with your inefficient eff pee you!

Funny enough, the captcha is "integer"... :)

Re:Economies of scale (1)

picoboy (1868294) | about 2 years ago | (#42269805)

How can lots of slow processors be better than a few fast ones with virtualization on top?

A few points..

1. Most hyperscale server applications are memory and/or I/O bound, not CPU bound (and "memory bound" meaning frequent memory accesses, not memory size bound)

2. Typical applications are search, web serving and data mining. Anything that requires Apache or Hadoop where the processing is highly parallel (and memory or I/O bound...)

3. For those types of workloads, there are often frequent idle times for any individual CPU, so individual CPUs can frequently enter a low power state while only the active CPUs are operating full bore. It's more problematic for large, monolithic CPUs to be power efficient with these types of workloads.

4. Because the applications are typically I/O bound, hyperscale servers have (or will have) more sophisticated parallelized I/O subsystems that provide lower latency access to distributed datasets.

Hyperscale server = I/O engine
Hyperscale server != computation engine

how much is it? (3, Informative)

alen (225700) | about 2 years ago | (#42262849)

one of the reasons no one uses Intel in mobile is the cost.

Re:how much is it? (5, Funny)

Anonymous Coward | about 2 years ago | (#42262983)

Come on man. Are you blind? It says right there in the title: $1200-- oh, wait.

Re:how much is it? (4, Informative)

Anonymous Coward | about 2 years ago | (#42263159)

The Intel Atom processor S1200 is shipping today to customers with recommended customer price starting at $54 in quantities of 1,000 units.

Re:how much is it? (1)

JDG1980 (2438906) | about 2 years ago | (#42264337)

Well, OK, but I'm not going to be buying 1000 units and I don't plan on laying out and soldering my own board either. How much will it cost to get an S1200 motherboard+chip combo in a standard form factor? Or is this going to be OEM-only?

Re:how much is it? (0)

Anonymous Coward | about 2 years ago | (#42268761)

It mainly has to do with the inability to license x86. No major handset maker buys complete SoCs from ARM, yet Intel thinks it is going to build mobile SoCs to serve every vendor.

Great that it supports ECC... but the Atom brand? (4, Interesting)

Aphrika (756248) | about 2 years ago | (#42262919)

Quite a few scientific customers will require that, and for performance-per-watt computing, it's likely that this chip will find its way into those applications.

However, I am amazed that they are using the Atom branding for what is essentially a very different underlying chip. The initial range of Atoms were lacklustre enough that the name seems somewhat tarnished now. Dumping that brand into the server arena may cause some people to have reservations, regardless of how good the underlying technology is.

Re:Great that it supports ECC... but the Atom bran (4, Insightful)

fuzzyfuzzyfungus (1223518) | about 2 years ago | (#42263005)

I'm sure that Intel's Xeon team, and their margins, are 100% totally delighted with this chip, have greatest confidence in its success, and wish it only the best in the future...

Re:Great that it supports ECC... but the Atom bran (2)

Kjella (173770) | about 2 years ago | (#42263259)

However, I am amazed that they are using the Atom branding for what is essentially a very different underlying chip.

Why so surprised? Intel is selling "Pentiums" now that have nothing whatsoever to do with any Pentium architecture, only watered-down versions of Intel Core processors. Same with the Celerons; it's more a price segment than an actual technology.

The initial range of Atoms were lacklustre enough that the name seems somewhat tarnished now.

The initial range of Atoms sold really well; it was only after AMD started making decent APUs and the tablet market stole the whole show that they disappeared into obscurity. Maybe to people watching the battle of AMD vs Intel they're a bit lackluster, but I think to most it was just about having a computer for light work at all.

Re:Great that it supports ECC... but the Atom bran (2)

Dogtanian (588974) | about 2 years ago | (#42264149)

Why so surprised, Intels are selling "Pentiums" now that have nothing to do whatsoever with any Pentium architecture

By definition, if they're calling it "Pentium", it's "a" Pentium architecture. What you say might have had more weight if the original Pentium architectures were all the same. However, to the best of my knowledge, the original Pentium (P5) was an extension of the 486 architecture, whereas the Pentium Pro and Pentium II (P6) were sort-of-RISC non-x86 cores wrapped in a translation layer; that is, drastically different.

However, I do agree that bringing back the Pentium name after ditching it was a confusing and stupid idea, if only from the point of view of market positioning. They had the Celeron brand for entry-level processors, the new "Core" (*) brand for midrange machines and Xeon for the high-end stuff. What was the new "Pentium" branding supposed to convey? Apparently it's a sort-of-budget-but-not-as-cheap-as-Celeron line. This smacks of internal marketing politics: someone realised that the Pentium brand still had recognition and sought a contrived excuse to bring it back even though its position had been filled by the new Core brand. Now that we have the clearly-delineated Core i7, i5 and i3 (and the Celeron remaining at the bottom), the "Pentium" brand is even more unnecessary.

(*) Since you mentioned the "new" Pentiums not having the "Pentium" architecture, it's worth remembering that the original "Core"-branded chips didn't have the "Core" architecture, which came out with the "Core 2" line(!!!)

Re:Great that it supports ECC... but the Atom bran (1)

marcosdumay (620877) | about 2 years ago | (#42268073)

...but I think to most it was just about having a computer for light work at all.

I've asked around a bit (hell, I ask all kinds of strange questions!). Most people aware enough to know what their processor is, but not technical enough to know what that means, won't ever touch an Atom again.

But then, "people watching the battle of AMD vs Intel" covers just about everybody who'll buy a server.

Re:Great that it supports ECC... but the Atom bran (0)

Anonymous Coward | about 2 years ago | (#42263319)

I doubt it has good FPU performance. I'm thinking this is more a database/webserver core...bind each worker thread to a real core kind of thing.

Re:Great that it supports ECC... but the Atom bran (2)

Alwin Henseler (640539) | about 2 years ago | (#42263353)

It's a low-power x86 compatible from Intel. Why not apply the Atom label?

Personally I think it's sad these parts aren't available for desktop applications. I wouldn't mind a server-grade (ECC support, virtualization, 64 bit), low power x86 CPU, and I'm sure I'm not the only one. If some company had the guts to put this CPU on a Mini-ITX board or a small all-in-one PC, no doubt it would sell.

Re:Great that it supports ECC... but the Atom bran (0)

Anonymous Coward | about 2 years ago | (#42263747)

What's stopping you from using a server-grade machine as your workstation?

Re:Great that it supports ECC... but the Atom bran (0)

Anonymous Coward | about 2 years ago | (#42268439)

What's stopping you from using a server-grade machine as your workstation?

My desire not to have to sell a kidney for what will essentially be a toy. If you look at the Intel spec sheet, this CPU is limited to 8GB of memory. Enough for a simple, low-power VMWare/OpenStack playground at (hopefully) a sub $1K price point.

Re:Great that it supports ECC... but the Atom bran (0)

Anonymous Coward | about 2 years ago | (#42269335)

Huh? An E3-1275V2 is pretty much the same price as an i7-3770K.
Same thing for an Asus P8C-WS vs. a decent Z68 or Z77 board.

Re:Great that it supports ECC... but the Atom bran (0)

Anonymous Coward | about 2 years ago | (#42263391)

Quite a few scientific customers will required that

It's not even that. ECC matters to basically anyone doing any server work, since it prevents RAM errors from being fatal; they just get logged (at least in Linux, surely in Windows too). That gives you time to down the system and swap the component.

Re:Great that it supports ECC... but the Atom bran (5, Informative)

gman003 (1693318) | about 2 years ago | (#42263469)

They're using the Atom branding because it is an Atom processor underneath. The Atoms and the Core/Xeon/Pentium/Celeron lines have completely different underlying microarchitectures. In particular, the Atom uarch ("Sodaville" in the current generation) has really poor floating-point and SIMD performance, so you can forget about scientific computing on this.

More to the point, the "Atom" brand implies "cheap, low-power device". The same thing "ARM" implies, and as this processor is mainly there to seize control of a niche ARM was trying to grab, it makes sense to use a similar brand name.

Re:Great that it supports ECC... but the Atom bran (2)

elwinc (663074) | about 2 years ago | (#42267219)

Good points. Another thing that makes the S1200 similar to Atom and different from Core is that the S1200 doesn't do out-of-order execution. Core chips have something like a 50-instruction re-order buffer, and that helps Core execute an average of 1.5 instructions per clock per thread (at the cost of greatly increased complexity). Atom, on the other hand, so far does no re-ordering, which makes it much simpler and a bit slower.

Not an end-user SKU (1)

Anonymous Coward | about 2 years ago | (#42262937)

This is all Internet Speculation, but:

This chip won't be sold to end-users. This will only be available in pre-configured high-density systems. You will still pay through the nose.

[citation needed]

Re:Not an end-user SKU (2)

fuzzyfuzzyfungus (1223518) | about 2 years ago | (#42263049)

Given that the press shots for the part show a damn lot of teeny BGA balls on the bottom, I'd hope that it isn't an end user part...

The question is whether it will (as some Atoms in the past have) show up fairly cheaply in the nicer Prosumer/SMB NASes and assorted 1U/shallow server barebones kits, or whether this will be a "Well, the totally proprietary cardcage is $25,000, I'll throw in a license for our Enterprise Backplane Management Console for just 3k more, cause I like you, and cards are 6k a pop..." type product.

Re:Not an end-user SKU (2)

JDG1980 (2438906) | about 2 years ago | (#42264397)

Given that the press shots for the part show a damn lot of teeny BGA balls on the bottom, I'd hope that it isn't an end user part...

Existing Intel Atom chips are also BGA-soldered, but you can purchase motherboards with the chip already included for DIY systems. The same is true of AMD's E-series. The question is whether any of Intel's customers will want to supply S1200-series boards to end users, or if they prefer to reserve them for charging out the nose in prebuilt systems.

Re:Not an end-user SKU (1)

TechyImmigrant (175943) | about 2 years ago | (#42266079)

If someone wants to make a low cost, low wattage server board with all the servery goodness (ECC, failover, VT, etc.) afforded by the S1200, I'm pretty sure Intel would be happy to sell them the chips.

Thought the title said $1200 (3, Insightful)

Revotron (1115029) | about 2 years ago | (#42262965)

At first glance I read the title as "Intel Announces Atom $1200 SoC For High Density Servers".

My first thought: "$1200 for an underpowered Intel server chip? Sounds about right."

Re:Thought the title said $1200 (1)

ak3ldama (554026) | about 2 years ago | (#42265107)

My first thought: "$1200 for an underpowered Intel server chip? Sounds about right."

AMD cpus really are dead, market place confirms.

Good old Slashdot (4, Insightful)

kiwimate (458274) | about 2 years ago | (#42262987)

Oh the irony...

  • Listed as being from the "race to the bottom" department.
  • Person responsible: "Unknown Lamer"
  • Sole "editorial" contribution (and I use that word loosely): a silly and irrelevant snarky comment.

    Amazing that it supports ECC since Intel seems committed to making you pay through the nose for stuff like that.

Damn, but Slashdot is a sad place these days.

Re:Good old Slashdot (0)

Anonymous Coward | about 2 years ago | (#42263039)

You must be new here.

Re:Good old Slashdot (1)

Trepidity (597) | about 2 years ago | (#42263235)

He's a fan of AMD [slashdot.org] perhaps?

Re:Good old Slashdot (1)

serviscope_minor (664417) | about 2 years ago | (#42263397)

Listed as being from the "race to the bottom" department.

The departments have always been jokey.

Person responsible: "Unknown Lamer"

Slashdot has always been driven by user submissions. Given your UID you have been here even longer than me, probably for at least 10 years, so I'm surprised this comes as a shock to you.

Sole "editorial" contribution (and I use that word loosely): a silly and irrelevant snarky comment.

Actually, it's neither silly nor irrelevant.

It is quite significant that the Atom CPUs support ECC memory, and Intel do make you pay a lot for it. AMD supports ECC memory on mid-range desktop CPUs and above, whereas for Intel, you have to fork out for the Xeon brand and pay a very hefty premium.

Damn, but Slashdot is a sad place these days.

Then leave and demand your money back.

Re:Good old Slashdot (0)

Anonymous Coward | about 2 years ago | (#42264079)

Mid range? Semprons support ECC ...

Re:Good old Slashdot (1)

asliarun (636603) | about 2 years ago | (#42264125)

Sole "editorial" contribution (and I use that word loosely): a silly and irrelevant snarky comment.

Actually, it's neither silly nor irrelevant.

It is quite significant that the Atom CPUs support ECC memory, and Intel do make you pay a lot for it. AMD supports ECC memory on mid-range desktop CPUs and above, whereas for Intel, you have to fork out for the Xeon brand and pay a very hefty premium.

Damn, but Slashdot is a sad place these days.

Then leave and demand your money back.

Man, you are clutching at straws, just like the OP did with his snarky comment about ECC. The title itself says that this chip is targeted towards high density servers and you compare this to AMD's desktop CPUs?? By what stretch of the imagination is ECC not relevant to a server CPU? In fact, it would have been noteworthy if Intel had cut corners and just rebranded their mobile Atom CPU and not even added ECC support.

And Newegg sells 8GB ECC RAM for 52 bucks vs 40 bucks for non-ECC RAM. Even if you put aside the fact that this is supposed to be server RAM, does an extra 12 bucks sound like a "hefty premium" to you?

And yes, for the record, the comment was not just biased (which is okay since this is /.) but also pathetically lame. You at least expect a certain standard when it comes to snarkiness. I mean, the OP could have pointed out that this chip only supports up to 8GB of memory, which is actually a significant drawback considering this is a 64-bit chip.

Re:Good old Slashdot (2)

serviscope_minor (664417) | about 2 years ago | (#42264241)

Man, you are clutching at straws,

How so?

The title itself says that this chip is targeted towards high density servers and you compare this to AMD's desktop CPUs?

You do realise that nested replies are replied to parent posts, not the original story, right?

I claim that Intel do charge a hefty premium for ECC, which is why the comment is relevant. AMD do not, as can be witnessed by cheap midrange desktop CPUs supporting ECC. In other words, you can use cheap AMD CPUs for server grade tasks. Because AMD don't charge a premium for ECC and Intel do. Because for Intel, you need to fork out for a low performing Xeon which will be more expensive than an equivalent AMD desktop processor by a long way. And you can use the AMD desktop processors for servers. Because they support ECC, cheaply, unlike Intel ones, which don't. Got it yet?

Even if you put aside the fact that this is supposed to be server RAM, does an extra 12 bucks sound like a "hefty premium" to you?

I don't believe you. Why don't you paste a link. Oh look, now you've pasted it, go back and read it really carefully. Go on, read it again. But carefully this time. You will see that, surprise, it is NOT Intel who you're buying the RAM from; in fact, Intel don't even sell RAM.

You at least expect a certain standard when it comes to snarkiness

As requested, I've upped the level of snarkiness.

Re:Good old Slashdot (1)

asliarun (636603) | about 2 years ago | (#42264763)

Man, you are clutching at straws,

How so?

The title itself says that this chip is targeted towards high density servers and you compare this to AMD's desktop CPUs?

You do realise that nested replies are replied to parent posts, not the original story, right?

I claim that Intel do charge a hefty premium for ECC, which is why the comment is relevant. AMD do not, as can be witnessed by cheap midrange desktop CPUs supporting ECC. In other words, you can use cheap AMD CPUs for server grade tasks. Because AMD don't charge a premium for ECC and Intel do. Because for Intel, you need to fork out for a low performing Xeon which will be more expensive than an equivalent AMD desktop processor by a long way. And you can use the AMD desktop processors for servers. Because they support ECC, cheaply, unlike Intel ones, which don't. Got it yet?

Even if you put aside the fact that this is supposed to be server RAM, does an extra 12 bucks sound like a "hefty premium" to you?

I don't believe you. Why don't you paste a link. Oh look, now you've pasted it, go back and read it really carefully. Go on, read it again. But carefully this time. You will see that, surprise, it is NOT Intel who you're buying the RAM from; in fact, Intel don't even sell RAM.

You at least expect a certain standard when it comes to snarkiness

As requested, I've upped the level of snarkiness.

I can't make head or tail of what you are trying to say.

For the record, I'm not trying to be snarky *at* you or asking you to be - my comment was about the OP's comment being lame - which it was.

Yes, I agree with what you are saying about AMD, and definitely, AMD offers and has always offered better value for money than Intel. That is indeed their USP and how they compete. And it is a good thing for average customers like you and me.

My point was that this is a dedicated server CPU so ECC is to be expected. In fact, a snarky comment would have been appropriate if Intel had *not* supported ECC.

As far as the price goes, I simply searched Google for 8GB ECC RAM and 8GB RAM, and verified the Newegg price, which was the first search result.
If you really don't want to believe, here are the links:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820139262 [newegg.com]
http://www.newegg.com/Product/Product.aspx?Item=N82E16820231297 [newegg.com]

And yes, I know Intel doesn't make RAM, and neither did I claim that they did. So your comment above is quite puzzling and I don't understand what I need to re-read *carefully*. It was in response to your previous comment, "It is quite significant that the Atom CPUs support ECC memory, and Intel do make you pay for a lot for it."

I was trying to say that paying an extra 12 bucks for ECC RAM isn't much, so what's your point?

Re:Good old Slashdot (1)

serviscope_minor (664417) | about 2 years ago | (#42265661)

I can't make head or tail of what you are trying to say.

So it would seem. We are talking at crossed purposes entirely.

For the record, I'm not trying to be snarky *at* you or asking you to be

Oh OK. I'll dial it back a bit then :)

ECC RAM is cheap. Intel processors supporting it generally are not. You can make a cheap server out of AMD desktop processors because they support ECC. The same cannot be said of Intel: Intel charge a big premium for processors supporting ECC.

Re:Good old Slashdot (1)

marcosdumay (620877) | about 2 years ago | (#42268183)

The title itself says that this chip is targeted towards high density servers and you compare this to AMD's desktop CPUs??

Because AMD desktops come with the functionality, but lots of Intel servers don't.

Re:Good old Slashdot (2)

shiftless (410350) | about 2 years ago | (#42264341)

Then leave and demand your money back.

I'd like to leave, but I can't find any "delete my account" button. I'm pissed cause some asshole purposely modded every single post of mine down for weeks, and destroyed my karma to where I can only make two posts a day and they start at -1. I emailed the admin expressing my extreme displeasure with the situation, and he basically told me tough shit, sorry, deleted the guy but otherwise can't do shit for ya.

So you know what...let me just reiterate the GP's statement: FUCK THIS SITE, it's absolute shit these days. I hope the admin loses all his most important data in a Seagate hard drive crash.

Re:Good old Slashdot (1)

Anonymous Coward | about 2 years ago | (#42263547)

Okay smartass, what other cheap/low-end Intel CPUs support ECC RAM?
And no, an "i3" that's more expensive than an E3 Xeon and needs a C20x chipset doesn't count.

Re:Good old Slashdot (1)

fa2k (881632) | about 2 years ago | (#42266687)

The new E3 Xeon "V2" processors seem to be just a bit more expensive than the equivalent i7 processors. These are all expensive parts, but there isn't a huge premium for the Xeon. There isn't much to choose from in the mobo department, though there's an Asus that seems decent.

Imagine! (1)

Anonymous Coward | about 2 years ago | (#42263077)

A beowulf cluster of these!

Re:Imagine! (1)

edmudama (155475) | about 2 years ago | (#42269533)

And for the first time in 20 years of slashdot, a beowulf cluster joke was actually appropriate.

High density. (5, Interesting)

serviscope_minor (664417) | about 2 years ago | (#42263079)

So, it's high density and supports 1000 nodes per rack, or 2000 cores per rack, since it's dual core. At 6W TDP, that's 6kW.

Sounds great, except...

You can cram 64 piledriver cores into 1U, and they have a 140W TDP for the hottest.

So, crunching some numbers (a typical rack is 45U high).

You would need 31 Opteron servers to have as many cores. That gives... uh what? 4400W.

Hmm

So, if you buy cheapie quad socket piledriver machines, you can fit your 2000 cores into a mere 32U, and draw 2/3 of the power. Of course comparing cores discounts the quality of the cores. While AMD is known for a MOAR COAREZZZZZ1!1!!one! approach, the piledriver cores are considerably faster than Atom ones clock for clock. Generally hard to find benchmarks, but the AMD processors usually lie between the i3 and i5 in terms of single threaded performance and the i3 and i5 trounce the Atom.

This is one of the very strange things.

People keep banging on about high density servers, but even the most cursory check from a standard online price quoter almost always shows that not only are the quad Opteron machines denser, they are usually cheaper too. They also have the advantage that they offer a larger unified system image making them more flexible too.

About the only thing that's comparable in terms of price, performance and density seem to be those intel machines where you can cram 4 dual socket machines into 2U. The quad socket Intel boxes are more expensive.

So, what gives?

Can anyone enlighten me?

What's the appeal?

Re:High density. (2)

serviscope_minor (664417) | about 2 years ago | (#42263103)

Oops!

Out by a factor of 4 on the Opterons.

2000 Opteron cores would cost you 17,000W, not 6000.

Still, given that 2000 Opteron cores will be much faster than 2000 Atom cores, it's going to be much closer.

The Opterons are still denser, however, and almost certainly competitive on power.
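
Redoing that arithmetic with the corrected numbers (a quick sketch; the TDPs, core counts, and densities are the figures quoted in this thread, not measured data):

    # Rack-level comparison using the thread's own figures (Python sketch).
    atom_nodes = 1000                    # claimed nodes per rack
    atom_cores = atom_nodes * 2          # dual-core S1200
    atom_watts = atom_nodes * 6          # 6W TDP per node -> 6,000 W

    opteron_cores_per_1u = 64            # 4 sockets x 16 Piledriver cores
    opteron_watts_per_socket = 140
    servers = -(-atom_cores // opteron_cores_per_1u)   # ceil(2000/64) = 32
    opteron_watts = servers * 4 * opteron_watts_per_socket

    print("Atom rack:    %d cores, ~%d W" % (atom_cores, atom_watts))
    print("Opteron rack: %dU, %d cores, ~%d W"
          % (servers, servers * opteron_cores_per_1u, opteron_watts))

This gives roughly 6 kW for the Atom rack against roughly 18 kW for 32U of quad-socket Opterons, consistent with the ~17 kW figure above (which rounds down to 31 servers).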

Re:High density. (0)

Anonymous Coward | about 2 years ago | (#42263611)

Math + thinking fail! Guessing you probably don't run any real volume of modern servers. With the performance per core these days, most servers aren't CPU limited. Certainly not in the world of web, and usually not in DB. Generally memory (even at hundreds of gigs) and IO limits come much sooner. Aside from scientific, semantic search/analysis, etc., MOST server workloads are not going to be CPU bound.

Re:High density. (2)

lorenlal (164133) | about 2 years ago | (#42264381)

Well then, you still have the 6366 HE, which has a thermal rating of 85W... Which translates to 10,625 W for the 2000 cores. I'm not sure what the throughput maximum is... But I'm willing to wager it compares favorably to 10kW of Atoms.

Full Disclosure: I'm a sad sad AMD fanboy.

Re:High density. (1)

gman003 (1693318) | about 2 years ago | (#42263779)

The use case for these isn't compute-intensive.

Imagine running static-content webservers on these. Your main bottlenecks are going to be disk and network (and maybe memory), not CPU. Or maybe running an NFS share, or anything else where the spinning disc is the biggest obstacle.

Also, do some idle-power comparisons between the Atom and the Opteron*. Maybe they use the same power under peak load, but what happens when half your processors are idling? I would imagine the Atoms do much better about dropping to very low power for this.

* Actually, it might make more sense to compare it to Xeons. Intel is trying to cover all possible server chip markets. They don't care if an AMD chip covers this niche, they just need an Intel chip to be down there.

Re:High density. (1)

serviscope_minor (664417) | about 2 years ago | (#42264071)

Imagine running static-content webservers on these. Your main bottlenecks are going to be disk and network (and maybe memory), not CPU.

In that case, you'd presumably go for the lowest end Opteron processors which only draw 85W or so, giving you the same kind of thing for less power.

Though interestingly, if IO is really a problem, then they could offer a solution quite easily: the Opteron processors connect to both the chipset and each other using HT. You could put two 6xxx Opterons in one box and use the four spare full sized HT links to connect to four extra chipsets to provide PCIe.

Those could probably provide an unholy amount of IO bandwidth very easily.

Also, do some idle-power comparisons between the Atom and the Opteron

I assume you'd power down any servers not in use, in both cases since even at idle the memory will draw plenty of juice.

Re:High density. (1)

marcosdumay (620877) | about 2 years ago | (#42268357)

I don't get that. If the task is not compute-intensive, why do you want so many cores?

You solve disk throughput by offloading the disks to specialized servers (SAN), and you solve memory throughput by having more servers... And then, you can only increase density and memory throughput at the same time if you go with a custom server design, and fewer cores here equals less power and thus more density.

Re:High density. (1)

bored (40072) | about 2 years ago | (#42263545)

You can cram 64 piledriver cores into 1U, and they have a 140W TDP for the hottest.

I don't really think this chip is aimed at AMD; it's aimed at ARM (and friends). The ARM guys have been making a lot of noise lately about how ARM is perfect for the datacenter, and this chip is just Intel pointing out that if you want a whole bunch of "low" power, crappy performance CPUs, they can provide them too.

Even the name is an indication of that: Atoms are CPUs aimed at the ARM market, Xeons are CPUs aimed at the server market. If they want to shoot at AMD's turf, they shave some cache off the E7 series Xeons and lower the price.

Many people have pointed out that even Intel's Xeons are competitive with the ARM server vendors: sure, they draw 5-10x the wattage (per core), but they get 20x the performance per core too. So many of the 1U machines actually give better throughput. Look at Supermicro's "fat twin" machines. The 1U fat twin can take 4 E5-26xx's, or 32 cores per 1U. Or 1440 cores in your 45U rack.

Re:High density. (1)

serviscope_minor (664417) | about 2 years ago | (#42263663)

I mean, why do people think they want ARM servers or these funky "high density" ones which are anything but?

I guess the absolute minimum power draw is lower, but if you've got a rack full of 45 machines, you're probably expecting a utilisation of greater than 2%.

The Supermicro machines (Intel and AMD based) give excellent performance in price, power draw, throughput and density. All the new ones seem to be more expensive, less dense and more of a pain in the ass.

Re:High density. (1)

pavon (30274) | about 2 years ago | (#42263653)

Because those 31 64-core piledriver machines won't be able to push the same amount of IO as 1000 2-core Atom machines.
These things aren't for compute intensive tasks. Intel's own advertising comparing them to Xeons shows the Atoms having twice the performance-per-watt for scale-out tasks, but half the performance-per-watt for compute intensive tasks. It is about providing another option to better match the processor to the task. And it is here today, while 64-bit ARM is still a year into the future.

Re:High density. (2)

serviscope_minor (664417) | about 2 years ago | (#42263767)

Because those 31 64-core piledriver machines won't be able to push the same amount of IO as 1000 2-core Atom machines.

How so?

In your 1U, you get 4 processors, 64 cores, and 4 PCI Express 2.0 x16 slots, giving 32 GB/s per U, or about 1TB / s for the rack of 31 machines. You'll also get a bunch (12?) SATA ports or so for your troubles and a couple of gig-E ones too, if you care for such things.

Remember, Opteron processors are popular for supercomputers which rely on very high speed, very low latency interconnects, like infiniband to share vast quantities of data very quickly with neighbouring nodes.

Re:High density. (1)

Kjella (173770) | about 2 years ago | (#42265483)

Generally hard to find benchmarks, but the AMD processors usually lie between the i3 and i5 in terms of single threaded performance and the i3 and i5 trounce the Atom.

I guess it must be hard, with the blindfold on and all. Here [anandtech.com] is a list, for example, where the FX-8350 is even beaten by the Phenom II x6 and performs worse than the Intel Pentium G840 in single threaded performance. Anyway, comparing 6W/2 = 3W against 140W/16 = 8.75W, those Piledriver cores had better do much more than one Atom core each. Intel is again trying to create a two-front war against AMD: should they go lower to match the Atoms, higher to match the Xeons, or spread themselves too thin doing both? Worst thing is, this is really just a spinoff of their smartphone/tablet work; that they release a 6W server chip is, I think, only because they can. Why risk anyone else taking the market?

On the home front (2)

Larry_Dillon (20347) | about 2 years ago | (#42263421)

I'm using an AMD E-350 as a home server on Fedora 17. It's not a gaming rig, but it has plenty of power for DHCP/DNS/file serving and can run a Windows 7 VM via KVM. CPUs are so fast these days that even a low-end/low-power offering is fast enough for many jobs. I'm glad to see Intel offering 8GB of RAM on Atom, as the older systems could only support 4GB. That's what pushed me to the AMD Bobcat/Zacate platform.

I figure it's saving me about half of the electricity versus running an older Intel PC as a server. Plus the Asus E35M1-M has decent onboard video, USB3 and plenty of SATA3 ports.

ARM comparison (1)

faustoc4 (2766155) | about 2 years ago | (#42263509)

How do they compare to ARM 64 chips in price, performance and power usage?

Re:ARM comparison (3, Insightful)

TechyImmigrant (175943) | about 2 years ago | (#42266119)

>How do they compare to ARM 64 chips?
S1200: Exists
ARM 64: Doesn't exist

Might actually be worse than the E5-2600's (1)

bored (40072) | about 2 years ago | (#42263655)

There are a number of vendors providing high-density E5 Xeons that probably beat this thing on both performance and density. Supermicro's dual twin puts 4 E5-2600s in a single U, which works out to 1344 cores in 42U.

It's quite possible that the E5 even beats it on benchmark units per watt as well, given that the Xeons probably get 5x-10x the performance per core.

This is a preemptive strike against ARM (4, Insightful)

JDG1980 (2438906) | about 2 years ago | (#42264631)

Original poster: "Amazing that it supports ECC since Intel seems committed to making you pay through the nose for stuff like that."

This article [anandtech.com] gives some insight into why Intel is doing this. Basically, ARM has been making noises for some time about getting into the server market. Intel is very concerned about this, because ARM is used to lower margins and willing to license their designs widely, and could easily undercut Intel on price. They see the writing on the wall. Sure, they would like to keep ECC and other server-type goodies as premium features, but that's no longer a realistic option. Either they have to offer something cheaper, or customers who want low-cost, high-reliability server hardware will jump ship as soon as they can. This is the market niche the Atom S1200 is designed to fill. Intel gets to tout its advantage of backwards compatibility while being able to dramatically undercut other server-grade hardware on price. With this, ARM is going to have a much harder time convincing data centers to switch.

By the way, if all you care about is ECC, you don't have to buy an expensive CPU from Intel to get that (though you do need a C-series chipset rather than the consumer-grade stuff). Many of Intel's Ivy Bridge Pentium and Core i3 processors now support ECC, though this has not been widely publicized. For example, this i3-3220 [newegg.com] is only $119.99 at Newegg and according to Intel's official site [intel.com] it supports ECC.

Re:This is a preemptive strike against ARM (0)

Anonymous Coward | about 2 years ago | (#42268579)

Impressive. I've been pondering the potential for upgrades recently. I'd been looking at Intel's offerings; they listed a few processors with ECC support, but I couldn't find those being sold anywhere. I'm curious about your impressive talent for finding an actual example of one of these.

Re:This is a preemptive strike against ARM (0)

Anonymous Coward | about 2 years ago | (#42283221)

It's actually pretty easy once you get the hang of using ark.intel.com. For example, to find all Ivy Bridge processors with ECC, just do these steps:

1. Load ark.intel.com
2. Type "ivy bridge" into the search field in the upper right (no quotes)
3. Click "Specify criteria to filter these products"
4. The filters are on the left. Scroll down and set ECC to "Yes", then click Search.

Voila, a listing of every Ivy which supports ECC. You can easily add additional filters to narrow the list down a bit (e.g., only show FCLGA1155 chips since those are the ones you can use to build your own system). Once you're done with that, write down the list of non-Xeon processors and do some searching on Newegg and so forth to find out if they're generally available.
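
If you want to post-process that list offline, one approach is to save or copy the filtered results into a CSV and filter it with a short script. This is only a sketch: the column names below ("Product Name", "ECC Memory Supported") are guesses at what such an export might contain, not a documented ARK schema.

    # Hypothetical post-processing of an ARK product list saved as ark_ivy_bridge.csv.
    # Column names are assumptions about the export format, not a documented schema.
    import csv

    with open("ark_ivy_bridge.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    for row in rows:
        name = row.get("Product Name", "")
        ecc = row.get("ECC Memory Supported", "").strip().lower()
        if ecc == "yes" and "xeon" not in name.lower():
            print(name)    # non-Xeon parts that report ECC support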

low PCI-e lanes it should have at least 16 (1)

Joe_Dragon (2206452) | about 2 years ago | (#42264821)

Low PCIe lane count. It should have at least 16 so you can have an x8 RAID card and room for, say, 10GbE / fiber / other IO cards.

Re:low PCI-e lanes it should have at least 16 (1)

edmudama (155475) | about 2 years ago | (#42269587)

Each CPU supports 8 lanes of PCIe 2.0 (4GB/s), meaning it can flush and fill its 8GB (max) of main memory from an IO device every 2 seconds, if you actually had that much IO to pump.

These things are meant to live 1000 per rack, which is ~24 CPUs per 1U. Give each motherboard a pair of 1Gbit/s Ethernet pipes, and I'm sure it's sufficient for the scale-out they expect.

These are not intended to build your normal 4U server chassis with 40 PCIe lanes.
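
The parent's bandwidth arithmetic written out (a sketch; ~500 MB/s per lane is the usual effective PCIe 2.0 rate per direction, and the 8 GB limit is the memory ceiling quoted upthread):

    # Rough per-node I/O arithmetic for the S1200, using the parent's figures (Python).
    lanes = 8
    gb_per_sec_per_lane = 0.5      # PCIe 2.0: ~500 MB/s per lane, per direction
    max_memory_gb = 8              # memory ceiling quoted earlier in the thread

    io_bandwidth = lanes * gb_per_sec_per_lane     # ~4 GB/s each way
    seconds_to_stream_ram = max_memory_gb / io_bandwidth

    print("PCIe bandwidth per node: ~%.0f GB/s per direction" % io_bandwidth)
    print("Time to read (or write) all %d GB of RAM: ~%.0f s"
          % (max_memory_gb, seconds_to_stream_ram))

Since PCIe is full duplex, reading and writing can overlap, which is where the "flush and fill every 2 seconds" figure comes from.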

The Dearly Published (1)

tzot (834456) | about 2 years ago | (#42266427)

There is one thing I'm still not clear about.

Wikipedia says that TDP is “thermal design power”. I thought it was Thermally Dissipated Power, but I obviously was wrong. Anyway, Intel used to publish TDP numbers where “T” was equivalent to “Typical”, while AMD's “T” was equivalent to “Top” (in the sense of maximum). Has this changed? Are the S1200's 6W a Typical or a Top value?

Re:The Dearly Published (1)

jittles (1613415) | about 2 years ago | (#42266721)

TDP is the maximum amount of power the thing should ever draw. So if your TDP is 85W, it could be anywhere between 0 and 85W depending on whether it's powered on and what the workload is. I have a Sandy Bridge 35W TDP i3 that runs on ~12W most of the time.

Re:The Dearly Published (1)

tzot (834456) | about 2 years ago | (#42273413)

So your reply is “Yes, this has changed, and 6W is the maximum power that the S1200 SoC should draw.”

Thank you.

Re:The Dearly Published (1)

jittles (1613415) | about 2 years ago | (#42274845)

Hmm, someone posted a link in response to me indicating that Intel says that TDP and the current draw are different. I know that the TDP is used as an indication of how much cooling you need. I also know that you will not see 100% of the watts being converted to heat, so I could be wrong. I posted that based on my own testing with a watt meter when trying to build a very low power box. I did various tests of idle and max usage consumption and never saw the wattage go above the TDP. For my 35W TDP processor I believe the max I ever saw was 40W with all peripherals, etc. But that was a year ago, so perhaps I am misremembering, too.

Re:The Dearly Published (0)

Anonymous Coward | about 2 years ago | (#42283851)

Hmm someone posted a link in response to me indicating that Intel says that TDP and the current draw are different. I know that the TDP is used as an indication of how much cooling you need.

Yes, it is exactly that.

If you graph a CPU's power use on the vertical axis and time on the horizontal, you'll get an extremely spiky curve. For thermal management purposes, however, you don't really care about these narrow spikes. The processor die, heatspreader, heatpipes, and heatsink collectively have enough thermal mass to make short duration spikes irrelevant.

So, Intel gives different specifications depending on what kind of engineer you are. If you're an electrical engineer designing the power supply and associated circuitry, you have to be aware of worst case spike behavior. (Note that I'm talking about the last layer of power supply here, the one on your motherboard which converts 12V from your PSU to the ~1V needed by the CPU's core.) If you read through Intel's datasheets carefully, you'll find all kinds of fascinating and deeply technical power supply jargon concerning how to pull it off.

If you're a mechanical/thermal engineer designing the cooling system, however, all you care about is the long term average. TDP is the worst-case continuous average. A 95W TDP means you need to be able to move 95W of thermal power out of the CPU indefinitely without pause, while maintaining the CPU temperature below another specified limit, which Intel also provides.

(Note that while the motherboard regulator designer does have to handle electrical power peaks well above what the TDP rating might suggest, the average electrical power is still identical to the average thermal power.)

In one sense, TDP hasn't really changed much over the years. It's always been a stake in the ground: "you must design your computer to handle this much thermal power to handle this processor, and that's that". But there is one key difference: today's CPUs are much more likely to hit TDP and stay there.

Back when CPU voltages and frequencies were fixed, there was a large spread in average power depending on what code you ran. For such CPUs, Intel would generate artificial code which had the absolute worst power consumption possible (a "power virus"). They'd characterize and quote TDP based on running power virus code, but users would seldom see a CPU actually use that much power.

Starting with Pentium 4, Intel began to develop new technologies permitting the CPU to sense and dynamically regulate its own power use. It was pretty crude in P4, mostly just good for preventing that notoriously hot processor from frying itself, but today it's been developed into a sophisticated scheme for extracting as much performance as possible inside the rated TDP limit. If you buy an i7-3770K, it's rated for 77W TDP and 3.5GHz minimum speed. When running typical software at 3.5 GHz it won't actually be at 77W, so its on-chip power monitor boosts the CPU clock speed and voltage until it does hit 77W (or the upper limits of how much it's allowed to boost, whichever comes first).

They've gotten good enough at this game that if you buy the top processor in a given TDP bin, and run 100% load on all cores using any kind of software you like, there's a pretty good chance you'll see the chip's thermal management nail the TDP rating exactly. Slower models in the same TDP bin usually don't hit full power, however -- Intel wants to allow CPUs with differing performance levels to be interchangeable parts, so they set TDP ratings based on the fastest CPU in each segment rather than giving design engineers a customized rating for every CPU model.

I also know that you will not see 100% of the W being converted to heat, so I could be wrong.

Actually, 100% of the electrical power flowing into the CPU should be converted into heat. No place else for it to go.
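
A small illustration of the "worst-case continuous average" point above: instantaneous draw can spike well past TDP, but the cooling system only has to handle the sustained average. The numbers here are synthetic, not measurements of any real CPU.

    # Synthetic power trace: sustained draw near 75 W with rare short spikes (Python).
    import random
    random.seed(0)

    samples = []
    for _ in range(10000):                 # pretend these are 1 ms samples
        watts = random.gauss(75, 3)
        if random.random() < 0.01:         # roughly 1% of samples spike hard
            watts += 60
        samples.append(watts)

    peak = max(samples)
    average = sum(samples) / len(samples)

    print("peak instantaneous draw: %.1f W  (the VRM designer's problem)" % peak)
    print("sustained average draw:  %.1f W  (what TDP and the heatsink must cover)" % average)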

Re:The Dearly Published (1)

TechyImmigrant (175943) | about 2 years ago | (#42284225)

>Actually, 100% of the electrical power flowing into the CPU should be converted into heat. No place else for it to go.

Nope. Some of the current flows back out of the pins and is dissipated in the receiving circuit on another chip. The pin drivers consume a fair chunk of the current when they're wiggling at full pelt.

Re:The Dearly Published (0)

Anonymous Coward | about 2 years ago | (#42293617)

>Actually, 100% of the electrical power flowing into the CPU should be converted into heat. No place else for it to go.

Nope. Some of the current flows back out of the pins and is dissipated in the receiving circuit on another chip. The pin drivers consume a fair chunk of the current when they're wiggling at full pelt.

Yes, I know, but as you mention that I/O power is ultimately dissipated as heat too. I was being careful in how I worded that -- didn't say where it would get converted to heat. ;)

(and yeah, this means that technically it's slightly wrong to say that TDP == average electrical power, but then again I/O power going out should be close to balanced by other chips' I/O power coming in, so it's a pretty good approximation in practice.)

Re:The Dearly Published (1)

TechyImmigrant (175943) | about 2 years ago | (#42295545)

Yes, yes and yes.

Re:The Dearly Published (1)

tzot (834456) | about 2 years ago | (#42273693)

Intel still declare that their TDP is *not* maximum draw in their Measuring Processor Power: TDP vs ACP [intel.com] paper, though, so I am not sure whether you answered from personal experience/knowledge or from plain theory.

Re:The Dearly Published (1)

jittles (1613415) | about 2 years ago | (#42274793)

Well, I know the TDP has to do with the heat released by the processor. But I can only tell you that my watt meter suggests the TDP roughly matches the load I see on the meter. Of course there are other peripherals drawing power as well, and the motherboard uses some itself. Also, you will not see 100% of the watts being converted to heat, so I suppose it's possible that the TDP would be somewhat lower than the actual draw. But I was specifically trying to create a low-wattage system and had the meter hooked up through various tests of idle and max output. If Intel says differently, then I must be wrong.