
Open Compute 'Group Hug' Board Allows Swappable CPUs In Servers

Soulskill posted about a year ago | from the why-not-call-it-the-sunshine-and-rainbows-spec dept.

Nerval's Lobster writes "AMD, Intel, ARM: for years, their respective CPU architectures required separate sockets, separate motherboards, and in effect, separate servers. But no longer: Facebook and the Open Compute Summit have announced a common daughtercard specification that can link virtually any processor to the motherboard. AMD, Applied Micro, Intel, and Calxeda have already embraced the new board, dubbed 'Group Hug.' Hardware designs based on the technology will reportedly appear at the show. The Group Hug card will be connected via a simple x8 PCI Express connector to the main motherboard. But Frank Frankovsky, director of hardware design and supply chain operations at Facebook, also told an audience at the Summit that, while a standard has been provided, it may be some time before the real-world appearance of servers built on the technology."

82 comments

frosty piss (-1)

Anonymous Coward | about a year ago | (#42608449)

ahh, feels good.

Re:frosty piss (-1)

Anonymous Coward | about a year ago | (#42608687)

Actually, that's the next release, after Group Hug

Soon to be followed by Fetal Position, Doggy Style, and Swinging From the Chandelier

How wonderful (0)

Anonymous Coward | about a year ago | (#42608485)

And all you need is a server case to fit it in. A 21" "new" standard to go nicely between the already-existing 19" and 23" telco specs, which is just as well, because in this day and age metric is still too newfangled for teh zuck. Obxkcd and all that.

Re:How wonderful (0)

Anonymous Coward | about a year ago | (#42608893)

But, but, it can't be called "Group Hug" because we all know common hardware specifications, like CHRP [wikipedia.org], all need four character acronyms.

huh? (2)

Moses48 (1849872) | about a year ago | (#42608543)

I don't get it. Are we redesigning the whole computer architecture so we have a different group speccing out the north bridge (etc.) that all CPU manufacturers will use? Or are they just adding an additional CPU into the current architecture the same way we do with graphics cards? And then offloading work onto it, like people have been doing for a while with GPGPUs?

Re:huh? (3, Insightful)

Bengie (1121981) | about a year ago | (#42609503)

It could be like what SeaMicro does: use PCIe and a kind of "network switch", minus 10Gb NICs, cables, an actual switch, etc. Everything a bunch of nodes needs, minus a lot of overhead.

A bunch of daughter-boards that plug into a PCIe motherboard is a great idea.

Re:huh? (0)

Anonymous Coward | about a year ago | (#42609791)

Or Cray for that matter. Each XC30 blade has 4 node daughtercards all connecting to a single PCIe "Aries" chip connecting the four nodes together and tying them into the larger system over backplanes and fiber.

Re:huh? (0)

Anonymous Coward | about a year ago | (#42609607)

Back in the days of Pentium 2s, there was a "Slot 1" form factor. It was this: a secondary board that supported a range of CPUs and slid into a motherboard. It failed. It failed because the maximum throughput of the pin arrangement, although much higher than needed by the 233MHz CPUs it first carried, was insufficient for CPUs coming out shortly afterward. I think Slot 1 hit its maximum usefulness somewhere in the 700MHz range, but I do know that 900MHz and up were definitely back on many-pin motherboard sockets.

In a hypothetical scenario where CPU technology is no longer generally advancing, but is specializing, this might be a useful concept.

Re:huh? (1)

mattsqz (1074613) | about a year ago | (#42650621)

The Pentium II used this design because it did not include L2 cache in the actual CPU die, as all later CPUs did. The Pentium P54C and P55C had their L2 cache on the motherboard, at system bus speed. The Pentium Pro had the L2 cache in the ceramic CPU package, at full processor speed (150-200MHz), but this was expensive to produce. When they brought the P6 design to mainstream production, cheaper, half-CPU-speed L2 cache chips were used along with the CPU core in its own package, with both on a daughterboard in a common heatsink assembly called a Slot 1 CPU. This was both cheaper to produce than the Pentium Pro and faster than the L2-at-bus-speed design of previous CPUs. When die shrinks and memory technology advanced sufficiently to allow L2 cache to be integrated into a single CPU die along with the traditional CPU (Coppermine Pentium III), it became cheaper to manufacture just the one chip in a pinned package again, as no external cache chips needed to be included with it. This had nothing to do with a technology limitation, and everything to do with cost. Electrically, Slot 1 and Socket 370 were virtually the same, with similar pinouts.

what's on the board? (1)

PTBarnum (233319) | about a year ago | (#42608597)

I'm having trouble finding technical information about this design, and I'm curious how much of the motherboard logic has to move onto this daughterboard. For example, is memory still on the main board? If so, an x8 PCIe channel doesn't seem adequate.

Re:what's on the board? (1)

viperidaenz (2515578) | about a year ago | (#42608947)

I would assume CPU and memory on the daughter board, PSU and other shit on the motherboard.

If the PSU is also on the daughter board, there's nothing really left to go on the motherboard, so it would be completely pointless. An Intel server motherboard is nothing but power supply, CPU sockets, RAM slots, PCI-e slots and some GbE controllers.

Re:what's on the board? (3, Interesting)

Junta (36770) | about a year ago | (#42609351)

The GbE controller may or may not be part of an IO hub which would provide USB and SATA. Also there is likely to be a video device (though ARM has video as SoC as a matter of course, Intel and AMD server chips do not have Video on package... yet....).

In a server, there is usually some service processor so that software bugs don't require a physical visit to regain capacity. In terms of manageability, I'd expect some I2C connectivity (the relationship between fan and processor can get very interesting, actually). Intel processors speak PECI exclusively nowadays; I wouldn't be surprised if the standard basically forces a thermal control mechanism to terminate PECI on the daughter card and speak I2C to the fan management subsystem. This is probably the greatest lost opportunity for energy savings; a holistic system can do some pretty interesting things knowing what the fans are capable of and what sort of baffling is in place.
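
For flavor, reading a garden-variety I2C/SMBus temperature sensor from Linux looks roughly like this (a sketch only: the bus number and LM75-style sensor address are assumptions for illustration, and the PECI-to-I2C bridging described above would sit below this level, inside the BMC/fan-control firmware):

# Sketch: reading an LM75-style I2C temperature sensor from Linux.
# The bus number and 0x48 address are assumptions for illustration;
# on a real server this would sit behind the BMC / fan-control firmware.
from smbus2 import SMBus   # pip install smbus2

I2C_BUS = 1         # e.g. /dev/i2c-1
SENSOR_ADDR = 0x48  # typical LM75 default address (assumption)
TEMP_REG = 0x00     # LM75 temperature register

with SMBus(I2C_BUS) as bus:
    msb = bus.read_byte_data(SENSOR_ADDR, TEMP_REG)
    # The MSB is the signed integer part of the temperature in degrees C
    temp_c = msb - 256 if msb > 127 else msb
    print(f"inlet temp ~{temp_c} C")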

Also, the daughtercard undoubtedly brings the firmware with it.

All in all, the daughtercard is going to be the motherboard, and not much changes. Maybe you get to reuse SATA chips, gigabit, USB and 'on-board' video chips for some cost savings on a mass upgrade, but those parts are pretty cheap and even they get dated. Video, USB and gigabit might not matter for the internet datacenter of today and several tomorrows to come, but the SATA throughput is actually significant for data mining.

Re:what's on the board? (1)

TheRealMindChild (743925) | about a year ago | (#42609723)

though ARM has video as SoC as a matter of course, Intel and AMD server chips do not have Video on package... yet....

Then what are these AMD APUs [amd.com]?

Re:what's on the board? (1)

Wesley Felter (138342) | about a year ago | (#42609863)

though ARM has video as SoC as a matter of course, Intel and AMD server chips do not have Video on package... yet....

Then what are these AMD APUs [amd.com]?

APUs are not server chips. And the server ARMs don't have video either.

Re:what's on the board? (1)

Lennie (16154) | about a year ago | (#42610323)

Getting closer: the daughterboard will have a SoC. ARM has been doing SoCs for a long time. Remember there are Intel Atom SoCs too. AMD, I don't know.

Re:what's on the board? (1)

Wesley Felter (138342) | about a year ago | (#42608973)

The only thing that makes sense is that everything is on the daughterboard and the "motherboard" is basically a passive PCIe backplane (with very little bandwidth). This kind of architecture has been used in telco and embedded systems for decades.

Re:what's on the board? (0)

Anonymous Coward | about a year ago | (#42609855)

PCIe is point to point only.

Re:what's on the board? (0)

Anonymous Coward | about a year ago | (#42615261)

PCIe is point to point only.

Even so, there are PCIe switches to support multiple devices on a single PCIe link.

Re:what's on the board? (1)

LordLimecat (1103839) | about a year ago | (#42609885)

Sounds like they've reinvented the blade.

Re:what's on the board? (1)

Pikoro (844299) | about a year ago | (#42614123)

No, it sounds like they reinvented the Panda compass connector backplane. I used to have a Panda Archistrat 4s into which you could slide a PPro or a DEC Alpha, with plans to add SPARC support as well.

who gives a fuck? (1, Insightful)

hjf (703092) | about a year ago | (#42608599)

who gives a fuck?
Seriously, it's a stupid standard. Why would I want to swap the CPU for another architecture? Why would I want ARM in a high performance server? Why would I want "easy" replacement of a CPU for another kind when the rest of the motherboard isn't able to interface with it?

Why should I care about a "standard" connection whose pins will be outdated 2 years from now?
Why are "high performance, low cost" servers socketed, instead of having processors soldered to the motherboard? What dies is the motherboard, not the CPU. When the motherboard dies, the CPU is so outdated it doesn't even make sense to keep it. Why are we talking about socketed CPUs when a soldered-on one will do just fine?

Why do we keep insisting on this new, useless, "proprietary open" standard that NO ONE will use (BTX anyone? wasn't it supposed to be the next great thing and solve everything?). Why not focus, say, on a "heatsink landing" standard so I can fit ANY motherboard in a case (1U rackmount cases where the lid almost touches the processor) and have it touch the heatsink? It would even make it easier to watercool if you want.

Still trying to figure out what the deal is with all this. Still trying to figure out why racks are limited to 42U or so, instead of less dense but "taller" racks (pretty sure a custom-made datacenter like Google's or Facebook's could get away with it. WAIT. Google already does!)

Really, let facebook fuck off and die already. It'll probably be dead by the time this "standard" hits the streets.

Re:who gives a fuck? (1)

Moses48 (1849872) | about a year ago | (#42608825)

If this is just a way to swap out architectures in a box, that seems useless to me. On the other hand, if what they're providing is something akin to the hot-swap in Sun boxes, that could be useful for uptime. On top of that, if it can just add additional CPUs to scale up a system before doing a scale-out, that would be very beneficial to database managers. (Facebook is supporting this, and that is one of their main logistical problems.)

The article is sparse on details though, so someone who knows what this standard can do should chime in.

Re:who gives a fuck? (1)

hjf (703092) | about a year ago | (#42609741)

Who needs hotswap when you have a GRID!

Really, in these "highly distributed" systems, the price of redundancy is much lower than custom hardware.

A while ago I had to put together this server: http://i.imgur.com/iII52.jpg [imgur.com]
That's an IBM x3450 or 3650, I don't remember. The specs are pretty much "meh". It's a 20kg box with a HUGE motherboard in an oversized case, with 3 120mm fans (and the ability to add 3 more in "standby"). It has a socketed CPU, and to the left of it you can see a black cap - that's where the secondary CPU plugs in via a special daughterboard (!). The rest is PCI Express, RAM ("only" 8 slots), 5 gigabit Ethernet interfaces, 2 power supplies, etc.

When putting it together I figured: this server is not about raw performance. It's about reliability. It's the good ol' IBM kind, the kind you'll stick in a closet and it will keep on working 20 years from now, like those 486 BSD mailservers still out in the wild. This is the complete opposite of Facebook: the eternal search for high performance at low cost, where power costs alone justify replacing thousands of servers every couple of years.

The price? "Modest" configuration on top of the base system (16GB RAM, 2 300G SAS drives and 1 extra dual gigabit ethernet interface), nearly $4000 (this is Argentina price which is a bit higher). For $4K you could build at least 5 similar systems that won't give you the simplicity and reliability of this box, but will surely outperform it.

When you scale this to the thousands, and have your own programmers developing "grid" architectures for it, the savings of "off the shelf" hardware become much more evident. Google doesn't use off-the-shelf motherboards anymore; they use their own system, which replaces the PSU with a single 12V channel and an onboard lead-acid battery. The rest is pretty much a standard ATX board.

42U rack (1)

Wesley Felter (138342) | about a year ago | (#42609487)

I think the 42U rack height comes from the standard 7-foot door/elevator height. Some datacenters definitely have custom tall racks already, and I wouldn't be surprised to see more of that in the future.
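
The arithmetic checks out; here's a quick sanity check (the only hard number is the definitional 1.75 inches per U; the leftover space for frame, casters and clearance is just what remains):

# Rough check of the 7-foot-door theory (1U = 1.75 inches by definition).
rack_units = 42
payload_height_in = rack_units * 1.75   # 73.5 inches of mounting rails
doorway_in = 7 * 12                     # 84 inches
print(f"{payload_height_in} in of rails, leaving {doorway_in - payload_height_in} in "
      "for frame, casters and clearance under a 7 ft doorway")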

Re:who gives a fuck? (1)

Anonymous Coward | about a year ago | (#42609579)

seriously. it's a stupid standard. why would I want to swap the CPU for another architecture?

Are you trying to tell me that your servers can't run multiple instruction set binaries at the same time? You're not installing generic executables that can be run on Intel, AMD, ARM, PPC, Itanium (thanks for the tip, Marcion), etc, etc all at the same time? What kind of wallflower system admin are you? /sarcasm

BTW, if Microsoft supported this technology you could upgrade your Surface RT to a Surface Pro in a snap! /more sarcasm

Re:who gives a fuck? (2)

DaysSinceTheDoor (805570) | about a year ago | (#42609887)

What dies is the motherboard, not the CPU. When the motherboard dies, the CPU is so outdated it doesn't even make sense to keep it.

This is aimed at companies like Facebook, Google, Amazon, and the like. When managing thousands of servers, any number of components will die on a fairly regular basis. Some will die within a few weeks of going online. When you have 200k servers and a component with a 1% infant mortality rate, having the ability to quickly and easily change the component is a blessing. You can do this with just about all of the components in a server except the processor (relatively speaking).

As for why you would want the ability to switch between architectures at the drop of a hat: it's actually really simple. What is cheapest that day? If the processor in a server goes and you have to find the exact processor that is compatible with that motherboard, you cannot use the cheapest component. Most likely you will have to go with something that is more expensive. With this design, you can simply go to the bin, grab any processor card, and slap it into the server. You might have to change a flag in your server management software saying that node should now network boot the ARM image rather than the x86 image, but that is about it.
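
A rough sketch of what that flag could look like in homegrown provisioning code (the node IDs, image URLs, and inventory lookup below are all hypothetical, not any real Open Compute or vendor tooling):

# Hypothetical sketch: pick a network-boot image based on the architecture
# of whatever processor card happens to be in the node right now.
# Nothing here is real Open Compute or vendor tooling.

BOOT_IMAGES = {
    "x86_64":  "http://boot.example.internal/images/webtier-x86_64.img",
    "aarch64": "http://boot.example.internal/images/webtier-arm64.img",
}

def detect_arch(node_id: str) -> str:
    """Placeholder: in practice this would come from the node's
    management controller or an inventory database."""
    inventory = {"rack12-node07": "aarch64"}   # made-up example data
    return inventory.get(node_id, "x86_64")

def boot_image_for(node_id: str) -> str:
    return BOOT_IMAGES[detect_arch(node_id)]

if __name__ == "__main__":
    print(boot_image_for("rack12-node07"))   # -> the arm64 image URL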

Re:who gives a fuck? (0)

Anonymous Coward | about a year ago | (#42611621)

Huh? Google and Facebook have so many servers it's cheaper for them to build and stock a new data center than to waste time dicking around diagnosing and replacing obsolete servers.

Re:who gives a fuck? (1)

nabsltd (1313397) | about a year ago | (#42612131)

This is aimed at companies like Facebook, Google, Amazon, and the like. When managing thousands of servers, any number of components will die on a fairly regular basis. Some will die within a few weeks of going online. When you have 200k servers and a component with a 1% infant mortality rate, having the ability to quickly and easily change the component is a blessing.

That's why companies that don't build their own specialized hardware (the way Google does) use these [wikipedia.org], where the whole "server" is a single replaceable component.

Re:who gives a fuck? (1)

TheLink (130905) | about a year ago | (#42649973)

having the ability to quickly and easily change the component is a blessing.

AFAIK companies like Google and Facebook don't change components.

They don't even swap out individual machines- they swap out entire racks when enough of the machines in the racks are faulty.

When you have, say, 1+ million servers and only 30,000 employees total, of whom probably only 300-3,000 deal with hardware problems, you can't really inspect and fix each machine.

Those who might be interested could be companies in cheaper countries (where labour costs are much lower) that can refurbish the computers or identify and sell the parts that seem to be OK; they'd offer to buy faulty racks from Google.

Re:who gives a fuck? (1)

pla (258480) | about a year ago | (#42611615)

why would I want to swap the CPU for another architecture? Why would I want ARM in a high performance server?

Because your "high performance" server most likely has a very low demand for 90% of the day, serving up one or two requests a minute - Then needs to handle a few thousand requests per second at peak times.

Idling the horsepower without taking the server offline looks very attractive to most datacenters. Current load-balancing farms can do this to some degree, but you can have a several-minute lag on spinning up new instances, by which time the surge might have already passed. But having the high-end CPUs available on a few ms' notice while a quarter-watt ARM keeps the lights on? Pure magic!
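
A toy sketch of that policy, purely for illustration: the thresholds, the request-rate source, and the power_on/power_off hooks are all made up, and a real deployment would drive this from the load balancer and the chassis management controller:

# Toy sketch of "quarter-watt ARM keeps the lights on, big cores wake on demand".
# power_on_x86_card()/power_off_x86_card() are hypothetical hooks into whatever
# management controller the chassis exposes; they are not a real API.
import time

WAKE_THRESHOLD = 500    # requests/sec the low-power card can't absorb (made up)
SLEEP_THRESHOLD = 50    # below this, park the big cores again (made up)

def current_request_rate() -> float:
    """Placeholder: read this from the load balancer's stats endpoint."""
    return 0.0

def power_on_x86_card():  print("waking high-performance card")
def power_off_x86_card(): print("parking high-performance card")

def control_loop(poll_seconds: float = 1.0):
    big_cores_up = False
    while True:
        rate = current_request_rate()
        if rate > WAKE_THRESHOLD and not big_cores_up:
            power_on_x86_card()
            big_cores_up = True
        elif rate < SLEEP_THRESHOLD and big_cores_up:
            power_off_x86_card()
            big_cores_up = False
        time.sleep(poll_seconds)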


If you run a dedicated and saturated render farm, you probably don't care about this.

Not sure this will help an ARM system much (4, Insightful)

Marcion (876801) | about a year ago | (#42608605)

I can understand an organisation on the scale of Facebook wanting the ability to take advantage of bargains to buy processors in bulk and swap them out. I am not sure how widely applicable this is though.

The cool thing about ARM is the lack of complexity and therefore a potentially cheaper cost and greater energy efficiency. The daughter board seems to go against that by adding complexity: if you swap out an ARM chip, which might be passively cooled or have a low-powered fan, for some high-end Intel server chip, you will also need to change the cooling and PSU.

Re:Not sure this will help an ARM system much (1)

viperidaenz (2515578) | about a year ago | (#42608823)

So don't swap the CPU with a single ARM CPU.

The PSU and motherboard power delivery can handle a 130W CPU? Stick ten 13W ARM CPUs on the daughter board.

Re:Not sure this will help an ARM system much (1)

LordLimecat (1103839) | about a year ago | (#42609907)

... and watch as they get utterly annihilated in basically every spec by the 130W CPU.

There's a reason people don't do that, and it's not simply scaling problems.

Why? (4, Informative)

hawguy (1600213) | about a year ago | (#42608691)

Maybe this would be useful in some HPC environments where applications can be written to maximize the use of CPU cache, but the bandwidth of a PCIe x8 or x16 slot is a fraction of what is available to a socketed CPU.

A Core i7 has been clocked at 37GB/sec [anandtech.com] of memory bandwidth, while PCIe x8 is good for 1.6GB/sec and x16 is good for 3.2GB/sec.
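
A quick back-of-the-envelope comparison, using the 37GB/sec figure quoted above and the theoretical (not measured) per-lane PCIe rates:

# Back-of-the-envelope: how much memory bandwidth a CPU-behind-PCIe gives up.
# The 37 GB/s figure is the Core i7 number quoted above; the per-lane PCIe
# rates are theoretical maximums, so real-world numbers would be lower.

MEM_BW_GBS = 37.0   # quoted Core i7 memory bandwidth

pcie_per_lane_gbs = {
    "PCIe 1.x": 0.25,   # 2.5 GT/s, 8b/10b encoding
    "PCIe 2.0": 0.5,    # 5 GT/s, 8b/10b encoding
    "PCIe 3.0": 0.985,  # 8 GT/s, 128b/130b encoding
}

for gen, per_lane in pcie_per_lane_gbs.items():
    for lanes in (8, 16):
        link = per_lane * lanes
        print(f"{gen} x{lanes}: {link:.1f} GB/s "
              f"({link / MEM_BW_GBS:.0%} of the quoted memory bandwidth)")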

Is replacing the CPU socket with a PCIe card really worth giving up 90% of the memory bandwidth? I've never upgraded a CPU on a motherboard, even when new-generation CPUs are backwards compatible with the old motherboard, since if I'm going to buy an expensive new CPU, I may as well spend the extra $75 and get a new motherboard to go along with it.

Likewise, by the time I'm ready to retire a 3 or 4 year old server in the datacenter, it's going to take more than a CPU upgrade to make it worth keeping.

Re:Why? (1)

Junta (36770) | about a year ago | (#42608871)

<quote>Maybe this would be useful in some HPC environments where applications can be written to maximize the use of CPU cache</quote>

Actually, it would be murder on HPC applications, which generally rely on quality inter-node communication to achieve high scale.

Re:Why? (1)

hawguy (1600213) | about a year ago | (#42609343)

<quote>Maybe this would be useful in some HPC environments where applications can be written to maximize the use of CPU cache</quote>

Actually, it would be murder on HPC applications, which generally rely on quality inter-node communication to achieve high scale.

Not all HPC applications are the same, and they don't always require fast interconnects. Seti@home is one example (though maybe not a good example of an app that would run well on a computer with CPUs that rely on a "slow" interconnect to main memory).

Re:Why? (1)

Junta (36770) | about a year ago | (#42609391)

Even an embarrassingly parallel workload like you'd find in World Community Grid won't fit in CPU cache. Short of that, this architecture would bone you horribly *if* memory were on the far side of the PCIe transaction, which there's no way that it would be...

Re:Why? (1)

Bengie (1121981) | about a year ago | (#42609799)

The linked article is just talking about a common interface to access additional resources like CPU cores and memory via PCIe. There is no reason why you can't treat the daughter-boards more like co-CPUs, the way a GPU is used to offload work.

Re:Why? (0)

Anonymous Coward | about a year ago | (#42611471)

The daughter card would presumably have memory on it.
This is just a motherboard interconnect mechanism.

This sounds familiar... (1)

Bill_the_Engineer (772575) | about a year ago | (#42608759)

I think I experienced something like this almost two decades ago. It didn't pan out. No one wanted to be limited to the least common denominator.

Anyway, the little detail I see in the two linked articles is that it's simply a standardized socket for a CPU mezzanine card that mounts to a motherboard that is little more than an x8 PCI Express bus with power and some other peripheral devices/interfaces installed.

Re:This sounds familiar... (1)

Blrfl (46596) | about a year ago | (#42610163)

My thoughts exactly. We saw this 20 years ago with the CPU and bus controller on ISA cards and peripherals out on the bus.

I don't think this is a bad thing, though. It will encourage re-use of things we'd otherwise throw on the scrap heap because they happened to be soldered onto the same motherboard with all of the CPU- and bus-specific hardware. This will reduce upgrade costs, and I'd much rather see a box of small cards become obsolete than an entire rack full of servers.

Hooray for memory constrained CPU cycles? (2)

jandrese (485) | about a year ago | (#42608815)

A PCIe x8 slot is pathetically slow compared to the memory channels used by CPUs today. These CPUs are going to have to be used like GPUs, sent specific workloads on specific datasets to be useful. Any kind of non-cached memory access is going to cause major thread stalls and probably kill any performance benefits.

A general purpose compute card is probably useful in cases where GPUs aren't a good fit but you want more cores per RU than you can normally get, but I see this as a niche application for the foreseeable future.

Re:Hooray for memory constrained CPU cycles? (1)

Junta (36770) | about a year ago | (#42609009)

They can't *possibly* mean that memory would go over that. They have to mean a CPU+Memory daughtercard. As limited already as the concept is without that assumption, the system would be unusably bad if memory transactions went over such a connection.

In terms of server design, none of those names are particularly reputable. One might have *assumed* Intel would do decent system design, but anyone who has touched an Intel-designed server knows they aren't particularly good at whole-server design.

Re:Hooray for memory constrained CPU cycles? (1)

jandrese (485) | about a year ago | (#42618149)

That's not a "hot slot processor" anymore, that's a blade server. We already have blade servers, lots of them. The ATCA [wikipedia.org] demands to know why you want yet another blade server standard.

Re:Hooray for memory constrained CPU cycles? (1)

Wesley Felter (138342) | about a year ago | (#42619729)

The ATCA [wikipedia.org] demands to know why you want yet another blade server standard.

I demand to know why every ATCA blade is crazy expensive. Oh yeah, because they're telco and Facebook can't afford to overpay for carrier-grade reliability they don't need.

Re:Hooray for memory constrained CPU cycles? (1)

Blrfl (46596) | about a year ago | (#42610223)

I doubt very much they're proposing that memory be done across the PCI bus. Memory is modular and reusable, so pulling modules out of old CPU cards and snapping them into new ones helps cut the cost of upgrades and changes.

Or you could just break an ankle jumping to conclusions.

Re:Hooray for memory constrained CPU cycles? (0)

Anonymous Coward | about a year ago | (#42611491)

I just wanted to add that, after studying both of Intel's recent architecture designs, this is indeed the case. BUT you could simply put the memory on the daughter card, though then AMD would have the advantage right away. And then why not develop a HyperTransport (HTX) slot? Then again, AMD would have the advantage.

Apple did this with the 72xx/75xx/85xx/95xx/96xx, and it very nearly bankrupted the company.

Overblown... (1)

Junta (36770) | about a year ago | (#42608851)

When they say 'x8' PCIe, if that is accurate and is the entirety of the connectivity, what it really means is a standardized IO board for video, Ethernet, maybe storage. The CPU daughtercard becomes the new 'motherboard' in terms of cost. You might marginally improve service time for the relatively rare board-replace case (the CPU-replace case is already an easy repair action), but having that board segregated will drive up cost a bit.

They could mean that the x8 is the data path and some amount of I2C and power is allowed through as well, to make it more like a 'daughter' board, but the high-cost board will still be the daughter board (the CPU is pricey and the layers required for a beefy memory subsystem drive up the actual board cost). Meanwhile you have a mess for management, thermal, and power management, likely resulting in having to be wasteful with fans.

Unless they make the daughtercard a behemoth, the design is a non-starter for GPGPU and InfiniBand applications (unless the daughtercard is allowed its own PCIe slot, x8 isn't enough for even one of those applications). The daughtercard must hold the memory, which either means it will be large and expensive, or mandates soldered memory, significantly restricting the customization potential in building out a system.

imagine this scenario.... (1)

freeze128 (544774) | about a year ago | (#42608885)

Suppose you have a machine with an Intel x86 CPU in it and you swap it out for an ARM CPU. What happens?

The BIOS won't even boot! Its instructions are written in x86 machine language. Even if you somehow detect the new CPU architecture, and your BIOS has a copy of code that will run on it, your OS still won't boot!

What a novel but dumb idea.

Re:imagine this scenario.... (1)

lister king of smeg (2481612) | about a year ago | (#42611325)

More likely they will go with UEFI (BIOS is too limited) and custom firmware that contains boot code for multiple architectures, so all you have to do is swap a flag at startup, unless it detects the new CPU automatically. This could also be used as a secondary CPU, with a primary CPU still on the board for the OS and the other for number crunching and virtualization.

old, slow, crap. (0)

Anonymous Coward | about a year ago | (#42609011)

PCIe x8? 4GB a sec? That's like a Pentium 4's 133MHz quad-pumped bus. Congrats on inventing something that transfers as fast as something from 2002!

What a POS.

Bi2natch (-1)

Anonymous Coward | about a year ago | (#42609049)

bureaucratic 4n3d

Intel's not going to be pleased (0)

Anonymous Coward | about a year ago | (#42609233)

Intel doesn't want to be put in a situation where its chips and AMD's (and now various vendors' ARMs) can be shoved in the same machine. That would make the server wars too real - it would be absolutely a race to the bottom, with daughtercards being replaced with the best *OPS/watt instead of whole servers being replaced.

It will be a really damned long time before this hardware ever meets production, if at all.

Re:Intel's not going to be pleased (1)

lister king of smeg (2481612) | about a year ago | (#42611187)

It will not show up in the consumer space, at least for a very long time. It will be for enterprise systems, and will more than likely be meant for hot swapping more than arch shifting.

80's passive backplane industrial rises again (1)

silas_moeckel (234313) | about a year ago | (#42609549)

With an x8 PCIe interface, this sounds like most of the motherboard is on the card. You would be hard pressed to get storage IO and network IO through that interface, forget memory. It's not going to handle many 40Gb InfiniBand adapters, or they will move onto the card. This sounds a lot like the old industrial single-board computers of the '80s and '90s.

How does it work for HTX CPUs? RAM linked to CPUs? (1)

Joe_Dragon (2206452) | about a year ago | (#42609583)

How does it work for HTX CPUs? RAM linked to CPUs? QPI CPUs?

Both AMD and Intel now have the memory controller built into the CPU.

All AMD CPUs use HyperTransport to link them to the chipset / other CPUs. There were also plans for HTX slots at one time.

Intel uses QPI in its higher-end desktop systems and in multi-CPU servers / workstations.

Backplane (2)

DavidYaw (447706) | about a year ago | (#42609813)

So the CPU is on the daughterboard. Everything that's specific to the CPU type (north/south bridges, etc.) would have to go on there as well. Likely the memory as well.

Congratulations. They've re-discovered the backplane [wikipedia.org] and single board computer.

Re:Backplane (1)

loufoque (1400831) | about a year ago | (#42611551)

Indeed, I personally don't see how it is any more than a standard backplane interface.

Re:Backplane (0)

Anonymous Coward | about a year ago | (#42617745)

It isn't. It will not be important for desktops, but it will be for servers and maybe high-end gaming PCs. Mid- to high-end servers are expensive because they want to have the minimum amount of downtime and they also have to be flexible so they adapt to the workload. Manufacturers currently offer a lot of models to satisfy all kinds of workloads. The backplane will allow them to offer fewer base models with more options. So customers will get the flexibility they want at better prices (since they will not be paying for things they don't need).

Didn't PICMG standardize this already? (1)

Dr_Harm (529148) | about a year ago | (#42610011)

And how is this any different from the PICMG 1.x standards?

http://www.picmg.org/v2internal/resourcepage2.cfm?id=8 [picmg.org]

Lots of people have been building systems around this technology for years using passive backplanes.

Re:Didn't PICMG standardize this already? (1)

mattventura (1408229) | about a year ago | (#42613721)

Haven't read it, but I assume that it's not a passive backplane but rather an active one.

Re:Didn't PICMG standardize this already? (0)

Anonymous Coward | about a year ago | (#42615253)

Haven't read it, but I assume that it's not a passive backplane but rather an active one.

PICMG 1.1 is a rather old specification with fully passive backplanes and ISA/PCI connectivity. A newer version, PICMG 1.3, is updated for use with PCI Express expansion.

x186 - x286 - x386 (1)

kimgkimg (957949) | about a year ago | (#42610235)

Okay, this didn't work in the x186 days, so I guess it's been long enough that it needs a redux. I remember when they were selling machines that were essentially a backplane, and one of the things you could plug into it was your CPU daughtercard. These were being sold as a way to "futureproof" (remember that buzzword?) your system. Turns out that was only good for a few CPU cycles...

Re:x186 - x286 - x386 (0)

Anonymous Coward | about a year ago | (#42611583)

Yes. You could go from an Intel 8088 in an IBM PC to an Aboveboard i386 card and use all your same peripherals.
You could literally skip the 286 processor architecture.

And again, with a CPU upgrade you could go from a 486 to a Pentium,
or a Pentium Pro to a Pentium II.

It really does not sound like an architecture breakthrough but rather an economics hack.

Read the spec, it isn't PCIe (1)

anth (2631) | about a year ago | (#42610869)

http://www.opencompute.org/wp/wp-content/uploads/2013/01/Open_Compute_Project_Micro-Server_Card_Specification_v0.5.pdf [opencompute.org]

It uses that connector, but the signals are Ethernet, SATA, etc. RAM and optionally mSATA are on board.

Re:Read the spec yourself, it is PCIe (0)

Anonymous Coward | about a year ago | (#42613567)

It's PCIe x4 + 2*SATA2 + 1*GbE + 1*USB2.0 + 1*RS232 + power ... over a physical PCIe x8 connector.
So yes, it IS PCIe. And some other stuff.
== yet another fucking pointless host board "standard".

Seen it before (1)

I_Wrote_This (858682) | about a year ago | (#42611935)

A few years back I had a VMware slice that was misbehaving, and IT support reckoned that adding a second CPU might help.

That change left the /proc/cpuinfo file insisting that I had one Intel and one AMD processor in the system.

[It didn't fix the problem - nothing ever did. I had it shut down after 2 years of persistent problems and no useful work. No one ever did explain why it would totally freeze (system clock stationary) for 60*n seconds.]

Will the OS stay up if a CPU goes down? (0)

Anonymous Coward | about a year ago | (#42615185)

I wonder if the machine will stay up without rebooting while isolating the failed CPU, like how things work today.
We have some machines where you can reduce or expand memory in an online way, but I don't see a kernel behaving normally if one of the processors goes down due to failure.

I would say everyone has missed the real point... (1)

TwineLogic (1679802) | about a year ago | (#42651539)

The purpose of this connector is to allow the connection of signal-processing co-processors. Two-dimensional video signal processing, similar to that sold at great cost by Texas Memory Systems, is greatly useful to Facebook, Google+, and all other cloud properties which track identity by facial recognition.

The image-processing algorithms are not as easily distributed as the search indexing. An approach which cuts through the problems at much better cost is the dedicated image-processing SIMD-pipeline style of processor.

In other words, the purpose of mezzanine connectors is so that you can have a motherboard with, e.g., two AMD processors on it. You can then add your choice of accelerator via the mezzanine:
* A 2-D signal (image) processor
* A video compression/decompression accelerator
* A SHA256 / Bitcoin collision accelerator ASIC
* An extremely high-bandwidth interface for a data acquisition peripheral

The purpose of mezzanine connections has historically been just this: to provide heterogeneous processor expansion to a motherboard.