
Will "Group Hug" Commoditize the Hardware Market?

samzenpus posted about 2 years ago | from the working-together dept.


Will the Open Compute Project’s Common Slot specification and Facebook’s Group Hug board commoditize the data center hardware market even further? Analyst opinions vary widely, indicating that time and additional development work may be necessary before any sort of consensus is reached. At the Open Compute Summit last week, Frank Frankovsky, director of hardware design and supply chain operations at Facebook, announced both the Common Slot specification and Facebook’s prototype Common Slot board, known as “Group Hug.” Group Hug’s premise is simple: disaggregate the CPU in a way that allows virtually any processor to be linked to the motherboard. This has never been done before with a CPU, which has traditionally required its own socket, its own chipset, and thus its own motherboard. Group Hug is designed to accommodate CPUs from AMD, Intel, and even ARM vendors such as Applied Micro and Calxeda.


This would be awesome.. (1)

earlzdotnet (2788729) | about 2 years ago | (#42649763)

if only it came true. Even if politics weren't involved, it still wouldn't be easy at all, I imagine. It'd require processors either to rely on a standard memory controller or to implement their own, along with all sorts of other similar trade-offs between performance and compatibility.

Re:This would be awesome.. (1)

Jeng (926980) | about 2 years ago | (#42649851)

I would figure the memory would be on the daughter card with the processor. That way the main motherboard wouldn't need to be compatible with all the different memory choices, just have to be compatible with the daughter card.

Re:This would be awesome.. (1)

h4rr4r (612664) | about 2 years ago | (#42649873)

If memory and CPU are on the daughter card, how is this any different from a blade chassis?

Seems like doing that removes pretty much all the value from the project.

Re:This would be awesome.. (1)

Wesley Felter (138342) | about 2 years ago | (#42650131)

It's not any different from blades. Actually, Group Hug is not hot-swappable, so it's worse than blades, but probably cheaper.

Re:This would be awesome.. (1)

MBCook (132727) | about 2 years ago | (#42650523)

It's basically a return to the backplane days of the 8080/8086, except that memory has to stay on the CPU card for speed reasons.

Re:This would be awesome.. (1)

Amouth (879122) | about 2 years ago | (#42650773)

Just what I was thinking. I looked over at an old shelf and wondered if I could sell them an old backplane box as an example of how to make it work. It was so nice to just drop in another CPU card as you needed.

Re:This would be awesome.. (1)

swb (14022) | about 2 years ago | (#42651043)

Why not make memory its own card type and have optical interconnects for memory? That should allow enough speed for memory access and with a common interface standard you could design your CPU to do it natively or have a translation controller on your CPU card.
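For what it's worth, raw flight time isn't the obvious killer here. A rough back-of-envelope sketch (editorial, assuming ~5 ns/m propagation in fiber and ~50 ns for a DRAM access; both are ballpark figures, not anything from TFA) suggests rack-scale distances add only a modest latency penalty:

```python
# Back-of-envelope: latency cost of putting DRAM on a separate optical card.
# Assumptions (ballpark, not from any spec): light in fiber moves at about
# c/1.5, i.e. ~5 ns per meter; a DDR3 random access takes ~50 ns.

FIBER_NS_PER_M = 5.0
DRAM_ACCESS_NS = 50.0

for distance_m in (0.1, 0.5, 1.0):
    round_trip_ns = 2 * distance_m * FIBER_NS_PER_M
    total_ns = DRAM_ACCESS_NS + round_trip_ns
    print(f"memory {distance_m:.1f} m away: +{round_trip_ns:.0f} ns flight, "
          f"{total_ns:.0f} ns total ({total_ns / DRAM_ACCESS_NS:.2f}x local)")
```

What the sketch ignores is the serialization/deserialization at each end of the link, which is where most of the real latency and cost would come from.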

Re:This would be awesome.. (1)

CastrTroy (595695) | about 2 years ago | (#42650099)

I think that is the basic idea, which is why the whole idea won't work. Basically, they are sawing the motherboard in two: the CPU and memory are on the daughterboard, and the rest of the components (SATA, USB3, PCIe slots, sound, video outputs) are all that remain on the motherboard. I think it might provide for some interesting options, where you could have a CPU board that accepted a new chip without having to go out and re-buy the part that the peripherals plug into. Or you might be able to get a motherboard that has lots of space for RAM but only a single PCIe slot (or none at all) for space savings. Most boards seem to have either few PCIe slots and few RAM slots or many of both; you don't see many with a combination of the two.

Re:This would be awesome.. (1)

mlts (1038732) | about 2 years ago | (#42651037)

I wouldn't mind a system of going to a completely passive backplane architecture, although with electrical signal distances, this likely wouldn't be really doable until we have the ability to get optical signals onto the fiber from the chip die itself (which means a lot of muxing/de-muxing since having tons of optical connections would be a lot harder than solder pads.)

Re:This would be awesome.. (0)

Anonymous Coward | about 2 years ago | (#42651741)

Not sure if this is what you are after.

Advantech [advantech.net.au] passive backplanes. I have designed systems using these at work, and whilst they are a little expensive compared to Dell or HP etc., the tech support and long manufacturing runs are great for environments which require supportability, are tightly configuration-managed (software and hardware), or have interface requirements such as an ISA bus.

Other companies, for example ICP [icpamerica.com], seem to have similar offerings.

I mean this as an actual question. (1)

FatLittleMonkey (1341387) | about 2 years ago | (#42652863)

I think that is the basic idea, which is why the whole idea won't work. Basically, they are sawing the motherboard in two: the CPU and memory are on the daughterboard, and the rest of the components (SATA, USB3, PCIe slots, sound, video outputs) are all that remain on the motherboard

Why would it work any worse than a graphics card? Isn't that the same thing? GPU and memory on a daughterboard with a fast interface to the motherboard.

Re:This would be awesome.. (1)

butlerm (3112) | about 2 years ago | (#42654073)

Basically, they are sawing the motherboard in two: the CPU and memory are on the daughterboard, and the rest of the components (SATA, USB3, PCIe slots, sound, video outputs) are all that remain on the motherboard

It is actually a micro-server architecture. Think small form-factor blade servers with an optional PCIe interconnect, optional remote SATA devices, and one mandatory ethernet interface, all running through what looks like an ordinary PCIe slot, but isn't.
http://www.opencompute.org/wp/wp-content/uploads/2013/01/Open_Compute_Project_Micro-Server_Card_Specification_v0.5.pdf [opencompute.org]

Intel (1)

binarylarry (1338699) | about 2 years ago | (#42649765)

I don't see why this is a good thing for Intel. They own the x86 market, and ARM is a strong competitor for the future.

How does opening up the market help them?

Umm, is there an article here? (4, Interesting)

caseih (160668) | about 2 years ago | (#42649789)

All I see are links to other Slashdot articles. Are we going for a new record here? First the ridiculous post about Microsoft selling their entertainment division, now this. And the same style of headline too, which of course is answered with, "No."

Mr Editor, can you at least post a link to some information, like maybe the site where this specification is detailed? Maybe the project web site itself?

Re:Umm, is there an article here? (0)

girlintraining (1395911) | about 2 years ago | (#42650059)

Mr Editor, can you at least post a link to some information, like maybe the site where this specification is detailed? Maybe the project web site itself?

The editors have been outsourced. Now, a team of twenty people who have English as an eleventh language review every submission and green-light only those that meet the criteria spelled out in the three-ring binder. The three-ring binder itself was created from a 7-line Perl script, written by a subcontractor from China, who was hired by a contractor for Dice, who recently acquired the Slashdot brand identity, who shows up once every two weeks to collect his paycheck and update the seed in the random number generator the 7-line Perl script uses.

Re:Umm, is there an article here? (0)

Anonymous Coward | about 2 years ago | (#42650335)

Clearly you don't speak 11 languages, so you don't value the other 10. (I only speak 3).

Re:Umm, is there an article here? (1)

MachineShedFred (621896) | about 2 years ago | (#42650603)

If they have actual links to real articles, then it isn't nearly the electronic masturbatory exercise you see in front of you here.

You can't link to your own shit if you have real information to link to...

Re:Umm, is there an article here? (0)

Anonymous Coward | about 2 years ago | (#42652189)

Umm, is there an article here?

Umm... this is Slashdot. Do links to such fictitious reading material matter?

Re:Umm, is there an article here? (0)

Anonymous Coward | about 2 years ago | (#42652771)

http://www.opencompute.org/2013/01/16/ocp-summit-iv-breaking-up-the-monolith/

I'm guessing "Slashdot Editor" at this point is nothing more than a perl script.

Re:Umm, is there an article here? (2)

butlerm (3112) | about 2 years ago | (#42653881)

Here is a link to an actual specification. If you read it, you will see that about half of what has been written about this announcement is wildly off base. We are talking micro-servers here, complete with on-board CPU, RAM, boot EEPROM, flash storage, and Ethernet. PCIe and SATA connections to the backplane are optional. Think small form factor blade server.
http://www.opencompute.org/wp/wp-content/uploads/2013/01/Open_Compute_Project_Micro-Server_Card_Specification_v0.5.pdf [opencompute.org]

S100 anyone? (3, Insightful)

cdrguru (88047) | about 2 years ago | (#42649809)

One architecture that supported "variable CPUs" was S100, where it was typical to have a CPU card, one or more memory cards, and multiple I/O cards all plugged into a backplane. There were CPU cards for the Apple ][, but these were complete computers on a card that simply allowed use of the Apple ][ I/O.

Given today's multi-gigahertz processors with gigahertz memory access, I would think it would be difficult, if not impossible, to effectively separate the CPU and the memory by very much. Similarly, it gets pretty complicated with high-speed DMA I/O when you move it away from the memory it is accessing. I'm sure it could be done, but the performance is going to suffer just from the physical distances. Add in connector resistance and noise and you have ample justification for putting the CPU, chipset, and RAM in a very small module that then plugs into the rest of the computer for I/O.
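To put numbers on the distance argument, here is a quick editorial sketch; the ~15 cm/ns figure is a common rule of thumb for signal speed in FR4 traces (roughly half the speed of light), not something from TFA:

```python
# How far a signal gets down a PCB trace in one CPU clock cycle.
# Assumes ~15 cm/ns propagation in FR4, a common rule of thumb.

PROPAGATION_CM_PER_NS = 15.0

for clock_ghz in (1.0, 2.0, 3.5):
    cycle_ns = 1.0 / clock_ghz
    reach_cm = PROPAGATION_CM_PER_NS * cycle_ns
    print(f"{clock_ghz:.1f} GHz: one cycle = {cycle_ns:.2f} ns, "
          f"signal covers ~{reach_cm:.1f} cm one way")
```

At 3.5 GHz a signal covers only about 4 cm per cycle before any connector or termination effects, which is the case for keeping CPU, chipset, and RAM together in one small module.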

Re:S100 anyone? (1)

bluefoxlucid (723572) | about 2 years ago | (#42649847)

That's a motherboard. CPU, RAM, and chipset are on the motherboard. There's some PCI slots to plug other shit into.

Re:S100 anyone? (1)

vlm (69642) | about 2 years ago | (#42650223)

You would probably find googling for N8VEM SBC v2 to be very interesting. S100 lives! As does a Eurocard connectorized version of the same idea, more or less.

http://n8vem-sbc.pbworks.com/ [pbworks.com]

I have the partially assembled system on my workbench. I need a nice blizzard to keep me inside soldering; that'll take care of it. It's all antique through-hole instead of modern SMD (which I find harder to work with), and certainly much bigger, but it's no big deal.

Add in connector resistance

At least the N8VEM design has a standard PC Molex connector on the ECB CPU board. You don't need to use it, but you can dump power in that way. Very much like a modern graphics card having a dedicated power line rather than drawing current from the motherboard.

Re:S100 anyone? (1)

Anonymous Coward | about 2 years ago | (#42650401)

Yeah, I loved the "This has never been done before with a CPU" line, as I have a couple of S100 systems sitting in my shed. I am constantly amazed by today's "youth" who know so little (read: nothing) of computing's past.

The late '70s and early '80s were probably the era of greatest diversity of computing ideas there has ever been, and perhaps it was even more "open" than today: users could buy complete service manuals for their computers (and have a good chance of fixing them!), there was a ton of info about the OSes, and there were often multiple OSes for a single hardware platform (the TRS-80 had TRS-DOS, NewDOS80/90, UltraDOS, CP/M, MultiDos, DOSPlus, LDOS).

What MS-DOS/Windows/MacOS brought to the table was compatibility. There were hundreds of different 5 1/4" disk formats, many totally incompatible with anything else, never mind the proliferation of other disk sizes (Sinclair Microdrives, Amstrad 3", and lots of others). I think we traded interesting for compatible. (And yeah, try sending a file over RS232 from one machine to another, then using a multi-disk formatter to convert the file into another disk format so you could get your file from a C64 -> Kaypro 4 -> Cromemco... those were the days... sigh. And then the S100 RS232 card turned up, which removed one step.)

Magazines back then were worth buying. 80 Micro, Dr. Dobb's, Compute!, etc. were full of articles on how to build something in hardware, and then had full software listings too. They were written by computer people acting as journalists; today they are written by journalists who rehash manufacturers' handouts, which is why I haven't bought a magazine for 15 years (online stuff is heading the same way).

"never been done before" - lols (1)

decora (1710862) | about 2 years ago | (#42651003)

Almost everything we see in consumer devices has been done before in some market, or at the NSA (the latter of which will not talk about it, but we know because of James Bamford).

it's true (1)

globaljustin (574257) | about 2 years ago | (#42651099)

I came along just a bit later, but this part especially is true:

"users could buy complete service manuals for their computers (and have a good chance of fixing them!), there was a ton of info about the OS's"

The virtually complete absence of true user manuals to this day baffles/angers me.

When I took 'computer class' in the mid-90s we still learned mostly in versions of DOS, and we used 5 1/4" and 3 1/2" floppies (mostly the latter).

We could only afford 2 computers that could run the current version of Windows.

Re:S100 anyone? (2)

mlts (1038732) | about 2 years ago | (#42651109)

Probably one of the better magazines I bought was the old Computer Shopper, before it shrank into a "regular"-size magazine. Stan Veit's articles were always a treat, and even the ads were useful, back when there were tons of white-box makers (Arche, Bell, Austin PC, etc.).

The early Mac magazines were like this as well. If you had a special device that could scan, you could actually scan a page out of the magazine and get a couple of useful applications each month.

I do miss the good magazines that weren't just ads, and ads masquerading as new-product reviews.

On one hand, the shift from engineer to tinkerer to professional to drool-cup was inevitable, but on the other hand, there is something to be missed about getting a magazine with something worth reading by a very knowledgeable author.

Re:S100 anyone? (1)

Agripa (139780) | about 2 years ago | (#42651061)

Not all of the CPU cards for the Apple ][ had on board memory. The popular Z80 Softcard used the motherboard memory which made it slower than other Z80 expansion cards.

Re:S100 anyone? (2)

exomondo (1725132) | about 2 years ago | (#42651215)

Given today's multi-gigahertz processors with gigahertz memory access, I would think it would be difficult, if not impossible, to effectively separate the CPU and the memory by very much. Similarly, it gets pretty complicated with high-speed DMA I/O when you move it away from the memory it is accessing. I'm sure it could be done, but the performance is going to suffer just from the physical distances. Add in connector resistance and noise and you have ample justification for putting the CPU, chipset, and RAM in a very small module that then plugs into the rest of the computer for I/O.

If they were just moving the CPU to a card then yes, but apparently they aren't:

Intel, another key member of the Open Compute Project, announced it would release to the group a silicon-based optical system that enables the data and computing elements in a rack of computer servers to communicate at 100 gigabits a second.
More important, it means that elements of memory and processing that now must be fixed closely together can be separated within a rack, and used as needed for different kinds of tasks.

http://bits.blogs.nytimes.com/2013/01/17/facebooks-other-big-disruption/ [nytimes.com]

Re:S100 anyone? (2)

butlerm (3112) | about 2 years ago | (#42654005)

More important, it means that elements of memory and processing that now must be fixed closely together can be separated within a rack, and used as needed for different kinds of tasks.

This statement is in reference to Intel's proposal, which is still vaporware. I seriously doubt they are talking about locating main memory away from the processors. That would more or less be suicidal.

Facebook's design certainly does no such thing.
http://www.opencompute.org/wp/wp-content/uploads/2013/01/Open_Compute_Project_Micro-Server_Card_Specification_v0.5.pdf [opencompute.org]

Re:S100 anyone? (1)

exomondo (1725132) | about 2 years ago | (#42654343)

Well, yes. Like I said, the concern would be there only if they were just moving the CPU to an external card, which they aren't doing in the server card spec (they are proposing memory goes on the card as well), and which they wouldn't be doing with the Intel technology, which provides additional high-speed optical links. Either one of those solutions mitigates the issues in the first post I responded to.

Re:S100 anyone? (1)

Anonymous Coward | about 2 years ago | (#42651515)

Add to that CompactPCI, VME, VME64, VME64x, PXI, VXI, VXS, VPX, OpenVPX.

I probably forgot some, but it seems there are more computer-bus form factors that don't call for a specific CPU than ones that do.

Re:S100 anyone? (1)

ncc74656 (45571) | about a year ago | (#42669975)

There were CPU cards for the Apple ][, but these were complete computers on a card that simply allowed use of the Apple ][ I/O.

Most of these were just a CPU (usually a Z80) and the minimal logic necessary to take over from the 6502 on the motherboard. A relatively small handful of cards included their own RAM; it was far cheaper to use what was already in the computer.

The only Apple II expansion card that comes to mind that really was a complete computer on a card was the Applied Engineering PC Transporter [applearchives.com] , which had an 8088-compatible CPU, up to 768K RAM, an MFM floppy controller, CGA-compatible graphics that could also drive an analog RGB monitor (commonly used with the IIGS), and most of the other bits that would make up a complete PC/XT-compatible computer. More recently, a Carte Blanche [applelogic.org] could be configured as a nearly standalone computer, running in an Apple II or on a board that provides Apple II expansion slots.

Backplane (1)

Mikkeles (698461) | about 2 years ago | (#42649939)

Also see wire-wrapped and bit-sliced.

What is the use case for regular IT (0)

Anonymous Coward | about 2 years ago | (#42649961)

Perhaps I just don't get the use case for anyone other than someone like Google or Facebook. In the grand scheme of things for companies running enterprise applications, compute just isn't that large a portion of overall costs and certainly doesn't need to be dynamically swappable. The operational and change management costs of tracking and provisioning CPUs dynamically would greatly outweigh the benefits.

Re:What is the use case for regular IT (2)

vlm (69642) | about 2 years ago | (#42650167)

In the early 90s one of our mainframes blew a CPU, so the IBM CE replaced it while the system continued running. Zero reboot time because it wasn't rebooted. Much like you can swap hard drives in a NAS array while it runs.

There's really nothing new in IT. A couple of years back, a VMware image of mine got moved to another machine mostly seamlessly. Oh, it was "frozen/down" for a minute or so, but it promptly unfroze on the new hardware. Not nearly as advanced as the mainframe was 20 years ago, but someday modern IT might be that advanced again... or maybe not, hard to say.

Re:What is the use case for regular IT (1)

lister king of smeg (2481612) | about 2 years ago | (#42650287)

VMware doesn't have live migration? I know VirtualBox does, though I have never had the opportunity to need it yet.
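VMware has in fact had live migration for years under the name vMotion; the parent's story describes exactly that. As a minimal illustration of the same idea with open tooling, here is a sketch using the libvirt Python bindings (the host and guest names are hypothetical placeholders):

```python
# Minimal live-migration sketch via the libvirt Python bindings.
# "src-host", "dst-host", and "myguest" are hypothetical placeholders.
import libvirt

# Connect to the source and destination hypervisors over SSH.
src = libvirt.open("qemu+ssh://src-host/system")
dst = libvirt.open("qemu+ssh://dst-host/system")

# Find the running guest and move it while it keeps running;
# the signature is migrate(dconn, flags, dname, uri, bandwidth).
dom = src.lookupByName("myguest")
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
```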

having trouble finding it (0)

Anonymous Coward | about 2 years ago | (#42650013)

but I remember a motherboard vendor that semi-did this
yes, it was your typical motherboard
but it had a PCIe card with CPU + RAM + chipset you could slot in
pretty sure the board was Intel while the slot-in was AMD
no reason it couldn't work the other way

Re:having trouble finding it (1)

Panaflex (13191) | about 2 years ago | (#42650473)

You might be thinking of Slot A/B mounted CPUs from AMD and Alpha Processor Inc. They were compatible slot designs where you could plug in Athlon or Alpha 21264 CPUs. AMD licensed the slot design from Alpha.

Unfortunately, I don't think it ever made much of a dent in the market.

might be thinking of the 1970s (1)

decora (1710862) | about 2 years ago | (#42651035)

I remember the psych department at the university had an 'old computer' historical display set up in one of their windows. The 'motherboard' was just a bunch of slots you would fit wire-wrapped boards into: one was the CPU board, one was memory, whatever.

Not to mention all of the "upgrade your PC" cards from the 1980s: put a 286 CPU-on-a-card into your 8088 "IBM XT"; heck, you could even put a PC card in your Mac.

I'm pretty sure 'industrial' users (airplanes, etc.) have had similar setups.

Where is the ROI? (1)

alen (225700) | about 2 years ago | (#42650037)

The CPU is a small part of the cost of the server.

What is the point in doing this? Where is the return on investment?

Re:Where is the ROI? (1)

JaredOfEuropa (526365) | about 2 years ago | (#42650183)

I'm not sure what the idea is, but it seems they hope it will open up the market. Suppose there's a company putting a new CPU on the market. Today, they have to come up with a motherboard as well, or convince one of the big boys to design one around their new CPU. With this architecture they only need to design the much simpler CPU card around their CPU. This lowers the barrier to entry and means more competition, which is nice for big datacenters like Amazon and Facebook, who buy servers by the boatload. It could also mean that this allows them to standardize on datacenter hardware, which could significantly lower cost and management effort.

Dunno how feasible this is; I know bugger all about datacenters and the last time I looked at computer architecture was back in the 68000 days.

Re:Where is the ROI? (1)

Wolfrider (856) | about a year ago | (#42670793)

--Yeah, I don't know if this is really going to take off. (In general) Universal = generic = NOT optimized for speed/efficiency, etc...

Re:Where is the ROI? (1)

MachineShedFred (621896) | about 2 years ago | (#42650655)

Well, this is to show those scumbags at Intel and AMD that refuse to create products for their competition who's boss! I mean, why wouldn't they spend extra time and money to create a bunch of connections that their customers aren't going to use, and probably make their products perform worse by introducing unneeded complexity?

Never mind that we did already have a "universal" CPU socket, or at least one as close as it mattered. It was called Socket 7, and it fit Intel / AMD / VIA CPUs. And it was abandoned by all three CPU manufacturers.

Re:Where is the ROI? (1)

butlerm (3112) | about 2 years ago | (#42654125)

It is not a CPU slot specification. It is a micro-server slot specification, which is much more practical. Think small form factor blade server. The PCIe part is actually optional.

Commoditize? You keep using that word. (2, Insightful)

DaveV1.0 (203135) | about 2 years ago | (#42650137)

I do not think it means what you think it means. Something that is a commodity product is fungible, meaning any individual product from any of various vendors is effectively interchangeable with any product of the same kind from any other vendor. Computer hardware has been commoditized for a long time. While processors are not wholly interchangeable (AMD vs. Intel), the motherboard/CPU combo generally is. Everything else in a computer can more or less be swapped out with a different brand with the same or similar features. All pricing is based on cost and perceived value. The only way it could be more of a commodity is if someone came up with a way to plug any processor into any motherboard socket.

Oh, and bonus points to anyone who can guess why the retail companies are moving away from separate box systems to all-in-ones. HINT: Look at the upgrade path for laptops.

Already 'commodity' (2)

Junta (36770) | about 2 years ago | (#42650323)

If you want it to be, server hardware is already commoditized. All the priciest components are interchangeable (you *can* buy DIMMs from wherever and cram them in your server from a technology standpoint). Apart from Intel and AMD playing this game where the IOH is generally affine to some particular CPU generation, things are already there (and the IOH is a pretty inexpensive part however you slice it, and could already be subbed out for a PCI-e constructed device if they saw fit). Now the catch is that for *most* traditional IT shops, there continues to be value in well-integrated systems. This will not change that picture.

I see this as another step toward two goals:
-Getting ARM into the datacenter in some reputable fashion (which may or may not make sense, depending on whether a compelling performance-per-watt case can be made that offsets the energy/manufacturing gap that might be incurred from requiring more packages to get to the desired performance).
-Rebranding 'whitebox'. White box vendors are viewed as the low-cost alternative to HP/Dell/IBM, but image-wise they are viewed as anywhere between 'unacceptably bad' and, at best, 'just as good' by a select portion of the market. Putting cost aside, no one thinks of white box as 'better' than the expensive names. A lot of Open Compute at the system level is the same thing that has been the reality for the last decade with a shiny new name. The same standards that everyone already followed are getting highlighted more explicitly. This is the opportunity, through marketing, to change minds to say 'better' in some cases, or at least make the 'unacceptable' segment of the market take another look.

If this allowed multiple CPUs on a motherboard... (1)

Ardyvee (2447206) | about 2 years ago | (#42650345)

You could probably have an ARM low-load, low-energy-consumption processor and a nice high-performance processor on the same board. You'd then manage when the high-performance one activates, and you could probably swap either (assuming hot-plug) without taking the system offline... It's nice to dream, isn't it?

do the cards have room for 4-8 / 6-12 ram slots (2)

Joe_Dragon (2206452) | about 2 years ago | (#42650381)

Do the cards have room for 4-8 / 6-12 RAM slots each? And yes, that's full-size RAM.

Re:do the cards have room for 4-8 / 6-12 ram slots (1)

Todd Knarr (15451) | about 2 years ago | (#42651493)

In a setup like this you wouldn't put the RAM on the CPU card. It'd go on the backplane interconnect, independent of the CPU. Think the PDP-11 Unibus or the VAX-11 Synchronous Backplane Interconnect, which are where I first encountered the concept of a backplane and independent CPU, memory, co-processor and I/O processor modules. I doubt they originated there, though, my guess is the concept goes back to the IBM mainframes of the 60s. It was an amusing cycle: external modules would migrate onto the CPU board for performance reasons, then the on-board interconnect would mutate into a new backplane and everything would migrate out to external modules for maintenance and upgradability reasons, and then the cycle would repeat.

Re:do the cards have room for 4-8 / 6-12 ram slots (1)

butlerm (3112) | about 2 years ago | (#42653449)

The memory is going to stay on the processor cards. It would be somewhere between slow and ridiculously slow (by modern standards) to do anything else. The slot interface is PCIe x8. An I/O interconnect. Not memory, certainly not SMP. More like a tightly coupled cluster.
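Rough numbers behind "slow to ridiculously slow" (an editorial sketch using nominal peak rates; the spec itself doesn't spell these out):

```python
# Nominal peak bandwidth of a PCIe x8 link versus local DRAM.
# Per-lane figures are the usual published rates after encoding overhead.

PER_LANE_GB_S = {
    "PCIe 2.0": 0.5,    # 5 GT/s with 8b/10b encoding ~= 500 MB/s per lane
    "PCIe 3.0": 0.985,  # 8 GT/s with 128b/130b encoding ~= 985 MB/s per lane
}
DDR3_DUAL_CHANNEL_GB_S = 2 * 12.8  # dual-channel DDR3-1600

for gen, per_lane in PER_LANE_GB_S.items():
    x8 = 8 * per_lane
    ratio = DDR3_DUAL_CHANNEL_GB_S / x8
    print(f"{gen} x8: {x8:.1f} GB/s; local DDR3-1600 dual-channel "
          f"({DDR3_DUAL_CHANNEL_GB_S:.1f} GB/s) is {ratio:.1f}x faster")
```

Even before latency enters the picture, an x8 link offers a small fraction of local memory bandwidth, which is why the cards keep their own RAM and treat the slot purely as I/O.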

Re:do the cards have room for 4-8 / 6-12 ram slots (0)

Anonymous Coward | about 2 years ago | (#42655649)

You mean... if it doesn't move into the processor package...

Re:do the cards have room for 4-8 / 6-12 ram slots (1)

butlerm (3112) | about 2 years ago | (#42654043)

No. These are micro-servers we are talking about. Two RAM slots typical. Low power energy efficient CPUs too. ARM to start with.

Take a look:
http://www.opencompute.org/wp/wp-content/uploads/2013/01/Open_Compute_Project_Micro-Server_Card_Specification_v0.5.pdf [opencompute.org]

pci-e X8 is limited IO why not at least X16? (1)

Joe_Dragon (2206452) | about 2 years ago | (#42650391)

PCIe x8 is limited I/O; why not at least x16?

x8 can be used up by one video card on its own.

Re:pci-e X8 is limited IO why not at least X16? (1)

butlerm (3112) | about 2 years ago | (#42654185)

This is intended for server use. No video output required. The PCIe part is actually optional. I wouldn't expect to see this in workstations anytime soon, not without a major redesign at any rate. The form factor is designed for small, low power processors. The interface is not designed for SMP or anything like that either.

Re:pci-e X8 is limited IO why not at least X16? (1)

dacut (243842) | about 2 years ago | (#42654845)

They're only using the PCIe x8 physical connectors; the electrical signals do not resemble PCIe at all.

Presumably, they're also relocating the actual slot location to avoid stupid errors (like plugging one of these into an actual PCIe x8 slot or vice-versa).

No (1)

larry bagina (561269) | about 2 years ago | (#42650563)

I'll wait for Happy Ending.

Re:No (1)

dkf (304284) | about 2 years ago | (#42655189)

By Betteridge's Law, I am forced to agree with you.

What is the invention here? (2)

niks42 (768188) | about 2 years ago | (#42650643)

Looking at the photos of the backplane, it looks like S100 era technology. Where is the trendy stuff? I want to see hermetically sealed illuminated glass-like blocks, changing color and sliding out automatically on detection of failure, a high-bandwidth optical interface on each edge, power inductively coupled to avoid metal connectors, an eerie surround sound voice saying "Dave ... Stop" ..

Re:What is the invention here? (0)

Anonymous Coward | about 2 years ago | (#42654091)

I'm with you on that! I have a Cray-3 gallium arsenide logic module sitting on my desk... This is bleeding-edge tech from the early '90s that even today looks like something out of an alien spacecraft. Somehow we all ended up going down the wrong path. I blame Intel and Microsoft!

Re:What is the invention here? (1)

Areyoukiddingme (1289470) | about 2 years ago | (#42655065)

In other words, an iPhone. Minus the data connectors and inductive power. And the rack.

Ya know, I think you're on to something. If Intel can take their optical interconnect out of the lab, it just might be possible. At a reasonable price. It's possible now, for an unreasonable price.

And that's the machine SGI would be building, if SGI were anything but a shadow of its former self.

"Compute" is a verb stem (0)

Anonymous Coward | about 2 years ago | (#42650795)

I compute
He computes
The computer is busy computing.

Open Computing or Open Computation is correct. "Open Compute" is not.

I suppose these people are the same set who would say "I have a drive license" or "The pilot passed his fly test".

No point really (0)

Anonymous Coward | about 2 years ago | (#42650805)

The commodity interconnect that allows scalability is at the Ethernet layer, not the CPU socket. More and more of the motherboard is going inside the CPU's silicon. The integration there is less interesting and making it modular at that layer will only increase cost.

Faster networks, cheaper switch fabrics, and more compact low-power CPUs will have a bigger impact on the density of a cabinet.

Good (1)

lightknight (213164) | about 2 years ago | (#42651273)

And add in some optical links so we can finally scale motherboards to something awesome.

Being limited to certain designs / lengths because of electrical circuitry...madness.

Been done before circa 1974 (1)

ipb (569735) | about 2 years ago | (#42651285)

the digital group -
http://www.pc-history.org/digital.htm [pc-history.org]
http://www.bytecollector.com/the_digital_group.htm [bytecollector.com]

Re:Been done before circa 1974 (0)

Anonymous Coward | about 2 years ago | (#42654475)

the digital group -
http://www.pc-history.org/digital.htm [pc-history.org]
http://www.bytecollector.com/the_digital_group.htm [bytecollector.com]

Exactly. There was a run of DEC XL servers that could run Pentium or Alpha chips; just swap the CPU daughtercard. Not as old as your links, but I see nothing new here....

CPU cards are so 1960s (0)

Anonymous Coward | about 2 years ago | (#42652345)

This has never been done before with a CPU, which has traditionally required its own socket, its own chipset, and thus its own motherboard.

Unibus in 1969. S-100 in 1974. MBus in 1991. Even consumer-level PowerMacs had replaceable/upgradable CPU cards in the 1990s. HyperTransport in 2001. Yawn.

Costs? (1)

Matt_Bennett (79107) | about 2 years ago | (#42653141)

What I don't see in TFA is anything that describes how the one big hurdle of this type of design will be overcome: the IMMENSE costs! The speeds that processors and RAM run at are so high that you can't just drop them down on a board and expect them to work; you're in a long loop of simulate, build, test, repeat, and each iteration is extremely costly. We're not talking Arduino here. In reality, these boards (populated) in mass production will cost hundreds in BOM costs alone, not counting assembly. If the biggies in the industry are truly willing to foot the bill, great, but no matter what, these boards will still remain expensive, and likely still in the NDA wasteland, as the makers of the individual parts are unlikely to open up their documentation (or distribution chains) any time soon (yes, Broadcom, I'm talking about you).

OpenCompute (0)

Anonymous Coward | about 2 years ago | (#42653525)

Here's the link to the article from OpenCompute, with my statement below it: http://www.opencompute.org/2013/01/16/ocp-summit-iv-breaking-up-the-monolith/

The interesting part about this is that we are just re-using "old" technology. Telecom used, and still uses, this method for workloads. Standardize the backplane and let the Intels and AMDs of the world play on it, versus trying to build your own chip.

"We had announced the same system that we built over a year ago, here is my interview on the product at the arm techcon. http://www.engineeringtv.com/video/ArmBlock-16-Packs-16-ARM-Cores. We also did a joint effort with Oracle Java team https://blogs.oracle.com/oslab/entry/16_processor_arm_system at the Oracle Open World and Java One. The base idea is that the backplane is the standard and any manufacture (intel, amd, arm, tilera, power pc, etc...) can be used on the system. We felt that by using Arm in this architecture would give the community the best representation of the power to performace capabilities. if anyone has any questions I can be contacted at Stephenm@miwdesign.com."

Lots of words (0)

Anonymous Coward | about 2 years ago | (#42656417)

That's a lot of words to describe a simple cluster. If their clustering technology is better than the rest, then fine... but don't pretend like they fscking invented it, or we'll all have to pay licensing fees for decades-old tech, a la Apple.

Windows activation service (0)

Anonymous Coward | about 2 years ago | (#42661701)

A telephone support person: "Hey, I have a group hug user over the phone again."
A manager: "I can't take this anymore. Give him a virtual hug and send him to www.linux.org."
