
Creating a Low-Power Cloud With Netbook Chips

timothy posted more than 5 years ago | from the but-what's-the-wattage-equivalent-cfl dept.

Supercomputing 93

Al writes "Researchers from Carnegie Mellon University have created a remarkably low-power server architecture using netbook processors and flash memory cards. The server design, dubbed a 'fast array of wimpy nodes,' or FAWN, is only designed to perform simple tasks, but the CMU team say it could be perfect for large Web companies that have to retrieve large amounts of data from RAM. A set-up including 21 individual nodes draws a maximum of just 85 watts under real-world conditions. The researchers say that a FAWN cluster could offer a low-power replacement for sites that currently rely on Memcached to access data from RAM."
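A FAWN-style key-value cluster has to map each key to one of its many wimpy nodes. As a rough sketch of the idea (this is not the actual FAWN-KV code; the node names, virtual-node count, and hash choice are all invented for illustration), a consistent-hash ring over 21 nodes might look like:

```python
import hashlib
from bisect import bisect_right

class WimpyRing:
    """Toy consistent-hash ring mapping keys to low-power nodes."""

    def __init__(self, nodes, vnodes=64):
        # Each physical node gets several virtual positions on the
        # ring so load spreads evenly across the cluster.
        self.ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self._points = [h for h, _ in self.ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key):
        # First ring position clockwise of the key's hash.
        i = bisect_right(self._points, self._hash(key)) % len(self._points)
        return self.ring[i][1]

ring = WimpyRing([f"fawn{i:02d}" for i in range(21)])
print(ring.node_for("user:1234"))  # deterministically one of fawn00..fawn20
```

The point of the ring (as opposed to `hash(key) % 21`) is that adding or removing a node only remaps a small fraction of the keys.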


Toy. (0)

Anonymous Coward | more than 5 years ago | (#27604863)

Hey, let's do it because we can! Maybe, with all of our netbook nodes, we can get a web page to load in 10 seconds instead of over 100!

While we're at it, let's make a Beowulf cluster of iPods!

Re:Toy. (2, Interesting)

jonsmirl (114798) | more than 5 years ago | (#27605765)

You can beat this with an array of Pogoplugs at $99 each. They draw under 5W and have 512MB RAM, 512MB flash and GbE. Stick a 64GB USB stick into it. They're about 3 in. square.

Pogoplug [pogoplug.com] is the same thing as a Marvell SheevaPlug [marvell.com].

Re:Toy. (1)

WebCowboy (196209) | more than 5 years ago | (#27617475)

You can beat this with an array of Pogoplugs at $99 each

Beagleboards might be more expensive and lower capacity per node, but they have more processing horsepower (the OMAP3x platform includes a DSP capable of rendering 3-D and HD video) and draw half the power. They run from 5 V DC and could share one power supply, whereas each Pogoplug has its own transformer/rectifier supply. You might get more horsepower per watt, for less work, from a Beagleboard.

Only drawback: need to supply an interconnection solution to match the GbE supplied by the plugs...but there are solutions that would work...

Re:Toy. (1)

jonsmirl (114798) | more than 5 years ago | (#27618523)

The Pogoplug power supply is on a separate PCB, you can just unplug and discard it.

The application is web servers; the Beagleboard's video hardware would just draw extra power.

The Marvell ARM CPU scores 2/3 of Intel Atom on integer benchmarks. It doesn't have an FPU.

Cloud? (1, Insightful)

Anonymous Coward | more than 5 years ago | (#27604909)

Didn't we have another term for this before all this cloud hype?

Imagine a beo... Beef? Bud?

I can't remember. My brain can't fight all these buzzwords.

Re:Cloud? (1)

fractoid (1076465) | more than 5 years ago | (#27607687)

Buzzlewolf cloister. I think. And something about running Listix.

But no, just like every other friggin product that uses more than one CPU, it is now a 'cloud'.

Re:Cloud? (0)

Anonymous Coward | more than 5 years ago | (#27608587)

Buzzlewolf cloister. I think. And something about running Listix.

No no, that's Cluster. Cluster, Cloister, Cloister, Cluster the...stupid?

Re:Cloud? (0)

Anonymous Coward | more than 5 years ago | (#27608755)

Now I have a mental picture of sheep-dolphin hybrids frolicking in waist-deep water on Fiji.

oblig..... (3, Funny)

omar.sahal (687649) | more than 5 years ago | (#27604913)

imagine a beo..... oh forget it
I tried but I couldn't resist. I reloaded three times and I was still first post.

Re:oblig..... (0)

Anonymous Coward | more than 5 years ago | (#27605113)

But will it run Crysis?

Re:oblig..... (4, Funny)

Anonymous Coward | more than 5 years ago | (#27605773)

But will it blend?

Re:oblig..... (-1, Redundant)

AdmiralXyz (1378985) | more than 5 years ago | (#27605119)

Um... it's quite easy to imagine a Beowulf cluster of these, just RTFA. That's what it is.

Re:oblig..... (2, Funny)

Dachannien (617929) | more than 5 years ago | (#27605151)

Except they gave it a totally pansy acronym. I mean, come on, Beowulf ripped Grendel's arm off and nailed it above the door to the hall as a trophy. The only thing notable that a fawn ever did was watch its mom get killed by hunters.

Re:oblig..... (1)

ozbird (127571) | more than 5 years ago | (#27606559)

Well yes, but even Beowulf had to "FAWN" to get the grant funding for his monster hunting expedition.

Re:oblig..... (0)

Anonymous Coward | more than 5 years ago | (#27607019)

Except they gave it a totally pansy acronym. I mean, come on, Beowulf ripped Grendel's arm off and nailed it above the door to the hall as a trophy. The only thing notable that a fawn ever did was watch its mom get killed by hunters.

Oh, I dunno. [youtube.com]

Re:oblig..... (1)

david.given (6740) | more than 5 years ago | (#27612789)

I mean, come on, Beowulf ripped Grendel's arm off and nailed it above the door to the hall as a trophy.

Yes, but the whole effect was rather spoilt when Grendel's mother stormed down to the hall later to complain. I mean, his mother. What is this, kindergarten?

The only thing notable that a fawn ever did was watch its mom get killed by hunters.

The book's worth reading. Prior to disneyfication, Bambi kicks righteous ass when he grows up.

Re:oblig..... (1)

wastedlife (1319259) | more than 5 years ago | (#27614795)

The book's worth reading. Prior to disneyfication, Bambi kicks righteous ass when he grows up.

Wouldn't have been a fawn at that point, would he?

Re:oblig..... (5, Funny)

Samschnooks (1415697) | more than 5 years ago | (#27605131)

Thank you. You saved Slashdot from this:

I'm thinking of a Fast Array of Gigabyte Systems or "FAGS" as opposed to FAWN.

Imagine talking to your admin in front of PC type of folks,

"Hey Lou, did you get those new FAGS? The last ones broke down and were a real pain in the ass!"

"No Joe, we still have those old FAGS. The holes in those things were so big, anything could get in."

"Yeah, I know it. They were pigs too. Some of the fuses went. Things really got blown!"

"I tell ya! I tell ya! Hey, how are the boys in San Francisco? I heard the FAGS vendor is really sticking it up their asses."

"Sort of. They were happy with their shot and reciprocated on the terms."

"Ah, good."

Re:oblig..... (0)

Anonymous Coward | more than 5 years ago | (#27605395)

Why all this talk about cigarettes :-)

Parent contradicts self (0)

Anonymous Coward | more than 5 years ago | (#27606547)

> "You saved Slashdot from this"

You just put it on Slashdot. Thanks for ruining my night, nerd!

Score 4? Really? (1)

hdon (1104251) | more than 5 years ago | (#27607637)

Come on, now

Cradle to Grave (4, Insightful)

lymond01 (314120) | more than 5 years ago | (#27605021)

When I started this post, I was thinking that the overall power cost of building 21 computers that run at 85 W might exceed that of building one 1000 W computer with 32 GB of memory, if you take the whole process from manufacturing to disposal.

But I suppose it's the electric bill of the company we're concerned with so I'll just sit in the corner and re-read Bambi.

Re:Cradle to Grave (1)

maxume (22995) | more than 5 years ago | (#27605071)

If the array is cheaper to buy, probably not. Especially if it uses more materials and is still cheaper.

I guess buying the highest performance Intel chip would throw that off quite a bit, but I doubt that is what you were talking about.

Re:Cradle to Grave (2, Interesting)

drizek (1481461) | more than 5 years ago | (#27605225)

I think with the grave you are screwed either way, but with the cradle you should keep in mind that Atom processors are TINY. In fact, they are about one tenth the size of a Nehalem processor, meaning they require roughly a tenth of the resources to produce. Assuming they replace a dual-socket system, you break even; a quad-socket system gives the Atoms the win. The real problem is going to be manufacturing all those motherboard chipsets.

Re:Cradle to Grave (1)

derGoldstein (1494129) | more than 5 years ago | (#27605731)

They also don't require cooling (or very little of it compared to server CPUs), and economies of scale kick in *way* faster: how much of a price reduction would you get if you ordered them in batches of 100?

Not to mention, the profit margins on those tiny systems are inherently lower than those on server hardware. Even if you used "only" 20 of them, you'd probably get more bang for the buck than if you spent an equivalent amount of money on Nehalem systems.

The bigger issue would be networking and software, but I think most of those solutions can be found off the shelf; you just need to do some homework.

Re:Cradle to Grave (1)

BikeHelmet (1437881) | more than 5 years ago | (#27605917)

It's probably not that different. These processors have smaller dies, so making a half-dozen of them or a regular desktop CPU probably takes the same amount of power.

Re:Cradle to Grave (2, Insightful)

chill (34294) | more than 5 years ago | (#27606237)

The single 1000W computer is also a single point of failure.

Re:Cradle to Grave (1)

flappinbooger (574405) | more than 5 years ago | (#27606805)

You'd have to look at computing power per watt to run, per dollar to buy, and per watt of cooling, since watts correlate directly to dollars. The Atoms are slow, but 21 of them at under 100 W is definitely interesting and undoubtedly useful.

I've been waiting for something like this, I don't think it's coincidence that Intel named this chip the atom. It's small and insignificant by itself, but add enough together and you get some interesting things....

At what point does it become smarter to have a whole slew of the atom chips vs relatively few of a traditional clustered processor?
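That crossover point can be framed as a queries-per-joule comparison. A toy calculation of the arithmetic involved (every throughput and wattage figure here is a made-up assumption, not a measured benchmark):

```python
import math

# Hypothetical numbers purely to illustrate the perf-per-watt question.
big_qps, big_watts = 50_000, 1000   # one assumed beefy server
atom_qps, atom_watts = 1_500, 4     # one assumed Atom-class node

print(big_qps / big_watts)    # 50.0 queries per joule
print(atom_qps / atom_watts)  # 375.0 queries per joule

# Nodes needed to match the big server's throughput, and their total draw:
nodes = math.ceil(big_qps / atom_qps)
print(nodes, nodes * atom_watts)  # 34 136
```

Under these assumptions the Atom cluster matches throughput at roughly an eighth of the power; the catch is that the workload has to parallelize cleanly across 34 nodes.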

Re:Cradle to Grave (1)

afidel (530433) | more than 5 years ago | (#27607559)

1000W? What freaking system draws 1000W and only has 32GB?!?!? One of my most power-hungry systems is a DL585 G5 with 2x 3.2 GHz quad-core Opterons, 128GB of RAM and a few HBAs, and it typically draws less than half that.

Re:Cradle to Grave (1)

fractoid (1076465) | more than 5 years ago | (#27607721)

Dude, SLI desktop systems can easily require a 750W power supply already. I can't see it being that hard for a beefed up system to use an extra 30%. RAM on its own isn't that power-hungry anyway, compared to CPUs.

Re:Cradle to Grave (1)

afidel (530433) | more than 5 years ago | (#27607839)

Oh yes it can be. Intel Core 2-based Xeon systems fully loaded with FBDIMMs can easily use as much power for RAM as for CPU.

Re:Cradle to Grave (1)

geekboy642 (799087) | more than 5 years ago | (#27607935)

Servers don't use SLI unless they're very special-purpose. Easily half the power in that sort of system is supporting the graphics card. A 1000 watt system with only 32GB of ram is either very inefficient, or doing something that a slashdot poster isn't qualified to pontificate about.

Re:Cradle to Grave (1)

gbjbaanb (229885) | more than 5 years ago | (#27608813)

1000W, what freaking system draws 1000W and only has 32GB?!?!?

My Desktop PC and its PSU [overclock3d.net] , you insensitive clod!

Re:Cradle to Grave (1)

petermgreen (876956) | more than 5 years ago | (#27615253)

How much does your system actually draw? Have you ever measured it?

Re:Cradle to Grave (1)

zombie_monkey (1036404) | more than 5 years ago | (#27608243)

I know no one reads TFA, but it's worse when someone skims the summary and doesn't pay attention to it.

You should have read the following sentence rather than skimming past it: "A set-up including 21 individual nodes draws a maximum of just 85 watts under real-world conditions."

Re:Cradle to Grave (1)

zombie_monkey (1036404) | more than 5 years ago | (#27608273)

Hm, OK, I may have misunderstood your point, but only because it doesn't make much sense. What matters here is whether the price difference would be compensated for by power bills over their lifetime.

Simple economics (2, Interesting)

dj245 (732906) | more than 5 years ago | (#27612325)

This argument is misinformed.

Businesses are in business to make money and put food on the table. Nobody does anything for free. If I build a widget and it costs me $10 in electricity, $5 in heating, and $3 in cooling, my widget is going to be $18 more expensive as a result. Now, I don't do things for free, so I'll just add $18 to the cost of my widget. Probably $20 because I want some more markup for my trouble.

Energy costs are always included in anything you buy. If buying Widget Z instead of Widget Y comes out ahead on initial plus electrical cost, it is probably also less energy-intensive when you consider the whole system.
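The lifetime comparison the thread keeps circling around is a simple break-even calculation. A back-of-envelope sketch (the 1000 W and 85 W figures come from the thread; the $0.10/kWh rate and the $2000 price premium are assumptions for illustration):

```python
# How long until the FAWN cluster's power savings repay a higher
# purchase price? Electricity rate and price premium are assumed.
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_energy_cost(watts, dollars_per_kwh=0.10):
    return watts / 1000 * HOURS_PER_YEAR * dollars_per_kwh

savings = annual_energy_cost(1000) - annual_energy_cost(85)
print(f"annual savings: ${savings:.0f}")          # annual savings: $802
print(f"break-even: {2000 / savings:.1f} years")  # break-even: 2.5 years
```

Cooling roughly doubles the savings in a real data center, since every watt dissipated is another watt (or so) of air conditioning.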

Re:Cradle to Grave (1)

UltimApe (991552) | more than 5 years ago | (#27615571)

It's 85 watts for the entire setup, not per node. ;)

Next Generation (3, Interesting)

glitch23 (557124) | more than 5 years ago | (#27605043)

This is the next generation of the Beowulf cluster, built on the next generation of hardware: cooler, cheaper CPUs plus solid-state storage and memory. Someone was bound to come up with this idea because it just makes sense. It's good to have a proof of concept now, so someone else can take the idea and improve on it. Eventually hardware manufacturers will take notice, and for companies that want cooler, cheaper servers this would be a good fit once it's packaged as a COTS product.

Re:Next Generation (1)

derGoldstein (1494129) | more than 5 years ago | (#27605845)

Google already uses COTS hardware for their servers [slashdot.org] .

This may be a step forward in terms of modularity and scalability, however. Rather than 1AAA shipping containers, the server "batches" could be the size of refrigerators and powered by one power supply "per-fridge".

At any rate, I especially agree with your latter statement: hardware manufacturers will be forced to take note, as this gradually becomes more common.


Re:Next Generation (1)

dbIII (701233) | more than 5 years ago | (#27606315)

There was "green destiny" in 2002 running on the Transmeta Crusoe TM5600. It makes sense for the right job and these sort of machines don't need a lot of cooling.

Re:Next Generation (1)

joib (70841) | more than 5 years ago | (#27608575)

Something like the SGI Molecule [gizmodo.com] , perhaps.

Oh yeah, RIP SGI.

Re:Next Generation (0)

Anonymous Coward | more than 5 years ago | (#27619743)

Also, didn't IBM show that their Blue Gene supercomputer (which is effectively a lot of wimpy nodes with high-tech packaging and a fast interconnect) can run cloud workloads? Previous Slashdot article on Project Kittyhawk [slashdot.org] .

AMD Geode? (2, Insightful)

TheRaven64 (641858) | more than 5 years ago | (#27605073)

Weird choice. I suggested a while ago using a set of OMAP3 chips in blade servers. You can get a 600MHz ARM CPU, 512MB of flash and 256MB of RAM in a single package-on-package module with a power draw of under 1W. Put an 8x8 grid of them on a board and you've got a nice little wedge of server power at well under 64W. Use a big SAN elsewhere in the rack and you've got a set of machines you can bring online easily for individual users. You could assign individual ones to different users / customers and just plug in more when they were needed. If I were doing it now, I'd be tempted to use one of the newer Freescale ARM designs that goes to around 1GHz and has on-die Ethernet controllers.

Re:AMD Geode? (1)

BikeHelmet (1437881) | more than 5 years ago | (#27605941)

I remember there was a company that tried this a few years ago. They created a server with something like 3500 CPUs, consuming roughly 1500 watts.

I don't believe it ever caught on. Since it wasn't x86 or ARM, porting software would probably be incredibly expensive. Also, splitting tasks between that many cores or CPUs is... difficult.

Re:AMD Geode? (2, Informative)

the linux geek (799780) | more than 5 years ago | (#27607381)

Sounds like you're referring to SiCortex, which uses MIPS chips. They also have a desktop workstation with 72 processors and 48 or 96GB of RAM that consumes only 300W. http://www.sicortex.com/ [sicortex.com]

Re:AMD Geode? (1)

BikeHelmet (1437881) | more than 5 years ago | (#27622825)

Yep! I recognize the pics on their site. Was definitely them.

Looks like they've updated their hardware: 64-bit now, with close to 6k processors (cores, probably).

New buzz words? (4, Insightful)

LoRdTAW (99712) | more than 5 years ago | (#27605087)

So I guess the word cloud has replaced cluster to give old technology a fresh new look. Gotta love marketing.

And since when did the term "netbook" come to describe low-power computing hardware? There were mini-ITX boards with low-power CPUs long before the term netbook was in use. Just more marketing bullshit: repackage existing tech with a shiny new name and sell it.

Re:New buzz words? (1, Interesting)

Anonymous Coward | more than 5 years ago | (#27605279)

So I guess the word cloud has replaced cluster to give old technology a fresh new look.

A cluster is a cloud when it is sufficiently large and the nodes are sufficiently small, like the water vapour of a cloud. Isn't it poetic?

Re:New buzz words? (1)

fractoid (1076465) | more than 5 years ago | (#27607739)

So the massively parallel touting of this new buzzword should best be described as a 'cloudfuck'?

Re:New buzz words? (0)

Anonymous Coward | more than 5 years ago | (#27609245)

So the massively parallel touting of this new buzzword should best be described as a 'cloudfuck'?

The best part is, the moisture is already there so the 'cloudfuck' can last longer than the conventional 'clusterfuck'.

Re:New buzz words? (1)

value_added (719364) | more than 5 years ago | (#27605327)

There have been mini-ITX boards with low-power CPUs long before the term netbook was in use.

Allow me to extend the above:

There have been other boards with lower-power CPUs long before anyone cared about VIA or their mini-ITX form factor.

Re:New buzz words? (1)

LoRdTAW (99712) | more than 5 years ago | (#27606807)

Good point, but those were targeted at embedded and other non-PC applications. I was talking about commodity low-power PC hardware.

Re:New buzz words? (1)

martin-boundary (547041) | more than 5 years ago | (#27607847)

So I guess the word cloud has replaced cluster to give old technology a fresh new look. Gotta love marketing. So I guess the word cloud has replaced cluster to give old technology a fresh new look. Gotta love marketing. So I guess the word cloud has replaced cluster to give old technology a fresh new look. Gotta love marketing. So I guess the word cloud has replaced cluster to give old technology a fresh new look. Gotta love marketing. So I guess the word cloud has replaced cluster to give old technology a fresh new look. Gotta love marketing. So I guess the word cloud has replaced cluster to give old technology a fresh new look. Gotta love marketing. So I guess the word cloud has replaced cluster to give old technology a fresh new look. Gotta love marketing. So I guess the word cloud has replaced cluster to give old technology a fresh new look. Gotta love marketing. So I guess the word cloud has replaced cluster to give old technology a fresh new look. Gotta love marketing. So I guess the word cloud has replaced cluster to give old technology a fresh new look. Gotta love marketing. So I guess the word cloud has replaced cluster to give old technology a fresh new look. Gotta love marketing.

Whoa! I just imagined a Beowulf cluster of your comment...

Just imagine... (0)

Anonymous Coward | more than 5 years ago | (#27605121)

A Beowulf Cluster of those!!!

Really, how is a cloud different from a modernized cluster? Or is cloud newspeak for cluster?

Two different problems (2, Informative)

seifried (12921) | more than 5 years ago | (#27605145)

Or to put it simply: pulling a "finished" object from memcached will almost always be faster than having a machine create/render/whatever you do to create the object. If you want to pull large amounts of data from RAM, buy a 1U server that takes 64 gigabytes of RAM for $5000 (about $78 per gig of RAM, and much faster than a compact flash card in a super-cheap laptop). Or buy solid-state disks/PCIe RAM cards. Now if we're talking about building a render farm for whatever (frames, objects in a database, etc.), simply run the numbers: how many objects/sec/dollar do you get with different solutions, and how important is latency?
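A minimal sketch of the cache-then-render pattern described above, with a plain Python dict standing in for memcached; `render()` and the key names are purely illustrative, not any real API.

```python
# A dict stands in for memcached; render() stands in for the
# expensive create/render step the parent describes.
cache = {}

def render(key):
    # Pretend this is the costly object-creation work.
    return f"rendered:{key}"

def get_object(key):
    if key in cache:          # fast path: finished object already cached
        return cache[key]
    obj = render(key)         # slow path: build it, then cache it
    cache[key] = obj
    return obj

print(get_object("page:1"))  # slow path, renders the object
print(get_object("page:1"))  # fast path, served from cache
```

The whole memcached argument is that the second call is cheap; the FAWN question is only what hardware that cache tier should live on.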

What interests me is the ease of building a many node cluster and learning how to administer and write software for something with 20+ nodes.

Of course, you could just buy computer time from amazon.com EC2 at $0.10 per hour per node and practice there ($2 an hour for 20 systems running; not bad).

Re:Two different problems (1)

derGoldstein (1494129) | more than 5 years ago | (#27605909)

What interests me is the ease of building a many node cluster and learning how to administer and write software for something with 20+ nodes.

This is a chicken-and-egg question. If this becomes more common, the software (and hardware) to manage and administer these clusters will be made available due to commercial needs and interest from the OSS community. It's just like how threading is making its way into almost every software "domain" as multi-core CPUs become the norm.

Re:Two different problems (1)

seifried (12921) | more than 5 years ago | (#27606319)

Agreed, but it's still nice to be able to practice, and for that you need multiple machines (e.g. variable network latency vs. VMware running many images with no jitter in communications, transient failure conditions, you name it).

Re:Two different problems (1)

WebCowboy (196209) | more than 5 years ago | (#27618033)

Or to put it simply: pulling a "finished" object from memcached will almost always be faster than having a machine create/render/whatever you do to create the object.

I don't think the idea is to dump the concept of cache. The idea is to drop the added complexity and expense of "memcached". Instead of retrieving data from slow power-hungry hard drives, processing it and caching it in very expensive SDRAM you employ more traditional filesystem-based caching to much cheaper flash drives. That would still be less resource-intensive than re-rendering data, even though it is slower than memcached on power-hungry systems.

If you want to pull large amounts of data from RAM buy a 1U server that takes 64 gigabytes of ram for $5000 (so about $78 per gig of ram, and much faster than a compact flash card in a super cheap laptop).

More important than "is it fast?" is "is it fast enough?", because the bottleneck is internet connectivity. A cluster of inexpensive nodes like this is also cost-competitive with a single large server for the initial purchase. Why spend $5k on one beast machine with gobs of pricey SDRAM when you can spend the SAME OR LESS on a cluster of, say, 32 low-power nodes with 256MB or 512MB of RAM and 4GB of flash each? That gives you ample SDRAM to execute code and TWICE AS MUCH MEMORY for caching than your suggestion! Yes, flash is slower, but it's GOOD ENOUGH--it only has to be pushed out as fast as the 'net connection can take it, which is a LOT slower than SDRAM!

Not only do you get more cache of more-than-adequate performance with the cluster, you get SIGNIFICANTLY better performance-per-watt numbers too; so not only is the initial investment no more expensive, the long-term operating costs are MUCH LOWER.
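The cost/capacity comparison above can be sketched in a few lines. The $150 per-node price is a hypothetical figure chosen to land under the $5k total; the other specs are the numbers used in this thread.

```python
# The $5000 1U server proposed earlier in the thread: 64 GB of RAM,
# all of it available as the cache tier.
big_server = {"cost": 5000, "ram_gb": 64, "cache_gb": 64}

# A hypothetical FAWN-style node: price is assumed, specs are from the thread.
node = {"cost": 150, "ram_gb": 0.5, "flash_gb": 4}
n = 32
cluster = {
    "cost": n * node["cost"],          # $4800, the same ballpark or less
    "ram_gb": n * node["ram_gb"],      # 16 GB total for running code
    "cache_gb": n * node["flash_gb"],  # 128 GB of flash as the cache tier
}
assert cluster["cache_gb"] == 2 * big_server["cache_gb"]  # twice the cache
print(cluster)
```

Under these assumptions the cluster costs no more and carries twice the cache capacity, at the price of that cache being flash rather than SDRAM.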

Now if we're talking about building a render farm for whatever

It is clear from the article that this was most certainly NOT the intended application for such a cluster. Running a site like facebook and rendering the next Pixar movie are VASTLY different requirements. That said, there is merit in abandoning the "buy skookum Xeon boxes with gobs of RAM" concept for that too. Using GPUs from NVidia and AMD/ATI for computation is MUCH more efficient and powerful than relying on traditional CPUs. They take this cluster concept down to the die level: instead of 2, 4, or 8 massive power-hungry cores they contain HUNDREDS of very simple cores (shaders or vector processors) and exploit massive parallelism.

What interests me is the ease of building a many node cluster and learning how to administer and write software for something with 20+ nodes.

The compelling advantages make this challenge worthwhile. Optimising apps for new architectures such as this FAWN cluster or a 960-core NVidia Tesla system is indeed a challenge. But it doesn't look terribly difficult to physically build, and tools to manage clusters are quite mature already.

Of course you could just buy computer time from amazon.com EC2 for $0.10 per hour per node and practice there

Of course, some people might have misgivings about putting their data in other people's hands, but for practicing that would work. But, who's to say Amazon wouldn't employ such clusters (or aren't already going there)?

Wait, what? (3, Interesting)

Enry (630) | more than 5 years ago | (#27605249)

256MB per node times 21 nodes equals about 5GB. 85 watts is nice, but I just built a home server with 4GB of RAM and two 1TB drives that has a low-power AMD chipset in it. At idle it's about 70 watts, and it gets to about 100 watts under load. Replacing the two 1TB drives with an 80GB SSD would probably be closer to what is represented with FAWN.

Figuring $100 for the motherboard and parts makes that total system cost $2100. My server was about $500.

Don't get me wrong, this is an interesting idea. Using an Atom can get you a lot more performance for not much more power use, and you can go up to at least 2GB RAM per node. But there's a limit to how small you can make a single item in a cluster before you're duplicating effort without much benefit.

Re:Wait, what? (1)

seifried (12921) | more than 5 years ago | (#27606345)

You forgot the 4-gig compact flash card in each machine.

Re:Wait, what? (1)

Enry (630) | more than 5 years ago | (#27612941)

No, I mentioned the 1TB drive that could be replaced with a single 80GB SSD.

Re:Wait, what? (1)

fast turtle (1118037) | more than 5 years ago | (#27606473)

Hell, I've got an Intel board running an E6300 (65-watt CPU) with 8GB of RAM plus a GeForce 7300GT that only draws 120 watts with the CPU at full load (F@H). Total cost is just under $800 with the recent RAM upgrade from 4GB to 8GB.

Re:Wait, what? (2, Interesting)

CAIMLAS (41445) | more than 5 years ago | (#27608349)

That may be (and really is) true. But how well does your machine handle concurrency? Or, for that matter, how fast is the processor?

8GB of RAM is nice and all, especially with modern software and emulated environments. But how many

For a web-facing system - or anything serving multiple requests per second from different locations, with multiple threads all needing a quick response - having 21 500MHz cores would be much better than having 4 2.6GHz cores. That is, provided you could distribute the requests efficiently. And the RAM limitation isn't really much of a limitation when you consider that any one thread is not likely to use anywhere near 256MB for a web query, stored procedure, etc.

At any rate, this is a proof of concept (and really, not such a good one when you consider what's possible). The benefit is the number of cores in the system and how well you could serve up data, not so much the total amount of RAM. A better implementation could very likely be done for roughly the same cost (or less) utilizing similarly clocked multicore ARM processors. Take twenty boards with 2 cores each, 512MB+ of RAM, and 500MHz clocks, cluster them... the workload capacity (and reduced I/O wait) starts getting impressive.

In theory, at least.

Re:Wait, what? (1)

Enry (630) | more than 5 years ago | (#27613023)

It's a 2.5GHz 64-bit processor, dual GigE. If you're talking concurrency, don't forget the OS overhead for each of those 21 systems: each has a kernel, cache, and kernel- and user-land processes running. For modern PCs it's not that big a deal, but it adds up for tiny PCs (consider the NSLU2, which can get overwhelmed just running dhcpd and bind).

Again, I'm not saying this is a bad idea; I just think this implementation isn't much compared to what's out there now.

Re:Wait, what? (1)

CAIMLAS (41445) | more than 5 years ago | (#27632347)

The NSLU2 isn't exactly a fair assessment, particularly with bind. The NSLU2 is, at best, a 266MHz XScale, which Linksys shipped underclocked to 133MHz. Also, bind isn't exactly a light program - on my 700MHz Celeron system, serving a small 6-host, 3-user LAN, bind is often the highest CPU-consuming process. That system also runs apache, mysql, and a small drupal install, and CPU statistics still show bind using a lot of processor time.

Re:Wait, what? (1)

ILongForDarkness (1134931) | more than 5 years ago | (#27608797)

Not to mention the admin complication of having to set up 21 machines rather than 1. Also, you now pretty much tie up a 24 port switch rather than 1 port on a switch.

Re:Wait, what? (1)

WebCowboy (196209) | more than 5 years ago | (#27618323)

But there's a limit to how small you can make a single item in a cluster before you're duplicating effort without much benefit.

The thing is, there is vast room for improvement in the cluster concept with more current technology. If you used an ARMv7-based node you'd have better capabilities in each node than the Geode, at about $100 per node (making it cost the same as your suggestion) and a PEAK power consumption of around 30 or 35 watts (your system peaks at 100 watts, still significantly higher than the 85 watt peak of the Geode-based cluster).

Also, slapping an SSD into a regular PC doesn't make it even close to comparable, apart from overall capacity. You still have the massive "memory wall" problem. Highly dynamic, componentised network apps like Facebook and Amazon rely tremendously on disk-based data retrieval, and the caching load would massively hit the SSD. To fix that "memory wall" problem in your system you'd probably need on the order of 64GB of RAM, not the paltry 4GB you specify. With the cluster you have the equivalent of up to 20 times the bandwidth to your cache. Also, you have 21 500MHz processors at your disposal; even if having an OS on each node caused a massive 50% overhead, you'd still get more than the equivalent of a 2.5GHz dual-core processor.

Your suggestion might produce a high-performance machine, but it doesn't solve the problem the FAWN cluster set out to address.

Re:Wait, what? (1)

ravyne (858869) | more than 5 years ago | (#27620037)

The problem this addresses is bandwidth, though.

Yes, your server has dual-channel DDR2 at, say, 400MHz (PC2-6400 is common), with a bandwidth of 12.8 GB/s. You've got maybe 2 Gb/s of network connectivity, for a quarter GB/s of network bandwidth, and maybe 3 GHz of processing power to push it out.

This array, let's assume DDR2 at 333MHz, has a bandwidth of 5333 MB/s x 21, or 112 GB/s to RAM; 1 Gb/s x 21, or just about 3 GB/s to the network; and 500 MHz x 21, or over 10 GHz of processing power to push it out.
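Written out, the aggregate arithmetic above looks like this (all inputs are the parent post's assumptions, not measurements):

```python
# Back-of-envelope check of the aggregate-bandwidth figures above.
nodes = 21

ram_bw_per_node_mb = 5333                      # DDR2-667, single channel, MB/s
total_ram_bw_gb = nodes * ram_bw_per_node_mb / 1000
print(f"RAM bandwidth: ~{total_ram_bw_gb:.0f} GB/s")      # ~112 GB/s

net_bw_per_node_gbit = 1                       # one GigE port per node
total_net_bw_gb = nodes * net_bw_per_node_gbit / 8
print(f"network bandwidth: ~{total_net_bw_gb:.1f} GB/s")  # ~2.6 GB/s

total_clock_ghz = nodes * 0.5                  # 21 cores at 500 MHz
print(f"aggregate clock: {total_clock_ghz} GHz")          # 10.5 GHz
```

The point is that the cluster's aggregate RAM bandwidth dwarfs the single server's 12.8 GB/s, even though any one node is far slower.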

Linux FS for SDD drives? (1)

aharth (412459) | more than 5 years ago | (#27605381)

I've been toying around with a Samsung 16GB SSD. The performance improvement over spinning disks in an I/O-heavy scenario was negligible. Also, it seemed as if the Linux kernel was still using memory to buffer SSD disk I/O, which somewhat negates the argument of using SSDs to free main memory for other stuff.

Any idea what type of OS/filesystem combination they were using?

Re:Linux FS for SDD drives? (0)

Anonymous Coward | more than 5 years ago | (#27605855)

First, if you want to see performance, get a real SSD like Intel's X25 or OCZ's Vertex. Cheap SSDs are like USB flash sticks with a SATA interface. No IOPS to speak of.
Second, the intention of SSDs was never to "offload RAM". They're just fast and reliable storage.
Third, use a filesystem without journaling. Ext4 + 2.6.29.1 + TRIM patches + the correct mount options work fine.

Re:Linux FS for SDD drives? (1)

clarkn0va (807617) | more than 5 years ago | (#27606721)

Fourth, use an unpartitioned SSD, or block-align your partitions if you must.

Fifth, use the deadline scheduler.

Sixth, read the excellent AnandTech articles on SSDs, then hit the OCZ SSD forums.

Re:Linux FS for SDD drives? (2, Informative)

shri (17709) | more than 5 years ago | (#27607845)

OR, skip 1-6 and just get a RAMSAN [ramsan.com] .

eXecute In Place (XIP) (1)

spaceturtle (687994) | more than 5 years ago | (#27616477)

Second, the intention of SSDs were never to "offload RAM". The idea is to use SSD as a pre-buffer for RAM, so it's quicker to access than reading from disk.

Well, eXecute In Place was designed to allow flash storage to be accessed directly by the CPU. I understand that XIP can improve startup times, since the data doesn't need to move from flash to RAM to CPU; it moves directly to the CPU. Flash tends to be slower than memory, but the CPU cache may offset this. See for example:

http://lkml.indiana.edu/hypermail/linux/kernel/0409.1/0510.html

I'll note XIP is mostly used for embedded devices, so for PCs you are essentially correct when you say that SSDs are just fast storage.

Re:Linux FS for SDD drives? (1)

CAIMLAS (41445) | more than 5 years ago | (#27608377)

Which somewhat negates the argument of using SSDs to free main memory for other stuff.

You misunderstand how that's supposed to work. You don't "free main memory" to SSD. The idea is to use SSD as a pre-buffer for RAM, so it's quicker to access than reading from disk.

You buffer from a 500TB SAN to a 100GB SSD, to 32GB of RAM, to 4MB of L3, to 2MB of L2, to 512KB of L1 - or whatever. You don't buffer -to- a slower device, but for a faster one, so the data will be available for the pipeline when it's needed. You want to use as much of the faster memory as possible to increase system speed.

Using an SSD to reduce RAM usage by putting swap on the SSD is really, really dumb.

Re:Linux FS for SDD drives? (1)

aharth (412459) | more than 5 years ago | (#27637517)

You misunderstand how that's supposed to work. You don't "free main memory" to SSD. The idea is to use SSD as a pre-buffer for RAM, so it's quicker to access than reading from disk.

Sure.

But there's something wrong if the Linux kernel buffers SSD I/O in main memory while swapping code fragments to disk. At least that's what happened in my experiments.

Stress test please (1)

FlyingBishop (1293238) | more than 5 years ago | (#27605463)

Obviously it can perform fast, but it isn't going to last too long. Maybe flash is cheap enough that its limited write cycles aren't a serious issue, but this thing is going to chew up flash like nobody's business.

I do like that my school's eldest Beowulf cluster is now completely obsolete though, drawing as much power as a few space heaters while processing as much data as a cluster of iPhones.

Imagine a beowulf cluster of.. oh, never mind (1)

obarthelemy (160321) | more than 5 years ago | (#27605955)

2 buzzwords in 1 title, can we do better ?

Re:Imagine a beowulf cluster of.. oh, never mind (1)

Penguinoflight (517245) | more than 5 years ago | (#27614377)

I think you could include low-power, although that's not really a word.

And only a few years too late to be useful... (1)

Makoss (660100) | more than 5 years ago | (#27605983)

Intel X25-E, 2.6 watts, 3300 Write IOPS, 35000 read IOPS*. So only one or two orders of magnitude more efficient...

And though no prices are given in the article for the FAWN, at $800 for the X25-E it's probably less expensive too. Particularly if you include setup and administration costs.

Not a bad idea in general, and not a bad idea specifically for 5 years ago, but pathetically outclassed in every area by a high-end modern SSD.

* http://download.intel.com/design/flash/nand/extreme/extreme-sata-ssd-datasheet.pdf [intel.com]

Re:And only a few years too late to be useful... (1)

CAIMLAS (41445) | more than 5 years ago | (#27608383)

Somehow, I think you misunderstood what this article is about.

It's not about SSD. It's about a "cloud" cluster, performing the same amount of work as (say) a dual quad core server due to its ability to distribute load over many more cores.

Re:And only a few years too late to be useful... (1)

Makoss (660100) | more than 5 years ago | (#27617623)

Somehow, I think you misunderstood what this article is about.

Given the very frequent mention of 'disk based storage', and how flash is so much better, I'm not sure that I did.

It's not about SSD.

No, it's not about SSDs; that is the problem. It reads like they have never heard of them.

Memcached prevents Facebook's disk-based databases from being overwhelmed by a fire hose of millions of simultaneous requests for small chunks of information.

flash memory has much faster random access than disk-based storage

Each FAWN node performs 364 queries per second per watt, which is a hundred times better than can be accomplished by a traditional disk-based system

Swanson's goal is to exploit the unique qualities of flash memory to handle problems that are currently impossible to address with anything other than the most powerful and expensive supercomputers on earth

Swanson's own high-performance, flash-memory-based server, called Gordon, which currently exists only as a simulation...

I'm not saying that a wide array of low-power nodes is a bad idea. But unless they address the current state of technology, rather than a conveniently quaint world in which using flash as your primary storage makes you some sort of innovator, it's hard to take them seriously.

"you could very easily run a small website on one of these servers, and it would draw 10 watts," says Andersen--a tenth of what a typical Web server draws.

And how does that per-website energy usage compare to a normal server, using SSDs, and running enough virtualized instances (or just virtual domains) to match the per-website performance offered by a single FAWN node?

You need to address the actual state of things, and not the strawman of what computing was 6 years ago (or however long) when the project was started. While they've been working, the world hasn't been standing still, and you cannot pretend that spinning disks are the only thing going.

Perhaps I'm being too harsh and it's a failing of TFA and not the original researchers. Given that a dual-core Atom 330 takes something like 8 watts, it is entirely reasonable that you could build a very efficient cluster out of a whole mess of them and a few SSDs, and produce something like what you insist the article was about. That would be interesting, provided it compared favourably against similarly state-of-the-art systems, of course.

This may be a little offtopic but... (1)

zMaile (1421715) | more than 5 years ago | (#27606439)

I recently had my cat push a laptop off a desk, breaking just the screen. I hooked it up to a CRT, installed Gentoo, and now use it as a personal server. It's quieter and uses less electricity. It has been a great solution for me, and I wonder how many others could benefit from this thinking. This article is sort of taking my idea a few steps further. It could work well.

Re:This may be a little offtopic but... (1)

wintermute000 (928348) | more than 5 years ago | (#27606609)

Me too

I used an old ThinkPad T20 (P3 700MHz, 512MB RAM) and am now rocking a T41 (Pentium M 1.6GHz with 512MB RAM).

It runs a full LAMP stack, squid + privoxy caching for my LAN, plus a Left 4 Dead server 24/7, no worries.

Re:This may be a little offtopic but... (1)

setagllib (753300) | more than 5 years ago | (#27606637)

Way ahead of you. I've been using old laptops as servers for years. They're small, quiet, have their own efficient UPS, and are very easy to stow in a corner and add to an established wireless network with OpenVPN. Most old ones will run on under 10W mains, which is less than many devices draw while turned "off".

Re:This may be a little offtopic but... (3, Funny)

_Stryker (15742) | more than 5 years ago | (#27606783)

Why did you have the cat push it off the desk? Were you too lazy to do it yourself?

Re:This may be a little offtopic but... (1)

fractoid (1076465) | more than 5 years ago | (#27607751)

Outsourcing is all the rage these days, I hear.

Re:This may be a little offtopic but... (2, Insightful)

GordonCopestake (941689) | more than 5 years ago | (#27608359)

Laptops make great low-power servers. I've been using them for ages. No noise, small footprint, and a built-in UPS. I'm surprised someone hasn't taken the technology and used it in datacentres (without the LCDs of course). I can easily imagine someone like Google with racks and racks of $99 laptops without screens being used as nodes.

Re:This may be a little offtopic but... (1)

Spatial (1235392) | more than 5 years ago | (#27610135)

without the LCD

Now that you mention it, Asus actually makes an extremely small PC called the 'Eee Box' which is just that: the typical netbook hardware in a tiny box with no screen. It's small enough to attach to the VESA mount on the back of a monitor.

Re:This may be a little offtopic but... (1)

b0bby (201198) | more than 5 years ago | (#27611613)

The problem with the eee boxes is that they cost the same as the netbook versions, but you don't get the screen... I guess it's because it's a smaller market.

Re:This may be a little offtopic but... (1)

Spatial (1235392) | more than 5 years ago | (#27612097)

Yeah, I wish they were a little cheaper. On the plus side, the 80GB versions are 70 euros cheaper than the 160GB ones - 280 vs 350. That puts it into the acceptability range if HDD space isn't important, I think.

That said, even at retail a 160GB 2.5" disk is 45 euros, so you can save a little money there. Apparently it's easy to install as well; there's a hatch on the bottom for it.

I'd imagine SGI would be pissed (1)

LostMyBeaver (1226054) | more than 5 years ago | (#27643271)

http://gizmodo.com/5091473/sgi-molecule-packs-10000-atom-cores-one-ton-of-awesomeness