Single-Chip DIMM To Replace Big Sticks of RAM

samzenpus posted about 3 years ago | from the size-is-important dept.

MrSeb writes "Invensas, a subsidiary of chip microelectronics company Tessera, has developed a way of stacking multiple DRAM chips on top of each other. This process, called multi-die face-down packaging, or xFD for short, massively increases memory density, reduces power consumption, and should pave the way for faster and more efficient memory chips. Multi-die face-down packaging is exactly what it sounds like, with memory dies stacked on top of each other like roofing tiles. Much like normal desktop DIMMs and laptop SO-DIMMs, the stacked dies are wired to each other in series — but in this case, the connections are much shorter, as they only have to run a few micrometers to the die below. This is where all of the power and speed enhancements come from: shorter interconnects mean less power is needed (and thus less heat is dissipated), and signals propagate faster."
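
As a rough sketch of where those savings come from: the dynamic power needed to drive a signal line scales with its capacitance (P ≈ C·V²·f), and capacitance scales roughly with trace length. The Python below uses illustrative assumptions (per-mm capacitance, voltage, toggle rate, and route lengths), not Invensas figures:

    # Illustrative sketch: dynamic power of driving one interconnect,
    # P = C * V^2 * f, where C scales roughly with trace length.
    # All numbers are assumptions for illustration, not vendor data.
    C_PER_MM = 0.1e-12   # assumed ~0.1 pF of trace capacitance per mm
    V = 1.5              # assumed DDR3-class signaling voltage (volts)
    F = 800e6            # assumed toggle rate (Hz)

    def drive_power(length_mm):
        """Dynamic power (watts) to toggle a line of the given length."""
        return C_PER_MM * length_mm * V ** 2 * F

    pcb_route = drive_power(25.0)   # ~25 mm die-to-die route across a DIMM
    stacked = drive_power(0.05)     # ~50 um die-to-die hop within the stack
    print(f"PCB route: {pcb_route * 1e3:.2f} mW/line, "
          f"stacked: {stacked * 1e3:.4f} mW/line, "
          f"ratio: {pcb_route / stacked:.0f}x")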

DIMM == dual in-line memory module (1)

Anonymous Coward | about 3 years ago | (#37332768)

Are these still considered DIMMs?

Re:DIMM == dual in-line memory module (1)

mprindle (198799) | about 3 years ago | (#37332974)

My guess would be STIMM, Stacked in-line memory module. :)

Re:DIMM == dual in-line memory module (1)

innerweb (721995) | about 3 years ago | (#37333370)

Oooh. I can finally get a Stimm Pack.

Re:DIMM == dual in-line memory module (0)

Anonymous Coward | about 3 years ago | (#37335616)

Ah, that's the stuff.

Re:DIMM == dual in-line memory module (2)

Artraze (600366) | about 3 years ago | (#37332990)

Yes, DIMM is referring to the board form factor, not layout. Specifically, they are dual because the gold fingers on each side have independent signals, while SIMMs have the same signal replicated on each side.

Re:DIMM == dual in-line memory module (1)

Nutria (679911) | about 3 years ago | (#37333428)

Similar to the old DIP that chips were packaged in back in the 70s and 80s.

Re:DIMM == dual in-line memory module (4, Funny)

jd (1658) | about 3 years ago | (#37333122)

DIMMPLE? (DIMMs in a PiLE)

Not Invensys! (0)

AliasMarlowe (1042386) | about 3 years ago | (#37332772)

For one horrible moment of puke-inducing fear, I thought you wrote "Invensys".
One letter makes a big difference, sometimes.

Patent Licensing (2)

TheReaperD (937405) | about 3 years ago | (#37332786)

The question is, will the patent fees be reasonable enough that we will see this technology for less than $200 a DIMM?

Re:Patent Licensing (3, Insightful)

Baloroth (2370816) | about 3 years ago | (#37332970)

Patent fees? Why would Tessera charge itself patent fees? I think you have been staring at software patents too long.

They may or may not license this to other companies, and once they start building them they will have to have low enough prices to be competitive with existing DRAM technology. The world of hardware is not quite like the software world, where companies routinely submarine others in areas where they often don't even make a product. In hardware, you can patent an excellent technology, but you either have to build it yourself or license it at affordable rates to actually make money off it. Unlike software, where you can look at someone else's product, patent it, then sue their asses off and get a settlement. AFAIK that has never worked in hardware (it probably has, but it is certainly much, much rarer).

Re:Patent Licensing (3, Insightful)

TheReaperD (937405) | about 3 years ago | (#37333206)

Though I have been following software patents closely, that has no bearing on my question/comment. One of two things will happen: they will either license this to other vendors for a fee so those vendors can manufacture it, or they will not and will only build it themselves. If they license it, they can charge a fee that is either reasonable or exorbitant. If they build it in house, they can charge whatever they want. Though either option is their right, I, as a consumer, would like to see this product come to the consumer market at a reasonable price, thus my question/comment.

And no, they do not have to be price competitive to make a profit. This has been proven many times over. Since they have, according to TFA, a superior product, they have the option of producing it in low volume and charging a high price for the high-end server and gamer market. If you insist on a citation, just look at Apple. They produce in low volume, charge a high fee and make a large profit because their customers believe they make a superior product. And I don't have to agree with it for the last sentence to be true.

This invention means jack to me, as a consumer, if they take the Apple route. Thus, my original comment.

Re:Patent Licensing (1)

Nutria (679911) | about 3 years ago | (#37333478)

If you insist on a citation, just look at Apple. They produce in low volume, charge a high fee and make a large profit because their customers believe they make a superior product.

Like the world-conquering FireWire?

Oh, wait. That's a dismal failure because Apple charged too-expensive license fees so the technically way inferior USB conquered the world...

Re:Patent Licensing (3, Insightful)

dgatwood (11270) | about 3 years ago | (#37333898)

It wasn't the licensing fees. They were never all that expensive.

USB is so ubiquitous in large part because the silicon for USB devices is much, much simpler, and thus much, much cheaper. USB devices can be dumb as a post, whereas FireWire devices have to actually understand a lot more about the bus topology, etc., IIRC.

Also, there's no such thing as a slow FireWire bus. S100 is the bottom limit. Therefore, it isn't a great match for really trivial devices like mice and keyboards.

Also, Intel supported USB very quickly, and dragged their heels on FireWire until... well, I'm not sure if they've ever shipped a southbridge with integrated FireWire.... So for computer manufacturers, FireWire was an extra part that they had to pay for, not just an extra connector.

And there were no doubt other factors. I'm not convinced that the licensing was a significant one, though. By 2001, it was something on the order of a quarter per device. I think that's less than a tenth what the actual silicon costs. Even back when it was a dollar per port, it was still a tiny cost compared with the silicon.

Re:Patent Licensing (0)

Anonymous Coward | about 3 years ago | (#37334340)

USB cost about 2c per port.
You think anyone would choose to pay 12x the cost for a port hardly anyone uses?

Re:Patent Licensing (1)

dgatwood (11270) | about 3 years ago | (#37335810)

*shrugs*

The difference in licensing costs was still so completely dwarfed by the difference in hardware costs as to make it largely moot.

It's like comparing a $25,000 used Porsche and a $300 used Pinto and saying that someone won't buy the Porsche because the Pinto gets better gas mileage.

Re:Patent Licensing (1)

afidel (530433) | about 3 years ago | (#37339792)

No, the licensing fee was initially $1/port which dwarfed the cost difference of the hardware.

Re:Patent Licensing (0)

Anonymous Coward | about 3 years ago | (#37336810)

There's only licensing fees if you call it FireWire. If you label it IEEE1394, it's free. It's better technology; significantly faster than USB 1.1 and operating on 5V, 12V, and 48V IIRC. With USB, the device can only request a current limit higher than 100mA. On FireWire, the device requests a voltage and current. It's just more expensive to implement.

Re:Patent Licensing (1)

Baloroth (2370816) | about 3 years ago | (#37333814)

Why would they, though? Apple doesn't charge high prices (and low volume? Apple does decidedly non-low-volume runs for most of their products) because they are targeting high-end markets with superior-performing equipment. No, they generally target middle-range consumers looking for stuff that works and is easy to use. High-end consumers and low-end consumers go elsewhere. They can do this for a whole host of reasons, but it has little to do with a superior product. Generally speaking, if they can make a consumer-grade product they will, because they make up in quantity what they lose in margins, and the high-end market will pay extra (a lot extra) for their top-of-the-line low-volume stuff. There simply wouldn't be a good reason not to make it for all markets, especially if (as they say) the process is cheaper than conventional methods.

So while your concern has some merit (a lot more than I thought it did, sorry about that), it just doesn't seem like they would follow that route. Very few other true hardware manufacturers do (what Apple really sells is software that happens to be bundled with hardware). AMD, Intel, RAM manufacturers, even SSD makers generally make products for widely varying target markets. It also happens to work better that way for performance-related hardware (see: Product Binning [wikipedia.org] ) such as memory. So I think we will see this technology, it just might take a while for it to work out the kinks and get up to mass-production status.

Re:Patent Licensing (2)

spauldo (118058) | about 3 years ago | (#37334126)

Rambus [wikipedia.org] did the hardware patent troll thing. It was all over /. back in the day.

Short story: Rambus was a member of an industry group designing the new RAM chips (SDRAM, Pentium II and III era). The new designs used technology they had patented, but this wasn't a big deal, since all members of the group were supposed to license any applicable patents they held under "reasonable" terms.

Rambus didn't like that, so they pulled out and started suing everyone who made SDRAM. Intel had started using Rambus memory modules on their motherboards (RDRAM), and had already committed to the designs before all this broke out. The lawsuits were all over the place, and Rambus was found guilty of fraud, had the ruling overturned, was sued by the FTC for antitrust violations, and so on.

The whole thing was covered by /., and it went on for years. Only the SCO debacle topped it.

Re:Patent Licensing (1)

Daniel Phillips (238627) | about 3 years ago | (#37335824)

Rambus [wikipedia.org] did the hardware patent troll thing... The whole thing was covered by /., and it went on for years...

It's still going on.

Re:Patent Licensing (0)

Anonymous Coward | about 3 years ago | (#37345620)

Rambus [wikipedia.org] did the hardware patent troll thing. It was all over /. back in the day.

Short story: Rambus was a member of an industry group designing the new RAM chips (SDRAM, Pentium II and III era). The new designs used technology they had patented, but this wasn't a big deal, since all members of the group were supposed to license any applicable patents they held under "reasonable" terms.

Rambus didn't like that, so they pulled out and started suing everyone who made SDRAM. Intel had started using Rambus memory modules on their motherboards (RDRAM), and had already committed to the designs before all this broke out. The lawsuits were all over the place, and Rambus was found guilty of fraud, had the ruling overturned, was sued by the FTC for antitrust violations, and so on.

Oh, it was much much messier than that. Slashdot didn't cover it much because it didn't quite fit with the usual "Rambus = pure evil, rest of industry = noble victims" groupthink, but Rambus ended up winning gigadollar lawsuits against several DRAM manufacturers. They'd colluded to keep RDRAM prices high so that it couldn't succeed in the marketplace against SDRAM.

Re:Patent Licensing (0)

Anonymous Coward | about 3 years ago | (#37334144)

That's silly. They wouldn't charge themselves fees, but others. And while their RAM would be cheaper, they could still be expensive while making it cost prohibitive to compete by keeping license fees high.

Re:Patent Licensing (1)

unixisc (2429386) | about 3 years ago | (#37336876)

Tessera makes all its money from patents on packages; it doesn't make the DIMMs themselves. They create these patents and license them, and the licensees, such as Samsung, Crucial, Hynix, et al., would make the DIMMs. They have some pretty impressive package technologies.

Re:Patent Licensing (0)

Anonymous Coward | about 3 years ago | (#37333062)

Not really. This is precisely the kind of narrow, well-specified invention that deserves patent protection. They would have to be stupid to not license their method so that a large number of companies use it (otherwise, people will just ignore it, and use the much cheaper existing technology). Patents aren't all bad--overbroad, trivial, duplicate, and compatibility-preventing ones are.

Re:Patent Licensing (1)

TheReaperD (937405) | about 3 years ago | (#37333258)

I never suggested that the patent shouldn't be valid. Where did you get that from my sentence?

Re:Patent Licensing (1)

rollingcalf (605357) | about 3 years ago | (#37333646)

The trouble is that this invention may be infringing on a dozen trivial and/or overbroad patents, and the licensing fees or lawsuits for those patents could make this new memory technology unprofitable.

Re:Patent Licensing (1)

drinkypoo (153816) | about 3 years ago | (#37335068)

This is one of those times when prior art is an absolute motherfucker. Back in the XT days (or was it the actual PC days?) you could double your memory on some platforms (details hazy) through the same technique. Apparently there were parts available such that the address and select lines would work out and you could bank-select the stacked set of chips in software. You just stacked the DIP chips on top of the other ones and soldered them on. I've only seen it once, but the old DOS guy who cackled about it said it was not uncommon.

Re:Patent Licensing (1)

Chas (5144) | about 3 years ago | (#37335910)

This is how VisionTek started out.

They'd buy old memory from people, disassemble it, and reassemble it into larger DIMMs through stacking. I actually bought some of it from them back in the day.

Re:Patent Licensing (1)

hawk (1151) | about 3 years ago | (#37336178)

That predates the XT by a couple of generations.

The Model I TRS-80, for example, had only 7 bits of video memory. To convert to upper case, you glued another 2102 on top of one, soldered 14 of the pins to the chip below, and ran the other two by wire.

There was another hack, IIRC, that did this with two extra chips on top of each main RAM chip, allowing expansion from 16k to 48k in the main unit rather than an external "expansion interface." (Yes, odd as it sounds today, the extra RAM was close to a foot of wire away.)

Hawk

Re:Patent Licensing (1)

drinkypoo (153816) | about 3 years ago | (#37337444)

There was another hack, IIRC, that did this with two extra chips on top of each main RAM chip, allowing expansion from 16k to 48k in the main unit rather than an external "expansion interface." (Yes, odd as it sounds today, the extra RAM was close to a foot of wire away.)

Well, again I only go back to the PC era, but it doesn't sound strange to me because I had an IBM PC-1 (yes, really, I am not confused) with 64k onboard and 384k on an 8-bit AST ISA card, where the RTC also lived.

Re:Patent Licensing (1)

mirix (1649853) | about 3 years ago | (#37336276)

Yeah, this goes back to the dawn of time. You can stack parallel interfaced SRAM and ROM because everything is common ... /RD /WR and all the addr and data lines. You just had to separate off the chip select line... double the ram or rom instantly, assuming you had enough address space. I presume DRAM would be similarly packaged, plus the refresh lines, I've not personally stacked it though.
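
A toy model of the piggyback trick described above: the two chips share every bus line except chip select, so the separated /CS effectively becomes one more address bit. The class names are hypothetical and this models no particular part:

    # Toy model of piggyback stacking: two chips share address/data/control
    # lines; only chip select differs, acting as an extra address bit.
    class SRAMChip:
        def __init__(self, size):
            self.mem = [0] * size

    class StackedPair:
        """Two stacked chips; the high address bit picks which /CS asserts."""
        def __init__(self, chip_size):
            self.chips = [SRAMChip(chip_size), SRAMChip(chip_size)]
            self.chip_size = chip_size

        def write(self, addr, value):
            cs = addr // self.chip_size      # which chip select to assert
            self.chips[cs].mem[addr % self.chip_size] = value

        def read(self, addr):
            cs = addr // self.chip_size
            return self.chips[cs].mem[addr % self.chip_size]

    ram = StackedPair(16 * 1024)             # two 16K chips -> 32K total
    ram.write(20000, 0xAB)                   # lands on the piggybacked chip
    assert ram.read(20000) == 0xAB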

Re:Patent Licensing (1)

cbope (130292) | about 3 years ago | (#37337816)

I can confirm that. I had an old PC-AT motherboard I bought at a swap meet back in the late 80's, in fact I believe it was probably from one of the original IBM PC-AT models. It had a 6MHz 286 cpu and the memory, which was in DIP packages back then (individual chips), was double-stacked 2 to a socket. I don't remember how much memory it had, but damn it was a lot of chips when they were double-stacked like that.

Re:Patent Licensing (1)

MobileTatsu-NJG (946591) | about 3 years ago | (#37335452)

The question is, will the patent fees be reasonable enough that we will see this technology for less than $200 a DIMM?

The monopoly granted by a patent doesn't mean what you think it means.

Neato (1)

d.the.duck (2100600) | about 3 years ago | (#37332806)

I can't wait. Past that.... what is there to say?

Re:Neato (1)

DamonHD (794830) | about 3 years ago | (#37332840)

Cooling?

Re:Neato (1)

jd (1658) | about 3 years ago | (#37333284)

If you just have two dies and align them vertically rather than flat on the PCB, you've got the same cooling surface as you would have with two independent chips. Beyond that, you'd need to interleave the dies with a heatsink and then you're in for all kinds of funky problems. Surely it would be better to increase the total size of the die whilst keeping the same resolution (since you're eliminating even more connections and any supporting bits and pieces and can also exploit immediately any improvements in chip design or scaling without having to redesign anything).

Sure, the straight-line distance is longer, so you won't get quite the speed improvement, but an all-silicon path will reduce the power consumption (and therefore heat) and will also boost reliability, as you don't have fragile gold leads running from die to die.

Having a die four times as big rather than four dies that are then coupled together will reduce the total amount of wafer that can be used, so will increase cost slightly. Unless you go wafer-scale, of course. Then you have exactly the same amount of usable wafer, a memory even Microsoft would have a hard time running out of, and a price tag IBM might be able to afford every third week.
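
A sketch of that wafer-area tradeoff, using the standard dies-per-wafer approximation and a simple Poisson yield model; the die areas and defect density below are assumptions for illustration only:

    # Sketch of the tradeoff: one big die vs. four stacked small dies.
    # Standard dies-per-wafer approximation plus a Poisson yield model;
    # all parameters are illustrative assumptions.
    import math

    WAFER_D = 300.0   # wafer diameter, mm
    D0 = 0.2          # assumed defect density, defects per cm^2

    def dies_per_wafer(area_mm2):
        """Common approximation for whole dies on a round wafer."""
        r = WAFER_D / 2.0
        return (math.pi * r ** 2 / area_mm2
                - math.pi * WAFER_D / math.sqrt(2.0 * area_mm2))

    def poisson_yield(area_mm2):
        """Bigger dies are more likely to catch a killer defect."""
        return math.exp(-D0 * area_mm2 / 100.0)

    small, big = 60.0, 240.0   # mm^2: one small die vs. a 4x-size die
    good_small = dies_per_wafer(small) * poisson_yield(small)
    good_big = dies_per_wafer(big) * poisson_yield(big)
    print(f"Stacked:    {good_small / 4:.0f} parts/wafer "
          f"(4 dies each, per-die yield {poisson_yield(small):.0%})")
    print(f"Monolithic: {good_big:.0f} parts/wafer "
          f"(yield {poisson_yield(big):.0%})")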

Re:Neato (1)

gstrickler (920733) | about 3 years ago | (#37334124)

They're not talking about orienting the chips vertically (e.g. ZIP packaging [wikipedia.org] ), but stacking two (or more) chips flat against each other horizontally. This may make a slightly thicker package, but since many of the drivers, buffers, & latches for the external connection can be shared, it doesn't need 2x the power/heat. Silicon is a good conductor, so dissipating the extra heat shouldn't be a major issue. The key will be maintaining good thermal conductivity between the two chips and between the upper chip and the package.

How are they handling the heat? (5, Insightful)

jandrese (485) | about 3 years ago | (#37332824)

The problem with stacked chips like this in the past has been cooling the wafers in the middle of the stack. While DIMMs don't run as hot as processors or GPUs, this is still a concern for them. I wonder how they're going to handle this? Or are they only going to target low power low performance parts?

Re:How are they handling the heat? (1)

Anonymous Coward | about 3 years ago | (#37332922)

Plus, you have to remember there's less power consumption overall as well.
To what extent is another question, but it might just be enough to not need much cooling.

It probably wouldn't be hard to add heatsinks to it anyway; it will increase complexity ever so slightly, but for extreme high-performance RAM it's worth it, so your RAM doesn't actually somehow explode.
Besides that, the modules themselves are really close, on the micrometre scale at that. A single heatsink on top would probably still be enough for it.

Re:How are they handling the heat? (0)

rossdee (243626) | about 3 years ago | (#37332998)

Just keep the ambient temperature below absolute zero and you'll have no problems.

Re:How are they handling the heat? (1)

nschubach (922175) | about 3 years ago | (#37333140)

Pfft, is that all?

Re:How are they handling the heat? (1)

rubycodez (864176) | about 3 years ago | (#37333152)

"His dart throwers had been sealed and 'washed' against snoopers, then maintained at minus 340 Kelvin in a radiation bath for five SY to make them proof against snoopers." -- Frank Herbert Heretics of Dune

Why, that's even colder than the null-entropy bin in my Harkonen No-Globe!

Re:How are they handling the heat? (1)

innerweb (721995) | about 3 years ago | (#37333414)

Just keep the ambient temperature below absolute zero and you'll have no problems.
Would this be i-energy?

putting the chips in a negative-temperature bath (0)

Anonymous Coward | about 3 years ago | (#37335176)

would actually make them HOTTER, not COLDER:

a system with a truly negative Kelvin temperature is hotter than any system with a positive temperature (in the sense that if a negative-temperature system and a positive-temperature system come in contact, heat will flow from the negative- to the positive-temperature system).

That's what makes it "negative temperature." All the usual thermodynamic equations (e.g. Newton's law of cooling) still basically work, it's just that they work as though one of the two systems had a negatively-valued temperature.
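
One way to see the ordering is through inverse temperature, β = 1/(kT): heat always flows from the system with the smaller β to the one with the larger β, which places every negative temperature above every positive one. A toy illustration in Python, using units where k = 1:

    # Rank systems by "hotness" via beta = 1/T (units where k = 1).
    # Smaller beta = hotter, and negative beta is smaller than any
    # positive beta, so negative temperatures sort as hottest.
    systems = {"300 K": 300.0, "1e6 K": 1e6, "-340 K": -340.0, "-5 K": -5.0}
    for name, T in sorted(systems.items(), key=lambda kv: 1.0 / kv[1]):
        print(f"{name:>7}  beta = {1.0 / T:+.2e}")   # hottest printed first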

Re:How are they handling the heat? (1)

ChrisMP1 (1130781) | about 3 years ago | (#37332980)

Or are they only going to target low power low performance parts?

Like mobile? I didn't RTFA, but that's the area that seems to me to be screaming for this. With all the crap we're running on phones lately, we're going to need more memory. You can't exactly stick a SO-DIMM in a phone.

Re:How are they handling the heat? (0)

Anonymous Coward | about 3 years ago | (#37333046)

You can't exactly stick a SO-DIMM in a phone.

Why not? Because then you couldn't hold the phone between your thumb and index finger when you use it?

Re:How are they handling the heat? (0)

Anonymous Coward | about 3 years ago | (#37333212)

In the embedded world, they have been using PoP (package on package) for a while. The RAM/NAND is stacked on the CPU, BGA-fashion. This just looks like an extension of that to the world of multi-chip DRAM.

Re:How are they handling the heat? (1)

Pentium100 (1240090) | about 3 years ago | (#37333450)

I have some 1GB SDR DIMM sticks that have 36 chips, and those chips are on top of each other in pairs (normally you could only fit 9 chips per side; this allows fitting 18 chips per side).

Re:How are they handling the heat? (1)

gstrickler (920733) | about 3 years ago | (#37333720)

This sounds like stacked-die, not PoP [wikipedia.org] .

Re:How are they handling the heat? (1)

Artraze (600366) | about 3 years ago | (#37333300)

They mention mobile, but that's not terribly interesting: Someone (TI, IIRC) has had stackable memory for a while. In particular RAM and Flash that can be soldered directly onto their CPU (though I'm not sure how many of either it supports). That saves routing/board design costs and can make the overall device smaller. There's not too much point in having a stack of RAM elsewhere as you're probably only going to have 2 chips at the most... Current densities are 1GB/chip, so unless you're looking for 2+GB, this won't save any space at all.

I also question their power savings claim... Unless I missed some specific interesting numbers in TFA, I can't imagine they're saving much more than like 5%... Aside from the quiescent power draw, which isn't negligible, they're still routing the same DIMM with the same connections and the same number of data lines. The best I can see is this saving about 2 inches routing the address lines to multiple packages, but at the cost of making the data lines about 1 inch longer on average (as the chip stack is now centered, rather than over the DIMM's data pins like they are now). Guess which ones do most of the switching and burn most of the power? Hell, as the DIMM isn't driving the address lines, this looks like it'd actually increase power consumption at the memory and maybe save a little for the CPU/chipset.

tl;dr, unless I'm missing something here (I don't design DIMMs after all), this looks like they're playing up some patent they just got.

Re:How are they handling the heat? (1)

ajlitt (19055) | about 3 years ago | (#37333670)

Samsung (and Micron I think) sell a multi-chip BGA with flash and DRAM stacked in the same way. Some of these models are meant to fit on top of an SoC like Samsung's Hummingbird or TI's OMAP in a scheme called PoP [wikipedia.org].

Re:How are they handling the heat? (1)

gstrickler (920733) | about 3 years ago | (#37334014)

If you design the chips for this purpose, you can share many of the drivers, buffers, and latches for the external connection, thus lowering the power consumption. Also, this won't materially lengthen any of the connections. Half as many packages and ~ half as many driver circuits for a given capacity should produce a notable power savings, even though each package will draw slightly more power than a single chip package would. Or, you could get 2x the capacity with a less than 2x increase in power consumption vs single chip packages.

Re:How are they handling the heat? (1)

Mashiki (184564) | about 3 years ago | (#37333920)

Well, in the past with computers and supercomputers, the only way to get around this problem was immersion in an inert liquid which can properly draw heat. Plenty of current DIMMs require heat spreaders once you cross into PC3-10666; one of the few ways around it is to lower the input voltage, but even then they can get toasty.

I'm using some g.skill eco(1.3v) in my system simply to keep the temperature down but under load those things will still hit around 64C with proper cooling across the spreaders.

Re:How are they handling the heat? (1)

Daniel Phillips (238627) | about 3 years ago | (#37335834)

The problem with stacked chips like this in the past has been cooling the wafers in the middle of the stack. While DIMMs don't run as hot as processors or GPUs, this is still a concern for them.

True, however shorter wires can be driven with less power, creating less heat.

Re:How are they handling the heat? (1)

The Jynx (806942) | about 3 years ago | (#37338752)

But Sir, it is only wafer thin!

Heard it all before (1)

tsotha (720379) | about 3 years ago | (#37332894)

Seems like I've seen this article a half dozen times over my career, and nothing ever comes of it. Usually by the time they get the bugs worked out a higher density generation of RAM comes along and the stacked wafers can't compete on price.

Re:Heard it all before (1)

ajlitt (19055) | about 3 years ago | (#37333626)

Except by now flash manufacturers have the stacked die process down pat, fitting many geebees in a single BGA. Presumably this is using the same manufacturing process, using bond wires on one edge of the stagger to connect to the substrate.

Unlikely (2)

vlm (69642) | about 3 years ago | (#37332930)

The marketing release implies most of the power is being dropped resistively in the leads instead of in the dies. Just doesn't work that way.

Think about it for a second... The voltage on the die is only a tiny bit less than the voltage on the bus... You know the bus impedance too, so that gives away the current flow. Do a little Ohm's law on that tiny little drop and the tiny little current and compare it to what the die drops.

Or look at it from a thermal engineering perspective... they put heatsinks on the dies, not on the leads...

Now there will be some savings, probably lower capacitance and inductance and all that makes life easier for the bus drivers. But you're still gonna roast the dies in the middle of the sandwich. So you got three charcoal bbqs stacked on top of each other. No matter how fancy you make the cooking grate the burgers in the middle are gonna fry even if the guys on the end are raw ...
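
Putting illustrative numbers on that Ohm's-law argument (every value below is assumed, not measured): even a generous estimate of the I²R loss in the leads comes out as a rounding error next to what the die itself dissipates:

    # Sketch of the ohm's-law argument with assumed numbers: resistive
    # loss in the package leads vs. power burned in the die itself.
    I_LINE = 0.01   # assumed ~10 mA switching current per signal line
    R_LEAD = 0.5    # assumed lead/bond-wire resistance, ohms
    N_LINES = 100   # assumed signal lines per package
    P_DIE = 1.0     # assumed ~1 W dissipated in the DRAM die

    p_leads = N_LINES * I_LINE ** 2 * R_LEAD   # total I^2 R in the leads
    print(f"Leads: {p_leads * 1e3:.1f} mW vs die: {P_DIE * 1e3:.0f} mW "
          f"({p_leads / P_DIE:.2%} of die power)")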

Re:Unlikely (0)

Anonymous Coward | about 3 years ago | (#37333038)

At speed, what counts is capacitance, and that will be much smaller for stacked dies compared to going on and off a PCB; the drivers can be smaller too, with even less capacitance and less power consumption.

Re:Unlikely (1)

nschubach (922175) | about 3 years ago | (#37333178)

You're saying that the tiny traces on the memory die do not inflict exponentially more resistance than the mammoth (in comparison) PCB traces and contacts?

Re:Unlikely (1)

ToddInSF (765534) | about 3 years ago | (#37337390)

Absolutely. What about this is so difficult to grasp for you? On the die you can use itty bitty tiny gold or platinum or silver or whatever traces. Think about it.

Re:Unlikely (1)

Memroid (898199) | about 3 years ago | (#37335146)

No matter how fancy you make the cooking grate the burgers in the middle are gonna fry even if the guys on the end are raw ...

So if I understand you correctly, we need to devise a ram-stack rotisserie. Wendy's may be able to provide guidance, at least regarding a double stack implementation.

Re:Unlikely (0)

Anonymous Coward | about 3 years ago | (#37336286)

I suddenly have this weird craving for some barbecue meat in the middle of the night...

Stacking RAM is not new. (1)

bmo (77928) | about 3 years ago | (#37332950)

Old Atari heads know that you can stack RAM on top of the existing RAM packages and solder them in the 520 and 1040 ST machines.

This is basically doing the same thing, but inside the package.

--
BMO

Re:Stacking RAM is not new. (1)

JimboFBX (1097277) | about 3 years ago | (#37333160)

I was going to say, stacking NAND has been around for years, so I'm not understanding how this is any different for DRAM.

Re:Stacking RAM is not new. (1)

antifoidulus (807088) | about 3 years ago | (#37335194)

Well, for one, NAND does not require constant refreshes to retain the data, and thus uses almost no power when not in use. DRAM, on the other hand, needs to be refreshed constantly, creating significantly more heat issues than NAND.

Re:Stacking RAM is not new. (0)

Anonymous Coward | about 3 years ago | (#37333362)

Indeed. This used to be a Mac thing as well. You could get the old Mac 128 to a full 512 if you didn't mind cracking it open and doing the soldering. I'm sure this applied to many computers 'back in the day'.

Re:Stacking RAM is not new. (2)

gstrickler (920733) | about 3 years ago | (#37333908)

But doing it inside the package, directly stacking chip on chip has significant advantages over stacking packages. Lower height, better heat dissipation, shorter interconnects, etc. And if the chips are designed such that they share the drivers, buffers, & latches, etc for the external connection, that can save quite a bit of power. There are many things you can do in package, that are impractical or impossible off package.

Re:Stacking RAM is not new. (1)

cb88 (1410145) | about 3 years ago | (#37335214)

I fail to see how heat dissipation is better... you are increasing the density of the components, not decreasing it, which leads to more heat that has to be dissipated over a smaller area.

Re:Stacking RAM is not new. (0)

Anonymous Coward | about 3 years ago | (#37337884)

If you'd got an Amiga, the RAM chips that came with it wouldn't have been cardboard ;)

[ Just kidding; thought it would be fun to revive old rivalries for a moment ;) ]

You'll still need multiple units: Multi-channels (1)

MROD (101561) | about 3 years ago | (#37332962)

This will merely increase the density of individual memory modules. However, with processors using multiple memory channels (for performance reasons) you will still require a separate memory unit per memory channel. For Intel Core i5/i7 processors this would be two units. For Xeons it would be sets of three.

seen this on sdram (1)

zugedneb (601299) | about 3 years ago | (#37333056)

I had an Alpha-driven Compaq XP1000; it had RAM with 2 chips stacked on it...

Also, heat can be led out from the middle of the sandwich by thin metal plates, glued to the chips with some epoxy...

Re:seen this on sdram (1)

gstrickler (920733) | about 3 years ago | (#37333968)

That's stacked packages, this is stacked chips in the package. There are numerous advantages to doing it in package (see my post [slashdot.org] above.)

Cyberdyne did it first (1)

ruiner13 (527499) | about 3 years ago | (#37333170)

Neural Net CPU [wikia.com]

I think the terminators want their technology back. Is it time for SkyNet yet?

Cosmic ray bit flip chance increase? (1)

psyclone (187154) | about 3 years ago | (#37333494)

Doesn't higher memory density result in a greater chance for cosmic radiation to flip bits?

With greater power savings and more memory per module, adding ECC to the mix shouldn't be too painful.

Re:Cosmic ray bit flip chance increase? (1)

gstrickler (920733) | about 3 years ago | (#37333954)

Not in this instance. This actually helps avoid that. Smaller geometry increases the chances of a bit flip due to cosmic rays (capacitance discharge due to a cosmic ray). This allows 2x (or more) the memory in a given package (it may be slightly thicker), without going to a smaller geometry.

On-chip ECC isn't called for (at least not yet), and in fact chipset-based ECC has several advantages, including having a single ECC controller for all memory, and being able to detect errors in the memory bus, not just errors in the chip.
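
For readers unfamiliar with how ECC repairs a flipped bit, here is a minimal single-error-correcting Hamming(7,4) sketch in Python. Real chipset ECC is SECDED over 64 data bits plus 8 check bits, but the principle is the same:

    # Minimal Hamming(7,4): 4 data bits protected by 3 parity bits,
    # enough to locate and correct any single flipped bit.
    def hamming74_encode(d):
        """d: four data bits -> 7-bit codeword [p1,p2,d1,p4,d2,d3,d4]."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p4 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p4, d2, d3, d4]

    def hamming74_correct(c):
        """Recompute parity; the syndrome is the 1-based error position."""
        c = list(c)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s4
        if syndrome:
            c[syndrome - 1] ^= 1       # flip the bad bit back
        return c

    word = hamming74_encode([1, 0, 1, 1])
    hit = list(word)
    hit[4] ^= 1                        # simulate a cosmic-ray bit flip
    assert hamming74_correct(hit) == word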

Re:Cosmic ray bit flip chance increase? (1)

psyclone (187154) | about 3 years ago | (#37373300)

Thank you for the very informative reply!

Just imagine... (0)

Anonymous Coward | about 3 years ago | (#37334426)

Just imagine a Beowulf cluster of these? Oh wait, we already have them, they are called STICKS OF RAM!!!!

when can I expect 4gb SODIMMs? (1)

wierd_w (1375923) | about 3 years ago | (#37334872)

Right now, I can only get 2x 2gb sticks inside most laptops.

Given the inherent doubling of chip density this offers, when can I expect to be able to purchase 4gb SODIMM packages?

Re:when can I expect 4gb SODIMMs? (0)

Anonymous Coward | about 3 years ago | (#37335224)

Right now, I can only get 2x 2gb sticks inside most laptops.

Given the inherent doubling of chip density this offers, when can I expect to be able to purchase 4gb SODIMM packages?

4GB (shouldn't ram be expressed in GiB and not GB?) chips are already available for SO-DIMM

http://www.kingston.com/hyperx/products/khx_sodimm.asp

Just how long before the 8GB chips come out though?

Re:when can I expect 4gb SODIMMs? (0, Troll)

wierd_w (1375923) | about 3 years ago | (#37335556)

Really? Every place I have looked, only 2gb sticks were available. 8gb sticks would be awesome.

(No. There is no real ambiguity in saying 1gb == 1024mb, 1mb == 1024kb, 1kb == 1024b. The ambiguity was injected by shyster disc manufacturers, wanting to claim 1mb as 1000kb, instead of the correct 1024kb, because they wanted to sell a lower capacity device as larger than it really was. 1gb is 1gb. I refuse to adopt a whole different suffix just because of marketing drones trying to reinvent the term.)

Re:when can I expect 4gb SODIMMs? (1)

Vegemeister (1259976) | about 3 years ago | (#37336840)

Here [newegg.com] is your 2x4GiB SODIMM. $45. There was a sale at $35, but that's sold out.

I refuse to adopt a whole different suffix just because of marketing drones trying to reinvent the term.

My heart bleeds.

Re:when can I expect 4gb SODIMMs? (0)

Anonymous Coward | about 3 years ago | (#37337426)

So there's two different series in common use. You can either use different terms for them, and always be clear, or you can use the same term, and be misunderstood a significant chunk of the time. Is expressing rage against "shysters" who dared to use SI prefixes literally actually worth the communication hurdle you throw up?

Oh, and I suppose there's also no ambiguity in saying "b" instead of "B" when you mean bytes, not bits?

Just admit it: you don't give a crap about ambiguity, you just want to use the least typing effort and hope everyone else makes the effort to figure out WTF you mean. Then again, you can't be arsed to look for the 8GiB SO-DIMMs that are already available and expect everybody on /. to google them for you, so I guess your utter laziness should be no surprise.

Network signaling rates are arbitrary and have always been expressed in 1000-series; RAM comes in powers of two (courtesy of addressing) and so is most naturally expressed in 1024-series. Rotating disk space is arbitrary multiples of 512 or other power-of-two byte sectors, so it doesn't fit either scheme horribly well, and even SSDs using flash (which comes in powers of two like RAM) only exposes an arbitrary fraction (typically ~80%, definitely not 0.5) of that to permit automatic wear-leveling. So even if disk space was uniformly expressed in 1024-series like RAM, there would still be a disconnect between network and disk, and we'd still need to clear up the ambiguity, and you'd still oppose it, because the i key is just so damn far from the home row. Man the fuck up, loser.
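
For reference, the actual size of the gap being argued about (plain Python):

    # The decimal-vs-binary prefix gap, which grows with each prefix.
    for exp, (si, iec) in enumerate([("kB", "KiB"), ("MB", "MiB"),
                                     ("GB", "GiB"), ("TB", "TiB")], start=1):
        decimal, binary = 1000 ** exp, 1024 ** exp
        gap = (binary - decimal) / decimal
        print(f"1 {iec} = {binary:>16,} bytes = 1 {si} + {gap:.1%}")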

Re:when can I expect 4gb SODIMMs? (0)

Anonymous Coward | about 3 years ago | (#37336086)

Try googling. 4GB SODIMMs are very common these days.

Re:when can I expect 4gb SODIMMs? (1)

hawk (1151) | about 3 years ago | (#37336208)

???

I bought them from OWC for my MacBook about a year ago.

They now sell 8GB modules, at least for the new iMacs (but they're ungodly expensive, more than $1k each when I checked a few weeks ago).

hawk

Re:when can I expect 4gb SODIMMs? (1)

TeknoHog (164938) | about 3 years ago | (#37336746)

What? I was speccing a work laptop almost 3 years ago, and I asked if I could get a total of 8 GB. I was told 4 GB SODIMMS are bloody expensive, but they were available anyway.

Re:when can I expect 4gb SODIMMs? (0)

Anonymous Coward | about 3 years ago | (#37337044)

Yesterday.

2 minutes with google (pro-tip: search for e.g. "8GB SODIMM" to find 2x4GB packages) says you can even get an 8GB SO-DIMM [crucial.com] now -- if you can afford it...

Re:when can I expect 4gb SODIMMs? (0)

Anonymous Coward | about 3 years ago | (#37337520)

Don't know where you're living but over here (France) DDR3 SO-DIMMS sell at an all-time low (under €5 / GB).
Example:
http://www.topachat.com/pages/detail2_cat_est_micro_puis_rubrique_est_wme_soddr3_puis_ref_est_in10039815.html

More Stuff to Fall Off Motherboard (0)

Anonymous Coward | about 3 years ago | (#37334968)

They can't keep graphics chips or memory sockets stuck on the motherboard. Now we're talking stacks of chips.

Oh, I can see the warranty repairs and class action lawsuits....

When Cpu's ? (1)

bobjr94 (1120555) | about 3 years ago | (#37335006)

Can they also start stacking CPUs? 12, 24, 48 cores? They would have to have cooling pipes running through them, or thin separator plates connected to a cooling system, and soon you would need 220V outlets in your bedroom to power your 48-core system with cooling. Per-core CPU speeds seem not to have gained much in the last few years; they are faster per MHz thanks to better optimization, but seem to be faster mainly due to more cores per CPU.

Re:When Cpu's ? (1)

jawtheshark (198669) | about 3 years ago | (#37337514)

Doesn't everyone have 220V outlets in their bedroom? I certainly have....

Re:When Cpu's ? (1)

CaptSlaq (1491233) | about 3 years ago | (#37338966)

What kind of current do you get out of that 220? In the states the 110 generally gets 15-20 amps per circuit.

Re:When Cpu's ? (1)

jawtheshark (198669) | about 3 years ago | (#37339204)

There are countries outside "The States"... That was pretty much the point of my post... My country is listed as having 230V [wikipedia.org]. I think you can pull up to 16A.

Re:When Cpu's ? (0)

Anonymous Coward | about 3 years ago | (#37347688)

That was kind of my point: I had some ignorance that I was trying to patch, which is why I qualified my question with my current (heh) knowledge. If you can pull 16A on a 220-240 circuit, you do have a significant leg up in powering larger devices than we in "The States".

Re:When Cpu's ? (1)

jawtheshark (198669) | about 3 years ago | (#37349168)

Oh, glad to fill in, even if Wikipedia probably is a better source than me. I did some Google searches to come up with the 16A. What I can say is that it's not unusual to have (for example) a 3000W water cooker plugged into the normal mains. These days I haven't seen any devices requiring any special connections any more (three-phase [wikipedia.org] ). I think, but it's so long ago I might be incorrect, that the washing machine my parents had in my childhood required such a connection. From what I can see, all my large household appliances (washing machine, dryer, dishwasher, fridge) use normal connections.

When I was a student and I rented a room, there was one thing I shouldn't do: use my 2800W water cooker and watch TV at the same time (small CRT TV, I think 21" or so). That would blow the fuse of the floor I was on. It was an old house with old cabling. I've never seen that happen in a house built in the last 40 years, though.
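
The arithmetic behind that blown fuse, with the fuse rating and TV wattage assumed since the anecdote doesn't give them:

    # Rough check of the kettle anecdote. The 2800 W kettle is from the
    # story; the ~60 W CRT and the fuse rating are assumptions.
    V = 230.0
    kettle = 2800.0 / V   # ~12.2 A from the kettle alone
    tv = 60.0 / V         # ~0.26 A from a small CRT
    print(f"Kettle: {kettle:.1f} A, kettle + TV: {kettle + tv:.1f} A")
    # On old wiring fused around 10-13 A, a load already at the limit
    # only needs the TV's extra quarter amp to push it over.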

Re:When Cpu's ? (1)

Laurence0 (832251) | about 3 years ago | (#37339024)

I have 240V...

Re:When Cpu's ? (1)

jawtheshark (198669) | about 3 years ago | (#37339090)

Ah, yes... These days it's 240V... I always forget. You're right of course.

Re:When Cpu's ? (1)

jawtheshark (198669) | about 3 years ago | (#37339156)

Hmmm, apparently it's 230V for my country. Anyway, most European countries seem to be between 220V and 240V at 50Hz.

Already been done (0)

Anonymous Coward | about 3 years ago | (#37335062)

NAND packages come like this all the time: DDP (dual-die package), ODP (octal-die package), etc.
