
Slashdot: News for Nerds


Samsung '3D' Memory Coming, 50% Denser

timothy posted more than 3 years ago | from the now-in-stereo dept.

Data Storage

CWmike writes "Samsung on Tuesday announced a new 8GB dual inline memory module (DIMM) that stacks memory chips on top of each other, which increases the density of the memory by 50% compared with conventional DIMM technology. Samsung's new registered or buffered (RDIMM) product is based on its current Green DDR3 DRAM and 40 nanometer (nm)-sized circuitry. The new memory module is aimed at the server and enterprise storage markets. The three-dimensional (3D) chip stacking process is referred to in the memory industry as Through Silicon Via (TSV). Samsung said the TSV process saves up to 40% of the power consumed by a conventional RDIMM. Using the TSV technology will greatly improve chip density in next-generation server systems, Samsung said, making it attractive for high-density, high-performance systems."
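As a quick sanity check on those percentages, here's a back-of-envelope sketch. The baseline module figures are illustrative assumptions, not Samsung's numbers:

```python
# Apply the claimed TSV gains -- +50% density, up to -40% power -- to a
# hypothetical conventional RDIMM baseline (assumed values, not specs).
def tsv_gains(base_capacity_gb, base_power_w,
              density_gain=0.50, power_saving=0.40):
    """Return (capacity, power) after the claimed TSV improvements."""
    return (base_capacity_gb * (1 + density_gain),
            base_power_w * (1 - power_saving))

capacity_gb, power_w = tsv_gains(base_capacity_gb=8, base_power_w=5.0)
print(capacity_gb, power_w)  # 12.0 3.0
```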


87 comments

high density + high performance.... (1)

MichaelKristopeit133 (1947020) | more than 3 years ago | (#34484022)

obviously "high" is relative as that is a "choose one" scenario.


Saves up to 40% power savings? (2)

Ismellpoop (1949100) | more than 3 years ago | (#34484044)

Does RAM really use that much power?
Now, 40% power savings on the latest 3D accelerator would be awesome. It would probably help with the heat issues, too.

Re:Saves up to 40% power savings? (5, Informative)

Anonymous Coward | more than 3 years ago | (#34484076)

Googling a bit, one test showed 2x1 GB of memory consuming up to 7.28 watts.
http://www.tomshardware.com/reviews/hardware-components,1685-13.html

For PC, that's practically nothing. For mobile devices, every watt counts.
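For scale, the measurement above works out to a few watts per gigabyte; a trivial sketch:

```python
# Per-GB power from the Tom's Hardware measurement quoted above:
# 2 x 1 GB drawing up to 7.28 W under load.
total_w = 7.28
total_gb = 2
w_per_gb = total_w / total_gb
print(w_per_gb)  # 3.64
```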

Re:Saves up to 40% power savings? (4, Interesting)

0100010001010011 (652467) | more than 3 years ago | (#34484184)

Not just mobile. Newer generations of HTPCs and plug computers are using 20W. The AppleTV2 has a 6W power supply. Assume they overspec'ed it by 20%; that's 5W at full tilt.

7W is a huge fraction of those numbers.

Re:Saves up to 40% power savings? (5, Interesting)

0123456 (636235) | more than 3 years ago | (#34484338)

Not just mobile. Newer generations of HTPCs, Plug like devices are using 20W.

Yeah, I measured my MythTV frontend at 26W from the wall; so if the 4GB of RAM is taking 14W, that would be more than half the total consumption of the entire system.

Re:Saves up to 40% power savings? (3, Interesting)

MrNemesis (587188) | more than 3 years ago | (#34484782)

Out of curiosity, what hardware are you using? I've just picked up one of the new ASRock Vision 3D HTPCs (great little machine for Myth/XBMC; works OotB with Linux for everything except the IR receiver, although for some reason Amazon won't publish my review) that pulls 23W from the wall on a bad day and idles at about 17W. My old C2D-based mATX box pulled more like 50-60W.

But yeah, I've never been able to quantify the power usage of memory. I think they must take an absolute worst-case scenario along the lines of "if every bit were flipped at once" or something like that. DIMMs even run cooler than they used to, making those ubiquitous heatspreaders all the more superfluous.

Re:Saves up to 40% power savings? (1)

0123456 (636235) | more than 3 years ago | (#34491666)

Out of curiosity, what hardware are you using?

Probably a bit late now, but that's a Zotac Ion motherboard in a small ITX case with some 'silent'-ish fans, 4GB of RAM and an X25-V SSD. It doesn't really need 4GB, but since it's running off a cheap SSD I wanted to push all temporary storage into a RAM disk to reduce SSD writes.

Re:Saves up to 40% power savings? (1)

MrNemesis (587188) | more than 3 years ago | (#34505388)

Thought about buying an Ion myself, but I found the Atom sucked for most non-video stuff; it was crappy with XBMC and YouTube bits (running XBMC on top of Debian myself).

Also using an SSD, a 30GB OCZ Vertex, but from my experience with the Intel drives you can safely use them for temp storage. Also got 4GB in the ASRock (which is essentially just Intel and nVidia laptop components in a Mac-mini-esque chassis), and it's laughable how little of it Linux + XBMC ever use :)

Surprised the Atom eats so "much" power though; I'd have expected that setup to draw no more than 10W.

Re:Saves up to 40% power savings? (1)

Anonymous Coward | more than 3 years ago | (#34484368)

For stationary devices that run on battery anything that peaks over 0.1W is unacceptable. When you are expected to run your stuff for at least 2 years on 3 AAA cells you'd better cut down the average consumption to the uW range.

Re:Saves up to 40% power savings? (1)

bigdaisy (30400) | more than 3 years ago | (#34485364)

Looking up some datasheets on Kingston's "valueram.com", 2x1GB DDR2 DIMMs use about 1.0-1.4W depending on clock speed. That drops to about 0.8-0.9W for DDR3 modules.

Re:Saves up to 40% power savings? (1)

spydum (828400) | more than 3 years ago | (#34489636)

For a server with, say, 16x4GB DIMMs, that can add up real quick. Consider a farm of 40 such machines. Every watt counts when dealing in extreme scales (both small and large).
Not to mention, the heat generated is just as significant as the power drawn.
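A hedged sketch of that fleet-level arithmetic (the 5 W per RDIMM is an assumed figure, not from a datasheet):

```python
# DIMM power across a small server farm, and what the article's
# claimed 40% TSV saving would buy back.
DIMMS_PER_SERVER = 16
SERVERS = 40
WATTS_PER_DIMM = 5.0   # assumption for a DDR3 RDIMM of this era
TSV_SAVING = 0.40      # "up to 40%", per the article

total_w = DIMMS_PER_SERVER * SERVERS * WATTS_PER_DIMM
saved_w = total_w * TSV_SAVING
print(total_w, saved_w)  # 3200.0 1280.0
```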

Re:Saves up to 40% power savings? (1)

wealthychef (584778) | more than 3 years ago | (#34492960)

Yes, cooling costs for a large server farm are substantial -- you have to run huge cooling towers to keep the machines cool.

Re:Saves up to 40% power savings? (2)

Requia (1734466) | more than 3 years ago | (#34484200)

How much it matters depends on what kind of limits you have. Server power draw can run up against building or power grid limits, at which point every watt counts.

Re:Saves up to 40% power savings? (5, Interesting)

ihavnoid (749312) | more than 3 years ago | (#34484440)

Additionally, an average server has 2x the CPUs and 8x the memory of an average desktop, while having 0x the graphics. Another problem is that we are running out of tricks for reducing DRAM power, which means the DRAM share of total power may increase steadily in the near future.

Even graphics cards have sizable, high-bandwidth RAM on-board.

Trust me, DRAM power consumption is becoming a serious problem.

Re:Saves up to 40% power savings? (0)

Anonymous Coward | more than 3 years ago | (#34485338)

most graphics cards I've bothered to look at (~5 years ago I admit) spec'ed themselves as having SRAM, not DRAM. //CSB

Re:Saves up to 40% power savings? (1)

Pence128 (1389345) | more than 3 years ago | (#34526930)

You're thinking of SDRAM, which is pretty much synonymous with DRAM these days.

Re:Saves up to 40% power savings? (3, Funny)

Ailure (853833) | more than 3 years ago | (#34485642)

Trust me, DRAM power consumption is becoming a serious problem.

So, apparently, are cosmic rays. ;)

Re:Saves up to 40% power savings? (0)

Anonymous Coward | more than 3 years ago | (#34487534)

No, I won't trust you.

Re:Saves up to 40% power savings? (1)

TheRaven64 (641858) | more than 3 years ago | (#34487856)

Perhaps more importantly, RAM power is close to constant. If your CPU load is low, you can underclock the CPU and lower the power usage. You can spin down disks when they're not in use. Pretty much any other component of a modern computer can be powered down when not in use, but RAM needs to constantly refresh its contents. This means that it is consuming power at a pretty constant rate. It takes slightly more power to read or write, but not very much.

In theory, an OS could swap things out of most RAM modules and power them off, but in practice that's pretty hard to do. This means that RAM power usage also has a significant effect on the sleep time of most portables: the RAM is about the only thing drawing power in that state, and a 5W draw means a typical laptop battery won't last all that long, forcing you to do some kind of suspend-to-disk to conserve power (meaning you have to guess how long the machine will be powered off, because suspend-to-disk uses more energy if it's a short sleep).

Power also means heat, and in a handheld the amount of heat you can dissipate is severely limited. A lot of ARM systems use a package-on-package configuration, where the SoC, RAM and flash are all in separate packages that are stacked vertically on top of each other. If the RAM gets too hot, the CPU will overheat (and vice versa), so RAM power limits the amount of RAM you can use, even with a large battery.
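The suspend trade-off described above can be sketched as a simple break-even calculation; every number here is an illustrative assumption:

```python
# Suspend-to-RAM draws a small constant power; suspend-to-disk costs a
# fixed chunk of energy to write out and restore memory, but ~0 W while
# off. Beyond the break-even time, hibernating wins.
def breakeven_hours(ram_sleep_w, hibernate_overhead_wh):
    return hibernate_overhead_wh / ram_sleep_w

# e.g. 1.5 W sleep draw vs 2 Wh spent dumping/restoring RAM to disk:
hours = breakeven_hours(ram_sleep_w=1.5, hibernate_overhead_wh=2.0)
print(round(hours, 2))  # 1.33
```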

Re:Saves up to 40% power savings? (1)

Vegemeister (1259976) | more than 3 years ago | (#34493216)

Suspend-to-disk defeats the purpose of suspend in many use cases. Case in point: it's ass-slow, while a suspended laptop is pretty close to instant-on. Also, a laptop that is writing its memory to disk can't be thrown in the backpack until the disk shuts off, without risking a head crash.

What about SRAM (1)

codecore (395864) | more than 3 years ago | (#34493046)

So I'm guessing that much of the DRAM power budget goes to the refresh requirement. At 1 transistor per cell, DRAM has been 4x less expensive than SRAM, before taking economies of scale into consideration. So where is the SRAM market? Why do we still not see an alternative with better speed and power, at the cost of price and storage density? Why can't I make that choice? And while we're on the subject of MIA memory technology, where are the FRAM devices? Those would be flash-based-SSD killers.

Re:What about SRAM (1)

Pence128 (1389345) | more than 3 years ago | (#34527128)

I heard that since SRAM cells are larger (6 transistors, actually: 4 for the latch and 2 for select), on very large chips the bus capacitance overwhelms the advantage given by faster sensing.

Re:Saves up to 40% power savings? (2)

wagnerrp (1305589) | more than 3 years ago | (#34484640)

Your average desktop with 2-4 sticks of 8- or 16-chip memory isn't a concern. When you're talking servers with 16 sticks of 36-chip ECC memory, it really adds up.

Re:Saves up to 40% power savings? (1)

afidel (530433) | more than 3 years ago | (#34486720)

DDR3-1333 RDIMMs use about 5W each; multiply that by the 18 DIMMs available in a fairly typical 1U or 2U server and it adds up. For an even more extreme example, look at the HP DL980: 128 slots, so 640W just for the memory.

Re:Saves up to 40% power savings? (3, Funny)

tom17 (659054) | more than 3 years ago | (#34487798)

640W should be enough power for anybody.

Re:Saves up to 40% power savings? (1)

Jeprey (1596319) | more than 3 years ago | (#34497118)

Yes. It really does.

The reason is that to get the signals off the chip you have to amplify them and then eat the losses of line-charging the bond pads, bond wires, package traces, PCB traces, etc. This charging is a pure parasitic loss induced by shipping the data off-chip. Keeping it all on-chip avoids this and allows nA-uA currents to be used throughout, rather than kicking things up to mA currents and then back down again.

This, combined with jitter limits, is part of the reason for going to more cores rather than increased GHz clock rates.

The one caveat with 3D is that it probably won't work for microprocessors, because too many devices are switching at any one time. This includes CPUs and GPUs. Memory is more "quiescent" in terms of the duty cycle of any given on-chip transistor, so it's a better candidate for 3D.

As it is, the power density in a typical minimum-geometry CMOS transistor is higher than in the core of the Sun. This is a serious thermal issue, especially since >90% of thermal conduction is only through the silicon substrate. That's another reason SOI/SOS hasn't been nearly as successful as you might expect.
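The off-chip line-charging loss described above is the usual C*V^2*f dynamic-power term; a rough sketch, where every capacitance, swing, and frequency value is an illustrative assumption:

```python
# Dynamic power burned toggling a capacitive load: P = a * C * V^2 * f.
# Numbers below are illustrative, not from any datasheet.
def switching_power_w(c_farads, v_swing, freq_hz, activity=1.0):
    return activity * c_farads * v_swing**2 * freq_hz

# ~10 pF of pad + trace load, 1.5 V swing, 667 MHz toggle rate:
off_chip = switching_power_w(10e-12, 1.5, 667e6)
# ~10 fF of on-chip wire at the same swing and rate:
on_chip = switching_power_w(10e-15, 1.5, 667e6)
print(off_chip, on_chip)  # the off-chip line costs ~1000x more
```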

3D (1)

Anonymous Coward | more than 3 years ago | (#34484046)

Great, does the CPU now need 3D glasses too ?

TIME TO BRING BACK CORE !! (4, Funny)

Anonymous Coward | more than 3 years ago | (#34484058)

Core memory is static in the true sense of the word. I've got core memory that hasn't changed a bit in 60 years. Punks !! You don't know memory.

Re:TIME TO BRING BACK CORE !! (5, Funny)

StormUP (892787) | more than 3 years ago | (#34484230)

Sounds really slow. When do you expect the bit to finish changing?


Re:TIME TO BRING BACK CORE !! (2)

Arlet (29997) | more than 3 years ago | (#34484770)

With core memory, a read is destructive, so it's not truly static.

Oh great. Dense memory. (2, Funny)

DWMorse (1816016) | more than 3 years ago | (#34484078)

It'll fit right in with my ex's computer. Stupid P.O.S. Gateway.

*takes a deep breath...* NOW WHEN SHE TYPES IN ALL CAPS and overuses LOL ON FOXNEWS.COM and adds a thousand!!!!!!!!!!!!!!! EXCLAMATION POINTS... her memory can be just as dense as she is.

Re:Oh great. Dense memory. (0)

Anonymous Coward | more than 3 years ago | (#34484482)

That's good stuff. You just made my day!!!!!!

Re:Oh great. Dense memory. (3, Funny)

martas (1439879) | more than 3 years ago | (#34484486)

That's the spirit, let it all out! We're happy to hear the woes of the few among us who have ever had social contact with the opposite sex ;]

Re:Oh great. Dense memory. (1)

mcgrew (92797) | more than 3 years ago | (#34486858)

We're happy to hear the woes of the few among us who have ever had social contact with the opposite sex ;]

It can get much worse... [slashdot.org]

Re:Oh great. Dense memory. (0)

Anonymous Coward | more than 3 years ago | (#34484488)

LOL. You stalk your ex. LOL.

Get over it!!!!!!!!!!!!!!

Re:Oh great. Dense memory. (1)

Freultwah (739055) | more than 3 years ago | (#34484776)

Oh, the aggravation that people put up with to get some poontang.

Re:Oh great. Dense memory. (3)

Psicopatico (1005433) | more than 3 years ago | (#34485064)

Snippet from a boot sequence:

CPU: Memory, are you dense?
Mem: Yes, I am.
CPU: Derp

Re:Oh great. Dense memory. (0)

Anonymous Coward | more than 3 years ago | (#34486934)

her memory can be just as dense as she is.

Yeah, but me and the rest of the Chicago Bulls are here to say she's a great piece of ass.

  - Carlos Boozer, Derrick Rose, et al.


good or bad? Not sure yet (1)

Anonymous Coward | more than 3 years ago | (#34484228)

One way to look at this is "oh good, people have been talking about stacked chips for years, and they're finally rolling it out for mass production. Another tool to increase density. Yay!"

The other point of view: "The geometries aren't going to be shrinking much longer, so chip makers are starting to turn to desperate measures to keep Moore's law going. This will work once or twice, but when the shrinks stop, and the chips are already stacked, we're going to run out of roadmap, probably soon".

Not sure which yet. The clocks stopped almost 10 years ago. The geometries are probably next.

Re:good or bad? Not sure yet (0)

Anonymous Coward | more than 3 years ago | (#34485550)

Hopefully after that will come price.
Once all the R&D shifts to the mass-manufacturing process, stuff should get _cheap_.

(I can dream, right?)

Re:good or bad? Not sure yet (0)

Anonymous Coward | more than 3 years ago | (#34487374)

Hopefully after that, the software retards will have to start to actually learn to program, instead of just tossing abstraction on top of abstraction on top of virtual crap and hoping the hardware is fast enough. It's about time the onus of performance shifts on the software guys. How many metrics do you have to measure hardware performance? Lots. Frequency, bus speed, latency, power, cost, etc... How many metrics are there for software? ... chirp chirp ....

Get those software assholes under control, you don't need a 3GHz CPU to send a few bytes over a network.

Re:good or bad? Not sure yet (1)

HiThere (15173) | more than 3 years ago | (#34489060)

Well, IBM seems to think the next step is liquid coolant. Then you can just keep stacking them higher. Not sure myself. I don't really like the idea of water inside the chips, and there doesn't seem to be a good replacement for freon. (Or maybe there is. What do modern refrigerators work on?)

Re:good or bad? Not sure yet (1)

wierd_w (1375923) | more than 3 years ago | (#34490462)

tetrafluoroethane. [wikipedia.org] A fluorocarbon.

Similar to what is in compressed air dusters. (usually difluoroethane. [wikipedia.org] )

With both compounds boiling below room temperature, though, your RAM chips will be internally pressurized, which means mechanical stresses during heating and cooling cycles.

Re:good or bad? Not sure yet (1)

Idiomatick (976696) | more than 3 years ago | (#34489560)

We've got the 3rd dimension to fully use. Then we have memsistors (I won't call them memristors because it isn't a portmanteau and sounds sort of stupid). These two things will be able to feed our high rate of growth for some time, but it will come to an end, maybe within 30 years, before we find something else to keep pushing us forwards.

And if we ever DO have a period with no technological progress we have created ourselves a comfortable buffer zone... Software efficiency can be improved greatly, and each improvement to software gets multiplied ten-fold. Hopefully by the time we get through that period, science will have picked up again.

If not? There is always infrastructure to build up. Perhaps computers will act in part like thin clients, passing off super-complex calculations to a server. This isn't so unreasonable if you think of it as the natural extension of the internet. We pass searches to Google today rather than run a spider ourselves. In the future we'll likely pass more processing over to giant servers. If a true artificial intellect is built, then users will likely make requests of it from the outside rather than have it exist on their PC.

Will Apple bite? (1)

AHuxley (892839) | more than 3 years ago | (#34484248)

From ECC buffered DIMMs to RDIMMs in the Mac Pros?

Re:Will Apple bite? (1)

nounderscores (246517) | more than 3 years ago | (#34484294)

I'd be interested to see if they put it into a new generation of apple Xserve rack-mounted servers.

Re:Will Apple bite? (1)

Arrepiadd (688829) | more than 3 years ago | (#34484798)

You mean... the ones [apple.com] they've already announced will be discontinued?
From Wikipedia: "On November 5, 2010, Apple announced that it would not be developing a future version of Xserve."

Re:Will Apple bite? (1)

petermgreen (876956) | more than 3 years ago | (#34484564)

Apple is already using DDR3 ECC (they don't say if it's registered or unregistered, but I suspect registered) in the Mac Pro and Xserve. It's not like Apple had a lot of choice in the matter: memory controllers are now in the CPU, so the CPU vendors call the shots as to what will be supported.

Good news for data centers (2)

nounderscores (246517) | more than 3 years ago | (#34484284)

Anything that reduces the cooling load and the power bill will be welcome.

Re:Good news for data centers (0)

Anonymous Coward | more than 3 years ago | (#34485866)

Yeah, what you don't pay in cooling is spent on overpriced memory modules.

How long... (0)

Anonymous Coward | more than 3 years ago | (#34484374)

Shall we take bets on how long until Rambus sues Samsung, claiming they in fact invented this technology?

Re:How long... (1)

arashi no garou (699761) | more than 3 years ago | (#34485020)

I don't know about Rambus inventing it, but I have seen SIMMs with stacked chips before, at my part-time bench-testing job. There was a batch of them in a box of very old RAM I had to evaluate. We didn't have a server on hand old enough to put them in for testing (they were registered units), so they stayed in the junk bin.

Re:How long... (2)

RoverDaddy (869116) | more than 3 years ago | (#34485684)

Despite the summary, I don't think they're literally talking about 'stacked chips' in the sense of two separate packages here. I have (seriously) a 64KB expansion card for the original IBM PC (1982) that achieves its incredible memory density with stacked chips. A quick look at the link to 'Through Silicon Via' suggests something more like two wafers inside a single plastic package, with vertical traces connecting them together inside the package.

Re:How long... (1)

PitaBred (632671) | more than 3 years ago | (#34491584)

That's exactly it. It's a way to have two layers of silicon stacked and still connected through each other. Because right now all the connections for a memory chip are around the edges and the back, and you can't really double that up with just more wires.

huzzah (2)

Apothem (1921856) | more than 3 years ago | (#34484380)

This is great for the big-business side of things, but how soon will we see this at the consumer level? I mean, we keep seeing all these really high-spec systems being used by governments and large operations, but nothing for the little guys? TFA gives no hints.

Re:huzzah (1)

petermgreen (876956) | more than 3 years ago | (#34484588)

Thing is, the memory support on desktop boards is already ahead of what most people need, even with today's "bloatware". LGA1156 supports 16GB and desktop LGA1366 supports 24GB, yet even on "enthusiast" forums the consensus seems to be that 8GB is plenty.

Great. (4, Insightful)

olsmeister (1488789) | more than 3 years ago | (#34484400)

We've added another dimension, and got 50% denser. Sounds like we didn't do our jobs very well.

Yeah, what about using both sides? (3, Interesting)

ThreeGigs (239452) | more than 3 years ago | (#34484680)

I've always wondered if there was a reason why manufacturers didn't use both sides of the silicon for lower powered chips, like memory. Seems like a win-win... twice the component count for the same silicon investment. Yeah, handling might be tricky, but not a showstopper.

Re:Yeah, what about using both sides? (2)

guruevi (827432) | more than 3 years ago | (#34484922)

They already do; buy somewhat denser memory than you're used to (or can afford) and you'll see it happen.

This, I believe, is talking about stacking multiple chips on one of the sides, probably in the same packaging as a single chip.

Re:Yeah, what about using both sides? (1)

tlhIngan (30335) | more than 3 years ago | (#34487988)

This I believe is talking about stacking multiple chips on one of the sides, probably in the same packaging as a single chip.

Not a new technique, either. It's just another stacked die, where you have multiple chips stacked one atop the other. Stacked dies have been commercially available for at least 5 years now (usually in flash chips).

Various forms of packing multiple dies together have been around for a while. We've had multi-chip packaging (like the Pentium Pro); package-on-package, where you put two ICs one atop the other (the '80s had it with DIP sockets on top of chips for EPROMs and the like, and modern parts where you can't even tell, like the Apple A4); and stacked dies. In fact, the A4 uses both a stacked die and package-on-package (there are two memory dies in the memory chip that's soldered on top of the A4 SoC). You can get stacked dies with both flash and RAM on them, often combined with package-on-package, so you end up with a chip that's basically "apply power, apply clock, watch it run" (there's no gap with package-on-package).

Samsung has just put that technology to use on its generic memory parts now.

Re:Yeah, what about using both sides? (0)

Anonymous Coward | more than 3 years ago | (#34484994)

Yes, it is a showstopper.

Re:Yeah, what about using both sides? (1)

Anonymous Coward | more than 3 years ago | (#34485030)

Except for the plummeting yields, which could easily - or even likely - mean *more* wasted silicon.

Re:Yeah, what about using both sides? (0)

Anonymous Coward | more than 3 years ago | (#34486552)

Actually, it could mean HIGHER yields.

If you are making memory and print on both sides, say one side is crapped out but the other is fine. You now have a module you can sell instead of tossing; it's half the size of your normal module.

Re:Yeah, what about using both sides? (2, Informative)

Anonymous Coward | more than 3 years ago | (#34485672)

They don't use both sides because the back side is where the robot handlers touch the wafer to move it. At several steps in the process the wafer is vacuumed down to chucks to hold it and keep it flat. If you did print on the back, the pattern would be damaged by all the backside handling, ruining the chips there. There is also the issue of front-to-back wafer alignment. While I am sure some college kids or profs will come on and try to quote things from textbooks and sales pamphlets, it is not an easy thing to do and adds an even greater level of complexity to your registration.

Not to mention that the wafer, when it goes through a fab, is thicker than when it goes into chips. After it comes out of the fab it goes to back-grind before it goes to the diamond saw, so there is less to cut through and the finished chips can be thinner. A chip cut and put into a package without that back-grind would have trouble fitting into some packages, especially when multiple chips are put into one package.

When they are talking 3D chips, they are stacking patterns on the same side of the wafer.

Re:Great. (1)

grumpyman (849537) | more than 3 years ago | (#34488440)

Is it really 'adding another dimension', or more like stacking up more, thinner pancakes?

Cube memory? (1)

Khyber (864651) | more than 3 years ago | (#34484452)

Will this perhaps give us a chance at having cubical memory stacks to plug into our motherboards like tiny processors? I could really enjoy 2GB RAM in a little 3/8"x3/8"x3/4" stack. Key it right and save costs on PCB. Might be able to be cooled just as easily.

Re:Cube memory? (0)

Anonymous Coward | more than 3 years ago | (#34484546)

Yeah this is a nice thought. There's so much wasted vertical space on ATX PC motherboards. Even the smaller variants, uATX, mini-ITX, could benefit immensely from this (typically the min case width is determined by width of video card, or sometimes height of CPU cooler).

Re:Cube memory? (4, Interesting)

wierd_w (1375923) | more than 3 years ago | (#34484586)

3D geometries have serious issues with line saturation and heat dissipation. This is because of thermal noise and the increased voltage needed to overcome it (which, in turn, creates more heat).

We are already at the point where high performance RAM chips need heat spreaders, and that is with 2D chip geometries that can eliminate heat reasonably efficiently.

When you start stacking multiple silicon fab layers together, heat builds up in the layers, requiring more voltage to overcome thermal noise, which produces more heat... you get the idea.

Without separating the layers with some kind of highly thermally conductive intermediate to pipe the heat out, the insides of the chips become little easy bake ovens, and estimated service life drops radically, as does performance metrics.

I could see them going 2 levels deep in the geometry, with a special package with heat spreaders on both sides (of the package itself that is- not the DIMM) or something crazy like that-- but I really can't see a big "solid 3D block" of silicon getting plugged anywhere. IF such a technology were to come into being, it would need to be made from something that is damned near to being a room temperature superconductor to keep from being unreliable/a fire hazard from thermal noise.

Alternatively, it could be done in a photonic computing approach, using optical transistors and optical interconnects... that would solve the heat problem too, but would make servicing the system substantially more difficult.

Re:Cube memory? (2)

wagnerrp (1305589) | more than 3 years ago | (#34484690)

There are patents going back a decade pertaining to using microfluidic ducts as a heat transfer mechanism. Every few months now, there's another article on slashdot about one of the chip giants testing out such manufacturing techniques. Just a few links from a quick googling...

http://www.xbitlabs.com/news/coolers/display/20031008155430.html [xbitlabs.com]
http://www.electronics-cooling.com/2002/11/electroosmotic-microchannel-cooling-system-for-microprocessors/ [electronics-cooling.com]
http://www.frostytech.com/articleview.cfm?articleid=2424&page=11 [frostytech.com]
http://www.w7forums.com/nanotechnology-delivers-revolutionary-pumpless-water-cooling-t6658.html [w7forums.com]

Re:Cube memory? (1)

Khyber (864651) | more than 3 years ago | (#34489648)

For heat dissipation, just make the entire outside of the module the heat sink. It's what I do for ultra-high-power LEDs, and lemme tell you, those get WAY hotter than any RAM chip could dream of, plus they pull more power (some of these diodes are 100W apiece). Drop a fan on it for when you overclock, just like normal. No big change in anything, really.

Microfluidics got mentioned, but really that's pointless without a huge phase-change section, and that addition renders my idea of 3D RAM useless; plus fluid + heat = pressure. I don't like the idea of my RAM venting steam of some sort.

Re:Cube memory? (1)

MattskEE (925706) | more than 3 years ago | (#34496864)

The square cube law is always the elephant in the room when people start talking about 3D circuits. It is certainly a problem, but the field is still open to improvements. For example, the "through silicon via" process presumably means they etch a via entirely through a silicon wafer and plate it with a metal. These could also be used as heatsinking aids and not just ways to transfer signals through vertically stacked chips, and though some surface area is consumed it may be more than made up for by the vertical stacking. There are researchers working on micro-/nano-scale structures which have greatly enhanced thermal conductivities which can be useful for 3D integration.

I assume it is not thermal noise which needs to be overcome, it is probably thermally-induced leakage current which requires the use of higher voltage to increase the difference between a "1" and a "0", because at higher temperatures more electrons have enough energy to jump the barrier of an off-state transistor, making the 0 more of a 0.5 for example. Although both result from thermal processes, they are significantly different effects. (disclaimer: I am not a VLSI engineer).

The last photonics researcher I talked to was of the opinion that photonic transistors are still a pipe dream... but many researchers and big companies are making steady progress toward replacing off-chip interconnects with photonic modules.

Re:Cube memory? (1)

dargaud (518470) | more than 3 years ago | (#34485296)

Why is DRAM so large compared to flash memory? I mean, I have a 32GB micro-SD card in my phone and it's smaller than a fingernail, but the 8GB of DRAM in my desktop takes 4 large slots, and at the time (last year) there weren't any 4GB modules available in this category. I also had to add a fan and run them at 666MHz instead of their rated 800MHz, or I get hard failures. Doesn't sound very exciting to me.

Re:Cube memory? (1)

vlm (69642) | more than 3 years ago | (#34485512)

Why is DRAM so large compared to flash memory ?

Flash is nice, but incredibly slow, especially on writes, compared to DRAM. We are talking several orders of magnitude here, not just 10% or something. Also, if you have a 100MB/s write rate (wishful thinking), the drive burns out at 100K rewrites (wishful thinking), and it's about 10 gigs, the numbers divide out to the drive being dead in about 100 days. Different technologies have certain tradeoffs, and flash is nice and small and low power and nonvolatile, but it is slower than molasses and short lived.
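The lifetime estimate above checks out as rough arithmetic. A quick sketch using the comment's own (admittedly wishful) numbers:

```python
# Back-of-envelope flash wear-out estimate using the parent comment's
# assumed figures; these are illustrative, not datasheet values.
write_rate = 100e6     # bytes/sec sustained writes ("wishful thinking")
endurance  = 100_000   # program/erase cycles per cell ("wishful thinking")
capacity   = 10e9      # bytes (about 10 gigs)

seconds_per_full_rewrite = capacity / write_rate    # 100 seconds
lifetime_s = endurance * seconds_per_full_rewrite   # 1e7 seconds
lifetime_days = lifetime_s / 86400
print(f"drive worn out after about {lifetime_days:.0f} days")  # ~116 days
```

Real drives use wear leveling and over-provisioning, so this is a worst-case-style estimate, but it shows the order of magnitude the parent is getting at.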

And also I had to add a fan and run them at 666MHz instead of their rated 800MHz or I get hard failures. Doesn't sound very exciting to me.

An "amazing" transfer rate for solid state flash drives is in the double-digit megabytes/sec.

Your unexciting DDR3 system memory, while dialed down to 667 MHz, can "only" run at 10.6 gigabytes/sec. If it worked correctly at its rated 800 MHz, you'd be around 12.8 GB/s. You could get the same aggregate performance from a large RAID array of two hundred or so SSDs in a super NAS. Suddenly SDRAM is looking a lot smaller than the flash equivalent...
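The bandwidth figures in this comment follow from the standard DDR3 formula: bus clock × 2 transfers per clock (double data rate) × 64-bit module width. A quick check:

```python
# DDR3 peak bandwidth: bus clock * 2 (double data rate) * 64-bit bus / 8 bits-per-byte
def ddr3_bandwidth_gb_s(bus_clock_mhz):
    return bus_clock_mhz * 1e6 * 2 * 64 / 8 / 1e9

print(ddr3_bandwidth_gb_s(667))  # ~10.7 GB/s (PC3-10600, marketed as 10.6 GB/s)
print(ddr3_bandwidth_gb_s(800))  # ~12.8 GB/s (PC3-12800)

# SSDs needed to match the downclocked DIMM, assuming ~50 MB/s per drive
# (a plausible 2010-era figure; an assumption, not from the article):
drives = ddr3_bandwidth_gb_s(667) * 1e9 / 50e6
print(f"about {drives:.0f} SSDs")  # ~213 drives
```

So "two hundred or so SSDs" is the right ballpark for matching even a throttled DDR3 DIMM on raw sequential throughput (latency is a separate, even more lopsided story).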

http://en.wikipedia.org/wiki/DDR3_SDRAM

Re:Cube memory? (1)

wiredlogic (135348) | more than 3 years ago | (#34491026)

Most flash memories use a serial interface providing access to large amounts of memory through a small number of pins. The price is higher latency for memory accesses. DRAM uses a parallel bus to minimize bottlenecks at the cost of needing many more connections to a chip or module. Even RDRAM is parallel to some extent. Furthermore, about half the pins in a modern day memory module are grounds to minimize crosstalk at the high switching speeds. The I/O requirements for high speed memory all conspire to force larger module sizes than applications where space is at a premium.

Re:Cube memory? (0)

Anonymous Coward | more than 3 years ago | (#34496664)

Flash memory is most often made with floating-gate technology, which can be made much smaller than the capacitors in DRAM. Not to mention the ability to store more than one bit per gate in NAND memory.

And next... (1)

zrbyte (1666979) | more than 3 years ago | (#34484762)

I'm just waiting for the day when Intel and AMD will be competing not on the number of cores in the CPU, but on the number of circuit layers in their 3D chips.

I find this funny... (1, Interesting)

Lumpy (12016) | more than 3 years ago | (#34485272)

I have been doing "3d" RAM stacking for decades... I did it first in 1983 on a TRS-80 Color Computer. I had 2X the max supported RAM the machine could handle. I simply used a toggle to switch RAM banks; later I added logic to let the computer do that for me. Writing programs that consumed most of RAM and stored data in the other bank was fun...

What else is Samsung going to discover that hardware hackers have been doing forever and a day?

Re:I find this funny... (2)

slashdotard (835129) | more than 3 years ago | (#34488196)

They're talking about stacking the dice, not the devices. You know what dice are? They're the little chips of silicon that are then packaged to make the ICs that you typically see and use. Unless you can precisely align and drill tiny microscopic holes in the dice and electrically connect the one on top to the one on bottom, you haven't been doing what they're doing. Not even close.

The closest anyone has ever got to this is stacking small dice on a larger die and wire bonding the pads of one to the other.

Re:I find this funny... (1)

Fishbulb (32296) | more than 3 years ago | (#34490230)

Exactly. The Amiga 1000 I bought in 1988 had a hack like this done by the prior owner (in fact, it's still in my attic). Tripled the motherboard's memory (256 to 768k, IIRC), and since the Amiga would detect any memory in the system and just tack it onto the address space, no configuration headaches. Damn, those were the days. :) (FWIW, it had that piggy-back chip hack and the front-loaded mem expansion; I added a 1.5 MB daughterboard that plugged into the CPU socket, and finally added some SIMMs to my Xetec SCSI controller.)

Prior art and all that, but I'm sure Samsung got a patent.

Re:I find this funny... (2)

PitaBred (632671) | more than 3 years ago | (#34491700)

Except for the fact that this development is absolutely nothing like what you describe. But hey, who let anything like logic stand in the way of a "I used to do X back in the day" post?

3D memory "new" ? (1)

Dee Ann_1 (1731324) | more than 3 years ago | (#34485508)

Radio Shack COCO 1 anyone?

My BF did that to mine for me back around 1980 or so...

TSV is 29+ years old (1)

slashdotard (835129) | more than 3 years ago | (#34488060)

Interesting that TSV is found to be useful after all. 29 years ago an AMD employee independently conceived of TSV and AMD refused to talk to the employee about this and other concepts, nearly all of which have subsequently been developed and patented by AMD's competitors.

Stupid glasses (2)

Junior J. Junior III (192702) | more than 3 years ago | (#34489226)

As long as I don't have to wear those stupid glasses, I'm all for this 3D memory.
