
3D DRAM Spec Published

Soulskill posted 1 year,18 days | from the set-in-3D-stone dept.

Hardware 114

Lucas123 writes "The three largest memory makers announced the final specifications for three-dimensional DRAM, which is aimed at increasing performance for networking and high performance computing markets. Micron, Samsung and Hynix are leading the technology development efforts backed by the Hybrid Memory Cube Consortium (HMC). The Hybrid Memory Cube will stack multiple volatile memory dies on top of a DRAM controller. The result is a DRAM chip that has an aggregate bandwidth of 160GB/s, 15 times more throughput as standard DRAMs, while also reducing power by 70%. 'Basically, the beauty of it is that it gets rid of all the issues that were keeping DDR3 and DDR4 from going as fast as they could,' said Jim Handy, director of research firm Objective Analysis. The first versions of the Hybrid Memory Cube, due out in the second half of 2013, will deliver 2GB and 4GB of memory."
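
For scale, here is a quick check of the article's throughput claim against a baseline the article doesn't state; the DDR3-1333 single-channel peak used below is my assumption, not a figure from the summary:

```python
# Rough sanity check of the claimed 160 GB/s aggregate bandwidth
# against a single-channel DDR3-1333 baseline (an assumed comparison point).
hmc_bandwidth_gbs = 160.0
ddr3_1333_gbs = 1333e6 * 8 / 1e9   # 1333 MT/s x 8 bytes per transfer
ratio = hmc_bandwidth_gbs / ddr3_1333_gbs
print(f"DDR3-1333 peak: {ddr3_1333_gbs:.2f} GB/s, HMC/DDR3 ratio: {ratio:.1f}x")
```

That lines up with the "15 times" figure if the comparison is to a single DDR3 channel.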


And for faster performance (3)

mozumder (178398) | 1 year,18 days | (#43341379)

the CPU vendors need to start stacking them onto their die.

In 5 years your systems will be sold with fixed memory sizes, and the only way to upgrade is to upgrade CPUs.

Stacked vias could also be used for other peripheral devices as well. (GPU?)

Re:And for faster performance (1)

Anonymous Coward | 1 year,18 days | (#43341457)

If you want more RAM you just add more CPUs!!!


Re:And for faster performance (0)

Anonymous Coward | 1 year,18 days | (#43343189)

Actually, I'd love it if I could get more processing power just by expanding my RAM. Maybe something like Venray is doing with their TOMI [hothardware.com] technology.

You can still have your main multi-core CPU; it would just talk to the parallel units in RAM like it does to your GPGPU, ASICs, DMA controllers, etc.

Re:And for faster performance (1)

Pseudonym (62607) | 1 year,18 days | (#43343935)

You think modern bloatware is inefficient and slow? Just wait until every machine is a NUMA machine!

Re:And for faster performance (5, Funny)

ArcadeMan (2766669) | 1 year,18 days | (#43341551)

Mac users won't see any difference in 5 years... wink wink

Posted from my Mac mini.

Re:And for faster performance (2)

kaws (2589929) | 1 year,18 days | (#43342637)

Hmm, tell that to my upgraded MacBook. I have 16GB of RAM in mine. On the other hand, you're probably right that it will take a long time for the upgrades to show up in Apple's store.

Re:And for faster performance (1)

viperidaenz (2515578) | 1 year,18 days | (#43344291)

Like the current top-of-the-range Mac Pros with their two-year-old Xeon CPUs.

Re:And for faster performance (1)

Tastecicles (1153671) | 1 year,17 days | (#43348673)

Yesterday's server chip: today's desktop chip.

Prime example: the AMD Athlon II 630. A couple of years ago it was the dog's bollocks in server processors and you couldn't get one for less than a grand. Now it's the dog's bollocks of quad-core desktop processors (nothing has changed except the name and the packaging) and my son bought one a month ago for change out of £100.

The Core series processors you find in desktops and laptops these days all started life as identically-specced Xeon server processors.

Re:And for faster performance (3, Funny)

Issarlk (1429361) | 1 year,17 days | (#43345347)

Since those are 3D chips, does that mean Apple's price for that RAM will be multiplied by 8 instead of 2?

Re:And for faster performance (2)

TheRaven64 (641858) | 1 year,18 days | (#43341637)

Most CPU vendors do. This has been the standard way of shipping RAM for mobile devices for a long time (search package-on-package). It means that you don't need any motherboard traces for the RAM interface (which reduces cost), increases the number of possible physical connections (increasing bandwidth) and reduces the physical size. The down side is that it also means that the CPU (and GPU and DSP and whatever else is on the SoC) and the RAM have to share heat dissipation. If you put a DDR chip on top of a Core i7, then one or the other (or possibly both) would be too hot to function. There are quite a few interesting experimental architectures that mix execution units and RAM on the same die, because the power cost of moving data between RAM and CPU is starting to be important. It's also often cheaper (in terms of both time and power) to recompute intermediate results than fetch them from main memory for workloads such as image processing.
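
The recompute-versus-fetch tradeoff mentioned above can be sketched with a rough energy budget; both constants below are loose, era-dependent order-of-magnitude assumptions, not figures from the post:

```python
# Illustrative break-even for recomputing a value vs. re-fetching it
# from DRAM. Both constants are rough assumptions, not measurements.
dram_fetch_pj = 2000.0   # ~2 nJ to pull a 64-bit word from DRAM
flop_pj = 20.0           # ~20 pJ per double-precision operation

break_even_ops = dram_fetch_pj / flop_pj
print(f"Recomputing wins for anything under ~{break_even_ops:.0f} ops per value")
```

Under those assumptions, any intermediate result that takes fewer than about a hundred operations to rebuild from cached inputs is cheaper to recompute than to fetch.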

Re:And for faster performance (3, Interesting)

harrkev (623093) | 1 year,18 days | (#43342035)

HMC does not need to sit on top of a CPU. HMC is just a way to package a lot of memory into a smaller space and use fewer pins to talk to it. In fact, because of the smaller number of traces, you are likely to be able to put the HMC closer to the CPU than is currently possible. Also, since you are wiggling fewer wires, the I/O power will go down. Currently, one RAM channel can have two DIMMs in it, so the drivers have to be beefy enough to handle that possibility. Since HMC is based on serdes, it is a point-to-point link that can be lower power.

I am sure that as speeds ramp up, HMC will have its own heat problems, but sharing heat with the CPU is not one of them.

Re:And for faster performance (1)

TheRaven64 (641858) | 1 year,17 days | (#43345691)

You might want to read the context of the discussion before you reply. My post was in reply to someone who said:

And for faster performance the CPU vendors need to start stacking them onto their die.

Re:And for faster performance (1)

SuricouRaven (1897204) | 1 year,18 days | (#43342227)

Don't forget power. At the frequencies memory runs at, it takes considerable power to drive an inter-chip trace. The big design constraints on portable devices are size and power.

Re:And for faster performance (1)

unixisc (2429386) | 1 year,17 days | (#43345855)

This is more a practice w/ portable and wireless devices, where low consumption of real estate too is a major factor, and not just low consumption of power. The top package is typically larger than the bottom package, and all the signal pins are at the periphery. For a memory-on-CPU POP, the CPU is typically the bottom package, and its signals are all the core pins, while the memory is the top package, w/ signals at the periphery. Internally, the CPU and memory could be connected, and only the separate signals drawn out.

Re:And for faster performance (2)

ackthpt (218170) | 1 year,18 days | (#43342573)

the CPU vendors need to start stacking them onto their die.

In 5 years your systems will be sold with fixed memory sizes, and the only way to upgrade is to upgrade CPUs.

Stacked vias could also be used for other peripheral devices as well. (GPU?)

IBM tried this with the PS/2 line. It fell flat on its face.

Re:And for faster performance (3, Interesting)

forkazoo (138186) | 1 year,18 days | (#43342699)

To be fair, if somebody had tried to sell something as locked down as the iPad during the period when IBM first released the PS/2, it would also have flopped. The market has changed a lot since the 1980s. People who seriously upgrade their desktop are a rather small fraction of the total market for programmable things with CPUs.

Re:And for faster performance (1)

dj245 (732906) | 1 year,18 days | (#43343649)

the CPU vendors need to start stacking them onto their die.

In 5 years your systems will be sold with fixed memory sizes, and the only way to upgrade is to upgrade CPUs.

Stacked vias could also be used for other peripheral devices as well. (GPU?)

IBM tried this with the PS/2 line. It fell flat on its face.

This is news to me. I owned a PS/2 model 25 and model 80, and played around with a model 30. The model 80 used 72 pin SIMMs and even had a MCA expansion card for adding more SIMMs. The model 80 I bought (when their useful life was long over) was stuffed full of SIMMs. The model 25 used a strange (30 pin?) smaller SIMM, but it was upgradable. I forget what the model 30 had. Wikipedia seems to disagree with you [wikipedia.org] also.

The PS/2 line stunk (1)

justthinkit (954982) | 1 year,17 days | (#43344465)

I think the grandparent's point is that IBM tried to be all slick and new and proprietary with the PS/2 line and only suckers -- big corp, gov., banks -- bought into it.

I inherited all kinds of PS/2s...excrement. At this time they were being sold with a _12_ inch "billiard ball" monochrome IBM monitor. I eventually upgraded all of them to Zenith totally flat color monitors.

PS/2s were wildly proprietary -- wee, we get to buy all new add-in cards! And performance dogs -- Model 30/286 FTW.

A newb reading the parent's post would think otherwise as you cite wiki and all.

PS/2s and OS/2, released around the same time frame, killed IBM. End of story.

Re:And for faster performance (1)

WuphonsReach (684551) | 1 year,17 days | (#43344613)

From what I recall, IBM's problem with the PS/2 brand was:

1) They tried to shift everyone to MCA instead of the more open ISA/EISA, mostly because they were trying to stuff the genie back in the bottle and retake control of the industry.

2) The lower end of the PS/2 line was garbage, which tarnished the upper-end.

We had a few PS/2 server towers to play with. They were rather over-engineered and expensive, and the Intel / Compaq / AT&T commodity systems were faster and less expensive.

Re:And for faster performance (0)

Anonymous Coward | 1 year,17 days | (#43345153)

I hate to tell you this, but when the IBM 5150 came out, 8-bit ISA was proprietary. In addition, EISA was no more successful than MCA in spite of the fact that you could plug 8-bit and 16-bit ISA cards into EISA. With the PS/2, IBM simply failed to realize that they had no control of the (100% IBM-)PC (compatible) market.

Re:And for faster performance (2)

gagol (583737) | 1 year,18 days | (#43342577)

I would like to see 4GB on die memory with regular DRAM controller for "swap" ;-)

Re:And for faster performance (0)

Anonymous Coward | 1 year,17 days | (#43345661)

NVIDIA has already announced stacked RAM for their future Volta cards (first link I could find [techreport.com]).

Re:And for faster performance (1)

IllogicalStudent (561279) | 1 year,17 days | (#43347829)

the CPU vendors need to start stacking them onto their die.

In 5 years your systems will be sold with fixed memory sizes, and the only way to upgrade is to upgrade CPUs.

Stacked vias could also be used for other peripheral devices as well. (GPU?)

Problem with this, of course, is that Intel wants to stop having slotted motherboards. Chips will be affixed to boards. Makes RAM upgrades a costly proposition, no?

Every other iteration of ram tech is a dud (3, Funny)

sinij (911942) | 1 year,18 days | (#43341385)

Just like Star Trek movies, every other iteration of memory tech is a dud. I will just wait for holographic crystals.

Re:Every other iteration of ram tech is a dud (0)

Anonymous Coward | 1 year,18 days | (#43341713)

Just like Star Trek movies, every other iteration of memory tech is a dud. I will just wait for holographic crystals.

No, no! It is actually the (1, 2)-Ulam sequence [wikipedia.org].

nothing new here (0)

Anonymous Coward | 1 year,18 days | (#43341391)

Sounds like they've managed to re-invent (with modern fabrication techniques) the "controller+DRAM" modules used in the first-generation (R3K) Indigo by Silicon Graphics.

Re:nothing new here (5, Interesting)

Thagg (9904) | 1 year,18 days | (#43341549)

I was working at SGI at the time, late 1991. The cheapest way to buy expansion memory was to buy Indigos and throw out the rest of the computer. SGI was just feeling the first tickles of the commoditization of computer hardware, and was looking for ways to make their components unique (and keep them expensive).

Re:nothing new here (1)

viperidaenz (2515578) | 1 year,18 days | (#43344307)

I worked for a company that needed more RDRAM in a server. We bought a second hand server, took out the RAM and threw away the rest. It was cheaper.

Re:nothing new here (1)

Tastecicles (1153671) | 1 year,17 days | (#43348725)

RDRAM was never cheap. I binned a Dell because it was cheaper to build a new machine with the required spec than to add a Gig of RDRAM to that thing.

Re:nothing new here (1)

icebike (68054) | 1 year,18 days | (#43342863)

Yeah, but how long till one of the partners runs off, patents this new process, and starts suing everyone in sight? (Remember Rambus?)

Still waiting... (4, Interesting)

Shinare (2807735) | 1 year,18 days | (#43341429)

Where's my memristors?

Re:Still waiting... (0)

Anonymous Coward | 1 year,18 days | (#43342081)

Samsung and Hynix announced them for some time this year, a year or so ago. We shall see.

Re:Still waiting... (4, Funny)

fyngyrz (762201) | 1 year,18 days | (#43342095)

Your memristors are with my ultracaps, flying car, and retroviral DNA fixes. I think they're all in the basement of the fusion reactor. Tended to by household service robots.

Re:Still waiting... (1)

SuricouRaven (1897204) | 1 year,18 days | (#43342309)

Ultracaps are readily available now. I've got a bank of 2600-farad jobbies that I use to power my Mad Science setup.

Re:Still waiting... (0)

Anonymous Coward | 1 year,18 days | (#43342511)

Seconded. I've got four 3000-farad boostcaps just lying on my desk waiting for some project, and six more 2600-farad caps in my friend's subwoofer amplifier.

Ultracaps (4, Insightful)

fyngyrz (762201) | 1 year,18 days | (#43343983)

Um... yeah. No. I appreciate that what you have are considerably better than regular caps, but they're nowhere *near* the performance of what we keep being offered. Nanotube-infused designs [mit.edu] with power-to-weight ratios around those of batteries, graphene designs [ucla.edu], etc. There's a huge wealth of applications waiting for them to hit somewhere around those marks. Electric cars, actual car battery replacements, cellphone power supplies that never die, backup systems for the house with peak powers far in excess of anything we have now but with comparable storage... the ultracap "breakthroughs" arrive as regularly as any other kind (memristors, etc.), and the no-show of actual commercially available units is just as consistent. It's the flying car of electronic components, sigh. High voltage, high capacity, high vapor factor, lol.

Believe me, I've been following the whole ultracap thing for a while. I even keep an eye on EEStor, which I can assure you has been a stupendous exercise in fruitless waiting. As a ham with a full boat of offline powered goodies and the beginnings of a household able to run off backup systems, and more than a little willingness to buy an electric car, actual availability of ultracaps in what I call "the battery range" would truly light me up.

But that carrot is well and truly still out on the stick.

Re:Ultracaps (2)

jkflying (2190798) | 1 year,17 days | (#43346343)

TI has a new range of super-low-power embedded chips which use FRAM. They use it to replace flash, getting faster writes, lower power consumption and more write cycles before failure. So there's one new tech which made it to market and might become more popular over the coming years as it gets cheaper.

And even current-gen ultracapacitors have a similar or better *power*/weight ratio to a battery - I'd like to see a 30g battery which can give 30A at 600V without damage to itself. It's the *energy*/weight ratio which is the killer - that 30A spike doesn't last long enough to be useful for the types of applications we currently use batteries for.
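
The arithmetic behind that power-versus-energy point, with an assumed ultracap energy density of ~5 Wh/kg (my figure, not the poster's):

```python
# Power vs. energy for the 30 g / 30 A / 600 V example above.
mass_kg = 0.030
power_w = 30 * 600                            # 18 kW instantaneous
specific_power_kw_kg = power_w / mass_kg / 1000

# Assumed ultracap energy density of ~5 Wh/kg:
energy_j = 5 * mass_kg * 3600                 # stored energy in joules
burst_s = energy_j / power_w
print(f"{power_w} W from 30 g is {specific_power_kw_kg:.0f} kW/kg, "
      f"but the burst lasts only ~{burst_s * 1000:.0f} ms")
```

Enormous specific power, but the whole charge is gone in a few tens of milliseconds.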

Re:Still waiting... (0)

Anonymous Coward | 1 year,18 days | (#43342389)

Can I haz the superhydrophobic surface treatment spray?

Re:Still waiting... (0)

Anonymous Coward | 1 year,17 days | (#43346331)

They are not coming; Stan Williams is a fraud. More precisely, he is one of those parasites commonly found in the research community whose modus operandi is as follows:

1) read about some promising new, relatively unknown and high-risk research ideas
2) change them slightly and/or combine them in obvious ways
3) present this as "brand new work" to his or her victims (typically these would be naive, indifferent or incompetent employers and/or employees)
4) obtain a bundle of money and/or time from the victims to "conduct further research"
5) spend maybe a couple of hours per year actually working
    5a) (unlikely case) if this actually leads to genuine progress, BREAK. there's no need to continue being a parasite.
    5b) (likely case) find and commence grooming the next set of victims before continuing to 6)
6) when reports/results are due, either:
    6a) fabricate plausible-sounding excuses (e.g. blame lack of progress on an unsuspecting victim)
    6b) beg for more time and/or money
    6c) declare that, despite the lack of tangible results, the research was a success because $RANDOM says so
    6d) declare that the research turned out to be a dead-end
7) depending on the victim's response, either loop back to 1) and 4) or move onto the next set of victims found in 5b) and loop back to 1).

$RANDOM can be any one of:
    - a "collaborator" (typically another parasite)
    - an "accepts anything" academic outlet (e.g. Japanese journal, Iranian university)
    - oneself

the trick to this scam is to select and prepare victims carefully. if done well, a single individual can milk a research lab or university for literally decades before moving on, leaving them none the wiser, and in some cases even with a too-big-to-fire division that can be reused in step 6c) at the parasite's new location.

Not really the first time for this (1)

Anonymous Coward | 1 year,18 days | (#43341473)

Magnetic core memory was 3D, with something like 16K per cubic foot.

Dram (0)

Anonymous Coward | 1 year,18 days | (#43341559)

So when can people running ddr1 or ddr2 expect to get some multilayer chips that vastly increase memory bandwidth in older systems?

Re:Dram (3, Insightful)

fuzzyfuzzyfungus (1223518) | 1 year,18 days | (#43341787)

So when can people running ddr1 or ddr2 expect to get some multilayer chips that vastly increase memory bandwidth in older systems?

Given that, for PC applications at any rate, the memory controller is built into either the motherboard or the CPU, there is likely to be a bottleneck there in any case. There would have been no reason for designers of memory controllers of the era to spec them out with the expectation of more than modest improvements.

Also, this '3D memory' stuff includes a memory controller with the DRAM dice stacked on top. To what, exactly, in a DDR2-using system are you going to connect a fancy new memory controller?

If you were a real high roller with a big cluster full of multi-socket HyperTransport-based systems or something, somebody might be moved to build some very, very high performance memory modules that occupy CPU sockets; but that's a serious edge case. Most systems (even new ones) simply don't have a spare bus fast enough to hang substantially-faster-than-DDR3 RAM from.

Re:Dram (1)

harrkev (623093) | 1 year,18 days | (#43342091)

This HMC stuff is going to require new CPUs with new memory controllers on board. On the plus side, for the same bandwidth, they will use a lot fewer pins.

Of course, the down-side is the early-adopter penalty of HMC being rather expensive. I expect that if it takes off, the price will drop rapidly.

Re:Dram (0)

Anonymous Coward | 1 year,18 days | (#43342275)

Will the price drop? Is anyone allowed to join the HMC consortium or will this be another cartel that exploits their patent pool to prevent proper competition?


Re:Dram (1)

viperidaenz (2515578) | 1 year,17 days | (#43344425)

Most of those pins in the CPU are for power. While the overall system power consumption can be lowered, it's entirely moved to a single chip. They may need more pins. A 130W CPU with a core voltage of ~1V needs an average of 130A of current going through those pins. The peaks will be much, much higher. They'll need more pins to get more bandwidth in and out of the CPU+memory chip too.
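
A minimal sketch of that pin arithmetic; the per-pin current rating and the transient margin below are my assumptions:

```python
# Back-of-envelope core power delivery, per the figures above.
tdp_w = 130.0
vcore_v = 1.0
avg_current_a = tdp_w / vcore_v        # 130 A average
amps_per_pin = 0.5                     # assumed per-pin current rating
margin = 2.0                           # assumed headroom for transients
pins = avg_current_a / amps_per_pin * margin
print(f"{avg_current_a:.0f} A at {amps_per_pin} A/pin with {margin}x "
      f"margin -> ~{pins:.0f} power/ground pins")
```

Hundreds of pins for power delivery alone, before a single signal pin is counted.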

Re:Dram (2)

jandrese (485) | 1 year,18 days | (#43342225)

The overall design reminds me a lot of Rambus. It saved pins and had excellent sustained throughput, but memory latency suffered.

Re:Dram (1)

K. S. Kyosuke (729550) | 1 year,18 days | (#43342421)

Does that matter all that much? With cache lines sufficiently long, you're doing burst transfers all the time anyway, or not?

Re:Dram (1)

doublebackslash (702979) | 1 year,18 days | (#43342813)

The power of a modern processor to get work done is dominated by cache misses. I mean by a factor of a hundred or more to one, unless every bit you are computing lives in cache and nothing ever kicks your code or data's cache lines out, including another line of code or data that you need (because of the way caches work, you can't map every address to every line in the cache).

Don't take my word for it though, take Cliff Click's: http://www.infoq.com/presentations/click-crash-course-modern-hardware [infoq.com]
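
The point about misses dominating can be seen in a simple average-memory-access-time model; the cycle counts below are assumed round numbers, not measurements:

```python
# Average memory access time (AMAT) model: hit and miss costs are
# assumed round numbers (L1 hit ~4 cycles, DRAM miss ~200 cycles).
HIT_CYCLES = 4
MISS_PENALTY_CYCLES = 200

def amat(miss_rate):
    return HIT_CYCLES + miss_rate * MISS_PENALTY_CYCLES

for mr in (0.0, 0.05, 0.5):
    print(f"miss rate {mr:4.0%}: average access {amat(mr):6.1f} cycles")
```

Even a 5% miss rate more than triples the average access cost; at 50% the cache might as well not exist.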

Re:Dram (1)

K. S. Kyosuke (729550) | 1 year,17 days | (#43349437)

The power of a modern processor to get work done is dominated by cache misses. I mean by a factor of a hundred or more to one unless every bit you are computing lives in cache and nothing ever kicks your code or data's cache line out (including another line of code or data that you need.

I happen to know that. What I meant by this was that it shouldn't matter all that much that latency is much worse than the throughput, because the burst transfers effectively amortize the latency cost. You're doing random reads against the L1 cache, not against the main memory. (If you organize your data so as to make the cache miss with every read, you're screwed anyway.)
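
A sketch of how a burst amortizes the first-word latency, with assumed timings:

```python
# Burst amortization: fetching a 64-byte line pays the first-word
# latency once; the remaining beats stream quickly. Timings assumed.
FIRST_WORD_NS = 50.0     # latency to the first beat
BEAT_NS = 0.5            # each subsequent 8-byte beat of the burst
LINE_BYTES = 64
BYTES_PER_BEAT = 8

beats = LINE_BYTES // BYTES_PER_BEAT
line_time_ns = FIRST_WORD_NS + (beats - 1) * BEAT_NS
print(f"one {LINE_BYTES}-byte line: {line_time_ns} ns "
      f"({line_time_ns / LINE_BYTES:.2f} ns/byte)")
print(f"{beats} scattered 8-byte reads: {beats * FIRST_WORD_NS:.0f} ns")
```

One burst delivers the line in ~53.5 ns; eight scattered reads of the same data would each pay the full latency, roughly 400 ns in total.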

Re:Dram (0)

Anonymous Coward | 1 year,18 days | (#43343093)

Does that matter all that much? With cache lines sufficiently long, you're doing burst transfers all the time anyway, or not?

Yes, it matters. In the same way that disk access time matters. Most of the time you work against memory but when the swap starts everything sucks.
If you have very fast disk access then working against the swap is less painful.
This is pretty much the same thing. Get fast memory access and you won't be sour about life for those edge cases when the cache isn't enough.

Re:Dram (2)

QQBoss (2527196) | 1 year,18 days | (#43344379)

Back in 1997, it was determined that ~90% of the benchmarks and customer applications (provided to us for testing purposes, the NDAs were amazing) used on PowerPC were completely dominated by cache misses. That means that if we knew how many times the processor touched a bus (data easily obtained in real time), we could be accurate to within 5% of what the performance would be using a spreadsheet calculation (Thanks, Dr. Jenevein) vs running the apps on a cycle accurate system simulation which could take weeks to develop a meaningful profile. Every time the caches got bigger, the code to solve customer problems would get proportionally bigger. That hadn't changed in 2007 and isn't anticipated to change by 2017. There are edge cases, but until people are satisfied with continuing to play Lode Runner instead of Crysis N, it won't matter for the mass market.

That doesn't mean that CPUs don't need to get bigger/faster, but it does mean that there is a meaningful limit on performance relative to the cache size, the calculation of which is probably left to an exercise for the student in H&P's Computer Architecture.

Re:Dram (0)

Anonymous Coward | 1 year,17 days | (#43347363)

Newer instructions allow the CPUs to load data from memory into L1 without affecting L2/L3. This means fewer L2/L3 evictions caused by streaming data that rarely gets used more than once before getting evicted.

The other cool part is that these instructions run async of the request and lower priority than other memory accesses. This means the instructions can attempt to load data a bit before the data is required, while not blocking the pipeline or causing extra memory congestion. The CPU only eats the cost of an instruction decode, processes the load, and discards that instruction. If the data returns in time, then the CPU will access the data in L1, if it does not, then it will just be a cache miss and the normal path will be taken. If the normal path is taken, then the data will be copied into L2/L3 as a normal memory fetch.
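
A toy simulation of the effect described above; the cache size and the bypass policy are simplified assumptions, not any real CPU's behavior:

```python
# Toy model of non-temporal loads: streaming data that bypasses a
# small shared cache leaves the hot working set intact. The cache is
# a plain LRU dict; sizes and the bypass policy are assumptions.
from collections import OrderedDict

class LRUCache:
    def __init__(self, lines):
        self.lines = lines
        self.data = OrderedDict()

    def access(self, addr):
        """Return True on hit; insert with LRU eviction on miss."""
        if addr in self.data:
            self.data.move_to_end(addr)
            return True
        if len(self.data) >= self.lines:
            self.data.popitem(last=False)  # evict least-recently-used
        self.data[addr] = True
        return False

def hot_hits_after_stream(bypass_stream):
    cache = LRUCache(lines=8)
    hot = list(range(8))               # working set that exactly fits
    for a in hot:
        cache.access(a)
    for a in range(100, 200):          # used-once streaming data
        if not bypass_stream:
            cache.access(a)            # a normal load pollutes the cache
    return sum(cache.access(a) for a in hot)

print("hot-set hits, normal loads:      ", hot_hits_after_stream(False))
print("hot-set hits, non-temporal loads:", hot_hits_after_stream(True))
```

With normal loads the stream evicts the whole hot set (zero hits on re-access); with the bypass, all eight hot lines survive.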

Re:Dram (1)

QQBoss (2527196) | 1 year,17 days | (#43348885)

You say newer, I was teaching people to use dcbt/icbt in PowerPC (and similar instructions in other architectures) to do that in the 90's (granted, they affected the L2 if one existed, no one had implemented an L3 on-die at that point). I love the instructions, and used the heck out of them when I hand optimized assembly code- not a career choice I would recommend at this point in time, btw. Compilers exist that can make use of them, fortunately, and they do help maintain the performance curve, but they don't break it out to a new level.

Re:Dram (1)

QQBoss (2527196) | 1 year,17 days | (#43349055)

Rude of me to reply to myself, but I should have added that when the vector units were added to PowerPC in the mid-late '90s, dst (data stream) instructions had the ability to indicate whether the fetches were transient or not and affect only the L1 if they were. gcc has supported the ability to do this since not long after the MPC7400 was released, IIRC.

Re:Dram (1)

cerberusti (239266) | 1 year,17 days | (#43344627)

It matters a great deal, and making sure burst transfers are effective is not always possible.

I do high performance calculations for a living. Knowing in advance what you will need in the future is a somewhat hard problem (and the basis of most modern optimization.)

The difference between main memory and cache is vast, if you can predict what you need far enough in advance to load it into cache that helps quite a bit, but realize that normally at best you are loading 4x what you really will need (which is the nature of trying to predict it so far ahead of time you are not able to calculate what you will really need.)

If you want to contest that, how much memory do you have in cache compared to your data set of a few terabytes? Multiple cores are usually a loss in performance if you even try; most real-world problems are not possible to run in parallel once you hit the easy optimizations (which mask latency for the most part at the expense of a large amount of cache memory).

Most of the harder problems I have run into could scale across multiple cores (or CPUs) if it was designed that way, but the run time would always be worse than a solution which assumed that it will always run on one core (introducing synchronization points kills it.)

Latency is essentially everything in most applications which are optimized (most are not, it costs too much.) The recent trend of simply including more CPUs is essentially an acknowledgement that computers have almost hit their limit in terms of the number of sequential calculations they can run over time.

If you are assuming that your application will become faster as time goes on you already lost. In most cases this cannot happen unless the original implementation was highly suboptimal (such as... you used Java or C# instead of C, or your C code is terrible.)


Anonymous Coward | 1 year,18 days | (#43341563)

By Jim Handy !!

It's "more... THAN", stupid, stupid Americans... (0, Informative)

Anonymous Coward | 1 year,18 days | (#43341565)

For Christ's sake - it's even in the summaries now.

"15 times more throughput AS standard DRAMs"

It's "15 times more throughput THAN standard DRAMs", you illiterate cretins...

What the hell happened to the American education system in the last ten years or so? It seems like half of you ignoramuses don't know what any of your prepositions mean. Just put in 'to', 'on', 'then', 'that', 'than', etc.etc. at random, that'll do. Near enough.

Re:It's "more... THAN", stupid, stupid Americans.. (1)

fyngyrz (762201) | 1 year,18 days | (#43342121)

What the hell happened to the American education system in the last ten years or so?

Absolutely nothing. Hence, no change in slashdot editing quality. New here, are you?

Re:It's "more... THAN", stupid, stupid Americans.. (1)

gagol (583737) | 1 year,17 days | (#43344779)

Come on, it is Anonymous Coward we are talking about! He has been around since the beginning and his UID is so low, it can't be shown ;-)

Re:It's "more... THAN", stupid, stupid Americans.. (1)

sFurbo (1361249) | 1 year,17 days | (#43345357)

I believe the UID for AC is 666, though it isn't shown on his posts.

Re:It's "more... THAN", stupid, stupid Americans.. (0)

Anonymous Coward | 1 year,18 days | (#43342187)

I'm an American, and like many others I too cringed when I read that. Are you implying that people in the Uber-glittery Eurozone never make grammatical errors?
What the hell happened to the Eurotrash education system that you would make such a ridiculous generalization?

Now if you want to complain about the literacy of the submitter, whose nationality you don't even know, and based on a single grammatical error, you may proceed- I'm sure those near you are used to hearing you rant and pound on your keyboard as they mop up the excess foam spurting from your mouth.

Re:It's "more... THAN", stupid, stupid Americans.. (1)

K. S. Kyosuke (729550) | 1 year,18 days | (#43342439)

I'm an American, and like many others I too cringed when I read that. Are you implying that people in the Uber-glittery Eurozone never make grammatical errors?

It could simply mean that L1 and L2 speakers tend to make different classes of errors.

Re:It's "more... THAN", stupid, stupid Americans.. (0)

Anonymous Coward | 1 year,18 days | (#43342839)

Shut up. No one wants to hear your whining about something as insignificant as that.

Re:It's "more... THAN", stupid, stupid Americans.. (1)

Anonymous Coward | 1 year,18 days | (#43344007)

> ... about something as insignificant than that.

There. Broke that for you.

And 12 nanoseconds later (0)

Anonymous Coward | 1 year,18 days | (#43341617)

3D software will bring this hardware to its knees as leet users everywhere complain how "slow" the hardware is!

Latency? (4, Insightful)

gman003 (1693318) | 1 year,18 days | (#43341631)

Massive throughput is all well and good, very useful for many cases, but does this help with latency?

Near as I can tell, DRAM latency has maybe halved since the Y2K era. Processors keep throwing more cache at the problem, but that only helps to a certain extent. Some chips even go to extreme lengths to avoid too much idle time while waiting on RAM ("HyperThreading", the UltraSPARC T* series). Getting better latency would probably help performance more than bandwidth.
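
One way to make the latency point concrete: measured in CPU cycles rather than nanoseconds, a DRAM miss has actually gotten more expensive over that period (the latency and clock figures below are illustrative assumptions):

```python
# Miss cost in CPU cycles, then and now. Illustrative assumed figures:
# ~60 ns DRAM latency at 1 GHz in 2000, ~50 ns at 3 GHz for a
# 2013-era desktop.
y2k_cycles = 60.0 * 1.0    # latency_ns * clock_GHz
now_cycles = 50.0 * 3.0
print(f"~{y2k_cycles:.0f} cycles per miss in 2000, ~{now_cycles:.0f} now")
```

Absolute latency barely moved while clocks tripled, so each miss stalls the core for far more cycles than it used to.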

Re:Latency? (0)

Anonymous Coward | 1 year,18 days | (#43341771)

That is only a problem for the first bit requested. How often do you want 1 bit and not the rest of the cache line? And you *ARE* going to get the rest of the cache line... Most CPUs work that way these days. Working with 1 byte or even 2 has not been true since around the time of the Pentium/586.

Re:Latency? (1)

jandrese (485) | 1 year,18 days | (#43342245)

How often are your memory access patterns not neatly aligned? This is pretty frequent and can be a major bottleneck for some applications.

Re:Latency? (2)

doublebackslash (702979) | 1 year,18 days | (#43343055)

Pointer chasing is the canonical example. Trees, linked lists of every flavor, maps, many, many more.

Even if your memory accesses are aligned you will still start to stream cache misses as soon as you are operating beyond the limits of cache, or start bouncing between cores and/or threads (snooping is cheap, but it isn't free and by the time you get there another thread might have kicked out your data).

Then there is synchronization between threads. Fences aren't free (far, far from it, though some can be cheaper than others).

Some practical examples are ray tracers (objects scattered all around memory), XML parsers (relatively huge objects and more; love it or hate it, XML is everywhere), precise garbage collection (which scatters certain objects around memory), and compression.
That is just off the top of my head, but you get the idea. Not everything is contiguous. Even when it is, you can easily stream misses at a rate colossally higher than they can be served.
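For anyone who wants to see the shape of the problem, here is a toy illustration (no real timing, just the dependency structure; the sizes and the list encoding are made up for the example):

```python
import random

# Build a random cyclic permutation and walk it. Each load's address
# depends on the previous load's value, so the hardware cannot overlap
# the misses -- this is what makes pointer-heavy structures
# latency-bound rather than bandwidth-bound.
N = 1 << 12
order = list(range(N))
random.shuffle(order)

next_idx = [0] * N
for a, b in zip(order, order[1:] + order[:1]):
    next_idx[a] = b          # node a points at node b

def chase(start):
    """Follow the chain until it loops back; one dependent load per step."""
    i, steps = next_idx[start], 1
    while i != start:
        i = next_idx[i]
        steps += 1
    return steps

print(chase(order[0]))       # visits all N nodes, fully serialized
```

Contrast with summing a plain array: there the addresses are known up front, so the prefetcher can have the next lines in flight before the core asks for them.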

Re:Latency? (4, Informative)

harrkev (623093) | 1 year,18 days | (#43341961)

I have a passing familiarity with this technology. Everything communicates through a serial link. This means that you have the extra overhead of having to serialize the requests and transmit them over the channel. Then, the HMC memory has to de-serialize them before it can act on the request. Once the HMC has the data, it has to go back through the serializer and de-serializer again. I would be surprised if the latency was lower.

On the other hand, the interface between the controller and the RAM itself is tightly controlled by the vendor, since the controller is TOUCHING the RAM chips instead of sitting a couple of inches away like it is now, so you should be able to tighten the timings up. All communication between the RAM and the CPU will be through serial links, so the CPU needs a lot fewer pins for the same bandwidth. A dozen pins or so will do what 100 pins used to do before. This means that you can have either smaller/cheaper CPU packages, or more bandwidth for the same number of pins, or some trade-off in between.
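Rough back-of-envelope arithmetic on the pin savings (the per-lane and per-pin rates below are my assumptions for illustration, not figures from the HMC spec):

```python
# One fast serial lane vs. one conventional DDR3 data pin.
hmc_lane_gbps = 15.0   # assumed short-reach SerDes rate per lane
ddr3_pin_gbps = 1.6    # DDR3-1600: 1600 MT/s on each DQ pin

# How many DDR3-1600 data pins one serial lane replaces, bandwidth-wise.
pins_per_lane = hmc_lane_gbps / ddr3_pin_gbps
print(pins_per_lane)   # roughly nine pins' worth per lane
```

At ratios like that, a handful of differential lanes really can stand in for a wide parallel bus, which is where the smaller/cheaper package claim comes from.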

I, for one, welcome our new HMC overlords, and hope they do well.

Re:Latency? (1)

smallfries (601545) | 1 year,17 days | (#43345743)

Do you know why the target bandwidth for USR (15Gb) is lower than the bandwidth for SR (28Gb)?

It seems strange that they would not take advantage of the shorter distance to increase the transfer speed.

Re:Latency? (2)

Cassini2 (956052) | 1 year,18 days | (#43342057)

This technology will not significantly affect memory latency, because DRAM latency is almost entirely driven by the row and column address selection inside the DRAM. The additional controller chip will likely increase average latency. However, this effect will be lessened because the higher-bandwidth memory controllers will fill the processor's cache more quickly. Also, the new DRAM chips will likely be fabricated on a denser manufacturing process, with many parallel columns, which will result in a minor improvement in speed.

All told, this new technology will not change the fact that modern CPUs spend about 50% of their clock cycles waiting for data.
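To put rough numbers on the row/column selection cost (an assumed DDR3-1600 part with 11-11-11 timings; real parts vary):

```python
# First-word latency from the array timings alone. These delays belong
# to the DRAM array itself, which is why a faster interconnect (serial
# links included) barely moves this number.
io_clock_mhz = 800                    # DDR3-1600 uses an 800 MHz I/O clock
ns_per_cycle = 1000.0 / io_clock_mhz  # 1.25 ns per clock

tRCD = 11   # cycles: row activate to column command (assumed)
tCL = 11    # cycles: column command to first data (assumed)

first_word_ns = (tRCD + tCL) * ns_per_cycle
print(first_word_ns)  # -> 27.5
```

Tens of nanoseconds for the first word, regardless of how fat the pipe to the CPU is; only the streaming that follows gets faster.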

Re:Latency? (0)

Anonymous Coward | 1 year,18 days | (#43342303)

The *interesting* thing is the size of the package. If they can get it down to the size of a CPU with the same throughput as current parts, then in theory they could put the memory and CPU in the same package, removing about 1 ft of wire travel. The only thing holding it back now is the size of the RAM sticks/boards. I see interesting 'SoC' designs in the near future... Or at the very least phones with radically more memory.

This tech should also be interesting in the GPU market, where a not-insignificant amount of power is lost to the RAM...

It remains to be seen whether moving the memory controller out of the CPU and back to an external bus device will deliver the goods. It may lower the cost, though, like you pointed out with the pin count going down.

Re:Latency? (3, Informative)

hamster_nz (656572) | 1 year,18 days | (#43342693)

This change of packaging allows greater memory density, and maybe higher transfer bandwidths. It will not alter the "first word" latency much, if at all.

Signal propagation over the wires isn't the problem; the way all DRAM works is.

- The DRAM arrays have "sense amplifiers", used to recover data from the memory cells. They are much like op-amps. To start the cycle, both inputs on the sense amplifier are charged to a middle level.
- The row is opened, dumping any stored charge into one side of the sense amplifier.
- The sense amplifiers then saturate the signal to recover either a high or low level.
- At this point the data is ready to be accessed and transferred to the host (for a read), or updated (for a write). This is the part where memory interconnect performance really matters (e.g. Fast Page Mode DRAM, DDR, DDR2, DDR3).
- Once the read-back and updates are completed, the row is closed, capturing the saturated voltage levels back in the cells.

And then the next memory cycle can begin again. On top of that you have to add in refresh cycles: the rows are opened and closed on a schedule to ensure that the stored charge doesn't leak away, consuming time and adding to uneven memory latency.
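The steps above can be sketched as a toy bank state machine (the state names and transitions are mine, a simplification for illustration, not taken from any datasheet):

```python
from enum import Enum, auto

class Bank(Enum):
    PRECHARGED = auto()   # sense amps equalized to the middle level
    ROW_OPEN = auto()     # row dumped into the amps and saturated
    REFRESHING = auto()   # scheduled refresh; no host access

# Which transitions the toy model allows from each state.
LEGAL = {
    Bank.PRECHARGED: {Bank.ROW_OPEN, Bank.REFRESHING},
    Bank.ROW_OPEN: {Bank.PRECHARGED},    # must close the row first
    Bank.REFRESHING: {Bank.PRECHARGED},
}

def step(state, nxt):
    if nxt not in LEGAL[state]:
        raise ValueError(f"illegal transition {state.name} -> {nxt.name}")
    return nxt

# A read: open the row, transfer data while ROW_OPEN, then precharge.
s = Bank.PRECHARGED
s = step(s, Bank.ROW_OPEN)
s = step(s, Bank.PRECHARGED)
print(s.name)  # -> PRECHARGED
```

The key point the model captures: you cannot open a new row, or refresh, until the current row has been closed, and every one of those transitions costs time.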

streaks of bubbles in the water... (1)

nicolaiplum (169077) | 1 year,18 days | (#43341697)

Submarine patent from Rambus [or someone else] surfacing in 3... 2... 1...

Re:streaks of bubbles in the water... (1)

ackthpt (218170) | 1 year,18 days | (#43342581)

Submarine patent from Rambus [or someone else] surfacing in 3... 2... 1...

Yep. Hope they got signatories all notarized and everything.

This is what I call (1)

etash (1907284) | 1 year,18 days | (#43341729)

frakking excellent news. For some time now the bottleneck has been memory bandwidth, not CPU/GPU processing power. This will help a lot with problems like raytracing/pathtracing, which are memory-bound.

thank you gods of the olympus!

p.s. for some time now I've been trying to find a .pdf file (which I had found in the past but lost somehow) with detailed explanations and calculations on the memory and FLOPS requirements of raytracing, and on how available memory bandwidth falls far short for such problems

3D is just a fad! (0)

Anonymous Coward | 1 year,18 days | (#43341753)

No one likes it, it's just a way for the industry to wring more money out of the consumer.

Re:3D is just a fad! (0)

Anonymous Coward | 1 year,17 days | (#43347697)

So true, just a fad. I'm totally waiting until they come out with 4D memory. Can't pull one over on me so easily!

5 years (1)

Anonymous Coward | 1 year,18 days | (#43342067)

It will probably be around 5 years until we can buy these things the way we buy DDR3. This industry develops so fast, yet moves so slowly.

I sent the guys an email... (0)

Anonymous Coward | 1 year,18 days | (#43342077)

Seems to me they got the specs backwards in the announcement. Shouldn't UltraSR be faster than SR?
Also asked if they benchmarked WOW yet and what kind of frame rates I can expect?

Good news everyone (0)

Anonymous Coward | 1 year,18 days | (#43342101)

It does not need glasses, only if you want to look smart.

Hybrid Memory Cube has 4 Corner Time (1, Funny)

Anonymous Coward | 1 year,18 days | (#43342235)

Hybrid Memory Cube exists in a 4-point world. Four corners are absolute and storage capacity is circumnavigated around Four compass directions North, South, East, and West. DRAM consortium spreads mistruths about Hybrid Memory Cube four point space. This cannot be refuted with conventional two dimensional DRAM.

Re:Hybrid Memory Cube has 4 Corner Time (0)

Anonymous Coward | 1 year,17 days | (#43344849)

[This is the sound of me jumping from the 56th floor]...

Memory is far more complex than you imagine. (2)

hamster_nz (656572) | 1 year,18 days | (#43342443)

If you think that modern memory is as simple as "send an address, then read or write the data", you are much mistaken.

Have a read of What every programmer should know about memory [lwn.net] for a simplified overview of what is going on. Even that is only a simplification of what really happens.

To actually build a memory controller is another step up again: RAM chips have configuration registers that need to be set, and modules have a serial flash on them that holds device parameters. With high-speed DDR memory you even have to make allowances for the different lengths of the PCB traces, and that is just the starting point; the devices still need to perform real-time calibration to accurately capture the returning bits.
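As a flavor of that device-parameter side, here's a minimal sketch of decoding two fields from a DDR3 SPD image (the byte offsets follow my reading of the JEDEC DDR3 SPD layout, where byte 2 is the DRAM device type and 0x0B means DDR3; treat them as assumptions and check the actual spec before relying on them):

```python
def decode_spd(spd: bytes) -> dict:
    """Pull two illustrative fields out of a raw SPD byte image."""
    return {
        "is_ddr3": spd[2] == 0x0B,      # DRAM device type field
        "density_code": spd[4] & 0x0F,  # per-die capacity code
    }

# A fabricated 256-byte SPD image, just enough to exercise the decoder.
fake_spd = bytes([0x92, 0x10, 0x0B, 0x02, 0x03]) + bytes(251)
print(decode_spd(fake_spd))  # -> {'is_ddr3': True, 'density_code': 3}
```

A real controller walks dozens of fields like these (timings, voltages, trace-length compensation data) before it can even issue the first read.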

Roll on Serial Port Memory Technology [wikipedia.org]!

So we can expect (Hope?) to see this in GDDR6 spec (1)

locater16 (2326718) | 1 year,18 days | (#43342453)

? I mean, money? Psssh, there's people out there that have two GTX Titans ($1,000 cards) and would have more if there was room on the motherboard. Plus the vast reduction in power usage would be really useful for mobile high end stuff. Would love to grab a Nvidia 850 or whatever next year with 4 gigs of this onboard.

Cooling (1)

Anonymous Coward | 1 year,18 days | (#43343167)

How do they cool this apparatus?

I for one.. (0)

Anonymous Coward | 1 year,17 days | (#43347303)

I for one will never let go of my Universe-bending 2-dimensional RAM.
