
DRAM Almost as Fast as SRAM

CmdrTaco posted more than 7 years ago | from the rammit-rammit-rammit dept.

IBM

An anonymous reader writes "IBM said it has been able to speed up DRAM to the point where it's nearly as fast as SRAM. The result is a type of memory known as embedded DRAM, or eDRAM, which helps boost the performance of chips with multiple processing cores and is particularly suited to moving graphics in gaming and other multimedia applications. DRAM will also continue to be used off the chip."

115 comments

Trust IBM (5, Funny)

Frequently_Asked_Ans (1063654) | more than 7 years ago | (#18012700)

to go for title of most patents filed in 2007

Yes, trust IBM. (5, Insightful)

mmell (832646) | more than 7 years ago | (#18014402)

Because everybody knows that companies should invest millions of dollars to develop technologies which should then be given away for free. That's the only workable business model, right?

No, I'm not a fan of patent trolls; but this isn't patent trolling. IBM has created a new, better way to embed cache RAM on the CPU die, at a significant cost in both manpower and materiel. It's not like they patented "a method to check customers out with one click" or something similarly banal. This is a real, new technology which took a great deal of time, energy and work to create. No "prior art", no "trivially obvious" - this is exactly the kind of technological advancement which patents should protect.

Re:Yes, trust IBM. (-1, Troll)

Anonymous Coward | more than 7 years ago | (#18015492)

You mean like those pharmaceutical patents that "create a new, better way to target and attack cancerous cells" or something to that effect?

This is EXACTLY the kind of technological advancement which patents SHOULD protect.

In other words: if you want to live, pay up, assuming they even decide to license or make the actual product being patented. Ransom, anybody?

Re:Yes, trust IBM. (1)

dal20402 (895630) | more than 7 years ago | (#18015568)

Red herring.

People are unlikely to die because their DRAM is too slow. (Gamers are not people.)

The fact that we need to think hard about how to make life-saving drugs accessible while preserving an incentive to develop them has absolutely nothing to do with the obvious benefits of patenting this kind of expensive, innovative technology.

Re:Yes, trust IBM. (1)

don_bear_wilkinson (934537) | more than 7 years ago | (#18017324)

People are unlikely to die because their DRAM is too slow. (Gamers are not people.)

LOL. No, really, out loud. At work. Thanks.

Wowtwit: "I need someone to enchant my 2H sword"
Me: "You *need* an enchant?"
Wowtwit: "ya I need that glow thing."
Me: "Look. You need air, you need water, you don't frakking NEED an enchant."
Wowtwit: "w/e, I really want that red glow - it's really cool!"
(pounds head on keyboard)
Me: "I can do the Fiery Enchant, that's the red glow, but it costs ## gold"
Wowtwit: "u r kidding that's too much"
Me: "Enchants are expensive because they cost money to make and they actually DO things - not just make things pretty. Now go away kid, ya bother me."

Re:Yes, trust IBM. (1)

bubbaD (182583) | more than 7 years ago | (#18019638)

Ya know, the Wowtwit kid was actually using hyperbole, quite correctly. He probably couldn't explain it, but he didn't need someone to bully him about his speech. Also, next time you hear someone use "literally" in a metaphorical way, remember: it's an exaggeration, hyperbole, not wrong and not a reason to think you have a superior intellect.

I think I used to be like you, and I became very, very unhappy because of my attitude. Be a little more generous and patient. It will pay off.

Re:Yes, trust IBM. (2, Informative)

Anonymous Coward | more than 7 years ago | (#18015614)

Last time I checked, no one has died from an inability to afford faster RAM

Re:Yes, trust IBM. (1)

42Penguins (861511) | more than 7 years ago | (#18016720)

Ransom would be if they created a disease only to sell the cure.
It costs money to develop drugs. LOTS of money. Money can be exchanged for goods and services.

Re:Yes, trust IBM. (1)

Hal_Porter (817932) | more than 7 years ago | (#18021264)

Actually, thanks to the wonders of corrupt capitalism, if you live in a country that's too poor to buy the patented drugs from the company that did the R&D, you probably live somewhere where IP laws aren't particularly well enforced because local companies/people will lobby aggressively against them. As the country gets richer, companies will start to worry about their own IP and lobby to get enforcement tightened.

http://www.usatoday.com/news/health/2007-01-30-thailand_x.htm [usatoday.com]

Even though the WTO was presumably supposed to stop poor countries doing this, there's still a clause in the rules that allows countries to force compulsory licensing if there's a national emergency, as in Brazil here:

http://news.bbc.co.uk/2/hi/health/4059147.stm [bbc.co.uk]

Even rich places like Taiwan have occasionally done it, based on a potential national emergency.

http://news.bbc.co.uk/2/hi/asia-pacific/4366514.stm [bbc.co.uk]

Of course, most really poor countries tend to be run by a small kleptocratic clique that can be paid off to not allow this sort of thing, even though it's in the national interest, but it's that clique which is the problem, not drug patents per se.

So it seems the optimal system is one democratic enough that outside pressure to enforce foreign patents is balanced by domestic pressure not to enforce them, but I think it also helps to have domestic drug companies who lack patents of their own.

Re:Yes, trust IBM. (0)

Anonymous Coward | more than 7 years ago | (#18018796)

No, I'm not a fan of patent trolls; but this isn't patent trolling.
That, and IBM isn't as big on throwing its patent weight around.

Re:Yes, trust IBM. (1)

blahplusplus (757119) | more than 7 years ago | (#18022006)

"Because everybody knows that companies should invest millions of dollars to develop technologies which should then be given away for free."

Maybe they wouldn't cost millions if they outsourced the labour!

What's the point? (2, Insightful)

ArcherB (796902) | more than 7 years ago | (#18012732)

With all these improvements in processor and RAM speed, when can I expect a faster HDD? A solid state drive would be nice.

All chips wait at the same speed. Why not concentrate on the bottlenecks rather than what is already one of the fastest components in any system?

Re:What's the point? (1, Funny)

Anonymous Coward | more than 7 years ago | (#18012782)

All chips wait at the same speed.

Nuh uh! I guarantee my computer can do nothing a whole lot faster than yours can.

Re:What's the point? (4, Insightful)

Waffle Iron (339739) | more than 7 years ago | (#18012962)

Why not concentrate on the bottlenecks rather than what is already one of the fastest components in any system?

Firstly, system memory is not especially fast compared to the CPU, and the recent proliferation of multiple cores is making the situation worse because more CPUs are trying to bang on the same memory.

Secondly, the most straightforward way to paper over problems with high-latency devices is to put a cache in front of them. Super-fast DRAM would be one way to enable bigger caches that reduce the impact of various system bottlenecks. Sure, we can hope to replace all hard drives with solid state devices, but since they still cost orders of magnitude more per megabyte, it will probably be quite a while before that happens. In the meantime, better caches couldn't hurt.

Re:What's the point? (3, Informative)

tomstdenis (446163) | more than 7 years ago | (#18013026)

Because your spinning magnetic platter is a cheaper storage "solution" than eDRAM, flash, whatever.

Unless you want to pay $25 per GB [again...], I'd wait until things improve.

And it isn't like they're not working on smaller/faster memory. Two years ago a 1GB flash drive was $99 [in Canada]; now they're ~$40, and you can get a 2GB flash drive for about the price of the 1GB. I imagine this year we'll see 4GB flash drives become more of the norm, and so on.

Most likely, ten years from now 80GB flash drives will be commonplace enough and not super expensive. But until then, spinning platters!

Re:What's the point? (3, Informative)

maxume (22995) | more than 7 years ago | (#18013972)

Prices too high, sizes too small:

http://www.newegg.com/Product/Product.asp?Item=N82E16820163159 [newegg.com]
http://www.newegg.com/Product/Product.asp?Item=N82E16820220156 [newegg.com]
http://www.newegg.com/Product/ProductList.asp?N=2003240522+1309421175&Submit=ENE&SubCategory=522 [newegg.com]

4GB flash for $40-$60, SD for $45, so $10-$15 per GB, right now. 1 GB cost $60 about 18 months ago (they are less than $15 now); extrapolate that rate, and that's 64GB for cheap ($60!) in 6 years, and 128+ in 8 years. That doesn't account for a slight depression in prices as the size of the chips used goes up.

I'd pay $100 extra for a laptop with a 32GB flash drive to go with the giant hard disk, just to save power. That's fairly likely in less than 4 years.
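If you want to sanity-check that arithmetic, here's a quick sketch; the $60 price point, the 4GB starting capacity, and the 18-month doubling rate are all assumptions, not data:

<ecode>
#include <stdio.h>

/* Sketch of the extrapolation above: assume the capacity you get for
 * ~$60 doubles every 18 months, starting from 4GB today. Both the
 * starting point and the doubling rate are assumptions. */
int main(void) {
    double gb = 4.0;
    for (double years = 0.0; gb <= 128.0; years += 1.5, gb *= 2.0)
        printf("~%3.0f GB for ~$60 after %4.1f years\n", gb, years);
    return 0;
}
</ecode>

That puts 64GB at about 6 years out and 128GB at 7.5, which is where the numbers in the comment come from.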

Re:What's the point? (3, Interesting)

joto (134244) | more than 7 years ago | (#18014654)

Most likely, ten years from now 80GB flash drives will be commonplace enough and not super expensive. But until then, spinning platters!

I expect to see 80GB flash drives long before 10 years. Assuming a growth rate of doubled capacity every 18 months, true enough, we'd reach about 80 GB in 10 years; but so far, flash memory has increased much faster than Moore's law. Also, I assume the amount of data our computers manipulate continues to increase with each version of Windows/HD-DVD/whatever, so we'll still need larger/slower storage media in 10 years, such as hard disks.

In fact, the whole idea of using a (set of) rotating platter(s) with magnetic coating and radially movable read/write head(s) for storage has been so successful for so long, and continues to improve at such an astonishing rate, that I doubt it will go away any time soon. In the far future, it's more difficult to predict what will happen. But even today, wheels are important, fire is our main source of (non-food) energy, primitive cutting tools are regularly used in any household, and in general, assuming things fail to change is rarely wrong (we still haven't got flying cars!)

Re:What's the point? (2, Interesting)

Bjrn (4836) | more than 7 years ago | (#18015434)

Flash drives are coming much quicker than that. See this article [theinquirer.net] in The Inquirer.

"PQI, WHICH IS showing an engineering sample of a 64GB flash-based hard disk drive at Computex says the price for the expensive, but desirable, storage devices could fall below $1000 before the end of this year. "It depends on the chip price, but maybe it can get below $1000 this year" said Bob Chiu of PQI's Disk on Module sales dept. A competitor confirmed that such a precipitous fall in price was a possibility."

Because of the low power consumption and modest speeds, flash drives will mostly be interesting for laptops, at least initially.

Re:What's the point? (3, Insightful)

another_fanboy (987962) | more than 7 years ago | (#18013040)

Why not concentrate on the bottlenecks
In comparison to the processor, is RAM not a bottleneck? An improvement in an area that has less need is still an improvement.

Re:What's the point? (0)

filesiteguy (695431) | more than 7 years ago | (#18013096)

Actually, they've had "RAM drives" for some time now; Cenatek - http://www.cenatek.com/ [cenatek.com] - is one company which makes them. I am sure there are others, but this would be the coolest.

Re:What's the point? (1)

TheRealMindChild (743925) | more than 7 years ago | (#18013386)

Hesus. $1600. What a load of horse shit. It's cheaper and probably smarter to just jam in an extra 4GB of RAM and make a ramdrive than to use a solid-state device at that cost.

Re:What's the point? (2, Interesting)

paeanblack (191171) | more than 7 years ago | (#18014248)

I am sure there are others, but this would be the coolest.

They all run at similar temperatures.

The Cenatek RocketDrive you link to is a very dated product...it's not even bootable. Here is a more practical option:
http://www.gigabyte.com.tw/Products/Storage/Products_Overview.aspx?ProductID=2180 [gigabyte.com.tw]

It's $115 at Newegg and holds up to 4 x 1G of 184 pin DDR.

4 gigs isn't much, but for certain situations, like holding a small database with heavy use, they work great. For random I/O, they are obscenely fast for the price...about twice the speed of two striped Raptors with a good controller.

Re:What's the point? (0)

filesiteguy (695431) | more than 7 years ago | (#18014376)

They all run at similar temperatures.
Yes, but since an old college buddy of mine runs the company, they're cooler. :P

I actually am not involved with servers, and do little which requires fast processing - at least at home - so I don't pay much attention to this market. I'm sure there are many options out there.

Re:What's the point? (4, Insightful)

joto (134244) | more than 7 years ago | (#18014734)

For random I/O, they are obscenely fast for the price...about twice the speed of two striped Raptors with a good controller.

Yeah, but wouldn't it be better to buy a real computer with room for more RAM, so you didn't have to use a hardware device to imitate another hardware device, so that you could use software to imitate the drivers of the other hardware device, so that you could use it as the first kind of hardware device, just with lower speed and convenience? Or in other words: wouldn't it be better to just run the database in RAM?

Re:What's the point? (1)

Jeremi (14640) | more than 7 years ago | (#18015572)

Or in other words: wouldn't it be better to just run the database in RAM?


Yes... as long as there was a way to ensure data integrity in the event of an unexpected shutdown. The one nice thing about a journalled filesystem on a persistent store is that it doesn't go away when the lights go out...

Re:What's the point? (1)

owlstead (636356) | more than 7 years ago | (#18016278)

"Or in other words: wouldn't it be better to just run the database in RAM?"

Plain system RAM does not run on a battery; this device does. If you put - as the other poster suggested - a journalled filesystem on there (e.g. ZFS), then this device would not fail even on an unexpected shutdown, and there is little or no chance that it can be corrupted by the OS or another application. Unless the OS or the application messes with the filesystem, of course. It's a bit of a shame that they don't allow more than 4 GB, or ECC memory, or hot swap, so this device doesn't seem to go all the way either.

Currently I always run a tiny 64 MB ramdrive in Windows so that logfiles don't mess up my timing. With current processors and RAM, the overhead is negligible. Just an additional tip :)

Re:What's the point? (2, Insightful)

Anonymous Coward | more than 7 years ago | (#18013132)

when can I expect a faster HDD? [...] Why not concentrate on the bottlenecks
Ah, the eternal "why not cure cancer instead?". HDDs aren't the bottleneck for MANY applications, so this DRAM news matters greatly. DRAM engineers don't have the skills to improve HDDs, so you can't just have them work on your pet peeve.

Insightful how? (0)

Anonymous Coward | more than 7 years ago | (#18013198)

So I guess everyone should just drop what they are doing and work on the "bottlenecks". Even though IBM sold off their HDD division, they should still work on it anyway? Intel should stop making CPUs faster and instead redirect the company to do R&D on HDs?

Re:What's the point? (1)

Jeff DeMaagd (2015) | more than 7 years ago | (#18013510)

Because that's a different division, or a different company? Only the solid state or long term storage people can improve that, and they are working against major limitations in mass storage. Laying down transistors is expensive, and flash memory isn't necessarily faster than hard drives on anything except maybe latency; throughput is enough lower that the latency advantage doesn't make up for it.

One problem is that many of these companies have nothing to do with solid state storage, so they can do nothing about what you complain about. I don't think IBM does flash memory, and they've left the hard drive market.

If you are truly desperate to increase speed, why not stripe a couple of Raptors? They will be, on average, faster; and despite their cost, Raptors are far cheaper than any equivalent capacity flash storage that's currently available.

On Striping Raptors (10k rpm SATA drives) (1, Informative)

Anonymous Coward | more than 7 years ago | (#18014222)

Note that striping (RAID-0) gives no benefit at all for writes, and destroys *latency* for reads. So it's only beneficial for streaming large files (say, in video editing) -- and of course it doubles your risk of data loss (as one failing drive zaps *all* your data) so it's really only useful for a work/scratch space for your large video/audio/CAD files.

Better to have a single 10K Rappy (or better a piece of 15K SCSI/SAS goodness -- where are 15K SATA drives already???) as a "system/apps/work cache" and then large 7K SATA drives for RAID-0 "scratch" pair and a RAID-1 "save" pair and even then burn all the important stuff to DVD weekly...

Of course the "more FPS!!!oneone" kiddies will ignore this advice and add blue case bottom lights too ;-)

Re:What's the point? (1)

dave1g (680091) | more than 7 years ago | (#18020014)

I don't think you can really say that hard drive throughput is faster than flash. Maybe against a single chip. But it costs nothing to put the chips in parallel and access them as a bank, and you don't need to do any fancy RAID; it's just like memory banks. Put enough of them in parallel and you will beat out disks.

Of course you can do the same for disks, but it's much more costly to have the RAID controller with the XOR engine and the typical huge cache sitting in front of it.

Also, RAIDs run rather slowly between a crash and the rebuild of one of the drives.

Re:What's the point? (2, Insightful)

Lazerf4rt (969888) | more than 7 years ago | (#18014076)

Why not concentrate on the bottlenecks rather than what is already one of the fastest components in any system?

RAM speed is one of the biggest bottlenecks in your system. It's called a cache miss. When your CPU tries to access data outside its local cache, it has to wait for that cache line to come from system RAM. Your CPU currently spends a huge fraction of its execution time doing that. If IBM can provide a significantly faster type of system RAM, they can reduce that huge fraction, which would noticeably speed up the entire system.

Cache misses are also the whole reason why hyperthreading ended up being a good idea: it minimizes the amount of time wasted during cache misses. If system RAM were always able to deliver memory without any latency, there would not have been any point to hyperthreading.
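You can see that fraction yourself with a minimal sketch like the one below; the 64 MB buffer and the 64-byte cache line are assumptions about typical hardware, not measurements from the article. Sequential access mostly hits lines already loaded (and prefetched), while striding one full line per access takes a miss nearly every time:

<ecode>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

enum { N = 64 * 1024 * 1024 / sizeof(int) };  /* 64 MB of ints, far bigger than any cache */
enum { LINE = 64 / sizeof(int) };             /* ints per assumed 64-byte cache line */

/* Time one pass over the buffer at a given stride, in ns per element touched. */
static double ns_per_access(const int *a, size_t stride) {
    volatile long sum = 0;                    /* volatile keeps the loop from being optimized away */
    size_t touched = 0;
    clock_t t0 = clock();
    for (size_t i = 0; i < (size_t)N; i += stride) { sum += a[i]; touched++; }
    return (double)(clock() - t0) * 1e9 / CLOCKS_PER_SEC / (double)touched;
}

int main(void) {
    int *a = malloc((size_t)N * sizeof(int));
    if (!a) return 1;
    for (size_t i = 0; i < (size_t)N; i++) a[i] = 1;  /* fault every page in first */
    printf("sequential:   %6.2f ns/access (mostly cache hits)\n", ns_per_access(a, 1));
    printf("line-strided: %6.2f ns/access (roughly one miss per access)\n", ns_per_access(a, LINE));
    free(a);
    return 0;
}
</ecode>

On a typical desktop the strided pass comes out several times slower per access; that gap is the cache-miss penalty showing up directly.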

Re:What's the point? (0)

Anonymous Coward | more than 7 years ago | (#18014338)

TFA doesn't make the matter clear one way or the other, but I fear this IBM achievement isn't at all about the clockspeed of discrete DRAM chips (for DDR sticks) but about easier integration of eDRAM blocks into multi-core CPU chips.

Perhaps they have made embedded DRAM reach the speeds of discrete DRAM, so there is a net advantage from the much wider memory buses you can use in an integrated/"embedded" design.

PS2's "Graphics Synthesizer" GPU had a 2560-bit bus to the eDRAM, which of course beats any 2x64-bit dual-channel RAM silly in bandwidth... Remember the Glaze3D GPU design from BitBoys back in the day? 1024-bit 9MB embedded framebuffer addition to regular video memory... could have been a wicked fast video card. (Four pixel pipes too, before Nvidia had unveiled the NV10 Geforce "monster".)

Re:What's the point? (1)

Shaltenn (1031884) | more than 7 years ago | (#18014308)

Because Joe schmoe computer user doesn't care about the bottlenecks. He goes to the store with the impression of "Hey Faster Ram = Faster Computer" even if there's another problem elsewhere.

This is how big corps make money - they keep improving the stuff the know-nothing wants and they make big bucks off minor 'improvements' that don't really help.

Re:What's the point? (1)

joto (134244) | more than 7 years ago | (#18014790)

Because Joe schmoe computer user doesn't care about the bottlenecks. He goes to the store with the impression of "Hey Faster Ram = Faster Computer" even if there's another problem elsewhere.

This is how big corps make money - they keep improving the stuff the know-nothing wants and they make big bucks off minor 'improvements' that don't really help.

Apart from the fact that...

  1. RAM speed is a major bottleneck for computer performance
  2. Even if there are other bottlenecks elsewhere, reducing one as important as RAM speed is undoubtedly going to make a huge difference
  3. Corporations don't make money by creating stuff nobody wants
...I find your ideas intriguing, and would like to subscribe to your newsletter.

Re:What's the point? (1)

Dzonatas (984964) | more than 7 years ago | (#18015964)

I wanted a solid state drive back when my floppy was just 1MB. Now, they are able to give each core its own 1MB cache.

Re:What's the point? (1)

dfghjk (711126) | more than 7 years ago | (#18016642)

Hard drives get bigger and faster all the time. Solid state drives become more and more viable as well.

Hard drives aren't the bottleneck in certain applications so it's irrelevant to those.

Finally, why not improve the system everywhere it's possible? Why blow off CPU improvements just because some apps don't benefit?

How informative (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#18012748)

That article was just a wealth of information.... not.
Glad we got editors to keep up /. standards of "News for Nerds".

SD-RAM (1, Funny)

Anonymous Coward | more than 7 years ago | (#18012908)

So SRAM and DRAM is fast? That's nothing... Wait until they combine it into SDRAM!

Obligatory. (0, Offtopic)

Pojut (1027544) | more than 7 years ago | (#18012928)

EDram.

I don't know about you, but I wouldn't want something named that in me...

The older fellas should get it (and their wives might, too)

Re:Obligatory. (1)

Kamots (321174) | more than 7 years ago | (#18013254)

"I don't know about you, but I wouldn't want something named that in me..."

That's why you put it in the computer instead.

To those wondering (4, Insightful)

kestasjk (933987) | more than 7 years ago | (#18012974)

To those wondering why it would be good to have DRAM as fast as SRAM: SRAM doesn't need to be "refreshed" constantly, and is faster, but takes up many more transistors and is therefore much less dense and more expensive for the same amount of memory.

However with DRAM it takes quite a bit of power just to keep data in memory (because of the constant "refreshes"), which isn't the case with SRAM. So this discovery wouldn't take SRAM out of production for applications which require its low power usage.

Re:To those wondering (5, Informative)

TheRaven64 (641858) | more than 7 years ago | (#18013292)

To add to this:

Cache misses are expensive. Really expensive. There are two ways of getting around this:

  1. More hardware contexts so that you can switch to another thread instantly when a cache miss happens.
  2. More (SRAM) cache.
The first one is better if you have highly parallel software, but isn't so good for single-threaded applications. The second is the more common approach. While SRAM uses six transistors per bit, DRAM uses one transistor and one capacitor. This could give something around three times the density, allowing CPU manufacturers to triple the amount of cache without increasing die size. Bigger cache means fewer cache misses, which means less time spent doing nothing.

For reference, a cache miss typically costs something around 100-200 cycles.
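To see where the "three times the density" figure above comes from, here's a toy sketch; the relative cell areas are illustrative assumptions, not IBM's process numbers:

<ecode>
#include <stdio.h>

/* Toy cell-area comparison: a 6T SRAM cell at ~6 area units versus a
 * 1T1C eDRAM cell at ~2 units (the trench capacitor is dug downward,
 * so it adds little planar area). Illustrative assumptions only. */
int main(void) {
    double sram_cell = 6.0, edram_cell = 2.0;
    double gain = sram_cell / edram_cell;
    printf("density gain: ~%.0fx -> the area of a 2 MB SRAM cache could hold ~%.0f MB of eDRAM\n",
           gain, 2.0 * gain);
    return 0;
}
</ecode>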

Great for L3 caches (4, Interesting)

flaming-opus (8186) | more than 7 years ago | (#18014340)

There are 2 areas of latency for a cache, the first is the performance of the actual data cells, and the second is the speed of doing a lookup in the cache. The larger the cache, and the higher the degree of set associativity, the longer the lookup takes. Thus you're unlikely to see this eDRAM used for L1 caches, and probably not for L2 caches either, as more cache would slow them down, even if the cells are just as fast as SRAM. The sweet spot will probably be for L3 caches, that are already slow by cache standards, but a whole lot faster than system memory. Since L3 caches are large, the cost savings for switching to eDRAM would be largest there.

As for power concerns, DRAM is higher than SRAM, but a larger L3 cache may reduce the traffic through the memory controller, and out to the DIMMs, which will probably more than make up for any increase in power density in the cache.
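Going back to the lookup point: here's roughly what a lookup in an N-way set-associative cache has to do. The geometry below (64-byte lines, 4096 sets, 8 ways) is made up for illustration; every extra way means another tag comparator feeding the hit-select logic, which is part of why big, highly associative caches have longer lookup latency:

<ecode>
#include <stdbool.h>
#include <stdint.h>

enum { LINE_BITS = 6, SET_BITS = 12, WAYS = 8, SETS = 1 << SET_BITS };

typedef struct { uint64_t tag; bool valid; } way_t;

static way_t cache[SETS][WAYS];

/* Split the address into set index and tag, then compare against every
 * way in the set. Silicon does these compares in parallel, but each
 * extra way still adds hardware and widens the selection logic. */
static bool lookup(uint64_t addr) {
    uint64_t set = (addr >> LINE_BITS) & (SETS - 1);
    uint64_t tag = addr >> (LINE_BITS + SET_BITS);
    for (int w = 0; w < WAYS; w++)
        if (cache[set][w].valid && cache[set][w].tag == tag)
            return true;   /* hit */
    return false;          /* miss: go to the next cache level or memory */
}
</ecode>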

COAST Modules! (1)

DigiShaman (671371) | more than 7 years ago | (#18015320)

Good lord! I've always wondered what happened to those COAST [wikipedia.org] (Cache On A STick) modules back in the Pentium 1 days. Brings back memories...

Re:COAST Modules! (1)

Tmack (593755) | more than 7 years ago | (#18015560)

Good lord! I've always wondered what happened to those COAST [wikipedia.org] (Cache On A STick) modules back in the Pentium 1 days. Brings back memories...

Nah, CoaSt modules were the L2 cache, cause back then the CPU only had on-chip L1. PPro was the first to introduce on-die L2. P2 took a small step back by taking L2 back off the die, but leaving it on the cpu. Sun platforms and iirc Alpha (and probably a few others) used L3, but x86 did not. AMD just recently released info on their next cpu, which includes plans to implement an L3 that all CPUs can share. Makes sense when you think about it (L1 per core, L2 per cpu, L3 for all!). As for L2, most CPUs have it on-die now.

Tm

Re:COAST Modules! (1)

Tmack (593755) | more than 7 years ago | (#18015602)

btw.... I have a few CoaSt modules and Pentium CPUs laying around in anti-statics if anyone is interested ;)

Tm

Re:COAST Modules! (1)

TheRaven64 (641858) | more than 7 years ago | (#18016132)

PPro was the first to introduce on-die L2.
Cache on the PPro was not on die, it was on a separate die in the same package. This was a really bad idea, because you couldn't test the cache or the core until you had put them both in the package. The P2 put them in separate packages on the same daughter board, allowing them to throw away cores and cache chips if either didn't work.

If you get hold of a Pentium Pro, you can actually see both dies in the package.

Re:COAST Modules! (1)

DigiShaman (671371) | more than 7 years ago | (#18019354)

Cache on the PPro was not on die, it was on a separate die in the same package. This was a really bad idea, because you couldn't test the cache or the core until you had put them both in the package.


Not true. After a wafer has been completed and the dies cut, you can start to test each one immediately for a "good" or "bad" status. It's only after they've been packaged at the end that they are re-tested and have their clock speed certified.

If I recall correctly, Intel was running into trouble with yields. They calculated that if you combined the total transistor count (CPU + cache) under one die, yields with current (at the time) technology would have been so low that you would have priced the processors outside the potential market. The solution was to create two wafers of chips. One wafer was nothing but cache, the other the CPU. After the dies are cut from the wafer, they can each be tested and sorted into bins. It's at this point you can match the good ones together to form a single processor and retest them.

Re:Great for L3 caches (1)

willy_me (212994) | more than 7 years ago | (#18017932)

I agree with what you have written but just wanted to add a point.

The power consumption of SRAM is actually increasing to the point where it doesn't offer any real benefits over DRAM. The problem arises from smaller transistors with greater leakage current. Older SRAM could sit there and draw almost no power - but no longer. Because SRAM requires more transistors than DRAM, the leakage current essentially offsets the power used during the refresh cycle on DRAM.

Now, I'm not claiming that DRAM currently uses less power than SRAM. I'm just saying that with modern manufacturing technologies, the old assumption that SRAM uses less power than DRAM is no longer valid. I imagine IBM saw this coming a long time ago, which is partly why they invested in finding ways to put DRAM into CPUs.

Willy

Re:Great for L3 caches (1)

julesh (229690) | more than 7 years ago | (#18021668)

The power consumption of SRAM is actually increasing to the point where it doesn't offer any real benefits over DRAM. The problem arises from smaller transistors with greater leakage current.

Note that both IBM and Intel have recently announced new processes that provide reduced leakage currents.

No need to refresh? (1)

XNormal (8617) | more than 7 years ago | (#18014032)

Since this is used for cache memory, it may be possible to eliminate the refresh cycles. A cache row can always be re-fetched from main memory. All you need is some reliable method to tell if it has expired. Any cache row which hasn't been accessed for long enough to expire is, pretty much by definition, not very critical to performance anyway.
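The bookkeeping could look something like this sketch. All names and the retention constant are hypothetical, and it assumes a write-through cache, so an expired line is never the only copy of the data:

<ecode>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical refresh-free eDRAM cache line: instead of refreshing,
 * stamp each line when it is filled and treat it as invalid once the
 * worst-case cell retention window may have passed. */

#define RETENTION_CYCLES 100000u   /* made-up worst-case retention time */

typedef struct {
    uint64_t tag;
    uint64_t filled_at;   /* cycle count when the line was filled */
    bool     valid;
} cache_line_t;

static bool line_usable(const cache_line_t *line, uint64_t now) {
    return line->valid && (now - line->filled_at) < RETENTION_CYCLES;
}

/* A hit requires a tag match on a line that cannot have decayed yet;
 * anything else is treated as a miss and re-fetched from main memory. */
static bool lookup(cache_line_t *line, uint64_t tag, uint64_t now) {
    if (line_usable(line, now) && line->tag == tag)
        return true;
    line->valid = false;   /* expired or wrong tag: force a refill */
    return false;
}
</ecode>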

Don't forget about soft error rates (0)

Anonymous Coward | more than 7 years ago | (#18014726)

Typical soft errors (e.g. bit flips due to cosmic neutrons, which are a particular problem on aircraft and/or satellites) occur in any memory device. But because of all this 'refreshing' that DRAM endures, its soft error rates have been declining while those for SRAM have been increasing. So perhaps SRAM isn't the holy grail of all memory applications after all.

xbox360 (1)

ttnuagmada (1064148) | more than 7 years ago | (#18012986)

Doesn't the Xbox 360 already use eDRAM?

Re:xbox360 (1, Informative)

Anonymous Coward | more than 7 years ago | (#18013092)

eDRAM is used in many game consoles, including Sony PS2 and PlayStation Portable, Nintendo Wii and GameCube, and Microsoft Xbox 360

Source: eDRAM [wikipedia.org]

Most of them being IBM processors, and one MIPS. The news is not the development of eDRAM, but that IBM seems to be eager to replace SRAM with it in their processors.

almost as fast? (0)

Anonymous Coward | more than 7 years ago | (#18013072)

I'm looking forward to Z-RAM more, actually. Gotta love weird side effects of AMD/IBM's SOI process that allow one bit with just a transistor... no capacitors. :D

But on topic... almost as fast in what way, bandwidth or latency? I see the former being really easy. The latter... not so much. SRAM still beats the fastest DRAM in latency by an order of magnitude, easily.

Re:almost as fast? (0)

Anonymous Coward | more than 7 years ago | (#18013138)

Memory is plenty fast already - what about the bus they connect on? That seems to be the bottleneck.

Re:almost as fast? (2, Insightful)

warrior (15708) | more than 7 years ago | (#18020472)

If you've plenty of memory on-die the bus becomes irrelevant ;) That's how Intel is keeping up with AMD - big cache band-aid on the slow FSB so they can compete with HT.

what about hard drives? (1)

jaimz22 (932159) | more than 7 years ago | (#18013130)

When are we going to get solid state hard drives to keep up with the rest of the computer crap these days? I wish someone would quit lollygagging and just produce them, for Christ's sake. They have them... now sell them. Stop playing with RAM and start playing with hard drives!

eDRAM is quite old (3, Interesting)

Rolman (120909) | more than 7 years ago | (#18013134)

I don't get why this is news. Embedded-DRAM has been in heavy usage for many years now.

Both the title and the summary are quite misleading, since eDRAM is on-chip, and that of course is much faster than external off-chip memory, be it SRAM, DRAM or whatever.

Some big examples? PS2, Nintendo Gamecube, Wii, Xbox 360. All these consoles use eDRAM for their GPU's on-chip framebuffers to enhance their performance, and that goes back to at least the year 2000 when the PS2 came out.

Some will be quick to say "no, the Nintendo consoles use 1T-SRAM, not DRAM". Yeah, right, but even 1T-SRAM (despite its name) is a form of embedded-DRAM.

Re:eDRAM is quite old (4, Informative)

stevesliva (648202) | more than 7 years ago | (#18013498)

Some big examples? PS2, Nintendo Gamecube, Wii, Xbox 360. All these consoles use eDRAM for their GPU's on-chip framebuffers to enhance their performance, and that goes back to at least the year 2000 when the PS2 came out.

Some will be quick to say "no, the Nintendo consoles use 1T-SRAM, not DRAM". Yeah, right, but even 1T-SRAM (despite its name) is a form of embedded-DRAM.
First, it is news because IBM is announcing that the performance is on par with SRAM, and because they have integrated their deep-trench eDRAM process with the SOI process used for their Power CPUs. The result? 3x the cache on the die. IBM has offered embedded DRAM with bulk technologies for a few generations, but this is the first real SOI announcement.

Second, the consoles that have issued PR about using "embedded DRAM" with their GPUs don't actually embed DRAM on the GPU die. The "embedded DRAM" is a process offered by NEC that is separate from the Sony and TSMC processes used to fab the GPUs that supposedly have "embedded DRAM." I am pretty sure that all of the consoles you mention include a separate custom DRAM chip in the same package as the GPU. I am certain this is the case for the XBox 360 [arstechnica.com] . I am unsure about Sony. That DRAM process substantially modifies the back end wiring to make room for a MIM cap between the FETs and the first level of metal.

Re:eDRAM is quite old (0)

Anonymous Coward | more than 7 years ago | (#18013694)

I am pretty sure that all of the consoles you mention include a separate custom DRAM chip in the same package as the GPU.

I am unsure about Sony.
Make up your mind :)

PS2's GS chip has 4MB of DRAM embedded on the same die as the GPU itself. And they had working prototypes with as much as 32MB.

Re:eDRAM is quite old (1)

stevesliva (648202) | more than 7 years ago | (#18013742)

PS2's GS chip has 4MB of DRAM embedded on the same die as the GPU itself. And they had working prototypes with as much as 32MB.
You're right, I wasn't paying attention back then.

Mod parent up! TFA sucks balls.. (0)

Anonymous Coward | more than 7 years ago | (#18013946)

IBM said it has been able to speed up the DRAM to the point where it's nearly as fast as SRAM, and that the result is a type of memory known as embedded DRAM, or eDRAM, that helps boost the performance of chips with multiple core calculating engines and is particularly suited for enabling the movement of graphics in gaming and other multimedia applications.

This is just plain wrong. Any "DRAM embedded into logic circuitry" is "eDRAM" (a type of "embedded memory"). And yes, it's been around for over a decade. PS2 and GameCube made it commonplace :-) IBM's new speed achievements (what are they, exactly???) have jack shit to do with the "eDRAM" name. The article author is simply clueless.

Of course, these definitions aren't absolutely clear-cut. A DRAM chip in your DDR stick has lots of logic in it -- it's an electronic device, not a rectangular area of storage ;-) And it's somewhat about the ratio of storage to logic whether a chip features "embedded memory" (inside logic) or "embedded logic" (inside memory)... But speed has no place in those definitions.

Sure, it's great if they have sped up eDRAM -- AFAIK it has remained below the 1 GHz barrier so far.

While SRAM usually takes up to six transistors per cell (and an electrically different lithography process) compared to DRAM's just one, there's Mosys Inc's 1T-SRAM that (yup) sports only one transistor per SRAM cell; however, the density hasn't been quite as good as with DRAM, and it has significantly trailed behind SRAM's clockspeed. The article should at least mention this stuff -- 1T-SRAM is what GameCube used for "eDRAM".

Yup the Xbox 360 has the ATI Xenos GPU (accompanying the IBM Xenon triple-core CPU) which is actually a package with two chips: one for the GPU and northbridge, one for the smart framebuffer. The latter has a few megs of eDRAM, and besides providing humongous framebuffer bandwidth through a superwide chip-to-chip bus, is notably capable of performing the simplest graphics ops (Z comparing and culling, alpha blending, multisample blending for edge anti-aliasing) on-chip within the memory, much like Sun's ancient 3D-RAM. This eliminates a lot of ping-pong traffic between the 3D core and video memory which helps performance a lot -- mostly the GPU can use the GDDR3 shared memory only for texturing.

This really says it all about the quality of TFA:

Earlier this week, Intel Corp. said it has developed a research chip capable of performing calculations as quickly as a supercomputer while only consuming as much energy as a light bulb.

Who the heck are these analogies for? So there is exactly one (1) model of supercomputer ever made and some single universal standard wattage for all lightbulbs, eh?

Re:eDRAM is quite old (1)

ChrisMaple (607946) | more than 7 years ago | (#18015148)

There is a significant difference between DRAM used in a framebuffer and DRAM used in a cache. In a framebuffer the data is only needed for the time span of one frame, and refresh is not necessary. As long as it's truly used as a framebuffer, nobody cares if it loses a bit occasionally, it's just a blip on the screen. In a cache, errors are unacceptable and lifetime in the cache is somewhat uncontrolled. Accordingly, the data in a DRAM cache has to be refreshed.

With small devices leakage is a problem, and it's a severe problem for DRAM because it shortens the required refresh interval. If IBM has improved DRAM to make it useful in general-purpose on-chip applications, they've made a big step forward.

For those who don't know what the point is (2, Interesting)

drinkypoo (153816) | more than 7 years ago | (#18013194)

If you could stick a crapload of this on the Cell, then those SPEs could have more than 256kB memory each, and utilizing them would become dramatically easier.

I'd guess the next revision of Cell will have a shitload of eDRAM on it. And it will either have more SPEs, or a new bus that allows multiple Cells to be used. The latter would be more expensive to implement, but probably result in higher yields than substantially growing the Cell to support more coprocessors - the yields are already poor if they just turn all the SPEs on, or else why would they be disabling one?

Re:For those who don't know what the point is (1)

Funk_dat69 (215898) | more than 7 years ago | (#18013840)

You don't need a new bus to use more than one Cell. The Cell blades IBM sells have two Cells on board already and you can access all 16 SPEs. The blades can also cluster up for more resources. You just need code to manage it all.

Re:For those who don't know what the point is (1)

drinkypoo (153816) | more than 7 years ago | (#18014968)

Sounds good to me. I had heard that they had planned to make them network up, but I thought they dropped that by the end. I probably only thought that because no one is using more than one cell in a single box yet. Or at least, I didn't know anyone was :)

Here's a better explanation (4, Informative)

Wesley Felter (138342) | more than 7 years ago | (#18013264)

EE Times article. [eetimes.com] Today SRAM is used for processor caches, but new multicore chips need massive (i.e. expensive) cache. Because eDRAM is much denser than SRAM, it allows chip designers to fit much more cache in the same size chip, increasing overall performance. IBM and AMD use silicon-on-insulator (SOI) technology, while the rest of the industry uses bulk CMOS; eDRAM for bulk has been available for a while (it's used in Xbox 360 and BlueGene/L for example), but now IBM has developed SOI eDRAM that can be used in IBM's future processors (and maybe AMD's).

Re:Here's a better explanation (1)

Intron (870560) | more than 7 years ago | (#18013906)

From the EE TImes article:

"The new design uses a three-transistor micro-sense amp that lets voltage current directly drive transistor gates."

voltage current?

eDRAM... *snicker* (-1, Troll)

Anonymous Coward | more than 7 years ago | (#18014108)

*snicker* ED.... *snicker*

Can we rephrase that in English? (0)

pla (258480) | more than 7 years ago | (#18014602)

DRAM will also continue to be used off the chip.

Oh, good! They had me worried that I could no longer keep my DRAM in the water cooler. And how could I get through my day without a bit of chipless DRAM floating in midair above my keyboard?

Goodness. What next? They'll try to take away my off-chip flatware?

Not what I thought it would be (1)

jiggerdot (976328) | more than 7 years ago | (#18014848)

Well, it IS pretty late, but I read the headline as "DRM Faster than SPAM". Quite a disappointment, really...

Another excuse to not drop the price of RAM (1)

symbolset (646467) | more than 7 years ago | (#18014898)

Seriously, what's with the price of RAM?

Sure, we'll get 3THz RAM, and it will be $150 for a 1GB stick. That's not what I want, nor what I expect. What I expect is to get a 2GB stick for what a 1GB stick cost 12-18 months ago. By now 4GB sticks should be $75.

In the last couple of years prices have hardly dropped at all, and the new stuff is no bigger than before. That doesn't happen in IT unless someone isn't playing fair. So who is it, and how do we get them to stop?

Re:Another excuse to not drop the price of RAM (1)

Wesley Felter (138342) | more than 7 years ago | (#18016632)

Demand for RAM slacked off in recent years because of the delays in releasing Vista (seriously). Now that Vista is out, we can expect mainstream PCs will want 2GB of RAM, which should drop the price of 1GB DIMMs.

Re:Another excuse to not drop the price of RAM (0)

Anonymous Coward | more than 7 years ago | (#18017412)

Once again I see the rather bizarre notion of higher demand reducing the price - the opposite of what economists tell us should happen.

Re:Another excuse to not drop the price of RAM (1)

Wesley Felter (138342) | more than 7 years ago | (#18017970)

I don't know much economics, but my understanding is that DRAM vendors use demand to decide how much of each type of chip to produce.

Re:Another excuse to not drop the price of RAM (1)

julesh (229690) | more than 7 years ago | (#18021724)

Once again I see the rather bizarre notion of higher demand reducing the price - the opposite of what economists tell us should happen.

Only in cases where supply is constrained. If supply is not constrained, higher demand enables higher economies of scale. This is EC101 stuff.

Other upcoming types of RAM: Z-RAM and TTRAM (3, Informative)

thue (121682) | more than 7 years ago | (#18014916)

I am in no way an expert, but I read about other upcoming types of RAM which also sound interesting:

Z-RAM. One cell is a single transistor. Faster than SRAM, which uses 6 transistors per cell. http://en.wikipedia.org/wiki/ZRAM [wikipedia.org]

TTRAM. One cell contains 2 transistors. As fast as SRAM, according to Wikipedia. http://en.wikipedia.org/wiki/TTRAM [wikipedia.org]

Re:Other upcoming types of RAM: Z-RAM and TTRAM (1)

owlstead (636356) | more than 7 years ago | (#18016604)

I think the Wikipedia article is a bit short on details for Z-RAM, so I'll provide an additional link [innovativesilicon.com]. As you can see, this leads to the company behind Z-RAM, so they may look a bit too much on the positive side. It sounds very promising, I must say. It's only for SOI though, so Intel is left out a bit here, but it's very interesting for IBM or AMD. I wouldn't be surprised to see eDRAM or Z-RAM in chips pretty soon, since they don't seem to require too much rework.

Still not close to ARAM (1)

xee (128376) | more than 7 years ago | (#18020832)

ARAM is of course still fastest. However it's good to see DRAM get some distance from the horribly slow FRAM and GRAM.