
New Silicon-Based Memory 5X Denser Than NAND Flash

timothy posted more than 3 years ago | from the promising-stuff dept.

Technology

Lucas123 writes "Researchers at Rice University said today they have been able to create a new non-volatile memory using nanocrystal wires as small as 5 nanometers wide that can make chips five times denser than the 27-nanometer NAND flash memory being manufactured today. And the memory is cheap because it uses silicon, not the more expensive graphite used in previous iterations of the nanowire technology. The nanowires also allow stacking of layers to create even denser 3-D memory. 'The fact that they can do this in 3D makes it highly scalable. We've got memory that's made out of dirt-cheap material and it works,' a university spokesman said."


162 comments


It has been obvious for years. (3, Interesting)

symbolset (646467) | more than 3 years ago | (#33432328)

When we run out of possibilities in shrinking the process we go vertical and take advantage of the third dimension. Moore's law is safe for a good long time.

This tech is still several years out from production but other 3D silicon options are in testing, and some are in production.

When the Z density matches the X and Y density in fifteen years or so we'll be ready for optical or quantum tech.

Re:It has been obvious for years. (0)

Anonymous Coward | more than 3 years ago | (#33432440)

When the Z density matches the X and Y density in fifteen years or so we'll be ready for optical or quantum tech.

Or start taking advantage of the fourth dimension.

Re:It has been obvious for years. (3, Funny)

oldspewey (1303305) | more than 3 years ago | (#33432504)

You mean, by changing the contents of the memory over time? Now that's just crazy talk.

Re:It has been obvious for years. (3, Funny)

LifesABeach (234436) | more than 3 years ago | (#33432566)

...5X Denser Than NAND Flash... Flash is sure taking a beating these days, first Apple, now Intel.

Re:It has been obvious for years. (-1, Offtopic)

Anonymous Coward | more than 3 years ago | (#33432632)

You're taking a vacation from normalcy. The setting: a weird motel where the bed is stained with mystery. And there's also some mystery floating in the pool. Your key card may not open the exercise room because someone smeared mystery on the lock. But it will open: the Scary Door.

Re:It has been obvious for years. (-1, Redundant)

Jane Q. Public (1010737) | more than 3 years ago | (#33432456)

Nonsense. Commercial Flash memory and SSD are already using multiple-layer chips. "3D" has been on the market for a year or more.

Not to be too critical... (1, Insightful)

symbolset (646467) | more than 3 years ago | (#33432650)

Not to be too critical, but did you actually read all of the post you responded to? I know expecting one to read the summary is a bit much, and expecting one to read the fine article is completely out of the question on modern slashdot, but at least reading the comment you're responding to still seems to be a reasonable expectation.

Re:Not to be too critical... (0, Redundant)

Jane Q. Public (1010737) | more than 3 years ago | (#33432758)

Yes I did, but I admit that in reading it fast, I missed this part: "but other 3D silicon options are in testing, and some are in production". Without that, it was reasonable to presume that "this tech" referred to the previous sentence.

As for TFA, I read the whole thing yesterday, well before it ever hit Slashdot.

Re:It has been obvious for years. (3, Informative)

OeLeWaPpErKe (412765) | more than 3 years ago | (#33433016)

2D : anything that only has connections in 2 directions. The fact that it's stacked does not change its 2D-ness if the layers don't interact in a significant way (a book would not be considered 3D, nor even 2.5D, nor would a chip structured like a book).
2.5D : anything that has connections in 3 directions, but one of the directions is severely limited in what it can connect and in which way the wires can run (e.g. you can only have wires running straight up, with no further structure).
3D : true 3D means you can etch any 3D structure at all (meaning, e.g., you can implement a transistor at a 30-degree angle from another).

The most advanced tech we have in silicon chips right now is 2.5D; no chips are fully 3D yet.

Re:It has been obvious for years. (4, Insightful)

Surt (22457) | more than 3 years ago | (#33432624)

We don't just go vertical without solving the heat dissipation problem. We already have a hard time dissipating the heat off the surface area of one layer. Now imagine trying to dissipate the heat off of the layer that is trapped between two more layers also generating the same amount of problematic heat. Then try to figure out how to dissipate the heat off a thousand layers to buy you just 10 more years of Moore's law.
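To put numbers on that, here is a toy model of heat flux through the fixed top surface as layers stack; every figure is an illustrative assumption, not a measurement:

```python
# Toy model of why stacking is hard: total power grows with the layer
# count while the heatsink-facing area stays fixed. All numbers here
# are illustrative assumptions, not measurements.

DIE_AREA_CM2 = 1.5           # assumed die footprint
POWER_PER_LAYER_W = 40.0     # assumed dissipation of one layer
MAX_FLUX_W_PER_CM2 = 100.0   # rough practical limit for air cooling

for layers in (1, 2, 4, 10, 1000):
    flux = layers * POWER_PER_LAYER_W / DIE_AREA_CM2
    verdict = "ok" if flux <= MAX_FLUX_W_PER_CM2 else "beyond air cooling"
    print(f"{layers:4d} layer(s): {flux:9.0f} W/cm^2  ({verdict})")
```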

Re:It has been obvious for years. (2, Interesting)

Anonymous Coward | more than 3 years ago | (#33432814)

Well, at least you have a theoretical possibility of avoiding that problem in SSDs. Since you are only going to access one part of the memory at a time, the rest could be unpowered. That gives a constant amount of heat to get rid of, regardless of the number of layers.

This is of course not possible for CPUs and other circuits where all parts are supposed to be active.

Re:It has been obvious for years. (1)

symbolset (646467) | more than 3 years ago | (#33432828)

The fine article holds an example of a technology that requires less power per transistor or unit of area. I recommend reading it if you can find the time. It's not the only such technology.

I was thinking millions of layers rather than thousands of course, but we'll start with two. That's the way of such things. Remember that at the same time we're going vertical the process size will still be shrinking with advancements in process technologies. We won't need any cache because the RAM will be inside the CPU - it will all be cache. Some of the technologies that cost a lot of power to get around the fact that some parts are "far" will go away because the processor will be folded into three dimensions.

Of course this deep in the thread it's entirely possible I'm talking to somebody who knows more about the physics involved than I do.

Re:It has been obvious for years. (3, Interesting)

Jeremi (14640) | more than 3 years ago | (#33432846)

We don't just go vertical without solving the heat dissipation problem

The obvious solution to that: don't generate any heat. Now, where are the room-temperature superconductors I was promised???

Re:It has been obvious for years. (2, Interesting)

takev (214836) | more than 3 years ago | (#33433090)

I think we will have to wait until we have super-semiconductors: ones that either conduct perfectly or not at all, depending on a third input (which itself has infinite resistance).

Maybe I should patent this "idea" for a transistor. I am probably too late, though.

Re:It has been obvious for years. (0)

Anonymous Coward | more than 3 years ago | (#33433412)

Now imagine trying to dissipate the heat off of the layer that is trapped between two more layers also generating the same amount of problematic heat. Then try to figure out how to dissipate the heat off a thousand layers to buy you just 10 more years of Moore's law.

Perforate the layers with thermal ducts - either metal rods or "chimneys" conducting cooling fluid (something of epically low viscosity)

Re:It has been obvious for years. (0)

Anonymous Coward | more than 3 years ago | (#33433584)

Is there any reason we can't put metal, or a cavity to push air through, between the layers? I'm thinking of something like a mine, with support columns left in place out of the blasted rock.

Re:It has been obvious for years. (0)

Anonymous Coward | more than 3 years ago | (#33434588)

What's worse, the power required per transistor doesn't drop, and since there are more of them, all else being equal, power scales roughly linearly with the number of layers. You're not really scaling the transistor in the Z dimension; you are adding transistors in the Z dimension.

There are a few tricks, though: flash requires very little power, and can possibly be made to require even less. So even a stack of 25 layers might only increase wattage by 25 watts or so.

Well that may be problematic (2, Interesting)

Sycraft-fu (314770) | more than 3 years ago | (#33432634)

One thing you could run into is heat issues. Remember that high-performance chips tend to give off a lot of heat. Memory isn't as bad, but it still warms up. Start stacking layers on top of each other and it could be a problem.

Who knows? We may be in for a slowing down of transistor count growth rate. That may not mean a slow down in performance, perhaps other materials or processes will allow for speed increases. While lightspeed is a limit, that doesn't mean parts of a CPU couldn't run very fast.

It also may just slow down. Exponential growth doesn't last forever. We may start to hit the limits of what we can do.

Have to see.

Re:Well that may be problematic (2, Interesting)

symbolset (646467) | more than 3 years ago | (#33432750)

They're all over that. As the transistors shrink, they give off less heat. New transistor technologies also use less energy per square nanometer, and there are new ones in the pipe. Not all of the parts of a CPU, SSD cell or RAM chip are working at the same time, so intelligent distribution of the loads gives more thermal savings. Then there are new technologies for conducting the heat out of the hotspots, including using artificial diamond as a substrate rather than silicon, or as an intermediary electrical isolation layer that also serves as a thermal conductor. If they can solve the carbon or silicon layer deposition issues, the thermal issues will be OK.

An interesting evolution of 3D in semiconductors will be leveraging different parts of the processor in three dimensions. This should resolve many of the speed-of-light and latency issues designers have struggled with for some years.

Re:Well that may be problematic (1)

aXis100 (690904) | more than 3 years ago | (#33432790)

If they were making the same-spec part that would be fine, but as transistors shrink they cram more into the same space, so total heat flux tends to go up. The leakage gets worse too, but that gets offset by lower voltages.

Re:Well that may be problematic (3, Interesting)

repapetilto (1219852) | more than 3 years ago | (#33432862)

This might be a dumb question, but why not have some sort of capillary-esque network with a high heat-capacity fluid being pumped through it? Maybe even just deionized water if you have a way of keeping the resistivity high enough.

Re:Well that may be problematic (0)

Anonymous Coward | more than 3 years ago | (#33432926)

I remember reading somewhere about microchannels being put directly on the chip, and through the chip itself.

Re:Well that may be problematic (1)

marcosdumay (620877) | more than 3 years ago | (#33434736)

The problem with this kind of proposal is that there is no way to actually build the channels. It is a great idea in theory, and quite obvious (so there is already a huge amount of research on it), but nobody has actually been able to build it.

Re:Well that may be problematic (2, Insightful)

vlueboy (1799360) | more than 3 years ago | (#33433184)

L1 CPU caches are shamefully stuck at the laughable, 20-year-old 640K-meme scale in rarely noticed ways. Everyone's first thought is about RAM, but remember that CPUs are less change-friendly and would benefit more from tech like this: imagine 5x the L1 size at the new density.

Our supposedly macho CPUs have only 128K of L1, with comparably absurd L2 and L3 [amd.com] sizes to make up for it.

The current excuse is that cost and die-space constraints keep size improvements mostly on the L2 and L3 side. Sadly, someone tagged the article "tenyears", and by then we'll be dealing with different research, like utilizing today's 64-bit, multi-core technology to its fullest.

There's only so much you need (3, Informative)

Sycraft-fu (314770) | more than 3 years ago | (#33433460)

Cache is not a case where more is always better. What you discover is that it is something of a logarithmic function in terms of amount of cache vs. performance. On that scale, 100% would be the speed you would achieve if all RAM were cache speed; 0% is RAM-only speed. With current designs, you get into the 95%+ range. Adding more gains you little.

Now not everything works quite the same. Servers often need more cache for ideal performance so you'll find some server chips have more. In systems with a lot of physical CPUs, more cache can be important too so you see more on some of the heavy hitting CPUs like Power and Itanium.

At any rate, you discover that the chip makers are reasonably good at the tradeoff between cache and other die uses. This is demonstrable because, with normal workloads, CPUs are not memory starved; if a CPU were continually waiting on data, it would have to work below peak capacity.

In fact you can see this well with the Core i7s. There are two different kinds, the 800s and the 900s, and they run on different boards with different memory setups. The 900s feature faster memory by a good bit. However, for most consumer workloads, you see no performance difference at equal clocks. What that means is that the cache is being kept full by the RAM, despite the slower speed, and the CPU isn't waiting. On some pro stuff you do find that the increased memory bandwidth helps: the 800s are getting bandwidth starved. More cache could also possibly fix that problem, though perhaps not as well.

Bigger caches are fine, but only if there's a performance improvement. No matter how small transistors get, space on a CPU will always be precious; if more cache isn't useful, you can always do something else with that space.
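The diminishing-returns curve described above falls out of the standard average-memory-access-time (AMAT) model. A minimal sketch; the latencies and hit rates below are illustrative assumptions, not measurements:

```python
# Why bigger caches show diminishing returns: average memory access
# time approaches the hit time as the hit rate approaches 100%.

HIT_TIME_NS = 1.0       # assumed cache access latency
MISS_PENALTY_NS = 60.0  # assumed DRAM access latency

def amat(hit_rate):
    return HIT_TIME_NS + (1.0 - hit_rate) * MISS_PENALTY_NS

for cache_kb, hit_rate in [(64, 0.90), (128, 0.95), (256, 0.97), (512, 0.98)]:
    print(f"{cache_kb:4d} KB cache, {hit_rate:.0%} hits -> {amat(hit_rate):4.1f} ns average")

# Doubling from 128 KB to 256 KB only shaves ~1.2 ns here; each further
# doubling buys less, which is why designers spend the die area elsewhere.
```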

Re:It has been obvious for years. (5, Insightful)

evilWurst (96042) | more than 3 years ago | (#33432892)

It's not as obvious as it sounds. Some things get easier if you're basically still building a 2D chip but with one extra z layer for shorter routing. It quickly gets difficult if you decide you want your 6-core chip to now be a 6-layer one-core-per-layer chip. Three or four issues come to mind.

First is heat. Volume (a cubic function) grows faster than surface area (a square function). It's hard enough as it is to manage the hotspots on a 2D chip with a heatsink and fan on its largest side. With a small number of z layers, you would at the very least need to make sure the hotspots don't stack. For a more powerful chip, you'll have more gates, and therefore more heat. You may need to dedicate large regions of the chip for some kind of heat transfer, but this comes at the price of more complicated routing around it. You may need to redesign the entire structure of motherboards and cases to accommodate heatsinks and fans on both large sides of the CPU. Unfortunately, the shortest path between any two points is going to be through the center, but the hottest spot is also going to be the center, and the place that most needs some kind of chunk of metal to dissipate that heat is going to have to go through the center. In other words, nothing is going to scale as nicely as we like.

Second is delivering power and clock pulses everywhere. This is already a problem in 2D, despite the fact that radius (a linear function) scales slower than area and volume. There's so MUCH hardware on the chip that it's actually easier to have different parts run at different clock speeds and just translate where the parts meet, even though that means we get less speed than we could in an ideal machine. IIRC some of the benefit of the multiple clocking scheme is also to reduce heat generated, too. The more gates you add, the harder it gets to deliver a steady clock to each one, and the whole point of adding layers is so that we can add gates to make more powerful chips. Again, this means nothing will scale as nicely as we like (it already isn't going as nicely as we'd like in 2D). And you need to solve this at the same time as the heat problems.

Third is an insurmountable law of physics: the speed of light in our CPU and RAM wiring will never exceed the speed of light in vacuum. Since we're already slicing every second into 1-4 billion pieces, the amazing high speed of light ends up meaning that signals only travel a single-digit number of centimeters of wire per clock cycle. Adding z layers in order to add more gates means adding more wire, which is more distance, which means losing cycles just waiting for stuff to propagate through the chip. Oh, and with the added complexity of more layers and more gates, there's a higher number of possible paths through the chip, and they're going to be different lengths, and chip designers will need to juggle it all. Again, this means things won't scale nicely. And it's not the sort of problem that you can solve with longer pipelines - that actually adds more gates and more wiring. And trying to stuff more of the system into the same package as the CPU antagonizes the heat and power issues (while reducing our choices in buying stuff and in upgrading. Also, if the GPU and main memory performance *depend* on being inside the CPU package, replacement parts plugged into sockets on the motherboard are going to have inherent insurmountable disadvantages).
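For a sense of scale on the third point, here is the arithmetic behind signals traveling "a single-digit number of centimeters of wire per clock cycle"; the ~0.5c propagation speed in wiring is a rule-of-thumb assumption:

```python
# Distance a signal can travel per clock cycle, in vacuum and in wiring.

C_M_PER_S = 299_792_458

for clock_ghz in (1.0, 3.0, 4.0):
    cycle_s = 1.0 / (clock_ghz * 1e9)
    vacuum_cm = C_M_PER_S * cycle_s * 100
    wire_cm = 0.5 * vacuum_cm  # assumed propagation at roughly half of c
    print(f"{clock_ghz} GHz: {vacuum_cm:5.1f} cm/cycle in vacuum, "
          f"~{wire_cm:4.1f} cm in wiring")
```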

Re:It has been obvious for years. (2, Informative)

Alef (605149) | more than 3 years ago | (#33433142)

First is heat. Volume (a cubic function) grows faster than surface area (a square function). It's hard enough as it is to manage the hotspots on a 2D chip with a heatsink and fan on its largest side. With a small number of z layers, you would at the very least need to make sure the hotspots don't stack.

I'm not saying your point is entirely invalid, however, heat isn't necessarily a problem if you can parallelize the computation. Rather the opposite, in fact. If you decrease clock frequency and voltage, you get a non-linear decrease of power for a linear decrease of processing power. This means two slower cores can produce the same total number of FLOPS as one fast core, while using less power (meaning less heat to dissipate). As an extreme example of where this can get you, consider the human brain -- a massively parallel 3D processing structure. The brain has an estimated processing power of 38*10^15 operations per second (according to this reference [insidehpc.com]), while consuming about 20 W of power (reference [hypertextbook.com]). That is several orders of magnitude more operations per watt than current CPUs have.
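The frequency/voltage trade above can be sketched with the usual dynamic-power relation, P proportional to C*V^2*f. The baseline figures below are assumptions for illustration only:

```python
# Two slower cores vs. one fast core at the same total throughput.

BASE_F_GHZ, BASE_V, BASE_P_W = 3.0, 1.2, 100.0  # assumed baseline core

def power(f_ghz, v):
    """Dynamic power scaled from the baseline: P ~ V^2 * f."""
    return BASE_P_W * (f_ghz / BASE_F_GHZ) * (v / BASE_V) ** 2

one_core = power(3.0, 1.2)
two_cores = 2 * power(1.5, 0.85)  # half frequency, ~70% voltage (assumed)
print(f"1 core  @ 3.0 GHz: {one_core:5.1f} W")
print(f"2 cores @ 1.5 GHz: {two_cores:5.1f} W for the same total throughput")
```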

Brain consumes more power than the average CPU (0)

Anonymous Coward | more than 3 years ago | (#33433526)

The only way the brain consumes 20W of power is if you exclusively count the power it takes to replenish the ion channels after firing. If the power supply to your head decreases below 80W, you pass out. If it drops below 40W, you will go into coma. If it ever gets to 20W ... I don't think you'll ever wake back up.

The whole point of having a head is cooling. Unless you think evolution/God (who cares) put our brain on a long stick where it can get trivially, critically damaged from every direction (except one: your head is quite capable of absorbing shocks from directly in front of you, but every other direction... you can expect to walk away with a headache from an 8000-newton impact on your forehead (assuming you land safely), while 80-newton impacts on the ear have been known to cause permanent comatose states).


Re:Brain consumes more power than the average CPU (0)

WrongSizeGlass (838941) | more than 3 years ago | (#33433568)

The whole point of having a head is cooling.

Not so ... the whole point of having a head is to have a place to wear your hat. Every time I wear my hat on the front of my zipper I seem to get in trouble. I'm just sayin' ...

Re:It has been obvious for years. (1)

Alioth (221270) | more than 3 years ago | (#33433698)

The problem is for parallel operations, we start to run into Amdahl's Law: http://en.wikipedia.org/wiki/Amdahl's_law [wikipedia.org]
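For reference, Amdahl's law in runnable form, where p is the parallelizable fraction of the work and n the number of cores:

```python
# Amdahl's law: the serial fraction caps the achievable speedup.

def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):
    row = ", ".join(f"{n} cores: {speedup(p, n):6.2f}x" for n in (2, 16, 1024))
    print(f"p = {p:.2f} -> {row}")

# Even with 1024 cores, a 10% serial fraction keeps the speedup below 10x.
```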

Re:It has been obvious for years. (1)

petermgreen (876956) | more than 3 years ago | (#33433428)

For low-power but high transistor (or transistor-substitute) count stuff like memory, I'm inclined to agree with you.

For processors, AFAICT the limiting factor is more how much power we can get rid of than how many transistors we can pack in.

Also (unless there is a radical change in how we make chips), going 3D is going to be expensive, since each layer of transistors (or transistor substitutes) will require separate deposition, masking and etching steps.

Memory crystals (4, Funny)

stox (131684) | more than 3 years ago | (#33432352)

Nope, no one saw that one coming.

Re:Memory crystals (1, Funny)

Anonymous Coward | more than 3 years ago | (#33432374)

But Captain, the Memory Crystals are takin' a beatin'! I don't think they'll last much longer! (Scottish accent)

Re:Memory crystals (0)

Anonymous Coward | more than 3 years ago | (#33432468)

Professor, is it true what they say, that crystals have developed consciousness?

Re:Memory crystals (0, Offtopic)

sessamoid (165542) | more than 3 years ago | (#33432434)

"And thees? Thees is rice?"

Re:Memory crystals (0, Offtopic)

MadKeithV (102058) | more than 3 years ago | (#33433068)

I think "Ridulan" would be a pretty cool brand name for these.

Fool Me Once... (1)

Anonymous Coward | more than 3 years ago | (#33432382)

We've got memory that's made out of dirt-cheap material and it works,' a university spokesman said.

Tell me when it's the head of manufacturing at XYZ Dirt-Cheap-Mass-Produced-Memory Corp saying that, then I'll care.

Is anybody writing this down? (0)

Anonymous Coward | more than 3 years ago | (#33432392)

How many of these 'breakthroughs' do we read about, and how many do we ever actually purchase? Does someone get paid for these kinds of articles?

Re:Is anybody writing this down? (3, Insightful)

MichaelSmith (789609) | more than 3 years ago | (#33432414)

All we ever see is a drop in the price of USB sticks in the shop, but under the surface the duck is paddling as hard as ever.

Re:Is anybody writing this down? (2, Funny)

Anonymous Coward | more than 3 years ago | (#33432482)

Shh. One mustn't identify in public The Great Duck, herald and bringer of technology.

Re:Is anybody writing this down? (0)

Anonymous Coward | more than 3 years ago | (#33432598)

Shh. One mustn't identify in public The Great Duck, herald and bringer of technology.

so that would make Microsoft The Great Duck Hunter then

Re:Is anybody writing this down? (2, Funny)

Yvan256 (722131) | more than 3 years ago | (#33432636)

Nope. Microsoft is that stupid dog that keeps laughing at you when you can't shoot the ducks.

Re:Is anybody writing this down? (2, Insightful)

HBoar (1642149) | more than 3 years ago | (#33432426)

how many do we ever actually purchase?

Some. Is that not enough to make it newsworthy?

Re:Is anybody writing this down? (3, Informative)

symbolset (646467) | more than 3 years ago | (#33432436)

All of the tech we actually purchase comes out of tech published in articles like this one. Processor process technologies, bus evolutions, memory architectures, and advancements in lithography are printed here and wind up in the products you buy. Not all of the articles are successful technologies, but all of the successful technologies have articles, and the time spent reading about the failures is the price we pay to know about such things in advance. Most of us don't mind, because there are lessons in failures too. Did you read the top of the page where it says "News for nerds."? Are you lost?

Digg is over here [digg.com] .

Re:Is anybody writing this down? (1)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#33432594)

It would take a fair bit of archival research (and a great deal of definitional quibbling) to actually answer that question; but consider this:

Every breakthrough you have ever had the opportunity to purchase started out like this.

Re:Is anybody writing this down? (0)

Surt (22457) | more than 3 years ago | (#33432638)

That's not at all true. A lot of the technologies that have succeeded have been invented in the commercial labs, where people care about productizability. I think the criticism here is aimed at the university labs, where people invent stuff using outrageous amounts of money that is difficult or impossible to commercialize.

Re:Is anybody writing this down? (1)

radtea (464814) | more than 3 years ago | (#33434530)

I think the criticism here is aimed at the university labs, where people invent stuff using outrageous amounts of money that is difficult or impossible to commercialize.

Absolutely. Commercial labs rarely do this kind of whizz-bang pre-announcement, which means that virtually any story like this is about a technology that is a) still in the lab and b) will never get out.

You have to get to the second page of the article to find out that some tiny tech company no one has ever heard of is "testing" a 1 kilo-bit chip these guys have made. That's right, a whole 128 bytes!

Unsurprisingly, the company is impressed. I was always impressed by the stuff my clients were doing too, when I was doing contract research.

Genuine "news for nerds" would talk more about the physics of the nano-wire processes, which look remarkably interesting. They are doing chemistry on the surface of the wires, from the look of it, forming and destroying atomic and molecular bonds reversibly using electric current. That's damned interesting at a fundamental level, not just a short-sighted "we can make gazillions of dollars on this one particular application!"

Re:Is anybody writing this down? (0, Flamebait)

TheRaven64 (641858) | more than 3 years ago | (#33433630)

This one actually seems pretty underwhelming. 5 times the density of flash? Flash density is doubling about every year, so they have less than three years to get it to market for it to compete with flash, and since it's currently at the university lab prototype stage, this seems wildly optimistic. If it's five times the density with the same process technology, then it would be impressive, but it sounds like it needs entirely new fabrication techniques, so it looks like the kind of thing that will just be surpassed by other (probably similar) incremental improvements from Intel and so on.

And, yes, someone does get paid for these articles. Why do you think MIT has the reputation it has? They do some good work, but they also employ a very competent publicity department. Read a peer-reviewed journal, and you'll find MIT represented about as well as a lot of other decent universities. Read a trade magazine, and you'll see ten times as many articles about MIT breakthroughs as any other university.
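The "less than three years" figure follows directly from the parent's doubling claim; a quick check, assuming density really does double yearly:

```python
# Years for yearly-doubling flash density to close a 5x gap.
import math

print(f"log2(5) = {math.log2(5):.2f} years")  # ~2.32, i.e. under three years
```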

I wouldn't be so sure (2, Informative)

TubeSteak (669689) | more than 3 years ago | (#33432428)

"Dirt cheap" isn't here to stay.

Their technology requires polycrystalline silicon & the demand is increasing much faster than the supply.
China might build more polysilicon factories, but they'll undoubtedly reserve the output for their own uses.
This isn't a new problem, since mfgs have been complaining about shortages since 2006-ish (IIRC).

Re:I wouldn't be so sure (0)

Anonymous Coward | more than 3 years ago | (#33432506)

Peak silicon. What a terrible day it will be when we start running out of it.

Seriously though, the dopants needed to make semiconductors tend to be rarer materials.

Re:I wouldn't be so sure (4, Insightful)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#33432610)

Were we to run out of silicon, it'd be time to find a new rock, because something very serious has happened to this one. That said, the fact that silicon is among the most common of atoms tells us nothing about the short- to medium-term supply of sufficiently pure and correctly structured polycrystalline silicon.

If it takes 18 months to bring a plant online, that is pretty much the limit of the market's ability to cope with surprise demand (minus any slack in existing capacity that can be wrung out). For highly predictable stuff, no big deal; the plant will be built by the time we need it. But surprises can and do happen, even for common materials (especially given the degree to which "just in time" has come to dominate the supply chain. This isn't your merchant-princes of old, sitting on warehouses piled high. Inventory that isn't flowing like shit through a goose is considered a failure, with the rare exception of "national security" justified stockpiles or the rare hedge or futures position that is actually stored in kind, rather than in electronic accounts somewhere...)

Re:I wouldn't be so sure (1)

fyngyrz (762201) | more than 3 years ago | (#33432812)

This isn't your merchant-princes of old, sitting on warehouses piled high. Inventory that isn't flowing like shit through a goose is considered a failure...

Sir! Sir! Yes, you! I have a package for you here; it's a plaque from the "most awesome remarks ever" voting board. Yes, that's right, sign here, initial there. Yes, you too sir. Have a good day, sir.

Re:I wouldn't be so sure (0)

Anonymous Coward | more than 3 years ago | (#33433910)

Well, first, their technology doesn't require polycrystalline silicon; there's no reason for it not to work with any kind of conducting electrode (they used carbon nanotubes as electrodes to observe the effect in a highly localized way during their study).
And secondly, this is a component that would be created by a standard CMOS process on a standard [monocrystalline] silicon wafer. If they need polysilicon somewhere in their component, then they'll simply deposit it; it's not like this is an uncommon process step.

What you are talking about is bulk polysilicon substrates for solar cells. That application requires much more surface area than microelectronics (which was until recently the only user of polysilicon ingots; they're further processed into monosilicon to make wafers), and that's why production is falling short. But that doesn't mean the whole microelectronics industry is going to collapse because solar panels eat the world's entire silicon processing capacity.

Great, it's denser. (3, Funny)

DWMorse (1816016) | more than 3 years ago | (#33432430)

Great, it's denser. Does this mean it now comes in a yellow-white, almost blonde color?

Re:Great, it's denser. (1)

spazekaat (991287) | more than 3 years ago | (#33432692)

Great, it's denser. Does this mean it now comes in a yellow-white, almost blonde color?

Wot, you mean Paris Hilton?????

Re:Great, it's denser. (1)

RivenAleem (1590553) | more than 3 years ago | (#33433572)

I'd say it's a safe bet you won't insult anyone on /. with that comment

Sigh... (-1, Flamebait)

barfy (256323) | more than 3 years ago | (#33432442)

I don't trust universities anymore. If even ONE of the stupid battery things ever made it to market, then I would be excited.

Show me the product. Otherwise, I want a white one. One with all the gee bees and wai fais.

Re:Sigh... (2, Insightful)

afidel (530433) | more than 3 years ago | (#33432652)

Some supercapacitors have made it to market and refinements on lithium technologies have come a long way in the last decade, tripling the maximum storage density available. The problem is our demand for portable power has outstripped that growth (my blackberry is significantly more powerful than my desktop from 10 years ago and talks 6 different wireless protocols).

25x more dense, not 5x more dense... (4, Insightful)

StandardCell (589682) | more than 3 years ago | (#33432460)

If a single dimension changes, assuming the NAND cell structure is similar, there would be a 5x reduction in size in each of the X and Y dimensions. Therefore, you would get up to 25x more density than a current NAND. This is why process technologies roughly target the smallest drawn dimension to progressively double gate density every generation (i.e. 45nm has 2x more cells than 32nm).

The big question I have for all of these technologies is whether or not it is mass-production worthy and reliable over a normal usage life.
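Spelling out that arithmetic with the summary's figures:

```python
# Linear shrink vs. areal density, per the comment above.
linear = 27 / 5      # 27 nm feature size down to 5 nm: 5.4x per axis
areal = linear ** 2  # cells per unit area scale with the square
print(f"{linear:.1f}x per axis -> {areal:.0f}x areal density")
# ~29x with the exact numbers; "25x" assumes a round 5x per-axis shrink.
```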

But you've got to cool it ... (1)

JoeBuck (7947) | more than 3 years ago | (#33432490)

... which may limit how much of the 3rd dimension you can use.

Re:25x more dense, not 5x more dense... (0, Troll)

noidentity (188756) | more than 3 years ago | (#33432690)

If a single dimension changes, assuming the NAND cell structure is similar, there would be a 5x reduction in size in each of the X and Y dimensions. Therefore, you would get up to 25x more density than a current NAND. This is why process technologies roughly target the smallest drawn dimension to progressively double gate density every generation (i.e. 45nm has 2x more cells than 32nm).

I was going to let this slide, but you blew it at the end. Let's assume you meant to say that 32nm has 2x more cells than 45nm. That's still wrong.

The 32nm one can fit the same cell in half the area. I think we agree so far. This means that in the same area, the 32nm one has twice the number of cells than the 45nm one. It does NOT have two times more; it only has one time more. It'd be like saying that a human has two eyes more than a cyclops. A human has twice the number of eyes, but only one eye more.

This is an amazingly common error I see often here on Slashdot, given that many of us are programmers. "X is two times more than Y" translates as X = 2*Y + Y. The English version of X = 2*Y is "X is two times Y".
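In code, the distinction reads:

```python
Y = 10
print("two times Y:", 2 * Y)                # 20
print("two times more than Y:", 2 * Y + Y)  # 30, read literally
```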

Re:25x more dense, not 5x more dense... (1)

Randle_Revar (229304) | more than 3 years ago | (#33434246)

Math is easy, it's English that's tricky.

Superman Technology (0)

Anonymous Coward | more than 3 years ago | (#33432470)

Weren't those crystals from the cave a similar technology? Luthor Corp. may sue them!

obligatory pessimum (0)

Anonymous Coward | more than 3 years ago | (#33432488)

I expect it to cost 10x the standard $/gig, to come in a proprietary format that is never adopted by industry, and to never be incorporated into everyday devices.


Here they come... (3, Insightful)

LostCluster (625375) | more than 3 years ago | (#33432510)

Best Buy and Amazon are both selling Intel's 40 GB flash drive for just under $100 this week... I'm building a server based around it and will likely later post on how that goes. Intel recently announced that they're upping the sizes so you're likely going to see the 40 GB model in the clearance bin soon.

It's here, it's ready... and when you don't have a TB of data to store they're a great choice, especially when you read much more often than you write.

And if you want a big SSD (5, Insightful)

symbolset (646467) | more than 3 years ago | (#33432616)

And if you do need a big SSD, Kingston has had a 512GB laptop SSD out since May with huge performance, and this month Toshiba and Samsung will both step up to compete and bring the price down. We're getting close to retiring mechanical media in the first tier. Intel's research shows SSD failure rates at 10% those of mechanical media. Google will probably have a whitepaper out on this issue in the next six months too.

This is essential because, for server consolidation and VDI, the storage bottleneck has become an impassable gate with spinning media. These SSDs are being used in shared storage devices (SANs) to deliver the IOPS required to solve this problem. Because incumbent vendors make millions from each of their racks-of-disks SANs, they're not about to migrate to inexpensive SSD, so you'll see SAN products from startups take the field here. The surest way to get your startup bought by an old-school SAN vendor for $billions is to put a custom derivative of OpenFiler on a dense rack of these SSDs and dish it up as block storage over the user's choice of FC, iSCSI or InfiniBand, as well as NFS and Samba file-based storage. To get the best bang for the buck, adapt the BackBlaze box [backblaze.com] for SFF SSD drives. Remember to architect for differences in drive bandwidths or you'll build in bottlenecks that will be hard to overcome later and drive business to your competitors with more forethought. Hint: when you're striping in a commit-on-write, log-based storage architecture, it's OK to oversubscribe individual drive bandwidths in your fanout up to a certain multiple, because the blocking issue is latency, not bandwidth. For extra credit, implement deduplication and back the SSD storage with supercapacitors and/or an immense battery-powered write-cache RAM for nearly instantaneous, reliable write commits.

I should probably file for a patent on that, but I won't. If you want to then let me suggest "aggregation of common architectures to create synergistic fusion catalysts for progress" as a working title.

That leaves the network bandwidth problem to solve, but I guess I can leave that for another post.

Re:And if you want a big SSD (1)

afidel (530433) | more than 3 years ago | (#33432730)

TMS already made such a product with their RamSan; it's a niche player for those who need more IOPS than god and don't need to store very much (i.e. a niche market). Their top-end product claims to do 5M IOPS and 60GB/s sustained, but only stores 100TB, which is the size of a mid-range array like a loaded HP EVA 6400 while costing about 20x more (granted, the EVA tops out in the tens of thousands of IOPS, but that's enough for the vast majority of workloads).

Re:And if you want a big SSD (1)

symbolset (646467) | more than 3 years ago | (#33432786)

Tens of thousands of IOPs (assuming a reasonable block size and write fraction) is a typical metric for a single industry standard server running consolidated workloads with dual Westmere processors and 192GB of RAM. For the same single server doing VDI in an 8AM boot storm condition it's nowhere near enough. Servers aren't going to become less capable next year, and high end servers with 64 cores and 2TB of RAM can already grind that much spinning disc SAN to dust today, saturating bandwidth, disk, cache and processor simultaneously. God forbid you should connect that SAN to a blade system with 16 such servers, let alone a few racks of them. The poor thing would give up.

I hear the RamSan dishes out the IOPS, but it's spendy.

Re:And if you want a big SSD (1)

afidel (530433) | more than 3 years ago | (#33432878)

Really? Then my load must be very atypical, because I have about a rack full of database servers and another rack of 144GB x5570 VM servers hitting a half-full 6400, and it's not complaining, let alone being ground to dust =) Now, I'm not doing VDI, because I haven't seen a TCO calculation that passes the smell test, but from what I've seen in benchmarks and white papers, putting a few of the HP SSDs into the 6400 should settle the boot-storm problem. Is it possible to completely overwhelm such a SAN with just a few nodes? Sure, but those aren't typical workloads for ~99% of the customer base.

Re:And if you want a big SSD (1)

symbolset (646467) | more than 3 years ago | (#33432942)

I would guess your experience is atypical, but you seem to have a good handle on the metrics. It seems likely your servers aren't maxing their I/O, so you're distributing the load appropriately to get good response times for your use case. I'll bet those servers could grind up your SAN if they tried. The HP EVA (all versions) maxes out at 8x 72GB SSDs per array, so you're unlikely to get a lot of VDI images on that after it's RAIDed and carved into LUNs. A single W7 image runs 10GB or more with Office. Even with smart clones, most people need several images for production and several more for test/dev, and that capacity runs out quickly.

You're right on the VDI TCO calc too. We need that cheap SSD SAN to bring the storage cost down. RAM is still a bit high, and with ASLR and large memory pages you don't get the returns on RAM dedupe that VMware would like to advertise, so the RAM requirements are the same, and in VDI it's more expensive RAM than the desktop-replacement RAM that's the alternative. Most of the sales folk are therefore pushing the manageability and security aspect, but to me it doesn't make sense unless you can deliver a better experience to the end customer than he had before, and that means IOPS. If you can do Android desktops, you're in the money today, though.

It's late, and I'm off to bed. G'nite.

Re:And if you want a big SSD (1)

afidel (530433) | more than 3 years ago | (#33434810)

490GB should be enough for as many linked clones as you're likely to need.

Re:And if you want a big SSD (2, Informative)

RivenAleem (1590553) | more than 3 years ago | (#33433688)

more IOPS than god

God doesn't need any Outputs. It's all one-way traffic with him.

Re:And if you want a big SSD (2, Funny)

adolf (21054) | more than 3 years ago | (#33432886)

And you, sir, win the Bullshit of the Day award.

Congrats!

Re:Here they come... (1)

afidel (530433) | more than 3 years ago | (#33432662)

Eh, hope that server isn't going to be doing anything transactional; the only Intel SSD I'd trust in any of my servers that actually need SSDs is the X25-E, which I have actually used.

density isn't everything (1)

krupan (1816340) | more than 3 years ago | (#33432572)

So it's more dense than NAND flash (and 3D, wow!), but how does it compare on speed, reliability, and endurance?

Re:density isn't everything (1)

Yvan256 (722131) | more than 3 years ago | (#33432648)

how does it compare on speed, reliability, and endurance

No, yes, maybe.

Re:density isn't everything (1)

c0lo (1497653) | more than 3 years ago | (#33433072)

So it's more dense than NAND flash (and 3D, wow!), but how does it compare on speed, reliability, and endurance?

Taking a wild guess here.
TFA states that the 1/0 is stored as a nanowire that is continuous/interrupted (thus not requiring any electric charge).

Yao applied a charge to the electrodes, which created a conductive pathway by stripping oxygen atoms from the silicon oxide, forming a chain of nanometer-sized silicon crystals. Once formed, the chain can be repeatedly broken and reconnected by applying a pulse of varying voltage, the University said.

It seems reasonable to think:

  • speed - blindingly fast reads at very low currents. Writes (which will require current to make/break the nanowire) might be slow, and will certainly require higher currents (how fast can oxygen atoms migrate at the nm scale in silicon dioxide?)
  • reliability - high. No electric charge is required to store the info (reliable reads). Writes - probably high as well (based on, for example, an "if at first you don't succeed, try try again" scheme)
  • endurance - practically infinite for reads (even under harsh radiation conditions - no electric charge). Writes? Probably limited, but I doubt even the researchers know

Re:density isn't everything (0)

Anonymous Coward | more than 3 years ago | (#33434012)

Let me quote the abstract: Meanwhile, the switch also shows robust nonvolatile properties, high ON/OFF ratios (>10^5), fast switching (sub-100-ns), and good endurance (10^4 write-erase cycles).
We do in fact have the data for writing speed (pretty good) and endurance (one order of magnitude less than SLC NAND Flash and comparable to MLC NAND Flash, according to Wikipedia, which is good too). And they're pretty promising, I must say.

What to call this new tech? (0)

Anonymous Coward | more than 3 years ago | (#33432576)

We should probably call them Holocrons...that is all.

24 nanometers, not 27 (1)

Chimel31 (1859784) | more than 3 years ago | (#33432580)

Toshiba has started mass production of 24nm NAND cells. Just saying...
Intel and Micron are already at 25nm in their most recent production lines, Hynix at 26nm.
Only Samsung, albeit the world's first NAND manufacturer, seems to be at 27nm.

What about performance? (1)

Mikachu (972457) | more than 3 years ago | (#33432588)

Okay, so they claim that the memory is denser than NAND, and cheap to boot. That's great. But TFA makes no mention of its performance. How does the read/write speed compare to that of NAND, or magnetic drives? Could the 3D architecture potentially slow read/write times? I'm not trying to make any claims here, but it's a little disconcerting that there is no mention of it at all within the article.

Re:What about performance? (1)

M. Kristopeit (1890764) | more than 3 years ago | (#33432918)

Either way, it's going to depend on the application using the drive... perhaps extremely frequent writes to a few blocks would be much faster or slower than on a 2D architecture... perhaps multiple continuous large writes would be better multiplexed for higher throughput... perhaps less...

You'd really need to see a detailed report from a quality benchmarking tool that could emulate different real-world conditions.

... and reconfigurable. (1)

steeleyeball (1890884) | more than 3 years ago | (#33432590)

I like what the computerworld.com article says about what they are doing with the company NuPGA... Can anyone say "isolinear chips"? Can you imagine denser FPGAs with more memory that are heat- and possibly radiation-resistant? ... Processor, cache and chipset all in one, made to order.


so how wide is 5nm? (5, Informative)

ChipMonk (711367) | more than 3 years ago | (#33432658)

The radius of a silicon atom is 111 to 210 picometers, depending on the measurement context. (Check Wikipedia to see what I mean.) That makes the diameter 222 to 420 picometers, so 5nm is somewhere between 12 and 23 silicon atoms wide.
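As a quick check of that count:

```python
# 5 nm expressed in silicon-atom diameters, using the radii above.
WIRE_NM = 5.0
for radius_pm in (111, 210):
    diameter_nm = 2 * radius_pm / 1000.0
    print(f"radius {radius_pm} pm -> ~{WIRE_NM / diameter_nm:.0f} atoms across")
# About 23 atoms at the small radius, about 12 at the large one.
```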

Will special glasses be needed to read 3D memory? (4, Funny)

PatPending (953482) | more than 3 years ago | (#33432682)

If so, count me out! Besides, 3D makes me nauseated.

looking for high density ROM to stop digital decay (3, Interesting)

OrangeTide (124937) | more than 3 years ago | (#33432744)

I'm still waiting for some cheap, stable, high-density ROM, or preferably WORM/PROM. Even flash has only about 20 years of retention with the power off. That sounds like a lot, but it's not all that difficult to find a synthesizer or drum machine from the mid-80s in working condition. If you put flash in everything, your favorite devices may be dead in 20 years. For most devices this is OK. But what if some of us want to build something a little more permanent? Like an art piece, a space probe, a DSP-based guitar effects pedal, or a car?

Some kind of device with nanowires that I can fuse to a plate or something with voltage would be nice, if it could be made in a density of at least 256Mbit (just an arbitrary number I picked). EPROMs (with the little UV window) also only last for about 10-20 years (and a PROM is just an EPROM without a window). So we should expect to already have this digital decay problem in older electronics. Luckily, for high volumes it was cheaper to use a mask ROM than a PROM or EPROM. But these days NAND flash (MLC) is so cheap and high-density that mask ROMs seem like a thing of the past, to the point that it is difficult to find a place that can do mask ROMs and can also do high-density wafers.

Re:looking for high density ROM to stop digital de (1)

EETech1 (1179269) | more than 3 years ago | (#33433076)

I've seen many application notes on long-term flash memory reliability in embedded applications, and they all recommend having the bootloader or application rewrite pages periodically to recharge the flash memory; this allows the device to provide 20 or so years of guaranteed operation.

That does nothing to guarantee a 20+ year shelf life if the part is left unpowered (service parts), though. Hopefully, if you are the manufacturer, you retained the required cables / interfaces / software / computers / knowledge to reprogram your own stuff 10-20 years from now, so your replacement parts don't die on the shelf right along with the ones that have been working in the field for 20 years!

Re:looking for high density ROM to stop digital de (1)

mr_mischief (456295) | more than 3 years ago | (#33433278)

My parents were told that they were lucky the clutch and clutch plate in their car could be replaced, because the car is a whopping 16 years old. A different part for a different model had to be fitted by a tech who happened to be able to figure out it would work, then the adjustments needed to be twiddled. If Ford Motor Company has problems with the rate at which parts become obsolete, I don't imagine many CE companies are planning for 20-year serviceability either.

What about reliability? (0)

Anonymous Coward | more than 3 years ago | (#33432788)

Not a peep in TFA... How many read/write cycles? How does the memory degrade over time?

Silicon... (1)

Hylandr (813770) | more than 3 years ago | (#33432818)

So, manufacturing with silicon now, huh?

How will this new RAM be measured for capacity? B, C, D or DD?

- Dan.

Re:Silicon... (1)

DWMorse (1816016) | more than 3 years ago | (#33432864)

If you're using silicon, and only getting a B out of it... get your money back!


Cheap??? (1)

codeButcher (223668) | more than 3 years ago | (#33432830)

We've got memory that's made out of dirt-cheap material and it works

I guess the materials alone don't determine the price; so does the expertise/work needed to put them together. I'm also typing on a computer that's made out of cheap materials (lots of plastic, some alumin(i)um, small quantities of other stuff), but it didn't come that cheap.

Oblig B5 joke (2, Funny)

tenco (773732) | more than 3 years ago | (#33433028)

Silicon-based lifeforms 5x denser than carbon-based.

that's quite the range (0)

Anonymous Coward | more than 3 years ago | (#33433502)

(James) Tour, a professor of mechanical engineering and materials science and computer science

This guy has his hands in at least three normally separate departments.
He probably had to turn down the faculty deanship due to the existence of only 24 hours in a day.

But when will it be able to do ... (1)

Skapare (16644) | more than 3 years ago | (#33433756)

... 18,446,744,073,709,551,616 erase/write cycles?

the new news sucks (1)

mestar (121800) | more than 3 years ago | (#33434030)

Boy, do I miss the old news. The way they would have written this one, for example, would be to put in some large number for thumb drives, as in:

"USB thumb drives of the future could reach 150 terabytes."

Or something.

Want one now! (1)

hesaigo999ca (786966) | more than 3 years ago | (#33434108)

Enough with all these coming-soon posts. I want a fact-filled specimen with a ready-to-be-bought-now presentation and the stores that carry it. It might be another 3 years before we see any of these...!
