
Hynix 48-GB Flash MCP

kdawson posted more than 6 years ago | from the lotta-songs-in-yer-phone dept.

Data Storage 129

Hal_Porter writes to let us know that the third-largest NAND chip maker, Hynix, has announced they have stacked 24 flash chips in a 1.4mm thick multi-chip package. It's not entirely clear from the article whether the resulting 48-GB device is a proof of concept or a product. The article extrapolates to 384 GB of storage in a single package, sometime. Hal_Porter adds: "It's not clear if it's possible to write to them in parallel — if so the device should be pretty damn fast. The usual objection to NAND flash as a hard drive replacement is lifetime. NAND sectors can only be written 100,000 times or so before they wear out, but wear leveling can be done to spread writes evenly over at least each chip. I worked out that the lifetime should be much longer than a typical magnetic hard disk. There's no information on costs yet frankly and it sounds like an expensive proof of concept, but it shows you the sort of device that will take over from small hard disks in the next few years."
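If the 24 stacked dies really can be written in parallel, a rough aggregate-throughput estimate is easy to sketch. The per-die program rate below is an assumed ballpark, not a Hynix figure:

```python
# Hypothetical aggregate write bandwidth if all stacked dies program in parallel.
# The per-die rate is an illustrative assumption; Hynix has published no figures.
dies = 24
per_die_mb_s = 8                     # assumed per-die NAND program throughput (MB/s)

parallel_mb_s = dies * per_die_mb_s  # ideal scaling, no controller overhead
print(f"{parallel_mb_s} MB/s aggregate")  # 192 MB/s under these assumptions
```

Even with a modest assumed per-die rate, ideal parallel scaling would comfortably outrun a 2007-era laptop hard disk, which is the point Hal_Porter is making.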


129 comments

I, for one ... (0, Funny)

Anonymous Coward | more than 6 years ago | (#20495289)

... welcome our new 48GB overlords.

Re:I, for one ... (0)

Anonymous Coward | more than 6 years ago | (#20495673)

my 1TB overlord can kick your 48GB overlord's ass!

Database servers (4, Interesting)

gnuman99 (746007) | more than 6 years ago | (#20495335)

Random seek latency is probably one of the biggest bottlenecks in large databases; some databases even optimize reads and writes to be more sequential on disk. A drive like this would throw that problem out the window.

Re:Database servers (0, Offtopic)

pilgrim23 (716938) | more than 6 years ago | (#20495443)

"...that the lifetime should be much longer than a typical magnetic hard disk." ... I am curious about how that lifetime is determined. As a retro-computist hardware buff I tinker with old hardware all the time. One example (of many): I have an Apple /// with a 5mb (yeah, 5mb) ProFILE drive. I think this beast was made in 1981 and it still runs just fine. I have a few slightly older hard drives too. I am not sure how an average lifetime is determined, but I actually play with hardware that is over a quarter century old. If a NAND can last that long I will be impressed...

Re:Database servers (-1, Flamebait)

Anonymous Coward | more than 6 years ago | (#20495579)

"...that the lifetime should be much longer than a typical magnetic hard disk." ... I am curious...
The person you replied to never said that. Start your own god damned thread next time instead of hijacking someone else's.
 

Re:Database servers (0)

Anonymous Coward | more than 6 years ago | (#20496993)

you MUST be new here

Re:Database servers (0)

Anonymous Coward | more than 6 years ago | (#20498105)

I'm AC#59. What's your anonymous coward id?

Re:Database servers (4, Insightful)

CastrTroy (595695) | more than 6 years ago | (#20495723)

You'd have to look at how much actual reading and writing to the drive was done by a computer from that era. Currently, hard drive space is really cheap, so we write lots of stuff to the disk: temp files, log files, swapped-out programs, and with some filesystems and operating systems a write to the drive every time a file is accessed. A computer from that era wouldn't be writing so much stuff to the hard drive, as hard drives were small and expensive. It would likely only write to the drive when you saved actual human-created data, or when you installed a new program. Reading would only be done when you started up the computer, launched a program, or loaded a file.

Re:Database servers (-1, Troll)

Anonymous Coward | more than 6 years ago | (#20495801)

Hey, everyone bitching about Apple's price cut on the iPhone! GTFO, you fucking whiners. It's bad enough that you fuckers are switcheurs (PC users posing as Mac users), but it's doubly bad that you're also entitled douchebags. We don't want your kind on your platform. Just fucking leave, you accountants, and go back to your PCs.

TYVM.

Re:Database servers (1)

Klinky (636952) | more than 6 years ago | (#20496455)

But then you have the problem of many databases being much larger than the 48GB listed, or even the 384GB. Even if you could buy it today it's probably not very cost effective. Adding more RAM or working on better caching solutions may prove to be more valuable. Also it's worth noting that while flash memory does have faster seeks, they are around 500ns - 1ms. That's quicker than a HDD, sure, but it's still orders of magnitude slower than DRAM. However, this will probably be the wave of the future once we start seeing prices come down and the limits of magnetic storage reached. But alongside SSDs you'll always have magnetic storage, which is going to be far cheaper, probably for the next decade or so. Only when the price/performance advantage becomes worthwhile will we really see these take over.

Re:Database servers (1)

Atzanteol (99067) | more than 6 years ago | (#20497649)

But you then have the problem with many databases being much larger than the 48GB listed or even the 384GB.

Let me introduce you to our friend RAID [wikipedia.org].

Re:Database servers (1)

Jeppe Salvesen (101622) | more than 6 years ago | (#20499255)

Well, I'm rooting for this solution for an index tablespace (or .MYI on a MyISAM database), and a regular RAID for the actual data. I think that combo would fly: the tree operations on the index thrive on low latency, and the sequential reads/writes of table data thrive on regular disk devices, which tend to have good throughput.

48 GB = 384Gb (5, Informative)

sirket (60694) | more than 6 years ago | (#20495381)

The article does not extrapolate to 384 GB of storage - they extrapolate to 384 Gb of storage, which is 48 GB. Bits != bytes.

Re:48 GB = 384Gb (2, Informative)

sirket (60694) | more than 6 years ago | (#20495441)

Just to clarify: the company mentions possibly going to 28 stacked chips, which would be 448 gigabits (not gigabytes) of storage, or about 56 GB of space. As flash chips grow in size, this could double (assuming the 32 Gb NAND chips which are becoming available) to 96 or 112 GB of storage or more (assuming larger chips).

Re:48 GB = 384Gb (3, Insightful)

timeOday (582209) | more than 6 years ago | (#20496079)

But these appear to be tiny (for mobile applications).... you could fit an enormous number of them into a 3.5" (or even 2.5") hard drive enclosure, if you can afford it. Put in a controller that can read and write to, say, 16 chips in parallel, and you would have a monster hard drive in every respect.

Re:48 GB = 384Gb (1)

thatskinnyguy (1129515) | more than 6 years ago | (#20495543)

384,000,000,000 bits / 8 bits per byte = 48GB; 48GB / 24 flash devices = 2GB per device. Seems like they could almost do better. I'll give it some time. (Impatient for SSHD)
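The per-device arithmetic above checks out (decimal units throughout):

```python
# Verify the bits -> bytes -> per-chip breakdown from the parent comment.
total_bits = 384_000_000_000          # 384 Gb claimed by the article
bytes_total = total_bits // 8          # 48,000,000,000 bytes = 48 GB (decimal)
per_device_gb = bytes_total / 24 / 1_000_000_000  # 24 stacked chips
print(per_device_gb)                   # 2.0 GB per chip
```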

Re:48 GB = 384Gb (1)

HiThere (15173) | more than 6 years ago | (#20495829)

But what they *ought* to do is figure out how to stack 33 or 35 planes (1 word + parity). I suppose that they could do fancier error correction...but this is a ram chip, not a computer.

Anyway, then if they could read/write all planes in parallel you'd not only have fast access, but also simple addressing. (I.e., you could reasonably do I/O to a single column...admittedly slower than block transfer, but nicer if you only need to change one word.) This would be more important if memory usage cycles were more important, and if there were block parity, that would still need updating, but potentially a very nice approach.

Re:48 GB = 384Gb (1)

steelfood (895457) | more than 6 years ago | (#20498631)

I don't know which K you're using, but in my world, there are 1000 bits per Kb, 8 bits per byte, and 1024 bytes per KB.

Which means 384 billion bits is 48 billion bytes, which is only 44.7GB.

HDD manufacturers want 1000 bytes per KB, but I don't buy that at all. It's no different from the RAM manufacturers rounding up 536866816 to 512MB when 512MB is actually 536870912.
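The 44.7GB figure above is easy to verify: 384 gigabits is 48 decimal gigabytes but only about 44.7 binary gigabytes:

```python
# Decimal vs. binary reading of the same 384 gigabits.
bits = 384 * 10**9
gb_decimal = bits / 8 / 10**9      # 48.0 GB (powers of ten)
gib_binary = bits / 8 / 2**30      # ~44.7 GiB (powers of two)
print(f"{gb_decimal} GB = {gib_binary:.1f} GiB")
```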

Re:48 GB = 384Gb (0)

Anonymous Coward | more than 6 years ago | (#20499285)

Your world is small... these days, it really depends on where the bytes are used.
If you need to be 100% clear on this, the safest thing is to use the IEC standard prefix [wikipedia.org].

1024 byte = 1 KiB
1048576 byte = 1024 KiB = 1 MiB
And so on. MiB is in full "mebibyte". So you also get: "kibibyte" and "gibibyte". (Yes, I know! I didn't invent these names.)

A gem from the wikipedia article text (didn't know this myself):

CD capacities are always given in binary units. A "700 MB" (or "80 minute") CD has a nominal capacity of about 700 MiB (approx 730MB).[47] But DVD capacities are given in decimal units. A "4.7 GB" DVD has a nominal capacity of about 4.38 GiB.[48]
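The CD/DVD discrepancy quoted above is straightforward to check (the quoted "approx 730MB" is a rounded figure):

```python
# A "700 MB" CD is really 700 MiB; a "4.7 GB" DVD really uses decimal units.
cd_bytes = 700 * 2**20               # 734,003,200 bytes
dvd_bytes = 4.7 * 10**9              # 4,700,000,000 bytes

print(f"CD: {cd_bytes / 10**6:.0f} MB decimal")    # 734 MB
print(f"DVD: {dvd_bytes / 2**30:.2f} GiB binary")  # 4.38 GiB
```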

48GB? (1)

Tiroth (95112) | more than 6 years ago | (#20495397)

Looks like people are confusing bits and bytes. 48GB does not appear in the article anywhere, so I assume this is obtained by dividing 384Gb by 8.

24 layers x 16Gb package = 384Gb, so the article itself is consistent.

Why only 100,000 times (1)

BlowHole666 (1152399) | more than 6 years ago | (#20495413)

Why is it that you can only write to a NAND gate 100,000 times before it stops working? Is it the material they are using or something with the voltage?

Re:Why only 100,000 times (3, Funny)

Anonymous Coward | more than 6 years ago | (#20495479)

The NAND gate is a union worker, also entitled to a smoke break every 3,000 write cycles.

Re:Why only 100,000 times (5, Informative)

jandrese (485) | more than 6 years ago | (#20495511)

It's due to the way flash works. A flash bit is basically a conductor surrounded by an insulator. To store a bit, you apply a large charge to the insulator to increase the charge of the conductor; basically you're burning charge through the insulator to get it onto the conductor. Once it is on there, to read the charge you have to apply another large charge to the insulator and see if the resultant charge is n or n + m. The m factor comes from latent charge on the conductor.

Anyway, the upshot of this is that because you have to constantly burn charge through the insulator to use the part, eventually you basically burn out the insulator and cause it to leak charge. Once it starts leaking, you lose your stored bits and the part is useless.

Re:Why only 100,000 times (1)

BlowHole666 (1152399) | more than 6 years ago | (#20495557)

Thanks. Yeah I remember from my circuits class designing memory but now that I think about it we had a constant power supply to keep the charge. In this case you don't so you have to keep it some other way. Thanks for the reply.

Re:Why only 100,000 times (1)

petermgreen (876956) | more than 6 years ago | (#20496937)

Yeah, memory that needs power to keep its contents is known as RAM. Much faster than flash and with basically no limit on write frequency, but the boards to make it act like a disk are expensive and you have to make damn sure it keeps power if you have anything important stored there.

Re:Why only 100,000 times (1)

treeves (963993) | more than 6 years ago | (#20496697)

So is the summary (I didn't RTFA of course) misleading by continually referring to NAND flash having this limitation? Doesn't it also apply to NOR flash?
OK, I just looked at the Flash entry on wikipedia, and it appears that it's even worse for NOR flash.

Re:Why only 100,000 times (1)

cheater512 (783349) | more than 6 years ago | (#20501515)

I thought that only writes were impaired, reads didn't affect the flash at all.

Your explanation would mean that both reads and writes degrade the flash.

Re:Why only 100,000 times (1)

Knara (9377) | more than 6 years ago | (#20495649)

Keep in mind that the 100,000 writes number is a stock number that seems to go up all the time, not that this comment directly addresses the "why".

Re:Why only 100,000 times (2, Interesting)

Bacon Bits (926911) | more than 6 years ago | (#20495711)

It's wholly a matter of mechanical endurance of the components, AFAIK. The gate is wedged, for lack of a better term. Everything physical wears out. It was much worse in the early 1990s, but whole orders of magnitude of improvement have been made since then.

I've never seen a study conclude that the write limitation on NAND flash-based devices is a significant impact. Some of the studies have cited worst case scenarios of 50 years of continuous operation. It is far more likely that the device will physically fail due to other means rather than fail due to NAND erasing wear. In any case, I've never seen anyone claim that a solid state disk is going to fail before a mechanical magnetic disk simply due to NAND erasing wear. Indeed, the articles that actually go into it make pretty strong claims that the endurance of flash media is far above that of current mechanical-electromagnetic designs. Three or four times the lifespan.

Re:Why only 100,000 times (0)

EnderWiggnz (39214) | more than 6 years ago | (#20496499)

The funny thing is, I haven't seen a manufacturer list MTBF in write cycles in over 5 years. With the newer materials and wear levelling, the worst that I've seen is an MTBF of 1 million hours of continuous use. There is no mention of write cycles, ever!

Re:Why only 100,000 times (4, Informative)

networkBoy (774728) | more than 6 years ago | (#20497159)

Flash is rated in erase cycles, not write cycles. Erase is the most damaging event to the tunnel oxide layer in the device, which is why they fail.
Flash Cell stackup (same for NOR and NAND, the interconnection of cells determines what type of array it is):

G - gate (metal)
ONO - Oxide/Nitride/Oxide layer
FG - Floating Gate (Poly)
tOx - Tunnel Oxide (very thin)
Si - wafer (NPN/PNP wells)
-nB

Re:Why only 100,000 times (1)

Intron (870560) | more than 6 years ago | (#20500259)

For a laptop drive replacement, drop cycles are just as important as erase/write cycles. After a few drops a laptop harddrive starts going "eeeeeeeeek eeeeeeeeeeek eeeeeeeeek" whereas I hear no sound coming from the flash drive.

FAT32 (1)

garlicbready (846542) | more than 6 years ago | (#20495423)

Just think 48 GB of storage space on a Fat32 Filesystem
what a waste ...

Re:FAT32 (0)

Anonymous Coward | more than 6 years ago | (#20496337)

Just think 48 GB of storage space on a Fat32 Filesystem
what a waste ...


Imagine your post. What a waste. Flash != Fat32

Re:FAT32 (1)

operagost (62405) | more than 6 years ago | (#20499203)

Fortunately, Microsoft hard-coded XP and later to not format partitions larger than 32 GB with FAT32. It's always possible these will ship with FAT32 via the use of some other utility, but it seems unlikely.

swap space / tmpfs / cacheing (2, Interesting)

lobiusmoop (305328) | more than 6 years ago | (#20495459)

Given the low price of RAM these days (1 or 2 gigs being standard) minimising the need for swapping, and availability of tmpfs in the Linux kernel, I'm surprised there are not more flashdrive based linux boxes available these days.

Re:swap space / tmpfs / cacheing (2, Insightful)

Jeff DeMaagd (2015) | more than 6 years ago | (#20495585)

Have you actually bought a sizeable flash drive? 4GB CF cards are starting to be common; I think CF cards are the most affordable flash drive that you can reasonably use as a system drive. But for the same price, you might buy a 300GB hard drive. Not only that, there don't seem to be any affordable SATA-based flash drives, and SATA is quickly becoming the only drive connection type found in computers.

So it would work great for a network terminal, there doesn't seem to be enough for most people to use just yet.

Re:swap space / tmpfs / cacheing (2, Interesting)

Com2Kid (142006) | more than 6 years ago | (#20495763)

2GB SD cards are still a better bang for your buck, typically. At the very least, compatibility is better. :)

You can get them pretty easily for $20 a pop.

Amazingly enough Amazon has 2GB SD cards cheaper than Newegg. $15 a pop (no free shipping though!)

That is $30 for 4GB, or $60 for 8GB.

Not quite enough to get Vista up and running, but it should do fine for a stand alone Linux box. :-D

I wonder what the throughput would be if a proper hardware controller was put in place and you had 50 of those things in parallel.

Re:swap space / tmpfs / cacheing (2, Interesting)

Spokehedz (599285) | more than 6 years ago | (#20498111)

I can't find it now, but I remember a device that would take a bunch of SD cards (like, 4 slots) and would combine them into a big disk that had (I believe) SATA on it. So, you would take a bunch of these cheap 2GB SD cards, and it would make one big disk out of them all.

http://www.geekstuff4u.com/product_info.php?manufacturers_id=&products_id=492 [geekstuff4u.com]

Not it, but close. Also way too expensive.

Re:swap space / tmpfs / cacheing (2, Interesting)

Anonymous Coward | more than 6 years ago | (#20495811)

I'm surprised there are not more flashdrive based linux boxes available these days.

There will be several million shortly...

# Mass storage: 1024 MiB SLC NAND flash, high-speed flash controller;
# Drives: No rotating media.

From the OLPC Spec [laptop.org]

Re:swap space / tmpfs / cacheing (I do that) (0)

Anonymous Coward | more than 6 years ago | (#20500977)

I use a CENATEK "RocketDrive" Solid-State RamDisk that is on a PCI 2.2 bus & uses PC-133 SDRAM! I have owned it since 2003 & it's STILL running strong (let's see a FLASH based one last 5++ years & still work)!

There have been FASTER performing units releasing since then (Gigabyte IRAM, uses SATA 150 bus, & DDR2 RAM, & an even FASTER unit is coming in the DDRDrive X1, which uses PCI-Express bus, & DDR RAM (this is THE ONE to look for imo)).

On mine, I use the 2nd 1gb partition on my SSD for:

====

Webpage Webbrowser caches

Temp ops (via %TEMP% & %TMP% environment variables/SET statements)

Logging (Such as EventLogs from the OS, & inside apps themselves like Windows firewall, or ZoneAlarm for example)

----

* EventLogs CAN be moved via registry hacks onto that one by the by:

SYSTEM LOG:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog\System

APPLICATION LOG:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog\Application

SECURITY LOG:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog\Security

(Check the FILE value in each - then, you will have the actual locations of them AND YOU CAN MOVE THEM!)

====

PLUS - & I use NTFS compressed partitions (this helps, with GOOD reason, read on).

By using NTFS compression, formatted with 4096 byte sectors (to match cache & read/read ahead mechanisms the filesystem driver uses as its size for this)?

This acts on text-based data on this partition to benefit performance in a couple of ways:

1.) Since text compresses FAR above even a 2:1 ratio, my data from temp ops, webpage caches (the HTML stuff) & logging stores excellently, and reads up faster from disk (smaller files) & the NTFS compression stage into RAM is offset excellently by today's fast CPU's + RAM as well...

2.) And, like the pagefile.sys benefits extolled below? I avoid the fragmentation & clutter that logging, %temp% ops, & webpage caches create, + they access for reuse 1000's of times faster than they would off std. mechanical HDD's... by far!

3.) I also avoid head movements on my main C: drive this way as well, thus, allowing programs to launch faster since the HDD is not burdened with I/O for head movements for paging, %temp% ops, logging, OR webpage caches!

----

By using NON-NTFS COMPRESSED partitions (for paging) also formatted to 4096 byte sectors (to match read/read-ahead mechanisms in the filesystem PLUS to match how the pagefile.sys is read/wrote)? I get these benefits (for pagefile.sys placement):

A.) I put my pagefile.sys onto its 1st partition, & it works VERY well this way (taking away the burden of I/O from my main C: drive, in doing paging operations & thus, allowing programs to operate faster because no head movements are in the way doing paging!

B.) PLUS, I avoid fragmentation of the pagefile.sys itself, + other files too as it grows (or contracts @ bootups).

----

You may be tempted to state "logging takes a speed hit" & yes, it does using NTFS compression especially... you'd be right as rain!

HOWEVER, this is largely offset by my ability to double or more in storage of log data (due to compression & the fact logs are usually text data which compresses excellently), in the speed of access to the files, AND the fact today's CPU's + RAM are SO FAST, that the compressed writing stage is not as big of a "hit" to performance as it was years ago... vs. today!

Plus, the ns speed (vs. ms, many orders of magnitude slower speeds of access/reaccess of std. mechanical HDD's) of RAM used on SSD's allows for SUCH fast access/reaccess of files on SSD's, it helps a great deal there too...

APK

P.S.=> Anyhow/anyways, how I use mine is much for the ideas you have & more (I wrote up the article on this for both SuperSpeed.com/EEC Systems back in 1997 that took them to a finalist position @ Microsoft Tech Ed 2000/2001 in the hardest category there, SQLServer Performance Enhancement, by mounting DB devices onto their mirroring to backing HDD software based "SuperDisk" while improving their "SuperCache" program up to 40% on a paid contract)... & later, I wrote up the article that CENATEK featured on their homepage above all others, for the same ideas above on DB engines, & also for how I use mine, above... apk

Wear leveling? (0)

Jeff DeMaagd (2015) | more than 6 years ago | (#20495461)

The problem with the concept of wear leveling is that I haven't been able to find any specs on such a feature, or whether it actually exists in a product. From what I've heard of it, it would seem that it would only wear-level the free space and often-written files, so writes could still easily hammer some areas more than others, and it gets worse the more you fill up a drive. I'm not sure how it would work the way it's claimed to: the system would work best if it kept track of the number of writes to a given sector, but that seems like quite a bit of overhead.

Re:Wear leveling? (1)

Bacon Bits (926911) | more than 6 years ago | (#20495795)

You need to have either firmware on the device that handles wear leveling (this seems appropriate for IDE drives or as part of the spec for flash media standards) or use a file system which handles it for you.

One of the biggest offenders is file systems (such as the default configuration for NTFS) that track last access times. That information is all stored in the MFT for NTFS, so frequently accessed files will be writing to this table constantly.

Re:Wear leveling? (0)

Anonymous Coward | more than 6 years ago | (#20496347)

For wear leveling to work efficiently, the medium must have either a rather large set of spare sectors that can be cycled in and out of use, or have some Idea what part of it is unused, i.e. not filled with usable data according to the filesystem that has been used to format it. I can imagine FAT or FAT32 support implemented on such a controller, even with some kind of recognition for partitions, but even then, what happens when the disk is full, and I just repeatedly write over the same set of sectors, chosen to be larger than the suspected number of spare sectors?

Re:Wear leveling? (0)

Anonymous Coward | more than 6 years ago | (#20496963)

Flash technology does not implement wear leveling at all, it is completely done by the controller that talks to the flash (be it NOR, NAND, etc). There is some hybrid flash technology (oneNAND, trustedFlash, etc) that include a controller in the flash package to reduce integration costs, but there is still a controller in front of the flash technology.

From my knowledge (working in the industry), it is exceptionally rare for the controller to understand the filesystem that is on the flash. The controller works in terms of reading/writing LBAs, and not what data is stored. Most of the engineers that I deal with (from multiple companies) do not know the differences in the filesystems, and wouldn't be able to tell you what sector the PBA, FAT are in, let alone the write characteristics for them.

Wear leveling comes in a variety of forms, and has differences in the performance of the device, and the life of the device. The two main buckets for wear leveling are static and dynamic (and I always confuse the two, so forgive me).

For the difference I'm going to use LBA and PBA. LBA = the logical block address that the host/filesystem believes it is writing. PBA is the address that the flash controller is actually writing (it's a little more complicated, but this works)

In all cases there is a pool of PBAs that are reserved by the flash controller for wear leveling. These cannot be directly addresses by the host/filesystem, effectively they are outside the range of known LBAs.

My definition today of static:
Only data being written is wear leveled. Data that is written once, but never again does not get wear leveled. For example, write data to LBA 0, 1 and 2. The flash controller writes these to PBA 0,1, and 2. The host writes new data to LBA 2. The flash controller writes this data to PBA 3 and moves PBA 2 back into the free area.

My definition of dynamic:
All data is wear leveled, regardless of write or read. This can be done similar to a JAVA garbage collection routine, or other algorithms. For example (an extremely poor example), write data to LBA 0, 1, and 2. The flash controller writes these to PBA 0, 1, and 2. The host writes new data to LBA 2. The flash controller does PBA 0 -> PBA 1, PBA 1 -> PBA 2, new data -> PBA 0.

You should also check your 100,000 write cycles figure. MLC NAND flash is typically quoted at between 5,000 and 10,000 write cycles. SLC NAND flash varies between 10,000 and 100,000 write cycles. The degradation in flash lifetime is causing most vendors to move to my definition of dynamic wear leveling, to keep the life of the device on par with what you have experienced to date.
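The LBA-to-PBA remapping described above can be sketched as a toy flash translation layer. Everything here is illustrative (the class name, block counts, and the least-worn selection policy are invented for the example); real controller firmware is far more involved:

```python
# Toy flash translation layer: each write of a logical block address (LBA)
# lands on the least-worn free physical block (PBA); the old PBA is retired
# back into the free pool, which is what spreads erases across the device.
class ToyFTL:
    def __init__(self, physical_blocks):
        self.erase_count = [0] * physical_blocks
        self.lba_to_pba = {}                    # logical -> physical map
        self.free = set(range(physical_blocks))

    def write(self, lba):
        # Pick the free block with the fewest erases (ties -> lowest index).
        pba = min(sorted(self.free), key=lambda b: self.erase_count[b])
        self.free.remove(pba)
        old = self.lba_to_pba.get(lba)
        if old is not None:
            self.erase_count[old] += 1          # erasing the old block costs a cycle
            self.free.add(old)                  # ...but it rejoins the pool
        self.lba_to_pba[lba] = pba
        return pba

ftl = ToyFTL(4)
print(ftl.write(2))   # first write of LBA 2 -> PBA 0
print(ftl.write(2))   # rewrite lands on a fresh block, PBA 1
```

Note how a rewrite of the same LBA never hammers the same PBA twice in a row, matching the "write data to LBA 2 again, controller writes PBA 3" walkthrough in the comment above.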

Re:Wear leveling? (1)

networkBoy (774728) | more than 6 years ago | (#20497205)

Flash devices have a small microcontroller already embedded in them to control programming and erase voltages. It also is responsible for wear leveling (at least in NOR devices).

Re:Wear leveling? (0)

Anonymous Coward | more than 6 years ago | (#20496369)

I'm not sure how it would work the way it's claimed to work, the system would work best if it kept track of the number of writes to a given sector, but that seems like quite a bit of overhead.

All the wear levelling algorithms I've heard of being actually implemented fell into one of two categories:

1. pseudo-random sector ordering - the position of a sector on the physical media is rotated every time it's written according to a pseudo-random algorithm that should ensure near-equivalent writes to all areas of the media.

2. The flash chip tracks writes by sector or block, and always chooses the least-written free sector to store a new file. This can be done in a data-agnostic method inside the addressing circuitry, and therefore should be fs-agnostic.

Both of these, of course, use a lookup table to dereference - one's doing it in flash, the other in the OS' filesystem driver. I think Windows' mass storage drivers try to use #1 where possible, but don't quote me on that!
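Scheme 1 above can be illustrated with a simple rotation (a deterministic stand-in for a real pseudo-random sequence; the stride and sector count are toy values):

```python
# Illustrative sector rotation: each rewrite of a logical sector lands on a
# different physical sector, cycling through the whole medium over time.
N = 8        # physical sectors (toy size)
STRIDE = 5   # coprime to N, so repeated rewrites visit every sector

def physical(logical, write_count):
    return (logical + write_count * STRIDE) % N

# Rewriting logical sector 0 eight times touches all 8 physical sectors:
touched = {physical(0, w) for w in range(N)}
print(sorted(touched))   # [0, 1, 2, 3, 4, 5, 6, 7]
```

Because the stride is coprime to the sector count, writes to one logical sector are spread evenly over the whole medium, which is the property the AC describes.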

NAND flash writes (-1)

GoNINzo (32266) | more than 6 years ago | (#20495467)

100,000 times? Try more like 1,000. That's usually the level that is promised by commercial products, anyway. I'm not sure where he got the 100,000 number from.

It's not a good replacement for a consumer hard drive. There are applications that would use it, I'm sure, but make sure it's low on the writes.

Re:NAND flash writes (3, Informative)

TinyManCan (580322) | more than 6 years ago | (#20495773)

This is straight bollocks. It's ridiculous to think that you could only write to a NAND block 1000 times.

Commercial products in the high-end flash space are promising 500,000+ writes.

We are not talking about glorified thumb-drive flash memory here, but decent chips with good wear leveling and high quality construction.

Re:NAND flash writes (0)

Anonymous Coward | more than 6 years ago | (#20500221)

Whether it might be "ridiculous" or not, for MLC (high capacity) NAND flash, failure rates are such that device lifetime is indeed limited to several thousand (depending on vendor/device) erase/write cycles.

I don't quite know what you mean by "glorified thumb-drive flash memory" or "high quality construction", but the reality of the situation is that the only NAND flash that is available at competitive prices is at the (cheap, dense) corner of the market, and that's where the Hynix 16Gib die referred to in the story linked above is.

Flash lifespan in perspective (4, Interesting)

G4from128k (686170) | more than 6 years ago | (#20495623)

Even at only 1,000 writes of reliable lifespan, 48 GB could handle 48 TB of writes or over 4,000 hours of continuous writing of compressed HD video (or about 2 years of 40 hr/week writes of a video stream). Checking my average usage of disk I/O finds that I only average about 2 GB of writes per day which would suggest that this device would last me 24,000 days (or 65 years). And if the life is 10,000 or 100,000, then I'd see 10X or 100X that lifespan.

Your mileage may vary, but I'd bet that 99% of users would never keep their computer (especially a laptop that is the more likely application for flash-based drives) for long enough to see the disk fail from wear.
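G4from128k's arithmetic is easy to reproduce (the endurance and daily-write figures are the ones assumed in the comment, not measured specs):

```python
# Reproduce the parent comment's worst-case lifespan estimate.
capacity_gb = 48
write_cycles = 1_000       # pessimistic endurance assumed above
daily_writes_gb = 2        # average daily write volume cited above

total_tb = capacity_gb * write_cycles / 1_000               # 48 TB of writes
lifetime_years = capacity_gb * write_cycles / daily_writes_gb / 365
print(f"{total_tb:.0f} TB total, ~{int(lifetime_years)} years")
```

This assumes perfect wear leveling over the whole device; the replies below about nearly-full drives explain why the real figure would be lower.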

Re:Flash lifespan in perspective (1)

drrck (959788) | more than 6 years ago | (#20495693)

I don't think that it would "just fail" either. What you should see is slightly less overall capacity as those sectors become marked as unavailable, right?

Re:Flash lifespan in perspective (1)

russotto (537200) | more than 6 years ago | (#20496231)

Unfortunately, wear leveling also means that when one cell fails, many others are likely to go real soon now. So when you start getting bad sectors, time to replace the device.

Re:Flash lifespan in perspective (1)

compro01 (777531) | more than 6 years ago | (#20496301)

though keep in mind that the 48GB might really be 49 or 50* to provide spare sectors in the same manner hard drives do.

*numbers not necessarily based on any factual information.

1 MiB - 1 MB = wear leveling (1)

tepples (727027) | more than 6 years ago | (#20498017)

though keep in mind that the 48GB might really be 49 or 50* to provide spare sectors in the same manner hard drives do.
Based on my experience buying CF and SD cards, this is actually where the 4.8 percent difference between a MB and a MiB goes. When you buy, say, a 512 MB memory card, it is actually a 512 MiB (536 MB) memory card where 4.8 percent of the sectors are spared. I've bought three "1 GB" cards, each of which had 1,024 MB available for files, folders, and allocation data.
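tepples' roughly-4.8-percent figure follows directly from the two definitions of "mega":

```python
# Gap between a binary and a decimal megabyte, as a fraction of the decimal one.
mib = 2**20                 # 1,048,576 bytes
mb = 10**6                  # 1,000,000 bytes
overhead = mib / mb - 1     # ~4.86%: the headroom available for sparing
card_bytes = 512 * mib      # 536,870,912 raw bytes on a "512 MB" card
print(f"{overhead:.2%} spare, {card_bytes} raw bytes")
```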

Re:Flash lifespan in perspective (2, Informative)

smallfries (601545) | more than 6 years ago | (#20497343)

You're assuming that the 2GB a day could be spread evenly over the disk. This would vary depending on how much free space you have on the device. If your drive is 1% full then you can distribute your writes over the other 99%. But most people don't keep their storage mainly empty. In fact people tend to run just under the limit - hence the saying that crap always expands to fill the available space. If your drive was 99% full then you can't distribute the writes over the parts with data (as it would have to be moved somewhere else negating the benefit), and then you run into the problem with the limited duty cycle.

Having said all of that, I don't think my throughput is anything like 2GB, and most of it would be swap (which hasn't happened much this past couple of years) and /tmp. Given that /tmp would be better suited to a RAM disk anyway, I don't think either would pose a problem, and the lifespan of these flash disks is probably comparable to a magnetic platter. As another reply pointed out, when the duty cycle is exceeded you can't alter the sector anymore. On a magnetic disk when a sector dies you're SOL. Once the price comes down to an affordable level these drives will be beautiful...

Re:Flash lifespan in perspective (0)

Anonymous Coward | more than 6 years ago | (#20497591)

Flash drives move data around to distribute writes, so it does not matter how much space one actually uses. This data movement is completely hidden from the OS, and for all practical purposes those flash drives/cards look like a normal disk.

Note also that when the data movement algorithm is not done right, the data on the flash can be corrupted when it is powered off during writes. And since the moves are hidden, using a journaling file system does not help to protect against such corruption.

Are they really run near the limit? (1)

tepples (727027) | more than 6 years ago | (#20498059)

If your drive is 1% full then you can distribute your writes over the other 99%. But most people don't keep their storage mainly empty. In fact people tend to run just under the limit
Citation needed, at least for common uses of flash memory. One common use case for flash memory is in digital cameras. A photographer shoots a "roll", copies everything from the pictures folder on the flash card to a larger drive, and deletes the "roll" from the flash card. Even for larger drives such as hard disk drives, Windows encourages the user to keep 15 percent of the drive free so that Defrag can work more efficiently.

Re:Flash lifespan in perspective (1)

ivan256 (17499) | more than 6 years ago | (#20498519)

You're assuming that the entire capacity of the chip would be exposed to the end user, and none would be reserved for dynamic load leveling.

You're also assuming that unchanged data would never be moved by the load leveling algorithm.

I don't think either are valid assumptions, and you're just plain wrong.

Re:Flash lifespan in perspective (1)

smallfries (601545) | more than 6 years ago | (#20500549)

OK, all three replies came up with the same points but I'll reply to you since you've put them most succinctly. Your point about reservation makes sense, and it does change the overall picture. Spare blocks on the disk would increase the ability to do load leveling. So let's consider the case where we have a disk with 10 blocks, and let's say 4 are free (these can be a mixture of really free and reserved blocks). If I repeatedly write to a used block (updating / overwriting) then I have five choices of where to put the data - either the original block, or mark that block as free and pick one of the other 4 blocks.

So I claim that the load can be leveled over 50% of the disk at most - this depends entirely (as you've pointed out) on my second assumption. If I decide that I'm going to pick one of the other 5 used slots to write to, then I have to keep that data. So that block needs to be copied to one of the original choice of 5 places. Unless I choose to keep the dirty data in memory, and hope that I don't run out of storage / lose the power then I can indeed level over all 10 blocks in the disk. But is it reasonable to assume that I can cache the dirty values this way to level over the whole disk rather than just the free space?

Re:Flash lifespan in perspective (1)

ivan256 (17499) | more than 6 years ago | (#20500701)

So that block needs to be copied to one of the original choice of 5 places. Unless I choose to keep the dirty data in memory, and hope that I don't run out of storage / lose the power then I can indeed level over all 10 blocks in the disk. But is it reasonable to assume that I can cache the dirty values this way to level over the whole disk rather than just the free space?


When you move a block, you don't have to "cache" the data. You wouldn't erase the data from the original location until it was successfully written to the new location. You also wouldn't report the incoming block as successfully written until the entire process had been completed, thus the only thing you could potentially lose in the case of a power loss would be the in-flight operation.... Which you should have already considered potentially vulnerable to power loss anyway.

The short answer is "yes". You can reasonably assume that you can level over the entire pool of blocks in this fashion while maintaining the integrity of the previously committed data. Existing load leveling algorithms already do this.
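The write-then-commit-then-erase ordering described above can be sketched as a toy flash translation layer (all names invented for illustration; real FTLs are considerably more involved):

```python
class ToyFlash:
    """Toy flash translation layer: each logical-block write goes to the
    least-worn free physical block, and the stale copy is only erased
    after the new write has been committed to the mapping."""

    def __init__(self, num_physical, num_logical):
        assert num_logical < num_physical   # spare blocks are needed to level into
        self.data = [None] * num_physical
        self.erase_count = [0] * num_physical
        self.mapping = {}                   # logical block -> physical block
        self.free = set(range(num_physical))

    def write(self, logical, value):
        # 1. Pick the least-worn free physical block.
        target = min(self.free, key=lambda p: self.erase_count[p])
        self.free.remove(target)
        # 2. Write the new data first.
        self.data[target] = value
        # 3. Commit: update the mapping. Only now is the write "done".
        old = self.mapping.get(logical)
        self.mapping[logical] = target
        # 4. Erase the stale copy last, so a power loss mid-write never
        #    loses previously committed data -- only the in-flight write.
        if old is not None:
            self.data[old] = None
            self.erase_count[old] += 1
            self.free.add(old)

    def read(self, logical):
        return self.data[self.mapping[logical]]
```

Hammering one logical block then spreads erases round-robin across the whole free pool (the original block plus every spare), which is exactly the leveling-over-free-space the thread is arguing about.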

Re:Flash lifespan in perspective (1)

smallfries (601545) | more than 6 years ago | (#20501439)

If I write to a block that is already in use (say 2) then I have two blocks in flight, my new block N, and the contents of block 2 which either have to go somewhere else or get cached in memory. So *moving* the contents of a block so that I can write into that location takes two writes instead of one. How is this improving the duty cycle?

Re:Flash lifespan in perspective (2, Interesting)

msgtomatt (1147195) | more than 6 years ago | (#20498495)

Your calculation of 24,000 days is for when the drive reaches total failure. Your logic does apply to camcorder applications, in which data is always written sequentially. But in PC applications you do not write the information as a bit stream; you write things fairly randomly. When you change the contents of a file without changing the file size, you update the same physical memory locations. So after you update your file 1,000 times, it becomes corrupted and you lose your data. Once a single byte becomes corrupted, the entire sector can no longer be used. So in the worst-case scenario, this fancy drive would not even last you a day before you started to lose information.

To prevent data loss, these drives will require a good CRC algorithm or a RAID configuration that can repair damaged files when they are moved to new sectors. Also, it might be possible to convert the random access to sequential access by moving the file to the end of a circular stream buffer every time it is written to. But this would lead to fragmentation problems that might be impossible to solve.

Re:Flash lifespan in perspective (0)

Anonymous Coward | more than 6 years ago | (#20501689)

Flash doesn't write over the same spot over and over if you're updating the same file. Flash drives use wear leveling - edit and re-save a file and it's actually saved to some other "empty" spot. (File fragmentation isn't a problem, since there aren't any read/write heads to thrash around as in a magnetic drive). You'd only burn through specific parts of a flash drive if you do a lot of writes while the drive is nearly full.

Further: a quick googling reveals that these drives can handle a million writes per sector.

The article extrapolates to 384 GB... (1)

crvtec (921881) | more than 6 years ago | (#20495661)

"The article extrapolates to 384 GB..."

This is one case where I am DEFINITELY not RTFA.

implications of flashing (1, Funny)

one_who_uses_unix (68992) | more than 6 years ago | (#20495667)

It used to be that there were serious implications if you engaged in flashing, potentially including jail time!

The world has come a long way when any geek can flash thousands of times and not have problems with his hard disk.

Re:implications of flashing (0)

everphilski (877346) | more than 6 years ago | (#20495869)

when any geek can flash thousands of times and not have problems with his hard disk.

in a row?

IPod (3, Funny)

dazedNconfuzed (154242) | more than 6 years ago | (#20495713)

iPod Touch, meet Hynix 48-GB Flash MCP!

Re:IPod (0)

Anonymous Coward | more than 6 years ago | (#20499173)

That's not funny. That would actually make it useful.

Imagine being able to carry around 192GB of music, pictures, and videos! Now there's something I wouldn't mind shelling out $600 for, but only if the price doesn't drop $200 3 months later.

media storage (2, Insightful)

Floritard (1058660) | more than 6 years ago | (#20495739)

It is just writing that is limited right? Myself, I'd love to have the space to host all my media, most of which just sits archived on dvd-r. I'd only need to write to the disk once. Seems most people, aside from those who do video production, really only need large amounts of space to serve/store media. Be cool to just keep a 200 gig SATA for regular use and just keep buying these suckers and fillin' them up for all that media. Later, when they're cheap that is.

Re:media storage (0)

Anonymous Coward | more than 6 years ago | (#20497725)

Bits are stored as electrical charges, which can leak over time, so there is also a limit on how long the data can remain intact. Typical values are on the order of 20 years or so. That is likely a parameter based on statistics from accelerated tests, not something tested on a per-chip basis; local defects/impurities can affect it. I am also not sure whether alpha particles from the packaging would affect that lifetime.

Re:media storage (1)

Bigjeff5 (1143585) | more than 6 years ago | (#20497781)

It is just writing that is limited right?
Actually, it "flashes" on both read and write, so while limiting your writes will extend the life of the disk, constant reading is just the same as constant writing.

There's a post above that explains it better, but basically, flash writes by bursting a charge through an insulator to the storage bit, changing the charge. To read, it sends another burst through the insulator and reads the value based on the final charge (initial burst charge + what was in the storage bit).

It's the insulator that wears down, and once it does the storage bit can't store anything anymore.

Re:media storage (1)

Aetuneo (1130295) | more than 6 years ago | (#20498933)

Cheap as in 4.4GB for between 20 and 30 cents? That's the current price for DVD+Rs, as I recall (HD space is about 1GB for 20 - 30 cents). When you can get a 4GB flash card (SD is a nice form factor - easy to store and move around - so I'll say an SD card) for under 40 cents, it might be ready to compete with DVD+Rs for write-once applications. Until then, I'll go with the DVDs, thanks.

HyperDrive4 (1)

myspys (204685) | more than 6 years ago | (#20495797)

Something that should be mentioned when talking about these things is HyperDrive4 [hyperossystems.co.uk]

Nice Butt... (2, Funny)

eno2001 (527078) | more than 6 years ago | (#20496117)

...would you really want to buy something from a company named Hynix? At worst it sounds like a Unix that smells like ass. At best it sounds like a bunch of stoned Unix devels.

Re:Nice Butt... (0)

Anonymous Coward | more than 6 years ago | (#20497393)

Nice Butt...
Thanks! I work out all the time, and reaping burns a lot of calories :)
-- Death

What about RAID? (4, Interesting)

GreatBunzinni (642500) | more than 6 years ago | (#20496547)

Recently, this whole flash drive business has been popping up in the news, with announcements of a whole gob of commercial solid-state drives based on flash technology and the like. Nonetheless, there is a big void in the flash drive world that, at least at first glance, could be easily filled with trivial technology and off the shelf products but no one seems to be paying any attention.

I'm talking about RAID + flash cards.

Flash cards are everywhere and, although their cost per GB is rather high, a 1GB card is easily affordable (1GB microSD card for less than 10 euros) and prices are dropping constantly. If someone decided to build a RAID card reader, we could easily get a foot in the door. For about 60 euros it would be possible to get something between a slowish but reliable 6GB flash drive or a speedy and snappy 1GB flash drive.

So why exactly hasn't anyone thought of this? We already have IDE CF card readers, some models supporting 2 cards, that can be had for about 6 euros. Why not a RAID flash card reader?

Re:What about RAID? (1)

dfn_deux (535506) | more than 6 years ago | (#20497833)

The difference is that not all NAND flash is created equal. The multi-level cell (MLC) type commonly used in commodity flash devices isn't nearly as fast nor as reliable as the single-level cell (SLC) type used in the high-speed drive replacements we are seeing hit the market now. The difference isn't trivial when an mtron SLC SSD can do about 5 times the throughput of competing higher-density SSDs that use MLC NAND flash.

I don't work for mtron, but I am a satisfied customer.

Re:What about RAID? (1)

thedohman (932417) | more than 6 years ago | (#20498387)

Oh, you mean a device like a USB Hub + Software RAID?

Sure, it won't be as fast as SATA or a controller connected to the PCI(express) bus, but it is flash in a RAID and at least as fast as the 5400rpm IDE HD in my laptop.... (no I haven't tried it, but all my flash drives do rate faster than my HD according to SiSoft)

I remember seeing something like this quite a while ago... I think 256MB sticks were high-end at the time, and they put 4 of them in a single hub to get a full, amazing GB of flash storage. Old News.

Win2k on Compactflash (0)

Anonymous Coward | more than 6 years ago | (#20496605)

I've been running Windows 2000 on a 4GB Kingston Ultimate Compactflash card for the past eight months without a problem.

It has 100,000 write cycles per sector and I reckon about 8000 sectors, so that's 800 million writes for the lifetime of the card.

My Win2k writes about 10,000 times per session, so with three sessions a day it should last 73 years. The card only holds its data for 10 years so it's no problem.

Not parallel (2, Informative)

RecessionCone (1062552) | more than 6 years ago | (#20496683)

It's not clear if it's possible to write to them in parallel -- if so the device should be pretty damn fast.
It's pretty obvious that it's not possible to write to this array of chips in parallel, because you just can't fit enough pins in a tiny package to provide the necessary interface for talking to 24 chips simultaneously. Also, take a look at the picture from TFA: http://www.koreatimes.co.kr/upload/news/070905_p10_hynix.jpg [koreatimes.co.kr] - you can see that all the leads to the different chips are wired to the same pads. This doesn't prove my point - they could all be power or ground connections, but looking at the complexity of the packaging here supports the idea that providing a separate interface to each of these chips would be very expensive and difficult. In short, this is a capacity optimized device, it's not meant to break speed records.

100,000 write cycles is plenty... (1)

tyme (6621) | more than 6 years ago | (#20496723)

100,000 write cycles is plenty, so long as you buffer the writes and limit their frequency. All you need to do is either put a big honking RAM writeback cache next to the FlashRAM, or enforce writeback caching in the OS. If you can get the write frequency down to about two writes per hour, and do good load leveling, your FlashRAM will last for 5 years, which is about as good as most consumer grade hard disks (and possibly better, since the 'expired' FlashRAM drive could still be perfectly readable). Two writes per hour may sound like a very aggressive goal, but it wouldn't be so hard if you preferred to evict clean pages from cache before dirty pages (which is not so bad, since the read latency of FlashRAM is not nearly as long as that for a spinning platter, so refilling evicted pages is not too expensive).
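The "evict clean pages before dirty ones" policy described above might look something like this sketch (names and structure invented for illustration, not any real OS page cache):

```python
from collections import OrderedDict

class WriteBackCache:
    """Toy page cache that evicts the least-recently-used *clean* page
    first, so dirty pages stay resident and repeated writes to the same
    page coalesce into a single eventual flash write."""

    def __init__(self, capacity, flash_write):
        self.capacity = capacity
        self.flash_write = flash_write   # callback: (page, data) -> None
        self.pages = OrderedDict()       # page -> (data, dirty), LRU order

    def _evict_one(self):
        # Prefer the oldest clean page: dropping it costs nothing.
        for page, (data, dirty) in self.pages.items():
            if not dirty:
                del self.pages[page]
                return
        # No clean page available: flush the oldest dirty page to flash.
        page, (data, _) = next(iter(self.pages.items()))
        self.flash_write(page, data)
        del self.pages[page]

    def write(self, page, data):
        if page in self.pages:
            self.pages.move_to_end(page)        # refresh LRU position
        elif len(self.pages) >= self.capacity:
            self._evict_one()
        self.pages[page] = (data, True)         # dirty; no flash write yet

    def flush(self):
        # Periodic writeback: this is where the "two writes per hour"
        # budget from the comment above would be enforced.
        for page, (data, dirty) in self.pages.items():
            if dirty:
                self.flash_write(page, data)
                self.pages[page] = (data, False)
```

With this policy, a thousand rewrites of the same page cost exactly one flash write at flush time, which is the whole point of the parent's suggestion.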

According to NASA (2, Funny)

brunes69 (86786) | more than 6 years ago | (#20496851)

Hynix, has announced they have stacked 24 flash chips in a 1.4mm thick multi-chip package

According to NASA, it may even be possible to stack 48 chips in a 2.8mm package. Scientists also speculate someday we may be able to achieve up to 240 chips in a 14mm thick package.

Master Control Program (1)

fear025 (763732) | more than 6 years ago | (#20497249)

Has anyone else wondered, with all of this extra data at its command, how is Tron going to defeat the Master Control Program this time?

25um thin (0)

Anonymous Coward | more than 6 years ago | (#20497685)

For anyone interested, 25um thin is VERY impressive for thinning silicon. (and I am a microelectronics package engineer) It's thinner than a sheet of 8x11 paper, and allows the wafer to flex so easily you can literally roll it up like paper. (this is silicon, mind you, which is brittle like glass!)

48GB MCP? (1)

T-Ranger (10520) | more than 6 years ago | (#20497753)

Wow, and all the graphics were done on a Super Foonly, which had at best a couple of megabytes.

Parallel read unlikely (1)

mako1138 (837520) | more than 6 years ago | (#20501053)

Flash chips have a standard interface: a set of control lines and an 8 or 16 bit address/data bus. In multichip flash packages, each chip is typically mapped to a portion of the address space. This also allows backwards compatibility when the die density eventually doubles. (e.g. I was using some Micron parts where the 4Gb part was a 2x 2Gb MCP. A couple of months later the chip was revised to 1x 4Gb.)

If you want parallel read, you're going to need a whole lotta pins, and corresponding board area. Whereas the point of the MCP is to reduce pin count and board area. MCPs are for highly integrated devices like cellphones and portable media players, where space is at a premium. The throughput of one Flash chip is sufficient for these applications.
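The "each chip mapped to a portion of the address space" scheme amounts to using the top address bits as a chip select. A sketch of the decode (simplified: real parts use CE# pins and command/address latching, not a plain linear decoder; the 2 GB per-die figure is just the value that makes 24 dice come out to the article's 48 GB):

```python
# Decode a linear byte address into (die, offset) for a stack of
# identical dice mapped back-to-back in one address space.
DIE_SIZE = 2 * 1024**3   # assume 2 GB (16 Gb) per die: 24 dice -> 48 GB

def decode(addr, num_dice=24):
    die = addr // DIE_SIZE      # top bits pick which die's CE# to assert
    offset = addr % DIE_SIZE    # remaining bits go out on the shared bus
    if die >= num_dice:
        raise ValueError("address beyond end of stack")
    return die, offset

# All 24 dice share one bus, so only one die can drive it per transfer:
# capacity scales 24x, but peak throughput stays that of a single chip --
# which is exactly why parallel read/write across the stack is unlikely.
```

This is also why the doubling trick mentioned above works: when a 2x 2Gb MCP is revised to a 1x 4Gb die, the external address map doesn't change at all.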