
Garbage Collection Algorithms Coming For SSDs

Soulskill posted more than 5 years ago | from the take-out-the-trash dept.

Data Storage 156

MojoKid writes "A common concern with the current crop of Solid State Drives is the performance penalty associated with block-rewriting. Flash memory is made up of cells, usually organized into 4KB pages that are in turn arranged in blocks of 512KB. When a block is unused, data can be written to it relatively quickly. But if a block already contains some data, even if it fills only a single page, the entire block must be re-written: whatever data is already present must be read, combined with or replaced by the new data, and the whole block written back. This process takes much longer than simply writing data straight to an empty block. It isn't a concern on fresh, new SSDs, but over time, as files are written, moved, deleted, or replaced, many blocks are left holding what is essentially orphaned or garbage data, and long-term performance degrades because of it. To mitigate this problem, virtually all SSD manufacturers have incorporated, or soon will incorporate, garbage collection schemes into their SSD firmware which actively seek out and remove the garbage data. OCZ, in combination with Indilinx, is poised to release new firmware for their entire line-up of Vertex Series SSDs that performs active garbage collection while the drives are idle, restoring performance to like-new condition even on a severely 'dirtied' drive."
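The read-modify-write penalty the summary describes can be put into a toy cost model. The 4KB/512KB figures come from the summary; `write_cost` and its operation counts are illustrative, not actual drive internals:

```python
PAGE_SIZE_KB = 4
PAGES_PER_BLOCK = 128          # 128 * 4 KB = one 512 KB erase block

def write_cost(valid_pages_in_block, pages_to_write):
    """Pages physically programmed to service a write of `pages_to_write`."""
    if valid_pages_in_block == 0:
        # Fresh block: just program the new pages.
        return pages_to_write
    # Dirty block: read the surviving data, erase, then rewrite everything.
    return valid_pages_in_block + pages_to_write

# Writing one 4 KB page into an empty block programs 1 page...
print(write_cost(0, 1))    # -> 1
# ...but the same write into a block holding 100 valid pages forces a
# read-merge-erase-rewrite of 101 pages: roughly 100x write amplification.
print(write_cost(100, 1))  # -> 101
```

The model ignores wear leveling and mapping tables; it only captures why a "dirtied" drive slows down.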



The logical next step... (3, Insightful)

Joce640k (829181) | more than 5 years ago | (#28993177)

A weakness was found in first generation drives, the second generation drives fixed it.

Film at 11.

Re:The logical next step... (1, Informative)

Anonymous Coward | more than 5 years ago | (#28993197)

This is the third generation, the second was to fix speed degradation through fragmentation.

Re:The logical next step... (5, Funny)

miknix (1047580) | more than 5 years ago | (#28993233)

This is the third generation, the second was to fix speed degradation through fragmentation.

And the fourth generation will fix SSDs' short lifespans caused by massive GC activity.

Re:The logical next step... (0)

Anonymous Coward | more than 5 years ago | (#28993469)

I actually wondered about this. Hopefully the mfgs have thought about it too, and not just in the sense of "they'll have to buy more of our SSD's".

Re:The logical next step... (1)

jbolden (176878) | more than 5 years ago | (#28993531)

Good point, I'm worried about that too. SSDs already have limited write endurance, so I'm not so sure about the garbage collection idea.

Re:The logical next step... (2, Insightful)

amRadioHed (463061) | more than 5 years ago | (#28993589)

How would this make a difference? The blocks would have to be wiped out next time they are written to anyway, the only difference here is that the blocks are cleared during idle time so you don't have to wait for it.

Re:The logical next step... (1)

jbolden (176878) | more than 5 years ago | (#28993949)

Eager collection: Block A is full on day 100. 10 of its cells are invalid (i.e. ready to be deleted), but 99% of the other blocks still have free space. The 10 cells get wiped clean anyway, which allows 10 new writes; the block fills up again around day 120 with another 10 invalid cells....

Lazy collection: by day 200 there aren't free blocks anymore, and block A has 40 cells that are invalid. They all get deleted at once. Since there are now 40 free cells, this lasts until day 280; say on day 300 it gets wiped again and 40 cells come free....

So in one situation you are erasing the block every 20 days, in the other every 100 days.
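The two scenarios, reduced to arithmetic. The 20/100-day intervals are from the example above; the endurance figure is an assumed ballpark for flash of this era, not a quoted spec:

```python
ENDURANCE = 10_000                 # assumed erase cycles per block (ballpark)

eager_erases_per_year = 365 / 20   # wipe block A every time 10 cells go stale
lazy_erases_per_year = 365 / 100   # wait until 40 cells are stale

# Eager collection wears this block about five times faster...
print(round(eager_erases_per_year / lazy_erases_per_year))  # -> 5

# ...though at either rate a 10,000-cycle block lasts a very long time.
print(round(ENDURANCE / eager_erases_per_year))             # -> 548 (years)
```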

Re:The logical next step... (3, Interesting)

pantherace (165052) | more than 5 years ago | (#28994249)

Anything measured in rewrites over hours or larger time spans is not (or shouldn't be) that much of a problem for modern flash. Someone calculated that you'd have to be reflashing a particular device every 15 minutes for 5 years to reach the flash's rewrite limit. That was several years ago. (It may have been 5 minutes as opposed to 15, but I'll give the more conservative number. This number appears to be from 2000 or 2001, as the device was the Agenda VR3 dating from about then.)

Assuming it's as good as the flash from that example, rewriting every hour results in 20 years. I don't know about you, but I don't have many hard drives from 20 years ago.

Now, if it's rewriting all the time, that could go down drastically, and quality might be different, but every 20 days shouldn't be a problem unless you've got really really crappy flash, by the standards of 9 years ago.
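The parent's figures can be checked directly (the 15-minutes-for-5-years endurance claim is the commenter's, not a datasheet value):

```python
# One rewrite every 15 minutes, sustained for 5 years, implies an
# endurance of at least:
cycles = 5 * 365 * 24 * 4        # 4 rewrites/hour -> 175,200 erase cycles

# Spending the same budget at one rewrite per hour lasts:
years_at_hourly = cycles / (365 * 24)
print(cycles)                    # -> 175200
print(years_at_hourly)           # -> 20.0
```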

Re:The logical next step... (2, Interesting)

Magic5Ball (188725) | more than 5 years ago | (#28993271)

Not to be cynical, but these new algorithms, if implemented poorly, have the potential to run down the limited number of write cycles on the cells. Not that this could be strategically manipulated in any way...

Re:The logical next step... (5, Informative)

zach297 (1426339) | more than 5 years ago | (#28993301)

From the summary: "This isn't a concern on fresh, new SSDs, but over time, as files are written, moved, deleted, or replaced, many blocks are left holding what is essentially orphaned or garbage data, and their long-term performance degrades because of it." They are talking about clearing sectors of garbage data that are no longer in use. It would have to be done anyway before the sector can be reused. The new firmware is simply doing that time-consuming step early, while the drive is idle. The actual number of write cycles is not changing.

Re:The logical next step... (1)

Magic5Ball (188725) | more than 5 years ago | (#28993565)

The actual number of write cycles is not changing.

That would be ideal, but if the designers of the algorithm could predict with perfect accuracy the usage patterns of the drive, they would be more profitably in a different line of business. The "would have to be done anyway" assumes quite a bit of foreknowledge, and "properly implemented" is an assumption that cannot be tested except through time in the field.

Contemplate the following: some periodic process (log, network manager, etc.) makes a few 3KB files which aren't grouped through NCQ, and which only need to last 30 minutes each. As a hard disk, you don't know that second part, so when is the disk "idle" enough for this cleaning to take place? Active GC on a soon-to-be-vacated block is at least one pointless write, when the block could just be marked empty. At what point *should* a sector be reused in this situation?

Now, think about the multitude of ways in which even relatively simple things like a nightly append and tar of a column store (or even things like volume shadow copies) can fall outside assumptions about what is, or is not, data that needs to be consolidated.

I would imagine that drives with a "server profile" GC could sell for much more than drives with a "desktop profile" GC...

Re:The logical next step... (2, Informative)

broken_chaos (1188549) | more than 5 years ago | (#28993769)

I *think* you're misunderstanding how this works, actually.

When a block is written to, the entire block (512KiB) has to be wiped and rewritten from a blank state. When a block is emptied entirely, it does not get touched - just marked as empty. When new data is written to it, the 'empty' block has to actually be wiped, and then the new data written on the just-blanked block.

What this seems to be proposing is to, periodically, actually wipe the blocks marked as empty, when the SSD is otherwise idle - meaning deletes are still fast, and new writes would speed up. I imagine rewrites would stay comparatively slow, though.

(I might be way off on this - someone correct me if I am.)

Re:The logical next step... (0)

Vectronic (1221470) | more than 5 years ago | (#28993575)


Pick a random block:
1. GC comes along, swoops up block, eliminates junk, writes new junkless block
(awhile later)
2. OS requires write, swoops up block, combines data, writes new block.

Step 1, and 2 could be combined

2. OS requires write, swoops up block, GC kicks in, removes junk, OS combines data, writes new block.

Slightly longer write time, but one less write, unless of course SSDs can erase data without having to write over it with zeros or whatever.

Or better yet, why not erase the shit properly the first time? Assuming that's what they mean by "junk" cause if I buy an SSD and it wanders around eliminating random files cause I haven't accessed them in 2 months it's going to end up a couple "blocks" away after I see how far I can frisbee it.

Re:The logical next step... (1)

ion.simon.c (1183967) | more than 5 years ago | (#28993767)

...cause if I buy an SSD and it wanders around eliminating random files cause I haven't accessed them in 2 months...

Surely you're suggesting this in jest?

Re:The logical next step... (4, Informative)

zach297 (1426339) | more than 5 years ago | (#28993825)

Read [] . Clearing out an entire block is different from a write. Writing to an SSD is only possible by setting bits to 0. So when I save something to the SSD, it is really only writing down the 0's of my file and leaving the 1's alone. This is not the destructive part of using flash. The part that uses up actual write cycles is clearing a block back to 1's. This is explained in [] .

Taking from your list of actions: Pick a random block:
1. GC comes along, swoops up block, eliminates junk by flashing entire block into 1's (awhile later)
2. OS requires write, swoops up block, writes only the 0's from the file leaving everything else untouched.

In this manner each step does half of the writing amounting to one write when combined. This is exactly how all SSDs work. The major difference announced in the article is that they are separating the two steps.

Normally this is impossible because the SSD doesn't know if something can be cleared until the OS is trying to overwrite it. This makes writes take longer. The new firmware hopes to make writes faster by moving the first step into the idle time of the drive (by figuring out when an overwritten block is unused), sort of like how you can set up a download to only run when you're not using the internet connection. It allows for more efficient use of time that the drive would otherwise spend doing nothing.
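The two-phase scheme described above can be sketched with a toy controller model (class and method names are made up for illustration; this is not Indilinx's actual firmware logic). The point it demonstrates: idle-time GC moves erases out of the write path without changing their total number.

```python
class ToyController:
    """Counts block erases, split by whether they stalled a host write."""

    def __init__(self, nblocks):
        self.erased = nblocks   # blocks pre-erased and ready to program
        self.live = 0           # blocks holding current data
        self.stale = 0          # blocks whose data was deleted, not yet erased
        self.inline_erases = 0  # erases paid for during a write (slow path)
        self.idle_erases = 0    # erases done ahead of time while idle

    def write(self):
        if self.erased == 0:          # no clean block: erase inline, stalling
            self.stale -= 1
            self.erased += 1
            self.inline_erases += 1
        self.erased -= 1
        self.live += 1

    def delete(self):
        self.live -= 1
        self.stale += 1               # data is gone, block not yet erased

    def idle_gc(self):
        self.idle_erases += self.stale
        self.erased += self.stale     # pre-erase everything stale
        self.stale = 0

def churn(drive, ops, gc_between):
    for _ in range(ops):
        drive.write()
        drive.delete()
        if gc_between:
            drive.idle_gc()

lazy, eager = ToyController(4), ToyController(4)
churn(lazy, 100, gc_between=False)
lazy.idle_gc()                        # flush the last stale blocks
churn(eager, 100, gc_between=True)

# Identical total wear either way...
print(lazy.inline_erases + lazy.idle_erases)    # -> 100
print(eager.inline_erases + eager.idle_erases)  # -> 100
# ...but the idle-GC drive never stalls a write waiting on an erase.
print(lazy.inline_erases, eager.inline_erases)  # -> 96 0
```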

Re:The logical next step... (1)

the_one(2) (1117139) | more than 5 years ago | (#28994713)

Wouldn't the number of erases drastically drop? Imagine that every block on the SSD has been written to once. If you want to write anything now, you'd have to read a page, erase, and write it back with the change. So if on the filesystem this page was mostly empty, a whole lot of garbage was just written unnecessarily. If the page had been preemptively erased, you'd only have to write the new stuff, and the rest of the page would still be free.

Re:The logical next step... (2, Funny)

Anonymous Coward | more than 5 years ago | (#28993299)

The garbage man can, Marge, the garbage man can!

Joke (0, Flamebait)

suso (153703) | more than 5 years ago | (#28993917)

That's funny I always thought scuzzy drives needed garbage collection. Ta-dit-boom. Thanks, try the veal.

Do cleanup in the OS (1)

mrcaseyj (902945) | more than 5 years ago | (#28993193)

It seems like this function should be performed in the operating system. The firmware should just make available the info and commands an OS needs to do the right thing.

Re:Do cleanup in the OS (4, Informative)

mattventura (1408229) | more than 5 years ago | (#28993217)

I think it ends up being like NCQ. The drive's processor can be much more specialized and can do the processing much more efficiently. Not to mention, it might require standards to be changed, since some buses (like USB, IIRC) don't provide commands to zero-out a sector on a low level. On an SSD, just writing a sector with zeros doesn't work the same as blanking the memory. It just makes the drive use a still-blank sector for the next write to that sector. The problem only comes when you run out of blank sectors.

Re:Do cleanup in the OS (1)

maxwell demon (590494) | more than 5 years ago | (#28994063)

On an SSD, just writing a sector with zeros doesn't work the same as blanking the memory.

I think you mean a sector with all ones.
Is there a reason why the firmware couldn't detect a write of all-ones and treat it as a command to clear the sector instead of remapping it? Note that semantically it would still do the same thing (i.e. the next read of that sector will indeed return all 1's), and since writes of all-1 blocks are otherwise rare, it wouldn't hurt performance of normal operation.
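Sketching the suggestion (hypothetical firmware logic, not a feature of any real drive): the controller compares incoming data against all-ones and, on a match, drops the mapping instead of programming a new sector.

```python
SECTOR = 512

def handle_write(mapping, lba, data, free_sectors):
    """mapping: LBA -> physical sector; free_sectors: pool of blank sectors."""
    if data == b'\xff' * len(data):
        # Semantically identical to programming all-ones: a read of an
        # unmapped/erased sector returns all-ones anyway.
        mapping.pop(lba, None)
        return "trimmed"
    mapping[lba] = free_sectors.pop()   # normal remap to a blank sector
    return "programmed"

mapping, pool = {}, [7]
print(handle_write(mapping, 0, b'\xff' * SECTOR, pool))  # -> trimmed
print(handle_write(mapping, 0, b'\x00' * SECTOR, pool))  # -> programmed
```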

Re:Do cleanup in the OS (1) (245670) | more than 5 years ago | (#28993239)

Um...How 'bout NO? This kind of thing absolutely should NOT be handled by the operating system. It should be entirely platform independent.

Re:Do cleanup in the OS (1)

mikem170 (698970) | more than 5 years ago | (#28993343)

Will the firmware in the drive be able to do this without understanding the filesystem?

I don't know the structures of filesystems in detail off the top of my head, but the ones I am a little familiar with have directories with pointers to linked lists of sectors and allocation tables. When files are deleted the directory pointers are removed and the allocation tables are updated.

How does the firmware know what sectors are empty if it doesn't understand this stuff?

I am curious how it works, if it doesn't need knowledge of the filesystem. FAT, NTFS, UFS, EXT2/3/4, ZFS, etc are all very different.

Re:Do cleanup in the OS (1)

v1 (525388) | more than 5 years ago | (#28993429)

Will the firmware in the drive be able to do this without understanding the filesystem?

Just off the top of my head I can see where the onboard controller would have a big advantage. If we simplify the case and say the drive uses 2k blocks and the file system can't be modified to use 2k blocks (lame!), then the onboard controller could watch for situations where a cell (of four 512 byte blocks) is frequently being reflashed because a single one of the four is being changed. If it could then look at history and determine that somewhere else are three more blocks that always get changed at the same time, it could remap them so all four use the same flash block. Then the next time a file is saved (something the controller has no higher understanding of), it only has to reflash one block instead of two (or three or four).

Of course, ideally the OS could just be told that the device uses 2k blocks instead of 512 byte ones, which would make the above totally unnecessary.

Re:Do cleanup in the OS (3, Informative)

Hal_Porter (817932) | more than 5 years ago | (#28993685)

How does the firmware know what sectors are empty if it doesn't understand this stuff?

I am curious how it works, if it doesn't need knowledge of the filesystem. FAT, NTFS, UFS, EXT2/3/4, ZFS, etc are all very different.

The filesystem tells the SSD "LBAs x to y are now not in use" using the ATA TRIM command. []

Over-provisioned SSDs have ready-deleted blocks, which are used to store bursts of incoming writes and so avoid the need for erase cycles. Another tactic is to wait until files are to be deleted before committing the random writes to the SSD. This can be accomplished with a Trim operation. There is a Trim aspect of the ATA protocol's Data Set Management command, and SSDs can tell Windows 7 that they support this Trim attribute. In that case the NTFS file system will tell the ATA driver to erase pages (blocks) when a file using them is deleted.

The SSD controller can then accumulate blocks of deleted SSD cells ready to be used for writes. Hopefully this erase on file delete will ensure a large enough supply of erase blocks to let random writes take place without a preliminary erase cycle.

Actually I used to work on an embedded system that used M Systems' TrueFFS. There the flash translation layer actually understood FAT enough to work out when a cluster was freed. I.e. it knew where the FAT was and when it was written it would check for clusters being marked free at which point it would mark them as garbage internally.
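The TRIM hand-off described above, end to end, in a toy model (a real ATA Data Set Management command carries ranges of LBAs and the drive tracks physical blocks; the names here are illustrative):

```python
class ToySSD:
    def __init__(self):
        self.stale_lbas = set()      # known garbage, erasable while idle

    def trim(self, lbas):
        """Host says: these LBAs no longer hold live data."""
        self.stale_lbas.update(lbas)

class ToyFilesystem:
    def __init__(self, ssd):
        self.ssd = ssd
        self.files = {}              # filename -> list of LBAs

    def delete(self, name):
        lbas = self.files.pop(name)
        self.ssd.trim(lbas)          # the hint legacy storage stacks never sent

ssd = ToySSD()
fs = ToyFilesystem(ssd)
fs.files["log.txt"] = [10, 11, 12]
fs.delete("log.txt")
print(sorted(ssd.stale_lbas))        # -> [10, 11, 12]
```

Without the `trim` call, the drive has no way to learn those LBAs are garbage until the filesystem overwrites them.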

Re:Do cleanup in the OS (3, Interesting)

RiotingPacifist (1228016) | more than 5 years ago | (#28993367)

Why? It's low-level, but it doesn't affect the filesystem above it.
On the list of reasons why it SHOULD be done by the OS, not the firmware:
*The OS has a better clue about idleness
*The OS can create idleness by holding unimportant writes for a while (ext4 style) and using that time to do GC
*The OS can decide to save power by not doing this while on battery power
On the list AGAINST, I only have:
* thinking it should be platform independent, and that this can't be achieved without doing it in firmware

Put the essence of the driver out in the public domain and code a version for Windows/Mac if required; that way all OSes will use the same logic even if they have completely different drivers.

Re:Do cleanup in the OS (0)

Anonymous Coward | more than 5 years ago | (#28994279)

I see no reason the operating system cannot provide generalized primitives to control the process without being too specific.

The OS knows more about what's going on than the disk could ever know, which could be quite useful in providing an optimized solution.

Personally, I will never buy an SSD until crap like wear leveling is unnecessary and random write performance doesn't suck. Whatever happened to MRAM? I'm sticking with spinning platters for the near future, unfortunately.

Re:Do cleanup in the OS (1)

fuzzyfuzzyfungus (1223518) | more than 5 years ago | (#28993353)

I suspect that this is one of those "side effect of standardization" things: "SSDs", in this context, are quantities of flash with an embedded controller that pretends to be an ordinary magnetic drive(and, even there, most magnetic drives have been lying about their internal workings since shortly after MRL went out of style).

If you want the OS to handle it, the option already exists [] . MTD, on the Linux side (I assume WinCE has an equivalent, though I've no idea what; the NT series OSes don't), is a mechanism for doing exactly that. The OS sees the "raw" flash chip directly, and you then run JFFS or another flash-optimized filesystem on top of it.

On the plus side, this skips the cost, and power drain, of the dedicated controller(great for embedded systems), and allows these sorts of fiddly revisions to be made in software. On the minus side, you lose compatibility. Pretty much every motherboard ever knows how to talk to something that looks like a hard drive, and OSes are equally compliant. Not so much with raw flash chips... This is already common, if not dominant, on the embedded side; but I suspect that it is a fair ways off, if ever, on the normalish X86 side(though, curiously, the OLPC actually works this way).

Who had to creative/hates "defragmentation"? (4, Insightful)

sznupi (719324) | more than 5 years ago | (#28993201)

"Garbage collection" already has quite a different usage in CS. And while what has to be done to those SSDs isn't technically the same as defragmentation on HDDs, it is still "performing drive maintenance to combat the performance-degrading results of prolonged usage and deletion of files".

Re:Who had to creative/hates "defragmentation"? (5, Informative)

CountOfJesusChristo (1523057) | more than 5 years ago | (#28993375)

So, I delete a file off of a drive such that the filesystem no longer holds any references to the given data, and the firmware moves in and performs operations to improve the performance of the device. It's not really rearranging files into contiguous sections like defragmentation does; it's restoring unused sections to an empty state, probably using an algorithm similar to many garbage collectors. Sounds like garbage collection to me.

Re:Who had to creative/hates "defragmentation"? (1)

sznupi (719324) | more than 5 years ago | (#28993593)

I wonder if it can do that while a block has data in some of its cells, or does it have to move, while idle, chunks of data from cells all around the disk to fill some other block, and then restore the now-empty "garbage blocks" to a pristine state... which again has some similarities to defragmentation.

But really, I wasn't going so much into technical details, more into language conventions/familiarity. From the point of view of...almost everybody this new SSD mechanism is practically synonymous with defragmentation, even if concepts behind implementation would be more reminiscent of good old garbage collection (which I doubt; it doesn't free unneeded space, it repairs unused one) - the latter only helpful to firmware/drivers/filesystem writers.

Re:Who had to creative/hates "defragmentation"? (2, Informative)

natehoy (1608657) | more than 5 years ago | (#28993673)

Right, but recall that SSD can only be erased in large blocks, though it can be written to in smaller ones. Erases are what eventually kill a block.

So if I take a block that has only 25% garbage and I want to wipe it, I have to copy the good data over to another block somewhere before I can do that. So I've written 3/4 of a wipable sector's worth of data to a new sector to get rid of the 25% of garbage. Do that a lot, and you do a lot of unnecessary erases and the drive dies faster.

If, instead, you take a sector that is 90% garbage, you only have to use 10% of a new sector to move off the good stuff before you can wipe it. So if you want the drive to last as long as possible, do garbage collection only when absolutely necessary.

But allow garbage to grow too high, and you'll have to tell the operating system to wait while you rearrange data to make room when a write request comes in for a large file.

Do you want the drive to be neatly optimized with no garbage all the time, or do you want the drive to last? I'm not saying one answer is more or less right than the other, but it is a tradeoff.
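The tradeoff above can be quantified (the 512K block size comes from the example; the function is just the arithmetic, not a firmware policy):

```python
def relocation_cost(block_kb, garbage_fraction):
    """KB of live data that must be copied per KB of garbage reclaimed."""
    live = block_kb * (1 - garbage_fraction)
    garbage = block_kb * garbage_fraction
    return live / garbage

# Wiping a block that is only 25% garbage copies 3 KB of live data for
# every KB freed; waiting until it is 90% garbage copies about 0.11 KB.
print(relocation_cost(512, 0.25))             # -> 3.0
print(round(relocation_cost(512, 0.90), 2))   # -> 0.11
```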

Re:Who had to creative/hates "defragmentation"? (1)

Bryan Ischo (893) | more than 5 years ago | (#28994389)

It's my understanding that the balance SSD manufacturers like Intel have struck is a drive with excellent performance that is expected to last at least 3 or 4 years of heavy usage. Being extra conservative with erases at the expense of performance would increase the lifespan of the device, but in most circumstances, 3 or 4 years is good enough. Most people will have upgraded to a new drive by that point anyway, and quite a few hard drives die within that span as well.

Re:Who had to creative/hates "defragmentation"? (0)

Anonymous Coward | more than 5 years ago | (#28994293)

Perhaps this is more like an optimizer than a garbage collector. I did not read the article, but if I understand the theory right, it is not actually freeing memory. It is instead just preparing freed memory to be re-written before it actually needs to be re-written.

Re:Who had to creative/hates "defragmentation"? (1)

vidnet (580068) | more than 5 years ago | (#28994625)

"performing drive maintenance to combat performance-degrading results of prolonged usage, deletion of files"

This is what defragmentation does from the user's point of view, but not what it means. Defragmentation, as the word implies, is the process of reducing fragmentation, i.e. making files contiguous. This is achieved by moving chunks of data around, and the performance benefit comes from reducing the number of HD seeks required to read the data. Since an SSD is random-access, it doesn't have to seek, so defragmentation is pointless; it just eats precious write cycles.

Garbage collection, on the other hand, is the process of reclaiming memory that is no longer used and putting it back in the usable memory pool. This is exactly what this technology does.

SSD GC: (0, Funny)

Anonymous Coward | more than 5 years ago | (#28993213)

First cleanup result will be C:\windows


Re:SSD GC: (0)

Anonymous Coward | more than 5 years ago | (#28993259)

Nah, that folder is required. It is that useless /bin and /sbin that will get cleaned up first.

when drives are "idle" ? (1)

Gothmolly (148874) | more than 5 years ago | (#28993215)

If you have an app that needs SSD, when will the drives ever be idle ?

Re:when drives are "idle" ? (1)

basementman (1475159) | more than 5 years ago | (#28993255)

They will be Idle whenever Slashdot needs something stupid to post.

Re:when drives are "idle" ? (1)

Wesley Felter (138342) | more than 5 years ago | (#28993295)

All PCs can benefit from SSDs, and they are often idle. Technology isn't just for those who "need" it.

Re:when drives are "idle" ? (1)

fuzzyfuzzyfungus (1223518) | more than 5 years ago | (#28993391)

I (very strongly) suspect that any drives based on these indilinx controllers are not going to be aimed at the 24/7 database crowd, where your concern would definitely apply. In the "fancy laptop that makes no noise and doesn't die when you drop it", "l33t gaming box/workstation that boots in 15 seconds and loads Quake N/Maya in another 10", and similar, there will be periods of intense load; but also plenty of downtime.

The indilinx guys are definitely a step above jmicron, who just suck, but tend to come in a decent bit below intel(though they are cheaper), much less the sort of stuff that Fusion IO is putting out(which is massively faster, and doesn't bother emulating a hard drive at all). Based on that, I suspect that they are aiming for the high end desktop/laptop/enthusiast segment.

Re:when drives are "idle" ? (2, Informative)

644bd346996 (1012333) | more than 5 years ago | (#28994065)

The drives don't have to be idle, just the portion being garbage collected. Flash drives typically consist of many independent memory chips within a single drive. If the block being erased by the GC is on a chip that isn't being read from at that time, then the controller can issue the erase command without affecting the latency of any request from outside the drive. It would take a very full, random workload (and a very fast disk interface) to be able to detect the garbage collection, and even then, it couldn't be worse than the current method of erasing on an as-needed basis.

The cops'll love it. (4, Interesting)

domatic (1128127) | more than 5 years ago | (#28993223)

So what does this do when forensics are being done on one of these drives? Is the firmware just doing a better job of marking a dirty block available, or do the dirty blocks have to be zeroed at some point? Even if the blocks are just marked, will they output zeros if 'dd'ed by an OS?

Re:The cops'll love it. (1)

The MAZZTer (911996) | more than 5 years ago | (#28993377)

I imagine they would still contain the data, but once you write to anywhere in that block the garbaged data would be zeroed out because it isn't being read into the "combine-read-data-with-new-to-be-written-data" buffer.

Privacy advocates will love it.

Re:The cops'll love it. (1)

distantbody (852269) | more than 5 years ago | (#28994691)

SSD circuitry dictates the creation of pages and blocks.

A page is the minimum amount you can write, say 4KB

A block is the minimum you can delete, and it is made up of multiple pages, say 5.

If a block contains pages to keep and pages to delete, the circuitry has to delete the whole block, then rewrite the valid pages just to clear the deletable pages. This takes more time.

Current SSDs, when told to delete a file, just update the file table and leave the time-consuming delete task until something needs to be written to a deleted but non-empty page.

The TRIM command deletes blocks as the file is deleted from the file table, so the write operation doesn't have to wait around for the sequence of: cache-valid-pages>delete-block>rewrite-valid-pages>write-new-pages.

So no, it won't make drive forensics easier; in fact it will remove any trace of a deleted file.

My summation. It might be off the mark. See the great AnandTech SSD explanation [] , where I got the information from.

Lifetime of drives (0, Redundant)

wonkavader (605434) | more than 5 years ago | (#28993235)

This will significantly increase writes. I'm sure it's still worth it, but we ought to know what kind of effect this will have on the time before one hits max writes on the flash device.

Re:Lifetime of drives (1)

Wesley Felter (138342) | more than 5 years ago | (#28993309)

Not necessarily. An SSD has to collect garbage sometime; whether it GCs proactively or lazily causes the same wear.

Re:Lifetime of drives (1)

natehoy (1608657) | more than 5 years ago | (#28993633)

An SSD does have to collect garbage sometime, but waiting until an entire section is marked obsolete before you do the wipe is the cheapest/least-wearing method. Remember: SSDs can be written to in small increments, but only erased in large increments, and once an increment has been written to, it must be erased before it can be written again. And each increment can only be erased a certain number of times before it dies.

Let's say we have erasable blocks of 512K with 512 writable sectors of 1K each. A proactive algorithm might move data into unused blocks when, say, half the block is "garbage" and the other half is good data. So when we hit more than 256K of garbage in a block, we move the good stuff off it and wipe it.

So we take the good data out of two blocks, and write it all to a single new block, then we can wipe both old blocks safely.

But in doing so, we've "used up" a fresh block for no good purpose except to free up space. A noble goal, but I presume it's pretty obvious that if we had waited until each block was down to 128K of good data, we could have copied the good data out of FOUR old blocks to a single block, thereby cutting the "maintenance wear" by half.

Garbage collection of an SSD drive is a continuous choice between:

  - allowing the drive to get clogged up with junk data and being as lazy about maintenance as possible (which statistically will extend drive life, but will mean there are times when you have to wait to do a write because the drive has to find the most-garbagey chunks available and clear them to make room). If you put this task off as long as possible, when you DO wipe a block it's more likely to be mostly garbage, so you're moving good data around as little as possible.


  - obsessively keeping all garbage data off the drive by moving all data from ANY partially-used sectors to new sectors so all garbage can be kept out (which optimizes continuous performance, but means you may wipe an entire block when only a small percentage of it was actual garbage, so you're moving a lot of data around frequently and burning out sectors faster). If you do this task quickly and frequently, you move a lot of good data around from place to place. But you do ensure that free blocks are always available.

The ideal is somewhere around the middle. You pick a point at which you consider a specific block to be "too garbagey" and you clean those up as you go. Where is that point? Depends on your needs, how full the drive really is, and your relative values on performance versus reliability or drive longevity.
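The "pick a point" policy in the last paragraph can be expressed as a simple threshold rule (the cutoff values and names are arbitrary illustrations, not anyone's shipping firmware):

```python
def blocks_to_collect(garbage_fractions, threshold):
    """Indices of blocks 'garbagey' enough to be worth an erase cycle."""
    return [i for i, g in enumerate(garbage_fractions) if g > threshold]

blocks = [0.10, 0.60, 0.95]   # garbage fraction per block

# An eager policy (low threshold) keeps free blocks plentiful but burns
# erase cycles relocating mostly-live data; a lazy one (high threshold)
# wears slowly but risks stalling writes when free blocks run out.
print(blocks_to_collect(blocks, threshold=0.5))  # -> [1, 2]
print(blocks_to_collect(blocks, threshold=0.9))  # -> [2]
```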

Re:Lifetime of drives (-1, Flamebait)

Anonymous Coward | more than 5 years ago | (#28993993)

Mod parent -1 Fucking Clueless.

Read a book sometimes dumbass.

Leave the garbage (ie my porn) alone!!! (4, Funny)

syousef (465911) | more than 5 years ago | (#28993243)

I don't want my porn garbage collected, thank you very much. Who died and made you king of deciding what's garbage?

Filesystem info (3, Interesting)

wonkavader (605434) | more than 5 years ago | (#28993251)

Wouldn't the drive benefit from a real understanding of the filesystem for this sort of thing? If it knew a sector was unallocated on a filesystem level, it would know that sectors were empty/unneeded, even if they had been written to nicely. Or should computers now have a way of tagging a sector as "empty" on the drive?

Either way, it looks like an OS interaction would be very helpful here.

Or are modern systems already doing this, and I'm just behind the times?

Re:Filesystem info (5, Informative)

blaster (24183) | more than 5 years ago | (#28993317)

There is an extension that was recently added to ATA, the TRIM command. TRIM allows an OS to specify that a block's data is no longer useful and the drive may dispose of it. No production firmwares support it yet, but several beta firmwares do. There are also patches for the Linux kernel that add support to the block layer, along with appropriate support in most filesystems. Windows 7 also has support for it.

There is a lot of confusion about this on the OCZ boards, with people thinking GC somehow magically obviates the need for TRIM. As you pointed out, the GC doesn't know what is data and what is not with respect to deleted files in the FS. I wrote a blog post (with pictures and everything) explaining this just a few days ago.

Re:Filesystem info (3, Informative)

Wesley Felter (138342) | more than 5 years ago | (#28993325)

You're about two months ahead of the times. The ATA TRIM command will allow the filesystem to tell the SSD which sectors are used and which are unused. The SSD won't have to preserve any data in unused sectors.

Re:Filesystem info (0)

Anonymous Coward | more than 5 years ago | (#28993373)

I think that's the entire point of the "TRIM" command recently added to the ATA specifications, supported by Windows 7.

Seems the idea is the OS sends a command to the SSD, "this 4KB page no longer contains useful data, feel free to ignore or lose its contents in the future".

The thing to remember here is that you can't write 4KB to flash, only whole 512KB cells.

On a SSD that does no reorganisation, this means that subsequent writes to other 4KB pages in the same 512KB cell as your deleted page don't have the read-update-write performance overhead. When writing a 4KB block into the middle of a 512KB cell, the firmware has to load the other 508KB from flash, update the relevant 4KB, write the whole 512KB back down.

With the trim command, the SSD firmware knows it doesn't have to bother loading the other 508KB first (as these blocks have been marked as free by the trim command). However, some of that 508KB may still have valid data - so the SSD firmware will have to load that from flash to buffer, update, and write the buffer again.

I'm guessing the article is simply describing a background process of consolidating a bunch of partially filled 512KB cells into totally empty and totally full ones, meaning that the totally empty ones are free to be written to quickly. If so, what this means is that SSD defragmentation happens automatically in the background, done by the firmware rather than by running specific programs.
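To put some toy numbers on the read-update-write overhead (sizes illustrative, not from any real firmware): without TRIM, deleted-but-untrimmed pages look live and must be read back and preserved too.

```python
# Toy model of the read-modify-write penalty described above, and how
# TRIM lets the firmware skip loading pages it knows are dead.

CELL_KB, PAGE_KB = 512, 4
PAGES_PER_CELL = CELL_KB // PAGE_KB  # 128

def kb_read_before_write(live_pages, trimmed_pages):
    """KB the firmware must read back before rewriting one 4 KB page.
    Without TRIM, trimmed (deleted) pages look live and get preserved."""
    without_trim = (live_pages + trimmed_pages) * PAGE_KB
    with_trim = live_pages * PAGE_KB
    return without_trim, with_trim

# A cell with 20 live pages and 100 deleted-but-untrimmed pages:
print(kb_read_before_write(20, 100))  # (480, 80)
```

With TRIM the firmware reads back 80KB instead of 480KB before servicing that one 4KB write.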

Re:Filesystem info (1)

Hal_Porter (817932) | more than 5 years ago | (#28994015)

The thing to remember here is that you can't write 4KB to flash, only whole 512KB cells.

That's not correct. NAND flash has pages, typically 512 bytes to 4KB, and blocks, typically 128KB to 512KB. You can write any page you want. The limitation is that you can write a page only once before you must erase the block containing it.
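That constraint is easy to model. A minimal sketch (hypothetical class, just to illustrate the rule): any page may be written, but only once between erases, and erases are whole-block.

```python
# Minimal model of the NAND constraint: pages are individually writable,
# but only once between erases, and erase is a whole-block operation.

class NandBlock:
    def __init__(self, pages=128):
        self.data = [None] * pages
        self.written = [False] * pages

    def write_page(self, idx, payload):
        if self.written[idx]:
            raise ValueError("page already written; erase the block first")
        self.data[idx] = payload
        self.written[idx] = True

    def erase(self):
        # The only way to make pages writable again - and it resets all of them.
        self.data = [None] * len(self.data)
        self.written = [False] * len(self.written)

blk = NandBlock()
blk.write_page(3, b"hello")      # fine: first write to page 3
try:
    blk.write_page(3, b"again")  # rejected: no overwrite without erase
except ValueError as e:
    print(e)
blk.erase()
blk.write_page(3, b"again")      # fine after a block erase
```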

Re:Filesystem info (0)

Anonymous Coward | more than 5 years ago | (#28993401)

nilfs2 on Linux is a filesystem designed just for flash memory. It was merged into mainline as part of 2.6.30. It'll be a while before it becomes usable, though.

Re:Filesystem info (1)

grotgrot (451123) | more than 5 years ago | (#28993651)

Wouldn't the drive benefit from a real understanding of the filesystem for this sort of thing?

There is no need, as a standard ATA TRIM command exists by which the OS can tell the device when a block is no longer in use. LWN wrote about this almost a year ago.

Re:Filesystem info (0)

Anonymous Coward | more than 5 years ago | (#28993753)

I don't know about all that, but isn't that why they created the S.M.A.R.T. system for HDDs? Or was that just for something else?

At what cost? (3, Interesting)

igny (716218) | more than 5 years ago | (#28993277)

The garbage collector restores performance of the drive. Nothing comes free, so a question: at what cost?

Re:At what cost? (4, Insightful)

slimjim8094 (941042) | more than 5 years ago | (#28993361)

Possibly shorter drive life. If each cell can be rewritten 100,000 times (don't remember exactly) then - for exactly the same reason you're doing this in the first place (rewriting an entire cell on every write) you'll be wearing out the cell.

Probably a net gain, though. This and wear-leveling algorithms probably will make drives last longer.

Don't be quite so cynical. Usually I'd agree with you - but SSD (not flash) is so new that improvements can be made for free by just changing some techniques.

Re:At what cost? (1, Interesting)

Anonymous Coward | more than 5 years ago | (#28993627)

Possibly shorter drive life.

-1, braindead. The block has to be cleared anyway before it can be reused. It's just being cleared in advance instead of at the time when it's needed. No extra wear is occurring.

Re:At what cost? (1, Insightful)

Anonymous Coward | more than 5 years ago | (#28993665)

Needn't be. Suppose a block has 128KB still free when I garbage collect it. I could still have written a 4KB page into that free space if I had not done GC first.
Thus, the shorter lifetime.

Re:At what cost? (1)

bonch (38532) | more than 5 years ago | (#28993655)

The garbage collection is doing what already must be done before writing to a cell. This is just doing it at an earlier point when write performance is not a concern.

Re:At what cost? (1)

sl149q (1537343) | more than 5 years ago | (#28994363)

Assuming 8GB flash, with a write speed of 10MB/s, 100000 erases allowed per sector, and a perfect wear leveling algorithm....

Then your device will last (8 * 1024 * 100000) / 10 / (3600 * 24) = 948.15 days... give or take

Assuming you can write to it continuously at 10MB for that length of time (and assuming that the underlying hardware will do the required erases etc fast enough...)

If you don't wear level and continuously erase same sector it will last about an hour or two.

You *might* wear one out... but really it's unlikely that many people will.
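The parent's arithmetic checks out (using 1024 MB per GB and 100,000 erase cycles, as stated):

```python
# Redo the lifetime estimate above: 8 GB of flash, 100,000 erase cycles
# per cell, perfect wear leveling, continuous 10 MB/s writes.
total_mb_writable = 8 * 1024 * 100_000   # every MB rewritable 100k times
seconds = total_mb_writable / 10         # at 10 MB/s
days = seconds / (3600 * 24)
print(round(days, 2))  # 948.15
```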

Re:At what cost? (0)

Anonymous Coward | more than 5 years ago | (#28993371)

Additional firmware complexity? An increase in firmware image size? I can't see any other costs. As an analogy, I guess this is like re-packing your suitcase while waiting for the train to come - you wouldn't do anything useful in the waiting time anyway, and you can put additional souvenirs into the suitcase more quickly once you buy them. The price you pay is putting your mind to re-organizing the stuff in your suitcase.

Re:At what cost? (1)

The MAZZTer (911996) | more than 5 years ago | (#28993393)

The cost would be that it would have to keep track of which blocks of data are in use, so it would have to have a small bit of the SSD storage set aside for this purpose.

There is no performance or lifespan penalty, since this only affects what happens when data is written -- currently, the block is always read, combined, and then written. If the block is marked as not in use, the first two steps can be skipped. If the block is in use, we're just doing the old behavior, no loss (except we needed to check whether the block was in use, but I doubt that performance loss would be noticeable).

Re:At what cost? (5, Informative)

natehoy (1608657) | more than 5 years ago | (#28993397)

Simple. Well, not really, but...

SSD's can be written to in small increments, but can only be erased in larger increments. So, you've got a really tiny pencil lead that can write data or scribble an "X" in an area to say the data is no longer valid, but a huge eraser that can only erase good-sized areas at a time, but you can't re-write on an area until it's been erased. There's a good explanation for this that involves addressing and pinouts of flash chips, but I'm going to skip it to keep the explanation simple. Little pencil lead, big eraser.

Let's call the small increment (what you can write to) a "block" and the larger increment (what you can erase) a "chunk". There are, say, 512 "blocks" to a "chunk".

So when a small amount of data is changed, the drive writes the changed data to a new block, then marks the old block as "unused". When all the blocks in a chunk are unused, the entire chunk can then be safely wiped clean. Until that happens, if you erase a chunk, you lose some data. So as time goes on, each chunk will tend to be a mix of current data, obsolete data, and empty blocks that can still be written to. Eventually, you'll end up with all obsolete data in each chunk, and you can wipe it.

However, it's going to be rare that ALL the blocks in a chunk get marked as unused. For the most part, there will be some more static data (beginnings of files, OS files, etc) that changes less, and some dynamic data (endings of files, swap/temp files, frequently-edited stuff) that changes more. You can't reasonably predict which parts are which, even if the OS was aware of the architecture of the disc, because a lot of things change on drives. So you end up with a bunch of chunks that have some good data and some obsolete data. The blocks are clearly marked, but you can't write on an obsolete block without erasing it, and you can't erase a single block - you have to erase the whole chunk.

To fix this, SSD drives take all the "good" (current) data out of a bunch of partly-used chunks and write it to a new chunk or set of chunks, then mark the originals as obsolete. The data is safe, and it's been consolidated so there are fewer unusable blocks on the drive. Nifty, except...

You can only erase each chunk a certain number of times before it dies. Flash memory tolerates reads VERY well. Erases, not so much.

So if you spend all of your time optimizing the drive, you're moving data around unnecessarily and doing a LOT of extra erases, shortening the hard drive's life.

But if you wait until you are running low on free blocks before you start freeing up space (which maximizes the lifespan of the drive), you'll run into severe slowdowns where the drive has to make room for the data you want to write, even if the drive is sitting there almost empty from the user's perspective.

So, SSD design has to balance between keeping the drive as clean and fast as possible at a cost of drive life, or making the drive last as long as possible but not performing at peak all the time.

There are certain things you can do to benefit both, such as putting really static data into complete chunks where it's less likely to be mixed with extremely dynamic data. But overall, the designer has to choose somewhere on the continuum of "lasts a long time" and "runs really fast".
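The consolidation step described above can be sketched in a few lines (toy data structures, using the parent's block/chunk terminology, not any drive's actual firmware):

```python
# Sketch of consolidation: gather the live blocks out of several
# partly-used chunks, repack them densely into fresh chunks, and then
# every old chunk can be erased.

def consolidate(chunks, chunk_size=512):
    """chunks: list of lists of live-data items (dead blocks omitted).
    Returns (new_chunks, chunks_freed): the repacked chunks and how
    many old chunks can now be wiped."""
    live = [item for chunk in chunks for item in chunk]
    new_chunks = [live[i:i + chunk_size]
                  for i in range(0, len(live), chunk_size)]
    return new_chunks, len(chunks)

# Four chunks, each only a quarter live: repack into one, wipe four.
old = [[f"c{c}b{b}" for b in range(128)] for c in range(4)]
packed, freed = consolidate(old)
print(len(packed), freed)  # 1 4
```

The cost the parent warns about is visible here too: those 512 live blocks all got rewritten, and four chunks took an erase, purely for maintenance.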

you're not moving the data unnecessarily (1)

YesIAmAScript (886271) | more than 5 years ago | (#28993881)

Next time you write into that block, this operation will be performed anyway. This is why some SSDs have huge delays on writes: they delay your write until the data merging is done. Also, not every block needs merging; an area of a file that spans 512K (128 pages) is written in one chunk anyway and never needs re-merging.

To be honest, the data retention time on NAND (where the data just drains out like DRAM) is becoming as big a factor as write wear anyway. You're going to have to move the data around a little to make sure it stays valid.

Honestly, I'm pretty spooked about SSDs right now, the only reason I use one is a friend gave it to me for free to test.

Re:At what cost? (-1, Flamebait)

an unsound mind (1419599) | more than 5 years ago | (#28994011)

-1, Fucking Retard.

You're karmawhoring with your irrelevant shit, replying to every post here.

You've completely misunderstood what they're doing.

Re:At what cost? (1)

redhog (15207) | more than 5 years ago | (#28994643)

This sounds like something you could improve by something analogous to "generational GC"?

Re:At what cost? (1)

Hal_Porter (817932) | more than 5 years ago | (#28994019)

Your drive works fine for a while. After that demons visit and eat you alive.

Re:At what cost? (1)

angelbunny (1501333) | more than 5 years ago | (#28994161)

No more un-erase. What is gone is really gone, there is no recovering lost data.

Damn it! (1, Funny)

Anonymous Coward | more than 5 years ago | (#28993293)

The new drive deleted all my %INSERT_POPSTAR% songs!

Captcha is "tragedy", how fitting...

More wearing? (0)

Anonymous Coward | more than 5 years ago | (#28993349)

Isn't this just going to degrade the life of the drive more by moving data around to clean up cells/blocks?

Re:More wearing? (1)

Anachragnome (1008495) | more than 5 years ago | (#28993385)

Imagine that. Integrated obsolescence disguised as a feature.

What ever will they think of next?

Re:More wearing? (1)

Voyager529 (1363959) | more than 5 years ago | (#28994013)

What ever will they think of next?

Why, they'll think of disguising integrated obsolescence as a feature next, but they'll call it a Data Relocation Mechanism (DRM).

Here is one garbage collection algorithm: (-1, Troll)

Anonymous Coward | more than 5 years ago | (#28993441)

void clean(std::vector<Block*>& blocks) {
    for (std::vector<Block*>::iterator i = blocks.begin(); i != blocks.end(); ++i) {
        delete *i;   // delete on NULL is a no-op, so no check is needed
        *i = NULL;
    }
}

Garbage Collection == Garbage (0)

Anonymous Coward | more than 5 years ago | (#28993517)

Garbage collection should collect itself and throw itself into the trash.

Then again, we can pair up a slow write media, with a slow programming language, a perfect pair.

800lb Gorilla (0)

dubbayu_d_40 (622643) | more than 5 years ago | (#28993535)

Hey, !!!hardware guys!!!

Close the blinds. Put your dresses on. Apply lipstick.

Read about Java gc. You'll find the section on generational gc interesting.

Take out the "Tash"? (1)

Opyros (1153335) | more than 5 years ago | (#28993583)

Not on your life! Unless I've got Aslan to help me, of course.

Re:Take out the "Tash"? (1)

derGoldstein (1494129) | more than 5 years ago | (#28993789)

aw man, you beat me to it. Still, Tash can mean more than one thing.

What I'm inferring from the context within the sentence is that "take-out-the-tash" means he wishes to assassinate Rico Smith.

Check out NILFS (1)

rrohbeck (944847) | more than 5 years ago | (#28993591)

No, not MILFs.

This GC stuff is only needed as long as the FS uses small blocks. File systems should be able to use arbitrarily sized blocks.
Hopefully btrfs will be able to use large blocks efficiently too.

Re:Check out NILFS (1)

derGoldstein (1494129) | more than 5 years ago | (#28993815)

GC: Garbage Collection
FS: File System
BTRFS: B-tree File System
NILFS: New Implementation of a Log structured File System
(I had to look NILFS up though...)

Is this what dreams are for ? (1)

Latinhypercube (935707) | more than 5 years ago | (#28993629)

Is this what dreams are for ?

I feel like my brain goes 'garbage collecting' every night...

OCZ already released the GC tool, but for Win only (2, Interesting)

slagell (959298) | more than 5 years ago | (#28993645)

I see OCZ already released some sort of garbage collection tool, but it only works on Windows. Kind of annoying since I bought their "Mac Edition" drive for my MacBook. Hopefully they'll put this in a firmware update, too, and hopefully I won't have to boot DOS on my Mac to update the firmware with a utility that blows over my partition table this time. That was a lot of fun going from version 1.10 to 1.30 firmware.

Re:OCZ already released the GC tool, but for Win o (2, Informative)

Wesley Felter (138342) | more than 5 years ago | (#28993771)

No, OCZ released wiper, which is a trim tool. Trim and GC are different; in particular, GC requires no tools or OS support.

Defrag? (-1, Troll)

Anonymous Coward | more than 5 years ago | (#28993681)

So when is my DOS Norton Speedisk going to support defragging SSDs?

Wrong data in article? (2, Informative)

thePig (964303) | more than 5 years ago | (#28993701)

In the article it says

But if a cell already contains some data--no matter how little, even if it fills only a single page in the block--the entire block must be re-written

Is this correct?
From whatever I read at AnandTech, it looked like we need not rewrite the entire block unless the space available is less than the total minus the (obsolete + valid) data.

Also, the article is light in details. How are they doing the GC? Do they wait till the overall performance of the disk is less than a threshold and rewrite everything in a single stretch or do they rewrite based on local optima? If the former, what sort of algorithms are used (and are the best for it) ?

Re:Wrong data in article? (3, Informative)

blaster (24183) | more than 5 years ago | (#28993779)

No, the actual situation is that a block consists of some number of pages (on the flash currently used in SSDs it tends to be 128). The pages can be written individually, but only sequentially (so, write page 1, then page 2, then page 3), and the pages cannot be erased individually; you need to erase the whole block.

The consequence of this is that when the FS says "Write this data to LBA 1000," the SSD cannot overwrite the existing page where that data is stored without erasing the page's whole block, so instead it finds somewhere else to store the data, and in its internal tables it marks the old page as invalid. Later, when the GC is sweeping blocks for consolidation, the number of valid pages is one of the criteria it uses to decide what to do. If a block has been completely filled and has very few valid pages, those pages will probably be copied to another block that is mostly valid, and the block the data was originally in will be erased.
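That remapping scheme fits in a few lines. A hypothetical flash translation layer sketch (not Indilinx's actual design): a logical write never overwrites in place; it lands on the next free page and the old physical page is marked invalid for the GC to reclaim later.

```python
# Sketch of the logical-to-physical remapping described above.

class TinyFTL:
    def __init__(self):
        self.lba_to_page = {}   # logical block address -> physical page
        self.invalid = set()    # physical pages now holding stale data
        self.next_free = 0      # pages get programmed sequentially

    def write(self, lba, data):
        old = self.lba_to_page.get(lba)
        if old is not None:
            self.invalid.add(old)        # old copy becomes garbage
        self.lba_to_page[lba] = self.next_free
        self.next_free += 1              # the write goes to a fresh page

ftl = TinyFTL()
ftl.write(1000, "v1")
ftl.write(1000, "v2")   # rewrite of LBA 1000 relocates, never overwrites
print(ftl.lba_to_page[1000], sorted(ftl.invalid))  # 1 [0]
```

The GC's sweep is then a matter of counting how many of each block's pages appear in `invalid`.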

Which department? (1)

Voyager529 (1363959) | more than 5 years ago | (#28993997)

from the take-out-the-tash dept.

Is "tash" a play on words regarding SSD's, or does the taking-out-the-TRASH department have a job opening for a grammar nazi?

Oh Wow (1)

Nom du Keyboard (633989) | more than 5 years ago | (#28994099)

Oh Wow, SSDs still have issues. Hey it's new technology, it's still very expensive relative to the technology it replaces, we haven't yet seen how well it holds up long term, and everyone who jumped to it early because it was the new bright and shiny thing should consider being a bit more cautious next time.

Re:Oh Wow (4, Informative)

Bryan Ischo (893) | more than 5 years ago | (#28994241)

You need to read up much, much more on the state of SSDs before making such sweeping, and incorrect, generalizations.

There are algorithms in existence, such as clever "garbage collection" (which is a bad name for this process when applied to SSDs - it's only a bit like "garbage collection" as it is traditionally known as a memory management technique in languages like Java) combined with wear levelling algorithms, and having extra capacity not reported to the OS to use as a cache of "always ready to write to" blocks, that can keep SSD performance excellent in 90% of use cases, and very good in most of the remaining 10%. Point being that for the majority of use cases, SSD performance is excellent almost all of the time.

Intel seems to have done the best job of implementing these smart algorithms in its drive controller, and their SSDs perform at or near the top of benchmarks when compared against all other SSDs. They have been shown to retain extremely good performance as the drive is used (although not "fresh from the factory" performance; there is some noticeable slowdown, but it's like going from 100% of incredibly awesome performance to 85% of incredibly awesome performance - still awesome, just not quite as awesome as brand new). Except for some initial teething pains caused by flaws in their algorithms that were corrected by a firmware update, everything I have read about them - and I have done *a lot* of research on SSDs - indicates that they will always be faster than any hard drive in almost every benchmark, regardless of how much the drive is used. And they have good wear levelling, so they should last longer than the typical hard drive as well (not forever, of course - but hard drives don't last forever either).

Indilinx controllers (which are used in newer drives from OCZ, Patriot, etc) seem to be second best, about 75% as good as the Intel controllers.

Samsung controllers are in third place, either ahead, behind, or equal to Indilinx depending on the benchmark and usage pattern, but overall, and especially in the places where it counts the most (random write performance), a bit behind Indilinx.

There are other controllers that aren't benchmarked as often and so it's not clear to me where they sit (Mtron, Silicon Motion, etc) in the standings.

Finally, there's JMicron in a very, very distant last place. JMicron's controllers were so bad that they singlehandedly gave the entire early-generation SSD market a collective black eye. The one piece of advice that can be unequivocally stated for SSDs is: don't buy a drive based on a JMicron controller unless you have specific usage patterns (like rarely doing writes, or only doing sequential writes) that you can guarantee for the lifetime of the drive.

I've read many, many articles about SSDs in the past few months because I am really interested in them. Early on in the process I bought a Mtron MOBI 32 GB SLC drive (I went with SLC because although it's more than 2x as expensive as MLC, I was concerned about performance and reliability of MLC). In the intervening time, many new controllers, and drives based on them, have come out that have proven that very high performance drives can be made using cheaper MLC flash as long as the algorithms used by the drive controller are sophisticated enough.

Bottom line: I would not hesitate for one second to buy an Intel SSD. The performance is phenomenal, and there is nothing to suggest that the estimated drive lifetime Intel has specified is inaccurate. I would also happily buy Indilinx-based drives (OCZ Vertex or Patriot Torx), although I don't feel quite as confident in those products as I do in the Intel ones; in any case, they all meet or exceed my expectations for hard drives. I've already decided that I'm never buying a spinning-platter hard drive again. Ever. I have the good fortune of not being a movie/music/software pirate, so I rarely use more than a couple dozen gigs on any of my systems anyway, and the smaller capacity per dollar of SSDs doesn't faze me.

And about that Mtron MOBI drive that is sitting in the system I am typing this from right now - it is still as wickedly fast as the day I bought it two months ago. I am very happy with my purchase.

Re:Oh Wow (1)

lukas84 (912874) | more than 5 years ago | (#28994329)

I have an OCZ Vertex 120GB in my Laptop. Performance is okay, though not phenomenal.

The new X25-M 34nm 160GB I bought for my desktop, on the other hand, is awesome. Everything is near instantaneous - it's like a new PC.

Re:Oh Wow (1)

amorsen (7485) | more than 5 years ago | (#28994467)

everyone who jumped to it early because it was the new bright and shiny thing should consider being a bit more cautious next time.

The only regret I have about my X25-M is that I didn't get one when they first came out but waited till 6 months ago. The only comparable speed increase in Linux I have ever experienced was when I upgraded my parents' 486 from 8MB to 20MB RAM.

What about Flash memory camcorders? (1)

twosat (1414337) | more than 5 years ago | (#28994517)

How would this affect camcorders that record to flash memory? I'm interested in getting a camcorder that uses flash memory for its inherent ruggedness and low power consumption. If the memory ages with time is it better not to get the ones with built-in memory that are not easily replaced by simply inserting a new memory stick? Does it mean that the memory at the beginning of the stick will age more, given that it will be overwritten more?