TrueDisc Error Correction for Disc Burning?

Cliff posted more than 7 years ago | from the parity-blocks-in-a-different-volume dept.

An anonymous reader asks: "Macintouch has a link to a new piece of software — TrueDisc — which claims to make data burned to recordable discs more reliable. More specifically, it uses interleaved redundant cells to rebuild data should part of the disc be scratched. On the developer's blog they say they plan to create an open-source implementation of the TrueDisc system, now that it is not going to be included in the Blu-ray/HD-DVD standards. Have any of you used this software before, and what alternatives are already available?"

68 comments

Sheesh (1)

kabdib (81955) | more than 7 years ago | (#18270068)

Okay, so you store redundant (maybe error-correcting) stuff on top of the existing file system (or in otherwise unused sectors), so you can recover your files if the original sectors succumb to bit rot. For fifty bucks.

Why not just store the files twice? It would be a whole lot cheaper...

Re:Sheesh (0, Flamebait)

BronsCon (927697) | more than 7 years ago | (#18270142)

You mean, if Slashdot stored every comment twice, there would be no more redundant posts? What about incorrect or off-topic posts? Would these, also, be a thing of the past?

Does this apply to articles as well? No more dupes? No more FUD? No more slashvertizements?

You, my friend, are a genius!

And I was trying not to post in this thread, so I could moderate something! Damn you!

Re:Sheesh (3, Funny)

Anonymous Coward | more than 7 years ago | (#18270202)

No, you don't understand. The dupes and redundant posts are a required part of Slashdot's backup procedures.

Re;Sheesh (0, Redundant)

Drooling Iguana (61479) | more than 7 years ago | (#18270824)


You mean, if Slashdot stored every comment twice, there would be no more redundant posts? What about incorrect or off-topic posts? Would these, also, be a thing of the past?

Does this apply to articles as well? No more dupes? No more FUD? No more slashvertizements?

You, my friend, are a genius!

And I was trying not to post in this thread, so I could moderate something! Damn you!

Re:Re;Sheesh (0, Redundant)

BronsCon (927697) | more than 7 years ago | (#18271106)

Mod parent redundant! Or insightful! Or funny! Or... Ok, redundant just wouldn't be fair; and I meta-moderate.

Re:Re;Sheesh (1)

BronsCon (927697) | more than 7 years ago | (#18274878)

Wow... someone ignored my sig, BIG TIME. Don't drink and moderate! If you don't know the meaning of the word redundant, you shouldn't use it for moderation. Great-grandparent is dead on topic, as well. I know, I'm a bit biased since these are my posts, but honestly, people, think when you moderate (I'd rather lose the 2 mod points I currently have than use them unfairly; it just makes sense for the community as a whole). On the other hand, if you don't like me, man up and say so, and take the time to mark me a foe. I have better things to do than retaliate just because you say you don't like me; I won't hurt you... much.

Re:Re;Sheesh (1)

BronsCon (927697) | more than 7 years ago | (#18275586)

The post this was ACTUALLY in response to seems to have disappeared. Someone copied the great-grandparent and posted it in reply; parent was attached to that comment. That comment is somehow gone now and parent is attached to the only other existing reply to great-grandparent. Would those who moderated based on the current grandparent (or who are about to moderate on that basis) please take this into account with their moderation? And please, could someone who hasn't moderated (god forbid someone waste a mod point) explain how great-grandparent is off-topic and/or flamebait?

Some of the moderations my posts get seem to be flamebait and trollish in themselves (the ultimate AC troll: mod obviously unfairly, do undue damage to someone's karma, and nobody will ever know who you are!), regardless of whether the posts themselves are. I try to avoid flamewars and trolling, but it seems I may have to stoop to that level to have my other posts fairly moderated. Does that sound right?

Re:Re;Sheesh (1)

BronsCon (927697) | more than 7 years ago | (#18278732)

Well, Drooling Iguana's post seems to have reappeared; and been modded down as well. You people have no sense of humor (or those of you who do have no mod points).

Or use par2 (2, Insightful)

bruguiea (1038034) | more than 7 years ago | (#18270144)

Or use PAR2 [par2.net]? It's free.
Tony

Re:Or use par2 (1)

failedlogic (627314) | more than 7 years ago | (#18270190)

I agree, but if the data set is large enough to fill a DVD, using PAR/PAR2 can take forever even on "faster" systems. I've tried on my 1.8 GHz iMac G5 and on a 3 GHz Athlon 64. Both systems had at least 1 GB RAM (the Athlon actually had 2 GB). This was about a year ago. It took about 5 hours on the G5 and about 2 on the Athlon 64. I was backing up PDFs (just text, no images), .DOC, .XLS, and small .PPT files onto DVD. I burned 4 discs - two with the data, two with PAR2 data at 80% redundancy.

If TrueDisc is any faster, it's welcome, as long as it works and is reliable. Why are they charging $80 when the 'bulk' of the code is going to be OSS'd?

Re:Or use par2 (1)

maxume (22995) | more than 7 years ago | (#18270460)

I was under the impression that par2 works faster on smaller data sets. If that is correct, then had you partitioned your data into ~2 gigabyte chunks, made pars for each chunk, and burned each chunk plus its local pars to its own DVD, you would have saved time (and made your recovery process simpler to boot).

It wouldn't protect you from a 'failed' disc the way your method does, but for anything paranoid I would go with two copies stored separately anyway.
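For what it's worth, here is a minimal sketch of the per-disc chunking workflow described above, assuming par2cmdline is installed and on the PATH; the directory names, the ~2 GB chunk size, and the 15% redundancy figure are just illustrative, not anything TrueDisc or the posters actually use.

    import shutil
    import subprocess
    from pathlib import Path

    SRC = Path("backup_src")        # hypothetical source directory
    OUT = Path("discs")             # one subdirectory per DVD to burn
    CHUNK_BYTES = 2 * 1024 ** 3     # roughly 2 GB of payload per disc

    def chunk_files(src, limit):
        """Greedily group files so each group stays under the size limit."""
        groups, group, size = [], [], 0
        for f in sorted(p for p in src.rglob("*") if p.is_file()):
            if group and size + f.stat().st_size > limit:
                groups.append(group)
                group, size = [], 0
            group.append(f)
            size += f.stat().st_size
        if group:
            groups.append(group)
        return groups

    for i, files in enumerate(chunk_files(SRC, CHUNK_BYTES), 1):
        disc = OUT / f"disc{i:02d}"
        disc.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy2(f, disc / f.name)   # crude flat copy, fine for a sketch
        # 15% par2 redundancy created next to the data, so chunk + local pars
        # land on the same DVD and each disc repairs independently
        subprocess.run(
            ["par2", "create", "-r15", str(disc / "recovery.par2")]
            + [str(disc / f.name) for f in files],
            check=True,
        )

Each disc can then be verified and repaired on its own, which is the "simpler recovery process" the parent mentions.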

Re:Or use par2 (1)

sweetooth (21075) | more than 7 years ago | (#18272084)

If you take a look at the way a lot of files are distributed on Usenet, you'll find exactly that. 4 GB of data will be broken up into 40-50 MB blocks and par2 files will be built to cover the set. Doing a repair on a highly damaged set of 4-5 GB on my Athlon 64 X2 4600 takes about 5 minutes.

Re:Or use par2 (1)

maxume (22995) | more than 7 years ago | (#18272194)

I meant that if you had 4 gigabytes of data, rather than distribute it as 4 GB + pars, distribute it as two sets of 2 GB + pars, with each set of pars only covering half of the data. Files on Usenet are chopped up because of limitations in newsreaders and crappy servers, but the pars are generally generated for the whole set of files. My hand-wavy impression is that the algorithm gets slower when processing larger amounts of data, so choosing a slightly less convenient breakdown of the files would speed up the processing.

Re:Or use par2 (1)

sweetooth (21075) | more than 7 years ago | (#18272698)

I realize that part of the splitting of files on usenet is due to limitations in the distribution system, I was just trying to reinforce the idea that pars tend to work very quickly with large data sets if that data is split into smaller, though not necessarily convenient chunks.

Re:Or use par2 (2, Interesting)

dgatwood (11270) | more than 7 years ago | (#18273914)

I tried to use it to send my parents a copy of their problematic hard drive that I scraped for them, spread across a handful of DVDs. It turns out that, at least on Mac OS X, I couldn't find a PAR decoder implementation that worked correctly if the total data size was over 2 GB (or maybe 4 GB). It was mistakenly using a 32-bit value for some of its internal math instead of a 64-bit one, thinking (incorrectly) that Mac OS X's seek only supported a 32-bit offset. I sent the UnRarX folks a hackish workaround patch to the open source (or was it GPL?) tool that they use under the hood. Not sure if they've actually fixed it or not. It also took an eternity to process such a large volume.

Anyway, IMHO, anything that intends to solve the fragility problem of optical media must do ALL of the following:

  • Be no worse than half the performance of direct writes. That means that multi-gigabyte Reed-Solomon codes are right out. That also means a very intelligent checksumming mechanism that avoids unnecessary write/seek/settle cycles of the hard disk that will almost inevitably be providing the backing storage for data prior to writing it onto the disk. I'm not sure what the access pattern should look like, but I can think of a good number of examples of what it shouldn't look like.... :-D
  • Be BELOW the filesystem level, not above it (on a per-file basis). If a file gets corrupted, per-file protection is okay, but such a protection scheme still leaves lots of very critical metadata without any backup. What happens when a block in the root directory structure fails? You're thoroughly screwed if you just have per-file redundancy.
  • Provide backups of data that are sufficiently distributed both in rotational angle and in distance from the edge. Scratches by clumsiness usually occur radially, outwards from the center or inwards from the edge. They may also be arc-shaped. Either way, those are unlikely to destroy a huge amount of data. Scratches due to a piece of sand in your DVD player, spinning the DVD in a roughly manufactured case, etc. happen in a circular pattern, and are, IMHO, one of the major reasons why the mechanisms built into most of the optical disc formats aren't sufficient.... Putting in some n ECC bytes every n*k bytes of data only solves that problem for very large values of n and k so that several entire rotations of the media can be obliterated without losing any data. Unfortunately, that sort of strategy then can quickly turn into a "multi-gigabyte Reed-Solomon code" problem again. Again, on this one, I don't know the solution (though I have some vague notions). I mainly just know a lot of examples of what the solution isn't. :-)
  • Take into account that the last few percent of storage on optical media are notoriously unreliable on cheap media and limit the total capacity appropriately to avoid relying on that space for anything important.

After the failure rate I saw trying to do that data transfer (90% immediate verify failure on discs that were over about 98% full, no burn failures ever on discs that were under 90% full), coupled with the failure rate I saw with Retrospect Remote (almost 100% of DVDs stopping with only 15% of the disc used because the craptastic software didn't support burn-safe and wasn't smart enough to pause the burn while it waited for data over the network), at this point I trust optical media about as far as I can throw it... though maybe not AOL CDs---I can throw them pretty far.... It would be really cool if this sort of tech worked and were implemented broadly, as it might make optical media actually useful instead of just being a nice pretty round disc to slide label-side down against a plasterboard wall in office pachinko.

Just my $0.0195, adjusted for consumer products deflation.

Re:Or use par2 (0)

Anonymous Coward | more than 7 years ago | (#18273814)

Ahh, your error was not segmenting your data more efficiently. Par2 works much, much better with smaller chunks of data. If you had segmented your data into 50 MB chunks, such as with WinRAR, you could have saved yourself hours by simply repairing each chunk that was corrupt or missing. I have done a 25% repair on a 10 GB archive split into 100 MB chunks on my 2-year-old P4 in about 30 minutes. Have a read on how to utilize Par2 properly; it is an excellent tool.

I also suggest you look into par2cmdline from http://parchive.sourceforge.net/ [sourceforge.net] as it is a great way to automate recovery with shell/batch scripts.
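Along the lines of the scripted recovery the parent suggests, here is a rough Python sketch that walks a directory tree and lets par2cmdline verify and, if needed, repair each set; the layout under discs/ is hypothetical.

    import subprocess
    from pathlib import Path

    for index in Path("discs").rglob("*.par2"):
        # Skip the *.volNNN+NN.par2 recovery volumes; the base .par2 file
        # indexes the whole set and pulls the volumes in as needed.
        if ".vol" in index.name:
            continue
        verify = subprocess.run(["par2", "verify", str(index)])
        if verify.returncode != 0:    # par2cmdline returns non-zero when damage is found
            print(f"Damage detected in {index.parent}, attempting repair...")
            subprocess.run(["par2", "repair", str(index)])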

Re:Or use par2 (2, Funny)

pegr (46683) | more than 7 years ago | (#18271182)

For large data sets, I rar to a "block" size one third of that of my media, then put two data blocks and one par block per disc. Yes, it's a pain to restore, even without damage, but it gives me great recoverability, as I can lose up to a third of my discs and still be able to recover. These data sets are typically 50 to 200 GB, btw...
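A sketch of that layout, assuming the rar and par2 command-line tools; the 1.4 GB volume size (about a third of a single-layer DVD), the archive names, and the exact volume naming are illustrative.

    import subprocess

    # 1) Pack the data set into fixed-size volumes, each about a third of a DVD.
    subprocess.run(["rar", "a", "-v1400m", "backup.rar", "dataset/"], check=True)

    # 2) Create 50% PAR2 redundancy over the volumes: roughly one recovery
    #    volume's worth of parity for every two data volumes.
    subprocess.run("par2 create -r50 backup.par2 backup.part*.rar",
                   shell=True, check=True)

    # 3) Burn two data volumes plus a third of the PAR2 files to each disc, so
    #    any one disc in three can be lost and the set still repairs.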

Re:Or use par2 (0)

Anonymous Coward | more than 7 years ago | (#18273730)

Or just create a RAID1 array on the CD.

Interesting; but once bitten, twice shy. (1)

Kadin2048 (468275) | more than 7 years ago | (#18281108)

Is PAR2 open-source? It seems like it is, but I certainly wouldn't want to do long-term archives of my data in any format where there were only binary decoders available.

A while back I had an absolute devil of a time trying to unpack some Compact Pro archives (.sea), and that's not really even that old a format -- it was last released in 1995 -- and there are still a lot of Classic Macs around that will run the software. However, in another 10 years, I'd imagine that it would be a lot tougher, since Macs being manufactured now won't run the Classic OS. (Unless someone reverse-engineers it, which I think has been done or at least discussed; according to WP macutils will do it. But hoping that somebody will reverse-engineer your proprietary archive format before you need it isn't the sort of risk you'd want to consciously take.)

At least with formats like TAR and GZIP, not only are they very well understood, but you could easily put a copy of the source code onto each piece of backup media; that way if somewhere down the road, you needed to get your data off on a machine that didn't have the proper decoder or didn't have the capability of running the binary decoder (Microsoft is going to have to break backwards-compatibility sometime...), you wouldn't be completely screwed. And most of all, you'd still be OK even if you turned out to make a bad decision on your choice of format, and it maybe didn't end up being as overwhelmingly popular as you thought it'd be.

I'd really hate to plop all my data into some proprietary archive or compression format in order to save a few percent, and then end up cursing myself (or having someone else curse me) a few decades down the line, when they're left with a binary blob and a decoder that will only run on obsolete hardware, using an obsolete architecture, running an obsolete OS.

People seem to chronically underestimate the lifespan of data. The backup of your vacation photos that you make now may very well be something you'll want to get at in 20 or 50 years. Heck, I've scanned slides that are older than that. A whole lot of folks don't seem to really consider the long term when they're backing up and archiving data, and in an average day it's probably one of the longer-lived decisions you're likely to make.

Re:Sheesh (1)

IceCreamGuy (904648) | more than 7 years ago | (#18270156)

From their site: you can get 600 megs to a CD and 4.1 gigs to a DVD, pretty good space saving over having two full copies.

Re:Sheesh (1)

djlemma (1053860) | more than 7 years ago | (#18270278)

Music CD's already include error correction bytes embedded in each frame of data, so I assume this technology does the same sort of thing for data CD's/DVD's/bluray's/etc..

On music CD's, there's one error correction byte for every three bytes of data. That's a lot more space-efficient than just burning your data twice.....

Re:Sheesh (4, Informative)

tenton (181778) | more than 7 years ago | (#18271762)

Music CD's already include error correction bytes embedded in each frame of data, so I assume this technology does the same sort of thing for data CD's/DVD's/bluray's/etc..

On music CD's, there's one error correction byte for every three bytes of data. That's a lot more space-efficient than just burning your data twice.....


Music CDs have piss-poor error correction by data standards. CD-ROM and DVD-ROM (which includes the video variant, since it's an application of DVD-ROM) have much more robust error correction. There is more error detection (and correction) per block on a CD-ROM (consequently, less room for data) than on a music CD. Music CDs have the additional advantage of not needing to be precise; the player can try to guess (interpolate) the missing data it runs into, or, at worst, skip (which may or may not be noticeable). Can't do that with a spreadsheet.

Burning your data twice also has the advantage of being able to separate the copies (to different physical locations). Error correction technologies aren't going to help if your CDs and DVDs are roasted in a fire; the extra copy you made and put into storage elsewhere will still be safe.

Re:Sheesh (0)

Anonymous Coward | more than 7 years ago | (#18275912)

"Music CDs have the additional advantage of not needing to be precise; it can try to guess (interpolate) the missing data it runs into, or, at worse, skip (which may or may not be noticable). Can't do that with a spreadsheet."

Shhhhhhh!!! Don't tell the IRS.
That was my explanation for why so much income was mysteriously missing from my taxes.

Re:Sheesh (2, Informative)

phliar (87116) | more than 7 years ago | (#18270374)

Why not just store the files twice?

Then your overhead is 100%. They promise an overhead of 14%.

There are much better error correction schemes than "duplicate the data" -- look up Reed-Solomon.
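As a back-of-the-envelope illustration of that overhead gap (my numbers, not TrueDisc's internals): with an erasure code you only pay for the parity blocks you add, whereas mirroring always costs 100%.

    def overhead(data_blocks: int, parity_blocks: int) -> float:
        """Extra space as a fraction of the original data."""
        return parity_blocks / data_blocks

    print(overhead(1, 1))     # store everything twice   -> 1.00 (100% overhead)
    print(overhead(50, 7))    # 7 parity per 50 data     -> 0.14 (~14%, the advertised figure)
    # With a Reed-Solomon-style code, those 7 parity blocks can rebuild
    # any 7 lost blocks out of the 57, not just specific ones.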

Re:Sheesh (4, Insightful)

ucblockhead (63650) | more than 7 years ago | (#18271208)

If you are backing up important data, having the disk go bad is only one issue. I always "duplicate the data"...one disc stays home, one goes to work. No other error correction scheme will work if my house burns down.

Re:Sheesh (0)

Anonymous Coward | more than 7 years ago | (#18272130)

Nice prevention scheme in theory, but totally silent about the consequences after your employer finds a stack of porn CDs in your cube.

Re:Sheesh (1)

ucblockhead (63650) | more than 7 years ago | (#18278314)

Not a problem. I label them "$BOSS's files".

Re:Sheesh (0)

Anonymous Coward | more than 7 years ago | (#18270452)

This isn't school so I won't go into the details, but consider: you have two copies of a file, and the first one you try is corrupted. You try the second one, and it's corrupted too. Great - which one is right? This is why you don't really want to just store two copies of the same file. You can use parity, error-detecting/correcting codes, and all kinds of fancy coding-theory stuff so that, with 2 (or more) versions of the same file, you can recover the original. This very process is already used on the discs that TrueDisc is meant for; however, the protection is limited because there is a trade-off between capacity and protection. If you are burning 10 MB to a 700 MB disc, you obviously aren't worried about capacity and can have massive redundancy instead.
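The "which copy is right?" problem at least has a cheap partial fix: store a digest with the backups so you can tell an intact copy from a rotten one (it still won't repair anything, which is the poster's point). A minimal stdlib-only sketch:

    import hashlib

    def sha256(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    original = b"important spreadsheet"
    recorded_digest = sha256(original)     # written down when the backups were made

    copy_a = b"important spreXdsheet"      # bit rot on disc one
    copy_b = b"important spreadsheet"      # disc two survived

    for name, copy in [("copy_a", copy_a), ("copy_b", copy_b)]:
        status = "intact" if sha256(copy) == recorded_digest else "corrupted"
        print(name, status)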

Re:Sheesh (1)

kabdib (81955) | more than 7 years ago | (#18270776)

[this isn't school, but i've been to school and i know a whole bunch about Reed-Solomon coding, thanks]

I say: Meh. Fifty bucks buys a bunch of DVDs.

Assuming you care about the archival nature of the data you're storing, the FIRST thing you DON'T do is depend on a piece of software that will no longer run under any OS or on any hardware that you can obtain a decade hence.

In general the ECC on DVD is going to prevent you from getting bad data; it's extremely unlikely that you're going to be able to successfully read an ECC block with (say) some bits flipped. The file that is "right" is going to be the one you can read. If all of the copies of that file are damaged, chances are that the failures will be in different locations of the file. If all the files are damaged, well, the data mattered to you, so did you make multiple copies?

In other words, if I want reliability, I'm going to go SIMPLE, not more complex. And the simplest (and unarguably cheapest) way to go is to write multiple copies of a disc, which is what folks making backups have been doing ever since backups were invented.

Re:Sheesh (2, Informative)

peter (3389) | more than 7 years ago | (#18273564)

> In general the ECC on DVD is going to prevent you from getting bad data; it's extremely unlikely that you're going to be able to successfully read an ECC block with (say) some bits flipped.

  I burn my movie collection on DVD with par2 blocks and md5sum files. When I verify them, with some disks in some drives I get data errors. So I have seen in practice that you sometimes get silently corrupted data. My NEC-3500A burner is starting to get old, and doesn't read as well as it used to, I guess.

  Par2 is slow to generate, but worth it. I have actually recovered files from slightly bad disks thanks to my par2 files.

  For a disc with lots of small files, par2 would suck because it doesn't know about directories, for one thing. dvdisaster is like par2 for iso images. I don't think it's set up so that you can keep the error correction files on the same disc as the data, though.

That's why Blue-Ray and HD-DVD didn't license it. (1)

WilliamTS99 (942590) | more than 7 years ago | (#18272722)

It's only really "open" in the sense that you could just store the data twice. The TrueDisc recovery part is proprietary, so it's not really open: you will only be able to read the 'master data' and will still have to buy their software to recover damaged data from their 'recovery sections'. Is the author of this article part of the OOXML team?

They have no problem with proprietary. (1)

SanityInAnarchy (655584) | more than 7 years ago | (#18273718)

My guess is, speed would be the problem. If it's anything like par2, the recovery process takes too long to do in realtime; therefore, only useful if you were allowed to burn a backup copy (and if it was economically feasible to do so).

Re:They have no problem with proprietary. (1)

WilliamTS99 (942590) | more than 7 years ago | (#18273904)

Ignore the subject of my post. I was going to go on about how it either wasn't up to par, was too expensive, or they had seen no need for it, but when it came to typing it out, I didn't feel like rambling on. :-)

Explanation of what's interesting about this (3, Informative)

goombah99 (560566) | more than 7 years ago | (#18274108)

So far all the comments I've read are way off the mark about what is interesting about TrueDisc. Yes, it's true that TrueDisc is just yet another error correction scheme. What is slick about it is its high usability. This comes from two things:
1) It writes the correction bits to a separate partition from the "regular" bits. As a result, the primary partition looks exactly like a regular CD. Put it in any computer, even one not equipped with the TrueDisc software, and it can be read normally.

2) The amount of the redundancy is automatically chosen. It just uses any left over space when it finalizes the CD.

As a result, the operation of TrueDisc is pretty much transparent. You only need to invoke the TrueDisc software to read a disc that has been corrupted. Uncorrupted discs can be read normally. So you won't lose your data if you don't have the software, or if the company goes out of business and it stops working on newer OSes. (All you would lose without the software is the ability to recover from the redundant bits.)

In comparison to PAR or RAR, you are not compressing the data, so it's faster. Now, I note that if you compress and then add redundancy, you could potentially have higher redundancy for a given amount of data on a fixed CD size, so there could be some theoretical advantages to RAR and PAR. However, those PAR/RAR discs cannot be read in place (they have to be expanded) nor in "real time" (say, if you are playing video). They are very slow to write. They can't be read on any computer without the same version of par/rar. And if you do lose bits beyond the point of recovery, the compressed bits will span a much greater extent in the data space--you might even lose the entire CD with PAR/RAR. So you can see that TrueDisc has usability advantages even if its redundancy is less and it's uncompressed.

DVDisaster? (1)

mlts (1038732) | more than 7 years ago | (#18270130)

Isn't there already an open source program which does this called DVDisaster?

Re:DVDisaster? (2, Informative)

brendan_orr (648182) | more than 7 years ago | (#18274630)

I agree, DVDisaster is quite nice. I keep my main backup copies, then a separate disc with error correction files, with further copies held on a hard drive and eventually tape, other hard drives, and any other medium I can... at least for the really important backups. My Ogg collection I'm not too worried about (as I can always re-rip the song/album should corruption occur).

Parchive (2, Informative)

Anonymous Coward | more than 7 years ago | (#18270140)

There's already good parity software available. Parchive will create redundant data that can be burned on the same disc or a separate one. You can create up to 100% redundant data so even if the original disc is lost you can completely restore the files. The software is free and open source. The windows version is called quickpar.

Re:Parchive (1)

IceCreamGuy (904648) | more than 7 years ago | (#18270188)

Maybe I'm just not getting it... if you have a disc, and then enough information on a second disc to completely rebuild the first disc, isn't that the same thing as burning the same data twice? A compressed copy at best maybe. It's very possible I just don't understand correctly though.

Re:Parchive (1)

shadow_slicer (607649) | more than 7 years ago | (#18270356)

Actually 100% redundancy is a good bit better than two copies.
If, instead of losing the entire 1st disk, you scratch the first half of both disks you can still recover everything. If the disks were just identical copies, you would be SOL...

Re:Parchive (2, Informative)

Anonymous Coward | more than 7 years ago | (#18270414)

If you burned two copies of a disc and both of them went bad on the same file, you're hosed. With error correction codes like the ones par2 uses, any pieces from the par can stand in for any pieces of the original data. So even if both discs end up scratched, as long as you can read as many par2 blocks off the second disc as there are data blocks you can't read off the first, it doesn't matter which blocks they are - you can restore the data.

Re:Parchive (0)

Anonymous Coward | more than 7 years ago | (#18270214)

What would be ideal for disc burning is something implemented at the block-device level. Unfortunately, par2 redundancy files sit on the filesystem, and so does this "TrueDisc" error correction. Still awaiting the real solution...

RAR (2, Informative)

mastershake_phd (1050150) | more than 7 years ago | (#18270192)

what alternatives are already available?

RAR compression has an option for redundancy. You set what % you want to be able to recover if it becomes corrupted.

Is this enough? (1)

complete loony (663508) | more than 7 years ago | (#18270548)

These optical disk formats already use error correction codes in an attempt to recover from small scratches. If you have a big enough error on one part of the disk, chances are lots of the disk will be unreadable. Unless you create multiple disks and spread the redundant data across all of them, this isn't going to add a lot of protection.

Re:Is this enough? (2, Interesting)

loraksus (171574) | more than 7 years ago | (#18270890)

Your average blank disc out there is pretty poor quality; if anything, this lets you burn on crap discs with at least a chance of reading the data a year or more down the road.
Par does take a while to generate the recovery files, though...

Distributed ECC. (1)

argent (18001) | more than 7 years ago | (#18270602)

Looks like they're using some kind of error correction code that's better than just parity, and distributing the blocks in each ECC group around the disk. It's better than writing the file twice if this (as it sounds like) happens below the file system level.

It's kind of analogous to a super-RAID, except that the "disks" being redundantly striped and mirrored are all on the same physical DVD or CD-R.

RAID 5 sector-based error avoidance (0)

Anonymous Coward | more than 7 years ago | (#18270706)

This is just plain stupid. Why don't you divide a CD into three virtual sectors, and treat each as an independent storage area to be joined in RAID 5?

That way, you lose 1/3 of your capacity, but would have to lose the exact same bit position in two different thirds of the disc to actually lose information. It's so easy it's stupid. I was surprised HD/BR-DVD didn't spec it in when they were released.

=P
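The idea above, shrunk to a toy example: two data stripes plus one XOR parity stripe, so losing any single stripe is recoverable. This is only a sketch of the principle, not anything an actual disc format does at this granularity.

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    payload = b"0123456789abcdef"              # pretend this is the disc's contents
    half = len(payload) // 2
    stripe1, stripe2 = payload[:half], payload[half:]
    parity = xor(stripe1, stripe2)             # the third "virtual sector"

    # A scratch wipes out stripe2: rebuild it from stripe1 and the parity stripe.
    recovered = xor(stripe1, parity)
    assert recovered == stripe2

    # Losing the same offset in two of the three stripes, however, is fatal --
    # which is exactly the objection raised in the reply below.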

Re:RAID 5 sector-based error avoidance (1, Informative)

Anonymous Coward | more than 7 years ago | (#18271546)

Maybe because losing the exact same bit position in two different thirds of the disc is somewhat likely? In some scenarios (uniform degradation of the disc), you only need about sqrt(n_of_blocks) random corrupted blocks before an unfortunate collision of errors becomes likely, by the birthday paradox. On a CD with 360,000 blocks of 2048 bytes, sqrt(360000) = 600, so about 0.167% corruption and RAID 5 fails.
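A quick check of those numbers (standard birthday bound: roughly sqrt(N) random losses before two land on the same stripe offset):

    import math

    blocks = 360_000                      # 2048-byte blocks on a ~700 MB CD
    threshold = math.sqrt(blocks)         # corrupted blocks before a collision is likely
    print(threshold)                      # 600.0
    print(100 * threshold / blocks)       # ~0.167% of the disc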

Re:RAID 5 sector-based error avoidance (1)

Short Circuit (52384) | more than 7 years ago | (#18272082)

This is just plain stupid. Why don't you divide a CD into three virtual sectors, and treat each as an independent storage area to be joined in RAID 5?
Perhaps because seek times on optical media are abysmally slow?

What? Huh? Why do we need this? (3, Informative)

Runefox (905204) | more than 7 years ago | (#18271124)

In Mode 1 CD-ROM, for every 2048 bytes of data, there's 276 bytes of Reed-Solomon error correcting code and 4 bytes of error detection. Considering we're talking bytes, that's pretty reliable, and as you know, a single scratch often doesn't render a CD totally useless. Usually, a CD has to look like an ice skating rink after an hour of skating for it to fail miserably, and light scratches, even in high numbers, are generally not a factor.

So what the hell? Why is this even necessary, unless you're using a Mode 2 CD (and then, Mode 2 is usually used for videos/streaming data, which requires a more sequential read, where adding ECC would defeat the purpose).

Waste of money.
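Tallying those figures against the standard 2352-byte raw Mode 1 sector layout (the sync/header/reserved breakdown is from the Yellow Book spec, not from the comment):

    sync, header, user_data = 12, 4, 2048
    edc, reserved, p_parity, q_parity = 4, 8, 172, 104

    raw_sector = sync + header + user_data + edc + reserved + p_parity + q_parity
    print(raw_sector)                              # 2352 bytes on disc
    print((p_parity + q_parity) / user_data)       # 276 / 2048 ~= 13.5% ECC per sector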

That and you can fix discs (3, Informative)

Sycraft-fu (314770) | more than 7 years ago | (#18271648)

Get some Brasso (brass polish) and a soft, lint free cloth and you are in business. Really. You just polish the surface so it's all even and thus reflects light equally. If you are nervous about using Brasso, there's a number of products designed just for this purpose, though they are way more expensive and Brasso does just as good a job.

Either way the point is that with error correction as it is now, it's not hard to fix a CD if needed.

Re:That and you can fix discs (1, Informative)

Anonymous Coward | more than 7 years ago | (#18272342)

I like Simichrome even more than Brasso - it has a toothpaste-like consistency, so it doesn't make as much of a wet mess, and it's more abrasive than Brasso, while still being a very fine-grained polish that will get you a very smooth surface on a scratched disc.

Re:That and you can fix discs (1)

Short Circuit (52384) | more than 7 years ago | (#18272348)

Wouldn't the difference between the refractive index of the brass polish and the CD plastic cause problems? Or is the angle of incidence high enough that refraction doesn't matter?

Re:That and you can fix discs (1)

drooling-dog (189103) | more than 7 years ago | (#18272578)

You're not filling in the scratches, you're actually polishing them out to a smooth surface. The polish is rinsed away, so its R.I. is irrelevant.

Re:That and you can fix discs (1)

drooling-dog (189103) | more than 7 years ago | (#18272610)

Either way the point is that with error correction as it is now, it's not hard to fix a CD if needed.
For light scratches, I've found that a minute or so of radial rubbing with my t-shirt is usually enough to make an old disc readable.

Re:That and you can fix discs (2, Informative)

dbIII (701233) | more than 7 years ago | (#18273098)

I'll second that - I've used brasso in the late stages of polishing metal specimens to look at under a microscope at 1000x. With a light enough touch you don't see many scratches even at that magnification. A major part of the mechanism is chemical attack on copper alloys, but a lot of suspended really small hard particles that are in there still work with polishing other materials. Silvo has smaller particles again which I've used for the final polish - but you wouldn't need that on a CDROM.

Re:That and you can fix discs (1)

Maximum Prophet (716608) | more than 7 years ago | (#18277206)

That works great if the scratch is on the non-label side (DVDs as well). Most people don't realize that the label side of most discs is just a thin layer of paint over the thin layer of aluminum/exotic materials that holds the data. If you scratch this side, your data is gone, unless it's redundant.
Double-sided DVDs have two thick layers of polycarbonate with the data sandwiched between them. It's much harder to permanently scratch the data away on one of those.

Re:That and you can fix discs (0)

Anonymous Coward | more than 7 years ago | (#18280120)

Not just double layer DVDs.

"Single layer" (normal average DVD+R, etc..) DVDs have their data layer in the middle (at least, not at the surface).

Re:That and you can fix discs (1)

necro2607 (771790) | more than 7 years ago | (#18281784)

I will 100% vouch for this. I scratched the crap out of my Halo 2 disc one time, throwing the controller at the ground and accidentally hitting the Xbox (I was pissed about a multiplayer game). Needless to say the disc got scratched all to hell and wouldn't even read afterwards.

After some searching around on the net, I found that using Brasso with a cotton cloth would do the trick, so I tried it, and after 10 minutes or so the DVD is back to a totally-readable state! I was really skeptical at first but it makes sense, basically polishing away enough of the plastic to return it to a truly smooth distortion-free surface.

Re:What? Huh? Why do we need this? (1)

Athrac (931987) | more than 7 years ago | (#18272666)

Those 276 bytes are quite useless, because they cannot (at least to my knowledge) be used for correcting errors in other sectors. So, all you need is one sector that is totally unreadable, and the whole disc might become useless.

Re:What? Huh? Why do we need this? (1)

Runefox (905204) | more than 7 years ago | (#18274514)

Not really; that 276 bytes applies to every 2048-byte sector. If one sector dies ENTIRELY, you still have the rest of the disc (you're only out 2048 bytes). Not to mention, as others have said in other comments, you can easily repair physical damage to a CD-ROM, unless it's cracked in half, in which case NO error correction will work for you.

Re:What? Huh? Why do we need this? (1)

bhima (46039) | more than 7 years ago | (#18280194)

That's what I was thinking...

Oh and I did the green hair thing back in the late seventies or early eighties I forget which. It sounded a lot more fun than it was.

ISO9660? (2, Informative)

NekoXP (67564) | more than 7 years ago | (#18273846)

From article: "Since the TrueDisc format is open and the master copies stored by TrueDisc are located in the standard ISO 9660 filesystem"

That pretty much fucks up anyone's day when they wanna burn a UDF DVD, doesn't it? ISO9660 doesn't support files greater than 4 GB, you can only go 8 directories deep (until the 1999 spec, but I always had a hell of a time reading that stuff on anything but XP), and it has stupid filename restrictions (and then do you use Joliet or RockRidge or whatever to fix it, or not?)...
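For reference, a sketch of how one might relax some of those limits at mastering time with genisoimage/mkisofs (the output name and source directory are illustrative; whether a given player or OS honours these extensions is another matter):

    import subprocess

    subprocess.run([
        "genisoimage",
        "-iso-level", "3",    # relax the worst of the ISO9660 name/size limits
        "-J", "-R",           # Joliet and Rock Ridge extensions for longer names
        "-udf",               # add a UDF bridge structure alongside ISO9660
        "-o", "backup.iso",
        "archive_dir/",
    ], check=True)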

Re:ISO9660? (1, Informative)

Anonymous Coward | more than 7 years ago | (#18276738)

There's a 4 GB limit for single extents of files, but one can fragment files into multiple extents [wikipedia.org]. Theoretically, at least, since all the burning tools I've tried (basically mkisofs in its various forms and some shitty Windows software) just silently skip larger files and will happily produce an empty disc if that was the only file.

Re:ISO9660? (0)

Anonymous Coward | more than 7 years ago | (#18285718)

One cannot use Unicode characters on a CD at all. It's so stupid.

So you have to put your files into an archive anyway to protect the filenames, and at this point you may as well add a recovery record.

Re:ISO9660? (1)

NekoXP (67564) | more than 7 years ago | (#18287092)

Joliet supports Unicode, but then you have two file tables, and the length of Unicode filenames is limited. You could encode UTF-8 filenames in the RockRidge or ISO9660:1999 records, but then your "maximum filename length" becomes variable, which is too odd for users or standardisation. Basically, the ISO9660 format is too restrictive. The entire industry moved to UDF for the advanced DVD formats; it's a shame Chinese/Japanese DVD player manufacturers are too f**king lazy to support the advanced disc format, so there are those awful UDF/ISO hybrids, and DVD files are split into 1 GB chunks, and so on. What the hell is the point of that?

Either way, recovery protection on ISO9660 is a dead loss. Was it kept out of the HD-DVD standard for political reasons, or because it's a stupid, stupid idea to support improvements and firmware changes for a 10-year-old filesystem, especially for media that standardise on UDF?

Microsoft won't be adding features to FAT anymore and nobody's worried about that are they? :D

Free alternative: dvdisaster (2, Informative)

adam1101 (805240) | more than 7 years ago | (#18273950)

This $89 (or $52 intro price) TrueDisc sounds rather similar to the open source dvdisaster [sourceforge.net]. It builds Reed-Solomon error correction data from CD or DVD ISO images, which can either be appended to the image and burned on the same disc, or stored separately. It's somewhat similar to par2/quickpar, but dvdisaster is more specialized for CDs and DVDs.
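A rough dvdisaster workflow matching that description; the option names below are from memory of the dvdisaster command line and the file names are illustrative, so check dvdisaster --help before relying on them.

    import subprocess

    # Create error-correction data for an existing image, stored as a separate .ecc file.
    subprocess.run(["dvdisaster", "-i", "backup.iso", "-e", "backup.ecc", "-c"], check=True)

    # (Alternatively, "dvdisaster -i backup.iso -mRS02 -c" augments the image itself,
    #  so the error-correction data rides on the same disc as the data.)

    # Later, after re-reading a damaged disc back into backup.iso, repair the image:
    subprocess.run(["dvdisaster", "-i", "backup.iso", "-e", "backup.ecc", "-f"], check=True)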

How long does it need to last? (0)

Anonymous Coward | more than 7 years ago | (#18275500)

Burnable optical media is going to go bad, sooner rather than later. I have discs in my collection which are already unreadable after five years. So this is how paranoid I got about reliability for my important stuff:

1) Set up a Solaris 10 x86 file server running ZFS so that I can guarantee the data on the hard drives.
2) Use raidz2 with 6+2 drives. When one drive fails, I can power down the system until the replacement drive arrives. I still have one disk of protection when I power back up and wait for the resync.
3) Back up data to multiple DVDs and share them with out-of-state friends and relatives.
4) Rate each DVD with Nero's CD/DVD Speed.
5) Each year, re-test the DVDs to note any degradation. Reburn before they tank.
6) Every other year re-silver the hard drives.

Not that I am obsessive, you understand.

Implementation of existing technology (1)

AmiMoJo (196126) | more than 7 years ago | (#18275960)

This kind of thing already exists in the form of PAR files. Basically RAID5 but on a set of files, with arbitrary amounts of parity data (from 1% to 100%). The advantage of using PAR files, created with a program like QuickPAR, is that you can burn the parity files to a different disc.