


Comment Re:Commence Pedantry (Score 1) 492

If I recall correctly, the reason we call our usual distros like Fedora GNU/Linux or Debian GNU/Linux or Arch GNU/Linux is because the GNU part is the userspace stuff and the Linux part is the kernel.

Who is "we"? Almost nobody does that, because it is almost pointless. GNU accounts for the most commonly used C compiler, C library, and a handful of other utilities in a typical Linux distribution. The vast majority of the code is developed and owned by someone else. GCC has healthy competition in the form of Clang and with a little competition in the C library and utility space there might be real world Linux distributions without much in the way of GNU code at all.

Comment Re:I could have sworn this was intentional (Score 1) 40

Most NICs don't drop packets with bad L3/L4 checksums

Traditionally, NICs do not even "know" that there is such a thing as Layer 3, let alone check it in any way. L3 checksum validation is a bonus feature.

Bad L3 checksums tend to be caused by defective networking hardware, and in this case the defective networking hardware of the recipient. If you are using checksum validation offload, ignoring the result in the presence of defective hardware isn't likely to make a difference either way.
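For concreteness, here is what "L3 checksum validation" actually computes: the IPv4 header checksum is a 16-bit ones'-complement sum, and a receiver judges a header valid if summing it, checksum field included, folds to 0xFFFF. A minimal sketch in Python; the sample header is a well-known textbook example, not captured traffic.

```python
# Verify an IPv4 header the way a NIC's checksum offload would:
# sum the header in ones'-complement arithmetic and expect 0xFFFF.

def ones_complement_sum(data: bytes) -> int:
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:                      # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return total

# 20-byte IPv4 header with the checksum field zeroed (a standard worked example)
header = bytes.fromhex("4500003c1c46400040060000ac100a63ac100a0c")

# sender side: checksum is the complement of the sum over the header
csum = ~ones_complement_sum(header) & 0xFFFF
header = header[:10] + csum.to_bytes(2, "big") + header[12:]

assert csum == 0xB1E6                        # the known value for this header
assert ones_complement_sum(header) == 0xFFFF # receiver's validity check passes
```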

Comment Re:Data needed (Score 2) 40

How often does the TCP/UDP checksum detect errors that the previous two could not?

TCP/UDP checksums are useful primarily for one thing: mitigating the effects of defective network hardware. That is about the only thing that can cause a transport-level checksum error. Anything else is caught with very high probability by Layer 2 protocols, which typically use a 32-bit CRC. Some Layer 2 protocols do have relatively weak checksums, but not so weak that TCP checksums are likely to catch much more than they do.
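The weakness of the transport checksum relative to a Layer 2 CRC is easy to demonstrate: the ones'-complement sum used by TCP and UDP is commutative over 16-bit words, so reordering words in a segment leaves the checksum unchanged, while a CRC-32 catches it. A sketch with hypothetical data:

```python
import zlib

# The 16-bit ones'-complement Internet checksum used by TCP/UDP
# (even-length buffer assumed for brevity).
def inet_checksum(data: bytes) -> int:
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:                      # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

original = bytes([0xDE, 0xAD, 0xBE, 0xEF, 0x12, 0x34])
swapped  = bytes([0xBE, 0xEF, 0xDE, 0xAD, 0x12, 0x34])  # two words exchanged

assert inet_checksum(original) == inet_checksum(swapped)  # checksum is blind
assert zlib.crc32(original) != zlib.crc32(swapped)        # CRC-32 is not
```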

Comment Re:What could possibly go wrong? (Score 2) 346

It doesn't help that California imports a third of its electricity from out of state. That is a prescription for instability, as are a number of other California energy policies, like price controls and punitive retail electricity pricing schemes. The state's electricity problems are largely self inflicted.

Comment Re:No supercapacitors? (Score 1) 117

> And when you're doing that - when you're really designing for failure - the type of disk you use really doesn't matter.

In principle, sure. In practice you don't want a power glitch in your data center to potentially corrupt every disk in the facility beyond the point of recovery. You can't recover your site from a remote location if none of your systems will even boot.

This could happen at an ordinary office location as well: the lights go on and off a couple of times, and everyone's desktop is a brick. I don't think so. That is not practical.

Comment Re:No supercapacitors? (Score 1) 117

You can't defend the indefensible. Consumer SSD manufacturers purvey millions of devices with known catastrophic failure modes that could be remedied with a few cents in extra parts.

Sort of like the engineers of the Tacoma Narrows Bridge putting it into mass production. It only fails during wind storms, you see.

Comment Re:No supercapacitors? (Score 1) 117

Internally, all SSDs essentially implement a database of disk sectors, the equivalent of a log structured filesystem with a single file, due to the inability to overwrite existing data in place. A power loss without backup capacitors places the integrity of that database at risk, which is why any power loss can lead to the loss of completely unrelated areas of the logical device, not just areas with pending in flight writes, and often the contents of the device in its entirety.

It is conceivable that with extremely careful design an SSD could provide power loss protection without backup capacitors, but apparently no one has managed to do that yet - or at least not with adequate performance. Take a good look at the design of something like ZFS for a clue as to how one might go about doing that.
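The "database of sectors" idea, and the ZFS-style remedy, can both be sketched in a few lines. This is a toy model, not any vendor's actual design: writes append to a log, a mapping table translates logical sectors to log positions, and losing the table loses every sector, not just the in-flight one. Crash safety comes from copy-on-write: build the new mapping first, then commit by atomically replacing a single root pointer.

```python
# Toy flash translation layer: append-only log plus copy-on-write mapping.
class ToyFTL:
    def __init__(self):
        self.log = []                 # append-only "flash" log
        self.root = {}                # committed logical->log-index mapping

    def write(self, lba: int, data: bytes) -> dict:
        self.log.append(data)         # step 1: data lands in the log
        new_map = dict(self.root)     # step 2: copy-on-write the mapping
        new_map[lba] = len(self.log) - 1
        return new_map                # not yet visible to readers

    def commit(self, new_map: dict) -> None:
        self.root = new_map           # step 3: single atomic pointer flip

    def read(self, lba: int) -> bytes:
        return self.log[self.root[lba]]

ftl = ToyFTL()
ftl.commit(ftl.write(0, b"old"))
pending = ftl.write(0, b"new")        # suppose power fails before commit...
assert ftl.read(0) == b"old"          # ...the old data is still intact
ftl.commit(pending)
assert ftl.read(0) == b"new"          # after commit, the new data is visible
```

The crucial property is that at every instant the committed root points at a fully consistent state; a crash between steps merely discards uncommitted work instead of corrupting unrelated sectors.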

The ZFS designers have an advantage though. Filesystem APIs make a distinction between data that needs to be committed right now and data that the user doesn't particularly mind being lost for the past thirty or forty seconds before a crash or power failure. Disk interfaces generally do not. When the FS asks the disk to flush all outstanding writes to persistent storage it had better commit in a hurry, tens of milliseconds not tens of seconds from now.
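The API distinction described above looks like this in practice: an ordinary write may sit in OS caches for many seconds, while fsync demands durability now, and the disk flush command issued underneath it is expected to complete in milliseconds. A minimal sketch; the file name is illustrative only.

```python
import os

# Append a record that must survive a power failure before we acknowledge it.
with open("journal.log", "ab") as f:
    f.write(b"committed transaction record\n")
    f.flush()                 # push Python's userspace buffer to the kernel
    os.fsync(f.fileno())      # force the kernel (and the disk) to persist it
    # only after fsync returns may the application report the
    # transaction as durable
```

Everything the application does not fsync is the "thirty or forty seconds" class of data: the OS will write it back eventually, and nobody is promised it survives a crash.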

The problem here is that consumer grade SSDs cannot commit transactions reliably in the presence of power loss, and often cannot recover their own internal databases to some sort of useful consistent state after a power loss either. It is amazing how fast you can make a filesystem go if you remove all the safety features. Same deal here.

Comment Re:No supercapacitors? (Score 1) 117

SSDs that do not put any of their contents at risk during a power loss except possibly areas addressed by recent, unflushed sector writes are probably the best thing that has happened in enterprise hardware in the past decade.

It is the unnecessarily flaky way consumer grade SSDs are designed and manufactured, to the great concern of approximately no one, that is annoying.

Comment Re:No supercapacitors? (Score 1) 117

Most SSDs are garbage from an engineering perspective. No one in his right mind would use them to store important data, not in a RAID or any other configuration, if he can avoid it. They are unreliable and when they fail it is generally not a lost sector or two here and there, it is their entire contents.

This is nothing but engineering malpractice. A mirrored pair of real hard drives is not generally susceptible to catastrophically losing all your data at a moment's notice; RAID solves that problem. On most SSDs, however, a simple power loss can destroy the data on every drive in your RAID group simultaneously. Say goodbye to everything stored since your most recent backup.

RAID means Redundant Array of Inexpensive Drives. It doesn't work with a Redundant Array of Defective Drives, which is what most SSDs are, in this most crucial respect.

Comment Re:No supercapacitors? (Score 1) 117

> The bottom line is ALL DRIVES FAIL. You HAVE to backup. You WILL be restoring from backup.

Guess what? The entire modern financial system revolves around the proposition that block devices do not lose their entire contents during a power failure. "Sorry, we wired a large sum of money somewhere but don't quite recall where" doesn't fly in the real world.

The quaint notion of a data "backup" does not suffice when you can't lose any data recorded since your last one. Such as email messages recently sent and received, for example.

Going out of your way to make borderline defective devices that will force users to resort to backups of varying degrees of staleness simply because you wanted to save a few cents on manufacturing costs isn't common sense, it is more like engineering malpractice.

Comment Re:No supercapacitors? (Score 1) 117

I haven't had a hard drive fail hard on me since the IBM DeathStar drives from the early 2000s - the drives that caused them to get out of the hard drive business.

Should that overwhelming anecdotal evidence convince the reader that hard drives do not catastrophically fail, not ever? Isn't a sample size of a couple of dozen drives enough to make an accurate generalization to a global population? Or am I just lucky?

Comment Re:UPS power isn't exactly expensive (Score 1) 117

UPS power isn't expensive: just a hundred dollars, ten or twenty pounds of lead acid batteries, special communication cables, and automatic shutdown software, all to make up for the failure of SSD manufacturers to include a gram or two of capacitor protection.

Real hard drives, by the way, are not at risk of losing their entire contents in an unexpected power loss. Most SSDs are. Not to be used if you actually want your data to still be there when you return in the morning.

The reason is that SSDs require a drive-level transaction, one that potentially puts the entire drive contents at risk, to complete any write anywhere on the device. Given that most modern operating systems write to disk nearly continuously for various reasons, the contents of your entire device are almost always at risk with most SSDs.

One new entry to a log file, power failure, and there goes 100 GB of other data, lost without recovery. Sorry.
