Comments


How Intel and Micron May Finally Kill the Hard Disk Drive

m.dillon Re:HDD Pros (413 comments)

HDDs are not as recoverable as you seem to think; I have several bricked drives to show for it. There is also a trade-off in that a HDD's chance of failure rises dramatically over time no matter how little or how much you use it. Even keeping it on a shelf won't make it last longer. SSD failure mechanics are a very different beast. If your SSD is barely worn after 3 years of operation (and most will be), the failure rate will not be appreciably higher than when it was new. The chance of multi-bit failures eventually overwhelming the automatic scan/relocation (visible in SMART) will increase once appreciable wear occurs, but that wear is write-based, not time-based, and for most SSD users that means reliability can be maintained far longer than the 3 years one can normally depend on a HDD (assuming it isn't one of those 5% of HDDs that fails every year anyway).

And, again... You don't make backups? Depending on the recoverability of your hard drive virtually guarantees that you will lose all your data one day.

-Matt

yesterday

How Intel and Micron May Finally Kill the Hard Disk Drive

m.dillon Re:I like both (413 comments)

I hear this argument quite often and gotta ask... what, you don't have backups? When any of my storage dies I throw the drive away, stick in a new one, and restore from one of my two real-time backups (one on-site, one off-site). For that matter, I don't even trust any HDD that is over 3 years old. It gets replaced whether it reports any errors or not. And I've had plenty of HDDs fail with catastrophic errors over the years. Relying on a HDD to fail nicely is a false assumption.

Another statistic to keep in mind: SSD failure rates are around 1.5% per year, compared to roughly 5% per year for HDDs. And I suspect that, since HDD technology has essentially hit a mechanical brick wall with regard to failure rates (if you still want to pay $80 for one), SSD failure rates (which are more a function of firmware) will continue to drop while HDD failure rates stay about the same from here on out. And that's assuming the HDD is powered on the whole time. Power down a HDD for a month and its failure rate goes up dramatically once you power it back on. HDDs can't even be reliably used for off-line backups; SSDs can. SSDs have a lot of room to get even better. HDDs just don't.

It is also a lot easier to run a SSD safely for many more years than a HDD simply by watching the wear indicator or the sector relocation count ramp (actual life depends on the write load), whereas a hard drive's life is related more to power-up time regardless of load. If I only have to replace my SSDs (being conservative) once every 5-7 years vs my HDDs once every 3 years, that cuts many costs right there. I have yet to replace a single SSD, but I have replaced several HDDs purchased after that first SSD was bought. Just looking at the front-end cost doesn't tell the whole story: replacement cost, lost opportunity cost, time cost (time is money). There are many costs that matter just as much.
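Watching that wear ramp can even be automated. A minimal sketch, assuming smartmontools' `smartctl` is installed and that the drive exposes one of a few vendor-specific wear attributes (the attribute names below are common examples, not a guaranteed list — they vary by vendor):

```python
# Sketch: read a SSD's normalized wear indicator via smartmontools.
# Attribute names are vendor-specific; these are illustrative examples.
import subprocess

WEAR_ATTRS = ("Media_Wearout_Indicator", "Wear_Leveling_Count",
              "Percent_Lifetime_Remain")

def parse_wear(smartctl_output):
    """Return the normalized VALUE column (typically 100 = new, counting
    down toward the threshold) for the first recognized wear attribute."""
    for line in smartctl_output.splitlines():
        fields = line.split()
        if len(fields) > 3 and fields[1] in WEAR_ATTRS:
            return int(fields[3])
    return None

def ssd_wear(device="/dev/sda"):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True)
    return parse_wear(out.stdout)
```

Replace the drive well before the indicator approaches the drive's threshold and you sidestep the multi-bit failure window entirely.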

In terms of speed, I think you also don't understand the real problem. The problem is not comparing the 100-200 MByte/sec linear transfer rate of a HDD to the 500-550 MByte/sec of a SSD. The problem is that once the computer has to seek that hard drive, that 100-200 MBytes/sec drops to 20 MBytes/sec, and to 2 MBytes/sec in the worst case. The SSD, on the other hand, will still maintain ~400-550 MBytes/sec even doing completely random accesses. Lots of things can trigger this: de-duplication, for example. Background scans. Background applications (Dropbox scans, security scans). Paging memory. Filesystem fragmentation. Game updates (fragmented data files). Whatever.
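The collapse under seeking is easy to quantify with a toy model: each random request pays a full seek before it transfers any data. The seek time, transfer rate, and request sizes below are illustrative assumptions, not measurements:

```python
# Rough model: each random request pays a seek, then transfers its data.
# Effective throughput = data moved / (seek time + transfer time).

def effective_mb_per_sec(seek_ms, linear_mb_per_sec, request_kb):
    request_mb = request_kb / 1024
    transfer_ms = request_mb / linear_mb_per_sec * 1000
    return request_mb / ((seek_ms + transfer_ms) / 1000)

# Assuming an 8 ms average seek and a 150 MB/s linear rate:
print(effective_mb_per_sec(8.0, 150.0, 64))  # 64 KB random reads: ~7 MB/s
print(effective_mb_per_sec(8.0, 150.0, 4))   # 4 KB random reads: ~0.5 MB/s
```

That is the whole story in two lines of arithmetic: the drive spends almost all of its time moving the head, not moving data.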

People notice the difference between SSDs and HDDs because of the above, and it matters even for casual users like, say, my parents, who mostly only mess with photos and videos. They notice it. It's a big deal. It's extremely annoying when a machine reacts slowly. The SSD is worth its weight in gold under those conditions. And machines these days (laptops and desktops certainly) do a lot more work in the background than they used to.

There are still situations where HDDs are useful. I use HDDs on my backup boxes and in situations where I need hundreds of gigabytes of temporary (but linear) storage... mostly throw-away situations where I don't care if a drive dies on me. But on my laptops and workstations it's SSD-only now, and they are a lot happier for it. For that matter, in a year or two most of our servers will likely be SSD-only as well. Only the big crunchers will need HDDs at all.

Nobody who has switched from a HDD to a SSD ever switches back. People will happily take a big storage hit ($150 2TB HDD -> $150 256GB SSD) just to be able to have that SSD. Not a whole lot of people need huge amounts of storage anyway with so much video and audio now being streamed from the cloud. For that matter, even personal storage is starting to get backed up 'on the cloud' and there is no need to have a completely local copy of *everything* (though I personally do still keep a local copy).

-Matt

yesterday

How Intel and Micron May Finally Kill the Hard Disk Drive

m.dillon Re:Empty article.. (413 comments)

Only for linear accesses. Once you have to seek the head, it's all over. I'm kinda amazed that people still try to argue this point, HDDs lost the performance war long ago.

-Matt

yesterday

How Intel and Micron May Finally Kill the Hard Disk Drive

m.dillon Re:What about long-term data integrity? (413 comments)

You might as well ask the same question about a hard drive. If you power down a hard drive and put it on a shelf for a year, there is a better than even chance that it will be dead when you try to power it up again, and an even higher chance that it will die within a few days.

A powered-down SSD that has been written once should be able to retain data for ~10 years or so. Longer if kept in a cool place. As wear builds up, the retention time drops. You can look up the flash chip specs to get a more precise answer. A powered-up SSD should be able to retain data almost indefinitely as the self check will relocate failing sectors as they lose charge. However, in practical terms, it also depends on how the drive firmware is stored. The drive will die when the firmware is no longer readable. But that is true for hard drives as well.

-Matt

yesterday

How Intel and Micron May Finally Kill the Hard Disk Drive

m.dillon Re:Question (413 comments)

Hybrid drives do not use their meager flash to cache writes. The flash would wear out in an instant if they did that. It's strictly useful only for boot data and that is pretty much it, if a few seconds matters to you and you don't want to buy a separate SSD. For any real workload, the hybrid drive is a joke.

-Matt

yesterday

How Intel and Micron May Finally Kill the Hard Disk Drive

m.dillon Re:Question (413 comments)

Never buy hybrid drives, period. You are just multiplying the complexity of the firmware (resulting in more bugs, as Seagate's earlier attempts at hybrid drives revealed), and decreasing the determinism of the failure cases. And there's no point. A hybrid drive has a *tiny* amount of flash on it. It's good for booting and perhaps holding a program or two, and that is pretty much it. For someone who does so little on their computer that it would actually fit on the flash portion of a hybrid, a hard drive will be almost as fast. For someone who uses the computer more significantly, the hybrid flash is too small to matter.

My recommendation is to use only a SSD for workstations and desktops as long as you don't need terabytes of storage. For your server, if you can't afford a large enough SSD, then a SSD+HDD combination (or SSD + HDD/RAID) works very well. In this situation you put the boot and swap space on the SSD, plus you cache HDD data on the SSD.

This is pretty much what we do on our systems now. The workstations and desktops are SSD-only, the servers are SSD + HDD(s).

The nice thing about this is that with, say, a 256G SSD on the server caching roughly ~200GB worth of HDD data, the HDDs do not need to be fast. We can just use 2.5" 2TB green drives. Plus we can use large swap-backed ram disks and so on and so forth. Makes the servers scream.

-Matt

yesterday

How Intel and Micron May Finally Kill the Hard Disk Drive

m.dillon Re:Sure, but speed... (413 comments)

A serious photographer is sitting on $20K+ worth of equipment and a potentially ruined contract if any data is lost. Spending $1000 on a highly reliable high-capacity SSD to backup camera cards on a field trip is worth more than god.

-Matt

yesterday

How Intel and Micron May Finally Kill the Hard Disk Drive

m.dillon Re:LOL (413 comments)

A 7200 rpm HDD can do 200-400 IOPS or so on semi-random accesses (normal database access patterns). A 15K rpm HDD can do ~400-600 or so. Short-stroking a normal drive also gains you at least 100 IOPS (so, say, 300-500 IOPS on a short-stroked 7200 rpm HDD). That's off the top of my head.

A SATA SSD, of course, can do 60000-100000 IOPS or so and a PCI-e SSD can do even more.
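The HDD numbers can be sanity-checked from first principles: a truly random I/O costs roughly the average seek plus half a rotation, which gives the worst-case floor (the higher semi-random figures above reflect shorter seeks and command queueing). The seek times below are assumed, not quoted drive specs:

```python
# Worst-case random IOPS for a HDD: each I/O pays avg seek + half a rotation.

def hdd_random_iops(rpm, avg_seek_ms):
    half_rotation_ms = (60_000 / rpm) / 2  # average rotational latency
    return 1000 / (avg_seek_ms + half_rotation_ms)

print(round(hdd_random_iops(7200, 8.5)))   # ~79 IOPS, fully random
print(round(hdd_random_iops(15000, 3.5)))  # ~182 IOPS, fully random
```

A SSD has no moving parts to wait on, which is why its random IOPS figure is two to three orders of magnitude higher.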

-Matt

yesterday

How Intel and Micron May Finally Kill the Hard Disk Drive

m.dillon Re:Reliability (413 comments)

Depends on the application. For a workstation or build box, we configure swap on the SSD.

The point is not that the build box needs to swap; with 32G or more of ram it mostly doesn't. Having swap in the mix lets you make full use of your cpu resources: you can scale the build up to the point where the 'peaks' of the build eat just a tad more ram than you actually have (and thus page), which is fine because the rest of the build winds up better utilizing the ram and cpu that is there. So putting swap on a SSD actually works out quite nicely on a build box.
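The sizing intuition can be sketched as a toy model (every figure below is a made-up assumption for illustration; a real build box needs measurement):

```python
# Toy -j sizing: let the expected footprint slightly exceed RAM and let swap
# absorb transient peaks, instead of leaving CPUs idle to stay "safe".

def max_parallel_jobs(ram_gb, avg_job_gb, peak_job_gb, peak_fraction,
                      swap_headroom_gb):
    # Expected per-job footprint if peak_fraction of jobs peak simultaneously.
    per_job = peak_job_gb * peak_fraction + avg_job_gb * (1 - peak_fraction)
    return int((ram_gb + swap_headroom_gb) // per_job)

# 32G ram, 0.5G average job, 2G peaks hit by a quarter of jobs, 4G of swap:
print(max_parallel_jobs(32, 0.5, 2.0, 0.25, 4))  # 41 jobs
# Without the swap headroom you'd have to back off:
print(max_parallel_jobs(32, 0.5, 2.0, 0.25, 0))  # 36 jobs
```

The extra jobs are what keep the cpus saturated; the occasional paging lands on the SSD where it is cheap.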

Similarly, for a workstation, the machine simply does not page enough that one has to worry about paging wearing out the SSD. You put swap on your SSD for another reason entirely... to allow the machine to hold onto huge amounts of data in virtual memory from open applications, and to allow the machine to get rid of idle memory (page it out) to make more memory available for active operations, without you as the user of the workstation noticing when it actually pages something in or out.

A good example of this is doing mass photo-editing on hundreds of gigabytes of data. If the bulk storage is not a SSD, or if it is accessed over a network, that can cause problems. But if the program caches pictures ahead and behind and 'sees' a large amount of memory available, having swap on the SSD can improve performance and latency massively.

And, of course, being able to cache HDD or networked data on your SSD is just as important, so it depends how the cache mechanism works in the OS.

So generally speaking, there are actually not very many situations where you WOULDN'T want to put your swap on the SSD. On machines with large ram configurations, the name of the game is to make the most of the resources you have and not so much to overload the machine to the point where it is paging heavily 24x7. On machines with less ram, the name of the game is to reduce latency for the workload, which means allowing the OS to page so available ram can self-tune to the workload.

-Matt

yesterday

How Intel and Micron May Finally Kill the Hard Disk Drive

m.dillon Inevitable (413 comments)

Happening a little sooner than I thought, but the trend has clearly been going in this direction for a long time now. About a year ago I stopped buying 3.5" HDDs in favor of a combination of (short-stroked) 2.5" drives and SSDs. I already use only SSDs in all the workstations and laptops; the HDDs are only used by the servers now.

Now it is looking like I will probably not buy any more HDDs at all, ever again, even for the servers. That is going to do wonders for hardware life and maintenance costs.

It's a bit strange having a pile of brand-new perfectly working 1TB and 2TB 3.5" HDDs still in their static bags, unopened, in my spare drawer that I will likely never use again.

I wonder how long it will take case makers to start giving us 2.5"-only hot-swap options without all the 3.5" crap taking up room. Of course, some exist already... I mean for it to become the predominant case style.

-Matt

yesterday

Top Counter-Strike Players Embroiled In Hacking Scandal

m.dillon Difficult problem to solve (217 comments)

Part of the problem is that it is very difficult to tell a player using hacks from a player who is simply good at playing the game. I remember, a long time ago (10+ years) my brother was a counter-strike player who specialized in head shots. He was very good at it, but standing behind him while he played there were numerous occasions where he got kicked off a server due to players thinking he was cheating. He wasn't. I was standing right there behind him.

I think the only real solution is to video yourself playing the game so other players can see (after the match) that you were not using any cheats or hacks. Either that or play at an official location with monitors and public hardware.

-Matt

yesterday

Linux On a Motorola 68000 Solder-less Breadboard

m.dillon Re:Nice... (147 comments)

Motorola did produce an HCMOS version of the 68000 and related cpus, e.g. the 68HC000 and friends, which I used extensively. HCMOS was billed as a 'fast' version of CMOS, trading off some current draw for speed. As people might remember, HCMOS pulls and pushes about the same (though the ground paths are rated higher): about 50 ohms to either rail, more capacitive than resistive, so current draw ran more in line with frequency, you could use 1M pull-down resistors on the tri-state buses, and the logic was pretty bullet-proof in terms of noise and reflections.

I really loved HCMOS and hated TTL, but eventually advanced TTL beat it out (at least for the pin interface logic).

-Matt

4 days ago

Linux On a Motorola 68000 Solder-less Breadboard

m.dillon Re:The original 68000 interrupts were inadequate (147 comments)

Interrupts worked fine. It was bus errors (i.e. for off-chip memory protection and/or mapping units) that were a problem. The 68010 fixed that particular issue if I recall. I'm guessing later 68008's also did but I dunno. Doesn't matter since he isn't running with any memory protection.

You could in fact run a real multi-tasking OS on the 68000. I was running one of my own design for my telemetry projects. It didn't have memory mapping but it did have memory protection via an external static ram, 8:1 selector, and some logic. It managed around 20-30 processes.

And, strangely enough, you could also run a RTOS because the 68000 had wonderful prioritized interrupts. Back then, of course, real time response was required for handling serial ports and things like that.

-Matt

4 days ago

Linux On a Motorola 68000 Solder-less Breadboard

m.dillon Re:Hey, congratulations (147 comments)

Er, I meant 8:1 selector (the R/~W bit was fed into one of the select inputs). The function code logic was used to selectively enable/disable the memory protection unit, so supervisor accesses bypassed it while user accesses did not. Which is good because it wouldn't have been able to boot otherwise.

Another use for the FC logic is to speed up the auto-vector code. The 68K had wonderful asynchronous interrupt logic. You basically had 8 priority levels and you could feed your I/O chips into a simple 8:3 priority selector and feed the result into the interrupt priority level pins on the cpu. The 68K would then do an interrupt vector acquisition bus cycle to get the vector (or you could tell it to generate an internal vector). Every once in a while the async logic would screw up and we'd get an uninitialized interrupt vector but the code to deal with that was trivial, and since it was all level logic the hardware would sort it out soon enough and calculate the correct IPL to request from the bus.

The autovectors are slow, though... it was far better to generate vectors from a ram (if I remember right). The FC logic could be used to force the access to the ram or the eprom (since the address lines I think were all 1's except for three bits defining the priority level being fetched, or something like that).

In contrast, even to this day Intel STILL can't get their interrupt logic to work properly. Even the MSI-X logic is broken in a lot of chipsets. Yuch. So much ridiculous and unnecessary complexity with Intel interrupt handling with all the idiotic IOAPIC and LAPIC sub-processors with non-deterministic reaction times and serialization problems and other stupid stuff. The motorola interrupt logic was a dream in comparison.

-Matt

4 days ago

Linux On a Motorola 68000 Solder-less Breadboard

m.dillon Hey, congratulations (147 comments)

Congratulations, you are now in a rare group indeed. But I gotta say, you haven't lived until you've programmed a 6502 directly in machine code. No assembler :-)

One of my telemetry systems, which I built and designed 25+ years ago, used the 68000 running at around 10 MHz (with a jumper for 20 MHz, which it could actually do, though I didn't deploy it at those speeds). The coolest thing about it was a rudimentary memory protection unit I had built using a static ram and an 8:2 selector. Any user-mode access pumped the high address bits into the ram, with 2 address bits going to the selector along with the R/~W bit. The result was gated into the bus error logic. The top three bits of the static ram were directly controlled by the kernel, which allowed the kernel to 'cache' up to 8 processes' worth of protection data in the ram at any given moment, so context switches were still very fast.

Before the 68030 and 68040 came out, Sun (I think) was running two 68000s in lockstep, one running one cycle behind the other, in order to implement their own MMU. When a fault occurred, they bus-errored the lead chip and paused the second 'behind' chip so they could take the bus fault, resolve the mapping issue, and then resume the behind chip. Then the 68010 came along and fixed the bus error interrupt stacking bugs in the 68000, and the 68020 came along after that.

The 68030 could hold short loops in its chip logic with some tricks, despite not really having a cache. Unfortunately, the 68040's on-chip cache implementation was horrible and created all sorts of problems for implementers, and by then Intel chips were running much much faster.

When Motorola retired the 68K series, some of their larger embedded users asked Motorola to re-test the 68000 chip specs at a higher clock, since by then the HCMOS process could obviously run the chip much faster than the ~10-12 MHz it was spec'd for. Motorola tested the HCMOS version of the chip to around 50-70 MHz or so. Such a nice 32-bit chip; I was really sorry to see Moto lose to Intel (mostly because Moto gave up).

-Matt

4 days ago

HTML5: It's Already Everywhere, Even In Mobile

m.dillon I like it but... (133 comments)

but... operation is not even remotely smooth enough to compete with apps running on native graphics libraries, on Apple or Android. Still too sludgy; browser implementations still do not provide enough concurrency to make it work well.

-Matt

about a week ago

Apple Disables Trim Support On 3rd Party SSDs In OS X

m.dillon Re:If only history (327 comments)

You obviously have not had to deal with customer/user complaints about filesystem corruption. Because if you had, you'd rapidly come to the conclusion that the manpower required to service all of those complaints, let alone track down the cause (which is virtually impossible if TRIM is enabled), not to mention the bad rep that you get whether it is your fault or not, is simply not worth it.

-Matt

about two weeks ago

Apple Disables Trim Support On 3rd Party SSDs In OS X

m.dillon Re:Easier solution (327 comments)

Complete nonsense. You seem to think that TRIM is some sort of magical command that makes SSDs work better. Spend a little more time on understanding the reality and your opinion will change.

Overprovisioning works extremely well. So well that there is really no reason to use TRIM, and plenty of reasons to avoid using it due to the many reasons stated by myself and a few others here.

Overprovisioning and TRIM do not have linear relationships. You don't get double the value by using both, or even double the value by, say, doubling the amount you overprovision, or doubling the amount of free space the SSD thinks it has due to using TRIM. Beyond a certain point, overprovisioning and TRIMmed space will have only a minimal effect on actual wear because, as I've said already, a good SSD will do both static AND dynamic wear leveling. Dynamic wear leveling alone using TRIMmed or overprovisioned space, no matter how much space is available, only delays the inevitable need to relocate static blocks.
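The static-vs-dynamic point is easy to see with a toy model: if 'static' blocks are never relocated, every rewrite cycles through whatever pool remains, and extra TRIMmed or overprovisioned space only dilutes that so far. A purely illustrative sketch (the block counts and write volume are made up):

```python
# Toy wear model under perfect leveling: erases spread evenly over the
# pool of blocks eligible for rewriting.

def worst_block_erases(total_blocks, static_blocks, total_writes,
                       static_leveling):
    # With static leveling, cold data gets moved so every block shares wear;
    # without it, only the non-static blocks absorb all the rewrites.
    pool = total_blocks if static_leveling else total_blocks - static_blocks
    return total_writes / pool

WRITES = 1_000_000
# 1000-block drive, 90% occupied by data that never changes:
print(worst_block_erases(1000, 900, WRITES, static_leveling=False))  # 10000.0
print(worst_block_erases(1000, 900, WRITES, static_leveling=True))   # 1000.0
```

Growing the free pool from 100 to 200 blocks merely halves the dynamic-only figure; static leveling changes the asymptote.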

You can reduce the copying the SSD has to do, but beyond a certain point the amount of copying that remains will be so far under the radar compared to nominal filesystem operation (normal writes to files, etc), that it just won't have any more of an impact.

The non-deterministic nature of TRIM (which depends heavily on how full and how fragmented your filesystem is), not to mention firmware bugs, filesystem bugs, the impossibility of diagnosing and tracking down corruption when it occurs, and problems with feeding TRIMs through block managers which use large-block CRCs, winds up creating huge complexity that increases your chance of hitting a bug somewhere that blows you up. It's just not a good trade-off. I will take the *highly* deterministic and generally bullet-proof overprovisioning method over TRIM any day of the week.

The only time I advocate using TRIM is when someone wants to wipe and repartition a SSD from scratch. That's it. I don't even advocate it for cleaning up the swap partition on reboot because then you can't recover crash dumps (and you might already be paging after that point so...). Though I suppose one could add some logic to TRIM the swap partition if no crash dump is present. That's about as far as I would go, though.

-Matt

about two weeks ago

Apple Disables Trim Support On 3rd Party SSDs In OS X

m.dillon Re:Why? (327 comments)

I can't agree with your reasoning. The many computer companies that have lived and died over the years have primarily died because they were producing hardware that could not keep pace with developments in the industry.

Commodore... I was a developer for the Amiga (and a machine language programmer in my PET days). Commodore died because AmigaOS was 100% dependent on Motorola, and Motorola couldn't keep up with Intel, period.

NeXT died for the same reason. NeXT couldn't keep up with Intel and by the time Jobs caved in and went with his dual-architecture 68K/Intel binary format, it was too late. Also, depending on display-postscript for EVERYTHING was a huge mistake for NeXT and having 15+ year old OS tools for a weird Mach/BSD core that they never really updated messed them up too. I was a developer for the NeXT too.

In modern times, everyone runs on similarly powerful hardware and generally can stay up-to-date on the hardware front. OS makers die from a lack of apps or a lack of ease-of-use. Apple certainly does not suffer from either.

Linux and the BSDs are entirely dependent on a relatively common library of ~20,000 to 30,000 or so (substantial) open source apps in order to stay relevant, but all suffer from the lack of a cohesive GUI that is powerful and easy to use. KDE, Gnome, the many other little window managers available... none hold a candle to either Apple or Windows. Unfortunately. At least as a consumer machine.

I have no problem running linux or a BSD as my workstation, as long as I am only doing programming or browsing. But if I want to play a *real* game or run *real* photo or video software (not something stupid like gimp which is virtually unusable)... then I have to shift my chair over to my Windows box or my refurbished Mac laptop. For that matter, if I want brainless printing which just works, I have to run it through my Windows box because CUPS is an over-engineered piece of crap that only works well on Macs... certainly not on linux or any of the BSDs.

-Matt

about two weeks ago

Submissions

m.dillon hasn't submitted any stories.

Journals

m.dillon has no journal entries.
