
OCZ Releases First 1TB Laptop SSD

timothy posted more than 2 years ago | from the want-want-want-want dept.

Data Storage

Lucas123 writes "OCZ today released a new line of 2.5-in. solid state drives with up to 1TB of capacity. The new Octane SSD line is based on Indilinx's new Everest flash controller, which allows it to cut boot-up times in half compared to previous SSD models. The new line is also selling for $1.10 to $1.30 per gigabyte of capacity, meaning you can buy the 128GB model for about $166."


OCZ (1)

hedwards (940851) | more than 2 years ago | (#37784528)

Didn't they have reliability problems in the past? Am I wrong about that, or have they finally fixed them?

Re:OCZ (5, Informative)

Kjella (173770) | more than 2 years ago | (#37784558)

Supposedly they fixed one BSOD bug [anandtech.com] a few days ago. That wouldn't apply to this controller anyway, but their record isn't spotless. Then again, Intel has managed an SSD blemish too, so... you're seeing an industry moving at breakneck speed; just make sure your neck isn't the one on the line.

Re:OCZ (1)

blair1q (305137) | more than 2 years ago | (#37784638)

How do these things take to RAID configurations?

Re:OCZ (1)

silas_moeckel (234313) | more than 2 years ago | (#37784740)

RAID 0, 1, and 10 are fine. With RAID 5 or 6 you quickly start running into the RAID controller being the bottleneck; it's a whole lot of CPU grunt to do those parity calculations at a gigabyte or more a second.

Re:OCZ (1)

blair1q (305137) | more than 2 years ago | (#37784942)

Is there a chipset that does RAID 5 or 6 in HW?

Re:OCZ (0)

Anonymous Coward | more than 2 years ago | (#37785058)

3ware

Re:OCZ (2)

silas_moeckel (234313) | more than 2 years ago | (#37785304)

It's not that easy: they require reads in order to write. Say you have 5 drives; that's 4 blocks of data and 1 block of parity info. The OS has to be able to write out as little as a single block, so you have to read 4 blocks, then write out 5. In reality the stripe size is bigger than one block. Now the controller has to handle a pile of these at the same time; all real RAID controllers do this in hardware. Not your el cheapo motherboard built-in RAID, but most of the add-in cards over a couple hundred bucks. If you really want to get more space out of your SSDs, try something like an Adaptec 6Q; it does hot read and write caching. If you're running Linux, dm-cache and similar will do the same thing. Fusion-io and others have proprietary software to do this with their SSDs in Windows as well.
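To make the parity cost concrete, here's a toy Python sketch of RAID-5 parity math (block sizes and values are invented for illustration, not from any real controller). A full-stripe write computes parity from the data alone; updating a single block needs the old data and old parity read back first:

```python
# RAID-5 parity is the XOR of the data blocks in a stripe.
def xor_blocks(*blocks: bytes) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Full-stripe write on a 5-drive set (4 data + 1 parity): no reads needed.
data = [bytes([n]) * 4096 for n in range(4)]
parity = xor_blocks(*data)

# Partial write of a single block: read old data + old parity,
# then write new data + new parity.
#   new_parity = old_parity XOR old_data XOR new_data
new_d0 = bytes([9]) * 4096
new_parity = xor_blocks(parity, data[0], new_d0)

# Same answer as recomputing the whole stripe, without touching 3 other drives.
assert new_parity == xor_blocks(new_d0, data[1], data[2], data[3])
```

Every small write the OS issues turns into that read-modify-write dance, which is where the CPU grunt goes.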

Re:OCZ (0)

Anonymous Coward | more than 2 years ago | (#37785646)

After reading some recent reviews of hardware vs. software RAID: the software actually performed faster in some cases, because CPUs have gotten so fast that the extra work of doing RAID calculations is barely a tax on the CPU... something like 2% of a 3GHz quad core at most. Now imagine a Core i7 with 8+ cores and 16GB of RAM; that would be far more capable of RAID functions than a single-CPU hardware RAID card.

Re:OCZ (1)

kungfuj35u5 (1331351) | more than 2 years ago | (#37787212)

A stripe would be glorious from a performance perspective. It would only take a few drives to saturate the bus to the CPU.

Re:OCZ (1)

Sancho (17056) | more than 2 years ago | (#37784776)

I think that TRIM isn't supported in RAID5. Not sure about other RAID levels. I would expect that you'd be able to RAID1 just fine, though.

Re:OCZ (1)

laffer1 (701823) | more than 2 years ago | (#37784926)

TRIM doesn't work with RAID, period. Most drives can work with RAID 0 or 1.

Re:OCZ (5, Interesting)

Cyberax (705495) | more than 2 years ago | (#37785372)

Not so fast!

You can use my https://github.com/Cyberax/mdtrim/ [github.com] to periodically TRIM unused space on software RAID-1 on Linux (ext3/4 are supported right now).

Extending it for RAID-0/5/6 is not hard, but right now I don't have time for this.

Re:OCZ (0)

Anonymous Coward | more than 2 years ago | (#37786238)

Why is TRIM hard with RAID? Is it only an issue with hardware RAID?

Re:OCZ (2)

Cyberax (705495) | more than 2 years ago | (#37786922)

It's not easy to implement completely correctly, even in software. My approach is to create a large file which takes up almost all free space, TRIM its contents, and then just remove the file. Obviously it's a bit racy if you have services which could suddenly need several more gigabytes of disk. On the other hand, TRIM-ing a 160GB SSD takes about 5 seconds, so I don't worry about it.

Here is a thread about it on linux-raid: http://thread.gmane.org/gmane.linux.raid/31941/focus=32022 [gmane.org]

As far as I know, there are no hardware RAIDs with TRIM support.
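For illustration, here's a minimal sketch of the fill-then-release idea described above (the scratch file name and the 1GB safety margin are invented, and mdtrim's real extent-mapping and TRIM-issuing logic is more involved; it's left as a stub here):

```python
import os

def claim_free_space(mountpoint: str, margin: int = 1 << 30) -> None:
    """Allocate almost all free space to a scratch file so that every block
    it occupies is known to be unused and can be handed to a TRIM helper."""
    st = os.statvfs(mountpoint)
    to_claim = st.f_bavail * st.f_frsize - margin  # headroom: the racy part
    if to_claim <= 0:
        return  # filesystem already nearly full; nothing to do
    path = os.path.join(mountpoint, ".trim-scratch")
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    try:
        os.posix_fallocate(fd, 0, to_claim)  # reserve the blocks on disk
        os.fsync(fd)
        # ...map the file's physical extents (e.g. via the FIEMAP ioctl)
        # and issue TRIM for those ranges to the member devices here...
    finally:
        os.close(fd)
        os.unlink(path)  # hand the space back to the filesystem
```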

Re:OCZ (1)

nabsltd (1313397) | more than 2 years ago | (#37786716)

TRIM doesn't work with RAID, period. Most drives can work with RAID 0 or 1.

I just installed a pair of SSDs in RAID-1 using Fedora 15. I had searched far and wide to see if TRIM was supported, and didn't find anything conclusive, so I installed anyway, since I'm only using 16GB out of the 64GB drive (which means that even non-TRIM-aware wear-leveling would last 4 times as long). After I had installed, I did some more searching, and somewhere I found info that said the 3.0 kernels (which Fedora 15 uses, although renumbered to keep some apps from breaking) supported it. I didn't bookmark it, but I think it was part of the kernel mailing list.

Based on the fact that /sys/block/[device]/queue/discard_granularity contains 512 for every device in the stack (LVM/dm -> md -> sd[ab]) and there are no warnings at mount time with ext4 using the "discard" option, I believe it. Also, for any device not on top of an SSD, "discard_granularity" is 0 and you get warnings at mount time if you try to use the "discard" option.
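That check is easy to script (a sketch following the sysfs layout described above; device names will vary per machine):

```python
from pathlib import Path

# A granularity of 0 means the device rejects discard (TRIM) requests,
# which is when ext4's "discard" mount option warns.
for f in sorted(Path("/sys/block").glob("*/queue/discard_granularity")):
    granularity = int(f.read_text())
    device = f.parts[3]  # /sys/block/<device>/queue/discard_granularity
    status = "discard supported" if granularity else "no discard"
    print(f"{device}: discard_granularity={granularity} ({status})")
```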

Re:OCZ (1)

Cyberax (705495) | more than 2 years ago | (#37789246)

Try my https://github.com/Cyberax/mdtrim/ [github.com]; we use it to periodically TRIM empty space. It works fine with md RAIDs on raw devices; it won't work with LVM, though.

Unfortunately, TRIM on stock md devices does NOT work, even where LVM/dm TRIM works. You won't get any warnings from ext4; TRIM just silently won't work, because md doesn't pass through TRIM requests. There's a thread (dated August of this year) on linux-raid where the md maintainer says TRIM is not his priority.

Re:OCZ (2)

Runaway1956 (1322357) | more than 2 years ago | (#37786334)

So I'm wondering: why, exactly, do people want to do RAID with SSDs? There are really only two reasons (that I'm aware of) for RAID: either you need performance, which the SSD already delivers, or you need redundant security for your data. Even in a corporate setting, it seems that running the OS and applications from the SSD while keeping data on RAID would be a good solution. Investing in enough SSDs for RAID arrays seems a bit of a waste.

Servers, on the other hand, might make good use of SSDs in RAID arrays. I wonder if Google or Amazon are going that route yet? About the time they both finish upgrading all of their servers to SSD RAIDs, the price of SSDs should be down to where I can afford a terabyte drive!

Re:OCZ (2)

nabsltd (1313397) | more than 2 years ago | (#37786786)

So I'm wondering: why, exactly, do people want to do RAID with SSDs? There are really only two reasons (that I'm aware of) for RAID.

And both are just as valid for SSDs as they are for spinning disks.

RAID-0 can turn SSDs from fast to insane (if you have the controller bandwidth). Any of the data-protection RAID levels will help if your drive-with-really-new-and-somewhat-untested-technology dies an untimely death.

Re:OCZ (1)

kmoser (1469707) | more than 2 years ago | (#37787870)

Another reason: to be able to address the RAID as a single, contiguous (virtual) volume.

Re:OCZ (1)

ttong (2459466) | more than 2 years ago | (#37788628)

That's what Logical Volume Management and its friends are for.

Re:OCZ (1)

Cyberax (705495) | more than 2 years ago | (#37789266)

Keeping the OS on SSDs doesn't make any sense for server applications. You don't need to worry about boot speed, and all your code is usually in RAM anyway.

However, databases and SSDs are a natural fit. We accelerated our server code more than 10 times just by moving it to SSDs. And since we're storing critical data, RAID-1 is a must (backups are the last resort; you inevitably lose some data if you have to restore from backups).

Re:OCZ (1)

Luckyo (1726890) | more than 2 years ago | (#37786338)

It's worth noting that if you desperately need more speed, you should look at OCZ's Revo line of drives. Those are SSDs mounted as a cluster of four separate drives linked to a single RAID 0 controller, all on the same PCI-E card.

The reason I'm not touching those is that the RAID controller doesn't let TRIM commands through, so after a year or so the drive will slow down significantly. That said, with RAID 0 and a PCI-E interface, it will still smoke any and all SATA solutions.

Re:OCZ (4, Informative)

iamhassi (659463) | more than 2 years ago | (#37785068)

10/17/2011 : After months of end user complaints, SandForce has finally duplicated, verified and provided a fix for the infamous BSOD/disconnect issue that affected SF-2200 based SSDs. [anandtech.com]

Wow, that's not something anyone wants to see: a bug in their hard drive. The CPU I can replace, the RAM I can replace... pretty much everything I can swap out, but my hard drive is where everything is stored; I can't risk losing data because of a bug.

Intel just had problems too that caused loss of data: [storagereview.com]
"JULY 13TH, 2011: Intel has recently acknowledged issues with the new SSD 320 series, whereby repetitively power cycling the drives can cause some to become unresponsive or report an 8MB drive capacity."

I was waiting on an SSD until they worked out the bugs, and there were no articles about problems for a while, but with stories like these I'll keep waiting; it's just not worth the risk.

Re:OCZ (2)

FyRE666 (263011) | more than 2 years ago | (#37785256)

Well, my experience was that the issue wasn't fixed. I just returned one of these drives due to lockups, a "disappearing drive," and random BSODs. This happened with a Corsair 120GB Force 3 SSD, but I know the OCZ drives are also affected. The issues have been going on for months.

Re:OCZ (1)

dbraden (214956) | more than 2 years ago | (#37787342)

Were you using the firmware update that was just released three days ago? I'm not being a dick, just genuinely curious.

Re:OCZ (2)

FyRE666 (263011) | more than 2 years ago | (#37789042)

Well, I had used the firmware available on the same day this article was posted. I returned the drive that day, after the update still failed to fix the issues. I had tried the drive in 2 different machines with different motherboards, and in each case the problems occurred in the same way whenever the drive was used. Plenty of other people on the forums have had the same experience, so basically I have no faith whatsoever in SandForce controller-based SSDs. I've bought several in the past; my gaming rig has a RAID 0 setup with the OCZ SSDs, and it experiences random lockups as one or the other drive disappears. On the gaming rig the problem seems less frequent, and as it's just used for games it isn't so much of an issue, although I'll be replacing them soon.

Re:OCZ (3, Insightful)

Anonymous Coward | more than 2 years ago | (#37785280)

SSDs aren't meant for, or ready to be, a mass storage solution. They are a performance product aimed at system drives and portable devices: situations where replacing them isn't a big deal data-wise.

Also, if you're storing data on only one drive, you're an idiot no matter what kind of drive you're using.

Re:OCZ (0)

Anonymous Coward | more than 2 years ago | (#37785396)

Yes, I would assume SSDs are best suited to running the OS/software, and that all data should be stored separately and redundantly.

Re:OCZ (0)

Anonymous Coward | more than 2 years ago | (#37787436)

Unless it's an enterprise PCIe-style direct-attached storage device, like Fusion-io, in which case you get like 3-5 years without failures (provided you're willing to pay for that level of reliability!!).

Re:OCZ (1)

vaporland (713337) | more than 2 years ago | (#37787574)

i'm immune - i keep MY data in the cloud! :-p

Re:OCZ (1)

ShakaUVM (157947) | more than 2 years ago | (#37785424)

>>wow, that's not something anyone wants to see,

There have been a LOT of "you lose everything" bugs in the SSD market up till now.

Buying the latest and greatest is nice, but I like to wait until there are a couple hundred reviews of a product on Newegg before buying (filter at the 1-egg level to see what can go wrong...).

Re:OCZ (1)

A12m0v (1315511) | more than 2 years ago | (#37787314)

My SSD is only for the OS and most used Applications, everything else is stored on a network drive that is mirrored in 3 different machines. If my SSD buys the farm right now I will lose nothing of importance.

Re:OCZ (1)

ShakaUVM (157947) | more than 2 years ago | (#37787372)

>>My SSD is only for the OS and most used Applications, everything else is stored on a network drive that is mirrored in 3 different machines. If my SSD buys the farm right now I will lose nothing of importance.

Me, too, actually, but going through the process of reinstalling Windows and my applications is something I'd rather avoid with a little bit of research.

One of the SSDs I almost bought had a firmware bug which could very occasionally cause you to lose everything. The problem, though, was that patching the firmware also wiped the drive. Damned if you do, damned if you don't...

I didn't buy that one.

Re:OCZ (0)

Anonymous Coward | more than 2 years ago | (#37790350)

> going through the process of reinstalling Windows and my applications is something I'd rather avoid with a little bit of research.

Yeah, I did that research. I have Windoze plus all my apps on a separate partition and use Linux "dd" to back that up to an image file on my file server every so often (whenever I install something new). The file server is backed up weekly to an offsite server.

Works for me and takes a couple of hours should I wish to change out my primary hard drive.

Re:OCZ (1)

tlhIngan (30335) | more than 2 years ago | (#37787512)

There's been a LOT of "you lose everything" bugs in the SSD market up till now.

There's plenty of "you lose everything" bugs in spinning rust drives as well.

From Western Digital drive failures that led to them being nicknamed "Western Digicrap", to IBM's Deathstar drives, Seagate's logfile one, etc.

Every manufacturer has come up with a line of drives that proved to be lemons, and we're talking about very mature technology here - it's been around since the IBM RAMAC.

SSDs are a game changer in that any idiot with a soldering iron and a few tools can build one in their garage; there are only a few controller chips out there, from SandForce, Indilinx, JMicron, Intel, and maybe a few others (Samsung, Toshiba, etc.), and you buy the flash chips. Plop them on a board and you've made a storage device. Hard drives, by contrast, require clean rooms, chemicals, mechanical engineering, etc.

Here's a funny question. Apple is probably the largest consumer of SSDs on the market (for their Macs, not the flash that goes into iPhones/iPods/iPads). Yet there doesn't seem to be much reported about SSDs suddenly failing or losing data there. I wonder why that is...?

Perhaps it's the relative immaturity of the high-speed, cutting-edge SSDs, and Apple going for last-gen chips that give "good enough" performance and deliver on reliability?

Re:OCZ (1)

ShakaUVM (157947) | more than 2 years ago | (#37787836)

>>There's plenty of "you lose everything" bugs in spinning rust drives as well.

I can't recall the last time there was a firmware bug in an HDD that caused catastrophic loss of data... I can think of a couple of RAID controllers, but nothing baked into the drives themselves. There are a lot of such problems in SSDs.

I know it's a generalization, but HDDs tend to fail more gracefully (though SMART is by no means foolproof, don't get me wrong) whereas SSD failures are characterized by simply not working one day.

Re:OCZ (1)

sunderland56 (621843) | more than 2 years ago | (#37785862)

Wow, that's not something anyone wants to see: a bug in their hard drive. The CPU I can replace, the RAM I can replace... pretty much everything I can swap out, but my hard drive is where everything is stored; I can't risk losing data because of a bug.

So, you're storing important data on a single drive with no backup? And you expect a typical rotating-magnetic hard drive to last forever and never fail?

Studies show that, overall, long-term failure rates for SSDs are *lower* than those for traditional hard drives.

Re:OCZ (1)

Luckyo (1726890) | more than 2 years ago | (#37786384)

In the vast majority of cases, a modern hard drive's SMART will start screaming its mechanical lungs out at you long before failure comes. Even if you ignore that, physical errors in the drive will often cause noises that lead you to suspect the drive is failing. The "holy shit, it just died" failures are in the minority.

On the other hand, an SSD dies like that the vast majority of the time. Add to that the unreliability of the new controllers (as opposed to the drive itself, which is what you were talking about), and you get a perfect storm where an SSD is easily an order of magnitude less reliable when it comes to not losing data.
Personally, I really want a 60-ish GB SSD for my OS and most-used software, but I just can't justify the purchase with the severe reliability issues everyone seems to be having with both controllers and drives at the moment.

Re:OCZ (2)

hawkinspeter (831501) | more than 2 years ago | (#37786542)

I seem to remember that a while ago Google published stats on their hard disk failures, and that SMART wasn't particularly useful in predicting the failure times of drives.

I've personally seen several drives fail with no warning whatsoever and that's without anything dramatic like a nearby electrical storm to fuse the drive controller. If you don't have your data replicated, then either you don't care about the data (easy to recreate) or you will eventually become wiser.

Re:OCZ (1)

epine (68316) | more than 2 years ago | (#37786676)

Yes, IIRC, the Google report indicated that SMART murmured not a peep in about half of all failures.

Re:OCZ (1)

Alamais (4180) | more than 2 years ago | (#37787544)

Yeah, I've had drives in my research group fail and lose a bunch of data, and then have their SMART kick in hours/days later while we're trying to recover what's left. Thanks a lot.

Re:OCZ (1)

Anonymous Coward | more than 2 years ago | (#37788162)

Google's analysis (which someone else mentioned) is correct here: SMART will generally not trip the overall SMART health status from OK to FAIL. Why doesn't this happen sooner? Because the vendors themselves decide the thresholds for when an attribute goes from OK/good to FAILING/bad. On most drives -- consumer-grade and "enterprise-grade" (yes, BOTH!) -- the thresholds are set to the point where the drive will already have experienced so many failures that the end user notices them as I/O operations which either time out, return EIO, or return ENXIO (the drive falling off the bus).

It's important to remember that the SMART attribute metrics VALUE, WORST, and THRESH are all calculated from proprietary formulas which are not disclosed. The same applies to the RAW_DATA section: 64 bits of data which can be encoded in any format the vendor chooses. This is why you can't read RAW_VALUE and assume it's a 1:1 number; it's not, especially on Seagate drives ("Oh my god, Hardware_ECC_Recovered is 84384928! My drive is failing!" "No it isn't, it's perfectly normal; the data is encoded in a proprietary format." "Oh."). These formulas and encoding methods absolutely vary per vendor, and quite often also per drive model. The T13/ATA specification doesn't cover any of this either; the vendors are given free rein to decide what's what.

I'd provide an example of SMART data taken from a Western Digital WD2001FASS-00U0B0 disk, but, well, see my last line...

Out of all the attributes, approximately 90% have THRESH set to 000. Let's focus on SMART attribute 197, which indicates how many suspect LBAs there are. These are LBAs where the drive, during read operations, has to do excessive repetitive reads or ECC auto-correction to get valid data, so it marks them as unreadable until they can be re-examined (by submitting a write to that individual LBA). The VALUE of a good/clean drive is 200, and WORST is therefore also 200. The drive is in perfect shape.

Now let's pretend RAW_VALUE increases to 1 (meaning that on this model of drive there is 1 suspect LBA). What should VALUE become? Nobody knows; not even Bruce Allen knows. Because vendors have proprietary formulas that calculate VALUE (adjusted based on how much anomalous behaviour is counted here), we have no idea whether 1 suspect LBA should diminish VALUE from 200 to 199 or not. Will it? No, it won't. So at what point would this happen? It's hard to say for certain, but I estimate around 300 LBAs would have to be marked suspect before the drive drops VALUE a bit. Meaning: it will almost certainly never reach THRESH.
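If you want to watch attribute 197 yourself, something like this works with smartmontools installed (a sketch: it assumes the usual 10-column `smartctl -A` table, and that the raw value is a plain count for this particular attribute, which holds on most drives even though other attributes are vendor-encoded):

```python
import subprocess

def pending_sectors(device: str = "/dev/sda") -> int | None:
    """Return the RAW_VALUE of SMART attribute 197 (suspect/pending LBAs)."""
    result = subprocess.run(["smartctl", "-A", device],
                            capture_output=True, text=True)
    for line in result.stdout.splitlines():
        fields = line.split()
        # Columns: ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        if len(fields) >= 10 and fields[0] == "197":
            return int(fields[9])
    return None

if __name__ == "__main__":
    count = pending_sectors()
    print(f"pending/suspect LBAs: {count}" if count is not None
          else "attribute 197 not reported")
```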

The "overall SMART health status" value, which tells people "your drive is good" vs. "your drive is bad" will therefore never state the latter. So why do vendors pick such outrageous THRESH values? You should already know the answer to that -- because being too aggressive means higher RMA rates, which means more returns, which means loss in revenue and decrease in cash flow, which means a drop in stock price. Yup, it's that simple.

Readers of this post will almost certainly say, "So SMART is worthless, I knew it! Fuck SMART!" No: SMART is actually extremely useful, just not for immediate detection of a drive failure. It's extremely useful for *diagnosis* if you think your hard drive is experiencing issues. I help on average ~10 Internet users a month get their hard disks into a working state (or recommend an RMA) using SMART, in addition to a series of manual tests (something similar to Bruce Allen's bad-block HOWTO, but much easier to understand; that document does not read well and is also filled with some mistakes).

Takeaway points:

* SMART is a really great thing to have, but the viewer of the data needs to become familiar with how to read SMART data, and how to understand the data that's being shown to them.
* SMART is absolutely a necessity for diagnosing problems with a drive, especially those which are not of a physical nature (e.g. actuator arm out of alignment, spindle motor acting wonky or not getting enough voltage, etc.)
* SMART in general is not good for pre-emptively detecting drive failures. Have I ever seen it work? Surprisingly yes -- on very specific models of Fujitsu SCSI drives, where the number of grown defects exceeded some vendor-chosen threshold and thus tripped the SMART overall health status to FAIL.
* T13/ATA really needs to come forth and establish a base standard set of SMART attributes for all drives, both mechanical and solid-state. Formulas for calculation should be provided, as should disclosure of the 64-bit RAW_VALUE portions of a SMART attribute. If they did this it would make our lives -- hell, my life! -- a lot better. I would propose this to T13, except the last time I tried to sign up as a member, my application was /dev/null'd (I never received a rejection or an acceptance letter).

P.S. -- Sorry for the lack of attribute data, Slashdot kept telling me to "use fewer 'junk' characters". Fuck you Slashdot: "junk" characters are what get actual work done! Posts like these are probably why I should get a registered Slashdot account, but most of the time I just can't be bothered.

Re:OCZ (1)

Luckyo (1726890) | more than 2 years ago | (#37789646)

You have a good point. I've made it a habit to quickly check the important statuses of my drives through gsmartcontrol every couple of months, usually the reallocated sector count and the other attributes marked pre-failure; essentially, the attributes that show a drive is really starting to suffer from wear and tear, and that I should pay more attention to backing it up and probably replace it sometime soon.

I tend to do that long before anything trips the thresholds set on the drive, because I know those are artificially high.

Re:OCZ (1)

dark_requiem (806308) | more than 2 years ago | (#37785982)

I've been using an OCZ Vertex 3 120GB since April. Throughout that time, their firmware releases have variously resolved certain bluescreen issues while introducing others. After installing 2.13 I had major issues, and just last week I was complaining that I would have to return the thing to OCZ and demand my money back. However, I installed 2.15 the day it was released, and all bluescreen issues have stopped.

Through all that, I have never lost any data. I was rather panicked at first (the drive won't show back up after one of these bluescreens until you cold-boot the machine), but once I realized I could recover with a cold boot each time, I was merely annoyed. That being said, I knew going in that SSDs in general weren't that mature a product and that issues like this crop up. I would never have placed important data on those drives, and I would have kept a backup even if I had; just as I never put any important data on an HDD without a backup (or at least a bit of RAID action).

Re:OCZ (1)

dbIII (701233) | more than 2 years ago | (#37788494)

I've been paranoid about mine (I sync to other drives every hour), but apart from the annoyance of successive 120GB models not being the same size, they've been OK so far (Vertex and Vertex 2).
Setting up a couple of striped disks took three attempts before I got something with good read and write speeds. You know you've got the wrong stripe size when the array ends up noticeably slower than a mechanical disk.

Re:OCZ (2)

Just Some Guy (3352) | more than 2 years ago | (#37786994)

I was waiting on an SSD until they worked out the bugs and there were no articles about problems for awhile but with stories like these I'll keep waiting, it's just not worth the risk.

You didn't think HDDs have similar dumb BIOS errors [sourceforge.net] ?

Re:OCZ (1)

AmiMoJo (196126) | more than 2 years ago | (#37788324)

I have an older Intel SSD (160GB XM25) and have already had it replaced under warranty once because it ran out of spare sectors. Looking at the spec it is rated for about 17TB of writes, and the SMART attributes report 650GB of writes after a few months of use.

At the time I bought it, the expected lifetime was supposed to be at least 5 years, and I only put the OS and apps on it (data is on a traditional HDD). It looks like I will just have to do another warranty claim next year, and then, when the warranty has expired, get a partial refund (Sale of Goods Act) and buy a replacement. Very annoying, especially since the performance is fantastic and otherwise I am very happy with it.

Re:OCZ (1)

sentimental.bryan (2489736) | more than 2 years ago | (#37789300)

Supposedly they fixed one BSOD bug [anandtech.com] a few days ago. That wouldn't apply to this controller anyway, but their record isn't spotless. Then again, Intel has managed an SSD blemish too, so... you're seeing an industry moving at breakneck speed; just make sure your neck isn't the one on the line.

I've been using OCZ for 8 months now. I tend not to get BSODs on Linux anyhow.

Re:OCZ (1)

Fackamato (913248) | more than 2 years ago | (#37784600)

The problems with the SandForce controller have been fixed. But yeah, OCZ were in the SSD game early and had a few flaky firmwares in the wild. All is good now, though.

Re:OCZ (1)

rhook (943951) | more than 2 years ago | (#37784656)

These drives are not Sandforce based. Did you even read the article?

"The drive, based on the new Indilinx Everest controller, includes an "instant on" feature, that reduces boot times over previous OCZ SSDs by 50%."

Re:OCZ (2)

Joce640k (829181) | more than 2 years ago | (#37787832)

Just when their controller matures a bit, they switch to a different one...?

Re:OCZ (2)

kolbe (320366) | more than 2 years ago | (#37784726)

OCZ's reliability record is in no way different from any other data storage manufacturer's, past or present.

Seagate's recent 1TB woes: ST31000340AS [tomshardware.com]
Western Digital's recent woes: Caviar Green EARS 1.0TB and 1.5TB SATA [newegg.com]

Going further back, anyone who's been around IT for a decade or longer recalls the old Micropolis 9GB drive failures that sent the company into bankruptcy. In any case, OCZ is a relatively good company and a notable innovator in SSD technology, and I personally find most of their products to be just as reliable as any others in the same category.

Re:OCZ (1)

Kjella (173770) | more than 2 years ago | (#37784850)

You jumped from 1TB to 9GB, but forgot the biggest one in between. The IBM Deskstar, aka the Deathstar, was a huge scandal and probably led to them selling off the division.

Re:OCZ (1)

kolbe (320366) | more than 2 years ago | (#37785138)

We all survived the 18GB Deathstar and avoided Fujitsu's sudden-death syndrome, which further proves that most manufacturers have had their fair share of failures at one time or another. The only ones I can't recall any fault with are Quantum and, later, Maxtor drives. I loved my Fireball and Atlas drives!

Re:OCZ (0)

Anonymous Coward | more than 2 years ago | (#37785298)

Weren't the problematic Deskstars in the 75GB+ range?

Re:OCZ (1)

kolbe (320366) | more than 2 years ago | (#37785586)

Weren't the problematic Deskstars in the 75GB+ range?

The IBM Deskstar 75GXP was the main one, yes. However, I think all of their hard drives at the time had similar issues revolving around the BLUE GLUE [ufl.edu] used to seal the canister.

Re:OCZ (1)

scalarscience (961494) | more than 2 years ago | (#37785316)

Maxtor had so many QA issues in their latter days that it contributed greatly to the purchase price Seagate got when they bought them. Of course, CEO Bill Watkins promptly moved a huge chunk of Seagate's drive ops to the same plant that had caused issues for Maxtor (presumably as a cost-cutting measure) and denied the 'bricking' issues that resulted for the AS and ES series drives for at least 9 months... until the Seagate board of directors fired him [theregister.co.uk] and replaced him with the former CEO from 2004 (who subsequently addressed the firmware issues too).

I flashed all 3 of my ES.2s to SN06 'just in case' while all of this was underway, and though it was denied that drives in my serial number range had issues, quite a few reports cropped up down the line of ES.2s outside the reported serial range having problems as well, so I'm happy I did. They still run fine in an array to this day.

Re:OCZ (1)

mikael (484) | more than 2 years ago | (#37785622)

I remember that one; it happened to my workstation.

Then my laptop hard drive (a TravelStar) fried. I had to do the recommended trick of putting the drive in a freezer bag, chilling it, and giving it a good whack on power-up. Managed to get one final incremental backup before it went up to the great server rack in the sky.

Re:OCZ (1)

TheRaven64 (641858) | more than 2 years ago | (#37789034)

Not to mention Maxtor's 20GB and 40GB drives. Out of about 20 of those bought over a period of a year (so not just one bad batch), none lasted more than two years and most died in under one.

Re:OCZ (1)

haruchai (17472) | more than 2 years ago | (#37784934)

Micropolis!! Haven't heard that name in 15 years! I sure hope OCZ doesn't suffer the same fate.

Re:OCZ (0)

Anonymous Coward | more than 2 years ago | (#37785028)

Kalok. 'nuff said.

Re:OCZ (1)

Anonymous Coward | more than 2 years ago | (#37785282)

Miniscribe! Conner! Feel old yet?

Re:OCZ (0)

Anonymous Coward | more than 2 years ago | (#37785234)

WD's most recent woe is the Thailand flooding, which knocked a third of all hard drive shipments offline. Better buy your spinning drives now, because this is going to squeeze supply until 2013.

On the other hand, flash drives may become price-competitive if Dell/HP end up being forced to use them in their lower-end models. We are pretty close to, but not quite at, the point where a 64GB SSD ($90, the smallest you can ship an OS on) and a 500GB hard drive ($60-80, the smallest most build-to-order laptops come with) become price-competitive.

Re:OCZ (1)

aztracker1 (702135) | more than 2 years ago | (#37786182)

I had a pair of 20GB IBM Death/DeskStar drives when they came out... RAID 0, fast as sin at the time... dead in 2 months, both of them.

Re:OCZ (1)

Luckyo (1726890) | more than 2 years ago | (#37786440)

For the record, Seagate's 7200.11 was the only instance of a buggy controller in that line of drives. I have two 7200.7 drives that clocked over 50,000 hours of uptime in RAID 0 as an OS drive pair with no issues (and, hilariously, one RAID controller failure), and I have several 7200.12 drives (including the warranty replacement for the 7200.11 with the controller bug mentioned on TH.com). No problems.

The problem with OCZ is that they don't have a proper solution for their clients' woes, and they don't send you a fixed drive if you ask for a warranty replacement on ones that BSOD. That alone is a huge negative that stops me from getting an SSD from them, in spite of really wanting one for an OS drive.

Who hasn't had reliability problems? (0)

Anonymous Coward | more than 2 years ago | (#37784728)

Aside from FreeBSD, what software or hardware vendor/provider hasn't had some sort of reliability problem in the past? It's just part of the game. Hard drives are among the MOST COMPLEX TECHNOLOGY EVER CREATED, and individual drives are sold for A FEW HUNDRED DOLLARS! Hell, these aren't even 1980s dollars that were actually worth something; these are 2010s dollars, worth a fraction of the earlier ones. Given the low cost of the final product and the extreme complexity of the technology and the manufacturing processes, it's extremely difficult to do it perfectly. I won't hold it against a company if they make products that have occasional problems. That's just life. That's just part of the game.

Re:Who hasn't had reliability problems? (0)

Anonymous Coward | more than 2 years ago | (#37784908)

What's up with this "aside from FreeBSD" comment? I'm a BSD fan, but the early 5.x releases weren't so good. Maybe you have a selective memory.

Tried using zzz? Early ZFS on x86?

Re:Who hasn't had reliability problems? (1)

mug funky (910186) | more than 2 years ago | (#37785166)

There's a FreeBSD shill floating around these parts. I think they might be using the same script the rather humorous Microsoft bot was using.

Re:OCZ (1)

Wolfling1 (1808594) | more than 2 years ago | (#37787880)

I ran an OCZ 60GB SSD in my desktop until I ditched the desktop for a laptop. It fully KICKS ASS. I did Windows XP and Windows 7 startup comparisons (that seemed to be the easiest 'controlled' test). The SSD, on a clean/fresh/fully patched install of WinXP SP3 or Win7 (not SP1 at the time), was four times faster than a conventional drive. And that includes the POST.

To be more specific, my test was 'power on to an open Windows Explorer displaying a network drive directory'.

On a traditional drive (200GB Seagate Barracuda), Win7 did it in roughly 60 seconds, whereas the SSD did it consistently in under 15.

Over a 12 month period, the SSD performed flawlessly.

The only reason I've discontinued using it is the switch to a lappie and the need for more drive space in a single drive. I'll upgrade to a 500GB SSD as soon as the price is a bit more tolerable.

Re:OCZ (1)

dropadrop (1057046) | more than 2 years ago | (#37788262)

I think just about everyone had reliability problems with SandForce-based drives, but as OCZ was one of SandForce's closest partners, they generally released drives before the others, often containing older firmware with more bugs.

Looking at their track record, though, it does seem their QA process is not especially robust. Wherever the problem lies, you would think they would catch it and delay shipment until it's fixed (especially seeing how widespread the problems were).

How long until... (2)

Alwin Henseler (640539) | more than 2 years ago | (#37784824)

  • Largest-capacity milestones are presented in the form of a SSD before good old magnetic HDDs?
  • Annual sales (in $$) of SSDs tops that of magnetic HDDs?
  • Price/TB for SSDs drops below that of magnetic HDDs?
  • Major OEM builder announces they remove magnetic HDDs as an option?
  • Well-known manufacturers do the same, turning magnetic HDDs into a niche product only produced by 1 or 2 specialist companies?
  • Most retailers drop magnetic HDDs from their inventory?
  • Dad, what's a magnetic HDD?

Re:How long until... (2)

kolbe (320366) | more than 2 years ago | (#37785062)

SSDs must meet or surpass all of your mentioned categories, plus overall capacity limits, before magnetic HDDs are cast the way of the floppy disk drive. Even then, look at how long it took to get rid of the floppy disk drive:

Beginning of the end for the floppy disk drive: 1998, when Apple's iMac shipped with a CD-ROM drive but no floppy drive.
End of the floppy disk drive: 2009, when Hewlett-Packard, the last supplier, stopped supplying floppy disk drives on all systems.

It could be argued that the HDD is more entwined in technology than the FDD was, so it may be well more than 11 years before we see magnetic HDDs disappear from the consumer marketplace.

Re:How long until... (1)

DigiShaman (671371) | more than 2 years ago | (#37785604)

Two things, really.

1. When the cost per GB becomes cheaper than what's available in an HDD.
2. A proven track record of reliability.

Currently only the Dell PERC 6/i and above work well with SSDs. Once those two major hurdles have been cleared, we can kiss the ol' Winchester drive goodbye. Honestly, HDD technology is a marvel of engineering. But its time has come.

Re:How long until... (1)

bill_mcgonigle (4333) | more than 2 years ago | (#37786058)

1. When the cost per GB becomes cheaper than what's available in an HDD.

Heck, I'd probably pay $200 for a 500GB SSD for my laptop. That's way more than a hard drive, but I'd get something more for it. That's forty cents a GB; these guys currently want triple that.

2. A proven track record of reliability.

Yeah, I'm only using SSDs for caches now because of their failure mode, and write caches get mirrored. If OCZ wants to make that 500GB drive a 750GB drive and put four controllers onboard with a SATA multilane interface so I can run RAIDZ on my laptop, they'd make this storage geek smile. There's plenty of room in a 2.5" drive.

OK, so I'm pretending the laptop's motherboard can support it (AHCI doesn't by default, right?).

Re:How long until... (1)

DigiShaman (671371) | more than 2 years ago | (#37786510)

The newer SSDs are SATA3 now. But even with a SATA2 motherboard, these drives fetch data at amazing speeds, even if you fully saturate the port. Unless you need the performance of an Avid workstation working with multimedia over a Fibre Channel SAN, I don't think you'll personally notice the difference between SATA2 and SATA3 connectivity in a laptop. YMMV, of course.

Anyway, I've got an Intel M25 160GB in my MacBook 13". When booting up my Windows XP VM in Fusion, my CPUs max out for a moment. That's sick (awesome, maybe?). Now my laptop's bottleneck is the bloody processor. I never would have thought it in a million years. 0_o

Re:How long until... (1)

EETech1 (1179269) | more than 2 years ago | (#37787202)

I just love booting my PC in 6 seconds, opening my photo manager, and, before you can blink an eye, scrolling through thousands of photos as thumbnails, seeing every one no matter how fast I scroll or what size the pics are; changing zoom is instantaneous! I have sold 5 SSDs to friends now, just based on that!

I never see an IOWait anymore. Truly magical!

Re:How long until... (1)

bill_mcgonigle (4333) | more than 2 years ago | (#37787236)

That's sick (awesome, maybe?). Now my laptop's bottleneck is the bloody processor.

Awesome - in my opinion, that's where you always want to be.

Re:How long until... (1)

CSMoran (1577071) | more than 2 years ago | (#37790062)

Now my laptop's bottleneck is the bloody processor. I never would have thought it in a million years. 0_o

Actually, your processor is probably sitting in wait states, waiting for the RAM.

Re:How long until... (1)

TheRaven64 (641858) | more than 2 years ago | (#37789114)

Heck, I'd probably pay $200 for a 500GB SSD for my laptop

Laptops aren't the only place people use disks. I've bought two new machines this year. My laptop has a 256GB SSD and the difference that it makes to performance is amazing. I'd hate to go back to using a hard disk. The other machine is a NAS. It has three 2TB disks in a RAID-Z configuration. I am mostly going to be accessing it over WiFi, so as long as the disks can manage 2MB/s I really don't care about performance. I care a lot about capacity though, because it's having things like snapshotted backups of my laptop dumped on it. If I paid $200 extra per 500GB, then the machine would have been insanely expensive.

If SSDs had cost the same per GB as mechanical disks, I'd have gone with SSDs in both cases.

Re:How long until... (1)

WuphonsReach (684551) | more than 2 years ago | (#37786696)

When the cost per GB becomes cheaper than what's available in an HDD.

For laptops, it happens once SSDs get cheap enough, at large enough capacities, that you don't feel like you're trying to live with a gold-plated, shoebox-sized amount of storage.

By a lot of metrics we've hit that point, with 128GB SSDs available for about $1.50/GB. That's plenty for most machines, and the responsiveness puts them far ahead of the competition.

Once it drops below $1/GB and starts heading down toward $0.50/GB, you're going to see more and more mass adoption in cases where you just don't need a few TB of storage.

Re:How long until... (1)

tlhIngan (30335) | more than 2 years ago | (#37787478)

1. When the cost per GB becomes cheaper than what's available in an HDD.

That won't happen for a LONG time.

SSDs only get cheaper in line with Moore's law (the number of transistors limits the size of the flash chips).

Hard drives, it seems, grow in capacity faster than Moore's law. At some point in the future they'll slow down, but that's a huge gap to catch up to.

Re:How long until... (1)

CSMoran (1577071) | more than 2 years ago | (#37790722)

However, SSDs will likely soon start reaping the benefits of economies of scale.

Re:How long until... (0)

Anonymous Coward | more than 2 years ago | (#37786900)

The reason the floppy drive phaseout took so long is that it wasn't the CD alone that killed it; it was the USB flash drive. CDs are good, but they're a bit clumsy to work with compared to a floppy, whereas a flash drive doesn't get scratched like an optical disc or demagnetized like a floppy. It's physically smaller than either but stores more than either, and it plugs into a now-ubiquitous USB 2 port instead of a drive that may fail (I have a pile of failed optical drives).

With the size of hard drives going steadily up, floppies became useless for backups and for distributing programs; that role was taken over by CDs (and then DVDs, and sooner or later Blu-ray). The only use profile left for most people was portable, re-usable file transfer. The final use profile was, I guess, boot disks, but BIOS support for booting from optical drives had been eating away at that need for ages, and booting from USB became universal a few years ago. (As did updating the BIOS by other means, which was really the last thing anyone needed a boot disk for anyway, right?)

As you say, doing the same to the hard drive requires a similar process: all of the hard drive's use profiles need to be surpassed by one or more other things at collectively the same or lower price, with equal or better performance. IMO, the one that will keep the hard drive alive for many years to come is its bulk-capacity-for-the-price advantage. Like the CD against the floppy, flash is the first competitor but probably not the final nail in the coffin; the natural limits of NAND flash won't let it hit densities that catch up to or beat the price/density of hard drive platters. But flash has hard drives decisively beaten on speed, and it ought to be possible to match them on reliability, so all we need is for something else to beat hard drives on cheap bulk storage, and then the hard drive is dead. Maybe that memristor stuff will be what does it, maybe something else.

Re:How long until... (0)

Anonymous Coward | more than 2 years ago | (#37788574)

Your analogy is flawed in that the SSD is a drop-in replacement for the HDD, whereas the same couldn't be said of the floppy's replacements. When SSDs come close to the price of HDDs (and that won't be long), the HDD will die, quickly.

Re:How long until... (2)

CheshireDragon (1183095) | more than 2 years ago | (#37785134)

I think it will be when the SSD's cost per MB/GB/TB is below the HDD's. That, I believe, will be the biggest milestone, the one that paves the way for the rest. I will only switch over when I feel they are extremely reliable, which would be about the time any one of the many options above comes true.

Re:How long until... (0)

Anonymous Coward | more than 2 years ago | (#37785326)

I have used an OCZ 30GB SSD for over a year now in my desktop. I keep the OS on it and put everything else on a spinning HD. It's a good combo, as I get the performance of the SSD for my OS and cheap mega-storage on the spinning HD.

Now, one thing I like about the larger drives getting a bit less dear is laptop/netbook use; 128GB is really plenty for those. The price drop per gig seems to be taking a lot longer than expected on SSDs. I hope it finally gathers some steam.

Re:How long until... (1)

timeOday (582209) | more than 2 years ago | (#37787102)

SSDs are better in enough other respects that I'll bet they take over before they're the rock-bottom $/MB choice. CRT TVs and monitors exited the market, or at least went into a very sharp downturn, while they were still somewhat cheaper than same-sized LCDs.

Re:How long until... (1)

jbolden (176878) | more than 2 years ago | (#37789856)

The cost per gig is unlikely to ever get below the HDD's. The multiplier used to be around 20x; maybe it's down to about 15x two years later. So if we assume the multiplier halves every 4 years, you still need more than a dozen years before SSDs are even close. I don't think the HDD has that long.

At this point most people experience essentially unlimited storage. Generally, the form-factor switches (14 in to 8 in to 5 1/4 to 3 1/2) all happened while the larger drives still had better performance and a lower price per gig. I think this is likely to happen in reverse, with the HDD being relegated to external storage.
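As a sanity check on that guess (the 15x multiplier and the 4-year halving period are both the parent's assumptions, not data):

```python
import math

multiplier = 15     # assumed SSD-vs-HDD price-per-GB gap today
halving_years = 4   # assumed time for the gap to halve
print(halving_years * math.log2(multiplier))  # ~15.6 years to price parity
```

So on those numbers, parity is more like fifteen years out than a dozen, which only strengthens the point.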

Re:How long until... (1)

jones_supa (887896) | more than 2 years ago | (#37790614)

Interesting list. For an even easier goal, it would be nice to see when most high-end laptops ($1500+) will start to come with an SSD. Only a very select few models currently do.

buy em' now (1)

loVolt (664437) | more than 2 years ago | (#37785408)

WD and Seagate hard drive prices are going through the roof and will keep going up over the months to come; SSDs will be the way to go as energy costs rise. Time for the spinning crap to leave anyway.

Re:buy em' now (1)

Luckyo (1726890) | more than 2 years ago | (#37786454)

What makes you think energy costs will go up? It's very country-dependent, and at least where I live, electricity is far more likely to go down in price, since the next nuclear power plant will be built in a couple of years.

Re:buy em' now (0)

Anonymous Coward | more than 2 years ago | (#37787252)

Yeah, and all the machinery and chemicals needed to make the solid-state crap are magically energy-free, though.

OCZ can't make decent ram... (0)

Anonymous Coward | more than 2 years ago | (#37785518)

OCZ can't make RAM that lasts for more than a year; why would I want to use them for ACTUAL data storage?

Pass (1)

p51d007 (656414) | more than 2 years ago | (#37787300)

If/when SSDs get to the price point of mechanical drives, I'll get one. Not worth it now.

Ssd v hdd (0)

Anonymous Coward | more than 2 years ago | (#37787420)

A lot of people, including myself, are only using an SSD as a boot drive, not as storage. I don't think these drives will replace HDDs for storage any time soon.

5000-10000 program/erase cycles? (1)

hackertourist (2202674) | more than 2 years ago | (#37788310)

I was wondering how long this SSD will last in typical use. Let's take the 128GB unit; that's 128GB x 10,000 write cycles.
Some numbers for my system: I've got 4GB of RAM, and at the moment it's using 1.5GB of swap. Let's say the swap gets overwritten once a day. I hibernate twice a day. New data won't add too much; Time Machine backs up maybe 1GB in a week.
In total, about 10GB of writing per day. That's 128,000 days. Not bad.

The worst case would be rewriting the entire SSD each day; that's still 5,000-10,000 days. Still good enough.

I wonder how those early failures happen? (see the Hot/Crazy scale and SSDs [codinghorror.com] )
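Redoing that arithmetic under the same assumptions (perfect wear leveling and the optimistic 10,000-cycle rating; the daily write mix is the poster's own estimate):

```python
capacity_gb = 128
pe_cycles = 10_000                   # optimistic end of the 5,000-10,000 range
daily_gb = 10.0                      # swap (1.5) + two 4GB hibernates (8) + backups (~0.15), rounded

budget_gb = capacity_gb * pe_cycles  # 1,280,000 GB of total writes
print(budget_gb / daily_gb)          # 128,000 days at typical use
print(budget_gb / capacity_gb)       # 10,000 days if the whole drive is rewritten daily
```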

Re:5000-10000 program/erase cycles? (1)

sheepe2004 (1029824) | more than 2 years ago | (#37788564)

You've assumed that the writes will be evenly distributed across the disk. It seems more likely that a few large chunks will be written once and then largely left alone (OS, key applications), whereas other parts will be rewritten constantly, multiple times per day even. So the time before some part of the drive fails is likely to be somewhat lower than 10,000 days. I'm not sure how catastrophic small regions of an SSD failing would be; it could be possible to work around even fairly large numbers of failing cells, as long as the lost data isn't essential. Having said that, maybe the firmware is smart enough to shuffle things around to get a more even distribution of writes? Although that can only help to a certain extent: if you shuffle things around constantly, you end up with even more writes.

Re:5000-10000 program/erase cycles? (1)

hackertourist (2202674) | more than 2 years ago | (#37788670)

As I understand it, wear leveling is already being done.
The number of extra writes could become an issue if the leveling mechanism is too aggressive (e.g. not allowing three consecutive writes to the same location). The other extreme would be allowing 5,000 writes before moving a block of data to a new location (i.e. only moving the data once in the lifetime of the drive).
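A toy model of that trade-off (all numbers are invented; real firmware is far more sophisticated): hot logical blocks concentrate wear unless the firmware occasionally migrates data off the most-worn physical block, paying one extra write each time.

```python
import random

BLOCKS, WRITES, HOT = 1_000, 200_000, 100
random.seed(1)

def max_wear(level_every: int | None = None) -> int:
    wear = [0] * BLOCKS
    mapping = list(range(BLOCKS))  # logical block -> physical block
    for n in range(1, WRITES + 1):
        # 90% of writes hit the 10% of logical blocks holding hot data.
        logical = (random.randrange(HOT) if random.random() < 0.9
                   else random.randrange(BLOCKS))
        wear[mapping[logical]] += 1
        if level_every and n % level_every == 0:
            # Crude static leveling: swap whatever sits on the most-worn
            # physical block with the least-worn one (one extra write to copy).
            hi = max(range(BLOCKS), key=wear.__getitem__)
            lo = min(range(BLOCKS), key=wear.__getitem__)
            i, j = mapping.index(hi), mapping.index(lo)
            mapping[i], mapping[j] = lo, hi
            wear[lo] += 1
    return max(wear)

print(max_wear())                # no leveling: hottest block wears ~9x the average
print(max_wear(level_every=50))  # leveled: peak wear drops to a fraction of that
```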

Re:5000-10000 program/erase cycles? (1)

pjc50 (161200) | more than 2 years ago | (#37788950)

I think the early failures come from problems other than the flash wearing out; since these things are so price-sensitive, corners are being cut on the controller, the controller firmware, the mechanical construction of the PCB, etc.

Who cares anymore (2)

dgas (1594547) | more than 2 years ago | (#37790332)

I know I'm supposed to care that an SSD is unreliable, but the truth is you have to back up everything anyway, because hard drives aren't reliable either. I have a server with conventional drives in a RAID array for data security; I want my main machine to fly... and an SSD lets you do that. I just wish it didn't cost so much.