
Intel Replaces Consumer SSD Line, Nixes SLC-SSD

Soulskill posted more than 3 years ago | from the more-cheaper-please dept.

Data Storage 165

Lucas123 writes "Intel today launched a line of consumer solid state drives that replaces the industry's best-selling X25-M line. The new 320 series SSD doubles the top capacity over the X25-M drives to 600GB, doubles sequential write speeds, and drops the price by as much as 30%, or $100, on some models. Intel also revealed that its consumer SSDs have been outselling its enterprise-class SSDs in data centers, so it plans to drop its series of single-level cell NAND flash SSDs and create a new series of SSDs based on multi-level cell NAND for servers and storage arrays. Unlike its last SSD launch, which saw Intel use Marvell's controller, the company said it stuck with its own controller technology for this series."


First (-1)

Anonymous Coward | more than 3 years ago | (#35642150)

First

Generations (5, Interesting)

DarkXale (1771414) | more than 3 years ago | (#35642166)

The 320 series isn't quite as impressive over the X25-M G2 series as I had originally hoped, so it will likely be quite some time before I bother replacing my current one (and move that into the laptop instead).
Still, an update has been due for a long time now; the X25-M G2 is ancient in SSD terms. I just hope the new controller is as reliable as the Intel one found in the old drives.

Re:Generations (1)

dc29A (636871) | more than 3 years ago | (#35642564)

Same controller.

Re:Generations (1)

DarkXale (1771414) | more than 3 years ago | (#35642988)

Strange, I could've sworn they used a Marvell controller. Oh well.

Re:Generations (2)

Lucas123 (935744) | more than 3 years ago | (#35643326)

The Marvell controller is only being used in the higher-end 510 series SSD that was announced last month. That SSD is being aimed at gamers, workstations and such. This is being marketed to laptop and desktop users, even though it's winding up in data centers.

Don't like this (3, Insightful)

XanC (644172) | more than 3 years ago | (#35642238)

MLC the only option on a server? For high-transaction databases, I don't see how it will work.

*SMOOTCH!* Buh-bye Enterprise! (3, Insightful)

Chas (5144) | more than 3 years ago | (#35642336)

Seriously. Any enterprise-level operation should be swearing off these things as a storage medium, then. Well, maybe for a boot drive. But anything with a massive amount of writes should be kept as far away from an MLC drive as possible.

Re:*SMOOTCH!* Buh-bye Enterprise! (2)

DigiShaman (671371) | more than 3 years ago | (#35642942)

Why? If the MLC cells are both fast and reliable, why does that matter? If I understand this correctly, MLC blocks would be the equivalent of clusters on an HDD. If any bit of data within that cluster needs to be changed, its entire contents are read and re-written to another cluster. The same process occurs on an MLC drive.
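As a rough back-of-the-envelope sketch of that read-modify-write effect (the page and erase-block sizes below are purely illustrative, not Intel's actual geometry), in Python:

PAGE_SIZE = 4 * 1024                       # bytes, illustrative
PAGES_PER_BLOCK = 128                      # illustrative
BLOCK_SIZE = PAGE_SIZE * PAGES_PER_BLOCK   # 512 KiB erase block

def worst_case_amplification(update_bytes):
    """Bytes physically rewritten per byte logically changed when an
    in-place update forces a full erase-block read-modify-write."""
    return BLOCK_SIZE / update_bytes

print(worst_case_amplification(512))           # 1024.0: a tiny update rewrites a whole block
print(worst_case_amplification(BLOCK_SIZE))    # 1.0: a block-aligned write has no penalty

The point being that this penalty comes from the erase-block granularity, which SLC and MLC share; the endurance difference raised below is a separate issue.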

Re:*SMOOTCH!* Buh-bye Enterprise! (4, Informative)

XanC (644172) | more than 3 years ago | (#35642952)

Because SLCs survive for two orders of magnitude more writes than MLCs.

Re:*SMOOTCH!* Buh-bye Enterprise! (0)

Chas (5144) | more than 3 years ago | (#35642986)

Thanks for getting to this first.

Re:*SMOOTCH!* Buh-bye Enterprise! (1)

arth1 (260657) | more than 3 years ago | (#35643164)

Also, each cell write is much faster (because it can be "sloppier" with only two states per cell), which greatly helps random write speeds even when sequential write speeds are the same.
And random writes are often a bottleneck in master databases.

Re:*SMOOTCH!* Buh-bye Enterprise! (1)

nyctopterus (717502) | more than 3 years ago | (#35643180)

Because SLCs survive for two orders of magnitude more writes than MLCs.

I don't work with this sort of stuff, but does that matter? If MLCs have other advantages, then what's the problem with chucking them out and replacing them when they wear out?

Re:*SMOOTCH!* Buh-bye Enterprise! (1)

by (1706743) (1706744) | more than 3 years ago | (#35643336)

Because two orders of magnitude is the difference in price between a Honda Civic and a Lamborghini Gallardo.

Re:*SMOOTCH!* Buh-bye Enterprise! (2)

fnj (64210) | more than 3 years ago | (#35643452)

More to the point, it's the difference in life between one month and 8 years.

Re:*SMOOTCH!* Buh-bye Enterprise! (1)

Anonymous Coward | more than 3 years ago | (#35644324)

Or decades versus years. After a year in a database server at 195 writes/sec (after merge), 1MB/s average, 24x7 (a load that would easily saturate four disks at 10k RPM), the X25-M 80G with MLC cells says it's still fine. SMART attribute 233, Media_Wearout_Indicator, is at 95% of new. 19 more years should be enough; I doubt any other component in the server will last that long.

SLC, MLC, it doesn't matter that much; SSDs are still more than capable of replacing a whole stack of spinning media in IOPS. For Intel drives, just use 20MB/s = 1 year for the 80G, 40MB/s = 1 year for the 160G version. Mind you, one megabyte per second 24/7 is a *lot* of INSERT queries; database servers are high IOPS but low throughput. It's enough for ~50% of the pageviews of Slashdot in our case.
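A quick back-of-the-envelope version of that arithmetic in Python (the figures are the ones quoted above, not anything from a spec sheet):

SECONDS_PER_YEAR = 365 * 24 * 3600

# Linear extrapolation of SMART attribute 233 (Media_Wearout_Indicator):
# using 5 points of it in the first year leaves 95 / 5 = 19 more years
# at the same load.
points_used_per_year = 5
print((100 - points_used_per_year) / points_used_per_year)   # 19.0

# The "20 MB/s sustained for one year" rule of thumb for the 80G model
# implies a total write budget of roughly:
budget_tb = 20 * SECONDS_PER_YEAR / 1e6    # MB written over a year, in TB
print(round(budget_tb))                    # ~631 TB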

Re:*SMOOTCH!* Buh-bye Enterprise! (1)

ebuck (585470) | more than 3 years ago | (#35643738)

Because two orders of magnitude is the difference in price between a Honda Civic and a Lamborghini Gallardo.

It's closer to the difference between the top speed of a Honda Civic (117 MPH) and something that can cross the Atlantic in 29 minutes.

Re:*SMOOTCH!* Buh-bye Enterprise! (1)

by (1706743) (1706744) | more than 3 years ago | (#35643912)

No, I'm pretty sure this Civic [wordpress.com] could cross the Atlantic in 29 minutes... ;)

Re:*SMOOTCH!* Buh-bye Enterprise! (0)

Anonymous Coward | more than 3 years ago | (#35645014)

Because SLCs survive for two orders of magnitude more writes than MLCs.

I don't work with this sort of stuff, but does that matter? If MLCs have other advantages, then what's the problem with chucking them out and replacing them when they wear out?

You'll have to buy (at least) two so that when one goes south you don't have an outage. Of course if you put both sides of a mirrored-pair in at the same time, they'll likely die at the same time because of similar wear patterns.

Most MLC drives also do not have things like supercaps, so if you suddenly lose power, any data in the buffers goes away. Similarly, many "consumer" SSDs don't respect SATA/SAS cache flush commands, so if you call fsync() on some data and the drive says it's on stable storage when it isn't, you have further risks of data corruption. (Ignoring flush commands is usually done to help boost benchmark numbers.)
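For reference, this is roughly what "calling fsync() on some data" looks like from the application side (a minimal Python sketch; the file path is made up). The catch described above is that even after os.fsync() returns, durability still depends on the drive actually honoring the flush rather than just acknowledging it:

import os

def durable_write(path, data):
    """Write data and ask the OS (and, transitively, the drive) to flush
    it to stable media before returning."""
    with open(path, "wb") as f:
        f.write(data)
        f.flush()              # push Python's userspace buffer to the kernel
        os.fsync(f.fileno())   # ask the kernel and drive to flush their caches

durable_write("/tmp/commit.log", b"transaction record")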

A lot of these scenarios have been discussed on the zfs-discuss list because ZFS can use slow SATA disks for bulk storage, but you can then transparently slot in SSDs for read- and write-caches: basically the cost of SATA but the speed of SSD.

Re:*SMOOTCH!* Buh-bye Enterprise! (2)

Frnknstn (663642) | more than 3 years ago | (#35644084)

No. Two orders of magnitude is 100x. Good SLC vs. good MLC is 10x, only a single order of magnitude longer lasting.

http://www.anandtech.com/show/2614/4 [anandtech.com]

What you forget is that MLC is about half the price of SLC, so you can get 2x the space for the same money. With wear leveling, extra space is extra lifespan, so MLC dies roughly 5x faster than SLC.

What does that mean for you? I put my money (job) where my mouth is. Our reasonably high-traffic OLTP database server uses Intel SSDs as a filesystem-level write cache. We see an average write level of 10MB/sec. The minimum expected write endurance of the drive is 2 petabytes. That means we likely have SIX YEARS before the cells start to become unwritable. At that point, no data will be lost: the drive will report the write failures to the OS and store to cells that haven't yet worn out, and you will be able to continue operating for the next few months while you get a replacement drive.
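A quick sketch of that lifespan arithmetic in Python, plugging in the figures above (a ~2 PB write budget at a 10 MB/sec average):

SECONDS_PER_YEAR = 365 * 24 * 3600

def years_of_life(endurance_tb, avg_write_mb_per_s):
    """Years until the write budget is exhausted at a constant write rate."""
    return endurance_tb * 1e6 / avg_write_mb_per_s / SECONDS_PER_YEAR

print(round(years_of_life(2000, 10), 1))   # ~6.3 years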

Re:*SMOOTCH!* Buh-bye Enterprise! (1)

dkuntz (220364) | more than 3 years ago | (#35644038)

That's why they now have eMLC drives... which, if I'm not mistaken, is what the Intel drives are. The "e" in eMLC stands for Enterprise. They can do 3x the number of write/program cycles of average MLC drives. While still not as good as SLC (or eSLC, which is also out there), it's not as bad as it could be.

But really, for enterprise level storage, you should still stick with spindle based storage, and use MLC drives for read cache, and a mirrored pair of eMLC, or SLC drives for write cache. Or, in ZFS parlance, use MLC drives as cache, and eMLC/SLC as mirrored log drives in a zpool.

Re:Don't like this (1)

Firethorn (177587) | more than 3 years ago | (#35643094)

Well, they DO mention that they tripled the sequential write speed, so it could be that MLC is now competitive, speed-wise, with SLC. High-transaction databases are the bane of storage devices as it is; if that's what you're doing, you're probably best off going with a large RAM cache for both reads and writes. Enough cache and the right database system can turn random writes into what are effectively sequential writes, improving performance that way.

Re:Don't like this (0)

Anonymous Coward | more than 3 years ago | (#35643142)

But SLCs survive a lot longer.

Re:Don't like this (0)

Anonymous Coward | more than 3 years ago | (#35643264)

It's not about survival length, it's about cost. It's always about cost and always will be.

It's obviously cheaper to buy many MLC drives and replace them sooner than to buy a single SLC drive.

Re:Don't like this (1)

arth1 (260657) | more than 3 years ago | (#35643214)

Well, they DO mention that they tripled the sequential write speed

But sequential write speed is rarely a bottleneck - random access small writes are. And those tend to be much worse for MLC than SLC.

Didn't I mention that? (1)

Firethorn (177587) | more than 3 years ago | (#35644390)

I know that random writes are a problem, which is why I mentioned things like 'enough cache' and 'the right database system' to turn those random writes into sequential ones. It's expensive in terms of storage capacity (your DB will have to be bigger), but if MLC is 'enough' cheaper than SLC, you just buy the additional storage. Plus, with modern MLC and wear leveling, you're looking at years at the drive's maximum write speed before the cells start to wear out. If MLC is around an order of magnitude cheaper than SLC, it's cheaper just to replace the drive more often, especially with prices for a given size dropping all the time.

Re:Didn't I mention that? (1)

afidel (530433) | more than 3 years ago | (#35645116)

Huh? With write amplification you can wear out an MLC drive in a matter of months at a small fraction of its rated write speed. This is why I shelled out the money for FusionIO's SLC-based cards; estimated life based on data from our existing SAN is ~5 years, which means we should be good for our planned replacement time of 3.5-4 years (our current servers, which are to be retired in a few days, are 4.5 years old but have been in production use for just over 4).
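To put numbers on that, a rough sketch in Python; the write budget, write rate, and amplification factor are illustrative assumptions, not FusionIO or Intel figures:

SECONDS_PER_YEAR = 365 * 24 * 3600

def months_of_life(endurance_tb, logical_mb_per_s, write_amplification):
    """Months until the flash write budget is exhausted, where every logical
    megabyte written costs write_amplification megabytes of physical writes."""
    physical_mb_per_s = logical_mb_per_s * write_amplification
    seconds = endurance_tb * 1e6 / physical_mb_per_s
    return 12 * seconds / SECONDS_PER_YEAR

print(round(months_of_life(1000, 5, 1)))    # ~76 months with no amplification
print(round(months_of_life(1000, 5, 20)))   # ~4 months at an amplification of 20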

Re:Don't like this (1)

CAIMLAS (41445) | more than 3 years ago | (#35644466)

Why would it be a problem? I haven't looked at the specs on these yet, but some of the SandForce-based MLC drives have MTBF ratings of a million hours and can handle something like 100 years of constant writing.

Still too pricey per gig for mass storage (4, Insightful)

sandytaru (1158959) | more than 3 years ago | (#35642262)

I'm not going to run out and replace my $100 2TB external backup with one of these any time soon. However, I've been tempted to snag a small 40 gig model and use that as my OS drive, and use my existing internal 1TB HDD for the actual data. I think the article is right, in that the price per gig needs to hit $1 before you start seeing acceptance for mass storage solutions from consumers. 95% of users can't tell the difference between a 5600 RPM HDD and a 10,000 RPM one, so they won't care about SSD speeds that much either.

Re:Still too pricey per gig for mass storage (0)

Anonymous Coward | more than 3 years ago | (#35642466)

Looking at the prices in TFA, the 120GB is the sweet spot at $1.74/GB. I have an X25-M and I found the speed increase to be significant, coming from a 640GB 7200RPM HDD. I'm on Win7 ... login takes 2-3 seconds, and once I can see the desktop I can start loading applications as fast as I can click. :) The noise reduction was nice, too. If you've got the money, do it ... you won't be disappointed.

Re:Still too pricey per gig for mass storage (4, Insightful)

CastrTroy (595695) | more than 3 years ago | (#35642468)

Maybe the reason users can't tell the difference between 5600 (5400?) RPM and 10,000 RPM is that, for the most part, what is slowing things down is seek latency. For those two drives, the seek latency is going to be about 12 ms and 7 ms respectively, which, you're right, the user probably won't notice. But a solid state drive will give you a seek time of about 0.1 ms, which will make a huge difference in many situations. Most users will probably notice a change like that, because seek time is probably what is slowing down the computer most of the time.
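A rough sketch of that latency arithmetic in Python, treating random throughput as simply the reciprocal of per-access latency (which ignores queueing, caching, and rotational details, but shows the scale of the gap):

def random_iops(latency_ms):
    """Operations per second if every random access pays one full latency."""
    return 1000.0 / latency_ms

for name, ms in [("~5400 RPM HDD", 12), ("10k RPM HDD", 7), ("SSD", 0.1)]:
    print(name, round(random_iops(ms)), "ops/sec")
# ~5400 RPM HDD 83, 10k RPM HDD 143, SSD 10000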

I can tell a diff. between 5400/7200/10k, bigtime (0)

Anonymous Coward | more than 3 years ago | (#35644230)

Whenever I use my niece's laptop (pretty "state of the art"/current, as my brother makes good coin as an officer in the service and takes care of her with the very best gear)?

It PALES in comparison to the WD VelociRaptors I use (SATA II, 10k RPM, 16MB buffers and elevator algorithms), driven by a Promise EX8350 RAID 6 caching controller with 128MB of ECC RAM.

Heck, even BEFORE I was driving my Raptors off of that controller, I had MUCH faster response from 10k RPM disk drives...

Yes, I notice a difference between 5,400rpm disks and even 7,200rpm ones, like my Seagate 7200.11 SATA II unit with PRT tech, versus the 5,400rpm disks you typically see in laptops (albeit with good reason there, because heat builds up faster on higher-RPM disks).

---

"Maybe the reason users can't tell the difference between 5600 (5400??) RPM and 10,000 RPM is because for the most part what is slowing things down is the seek latency. In both those drives, they seek latency is going to be 12 ms and 7 ms respectively. Which you're right, the user probably won't notice." - by CastrTroy (595695) on Monday March 28, @02:07PM (#35642468) Homepage

Again, in regard to that? All I can state is what I said above... as for the rest of what you stated, quoted next below?

Hey - I agree, 110%, & from MORE than 15++ yrs. of experience using SSD/RamDrive/RamDisk tech!

---

"But a solid state drive will give you a seek time of about 0.1 ms which will make a huge difference in many situations. Most users will probably notice a change like this because seek time is probably what is slowing down the computer most of the time.." - by CastrTroy (595695) on Monday March 28, @02:07PM (#35642468) Homepage

Man, again: ABSOLUTELY. I've been doing it here for 15++ yrs. straight, for a HUGE number of things that yielded, in summation, MUCH faster and much noticed performance gains... no benchmarks required (the link below shows what I do, EXACTLY, enumerated in fact... lots of things get moved to SSD here, non-FLASH types though!).

APK

P.S.=> I know one thing from experience over that 15++ yr. period using RamDisks/RamDrives/SSD's here (whatever you wish to call them):

You speed up the slowest part of a system, typically the HDDs? You can tell, no benchmarks needed, that it's flat-out F A S T E R, by far... far quicker response!

I even offset my HDD disks' "lag" (imagine that, saying that about my setup above) even MORE by using NON-Flash RamDisks/Ramdrives in hardware for around 10 yrs. now to "unburden them" + reduce fragmentation on them (& 5++ yrs. before that using Software based Ramdisks)...

How I do it is illustrated in my post on this article today; it lists 7 things (logging included) that I do on SSD which work for a NOTICEABLE performance gain in the entire system's operation:

http://hardware.slashdot.org/comments.pl?sid=2057790&cid=35643162 [slashdot.org]

You MAY find it interesting (or not, as you may be aware of it)... all I know is, it WORKS! apk

Re:Still too pricey per gig for mass storage (1)

MikeBabcock (65886) | more than 3 years ago | (#35644612)

In my experience users can see the difference from just 4500 to 5400 rpm laptop drives. And that's not much of a jump at all.

From 5400 to 7200 RPM, a Windows PC is quite substantially more responsive. If you don't feel the difference with a 15k RPM enterprise drive your problem may simply be that the drive was already fast enough for the workload you're giving it.

That said, for intense database applications, every moment counts.

Re:Still too pricey per gig for mass storage (0)

Anonymous Coward | more than 3 years ago | (#35642486)

Uh, no. Regular hard drives are at 20 GB / $1 and this is at 0.5 GB / $1, so the prices need to come down by about an order of magnitude (10-20 times less per GB) before it is usable as mass storage. However, if regular hard drive makers continue to stagnate as they have for a few years now (the only recent advance was from 2 TB to 3 TB HDDs, which is negligible), then we could get to competitive flash pricing within a decade.
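As a rough sketch of that gap in Python (the ~30%/year flash price decline is my own assumption, roughly the recent trend, not a given):

import math

hdd_gb_per_dollar = 20
ssd_gb_per_dollar = 0.5
gap = hdd_gb_per_dollar / ssd_gb_per_dollar        # 40x

assumed_decline_per_year = 0.30
years_to_close = math.log(gap) / math.log(1 / (1 - assumed_decline_per_year))
print(round(years_to_close))   # ~10 years, if HDD $/GB really did stand still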

Re:Still too pricey per gig for mass storage (1)

ThanatosST (1896176) | more than 3 years ago | (#35642566)

I wouldn't exactly call an increase in capacity of 50% "negligible".

Re:Still too pricey per gig for mass storage (4, Insightful)

Rockoon (1252108) | more than 3 years ago | (#35643010)

The $/GB metric is often irrelevant.

Sure, I can get 3TB for $100, but for $170 I can get a very high performance SSD that is large enough (90GB) for my needs.

Why do all my computers need terabytes of storage? That's right... they don't. I only need large storage on shared network media. My computers need high-performance storage, not stupid amounts of extra GBs.

Re:Still too pricey per gig for mass storage (4, Interesting)

CohibaVancouver (864662) | more than 3 years ago | (#35643226)

The $/GB metric is often irrelevant.

Bingo.

I know of a large company that is starting the switchover. They calculated that eliminating the productivity lost to long OS startups more than pays for the cost of switching to SSDs. The math that you might use on your home computer doesn't always apply in the business world.

Re:Still too pricey per gig for mass storage (1)

Nikker (749551) | more than 3 years ago | (#35643800)

If you're revamping your data center with SSDs because of boot times, you're doing something really wrong. There is no way you should be able to reap any kind of cost benefit from that amount of boot time. Even workstations normally get left on most of the year anyway, and IT should only be patching and rebooting during off hours. Not sure where you expect to gain any kind of benefit by cutting your boot times; maybe you could let me know?

Re:Still too pricey per gig for mass storage (1)

CohibaVancouver (864662) | more than 3 years ago | (#35644062)

Not sure where you expect to gain any kind of benefit by cutting your boot times; maybe you could let me know?

This was largely in a fleet of tens-of-thousands of laptops, many of which previously took several minutes to boot, due to a large number of services, security applications etc.

Re:Still too pricey per gig for mass storage (0)

Anonymous Coward | more than 3 years ago | (#35643512)

How nice for you. Your needs must not be very demanding, then.

Re:Still too pricey per gig for mass storage (1)

Kevin Stevens (227724) | more than 3 years ago | (#35643862)

It's interesting that you mention shared network storage. After I ditched my desktop 4 years ago and went laptop-only, I decided to buy a NAS for mass storage. At the time, my motivation was mostly that I was living in a small Manhattan apartment, and laptop disks were relatively slow and expensive. I was ahead of the curve then; at the time it was practically unheard of even in geek circles. I am glad it is getting mentioned on Slashdot and is thus getting a bit more prominence. I can definitely see this being the model of the future.

The fact that my NAS box is essentially a shoebox-sized, low-powered Linux server that I can access from anywhere on the net is a huge plus too. (I have a Synology DS211j.)

Re:Still too pricey per gig for mass storage (3, Insightful)

Kevin Stevens (227724) | more than 3 years ago | (#35643730)

Considering flash was about $7.50/GB in 2007, $3.80 in 2009, and is now down to about $1.71/GB, all while capacities are increasing, I think pricing will be "competitive" in a year or two. Also, we are just beginning the release cycle of the next generation: OCZ and Crucial are set to release their products this month, so $/GB could drop further in the very immediate future. Speeds are still increasing by leaps and bounds with each generation; the new Vertex 3s, in real-world use, have seen sustained transfer rates over 400 megabytes per second.
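A quick extrapolation of those price points in Python; this is just a trend line fitted to the numbers quoted above, not a forecast:

import math

prices_per_gb = {2007: 7.50, 2009: 3.80, 2011: 1.71}

span_years = 2011 - 2007
annual_factor = (prices_per_gb[2011] / prices_per_gb[2007]) ** (1 / span_years)
print(round((1 - annual_factor) * 100))    # ~31% average decline per year

years_to_one_dollar = math.log(1.0 / prices_per_gb[2011]) / math.log(annual_factor)
print(round(years_to_one_dollar, 1))       # ~1.5 more years to $1/GB at that rate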

Adding an SSD is the best upgrade you can do to increase performance. If you look at the videos on YouTube, they show that even the largest, slowest apps like Photoshop, CAD, WoW, etc. load more than 2x as fast as from a hard disk; most app loads are nearly instantaneous, and boot times are halved as well. SSDs use a fraction of the energy, which means cooler laptops with longer battery lives, and quieter desktops that also require less cooling. You are right that SSDs aren't yet suitable for mass storage; I think for at least 5 years we will see hybrid setups, and then gradually a move towards SSD-only systems.

There is real value in adding an SSD today though, IMHO.

Re:Still too pricey per gig for mass storage (4, Interesting)

hairyfeet (841228) | more than 3 years ago | (#35642500)

Well, I'd argue that on modern Windows the extra expense really isn't worth it for a lot of regular users. RAM is cheap, Windows 7 Superfetch will quickly learn what programs you launch and when, and let's face it, no SSD beats RAM.

I maxed my board out at 8GB, and with Superfetch frankly everything I normally use launches as fast as I can click it, since with my predictable behavior Windows 7 simply loads it into RAM at the appropriate time. Considering that maxing out most boards costs less than $100 and hybrid sleep makes shutdowns kinda pointless, unless you have a program that requires serious I/O there simply isn't a point in the average user going SSD, not when 2TB drives can be had for $80.

Too bad SSDs didn't come out 10 years ago; they would have been most welcome when everyone was stuck on IDE with tiny caches and lousy memory management. For the "Average Joe" with plenty of RAM, big caches on the HDDs, and Superfetch preloading programs into RAM based on time and usage patterns? Kinda pointless IMHO, especially at current prices per GB.

The only ones I've sold have been to my ePeen "Must have the highest benchmarks!" gamer customers, and playing with their PCs, other than bootup I really couldn't feel a difference. That is why I've been telling my regular customers and those wanting new builds to max out on RAM first and then, if they still have money to blow after getting the rest of their wish list, get an SSD for an OS drive; frankly, if the choice is RAM or SSD, I'd always advise the most RAM, as it'll get more use.

Re:Still too pricey per gig for mass storage (2)

fast turtle (1118037) | more than 3 years ago | (#35642740)

I've been preaching the max-RAM option to people who are planning new systems since 2003, when I was able to see the difference it made using Gentoo Linux. The testing method was a bit simplistic, but as it involved bootstrapping the system, the difference in time required with 512MB compared to 1GB was impressive and convinced me to install the most memory I could afford.

What I find funny now is that people are spending their money on High Performance Gaming RAM when actual benchmarks show no improvement in performance for the same amount of memory. Instead, you get more bang for your buck by going with the maximum memory your system can use; for gaming, that improves things more than any other upgrade, with the exception of a high-end video card.

Re:Still too pricey per gig for mass storage (1)

GameboyRMH (1153867) | more than 3 years ago | (#35643488)

What I find funny now is people are spending their money on High Performance Gaming RAM when actual benchmarks show no improvement in performance for the same amount of memory.

Worst of all are the ones with the fancy heatsinks. Add those and the price goes up by 50%.

Re:Still too pricey per gig for mass storage (0)

Anonymous Coward | more than 3 years ago | (#35643686)

The appeal of High Performance RAM (or the modules with the fancy heatsinks) is the tight timings that allow for more stable and ambitious processor overclocks. Heatsinks are necessary because upping the voltage results in increased heat, which reduces the lifespan of the components.

If you don't overclock, then you don't need fancy RAM.

Re:Still too pricey per gig for mass storage (1)

GameboyRMH (1153867) | more than 3 years ago | (#35643826)

I know, I actually have some tight-timing gaming RAM in my PC that did come with heatsinks. I was talking about the UBER-L33T FATAL1TY DOMINATOR SKULLBRINGER RAMPANT ASSASSINATION MODULES with big colorful sci-fi looking heatsinks with fancy graphics on the sides that cost way more because of the e-peen points.

Re:Still too pricey per gig for mass storage (1)

Roger Lindsjo (727951) | more than 3 years ago | (#35645036)

Max RAM might be a bit pricey. OWC DDR3 SDRAM: 2x4GB is ~$100, 2x8GB is ~$1600, and I doubt the latter is a cost-effective upgrade for most users.

Re:Still too pricey per gig for mass storage (1)

lgw (121541) | more than 3 years ago | (#35642824)

I turn off my gaming rig when I'm not using it - it's a bit of a power hog even when idle. An SSD reduced the power-switch-to-usable delay to 1/3 of what it was with a fast HDD, especially reducing the time spent grinding from logon to all services started and actually ready.

Storage performance in general isn't very noticeable on a running gaming system, as other delays tend to dominate (especially on DRM-infested games that need to phone home), but the boot time reduction was worth it for me. I only really need ~200GB for gaming, so the extra capacity of the HDD isn't much to give up.

Re:Still too pricey per gig for mass storage (1)

GameboyRMH (1153867) | more than 3 years ago | (#35643096)

This is why I went with a RAID0 array of 10krpm drives for my gaming machine in early/mid 2009. I get most of the speed and way more capacity (600GB) for a much lower price.

Early last year I bought a laptop, which as usual came with a hard drive that was too slow, so I was going to get a replacement hard drive at the same time. I figured I needed at least 64GB of storage. An SSD was still way too expensive, so I went with a 160GB 7200RPM drive that only cost me $100 and doesn't make me wait too long for anything other than going in and out of hibernation (which is actually way slower than shutting down and booting up in Ubuntu Lucid).

All this time SSDs just didn't, and still don't, make sense for anything other than a small boot drive. I plan to switch my home server/HTPC's boot drive to an SSD when its nearly 20-year-old boot disk fails; I don't need the speed on that computer, I'm doing it for the power savings and quiet.

Re:Still too pricey per gig for mass storage (2, Informative)

Anonymous Coward | more than 3 years ago | (#35643344)

I used to have two regular HDs in RAID 0; now I have two SSDs in RAID 0. There is simply no comparison. SSDs absolutely blow away traditional hard drives, and it's not about the MB/sec... it's about the I/Os per second, and in that sense SSDs are about 70-100 times faster than traditional disks. Photoshop, Office, Firefox, everything opens instantly. I can even open 5 programs at once and they still all open instantly. This can all be done with 4GB of RAM too; no need to buy more memory to make up for having a slow disk array anymore. Even with 2GB of RAM it's still insanely fast.

Oh, and a virus scan takes about 90 seconds, and I don't even notice it running; everything still opens instantly even with it running in the background... try that with your Raptor RAID 0 array.

Re:Still too pricey per gig for mass storage (1)

petermgreen (876956) | more than 3 years ago | (#35643218)

RAM is cheap

Checking one of my favorite suppliers (there are probably cheaper ones out there):

Ram is about £10 per gig and if you want to go beyond 16GB it's time to bend over and pay a lot more for your CPU/MB to get support for that extra ram.
SSD is about £1.50 per gig
HDD is about 10p per gig

Superfetch sounds great if you have a regular schedule, switching between programs at known times every day; for those whose usage patterns aren't so consistent it doesn't seem so useful. (I'm still mostly on XP myself, wondering whether to somehow get a copy of XP for my next computer or bite the bullet and go Win7.)

Re:Still too pricey per gig for mass storage (1)

hairyfeet (841228) | more than 3 years ago | (#35644874)

It isn't just a regular-schedule thing! Let me give an example: while I probably have 30 games installed on my game drive at any one time, I'll usually only be focused on one or two at a time; the rest are deals like Steam's midweek madness where I couldn't beat the price and I'll get around to them when I can.

Windows 7 knows this about me and has the essential DLLs loaded for the games I'm using, so that when I'm watching the developer screens (which sadly are more and more becoming unskippable) it quickly loads the chunks of the game into RAM to minimize HDD access. And I bet if you were to actually look at what you launch and when, you have more habits than you'd care to admit, like when you like to check your mail or when you watch videos or listen to music.

So I'm telling you, don't just "bite the bullet": jump on Windows 7 with both feet and I bet you'll quickly love it. And this is from someone who HATED the Fisher-Price XP UI and thought Vista looked like a bad DeviantArt theme and handled worse. I don't know how they managed to pull it off, and it wouldn't surprise me if they cock it up for Windows 8, but Win 7 seems to be that rare mix: friendly for noobs while being even easier for old hands like myself.

Now as far as memory goes? With 2GB it'll run fine and be responsive as hell, with 4GB it'll be great and crazy responsive, and 8GB is just like nirvana IMO, and 2GB sticks are pretty reasonable as long as you aren't trying for, say, DDR1. When I say "max out" I mean within reason; I don't mean go crazy ePeen money on it. Theoretically my board will hold 16GB if I were to come up with four 4GB DDR2 800MHz sticks, but the price/performance ratio just isn't worth it. If your board can take four 2GB sticks, cool, get that; if not, just put in as many 2GB sticks as it'll hold.

Oh, a final word of advice for when you switch... ReadyBoost: use it. Flash drives are so incredibly cheap now that NOT using it is frankly just stupid; you are throwing away free performance for the price of a cheapo flash stick. I picked up an 8GB flash drive on sale for $8.99 at TigerDirect and I CAN tell the difference. What it does is turn any flash stick you assign to ReadyBoost into a poor man's SSD for random reads. Since random reads are where an SSD shines and where HDDs suck, this in effect turns ANY HDD into a hybrid for cheap, with the size of the cache determined by you. It'll speed up your boot and shutdown as well as making random reads of often-used files crazy fast, and like Superfetch, the more it has available for cache the more it can do for you. With 8GB drives so dirt cheap, it is an easy way to boost your machine on a budget.

So try Windows 7; the new memory management and better UI with breadcrumbs (man, that is sweet, instantly hopping anywhere in a tree structure) and Explorer remembering the last 10 folders and making them one click away on the desktop is just too damned nice. And don't go nuts with the RAM; just look at the sweet spot on chips and max out on those. As I said, 4GB will give Win 7 plenty to work with (with all the bling and Aero on, I'm using less than 1GB for the OS) and 8GB will just let it go nuts when it comes to caching. And as I said in the other post, NO DRIVE will touch RAM in the foreseeable future, period.

Re:Still too pricey per gig for mass storage (4, Informative)

PRMan (959735) | more than 3 years ago | (#35643316)

I already have 8GB on my home server and that makes very little difference from 4GB since it sits idle at ~4GB most of the time. But the SSD made a world of difference. A 2-3 minute boot became 25 seconds. A 1+ minute shutdown became about 5 seconds. I don't worry about reboots anymore, because it's around 30 seconds total (instead of 5 minutes)! Game cutscenes are almost instantly skippable (within 2-3 seconds), if they allow it. EVERY program loads instantly. Installs take mere seconds (even OpenOffice or Office 2007).

BTW, my RAM maxes out at 24 GB on this board, but if you told me the 24GB would help more than the 64GB SSD (about $90), you would be doing me a horrific disservice.

Re:Still too pricey per gig for mass storage (0)

Anonymous Coward | more than 3 years ago | (#35644764)

And you call this a server? Why would a server be shut down and booted up every now and then? My home server is a late-2007 Mac Mini, which is coincidentally the system I do most of my stuff on now. That system has been running 24/7 with only 3 interruptions: the first was to replace the RAM and the HDD (max out the RAM, install a 7200rpm drive), the second was when it was in my suitcase when I moved from the EU to the US, and the third was another HDD replacement for higher capacity. A server is supposed to be running; boot time is irrelevant when you only boot it once a year (or more like once every two months when security patches come).

Yes, I have used a CompactFlash-to-PATA IDE adapter in my home router, which is coincidentally my desktop from 1997. The Pentium MMX 233MHz CPU, overclocked at 250MHz, RAM maxed at 256MB, and that 2GB CompactFlash card have been running 24/7 for the past 3-4 years with Linux and then pfSense. The CompactFlash was put in for noise reduction; there are no moving parts in that router now.

Re:Still too pricey per gig for mass storage (5, Informative)

Ndkchk (893797) | more than 3 years ago | (#35643646)

SSDs affect other things besides just speed. I put one in my netbook and battery life went from six hours to eight - and it boots in fifteen seconds and starts programs almost instantly. The difference in power consumption matters less in a bigger laptop, but it would still help. I also don't see why you're talking about an SSD and a 2TB drive as a binary choice. The "average user" doesn't need 2TB; they already have enough space with the ~500GB that came with their Dell. They could get an SSD, keep the hard drive they already have, get someone to move the Windows install, and have the best of both worlds.

Re:Still too pricey per gig for mass storage (1)

mattack2 (1165421) | more than 3 years ago | (#35643724)

hybrid sleep makes shut downs kinda pointless

Why? According to the wikipedia article, hybrid sleep puts the machine in 'standby' mode. Isn't that just another term for sleep.. where the computer is still using _some_ more power than turned off?

Re:Still too pricey per gig for mass storage (2)

gad_zuki! (70830) | more than 3 years ago | (#35644910)

Caching solutions are always poor. No system is smart enough to cache everything, and there's a cost to caching: misses, first-time reads, etc., that produce lag and the characteristic disk churn of mechanical drives.

I find that in everyday usage, most users are disk-bound. The CPU and RAM are just sitting around waiting for the disk. I've only put in 3 SSDs and the difference is night and day. The low seek times and transfer speeds make the computer feel completely different. Once Joe Average gets to see one of these in person, he'll be demanding one. Unfortunately, computer marketing is built around CPU speed, which is useless past a certain point for most users, e.g. paying $150 for a 0.1 GHz uptick.

Ironically, gamers probably don't get as large a boost as general users. A game does big reads and occasional little writes; you don't get all the benefits of SSDs in that scenario.

Toss in the power savings and you'll find that SSDs are ready for the mainstream. Not to mention the average user uses something like 30 or 40 gigs of the drive, most of which is the OS and binaries, and only has a couple of gigs of personal files. Power users will just tack on 1TB drives and be on their way.

Once 120GB drives hit $100, mechanical hard drives in laptops are dead. I'm already seeing people compliment the MacBook Air on how fast it is, when it has a pretty meager CPU; they don't know that, they just see that it runs quick because of the SSD.

Re:Still too pricey per gig for mass storage (1)

Junior J. Junior III (192702) | more than 3 years ago | (#35642560)

If you think it's too pricey for mass storage, just pretend that it's 2005.

Re:Still too pricey per gig for mass storage (1)

pz (113803) | more than 3 years ago | (#35642604)

I've been tempted to snag a small 40 gig model and use that as my OS drive, and use my existing internal 1TB HDD for the actual data.

I've been doing exactly that for about a year now and highly recommend it. Use 3 (or 4) 1TB HDDs in a RAID for your non-OS storage and you'll add some failsafe capacity as well as speed.

Re:Still too pricey per gig for mass storage (4, Funny)

dc29A (636871) | more than 3 years ago | (#35642620)

I'm not going to run out and replace my $100 2TB external backup with one of these any time soon.

I am not going to run out and replace my minivan that I use to ferry my four kids and wife with a two seater sports car any time soon!

Re:Still too pricey per gig for mass storage (0)

Anonymous Coward | more than 3 years ago | (#35642818)

That's cool, I'll ferry your wife back and forth to the motel in my two seater sports car then.

Re:Still too pricey per gig for mass storage (1)

Paracelcus (151056) | more than 3 years ago | (#35643896)

Thou shalt not covet thy neighbor's wife, unless...

She looks like Angelina Jolie, or Raquel Welch (circa 1970), or Halle Berry!

Re:Still too pricey per gig for mass storage (1)

GameboyRMH (1153867) | more than 3 years ago | (#35643588)

I'd say small high-speed drives are the 2-seater sports cars of storage. No major hauling capacity but you can still fit a decent bit of stuff inside and go plenty fast enough. SSDs are the sportbikes of storage. Costly, finicky and kind of unsafe but ZOMG SO MUCH SPEED!

And Intel SSDs are the Italian sportbikes of storage - more expensive than the competition because of the name :P

Re:Still too pricey per gig for mass storage (1)

Anonymous Coward | more than 3 years ago | (#35643880)

What you need is a mid-life crisis.

Run out and replace the minivan AND wife and four kids with a two-seater sports car.

Go on, you know you want to... ;)

Re:Still too pricey per gig for mass storage (1)

ebuck (585470) | more than 3 years ago | (#35643936)

I'm not going to run out and replace my $100 2TB external backup with one of these any time soon.

I am not going to run out and replace my minivan that I use to ferry my four kids and wife with a two seater sports car any time soon!

The analogy falls apart when your two seater sports car can make 80 round trips in the time the minivan makes one. Once you realize the true speed differences, you will start thinking of that two seater sports car as a minivan with seating capacity for 81, or one that can shuffle a "mere" four people around in seconds instead of minutes.

Re:Still too pricey per gig for mass storage (1)

Firethorn (177587) | more than 3 years ago | (#35642756)

in that the price per gig needs to hit $1 before you start seeing acceptance for mass storage solutions from consumers.

Hmm... hard to say, hard to say. Personally, I'm thinking more like $.10 per gig. For comparison, HDs are currently around $.05 per gig. I bought a 60-gig SSD a while back and it's just not big enough; it constantly forces me to shift stuff to the HD (I LOVE symbolic links!). I can keep the OS, a few applications, and maybe a couple of games on it. Performance improvements, at this point, are almost unnoticeable. Personally, I think a hybrid SSD/HD [storagemojo.com] solution is currently the best idea, at least for the common user, though I think I'd prefer 8-20 gigs of flash cache, not 4.

Re:Still too pricey per gig for mass storage (1)

demonbug (309515) | more than 3 years ago | (#35642786)

I'm not going to run out and replace my $100 2TB external backup with one of these any time soon. However, I've been tempted to snag a small 40 gig model and use that as my OS drive, and use my existing internal 1TB HDD for the actual data. I think the article is right, in that the price per gig needs to hit $1 before you start seeing acceptance for mass storage solutions from consumers. 95% of users can't tell the difference between a 5600 RPM HDD and a 10,000 RPM one, so they won't care about SSD speeds that much either.

The difference between a 10,000 RPM hard drive and an SSD is much bigger than the difference between a 5600 RPM HDD and a 10,000 RPM HDD. They will notice.

Like many others I went with an SSD for my boot drive when I built my last system (Crucial 64GB RealSSD), in combination with a cheap 7200 RPM HDD for data. The SSD makes a HUGE difference - no waiting for applications to start, very quick startup (the longest part by far is the various BIOS checks that it feels the need to go through), no need to defrag, etc. Very nice. The one downside is that since the HDD is rarely used it tends to go to sleep, so when something DOES need to access it it takes a couple seconds to spin up and get going. After cruising through everything with the SSD this makes waiting for the HDD to get going especially painful, but it isn't a big deal - and obviously you would have the same issue using two HDDs.

With the availability of a relatively cheap 40 GB option I can see the start of widespread adoption in the corporate world. In my experience 40 GB is plenty for OS and applications for the vast majority of office drones (myself included), with pretty much all data staying on the server these days. With the cheapest HDDs you can get generally around the $40 mark you are looking at only a $40 price difference to stick in an SSD, with the attendant massive speed increase. Having an SSD makes everything else much faster - virus scans are quicker, the computer is more responsive, no need for defragging - and if you are using it for relatively static application and OS data only, you should see a significant decrease in drive failures. At this point I would be pretty pissed if management/IT went ahead with a new computer system rollout that didn't take advantage of SSDs in workstations (well, except for the fact that the computer manufacturers all like their massive markup on SSDs so it is doubtful you could get them built at a reasonable price right now).

Re:Still too pricey per gig for mass storage (2)

Lord Ender (156273) | more than 3 years ago | (#35643590)

Right. Focusing on read and write speed is misleading. The reason for this is that the perceived speed of SSDs comes from seek times, not R/W speed.

Think of it like this: ever play a game on a server in Korea with a one-second ping? Even if your connection is 100Mb/s, that feels horrible. This is analogous to a mechanical hard drive. Compare it to the LAN game where the server is 10ms away - even on a 10Mb/s pipe it's far better. That's what an SSD feels like.

Re:Still too pricey per gig for mass storage (1)

Firethorn (177587) | more than 3 years ago | (#35644016)

With the availability of a relatively cheap 40 GB option I can see the start of widespread adoption in the corporate world. In my experience 40 GB is plenty for OS and applications for the vast majority of office drones (myself included), with pretty much all data staying on the server these days.

I see several adoption points - and the biggest one isn't performance related, but when the cheapest SSD that 'works' is cheaper than the equivalent cheapest HD.

What does this mean? When the cheapest available HD costs the manufacturer $20 and the equivalent SSD costs $19, it might be a 500GB HD for $20 versus 40GB of SSD for $19, but the SSD will be cheaper. HDs offer 'enough' performance today, and their vastly cheaper cost per GB still outweighs the fact that SSDs scale 'down' better than HDs. The cheapest HD on Newegg at the moment is a 160GB model at $35, or $.22 per gig. The cheapest per gig? 2TB for $75: $.04 per gig.

A 40Gig SSD still runs ~$95 and up. It's got to get to around 1/3rd that price. Probably 1/4, because HD price and size isn't staying static either, and demand for size is still rising.

Personally, I think one trick would be integrating the SSD chips directly onto the motherboard for savings that way.

Re:Still too pricey per gig for mass storage (1)

GameboyRMH (1153867) | more than 3 years ago | (#35642832)

Oh they sure can tell the difference (they may not know it's the hard drive, but they definitely know the computer is shit-slow), but they'd just rather keep the cost of their computers down.

Re:Still too pricey per gig for mass storage (0)

Anonymous Coward | more than 3 years ago | (#35642944)

One acronym: L2ARC.

I double my rotating storage as the price comes down to around $150/drive, and I don't even care if I'm using low power drives. Chances of my working set for my file server exceeding 40GB are very small. When they do, I'll upgrade, but it actually takes about a week or more for my cache to warm up fully.

WRT hybrid drives.... Why put this in closed firmware? Doesn't it make sense to decouple the devices based on SSD cost right now?

I would love to see this accessible outside of ZFS, but without checksumming I don't really trust it (bcache). Btrfs needs this. (Or Oracle should dual-license ZFS; why they are sponsoring development of Btrfs while keeping ZFS under the CDDL eludes me.)

Re:Still too pricey per gig for mass storage (1)

TheTyrannyOfForcedRe (1186313) | more than 3 years ago | (#35642982)

You're right about the $1/gig price point. My money will remain in my wallet until then. Besides price, I'm disappointed at the lack of a 60GB model. 40GB is too little; 80GB is too much. I guess I have to wait until the next release cycle for my mythical $60 60GB Intel SSD.

Re:Still too pricey per gig for mass storage (1)

mattack2 (1165421) | more than 3 years ago | (#35643872)

OK, Goldilocks.

Re:Still too pricey per gig for mass storage (1)

ColdWetDog (752185) | more than 3 years ago | (#35643320)

For laptops, the performance increase is incredible and obvious. I've replaced spinning drives in two MacBooks and a MacPro. The latter was pretty damned fast anyway and going from a 7500 RPM drive to the SSD did make a difference, but not the absolute stunning level of performance increase that I've noticed in the laptops. That might be because, as Macs, they're on the slow end of high performance (they're both circa 2007) and came with pretty sluggish hard drives to boot.

But it's night and day. In my MacBook Pro, I replaced the optical drive with a 1 TB 5400 RPM drive, and I can boot off that as well; compared to the SSD, it seems to take forever to boot and switch applications. I'll never go back.

The SSD speed is of a different magnitude (4, Informative)

aussersterne (212916) | more than 3 years ago | (#35643962)

There's no comparison between the 5,600-10,000 RPM gap and the HDD-SSD gap.

I took the plunge last year and installed X-25M drives in my desktop and laptop as OS drives, with secondary drives for user data. The difference is the single greatest performance jump I've ever experienced in 30 years of upgrading, going even back to the days of replacing clock generators on mainboards to overclock 8-bit CPUs by 50 percent.

There is literally a several-orders-of-magnitude difference in the overall speed of the system. If you haven't experienced it, a description of the difference doesn't sound credible, but a multi-drive RAID-0 array of 10k drives doesn't come close to a single SSD in terms of throughput.

I can't go back to non-SSD OS installs now. Systems without an SSD literally seem to crawl, as if stuck in a time warp of some kind. Non-SSD systems seem, frankly, absurdly slow.

Re:The SSD speed is of a different magnitude (1)

the eric conspiracy (20178) | more than 3 years ago | (#35644918)

There is literally a several-orders-of-magnitude difference in the overall speed of the system.

LITERALLY?

I call BS. Assuming "several" means at least 3, several orders of magnitude implies an increase of a factor of at least 1000 in your overall system performance.

Since most performance comparisons between an ordinary hard drive and an SSD show at most a factor of 2 difference in tasks like booting Windows, you are WAY off. Even drive-specific tasks like sustained reads are typically no more than a factor of 4 better.

Re:Still too pricey per gig for mass storage (0)

Anonymous Coward | more than 3 years ago | (#35644144)

The difference between a 5600RPM drive and a 10000RPM one isn't as big as the difference between any HDD and an SSD. Seek times are orders of magnitude smaller (6.9ms on a WD Raptor vs. 0.1ms for any SSD).

Re:Still too pricey per gig for mass storage (1)

glwtta (532858) | more than 3 years ago | (#35644348)

I'm not going to run out and replace my $100 2TB external backup with one of these any time soon.

And I'm probably not going to replace my RAM with a tape drive; what's your point?

It's a 3GBps part (2)

ameline (771895) | more than 3 years ago | (#35642528)

It is a bit behind the times, with no SATA 3 (6 Gb/s) support.

Re:It's a 3GBps part (1)

Rockoon (1252108) | more than 3 years ago | (#35643082)

They probably see the SATA 3.0 market as currently too small for OCZ to become so dominant that it could prevent Intel from successfully entering it later.

Re:It's a 3GBps part (0)

Anonymous Coward | more than 3 years ago | (#35643534)

AFAICT OCZ's products are too flaky for the company to dominate the market. I've purchased two Vertex drives and one Vertex 2, and all have failed multiple times. I've probably spent close to $100 in RMA shipping, and the lost time is more expensive than that.

Sure, they just released the Vertex 3, but I think there are a lot of people out there like me who are far too skittish to deal with OCZ anymore.

Re:It's a 3GBps part (0)

Anonymous Coward | more than 3 years ago | (#35644108)

The 3XX drives are the mainstream segment. Intel's 510 drives already have SATA III (6 Gbit/s).

X25-E replaced by Hitachi (0)

Anonymous Coward | more than 3 years ago | (#35642768)

Note that the as-yet-unavailable Hitachi SSD400S enterprise drives are based on Intel controllers and SLC NAND. No word on pricing, but at the capacities they plan to offer and with it being targeted as an Enterprise SAS/FC part, they will probably be very expensive.

Power Safe Write Cache (1)

KonoWatakushi (910213) | more than 3 years ago | (#35643028)

Looks like Intel has scrapped the "power safe write cache" that was slated for the next generation of drives.

Re:Power Safe Write Cache (1)

disccomp (1454521) | more than 3 years ago | (#35643334)

...Well, "Intel has also included small capacitors in its latest SSD, so that in the event of a power loss, data writes in progress to the NAND flash memory will be completed."

Re:Power Safe Write Cache (1)

Amouth (879122) | more than 3 years ago | (#35643508)

Humm, what??? They added caps to ensure that there is enough power to finish write operations in the event of power failure... they didn't "scrap" it, they implemented it.

Re:Power Safe Write Cache (1)

greg1104 (461138) | more than 3 years ago | (#35644154)

No, they didn't scrap it; read the white paper [intel.com] about it. You can even see all the capacitors involved in the AnandTech review [anandtech.com]. In theory, this finally fixes the problem that made Intel's drives unusable for high-performance databases: the write cache was useless in that context because it lied about completed writes.

FLASH SSD's are FINALLY "there" (almost) (0)

Anonymous Coward | more than 3 years ago | (#35643162)

I say that, because I have "held off" for years now, waiting for them to get to THIS stage of performance (especially on writes, where they're now, FINALLY, 'dusting' std. mechanical HDD's... even my WD Velociraptors, iirc!)...

Fact is, this one (or the one AFTER it, which I am guessing WILL be based on 6Gb/sec. tech) will probably be what I purchase. I took a look over at STORAGEREVIEW.COM and only the OCZ Vertex, along with the Intel 510 model, seems to 'best it' (however, I only cursorily skimmed; there may be faster ones for all I know, as I don't pay that much attention to hardware nowadays, and haven't for years really... only when something REALLY NICE comes along!).

I've truly been a "BIG FAN" of SSD tech & before that, software based ramdisks... & it really doesn't take a genius to know if you speed up the slowest/worst part of ANYTHING, you "gain a 1,000-fold", but there you are...

See... when I wrote stuff up about it for SuperSpeed.com back in 1996:

http://www.superspeed.com/desktop/faq.php#R001 [superspeed.com]

(Formerly EEC Systems, which did really well software-wise improving their stuff by up to 40% + at MS-TechEd 2000-2002 in SQLServer Performance Enhancement, as well as how to use them 'creatively')

OR

For CENATEK (now known as DataRAM) as to the same benefits I extolled for SuperSpeed's SuperDisk/SuperCache programs, albeit this time in hardware for DataRam/CENATEK?

It all worked... but? LOL, I was "shooting from the hip", to be honest about it!

(I didn't express that well! I.e., I was really taking chances/risks, based on theory and common sense alone, when I bought into that tech, both in software and/or hardware.)

I say that because it was EXPENSIVE! Especially for the CENATEK stuff (my IRAM isn't anywhere NEAR that cost & how I use it is in the list below), but "the #'s were there" & so was the theory... as well as "common-sense".

Still - After tests?

Well - it merely proved true when I did the tests/analysis & benchmarks, + tests w/ webservers &/or DB engines, as well as stuff "normal folks/users" could use them for also.

Using SSD tech IS the 'future' even now (albeit I was saying that 15++ yrs. ago, and doing it too, but the buy in was HUGE, as well as a risk on product immaturity).

For the past few years now though, industry uses it like mad (when performance is "everything" or, rather, counts a LOT)... like in webservers, or database engine work!

However/Again? Normal "end user folks" can use it too, to much gain... & not just on benchmarks.

Yes - You can IMMEDIATELY see/feel the diff. when you do things like this below with SSD's:

1.) Place your pagefile.sys (WinNT based OS) or swap partitions onto it (Linux for example)

2.) Place your %temp% ops onto it

3.) Place your print spooler location onto it

4.) Place your webbrowser caches onto it

5.) Place your %comspec% onto it

6.) Run programs from them

& more...

LASTLY, on NON-FLASH based SSD tech though, such as the Gigabyte IRAM &/or CENATEK RocketDrive I use:

What makes ME wonder is that nowadays we have 64-bit technology in software... which means the 4GB limit of my two NON-FLASH RAM-based SSDs could be raised HUGELY beyond that 32-bit memory-addressing limitation IN THE DRIVERS!

I am truly surprised no "super-huge" non-FLASH RAM-based ramdisks/ramdrives are being made and released... because 64-bit drivers can address many orders of magnitude more memory than 32-bit ones can, which was the "stumbling block" to making them larger (even though you can "span" 4x4GB of them into a 16GB unit, that's PUNY by comparison to the 600GB size of THIS one in the article here today).

They would have one gain over these FLASH units, though, I imagine:

LONGEVITY (yes, I am taking another 'risk about disk' here, but there you are!)...

APK

P.S.=> There are only two things I am waiting on before I nab one of these... well, three actually:

1.) Longevity/Endurance (proof, OVER TIME, that they last as long as mechanical disks).

2.) CO$T$ per GB (not sure, as I'll admit here that I have NOT really looked at the cost-per-GB ratio lately, but last I checked, they're still a WEE bit 'pricey').

3.) This particular unit moving to the 6 Gb/s interface... that's probably WHEN I'll be on this like 'white on rice' (as I was on the CENATEK 4GB RocketDrive (PCI 2.2 bus, PC-133 SDRAM, 133 MB/s rate) circa 2001, and the Gigabyte 4GB i-RAM (SATA 1 bus, DDR-2 RAM, 150 MB/s rate))... apk

Re:FLASH SSD's are FINALLY "there" (almost) (1)

arth1 (260657) | more than 3 years ago | (#35643328)

Place your webbrowser caches onto it

Very bad idea. That's random writes, where SSDs are typically far slower than normal hard disks.
In particular, many users with SSDs complained about freezes of several seconds at a time when using Firefox, and it turned out that all the updates to the history and cache indexes were the root of the problem. Move them to an HDD, and the problem was gone. No, this was not a bug in Firefox, but a side effect of how SSDs operate. IIRC, the Firefox developers still added a workaround, which to some degree alleviated the problem at the expense of a risk that the history and cache won't necessarily be in a consistent and up-to-date state if the browser crashes.
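If anyone wants to see that effect on their own hardware rather than take it on faith, here's a crude sketch (Python, hypothetical test paths) that times small synchronous random writes, roughly the access pattern those cache/history updates produce. It's an illustration, not a proper benchmark, and it will happily show the opposite result on drives whose controllers handle this pattern well.

import os
import random
import time

def random_write_rate(directory, file_size=64 * 2**20, block=4096, writes=500):
    """Return fsync'd 4KB random writes per second on the given directory."""
    fname = os.path.join(directory, "rw_test.bin")
    with open(fname, "wb") as f:
        f.write(b"\0" * file_size)               # preallocate a 64MB test file
    buf = os.urandom(block)
    start = time.perf_counter()
    with open(fname, "r+b") as f:
        for _ in range(writes):
            f.seek(random.randrange(0, file_size - block))
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())                 # force each write to the device
    elapsed = time.perf_counter() - start
    os.remove(fname)
    return writes / elapsed

if __name__ == "__main__":
    # Hypothetical mount points -- run once per drive and compare.
    for path in ("/mnt/ssd", "/mnt/hdd"):
        print(path, round(random_write_rate(path)), "writes/sec")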

I do that on my NON-flash based units (0)

Anonymous Coward | more than 3 years ago | (#35643462)

"Very bad idea. That's random writes, where SSDs are typically far slower than normal hard disks." - by arth1 (260657) on Monday March 28, @03:10PM (#35643328) Homepage

Per my subject-line above:

I do that on NON-FLASH-based units here that aren't subject to that 'hassle' (CENATEK RocketDrive and/or Gigabyte i-RAM), and have for more than a decade now...

On smallish files like browser caches? They FLY for the purposes of webbrowser caches and more, of course... and they don't 'hose up' like you're describing!

(Especially when formatted with a 4KB cluster size to match Windows memory management, which pages in 4KB units.)
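(For anyone who wants to confirm what cluster size a Windows volume was actually formatted with before trying this, here's a small sketch that just shells out to the stock fsutil tool; Windows-only, the drive letter is just an example, and it may need an elevated prompt.)

import subprocess

def ntfs_cluster_size(volume="C:"):
    """Return the NTFS allocation-unit ("cluster") size of a volume, in bytes."""
    out = subprocess.run(["fsutil", "fsinfo", "ntfsinfo", volume],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "Bytes Per Cluster" in line:
            return int(line.split(":", 1)[1].split()[0])
    return None

if __name__ == "__main__":
    print("Cluster size:", ntfs_cluster_size("C:"), "bytes")   # typically 4096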

HOWEVER: moving browser caches off standard HDDs, especially onto the type of SSD I use, also yields another benefit: it stalls/reduces the fragmentation that smallish files like browser caches create on HDDs (or any disk, really)...

Plus, in THAT regard, these SSDs defrag fast too, which offsets what fragmentation does occur on them (especially the 4GB non-flash units I just mentioned/noted I use).

APK

P.S.=> Thanks for the tip on FLASH-based SSDs here, though - like I noted in my initial post you replied to,

I don't use them, for the reasons YOU are noting (poor write performance in the past, even vs. standard HDDs)... and like I said, occasionally I take 'risks' based on theory alone, but not when I get informed that this SSD tech has 'issues' in that area... I'd simply keep using my Gigabyte i-RAM for webbrowser caches then, as I have for 15+ yrs. now... apk

THIS may offset it, & interest U + other SSD u (0)

Anonymous Coward | more than 3 years ago | (#35643562)

DISKEEPER 2011 and its 'HyperFast' feature for SSDs:

http://www.storagereview.com/diskeeper_2011_now_available_includes_ssd_optimizer [storagereview.com]

Pay attention to the 'HyperFast' + 'IntelliWrite' feature set...

(As it may be the 'cure' for that which you speak of (it sure sounds like it, at least), which folks who use flash SSDs for browser caches and hit those hose-ups are 'victim to'...)

Incidentally, as an 'addendum' as well:

PerfectDisk by Raxco has a background-defrag/write-optimization feature also, but I'm not 100% SURE it's geared specifically to SSD optimizations, though...

APK

P.S.=> That's my 'tit-for-tat' info for 'U', in return for your tips on browser-cache hassles with random writes of smallish files on FLASH SSDs... hope it helps, or is at least interesting to you and other FLASH-based SSD users who have experienced the write issue you speak of! apk

Re:FLASH SSD's are FINALLY "there" (almost) (1)

compro01 (777531) | more than 3 years ago | (#35643666)

Place your webbrowser caches onto it

Very bad idea. That's random writes, where SSDs are typically far slower than normal hard disks.

That's a problem with some controllers (Marvell's Da Vinci controller and whatever the hell WD uses in theirs, for example) that don't handle garbage collection well: after being in use for some time, the write speeds and IOPS go all over the place. It's not a problem with better controllers, like SandForce's stuff.

The SSD market still has a bunch of garbage in the mix, and you need to research stuff to avoid getting burned.
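One partial mitigation on the operating-system side, at least on Linux with a TRIM-capable drive, filesystem, and kernel, is to issue TRIM manually so the controller's garbage collection has less guessing to do. Here's a minimal sketch that just calls the stock util-linux fstrim tool; the mount point is an example, it needs root, and it obviously won't rescue a controller that's fundamentally bad at GC.

import subprocess

def trim(mount_point="/"):
    """Manually TRIM a mounted filesystem so the SSD knows which blocks are free.
    Requires root and a drive/filesystem/kernel combination that supports TRIM."""
    result = subprocess.run(["fstrim", "-v", mount_point],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print(trim("/"))    # e.g. "/: 12.5 GiB (13421772800 bytes) trimmed"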

Perhaps Diskeeper 2011 could help others there? (0)

Anonymous Coward | more than 3 years ago | (#35643740)

I noted its 'HyperFast' + 'IntelliWrite' features in my subsequent/second reply to him here:

http://hardware.slashdot.org/comments.pl?sid=2057790&cid=35643562 [slashdot.org]

"That's a problem with some controllers (Marvell's Da Vinci controller and whatever the hell WD uses in their ones, for example) that don't handle garbage collection well. after being in use for some time, the write speeds and IOPS goes all over the place. It's not a problem with better controllers, like Sandforce's stuff." - by compro01 (777531) on Monday March 28, @03:41PM (#35643666)

It may be of interest to you, or even of practical use on your end (IF you have this issue on the FLASH SSDs you use; I don't on the ones I use, which are based on PC-133 SDRAM or DDR-2 RAM), or to others here as well!

APK

P.S.=>

"the SSD market still has a bunch of garbage in the mix and you need to research stuff to avoid getting burned." - by compro01 (777531) on Monday March 28, @03:41PM (#35643666)

Right on, and that's WHY I mentioned I 'hold off', and have HELD off, on FLASH SSD tech (vs. the FAST and PROVEN tech in the non-flash ramdisks/ramdrives I use)...

Sure - it's 'getting there', as I stated in my initial post, but not quite yet for ALL makes and/or models, apparently.

Man - I used to be a fairly 'avid' hardware enthusiast and was 'up on the latest/greatest' in hardware in years past... but not so much anymore... and tips like yours and the fellow you replied to definitely help, since you two pointed out a couple of downsides I wasn't aware of here: this random-write issue with browser caches/smallish files due to poor garbage-collection/TRIM handling, etc. Thanks, by the way, for YOUR reply also; it helped me learn a bit more about how this stuff has progressed (or not, lol)... apk

SMALL "ADDENDUM" I missed adding I do (LOGS) (0)

Anonymous Coward | more than 3 years ago | (#35644012)

LOGGING!

I.e./e.g.: I also move Windows NT-based OS logs onto my non-flash SSDs, as well as many apps' logs...

Why? To gain speed in THEIR ops, just as with webbrowser caches, paging duties, %temp% ops, print spooling, and more... as well as to offload the slower HDDs I use, so they are 'faster' by being less burdened with those tasks, and fragmenting less, because those tasks are no longer done on my main C: drive where the OS + programs live.

(It effectively speeds up the HDDs because the mechanical drives, fast as they are here, no longer get disk-head-movement contention (elevator algorithms notwithstanding, even).)

Yes, it works... and even though they're 10K RPM VelociRaptors by WD with 16MB cache, driven by a SATA II, RAID 6-capable Promise controller (model EX-8350) with 128MB of ECC caching RAM, they're just doing less and thus 'doing more', faster!
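Where an app lets you pick the log path at all, the redirect itself is trivial. As a hedged sketch of the idea (hypothetical drive letter, and standard-library Python rather than any particular server product), something like this is all it takes; most server software has an equivalent 'log directory' setting in its own config.

import logging
import os

FAST_LOG_DIR = r"R:\logs"          # hypothetical RAM-disk / SSD volume
os.makedirs(FAST_LOG_DIR, exist_ok=True)

# Point a log handler at the fast volume instead of the system drive.
handler = logging.FileHandler(os.path.join(FAST_LOG_DIR, "app.log"))
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

log = logging.getLogger("myapp")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("log writes now land on the fast volume, not the system drive")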

APK

P.S.=> Had to add that last "tidbit", as I like being as "complete" as possible... "mea culpa" (not really, lol, but there you are!)... apk

SSD vs HD (1)

dargaud (518470) | more than 3 years ago | (#35643184)

So, anyone care to make a forecast as to when SSDs will overtake HDDs, even for large TB-class units, in terms of price + performance?

Re:SSD vs HD (2)

baka_toroi (1194359) | more than 3 years ago | (#35643708)

Yes, sir.

2014 will be that year.

Re:SSD vs HD (1)

MikeURL (890801) | more than 3 years ago | (#35644118)

Oh, maybe even sooner than that. A company like Intel can scale this thing right up to mass-market efficiency/pricing any time it wants. It has that kind of capital (much like Apple scaled the iPad right up to mass-market pricing on day one).

How do you make a PC sexy again? How do you help it compete with tablets? The most important thing, IMO, is to get that spinning disk out of there and make SSDs so fast that you finally get near instant on/off.

If no PC vendor is up to it then Intel should go it alone and do the hardware themselves. The PC has been a dead lumbering industry for so long that I think people have forgotten there are still a shitload of advances to be made.

Re:SSD vs HD (1)

baka_toroi (1194359) | more than 3 years ago | (#35644244)

Well, I believe spinning disks will stop shipping in volume when local storage becomes irrelevant, i.e. when consumers start to store everything in The Cloud. Like it or not, we are halfway there.

That's why I think Intel doesn't want or need to rush. They're gonna milk the market as much as they can (just like any other corporation).

Re:SSD vs HD (2, Interesting)

Anonymous Coward | more than 3 years ago | (#35643856)

It took them 3 years or so to come down maybe 30% in price. It'll probably take 2 more years to drop another 30%, and after that 1 more year to drop another 30%. At that point they'll most likely settle in, dropping roughly 30% every year, year after year.

I speculate 5-10 years to beat the price/performance of conventional hard drives. That's the point at which your average consumer no longer finds any value at all in owning a conventional hard drive. Already, many enthusiasts are willing to make their main drive an SSD even at current prices; there's demand here, and it's going to drive research up and prices down as people thirst for more storage space at a lower price point with higher speed. Many of those same enthusiasts still see value in having 2nd and 3rd conventional hard drives for cheaper, larger secondary storage. At some point SSD prices are going to meet or approach those slow, cheap drives' price points, and consumers are simply going to go SSD all around.

traditional laptop drives (1)

p51d007 (656414) | more than 3 years ago | (#35644536)

On the bang-for-your-buck argument, I'll stick with a platter hard drive. I've had laptops for what seems like forever. I carry one in a shoulder tool bag that gets bounced around on a two-wheel cart and plopped down on concrete 6-10 times a day, and in 10 years I think I've lost one drive. Until the price per megabyte comes down close to that of a traditional platter drive, I'll stick with them. The technology is bang-on reliable, speed is pretty good, and the "green" drives use less current. SSDs are still too pricey, and the performance "boost" isn't that impressive.