Intel Unveils SSDs With 6Gbit/Sec Throughput

Roblimo posted more than 3 years ago | from the got-to-get-one-got-to-get-one dept.

Data Storage

CWmike writes "Intel announced a new line of solid-state drives (SSDs) on Monday that are based on the serial ATA (SATA) 3.0 specification, which doubles I/O throughput compared to previous generation SSDs. Using the SATA 3.0 specs, Intel's new 510 Series gets 6Gbit/sec. performance and thus can take full advantage of the company's transition to higher speed 'Thunderbolt' SATA bus interfaces on the recently introduced second generation Intel Core processor platforms. Supporting data transfers of up to 500MB/sec, the Intel SSD 510 doubles the sequential read speeds and more than triples the sequential write speeds of Intel's SATA 2.0 SSDs. The drives offer sequential write speeds of up to 315MB/sec."

that's smokin' (0)

Anonymous Coward | more than 3 years ago | (#35344710)

I hope this blazing speed is used for good and not for evil purposes.

Re:that's smokin' (1)

ehrichweiss (706417) | more than 3 years ago | (#35344732)

If it's twice as fast as all the other SATA devices I have that are rated 3Gb/s, then that'll only average about 40MB/s with peaks around 80MB/s.

Re:that's smokin' (4, Informative)

Wovel (964431) | more than 3 years ago | (#35344752)

Your devices are not rated at 3Gb/s; the SATA connection is. This device is "supporting data transfers of up to 500MB/sec".

Maybe just read the summary :)

Re:that's smokin' (1)

bluemonq (812827) | more than 3 years ago | (#35344798)

When transferring files between SATA hard drives on my desktop I usually get around 90 MB/s. I suspect it's your devices that are the problem.

Re:that's smokin' (0)

Anonymous Coward | more than 3 years ago | (#35344962)

The OP's numbers seem too low.

I get 100 MB/sec between two SATA 3 devices on separate connectors: an Intel X58 mobo, a Core i7, and plain-jane 1TB Seagate drives.

Re:that's smokin' (0)

Anonymous Coward | more than 3 years ago | (#35345064)

He probably uses Ubuntu.

I don't know what the hell they're doing with drive access, or exactly which hardware suffers from the problem, but I get numbers like that too, and I've heard the same story from lots of other ubuntu users, across Jackalope, Koala, and Lynx.

I'm too lazy to install another OS. I've been using different flavors of linux for a decade now, and the slow transfers and everything freezing up for 10-60 seconds every few hours is just not enough of an issue to go through all the effort of setting things up again, just in hope of not having a worse problem in Suse, Mint, Debian, or whatever.

Re:that's smokin' (0)

Anonymous Coward | more than 3 years ago | (#35345500)

Which Ubuntu are you using? I think maybe you have a hardware problem or somehow DMA for disk transfers isn't on.

#this is on a 1GB RAM machine so disk cache is <1.3GB.
#disks are 4 x samsung 2TB 5400RPM drives.
cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=10.04
DISTRIB_CODENAME=lucid
DISTRIB_DESCRIPTION="Ubuntu 10.04.2 LTS"

cd $raid10mountpoint

time dd if=/dev/zero of=test.bin bs=131072 count=10000
10000+0 records in
10000+0 records out
1310720000 bytes (1.3 GB) copied, 6.3466 s, 207 MB/s

real 0m6.377s
user 0m0.008s
sys 0m2.220s

time dd if=test.bin of=/dev/null bs=131072
10000+0 records in
10000+0 records out
1310720000 bytes (1.3 GB) copied, 4.71161 s, 278 MB/s

real 0m4.749s
user 0m0.012s
sys 0m1.504s

cd $mirrormountpoint
time dd if=/dev/zero of=test.bin bs=131072 count=10000
10000+0 records in
10000+0 records out
1310720000 bytes (1.3 GB) copied, 11.8113 s, 111 MB/s

real 0m11.888s
user 0m0.000s
sys 0m2.360s

time dd if=test.bin of=/dev/null bs=131072
10000+0 records in
10000+0 records out
1310720000 bytes (1.3 GB) copied, 10.8265 s, 121 MB/s

real 0m10.875s
user 0m0.012s
sys 0m1.316s

Re:that's smokin' (1)

Stalks (802193) | more than 3 years ago | (#35345906)

By adding sync you force any writes still sitting in the cache to be flushed to their destination before the timer stops, making for a more accurate test:

time (dd if=test.bin of=/dev/null bs=131072; sync)
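
A quick sketch of how to keep the page cache from inflating dd numbers (the test file name and block size here just mirror the transcript above): conv=fdatasync makes dd wait for the data to reach the disk before reporting, and dropping the cache before the read pass forces a real read from the drive.

# write test: conv=fdatasync flushes before dd reports, so the figure
# includes the time to actually get the data onto the disk
time dd if=/dev/zero of=test.bin bs=131072 count=10000 conv=fdatasync

# read test: drop the page cache first so the data comes off the disk,
# not out of RAM (needs root)
echo 3 > /proc/sys/vm/drop_caches
time dd if=test.bin of=/dev/null bs=131072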

no prob under ubuntu (0)

Anonymous Coward | more than 3 years ago | (#35345568)

On my uncle's computer (I set it up with Ubuntu since he doesn't play computer games and mainly just torrents movies), he'd filled another TB, so another drive was installed and I benchmarked it. A recent Seagate SATA 1TB, and it was reading at over 90MB/s on his P4 with an ancient SiS mobo from about 2002; I think it was one of the first SATA mobos on the market. So no problem with Ubuntu, and no problem playing H.264 on this ancient system with an AGP ATI 9550.

I think the parent is full of it. I'm pretty happy using Ubuntu on several computers, including this laptop, which is more recent. While XP can't even be made to boot with the eSATA ports enabled, I've never seen any problems with functionality or speed under Ubuntu: hot-swapping eSATA drives, and dumping TBs in through one eSATA port and out another. Try that with USB docks and you'll be waiting for a week. Unfortunately XP blue-screens with eSATA enabled in the BIOS, so if I want to use Windows, e.g. to run the tax pack program to lodge once a year, I have to go into the BIOS first. I returned the laptop for a hardware warranty issue and the tech agreed that XP just couldn't be made to work with eSATA on this laptop, a Dell Latitude E6400.

Re:no prob under ubuntu (1)

ls671 (1122017) | more than 3 years ago | (#35345926)

I have never heard of "SATA mobos". I have heard of motherboards that have a SATA controller hardwired into them. Using the right driver for that specific controller might help in gaining speed.

http://slashdot.org/comments.pl?sid=2016546&cid=35345062 [slashdot.org]

Re:that's smokin' (1)

ls671 (1122017) | more than 3 years ago | (#35345062)

Guys, don't forget to take cached disk data/buffers into account; use free to see how much you've got. Just run hdparm -tT to see the difference between cached and non-cached reads. If this sounds too technical, just do a test with a 20 GB file; that should be enough to make sure cached disk data doesn't give you an illusion of speed. Also read man hdparm under the -t section.

90 MB/s seems on the upper end to me while 40 MB/s is on the lower end but I have seen it with generic drivers.

Also, try to upgrade your SATA drivers to one that is designed specifically for the SATA controller you are using instead of using a generic one. Best bang for the buck if you are looking for speed.
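
For reference, the two commands mentioned above look like this (the device name is just an example):

# -T times reads straight out of the Linux buffer cache (no disk I/O),
# -t times sequential reads from the device with caching defeated
hdparm -tT /dev/sda

# shows how much RAM is currently being used as disk cache ("cached")
free -m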

Re:that's smokin' (1)

Rockoon (1252108) | more than 3 years ago | (#35345548)

If you are averaging a sustained 20MB/sec, then your drive, controller, partition alignment, or drivers suck... and by suck I mean really, really, really suck.

It is hard to imagine the great lengths of time you would have to invest in finding a collection of SATA 2.0 hardware that bad, so it's almost certainly your partition alignment or drivers.

You do know that modern drives require 4096-byte partition alignment, while most older OSes presume that 512 bytes is good enough?
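
A rough way to check the alignment being described, as a sketch (device and partition numbers are examples): on a drive with 512-byte logical sectors, a partition whose start sector is a multiple of 8 begins on a 4096-byte boundary.

# list partitions with start positions in sectors; start % 8 == 0
# means the partition is 4096-byte aligned
fdisk -lu /dev/sda

# or let parted do the check for partition 1
parted /dev/sda align-check optimal 1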

Re:that's smokin' (1)

omfgnosis (963606) | more than 3 years ago | (#35345868)

But would you prefer it be used for good or awesome?

How much? (0)

Anonymous Coward | more than 3 years ago | (#35344718)

I'll just re-mortgage my house in order to buy one.

$584 for 256GB (2)

Joe The Dragon (967727) | more than 3 years ago | (#35345018)

$584 for 256GB

Re:$584 for 256GB (1)

Sabriel (134364) | more than 3 years ago | (#35345470)

$584 for 256GB

That's $584 for 250GB, in lots of 1000.

Re:$584 for 256GB (1)

TheLink (130905) | more than 3 years ago | (#35345522)

AFAIK this new Intel SSD is using the same Marvell controller as Crucial's RealSSD C300, which goes for about USD490 for the 256GB drives in lots of 1 unit. Just look at Amazon, or even Crucial's own site.

So I'm not sure what makes the Intel drive better than the Crucial one which has been around for many months (and gone through some pains and fixes... ).

For a supposedly nerdy site, Slashdot linking to a low-info press release or marketing article on Computerworld is stupid.

Re:$584 for 256GB (1)

Lennie (16154) | more than 3 years ago | (#35345930)

Software/tuned hardware is what makes the difference.

The Intel X25-M, etc., was better than its competitors at the time because of the software: its performance did not degrade over time.

Re:$584 for 256GB (1)

mabhatter654 (561290) | more than 3 years ago | (#35345754)

that's not awful.

The point of these is to feed devices like Drobo anyway. So three would get you 460GB with one failure. That would also nicely fill up a Thunderbolt connection to one or two computers with speeds and latency way faster than the built in drive.

Finally, decent write speed from Intel ... (1)

haruchai (17472) | more than 3 years ago | (#35344746)

on a (rich) consumer SSD. But, while I'm loving all the Marvell / Sandforce / Intel hypersonic speed-worthiness, how about a decently fast, really affordable solid state drive? How much longer will these be 20x the per GB cost of a HDD?

Re:Finally, decent write speed from Intel ... (1)

fucket (1256188) | more than 3 years ago | (#35344792)

The 80gb Intel X25-M is/was a decently fast, really affordable solid state drive, even at the $240 I paid for mine. Easily dollar-for-dollar the most effective computer upgrade I've ever made.

Re:Finally, decent write speed from Intel ... (2)

Rockoon (1252108) | more than 3 years ago | (#35344800)

We saw prices of the previous generation drop from $4/GB to under $2/GB while waiting for SATA 3.0 adoption. Now those same SATA 2.0 SSDs will drop much further... by year's end it will be a great time to pick up older SATA 2.0 SSDs.

Re:Finally, decent write speed from Intel ... (2)

haruchai (17472) | more than 3 years ago | (#35344846)

Having been a somewhat early adopter of SSDs, I got bitten a couple of times by the JMicron bug - a pox on them. I've decided I won't buy another SSD until I can get a 128 GB drive for $100.

Re:Finally, decent write speed from Intel ... (2)

MikeURL (890801) | more than 3 years ago | (#35345256)

Everything I've read so far suggests that if you are buying SSDs you want to go with Intel. I have the small 40GB drive and it has been flawless. It is also actually pretty roomy as long as I don't start to download entire movies or my whole song collection (but really that is what external drives are for).

I have not been stingy on what I installed and I still have 14GB free.

It uses a Marvell controller, not Intel controller (1)

MojoStan (776183) | more than 3 years ago | (#35345606)

Having been a somewhat early adopter of SSDs, I got bitten a couple of time by the JMicron bug

Everything I've read so far suggests that if you are buying SSDs you want to go with Intel.

Note that this new Intel SSD is the first Intel-branded SSD that uses a non-Intel controller. It uses the same Marvell controller [techreport.com] used in the well-regarded Crucial RealSSD C300.

I've also read about Intel's great combo of performance and robustness, but that reputation is mostly a result of Intel's controllers. JMicron, a manufacturer of SSD controllers, got its buggy reputation from early JMicron-based SSDs. Marvell's controller performance has been proven in many reviews, but Intel's "endorsement" gives me more confidence in Marvell's reliability and robustness.

Re:It uses a Marvell controller, not Intel control (1)

Lennie (16154) | more than 3 years ago | (#35345940)

"mostly a result of Intel's controllers"

Actually, the controller is just the hardware. It is the software running on it which makes it smart. Intel owns the software I would think.

Re:Finally, decent write speed from Intel ... (-1)

Rockoon (1252108) | more than 3 years ago | (#35345558)

I got bitten a couple of time by the JMicron bug

The bug was well known pretty much right from the start, yet you still got bit twice by it. My suggestion to you is to find someone knowledgeable to make hardware purchases for you.

Re:Finally, decent write speed from Intel ... (0)

Anonymous Coward | more than 3 years ago | (#35345028)

How much longer will these be 20x the per GB cost of a HDD?

Probably for the next 10 to 20 years. Flash memory isn't the only thing that gets cheaper over time. Spinning "iron rust" has a pretty good record at that, too. (In 1977, you might have paid $5 each for floppies that stored less than 100 KB. Now $5 would get you 1/16th of a 500 GB portable notebook drive => 300,000 times as much storage for your money.)

The question is not so much when these become cheaper than hard drives, as when these become cheap enough to use. (And if you were satisfied with the capacities and prices that were common a few years ago, that time might already be here.)

Re:Finally, decent write speed from Intel ... (1)

TheLink (130905) | more than 3 years ago | (#35345560)

But this new Intel SSD uses a Marvell controller - the same one used by Crucial's C300. Why should people pay a higher price for an SSD with the same controller? The uninformative article says nothing about that - no proper benchmarks or tests (random reads/writes, sequential reads/writes, max latency of a random read/write transaction that interrupts those random/sequential reads/writes, reads/writes of highly/moderately/barely compressible data). Such an article should be pretty useless to a real Slashdot nerd.

As for Sandforce, apparently they have problems when waking up from hibernation (google for that). This is not a problem for many desktop users but a showstopper for many laptop users (I won't be buying a sandforce drive for this reason alone).

Re:Finally, decent write speed from Intel ... (5, Interesting)

Neil Boekend (1854906) | more than 3 years ago | (#35345672)

IMHO SSDs should be used as "something in between your HDD and your memory". As long as it's big enough to contain the OS of your choice and all the programs, it's big enough. Your MP3s and movies do not require the high throughput. In a desktop this means there should be 2 disks: an SSD for speed and an HDD for capacity. In a laptop there should be an SSD on the mini-PCIe slot and an HDD on SATA, or, if you must (for space reasons), an SSD on the mobo and an HDD on SATA. This would optimize both capacity and speed, while keeping costs (relatively) down.
The cost of an SSD is paid back by the speed, not the capacity. What I find strange is that shops list SSDs by EUR/GB instead of EUR * s/MB. The speed is the defining attribute, not the capacity.
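
One minimal sketch of the split described above, assuming a desktop with the SSD as /dev/sda and the HDD as /dev/sdb (both hypothetical): keep the root filesystem on the SSD and hang the bulk-storage directories off the HDD in /etc/fstab.

# /etc/fstab sketch: OS and applications on the SSD, bulk data on the HDD
/dev/sda1   /       ext4   defaults,noatime   0  1
/dev/sdb1   /home   ext4   defaults           0  2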

Re:Finally, decent write speed from Intel ... (1)

thue (121682) | more than 3 years ago | (#35345876)

> Your MP3s and movies do not require the high throughput.

And more importantly, your MP3s and movies do not require the random reads and writes that are an SSD's greatest strength.

Re:Finally, decent write speed from Intel ... (2)

Lennie (16154) | more than 3 years ago | (#35345990)

That is what ZFS with L2ARC ('level 2' cache) does: it uses the SSD as a cache for the slower but bigger disks.

On Linux, a fairly new development called bcache does something similar.
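
For what it's worth, attaching an SSD to an existing ZFS pool as an L2ARC read cache is a one-liner (the pool and device names below are made up):

# use the SSD as a level-2 read cache for the pool "tank"
zpool add tank cache /dev/sdb

# the SSD (or another one) can also hold the intent log for sync writes
zpool add tank log /dev/sdc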

Re:Finally, decent write speed from Intel ... (1)

mabhatter654 (561290) | more than 3 years ago | (#35345800)

HDD tech is established... it's like complaining that 1.5 TB disks cost $100 when a backup tape the same size costs $50. You're paying for speed and essentially today's device helps pay the companies to make tomorrow's devices as well.

The thing I really want to see is better syncing of devices. Really I don't NEED more than 64GB or 128GB on a laptop, certainly not for my day job... and I'm one of the biggest users of HDD storage at my company. If only there were a good, solid way to sync to a 1TB drive easily and painlessly. I'm not opposed to syncing my laptop like I do my iPhone... but the media selection needs to be seamless. Done correctly, I could even sync music from my drive at home over the internet... movies would be a little harder but not impossible.

"Thunderbolt SATA bus interfaces"? (4, Informative)

Chuck Chunder (21021) | more than 3 years ago | (#35344758)

Somebody is confused. Thunderbolt is DisplayPort and PCIe bundled together.

The SATA 3 ports on Cougar Point platform have nothing to do with Thunderbolt.

Re:"Thunderbolt SATA bus interfaces"? (0)

Anonymous Coward | more than 3 years ago | (#35344922)

Somebody is confused. Thunderbolt is DisplayPort and PCIe bundled together.

The SATA 3 ports on Cougar Point platform have nothing to do with Thunderbolt.

I'd vote this up if I had any karma. Completely correct AFAIK, and I'm in a position to know.

Re:"Thunderbolt SATA bus interfaces"? (1)

2themax (681779) | more than 3 years ago | (#35344992)

Yes, somebody is probably confused, but I think we will eventually see hard drive enclosures with a Thunderbolt port on the outside and a PCI-e to SATA 3.0 bridge chip inside.

Re:"Thunderbolt SATA bus interfaces"? (1)

mabhatter654 (561290) | more than 3 years ago | (#35345814)

that's the only way to feed the darn Thunderbolt bandwidth! Even to max out Thunderbolt you'd need at least 3 of these in RAID of some fashion, meaning something to control them, like a Drobo.

Re:"Thunderbolt SATA bus interfaces"? (1)

keith_nt4 (612247) | more than 3 years ago | (#35344984)

The summary is probably just oddly worded. I think it meant this new SSD will be able to keep up with SATA 6Gbit/s speeds alongside a "Thunderbolt" interface. I just saw a podcast about Thunderbolt recently (Tekzilla, I think?). Looks like the interface will use the DisplayPort connector but be able to do video/HDD/network/audio/other peripherals (and power also) whilst daisy-chaining a number of devices. Not fiber optic like originally talked about (maybe later), but still fast.

Re:"Thunderbolt SATA bus interfaces"? (1)

benow (671946) | more than 3 years ago | (#35345480)

Optical is still in the spec, but probably only to be used for long spans.

Re:"Thunderbolt SATA bus interfaces"? (0)

Anonymous Coward | more than 3 years ago | (#35345172)

Thunderbolt is the commercial name of Light Peak and the MacBook Pro implementation uses a connector identical to the Mini DisplayPort kind. Thunderbolt is protocol agnostic, so it is more than DisplayPort and PCI-E bundled together. Thunderbolt is also the name given by Intel, not Apple.

Re:"Thunderbolt SATA bus interfaces"? (2)

Chuck Chunder (21021) | more than 3 years ago | (#35345276)

Thunderbolt is protocol agnostic, so it is more than DisplayPort and PCI-E bundled together.

Thunderbolt only supports two protocols, DisplayPort and PCI-E. Other controllers can hang off the end of the PCI-E channel and drive other protocols from there but Thunderbolt itself is certainly only DisplayPort and PCI-E [intel.com] .

Re:"Thunderbolt SATA bus interfaces"? (1)

mabhatter654 (561290) | more than 3 years ago | (#35345838)

that makes it simpler to implement as a device. Basically the consumer equivalent of the 10Gb Ethernet / 8Gb Fibre Channel adapters on servers that can speak Ethernet or Fibre Channel... (and virtualize anything else)

Consumers care mostly about video and audio devices, maybe a smattering of other things. This brings back the PCI enclosure again and should open up Macs to things like robotics, test equipment, etc. that require dedicated hardware cards most Macs can't take.

Re:"Thunderbolt SATA bus interfaces"? (1)

Redlazer (786403) | more than 3 years ago | (#35345182)

I believe it's intended to be used for external hard drives as well. Lacie and WD have signed on to develop compatible hardware - I imagined at the time it would be in that vein.

Is there any reason you can't transfer hard drive data over PCIe?

Re:"Thunderbolt SATA bus interfaces"? (0)

Anonymous Coward | more than 3 years ago | (#35345650)

I suppose that you could design the controller on the back of a HDD or SSD to "directly" support the PCI-E part of Thunderbolt. On the other hand, given that everyone has standardized on SATA for "low-end" drives, Thunderbolt-to-eSATA adapters (or cases that take Thunderbolt and have SATA adapters inside) seem like they'd be the path of least resistance.

Re:"Thunderbolt SATA bus interfaces"? (1)

Neil Boekend (1854906) | more than 3 years ago | (#35345696)

No, there is no reason, as can be seen from the first card to appear in Google [newegg.com]. The PCIe card simply needs a SATA controller.

Re:"Thunderbolt SATA bus interfaces"? (1)

syousef (465911) | more than 3 years ago | (#35345494)

Somebody is confused.

With Intel's naming conventions (or lack thereof), I'm surprised anyone ISN'T confused.

The perfect complement to Sandybridge (0)

Anonymous Coward | more than 3 years ago | (#35344774)

Oh wait....

Re:The perfect complement to Sandybridge (1)

Ster (556540) | more than 3 years ago | (#35345518)

The perfect complement to Sandybridge

Actually, yeah. The problem with Cougar Point (the Platform Controller Hub that goes along with Sandy Bridge, not the Sandy Bridge chip itself, BTW) is with the SATA 3.0Gbps ports. So to get the maximum performance of these new SSDs, you wouldn't use those ports anyway.

Wear usage? (1)

wisebabo (638845) | more than 3 years ago | (#35344776)

I know this problem has (probably) been satisfactorily addressed, but if one were to use such a super-fast drive for an application that had extremely heavy usage (swap space for the OS, or a program like Photoshop), wouldn't it cause those sectors to be read/written many, many times very quickly? Doesn't each "cell" have a limited number of times it can be accessed before it fails (on the order of 100,000, I think)? And wouldn't that cause the drive to fail (sooner rather than later, because it is so fast)?

Again, I'm sure the SSD drive manufacturers have looked at this problem very closely, I'm just concerned that's all. After all, even if your computer made only one error every billion instructions that would mean it would break down in less than a second!

Otherwise, I'm getting one of the 512GB drives for a smokin' fast laptop drive so I boot up lightning fast! (Time can never be recovered, especially for an old fart like me.)

Re:Wear usage? (3, Informative)

pz (113803) | more than 3 years ago | (#35344818)

Again, I'm sure the SSD drive manufacturers have looked at this problem very closely, I'm just concerned that's all.

So, look up the specs, then. Current write cycles are over 1,000,000 per cell. Modern wear-leveling algorithms combined with extra blocks and ECC mean that it's more likely that some other component will fail before your SSD will.

Besides, if you were really concerned, and not just trolling, wouldn't you have the same issues with your hard drive, too? Doubly so in a laptop?

Re:Wear usage? (1)

Microlith (54737) | more than 3 years ago | (#35344892)

Current write cycles are over 1,000,000 per cell.

Far from it. Depending on the litho, at 25nm for instance you're down to 3,000 program/erase cycles. And even at 34nm, you're still little better than 10k cycles. The overprovisioning and ECC required at these scales is massive.

But yes, studies have been done and it takes an industrial strength workload to kill an SSD. If one of these is in your home machine, you likely won't kill it. If you think you might, then you should already have practices in place to deal with disk failure.

Re:Wear usage? (1)

aztracker1 (702135) | more than 3 years ago | (#35345210)

I'm paranoid... generally redundant backups over RAID redundancy, as I've had a RAID1 with both drives dead relatively close together, one time before I had a replacement; my NAS box has a spare next to it. Though my Intel SSD died at 11 months in my desktop, I still won't go back for my boot+OS ...

Re:Wear usage? (4, Informative)

TheEyes (1686556) | more than 3 years ago | (#35345444)

But yes, studies have been done and it takes an industrial strength workload to kill an SSD. If one of these is in your home machine, you likely won't kill it. If you think you might, then you should already have practices in place to deal with disk failure.

Just as important to note: the failure mode for flash memory is to become read-only; in other words, it simply becomes impossible to delete what is written on your drive, which is a perfect reminder to get a new one. Given that this sad event will be nearly ten years from now, it should be dirt cheap to buy a replacement drive.

When you do, though, don't forget to remove the metal shell on the old drive and cook it in the microwave for a minute or two to destroy your old data. It's not like you're going to be able to sell the drive used anyway.

Re:Wear usage? (2)

Tapewolf (1639955) | more than 3 years ago | (#35345706)

Just as important to note: the failure mode for flash memory is to become read-only;

Are you sure? I was under the impression that it worked like EPROM, where the bits were set high by the erase cycle and data was written by grounding the bits which needed to be zero.
That being the case, it's more likely that the data would be corrupted (since it would fail to set bits anymore) which is actually the sort of thing one of my old USB keys started to do.

'Course, there might well be logic in the controller to detect this and put the drive into read-only mode when it runs out of non-defective blocks, but I don't think that would be an inherent property of flash drives and certainly not one you'd want to rely on.

Re:Wear usage? (0)

Anonymous Coward | more than 3 years ago | (#35345770)

I've never run into any source that says anything different from this. The big problems that arise are: first, this will show up during an erase-write cycle, so you only discover it when things explode. Second, and very dangerous, the claims of 100,000 write cycles already include wear-leveling, so after 100,000 writes, or once the first failure shows up, the entire drive is 100% toast (unlike disks, where you can typically still read and write 90% of the data).

Re:Wear usage? (0)

Anonymous Coward | more than 3 years ago | (#35345790)

You'd set the bits high for a secure erase, but normal deletions would just remove the entry from the filesystem's index (as is done on rotary drives). The data itself is still there until overwritten.

Re:Wear usage? (1)

Soft Cosmic Rusk (1211950) | more than 3 years ago | (#35345812)

Uh... Sorry, but the data is in the microchips, not in the metal shell. Cooking it is more likely to destroy the microwave than the data!

ugly opportunity for malware (1)

SethJohnson (112166) | more than 3 years ago | (#35345540)

If one of these is in your home machine, you likely won't kill it. If you think you might, then you should already have practices in place to deal with disk failure.

How about a vicious piece of malware? Could a piece of code be written to circumvent the wear-leveling algorithm and carpet-bomb your SSD with repetitive writes so that it's worthless overnight? Could be a real PITA in cases like the MacBook Air, where the SSD is built onto the mobo: it's not a case of just paying for a new SSD as a replacement. On the plus side, this type of hardware failure just limits additional writes to the device. You should still be able to retrieve your data.

Seth

Re:ugly opportunity for malware (1)

Neil Boekend (1854906) | more than 3 years ago | (#35345712)

The same piece of malware would destroy the contents of an HDD, since it overwrites all sectors. How often do you see that?

Re:Wear usage? (0)

Anonymous Coward | more than 3 years ago | (#35345058)

Besides, if you were really concerned, and not just trolling, wouldn't you have the same issues with your hard drive, too? Doubly so in a laptop?

What are you talking about?

Re:Wear usage? (1)

Anonymous Coward | more than 3 years ago | (#35345082)

There are a lot of interesting papers on the reliability decline of flash as the technology shrinks and becomes more affordable. Every year at ISSCC, companies like Samsung and Hynix will come and present a tiny portion of how their flash controller works, and as part of the presentation they talk about raw error rates of MLC NAND flash. They are easily 1e-6 per cell for 45nm, but at 32nm and 27nm it's a different story (~600x worse BER, with ~100-1000 write cycles per cell). Consumers don't usually see this side of flash because manufacturers will build in enough ECC and wear leveling to make the device usable, but this comes with a price in terms of power and capacity (that write cycle count is AFTER the wear-leveling).

Sadly most of the interesting papers require an IEEE subscription, but there's one from 2009 that gives a nice overview of the ECC required to get the overall error rate to something reasonable:

http://cyclicdesign.com/whitepapers/Cyclic_Design_NAND_ECC.pdf

Re:Wear usage? (1)

Kjella (173770) | more than 3 years ago | (#35345120)

So, look up the specs, then. Current write cycles are over 1,000,000 per cell. Modern wear-leveling algorithms combined with extra blocks and ECC mean that it's more likely that some other component will fail before your SSD will.

Are you looking at MTBF numbers or something? Expensive 34nm flash has 10,000/cell, cheap 34nm 5000/cell and 25nm is down to 3000/cell.

And yes, you can kill an SSD that way, already done it. Granted, I tortured it in pretty much every way possible by running torrents and Freenet 24/7 and keeping it 90%+ full all the time. It died after about 1.5 years with 7000 writes/cell average, 15000 highest.

Fortunately for me I still managed to sneak it in as a warranty repair, even though they aren't supposed to do that on drives that are worn out. Now I've got a new SSD, and I'm doing all my heavy lifting on rotating media, keeping the SSD as a fast OS/application disk. I'm guessing it'll last 10+ years that way. But if you want to kill it quickly, you can.

This is very much unlike rotating media. Google released a huge bunch of data on HDD reliability, what they found was that actual use didn't really matter. Hard disks lasted more or less X years spinning away at 5400-7200 rpm whether you read/wrote to them or not, as long as they weren't completely idle and could sleep. SSDs are exactly the opposite, each read/write takes a little bit of life out of it.
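
If you want to see how close a drive is to that kind of wear, the SMART counters expose it; the attribute names vary by vendor (the one below is what Intel drives report via smartmontools, so treat this as a sketch):

# dump all SMART attributes for the SSD
smartctl -a /dev/sda

# Intel drives expose remaining life as "Media_Wearout_Indicator"
# (starts at 100 and counts down toward 0)
smartctl -a /dev/sda | grep -i wearout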

Re:Wear usage? (1)

TrancePhreak (576593) | more than 3 years ago | (#35345068)

This is why I believe in hybrid setups. The other replies to this are talking about mere millions of writes. Do you know how many writes the swap file gets every hour you use your computer? I too would be worried about a pure SSD setup. If you check out the how-to guides on installing SSDs in your machine, they almost all mention setting swap on a non-SSD and moving your home directory off it too. (Your home directory contains such things as temporary web files!)

Re:Wear usage? (2)

m.dillon (147925) | more than 3 years ago | (#35345342)

Insofar as actual swap (as in paging physical ram in and out) goes, it depends heavily on the amount of ram the machine has and the type of work load. Mostly-read workloads with data sets in memory that are too large can be cached in swap without screwing up a SSD. A large dirty data set, however, will cause continuous heavy writing to swap space and that can shorten the life of SSD-based swap very easily.

A system which needs to page something out will stall badly if the pageout activity is rate-limited, thus you just don't want to use an SSD in any situation where huge, continuous pageout rates are expected. Huge pagein/re-pagein rates due to largely static in-memory data sets are fine. Huge pageout rates are not.

It is a different story for filesystem meta-data or data caching. You should be able to control worst-case write rates in those sorts of setups (at least you can with DragonFly's swapcache), making the SSD a huge win under virtually any circumstances.

-Matt
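
A simple way to check whether a workload is actually generating the sustained pageouts described above (the one-second interval is arbitrary):

# report memory/swap activity every second; "si"/"so" are the amounts
# paged in from and out to swap -- a large, sustained "so" column is
# exactly the write pattern that wears out SSD-backed swap
vmstat 1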

Re:Wear usage? (0)

Anonymous Coward | more than 3 years ago | (#35345450)

I thought I'd find you here. Quit procrastinating, you've got a release to work on!

-Not Matt :-P

Re:Wear usage? (0)

Anonymous Coward | more than 3 years ago | (#35345636)

If your swap file is constantly being written to, you should spend your money on memory, not SSDs :) It really doesn't get that many writes.

"Mere millions of writes" -- I don't think you know what you're talking about. A normal user probably wouldn't get million writes per cell during _his_ lifetime... I realize that's not even relevant to the question here because these new denser SSDs don't have that kind of write-counts but you brought that number up.

Re:Wear usage? (1)

Rockoon (1252108) | more than 3 years ago | (#35345772)

This is why I believe in hybrid setups. The other replies to this are talking about mere millions of writes.

There are no replies talking about mere millions of writes. The one reply that did mention 1 million was talking about per cell and is also very wrong. Furthermore, we do not speak of write limits with SSD's. We speak of erase limits. We should learn the difference before being critical. SSD's write in 4K Pages but erase in Blocks. OCZ is currently using 128 Pages per Block, so Blocks are 512K on their devices.

Do you know how many writes the swap file gets every hour you use your computer?

You are attempting to invoke a nebulous unknown to support your argument for hybrids. When you know how much you are writing per hour on average, and how that compares to the limits of these drives, then you might have support for your belief in hybrids. Right now all you have given is voodoo hand-waving. Swapping has been considered bad for a lot longer than SSDs have been around, because disk reads and writes were slow, so you are essentially only hitch-hiking on that traditional negativity here. On modern SSDs, reads and writes are much faster, so that traditional reason for hating swap doesn't just naturally cross over.

My guess is that you are severely over-estimating how much writing you are doing per day while also severely under-estimating how much writing can be done on these drives. The erase-limits of these drives will last years even if you routinely erase and refill them entirely each day.
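
A back-of-the-envelope version of that argument, with made-up but plausible numbers (a 250GB drive, 5,000 erase cycles per cell, 20GB written per day, and write amplification ignored):

# 250 GB x 5000 cycles = 1,250,000 GB of total writes;
# at 20 GB/day that is 62,500 days, or about 171 years
echo $(( 250 * 5000 / 20 / 365 ))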

Re:Wear usage? (1)

moogied (1175879) | more than 3 years ago | (#35345314)

Hi! It's called RAM. It's where you store data that you plan to access and manipulate a lot before you save it.

Laptop Backup Times... (1)

BoRegardless (721219) | more than 3 years ago | (#35344782)

ought to be severely reduced if you can pass the info off at those data rates to a similarly fast external drive.

Then if you want to archive to a "slow" spinning hard drive, the external SSD could supply the data at the slower rate of the HDD.

Re:Laptop Backup Times... (1)

Neil Boekend (1854906) | more than 3 years ago | (#35345760)

It depends: you could use a RAID 5/6 NAS and have high disk speeds again. Something like this [softpedia.com]. The max speed of a Thunderbolt connection is 800MB/s, so, assuming they are WD Black 2TB 3Gb/s drives, you can get an average of 109MB/s * 5 (RAID 5 with 6 disks) = 545 MB/s (host-to-disk speeds; host-to-buffer doesn't matter).
That's, effectively, 5 disks to go slightly over this SSD's read speed. Then again, you'd get 10 TB (= 9.09 TiB) in the external enclosure, while the biggest of these SSDs will be 250 GB.

Why not battery backed RAM straight on the bus? (2)

joesteeve (2002048) | more than 3 years ago | (#35344790)

With some lazy writing to solid-state-chips.. :D

Yeah, I am dreaming.. Sigh!! :-/

Re:Why not battery backed RAM straight on the bus? (0)

Anonymous Coward | more than 3 years ago | (#35345188)

With some lazy writing to solid-state-chips.. :D

Yeah, I am dreaming.. Sigh!! :-/

While you're dreaming, dream up an OS written from the start for such modern advances.

Ok, sure, any such idea will be obsolete before you even finish coding the basics, let alone get it to a usable state, but somebody has to break the chains that bind us!

Re:Why not battery backed RAM straight on the bus? (1)

joesteeve (2002048) | more than 3 years ago | (#35345452)

While you're dreaming, dream up an OS written from the start for such modern advances.

I am actually.. :D But well, can only dream.. sigh!

Just imagine:
- A power-cycle would leave the machine exactly where it was. Well, that would be a problem with buggy software, but for such cases we could have a hard reset of some sort.
- Absolutely no waiting for shutdown/power-up/etc.
- Stuff like 'object serialization', 'state persistence', 'configuration persistence', etc. would just be there. We won't have to deal with all the parsing, etc. nonsense.
- Application startup would be blazingly fast.
etc. etc.

Yeah, those chains need to be broken. The 21st century feels like stone age :-/

Re:Why not battery backed RAM straight on the bus? (1)

joesteeve (2002048) | more than 3 years ago | (#35345462)

We actually did consider a similar option for an embedded system solution. A few MBs of "battery backed" RAM apart from the normal volatile memory. But well, the client ran away.. so .. :-S

Re:Why not battery backed RAM straight on the bus? (1)

mabhatter654 (561290) | more than 3 years ago | (#35345866)

That's how expensive SCSI controllers work. They have big read/write caches of RAM to speed up transactions, and the RAM has its own battery should power blink. That allows the captured transactions to be written to disk or recovered from the journal on the next IPL. In today's world, that's a lot of hardware to throw at a simple problem though, definitely not for the "consumer" or even "prosumer".

Re:Why not battery backed RAM straight on the bus? (2)

edmudama (155475) | more than 3 years ago | (#35345194)

The DDRDrive X1 almost fits your design. It's not on the memory bus, but on the PCI-e bus as a storage device. Bit pricey though per gigabyte.

Re:Why not battery backed RAM straight on the bus? (1)

joesteeve (2002048) | more than 3 years ago | (#35345420)

Checked it out. pretty cool :)

But still, 'persistent' memory gives too many opportunities.. I've been dreaming since morning about it. :D

3rd generation X-35M? (1)

scheme (19778) | more than 3 years ago | (#35344862)

What happened to the 3rd generation SSDs that Intel was supposed to release this month? They were supposed to be using 25nm flash and offer roughly twice the space for the same price as the G2 drives. Using a new controller and upgrading things a bit seems a poor substitute for that.

Re:3rd generation X-35M? (0)

Anonymous Coward | more than 3 years ago | (#35345248)

If Intel was the only 25nm producer then they wouldn't simply double the space. They'd increase the space or lower prices just enough to beat the competition, and then keep the remaining, fatter profits for themselves. I mean, that's how I'd do it. But maybe things work differently where you live.

for McAfee (0)

Anonymous Coward | more than 3 years ago | (#35344926)

Obviously the extra speed was simply to ensure decent performance for McAfee [slashdot.org] .

Finally (0)

Anonymous Coward | more than 3 years ago | (#35345048)

A drive that can run McAfee properly.

Wonder why Marvell chip (1)

RightSaidFred99 (874576) | more than 3 years ago | (#35345060)

I wonder why they outsourced their controller chip; previous incarnations used an Intel controller.

Re:Wonder why Marvell chip (1)

pipedwho (1174327) | more than 3 years ago | (#35345164)

I wonder why they outsourced their controller chip; previous incarnations used an Intel controller.

Marvell acquired some of Intel's sub-divisions (including their wireless and embedded groups) - this happened a couple of years ago when Intel decided to focus heavily on their main processor and support chipset lineup. Given that, there is very likely a tight integration between the two companies, so no real reason for Intel to duplicate work and design their own controller chip.

Re:Wonder why Marvell chip (1)

RightSaidFred99 (874576) | more than 3 years ago | (#35345344)

They already had a controller chip, and the Marvell split-off was well before their SSDs. There could be some Marvell ties there somewhere, but my guess is that SSD controllers just became a commodity (those things are coming out like every month now) and now perform well enough (they didn't a few years ago) to use.

Re:Wonder why Marvell chip (1)

Ster (556540) | more than 3 years ago | (#35345526)

Where are you seeing that they're using a Marvell controller? I didn't see that in any of the TFAs.

Only makes sense (1)

m.dillon (147925) | more than 3 years ago | (#35345092)

Now that 6GBit/s SATA ports are becoming commonly available on motherboards it's only natural that SSDs follow. Normal HDs can't really take advantage of a 6Gb/s port but SSDs can. These high speed ports will make port multiplier enclosures more useful as well.

There's certainly a lot of use for this sort of thing. SSDs can already replace far more expensive DRAM (and the far more expensive motherboards needed to support lots of DRAM) for numerous problem sets, including mostly-read database accesses. This will significantly reduce the cost of server architectures for large swaths of problem areas.

What we are seeing is a natural progression.

-Matt

Re:Only makes sense (1)

jrobot (1239050) | more than 3 years ago | (#35345376)

A modern DDR3 part at 800 Mbit/s x 16 bits/part = 12.8 Gbit/s.
Granted, you add a lot more complexity to your board, but DRAM is not going away.

Re:Only makes sense (1)

afidel (530433) | more than 3 years ago | (#35345552)

Try 32GBytes/s for a dual Nehalem system with DDR3 1333 pieces in banks 1 and 2 on each CPU.

Re:Only makes sense (1)

Neil Boekend (1854906) | more than 3 years ago | (#35345786)

With SSDs' limited write cycles, DRAM will not go away. Could you imagine your RAM failing permanently after a long weekend of gaming?

6Gbps throughput is impossible (0)

Anonymous Coward | more than 3 years ago | (#35345238)

With 8b/10b encoding, throughput absolutely cannot be more than 4.8Gbps (600MB/s).
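
The arithmetic behind that claim, for anyone wondering: SATA uses 8b/10b encoding, so 10 bits go over the wire for every 8 bits of payload.

# 6000 Mbit/s line rate x 8/10 payload efficiency = 4800 Mbit/s = 600 MB/s,
# so a 500 MB/s drive doesn't even saturate the link
echo $(( 6000 * 8 / 10 / 8 ))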

Re:6Gbps throughput is impossible (1)

omfgnosis (963606) | more than 3 years ago | (#35345962)

Oh. Er. Why?

is it 6gb/s or 500mb/s ? (1)

Latinhypercube (935707) | more than 3 years ago | (#35345264)

Is it 6Gb/s or 500MB/s? Make your fucking mind up.

Re:is it 6gb/s or 500mb/s ? (1)

Neil Boekend (1854906) | more than 3 years ago | (#35345804)

The SATA speed is 6Gb/s. The disk speed is 500MB/s. You can read from the disk itself at "only" 500MB/s (and write at 315MB/s). Since SATA doesn't come in that exact speed, they chose the next step up: the 6Gb/s SATA connection (roughly 600MB/s of payload after 8b/10b encoding).

Another breakthrough in marketing the next tier.. (0)

Anonymous Coward | more than 3 years ago | (#35345464)

Same old superior specifications at unsaturated retail market prices... it's always more worth buying, yet again, and again. Incrementally strung-out technology, as usual. Social engineering, not technology, turns Prometheus into a Bic lighter.

No new SLC? (1)

afidel (530433) | more than 3 years ago | (#35345556)

It's been almost 30 months since the X25-E was launched, and now the max 64GB capacity looks pathetic next to the competition, so when will Intel launch their next-generation SLC-based SSD?

Don't forget about Sandforce/OCZ (4, Informative)

0111 1110 (518466) | more than 3 years ago | (#35345654)

Sandforce has already announced [anandtech.com] its new sata3 controller. On paper it looks like it will have much faster sequential writes than Intel, but it sounds like it will also have a shorter lifetime and shorter data retention times due to the use of 25nm NAND. Intel is wisely sticking with 34nm. It may be more expensive to manufacture, but is superior tech. I can only hope that OCZ changes their mind and decides to at least offer a more expensive 34nm version. OCZ won't be shipping their Vertex 3 drives until Q2 so Intel will have a big head start in the market.

The NAND industry seems to be doing its best to encourage ignorance on the disadvantages of smaller process sizes from the consumer POV and the ignorance seems to be widespread. Getting the facts on this issue can be a bit difficult. Here is a good thread on the topic.
http://forums.anandtech.com/showthread.php?t=2142742 [anandtech.com]

The following post sums it up better than I could. Note his point about data retention times as well. That is a point that is often ignored when the focus is solely on write cycles.

As flash cells are shrunk, they become less good. This is a fundamental feature of the technology. The overall volume of the cell becomes smaller, so fewer electrons can be stored in the cell (so the signal picked up by the electronics is weaker and less clear, so you get a higher error rate), and the insulating barriers around the cell must be made thinner in order to save space, allowing the electrons to leak out of the cell more easily (reducing power-off data retention time). The thinner insulation also wears out more quickly (reducing life cycles).

It's difficult to define a 'fundamental' limit for flash, because it may be possible to work around poor performance, and as-yet-unknown new manufacturing techniques and semiconductor materials may be developed. However, it has been suggested in the scientific literature that 18-22 nm is the realistic limit. Beyond that, the performance/reliability/lifespan of the flash would be too poor, no matter how much wear levelling and how sophisticated the ECC codes were.

Enterprise grade SSD flash, will need higher specifications than flash for toy cameras. Enterprise applications are unlikely to tolerate 18 nm flash with 100 write cycles and one lost sector per 100 GB of data stored. However, this probably would be acceptable for toys or throwaway devices.

Some more coverage of the topic:
http://techon.nikkeibp.co.jp/article/HONSHI/20090528/170920/ [nikkeibp.co.jp]

NAND Flash memory quality is also beginning to drop. Chips manufactured using 90nm-generation technology in 2004-05, for example, were assured for about 100,000 rewrites and data retention of about a decade. As multi-level architecture and smaller geometry are introduced, quality is showing a sharp decline. The 30nm 2-bit/cell chips expected to enter volume production in 2009-10 may well end up with a rewrite assurance of no more than 3,000 cycles, and a data retention time of about a year. The first 3-bit/cell chips are hitting the market now, with only a few hundred rewrites.

http://hardforum.com/showthread.php?t=1502663 [hardforum.com]

Flash memory works by trapping electrons. Over time these electrons leak away, until the charge is too small for the data to be read any more. With smaller feature sizes (34 nm instead of 45 or 65 nm) this leakage is more significant and fewer electrons can be stored per bit, thus the time during which the stored value can be maintained is decreased.

http://www.corsair.com/blog/force25nm/ [corsair.com]
http://www.storagereview.com/ssds_shifting_25nm_nand_what_you_need_know [storagereview.com]
http://www.dvhardware.net/article48180.html [dvhardware.net]
http://www.overclock.net/ssd/942073-ocz-vertex-2-34nm-nand-versus.html [overclock.net]

As we discuss these wonderful new SSDs I think it is important to be aware of the longevity/retention issues. Larger process sizes are not usually a desirable feature, but in the case of flash NAND SSDs it most definitely is. Many will find this counterintuitive. AFAIK, there are no advantages to 25 nm NAND for the consumer, but there are many for the manufacturer. Not only lower costs leading immediately to greater profits, but more future sales as SSDs wear out faster and need to be replaced. It could be argued that the lower manufacturing cost is a rather large advantage, but that is only if the sellers choose to pass on the savings. I haven't noticed Intel CPUs decreasing in cost after a process shrink and desktop memory seems to price as a commodity without much regard for process size. Even if the cost to the end user really does drop you have to balance the lower initial cost with the lower longevity. I am very pleased with Intel for having the integrity to stick with 34nm despite the fact that 25nm would earn them more money in the short term. I usually have a great deal of respect for OCZ, but in this case Intel deserves the praise and OCZ deserves the loss in reputation.

Despite all this, and as much as I hate to admit it, I will be rewarding the wrong company by waiting for the Sandforce SF-2000 SSDs to appear. Maybe someone other than OCZ will step up to the plate and offer 34nm SF-2000 based drives. If not I would be willing to accept a shorter lifetime and shorter data retention times in exchange for much higher speeds. I mean that is the whole point of these drives. Speed. Intel's sequential write speeds are just too low.

[Disclaimer: I own a single share of stock in Intel. Decisions like this one make me pleased that I do.]

Re:Don't forget about Sandforce/OCZ (1)

loosescrews (1916996) | more than 3 years ago | (#35345892)

I can only hope that OCZ changes their mind and decides to at least offer a more expensive 34nm version.

Is a 32nm version good enough for you? They are going to offer one of those called the Vertex 3 Pro. Source: http://www.anandtech.com/show/4186/ocz-vertex-3-preview-the-first-client-focused-sf2200/2 [anandtech.com]

Hardware TRIM support? (1)

Golbez81 (1582163) | more than 3 years ago | (#35345714)

What would really be nice is to get TRIM support on hardware controllers. I've been waiting on TRIM drivers now since 2008...
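
In the meantime, on setups where TRIM does reach the drive, it can be checked and exercised from userspace (device and mount point are examples; fstrim and the discard mount option need a reasonably recent kernel):

# does the drive advertise TRIM support at all?
hdparm -I /dev/sda | grep -i trim

# trim the free space of a mounted filesystem once (can be cron'd)
fstrim -v /

# or mount with the discard option in /etc/fstab for continuous TRIM:
# /dev/sda1  /  ext4  defaults,noatime,discard  0  1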

Sod SATA (2)

Eunuchswear (210685) | more than 3 years ago | (#35345890)

Give us fucking SAS already.
