
Can SSDs Be Used For Software Development?

timothy posted more than 5 years ago | from the real-world-odds dept.

Data Storage

hackingbear writes "I'm considering buying a current-generation SSD to replace my external hard disk drive for use in my day-to-day software development, especially to boost the IDE's performance. Size is not a great concern: 120GB is enough for me. Price is not much of a concern either, as my boss will pay. I do have concerns about the limits on write cycles as well as write speeds. As I understand it, current SSDs work around this by spreading writes across less-worn cells (wear leveling). That would be good enough for regular users, but in software development, one may have to update 10-30% of the source files from Subversion and recompile the whole project, several times a day. I wonder how SSDs will do in this usage pattern. What's your experience developing on SSDs?"


480 comments

Umm... (1, Insightful)

addaon (41825) | more than 5 years ago | (#27096409)

If you're not good enough at arithmetic to understand that this isn't an issue, should you really be developing software?

Re:Umm... (4, Funny)

Rockoon (1252108) | more than 5 years ago | (#27096501)

Math is hard! Let's buy both!

Re:Umm... (3, Funny)

JamesP (688957) | more than 5 years ago | (#27096963)

I'd say: "Programming is hard let's do Java"

Re:Umm... (2, Funny)

Anonymous Coward | more than 5 years ago | (#27097211)

Java is hard, let's use Python.

Re:Umm... (0, Offtopic)

Hurricane78 (562437) | more than 5 years ago | (#27097235)

Python is hard. Let's use Visual Basic.

Re:Umm... (1)

hosecoat (877680) | more than 5 years ago | (#27097357)

visual basic is hard, so many bugs.

right, because java makes your puddy sore (0)

Anonymous Coward | more than 5 years ago | (#27097485)

EOM

Re:Umm... (-1)

Anonymous Coward | more than 5 years ago | (#27096549)

I'm sure they will change industries as soon as possible, thanks to you.

Forming an equation is much different than computing it.

Re:Umm... (-1, Troll)

ByOhTek (1181381) | more than 5 years ago | (#27096729)

I understand your sentiment, but unfortunately your sig (which currently reads "I've had this sig for three days.") denies you the right to express it without giving others the right to laugh at you incessantly and call you names.

Re:Umm... (0)

InvisibleClergy (1430277) | more than 5 years ago | (#27096831)

So how about that wikipedia article? Cheaper drives (which mgmt is sure to require) have 1,000 write cycles (assuming the worst). For certain high-traffic files, that means (assuming 30 writes in a day) a whole 33 days of use.

This is why most SSD manufacturers say you should leave a certain percentage of the memory as a backup.

Re:Umm... (0)

InvisibleClergy (1430277) | more than 5 years ago | (#27096913)

Also, seriously. How the hell did the grandparent get +5 insightful? It's a valid issue with SSD drives, because cheap ones die fast.

Re:Umm... (1)

clone53421 (1310749) | more than 5 years ago | (#27097137)

For certain high-traffic files

Well, there's your incorrect assumption. The hardware will not re-save the file in the same location every time; it will save it in empty space elsewhere that has experienced relatively low usage.

Re:Umm... (1, Interesting)

Joce640k (829181) | more than 5 years ago | (#27097165)

Show me a manufacturer which makes a drive which simultaneously:

a) Competes with hard drives for speed
b) Uses the cheapest possible MLC memory in it

Grandparent is correct: If you're not clever enough to figure out if this will be a problem, you shouldn't be a programmer.

Scary thought: Hard drives don't last forever either....

Re:Umm... (4, Insightful)

Tetsujin (103070) | more than 5 years ago | (#27096881)

If you're not good enough at arithmetic to understand that this isn't an issue, should you really be developing software?

Maybe you can explain why it isn't an issue, then?

One thing about flash in general is that in order to rewrite a small amount of data, you need to (at the low level) erase and rewrite a relatively large amount of data. So depending on how extensively the filesystem is cached, where the files are located, etc., rebuilding a medium-sized project could wind up re-writing a large portion of the SSD...

Re:Umm... (4, Insightful)

blueg3 (192743) | more than 5 years ago | (#27097077)

Neither he nor you have attempted to answer the question quantitatively. Look at how big a block is, a bit about their wear-leveling strategy, how large your source files are, the quantity of data you overwrite and how frequently, and what the lifetime of SSD blocks is, and figure out how long the SSD should last. Even an order-of-magnitude calculation would be better than nothing.

You both are approaching the problem qualitatively: SSDs have limited rewrite lifetimes, and I'm doing a lot of rewriting -- isn't that bad? You don't know! Figure it out!
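
For example, a back-of-the-envelope sketch in Python -- every number below is an assumption, not a spec for any particular drive, so plug in your own:

    # Rough order-of-magnitude SSD lifetime estimate (all inputs are assumptions)
    drive_bytes = 120 * 10**9          # 120 GB drive, as in the question
    cycles_per_block = 10000           # assumed MLC erase-cycle rating
    write_amplification = 2.0          # assumed overhead from erase-block rewrites
    daily_write_bytes = 5 * 10**9      # assume ~5 GB/day of source, objects, binaries

    # With ideal wear leveling, total data the drive can absorb before wear-out:
    endurance_bytes = drive_bytes * cycles_per_block / write_amplification
    days = endurance_bytes / daily_write_bytes
    print("roughly %.0f days, or %.0f years" % (days, days / 365))

With these particular made-up inputs the answer comes out in centuries; swap in your own numbers and see where you land.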

Re:Umm... (1)

clone53421 (1310749) | more than 5 years ago | (#27097215)

Well, common sense would dictate that the filesystem would be laid out in such a manner that rewriting a single file would affect as few other files as possible. In other words, arrange your data into "blocks", so that any edit to any part of one block will require the entire thing to be rewritten, and then store exactly one file per block, one or more blocks per file.

Re:Umm... (1)

tepples (727027) | more than 5 years ago | (#27097403)

arrange your data into "blocks", so that any edit to any part of one block will require the entire thing to be rewritten, and then store exactly one file per block, one or more blocks per file.

File systems already support "clusters". But as I understand it, most formatting tools that come with PC operating systems still do not try to make a file system's cluster size match an SSD's erase block size, or even align the start of a partition to an erase block boundary. Or has this changed in recent versions of Windows and Linux?
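
If you want to check for yourself, here is a small Python sketch (Linux only; the device name and the 512 KiB erase-block size are assumptions -- check your drive's documentation):

    # Check whether a partition starts on an assumed erase-block boundary.
    SECTOR_BYTES = 512                 # sysfs reports partition offsets in 512-byte sectors
    ERASE_BLOCK_BYTES = 512 * 1024     # assumed erase-block size; varies by drive

    with open("/sys/block/sda/sda1/start") as f:   # hypothetical disk/partition
        start_sector = int(f.read().strip())

    offset = start_sector * SECTOR_BYTES
    aligned = (offset % ERASE_BLOCK_BYTES == 0)
    print("partition starts at byte", offset, "- aligned" if aligned else "- NOT aligned")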

Re:Umm... (1)

corbettw (214229) | more than 5 years ago | (#27097453)

Yes he should, and he should be committing all of his revs to TheDailyWTF for our enjoyment.

Re:Umm... (2, Insightful)

gandhi_2 (1108023) | more than 5 years ago | (#27097471)

You are confusing programming and computer science.

My experience (-1, Troll)

Anonymous Coward | more than 5 years ago | (#27096459)

My experience with STDs has taught me to avoid CmdrTaco's mother.

Word

Re:My experience (1, Informative)

Anonymous Coward | more than 5 years ago | (#27096995)

LOL. On the plus side, you can only get herpes once :)

Re:My experience (0)

Anonymous Coward | more than 5 years ago | (#27097253)

Not to be negative, but there's also no guarantee that she hasn't caught other new and exciting bugs in the time since you've last encountered her.

I'm not sweating it (5, Insightful)

timeOday (582209) | more than 5 years ago | (#27096465)

I'm using the Intel SSD and I think it's great - fast and silent. Will it last? I'd argue you never know about any particular model of hard drive or SSD until a few years after it is released. On the other hand, I'd also argue it doesn't matter much. Say one drive has a 3% failure rate in the 3rd year and another has a 6% rate. That's a huge difference percentage-wise (100% increase). And yet it's only a 3% extra risk - and, most importantly, you need a backup either way.

Re:I'm not sweating it (5, Interesting)

Zebra_X (13249) | more than 5 years ago | (#27096891)

The real key here is this: when an SSD can no longer execute a write, the disk will let you know. Reads do not cause appreciable wear, so you will end up with a read-only disk when the drive has reached the end of its life. This is vastly superior to the drive just dying because it's had enough of this cruel world.

I'd be interested to see some statistics on electrical failure of these drives though... but it seems that isn't as much of an issue.

Swap? (3, Interesting)

qoncept (599709) | more than 5 years ago | (#27096469)

Do you have a swap file/partition? You're talking hundreds of writes a day, tops. That sounds like a big number, but in reality it just ain't. I would question why you feel the need for an SSD, though. I know the difference between $300 and $50 isn't that big in the grand scheme of things, but what benefit are you expecting?

Re:Swap? (5, Informative)

timeOday (582209) | more than 5 years ago | (#27096593)

The main difference is a good SSD is much, much faster than any hard drive. If discussions about the topic don't give that impression, it's only because people fixate on sustained transfer - where there is still some competition between slower SSDs and hard drives - rather than seek time, which is often more important, and where SSDs blow the doors off hard drives. To me, suddenly widening the biggest bottleneck in PC performance for the first time in a couple decades is pretty exciting.

Re:Swap? (3, Informative)

Mad Merlin (837387) | more than 5 years ago | (#27096819)

Yeah, except only the SLC SSDs are worth having. MLC SSDs are junk and extremely common; you're better off with a spinning-platter drive. However, I can't recommend SLC SSDs enough; they're substantially faster than conventional spinning-platter drives in all ways.

Re:Swap? (2, Interesting)

Anonymous Coward | more than 5 years ago | (#27097283)

Would you care to explain your opinion that MLC SSDs are junk? I know some people have gotten a bad impression of MLC SSDs because Windows' default configuration doesn't play nicely with them. However, if you tune Windows, MLCs work great. If you use OS X, just about everything is, by accident, properly tuned and they work great. My guess is that with Linux they will just work great too.

Three days in with my new SSD and OS X, and I love it. The almost total elimination of disk latency has made it a whole new experience. I can't even measure launch times in icon bounces any more; on average the windows appear before the icon has even finished its first jump off the dock.

Re:Swap? (-1, Troll)

bluefoxlucid (723572) | more than 5 years ago | (#27097085)

That's why USB2.0's 480 Mb/second (80MB/s) is always seen in those sustained 80 megabyte per second writes to SSD-based flash drives, right? Oh, wait no. Well you still have the seek time, which lets you boot Ubuntu in 5 seconds, right? ... no? ... more like 3 minutes? But it's 30 seconds booting off a spinning hard disk!

Re:Swap? (1)

clone53421 (1310749) | more than 5 years ago | (#27097323)

Who said anything about USB?

Re:Swap? (4, Insightful)

afidel (530433) | more than 5 years ago | (#27097287)

The best bet, if your project is smaller than about 20GB, is to buy a box full of RAM and use a FAT32-formatted RAM drive. Orders of magnitude faster than even an SSD.
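
If you want to see the gap for yourself, here's a quick-and-dirty Python timing sketch (the two paths are placeholders -- point them at a RAM drive and at your normal disk):

    import os, time

    def write_small_files(directory, count=2000, size=4096):
        # Simulate a build spraying lots of small object files.
        payload = b"x" * size
        start = time.time()
        for i in range(count):
            with open(os.path.join(directory, "tmp%05d.o" % i), "wb") as f:
                f.write(payload)
                f.flush()
                os.fsync(f.fileno())   # force each write to the device
        return time.time() - start

    for d in ("/mnt/ramdrive", "/home/dev/build"):   # placeholder paths
        print(d, round(write_small_files(d), 2), "seconds")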

Re:Swap? (1)

fuzzyfuzzyfungus (1223518) | more than 5 years ago | (#27097479)

Depending on the application, using a filesystem with such niceties as, oh, file permissions might be a good idea...

Re:Swap? (0)

Anonymous Coward | more than 5 years ago | (#27097459)

On a modern developer's workstation (16GB or more of RAM), why would you need swap?

Sandisk SSD G3 (1)

ErikZ (55491) | more than 5 years ago | (#27096479)

Does anyone know when the SanDisk SSD G3 drives are coming out?

should be fine (3, Informative)

MagicMerlin (576324) | more than 5 years ago | (#27096491)

Unless you type like The Flash, even MLC SSDs from the better vendors (Intel) should be fine for anything outside of server applications. Simple math should back this up (how many GB total the drive can write over its lifetime vs how much you produce each day). merlin

Re:should be fine (4, Funny)

Tetsujin (103070) | more than 5 years ago | (#27097009)

Unless you type like The Flash, even MLC SSDs from the better vendors (Intel) should be fine for anything outside of server applications. Simple math should back this up (how many GB total the drive can write over its lifetime vs how much you produce each day).

I don't know who this "The Flash" is... But this reminds me of some odd invoices I've seen here lately at Star Labs. Someone special-ordered a custom keyboard rated to one hundred times the usual keystroke impact, an 80MHz keyboard controller, and a built-in 1MiB keystroke buffer. Pretty ridiculous, huh? The usual 10ms polling rate for a USB keyboard should be enough for anybody - no need for all that fancy junk.

Re:should be fine (3, Funny)

nextekcarl (1402899) | more than 5 years ago | (#27097481)

Find who ordered that keyboard and I think you'll find out who the Flash is.

Re:should be fine (2, Insightful)

clone53421 (1310749) | more than 5 years ago | (#27097031)

how many GB total the drive can write over its lifetime vs how much you produce each day

It's not as simple as that. Make a small change (insertion or deletion) near the beginning of a large source code file, and the entire file – from the edit onward – must be written over. Then, any source file that has been modified must be read and rebuilt, overwriting the previous object files for those sources. Finally, all the object files must be re-linked into the executable.

So you're not just writing ___ bytes of code. You're writing ___ bytes of code, re-writing ___ bytes of code because it followed code that was added or modified, and overwriting ___ of the object, library, debug, executable, etc. files that are created when the project is built. In a large project that's probably on the order of megabytes. That is what TFS meant by:

in software development, one may have to update 10-30% of the source files from Subversion and recompile the whole project, several times a day. I wonder how SSDs will do in this usage pattern.
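
To put some entirely made-up numbers on that, a quick Python estimate -- substitute your own project's figures:

    # Rough guess at bytes written by one "svn update + full rebuild".
    changed_sources = 300                  # e.g. 30% of a 1000-file project
    avg_source_bytes = 20 * 1024           # assume each changed file is rewritten whole, ~20 KB
    object_bytes = 1000 * 50 * 1024        # every object file rebuilt, ~50 KB each
    binary_bytes = 200 * 1024**2           # relinked libraries/executables, ~200 MB

    per_build = changed_sources * avg_source_bytes + object_bytes + binary_bytes
    per_day = per_build * 5                # assume five such rebuilds a day
    print("about %.0f MB per rebuild, %.1f GB per day" % (per_build / 1024.0**2, per_day / 1024.0**3))

So a full rebuild here comes out to a couple hundred megabytes written, not the handful of kilobytes you actually typed.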

Re:should be fine (1)

ultrabot (200914) | more than 5 years ago | (#27097327)

It's not as simple as that. Make a small change (insertion or deletion) near the beginning of a large source code file, and the entire file – from the edit onward – must be written over.

It's not like any normal editor actually opens the file in edit mode and only patches in bytes that have been modified. They all rely on the simple solution of actually writing the whole file at once.

Get an enterprise drive (SLC, not MLC) (4, Insightful)

vlad_petric (94134) | more than 5 years ago | (#27096509)

If they're good enough for databases (frequent writes), they should be just fine for devel.

OTOH, you should be a lot more concerned about losing data because of (a) software bugs or (b) mechanical failure in a conventional drive.

Re:Get an enterprise drive (SLC, not MLC) (1)

Z00L00K (682162) | more than 5 years ago | (#27096823)

It also depends on what type of filesystem you use. A journaling filesystem like ext3 can wear down a disk a lot faster than a non-journaling filesystem.

Re:Get an enterprise drive (SLC, not MLC) (1)

vlad_petric (94134) | more than 5 years ago | (#27097041)

Do you actually have some data (specific to proper SSDs) to support this statement?

Re:Get an enterprise drive (SLC, not MLC) (1)

Mad Marlin (96929) | more than 5 years ago | (#27097489)

It also depends on what type of filesystem you use. A journaling filesystem like ext3 can wear down a disk a lot faster than a non-journaling filesystem.

This is incorrect. Don't put swap on an SSD though, that is really bad; this is probably what the parent poster is mis-remembering.

Re:Get an enterprise drive (SLC, not MLC) (1)

clone53421 (1310749) | more than 5 years ago | (#27097079)

Database edits don't propagate through the database the way a code edit propagates through the files in your project. In addition to the source code itself, object files, dlls, and executables will probably have to be re-written if you change a source code file.

Re:Get an enterprise drive (SLC, not MLC) (1)

afidel (530433) | more than 5 years ago | (#27097473)

Uh, my lightly used Oracle server does 100GB of writes a day just to its log partitions, which are only 8GB in size. I doubt your compiler can even chew through code fast enough to do more than that.

Backups (4, Informative)

RonnyJ (651856) | more than 5 years ago | (#27096559)

If you're worried about losing work, I think your backup solution is what you need to improve instead.

Death knell for *BSD (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#27096591)

Whatever the differences a few of us might possess, we certainly can strive to find some common ground. No doubt all of us can easily acknowledge the plain truth that in the balance *BSD would have to be considered a failure. So why did *BSD fail? What is at the root of *BSD's colossal miscue?

Once you get past the fact that *BSD is fragmented between a myriad of incompatible kernels, there is the historical record of failure and of failed operating systems. *BSD experienced moderate success about 15 years ago in academic circles. Since then it has been in steady decline. We all know *BSD keeps losing market share but why? Is it the problematic personalities of many of the key players? Or is it larger than their troubled personae?

The record is clear on one thing: no operating system has ever come back from the grave. Efforts to resuscitate *BSD are one step away from spiritualists wishing to communicate with the dead. As the situation grows more desperate for the adherents of this doomed OS, the sorrow takes hold. An unremitting gloom hangs like a death shroud over a once hopeful *BSD community. The hope is gone; a mournful nostalgia has settled in. Now is the end time for *BSD.

How do raids perform? (2, Interesting)

wjh31 (1372867) | more than 5 years ago | (#27096597)

Could a RAID setup give the performance boost I assume you are after? I've no experience with them, but I gather they can offer higher read/write rates. Can someone with more experience say exactly how much of a performance boost they give? A set of small HDDs could be the same price without the concerns over cycle limits.

Re:How do raids perform? (1)

Grindar (1470147) | more than 5 years ago | (#27096875)

He specified an external drive. An external RAID would most likely be unnecessarily bulky, and given that he's using an external drive so heavily, I'd be willing to bet this is for a laptop, so mobility counts a lot. If it is a mobile solution, then the SSD's probably the way to go. He said he's using Subversion, so the code should be backed up to a server. You avoid the whole "oops, I jostled my hard drive, that grinding sound can't be good" scenario. And given a reasonably large drive, he should have plenty of space to allow for the occasional bad location in memory. Once again, allowing for an external drive, his read/write speed is going to be throttled more by the USB than by the performance of the drive, in my experience.

Re:How do raids perform? (0)

Anonymous Coward | more than 5 years ago | (#27096897)

Could a RAID setup give the performance boost I assume you are after? I've no experience with them, but I gather they can offer higher read/write rates. Can someone with more experience say exactly how much of a performance boost they give? A set of small HDDs could be the same price without the concerns over cycle limits.

Better idea: mdadm -C /dev/md0 -n2 -l1 --bitmap=internal --write-behind=256 /dev/loop0 --write-mostly /dev/ssd

RAID 1 with a ramdisk and the SSD (marked write-mostly): the ramdisk absorbs writes and flushes them to the SSD in the background, and if several writes pile up on the same region, not all of them hit the SSD :)

For an added bonus, try XFS with its delayed-write stuff on top of this. Bound to cause data loss if your machine dies, but it'd be damn funny to see a floppy disk get 100MB/s write speeds until all the buffers fill up.

Re:How do raids perform? (1)

Guspaz (556486) | more than 5 years ago | (#27096955)

RAID can increase throughput, but it can't reduce access latencies. Of course, if you can read two different things at the same time, that has a similar effect to halving the effective access time. But it'd take a lot of Raptors to get the effective access time down from ~7ms to ~0.1ms.

as an external drive? (1)

mediocubano (801656) | more than 5 years ago | (#27096613)

I'm not sure you will be able to take advantage of the best of the speeds for an external drive. Won't your interface be the limiting factor?

Probably better to buy more RAM.

Good luck with that though.

Non-comperable experience (1)

Dishwasha (125561) | more than 5 years ago | (#27096635)

We actually just had a DNS appliance go south on us because the flash card it used went bad (older, non-SLC flash). We suffered complete data loss on the card, with major I/O errors to the point where Linux would not recognize it as an hdX or sdX device. Having a couple of sectors on a hard drive go bad can be a pain, but at least you can recover some data. On the other hand, it had been running for several years, so if you plan on using an SSD, I would recommend trading it in or upgrading on a regular basis to avoid this, because when they fail, they FAIL!

Can computers be used for software development? (-1, Troll)

Anonymous Coward | more than 5 years ago | (#27096645)

It's a tough question.

IDE? (4, Funny)

Hatta (162192) | more than 5 years ago | (#27096651)

You should get an SATA SSD instead.

Re:IDE? (0)

Anonymous Coward | more than 5 years ago | (#27096767)

He wasn't talking about the hard drive interface. He was referring to the Integrated Development Environment (IDE) that he uses.

Re:IDE? (1)

Hatta (162192) | more than 5 years ago | (#27096925)

It was a poor attempt at humor. You know it's an obvious joke when you're the first person to make it, and you still get modded redundant. :)

Re:IDE? (1)

clone53421 (1310749) | more than 5 years ago | (#27097465)

Either that, or it was non-obvious to the degree that the mods didn't get the joke and thought you were saying SATA is faster than IDE.

Answer: (1, Insightful)

BitZtream (692029) | more than 5 years ago | (#27096653)

Yes, an SSD can be used for development.

A better question to ask is whether you should use an SSD for development.

Best bang for the buck in software development (0, Offtopic)

Anonymous Coward | more than 5 years ago | (#27096707)

Would be 2 overclocked ATI HD4870s in crossfire mode.

That's my story and I'm sticking to it.

Re:Best bang for the buck in software development (1)

AndrewNeo (979708) | more than 5 years ago | (#27097421)

Why don't you just get an HD4870x2? Or.. two of them?

X300 (1)

Hougaard (163563) | more than 5 years ago | (#27096715)

I have been using my Thinkpad X300 for development for the last several months without any problems!

Lifetime is not an issue :p (1)

AlmondMan (1163229) | more than 5 years ago | (#27096753)

Current SSDs have a lifetime of somewhere around 10,000 years. I think that's enough.

Re:Lifetime is not an issue :p (4, Funny)

Tetsujin (103070) | more than 5 years ago | (#27097035)

Current SSDs have a lifetime of somewhere around 10,000 years. I think that's enough.

10,000 years or 100,000 writes, whichever comes first. :D

*SSD is Dying (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#27096769)

It is now official. Netcraft confirms: *SSD is dying

One more crippling bombshell hit the already beleaguered *SSD community when IDC confirmed that *SSD market share has dropped yet again, now down to less than a fraction of 1 percent of all servers. Coming on the heels of a recent Netcraft survey which plainly states that *SSD has lost more market share, this news serves to reinforce what we've known all along. *SSD is collapsing in complete disarray, as fittingly exemplified by failing dead last [samag.com] in the recent Sys Admin comprehensive networking test.

You don't need to be the Amazing Kreskin [amazingkreskin.com] to predict *SSD's future. The hand writing is on the wall: *SSD faces a bleak future. In fact there won't be any future at all for *SSD because *SSD is dying. Things are looking very bad for *SSD. As many of us are already aware, *SSD continues to lose market share. Red ink flows like a river of blood.

FreeSSD is the most endangered of them all, having lost 93% of its core developers. The sudden and unpleasant departures of long time FreeSSD developers Jordan Hubbard and Mike Smith only serve to underscore the point more clearly. There can no longer be any doubt: FreeSSD is dying.

Let's keep to the facts and look at the numbers.

OpenSSD leader Theo states that there are 7000 users of OpenSSD. How many users of NetSSD are there? Let's see. The number of OpenSSD versus NetSSD posts on Usenet is roughly in ratio of 5 to 1. Therefore there are about 7000/5 = 1400 NetSSD users. SSD/OS posts on Usenet are about half of the volume of NetSSD posts. Therefore there are about 700 users of SSD/OS. A recent article put FreeSSD at about 80 percent of the *SSD market. Therefore there are (7000+1400+700)*4 = 36400 FreeSSD users. This is consistent with the number of FreeSSD Usenet posts.

Due to the troubles of Walnut Creek, abysmal sales and so on, FreeSSD went out of business and was taken over by SSDI who sell another troubled OS. Now SSDI is also dead, its corpse turned over to yet another charnel house.

All major surveys show that *SSD has steadily declined in market share. *SSD is very sick and its long term survival prospects are very dim. If *SSD is to survive at all it will be among OS dilettante dabblers. *SSD continues to decay. Nothing short of a miracle could save it at this point in time. For all practical purposes, *SSD is dead.

Fact: *SSD is dying

You're already doing backups, no real worries (1)

BitZtream (692029) | more than 5 years ago | (#27096787)

Since you're asking about it and mentioning revision control up front, I'm going to assume that you'll be committing your changes frequently to the revision control system.

If that's the case, you've already got a backup system in place to deal with hard disk failures that's probably better than any other solution for a workstation. Not only do you get backups of your source, you get (assuming your commits are good) nice checkpoints of working code, rather than a backup of some random stuff you were working on where you don't recall EXACTLY what your thoughts were at the time.

My revision control system is my backup for my development machine. I commit (with comments) many times a day, the automated backups only run once a night and have no comments.

SSDs = productivity (5, Interesting)

Civil_Disobedient (261825) | more than 5 years ago | (#27096805)

I use SSDs for both of my development systems--the first was for the work system, and after seeing the improvements I decided I would never use spinning-platter technology again.

The biggest performance gains are in my IDE (IntelliJ). My "normal" sized projects tend to link to hundreds of megs of JAR files, and the IDE is constantly performing inspections to validate the code is correct. No matter how fast the processor, you quickly become IO-bound as the computer struggles to parse through tens of thousands of classes. After upgrading to SSD, I no longer find the IDE struggling to keep up.

I ended up going with SSD after reading this suggestion [jexp.de] for increasing IDE performance. The general gist: the only way to improve the speed of your programming environment is to get rid of your file access latency.

Re:SSDs = productivity (1)

Slacksoft (1066064) | more than 5 years ago | (#27097095)

I'd probably notice a difference too if I switched from my 1TB drive to a 64GB SSD when it comes to I/O response. Just like I'd save more money riding a bike compared to driving. Neat tech, but I'll wait until the size/price catches up with conventional drives.

Is it worth the money for you? (3, Informative)

Zakabog (603757) | more than 5 years ago | (#27096815)

The company I'm working at thought about using SSDs, but we were thinking more on the server end (to allow faster database access.) You don't have to worry about the write limits as it's highly unlikely you will hit them within the lifetime of a standard hard drive.

The main issue we ran into was cost; the drives we were looking at started around $3,000 for something like 80 gigs. That just wasn't worth it for us, though if you personally feel that the added cost (and I doubt you're looking at a $3,000 SSD; more likely you're looking at the $300 drives) is worth the performance gains, then go for it. Though I think even for $300 it won't make a worthwhile difference.

There are other bottlenecks to consider, is your CPU fast enough, do you have enough RAM, could the hard drive your software and OS is on use an upgrade, etc. Perhaps even buy an internal SATA drive (if you can) to replace the external you're using, those external enclosures generally aren't known for their performance. If you've exhausted all of those options and you still need more speed, then I'd say go for the SSD.

oh no! several times per day! (1)

Cajun Hell (725246) | more than 5 years ago | (#27096827)

That would be good enough for regular users, but in software development, one may have to update 10-30% of the source files from Subversion and recompile the whole project, several times a day

I couldn't help but notice that you said several times per day, rather than several times per second.

Are you worried that after you die of old age, in the unlikely event that your great grandkids start to have problems with their inherited flash drive, they won't be able to replace it?

Re:oh no! several times per day! (2, Funny)

MichaelSmith (789609) | more than 5 years ago | (#27097193)

I used to worry about rewrites on my eeepc. But I have installed ubuntu twice in the last month and the disk seems to be exactly the same as it was initially so I don't worry any more.

Answer (-1, Flamebait)

Anonymous Coward | more than 5 years ago | (#27096833)

The answer is a simple NO. Your computer will blow up if you use SSDs for software development. SSDs can only be used for playing games and watching movies. Nothing else. Ever. Period.

I wouldn't want someone who asks such an asinine question doing any software development for me.

buy 2 (1)

HeyBob! (111243) | more than 5 years ago | (#27096835)

hey, if your boss is paying for it, buy a couple and replace them when they wear out
(or just tell him you'll need a better, bigger, faster one in a year)

You probably just need a server. (1)

Colin Smith (2679) | more than 5 years ago | (#27096847)

Something that'll handle 30+GB of RAM. Then it pretty much doesn't matter.

 

Will it help? (1)

91degrees (207121) | more than 5 years ago | (#27096855)

Is disk access time really the limiting factor for development time? I'd suggest what you really need is heaps of RAM and a big disk cache.

The primary advantage of an SSD is that it's cool. This is a good enough reason to buy any piece of tech but the thing will possibly wear out in a year or so, which will probably outweigh the cool factor.

Run regular backups and go for performance. (1)

Zoson (300530) | more than 5 years ago | (#27096905)

I find that a large part of my programming experience deals with VM images.

SSDs kick serious butt for VMs. So if you're a serious programmer who works with multiple environments - go for the performance and just back your stuff up daily.

make backups? (1, Insightful)

bzipitidoo (647217) | more than 5 years ago | (#27096927)

You do back up your work, don't you? You know, in case it's lost, stolen, destroyed, etc.? An SSD going bad is hardly the only danger. So why not try out an SSD, and if you're especially worried, backup more frequently and keep more backups?

SSD, maybe not right now.. MacBook Air developers? (1)

jacksinn (1136829) | more than 5 years ago | (#27096993)

I'm not completely sold on using SSDs for development (yet) either, because of the write-cycle problem, but I do think that consumer SSD devices in the near future will eliminate the write-wear issue. If you must use a solid state device, I'd suggest getting one that is much larger than the projected development size and has better write-cycle/wear numbers, to help avoid any premature data loss. I primarily use SSDs for backups. Any Apple developers on a MacBook Air out there?

Developers should use *slow* machines (4, Insightful)

petes_PoV (912422) | more than 5 years ago | (#27097007)

That way it'll encourage them to write efficient implementations.

If you give your programmers an 8-way 4GHz m/b with 64GB of memory (if such a thing exists yet), they'll use all the processing power in dumb, inefficient algorithms, just because the development time is reduced. Meanwhile those of us in the real world have to get by on "normal" machines.

When we complain about poor performance, they just shrug and say "well, it works fine on my nuclear-powered, warp-10, so-fast-it-can-travel-back-in-time machine".

However, if they were made to develop the software on boxes that met the minimum recommended spec. for their operating system, they'd have to give some thought to making the code run efficiently. If it extended the development time and reduced the frequency of updates, well that wouldn't be a bad thing either.

Re:Developers should use *slow* machines (4, Insightful)

Anonymous Coward | more than 5 years ago | (#27097309)

compile time has nothing to do with inefficient algorithms slowing down programs.

Depends upon the source (1)

InsaneGeek (175763) | more than 5 years ago | (#27097043)

It really depends upon the size of the stuff you are doing. If you are going to recompile the same things over and over and the dataset fits in memory, you most likely will get little to no benefit. Linux (and Vista, and others) caches every file it reads until some app needs the memory and pushes it out. It sounds like he's doing this on a box by himself (not a server shared by 5000 other people), and with memory so cheap, unless you are compiling something huge I'd guess you'll probably never go back to disk after the first time the files are read in (as long as another app doesn't eat up all the memory and force the cached files out of the buffer cache before the next compile runs).

For a single-user app on a single box, spending less on an SSD and more on memory would probably give you much more benefit.

I've been doing just this (4, Interesting)

SanityInAnarchy (655584) | more than 5 years ago | (#27097053)

Just got one in a Dell laptop, came with Ubuntu. A subjective overview:

I have no idea how well it performs with swap. I'm not even really sure why I have swap -- I don't have quite enough to suspend properly, but I also never seem to run out of my 4 gigs of RAM.

It's true, the write speed is slower. However, I also frequently transfer files over gigabit, and the bottleneck is not my SSD, it's this cheap Netgear switch, or possibly SSH -- I get about 30 megabytes per second either way.

So, is there gigabit between you and the SVN server? If so, you might run into speed issues. Maybe. Probably not.

Also worth mentioning: Pick a good filesystem if a lot of small files equals a lot of writes for you. A good example of this would be ReiserFS' tail packing -- make whatever "killer FS" jokes you like, it really isn't a bad filesystem. But any decent filesystem should at least be trying to pack writes together, and I only expect the situation to improve as filesystems are tuned with SSDs in mind.

It also boots noticeably faster than my last machine. This one is 2.5 ghz with 4 gigs of RAM; last one was 2.4 ghz with 2 gigs, so not much of a difference there. It becomes more obvious with actual use, like launching Firefox -- it's honestly hard to tell whether or not I've launched it before (and thus, it's already cached in my massive RAM) -- it's just as fast from a cold boot. The same is true of most things -- for another test, I just launched OpenOffice.org for the first time this boot, and it took about three seconds.

It's possible I've been out of the loop, and OO.o really has improved that much since I last used it, but that does look impressive to me.

Probably the biggest advantage is durability -- no moving parts to be jostled -- and silence. To see that in action, just pick out a passively-cooled netbook -- the thing makes absolutely no discernible noise once it's on, other than out of the speakers.

All around, I don't see much of a disadvantage. However, it may not be as much of an advantage as you expect. Quite a lot of things will now be CPU-bound, and there are even the annoying bits which seem to be wallclock-bound.

Why not use both? (0)

Anonymous Coward | more than 5 years ago | (#27097065)

Take your local storage zpool and add the SSD as a cache device.

# zpool add (pool-name) cache (device path)

If you really want blistering performance... (2, Insightful)

Mysticalfruit (533341) | more than 5 years ago | (#27097073)

If price is no object, then he should get himself four ANS-9010s and set them up as a hardware RAID 0 hanging off the back of a good, fast RAID controller.

If he filled each of them with 4GB DIMMs he'd have 128GB of storage space.

Volatile? Hell yeah... But also just crazy fast...

Simple arithmetic (4, Insightful)

MathFox (686808) | more than 5 years ago | (#27097093)

A typical flash cell easily lasts 10,000 writes. Let's assume that every compile (or svn update) touches only 10% of your SSD space; with wear leveling, that gives you 100,000 "cou" (compiles or updates). If you do 20 cou per day, the SSD will last 5,000 working days, or 20 years.

Now find a hard disk that'll last that long.

Nope (1)

dvh.tosomja (1235032) | more than 5 years ago | (#27097111)

Only for porn, move along.

Software Development? Really? (1)

Greyfox (87712) | more than 5 years ago | (#27097113)

How much do you hit ^X^S? And are you really going to notice a few ms of difference when loading your source file off the drive? If you were using it as a database server with frequent writes, that'd be one thing, but software development?

If your boss is willing to shell out for one, then go for it. If you actually do the math on the write limit, you'll find that you'll be dead of old age long before the drive runs out of writes in any given cell (Last time I checked it was something like 160 years of constant writes.)

RAM disk ? (2, Interesting)

smoker2 (750216) | more than 5 years ago | (#27097179)

Can't you just load up on RAM and create a RAM drive for working stuff, and keep the slow HDD for shutdown time? Cheaper than an SSD and no write-cycle issues. You can also get RAM-based IDE and SATA drives.

Re:RAM disk ? (1)

Microlith (54737) | more than 5 years ago | (#27097305)

While fine for temporary files, RAM disks tend to be small and extremely volatile.

Not exactly a place where you want to host files from your current project, which is what OP wants.

Adaptec confirms it... (3, Informative)

guruevi (827432) | more than 5 years ago | (#27097201)

Although they use SSDs for another purpose, they said SSDs currently last about 6 months under heavy read/write conditions (as cache on a RAID controller), even with leveling techniques. Hard drives last a whole lot longer for those purposes, I would say.

I think an SSD in a desktop-type system would be all right; however, I would suggest you invest in some fast disks instead of SSDs until SSDs mature and more lifetime data is available. Remember, MTBF doesn't always mean that a piece of hardware will last that long. Most likely it will die long before that.

How about ramdisks? (2, Interesting)

ultrabot (200914) | more than 5 years ago | (#27097217)

Sometimes I wonder whether it would make sense to optimize the disk usage for flash drives by writing transient files to ramdisk instead of hard disk. E.g. in compilation, intermediate files could well reside on ramdisk. If you rely on "make clean" a lot (e.g. when you are rebuilding "clean" .debs all the time), you won't have that much attachment to your object files.

Of course this may require more work than what it's really worth, but it's a thought.

Intel or bust (2, Informative)

Chris Snook (872473) | more than 5 years ago | (#27097303)

Developing on a conventional SSD with large user-visible erase blocks is PAINFUL. The small writes caused by creating temporary files in the build process absolutely destroy performance. There are ludicrously expensive enterprise products which work around this in software, but at the laptop/desktop scale, you want something that's self-contained. As far as I'm aware, Intel's X25 drives are the only ones actually on the market now that hide the erase blocks effectively at the firmware level. The MLC ones should be fine.

Writes aren't placed randomly. (0)

Anonymous Coward | more than 5 years ago | (#27097331)

There are very precise wear-leveling algorithms behind the scenes that take care of the fact that any one cell can only handle so many cycles. And on that note, technically, you count a cycle when the cell is erased, not written (though most writes mandate an erasure). It follows that SSDs offer predictable failure (firmwares don't tell you this, however, because in practice you will replace any SSD drive long before you approach the limit of erase cycles, even for MLC drives).

The reason MLC drives offer so many fewer cycles is right there in the name. There are multiple bits stored within a single cell, and to erase any one bit, you need to erase the entire cell.

Just did some build speedups (1)

tthomas48 (180798) | more than 5 years ago | (#27097377)

This was on ant, but I think they're fairly applicable elsewhere:

1) Making the build less object oriented. Code reuse is not as important as speed in a build. Try to limit the creep of including all sorts of build task libraries.
2) Map your build and dist directories to a tmpfs.

I halved our build time just with these two steps.

SLC will last longer than your computer. (1)

Anonymous Freak (16973) | more than 5 years ago | (#27097399)

SLC can handle millions of write cycles; and it's fast. With modern wear-leveling, you could erase and re-write to an SLC drive at maximum speed continuously for years before you would hit the maximum write cycle cap.

The best one at present appears to be the Intel X25-E, which is a whopping $800 street price for the largest 64 GB model. If that isn't large enough, then you'll have to wait for the 128 GB model.

Intel's X25-M MLC model claims to have much better wear-leveling algorithms than most MLC drives, has demonstrably better read and write performance than pretty much all other MLC drives, and is available at 160 GB. Even it has a predicted lifespan longer than most rotating drives.

Work from RAM, only write occasionally (1)

RAMMS+EIN (578166) | more than 5 years ago | (#27097405)

Just to float an idea, why not do it like this:

  - Build a computer with flash storage and lots of RAM
  - Use RAM to store the code and data you're using for development
  - Write commits to flash
  - Write to flash occasionally to prevent data loss

Flash drives may be faster than disks, but RAM is still _much_ faster. An extra 4 or 8 GB of RAM doesn't cost that much, and is probably enough to hold the code and some test data for most projects. If you spend a lot of time compiling, you'll probably recover the cost of the RAM in no time, thanks to increased developer productivity.

Silly question (1)

BigZaphod (12942) | more than 5 years ago | (#27097435)

I've been developing iPhone apps on my 15" SSD MacBook Pro since they were released in September-ish? The SSD is awesome-fast compared to normal laptop HDDs. I have a friend who installed his own SSD before Apple was selling them as a BTO and his is still running fine, too. You'll likely upgrade before you hit any SSD writing limits anyway. It's not something I'm even remotely worried about. If the drive does happen to fail, that's what AppleCare and backups are for. :)

If Cost is no issue... (0, Offtopic)

Anarke_Incarnate (733529) | more than 5 years ago | (#27097451)

Then forget the SSD, unless you are worried about a head crash during a fall. Get yourself a 4+ bay eSATA enclosure and do RAID 1+0. You will pin the bus for throughput and have fault tolerance out the wazoo. I would recommend larger drives, simply for the larger outer tracks (larger meaning higher density, not necessarily greatest total capacity).

Cat got your tongue? (something important seems to (0)

Anonymous Coward | more than 5 years ago | (#27097477)

http://www.theinquirer.net/inquirer/news/293/1051293/ocz-1tb-ssd-raid-module

If you're out for performance, this could be very interesting.
