
Intel Plans 'Overclocking' Capability On SSDs

Soulskill posted about a year ago | from the just-make-sure-you-get-some-liquid-cooling dept.

Data Storage

Lucas123 writes "Anticipating it will make a 'big splash,' Intel is planning to release a product late this year or very early next that will allow users to 'overclock' solid-state drives. The overclocking capability is expected to allow users to tweak the percentage of an SSD's capacity that's used for data compression. At its Intel Developer Forum next month in San Francisco, Intel has scheduled an information session on overclocking SSDs. The IDF session is aimed at system manufacturers and developers as well as do-it-yourself enthusiasts, such as gamers. 'We've debated how people would use it. I think the cool factor is somewhat high on this, but we don't see it changing the macro-level environment. But, as far as being a trendsetter, it has potential,' said Intel spokesman Alan Frost. Michael Yang, a principal analyst with IHS Research, said the product Intel plans to release could be the next evolution of the SandForce controller, 'user definable and [with the] ability to allocate specified size on the SSD. Interesting, but we will have to see how much performance and capacity [it has] over existing solutions,' Yang said in an email reply to Computerworld."


Awsome (4, Insightful)

ciderbrew (1860166) | about a year ago | (#44715475)

Time to make some watercooling blocks and special fans and make money from those with too much.

Re:Awsome (1)

slashmydots (2189826) | about a year ago | (#44715661)

Over-provisioning doesn't change the temperature and they run at about 2 watts anyway.

Re:Awsome (0)

Anonymous Coward | about a year ago | (#44715917)

Doesn't mean that "gamers" won't buy this stuff... People are stupid when it comes to buying components. Only have 1 video card? Not overclocking your CPU? Only one hard drive? Yeah let's just get this 750W supply just in case.

Re:Awsome (1)

binarylarry (1338699) | about a year ago | (#44715971)

Pff 750 watt PSU.

Re:Awsome (0)

Anonymous Coward | about a year ago | (#44715997)

I'm not sure if 750W is enough, I might want to upgrade my RAM in the future. Better to just get the 900W now to be safe.

Re:Awsome (3, Interesting)

dj245 (732906) | about a year ago | (#44716559)

Doesn't mean that "gamers" won't buy this stuff... People are stupid when it comes to buying components. Only have 1 video card? Not overclocking your CPU? Only one hard drive? Yeah let's just get this 750W supply just in case.

A 750W power supply isn't that ridiculous. I had a system with 2 hard drives, a mid-level medium-power video card, and a dual-core processor (45W TDP). It had a 450W power supply from a reputable brand, which should have been more than plenty. However, when I switched out the processor to a 6-core one with a 95W TDP, stability went out the window. No overclocking, all BIOS settings on "safe", but it would freeze about every 30 minutes or so. I was going crazy trying to figure it out. I reformatted Windows, but no good. It was crashing using some Linux CD diagnostics tools so it had nothing to do with the software. I even RMA'd the CPU and got a new one.

Eventually I bought a new 550W power supply and all the problems went away. Maybe the "reputable brand" of my 450W power supply wasn't actually reputable, or maybe some element inside had degraded over time, but power supply problems are the most frustrating kinds of problems to solve if you are assuming that X watts should be enough. I'm not made of money, but I'm going to buy the best power supply I can afford in the future.

Re:Awsome (2)

David_Hart (1184661) | about a year ago | (#44716989)

Doesn't mean that "gamers" won't buy this stuff... People are stupid when it comes to buying components. Only have 1 video card? Not overclocking your CPU? Only one hard drive? Yeah let's just get this 750W supply just in case.

A 750W power supply isn't that ridiculous. Eventually I bought a new 550W power supply and all the problems went away. Maybe the "reputable brand" of my 450W power supply wasn't actually reputable, or maybe some element inside had degraded over time, but power supply problems are the most frustrating kinds of problems to solve if you are assuming that X watts should be enough. I'm not made of money, but I'm going to buy the best power supply I can afford in the future.

Since when is a $70 to $90 part considered a luxury high-end part? If you had said a $300 water-cooler, I would have agreed with you.

I'll grant you that for a single-CPU system with a single hard drive, a 750W power supply is overkill. However, keep in mind that cheap PSUs do not always deliver the power they are rated for, a larger PSU leaves room for expansion, and a larger PSU doesn't usually draw more energy, since the system only pulls what it needs (not the full capability of the PSU) and efficiency is fairly even across different sized PSUs.

I have an 850W power supply. I have a single i7 CPU but I have two high end graphics cards, 6 hard-drives, a PCI Soundblaster card, and a PCI RAID card. Right now, my system is using 225W because it is basically idling while I type but the power usage goes up pretty quickly when playing Far Cry 3.

If you want to find out what you should have for a power supply, check this out: http://www.extreme.outervision.com/PSUEngine [outervision.com]

Re:Awsome (1)

firex726 (1188453) | about a year ago | (#44718421)

Similar to me. I run a 750w even though I don't need all the power, but it's the added features and quality I want. I'll gladly pay an extra $30 if it means less of a chance of it frying my entire build.

Re:Awsome (0)

Anonymous Coward | about a year ago | (#44719183)

You can spend a fair bit more than $90 on a PSU, even a 500W PSU.

Cooling (80mm fan? 140mm fan? passive?), modularity, and built in protections can all drive up the cost of a PSU pretty quickly. It's the most important component in any system, so not something you should be skimping on.

Of course, that doesn't stop people putting generic 300W power supplies with no cooling into their work machines, then blaming everything else when their hard drives keep dying.

Re:Awsome (2)

toddestan (632714) | about a year ago | (#44721955)

Some "reputable" brands I have found to be overrated. But even the "best" will produce a dud now and then.

However, I will say my old Athlon XP box, three HDDs, two optical drives, two video cards, several other PCI cards (2nd network, FireWire, SATA), kewl case lights, and too many fans was perfectly fine for years on a 380W power supply. And last time I checked, it still happily boots up. 380W was probably overkill, as about the most I ever measured at the wall with a Kill A Watt was about 185W (it "idled" at around 150W, by the way).

Re:Awsome (1)

slashmydots (2189826) | about a year ago | (#44716771)

I had a customer want a 1000 watt power supply just because. I told him his current PSU was actually a 4-rail one and each rail was 20 amps tops, and 2 of the rails were not currently in use and were tied off at the bottom of the case. That's also probably why it failed: his graphics card drew too much power on the rails it was connected to. I got him a split 32/24 amp one.

Re:Awsome (1)

firex726 (1188453) | about a year ago | (#44718433)

I did that once, before I understood what a rail was and thankfully only had to RMA the vid card.

Re:Awsome (0)

Anonymous Coward | about a year ago | (#44716777)

I'd get the 750 watt just because 500s never have enough 12-volt and SATA power for my needs. Also, the difference in price is small.

Remember kids, a dual-chamber case is an awesome investment. Heat is drastically reduced and it allows you to stick a GIGANTIC fan assembly in the top of the case.

Re:Awsome (1)

Miamicanes (730264) | about a year ago | (#44717345)

The problem is, you aren't necessarily talking about 750 12-volt real (RMS) watts that are clean & stable. Cheap power supplies, in particular, are notoriously stingy in the 12v department. It's one thing to make a cheap power supply capable of surging to 750w for a few hundred milliseconds before frying itself or sagging. It's another matter ENTIRELY to make a power supply capable of outputting 750w RMS without breaking a sweat, and able to keep doing it 24/7 for the next 3 years.

That said, 750 watts is probably overkill by almost any modern standard... but 350 watts in a high-end PC with multiple hard drives two video cards, and a hot CPU is probably pushing your luck unless you know beyond doubt that you're talking about 350 watts of solid RMS power.

(RMS wattage is basically the amount you can draw continuously without unacceptably degrading the power or damaging the power supply). Think: 5'3 MMA fighter who's 180 pounds of solid muscle, vs 500 pound fat lady (height almost completely irrelevant) riding her scooter at Wal Mart.

Re:Awsome (2)

gigaherz (2653757) | about a year ago | (#44720283)

A power supply is at its most efficient at around 50% usage. So if you take the time to measure the average power usage of your system, you can buy a PSU that's approximately 2x the average, or 25% more than the peak, whichever is higher.

My system takes around 100W idle, or 250W on load, with peaks never going over 300W, so a 400W PSU would be more than enough for my usage patterns. Yet I have an 850W PSU simply because it was the one PSU they had in stock in the store I buy from when the old one broke. The old one was 650W, and it was also more than enough for my needs.
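A minimal sketch of that sizing rule in Python; the wattages are just the rough figures quoted in this post, not measurements:

```python
def recommended_psu_watts(average_w, peak_w):
    """Rule of thumb from the comment above: roughly 2x the measured
    average draw, or 25% over the measured peak, whichever is larger,
    which keeps the PSU near its ~50% efficiency sweet spot."""
    return max(2.0 * average_w, 1.25 * peak_w)

# ~100 W idle and ~250 W under load averages out to roughly 175 W,
# with peaks staying under 300 W, so something around 400 W is plenty.
print(recommended_psu_watts(175, 300))  # -> 375.0
```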

Re:Awsome (1)

gigaherz (2653757) | about a year ago | (#44720291)

Also, it's much harder to find efficient PSUs at low ratings. I checked at some point, and a 800W PSU at 20% usage was more efficient than a 450W PSU at 50% usage, so you have to keep that in mind too.

Re:Awsome (1)

ciderbrew (1860166) | about a year ago | (#44716023)

With your precious data can you really take that chance? Our elite-X watercooling blocks and special fans for power SSD users are designed to protect you*.




*Product offers no level of protection to your SSD. T&Cs apply.

Re:Awsome (1)

Zero__Kelvin (151819) | about a year ago | (#44716839)

If you read this [tomshardware.com], making sure to click on the first "see full content" link, you will see where the GP was going with this one.

Re:Awsome (4, Funny)

Thanshin (1188877) | about a year ago | (#44715669)

Time to make some watercooling blocks and special fans and make money from those with too much.

Wow! That would be Overclocked!

Years ago I could buy a cheap overclocked machine and play any overclocked game I could find. Nowadays, it's not so easy. You need an overclocked watercooling block and overclocked fans.

I only hope MS overclock their efforts and manage to get an overclocking product in time. I'd be very underclocked otherwise.

Sandforce... (5, Insightful)

Knuckx (1127339) | about a year ago | (#44715523)

So, what Intel are saying is that they are going to take an SSD controller with unstable, buggy firmware - and then add a feature that allows users to modify the internal constants the firmware uses to do its job. This can only end very badly, unless Intel and SandForce do some serious testing to find and fix the data corruption issues, the problems with the drive ignoring the host, and the problems where the drive gets stuck in a busy state.

(all problems detailed in this post have been experienced with an Intel branded, Sandforce controller-ed drive)

Re:Sandforce... (4, Funny)

c0lo (1497653) | about a year ago | (#44716805)

So, what Intel are saying is that they are going to take an SSD controller with unstable, buggy firmware - and then add a feature that allows users to modify the internal constants the firmware uses to do its job. This can only end very badly,...

How come? Personally, I can see the benefits... run the SSD to glowing hot, write your data, then cut the power. Upon cooling down, the data will be compressed (by thermal shrinking) in a hardware mode... and it's only common sense that being hard is better than being soft when it comes to compression.

Re:Sandforce... (0)

Anonymous Coward | about a year ago | (#44719359)

I know that you are trying to be funny, but up until the thermal shrinking part it sounded like a great idea.
Heating the SSD to 250 degrees C (482F) restores the NAND cells and can extend the allowed number of writes from 10,000ish to 100,000,000ish.
Source [networkcomputing.com]

Re:Sandforce... (1)

Minwee (522556) | about a year ago | (#44716825)

I don't see the problem there. If they did release an SSD that ran at turbo speed out of the box but threw up all over itself unless you tuned it to run at a snail's pace, then it would be fantastic for Intel.

Not only would they be able to blow through benchmarks and post amazing scores in their ads and reviews, they would also be able to ship barely usable controllers _and_ shift the blame to the user for either fiddling with the settings or not applying the recommended stability settings when things invariably go horribly wrong.

It's a WIN-WIN-WIN situation.

Re:Sandforce... (1)

Gravis Zero (934156) | about a year ago | (#44716991)

So, what Intel are saying, is that they are going to take a SSD controller with unstable, buggy firmware

so... business as usual?

Re:Sandforce... (2)

w1zz4 (2943911) | about a year ago | (#44717001)

I have two Intel SSDs (120GB 520 and 180GB 330) and never experienced the problems you had...

Translation (1)

Anonymous Coward | about a year ago | (#44715531)

Translation:
"It's useless, but idiot gamers will buy anything if we call it over clocked :D"

Re:Translation (1)

h4rr4r (612664) | about a year ago | (#44715555)

Pretty much that.
If gamers really wanted fast disks they would be buying SSDs that plug into PCIe lanes.

Re:Translation (1)

Shark (78448) | about a year ago | (#44715839)

Their strategy is going to have to be different, though... With CPUs, they can overcharge and cripple chips for the privilege of overclocking, but with SandForce controllers, I doubt they'll be the only vendor.

Re:Translation (1)

CastrTroy (595695) | about a year ago | (#44716699)

Gamers usually have graphics cards taking up all the PCIe lanes. The way they are designed, they are usually big enough to cover the next slot (or 2). Motherboards are even designed with 2 (seldom used) smaller PCIe slots below the top PCIe slot because they know that the graphics card will take up more than a single row. Also, PCIe has disadvantages, such as not being easily hot-swappable. A hot-swap tray costs very little; I've used them at work, and I'm definitely putting one in my next build. In a single 5.25 inch bay you can fit four 2.5 inch drives.

Re:Translation (1)

petermgreen (876956) | about a year ago | (#44720899)

In a single 5.25 inch bay you can fit four 2.5 inch drives.

Or even six.

http://www.newegg.com/Product/Product.aspx?Item=N82E16817998144 [newegg.com]

Re:Translation (0)

Anonymous Coward | about a year ago | (#44721539)

Dear manufacturers, SATA has been around for years. You god-damn better fucking well know by now that SATA power connectors have a 3.3V line, and you can't fucking supply that with a 4-pin molex. Get a fucking clue, and use the right connectors already. Fuck-wits. Shit-brains. God-damned morons. My sincerest apologies, but my vocabulary of insults is sadly lacking, you ignorant small-minded retarded feces-eating dickwads.

Re:Translation (1)

bmo (77928) | about a year ago | (#44716631)

"It's useless, but idiot gamers will buy anything if we call it over clocked :D"

And

"A certain percentage will fry them and void the warranty and will have to buy more. They lose, we win!"

--
BMO

Seems ironic... (0)

Anonymous Coward | about a year ago | (#44715557)

...since Intel is the historical enemy of overclocking. In their view if you get more performance than you paid for you are stealing from them, even though you are modifying a piece of hardware that you own. I suppose Intel's overclockable SSD will be an 'extreme edition' version that costs several hundred dollars more than an equivalent device that is rigged with extra DRM to forbid overclocking.

Re:Seems ironic... (1)

TheSkepticalOptimist (898384) | about a year ago | (#44715627)

Intel was never an enemy of overclocking; Intel was an enemy of people taking their CPUs and turning them into a puff of smoke because they thought they could just set the clock speed and multiplier to maximum and start playing Quake immediately. Eventually they decided to prevent the average moron from destroying their CPU in a nanosecond by putting in more restrictions. They also realized that these days, with the quality of GPUs and the fact that few people even maximize the speed of a single core, let alone all 4 or 8 cores, overclocking a CPU is moot.

Re:Seems ironic... (3, Insightful)

the_B0fh (208483) | about a year ago | (#44715737)

Wow. You really like the taste of the koolaid huh. Intel has always been against overclocking because it eats into their margin. They had thermal protection so overheating is not a problem. Overvoltage might fry your cpu, but only after a very long time or a very high voltage - both of which can be controlled, so it's not like you'd pump 5V through a 1.35V part.

And Intel traditionally bins its parts - you might want to check that out.

Re:Seems ironic... (0)

Anonymous Coward | about a year ago | (#44715985)

They had thermal protection so overheating is not a problem.

Thermal protection schemes have been around for some time now, but are far from a guaranteed protection against frying your processor from thermal damage. This is something I've had to deal with first hand.

Overvoltage might fry your cpu, but only after a very long time or a very high voltage - both of which can be controlled, so it's not like you'd pump 5V through a 1.35V part.

That would be beyond Intel's control and up to the motherboard what voltage options they give you. Kind of by the nature of those trying to push the system, there would certainly be options going beyond any official limit given by Intel.

Re:Seems ironic... (1)

the_B0fh (208483) | about a year ago | (#44717143)

They had thermal protection so overheating is not a problem.

Thermal protection schemes have been around for some time now, but are far from a guaranteed protection against frying your processor from thermal damage. This is something I've had to deal with first hand.

And? Was it an overclocked CPU? If not, how is it relevant to OP's claims? Of course thermal protection is not perfect, and I've never claimed so. I only pointed out that it is there and helps with the issue because an overclocked CPU might just run a tad hotter.

Overvoltage might fry your cpu, but only after a very long time or a very high voltage - both of which can be controlled, so it's not like you'd pump 5V through a 1.35V part.

That would be beyond Intel's control and up to the motherboard what voltage options they give you. Kind of by the nature of those trying to push the system, there would certainly be options going beyond any official limit given by Intel.

And....? If your motherboard puts 5V through a 1.35V pin, it doesn't matter whether the CPU is overclocked or not in that case.

Re:Seems ironic... (0)

Anonymous Coward | about a year ago | (#44716369)

You don't "pump through voltage" into a device. In what you're trying to describe, it's placing a 5V potential across the supply lines where the specifications call for a 1.35V potential for what the load can safely operate at.

Current is a construct which measures how much energy is transferred through the device based on the load (per unit time). It's a ratio of supply potential and load resistance so even current is not "pumped" unless you're letting the voltage vary into a fixed load (yes, current pumps exist but not in this case).

The power consumption is current x voltage, directly multiplied for DC but altered based on the waveform and it's harmonics for AC analysis. This is where it gets complicated quickly wrt to the mathematical analysis, but relatively easy in simulation software for digital circuits.

The device specification for maximum power as a function of voltage is not just related to heat dissipation. It's also the maximum potential that can be applied across a semiconductor junction and not avalanche it (diodes, FETs, BJTs from base-emitter, etc.) Once you've exceeded the rated voltage + safety margin, you've morphed your junction into a puff of smoke.

There's more to it and the previous paragraphs are simplified quite a bit but overall accurate.

Re:Seems ironic... (1)

the_B0fh (208483) | about a year ago | (#44717119)

Sheldon, is that you? Why aren't you working on your sciencey stuff instead of playing on slashdot?

Re:Seems ironic... (0)

Anonymous Coward | about a year ago | (#44719925)

You had me right up to "potential". What is this "potential" you speak of in your first paragraph? What is the cause of it? How does it work?

Re:Seems ironic... (1)

Ravaldy (2621787) | about a year ago | (#44718435)

I look at Intel's decision not to encourage overclocking as a move to keep their products stable. The minute you take hardware outside its originally tested context, you risk instability. If you know how processors are tested, then you understand that 2 processors side by side have completely different capabilities. One may be overclockable by 10% while the other by 20%. Regardless, IMHO overclocking is for hobbyists, not business. It serves no real purpose in the business world, and that is why Intel is the #1 choice for corporations and enterprises.

Re:Seems ironic... (1)

Shark (78448) | about a year ago | (#44715953)

Worthy of your name there, buddy... But while I don't expect the parent to be 100% right, I think you're overly optimistic when it comes to Intel's view of the market. You're probably right that they didn't care much about someone having to buy extra CPUs because they ruined the first one. They surely did care about people selling overclocked systems and digging away at their profit margins, though. I think they got spooked when enthusiasts started buying Celerons instead of Pentiums to build decent gaming boxes on the cheap.

Re:Seems ironic... (0)

Anonymous Coward | about a year ago | (#44717033)

Bullshit. Intel has always been down on overclockers.

If you let the customer clock a $100 chip to run as fast as a $150 chip, you have lost money.

So once again it comes down to... why? Because money.
Really simple shit. No conspiracy needed. Money. They don't give a damn about the end user destroying their chips. They care about money. And only money. And things that get them more money.

Re: Seems ironic... (0)

Anonymous Coward | about a year ago | (#44717935)

Sounds like a corporation to me.

Re:Seems ironic... (1)

w1zz4 (2943911) | about a year ago | (#44717021)

Intel is not an enemy of overclocking, but they do charge a premium price for overclocking-friendly CPUs. Are those SSDs gonna be labeled as "K" SSDs???

BSOD (1)

Anonymous Coward | about a year ago | (#44715561)

BSOD

If you are lucky. A silent killer of data, sneaking around like Jack the Ripper, never known by name, only by result.

Intel

Nuf said

Speed (1)

rossdee (243626) | about a year ago | (#44715569)

I guess the word "Turbo" is out of favor these days.

Time for Seagate to make some real hard drives that spin at 20000 RPM

Re:Speed (0)

Anonymous Coward | about a year ago | (#44715647)

Why would a 20,000rpm hard drive solve anything?

(Current) SSDs are about 4 times the speed of 7200rpm disks in ideal conditions for the disk, so even in ideal conditions you need something around 25,000-30,000rpm to rival the sequential read speed of an SSD.

When you hit random data, SSDs are thousands of times faster, and no amount of turning up the rotation speed on an HDD is going to catch up.

Re:Speed (1)

hoboroadie (1726896) | about a year ago | (#44715701)

We sure noticed the difference when we switched to 10,000 rpm drives. I'm waiting for SSDs to mature more before a full commitment, and hopefully we'll put them on the PCIe bus when we do.

Re:Speed (2)

stms (1132653) | about a year ago | (#44715779)

Time for Seagate to make some real hard drives that spin at 20000 RPM

I hear the 20000 RPM drives cut roast beef extra thin.

Re:Speed (1)

jeffmeden (135043) | about a year ago | (#44715857)

I guess the word "Turbo" is out of favor these days.

Time for Seagate to make some real hard drives that spin at 20000 RPM

That, or they can keep doing what they already do; make the heads smaller and cram more bits onto each track. Triple the per-track density and you have a drive at 7200rpm that performs like a (completely theoretical) 21,600 rpm drive doing sequential reads. Random reads are for suckers who don't know how to cache.
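A back-of-the-envelope sketch of that claim: sequential throughput scales roughly with (data per track) x (revolutions per second), so tripling the density along the track at 7200 rpm looks the same as tripling the spindle speed at the old density. The 1 MB/track figure is just an assumed round number:

```python
def sequential_mb_per_s(rpm, mb_per_track):
    """Rough sequential throughput: one track's worth of data per revolution."""
    return mb_per_track * (rpm / 60.0)

print(sequential_mb_per_s(7200, 1.0))   # baseline density:          120.0 MB/s
print(sequential_mb_per_s(7200, 3.0))   # triple the track density:  360.0 MB/s
print(sequential_mb_per_s(21600, 1.0))  # or 3x the spindle speed:   360.0 MB/s
```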

Re:Speed (1)

Miamicanes (730264) | about a year ago | (#44717553)

Or something that, AFAIK, has never been done on a production hard drive, like making the arms & heads independent for each platter, and building a 3-platter drive that transparently does RAID 5 internally (ie, presents itself to the outside world as a normal SATA drive, but internally reads & writes in parallel). Or making more intelligent use of the flash cache by caching the first N sectors of a large file in flash, and the remainder on the drive, so it can begin reading from flash immediately while moving the head into position for the remainder.

The single biggest problem with SSDs is the fact that they just fall over and die without warning due to firmware bugs, with no hope of data recovery due to mandatory full-disk encryption that can't be disabled. That's the part about SSDs that aggravates me the worst... if they at least allowed you to boot into Linux and siphon the raw bits from the drive for offline recovery, I wouldn't mind so much. But they don't.

I made the mistake of buying a Sandforce-based drive 3 years ago, and got burned badly. The worst thing is, you can't even upgrade the firmware without losing everything on the drive, because the bastards set up the bootloader to blow away the encryption key whenever the firmware gets replaced. A Sandforce SSD is a fundamentally-flawed black box that will fail in ways that don't technically qualify as "breakage" (because you "only" have to blow it away with SecureErase to restore it to "like new", but still-flawed, condition), but can't be fixed (or even meaningfully recovered-from) because the whole thing is booby-trapped to self-destruct if you try (Google "Sandforce panic mode").

Never. Again. I won't touch a Sandforce-based SSD with a dirty tetanus-infected pole, or taint my computers with drives built around that filthy, worthless chipset family.

what? (0)

Anonymous Coward | about a year ago | (#44715571)

Why even mention the word overclocking? How does that make any sense? Marketing trying to market with senseless words again.

Re:what? (0)

Anonymous Coward | about a year ago | (#44715691)

Wall Street will buy it.

Re:what? (2)

unixisc (2429386) | about a year ago | (#44717925)

Mod AC up! How does one even 'clock' an SSD, much less 'overclock' it? SSDs are made up of NAND flash, which does NOT include a clock input - it operates asynchronously. So how does that even start to make sense?

And are we talking just read speeds here? They are typically 70ns for single-word mode, and somewhat less - say 50ns - for page mode. That would translate to 20MHz. Not the 2 or 3GHz that one thinks of, particularly w/ CPUs and DDR3 RAM.
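For the cycle-time arithmetic above, a quick sanity check using the 70 ns and 50 ns figures quoted (illustrative only):

```python
def equivalent_mhz(cycle_time_ns):
    """Convert a read cycle time in nanoseconds into an equivalent clock rate in MHz."""
    return 1000.0 / cycle_time_ns

print(equivalent_mhz(70))  # ~14.3 MHz for single-word reads
print(equivalent_mhz(50))  # 20.0 MHz for page-mode reads
```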

are they serious? (2)

slashmydots (2189826) | about a year ago | (#44715587)

Over-provisioning already exists on a ton of different SSDs like Samsung and OCZ. Intel didn't invent anything new and the controller's MHz isn't going anywhere, nor would that be a good idea anyway. One flaw in the data and it's goodbye boot drive data integrity. What a useless "catching up" announcement.

Why... (2)

InfiniteBlaze (2564509) | about a year ago | (#44715595)

would I want to use compression at all, if my goal is speed? If maximizing total capacity is not the concern, I would use none of the drive for compression. I think the point to be taken from this is that Intel is recognizing that storage capacities for SSDs are reaching the point where compression is no longer necessary to make the technology a viable alternative to mechanical drives, and we will now begin seeing the true speed potential of the technology.

Re:Why... (1, Interesting)

slashmydots (2189826) | about a year ago | (#44715681)

All SSDs use compression. It's part of why they're so fast. Also, they're quite small. I find it hilarious that a lot even use encryption by default and yet the controller decrypts and spits out the data. There is actually zero encryption then, even if you plug the SSDs into another system.

Re:Why... (1)

InfiniteBlaze (2564509) | about a year ago | (#44715749)

That's counter-intuitive. "Running processes" on data does not make the data travel faster. If using compression improves speed, there is a bottleneck somewhere that allows the data to pool in cache. When the interface reaches the speeds that eliminate the bottleneck, we'll really have some fast drives.

Re:Why... (1)

EnsilZah (575600) | about a year ago | (#44716535)

From the little I've read, it seems that the data is copied to a fast buffer, compressed, and then written to the drive's Flash.
I guess the buffer is necessary because the OS still sees the SSD as just another SATA spinning drive so the controller has to do all the SSD specific stuff like allocating blocks based on wear-balancing.

So once it's in the buffer, it's just a matter of whether the time to compress a file and store the smaller result is faster than just storing the uncompressed file.
I can only assume that running data through logic is faster than the process of storing it in flash, and since a lot of the data you'd have on an SSD these days (system files, databases, basically anything other than highly compressed video, audio, and already compressed files) is pretty highly compressible, this speeds things up.
As a bonus, if the file in flash is smaller than it's reported to be when uncompressed, you have more free space, which allows for more efficient wear balancing and a faster drive.

Re:Why... (1)

Arker (91948) | about a year ago | (#44717287)

The compression should be done at a higher level, however. And if things are set up properly, it almost always is when it counts. So this sounds suspiciously like the inflated connection speeds I remember from the modem days.

You would be connected at a much lower speed than the box said, the difference being the 'expected' gain from the built-in compression. On the rare occasion that you were a total idiot and sent large amounts of uncompressed data, expectations would be met. In other cases it was meaningless - compressing exchanges consisting of small 'sentences' back and forth may work, but it produces no dramatic gains since that stuff is tiny to begin with. And large files are compressed beforehand, either by the originating application or via lossless or lossy compression for video and audio. Running a lossless compression algorithm on already compressed files is never a performance win.

In this case, compressing files smaller than the allocatable block size is pointless first off. And larger files, again, are normally going to be compressed at a higher level, by something with better understanding of its properties, possibly even using lossy compression.

It sounds like the old Stacker/Doublespace stuff is built into the controller. If that is accurate I give it a big DO NOT WANT.

Re:Why... (1)

kasperd (592156) | about a year ago | (#44718217)

So this sounds suspiciously like the inflated connection speeds I remember from the modem days.

This sort of inflated number is alive and well to this day, but in different areas. One area where I have seen it myself is tape drives. When LTO-5 was being standardized, the manufacturers could not keep up with the planned improvements in storage capacity (the plan was that from LTO-3 forward each generation would double capacity and increase throughput by 50%). And with LTO-6 they were falling even further behind planned capacity and throughput, so something had to be done. They have somewhat compensated for that by improving the compression ratio from a factor of 2 to a factor of 2.5. I have no idea what that number is supposed to mean, though, as anybody with knowledge about how compression works will know that compression ratios aren't just some constant that you specify.

From 50% to 40% (1)

tepples (727027) | about a year ago | (#44720625)

They have somewhat compensated for that by improving the compression ratio from a factor 2 to a factor 2.5. I have no idea what that number is supposed to mean though

I'd imagine it compares to the upgrade from DEFLATE (used in PKZIP and Gzip) to LZMA (used in 7-Zip). As I understand the claim being made, the original algorithm compressed a representative corpus of data to 50% of its original size and the new algorithm 40%.
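One way to see what a "50% vs. 40%" figure like that could mean is to run the same data through Python's built-in zlib (DEFLATE) and lzma (LZMA) modules; the exact ratios depend entirely on the input, which is rather kasperd's point:

```python
import zlib
import lzma

# Any reasonably redundant sample will do; real corpora give different numbers.
data = b"the quick brown fox jumps over the lazy dog. " * 2000

for name, packed in (("DEFLATE", zlib.compress(data, 9)),
                     ("LZMA", lzma.compress(data, preset=9))):
    print(f"{name}: {len(packed)} of {len(data)} bytes "
          f"({len(packed) / len(data):.1%} of original)")
```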

Re:From 50% to 40% (1)

kasperd (592156) | about a year ago | (#44730223)

As I understand the claim being made, the original algorithm compressed a representative corpus of data

There are several problems with that sort of benchmark. The smallest of those problems is the question about who decides what is a representative corpus. The larger problem is that if the developers know what corpus will be used to benchmark the algorithm, they may look at that corpus when deciding what the algorithm should work like. In that case it is easy to intentionally or accidentally get an unfair advantage on that exact corpus.

It's hard to draw a line between what is cheating and what is just clever tricks achieving good compression in real-world scenarios. With knowledge about the corpus, it is easy to step over that line accidentally. Without any knowledge about what sort of data will be compressed, it is impossible to come up with a good algorithm for such data.

An example of something which is clearly cheating would be to define the compression such that if the input is identical to the benchmark corpus, then the compressed output is simply a single zero bit. Otherwise the compressed output is a single one bit followed by the input. On the benchmark you compress the entire input down to just one bit, that's the best compression ratio you could ever get on that input. On any other input the algorithm does not achieve any compression.

An example of something which is a good trick for real world usage is to initialize the compression state with a predefined input containing lots of substrings which are known to be common in practice. AFAIR the SPDY protocol does exactly that. To compress HTTP streams well, the state is initialized with substrings matching all the commonly used HTTP headers as well as other common strings such as content-types and encodings. By making such initialized state part of the protocol specification, both ends will have access to it at the start of the protocol, and in practice every HTTP stream will use substrings from that state, and thus there is a real benefit.

Though those two approaches may sound different, they are not all that different. Initializing compression state with likely strings is not cheating, and it also provides a benefit even if the exact strings in the state are not used, but some similar strings are used (and thus substrings of the state can be utilized). What would be the consequence if you knew a benchmark corpus and simply included that corpus in its entirety in the initial state of the compression? If the algorithm was then asked to compress that corpus, it would simply need to specify an index to the beginning of the state and the length of the entire state, thus the corpus would get compressed down to just a handful of bytes regardless of its initial size. That's not the "perfect" compression ratio you could achieve by cheating and compressing down to one bit, but it is still unrealistically good. Still, it is hardly cheating.

In certain scenarios you would get around that sort of "cheating" by measuring not just the size of the compressed data but also the size of the decompression code. That however requires somebody to specify on what platform the code will have to run. It also doesn't give a fair measurement in case the benchmark data is too small. So you need a large set of benchmark data. If bandwidth is small but the device in each end is powerful enough, it is not unreasonable to allow the decompression code to have a size on the order of 10s of MBs, if that can give a good compression ratio. But then you need a benchmark corpus of 100s of MBs to get a good idea about the performance of the compression, if you insist on including the size of the decompression code in the calculation.

Another approach to avoid cheating would be to have one representative corpus, which the developers can look at while designing the algorithm and another representative corpus, which will be used to measure its performance once finished. This sort of approach is well known to people working with machine learning. But having people agree on the correct corpus to use is somehow conflicting with the idea that developers will never see that corpus.
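The "initialize the compression state with likely strings" trick described above is exposed directly in Python's zlib as a preset dictionary. This is a sketch of the idea, not SPDY's actual header compression scheme, and the header strings are made up for the example:

```python
import zlib

# Dictionary of strings both sides agree on in advance (hypothetical headers).
preset = b"Content-Type: text/html; charset=utf-8\r\nContent-Encoding: gzip\r\n"
message = b"Content-Type: text/html; charset=utf-8\r\nContent-Length: 1234\r\n\r\n"

plain = zlib.compress(message, 9)

co = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS, zdict=preset)
primed = co.compress(message) + co.flush()

# The decompressor needs the same dictionary to reconstruct the data.
do = zlib.decompressobj(zdict=preset)
assert do.decompress(primed) == message

print(len(plain), len(primed))  # the dictionary-primed stream is smaller
```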

How to infringe copyright with data compression (1)

tepples (727027) | about a year ago | (#44732217)

The smallest of those problems is the question about who decides what is a representative corpus.

Wikipedia's article about LTO claims that the algorithm is based on Hifn's LZS, and benchmarks are relative to the Calgary corpus.

An example of something which is clearly cheating would be to define the compression such that if the input is identical to the benchmark corpus, then the compressed output is simply a single zero bit.

This would require the compressor and decompressor to contain an exact copy of the benchmark corpus, which would likely result in copyright problems.

In certain scenarios you would get around that sort of "cheating" by measuring not just the size of the compressed data but also the size of the decompression code. That however requires somebody to specify on what platform the code will have to run.

I assume it would run on whatever platform the drive's microcontroller uses, and compression on an MCU might not favor use of a multimegabyte corpus. But I see your point that more transparency in this benchmarking would be good for consumers.

Re:Why... (1)

UBfusion (1303959) | about a year ago | (#44715775)

This had totally escaped me, you are right. In the article on SSD, Wikipedia states "SandForce controllers compress the data prior to sending it to the flash memory. This process may result in less writing and higher logical throughput, depending on the compressibility of the data."

Re:Why... (3, Insightful)

jones_supa (887896) | about a year ago | (#44715867)

All SSDs use compression.

Citation needed.

Re:Why... (0)

Anonymous Coward | about a year ago | (#44716735)

Have you considered that it isn't needed and you're just an incompetent idiot?

Re:Why... (1)

jones_supa (887896) | about a year ago | (#44717059)

If we grab an old basic SSD from Transcend or Super Talent, I guess there are good chances that it does not do compression. I'm not sure if the first-generation Intel drives (X25-V, X25-M) do either. Maybe all new SSDs do compression, that could be.

Re:Why... (0)

Anonymous Coward | about a year ago | (#44720605)

Don't listen to this idiot, many SSDs do not use compression. Actually I'm pretty sure only SandForce does; it is easily shown in benchmarks.

Re:Why... (0)

Anonymous Coward | about a year ago | (#44716027)

I've looked into encryption because I thought it was odd.

The SSD encrypts the blocks of data on the flash. This way, if you switch the controller, it still cannot access the encrypted data.
The SATA connection between the host and the SSD is in clear text.

There is an IDE command to log in to the SSD with a password. This password is used to decrypt the data.
There is a second IDE command that is able to reset the drive fresh so you can reuse the disk (but not access the old data) in case you lose your password.

The drive will forget the password when it powers down, so when you plug it in another host you still need a password to decrypt the data.

If you want to use encryption on the root disk, you can set a password in the BIOS, which will log in to the drive. The password sent by the BIOS consists of the scancodes that are entered; the BIOS does not do correct localisation of the keyboard like a normal OS does.

You can only plug an SSD into another system and access the data if you hotplug the SATA cable but keep the drive powered. This is actually used to restore a drive when the BIOS had entered scancodes as passwords and the BIOS doesn't have a utility to reset the password.

Re:Why... (1)

0123456 (636235) | about a year ago | (#44718683)

I find it hilarious that a lot even use encryption by default and yet the controller decrypts and spits out the data.

Sigh.

The Intel SSDs encrypt data so you can 'secure wipe' them by just erasing the encryption key, not to make them secure against external attack.

But yes, I guess that's hilarious to some people.

Re:Why... (0)

Anonymous Coward | about a year ago | (#44720839)

Where's -1, wrong?
The only SSD controllers that do data compression are Sandforce.

Re:Why... (0)

Anonymous Coward | about a year ago | (#44716175)

You would want to use compression because compression means that less data is being written to and read from the device, which means in turn higher data rates.

Re:Why... (1)

InfiniteBlaze (2564509) | about a year ago | (#44716371)

No, that means improved longevity. Performing operations on the data before it is written to the storage medium does not improve the speed.

Re:Why... (0)

Anonymous Coward | about a year ago | (#44717229)

Transfer bottlenecks. Why do web servers really like gzip? Because both ends of the pipe tend to have processing to spare and the pipe is constrained. Have a quad-core processor and a single-threaded streaming application? Compression of input and output on other cores will mitigate the bandwidth requirements in the system. x265 uses way too much processing power? It halves the bandwidth at the same visual appearance. Same concept.

Re:Why... (1)

InfiniteBlaze (2564509) | about a year ago | (#44718425)

Thanks. I responded to that aspect of it earlier, but I appreciate your taking the time to remind me of what we're dealing with here. I hope this upcoming hardware is an indication bus speeds will soon catch up to the performance metrics of the devices they are connecting.

Re:Why... (1)

unixisc (2429386) | about a year ago | (#44718853)

would I want to use compression at all, if my goal is speed? If maximizing total capacity is not the concern, I would use none of the drive for compression. I think the point to be taken from this is that Intel is recognizing that storage capacities for SSDs are reaching the point where compression is no longer necessary to make the technology a viable alternative to mechanical drives, and we will now begin seeing the true speed potential of the technology.

Precisely! I'd think the compression would be more needed on the slower media, like HDDs, where you want to transfer less data due to the slow speeds. Here, in SSDs, where speeds are orders of magnitude higher, compression would be less necessary. So if compression is more popular in SSDs than HDDs, it would have more to do w/ the fact that HDDs have higher capacities than SSDs, and hence the need for SSDs to compress what wouldn't be necessary for HDDs.

Re:Why... (1)

qubezz (520511) | about a year ago | (#44721001)

>>would I want to use compression at all, if my goal is speed?

Compression speeds up SSDs, pretty much universally. The speed of reading and writing to the memory cells is limited, but a 200MB/s data transfer speed becomes 300MB/s after the data is compressed/decompressed on the fly. The current generation of Intel drives do use compression in just this way to speed performance (but not to increase apparent size). I cannot see the advantage to disabling any compression as it is currently used with the present flash controller technology.

This is not "overclocking". Intel has used a very bad word to describe user configurable storage parameters, and many /.ers commenting here obviously couldn't RTFA.
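The arithmetic behind that 200 -> 300 MB/s figure, as a tiny sketch: if the flash itself can absorb W MB/s and the data shrinks to a fraction r of its original size before hitting the flash, the host sees roughly W / r (until the SATA link becomes the cap). The numbers below are just the ones from this comment:

```python
def host_visible_mb_s(flash_write_mb_s, compression_ratio):
    """compression_ratio = compressed_size / original_size (1.0 = incompressible)."""
    return flash_write_mb_s / compression_ratio

print(host_visible_mb_s(200, 1.0))    # incompressible data:     200 MB/s
print(host_visible_mb_s(200, 2 / 3))  # shrinks to two thirds:  ~300 MB/s
print(host_visible_mb_s(200, 0.5))    # shrinks to half:         400 MB/s (SATA permitting)
```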

Stacker all over again (2)

Gothmolly (148874) | about a year ago | (#44715787)

It's the ancient tradeoff of CPU vs. IO. When you have more of one than you need, burn it to improve the other.

Overclocking the device that stores your data (0)

Anonymous Coward | about a year ago | (#44715795)

What could possibly go wrong?

I have the perfect term for it! (4, Funny)

FuzzNugget (2840687) | about a year ago | (#44715889)

"Overclocking" is technically a misnomer. It's a sort of tweaking, but it's a bit more than that; we could call it ... twerking!

Re:I have the perfect term for it! (1)

UnknownSoldier (67820) | about a year ago | (#44717807)

When a CPU / GPU / memory runs at a _fixed_ frequency (due to a clock) the term over-clocking is slang and kinda makes sense.

What other term would you use?

Engines typically don't function when over revving.

Re:I have the perfect term for it! (1)

unixisc (2429386) | about a year ago | (#44718875)

Let's say the access time for the flash - which is at the heart of an SSD - is 50ns. If you try to drive it faster than that, it won't give you reliable results, so the extra speed just won't be there. You would need more wait states before you can read from it. And writes would be even worse, since those are in the microsecond range.

Sounds pointless (1)

jones_supa (887896) | about a year ago | (#44715903)

Why would I want to tweak "how much data is used for compression"? If the drive compresses data internally, why not just do compression for all data?

And, all the consumer drives are bottlenecked by the SATA bus anyway.

Can you say... (2)

PopeRatzo (965947) | about a year ago | (#44715937)

"...gimmick?"

As nearly as I can tell (1)

Impy the Impiuos Imp (442658) | about a year ago | (#44716003)

tl;dr Allow users to adjust the compressed vs. uncompressed section sizes. Compressed goes faster, but rewrites a lot more and thus wears it out faster.

This isn't overclocking. It's overprovisioning! (2)

Theovon (109752) | about a year ago | (#44716149)

They're using "overclocking" here as a metaphor, but people seem to take it literally. Overclocking the drive would involve raising voltage and increasing clock speeds. That's probably possible. But what they're talking about appears to be to give the user the ability to influence the amount of overprovisioning on the drive. For an SSD, the physical capacity is larger than the logical capacity. This is important in order to decrease the amount of sector migration needed when looking for a block to erase. From zero, adding overprovisioning will substantially increase write performance, but at a diminishing rate as you add more extra space.

As for compression, it does two things. It allows more sectors to be consolidated into the same page, amplifying the very limited flash write bandwidth. And it effectively increases the amount of overprovisioning. These two mean that more compressible data will have substantially higher write performance and somewhat higher read performance. (Although reads are already fast enough, on many drives, to max out the SATA bandwidth.)

Anyhow, giving the user the ability to tweak overprovisioning seems pretty worthless to me. At best, some users will be able to increase the logical capacity, at the expense of having lousy write performance. Maybe this would help for drives where you store large amounts of media that you write once and read a lot. But how much more capacity could you get? 25%? Another knob might be compression "effort", which trades off compute time against SSD bandwidth. There's going to be a balancing point between the two, and that probably should be dynamic in the controller, not tweaked by users who don't know the internal architecture of the drive. Some writes will take longer than others due to wear leveling, migration, and garbage collection, giving the drive controller more or less free time to spend on compressing data.
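A small sketch of the overprovisioning arithmetic in the comment above; the capacities are made-up round numbers, and the "effective" figure just illustrates how compressible data enlarges the pool of genuinely free flash:

```python
def overprovisioning(physical_gb, logical_gb):
    """Spare flash as a fraction of the user-visible (logical) capacity."""
    return (physical_gb - logical_gb) / logical_gb

def effective_overprovisioning(physical_gb, logical_gb, compression_ratio):
    """Worst case (drive logically full): stored data occupies only
    logical * ratio of the physical flash, so the rest acts like spare area."""
    return (physical_gb - logical_gb * compression_ratio) / logical_gb

print(f"{overprovisioning(256, 240):.0%}")                 # ~7%, typical consumer drive
print(f"{overprovisioning(256, 200):.0%}")                 # ~28%, user gives up capacity
print(f"{effective_overprovisioning(256, 240, 0.7):.0%}")  # ~37%, compressible data helps
```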

Re:This isn't overclocking. It's overprovisioning! (1)

RGladiator (454257) | about a year ago | (#44717493)

Considering SandForce sells controllers to multiple vendors, isn't the only difference between them how they choose to provision the drives? I know there can be hardware differences, but let's say we have two drives with basically the same internals. Let's also suppose that Drive X is faster than Intel's equivalent, but Intel's is cheaper (not likely, but stay with me here). Now you may be able to tweak the Intel drive's settings and get it to match or closely match Drive X for cheaper. That could be a good use.

Or you want to use the consumer level drive in a small business. You don't want to spend the money on the enterprise level drive so you take this new drive, reserve more space for failure (such as dropping the 250GB down to 220GB or something) and continue on.

Overall I agree that this won't be much use to most people but it does have potential for the unusual cases.

Re:This isn't overclocking. It's overprovisioning! (1)

wiredlogic (135348) | about a year ago | (#44717995)

They're using "overclocking" here as a metaphor, but people seem to take it literally.

Because it is a specific technical term that shouldn't be misappropriated for something completely unrelated. This foolishness is what happens when the marketing department steers the ship. Something Intel should have learned their lesson on with the MHz wars and the P4.

Then again maybe I've just been ahead of the curve all these years when I "overclock" a new ext[2-4] partition with the minimum superuser reserved space. I've also taken a liking to "overclocking" my tarballs by switching from gzip to bzip2.

Re:This isn't overclocking. It's overprovisioning! (1)

unixisc (2429386) | about a year ago | (#44718915)

You seem to be using HDD terminology for SSDs, when the analogies simply don't hold. The terms 'sectors' and 'blocks', which mean different things in HDDs, are almost synonymous in SSDs. Essentially, they mean the minimum erasable area that one must erase before one can write to one or more locations within that area. There is no concept of 'sectors within the same page' or anything like it. If the flash device in question supports page-mode reading or programming, that simply defines the area that can be written in a single operation.

Re:This isn't overclocking. It's overprovisioning! (1)

thoromyr (673646) | about a year ago | (#44720237)

Wrong.

Sector: smallest addressable unit of space.

In an HDD this is the same for reads and writes. For an SSD it's the smallest addressable unit, but...

Blocks: no such concept with HDDs. Traditionally there were cylinders, heads and sectors (an addressing scheme), and some folks may have used "block" to refer to a sector, but normally in data storage a block is the smallest addressable unit in a file system, sometimes called a cluster.

For an SSD it's different: it can only write either ones or zeros, not both. By definition, a sector is the smallest addressable unit on the drive: so whatever the smallest data unit that can be written at a time is, is a sector. This can be complicated (as it is for high-capacity HDDs) by the drive lying about its sector size. But wiping is done in groups of pages -- all at once, all data in that "block" is wiped out. Confusingly, this is called a "block", which is also a term from file systems. Joy.

In short, you are backwards and confused. Sector/block are *not* "almost synonymous in SSDs". If you take the SSD "block" (smallest unit that can be erased) and note that, for a spinning-platter drive, sector and *this* notion of a "block" are synonymous, then you realize that the above statement was reversed. Writing to SSDs is much more complicated than to spinning platters due to the separation of writing ones and zeros, the consequent requirement for over-capacity, and the wear that comes from writing to an SSD (whereas frequent writing to a spinning-platter drive keeps it "refreshed").

Re:This isn't overclocking. It's overprovisioning! (1)

unixisc (2429386) | about a year ago | (#44722247)

In SSDs - which ultimately boil down to NAND flash - there is no such thing as 'sectors' the way you defined them above. The smallest addressable unit there is a word, if one is talking about reads. If one is talking about programming, one can program a page, which is smaller than a block. Also, NAND flash has to be erased first (set to all '1's) before any area of it can be programmed. The only type of flash where one can write either '0's or '1's is EEPROM (or E-squared), but those things are on the order of less than 1kbit, not the GB or TB that people are thinking of here.

Writing to SSDs isn't complicated on its own - one can easily just write to wherever the assigned addresses are. It's just that since the endurance of a flash - the number of times a block can be erased - is 10k cycles or less for each block, it is undesirable to have a few blocks repeatedly written to while some other blocks sit totally unused - one would then end up having to discard a flash that still has some perfectly good areas. To avoid that, you have memory management routines (wear leveling) that try to use different areas of the flash and even out the usage, so that the lifetime of the flash can be maximized. But that is different from doing compression, and from using compression techniques to alter write performance. Write performance is a silicon-level issue and would have to be fixed at that end - the SSD can't give a performance higher than that, regardless of whether compression is used or not.
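A toy model of the erase-before-program constraint described here, purely illustrative; real controllers layer wear leveling, logical-to-physical mapping, and garbage collection on top of this:

```python
class FlashBlock:
    """A NAND block: pages can only be programmed after the whole block is erased."""

    def __init__(self, num_pages=64):
        self.pages = [None] * num_pages  # None = erased (all 1s), ready to program
        self.erase_count = 0             # endurance budget: ~10k cycles or less

    def erase(self):
        self.pages = [None] * len(self.pages)
        self.erase_count += 1

    def program(self, page_index, data):
        if self.pages[page_index] is not None:
            raise RuntimeError("page already programmed; erase the whole block first")
        self.pages[page_index] = data

blk = FlashBlock()
blk.program(0, b"hello")
# blk.program(0, b"world")  # would raise: no in-place overwrite on NAND
blk.erase()                 # only way to make page 0 writable again, costs one cycle
blk.program(0, b"world")
print(blk.erase_count)      # wear accumulates per erase, hence wear leveling
```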

Better filesystems for SSD = needed imo (0)

Anonymous Coward | about a year ago | (#44723413)

Filesystems & datastructures (vectors vs. pointer-based structures like linked-lists) that promote contiguous memory *might* help - since the need for "circular disk" patterning isn't required on SSD! Eliminating it would help imo.

* Think about it!

Compression helps me 4 HOW I use ramdisks/ramdrives (detailed below) in hardware - especially for browser caches, logging, print spooling, & temp ops!

(E.G./I.E.-> I 1st did the list below on multiple HDD's to distribute workloads & then software based ramdisks carving system RAM out to do so, but then going "dedicated hardware RAM" instead as shown below)

I found I can "pack more" into the areas on disk structures per (insert measurement here) for faster pickup on loads after seek/access/read cycles from disk.

The compression stage is offset by today's FAST cpus too! The gain, is there, despite the small "hit" in that stage.

I went "True SSD's" for decade++ now though:

Cenatek "RocketDrive" PCI bus, 4gb PC-133 SDRAM circa 2000-2007 iirc, & currently a Gigabyte IRAM PCI-Express bus, 4gb DDR-2 RAM 2008 to present (BOTH can be spanned/striped into a single 16gb unit).

Both = CONSISTENTLY excellent 4 BOTH read + WRITE speeds (flash wasn't so great on write especially for years so I stuck by using these with std. HDD's in the following ways to offload their workloads, effectively speeding them up by making them work less)!

Only reason I haven't done the SAME with Flash RAM based SSD? Concerned about longevity mainly (not proven in REAL YEARS out there still imo is all):

I move this off my WD Velociraptor sata II 10,000 rpm 16mb buffered harddisks driven off a Promise Ex-8350 128mb ECC ram caching raid sata 1/2 controller (which defers/delays writes via said cache, & also lessens physical head movement on disks & this is where I am going to make it even faster via lessening its workloads, read on & reduces fragmentation as well in the same stroke - "bonus") onto my 4gb DDR2 Gigabyte IRAM PCIExpress ramdisk card:

---

A.) Pagefile.sys (partition #1 1gb size, rest is on 3gb partition next - this I didn't do on software ramdrives though)
B.) OS & App level logging (EventLogs + App Logging)
C.) WebBrowser caches, histories, sessions & browsers too
D.) Print Spooling
E.) %Temp% ops (OS & user level temp ops environmental variable values alterations)
F.) %Tmp% ops (OS & user level temp ops environmental variable values alterations)
G.) %Comspec% (command interpreter location, cmd.exe in this case, & in DOS/Win9x years before, command.com also)
H.) Lastly - I also place my custom hosts file onto it, via redirecting where it's referenced by the OS, here in the registry (for performance AND security):

HKLM\system\CurrentControlSet\services\Tcpip\Parameters

(Specifically altering the "DataBasePath" parameter there which also acts more-or-less, like a *NIX shadow password system also!)

* All of which lessen the amount of work my "main" OS & programs slower mechanical hard disks have to do, "speeding them up" by lessening their workload, fragmentation, and speeding up access/seek latency for the things in the list above too.

---

Thus - HDD's concentrate on program &/or data fetches that are still hdd bound (& not kernelmode diskcaching subsystem cached in 4gb of DDR3 system ram here either yet) done on a media that has no heads to move, & thus, more mechanical latency + slower seek/access as you get on hard disks + reduced filesystem fragmentations due to that all, also & it works!

Since 1992 or so I've done the above for performance & efficiency (boosting HDD slower speeds). Again - 1st using separate HDDs (slower seek/access by FAR) & then using software ramdisks per the list above (on a MS-DDK based one I wrote in fact, on how I apply them), even later by applying Software-Based Ramdrives to database work with EEC Systems/SuperSpeed.com on paid contract (which did me VERY WELL @ both Windows IT Pro magazine in reviews, & also MS TechEd 2000-2002 in its hardest category: SQLServer Performance Enhancement & SuperSpeed.com too - since I improved their wares efficacy by up to 40% via programmatic control & tuning programs for them) - The EEC/SuperSpeed.com unit had 1 great thing going for it - mirroring back to HDD to save state of data!).

In the end I found NOTHING BEATS DEDICATED HARDWARE!

(The "flash cached" hybrid HDD's you see now? NOT the much diff. than what I do above really).

APK

P.S.=> IT works &'s proven 4 DB work, terminal servers, virtual machines, + more & only LATELY (past 5++ yrs. or so) have "youth of today" begun applying it 4 "industrial purposes"... apk

I'll wait for underclocking (0)

Anonymous Coward | about a year ago | (#44716655)

I want my SSD running smooth, with a capital smoo.

Give me dead nuts reliability! (3, Insightful)

BoRegardless (721219) | about a year ago | (#44716911)

After you deal with HD & SSD failures, you are only concerned with reliability.

Spare Area - Not Overclocking (2)

adisakp (705706) | about a year ago | (#44717855)

It is not an overclock but the ability to adjust the "spare area". This is the percentage of flash on the drive that is not exposed to the user and is used for garbage collection, write acceleration (by having pre-erased blocks), reduction of write amplification, etc. You can already emulate more spare area on existing drives if you take an SSD and format it to less than its full capacity.

This is the SSD equivalent to short stroking a hard drive [tomshardware.com] .

It's worth noting that higher-performance and enterprise-level drives already have much more spare area, but that results in a tradeoff of capacity for performance. They are just going to let you set this slider between consumer level (maximum capacity per $$$) and performance level (higher performance but less capacity).
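A quick illustration of that capacity-for-performance slider: formatting the drive to expose less of its (assumed) 256 GB of raw flash directly increases the spare-area percentage. Round numbers only:

```python
raw_flash_gb = 256  # assumed raw NAND on the module

for visible_gb in (240, 220, 200, 180):
    spare_gb = raw_flash_gb - visible_gb
    print(f"format to {visible_gb} GB -> {spare_gb} GB spare "
          f"({spare_gb / visible_gb:.0%} of visible capacity)")
```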

Re:Spare Area - Not Overclocking (1)

adisakp (705706) | about a year ago | (#44717875)

FWIW, a larger spare area also increases reliability since there are more free blocks to handle any memory blocks that are bad. Also a larger spare area tends to have an effect on reducing write amplification and reducing redundant data writes during garbage collection -- both of which extends the overall lifetime of the entire drive.

Maxed out bandwidth (0)

Anonymous Coward | about a year ago | (#44719013)

I've always assumed that the biggest performance problem now is that we quickly maxed out the SATA 6Gb/s standard.

It seems like overclocking is just tinkering at the margins, and that a more complete solution is either another rev. of the SATA standard, or else go with PCIe.

Not really overclocking is it? (1)

DJRikki (646184) | about a year ago | (#44719885)

It's just unlocking more of the safety margin for general use. Either way, an OC'd CPU might fall over and you lose an online game - a FUBAR'd overclocked SSD could result in bye-bye all your data.