
Data Storage Predictions for 2008

Zonk posted more than 6 years ago | from the we-all-like-storing-data dept.

Data Storage 81

Lucas123 writes "IDC just released its predictions for 2008 with regard to data storage trends. Its research shows, among other things, a greater adoption of online backup and archiving services, the 'prevalent' use of full-disk encryption in the data center, and mainstream adoption of solid-state disk drives due to falling prices. From the story: 'There are very simple situations and application scenarios where solid-state disks will be worth the risk. It does promise some great potential benefit in terms of I/O ... [and] solid state will make a significant impact on reducing heat from spindle usage in server blade deployments and to boost functionality in mobile devices.' According to IDC, storage capacity is exploding at a rate of almost 60% per year."


Datacenters (4, Insightful)

fishybell (516991) | more than 6 years ago | (#21834552)

I imagine that full-disk encryption for datacenters is a while off, as any drop in I/O throughput will be a non-starter for drives that are already heavily tasked. IMHO, full-disk encryption isn't necessary as long as the datacenter is physically secured; it's enough that all off-site backups be encrypted. Any time data leaves the datacenter it should be encrypted, but encrypting local storage only matters if you fear someone breaking in physically (encrypted disks won't help against a network break-in, since the computer will decrypt the data for the intruder) or if you are selling the disks on eBay afterwards.

Re:Datacenters (1)

networkBoy (774728) | more than 6 years ago | (#21834754)

I'm more interested in the de-duplication deal.
Does anyone know of a good home server that is client-OS agnostic and can do this? We use Connected Net Backup at my work, but it's a bit pricey for my home stuff.
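
The core idea behind de-duplicating backup products like Connected can be sketched in a few lines: store each chunk under its content hash, so identical chunks (across files or across machines) are kept only once. A toy illustration, not how any particular product implements it:

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical chunks are stored once."""
    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}          # sha256 hex digest -> chunk bytes

    def put(self, data):
        """Store a blob; return the list of chunk digests (the 'recipe')."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)   # dedup happens here
            recipe.append(digest)
        return recipe

    def get(self, recipe):
        """Reassemble a blob from its recipe."""
        return b"".join(self.chunks[d] for d in recipe)
```

Two machines backing up the same OS files would then cost one set of chunks plus a recipe per machine.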

Re:Datacenters (5, Interesting)

DaveWick79 (939388) | more than 6 years ago | (#21834800)

While datacenters may be physically secured, they are also sometimes broken into. The last thing a company wants is to have personal information lost because a server was stolen. It may depend on what laws or regulations are put in place for data-security compliance, and it may depend on what type of data the datacenter holds. I can sure see banks, insurance companies, or any company with a large amount of employee data wanting to have that data encrypted at all times.

Re:Datacenters (0)

Anonymous Coward | more than 6 years ago | (#21837948)

Since I rent a server in a datacenter, the last thing I want is somebody extracting the data off the disk when I stop renting the server. Therefore, WDE for me!

Re:Datacenters (1)

poot_rootbeer (188613) | more than 6 years ago | (#21839762)

The last thing a company wants is to have personal information lost because a server was stolen.

Why bother breaking into a server facility -- which typically has several hard-to-circumvent layers of physical security -- when some dumbass C[EFT]O is going to leave a notebook PC full of unencrypted business intelligence on the passenger seat of his Acura?

By the time somebody responds to the OnStar alarm, the window's already smashed and 10 million customer records are compromised.

Re:Datacenters (2)

gh5046 (217974) | more than 6 years ago | (#21835482)

> but encrypting local storage only matters if you fear someone breaking in physically ... or you are selling the disks on eBay afterwards.

Or if a drive fails and is replaced by a vendor. That is, unless the company doesn't want the drives going off-site and is willing to buy new components outright to replace what failed.

Re:Datacenters (1)

jargon82 (996613) | more than 6 years ago | (#21835994)

Many vendors offer a plan for this, where they will honor the warranty and (for a fee) allow you to keep the failed disk. Several of my customers make use of such offerings.

Re:Datacenters (1)

Smallpond (221300) | more than 6 years ago | (#21835600)

Disks fail. With encryption, I can return them for a credit or throw them in the trash. Without, I have to worry about data security. The thing holding it back is not performance, it is key management.

A disk failing doesn't get me fired, but losing a key when the data is perfectly OK -- sitting right there and now forever inaccessible -- will.

Re:Datacenters (0)

Anonymous Coward | more than 6 years ago | (#21836034)

The advantage of encrypting a local disk is multiple:

First, armed data center robberies have become far more common in the past 1-2 years, and this will only increase. For criminals, it is in a lot of cases easier to rob a data center than a bank. Data centers do not have holdup alarms, and their security is not designed for holdups, making them easy prey. A trend with data centers is to have locations that are essentially unmanned, or that have just 1-2 people physically on site to push a reset button. This is a jackpot for armed robbers. Rob a bank and get $1-2k of cash in most cases. Rob a data center, and one not only scores the hardware; the data on the hard disks can be sold for use in extortion or identity theft.

Data center robberies were at one time not profitable, because the hardware was hard to fence. But now the information on the servers is valuable in itself, and the mere loss of the data can threaten a company's existence (few companies actually have an offsite plan in place to get back into production after losing their central data center).

Hard disk encryption is one of the last things protecting data physically on disks in a data center. A crackhead with a 12 gauge would have to find a way to obtain the decryption key and/or start a decryption operation before removing the disks, or else end up with nothing but unsalable hardware. In almost no case (barring an inside job) is an armed robber going to stand around demanding that a local admin on site export all the containers for a database to a tape drive.

WDE used to not be an issue in data centers, but times have changed. Data centers are decentralized and can't afford the security they really need.

Finally, WDE isn't that big a performance hit. Most WDE programs, like PGP or BestCrypt, will have a sector decrypted and passed to the OS before the next sector fills up the buffer in RAM. Hard disk I/O is still limited by the time it takes for a sector to pass under the heads.

Re:Datacenters (1)

socz (1057222) | more than 6 years ago | (#21841882)

The thing I think about is: what happens when this crazy world decides you are doing something naughty, like helping terrorists, and justifies taking your servers and drives to those ends?

The Pirate Bay is the biggest example that comes to mind of someone who had to have their data encrypted. Who cares if the cops take the drives? What matters is that the data is secure, right?

I honestly think anything on a server should be encrypted, but is that really economically sound? I don't know, but it's like insurance: it doesn't matter until something happens, right?

And a 3rd thing (1)

suso (153703) | more than 6 years ago | (#21834568)

that nobody saw coming.....

Re:And a 3rd thing (1)

johannesg (664142) | more than 6 years ago | (#21837706)

The Spanish inquisition?


Forecase: Overcast with clouds increasing (4, Interesting)

pheared (446683) | more than 6 years ago | (#21834606)

I already know some people using the Amazon data cloud technology and I suspect that will increase. I'm a bit leery of putting my data in the hands of Amazon, who have essentially stated before that they will never delete anything they know about you. That probably doesn't exactly apply to this service -- or does it?

Forecast! (1)

pheared (446683) | more than 6 years ago | (#21834630)

... not case.

Re:Forecase: Overcast with clouds increasing (2, Insightful)

tokamoka (859800) | more than 6 years ago | (#21834842)

For the most part, storing personal or sensitive data on Amazon S3 (like backups -- see duplicity) should go hand in hand with encryption (GPG etc.). I carry my laptop in my bag to work, and really do think that the data on it stands a much better chance of being nicked than the encrypted data I have on S3.
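
The encrypt-before-upload pattern the parent describes (duplicity encrypts with GPG locally, so Amazon only ever sees ciphertext) can be illustrated with a toy XOR keystream. This stand-in cipher is for illustration only and is NOT real crypto -- in practice the GPG step does this job:

```python
import hashlib
from itertools import count

def keystream(key, length):
    """Toy keystream from SHA-256(key || counter). Stands in for GPG;
    do NOT use this as real cryptography."""
    out = bytearray()
    for i in count():
        out += hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        if len(out) >= length:
            return bytes(out[:length])

def xor_crypt(data, key):
    """Encrypt locally before upload; XOR again with the same key to decrypt."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```

What lands on S3 is `xor_crypt(backup_bytes, key)`; running the same function over the ciphertext recovers the plaintext, and the key never leaves your machine.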

Re:Forecase: Overcast with clouds increasing (1)

crt (44106) | more than 6 years ago | (#21836482)

Use an Amazon S3 backup tool with built-in encryption like Jungle Disk and you won't need to worry. The fact that you can even use 3rd-party tools says a lot more about Amazon's approach compared to other "cloud" storage providers.

Bigger tubes... (2, Funny)

russlar (1122455) | more than 6 years ago | (#21834612)

...we're gonna need them.

Re:Bigger tubes... (4, Funny)

Red Flayer (890720) | more than 6 years ago | (#21835488)

No, we'll need smaller tubes.

According to IDC, storage capacity is exploding at a rate of almost 60% per year.
No, you've got it backwards -- since only 40% of our storage capacity will be unexploded at the end of next year, we'll need tubes only 0.4 of the size of the current tubes. In 2010, we'll only need tubes 0.064 the size of the current tubes. See where this is headed?

In some 15 years and change, we'll only need microtubes.

In just 23 years, we'll need nanotubes. Let's just hope no one tries to send anything bigger than a picotruck down them.
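
For the pedants keeping score, the shrinking-tube arithmetic above actually checks out. Taking the joke at face value (tubes retain 40% of their size each year), a quick sanity check:

```python
import math

SHRINK = 0.4  # fraction of tube size left after each year of 60% "explosion"

def years_until(scale):
    """Years until tubes shrink to the given fraction of today's size."""
    return math.log(scale) / math.log(SHRINK)

micro = years_until(1e-6)   # microtubes: ~15 years and change
nano = years_until(1e-9)    # nanotubes: ~23 years
```

And 0.4 squared then cubed gives the 0.16 and 0.064 figures for the next couple of years.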

Alternative Future (2, Funny)

Warbothong (905464) | more than 6 years ago | (#21835628)

Or, the RIAA, MPAA et al actually succeed in their worldwide legal battles; thus, without mountains of music and films to consume, home users' data storage use plummets and the floppy disk becomes the dominant format once more. The world begins to use floppy-based Linux distributions (because Vista takes too many disk swaps to install) and thus everyone enjoys a renaissance of console-based system rescue distros, streaming everything they might want through a lynx port of Gnash. Gradually, as more and more features are packed onto the disks, it is realised that a modern form of storage is indeed beneficial. Hence the Zip drive makes a Lazarus-style comeback. Hey, it could happen!

Redundant? (3, Funny)

thatskinnyguy (1129515) | more than 6 years ago | (#21834634)

This article along with all of those who have something to say about backups should be modded "Redundant". After all, what good is a backup solution without redundancy?

Re:Redundant? (2, Interesting)

canuck57 (662392) | more than 6 years ago | (#21835706)

This article along with all of those who have something to say about backups should be modded "Redundant". After all, what good is a backup solution without redundancy?

That whole article sucked.

1) Says absolutely nothing that hasn't been true for 30+ years.
2) Did this come from a random word generator?
3) Object-based storage systems? Maybe, given enough time, but 2008 isn't going to be magical.
4) Yep, we will see very high-end $$$ laptops use solid state, but given the cost, current densities and Moore's law, it's at least 5 more years out.
5) iSCSI? Why not DASD? DASD is still faster. Is EMC paying the bills?
6) Already happened. Think removable disks and USB.
7) Why eat the latency, recovery risk and costs in a secure data center? The TAPE needs securing, not the disks. (They didn't mention laptops -- different story.)
8) Says nothing.
9) Green: they had to find an excuse to say the word. If I buy a new 35W CPU is that green, or if I re-use the 145W heater is that green?
10) Is fluff 'n stuff. Motherhood.

Now a few choice predictions I will make.

1) Think: if your organization has 5000 desktops and each has a spare 100GB, that is 50TB of backup storage that is not used. 2008 will be the year we seriously start to look at distributed disk-to-disk backups.
2) Big one-box storage solutions have maxed out in market penetration; mid-sized and small storage appliances are where the growth is. Disk is cheap and we over-manage it.
3) Disk drive manufacturers will still do very well, as they have the price/performance point. Even a high-end laptop that boots from, say, 64GB of flash will still want an 800GB drive for storage.
4) Disk encryption will be standard in **laptops** for government and many corporations, making some small headway into the consumer market.
5) Your next high-end tape cartridge might be a hard drive with contact points. Same volume, higher density, 10 times as fast, and no tape mechanism to eat tapes. It might even have built-in hardware encryption. 2008 will be a serious start year for this.
6) A realization of what information we need to "dump" and what we really need to keep will grow. While an unsightly mess inside a computer goes unseen, it is nonetheless there. Data retention policies will grow and need more work.

BTW, personally I haven't used tape backup in over 9 years. After spending far too much money on tape transports, tape jams, and longevity/storage issues, I gave up on tape. I've been using disk-to-disk over the network ever since, preferring cpio, Samba, NFS, rsync/rdist, etc. For compression I use gzip in a pipe; for encryption (where I need it), keys on a USB stick and PGP. Works great. And oh yes, I have had to recover. Works like smoke.
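
The parent's pipeline (archive, compress, then encrypt with keys held offline) can be sketched in a few lines. This minimal version uses Python's tarfile with gzip compression and leaves the PGP step as a comment, since key handling is site-specific:

```python
import os
import tarfile

def backup(src_dir, dest_path):
    """Archive and gzip-compress src_dir into dest_path (a .tar.gz)."""
    with tarfile.open(dest_path, "w:gz") as tar:
        tar.add(src_dir, arcname=os.path.basename(src_dir))
    # The encryption step would go here, e.g. piping dest_path through
    # PGP/GPG with the key kept on a USB stick, as the parent describes.

def restore(archive_path, dest_dir):
    """Extract the archive into dest_dir -- the 'had to recover' case."""
    with tarfile.open(archive_path, "r:gz") as tar:
        tar.extractall(dest_dir)
```

Pointing `dest_path` at an NFS or Samba mount gives you the disk-to-disk-over-the-network part for free.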

Re:Redundant? (1)

thatskinnyguy (1129515) | more than 6 years ago | (#21836322)

1) Think: if your organization has 5000 desktops and each has a spare 100GB, that is 50TB of backup storage that is not used. 2008 will be the year we seriously start to look at distributed disk-to-disk backups.
Something like a distributed RAID volume striped over multiple machines?! BRILLIANT!

Re:Redundant? (1)

mr_mischief (456295) | more than 6 years ago | (#21837774)

Add parity and/or redundancy, and consider it a Guinness commercial.

There's really little reason you couldn't load some clustering, redundant filesystem on all of your desktops. Using Linux (and probably some of the BSDs) it'd be pretty easy. Something like AFS or GFS with enough nodes wouldn't even need to be backed up explicitly if you had multiple office sites and configured your redundancies carefully.

Of course, you'd have to make sure your distributed data is only accessible to the proper people in your organization. That could be more difficult on many desktops than on a few servers, but perhaps locking down the desktops enough would suffice. Internal threats are the worst kind, though, and the workers have physical access to their workstations.

Re:Redundant? (1)

canuck57 (662392) | more than 6 years ago | (#21843374)

Something like a distributed RAID volume striped over multiple machines?! BRILLIANT!

Add the words encrypted and redundant to it and you have the idea.

A file system that is like RAID 5, but double-writes each entry to different nodes for added redundancy, so if one dies another picks up. Add the sophistication in the background to dynamically repair around missing nodes and volume segments.

Then the 100GB of unused disk space on 5000 PCs becomes a 500TB disk volume -- say 200TB of double-redundant, reliable, usable space. Or add one 500GB dedicated disk to each PC and make it 1000TB of storage. A lot of backup storage. (Yes, my initial numbers were off, but the point is the same.)

I wish I didn't have to work; it would be neat to write a file system driver/server to do this. Or maybe Google can contribute theirs. I have long suspected Google does not use tape for redundancy.
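
A minimal sketch of that double-write idea, with nodes modeled as in-memory dicts; a real file system would add the encryption, background repair, and rebalancing discussed above, all elided here:

```python
class DistributedStore:
    """Toy store: every chunk is written to two distinct nodes,
    so the loss of any single node loses no data."""
    def __init__(self, num_nodes):
        self.nodes = [dict() for _ in range(num_nodes)]

    def put(self, key, chunk):
        # Pick two distinct nodes by hashing the key.
        a = hash(key) % len(self.nodes)
        b = (a + 1) % len(self.nodes)
        self.nodes[a][key] = chunk
        self.nodes[b][key] = chunk

    def get(self, key):
        # Any surviving replica will do.
        for node in self.nodes:
            if key in node:
                return node[key]
        raise KeyError(key)

    def kill_node(self, i):
        """Simulate a dead or powered-off PC."""
        self.nodes[i] = {}
```

Killing any one node leaves every chunk readable from its second replica; repairing back to two copies in the background is the part that takes real engineering.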

Re:Redundant? (1)

hr raattgift (249975) | more than 6 years ago | (#21844214)

Redundancy isn't the problem. Mirroring writes of something that overwrites good data with bad data is a poor strategy.

Recovery is the problem. When you accidentally delete a file, save bad data on top of an existing file, or a bug or hardware crash strikes and messes important data up, you want to be able to undo that easily.

That is, while your data-scatter idea is fine, the data-gather part needs to work when the user decides the existing version of data is bad, and that a previous version might be good.
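
The data-gather requirement can be made concrete: a store that appends versions instead of clobbering them lets the user decide later that the existing version is bad and recover the previous one (a toy sketch, names invented):

```python
class VersionedStore:
    """Writes append a new version instead of overwriting the old one."""
    def __init__(self):
        self.versions = {}   # filename -> list of contents, oldest first

    def save(self, name, data):
        self.versions.setdefault(name, []).append(data)

    def read(self, name, version=-1):
        """version=-1 is the latest; -2 is the one before, etc."""
        return self.versions[name][version]

    def rollback(self, name):
        """Discard the latest (bad) version, exposing the previous one."""
        self.versions[name].pop()
```

Plain mirroring replicates the bad overwrite everywhere; versioning is what makes the "undo" possible.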


ACK! (0, Offtopic)

9Numbernine9 (633974) | more than 6 years ago | (#21834680)

According to IDC, storage capacity is exploding at a rate of almost 60% per year.

Quick! Someone back up all of the porn before we lose it all!

wish list (2, Interesting)

fred fleenblat (463628) | more than 6 years ago | (#21834682)

Is there a product that fits this description?
  • flash drive, say 64G or so
  • on board ram cache (let's say 1G) that stores most recently accessed files for really fast access.
  • the 1G cache is expandable if you want really high performance
  • modest battery or capacitor, enough to enable write-back instead of write-through
  • USB 2.0, firewire, or eSATA it's all good.
  • doesn't cost significantly more than you'd expect from the above components
When I google for something like this, the closest hits I get are products with spinning platters instead of flash, some horrendously expensive SSDs, and ReadyBoost-branded flash drives.
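
No idea whether such a product exists, but the wished-for behavior -- a small RAM cache fronting slow flash with lazy write-back -- looks roughly like this (storage tiers modeled as dicts; the battery's job is to guarantee flush() completes on power loss):

```python
from collections import OrderedDict

class WriteBackCache:
    """RAM cache in front of slow 'flash'; dirty sectors are written
    back lazily on eviction (write-back), not on every write."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.ram = OrderedDict()   # sector -> data, in LRU order
        self.dirty = set()
        self.flash = {}            # the slow backing store

    def write(self, sector, data):
        self.ram[sector] = data
        self.ram.move_to_end(sector)
        self.dirty.add(sector)
        if len(self.ram) > self.capacity:
            old, val = self.ram.popitem(last=False)   # evict LRU sector
            if old in self.dirty:
                self.flash[old] = val                 # write back on evict
                self.dirty.discard(old)

    def read(self, sector):
        if sector in self.ram:                        # fast hit
            self.ram.move_to_end(sector)
            return self.ram[sector]
        return self.flash[sector]                     # slow miss

    def flush(self):
        """What the battery/capacitor must cover after power loss."""
        for sector in list(self.dirty):
            self.flash[sector] = self.ram[sector]
        self.dirty.clear()
```

Write-through would call `self.flash[sector] = data` on every write; the battery is what makes skipping that safe.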

Re:wish list (3, Informative)

DaveWick79 (939388) | more than 6 years ago | (#21834756)

1. They already exist, but for about $4000 -- for example, here.
2. On-board RAM cache -- it's called Intel Turbo Memory; it's cheap, it's been available on laptops for several months now, and it will soon be on the desktop also. Coupled with Vista ReadyBoost it will do what you want it to, or it can also serve as a high-speed flash RAM drive on which you can install frequently used apps or files.
3. They have them in 2GB also.
For the rest, they already have 32GB flash for a reasonable price (around $300) if you make the comparison to RAM rather than spinning platters.

Re:wish list (1)

fred fleenblat (463628) | more than 6 years ago | (#21834848)

I don't mean a flash drive that functions as a cache to speed something else up; I mean a flash drive that HAS a RAM cache to speed itself up. Sorry if I wasn't clear.

Re:wish list (3, Informative)

DaveWick79 (939388) | more than 6 years ago | (#21834970)

The problem with RAM is that it's volatile, and you'd be screwed if power went out while writing back to that cache. Intel Turbo Memory uses an internal PCI Express slot as its interface, and employs high-speed flash memory. While not as fast as RAM, at least you wouldn't have to keep a battery in it to power it long enough to write the entire contents of a RAM cache back to nonvolatile storage. Besides, if you want a RAM cache, isn't that what the OS does already with RAM? If you want control over what goes into your RAM cache, there are a number of software packages that will create a RAM drive, which you can then load with the data you choose at system startup.

Re:wish list (1)

fred fleenblat (463628) | more than 6 years ago | (#21835132)

My fourth bullet point was that I was willing to pay for a battery, which shouldn't be a big deal since it only has to last long enough to finish the write-back.

Your point that the OS should be doing the caching is a very good one. What started me on this quest is that a certain OS and OS-supplied service my employer uses isn't very good at keeping files cached in RAM. It seems to prefer to let the thread pool fill up all available space in a few minutes, reserving only a small fixed amount for file caching.

Re:wish list (1)

maelstrom (638) | more than 6 years ago | (#21835964)

Just add more RAM to your system, your OS should already be block caching for you...

Re:wish list (4, Informative)

Joe The Dragon (967727) | more than 6 years ago | (#21835042)

At that size, USB 2.0 is out; FireWire is faster and has less CPU load.

Re:wish list (1)

m50d (797211) | more than 6 years ago | (#21835380)

Bollocks is it out. Neither of those factors is significant enough to matter -- and remind me just how common FireWire 800 is, hmm?

Re:wish list (1)

Courageous (228506) | more than 6 years ago | (#21835452)

Not on the market yet.

Doesn't have the RAM, but given its performance figures you shouldn't care (and if you do, let's not forget you're asking it to do what your OS already does). The same goes for your write-back: at 600MB/s, why bother?

They're targeting $30/GB.


Re:wish list (1)

SacredByte (1122105) | more than 6 years ago | (#21835688)

They're targeting 30$/gb.

That's the reason this isn't likely to be widely successful: hard drives can be had for under $0.30 per GB. Let's not forget what R.A.I.D. means: Redundant Array of Inexpensive Disks. 'Redundant array' is important, but 'inexpensive' is crucial. The purpose of a RAID is to achieve the performance of expensive devices like this, but without the expense.
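
The mechanism that lets an array of inexpensive drives tolerate a failure is plain XOR parity (the RAID 3/4/5 approach): the parity block is the XOR of the data blocks, and any one lost block is just the XOR of the survivors. A minimal sketch:

```python
def parity(blocks):
    """XOR equal-length blocks together to form the parity block."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def rebuild(surviving_blocks):
    """A lost block is the XOR of all survivors, parity included."""
    return parity(surviving_blocks)
```

This is why RAID 5 pays for one extra disk's worth of capacity rather than doubling everything the way mirroring does.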

Re:wish list (1)

Courageous (228506) | more than 6 years ago | (#21836118)

14 15K FC won't give you this level of performance, unless you go into RAID-0. I'll discount RAID-0, because it's almost never used in real deployments.

I feel certain that this class of device will appear, and quite soon, in enterprise storage solutions where it will be used as a persistent backing store (cache) in the very RAID arrays that you are talking about. This isn't just guesswork; my position in industry is such that enterprise storage vendors do backflips in order to show me their developing products and roadmaps.

You're losing sight of the forest for the trees if you think the ratio of the costs is the deciding factor for purchase. It's absolutely not. Think it through. If a 64GB device like that were $50, would it bother you to pay for one even if you got (one) 500GB drive for free? The ratio is infinite, but I doubt you'd be bothered, and I feel sure you'd think long and hard about forking over the $50.

You'd think longer and harder if you knew just how hard it is to get reliable, sustained, real-world read rates of 800 MB/s...


Re:wish list (1)

SacredByte (1122105) | more than 6 years ago | (#21837352)

You're missing my point, and casting wild aspersions to boot. My point was simply: I doubt these will become popular as long-term/mass storage devices.

You also assumed my mention of RAID was to say "Just put in a RAID-0, and it'll solve everything." This is nowhere near what I meant to imply. I was using the example of RAID (big, fast, and cheap) to show that you can combine a number of smaller, slower, cheaper disks into one large, fast volume. It is similar to what Cray did on one of his early supercomputers: he went with the lowest bidder and ended up having to rig parts together to combat inconsistencies, and his technique worked. The idea behind RAID is that if you can rig together several smaller, slower, cheaper drives, you can end up with performance to rival much more expensive single disks. I'm not losing sight of the forest for the trees on this one; I'm simply coming at it from a different perspective. This thing may have its uses, but it is NOT a replacement for HDDs -- rather, it is a fast temp cache.

And if I had a choice between, say, one of the new 64GB SATA flash drives for $50 USD and a 500GB 7200rpm Seagate SATA HDD for free, I'd go for the Seagate -- and not solely because of the price. Given a choice between a very small, fast drive that I had to pay for and a much larger drive that wasn't much slower, the combination of high storage capacity and low price would win. I would rather have all my games installed on my hard drive and still have free space than have to constantly scrape for free space just to get a marginal speed improvement.

Your last point is a red herring: you blindly take their word for the read/write speeds this thing can achieve (as far as I can tell; please point me to evidence if I'm wrong here).

As for the RAID level I'd choose? RAID 2. One of the fastest and most robust RAID levels (even though no one does it... hmmm...)

Re:wish list (1)

Courageous (228506) | more than 6 years ago | (#21839960)

And if I had a choice between say, one of the new 64GB SATA flash drives for $50 USD, or a 500GB 7200rpm Seagate SATA HDD for free, I'd go for the Seagate.

There are many buyers who are not like you. The issue is that to many buyers, both $50 and $0 are "free". There is a price-point threshold below which the cost is a non-expense. This is particularly true in enterprise purchasing situations, where processing the paperwork merely to buy an item costs hundreds of dollars. While that only assesses the impact of one whole purchase, these and other factors are very real. They are also real in the consumer space, where many classes of buyers routinely behave exactly the same way. I would happily pay $50 for a device that is 8X faster than a hard drive for reads and 1000X faster for random access. I already have a 500GB hard drive, and it's not full...

I did not imply that you said RAID-0 would solve everything. Rather I am saying that it can be quite difficult to nurse high read/write rates out of RAID structures that don't lose large amounts of data. RAID-5 and RAID-6 won't match the read/write rate of one of fusionio's cards in any configuration from any enterprise seller of storage.

You're right, by the way. I am accepting their read/write rates at face value. This shouldn't trouble you; it's a discussion about a theoretical, not-on-the-market product. Let's just suppose.

It might surprise you to learn that we are considering using their product in a clustered storage solution at their listed price. A couple grand per card does not seem out of line to me, as the backing device for a persistent cache. If the product performs as advertised, of course.

I poked around looking for RAID-2 information. Both striping and some sort of error correcting code are mentioned, but otherwise this solution is not discussed much. Tell me more. What are the read/write performance figures for a group of 10-14 drives in this configuration? And what happens if you lose a whole single drive?


Re:wish list (1)

SacredByte (1122105) | more than 6 years ago | (#21845792)

There are many buyers who are not like you. The issue is that to many corporate buyers, both $50 and $0 are "free".
Fixed it for you...

The reason I would choose a slower 500GB drive for $0 over a faster 64GB drive for $50 is that I place capacity over speed. With the 64GB drive I would (in theory) have improved disk access times, which would mean better performance in software (read: games), but the reduced load times and fractionally higher FPS would be outweighed by the fact that I would have to constantly uninstall software in order to install other software. I would much rather have a drive that can fit all my software at once. Currently I have a 60GB hard drive in my laptop, a 160GB in my desktop, and a 200GB USB external. They all run at 7200rpm, and they all have less than 10% free space.

RAID 6 isn't an official RAID level; it is just the common name for RAID 5 extended to cope with the loss of multiple disks. And since you say we should take these people at their word on possible performance, it is only fair to do the same for current solutions: compare them both based on their maximum theoretical throughput.

RAID 2, in theory, should have the best mix of performance and reliability, but it gets its redundancy from ECC (which most current HDDs do on their own) instead of parity. Another issue is that it requires all the drives to spin synchronously -- something most current drives don't support. Admittedly, I know very little other than what I've read on the subject, most of which is contradictory; some say it is cheap, while others say it is costly. But anyway, in theory it should be quite fast.

Finally, I understand where you're coming from in saying that going through all the red tape makes the $50 USD difference negligible, but price alone isn't the issue: actual real-world performance and capacity matter too.

Re:wish list (1)

Courageous (228506) | more than 6 years ago | (#21845948)

Re your "correction": there are plenty of /consumer/ buyers that are insensitive to price below a certain price point, and for some of them, that price point is well over $50 when it comes to things like consumer electronics. I'd say my own personal wallet would open up for a card of that capability, were it available today, in the $300 range. Honestly.

I see what you're saying about sensitivity to data locality. While there is unfortunately as of yet no solution for this, what's wanted here is "transparent storage migration" (on a block level). Think of a logical volume that is aware of data-access frequency and likewise aware of drive characteristics -- i.e., it just makes sure the blocks are where you need them most. Not a hard algorithm to write; there just needs to be a market need.
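
That migration policy can indeed be sketched in a few lines. Here hot blocks are promoted to a "fast" tier on read, and the coldest fast block is demoted when a hotter one arrives (tiers and the promotion rule invented for illustration):

```python
class TieredVolume:
    """Logical volume that migrates frequently-read blocks to the fast
    tier (think flash) and demotes cold ones to the slow tier (disk)."""
    def __init__(self, fast_capacity):
        self.fast, self.slow = {}, {}
        self.fast_capacity = fast_capacity
        self.hits = {}            # block -> access count

    def write(self, block, data):
        self.slow[block] = data   # new blocks start on the slow tier
        self.hits.setdefault(block, 0)

    def read(self, block):
        self.hits[block] += 1
        self._migrate(block)
        return self.fast.get(block, self.slow.get(block))

    def _migrate(self, block):
        if block in self.fast or block not in self.slow:
            return
        if len(self.fast) < self.fast_capacity:
            self.fast[block] = self.slow.pop(block)
        else:
            # Demote the coldest fast block only if this one is hotter.
            coldest = min(self.fast, key=lambda b: self.hits[b])
            if self.hits[block] > self.hits[coldest]:
                self.slow[coldest] = self.fast.pop(coldest)
                self.fast[block] = self.slow.pop(block)
```

A real implementation would decay the counters over time and batch migrations, but the core bookkeeping really is this simple.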

While I am indeed sure that the "wanted" thing above will see solutions, I can see how they are unlikely to see the consumer light of day. I'd be more likely to expect things like hybrid drives featuring something like fusionio technology internally, as a mega cache of sorts. At the right price point, of course. That leverages all sorts of sensible economies of scale, like being able to plug the drive into your favorite RAID controller.

Be that as it may, consider: is there not some near future, some not-so-far-off day, when software itself would be unlikely to be large enough to fill up a drive like this for many or most users? I think so. Also, there are many classes of work where the job being worked on fits reasonably well on the drive and would benefit: digital image production, various kinds of (small) streaming I/O problems, and so on.

Anyway, I agree that the probable use of these cards is as cache. I'm just saying that there are plenty of applications that would benefit, should the price point be low enough, and there would certainly be buyers, /regardless/ of the ratio of price to bulk storage.

In enterprise space, this is obvious from the bevy of buyers who eat up 15K FC/SCSI/SAS drives just to scrape out every last ounce of IOPS. Those drives are /capacity limited/, man.

In consumer space, it's also obvious. Just look at the enthusiast market.

There's also the workstation space: people with a capital budget who are sensitive to labor hours.

As to whether or not their performance claims are lies, I can't say. At the moment the cards are not available, and I'm not at the point of wanting to test prerelease silicon. What I will say is that if they promise "sustained read rates of 800MB/s" and don't in good faith deliver approximately that, I can arrange some very bad mojo for them. :-)

What I do feel sure of is that it's possible to do what they say. They really just need to do simple striping of the nonvolatile RAM devices on that card. The way they are doing it has its limits (it's scale-limited), but it's also economical. They could have made a 300MB/s SAS form factor, but then all buyers would have to buy a RAID card in addition to their drives. Instead, they've married the "RAID" and "drive" circuitry on one board. Interesting choice. Says a lot about their intended market.

I think that market is this: enthusiasts, and add-ons to enterprise storage products (EMC, NetApp, etc.). They might see some workstation buyers also.


Re:wish list (1)

SacredByte (1122105) | more than 6 years ago | (#21852520)

What I was trying to get across with my correction was that while most consumers have a "price range" in the sense you suggest, below the upper limit of that range they judge based on factors OTHER than price. So because I have need of mass storage, I rate a device with storage capacity an order of magnitude higher than a device with marginally faster I/O speed. I recognize that not all people have the same constraints I do, but I feel confident saying that most people would make the same choice. Some people, however, always go for the cheapest solution, and they often pay the price (read: they buy low-quality items); this is why, when I buy this kind of hardware, I buy from manufacturers I trust not to produce lemons (the exception to this rule being Dell, because their gold business tech support/service is so good).

I don't think the enthusiast market is going to go for this, not because of the monetary cost, but because most people who spend that much money on their computers often have all their expansion slots full of graphics/sound/physics cards; that is, unless they start making enthusiast motherboards with more than 7 expansion slots.

Workstations are where I can see a device like this being useful: setting it up to hold the OS's swap to speed up things like CAD software, game level design software, or DTP software. I know the owner of a print shop local to me, and he understands this: if he skimps on computer hardware, he has to pay an employee to waste time waiting for the hardware. He sees that buying better equipment in the first place (spending around 1500 USD on a workstation instead of 500 USD) pays for itself over time, and because the employee isn't waiting on the hardware/software as much, he is more productive; not only does the owner avoid paying for wasted time, his business can be more productive as a whole.

The market as I see it for this type of product is for workstations as an alternative to having the OS's swap file on the HDD, but only if they come out with smaller versions, say around 16GB....

Re:wish list (1)

Courageous (228506) | more than 6 years ago | (#21858392)

I, too, am annoyed by their minimum size. My guess is that the size they chose has something to do with the base cost of the other circuitry (RAID like hardware) they have present. With smaller sizes, the cost per GB will start to look particularly bad, so they upped the minimum size and are going after premium buyers for now.

BTW, for reference, I have a Dell PERC5e controller and 10 10K 300GB SAS drives. Configured in RAID-5, these drives manage to sustain just over 200MB/s on read. If the device performs as promised, at 700MB/s sustained read and 600MB/s sustained write, this level of performance is truly incredible and will provide for lots of interesting niche solutions at a minimum.

Further, flash prices are dropping 30% annually. That will put the device in range of even more niche solutions pretty quickly.

To me, there is no question about the utility of high-speed flash. There is some real question about whether a board solution for it can survive in the market. I'd rather have a device that can pull 375MB/s sustained, in a SAS form factor; I could then leverage those devices into solutions with ordinary RAID cards if I so chose, with commodity parts and the like.

But I can see the appeal of the card, with the extra headroom of the PCI bus. Do you recall Gigabyte's i-RAM device? There was much criticism that it used the SATA interface rather than the PCI bus, as much of the bandwidth and latency advantage of the RAM was lost to the interface. This would be particularly interesting if you were, say, a vendor of enterprise storage hardware wanting to use the device for your transaction log.


Re:wish list (1)

SacredByte (1122105) | more than 6 years ago | (#21860634)

I don't have time to post a full reply, but from what I've heard Dell's PERC5 cards SUCK. I've heard that Dell takes a perfectly good card and removes just about every useful feature, which requires that you use the drivers provided by Dell instead of those provided by the card maker....

Re:wish list (1)

Courageous (228506) | more than 6 years ago | (#21935600)


A Network Appliance 3000 series (3020) will top out at around 275MB/sec. I have a 3020, a 3050, and a 3070...



Re:wish list (0)

Anonymous Coward | more than 6 years ago | (#21835700)

Plus the babe is totally hot.

Re:wish list (0)

Anonymous Coward | more than 6 years ago | (#21917112)

they changed the picture. new babe is a blonde, but she looks older and more stressed out.

Bullshit (0, Troll)

gweihir (88907) | more than 6 years ago | (#21834688)

With regard to the "prevalent use of full disk encryption".

1) there is no need 2) encryption costs resources

Re:Bullshit (2, Interesting)

canuck57 (662392) | more than 6 years ago | (#21835366)

1) there is no need 2) encryption costs resources

Except for laptops, especially those that belong to governments and corporations. But I do agree about the datacenter; it is useless in a secured area. "IDC serves up a poorly thought-out take on storage trends" should be the title.

Re:Bullshit (1)

SacredByte (1122105) | more than 6 years ago | (#21835434)

Actually, if what another poster says is true, in that this is just a veiled attempt to popularize the technologies companies want to sell, then FDE is a good thing. This is because a company can charge MUCH more to recover data from an encrypted disk than from a non-encrypted one.

Numbers 9 and 10 are red herrings... (2, Insightful)

SacredByte (1122105) | more than 6 years ago | (#21834724)

9. Green storage initiatives will cause companies to seek nondisruptive/partial hardware upgrades.
This assumes that the "environmental cost" of continuing to operate obsolete technology is less than the "environmental cost" of upgrading to more efficient technology. This is not always the case; imagine adding capacity to a PDP-11 to "keep it modern." The cost of powering the equipment more than makes up for any possible environmental ills of replacing it. So basically what they are saying is that next year people are going to start upgrading their computers a little bit at a time instead of chucking them out the window every time Intel, AMD, Nvidia, Dell, HP, Apple, etc. come out with something new. It seems like a good idea; in fact, most sane people already do it.

10. De-duplication, thin provisioning and virtual tape libraries will be in demand because of power saving efforts in the data center.
The issue here is the false assumption that skimping on backups is a good thing. Due to certain high-profile corporate scandals, many companies MUST keep certain records either for a specific term or indefinitely. The problem with TFA's assumption is that the money "saved" by not having multiple backups in multiple locations could come back to bite the company in the ass big-time (huge fines). The reason companies keep multiple off-site backups is simple: the cost of keeping multiple off-site backups is LESS THAN the cost of losing the data.

Re:Numbers 9 and 10 are red herrings... (3, Insightful)

DaveWick79 (939388) | more than 6 years ago | (#21834924)

9. I agree with you - the cost of powering old equipment is going to be the driving force behind hardware upgrades in the next 2 years, not the requirement for more speed and capacity. I don't think people have been upgrading their systems a little bit at a time since the sub-$1000 computer became mainstream. The only systems that are going to be upgraded that way are the systems that are designed for expansion, like servers that are designed for storage expansion or blade-type expansion.

10. I don't think they mean skimping on data backups, they mean de-duplication of unnecessary hardware and not necessarily data backups. For instance not having 2TB of storage on a server when it is only using 100GB - use thin provisioning to give that server access to a dynamic storage volume that gives it only the space it needs. Cut down on duplicate hardware that handles things like backup AD controllers, data backup, etc. and put those tasks on virtual servers. Virtualize your tape libraries with an offsite hard disk backup array. All these lessen the power footprint of your datacenter without lessening the redundancy of your critical data backups.

Re:Numbers 9 and 10 are red herrings... (1)

milsoRgen (1016505) | more than 6 years ago | (#21835332)

I don't think people have been upgrading their systems a little bit at a time since the sub-$1000 computer became mainstream.
Are you talking about data centers doing piecemeal upgrading or, like you said, people? Because if you honestly think people are just buying new sub-$1k systems instead of incremental upgrades... well, let's just say I'd like to see the subdivision you live in!

Re:Numbers 9 and 10 are red herrings... (1)

T-Bone-T (1048702) | more than 6 years ago | (#21836414)

If I could stick an Intel Core 2 Quad Q6600 in my laptop, I'd get 10X more performance with only a ~25% increase in energy use. That just blows my mind.

Re:Numbers 9 and 10 are red herrings... (1)

ckaminski (82854) | more than 6 years ago | (#21840466)

10: Data dedup means single-instance storage. Take that PowerPoint you sent around about the company's revenue results for 2007: instead of 200 copies on the network, only ONE is stored. Or for backups, instead of backing up 200 copies of the same Windows Server 2003 installation, only one is stored to tape. The savings can prove to be immense.

Some products even promise block-level change tracking, so if one page of a Word document changes, only the blocks that changed are copied. Products from Data Domain and others are entering this space.
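The single-instance idea is simple enough to sketch. This is a toy of my own (function name and chunk handling invented for illustration, nothing like a real product's on-disk format): identical chunks are stored once, keyed by a content hash, and every duplicate becomes a cheap reference.

```python
import hashlib

def dedup_store(chunks, store=None):
    """Toy single-instance chunk store: identical chunks are kept once, keyed by hash."""
    store = {} if store is None else store
    refs = []
    for chunk in chunks:
        key = hashlib.sha256(chunk).hexdigest()
        store.setdefault(key, chunk)   # keep the payload only the first time it is seen
        refs.append(key)               # every copy is just a reference to the stored chunk
    return refs, store
```

Feed it 200 identical PowerPoint files chunked the same way and the store holds one copy of the payload plus 200 tiny references, which is where the space savings come from.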

Re:Numbers 9 and 10 are red herrings... (1)

matuscak (523184) | more than 6 years ago | (#21844128)

Indeed. We use the Avamar backup software from (now) EMC, which does block-level deduplication in software running on the client. It really does find just the modified chunks of a file to back up. It's amazing stuff. It makes remote backups and replication over modest-bandwidth WANs really painless.

Re:Numbers 9 and 10 are red herrings... (1)

BBandCMKRNL (1061768) | more than 6 years ago | (#21838846)

This is not always the case; Imagine adding capacity to a PDP-11 to 'keep it modern.' The cost of powering the equipment more than makes up for any possible environmental ills.
When DEC introduced their 'PDP-11/70 on a board' they pretty much obsoleted their existing PDP-11 line. We did a quick analysis and realized that the reduction in power costs from having to power a single board vs. many boards in a cabinet would pay for the upgrade in less than a year.
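That kind of payback analysis is quick to reproduce. Here's a toy version with entirely hypothetical numbers (wattages, upgrade cost, and electricity rate are mine, not DEC's actual figures):

```python
def payback_months(upgrade_cost_usd, old_watts, new_watts, usd_per_kwh=0.10):
    """Months for electricity savings alone to recoup a hardware upgrade (toy model)."""
    saved_kw = (old_watts - new_watts) / 1000.0
    monthly_savings = saved_kw * 24 * 30 * usd_per_kwh  # ~720 hours per month
    return upgrade_cost_usd / monthly_savings

# Hypothetical: a 2000 W cabinet replaced by a 200 W board for 1000 USD.
print(round(payback_months(1000, 2000, 200), 1))  # ~7.7 months: under a year
```

The point survives any reasonable choice of numbers: when the power draw drops by an order of magnitude, the upgrade can pay for itself well inside a year.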

Massive optical storage? (1)

faragon (789704) | more than 6 years ago | (#21834738)

I cannot understand why massive writable optical storage has not been introduced at reasonable prices. Some solutions are born almost outdated: 25GB for a single-sided Blu-ray disc is far from meeting mid-term SOHO necessities. In my opinion, we need to push for 100GB multilayer writable optical media to cover the next four years of home and small-business backup and data distribution needs.

Re:Massive optical storage? (2, Interesting)

SacredByte (1122105) | more than 6 years ago | (#21834814)

I'm not sure I agree with your proposal, but I definitely don't agree with the storage capacity you mention. The issue is that developing technology takes time. What you propose is like planning a new highway for today's needs without realizing that by the time you actually complete construction you still won't have enough capacity.

What you need to do is ask "how much will I need in five years?" and then build that. That said, if the purpose is long-term archival backup of hard drives, anything smaller than 500GB will be nearly useless in five years. Anything much less would be like backing up your RAID array on floppy disks. Eight-inch floppy disks.
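The "build for five years out" arithmetic falls straight out of the ~60% annual capacity growth figure IDC cites in the summary. A minimal sketch (the starting capacity is a made-up example):

```python
def capacity_needed(today_gb, annual_growth, years):
    """Project future capacity need under compound annual growth."""
    return today_gb * (1 + annual_growth) ** years

# At ~60%/year growth, today's 500 GB of data becomes roughly 5.2 TB in five years.
print(round(capacity_needed(500, 0.60, 5)))  # ~5243 GB
```

Which is exactly why a 100GB disc designed today looks cramped before the drives even ship.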

Re:Massive optical storage? (1)

ppz003 (797487) | more than 6 years ago | (#21834864)

I cannot understand why massive writable optical storage has not been introduced at reasonable prices. Some solutions are born almost outdated: 25GB for a single-sided Blu-ray disc is far from meeting mid-term SOHO necessities. In my opinion, we need to push for 100GB multilayer writable optical media to cover the next four years of home and small-business backup and data distribution needs.

What's wrong with tape drives?

Tape drives? (2, Funny)

SacredByte (1122105) | more than 6 years ago | (#21834914)

That's a sticky subject.

Re:Tape drives? (0)

Anonymous Coward | more than 6 years ago | (#21835184)

I'm not going to let you get off scotch-free for that remark.
With puns like that I better duct or else I'll get hit in the head.
Believe me, I don't want to mask the truth.

Re:Tape drives? (1)

SacredByte (1122105) | more than 6 years ago | (#21835372)

People who reply as AC like you did deserve to be sent to the punitentiary for a nice, long stay.

Re:Massive optical storage? (1)

faragon (789704) | more than 6 years ago | (#21835282)

Tape drives are OK, but too expensive for SOHO. The point of cheap optical media is not just storage but also easy delivery.

Re:Massive optical storage? (1)

SacredByte (1122105) | more than 6 years ago | (#21835346)

I guess it depends on who you use to deliver them; Optical media is extremely easily damaged.

Re:Massive optical storage? (1)

WuphonsReach (684551) | more than 6 years ago | (#21844140)

What's wrong with tape drives?

They're finicky. There are too many formats. Not everyone has the same tape drive (and very few folks even have one in the first place). The drives are expensive and the tapes are no bargain either. And going hand-in-hand with the "nobody has one" problem is the issue that if your tape drive dies a few years down the road, you may be SOL at getting data back off of it if you picked the wrong brand.

Then there's the whole access-time issue, and tapes that only last a few uses before they start sucking hard.

I despise tape for SOHO. It's too damn expensive and hard to work with. If the drives cost about 1/4 what they do now and the tapes were about 1/3 current prices, it would be worth dealing with.

Re:Massive optical storage? (1)

Kaell Meynn (1209080) | more than 6 years ago | (#21836142)

Perhaps the holographic storage disks in development which promise 1TB per disk may be useful here. I think they are currently at 300MB, set to be up to 1TB in a few years time.

Predictions, my arse... (5, Insightful)

xxxJonBoyxxx (565205) | more than 6 years ago | (#21834826)

IDC just released its predictions for 2008 with regards to data storage trends. Its research shows...

If you've ever been involved in an IDC, Gartner or whatever marketing discussion, you know that the "research" mainly consists of going from vendor to vendor (data storage vendors in this case) and asking what, in their wildest dreams, would the ideal demand curve look like. Then they charge for actually coming up with some supporting information to meet the vendors' preferred conclusion, and release the whole thing to consumers in the hopes of stimulating some demand for the paying vendors. Very scientific.

Re:Predictions, my arse... (2, Insightful)

dave562 (969951) | more than 6 years ago | (#21835022)

Someone mod this up.

Re:Predictions, my arse... (3, Insightful)

OnlineAlias (828288) | more than 6 years ago | (#21835198)

I have, and you hit it right on the head. IDC, even more so than Gartner in my opinion, are famous for their ridiculous "predictions". Nothing to see here, please move along...

Re:Predictions, my arse... (1)

aggles (775392) | more than 6 years ago | (#21835552)

Stimulating the market is really not how it works at Gartner. There is an element of consumer driven data in the predictions. Not all the predictions turn out to be accurate, but they've been at it for over a decade and have an impressive history to help you calibrate the quality of their market projections.

Re:Predictions, my arse... (1)

sgtrock (191182) | more than 6 years ago | (#21836948)

Let me guess...

You work for Gartner, right? ;)

Inertia will keep its hold... (1)

mi (197448) | more than 6 years ago | (#21835122)

I've written a wonderful (in my opinion, anyway) plugin for Sybase's backup-server. It allows one (among other things) to send the dumps over to outside backup providers immediately — without waiting for the dump to complete. One can also do on-the-fly encryption and not worry about unencrypted data remaining on disk. Etc., etc.

The price is low (compared to the cost of even a single Sybase installation), and yet I have sold fewer than a handful of licenses in 8 months, plus a few given away to qualified professionals. Inertia rules; there is no other explanation. Well, you may suspect my plugin just sucks, but I know it does not...

My prediction; high profile data loss (4, Funny)

Kris_J (10111) | more than 6 years ago | (#21835680)

In 2008 some twit with a soapbox (magazine column, TV show, whatever) will lose 3TB or more in a single failure and rant about how digital is so much worse than analogue. I bet he'll mention Laserdiscs in there somewhere and possibly The Domesday Book if he's from the UK.

Re:My prediction; high profile data loss (1)

DarkEmpath (1064992) | more than 6 years ago | (#21838148)

Yep, and I'll bet $10 it's John C. Dvorak.

The Macintosh uses an experimental pointing device called a "mouse". There is no evidence that people want to use these things.
- John C. Dvorak, SF Examiner, Feb. 1984

When I hit Ctrl-Alt-Delete, I see that the System Idle Process is hogging all the resources and chewing up 95 percent of the processor's cycles. Doing what? Doing nothing?
- John C. Dvorak, PC Magazine, 29th Sept, 2003

The man is a retard.

Blades? (1)

HockeyPuck (141947) | more than 6 years ago | (#21835970)

If you're looking to use blades, get the storage OUT of the blade and onto the SAN. Otherwise tools like VMotion are a waste.

In 2008 enterprises are continuing to move to boot from SAN.

Virtual servers drive iSCSI? (1)

jombee (111566) | more than 6 years ago | (#21836194)

Only in limited cases will "5. Virtual servers will become an ideal conduit for iSCSI" hold true. Virtual host servers with a reasonable consolidation ratio of production enterprise servers may stress 1Gb/s iSCSI. A SAN with both Fibre Channel and iSCSI capability is great for leveraging iSCSI to connect *non-virtual* and/or test/dev servers cost-effectively, but in my TCO calculations 4Gb/s Fibre Channel is a better choice for production virtual host servers. Once 10Gb/s iSCSI becomes less expensive and available in mid-tier SANs, it may begin to drive iSCSI for production virtual servers, but so will faster Fibre Channel. The trade-rag rhetoric on iSCSI lately has overreached.


Re:Virtual servers drive iSCSI? (1)

jabuzz (182671) | more than 6 years ago | (#21838814)

I think FCoE is more useful in the data centre than iSCSI: none of the TCP/IP overhead, and all the cheapness of Ethernet. The thing is that, long term, 10GbE is going to eat FC alive on cost.

My predictions (2, Insightful)

Markus Landgren (50350) | more than 6 years ago | (#21837100)

I predict that drives will get bigger, and that many Slashdotters who have not heard of wear levelling will worry about the limited write cycles of flash and get modded Insightful for it.
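For anyone who hasn't heard of it: wear levelling just means the controller never hammers the same physical cell. A minimal toy sketch (class and policy invented for illustration; real controllers also track occupancy, free pools, and static data):

```python
class WearLeveler:
    """Toy dynamic wear levelling: each logical rewrite lands on the least-worn physical block."""

    def __init__(self, n_physical):
        self.erase_counts = [0] * n_physical
        self.mapping = {}  # logical block -> physical block

    def write(self, logical):
        # Pick the physical block with the fewest erases so wear spreads evenly.
        phys = min(range(len(self.erase_counts)), key=lambda i: self.erase_counts[i])
        self.erase_counts[phys] += 1
        self.mapping[logical] = phys
        return phys
```

Rewrite the same logical block a thousand times and the erases are spread across all the physical blocks, which is why the per-cell write-cycle limit is far less scary than it sounds.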