
Data Storage Leaders Introduce New Wares

simoniker posted more than 10 years ago | from the mmm-wares dept.

Data Storage | 29 comments

louismg writes "Data storage giant EMC announced upgrades to their storage hardware family this morning, and claimed performance increases of 25% to 100%, with increased capacity and disk speeds. This comes two weeks after competitor BlueArc announced Titan, the world's biggest ever NAS box, which claims throughput of 5 Gbps and 256 terabytes in a single hardware file system. How much is enough, and as IT administrators, what is the answer to today's issues - improved hardware, or software?"




FP (-1, Troll)

lcde (575627) | more than 10 years ago | (#8229865)

My hardware is so fast i got first post :)

Re:FP (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#8229925)

Damn, you beat me to it by a full four minutes, you bastard. Despite that, I salute you. Please accept this [] as a token of my appreciation.

First post! (-1, Troll)

Anonymous Coward | more than 10 years ago | (#8229909)

Hello. I wish to bless this first post with a very special link [] .

I have some predictions too... (2, Insightful)

ivan256 (17499) | more than 10 years ago | (#8230019)

I predict that the storage industry will continue to produce boring incremental improvements on archaic paradigms until somebody comes out with something revolutionary. Yes, that was vague and truly deep. Since you probably didn't read the article, here's the spoiler: it's essentially the same thing the author of the story said. Given the history of the industry, you can bet you'll get old and go grey before something revolutionary comes from one of the established players.

Something revolutionary is coming soon [] though.

Re:I have some predictions too... (3, Insightful)

zuzulo (136299) | more than 10 years ago | (#8231832)

Adding the dimension of time to data storage as in the link you provide is hardly revolutionary (cf. CVS and other version control systems). On the other hand, there are some very interesting developments in distributed file and archival systems.

Some of this work is happening in the academic community (OceanStore, et al) and some is happening in the commercial sector (Avamar, Connected, etc etc).

It seems to me that the storage industry is advancing on two main fronts.

First, hardware is getting better and better at a fairly rapid rate. Storage densities, I/O speeds, hardware based data protection are all improving. This area is generally characterized by incremental improvements like you discuss and is where established players like EMC and other hardware players dominate.

Second, the community is in the process of developing software that attempts to handle (index, search, backup, restore, distribute, etc) the exponential growth in amount of data stored.

The difficult problem between the two is the algorithmic one at this time. This is where revolutionary approaches are needed. And, in fact, there are quite a few folks working in this area. More interesting, perhaps, is the number of efforts that have tried and failed to make significant headway.

I agree that there are likely to be revolutionary changes in the software that we use to interact with data, and sooner rather than later.

Re:I have some predictions too... (2, Insightful)

ivan256 (17499) | more than 10 years ago | (#8231941)

Adding the dimension of time to data storage as in the link you provide is hardly revolutionary (cf. CVS and other version control systems).

There have long been snapshotting solutions too; the key difference here is that you can go back to any point in time, and that is truly new. With other version control systems you can only go back to where you manually told them to checkpoint.

As for revolutions in indexing and searching storage, I have yet to see something that's not a new take on an old concept. There are lots of companies with cluster file systems and database filesystems out there. When somebody comes up with one that's more than incrementally better than what we had 20 years ago, and they can make it work, I'd be interested in hearing about it.

Re:I have some predictions too... (1)

Bombcar (16057) | more than 10 years ago | (#8234682)

Have you seen RAIDn from Inostor/Tandberg Data? Multiple drive redundancy is an interesting development.

More info []

Re:I have some predictions too... (1)

shachart (471014) | more than 10 years ago | (#8237773)

Excuse me, but RAID redundancy through an (n,k) Hamming code (n data bits, k extra bits) is hardly interesting, let alone a development. Most other implementations work with (n,1), so they "innovated" and work with (n,k)? Big deal.

Oh, and those 8 years of development you get to hear about when reading the link on their website titled "RAIDn"? I pity their shareholders' nerves.
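For readers who haven't met (n,k) redundancy codes: the (n,1) case the parent treats as table stakes is plain XOR parity. A toy Python sketch (block contents are hypothetical) shows why one parity block lets you rebuild any single lost data block; the (n,k) schemes discussed above generalize this with erasure codes to survive k simultaneous failures:

```python
# (n,1) parity, RAID5-style: the parity block is the XOR of all data
# blocks, so XORing the survivors with the parity recovers a lost block.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three hypothetical data blocks
parity = xor_blocks(data)

# Simulate losing block 1 and rebuilding it from the survivors + parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

Two simultaneous failures defeat this, which is exactly the gap multi-drive-redundancy products aim at.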

suggestions for improvement (0)

Anonymous Coward | more than 10 years ago | (#8230435)

EMC could improve performance and do themselves a favor if they removed the embedded WinNT 4 OS from their Clariion brand. Even though it's specialized hardware, it will still BSOD.

And yet, strangely, 100% speed increase of broken (1, Informative)

Anonymous Coward | more than 10 years ago | (#8230649)

still broken. My company is finishing up a particularly nasty lawsuit with EMC now over the crap that they "sold" us. I'd advise anyone in a position to make a purchase for their company to consider all the options before going with EMC. Their products are unfinished and unreliable. Ugh.

Re:And yet, strangely, 100% speed increase of brok (0)

Anonymous Coward | more than 10 years ago | (#8235590)

Let me guess - you bought a Clariion?

The Clariions may suck (we only use them for scratch space), but Symmetrix frames kick serious ass.

The EMC tech may be out three times a week to replace drives (between our hundreds of frames we have a few disks die every week), but we've never lost any data.

Re:And yet, strangely, 100% speed increase of brok (0)

Anonymous Coward | more than 10 years ago | (#8238437)

We're just coming out of a two-disk simultaneous failure with them. Basically their "analysis" says that Veritas Volume Manager made the disks fail.

Yeah. Nice call. Goodbye.

new warez? (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#8230680)

user/pass? ip?

Improved backups.. (3, Insightful)

Sri Ramkrishna (1856) | more than 10 years ago | (#8230928)

What they need is improved backups. I don't give a fig about space if I can't back it up. So maybe someone should be looking at how we're supposed to back up or archive all this stuff. Or are we supposed to keep a warehouse of EMCs around? I'd lay a bet that we're going to need far more serious backup infrastructure than what we have today to keep up.


Re:Improved backups.. (1)

Smallpond (221300) | more than 10 years ago | (#8231684)

Companies are still adding 40%/year to their storage and filling it with what? Mail, Word docs, downloads off the internet.

Instead of better backup, we need intelligent agents that figure out what's a duplicate or an unneeded old version and delete it. That makes better use of the storage you have, and makes it easier to find what you need amidst the clutter.
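A minimal sketch of such an agent's first step, assuming content hashing is an acceptable duplicate test (in practice you'd confirm candidates byte-for-byte before deleting anything):

```python
# Group files under a directory tree by SHA-256 of their contents;
# any group with more than one path is a set of exact duplicates.
import hashlib
import os
from collections import defaultdict

def find_duplicates(root):
    by_hash = defaultdict(list)
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                # Read in 1 MB chunks so huge files don't exhaust memory.
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            by_hash[h.hexdigest()].append(path)
    return [paths for paths in by_hash.values() if len(paths) > 1]
```

Detecting "unneeded old versions" is the genuinely hard part; exact duplicates are the easy win.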

Re:Improved backups.. (1)

spike2131 (468840) | more than 10 years ago | (#8237256)

I used to work for EMC.... This wasn't my division, but if I recall, their preferred backup strategy is not to keep your EMC boxes in the same warehouse, but to have you buy two machines, keep them in geographically separate locations, and have them mirror each other over a Wide Area Network. They have some pretty tight functionality built in to handle the mirroring in real time... it's features like that which make EMC boxes more than just a bunch of disks.

It's also a clever way of getting you to spend twice as much money on their products. They put the "Redundant" back in RAID.

Re:Improved backups.. (1)

Jim_Maryland (718224) | more than 10 years ago | (#8253922)

Having redundant systems is great for protecting against failure or destruction of the devices, but it doesn't really address file corruption or deletion by users. A "snapshot" system may offer some help, but when data retention is an issue, you'll still need to look at long-term backup solutions. As earlier posts have stated, backup of these huge amounts of storage is becoming very difficult.

My understanding of snapshots may be a bit out of date (my latest employer doesn't have storage with this feature), but snapshots probably wouldn't work too well in a situation where single files are GBs in size.

Mirroring is a good failsafe, but it doesn't really offer long-term protection. Snapshots aren't much better, except for short-term recoveries.

The price (2, Informative)

dtfinch (661405) | more than 10 years ago | (#8231338)

BlueArc appears to charge about $100/GB for storage solutions, and claims that its price is less than its competitors'. At first, this looks to me like an insanely high price, because my last hard disk cost $0.88/GB. But after giving some thought to the other hardware involved, I figure I could build an almost equally capable solution for $8-$20/GB, not counting software development costs. But adding the cost of the room to hold it all, plus the insane electrical and air conditioning costs, $100/GB is starting to look fairly reasonable for those who really need what they offer, and need it soon.

Re:The price (0)

Anonymous Coward | more than 10 years ago | (#8237449)

-> $50/GB

BS (0)

Kanasta (70274) | more than 10 years ago | (#8232431)

Why can't I copy a 100 MB file from C:\bob to C:\fred at more than about 5 MB/s?

All these claims of speed, in theory, and I get speeds that wouldn't even max out USB 2.

Re:BS (0)

Anonymous Coward | more than 10 years ago | (#8233255)

Why can't I copy a 100 MB file from C:\bob to C:\fred at more than about 5 MB/s?

Because your C: drive is really slow?

Re:BS (2, Informative)

PurpleFloyd (149812) | more than 10 years ago | (#8233907)

Your problem is because of Windows (or DOS, if you're even more of a masochist). It will tend to move the file in small chunks, so it goes something like this: read a little bit (maybe a few K) from disk, copy it to memory, seek the head to the new location, write that tiny amount back, then go back to the previous location and start over again with a new tiny chunk. As a result, your hard drive's heads are in transit more often than they're reading data, and speeds really suffer. Remember that old versions of DOS and Windows were designed to run on systems with very little memory; this strategy, while slow, also uses very little scratch space.

If you're using Linux and want to copy a lot of stuff from one place to another, you can use dd (a low-level copy tool) and specify a blocksize of a few megs; this means that you will be moving data a few megs at a time, rather than a few K at a time - of course, this means that you have to use that much more memory. Also, I would imagine that Cygwin [] would allow you to use dd under Windows; another option is NTFS, where transfers from one directory to another on a single drive are nearly instantaneous. Of course, then you lose compatibility; while FAT variants are understood by almost all OSes, you will have an unpleasant time trying to mount and use an NTFS volume from anything other than Windows. It's all about tradeoffs, but hopefully something here will help.
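The large-blocksize idea isn't dd-specific. A minimal Python sketch of the same strategy (the function name and the 4 MB buffer are arbitrary choices for illustration, not from any tool mentioned above):

```python
# Copy a file using a multi-megabyte buffer, so the drive does long
# sequential reads and writes instead of thousands of tiny seek-read-
# seek-write cycles.
import shutil

def copy_big_chunks(src, dst, blocksize=4 * 1024 * 1024):
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        # copyfileobj's length parameter is the per-iteration buffer size.
        shutil.copyfileobj(fin, fout, length=blocksize)
```

The dd equivalent would be along the lines of `dd if=src of=dst bs=4M`.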

Re:BS (1)

jbert (5149) | more than 10 years ago | (#8247624)

Because you are copying from a disk to itself.

All the "max bandwidth" figures you see are for streaming reads, where the disk heads move (relatively) smoothly along logically contiguous chunks of disk.

Compare that to copying from one part of the disk to another. Your 100Mbyte file will be copied in chunks. The sequence of events will go something like this at a low level:

while( data left to copy )
move disk heads to offset in file to be read
read a chunk
move disk heads to offset in file to be written
write a chunk

The bit that really costs you is the two seeks. For a disk with an advertised seek time of 10ms, you are paying 20ms per chunk on top of your read+write times.

20ms/chunk == 50 chunks/second. So, 5Mbytes/second would be 100Kbytes/chunk, (assuming the actual read+write are free). [If you meant 5Mbits a second that would be ~12Kbytes/chunk.]
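That arithmetic, spelled out (same assumptions as the parent: 10 ms per seek, two seeks per chunk, and read/write time treated as free):

```python
# Seek overhead alone caps a same-disk copy: each chunk costs two seeks.
seek_ms = 10                   # advertised seek time, per the parent
seeks_per_chunk = 2            # one seek to the read offset, one to the write offset
chunks_per_sec = 1000 // (seek_ms * seeks_per_chunk)   # 50 chunks/second

throughput = 5 * 1024 * 1024   # the observed ~5 MB/s
chunk_size_kb = throughput / chunks_per_sec / 1024
print(chunk_size_kb)           # → 102.4, i.e. ~100 KB per chunk
```

So an observed 5 MB/s is entirely consistent with a ~100 KB copy chunk on a 10 ms-seek drive.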

If you had two disks (on different disk controllers, etc. etc.), the disk heads wouldn't need to seek around much at all, and you'd get much closer to the theoretical bandwidth.

Or if you have a RAMdisk (and enough RAM), you could try:

cp bob/file RAMDISK/file
cp RAMDISK/file fred/file

which should also run at full speed.

Also - note that if you do any other task which involves reading or writing to the disk at the same time, you'll hurt performance even more.

It's not the case that "time taken to perform tasks A and B in parallel" == "time taken to perform task A + time taken to perform task B"; you also pay the cost of switching between them, which is comparatively steep in the case of disk I/O.

Does that make sense? Or Have I Been Trolled? :-)

Re:BS (1)

Kanasta (70274) | more than 10 years ago | (#8255372)

OK, I guess, but this means there's little point making it faster, 'cuz it'll still be bottlenecked by the seeks - which have stayed about the same over the past n years.

Re: Copying one file to another (1)

some guy I know (229718) | more than 10 years ago | (#8258786)

There is also the extra housekeeping that goes on for clearing bits from the freemap, updating the file size in the destination directory entry, etc.
Things like that also contribute to the performance penalty.

Improved hardware or software? (0)

Anonymous Coward | more than 10 years ago | (#8234513)

Well, I will say that our throughput to our EMC Clariion nx600 increased significantly when we installed the latest FC drivers for our QLA2300 cards. We were barely beating 4 MB/sec read/write prior, and that was without enabling hardware volume mirroring. I will also state emphatically that EMC support is shit. An entire ISP was dead in the water for 3 fucking days and they sent us one guy that didn't know shit. We pay a fortune for support contracts. Millions of dollars worth of EMC hardware and it took them 3 days to get someone on the PHONE that knew what they were doing. Any other REAL company would have had half their fucking technical department on a plane to you the next day with a million-dollar client on the line. We were losing 10 grand every hour the damn thing was down. We've already started talking to Network Appliance. That's how quickly you lose customers in this business. I predict EMC will lose a lot of customers.

In Soviet Russia... (-1)

I'm not a script, da (638454) | more than 10 years ago | (#8236659)

leader ware you!

Thought..... (1)

Shiek2BGeek (748442) | more than 10 years ago | (#8251773)

While the industry..... and consumers.... spend billions a year on R&D for larger storage devices/solutions and more secure ways to store data without losses, has anyone considered making the data SMALLER? Unlimited hours are going into encryption algorithms every year, but most of the people I've seen out there are still using WinZip and other useful but not too impressive compression utils. MP3 made audio better at a smaller (data) cost, MPEG for video, etc..... what about the rest of the crap on your drive? Is it not possible to keep a compressed/simplified version of files on drives/backups and reinflate them when needed for operation?
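The idea already exists in general-purpose form; here is a minimal Python sketch using zlib (standing in for whatever a real transparent-compression filesystem would use), showing the store-small/reinflate-on-demand round trip:

```python
# Store a compressed copy, "reinflate" on demand. Unlike MP3/MPEG,
# general-purpose compression must be lossless, which is why its
# ratios on arbitrary data are far more modest.
import zlib

original = b"the rest of the crap on your drive " * 100
stored = zlib.compress(original, level=9)
assert len(stored) < len(original)   # repetitive data shrinks a lot

restored = zlib.decompress(stored)
assert restored == original          # byte-for-byte identical
```

The catch, and the likely reason WinZip-style tools haven't displaced bigger disks, is that already-compressed data (media, archives) barely shrinks at all.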