storagedude (1517243) writes "With Amazon Web Services losing $2 billion a year, it’s not inconceivable that the cloud industry could go the way of storage service providers (remember them?). So any plan for cloud services must include a way to retrieve your data quickly in case your cloud service provider goes belly up without much notice (think Nirvanix). In an article at Enterprise Storage Forum, Henry Newman notes that recovering your data from the cloud quickly is a lot harder than you might think. Even if you have a dedicated OC-192 channel, it would take 11 days to move a petabyte of data – and that’s with no contention or other latency. One possible solution: a failover agreement with a second cloud provider – and make sure it’s legally binding." top
Blogger starts Whitehouse.gov petition to fight data breaches
Writing on InfoStor, Henry Newman said the security problem ‘is one hundred percent solvable with the right amount of motivation and the right amount of resources.’ His petition requests that if organizations with more than 1,000 employees fail to protect data, 'the organization becomes responsible for that loss with no exclusions and no liability limits.'"
Data archiving standards need to be future-proofed
storagedude (1517243) writes "Imagine in the not-too-distant future, your entire genome is on archival storage and accessed by your doctors for critical medical decisions. You'd want that data to be safe from hackers and data corruption, wouldn't you? Oh, and it would need to be error-free and accessible for about a hundred years too. The problem is, we currently don't have the data integrity, security and format migration standards to ensure that, according to Henry Newman at Enterprise Storage Forum. Newman calls for standards groups to add new features like collision-proof hash to archive interfaces and software.
'It will not be long until your genome is tracked from birth to death. I am sure we do not want to have genome objects hacked or changed via silent corruption, yet this data will need to be kept maybe a hundred or more years through a huge number of technology changes. The big problem with archiving data today is not really the media, though that too is a problem. The big problem is the software that is needed and the standards that do not yet exist to manage and control long-term data,' writes Newman."
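To make the 'collision-proof hash' idea concrete, here is a minimal sketch of the kind of fixity check an archive interface could expose; SHA-256 is used as one example of a collision-resistant hash, and the file path is hypothetical.

```python
# Store a collision-resistant digest alongside each archived object and
# re-verify it on every read or migration to catch silent corruption.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path, expected_digest):
    """Return True if the archived object still matches its recorded digest."""
    return sha256_of(path) == expected_digest

# At archive time: record the digest with the object's metadata.
# recorded = sha256_of("genome_objects/patient_0001.bam")   # hypothetical path
# Decades later, after any media or format migration:
# assert verify("genome_objects/patient_0001.bam", recorded)
```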
He writes: 'With next-generation technology like non-volatile memories and PCIe SSDs, there are going to be more resources in addition to the CPU that need to be scheduled to make sure everything fits in memory and does not overflow. I think the time has come for Linux – and likely other operating systems – to develop a more robust framework that can address the needs of future hardware and meet the requirements for scheduling resources. This framework is not going to be easy to develop, but it is needed by everything from databases and MapReduce to simple web queries.’"
'[F]lash technology and SSDs cannot yet replace HDDs as primary storage for enterprise and HPC applications due to continued high prices for capacity, bandwidth and power, as well as issues with reliability that can only be addressed by increasing overall costs. At least for the foreseeable future, the cost of flash compared to hard drive storage is not going to change.'"
'The oldest drive in the list is the Seagate Barracuda 1.5 TB drive from 2006. A drive that is almost 8 years old! Since it is well known in study after study that disk drives last about 5 years and no other drive is that old, I find it pretty disingenuous to leave out that information. Add to this that the Seagate 1.5 TB has a well-known problem that Seagate publicly admitted to, it is no surprise that these old drives are failing.'"
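For context on how drive-failure statistics are normally adjusted for age and population size, here is a minimal sketch of an annualized failure rate (AFR) calculation; the sample numbers are made up for illustration.

```python
# AFR normalizes observed failures by the total exposure time of the
# drive population (drive-days), so old and new fleets can be compared.

def annualized_failure_rate(failures, drive_days):
    """AFR as a percentage: failures per drive-year of exposure."""
    drive_years = drive_days / 365.0
    return 100.0 * failures / drive_years

# e.g. 120 failures over 400,000 drive-days -> roughly an 11% AFR
print(annualized_failure_rate(120, 400_000))
```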
Tulips, Dot-coms and SANs: Why SSD Merger Mania Won't Work
storagedude (1517243) writes "Texas Memory and IBM; Cisco and Whiptail; STEC, Virident and WD: the storage industry seems to be in full merger mania over SSDs, but Henry Newman at Enterprise Storage Forum doesn't think the current mania will work out any better than any other great mania of history. Not Invented Here opposition by acquiring engineering teams and the commodity nature of SSDs will make much of the money poured into SSD companies wasted, he says.
'I seriously doubt that the STEC Inc. technology will be seen in HGST/WD SSDs, nor do I think that Virident PCIe cards will be commoditized by HGST/WD to compete with LSI and others,' writes Newman. 'A Whiptail system will likely be put into a Cisco rack, but it’s not like Intel and Cisco are the best corporate partners, and we will likely see other SSDs put into the product.... It all comes down to what I see as 'the buying arms race.' Company X purchased some SSD company so company Y needs to do the same or they will not be considered a player.'"
Software-defined data centers might cost companies more than they save
storagedude (1517243) writes "As more and more companies move to virtualized, or software-defined, data centers, cost savings might not be one of the benefits. Sure, utilization rates might go up as resources are pooled, but if the end result is that IT resources become easier for end users to access and provision, they might end up using more resources, not less.
storagedude (1517243) writes "10 Gigabit Ethernet may finally be catching on, some six years later than many predicted. So why did it take so long? Henry Newman offers a few reasons at Enterprise Networking Planet: 10GbE and PCIe 2 were a very promising combination when they appeared in 2007, but the Great Recession hit soon after and IT departments were dumping hardware rather than buying more. The final missing piece is finally arriving: 10GbE support on motherboards.
'What 10 GbE needs to become a commodity is exactly what 1 GbE got and what Fibre Channel failed to get: support on every motherboard,' writes Newman. 'The current landscape looks promising. 10 GbE is starting to appear on motherboards from every major server vendor, and I suspect that in just a few years, we'll start to see it on home PC boards, with the price dropping from the double digits to single digits, and then even down to cents.'
storagedude (1517243) writes "You may have one of the best-known and respected brands in cloud computing, but that may not matter much when it comes time for RFPs, according to a new survey of IT buyers from Palmer Research/QuinStreet. A third of respondents view big names like Google, Amazon and Microsoft very favorably, yet at RFP time, less than 10% of those names get asked for formal proposals. It could be a sign that the cloud is a wide-open market that's up for grabs, as buyers seem much more interested in basics like reliability, technology expertise, pricing, maintenance and customer service, according to the survey. Oh, and trialware doesn't hurt either." Link to Original Source top
storagedude (1517243) writes "The idea of holographic storage is 50 years old, but it's never become a commercial reality. That may be about to change, according to Henry Newman at InfoStor, who reports on a big breakthrough by Hitachi announced at the IEEE Mass Storage conference this week.
'If the information provided is accurate, then I would consider this the first disruptive technology to hit the storage industry in a very long time,' writes Newman."
Sequencing an individual's genome produces about 1.5 GB of data. With 12.5 million cancer patients in the U.S., it would take just under 19 PB to store all of their genomes. Then you need to sequence each patient's cancer as well, which would take 20 to 200 times more storage than that. Throw in all the other diseases that could potentially be treated with in silico analysis, and you have one heck of a Big Data problem.
Writes Henry Newman on Enterprise Storage Forum: 'We are on the brink of having the technology and methods to be able to detect and treat many diseases cost-effectively, but this is going to require large amounts of storage and processing power, along with new methods to analyze the data.... Will we run out of ideas, or will we run out of storage at a reasonable cost? Without the storage, the ideas will not come, as the costs will be too high.'"
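A quick check of the storage arithmetic in that summary, assuming the decimal convention of 10^6 GB per PB:

```python
# Only the 1.5 GB/genome, 12.5 million patient, and 20x-200x figures come
# from the summary; the 1 PB = 10**6 GB convention is an assumption.
genome_gb = 1.5
patients = 12_500_000

germline_pb = genome_gb * patients / 1_000_000    # ~18.75 PB, "just under 19 PB"
tumor_low_pb = germline_pb * 20                   # ~375 PB
tumor_high_pb = germline_pb * 200                 # ~3,750 PB (about 3.75 EB)

print(germline_pb, tumor_low_pb, tumor_high_pb)
```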
With storage getting 'dumb,' what's an admin to do for job security?
storagedude writes "Storage appliances are solving the age-old problem of storage management complexity — and in the process, endangering the jobs of storage admins. So what's an admin to do? Henry Newman at Enterprise Storage Forum has a suggestion: Get into complex data analysis. From the article:
'The storage complexity problem for file systems has been mostly solved. There are still a few hard problems out there, but not as many as there used to be.
'However, there is a new and even more complex set of problems right in front of us. These will require a deep understanding of what the users need to do with the storage and how they plan to access the data to create information on which actionable decisions can be made. These jobs are going to be high paying and require a broad set of skills. But the skills will be different than the current skills required for SAN and NAS and even the other types of appliances that are out there. Those involved are going to have to work directly with the application developers and users.
'Come to think of it, this sounds a great deal like 1996 and 1997 when SAN file systems started to come out. Those of us involved then had to talk with everyone up and down the stack to get things going quickly and efficiently. I believe the same approach is needed today.'"
storagedude writes "With vague claims of data 'durability' of 11 nines and costs of a penny a gigabyte, Amazon is making assertions about its Glacier cloud storage system that, at a minimum, need to be explained or clarified, writes Henry Newman at Enterprise Storage Forum. 'Is durability the data integrity of the file, or does durability mean the availability?' asks Newman. 'Does the claim mean that you get 11 nines if the file does not disappear because the storage fails? Is this a guarantee? What does average durability mean?'" Link to Original Source top
storagedude writes "Google could go the way of the dodo if ultra intelligent electronic agents (UIEA) make their way into the mainstream, according to technology prognosticator Daniel Burrus. Siri is just the first example of how a UIEA could end search as we know it. By leveraging the cloud and supercomputing capabilities, Siri uses natural language search to circumvent the entire Google process. If Burrus is right, we'll no longer have to wade through "30,000,000 returns in.0013 milliseconds" of irrelevant search results." Link to Original Source top
storagedude writes "Vendor SSD performance claims may not matter much beyond a certain point, as other limitations like the file system and kernel may limit performance long before theoretical drive limits are reached, according to this analysis from HPC storage specialist Henry Newman. 'If you are doing single threaded operations, the limiting factor is going to be the time it takes to do the I/O switching between the user and the kernel. For file system operations like find and fsck, I think the difference between having a 100,000 IOP SSD and a 1 million IOP SSD likely does not matter.... SSDs are a great thing, and I suspect that in the near future we will see changes to operating systems to allow them to work more efficiently.'" Link to Original Source top