
Comments


Hard Drive Relaibility Study Flawed

storagedude Typo in headline (1 comments)

Misspelled "reliability." Sorry!

about 9 months ago

LinuxDevices.com Vanishes From the Web

storagedude Re:WayBack machine.. (69 comments)

Agreed. The site hasn't been archived successfully since June 2009. Did Dice fire all the competent Slashdot editors?

about a year and a half ago

House Calls For Hearing On Stock Market "Glitch"

storagedude What everyone keeps missing... (180 comments)

...is the role of the patchwork system of individual stock circuit breakers. When the NYSE briefly halted trading in a few stocks like PG, those stocks plunged in electronic trading elsewhere. No doubt high-frequency trading made it all worse, but can we fix the obvious, simple problem first?

http://online.wsj.com/article/SB10001424052748703338004575230440147772822.html?mod=WSJ_hpp_LEADNewsCollection

more than 4 years ago

Stock Market Sell-Off Might Stem From Trader's Fat Finger

storagedude Re:Uptick rule seems to have provided stability (643 comments)

Congratulations on surviving 2007-2008. I did too.

Now that we're past that: the market has fundamentally changed in the last three years and is by any measure the most volatile in 70 years. I'm not advocating banning short-selling, but the timing of the uptick rule's removal sure is interesting (a removal based on "academic" studies, by the way). Maybe the problem is algo trading, which has more than doubled in the last three years. Whatever it is, technology, trading vehicles and leverage seem to have outstripped our ability to anticipate the worst-case scenario; at the very least, their combination with good old-fashioned greed and fear has proven potent.

And my other point has been lost in this - the NYSE individual stock circuit breaker system has got to be fixed. Pronto. Why there aren't consistent rules for all exchanges is mind-boggling.

more than 4 years ago

Circuit Breakers, Electronic Trading Cause Selloff

storagedude High-frequency trading under attack (1 comments)

Looks like high-frequency algo trading will also come under attack as a result of this... interesting, since program trading was blamed for the 1987 crash. But it seems to have been a combination of a few things: trading errors tripped NYSE circuit breakers; the NYSE halts created a vacuum for stocks to drop on electronic exchanges; and high-frequency trading probably played a role in there too.

http://www.reuters.com/article/idUSN0624451020100507

http://voices.washingtonpost.com/economy-watch/2010/05/lesson_of_todays_stock_market.html?hpid=topnews

more than 4 years ago

Stock Market Sell-Off Might Stem From Trader's Fat Finger

storagedude Uptick rule seems to have provided stability (643 comments)

The uptick rule was instituted at the end of the last 60% bear market (1937-1938) and removed at the start of the next one (2007-2008). Without banning short selling outright, it seems to have contributed to stability, or at least made declines more orderly. That makes logical sense - if you can't pile onto a falling market, selling pressure has to build more gradually.

1987 was about computerized program trading, or at least that's the most common explanation - which seems to have been a contributing factor today too (high-frequency algo trading).

Let's face it, traders are better armed and funded; the best regulators can do is clean up the last mess. Sure would be good if they actually got in front of something for a change...

more than 4 years ago

Stock Market Sell-Off Might Stem From Trader's Fat Finger

storagedude The problem is the NYSE circuit breaker system (643 comments)

The brief halts on the NYSE when stocks fall 10% allow for big moves on low volume elsewhere, where the stocks continue to trade electronically. Hence Procter & Gamble was halted at $56 on the NYSE, fell to $39 elsewhere, and then reopened back near $56 on the NYSE. That's what really triggered that 15-minute, 7% decline in the market, and it's the real culprit that needs fixing here - we either need a real circuit breaker system or the good old-fashioned uptick rule brought back.

http://www.internetnews.com/bus-news/article.php/3880681

more than 4 years ago

Apple just became bigger than Microsoft

storagedude It's disputed (1 comments)

A number of data services give Microsoft a $275B market cap to Apple's $242B. And Microsoft's sales and earnings are still greater than Apple's.

Not saying it won't happen eventually, but it seems a little premature...

more than 4 years ago

Data Domain takeover battle a tech culture clash?

storagedude What kind of company is better to work for? (1 comments)

A quote from Wedbush Morgan analyst Kaushik Roy: "I would rather be at an evil, arrogant, aggressive company than in a 'nice' company such as IBM."

more than 5 years ago

Digitizing Literary Treasures Leads to New Finds

storagedude Works on Previously Unreadable Manuscripts (1 comments)

The technique also appears able to read manuscripts that were previously illegible because of their poor condition.

more than 5 years ago

Submissions


If your cloud vendor goes out of business, are you ready?

storagedude storagedude writes  |  about two weeks ago

storagedude (1517243) writes "With Amazon Web Services losing $2 billion a year, it’s not inconceivable that the cloud industry could go the way of storage service providers (remember them?). So any plan for cloud services must include a way to retrieve your data quickly in case your cloud service provider goes belly up without much notice (think Nirvanix). In an article at Enterprise Storage Forum, Henry Newman notes that recovering your data from the cloud quickly is a lot harder than you might think. Even if you have a dedicated OC-192 channel, it would take 11 days to move a petabyte of data – and that’s with no contention or other latency. One possible solution: a failover agreement with a second cloud provider – and make sure it’s legally binding."

Blogger starts Whitehouse.gov petition to fight data breaches

storagedude storagedude writes  |  about two weeks ago

storagedude (1517243) writes "A blogger is calling for an end to liability limits for companies that expose users' personal and financial information, saying that 'Only when the cost of losing data exceeds the cost of protecting data will anything likely change.'

Writing on InfoStor, Henry Newman said the security problem 'is one hundred percent solvable with the right amount of motivation and the right amount of resources.'
His petition requests that if organizations with more than 1,000 employees fail to protect data, 'the organization becomes responsible for that loss with no exclusions and no liability limits.'"

Data archiving standards need to be future-proofed

storagedude storagedude writes  |  about a month ago

storagedude (1517243) writes "Imagine in the not-too-distant future, your entire genome is on archival storage and accessed by your doctors for critical medical decisions. You'd want that data to be safe from hackers and data corruption, wouldn't you? Oh, and it would need to be error-free and accessible for about a hundred years too. The problem is, we currently don't have the data integrity, security and format migration standards to ensure that, according to Henry Newman at Enterprise Storage Forum. Newman calls for standards groups to add new features like collision-proof hash to archive interfaces and software.

'It will not be long until your genome is tracked from birth to death. I am sure we do not want to have genome objects hacked or changed via silent corruption, yet this data will need to be kept maybe a hundred or more years through a huge number of technology changes. The big problem with archiving data today is not really the media, though that too is a problem. The big problem is the software that is needed and the standards that do not yet exist to manage and control long-term data,' writes Newman."
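
To make the fixity idea concrete, here's a minimal sketch of the kind of integrity check an archive interface could standardize, using SHA-256 as a stand-in for whatever collision-resistant hash a standard would mandate (the file path is hypothetical):

```python
# Fixity-checking sketch: hash an object at ingest, then re-verify the hash
# on every read or media migration to catch silent corruption. SHA-256 stands
# in for whatever collision-resistant hash an archive standard would mandate.
import hashlib

def fixity_hash(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical archive object: store the digest with the object's metadata
# at ingest, then recompute and compare on access decades later.
ingest_digest = fixity_hash("archive/genome_0001.dat")
if fixity_hash("archive/genome_0001.dat") != ingest_digest:
    raise RuntimeError("silent corruption detected")
```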


TrueCrypt gets a new life, new name

storagedude storagedude writes  |  about a month ago

storagedude (1517243) writes "Amid ongoing security concerns, the popular open source encryption program TrueCrypt may have found new life under a new name, reports eSecurity Planet. Under the terms of the TrueCrypt license — which was a homemade open source license written by the authors themselves rather than a standard one — a forking of the code is allowed if references to TrueCrypt are removed from the code and the resulting application is not called TrueCrypt. Thus, CipherShed will be released under a standard open source license, with long-term ambitions to become a completely new product."

The evolution of PTSD treatment since WWII

storagedude storagedude writes  |  about a month ago

storagedude (1517243) writes "In the course of writing an article on my father’s WWII experiences, it was interesting to note how PTSD treatment has evolved since then. For a crippling case of PTSD, my father received “sedation and superficial psychotherapy,” according to his military records, which seems to have been the standard practice of the day (and better than the lobotomies inflicted on roughly 2,000 soldiers).

Fast forward to today. A number of treatments have been developed that have had some success reducing the symptoms of PTSD. And a new book by former Washington Post Magazine editor Tom Shroder has noted some success from controlled treatment with psychedelic substances. PTSD is notoriously resistant to treatment, so it is encouraging to see new avenues explored, however taboo."

Linux Needs Resource Management for Complex Workloads

storagedude storagedude writes  |  about 3 months ago

storagedude (1517243) writes "Resource management and allocation for complex workloads has been a need for some time in open systems, but no one has ever followed through on making open systems look and behave like an IBM mainframe, writes Henry Newman at Enterprise Storage Forum. Throwing more hardware at the problem is a costly solution that won’t work forever, notes Newman.

He writes: 'With next-generation technology like non-volatile memories and PCIe SSDs, there are going to be more resources in addition to the CPU that need to be scheduled to make sure everything fits in memory and does not overflow. I think the time has come for Linux – and likely other operating systems – to develop a more robust framework that can address the needs of future hardware and meet the requirements for scheduling resources. This framework is not going to be easy to develop, but it is needed by everything from databases and MapReduce to simple web queries.'"
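
Linux's cgroup v2 filesystem already covers a slice of what Newman is asking for (CPU and memory, though not the NVM or PCIe SSD resources he mentions). A minimal sketch, assuming root privileges, cgroup2 mounted at /sys/fs/cgroup, and the cpu/memory controllers enabled in the parent group; the group name and limits below are arbitrary examples:

```python
# Sketch: capping one workload's memory and CPU via the cgroup v2 filesystem
# interface. Assumes root, cgroup2 at /sys/fs/cgroup, and the "cpu" and
# "memory" controllers enabled in the parent's cgroup.subtree_control.
import os

CG = "/sys/fs/cgroup/analytics"   # arbitrary example group name
os.makedirs(CG, exist_ok=True)

def set_knob(name: str, value: str) -> None:
    with open(os.path.join(CG, name), "w") as f:
        f.write(value)

set_knob("memory.max", str(8 * 1024**3))   # hard memory cap: 8 GiB
set_knob("cpu.max", "200000 100000")       # 200 ms of CPU per 100 ms period = 2 CPUs
set_knob("cgroup.procs", str(os.getpid())) # move this process into the group
```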


SSD-HDD Price Gap Won't Go Away Anytime Soon

storagedude storagedude writes  |  about 6 months ago

storagedude (1517243) writes "Flash storage costs have been dropping rapidly for years, but those gains are about to slow, and a number of issues will keep flash from closing the cost gap with HDDs for some time, writes Henry Newman at Enterprise Storage Forum. As SSD density increases, reliability and performance decrease, creating a dilemma for manufacturers who must balance density, cost, reliability and performance.

'[F]lash technology and SSDs cannot yet replace HDDs as primary storage for enterprise and HPC applications due to continued high prices for capacity, bandwidth and power, as well as issues with reliability that can only be addressed by increasing overall costs. At least for the foreseeable future, the cost of flash compared to hard drive storage is not going to change.'"


Hard Drive Relaibility Study Flawed

storagedude storagedude writes  |  about 9 months ago

storagedude (1517243) writes "A recent study of hard drive reliability by Backblaze was deeply flawed, according to Henry Newman, a longtime HPC storage consultant. Writing in Enterprise Storage Forum, Newman notes that the tested Seagate drives that had a high failure rate were either very old or had known issues. The study also failed to address manufacturer's specifications, drive burn-in and data reliability, among other issues.

'The oldest drive in the list is the Seagate Barracuda 1.5 TB drive from 2006. A drive that is almost 8 years old! Since it is well known in study after study that disk drives last about 5 years and no other drive is that old, I find it pretty disingenuous to leave out that information. Add to this that the Seagate 1.5 TB has a well-known problem that Seagate publicly admitted to, it is no surprise that these old drives are failing.'"


Tulips, Dot-coms and SANs: Why SSD Merger Mania Won't Work

storagedude storagedude writes  |  1 year, 28 days

storagedude (1517243) writes "Texas Memory and IBM; Cisco and Whiptail; STEC, Virident and WD: the storage industry seems to be in full merger mania over SSDs, but Henry Newman at Enterprise Storage Forum doesn't think the current mania will work out any better than any other great mania of history. Not Invented Here opposition by acquiring engineering teams and the commodity nature of SSDs will make much of the money poured into SSD companies wasted, he says.

'I seriously doubt that the STEC Inc. technology will be seen in HGST/WD SSDs, nor do I think that Virident PCIe cards will be commoditized by HGST/WD to compete with LSI and others,' writes Newman. 'A Whiptail system will likely be put into a Cisco rack, but it’s not like Intel and Cisco are the best corporate partners, and we will likely see other SSDs put into the product. ... It all comes down to what I see as 'the buying arms race.' Company X purchased some SSD company so company Y needs to do the same or they will not be considered a player.'"


Software-defined data centers might cost companies more than they save

storagedude storagedude writes  |  about a year ago

storagedude (1517243) writes "As more and more companies move to virtualized, or software-defined, data centers, cost savings might not be one of the benefits. Sure, utilization rates might go up as resources are pooled, but if the end result is that IT resources become easier for end users to access and provision, they might end up using more resources, not less.

That's the view of Peder Ulander of Citrix, who cites the Jevons Paradox, a 150-year-old economic theory that arose from an observation about the relationship between coal efficiency and consumption. Making a resource easier to use leads to greater consumption, not less, says Ulander. As users can do more for themselves and don't have to wait for IT, they do more, so more gets used.

The real gain, then, might be that more gets accomplished as IT becomes less of a bottleneck. It won't mean cost savings, but it could mean higher revenues."


10GbE: What the Heck Took So Long?

storagedude storagedude writes  |  about a year ago

storagedude (1517243) writes "10 Gigabit Ethernet may finally be catching on, some six years later than many predicted. So why did it take so long? Henry Newman offers a few reasons at Enterprise Networking Planet: 10GbE and PCIe 2 were a very promising combination when they appeared in 2007, but the Great Recession hit soon after and IT departments were dumping hardware rather than buying more. The final missing piece is finally arriving: 10GbE support on motherboards.

'What 10 GbE needs to become a commodity is exactly what 1 GbE got and what Fibre Channel failed to get: support on every motherboard,' writes Newman. 'The current landscape looks promising. 10 GbE is starting to appear on motherboards from every major server vendor, and I suspect that in just a few years, we'll start to see it on home PC boards, with the price dropping from the double digits to single digits, and then even down to cents.'

See the article at 10 Gig: What Took So Darn Long?"


Brands Don't Matter Much to Cloud Computing Buyers

storagedude storagedude writes  |  about a year ago

storagedude (1517243) writes "You may have one of the best-known and respected brands in cloud computing, but that may not matter much when it comes time for RFPs, according to a new survey of IT buyers from Palmer Research/QuinStreet. A third of respondents view big names like Google, Amazon and Microsoft very favorably, yet at RFP time, less than 10% of those names get asked for formal proposals. It could be a sign that the cloud is a wide-open market that's up for grabs, as buyers seem much more interested in basics like reliability, technology expertise, pricing, maintenance and customer service, according to the survey. Oh, and trialware doesn't hurt either."

Hitachi's holographic storage breakthrough

storagedude storagedude writes  |  about a year and a half ago

storagedude (1517243) writes "The idea of holographic storage is 50 years old, but it's never become a commercial reality. That may be about to change, according to Henry Newman at InfoStor, who reports on a big breakthrough by Hitachi announced at the IEEE Mass Storage conference this week.

'If the information provided is accurate, then I would consider this the first disruptive technology to hit the storage industry in a very long time,' writes Newman."


How Much Storage Does It Take to Cure Cancer?

storagedude storagedude writes  |  about a year and a half ago

storagedude (1517243) writes "The answer: A lot.

It takes 1.5 GB of data to store the sequenced genome of an individual. With 12.5 million cancer patients in the U.S., it would take just under 19 PB to store all that data. Then you need to sequence the cancer itself, which would take 20 to 200 times more storage than that. Throw in all the other diseases that could potentially be treated with in silico analysis, and you have one heck of a Big Data problem.

Writes Henry Newman on Enterprise Storage Forum: 'We are on the brink of having the technology and methods to be able to detect and treat many diseases cost-effectively, but this is going to require large amounts of storage and processing power, along with new methods to analyze the data. ... Will we run out of ideas, or will we run out of storage at a reasonable cost? Without the storage, the ideas will not come, as the costs will be too high.'"
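
The arithmetic behind the summary's numbers, as a quick sketch (decimal units assumed throughout):

```python
# Reproducing the storage math from the summary (decimal units).
GENOME_GB = 1.5            # data per sequenced individual genome
PATIENTS = 12_500_000      # U.S. cancer patients cited above

genome_pb = GENOME_GB * PATIENTS / 1_000_000   # GB -> PB
print(f"{genome_pb:.2f} PB for patient genomes")   # 18.75 PB, "just under 19"

# Sequencing the cancers themselves takes 20x-200x more storage.
print(f"{genome_pb * 20:,.0f} to {genome_pb * 200:,.0f} PB for tumor data")
```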


With storage getting 'dumb,' what's an admin to do for job security?

storagedude storagedude writes  |  about a year and a half ago

storagedude writes "Storage appliances are solving the age-old problem of storage management complexity — and in the process, endangering the jobs of storage admins. So what's an admin to do? Henry Newman at Enterprise Storage Forum has a suggestion: Get into complex data analysis. From the article:

'The storage complexity problem for file systems has been mostly solved. There are still a few hard problems out there, but not as many as there used to be.

'However, there is a new and even more complex set of problems right in front of us. These will require a deep understanding of what the users need to do with the storage and how they plan to access the data to create information on which actionable decisions can be made. These jobs are going to be high paying and require a broad set of skills. But the skills will be different than the current skills required for SAN and NAS and even the other types of appliances that are out there. Those involved are going to have to work directly with the application developers and users.

'Come to think of it, this sounds a great deal like 1996 and 1997 when SAN file systems started to come out. Those of us involved then had to talk with everyone up and down the stack to get things going quickly and efficiently. I believe the same approach is needed today.'"


Is Amazon Glacier So Much Marketing Hype?

storagedude storagedude writes  |  more than 2 years ago

storagedude writes "With vague claims of data 'durability' of 11 nines and costs of a penny a gigabyte, Amazon is making assertions about its Glacier cloud storage system that, at a minimum, need to be explained or clarified, writes Henry Newman at Enterprise Storage Forum. 'Is durability the data integrity of the file, or does durability mean the availability?' asks Newman. 'Does the claim mean that you get 11 nines if the file does not disappear because the storage fails? Is this a guarantee? What does average durability mean?'"

Is Siri Smarter than Google?

storagedude storagedude writes  |  about 2 years ago

storagedude writes "Google could go the way of the dodo if ultra intelligent electronic agents (UIEA) make their way into the mainstream, according to technology prognosticator Daniel Burrus. Siri is just the first example of how a UIEA could end search as we know it. By leveraging the cloud and supercomputing capabilities, Siri uses natural language search to circumvent the entire Google process. If Burrus is right, we'll no longer have to wade through "30,000,000 returns in .0013 milliseconds" of irrelevant search results."

SSD Performance Potential Can Be Tough to Achieve

storagedude storagedude writes  |  more than 2 years ago

storagedude writes "Vendor SSD performance claims may not matter much beyond a certain point, as other limitations like the file system and kernel may limit performance long before theoretical drive limits are reached, according to this analysis from HPC storage specialist Henry Newman. 'If you are doing single threaded operations, the limiting factor is going to be the time it takes to do the I/O switching between the user and the kernel. For file system operations like find and fsck, I think the difference between having a 100,000 IOP SSD and a 1 million IOP SSD likely does not matter. ... SSDs are a great thing, and I suspect that in the near future we will see changes to operating systems to allow them to work more efficiently.'"

Hadoop, Big Data and Small Businesses

storagedude storagedude writes  |  more than 3 years ago

storagedude writes "Hadoop and Big Data are all the rage these days in enterprises, but what's a small business to do if it wants to get in on growing trends like distributed data mining, search and indexing? The options for small businesses are limited: they can either move their data to the cloud for such services or build their own infrastructure, which few SMBs have the expertise or infrastructure to support. HPC veteran Henry Newman says what's needed is a search and indexing appliance with multi-level security (MLS) that can be installed and run with little IT expertise. Some appliances are on the way, says Newman, but they'll lack MLS and other critical features that could make Big Data work for SMBs."

Coming Soon: Google+ Data Mining

storagedude storagedude writes  |  more than 3 years ago

storagedude writes "In the wake of the meteoric rise of Google+, it should come as no surprise that social CRM and social media monitoring firms are already considering monitoring the service in addition to the usual suspects Facebook, Twitter and LinkedIn. But until Google releases an API or lets it be known how much data it plans to release, those firms will be limited to screen scraping and other such methods. But whether their methods are crude or sophisticated, it's just a matter of time until companies start listening in on your Google+ conversations."

Journals

storagedude has no journal entries.
