Open Source Moving in on the Data Storage World

pararox writes "The data storage and backup world is one of stagnant technologies and cronyism. A neat little open source project, called Cleversafe, is trying to dispel that notion. Using the information dispersal algorithm originally conceived of by Michael Rabin (of RSA fame), the software splits every file you backup into small slices, any majority of which can be used to perfectly recreate all the original data. The software is also very scalable, allowing you to run your own backup grid on a single desktop or across thousands of machines."
  • The data storage and backup world is one of stagnant technologies and cronyism.
  • by Anonymous Coward on Wednesday April 26, 2006 @05:57PM (#15208129)
    Editors please note!

    Editors, please note that there is some incorrect information in this post. Firstly, the original concept of the IDA was designed by Shamir of RSA fame, not Rabin.

    Also note that the Cleversafe IDA is a custom algorithm, and is only similar to Shamir's initial concept.
    • Really? This is just error correction. Reed-Solomon [wikipedia.org] error correction, and even the Chinese Remainder Theorem [psu.edu], can be applied to reconstruct data when some of it has been intentionally or unintentionally punctured.
      • You are right; however, Shamir first observed that Reed-Solomon error correction also has nice secrecy properties. That is, even if you have one share less than the number required to reconstruct, you actually have no information about the secret at all. This can be a good thing if you are distributing your data among potentially untrustworthy servers.
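
        As an illustration of that threshold property, here is a minimal sketch of Shamir-style k-of-n sharing over a prime field (the prime, the toy secret, and the helper names are my own choices for illustration, not anything from the scheme's actual publications): any k shares recover the secret exactly, while fewer than k leave every candidate secret equally likely.

        import random

        P = 2**127 - 1  # a Mersenne prime; any prime larger than the secret works

        def split(secret, k, n):
            # Random degree-(k-1) polynomial with f(0) = secret.
            coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
            def f(x):
                return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            return [(x, f(x)) for x in range(1, n + 1)]

        def combine(shares):
            # Lagrange interpolation at x = 0 recovers f(0).
            secret = 0
            for xi, yi in shares:
                num, den = 1, 1
                for xj, _ in shares:
                    if xj != xi:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                secret = (secret + yi * num * pow(den, -1, P)) % P
            return secret

        shares = split(secret=42, k=3, n=5)
        assert combine(shares[:3]) == 42   # any 3 of the 5 shares suffice
        assert combine(shares[2:]) == 42   # a different 3 work just as well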
    • Rivest, Shamir, Adleman invented RSA.

      Shamir invented secret sharing.

      Rabin invented the Rabin public key cryptosystem, and IDA.

      IDA is not like secret sharing.

      With secret sharing, you have a secret, which you break up into shares. You can decide how many shares you need to reconstruct the secret when you break it up. Without the right number of shares, you know nothing about the secret. But the big difference is that EACH SHARE IS SLIGHTLY BIGGER THAN THE INITIAL SECRET.

      With IDA, you have lots of data. Yo
      • No, you would be wrong: the RSA algorithm was first described by Clifford Cocks, a British mathematician working for GCHQ, in 1973, four years before the 1977 description by Ron Rivest, Adi Shamir and Len Adleman.

        It is a classic example of a bad patent. There was prior art (though admittedly this was kept top secret till 1997) and it also failed the obviousness test: if someone else came up with the same algorithm four years earlier, it was clearly obvious to someone skilled in the art of cryptograp
  • Backup for Backuper? (Score:3, Interesting)

    by foundme ( 897346 ) on Wednesday April 26, 2006 @05:57PM (#15208130) Homepage
    I can't find this in the FAQ -- is there a "creator/seeder" in the whole process? That is, can a particular group of slices only be unlocked by the particular seeder that Turbo IDA created?

    If there is a creator/seeder, then we are still burdened by having to keep this seeder safe so that we can retrieve the distributed slices.

    If there is no creator/seeder, is this safe enough so that people cannot patch slices together by way of trial-and-error?
  • At work we're looking into this to store critical data on our intranet, which spans several states and facilities. Looks great, but only time will tell.

    I seem to remember a project months ago that was going to use P2P to back up your data on other P2P users' computers, which to me sounds quite insane. Anyone know if this is related?

    http://religiousfreaks.com/ [religiousfreaks.com]
    • If you're talking about using a public P2P, then it's insane.
      If it could be set up properly to be used on a large corporate intranet, then there's some merit to it. If you could use this system to spread chunks of data out over an intranet that spans several states, then it could be a useful way to store critical data during hurricane season or the like. If a building took sufficient damage from weather, earthquake, terrorist, broken water main, etc. so that the data center in that building was a loss, t
    • I think Freenet performs something like what is described in this article, but I'm hardly a crypto expert. I do know that Freenet slices up data that is inserted into small chunks (I believe it's 32k chunks with the newest darknet). There are healing chunks too... the only disadvantage with backing up data on Freenet is that data/information that is rarely accessed falls off the network as newer information replaces it.
  • by Durindana ( 442090 ) on Wednesday April 26, 2006 @06:01PM (#15208162)


    While Michael Rabin was the inventor of the Rabin cryptosystem [wikipedia.org] in 1979, it was Ronald Rivest, Adi Shamir and Len Adleman behind RSA [wikipedia.org] two years earlier.
  • by El Cubano ( 631386 ) on Wednesday April 26, 2006 @06:06PM (#15208204)

    Using the information dispersal algorithm originally conceived of by Michael Rabin (of RSA fame), the software splits every file you backup into small slices, any majority of which can be used to perfectly recreate all the original data.

    It seems like this can be tuned to provide varying levels of fault tolerance. According to the abstract (I don't have an ACM web account, and I couldn't find the full text), it seems like I can take a file and make it so that any four chunks can be used to rebuild the file. I can then take those chunks and distribute them eight times to different machines. Thus, five of the eight machines would have to be rendered inoperable before I would be unable to retrieve my data.

    If I understand it correctly, then this is really slick.
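
    If that reading is right, the availability math works out nicely; here is a quick back-of-the-envelope check (the per-node failure probability is an arbitrary number I picked for illustration, not anything from the paper):

    from math import comb

    k, n, p = 4, 8, 0.05  # need any k of the n chunks; p = chance a node is down

    # Data is lost only when fewer than k nodes survive, i.e. when at
    # least n - k + 1 = 5 of the 8 nodes are down at once.
    p_loss = sum(comb(n, d) * p**d * (1 - p)**(n - d)
                 for d in range(n - k + 1, n + 1))
    print(f"{p_loss:.1e}")  # ~1.5e-05 for p = 0.05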

    • by Anonymous Coward
      Meh, it sounds like it's just par2 integrated into a distributed filesystem.
    • by dracken ( 453199 ) on Wednesday April 26, 2006 @06:33PM (#15208366) Homepage
      Rabin's algorithm relies on a nifty trick. If you take a k dimensional vector and store the dot products with k orthogonal vectors, then the vector can be reconstructed using just those dot products. This is a fancy way of saying any point on the x-y plane can be located if you have the x-coordinate and y-coordinate. However, if you take a k dimensional vector and compute the dot product with l mutually orthogonal vectors (where l > k), then any k dot products are enough to reconstruct the original vector.

      Rabin has shown how to come up with l vectors of which k are mutually orthogonal.
      • by Cal Paterson ( 881180 ) on Wednesday April 26, 2006 @06:46PM (#15208439)
        We all knew that.
      • Pardon?!

        I think I've suddenly gone blind because your "[non-]fancy way of saying" doesn't sound a damn thing like the gibberish my eyes just read. "Mutually orthogonal vectors"?!

        If I'm wrong, then I should probably go and lie down, but I just showed my wife and now she's crying... so I think it's your explanation and not me.

        *goes to find advil*

        • "mutually orthagonal vectors" simply means that two separate things are going in the X-Y plane, which is good. If one of them might be travelling in the Z plane, it might have poked you in the eye for reading it. That would be bad.
      • That can't be right, can it?

        However, if you take a k dimensional vector and compute the dot product with l mutually orthogonal vectors (where l > k), then any k dot products are enough to reconstruct the original vector.

        Consider three dimensional space (l = 3, k = 2). Let the k dimensional vector be v = (1/2, 1/3) in the x-y plane. Then the l dot products are v.i = 1/2, v.j = 1/3, v.k = 0. I cannot pick any two products (say 1/2 and 0) to reconstruct v.

        Something must be lost in translation

      • "However, if you take a k dimensional vector and compute the dot product with l mutually orthogonal vectors (where l > k), then any k dot products are enough to reconstruct the original vector."

        Do you mean that we have a k-dimensional vector space V, take a vector in this space, and calculate its dot product with l mutually orthogonal vectors, where l > k?

        Is that it? Because if it is, it's strange to say the least.

      • Just to make things clear.

        In a k dimensional vector space you can't come up with l > k (non-null) mutually orthogonal vectors. After all, k non-null mutually orthogonal vectors will form a basis for the vector space.
      • I think you mean linearly independent, not mutually orthogonal. In fact, the word orthogonal isn't even in Rabin's paper. Thus, what Rabin has done is shown how to generate n vectors such that any m are linearly independent.
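
        To make that concrete, here is a minimal numpy sketch of the dispersal idea (a toy over floating-point numbers; a real IDA like Rabin's works in a finite field to avoid rounding, and the sizes here are my own picks): a Vandermonde matrix built on distinct points has every k of its rows linearly independent, so any k of the n dot products recover the data.

        import numpy as np

        k, n = 3, 5                       # any 3 of the 5 pieces will do
        data = np.array([7.0, 1.0, 4.0])  # a k-dimensional data vector

        # Rows (1, x, x^2, ...) at n distinct points x: any k such rows
        # are linearly independent, so any k dot products determine `data`.
        xs = np.arange(1, n + 1)
        V = np.vander(xs, k, increasing=True).astype(float)
        pieces = V @ data                 # one scalar piece per row

        keep = [0, 2, 4]                  # pretend only these pieces survive
        recovered = np.linalg.solve(V[keep], pieces[keep])
        assert np.allclose(recovered, data)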
    • by jd ( 1658 )
      The basis of the method lies in the Byzantine Generals' Problem and related mathematical puzzles. A derivative is used in cryptography for distributed keys. As a backup strategy, it looks interesting - you don't need any higher level of trust than you would need in the Byzantine Generals' Problem, for exactly the same reasons. This includes not just backup devices but also all connections to backup devices (so you have security against SAN failures, packet corruption and other such problems). The price you
    • Yes. This is a specific application of secret-sharing algorithms.

      In the classic formulation, the secret is split into N parts, such that no part reveals any information about the secret (that is, knowing one of the parts does not make any possible secret more likely than any other possible secret). The really cool thing is that you can decide that's not good enough, and can split up your secret such that knowing M or fewer parts reveals no information about the secret (for sufficiently large N). Normally
  • stagnant?? (Score:4, Insightful)

    by Phredward ( 254393 ) on Wednesday April 26, 2006 @06:09PM (#15208218)
    Companies are crying out for new storage solutions all the time. If the answer is slow in coming, it is not due to "cronyism" and "stagnation". Rather, the causes include the facts that distributed storage is hard, and people don't like loosing their data.
    • by Anonymous Coward
      "people don't like loosing their data."

      Wouldn't distributed storage be loosing data? After all, it's being set loose from one device, to be stored upon many...

    • Ah, where is LoseNotLooseGuy when you need him? Haven't seen that dude around in a long time... that saddens me. So much for the cause.
    • people don't like loosing their data.

      One method of reducing risk is to place redundant vowels in some of your words. In case the first one gets loost somehow, you still have the second one.
  • Since all we need is a majority of the slices, it's a realtime compression scheme of 51%. ------ That's what I would do. You do whatever you want.
    • I would expect at least some expansion so that 50% of the encoded data is substantially greater than the size of 50% of the original data. Thus, it probably is a net expansion rather than compression.
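
      The expansion is easy to quantify; a quick sketch (the 11-slice, majority-threshold parameters echo what Cleversafe describes later in this thread; the file size is arbitrary):

      n, k = 11, 6           # 11 slices; any 6 (a majority) reconstruct
      file_mb = 100.0        # arbitrary file size

      slice_mb = file_mb / k       # each slice carries 1/k of the data
      stored_mb = n * slice_mb     # net expansion, not compression
      print(slice_mb, stored_mb)   # ~16.7 MB per slice, ~183 MB stored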

  • by DigitalRaptor ( 815681 ) on Wednesday April 26, 2006 @06:17PM (#15208272)
    This sounds like Rar, Par, and BitTorrent got merged in some freak transporter accident...

    Par files (for use with QuickPar, etc) are great, saving all sorts of extra posting on binary newsgroups.

  • Not a new idea (Score:5, Informative)

    by D3viL ( 814681 ) on Wednesday April 26, 2006 @06:18PM (#15208278)
    so it's sort of like parchive http://parchive.sourceforge.net/ [sourceforge.net], which is software that splits every file you backup into small slices, any majority of which can be used to perfectly recreate all the original data
  • Sourceforge page (Score:1, Informative)

    by Anonymous Coward
    Well, their webserver seems like it's been smoked; here's a link to their sourceforge page, where you can grab the actual software:

    http://it.slashdot.org/it/06/04/26/2039224.shtml [slashdot.org]
  • by Anonymous Coward on Wednesday April 26, 2006 @06:20PM (#15208296)
    While the R in RSA stands for Ron Rivest, it is Adi Shamir (the S of RSA) you have in mind. He came up with a wonderful secret sharing scheme which allows a bunch of folks or computers to keep pieces of a secret in such a way that no N of them have any idea what the secret is, even if they collude. OTOH, N+1 of them can easily figure out the secret. Secret sharing can help you keep important secrets safe this way: while the owner is OK, the secret cannot be recreated; if the owner quits or dies, the secret holders together can recover his password and decrypt critical company data. And if a couple of them cannot participate, you still can get your secret back.

    Even more amazingly, Shamir's secret sharing scheme allows computing math functions, such as digital signatures, without ever recovering the secret keys. This is called threshold cryptography; some of you may be interested to learn about its many wonders. Shamir rocks and so does threshold crypto!
  • innovation (Score:2, Interesting)

    by Ajehals ( 947354 )
    Any innovation (if that's what this is - no doubt it will turn out to be something that someone else thought of in the '80s...) is welcomed in this area.

    Maybe one day vendors will stop pushing overly expensive and utterly bland storage solutions. E.g., the last time I had a meeting about storage, the product was: 2x servers, 2x disk arrays with possible storage of a little under 2TB (using 24 80GB SCSI HDDs) with RAID 5. Oh, and the storage was presented as 4 @ 500GB drives to the OS (some proprietary thing). all

    • by stereoroid ( 234317 ) on Wednesday April 26, 2006 @07:19PM (#15208614) Homepage Journal
      One point that's been brought home to me in a very real way, in my position in senior support for one of the major storage system vendors: the hard disks themselves really do make a difference. SCSI disks are much more expensive because of their construction and the duty cycles they can sustain over long periods. You can NOT hammer a SATA disk 90% of the time, 24/7, and expect it to last the way an enterprise-class SCSI disk does. My company sells low-cost SATA disk systems too, and some customers find that the lower price is a false economy for what they need the system to do.

      I'm kinda missing the point of the "editorializing" in this article: when a storage system is doing its job, it IS boring. You put bytes in, assured they will be stored, and you get them out on demand. You want nothing "interesting" to happen to the data that your business is built on! Sure, the technology is stagnant, if that means customers can get access to the data, reliably, year after year. We Slashdotters are prepared to take "bleeding edge" risks that enterprise customers are not.
      • My approach, given that even a SCSI drive can fail unexpectedly, is to add redundancy at the RAID level. Now, given that any drive (or two, depending on the RAID level) can fail without losing data, what matters to me is warranty. Since SATA drives are available with a warranty which is longer than the useful life of the drive (5 years from now, I'll be tossing the whole array for something 10x the size), it really doesn't matter whether SCSI drives hold up better.
    • The last storage proposal I heard was from Lefthand Networks [lefthandnetworks.com] for their iSCSI based SAN systems.

      They basically sell a stack of drive arrays which you can configure as volumes as you see fit. Some notable features:

      Ability to configure multiple RAID types within the stack. So you could have RAID 10 and RAID 0 within the same stack of drives, depending on whether you need speed or redundancy.
      The ability to stripe the data and parity across units in the whole stack (RAID 10 levels 2 and 3). So if you have 3 4-drive sys
  • been done before (Score:5, Informative)

    by Splork ( 13498 ) on Wednesday April 26, 2006 @06:26PM (#15208331) Homepage

    Related companies/projects happened in this order: MojoNation [archive.org] .. MNet [mnetproject.org] .. HiveCache [archive.org] .. AllMyData [allmydata.com]

    good luck!

    • Publius (Score:3, Interesting)

      by twitter ( 104583 )
      AT&T has something like this called Publius [att.com]. Scientific American reviewed it [essential.org] and, in a most unscientific and un-American opinion, called it "irresponsible." The goal was not just storage, but publication.

      It's nice to see another attempt that's free. Free speech requires anonymity.

      • Free speech requires anonymity.

        Anonymity contributes to meaningless and criminal communication. Perfect anonymity will result in nearly worthless communication. Take a look at the "p3n15 pi11z!!" offers in your Email inbox for an excellent example.

        Free speech requires VIGILANCE by a population to ensure that the rights to speak freely are not suppressed, and that takes organization, effort, and might.
          Anonymity contributes to meaningless and criminal communication.

          Any right X "contributes to meaningless and criminal" Y.

          You are making the classic argument in support of a police state. Just because X can be used by criminals, or X makes harder the police's job in catching criminals, does not mean that we can or should criminalize X itself.

          Perfect anonymity will result in nearly worthless communication... Email

          Just because most people use a lousy email system with a rotten design and limited capabilities in

          • You are making the classic argument in support of a police state. Just because X can be used by criminals, or X makes harder the police's job in catching criminals, does not mean that we can or should criminalize X itself.


            I don't believe I said that. I only said that anonymous communication results in meaningless communication, and is certainly not a requirement for living in a free society.

            Just because most people use a lousy email system with a rotten design and limited capabilities in no way means that
  • by dfloyd888 ( 672421 ) * on Wednesday April 26, 2006 @06:28PM (#15208342)
    In the early 90s, a company made a virtual file server for networked Macs. Each client Macintosh had a file on its hard drive, and when a request was made through the driver, a number of Macs were contacted, and files were read and written to in a fairly load balanced fashion. I'm pretty sure it used some decent (think single DES) encryption at the time too, so someone couldn't just dig through the server's file on their Mac's hard disk and glean important data. It also added some redundancy, so if a Mac or two wasn't up on the network, it wouldn't kill the virtual Appleshare folder.

    By chance, anyone remember this technology? I have no idea what happened to it, but it would be a blockbuster open source app if done today and made platform independent. If done right, one could create data brokerage houses, where people could buy and sell storage space, and also reliability, where space on a RAID or server array would be of higher value than space on a laptop that is rarely on the Internet.
  • Generally speaking, the more copies of something you have floating around, the larger the probability they get into the wrong hands. So this whole redundancy thing is just going to be viewed as a huge security breach, and never really become popular...
    • Not necessarily; if the copies you have are broken apart and split up, that doesn't mean you have a security breach.

      For example, if I tell you my 8 character password has a "q" in it, you've only lowered the number of possible passwords from 2821109907456 to 78364164096. Not exactly useful, either way.

      And of course, what good is keeping the data out of the wrong hands if the RIGHT HANDS can never get to it?
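
      Those figures check out if you assume an 8-character password over a 36-symbol alphabet (a-z plus 0-9) and model the hint as pinning down one position; a quick check (this is my reading of the arithmetic, not something the poster states):

      alphabet = 36              # a-z and 0-9
      total = alphabet ** 8      # 2821109907456 possible passwords
      with_hint = alphabet ** 7  # 78364164096 once one position is known
      print(total, with_hint, total // with_hint)  # ratio is exactly 36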
      • That's why I said that it is going to be "viewed" like that. It won't necessarily be less secure. And of course you are forgetting what any corporate/military/government official would answer to your query: "what good is keeping the data out of the wrong hands if the RIGHT HANDS can never get to it"
    • Hello-

      I am the chief designer of the Cleversafe dispersed-storage system (aka a grid-storage software system) and am one of the project's co-founders. The Cleversafe system never stores a complete copy of the data in any one place (or "grid node" in our terminology). At most 1/11th of the file data--we call them file "slices"--is stored at any one grid node in a "scrambled" (i.e., non-contiguous), compressed, and encrypted/signed fashion. The grid _never_ stores more than one copy of the data on the grid
  • by JoeCommodore ( 567479 ) <larry@portcommodore.com> on Wednesday April 26, 2006 @06:34PM (#15208374) Homepage
    When I read the statement: ...the software splits every file you backup into small slices, any majority of which can be used to perfectly recreate all the original data. The software is also very scalable, allowing you to run your own backup grid on a single desktop or across thousands of machines.

    I was immediately visualizing a Borg Cube regenerating after a hit from the Enterprise.

    Regardless, it sounds cool.

  • by andrew cooke ( 6522 ) <andrew@acooke.org> on Wednesday April 26, 2006 @06:42PM (#15208416) Homepage
    The most interesting link here is behind a pay-wall. Do the editors bother to follow the links in articles? Do they just assume we all have ACM access? Come on, this place used to be a bit better than this, didn't it?
  • New idea... NOT. (Score:5, Informative)

    by pedantic bore ( 740196 ) on Wednesday April 26, 2006 @06:44PM (#15208428)
    Why does this remind me of something [harvard.edu]? It sounds like something I've heard [carleton.ca] about already [cmu.edu], more or less [gatech.edu].

    I just hope they don't patent it [uspto.gov]!

  • by Saturn49 ( 536831 ) on Wednesday April 26, 2006 @07:01PM (#15208523)
    This can be done quite easily with Reed-Solomon coding. In fact, you don't need the majority of the nodes, but simply an arbitrary set of N nodes, with an arbitrary M nodes as redundancy. N=1 and M=1 is basically RAID 1, N=n and M=1 is simply RAID 5, and N=n and M=2 is RAID 6.

    In fact, I wrote an RSRaid driver for Linux for my thesis and did some performance testing on it. I'll save you the 30 pages and just tell you that the algorithm is far too CPU intensive to scale up very well for fileserver use (my original intent), but I did conclude it could be used as a backup alternative to tape. Hmmmm.

    Direct Link [dyndns.org]
    Google Cache [72.14.203.104]
    Please forgive the double brackets, I fought with Word and lost.
    Contact me if you'd like to play with the code. I never did any reconstruction code, but the system did work in a degraded state, and was written for the Linux 2.6 kernel.
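
    For the M = 1 (RAID 5-like) case, the redundancy really is just XOR; a minimal sketch of the idea (the block contents and names are my own illustration, not the thesis code):

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    stripes = [b"AAAA", b"BBBB", b"CCCC"]  # N = 3 data blocks
    parity = b"\x00" * 4
    for s in stripes:
        parity = xor(parity, s)            # the single M = 1 parity block

    # Lose any one data block and rebuild it from the survivors:
    lost = stripes.pop(1)
    rebuilt = parity
    for s in stripes:
        rebuilt = xor(rebuilt, s)
    assert rebuilt == lost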
  • As they appear to be toast now...

    And how can you say backing up to a *single* desktop PC is of any value?
  • ...for my alma mater.

    Cleversafe's headquarters are located at the new University Technology Park [university...gypark.com] at IIT...no, not that IIT, this one [iit.edu].
  • by Anonymous Coward
    Anyone who has used Usenet in the last decade or so knows most binaries are split into multiple parts (RARs nowadays) with PAR and PAR2 recovery volumes. So instead of making this sound like an awesome new development, why not be honest about what it is: a slightly different application of a very old technology/algorithm.
  • by kbahey ( 102895 ) on Wednesday April 26, 2006 @07:43PM (#15208745) Homepage
    Slashdotted! Can't check the site contents or the wiki.

    From the summary : "the software splits every file you backup into small slices, any majority of which can be used to perfectly recreate all the original data."

    So, basically it is like RAID 5 striping and parity [wikipedia.org] applied at the file level.

    Neat concept.
  • by mengland ( 78217 ) on Wednesday April 26, 2006 @08:59PM (#15209129)
    (This is a repost from an earlier part of the thread so that I can get these comments on the toplevel.)

    Hello-

    I am the lead designer of the first Cleversafe dispersed-storage system (aka a grid-storage software system) and am one of the project's co-founders. The Cleversafe system never stores a complete copy of the data in any one place (or "grid node" in our terminology). At most 1/11th of the file data--we call them file "slices"--is stored at any one grid node in a "scrambled" (i.e., non-contiguous), compressed, and encrypted/signed fashion. The grid _never_ stores more than one copy of the data on the grid, and that one copy is never stored all in the same place--it's dispersed using an optimized information-dispersal algorithm that we created but has similar properties to the previously-published info-dispersal algorithms (IDAs).

    If a grid node and its associated content--i.e., the user's file slices on that node--are ever completely compromised (firewall comes down, all encryption and scrambling is cracked, etc.), then the cracker acquires at most 1/11th (one-eleventh) of the user's data.

    Further, if any half (or at least 5 out of any 11) of the grid nodes are for any reason destroyed or otherwise unavailable, all of the user's data is still accessible. This is done by generating a "coded" file slice for every data slice that we store on the node, and regenerating missing file slices from down nodes by pumping the available data and coded slices through our info-dispersal algorithms (which are all open-sourced, by the way) that are executed on the client side or when the grid "self heals" for destroyed nodes.

    The system can also be implemented in a cost-effective fashion. The grid system can sustain so many concurrent, per-node outages that the availability/uptime requirements for each node are minimal. Also, the grid-node servers need not support much processing capability, for the client offloads much of the work from the servers.

    We feel this system provides a powerful combination of reliability, scalability, economy, and security.

    The hardest part of the design, imo, is to be able to reliably track all of these file slices across a large and heterogeneous set of grid-node machines housing these info-dispersed file slices. We designed the grid meta-data system from the ground up to do this and to be capacity-expandable, performance-scalable, and easily serviceable. More details for the open-source flavor of the grid-software design can be found here:
    http://wiki.cleversafe.org/Grid_Design [cleversafe.org]

    There's much more that I can say about this system; I plan to add additional comments to this thread as more questions and comments arise. I'm sure there are new comments I have yet to read, for they're coming in pretty quickly...
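
    On that "hardest part" -- tracking slices across a large, heterogeneous set of grid nodes -- here is a deliberately naive sketch of the bookkeeping involved (the names and structure are my own illustration, not Cleversafe's actual metadata design):

    # slice_map: file id -> {slice index: node id}
    slice_map = {"report.doc": {i: f"node-{i}" for i in range(11)}}
    THRESHOLD = 6  # any 6 of the 11 slices rebuild the file

    def readable(file_id, up_nodes):
        # A file stays retrievable while at least THRESHOLD of its
        # slices live on nodes that are currently reachable.
        alive = [i for i, node in slice_map[file_id].items()
                 if node in up_nodes]
        return len(alive) >= THRESHOLD

    up = {f"node-{i}" for i in range(11) if i % 2 == 0}  # 5 odd nodes down
    print(readable("report.doc", up))  # True: 6 slices still reachable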

    I also encourage further discussion at our newly-created web forums: http://forums.cleversafe.org/ [cleversafe.org]
    Mailing lists (that will be synchronized with the web forums) will also be available at cleversafe.org in the near future.

    -Matt
    Cleversafe project lead
  • More notes on our IDAs compared with others:

    The Cleversafe information dispersal algorithms (IDAs) were designed to provide real-time performance with large amounts of data storage and retrieval (gigabytes, petabytes and above). Previous algorithms, like Rabin's, Shamir's and Reed-Solomon, are very effective at storing smaller amounts of data (kilobytes), but their computational overhead, which is proportional to the square of the data block size or greater, isn't well suited for quickly dispersing/restoring l
  • you backup into small slices, any majority of which can be used

    Ok, I'm numb in the morning, but what the hell does that mean? ... I won't trust my data to something I don't even understand. You can say RTFM, but hey, this is the first paragraph about the software; it should be catchy and clear.

    • Aside from RTFM, let me, as a Cleversafe employee, try to explain a bit of what's happening. Cleversafe technology allows for a client-server application where your data to be backed up is sliced up into eleven pieces using our OWN Information Dispersal Algorithm... This is not RSA as some previous posts would lead you to believe. Once the data is split using this algorithm, it is sent out to eleven different sites running our server software. When you want to restore your data (say after recovering from a
  • MS has a similar concept already going through deep testing.

    http://research.microsoft.com/sn/Farsite/ [microsoft.com]

    Pretty cool stuff, check this out:

    Our prototypical target is a large company or university, meaning an organization with around 10^5 machines, storing around 10^10 files, containing around 10^16 bytes of data. We assume that the machines are interconnected by a high-bandwidth, low-latency, switched network. Also, at least for our initial version, we are assuming no significant geographical differences amon
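
    Dividing those targets through gives a feel for the assumed per-machine load (simple arithmetic on the numbers quoted above):

    machines, files, total_bytes = 10**5, 10**10, 10**16

    print(files // machines)        # 100,000 files per machine
    print(total_bytes // machines)  # 1e11 bytes, i.e. ~100 GB per machine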
