rawler writes "I've been researching network filesystems for a while now, attempting to build a distributed content delivery system for relatively large content. I still haven't found the perfect filesystem, nor enough detail about certain candidates, such as success stories for Coda.
The main requirements are high availability and low bandwidth use. Since the task is content distribution, the filesystem only needs to be read-only, which simplifies cache implementation.
The main characteristics of the desired filesystem are:
Heavy client-side caching to reduce bandwidth requirements
Ability to handle large active sets and moderately large main repositories. The filesystem must handle a full repository of at least 25 TB and an active set (cache size) of 4 TB.
A big block size is an option for scaling on-disk; 16 MB blocks are not a problem, since the smallest files stored will be ~50 MB.
However, network latency must be kept down. The maximum allowed bitrate across all streamed assets is ~100 Mbit/s, with a nominal per-stream bitrate of 4 Mbit/s. A 16 MB block on the wire would take 1.28 seconds to pre-cache, which is not acceptable; the pain threshold is ~200 ms (a quick sanity check of these numbers follows the list).
Live-changing content is a plus (i.e., distributing while recording).
Since the main repository's network bandwidth must not be overloaded, cascading nodes are a must.
Topology reconfiguration when intermediary nodes go down is desirable.
Offline operation: if networking fails, the cached content must remain available.
Prefetching of currently accessed files is a plus.
Native mirroring of master repositories is desired, preferably a distributed central-storage model with mirroring/parity.
Redistributing 25 TB of data takes a looong time; the solution must be proven data-safe.
The connected ordering system must be able to extract information about network flow and the active resources used between nodes, and deny access if there is no bandwidth left for new content (sketched after this list).
This may well be implemented on the side, but it's a plus if the filesystem facilitates it, e.g. by offering good data on remote resource usage and current topology.
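For reference, here's a quick back-of-the-envelope check of the block-size numbers above (a minimal Python sketch; the constants are just the figures quoted in the list):

    # Back-of-the-envelope check: block size vs. latency budget.
    # Constants are the figures from the requirements list above.
    LINK_MBIT = 100     # max allowed bitrate over the wire, Mbit/s
    BLOCK_MB = 16       # candidate on-disk block size, MB
    BUDGET_S = 0.2      # latency pain threshold, seconds

    # Time to pull one full block over the link:
    fetch_s = BLOCK_MB * 8 / LINK_MBIT        # 16 MB = 128 Mbit -> 1.28 s

    # Largest block that still fits within the 200 ms budget:
    max_block_mb = LINK_MBIT * BUDGET_S / 8   # 100 * 0.2 / 8 = 2.5 MB

    print(f"16 MB block fetch: {fetch_s:.2f} s; "
          f"max block within budget: {max_block_mb} MB")

So unless blocks are fetched in parallel or streamed progressively, anything much above ~2.5 MB blows the latency budget even at full line rate.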
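And a minimal sketch of the kind of admission check the ordering system would perform; the Link class and reserve() method are hypothetical names for illustration, not any real filesystem's API:

    # Hypothetical admission-control sketch: admit a new stream only if
    # the link still has headroom; otherwise the ordering system denies it.
    class Link:
        def __init__(self, capacity_mbit):
            self.capacity_mbit = capacity_mbit
            self.reserved_mbit = 0.0

        def reserve(self, bitrate_mbit):
            """Reserve bandwidth for one stream; False means access denied."""
            if self.reserved_mbit + bitrate_mbit > self.capacity_mbit:
                return False
            self.reserved_mbit += bitrate_mbit
            return True

    # A 100 Mbit/s edge link serving nominal 4 Mbit/s streams:
    edge = Link(capacity_mbit=100)
    admitted = sum(edge.reserve(4) for _ in range(30))
    print(admitted)  # 25 streams fit; the remaining requests are denied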
All of these characteristics are just ideas and estimates, and may well be changed in almost every way, as long as content reaches the customer. Right now the stated storage-size requirements are almost 10 times what the existing system actually uses, but the solution must be able to grow to this limit in the future.
Now I'm asking the Slashdot crowd: what are your experiences with this? Have you built something similar? Is the whole concept flawed? Heat up your keyboards and give me your thoughts. :)"