
The DNA Data Deluge

samzenpus posted about a year ago | from the too-many-letters dept.

Biotech 138

the_newsbeagle writes "Fast, cheap genetic sequencing machines have the potential to revolutionize science and medicine--but only if geneticists can figure out how to deal with the floods of data their machines are producing. That's where computer scientists can save the day. In this article from IEEE Spectrum, two computational biologists explain how they're borrowing big data solutions from companies like Google and Amazon to meet the challenge. An explanation of the scope of the problem, from the article: 'The roughly 2000 sequencing instruments in labs and hospitals around the world can collectively generate about 15 petabytes of compressed genetic data each year. To put this into perspective, if you were to write this data onto standard DVDs, the resulting stack would be more than 2 miles tall. And with sequencing capacity increasing at a rate of around three- to fivefold per year, next year the stack would be around 6 to 10 miles tall. At this rate, within the next five years the stack of DVDs could reach higher than the orbit of the International Space Station.'"


138 comments


At least they're not rolling their own. (4, Interesting)

The_Wilschon (782534) | about a year ago | (#44128807)

In high energy physics, we rolled our own big data solutions (mostly because there was no big data other than us when we did so). It turned out to be terrible.

Re:At least they're not rolling their own. (0)

Anonymous Coward | about a year ago | (#44128905)

Care to comment on lessons learned?

Re:At least they're not rolling their own. (2)

nan0 (620897) | about a year ago | (#44128935)

A brief review of their documentation should shed some light: http://root.cern.ch/root/doc/RootDoc.html [root.cern.ch]

Miles of DVDs? (2)

sanman2 (928866) | about a year ago | (#44130757)

Can't we have more meaningful units?

How many Libraries of Congress is that?

Re:Miles of DVDs? (1)

sarysa (1089739) | about a year ago | (#44131157)

IEEE wrote that? Watch out...if Jesse Ventura runs for president, the prophecy may be fulfilled...

Re:At least they're not rolling their own. (1)

Anonymous Coward | about a year ago | (#44129131)

I rolled my own but forgot what the results were...

Re:At least they're not rolling their own. (2)

stox (131684) | about a year ago | (#44129203)

Being the wake in front of the Bleeding Edge, HEP gets to learn all sorts of lessons before everyone else. As a result, you get to make all the mistakes that everyone else gets to learn from.

Re:At least they're not rolling their own. (2)

bdabautcb (1040566) | about a year ago | (#44129213)

I'm no techie; I programmed some in BASIC as a kid thanks to 3-2-1 Contact, and the last thing I did of note was to put a girl I liked in math class's TI on an infinite loop printing 'I got drunk last weekend and couldn't derive' or some such. I've been running Linux because I inherited a netbook with no disc drive, couldn't get Windows to install from USB, and can't afford a new computer, and I've been reading Slashdot for years and read about USB installs. My question is: is there any movement to use compute cycles at publicly funded data centers, like the one going up in Utah, to crunch big data like this that would benefit the public? Is that even possible in the current vitriolic environment regarding data? I am young, but old enough to remember people fighting over access to processing power just so they could try out new ideas. Often when someone had an idea good enough to warrant investigation, their colleagues would go above and beyond to make a run happen.

Re:At least they're not rolling their own. (4, Informative)

Samantha Wright (1324923) | about a year ago | (#44129343)

I can't comment on the physics data, but in the case of the bio data that the article discusses, we honestly have no idea what to do with it. Most sequencing projects collect an enormous amount of useless information, a little like saving an image of your hard drive every time you screw up grub's boot.lst. We keep it around on the off chance that some of it might be useful in some other way eventually, although there are ongoing concerns that much of the data just won't be high enough quality for some stuff.

On the other hand, a lot of the specialised datasets (like the ones being stored in the article) are meant as baselines, so researchers studying specific problems or populations don't have to go out and get their own information. Researchers working with such data usually have access to various clusters or supercomputers through their institutions; for example, my university gives me access to SciNet [scinethpc.ca]. There's still some vying for access when someone wants to run a really big job, but there are practical alternatives in many cases (such as GPGPU computing).

Also, I'm pretty sure the Utah data centre is kept pretty busy with its NSA business.

Re:At least they're not rolling their own. (0)

Anonymous Coward | about a year ago | (#44129731)

It's obvious, at least to me, that the genome must be organized hierarchically like a tree, the tree of life. The tree is the most efficient way to organize data because it eliminates unnecessary duplication. Of course, wherever there is a tree, there are also branches and wherever there are branches there must be a branch control hierarchy to manage it all. This is where research should focus, IMO.

Re:At least they're not rolling their own. (2)

Samantha Wright (1324923) | about a year ago | (#44129827)

It's a neat thought, but it would never beat the basics. While there are a lot of genes that have common ancestors (called paralogues [wikipedia.org] ), the hierarchical history of these genes is often hard to determine or something that pre-dates human speciation; for example, there's only one species (a weird blob [wikipedia.org] a little like a multi-cellular amoeba) that has a single homeobox gene.

While building a complete evolutionary history of gene families is of great interest to science, it's pointless to try exploiting it for compression when we can just turn to standard string methods; as has been mentioned elsewhere on this story, gzip can be faster than the read/write buffer on standard hard drives. Having to replay an evolutionary history we can only guess at would be a royal pain.

That being said, we can store individuals' genomes as something akin to diff patches, which brings 3.1 gigabytes of raw ASCII down to about 4 MB of high-entropy data, even before compression.
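
To make the "diff patch" idea concrete, here is a minimal sketch of reference-based storage; it is a toy, not the format actually used in practice (real pipelines use VCF/CRAM-style records), and it assumes the individual sequence is already aligned to the reference and has the same length:

    # Toy reference-delta encoder: keep only (position, base) pairs where an
    # aligned individual sequence differs from the reference.
    import struct, zlib

    def encode_diffs(reference: str, individual: str) -> bytes:
        records = []
        for pos, (r, ind) in enumerate(zip(reference, individual)):
            if r != ind:
                # 4-byte position + 1-byte base: ~5 bytes per difference
                records.append(struct.pack(">IB", pos, ord(ind)))
        return zlib.compress(b"".join(records))

    def decode_diffs(reference: str, blob: bytes) -> str:
        seq = list(reference)
        raw = zlib.decompress(blob)
        for i in range(0, len(raw), 5):
            pos, base = struct.unpack(">IB", raw[i:i + 5])
            seq[pos] = chr(base)
        return "".join(seq)

A few million single-base differences at roughly 5 bytes each lands in the megabyte range quoted in this thread (4 MB to 20 MB, depending on what you count).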

Re:At least they're not rolling their own. (1)

K. S. Kyosuke (729550) | about a year ago | (#44130691)

gzip can be faster than the read/write buffer on standard hard drives.

Gzip of what? Chromosome-at-once? Isn't that the wrong way of traversing the data set, if you're aiming for actual compression? More to the point, gzip, if I'm not mistaken, is good for data with 8-bit boundaries. What if the data gets stored in base-4, six bits per triplet/codon? Finally, talking about string algorithms, I'd have thought that the best way of compressing the stuff would involve mapping the extant alleles and storing only references to them in the individual genomes.

Re:At least they're not rolling their own. (1)

msevior (145103) | about a year ago | (#44129901)

But it (mostly) works...

Re:At least they're not rolling their own. (1)

K. S. Kyosuke (729550) | about a year ago | (#44130647)

In high energy physics, we rolled our own big data solutions (mostly because there was no big data other than us when we did so). It turned out to be terrible.

But genetic data isn't particle physics data. It makes perfect sense to roll out a custom "big data" (whatever that crap means) solution because of the very nature of the data stored (at the very least, you will want DNA-specific compression algorithms because there's huge redundancy in the data spread horizontally across the sequenced individuals).

Does it have to be said? (0)

Anonymous Coward | about a year ago | (#44128813)

We've got bigger storage media than DVDs!

obvious solution (1)

Anonymous Coward | about a year ago | (#44128817)

don't store it all on DVDs, then

Bogus units (5, Insightful)

Marcelo Vanzin (2819807) | about a year ago | (#44128835)

Everybody knows we should measure the pile height in Libraries of Congress. Or VW Beetles.

Re:Bogus units (2)

schivvers (823289) | about a year ago | (#44128851)

I thought the standard was "Statue of Liberty" for height, and "Rhode Islands" for area.

Re:Bogus units (0)

Anonymous Coward | about a year ago | (#44128875)

I liked using Texases for area, but Billy Bob Thornton ruined it for everyone.

Re:Bogus units (0)

Anonymous Coward | about a year ago | (#44129129)

I thought Billy Bob Thorntons were measurements of insanity?

Re:Bogus units (0)

Anonymous Coward | about a year ago | (#44129573)

Or we could just do the math on real world storage (fucking crazy I know). A 2TB drive is about $80, so let's just round up and say $50/1TB * 1024TB/1PB * 15PB = $768000. So less than a million for the entire world's worth of data storage. Pretty insignificant compared to the cost of the sequencing equipment really.
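
The same estimate in a couple of lines of Python, using the poster's assumed prices rather than current ones:

    price_per_tb = 50                  # USD, rounded up from an ~$80 2 TB drive
    tb_per_pb = 1024
    yearly_output_pb = 15
    print(price_per_tb * tb_per_pb * yearly_output_pb)   # 768000 -- under $1M,
                                                         # before redundancy/backup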

Who uses DVDs? (0)

Anonymous Coward | about a year ago | (#44128861)

If they put it all on hard drives, it would only be 600 feet tall.

Re:Who uses DVDs? (2)

schivvers (823289) | about a year ago | (#44128879)

Who measures in feet? That's so archaic! Try using something more modern, like Empire State buildings...or Saturn V rockets.

Re:Who uses DVDs? (0)

Anonymous Coward | about a year ago | (#44129145)

Or AC penises.

Re:Who uses DVDs? (1)

Anonymous Coward | about a year ago | (#44129441)

Or AC penises.

We're talking about big size measurements not micro measurements.

Re:Who uses DVDs? (4, Funny)

Samantha Wright (1324923) | about a year ago | (#44129351)

And we can double storage efficiency by using two stacks! Clearly, they need to hire one of us.

Digital DNA storage anyone ? (2, Insightful)

Anonymous Coward | about a year ago | (#44128881)

Why aren't they storing it in digital DNA format? Seems like a pretty efficient data storage format to me! A couple of grams of the stuff should suffice.

Re:Digital DNA storage anyone ? (1)

Anonymous Coward | about a year ago | (#44128971)

That brings up an interesting point. I wonder how they ARE storing it? With 4 possible bases, you should only need two bits per base. So, four bases per byte, with no compression. I hope they aren't just writing out ASCII files or something ...

Re:Digital DNA storage anyone ? (3, Interesting)

Anonymous Coward | about a year ago | (#44128987)

Actually, ASCII files are the easiest to process. And since we generally use a handful of ambiguity codes, it's more like ATGCNX. Due to repetitive segments, GZIP actually works out better than your proposed 2-bit scheme. We do a lot of UNIX piping through GZIP, which is still faster than a magnetic hard drive can retrieve data.
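
A toy illustration of the point about repeats, comparing naive 2-bit packing with gzip of the plain ASCII text; the data here is synthetic, and real reads (with N/X ambiguity codes and FASTQ quality strings) are messier:

    import gzip, random

    CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

    def pack_2bit(seq: str) -> bytes:
        """Pack four bases per byte, two bits each."""
        out = bytearray()
        for i in range(0, len(seq), 4):
            byte = 0
            for j, base in enumerate(seq[i:i + 4]):
                byte |= CODE[base] << (2 * j)
            out.append(byte)
        return bytes(out)

    # A sequence with artificial repeats, standing in for repetitive genomic DNA.
    unit = "".join(random.choice("ACGT") for _ in range(200))
    seq = unit * 500                       # 100,000 bases

    packed   = pack_2bit(seq)              # always 25,000 bytes
    ascii_gz = gzip.compress(seq.encode("ascii"))
    print(len(packed), len(ascii_gz))      # gzip exploits the repeats; 2-bit can't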

Re:Digital DNA storage anyone ? (0)

Anonymous Coward | about a year ago | (#44129265)

Correct, much of the work (especially experimental "poking around" sort of work) is done with ASCII files. I regularly check out data in vim, transform it with sed or awk, and send it back to gzip when I'm done.
But for certain tasks, or for archival purposes, we do have more advanced compression methods. For example, BAM and CRAM for alignments/assemblies, and SRA for unmapped reads. VCF files efficiently summarize known differences from reference strains, collapsing multiple megabases of information into tens of lines. There are also specialized database solutions that allow quick searching/retrieval and somewhat tight storage for several kinds of biological data.

Re:Digital DNA storage anyone ? (3, Interesting)

wezelboy (521844) | about a year ago | (#44129529)

When I had to get the first draft of the human genome onto CD, I used 2-bit substitution and run-length encoding on repeats. gzip definitely did not cut it.
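
A rough reconstruction of the "run-length encoding on repeats" part (my own sketch, not the poster's actual code): collapse runs of the same base into (base, count) pairs before packing.

    def rle(seq: str):
        """Collapse 'AAAATTTTTTGGC' into [('A', 4), ('T', 6), ('G', 2), ('C', 1)]."""
        runs, prev, count = [], seq[0], 1
        for base in seq[1:]:
            if base == prev:
                count += 1
            else:
                runs.append((prev, count))
                prev, count = base, 1
        runs.append((prev, count))
        return runs

    print(rle("AAAATTTTTTGGC"))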

Re:Digital DNA storage anyone ? (1)

mapkinase (958129) | about a year ago | (#44130477)

I did a first draft of the insulin sequence on punch cards.

Re:Digital DNA storage anyone ? (-1)

Anonymous Coward | about a year ago | (#44129923)

"Actually ASCII files are the easiest to process." Huh? Are you doing the processing with humans or computers? A bit is a bit to a machine and more bits is more work. If you are storing your data ASCII because its "easiest to process" you need to fire the old Cobol hack you have designing your software.

(and if you do your work in vi or emacs you are NOT working with "big data")

Re:Digital DNA storage anyone ? (0)

Anonymous Coward | about a year ago | (#44131331)

If your high-throughput operation is at the level of piping an inefficient textual format through UNIX pipes with GZIP... if you are using GZIP for data at rest while having grave issues with storage size, and the data you are working on has a lot of higher-level redundancy that GZIP could not possibly catch... then you are simply not solving your needs with appropriate tools. It sounds like you might be using several orders of magnitude more storage and computing resources than you really need to. Which I guess might be just the way it has to be if hiring high-cost programmers is not acceptable to the people offering the funding while buying high-cost supercomputers is.

Re:Digital DNA storage anyone ? (4, Informative)

the gnat (153162) | about a year ago | (#44129461)

why aren't they storing it in digital DNA format

Because they need to be able to read it back quickly, and error-free. Add to that, it's actually quite expensive to synthesize that much DNA; hard drives are relatively cheap by comparison.

Re:Digital DNA storage anyone ? (0)

Anonymous Coward | about a year ago | (#44129963)

ASCII is fast and error correcting now? I have been long gone!

The problem will solve itself (5, Funny)

Krishnoid (984597) | about a year ago | (#44128931)

To put this into perspective, if you were to write this data onto standard DVDs, the resulting stack would be more than 2 miles tall.

Once that happens, they'll be able to stop storing it on DVDs and move it into the cloud.

Re:The problem will solve itself (1)

c0lo (1497653) | about a year ago | (#44129333)

To put this into perspective, if you were to write this data onto standard DVDs, the resulting stack would be more than 2 miles tall.

Once that happens, they'll be able to stop storing it on DVDs and move it into the cloud.

And before anyone knows, we would have a space elevator within the next 5 years instead of the eternal +25 [xkcd.com] .

Re:The problem will solve itself (1)

Swampash (1131503) | about a year ago | (#44131045)

Please, do continue measuring the massless sizeless thing in units of things with mass and size. It makes lots of sense.

This just goes to show... (3, Informative)

Gavin Scott (15916) | about a year ago | (#44128945)

...what a shitty storage medium DVDs are these days.

A cheap 3TB disk drive burned to DVDs will produce a rather unwieldy tower of disks as well.

G.

Re:This just goes to show... (1)

fonske (1224340) | about a year ago | (#44131125)

I remember the picture of Bruce Dickinson (of heavy metal fame) taking a big bite out of two compact discs filled like a (big) sandwich with all things that make you fat, meant to illustrate the robustness of CDs.
I also remember the feeling when I had to face the fact that I had lost data beyond repair on a CD "backup".

Simple. Get the NSA to do it. (5, Funny)

Anonymous Coward | about a year ago | (#44128963)

Publish a scientific paper stating that potential terrorists or other subversives can be identified via DNA sequencing. The NSA will then covertly collect DNA samples from the entire population and store everyone's genetic profiles in massive databases. Government will spend the trillions of dollars necessary without question. After all, if you are against it, you want another 9/11 to happen.

Re:Simple. Get the NSA to do it. (2)

Tablizer (95088) | about a year ago | (#44129121)

You mean ask the NSA how they've already done it.

Re:Simple. Get the NSA to do it. (1)

DoctorBonzo (2646833) | about a year ago | (#44131215)

Whoever modded this "funny" isn't paranoid enough.

Database Replication (4, Insightful)

VortexCortex (1117377) | about a year ago | (#44129015)

Bit rot is also a big problem with data. So the data has to be reduplicated to keep entropy from destroying it, which means self-corrective metadata must be used. If only there were a highly compact, self-correcting, self-replicating data storage system with 1's and 0's the size of small molecules...

My greatest fear is that when we meet the aliens, they'll laugh, stick us in a holographic projector, and gather around to watch the vintage porn encoded in our DNA.

Re:Database Replication (3, Funny)

nextekcarl (1402899) | about a year ago | (#44129137)

I propose we call this new data method Data Neutral Assembly.

Re:Database Replication (1)

phriot (2736421) | about a year ago | (#44129187)

If only there were a highly compact, self-correcting, self-replicating data storage system with 1's and 0's the size of small molecules...

In the future, if sequencing becomes extremely fast and cheap, it might make sense to discard sequencing data after analysis and leave DNA in its original format for storage. That said, if the colony of (bacteria/yeast/whatever you are maintaining your library in) that you happen to pick when you grow up a new batch to maintain the cell line happened to pick up a mutation in your gene of interest, you won't know until you sequence it again. I'm a graduate student in a small academic lab, and if I want to "access my stored gene data" in the way you suggest, I need to:

1) Grow an overnight culture from my freezer stock of E. coli carrying a plasmid with my gene of interest inserted in it.
2) Isolate the plasmid DNA.
3) Take a reading on a spectrometer to determine DNA concentration.
4) Prepare a sample for sequencing at the concentration the Core Facility prefers.
5) Fill out an order form for sequencing.
6) Walk the sample over to the Core Facility.
7) Wait 1 to 3 days to get my sequence data back.

I can pull up the FASTA file I have from the last time I got this gene sequenced in about 15 seconds.

Re:Database Replication (2)

c0lo (1497653) | about a year ago | (#44129357)

Bit rot is also a big problem with data.

Take a whiff of a piece of meat after 2 weeks at room temperature and compare it with how a DVD smells after the same time.
Complaints about bit rot accepted only after the experiment.

Nice amount of intellectual capacity by evolution (0)

Anonymous Coward | about a year ago | (#44129025)

Pretty amazing amount of intellectual capacity, generated by evolution.

Re:Nice amount of intellectual capacity by evoluti (2)

hedwards (940851) | about a year ago | (#44129181)

Yes, but that took millions of years to develop the simplest versions.

It's astonishing that it took humans only a few millennia to get to that point on our own.

Just use DNA to store the data. (0)

Anonymous Coward | about a year ago | (#44129047)

Problem solved.

2000 devices make a lot of data (2)

hawguy (1600213) | about a year ago | (#44129057)

It seems a little overly sensationalist to aggregate the devices together when determining the storage size, just to make such a dramatic 2-mile-high tower of DVDs... If you look at them individually, it's not that much data:

(15 x 10^15 bytes) / (2000 devices) / (10^9 bytes/GB) = 7500 GB per device, or 7.5 TB

That's a stack of 4 TB hard drives 2 inches high. Or, if you must use DVDs, that's a stack of 1600 DVDs 2 meters high.
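
Spelled out, with the same decimal units:

    total_bytes = 15e15                   # ~15 PB/year across all instruments
    devices = 2000
    print(total_bytes / devices / 1e12)   # 7.5 TB per instrument per year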

LHC (1)

Azure Flash (2440904) | about a year ago | (#44129069)

The LHC generates a shitton of data as well, but from what I've seen (something like this [youtube.com] ) they use extremely fast integrated circuits to skim the data. Perhaps geneticists could use a similar technique.

Hmmm, shitton... (1)

DoctorBonzo (2646833) | about a year ago | (#44131259)

pronounced shi-TAWN?

Storage Non-Problem - Sequences Compresses to MBs (5, Informative)

esten (1024885) | about a year ago | (#44129089)

Storage is not the problem. Computational power is.

Each genetic sequence is ~3 GB, but since sequences between individuals are very similar, it is possible to compress them by recording only the differences from a reference sequence, making each genome ~20 MB. This means you could store a sequence for everybody in the world in ~132 PB, or 0.05% of total worldwide data storage (295 exabytes).

Now the real challenge is more in having enough computational power to read and process the 3-billion-letter genetic sequence and in designing effective algorithms to process this data.

More info on compression of genomic sequences
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3074166/ [nih.gov]
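
A back-of-the-envelope version of that estimate; the population figure is an assumption of roughly 7 billion people:

    genome_delta_mb = 20
    population = 7.0e9
    total_pb = genome_delta_mb * 1e6 * population / 1e15
    print(total_pb)                        # ~140 PB, the same ballpark as ~132 PB
    print(100 * total_pb / (295 * 1000))   # ~0.05% of 295 EB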

Re:Storage Non-Problem - Sequences Compresses to M (0)

Anonymous Coward | about a year ago | (#44129205)

Yes, storing the finished results of human genomic sequencing is a solved problem. The interim data, though, is massive. Also, not every sequencing experiment is of good ol' Homo sapiens.

Re:Storage Non-Problem - Sequences Compresses to M (1)

Anonymous Coward | about a year ago | (#44129257)

Each genetic sequence is ~3 GB, but since sequences between individuals are very similar, it is possible to compress them by recording only the differences from a reference sequence, making each genome ~20 MB.

That's true, but the problem is that, as a good scientist, you are required by most journals and universities to keep the original sequence data coming off these high-throughput sequencers (aka the fastq files) so that you can show your work, so to speak, if it ever comes into question. These files often contain 30-40x coverage of your 3 Gb reference sequence and, even compressed, are still several GB in size. Additionally, because these large-scale sequencing projects cost millions of dollars, the NIH isn't going to be happy if you lose the data due to drive failure, so you'll need that data duplicated using a RAID setup and offsite backup. So storage is actually a huge problem.

Re:Storage Non-Problem - Sequences Compresses to M (1)

timeOday (582209) | about a year ago | (#44129299)

That's your own germline DNA. But it would be cool to get the distinct sequences of all the cells in your body. Most of those cells (by count, not mass) are various microorganisms, lots in your gut, or infections that are making you sick or wearing down your immune system, or latent conditions like HIV or HPV, and you would see a few strains of precancerous/cancerous cells evolving too. Taken altogether that would be a huge amount of DNA. But I guess a lot of the distinct genomes are localized and you couldn't sample them easily.

Re:Storage Non-Problem - Sequences Compresses to M (3, Informative)

B1ackDragon (543470) | about a year ago | (#44129449)

This is very much the case. I work as a bioinformatician at a sequencing center, and I would say we see around 50-100G of sequence data for the average run/experiment, which isn't really so bad, certainly not compared to the high energy physics crowd, and given a decent network. The trick is what we want to do with the data: some of the processes are embarrassingly parallel, but many algorithms don't lend themselves to that sort of thing. We have a few 1 TB RAM machines, and even those are limiting in some cases. Many of the problems are NP-hard, and even for the heuristics we'd ideally use superlinear algorithms, but we can't have that either; it's near-linear time (and memory) or bust, which sucks.

I'm actually really looking forward to a vast reduction in dataset size and cost in the life sciences, so we can make use of and design better algorithmic methods and get back to answering questions. That's up to the engineers designing the sequencing machines, though.

Re:Storage Non-Problem - Sequences Compresses to M (3, Interesting)

Anonymous Coward | about a year ago | (#44129661)

A single finished genome is not the problem. It is the raw data.

The problem is that any time you sequence a new individual's genome for a species that already has a genome assembly, you need a minimum of 5x coverage across the genome to reliably find variation. Because of variation in coverage, that means you may have to shoot for >20x coverage to find all the variation. The problem is more complex when you are trying to de novo assemble a genome for a species that does NOT have a genome assembly. In this case, you often have to aim for at least 40x coverage (and the 100x range may be better).

To get the data, we use next-gen sequencing. To give you an idea of the data output, a single Illumina HiSeq 2000 run produces 3 billion reads. Each "read" is a pair of genomic fragments 100 bases long. That means 600,000,000,000 bases are produced in a single run. The run is stored as a .fastq file, meaning that each base is stored as an ASCII character and has an associated quality score stored as another ASCII character. So that's 1.2 trillion ASCII characters for a single run, or about 1.09 terabytes uncompressed. This does not include the storage for the (incompressible) images taken by the sequencing machine in order to call the bases. They can be an order of magnitude larger. A single experiment may involve dozens of such runs.

There is an expectation that these runs will be made available in a public repository when an analysis is published. That puts great stress on places like NIH, where 1.7 quadrillion raw bases have been uploaded in about the last four years:
http://www.ncbi.nlm.nih.gov/Traces/sra/ [nih.gov]

You are correct when you say that computational power is a bigger problem, but again, this is not related to the three billion bases of the genome, which is trivial in size. Once again, the problem is the raw data. When assembling a new species' genome from scratch, you somehow have to reassemble those 3 billion pairs of 100-base reads. The way that is done is by hashing every single read into pieces about 21 nucleotides long, then storing them all, creating a de Bruijn graph, and navigating through it. The amount of RAM required for this is absolutely insane.
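
A minimal sketch of the k-mer step described above: split each read into overlapping 21-mers and count them, which is the memory-hungry core of building a de Bruijn graph (each distinct k-mer becomes a dictionary key, which is why RAM explodes at billions of reads). Real assemblers use far more compact structures.

    from collections import defaultdict

    K = 21

    def kmer_counts(reads):
        counts = defaultdict(int)
        for read in reads:
            for i in range(len(read) - K + 1):
                counts[read[i:i + K]] += 1
        return counts

    def debruijn_edges(counts):
        # Each k-mer is an edge from its (k-1)-prefix node to its (k-1)-suffix node.
        return [(kmer[:-1], kmer[1:]) for kmer in counts]

    reads = ["ACGTACGTACGTACGTACGTACGTA"]   # stand-in for billions of 100-base reads
    counts = kmer_counts(reads)
    print(len(counts), len(debruijn_edges(counts)))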

Re:Storage Non-Problem - Sequences Compresses to M (0)

Anonymous Coward | about a year ago | (#44129683)

Actually, the larger problem at the moment is not so much computational power; it's the extremely large amounts of memory (TB range) needed to assemble the genomes. 1000-core computers are fairly common; ones with 2 TB of memory available for weeks at a time are not.

Oddly enough, this won't remain a problem, as newer machines produce longer reads, which can be assembled with less memory.

Re:Storage Non-Problem - Sequences Compresses to M (1)

Kjella (173770) | about a year ago | (#44130399)

Each genetic sequence is ~3 GB, but since sequences between individuals are very similar, it is possible to compress them by recording only the differences from a reference sequence, making each genome ~20 MB. This means you could store a sequence for everybody in the world in ~132 PB, or 0.05% of total worldwide data storage (295 exabytes).

For a single delta to a reference, but there's probably lots of redundancy in the deltas. If you have a tree/set of variations (Base human + "typical" Asian + "typical" Japanese + "typical" Okinawa + encoding the diff) you can probably bring the world estimate down by a few orders of magnitude, depending on how much is systematic and how much is unique to the individual.

Re:Storage Non-Problem - Sequences Compresses to M (1)

mapkinase (958129) | about a year ago | (#44130495)

Does it say anything about also handling internal repeats this way?

The answer is obvious! (3, Funny)

plopez (54068) | about a year ago | (#44129097)

They should use a NoSQL multi-shard vertically integrated stack with a RESTful Rails-driven in-memory virtual multi-parallel JPython-enabled solution.

Bingo!

Re:The answer is obvious! (1)

Tablizer (95088) | about a year ago | (#44129135)

They should use a NoSQL multi-shard vertically integrated stack with a RESTful Rails-driven in-memory virtual multi-parallel JPython-enabled solution.

Brog, that tech stack is like soooo month-ago

Re:The answer is obvious! (0)

Anonymous Coward | about a year ago | (#44129241)

Is it web-scale?

http://www.youtube.com/watch?v=b2F-DItXtZs

Re:The answer is obvious! (1)

K. S. Kyosuke (729550) | about a year ago | (#44130723)

They should use a NoSQL multi-shard vertically intgrated stack with a RESTfull rails driven in-memory virtual multi-parallel JPython enabled solution.

Sounds like the technological equivalent of the human body => sounds about right!

AO-Hell metrics... (1)

geekmux (1040042) | about a year ago | (#44129107)

"...At this rate, within the next five years the stack of DVDs could reach higher than the orbit of the International Space Station."

And another 10 years after that, the number of DVDs used will have almost reached the number of AOL CDs sitting in landfills.

Sorry, couldn't help myself with the use of such an absurd metric. Not like we haven't moved on to other forms of storage the size of a human thumbnail that offer 15x the density of a DVD...

Re:AO-Hell metrics... (1)

Samantha Wright (1324923) | about a year ago | (#44129379)

I think you mean "exciting and hitherto unleveraged microwaveable coaster opportunities."

I have a solution, molecular storage (1)

Proudrooster (580120) | about a year ago | (#44129189)

If there were only some way to store the information encoded in DNA in a molecular level storage device... oh wait, face palm.

Oddly... I have a clue about this stuff lately (5, Interesting)

WaywardGeek (1480513) | about a year ago | (#44129211)

Please... entire DNA genomes are tiny... on the order of 1Gb, with no compression. Taking into account the huge similarities to published genomes, we can compress that by at least 1000X. What they are talking about is the huge amount of data spit out by the sequencing machines in order to determine your genome. Once determined, it's tiny.

That said, what I need is raw machine data. I'm having to do my own little exome research project. My family has a very rare form of X-linked color blindness that is most likely caused by a single gene defect on our X chromosome. It's no big deal, but now I'm losing central vision, with symptoms most similar to late-onset Stargardt's disease. My UNC ophthalmologist beat the experts at Johns Hopkins and Jacksonville's hospital and made the correct call, directly refuting the other doctor's diagnosis of Stargardt's. She thought I had something else and that my DNA would prove it. She gave me the opportunity to have my exome sequenced, and she was right.

So, I've got something pretty horrible, and my ophthalmologist thinks it's most likely related to my unusual form of color blindness. My daughter carries this gene, as does my cousin and one of her sons. Gene research to the rescue?!? Unfortunately... no. There are simply too few people like us. So... being a slashdot sort of geek who refuses to give up, I'm running my own study. Actually, the UNC researchers wanted to work with me... all I'd have to do is bring my extended family from California to Chapel Hill a couple of times over a couple of years and have them see doctors at UNC. There's simply no way I could make that happen.

Innovative companies to the rescue... This morning, Axeq, a company headquartered in MD, received my family's DNA for exome sequencing at their Korean lab. They ran an exome sequencing special in April: $600 per exome, with an order size minimum of six. They have been great to work with, and accepted my order for only four. Bioserve, also in MD, did the DNA extraction from whole blood, and they have been even more helpful. The blood extraction labs were also incredibly helpful, once we found the right places (very emphatically not Labcorp or Quest Diagnostics). The Stanford clinic lab manager was unbelievably helpful, and in LA, the lab director at the San Antonio Hospital Lab went way overboard. So far, I have to give Axeq and Bioserve five stars out of five, and the blood draw labs deserve a six.

Assuming I get what I'm expecting, I'll get a library of matched genes, and also all the raw machine output data, for four relatives. The output data is what I really need, since our particular mutation is not currently in the gene database. Once I get all the data, I'll need to do a bit of coding to see if I can identify the mutation. Unfortunately, there are several ways that this could be impossible. For example, "copy number variations", or CNVs, if they go on for over a few hundred base pairs, are unable to be detected with current technology. Ah... the life of a geek. This is yet another field I have to get familiar with...

Re:Oddly... I have a clue about this stuff lately (1)

Samantha Wright (1324923) | about a year ago | (#44129405)

CNVs actually can be detected if you have enough read depth; it's just that most assemblers are too stupid (or, in computer science terms, "algorithmically beautiful") to account for them. SAMTools can generate a coverage/pileup graph without too much hassle, and it should be obvious where significant differences in copy number occur.

(Also, the human genome is about 3.1 gigabases, so about 3.1 GB in FASTA format. De novo assemblies will tend to be smaller because they can't deal with duplications.)
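
A hedged sketch of the coverage-based approach: given per-position depth (for example, the three-column chrom/pos/depth text that `samtools depth` prints), average it in fixed windows and compare each window with the genome-wide mean. The window size and the diploid (2x) baseline are illustrative choices, not a recommendation.

    import sys
    from collections import defaultdict

    WINDOW = 10_000

    def window_depths(lines):
        totals, counts = defaultdict(int), defaultdict(int)
        for line in lines:
            parts = line.split()
            if len(parts) < 3:
                continue
            chrom, pos, depth = parts[0], int(parts[1]), int(parts[2])
            win = (chrom, pos // WINDOW)
            totals[win] += depth
            counts[win] += 1
        return {win: totals[win] / counts[win] for win in totals}

    def copy_number_estimates(depths):
        genome_mean = sum(depths.values()) / len(depths)
        # Assuming a diploid baseline: copy number ~ 2 * window mean / genome mean
        return {win: 2 * d / genome_mean for win, d in depths.items()}

    if __name__ == "__main__":
        windows = window_depths(sys.stdin)
        for (chrom, idx), cn in sorted(copy_number_estimates(windows).items()):
            print(chrom, idx * WINDOW, round(cn, 2))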

Re:Oddly... I have a clue about this stuff lately (1)

the biologist (1659443) | about a year ago | (#44129693)

I agree, CNVs are really easy to detect if you have the read depth. I've been using the samtools pileup output to show CNVs in my study organism. However, to make the results mean anything to most people, I've got to do a few more steps of processing to get all that data in a nice visual format.

If you don't have the read depth, you lose the ability to discriminate small CNVs from noise. Large CNVs, such as for whole chromosomes, are readily observed even in datasets with minimal coverage.

Re:Oddly... I have a clue about this stuff lately (1)

B1ackDragon (543470) | about a year ago | (#44129477)

Dang, this is cool. In a post above I mentioned I work as a bioinformatician at a sequencing center--if you need guidance or advice, contact me and I'll see if I can point you in the right direction!

Re:Oddly... I have a clue about this stuff lately (1)

amacbride (156394) | about a year ago | (#44129669)

As this sort of thing is my day job [stationxinc.com] , I find this sort of thing really cool, and I'd be happy to help if I can. I'd recommend looking into snpEff [sourceforge.net] , it's pretty straightforward to use, and is available on SourceForge. (Feel free to track me down and message me, I think we overlapped at Cal [berkeley.edu] .)

Re:Oddly... I have a clue about this stuff lately (0)

Anonymous Coward | about a year ago | (#44129691)

That sucks man. I'd suggest finding a bioinformatics grad student to help you. It's much trickier than you think and you'll go blind before you figure it out.

Someone who knows what they're doing could do it in weeks; it would take you much longer just to begin figuring it out.

Re:Oddly... I have a clue about this stuff lately (1)

mapkinase (958129) | about a year ago | (#44130509)

>I need is raw machine data

Too bad genome centers disagree with you (I, au contraire, agree with you). We need raw NMR data for structures as well.

Re:Oddly... I have a clue about this stuff lately (0)

Anonymous Coward | about a year ago | (#44130735)

The problem is not compressing the sequence data efficiently. The problem is compressing the quality values. (DNA sequencing is not 100% perfect; each base that comes off the machine comes with an error estimation.) The error rates tend to be quite random from base to base, and so are hard to compress. They also make up the bulk of the data (typically 5-8 bytes per base of sequence)

The quality values are really important for many applications. Without them, you can't tell whether your rare SNP is real, or simply sequencing error...
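
One common mitigation for exactly this problem is to bin the per-base quality scores into a few levels so they compress better; the bin edges below are illustrative, not any vendor's actual scheme, and binning is lossy (it trades error-estimate precision for space):

    import gzip, random

    BINS = [(0, 2), (3, 14), (15, 30), (31, 41)]   # (low, high) Phred ranges

    def bin_quality(q: int) -> int:
        for low, high in BINS:
            if low <= q <= high:
                return high                 # represent the whole bin by one value
        return q

    quals  = [random.randint(0, 41) for _ in range(100_000)]   # fake Phred scores
    raw    = bytes(q + 33 for q in quals)                      # FASTQ-style ASCII
    binned = bytes(bin_quality(q) + 33 for q in quals)
    print(len(gzip.compress(raw)), len(gzip.compress(binned))) # binned is smaller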

Re:Oddly... I have a clue about this stuff lately (0)

Anonymous Coward | about a year ago | (#44131339)

Final assembly might end up being something like 1 GB, but nowadays it's made from short reads that easily amount to 500 GB.

To put this into perspective (1)

khchung (462899) | about a year ago | (#44129323)

To put this into perspective, if you were to write this data onto standard DVDs, the resulting stack would be more than 2 miles tall.

NO. This does not put anything "into perspective"; it just means "a lot of data" to the average Joe.

To put it into useful perspective, we should compare it with the large data volumes encountered in other sciences, such as the 25 PB per year from the LHC. And that's after aggressively discarding collisions that don't look promising in the first pass; it would be orders of magnitude bigger otherwise.

But now just 15PB per year doesn't look that newsworthy, eh?

Re:To put this into perspective (1)

Samantha Wright (1324923) | about a year ago | (#44129427)

Well, if you really need to have that kind of contest...

The data files being discussed are text files generated as summaries of the raw sensor data from the sequencing machine. In the case of Illumina systems, the raw data consists of a huge high-resolution image; different colours in the image are interpreted as different nucleotides, and each pixel is interpreted as the location of a short fragment of DNA. (Think embarrassingly parallel multithreading.)

If we were to keep and store all of this raw data, the storage requirements would probably be a thousand to a million times what they currently are—to say nothing of the other kinds of biological data that's captured on a regular basis, like raw microarray images.

Re:To put this into perspective (1)

khchung (462899) | about a year ago | (#44131049)

Not really trying to turn it into a contest, just "to put this into perspective". More or less, the point is that other science projects have been dealing with similar data volumes for a few years already; if there is anything newsworthy about this "DNA Data Deluge", it had better be something more than just the data volume.

Yet more perspective (0)

Anonymous Coward | about a year ago | (#44129373)

Or, to put it in even better perspective, if you encoded each bit in a Rubik’s cube and stacked them end to end, it would stretch for half a light year. With the data increasing 5x every year, in less than 2 years the stack will reach to Alpha Centauri and back. In under 8 years, it will be the width of the galaxy. In 16 years, it will be as wide as the entire universe.

Or perhaps a more reasonable perspective is to realize that the entire genetic data collected each year by all sequencers in all labs and hospitals in the world, if stored on SATA disks, would fit in a Subaru Forester.

a straightforward solution (1)

mtrachtenberg (67780) | about a year ago | (#44129613)

"At this rate, within the next five years the stack of DVDs could reach higher than the orbit of the International Space Station."

Use more than one stack. You're welcome.

Moores Law? (0)

Anonymous Coward | about a year ago | (#44129721)

I never knew Moore's law (which applies to transistor count) now applies to the cost of DNA sequencing....

Here's hoping Moore's law applies to the housing market soon too!

600 bytes per person (1)

bob_jenkins (144606) | about a year ago | (#44129783)

If you have the genomes of your parents, and your own genome, yours is about 70 new spot mutations, about 60 crossovers, and you have to specify who your parents were. About 600 bytes of new information per person. You could store the genomes of the entire human race on a couple terabytes if you knew the family trees well enough. I tried to nail down the statistics for that in http://burtleburtle.net/bob/future/geninfo.html [burtleburtle.net] .
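
One plausible way the ~600 bytes could break down (my own back-of-the-envelope, not the linked analysis): positions in a ~3.2-billion-base genome need about 32 bits each.

    import math

    genome_len = 3.2e9
    bits_per_pos = math.ceil(math.log2(genome_len))        # 32 bits

    new_mutations = 70 * (bits_per_pos + 2) / 8            # position + new base
    crossovers    = 60 * bits_per_pos / 8                  # breakpoint positions
    parents       = 2 * 8                                  # two parent identifiers
    total_bytes   = new_mutations + crossovers + parents
    print(round(total_bytes))                              # a few hundred bytes/person
    print(round(total_bytes * 7e9 / 1e12, 1), "TB")        # a few TB for ~7 billion people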

Yay, AdEnine & 1 click splicing (4, Funny)

Charles Jo (2946783) | about a year ago | (#44129955)

Scientists who viewed this sequence also viewed these sequences...

Good thing there are new algorithms (0)

Anonymous Coward | about a year ago | (#44130031)

As an example, I used to build phylogenetic trees with an algorithm called RAxML. Along comes something called FastTree, which is just as accurate or nearly so, but 1-2 orders of magnitude faster.

Re:Good thing there are new algorithms (1)

K. S. Kyosuke (729550) | about a year ago | (#44130745)

That's probably because it doesn't have "XML" in its name!

doubling doubling (1)

mlush (620447) | about a year ago | (#44130057)

The amount of biological data doubles every 9 months, while processing power doubles every 18 months; we have already reached the crossover point.
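
Assuming both trends hold, the gap compounds quickly: data grows 2^(12/9) ≈ 2.52x per year, processing power 2^(12/18) ≈ 1.59x per year, so the data-to-compute ratio grows about 1.6x every year past the crossover.

    data_per_year    = 2 ** (12 / 9)      # doubling every 9 months
    compute_per_year = 2 ** (12 / 18)     # doubling every 18 months
    print(round(data_per_year / compute_per_year, 2))   # ~1.59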

Actually, it's only 15 discs per year (0)

Anonymous Coward | about a year ago | (#44130211)

15 discs is enough, according to the article [slashdot.org] on storing 1 petabyte on a single DVD we had just one week ago. Problem solved!

I'll send my invoice later (1)

David Govett (2825317) | about a year ago | (#44130375)

Why not have a reference genome? For everyone else, simply store deviations from the reference. Seems a possibility.

Compression by Reference (1)

sowalsky (142308) | about a year ago | (#44130441)

The Sequence Read Archives (such as the one hosted by NCBI), as repositories for this sequencing data, use "compression by reference," a highly efficient way to compress and store a lot of the data. The raw data that comes off these sequencers is often >99% homologous to the reference genome (such as human, etc.), so the most efficient way to compress and store this data is to record only what is different between the sequence output and the reference genome.

We need to find new approaches ... (1)

Ihlosi (895663) | about a year ago | (#44130705)

... to store all this data.

I suggest storing it in molecular form, as pairs of four different bases (guanine, thymine, adenine, cytosine) combined into an aesthetically pleasing, double-helical molecule!

Re:We need to find new approaches ... (1)

byrtolet (1353359) | about a year ago | (#44130819)

If it is easy to sequence (and DNA is easy to replicate), this is indeed the best storage!

Perspective? (1)

jbmartin6 (1232050) | about a year ago | (#44131059)

To put this into perspective, if you were to write this data onto standard DVDs, the resulting stack would be more than 2 miles tall.

This doesn't put it into perspective at all. What is a DVD and why would I put data on it?

Why Bother (1)

morgauxo (974071) | about a year ago | (#44131335)

It's not like they can use the data; it all is or soon will be patented! Even the patent holders are SOL, because anything their bit of patented gene interacts with is patented by someone else. What a lovely system we have!
