Storage Dilemma Looms for NASA

CmdrTaco posted more than 15 years ago | from the this-is-gonna-get-crazy dept.

Science 75

John Keeton writes "Guys, this story talks about how NASA is moving its data from tapes as old as seven-track reels to newer media, but by the time they get done they'll have to start moving it again to still newer media. They're falling behind and may lose terabytes' worth of data. Really interesting." It says it will take them 4 years to move all the data to tapes that have a 6-year life expectancy. Hmmm.

75 comments

What about optical (0)

Anonymous Coward | more than 15 years ago | (#2031177)

As the time involved to transfer the data is the crucial factor here (as well as cost), optical just does not make sense. They are talking 4 years to move over their tapes... how long would it take to do it with a 4x CD-R or DVD?

And talk about interchangeable standards and formats... think those supercomputers can read UDF DVDs?

Wow! That explains it! (0)

Anonymous Coward | more than 15 years ago | (#2031178)

No wonder they are so lagged! But shouldn't vger be farther than 1 light-day away?

Also, isn't it impossible to use TCP/IP over distances farther than 2 light-minutes? (max RTT) Or is vger.rutgers.edu some kind of proxy?

Huge amounts of data (0)

Anonymous Coward | more than 15 years ago | (#2031179)

Dozens and dozens of terabytes!

It's worse than that. Consider this from the article:

"The task is multiplied for all of NASA, which holds thousands of terabytes of data..."

--
Jason Eric Pierce

20+ (0)

Anonymous Coward | more than 15 years ago | (#2031180)

Before or after post-processing? You guys don't save the bit stream from the satellite, do you?! My dept at Ford Motor runs a lot of engine dyno tests which generate huge amounts of data. Most of it is extremely "boring" and once post-processed fits easily on a CD. Isn't NASA's situation similar?

Optical? (0)

Anonymous Coward | more than 15 years ago | (#2031181)

I have worked at a corporate data center. It was for a medium sized insurance company in the midwest. They employ about 12,000 people. Quite modest compared to some of the mammoths like Prudential and Aetna.

The data center had a tape library of over 500,000 tape cartridges. Each tape could hold at least 210 MB of data. I think these were the 9 track tapes. The 18 tracks could hold more. We had 16 large robotic tape silos which each housed 5,500 cartridges, eight tape drives and two robot hands to mount/unmount the tapes. The place was staffed 24/7 and it never stopped. Those robots were always loading and unloading.

And this was just the mainframe stuff. We had some smaller silos for the AS/400's, Unix and NT boxen. I imagine NASA has lots more data than we did.

Spin the data off to an "internet" company (0)

Anonymous Coward | more than 15 years ago | (#2031182)

nasa to company: "here's a shitload of tapes; put them on the internet and see if you make any money"

company to nasa: "uhhhh, ok."

wallstreet in general: "oh boy, i WANT some of that!!!"

It's either that or start funding those holographic storage projects.

20+ What an interesting way to inflate a budget!!! (0)

Anonymous Coward | more than 15 years ago | (#2031183)

So, that's what drives the number of data collection gizmos per square inch parameter. I always thought it was the cost-per-pound launch metric. No wonder satellites cost so much.

Say, you NASA guys are beginning to catch on to some lessons that the NSA learned years ago. Be sure to interleave classified data streams across all your media so that future budget requirements become non-optional for those critters in Congress (gotta love those black budgets that JUST WON'T DIE).

BTW, do you factor Moore's Law into the data storage budget requirement or just the need to have high speed/density data output from the spacecraft? Don't bother answering... the main topic of this thread makes the answer obvious.

If you do factor Moore's Law into future storage requirements, remember that bulky gear requires lots of space to store. If you WORK with the equipment vendors you can usually get OTS gear bulked up by 2x. Plus, if you have the right contacts, you can get kickbacks for leasing secure storage facilities from your favorite friends.

Oh boy... fun with numbers at the taxpayer's expense.

Funny, you have all that data collection and you still can't get an antenna to deploy correctly or mirrors without defects or... sometimes I really wonder if you guys ARE aliens.

"Storage speed has only 4x in last decade?" (0)

Anonymous Coward | more than 15 years ago | (#2031184)

They are kidding, right? I ran the numbers from their article: 28 Tbytes/4 years = 221 Kbytes/sec.

Ummmmm.....

Figuring that NASA's data is probably uncompressed, we can pack 70 GB per DLT tape *today*, with a write speed approaching 10MB/sec.

At *those* specs, you can duplicate 100TB of info in 115 days on a mere 1429 tapes. With one DLT drive. With 10 DLT drives, that drops to 11.5 days. With 100 DLT drives, you can dup the entire dataset every other day. That is with *TODAY'S* technology. What's the issue?

Are they being stupid and copying 9track->9track instead of 9track->modern media? Is the *REAL* issue that NASA is unwilling to give up a 20 year old storage technology?
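A quick sketch in Python of the arithmetic above. The 70 GB per cartridge and 10 MB/s figures are the post's; everything else is just unit conversion:

    # Back-of-the-envelope check of the DLT numbers quoted above.
    TOTAL_MB = 100 * 1_000_000            # 100 TB, expressed in MB
    TAPE_MB = 70 * 1000                   # 70 GB per DLT cartridge (post's figure)
    WRITE_MB_S = 10                       # ~10 MB/s write speed (post's figure)

    tapes = TOTAL_MB / TAPE_MB                        # ~1429 cartridges
    days_one_drive = TOTAL_MB / WRITE_MB_S / 86_400   # ~116 days
    article_kb_s = 28 * 1_000_000_000 / (4 * 365 * 86_400)

    print(f"tapes needed:           {tapes:.0f}")
    print(f"days with one drive:    {days_one_drive:.0f}")
    print(f"days with ten drives:   {days_one_drive / 10:.1f}")
    print(f"article's implied rate: ~{article_kb_s:.0f} KB/s")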

NASA is cheap.. (0)

Anonymous Coward | more than 15 years ago | (#2031185)

It can be done... [atlp.com]

20+ (0)

Anonymous Coward | more than 15 years ago | (#2031186)

This is orbiting Earth, right? A satellite, I would guess; otherwise you've got a much bigger problem than data storage: you've got to transmit that data.

CDs? (0)

Anonymous Coward | more than 15 years ago | (#2031187)

Under proper conditions (i.e. temperature and humidity controlled, shielded from ultraviolet and other forms of radiation) almost any media can be stored indefinitely, including magnetic tape. The problem is having a warehouse with the proper storage conditions. And NASA's probably gonna need a really, really big warehouse. That's a lot of money for data that they're not using and not likely to use any time in the foreseeable future.

I imagine what NASA would really like is a storage medium that doesn't require a lot of effort for storage. In "normal" warehouse conditions (e.g. the ones in my tin tool shed in the yard) a CD-R will only last 5 years before noticeable amounts of data are lost; regular CDs and DVDs are only guaranteed to last 10 years.

"Storage speed has only 4x in last decade?" (0)

Anonymous Coward | more than 15 years ago | (#2031188)

Judging by their estimate, around 220 KB/sec per drive. So you put multiple drives in parallel, use an intermediate disk to smooth the flow, and feed from it to the DLT. 50 9-tracks -> 1 hard drive -> 1 DLT. The 50 drives stagger start times by 1/50th of the time to read one tape. One complete 9-track tape becomes available in the same time it takes to write the contents of one 9-track tape to the DLT.

Aggregation and pipelining. We aren't talking rocket science (well...ok...maybe we are ;) ).
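A minimal sketch of the pipeline being described, with the per-drive rate taken from the estimate above and the DLT rate assumed to be roughly 10 MB/s:

    # N slow 9-track readers feed a staging disk; one DLT drive drains it.
    READER_KB_S = 220          # per 9-track drive (the ~220 KB/s estimate above)
    DLT_WRITE_KB_S = 10_000    # assumed modern DLT drive, ~10 MB/s
    N_READERS = 50

    aggregate_in = N_READERS * READER_KB_S          # ~11,000 KB/s into the disk
    throughput = min(aggregate_in, DLT_WRITE_KB_S)  # the slower side wins

    print(f"aggregate read rate: {aggregate_in} KB/s")
    print(f"pipeline throughput: {throughput} KB/s")
    # With 50 readers the input (~11 MB/s) slightly exceeds one DLT's ~10 MB/s,
    # so the staging disk stays full and the DLT never starves -- which is the
    # point of staggering the reader start times.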

Just like GRO. (0)

Anonymous Coward | more than 15 years ago | (#2031189)

To those of us working on AXAF, it will always be AXAF. Just like for the people that worked on the Compton GRO, it will always be Gee R Oh. No disrespect for the work of Chandra, it's just that habits die hard.

What about stones? (0)

Anonymous Coward | more than 15 years ago | (#2031190)

This is not fantasy. Such devices *are* being developed.

Two Words (0)

Anonymous Coward | more than 15 years ago | (#2031191)

Summer Intern

JWitt

problem in general (0)

Anonymous Coward | more than 15 years ago | (#2031192)

This is a bigger problem in general. Librarians are faced with it constantly. There was an article in Scientific American last year.

What to do with those Polaroid(tm) pictures of your childhood? They only last a few years before they fade. Well, copy them to Kodachrome(tm) slides; they last 25 years if stored in the dark at an ideal constant temperature. So in your 40's you make dupes of those slides to pass on to your kids......

Same with digital media (but hopefully no loss when copying). What about your personal backups? Got them all on a removable disk? What to do in 10 years when that format is no longer available.....

Two possible solutions (0)

Anonymous Coward | more than 15 years ago | (#2031193)

No, we don't know for sure, but you can do some statistics, analyze discs, and come up with a good estimate.

Hard drive manufacturers have been posting MTBFs for drives in the quarter-million-hour range and up for years. They do the same thing. You build 10000 discs, you run them through an electron microscope. You handle them for a few days each, you run them through the microscope again and try to detect changes. You do this until you can extrapolate how long it will take for a disc to oxidize or rust.

From what I've heard, CD media is so good that they have to make assumptions to come up with 100 years. If you take good care of them they could last a lot longer than that.

Why Tape? (0)

Anonymous Coward | more than 15 years ago | (#2031194)

You messed up a little:
2400 ft per reel
x 10 usable inches per foot (2 inches for several interrecord gaps per foot)
x 6250 bytes per inch (i.e., bits per inch on each track)
= 150MB. That's still at least 4 reels/CD-ROM.

However, some of the data was on seven-track tape. That's 556 characters per inch, not 6250 characters per inch.

BTW, using jukeboxes, you can store more than 100 CD-ROMs "on line" in the same space that you can store 10 tape reels in off-line racks.
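The reel arithmetic above, as a small Python sketch (densities as quoted in the posts; 650 MB assumed per CD-ROM):

    # Reel capacity using the figures in the post above.
    def reel_capacity_mb(feet=2400, usable_in_per_ft=10, bytes_per_in=6250):
        # The 10 usable inches per foot already discounts inter-record gaps.
        return feet * usable_in_per_ft * bytes_per_in / 1_000_000

    nine_track = reel_capacity_mb(bytes_per_in=6250)   # ~150 MB
    seven_track = reel_capacity_mb(bytes_per_in=556)   # ~13 MB
    CD_MB = 650                                        # assumed CD-ROM capacity

    print(f"9-track reel: ~{nine_track:.0f} MB")
    print(f"7-track reel: ~{seven_track:.0f} MB")
    print(f"9-track reels per CD-ROM: ~{CD_MB / nine_track:.1f}")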

why not hard drives? (0)

Anonymous Coward | more than 15 years ago | (#2031195)

Perhaps this is a dumb question, but can someone explain why you can't store data on many pluggable hard drives? They are cheap, the DTR is high, and you could store everything twice if needed.

Are the high end tape systems cheaper? (I guess they probably are...)

gaack! (0)

Anonymous Coward | more than 15 years ago | (#2031196)

Engineering=applied science. Rocket Science includes Rocket Engineering.

Besides, a lot of people don't grok Newton's Third.
"How does a rocket work in space, when there's nothing to push against?"

gaack! BOOM! (0)

Anonymous Coward | more than 15 years ago | (#2031197)

"gas shooting out the back of the rocket"?

I thought it was the push from the thermonuclear fireball against the blast plate which made it go. :-)

drives not archival (0)

Anonymous Coward | more than 15 years ago | (#2031198)

So you're going to put these hard drives on the shelf for twenty years? Then try to read them after the lubrication has evaporated, you can only find a dozen disk controllers, and a fork lift tipped over a rack with five percent of the collection?

Any NASA URL of white paper? (0)

Anonymous Coward | more than 15 years ago | (#2031199)

Anyone find a URL to the original report? I couldn't find one, even on the site of the author's organization.

distributed.net/cdwrite (0)

Anonymous Coward | more than 15 years ago | (#2031200)

Maybe we need to set up a distributed.net-like effort, with all those folks owning CD writers taking a portion of the data.

Cost of storage Vs. Re-creation (0)

Anonymous Coward | more than 15 years ago | (#2031201)

I wonder what the cost of launching a new probe in 100 years to re-gather data would be compared to the cost of archiving it over and over every 10 years?

What about stones? (0)

Anonymous Coward | more than 15 years ago | (#2031202)

The point is that you write bigger letters on stones.

If you gave me 1 cm^2 per byte, I think I could write the byte in a way that lets you restore the data even after some hundreds of years.

But if you have only a microscopically small area to store the information, a single bit gets more fragile. The more capacity you have in the same space, the less life expectancy, I would assume.

We know this from stones, too. The smaller the letters, the more easily they become unreadable.

I don't know the theoretical measures, but I would think life expectancy times amount of data is roughly proportional to some function of size (of tape, stones, whatever). Dunno if the function is linear, polynomial or exponential. I haven't had information theory courses yet.

Worldwide PC drives > 1.5 exabytes (0)

Anonymous Coward | more than 15 years ago | (#2031203)

hey now,

How about the PC data storage problems? Forbes reported last summer that '98 hard drive sales would be 772,000 terabytes; '97 was 338,000. I think it's safe to assume all drives sold prior to '97 and still in use total at least 400,000 terabytes. In 2000 they expect over 3 petabytes in sales.

So the question is... what the fuck are we doing with all that space? And can Microsloth write code that much more inefficiently to take it all up? There's already almost a GB for every person on the planet - not even counting "real" disk drives in mainframes, etc. Yikes...

University of Mars (0)

Anonymous Coward | more than 15 years ago | (#2031204)

Check the Linux TCP/IP code, it specifically states it is able to work at very long distances.
I believe the University of Mars is mentioned as accessible with Linux TCP/IP stack.

-AT

Why Tape? (0)

Anonymous Coward | more than 15 years ago | (#2031205)

The capacity of a 9-track 2400-foot tape is calculated wrong. There are 6250 BYTES per inch (not bits), since each bit is stored in one track; plus a parity bit gives 9-track!

That gives 180 MB per tape maximum. However ...

there is an inter-record gap that can take as much space as the data with small records.

Note that NASA are talking about 7-track tape.
Note that 9-track tape started at 800 BPI, went on to 1600 BPI and finally ended at 6250 BPI.

New tape storage media can store up to 100 GB per tape and will transfer at speeds that disk drives have trouble keeping up with. This media has a life expectancy of 10 or more years. It can also be managed by robotics, which 9-track and older types could not be; those need real people to mount the tapes.

Once they have done one migration to modern media, the next re-copy will be much less painful.

What is the transfer rate of CD-RW or DVD-RAM?

If NASA are only now copying their 7-track tapes then they have left it rather late - 7-track drives have been obsolete for at least 15 years.

David.
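A sketch of how the inter-record gaps mentioned above eat into the 180 MB theoretical maximum. The 0.3-inch gap is my assumption for a 6250 BPI drive, not a figure from the post:

    # Effective 9-track capacity as a function of block size.
    def effective_mb(block_bytes, feet=2400, bpi=6250, gap_in=0.3):
        block_in = block_bytes / bpi                  # tape occupied by the data
        blocks = feet * 12 / (block_in + gap_in)      # blocks that fit on the reel
        return blocks * block_bytes / 1_000_000

    for blk in (80, 800, 8000, 32760):
        print(f"{blk:>6}-byte blocks: ~{effective_mb(blk):.0f} MB per reel")
    # Tiny blocks waste most of the reel on gaps; large blocks approach the
    # 180 MB ceiling (2400 ft x 12 in x 6250 bytes/in).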


"Storage speed has only 4x in last decade?" (0)

Anonymous Coward | more than 15 years ago | (#2031206)

New tape drives are as fast as disk drives. There is thus no value in intermediate disk storage. Additionally, the input and output can double-buffer and run in parallel.

You also have to allow for mount/dismount times and human operator factors with 9-track tapes. I haven't seen anyone allow for that. It probably halves the effective input transfer rate.

David.

20+ (1)

Anonymous Coward | more than 15 years ago | (#2031207)

The project I'm working on right now has a requirement that all data be stored for the life of the spacecraft +3 years. The spacecraft in question is expected to last 20 years. We're expecting data dumps of about 10GB each twice an orbit (an orbit is about 90 minutes).

This is just one project, and we haven't even launched the spacecraft yet.
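Roughing out that volume in Python (the ~16 orbits/day follows from the 90-minute orbit; the other figures are from the post):

    # Lifetime data volume for the mission described above.
    DUMP_GB = 10
    DUMPS_PER_ORBIT = 2
    ORBITS_PER_DAY = 24 * 60 / 90      # ~16 orbits at ~90 minutes each
    YEARS = 20 + 3                     # spacecraft life plus 3-year retention

    gb_per_day = DUMP_GB * DUMPS_PER_ORBIT * ORBITS_PER_DAY
    pb_total = gb_per_day * 365 * YEARS / 1_000_000

    print(f"~{gb_per_day:.0f} GB/day, ~{pb_total:.1f} PB over {YEARS} years")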

No Subject Given (1)

Anonymous Coward | more than 15 years ago | (#2031208)

My understanding is that a lot of this data was just collected by automatic telemetry and archived. It has never been analyzed, there are no plans to analyze it in the future, it can take major research just to figure out the file and data structures, etc... The proper conceptual model for visualizing the structure of this archive is the buildup of ocean sediments, and the proper data mining technique is based on the analysis of drill cores... AHA... here we have reached the boundary layer between the 7090/Fortran II sediments and some traces of an early 360...

There is NO lasting format/medium solution (1)

Anonymous Coward | more than 15 years ago | (#2031209)

Our organization has vast amounts of important data, some of which dates back several decades. This data must remain accessible in the future as well. Some time ago they got rid of their once-multi-million, now-obsolete mainframe. After that they wasted a lot of money finding a method to read legacy mainframe tapes on Unix after discarding the mainframe. The costs involved, such as reverse-engineering the interfaces of the tape device, the file format, etc., and simply the effort of having all of those clumsy tapes read, taught them a lesson: as data amounts continue to grow, ANY backup medium will grow obsolete in a few years. In a few years you won't be able to buy a DVD drive anymore because it's grown obsolete, just like C=64 cassette tapes today. Repeated massive transfer operations from obsoleted media are simply out of the question.

Their solution was to keep ALL of the data ONLINE. Backups are naturally taken regularly, with whatever equipment is in use at a given time. As all of the old data migrates to the new whizbang online storage along with everything else, there is no need to worry about aging archive libraries and preserving the technology to read the obsolete media. Not to mention the newfound ease of access to the more rarely needed old data.

Sure, you need to invest in large RAID arrays or what have you. Sure, you need to invest in a high-volume backup system. But those systems are replaced "often", since they are a part of the production system. In a few years, the extra volume that required the extra investment today will seem non-existent.

Now this may sound wild at first, but think about it: Data grows. Disks grow larger and cheaper, and are replaced. Backup media grow obsolete. The conclusion is a no-brainer, actually.
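The argument in numbers, as a sketch with assumed growth rates (50%/year data growth, disk capacity per dollar doubling every 18 months; pick your own figures, the shape is what matters):

    # Archive size vs. what a constant disk budget buys, under assumed rates.
    archive_tb = 100.0        # today's archive
    budget_tb = 100.0         # what today's disk budget buys

    for year in range(0, 11, 2):
        print(f"year {year:2}: archive {archive_tb:7.0f} TB, "
              f"budget buys {budget_tb:7.0f} TB, ratio {archive_tb / budget_tb:.2f}")
        archive_tb *= 1.5 ** 2            # two years of 50% data growth
        budget_tb *= 2 ** (2 / 1.5)       # two years of 18-month doubling

    # If disk grows faster than the data, yesterday's archive becomes a smaller
    # slice of each new production system -- which is the poster's point.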

--
Teemu Yli-Elsila

This one's fairly easy to get round. (1)

Anonymous Coward | more than 15 years ago | (#2031210)

It just costs a bit of money.

Get a hierarchical storage system. They make it fairly easy to migrate data across different media types. When a new, bigger, faster, better storage medium comes along, just add it to the hierarchy. Of course, BIG automated media libraries help a lot; saves on the arms.
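A minimal sketch of the hierarchical-migration idea; the tier names and age thresholds here are hypothetical, not from any particular HSM product:

    from dataclasses import dataclass

    @dataclass
    class Item:
        name: str
        days_since_access: int
        tier: str = "disk"

    TIERS = ["disk", "optical", "tape"]               # fast/expensive -> slow/cheap
    DEMOTE_AFTER_DAYS = {"disk": 30, "optical": 365}  # hypothetical policy

    def migrate(items):
        for it in items:
            while it.tier in DEMOTE_AFTER_DAYS and it.days_since_access > DEMOTE_AFTER_DAYS[it.tier]:
                it.tier = TIERS[TIERS.index(it.tier) + 1]   # demote one level
        return items

    demo = [Item("telemetry.raw", 400), Item("report.txt", 5), Item("calib.dat", 45)]
    for it in migrate(demo):
        print(it.name, "->", it.tier)
    # Adding a new, bigger medium is just appending a tier; the policy stays put.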

Two possible solutions (1)

Anonymous Coward | more than 15 years ago | (#2031211)

NASA is good at pushing things, so this might be good for all of us.

I think their problem calls for a layered solution. First they should copy their oldest tapes onto modern archival-grade tape. Optical solutions just won't cut it yet, and if they feel the data is worth keeping (and it probably is) then they need to get it into an easier-to-work-with format. I would duplicate the new tapes too: take an old tape and copy it onto 2 new tapes.

Secondly, they need to go optical. Pressed CDs have an average life of 100 years. CD-Rs are good for a couple decades. If you take good care of them they are expected to last longer. NASA needs to push on blue, green, or violet laser stacked CDs and DVDs. That stuff is in the labs now, but if NASA and some government agencies started sounding the alarm for it, maybe production could be expedited. A blue-laser DVD should hold between 50 and 100 GB of data; that is still short of some of the really big tapes out there, but I would think it would work. Get STC to build an automatic tape-to-DVD jukebox machine and the problem would start to go away.

No Subject Given (1)

drwiii (434) | more than 15 years ago | (#2031212)


mke2fs /dev/null

they are just lazy! (1)

gavinhall (33) | more than 15 years ago | (#2031213)

Posted by korto:

as long as they don't have the government on their back (the communists are coming!!!) to pump their energies up, they don't do zilch!

looks good to me. (1)

bluGill (862) | more than 15 years ago | (#2031214)

As an employee of StorageTek I like reading this article. It gives me hope for the future. :)

#include <stddisclaimer.h>

speed (1)

bluGill (862) | more than 15 years ago | (#2031215)

I can place 50 gigabytes on a single tape, uncompressed. I can read or write that entire tape in the same time that a single-speed CD-ROM can read 640 megs. A 40x CD-ROM would need more media changes (a robot would do the changes, of course), and not counting the media changes it would take 1.5 times as long!

Optical storage has promise, don't get me wrong, but when you're trying to spool data as fast as NASA does for some applications, it isn't suitable.

#include <stddisclaimer.h> I'm not speaking for anyone here; all numbers have been rounded and estimated.
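A rough Python version of that comparison, assuming 150 KB/s for 1x CD-ROM and taking the 50 GB / 640 MB figures from the post:

    # Tape vs. CD-ROM jukebox for 50 GB, using the parent post's framing.
    CD_1X_KB_S = 150                         # assumed 1x CD-ROM rate
    TAPE_GB, CD_MB = 50, 640

    time_1x_cd_s = CD_MB * 1000 / CD_1X_KB_S      # one CD at 1x (~71 min)
    tape_mb_s = TAPE_GB * 1000 / time_1x_cd_s     # implied tape rate (~12 MB/s)

    tape_h = TAPE_GB * 1000 / tape_mb_s / 3600
    cd40x_h = TAPE_GB * 1000 / (40 * CD_1X_KB_S / 1000) / 3600
    discs = TAPE_GB * 1000 / CD_MB

    print(f"implied tape rate:  ~{tape_mb_s:.1f} MB/s")
    print(f"50 GB from tape:    ~{tape_h:.1f} h")
    print(f"50 GB from 40x CDs: ~{cd40x_h:.1f} h over ~{discs:.0f} discs, before media changes")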

speed (1)

bluGill (862) | more than 15 years ago | (#2031216)

I agree: use the right tool for the job. I also agree that optical storage can work, but optical drives go obsolete too. My prediction is that in 5 years manufacturers are going to notice that CD-ROM is never used, and DVD players will cost $.50 less because they don't put the ability to read CD-ROMs in DVD drives anymore. Whoops, gotta move that optical data off CD-ROM now, while you can still find a reader.

You're still missing the point though: they want to use that data. I just stated the speed they can get from tape. They can't tell me which tapes they will need to read next year, but some of those tapes will be needed for some project. A supercomputer is a device for turning a CPU-bound problem into an I/O-bound problem. While many supercomputers run Unix and can multitask, the users still want the answer fast, and waiting for data to come off an optical cartridge isn't a good use of time. In today's world human time is more expensive than computer time, so it is worth the cost to make sure human time isn't wasted.

Don't forget that we're talking about several hundred terabytes of data at NASA. Even in the optical storage system everyone is envisioning (which may eventually be made, but it isn't effective today) it will take up significant space, and unless the media never changes (like CD-ROM->DVD media didn't change, right?) they still need to migrate. I'm not a prophet; I'm not about to predict that formats won't change.

robots vs by hand (1)

bluGill (862) | more than 15 years ago | (#2031217)

I know for a fact that most current NASA storage is from StorageTek, and they are famous for robots, so it is likely that the current stuff is robotic. I'm also well aware that 2 years ago it was someone else, and if they aren't careful it could be someone else again as they keep upgrading capacities.

So it is a safe bet that the new tapes are in a robot. I'm comfortable saying, though unsure, that the old tapes were at least partially manual.

Actually, I know the new systems are robotic, because NASA keeps their data in the same building where they handle the deadly chemicals for the Shuttle booster rockets. The data center people really hate to be in a room that shares ventilation with a room where they mix two deadly gases to make something even more deadly. I don't know why they don't move it.

Why Tape? (1)

bluGill (862) | more than 15 years ago | (#2031218)

Access time is a legitimate concern, if it becomes a bottleneck. In computer science terms: a 50 GB tape takes O(r+n) time, while a bunch of CD-ROMs covering the same data takes roughly O(75r+n), since you need about 75 mounts. Also note that the constant before n is bigger with a CD-ROM. Simply put, a dense tape is faster than optical, and has about the same lifetime. (CD-R is not good for 100 years; as others have noted it is guaranteed for 20 years, and tape can be that good.)
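Plugging assumed constants into that model (the mount times and read rates here are illustrative, not measured):

    # O(r + n) vs. O(75r + n): r = per-mount overhead, n = bytes read.
    MOUNT_S = {"tape": 90, "cdrom": 15}     # assumed robot load + seek times
    RATE_MB_S = {"tape": 10, "cdrom": 6}    # assumed sustained read rates

    def retrieval_hours(gb, media, mounts):
        return (mounts * MOUNT_S[media] + gb * 1000 / RATE_MB_S[media]) / 3600

    print(f"tape, 1 mount:     {retrieval_hours(50, 'tape', 1):.2f} h")
    print(f"CD-ROM, 75 mounts: {retrieval_hours(50, 'cdrom', 75):.2f} h")
    # The mount constant r is paid ~75 times for the CD pile, on top of a
    # slower per-byte rate.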

I'm not against an optical storage system. I'd seriously consider investing money in research for such systems. But magnetic media still has life, and is still in general a better solution than optical. Yes, I expect this to change in the future, but NASA is dealing with today; they will probably migrate to better media again in 5 years. SOP for many businesses as they try to reclaim the space consumed by the older storage.

If you don't get this... (1)

jkovach (1036) | more than 15 years ago | (#2031219)

go watch the first Star Trek movie.

If you don't want to, the premise of the movie was that some aliens picked up the Voyager probe, read the programming that said it needed to return the information it collected to Earth, and sent it back towards Earth with a bunch of new machinery inside a gas cloud several solar diameters big. It destroyed everything in its path trying to get back to Earth, and although it was sending out some data, no one on Earth remembered how to activate Voyager's transmit sequence.

HD-ROM -- particle-beam, not optical (1)

coats (1068) | more than 15 years ago | (#2031220)

Have a look at this one:

[norsam.com]
http://www.norsam.com/hdrom.htm

They are a DOE spin-off working on archival technologies. The idea is to use particle beams to do the writing instead of lasers: you can focus the beam much more tightly, hence make much smaller dots. They have two technologies -- digital holding 165GB/disk, with 20MB/s storage rate, and analog, holding 90,000 pages scanned at 300dpi. Both use _very_ durable silicon-wafer substrates.


At that density, a 6-platter changer holds a terabyte, and a dozen 500-platter jukeboxes hold a petabyte. If you want really fast access, stripe across multiple platters -- if you stripe 8-way, you get a transfer rate of 160 MB/s, nearly 10 gigabytes per minute, which does far better than NASA's old tapes (someone said 23 months, iirc).


fwiw
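A quick check of those numbers (165 GB per platter and 20 MB/s are the figures quoted above):

    # HD-ROM capacity and striping arithmetic.
    DISK_GB, WRITE_MB_S = 165, 20

    print(f"6-platter changer:          ~{6 * DISK_GB / 1000:.2f} TB")
    print(f"12 x 500-platter jukeboxes: ~{12 * 500 * DISK_GB / 1e6:.2f} PB")
    stripe = 8 * WRITE_MB_S
    print(f"8-way stripe:               {stripe} MB/s (~{stripe * 60 / 1000:.1f} GB/minute)")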

There is NO lasting format/medium solution (1)

sjames (1099) | more than 15 years ago | (#2031221)

Sure, but how about a DC600A tape drive? For that matter, now that you've got the C64 tape, you will read it with what? Sure, if the data is desperately important, you could do some sort of hack involving a soundcard and a tape player, but that's for data measured in K.

5.25-inch IBM-formatted still isn't too hard, but how about 8-inch from a PDP-11?

The point is, no matter how long lived a storage method looks now, in 10 to 20 years, it'll be a big pain.

Low tech vs. High (1)

sjames (1099) | more than 15 years ago | (#2031222)

Although painful to think about (given the volume of data), perhaps constantly migrating to the latest and greatest isn't the best answer. For example, I have some old WORM media, and some old punch cards. Guess which one I can still read (if I were really desperate to preserve COBOL code).

I also have some ancient 78 rpm records that I can still play, and some 10-year-old audio CDs that I can't. It seems that there's been a wee little bit of spec drift in CD players, so that not all new players work well with some old CDs. I say that because I have an older CD player that has no problems with the same old CDs. Weird but true.

Two possible solutions (1)

jafac (1449) | more than 15 years ago | (#2031223)

Could you imagine trying to locate a specific file somewhere in a library of 100,000 CDs?

They MUST find a solution to the capacity problem first. Optical doesn't cut it.

Steve Jobs (1)

jafac (1449) | more than 15 years ago | (#2031224)

Really, it's all Steve Jobs' fault.

All those Macintoshes at NASA, now they have to copy their data from floppies to USB-connected ZIP drives. Just because Steve Jobs says floppies are obsolete!

Thousands of terabytes? Think how many ZIP disks that is!
Giga-click-of-death!

Stones don't scroll (1)

dylan_- (1661) | more than 15 years ago | (#2031225)


...and they're difficult to grep....

dylan_-


--

500 terabyte RAID array? (1)

dylan_- (1661) | more than 15 years ago | (#2031226)


Wow!

dylan_-


--

Why tape? (1)

red_dragon (1761) | more than 15 years ago | (#2031227)

If they know that tapes have such a short life span (for their purposes, at least), why do they want to transfer their old data from tapes to *shudder* more tapes? Geez... BTW, there are storage devices intended specifically for long-term storage, and they happen to be based on optical storage. WORM and COLD are but two of many. If DVD weren't such a mess of a "standard", they could use 4-layer double-sided disks to store the data. In short, optical storage is the best candidate for such a task (but many of you already knew that :).

6 years of life expectancy LEFT (1)

Omnibus (1831) | more than 15 years ago | (#2031228)

The new tapes will have 6 years of life left AFTER the project is over, meaning the life expectancy is 10 years, not 6.

Not that it makes it any better. Someone tell NASA about DVD and other digital storage methods.

asinus sum et eo superbio

Better hurry (1)

Gregg M (2076) | more than 15 years ago | (#2031229)

At least save the transmit code for Voyager 6.


Wired Magazine Article (1)

vallee (2192) | more than 15 years ago | (#2031230)

Hi.
This is exactly the subject of a really fantastic article in Wired magazine's archives. Thought I'd contribute the URL [wired.com].
Enjoy,
-p
--

Modern Tape Technology... (2)

slk (2510) | more than 15 years ago | (#2031231)

Modern commercial tape technology, specifically DLT, has gotten very fast and reliable. While I realize most of you haven't dealt with anything larger than a peecee and therefore find real technology hard to deal with, it is out there. A Quantum DLT7000 drive, for under $5k, can write 35GB native (70GB compressed; onboard hardware compression) at 5MB/sec media speed. Also, according to Quantum's specs, a DLT cart has a storage life of "more than 30 years with less than 5% loss in demagnetization (at 20C and 40% non-condensing humidity)". DLT is very fast, reliable, reasonably priced (given what it does), and has been around for a while. If they're using DAT (or other helical scan technology) for all this data, they need to get their head(s) checked.

Shortage of Money? (1)

Epeeist (2682) | more than 15 years ago | (#2031232)

Is this the major cause? The amount of data doesn't seem particularly large. The company that I work for (a large British bank) has 36 STK Powderhorn silos, each of which holds 12 TB.

Alternatively, are they limited by the speed of the old tape drives, rather than CPU throughput?

CDs? (1)

acb (2797) | more than 15 years ago | (#2031233)

Don't CD-Rs have a life expectancy somewhere in the same ballpark as NASA's tapes? I heard that they start to suffer bitrot as the dye fades/decays.

Come to think of it, is there any high-capacity digital medium that could be reasonably expected to securely hold its data for centuries (as printed text on paper lasts)?

Parcel the tapes out to qualified parties (1)

Skip666Kent (4128) | more than 15 years ago | (#2031234)

Parcel out the less-critical/unknown tapes to various interested parties or institutions, with the understanding that a serious effort be made to upgrade the storage.

If all else fails, draw up a full-color glossy ad and sell 'em off to the SF nuts in the back of STARLOG, complete with velvet-lined display case and commemorative plaque. Oh, and when the display case is opened, you get a cheap, tinny-sounding rendition of the first few bars of "2001".

Kewl!

What about stones? (1)

WhiteDragon (4556) | more than 15 years ago | (#2031235)

Yes, but for every parchment/papyrus made when the Dead Sea Scrolls were made, there are probably 10,000 that have rotted away long ago.

size of tape (1)

datazone (5048) | more than 15 years ago | (#2031236)

how much data does one of these rolls of tape hold?

speed (1)

datazone (5048) | more than 15 years ago | (#2031237)

Speed should not be a problem in accessing data from an optical disc. True, you may not be able to spin the platter beyond certain speeds, but isn't there some technology called "zen" that uses multiple lasers or heads to read multiple areas of the disc and recombines them, giving you a larger flow of data? If NASA or anyone else pushes this technology to its upper limit, you should be able to reach very large data bandwidths. True, you will need some CPU power to put the data back together, and maybe a large secondary buffer to hold data while it gets reorganized, but that should be nothing for a supercomputer.
It may not be perfect, but it could tide them over for the next 10 years at least, until we start using bio-chemical storage devices.

Too bad 'CD Quality' audio isn't the end-all... (1)

jwilloug (6402) | more than 15 years ago | (#2031238)

For spoken voice, what's the difference?

20+ (1)

rpete (6612) | more than 15 years ago | (#2031239)

> Would like to know what project you're talking about (maybe AXAF?)!

What's AXAF? :-) See the satellite formerly known as AXAF [harvard.edu] .

Optical? (2)

Bilbo (7015) | more than 15 years ago | (#2031240)

Why don't they go to optical disk libraries? (I mean the big 14-inch disks.)

I wonder about some of these time estimates though. Are they talking about the total time to copy all the tapes one at a time? Seems like they could just add more tape drives. The one bottleneck might be the fixed number of readers, since the formats are so old they can't buy new drives to read them...

(gack! The media loves to latch on to "disaster" stories. :-/)

Solution Sought for Brain Imaging Data Storage (2)

malpern (8521) | more than 15 years ago | (#2031241)

Hi, I work in a Cognitive Neuro-Imaging lab at Princeton University. We use an fMRI scanner to image people while they're doing working memory tasks. Our work generates a substantial amount of data that needs to be archived. Our current need is to archive around 500 GB, but this figure will likely increase to 3 to 5 TB in the next two years. Our current plan includes investigating a Pioneer double-sided DVD-R jukebox that should store 9.4 GB per disk and be available around this summer. We considered tape solutions but found them significantly more expensive (when we factored in the cost of the robotics) for similar capacity. A DVD-based solution also offers the promise of non-proprietary universal access. We would be very interested in any advice/suggestions that the Slashdot community has to offer about "reasonably priced" high-capacity data archive solutions. Thank you.
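For what it's worth, a rough disc count at the quoted 9.4 GB per double-sided disc:

    # Discs needed for the archive sizes mentioned above.
    DISC_GB = 9.4
    for need_tb in (0.5, 3, 5):
        print(f"{need_tb:>4} TB -> ~{need_tb * 1000 / DISC_GB:.0f} discs")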

Micah
malpern@princeton.edu

speed (1)

Lando (9348) | more than 15 years ago | (#2031242)

This is absurd. I deal with terabytes of data every day and the cost for what they want to do is not as expensive or as troublesome as they seem to think.

A system could be set up to give them real-time access to all their information and provide storage for less than $10 million total through 2005, whereas they were stating storage costs of $50 million a year.

Something is obviously wrong here. Am I missing something?

why not hard drives? (1)

Lando (9348) | more than 15 years ago | (#2031243)

23 TB is roughly 2.3 million dollars in RAID 5 drives. Too expensive?

What about optical (2)

Tuor (9414) | more than 15 years ago | (#2031244)

Is there any good optical storage medium they can take advantage of? CDs and MO have an incredible storage life.

Maybe they would have to use Laser Disc-sized DVD technology? Any other thoughts?

Optical is definately an option, but... (1)

trims (10010) | more than 15 years ago | (#2031245)

Cost is a problem right now.

In case everyone forgets, we're not spending hordes of money on NASA and related departments anymore. In fact, they generally have either static or slightly shrinking budgets. So, naturally, they've gone to strictly commercial stuff whenever possible. No custom-built stuff here.

The biggest problem isn't the throughput of modern tech (I do suspect that DVD-RAM/DVD-RW will be the format of choice), but the rate at which they can read data off the old systems. As other people have pointed out, a huge chunk of the data is on VCR-style tapes (or 9-track reels) - the readers are old, hard to find, and I suspect can't do more than a couple hundred kB per minute.

OK, math quiz: 100TBytes / 1MB per minute =~ 100,000,000 minutes =~ 190.25 YEARS. Say you have maybe 100 such readers at your site. It still takes almost 23 months of completely continuous reading to read it all off. No wonder they have a problem...
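The same quiz in Python (the 1 MB/minute reader rate is the post's assumption):

    # 100 TB read at 1 MB per minute per reader.
    minutes = 100 * 1_000_000 / 1                 # 100 TB in MB, at 1 MB/min
    years_one_reader = minutes / (60 * 24 * 365)

    print(f"one reader:   ~{years_one_reader:.1f} years")              # ~190
    print(f"100 readers:  ~{years_one_reader / 100 * 12:.0f} months")  # ~23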

Oh, and the stab at all the old farts at NASA was unjustified. Most of the people I know at NASA (@ Ames, Goddard, etc.) are real engineers. Many are getting long in the tooth, but I can safely say most of them are extremely competent, and I'm completely sure this isn't their fault. Probably just the typical upper-level funding problems (ie - complain to the top dogs @ NASA, and, more likely, to the dolts in congress who don't have the vision to properly fund them).

Why Tape? (2)

rnturn (11092) | more than 15 years ago | (#2031246)

I read a similar story several years ago. NASA collects an enormous amount of data from the various probes that are wandering about the solar system. At that time, I'm not sure that CD-ROM was proven for data yet and they were placing everything onto magtape.

Now that CD-ROM is pretty well established, I can't see why it wouldn't be suitable for copying those old tapes onto. OK, OK, DVD will hold more but even CD-ROM will hold tons more than an old 9-track tape. A simple calculation (feel free to correct me if I messed up here) shows

(2400 * 12 * 6250) / 8 = up to about 21 MB

I'm guessing that a 9-track tape takes up about the same amount of shelf space as about 6-7 CD-ROMs. Let's see that's 21 MB vs. 3600-4200 MB. Looks to me like they gain back some floor/shelf space as well as longer life for the data.

The concern about access time can't be that legitimate. Robotic tape handlers aren't any faster than CD-ROM handlers/jukeboxes.

I hope NASA acts on this before those old tapes become totally unreadable. Loss of this data, IMHO, would be a catastrophe.

Why? (1)

Axe (11122) | more than 15 years ago | (#2031247)

I would love a low job right now...

Too much data (1)

qseep (14218) | more than 15 years ago | (#2031248)

Why do they need all that data? Are they ever going to look at it? What if they hadn't collected as much in the first place? Would they be much worse off than they are now? Can't they just back up the most important stuff?

How about compressing the data? Not just lzh or something, but things like peaks/troughs and other statistically significant items? Once the raw data has been around for a while, say a year, they can reduce it to what's significant. If later they change their mind, realize they need the original raw data, too bad! They'll just have to revise their algorithms for the future. No big loss.

What about stones? (1)

Frey (14600) | more than 15 years ago | (#2031249)

oh yeah, some were on copper sheets.

On mass storage migration... (1)

zealot (14660) | more than 15 years ago | (#2031250)

I work at the National Center for Supercomputing Applications at the University of Illinois at Urbana/Champaign, and we use all the latest mass storage technologies (DLT drives, TLM robots, tape silos, etc.), and we've also had to do a migration from older media. And it takes time. First of all, our migrations weren't from such old media, which meant that our tapes held more than NASA's. So we had fewer tapes to deal with, and they transferred faster to DLT. NASA has so many (relatively) low-capacity tapes that read slowly, it would take a huge amount of time to do anything. It doesn't matter how fast the medium you're copying to can write; in this case the bottleneck is reading the old media. Not to mention the fact that tape drives are relatively unreliable. That is, they tend to break every few months when you use them 24 hours/day, 7 days/week. And we are talking about huge amounts of data... I know at NCSA I once had a user request the deletion of a 100 GB file that was tarred and gz'ed. Optical drives would be great, but they don't hold enough compared to tapes.

What about stones? (1)

freon12 (141886) | more than 15 years ago | (#2031251)

I read an article (posted here?) about how data has a short life span. The comparison it made was between "modern" media (tapes, CDs, etc.) and documents which have lasted basically forever: the Constitution (on parchment) and the Dead Sea Scrolls (is that what they're called? I forget), which were carved in stone.

So why not just carve everything in stone? Yabadabadoo! :)