
Best Format For OS X and Linux HDD?

timothy posted more than 4 years ago | from the cross-the-beams dept.

Data Storage 253

dogmatixpsych writes "I work in a neuroimaging laboratory. We mainly use OS X but we have computers running Linux and we have colleagues using Linux. Some of the work we do with Magnetic Resonance Images produces files that are upwards of 80GB. Due to HIPAA constraints, IT differences between departments, and the size of files we create, storage on local and portable media is the best option for transporting images between laboratories. What disk file system do Slashdot readers recommend for our external HDDs so that we can readily read and write to them using OS X and Linux? My default is to use HFS+ without journaling but I'm looking to see if there are better suggestions that are reliable, fast, and allow read/write access in OS X and Linux."


UFS. (2, Informative)

necroplasm (1804790) | more than 4 years ago | (#32763632)

UFS would be the best option. Linux has supported it read/write since kernel 2.6.30 (AFAIK), and OS X mounts UFS natively.

Re:UFS. (1)

Moblaster (521614) | more than 4 years ago | (#32763814)

If you spend millions on an MRI machine, you should be able to ask the manufacturer to produce more reasonable software that splits those 80GB files into a number of smaller ones that most filesystems can manage.

But why are you trying to roll your own solution with such a proprietary system anyway?

Re:UFS. (1)

ceoyoyo (59147) | more than 4 years ago | (#32764266)

MR scanners usually produce individual files that are smaller than 1 MB. I think the poster was referring to the total size of the dataset.

It's quite possible that when they analyze the images they put them in a format where individual files are considerably larger though. It's a pain to do 3D, 4D or 5D analysis on a set of 2D files.

Re:UFS. (-1, Flamebait)

Anonymous Coward | more than 4 years ago | (#32763848)

FAT

Re:UFS. (1)

ecloud (3022) | more than 4 years ago | (#32763910)

Seems to be an oldie. Is it better than HFS? More reliable or higher performance? It still requires the occasional fsck, right?

Re:UFS. (1)

kestasjk (933987) | more than 4 years ago | (#32763978)

It's the default filesystem in *BSD, so it's very well maintained. It has journalling (or does it call it "soft updates"?), auto-defrag, etc. You fsck it if you power off without unmounting, but otherwise you won't need to.

It's definitely a perfectly capable, full-featured, modern filesystem.

Re:UFS. (2)

X0563511 (793323) | more than 4 years ago | (#32764034)

Every filesystem warrants the occasional check. If you never check, there are lots of errors that can accumulate and burn your ass.

Re:UFS. (2, Funny)

RobertM1968 (951074) | more than 4 years ago | (#32765344)

...storage on local and portable media is the best option for transporting images between laboratories. What disk file system do Slashdot readers recommend?

Every filesystem warrants the occasional check. If you never check, there are lots of errors that can accumulate and burn your ass.

Methinks you may be plugging your portable media into the wrong place... then again, I've never tried that, so I could be wrong. ;-)

Re:UFS. (4, Informative)

clang_jangle (975789) | more than 4 years ago | (#32763920)

UFS would be the best option.

Unless you're using Tiger or earlier, UFS is not an option. The last two versions of OS X do not support UFS at all. However, HFS+ support in Linux is pretty good. Otherwise you're looking at MacFUSE for ext2/3, which IME is pretty slow and buggy. I think Jobs has gone out of his way to make OS X incompatible with OSes other than Windows. Maybe he's afraid of what will happen if everyone becomes aware they have other choices.

Re:UFS. (1)

necroplasm (1804790) | more than 4 years ago | (#32763986)

I wasn't aware of that. When I used UFS with Leopard it worked OK. It seems they crippled it in Snow Leopard, so it's of almost no real use.

Too bad. Typical for Apple though. Such a shame.

Re:UFS. (1)

edman007 (1097925) | more than 4 years ago | (#32764192)

Really? I remember trying UFS around 10.4 and it was just a nightmare: every BSD seems to have its own version of UFS, and Linux could never autodetect it just right for me. I eventually gave up and went with HFS+, which worked just fine, with the minor exception that unmounting didn't set the flags right and I had to run some HFS+ app in Linux (or boot OS X) to reset them so it would mount again. I assume some of that has changed, but HFS+ still works in Linux without a problem, and the Apple software likes it a bit better than UFS.

Re:UFS. (1)

BlackBloq (702158) | more than 4 years ago | (#32765084)

Macs before Tiger barely exist as a real platform anymore, so who fucking cares? I.e., they get what they deserve running OS 9 and trying for interoperability. So really, I would think they are not on OS 9, or else the answer should be "stop running old-ass shit".

Re:UFS. (1, Flamebait)

Haxzaw (1502841) | more than 4 years ago | (#32765306)

You were doing fine until you felt the need to bash the OS. IMHO that was unnecessary and discredited your remarks. Whatever, I've come to expect that from the majority of /. posters.

4GB per file limit (4, Insightful)

Ilgaz (86384) | more than 4 years ago | (#32764298)

OS X's UFS has a very unfortunate limit: it doesn't support files over 4 GB. If it weren't for that, I would format everything (especially USB) as UFS.

The lack of commercial-quality disk tools like DiskWarrior is a problem too if a true catastrophe happens. Of course, fsck can do good things, but after a truly catastrophic filesystem issue DiskWarrior is a must. That was one of the things the professional Mac community had a hard time explaining to the ZFS community.

Since Apple was wise enough to document HFS+ completely, to the point that you can even write a full-featured defragmenter (iDefrag), HFS+ without journaling seems to be the best option. I am in the video business and I have seen it handle files way beyond 80GB without any issues. In fact, lots of OS X users who image their drives see that every day too.

I don't know why journaling isn't implemented in Linux; the format is open and documented too. Even if it's a bit of a hassle, it would be worth it, since he deals with external drives, which are exactly what journaling is suited for.

Re:4GB per file limit (1, Funny)

Larryish (1215510) | more than 4 years ago | (#32765586)

iThis and iThat, blah blah blah

How about iJustpukedalittlebitinmymouth?

Re:UFS. (0)

Anonymous Coward | more than 4 years ago | (#32764580)

NFS/CIFS in the cloud; I only say this because I'm currently working on a cloud offering around this. Wondering how the pricing is going to work out for connections, though.

Re:UFS. (0)

Anonymous Coward | more than 4 years ago | (#32765262)

At first I thought you meant UDF. I have no idea how good UDF read/write support is in either Mac OS X or Linux, which versions are supported, or what restrictions exist regarding file size, file names, etc., but I think UDF might just be the replacement for FAT I've been waiting for.

Followup question... (2, Informative)

serviscope_minor (664417) | more than 4 years ago | (#32763660)

I have a similar problem, albeit on a smaller scale. I use unjournalled HFS+.

However, the problem is that HFS+, being a proper Unix filesystem, remembers UIDs and GIDs, which are usually inappropriate once the disk is moved to another machine.

Is there any good way to get Linux to mount the filesystem and give every file the same UID and GID, as it does for non-Unix filesystems?

Re:Followup question... (2, Informative)

SEAL (88488) | more than 4 years ago | (#32764004)

Many filesystems support uid= and gid= options in their mount command (including HFS). Just add that to a mount script or set it up in fstab.

Re:Followup question... (4, Informative)

X0563511 (793323) | more than 4 years ago | (#32764060)

Non-native filesystems usually let you set UID, GID, and permission masks. Check the "mount" manpage and look for the filesystem you want. You might also try "man filesystems".
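For what it's worth, a sketch of what that might look like on Linux; the device path, mount point, and numeric IDs below are assumptions, not from the thread, so adjust them for your system:

```
# one-off mount, mapping every file on the HFS+ volume to uid/gid 1000:
#   sudo mount -t hfsplus -o uid=1000,gid=1000 /dev/sdb2 /mnt/mri
#
# or the equivalent /etc/fstab entry:
/dev/sdb2  /mnt/mri  hfsplus  uid=1000,gid=1000,noauto,user  0  0
```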

NTFS (0)

Anonymous Coward | more than 4 years ago | (#32763668)

NTFS :)

Re:NTFS (5, Funny)

Tirs (195467) | more than 4 years ago | (#32764176)

Hah! In my company we call it "NoTeFíeS" (for non Spanish-speaking people: "Don'tTrustIt").

Re:NTFS (4, Funny)

FreonTrip (694097) | more than 4 years ago | (#32765120)

Roses are red,

Violets are blue,

All of my mod points

Would belong to you.

Re:NTFS (1)

nut (19435) | more than 4 years ago | (#32764330)

NTFS is actually not a bad option. Ubuntu 10.04 supports it out of the box (sic), OS X supports read by default but not write. Several companies supply cheap RW drivers for NTFS on OS X.

Re:NTFS (1)

legrimpeur (594896) | more than 4 years ago | (#32764534)

Agreed. I have been using it over FAT as it supports files larger than 2GiB. Through FUSE you can get RW capabilities for free: http://www.tuxera.com/community/ntfs-3g-download/ [tuxera.com] . A bit slow at writing, though...

Re:NTFS (1)

MachineShedFred (621896) | more than 4 years ago | (#32764740)

With 10.6, the supplied NTFS driver can do read/write, but it's not supported by Apple. You just need to change it to RW in /etc/fstab.

Re:NTFS (1)

cyrano.mac (916276) | more than 4 years ago | (#32765302)

Quote from fstab.hd: "IGNORE THIS FILE. This file does nothing, contains no useful data, and might go away in future releases. Do not depend on this file or its contents."

Answers Own Question. (0)

Rantastic (583764) | more than 4 years ago | (#32763682)

My default is to use HFS+ without journaling but I'm looking to see if there are better suggestions that are reliable, fast, and allow read/write access in OS X and Linux.

So I see you're the "if it ain't broke fix it anyway" type.

Don't know if broke (2, Informative)

tepples (727027) | more than 4 years ago | (#32763720)

More like "I don't know if it's broke. I could be doing something so wrong it could end up on The Daily WTF. If what I'm doing is broke, could you help me fix it?"

Re:Don't know if broke (0)

Anonymous Coward | more than 4 years ago | (#32763792)

More like "I don't know if it's broke. I could be doing something so wrong it could end up on The Daily WTF. If what I'm doing is broke, could you help me fix it?"

I'm afraid that's considerably less likely to catch on as a slogan.

Re:Answers Own Question. (1)

octothorpe99 (34654) | more than 4 years ago | (#32763734)

That statement does imply that the OP finds HFS+ to be either "unreliable" or "slow" or both. So in his opinion it may be "broken".

Re:Answers Own Question. (0)

Anonymous Coward | more than 4 years ago | (#32764762)

My default is to use HFS+ without journaling but I'm looking to see if there are better suggestions that are reliable, fast, and allow read/write access in OS X and Linux.

So I see you're the "if it ain't broke fix it anyway" type.

Or the "it may not be broke, but I might be able to make it better" type. You know, like the sort that tends to actually make things better from time to time?

Just because it works doesn't mean it can't improve.

Re:Answers Own Question. (0)

Anonymous Coward | more than 4 years ago | (#32764828)

TBH, I really don't see what choice you have beyond HFS; OS X is your limitation. Whatever filesystems it supports are what you have to use.

Ugly but works (0)

mewsenews (251487) | more than 4 years ago | (#32763690)

You will want to use vanilla FAT32 but then beef up the data with par2 and/or sfv files for error recovery and checksumming respectively.

FAT32 isn't reliable as anything but a lowest common denominator between platforms, anything else will give you headaches especially if you need to share the drives outside your local ecosystem.

QuickPAR and QuickSFV are the Windows utilities I've used, there are probably versions available on all platforms.

Re:Ugly but works (2, Informative)

GooDieZ (802156) | more than 4 years ago | (#32763764)

and the 4GB file size cap...

FAT32 is a fucking horrible idea in his case. (3, Informative)

Anonymous Coward | more than 4 years ago | (#32763866)

How the fuck is he supposed to store 80 GB files on a filesystem that maxes out at 4 GB?

Re:FAT32 is a fucking horrible idea in his case. (3, Funny)

Tirs (195467) | more than 4 years ago | (#32764252)

Sorry man, but your use of the "f" word is totally inadequate in this conversation. Let me correct you:

How the fsck is he supposed to store 80 GB files on a filesystem that maxes out at 4 GB?

Much more in-context, eh?

Re:Ugly but works (1)

X0563511 (793323) | more than 4 years ago | (#32764090)

Indeed. You completely missed the 80GB file part.

Do *nix and OS X support exFAT at all? If they do, then that -should- work. But it's not really a good solution.

HIPAA Constraints? (5, Interesting)

fm6 (162816) | more than 4 years ago | (#32763702)

By "HIPAA Constraints" I assume you mean the privacy rule. I would think that this rule would prevent you from using sneakernet to transmit files. Unless you're encrypting your portable disks, and somehow it doesn't sound like you are.

Fun reading:

http://www.computerworld.com/s/article/9141172/Health_Net_says_1.5M_medical_records_lost_in_data_breach [computerworld.com]

Re:HIPAA Constraints? (1)

ducomputergeek (595742) | more than 4 years ago | (#32763794)

That was my first thought as well. As much as I hate to say it, FAT32 might be the best option. Either that or UFS.

FAT32 -NOT (0)

Anonymous Coward | more than 4 years ago | (#32763844)

FAT32 has a file limit size of 4GB.

He needs 20 times that!

Re:FAT32 -NOT (1)

raynet (51803) | more than 4 years ago | (#32763918)

Better to just use RAR and compress them with a recovery record, multipart volumes, and a password.

Re:FAT32 -NOT (1)

X0563511 (793323) | more than 4 years ago | (#32764118)

No problem, that won't take any time at all...

Re:FAT32 -NOT (1)

cynyr (703126) | more than 4 years ago | (#32764346)

Then skip the compression (and RAR) and just use a small bash script and dd; it's no slower than writing over USB anyway... 80GB over USB2 must be painful.
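A minimal sketch of that dd-based splitting idea, under stated assumptions: the file names are invented, and the chunk size here is a tiny 4 MiB so the demo runs quickly (for FAT32 you'd pick something just under 4 GiB):

```shell
#!/bin/sh
# Split a large image into fixed-size chunks with dd, then reassemble
# with cat on the receiving side.
IN=big.img
CHUNK_MB=4   # demo size; use ~4095 for FAT32-sized chunks

# stand-in for an 80 GB scan: a 10 MiB file of zeros
dd if=/dev/zero of="$IN" bs=1M count=10 2>/dev/null

i=0
while :; do
  # each pass copies CHUNK_MB MiB starting CHUNK_MB*i MiB into the input
  dd if="$IN" of="part$i" bs=1M count="$CHUNK_MB" skip=$((i * CHUNK_MB)) 2>/dev/null
  [ -s "part$i" ] || { rm -f "part$i"; break; }   # empty chunk => past EOF
  i=$((i + 1))
done

# receiving side: concatenate in order (glob order is fine below 10 parts)
cat part* > rebuilt.img
cmp "$IN" rebuilt.img && echo "chunks reassemble cleanly"
```

Note the glob-ordering caveat: with more than ten chunks you'd want zero-padded names (part00, part01, ...) so `cat part*` keeps them in sequence.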

Re:FAT32 -NOT (0)

Anonymous Coward | more than 4 years ago | (#32764326)

FAT + DriveSpace!

Re:FAT32 -NOT (1)

raynet (51803) | more than 4 years ago | (#32764812)

I prefer Stacker, and their memory compressor for Windows 3.11 was awesome too. Still, this wouldn't solve the maximum file size issue that multipart RAR volumes do solve.

Re:FAT32 -NOT (1)

Wolfraider (1065360) | more than 4 years ago | (#32764248)

Are we saying FAT32 is not so FAT after all?

Re:HIPAA Constraints? (0)

Anonymous Coward | more than 4 years ago | (#32763896)

Except for the problem that FAT32 can only handle files smaller than 4GB.

Re:HIPAA Constraints? (1)

fm6 (162816) | more than 4 years ago | (#32764494)

FAT32 was pretty fat when it came out 15 years ago. Nobody even had 2GB drives, never mind 2GB files.

Re:HIPAA Constraints? (0)

Anonymous Coward | more than 4 years ago | (#32764052)

FAT32 has a 4GB file size limit and fragments too much; he needs something that can cope with huge 80GB files.

Re:HIPAA Constraints? (1)

hedwards (940851) | more than 4 years ago | (#32764128)

NTFS is probably the best bet. I don't like saying that, but it is the most widely available filesystem that can handle files of that size. But really, the company that makes the equipment ought to come up with some way of reducing the files to a more manageable size, most likely via splitting of some sort.

Re:HIPAA Constraints? (1)

cynyr (703126) | more than 4 years ago | (#32764370)

I'm pretty sure MRI data is that size for a reason...

Re:HIPAA Constraints? (1)

fm6 (162816) | more than 4 years ago | (#32764438)

Oh yeah, don't put it on one 80GB disk drive, put it on forty 2GB thumb drives. Nothing can go wrong with that.

If these guys are going to be transferring a lot of 80GB files, they have to find a way to do it over the network securely and reliably. Not easy, I admit, but using sneakernet for this kind of data (even without the huge file sizes) is asking for trouble.

NTFS is undocumented and read only on OS X (1)

Ilgaz (86384) | more than 4 years ago | (#32764546)

Since Apple didn't want to deal with "OS X deleted my NTFS drive" people, they support NTFS as read-only by default. NTFS-3G and other utilities can of course read/write, but that doesn't change the true reason NTFS is unreliable to support: it is _not_ documented.

HFS+, on the other hand, is completely documented. Apple wins in this case because of openness and its real discipline in keeping things backwards/forwards compatible, with complete documentation.

Nobody had to (or has to) reverse engineer anything; the source code of HFS+ is right at opensource.apple.com

Re:NTFS is undocumented and read only on OS X (1)

dgatwood (11270) | more than 4 years ago | (#32765484)

UDF seems like it would be a good choice to consider. It's natively writable in Mac OS X v10.5 and later, Linux 2.6 and later, and Windows Vista and later.

FAT32 is a nightmare waiting to happen (1)

Ilgaz (86384) | more than 4 years ago | (#32764392)

Most of the files they produce involve an actual patient, sometimes one in critical condition who has to lie in something like a grave for an hour at a time.

If one of the filesystem issues typical of that archaic junk (which should never have been released) happens, it will be a nightmare to restore the data, while it would be easy on journaled HFS+ or even NTFS.

I own a Symbian phone and, trust me on this, if there were a $50 utility just to get rid of the FAT32(!) junk risking the data on my memory card, I would happily buy it.

Re:HIPAA Constraints? (1, Informative)

Anonymous Coward | more than 4 years ago | (#32763924)

Why would "sneakernet" be disallowed for digital medical files? This procedure is no different from transferring real physical medical files or records.

Re:HIPAA Constraints? (1)

X0563511 (793323) | more than 4 years ago | (#32764140)

But.... but it involves COMPUTERS! It's completely different, we need new rules!

(that said, it's far easier to pocket a USB drive (or just copy it) and run than a folder full of files or some X-ray prints)

X-Ray and iPod? (1)

Ilgaz (86384) | more than 4 years ago | (#32764460)

I heard it is almost standard procedure in the X-ray community to use iPods for X-ray format images, which is why there are several OS X utilities supporting it.

I guess the first reason was the (for the time) gigantic storage size of the iPod, plus you can also use it for music.

Re:HIPAA Constraints? (4, Informative)

eschasi (252157) | more than 4 years ago | (#32764384)

HIPPA mandates who can and should have access to the files. The method of storage (disk, tape, SSD, paper, whatever) is largely irrelevant. As long as all those who have access to the files are HIPPA-trained and following the appropriate procedures, everything is fine. Similarly, transport is relevant only in that there must be no data disclosure to unauthorized persons. As such, if a person with appropriate clearance does the transport, all is cool.

HIPPA data is often encrypted when placed on tape or transported across systems, but that's because such activities may involve the data being visible to unauthorized people. As examples of each:

  • If two physically separate sites exchange HIPPA data across the open Internet, the data must be encrypted during transport. This might be done by VPN, sftp, whatever. As long as the bits on the wire can't be read by the ISPs managing the connection, it's OK.
  • For tapes that you archive off-site, you don't want your external storage facility to be able to read the tapes, nor have the data readable if the tape is misplaced in transport.

IMHO wise use of sensitive data on laptops requires encryption at the filesystem level. It's neither difficult nor time-consuming, but given how much sensitive data has been exposed via folks losing or misusing laptops, it ought to be a no-brainer. Sadly, too few places bother.

Re:HIPAA Constraints? (1)

Leebert (1694) | more than 4 years ago | (#32765104)

Even though you speak as someone knowledgeable and authoritative about HIPAA, I have a hard time believing you since you apparently don't know how to spell it.

Re:HIPAA Constraints? (0)

Anonymous Coward | more than 4 years ago | (#32764816)

Why would what *you* think matter at all when it comes to interpreting a complex set of requirements and rules? Have you read them? Have you been trained on them? Do you have to comply with them at your job? If not, why are you even commenting on them? Your idle speculation is worthless.

A research institution is going to have an entire department and program devoted to HIPAA compliance, complete with training program. You obviously have not the first clue about what the rules mean or how to implement them, so please just concentrate on things you know about, which I'm guessing is a very manageable task.

Re:HIPAA Constraints? (2, Interesting)

rwa2 (4391) | more than 4 years ago | (#32764904)

Maybe instead of using a portable disk, they could whip up a nettop running Linux and transfer files over gigabit Ethernet. Then they could do transfers via Samba or rsync+ssh, and the nettop could transparently take care of encrypting the underlying FS, whatever that may be.

Performance wouldn't be great... maybe 20MB/s instead of 60MB/s for an eSATA drive, and they'd have to work out a consistent network port / IP across all the sites it travels to. But it might confer some advantages.

Along similar lines, they could put in a small file server at each site, and rotate a removable disk drive between all of the file servers. That way they'd just have a drop box that they could always push files to throughout the day, and let the couriers just grab what's there and deliver.

Re:HIPAA Constraints? (1)

dugjohnson (920519) | more than 4 years ago | (#32765162)

Absolutely correct. If you are putting the files on a USB drive without encrypting, you are setting yourself up to become a news story, and not a good one. I'm going to assume that you are not in a facility that is planning on getting any of the incentive money from ARRA, because any time you move outside the network (copying to USB, CD, DVD, or another computer not constrained by the network) you HAVE to encrypt... you should anyway, but to qualify for meaningful use you must. If you have lots of files to share all the time, you might consider getting a PACS; there are open source PACS that have a lot of capability and would let you share, annotate, and track access (HIPAA again). Take a look at dcm4chee [dcm4che.org]

Re:HIPAA Constraints? (1)

RobertM1968 (951074) | more than 4 years ago | (#32765428)

By "HIPAA Constraints" I assume you mean the privacy rule. I would think that this rule would prevent you from using sneakernet to transmit files. Unless you're encrypting your portable disks, and somehow it doesn't sound like you are.

Fun reading:

http://www.computerworld.com/s/article/9141172/Health_Net_says_1.5M_medical_records_lost_in_data_breach [computerworld.com]

You would be surprised at how outdated parts of HIPAA are (and were from the day they were written), and what things they fail to cover. Heck, there are sections that indicate the requirement for data encryption for certain uses/storage/etc., but that's about the extent of it. ANY encryption will do to pass muster; a simple substitution key would meet the required criteria. Then there are sections that are very specific in specifying methods that are useless... while others at least seem to have been thought out. There are also sections that either were never written or were not included in the final version that should have been.

It really seems like they hired no one of any security knowledge to write it. Oh, btw, I deal with this stuff for various apps we write... I was appalled at some of it...

NTFS (3, Interesting)

Trevelyan (535381) | more than 4 years ago | (#32763886)

NTFS or any other FUSE (MacFUSE [google.com]) filesystem. However, in a heterogeneous environment NTFS has the bonus of native Windows support.

There is NTFS-3G for Linux and Mac OS X [sourceforge.net]

There is also an EXT2 Fuse FS (for Mac OS), and probably many other options.

Having said that, I have never had a problem with Linux's HFS+ write support.

Re:NTFS (3, Funny)

X0563511 (793323) | more than 4 years ago | (#32764156)

Windows doesn't play in here, it's OSX and Linux. Tossing NTFS into that would just be... wrong somehow.

Re:NTFS (0)

Anonymous Coward | more than 4 years ago | (#32764556)

Exactly. One of the systems should have native support for the fs at least .. right?

ext2 works. ntfs works. (1)

Snover (469130) | more than 4 years ago | (#32763908)

Mac OS and Linux both have support for NTFS through NTFS-3G [tuxera.com] . Mac OS has support for ext2 through fuse-ext2 [sf.net] .

Re:ext2 works. ntfs works. (1)

EXrider (756168) | more than 4 years ago | (#32764880)

Write performance through FUSE on Mac OS X is pretty disappointing, several orders of magnitude slower than direct filesystem access in my experience. Transferring an 80GB file to a FUSE mounted filesystem would be painful.

Re:ext2 works. ntfs works. (3, Informative)

MachineShedFred (621896) | more than 4 years ago | (#32764890)

If it's Mac OS X 10.6.x, you don't even need NTFS-3G, as the native NTFS driver has read/write capability. You just need to change the /etc/fstab entry for the volume to rw, and remount.
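For reference, the trick people describe is an /etc/fstab line along these lines; the volume label "MRIDATA" is an assumption, and since this path is unsupported by Apple it's worth trying on scratch data first:

```
# /etc/fstab on Mac OS X 10.6 -- mount the NTFS volume read/write
LABEL=MRIDATA none ntfs rw,auto,nobrowse
```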

HFS+ unjournaled is best; MacFUSE also works (1)

vossman77 (300689) | more than 4 years ago | (#32763928)

I have a similar setup and I think unjournaled HFS+ is best for your scenario. FAT32 is even worse. You are fortunate not to have to support Windows. Ideally I would use NFS and file sharing instead of external disks, but shipping a disk is always better than transferring large amounts of data over the net.

Another option is to install MacFUSE [google.com] and then mount other filesystems. This is what I do when NTFS is required. For my Linux systems I love ext4; if you need an older filesystem, use XFS. ext3 is stable but really slow for large data files.

NTFS (1)

dirtyhippie (259852) | more than 4 years ago | (#32763938)

It sucks, but NTFS might just be the best option. OS X and Linux have both had stable enough support for years. The main pluses over FAT32 are journaling and support for files > 4GB. Using UFS is dangerous (or at least has been until very recently) because there are so many different variants of it (Solaris, BSD, OS X, etc.) that Linux support is notoriously troublesome. An extra plus of NTFS is that you can easily use it on Windows machines as well.

Reiser? (4, Funny)

Wowsers (1151731) | more than 4 years ago | (#32763954)

I would have recommended ReiserFS, but the data might get buried somewhere and the system would not remember where it was....

Re:Reiser? (4, Funny)

Anonymous Coward | more than 4 years ago | (#32764856)

That's pure pure FUD. ReiserFS can recover anything, even something it allegedly never stored. "Oakland homicide detective Lt. Ersie Joyner recalled that Reiser led them directly to the exact site, without any hesitation or confusion."

Re:Reiser? (0)

Haxzaw (1502841) | more than 4 years ago | (#32765324)

Woooosh!

the question of our age (4, Insightful)

commodoresloat (172735) | more than 4 years ago | (#32765488)

who will wooosh the woooshers?

No Filesystem (5, Informative)

Rantastic (583764) | more than 4 years ago | (#32763988)

If you are only moving files from one system to another, and do not need to edit them on the portable drives, skip the filesystem and just use tar. Tar will happily write to and read from raw block devices... In fact, that is exactly what it was designed to do. A side benefit of this approach is that you won't lose any drive capacity to filesystem overhead.
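A sketch of that filesystem-free approach, using a plain file in place of the raw device so it's safe to run; in real use DEV would be something like /dev/sdX, and tar would overwrite everything on it:

```shell
DEV=fakedisk.img   # stand-in for the external drive, e.g. /dev/sdX (DANGER: tar -cf overwrites it)

# sending side: archive straight to the "device", no filesystem involved
mkdir -p scans && echo "subject01" > scans/subject01.nii
tar -cf "$DEV" scans

# receiving side: list and extract directly from the "device"
tar -tf "$DEV"
mkdir -p received && tar -xf "$DEV" -C received
```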

Re:No Filesystem (0)

Anonymous Coward | more than 4 years ago | (#32764448)

He might not need to edit them in place, but he almost certainly needs to be able to directly read the files from the portable disk, without transferring them again. And if it's not a definite need, it would be a convenience well worth keeping.

Re:No Filesystem (0)

Anonymous Coward | more than 4 years ago | (#32764590)

Why do you think he needs to directly read the files from the portable drive? He says specifically in the question that they're transferring the images from systems in one laboratory to systems in another laboratory.

He's not asking for a storage system, or an archive system, or the best way to share documents. He wants to read raw bits off of permanent storage in one laboratory into a temporary repository, write those bits into permanent storage in another laboratory, and then destroy the temporary copy.

With that said, tar is a bad solution because it doesn't include any type of CRC or encryption. But it's a good idea, and certainly a million times better than a file system of some type.

Re:No Filesystem (2, Informative)

Rantastic (583764) | more than 4 years ago | (#32765016)

With that said, tar is a bad solution because it doesn't include any type of CRC or encryption. But it's a good idea, and certainly a million times better than a file system of some type.

True, but simply hashing the file at both ends solves that. Both Linux and Mac support shasum.

Re:No Filesystem (2, Informative)

Rantastic (583764) | more than 4 years ago | (#32765264)

As to encryption, you just encrypt the file before you tar it. In fact, with gpg you get both encryption and integrity checking.

Gnupg is available in Mac Ports and comes with just about every linux distro.

Re:No Filesystem (0)

Anonymous Coward | more than 4 years ago | (#32764586)

Genius! Simple, as fast as can be, and it utilizes every last bit if needed. I overlooked this approach when thinking about which filesystem would be best; it never crossed my mind that you could ditch the filesystem entirely.

ext3 or ext4 (1)

ecloud (3022) | more than 4 years ago | (#32763992)

Here's [paragon-software.com] a commercial $39.95 implementation of ext2/3/4 for MacOS. No idea if it's actually any good. I'd really like to hear if someone here has tried it, because I might like to use it for a /home shared between Linux and MacOS if it works. I tried HFS+ (or was it just HFS?) without journaling, and the dang thing needed to be fsck'd nearly every time I booted the alternate OS, which wastes a lot of time. In particular, when shutting down Linux it's unmounted cleanly (such that Linux is happy if I just boot back into Linux again), but MacOS is still not happy and does the fsck in the background for 15 minutes or so before I can access it again. Sometimes it fails too, and has to be done manually from Disk Utility. Quite annoying.

Rubbish (4, Informative)

Improv (2467) | more than 4 years ago | (#32764042)

You're storing it in the wrong format; there are all sorts of tools to convert to Analyze or DICOM format, which give you a manageable frame-by-frame set of images rather than one huge file. Most tools that manipulate MRI data expect DICOM or Analyze anyhow (BrainVoyager, NISTools, etc.).

If you really want to keep it all safe, use tarfiles to hold structured data, although if you do that you've made it big again.

Removable media are daft for long-term storage; use ad-hoc removable media solutions (or, more ideally, scp) only to move the data.

Qualifications (1)

ecloud (3022) | more than 4 years ago | (#32764050)

I'd say the "best format" needs journaling absolutely, and preferably also extended attributes which work consistently between the two OS's, hardlinks and symlinks working consistently, long filenames, case sensitive, separate metadata for creation time and modification time, suitability to be used on a USB flash drive as well as a hard disk, and ability to mount it in Windows too. Haven't found any such mythical beast yet. If somebody would just finish the journaling support for Linux HFS+....

Who cares? (1)

rjstanford (69735) | more than 4 years ago | (#32764082)

No, seriously, who cares? This is a process designed to save files that are then transferred through SneakerNet. While moderately large at 80GB, they're not huge by modern standards. If you have a current solution that works, stick with it.

If, however, there are other constraints that are affecting you - transfer speed, decades-long retention on local media, security, etc. - then by all means let us know. Until then, to use the obligatory car analogy, it's as if you've said:

Due to the distance between my house and work, I currently use an automobile to go between the two locations and to perform various other services. Currently I use a Honda Accord. What would you suggest?

NFS over SSH (2, Interesting)

HockeyPuck (141947) | more than 4 years ago | (#32764186)

Just tunnel NFS over SSH. I can't imagine sneakernetting files around the office ever being secure. If you need to encrypt the data at rest, then either encrypt on the client or leverage an encrypted filesystem or a Decru-type appliance.

Encrypt (0)

Anonymous Coward | more than 4 years ago | (#32764222)

If you are carrying these things around, I would think setting them up with TrueCrypt (works on OSX and linux) or other full disk encryption program would be a Good Idea (TM). It would be bad to have sensitive data get left somewhere. This way you would just be out the cost of hardware (assuming the data is backed-up). I would guess that HIPAA would want you to do what you can to protect that from happening.

Best of all, it has a proven track record: http://www.theregister.co.uk/2010/06/28/brazil_banker_crypto_lock_out/

FAT32 (-1, Redundant)

Chris Snook (872473) | more than 4 years ago | (#32764310)

I'm not kidding. For a filesystem that's only going to hold a handful of very large data files, transported by sneakernet, there's not much benefit to journalling, directory structure optimizations, POSIX permissions, etc. You just want something that's marginally more structured than writing data directly to the raw block device, and FAT32 is the lowest common denominator - though note its 4GiB per-file cap, which means an 80GB image has to be split before it will fit.
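Since FAT32 caps a single file at 4 GiB, an 80GB image would have to be split before it could be stored this way. A sketch using GNU coreutils (filenames illustrative; BSD `split` on OS X lacks the `-d` numbered-suffix flag):

```shell
# Break the image into FAT32-sized pieces (3 GiB each,
# with numbered suffixes so they sort correctly)
split -b 3G -d subject42_scan.img subject42_scan.img.part

# On the destination machine: stitch the pieces back together
cat subject42_scan.img.part* > subject42_scan.img
```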

Network? (4, Informative)

guruevi (827432) | more than 4 years ago | (#32764486)

Really, you need a gigabit network and should transfer files over it using AFP and/or NFS and/or SMB. First of all, HIPAA requires you to encrypt your hard drives, which most researchers won't do (it's too difficult). Then you've also got the problem of what happens if a researcher (or somebody else) leaves with the data.

Solaris, and by extension Nexenta, have really good solutions for this. You can DIY a 40TB RAIDZ2 system for well under $18,000. If you use desktop SATA drives for your data (which I wouldn't recommend, though ZFS keeps it safe) you can push that cost down to $10k or $12k.

I work in the same environment as you (neuroimaging, large datasets), feel free to contact me privately for more info.

Best format is... (1)

Picardo85 (1408929) | more than 4 years ago | (#32764596)

I would say - 3.5" if it's a desktop or 2.5" if it's a notebook

Samba that is (1)

wasabioss (1196799) | more than 4 years ago | (#32765130)

You should consider creating a Samba share. If you don't have a server, a NAS appliance or an el cheapo old PC would handle the job very well.
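For reference, a minimal share definition might look like the following (the path and group name are illustrative, and details vary by Samba version):

```ini
[scans]
    path = /srv/scans
    valid users = @neuroimaging
    read only = no
    browseable = yes
```

Restricting `valid users` to a lab group is also a start toward the access controls HIPAA expects.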

Network! (1)

Anubis350 (772791) | more than 4 years ago | (#32765192)

A RAID array over NFS, a Lustre setup, etc., depending on need - but a network share! ...and if you need more encryption, such that even admins can't have access to the data, have your users store TrueCrypt volumes on the network share. Sneakernet is, in the end, far more insecure!

UDF (2, Informative)

marquise2000 (235932) | more than 4 years ago | (#32765208)

I'm using a USB disk formatted under Linux with UDF (yep, it's not limited to DVDs; there is a profile for hard disks). It can be used without problems under OSX (even Snow Leopard).
 

XFS? (0)

Anonymous Coward | more than 4 years ago | (#32765352)

XFS handles large files VERY well, but I'm not sure of OSX support. Darwin is just a modified BSD kernel so it COULD have support. Question is whether the iCzar decided to build it in . . .

Use tar, that's what it is for (0)

Anonymous Coward | more than 4 years ago | (#32765370)

tar --posix --no-xattrs --mode=644 -cvf /dev/... honking-big-file.img

NTFS (1)

mstrebe (451943) | more than 4 years ago | (#32765466)

Handles large files well, doesn't respect UNIX permissions (which are problematic for removable drives anyway), and read/write support is freely available on Linux and Mac via NTFS-3G with FUSE or MacFUSE.
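On the Linux side, mounting such a drive through NTFS-3G can be as simple as an /etc/fstab entry like this (device name, mount point, and uid/gid are illustrative):

```
/dev/sdb1  /mnt/scans  ntfs-3g  defaults,uid=1000,gid=1000  0  0
```

The uid/gid options sidestep the permissions issue by presenting every file as owned by one local user.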

NAS device (2, Insightful)

linebackn (131821) | more than 4 years ago | (#32765590)

A simple NAS enclosure or NAS device might be what you are looking for. You can get a single-drive NAS enclosure, add a drive, and carry it around just like a regular portable drive. You can move it between networks and use any connection method the NAS device happens to implement (SMB, FTP, NFS, etc). Some even let you optionally connect it directly via USB or eSATA to access the file system directly, and some may have encryption or other security features as well.

Of course, check to make sure you have permission and that connecting things to your network does not violate any policies. If connecting a network device directly to your network is not permitted, then perhaps you can add a second, dedicated network card to the computers.
