
How To Move Your Linux Systems To ext4

timothy posted more than 5 years ago | from the or-you-could-guess dept.

Data Storage 304

LinucksGirl writes "Ext4 is the latest in a long line of Linux file systems, and it's likely to be as important and popular as its predecessors. As a Linux system administrator, you should be aware of the advantages, disadvantages, and basic steps for migrating to ext4. This article explains when to adopt ext4, how to adapt traditional file system maintenance tool usage to ext4, and how to get the most out of the file system."
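
For reference, the commonly documented in-place route from ext3 to ext4 boils down to a handful of commands. A rough sketch with a placeholder device name -- take a backup first, and note that files written before the conversion keep their old indirect-block maps until they are rewritten:

umount /dev/sdb1
tune2fs -O extents,uninit_bg,dir_index /dev/sdb1   # switch on the new on-disk features
fsck -pf /dev/sdb1                                 # required after changing features
mount -t ext4 /dev/sdb1 /mnt/data                  # then update /etc/fstab to say ext4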

304 comments

But does it run... (2, Funny)

RiotingPacifist (1228016) | more than 5 years ago | (#23314250)

reiser4?

Re:But does it run... (5, Funny)

Sentry21 (8183) | more than 5 years ago | (#23315044)

From what I've read, Reiser4 completely kills Ext4 in performance... then it disposes of ext4's kernel module, removes one of its redundant drives, and then cleans the free space left on its array.

Re:But does it run... (0)

Anonymous Coward | more than 5 years ago | (#23315136)

I believe the proper question is how fast does it run FROM reiser4?

Mere Mortals? (1)

db32 (862117) | more than 5 years ago | (#23314366)

"largely unnoticed by mere mortal Linux users and administrators" strikes me as a strange phrase to find on this IBM page. Is there some other IBM project more interesting than ext4 being revealed here?

Re:Mere Mortals? (0)

Anonymous Coward | more than 5 years ago | (#23314438)

"Is there some other IBM project more interesting than ext4 being revealed here?"

Well, JFS is certainly a more interesting filesystem. However I am interested in their un-dead system administration staff. Finally someone to get a hold of Zombie processes.

Re:Mere Mortals? (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#23314848)

Definition of Irony - Firing Senator Craig for being gay after he voted to not protect gays from job discrimination.
I know you're just trying to be cutesy, but your initial assumption--that Craig was fired--is incorrect. He's still a senator. Secondly, nobody wanted to can him / sanction him / etc. for being gay (or bi, whatever); it was because of his soliciting sex in a public place! Do you think the public response would have been different had he been in a women's bathroom?

I know facts have no place in your attempts at political humor, but come on, have enough self-respect to at least have a tagline that makes sense.

Re:Mere Mortals? (2, Funny)

mweather (1089505) | more than 5 years ago | (#23315344)

Do you think the public response would have been different had he been in a bar?

Re:Mere Mortals? (0)

Anonymous Coward | more than 5 years ago | (#23315544)

It probably would have been slightly less if he was picking up men/women in a bar for anonymous sex, though I don't know.

That's not nearly analogous to the situation though--picking up a sexual partner at a bar isn't illegal, it's just immoral (if one or both are married). It's the perversity and ILLEGALITY of cruising for sex (gay/straight/whatever) in a public toilet that most disturbed people!

Re:Mere Mortals? (1)

JackieBrown (987087) | more than 5 years ago | (#23316388)

You know, I thought I was missing a post and others got it, until I realized that you were responding to yourself.

Nice trick.

Not for the casual user (4, Informative)

halivar (535827) | more than 5 years ago | (#23314370)

ext4fs is designed to be used in systems requiring many terabytes of storage and vast directory trees. It is unlikely the common desktop (or even, for that matter, the common server) will see appreciable performance increase with it.

Re:Not for the casual user (5, Funny)

Anonymous Coward | more than 5 years ago | (#23314424)

Do you realize how much porn some people have?

Re:Not for the casual user (0)

Anonymous Coward | more than 5 years ago | (#23314506)

It's also unlikely we will ever need more than 640KB of RAM!

Re:Not for the casual user (5, Informative)

Vellmont (569020) | more than 5 years ago | (#23314544)


It is unlikely the common desktop (or even, for that matter, the common server) will see appreciable performance increase with it.

Disk sizes are going up. In a few years you'll see a terabyte on a single drive. I'd also say that features like undelete and online defrag are important to anyone.

So while you may not see any real performance increases, that's really beside the point.

Re:Not for the casual user (5, Funny)

Anonymous Coward | more than 5 years ago | (#23314580)

Instead of waiting a few years, go to your local computer store. They should have terabyte drives now.

Re:Not for the casual user (4, Funny)

A nonymous Coward (7548) | more than 5 years ago | (#23314614)

Disk sizes are going up. In a few years you'll see a terabyte on a single drive.
Unlike those two 1000 GB (or is it 1024) drives I have on my desk now.

Re:Not for the casual user (0)

Anonymous Coward | more than 5 years ago | (#23315340)

1TB drives are available now [newegg.com]. Currently they run $220-$280.

Of course, those aren't truly 1TB but rather 1,000,000,000,000 bytes (0.91TB) instead of 1,099,511,627,776 bytes (1TB)...

Re:Not for the casual user (4, Informative)

Uncle Focker (1277658) | more than 5 years ago | (#23315406)

Disk sizes are going up. Since last year we've seen a terabyte on a single drive.
Fix'd it for you.

Re:Not for the casual user (3, Insightful)

LWATCDR (28044) | more than 5 years ago | (#23315878)

But EXT4 becomes really useful when you have many terabytes of disk storage. With just one or two, EXT3 is probably good enough.
Now, when we have ten TB drives....
Good grief, people. Yeah, just keep a few thousand TV shows on your desktop.

Re:Not for the casual user (3, Interesting)

miscz (888242) | more than 5 years ago | (#23314744)

I can't wait for faster fsck. It takes something like an hour on my 500GB ext3 partition. Terabytes of storage are not that far away.

Re:Not for the casual user (5, Funny)

XenoPhage (242134) | more than 5 years ago | (#23315412)

All you young kids want these days is a faster, more convenient fsck.. What about the old days where fscking was about the technique, not the speed or the size...

Re:Not for the casual user (4, Funny)

drinkypoo (153816) | more than 5 years ago | (#23316202)

What about the old days where fscking was about the technique, not the speed or the size...

I'm just happy when it's done for me, and I don't have to handle it manually. When fscking fails at the beginning, it can ruin your whole day if you're not an expert.

Re:Not for the casual user (3, Interesting)

stuporglue (1167677) | more than 5 years ago | (#23315684)

I have the following disks in my computer:

1 TB
500 GB
300 GB

When they decide to fsck at the same time, it can take 1/2 hour or longer to get to the login screen.
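
Those simultaneous checks happen because every ext3 volume hits its mount-count limit on the same boot. One workaround that doesn't wait for ext4 is to stagger the limits with tune2fs; the device names and counts below are only examples:

tune2fs -c 30 /dev/sda1    # force a check every 30 mounts
tune2fs -c 37 /dev/sdb1    # a different count, so the volumes don't all check on the same boot
tune2fs -i 2m /dev/sdc1    # or switch to a time-based interval (two months)
tune2fs -l /dev/sda1 | grep -iE 'mount count|interval'   # inspect the current settings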

Re:Not for the casual user (5, Interesting)

EvilRyry (1025309) | more than 5 years ago | (#23316068)

This is why we have XFS. I fscked a 9TB partition in under 10 minutes. Hopefully they've made some improvements for ext4 in this area. A volume that takes days to fsck might as well just die completely.

Re:Not for the casual user (5, Funny)

techno-vampire (666512) | more than 5 years ago | (#23316148)

I can't wait for faster fsck.


I can tell you're a slashdotter. When most people fsck they want it to last as long as possible.

Re:Not for the casual user (2, Interesting)

dpilot (134227) | more than 5 years ago | (#23315756)

It might be a win for me even today on my meager 300G MythTV media partition. I'm currently using xfs for that, but every now and then I hear about bad things on xfs with a power failure, and other times I hear that it can be physically hard on the hard drive. (excess head motion?) Of course other times I hear that xfs is the best thing since sliced bread, and is usable for ANY purpose with just a little tuning.

I transcode my Myth stuff on an ext3 partition, and occasionally get complaints about the large data size without having the right options set. But it works.

Re:Not for the casual user (2, Informative)

LWATCDR (28044) | more than 5 years ago | (#23315942)

No file system likes a power failure. Get a UPS that will shut down the PC. They are cheap.
And if you care about that data, make a backup, and even better, run a RAID.

Remember EVERY HARD DRIVE IS GOING TO FAIL SOMEDAY.

Re:Not for the casual user (1)

dpilot (134227) | more than 5 years ago | (#23316414)

My /home is nfs4, the server is on a UPS, and the disk(s) on the server are raid-1.

I don't really care that much about TV, but would prefer not to lose it. If it were on ext3 I'd have it journal=data, or is that data=journal, anyway, full data journaling instead of just metadata. Incidentally, my raid-1 is set up that way, both for reliability and because I'd read that it's actually faster for an nfs server.
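
For the record, the ext3 mount option is data=journal (full data journaling, versus the default data=ordered). A minimal example, with /dev/md0 and /home standing in for your own setup:

mount -o data=journal /dev/md0 /home                  # one-off mount
/dev/md0  /home  ext3  defaults,data=journal  0  2    # the equivalent /etc/fstab line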

Re:Not for the casual user (1)

trolltalk.com (1108067) | more than 5 years ago | (#23316032)

ext4fs is designed to be used in systems requiring many terabytes of storage and vast directory trees. It is unlikely the common desktop (or even, for that matter, the common server) will see appreciable performance increase with it.

Really? My (comparatively) cheap laptop has 640gig of storage, and when you start getting into video, 640g is NOT enough!

You can buy 2 x 500gig desktop hds for the grand total of $150.

At that price, a terabyte will be the "new pink" within a couple of years. Just like 2 gigs of RAM is now the "norm".

Wikipedia entry (5, Informative)

drgould (24404) | more than 5 years ago | (#23314414)

Link to Ext4 [wikipedia.org] entry on Wikipedia for people who aren't familiar with it (like me).

Re:Wikipedia entry (3, Funny)

miscz (888242) | more than 5 years ago | (#23314762)

Because nobody on Slashdot knows that the primary filesystem used on Linux is called Ext3 and we're too stupid to figure out what Ext4 might be. Come on.

Re:Wikipedia entry (3, Funny)

discord5 (798235) | more than 5 years ago | (#23314970)

Because nobody on Slashdot knows that the primary filesystem used on Linux is called Ext3

Now now, don't give us too much credit

we're too stupid to figure out what Ext4 might be

It's like ext2 times two, stupid.

Re:Wikipedia entry (5, Funny)

Anonymous Coward | more than 5 years ago | (#23315328)

My Linux box goes to ext11.

Re:Wikipedia entry (1)

Applekid (993327) | more than 5 years ago | (#23315736)

Because nobody on Slashdot knows that the primary filesystem used on Linux is called Ext3 and we're too stupid to figure out what Ext4 might be. Come on.
I, for one, saw the news item and immediately thought, "How to move to ext4? What's in it for me?"

Thank you, GP, for saving me the seconds of typing with a convenient link. And shame on you for wanting to put out a candle that might be used to light the darkness.

How do I get to ext4? (0)

Anonymous Coward | more than 5 years ago | (#23314468)

Partition, partition, partition.

Preempting the prefix war (3, Insightful)

A beautiful mind (821714) | more than 5 years ago | (#23314470)

Yes, Terabyte is not entirely correct according to SI, but Tebibyte just sounds lame, and language is a tool to facilitate written and oral communication.

Of course, in this case you have to balance the confusion stemming from the Tera in IT context meaning 1024 in some cases. To be honest, the people insisting on the new naming should have come up with a sensible-sounding name and promoted it. You have to remember that language, even technical language, is for the people. There are lots of ways to craft a beautiful, logical, symmetrical language that no sane person would use because it just doesn't sound convenient.

Maybe a linguist can pitch in to explain why tebibyte sounds so awful?

Re:Preempting the prefix war (4, Funny)

Dachannien (617929) | more than 5 years ago | (#23314772)

Maybe a linguist can pitch in to explain why tebibyte sounds so awful?
Tebibyte-buh: It's bad-buh because-buh it makes-buh you sound-buh like Mushmouth-buh.

Hey hey hey!

Re:Preempting the prefix war (1)

mpathetiq (726625) | more than 5 years ago | (#23315352)

I think you meant....

Tebibyte: Ubit's bubad bubecubause ubit mubakes yubou subound lubike Mubushmubouth.

Re:Preempting the prefix war (1)

GreatBunzinni (642500) | more than 5 years ago | (#23314796)

I'm no linguist but the notion of anything that resembles a Tera binary byte doesn't compute all that well.

Re:Preempting the prefix war (1)

RiotingPacifist (1228016) | more than 5 years ago | (#23314800)

I think the only place you need to use it is in the abbreviations, where KiB vs. KB is sort of useful

Re:Preempting the prefix war (1)

Bralkein (685733) | more than 5 years ago | (#23315124)

Yeah, but the problem is that KB is now ambiguous - it could either be 1000 bytes, or it could be 1024. Before anyone mentions HDD manufacturers: it isn't ambiguous there; 1 KB is 1000 bytes. Yeah, that's because they're fleecing you; it sucks, but oh well.

I just hate the mindset that comes up with all of this stuff; it reeks of the sort of person who alphabetises everything and writes in to newspapers to complain that they misused the apostrophe one time on one page. I mean, for god's sake, take exbibyte. The word was actually used in the article. Look at it, if you can bear to. Say it, if you can figure out how to. The worst thing is that it seems to be becoming more popular. Sigh. I really shouldn't get so worked up over this.

Re:Preempting the prefix war (1)

CastrTroy (595695) | more than 5 years ago | (#23315496)

They're only fleecing you if you keep on insisting that 1 KB is 1024 bytes. If you define that 1 KB is 1 billion bytes, then they are really fleecing you. The only reason that 1024 was used as the size of a KB was because it was much easier, not because we were trying to standardize things, or because it made things simpler to understand. It completely went against all the other standards, just because it made the code a little simpler to write.

Re:Preempting the prefix war (2, Funny)

sconeu (64226) | more than 5 years ago | (#23315984)

If you define that 1 KB is 1 billion bytes, then they are really fleecing you

I'd say that if you define that 1KB is 1 billion bytes, then you've got bigger problems than the marketing departments of drive manufacturers.

Re:Preempting the prefix war (0)

Anonymous Coward | more than 5 years ago | (#23315280)


Since the beginning of the computing age, storage has been measured with 1024-based prefixes, and that trend is still the de facto standard today.

The ONLY people that use the 1000 convention are hard disk (and now flash) storage companies, and the sole purpose of that is to a) rip us off and b) try to avoid lawsuits for false advertising (which both Seagate and Creative have already lost in court).

Indulging the prefix war (4, Insightful)

JustinOpinion (1246824) | more than 5 years ago | (#23315072)

Of course, in this case you have to balance the confusion stemming from the Tera in IT context meaning 1024 in some cases.
It's worse than that. According to SI prefixes, "Tera" should mean 10^12 (1,000,000,000,000), but in common usage applied to computers it sometimes means 2^40 (1,099,511,627,776). But it also sometimes means "1024 Giga", where the Giga could be using either convention (and, for all you know, the "Mega" implied within could have been computed using either convention). So you can get a gradient of "mixed numbers" that conform to neither standard. You might say that only a non-professional would make such a stupid mistake... but on the other hand, if you see a column of numbers listed in "Gigabytes" and you want to convert them to Terabytes, what conversion factor would you use? How would you know what conversion factor the previous author had used? How could you guarantee that you were doing it right? Would you be able to confidently convert it into an exact number of bytes?

Personally, I think the whole thing is a mess, and computer professionals should be working harder to enforce a consistent scheme. Unfortunately, only a minority of computer professionals seem interested in changing the status quo confusion.

Maybe a linguist can pitch in to explain why tebibyte sounds so awful?
I'm no linguist, but I don't think "Tebibyte" sounding silly is the real problem. I admit that I laughed when I first heard the binary prefixes. They sound lame. But who cares? "Quark" was silly when it was first coined. So was "Yahoo" and "Google" and "Linux" and "WYSIWYG" and "SCSI" and "Drupal" and so on... Silly names become second-nature once they are used enough.

I think the real problem is that people, inherently, are loath to change. They are more apt to come up with rationalizations and justifications for doing things "the old way" rather than put in the work to learn (and code!) a new system. Sorry if this sounds harsh, but I find the people who say the binary prefixes "sound dumb" or say that "the current (inconsistent)* system works fine" are just coming up with excuses to avoid doing the work to use a properly consistent standard/notation.

Maybe you're right, and that if the new prefixes had sounded "cooler", then adoption would have been faster... but I'm not so sure. Even if true, it doesn't absolve any of us for allowing the confusion to persist: cool or not, we (geeks especially!) should have the discipline to use proper standards.

* The current system can be roughly described as: SI prefixes are powers of 10 everywhere except in computer science, when they become powers of 2. But only when referring to memory, and some data structure sizes, but not when referring to transmission rates or disk space (unless it's a flash drive, sometimes), and other kinds of data structures.
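
The two conversion factors the parent is asking about, side by side as plain shell arithmetic (just an illustration of how far apart the conventions drift):

echo $(( 10**12 ))               # 1 TB, SI decimal: 1000000000000
echo $(( 2**40 ))                # 1 TiB, binary:    1099511627776
echo $(( 500 * 10**9 / 2**30 ))  # a "500 GB" drive expressed in GiB: 465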

Re:Preempting the prefix war (0, Offtopic)

Waffle Iron (339739) | more than 5 years ago | (#23315278)

but Tebibyte just sounds lame

It doesn't sound objectively any more lame to me than prefixes like "giga" and "peta". You probably just think it sounds weird because you're not used to it yet.

If you're starting a crusade to clean up the language, why not start with far more egregious problems like all the words that contain the string "ough"?

Re:Preempting the prefix war (1)

A beautiful mind (821714) | more than 5 years ago | (#23315796)

To put it more accurately, I was hinting at the phonetic characteristics of tebibyte that make the word harder to pronounce. Another poster mentioned that "quark" must have sounded weird when it was introduced for the first time. I have the opposite experience. Quark is easy to pronounce; it is a distinct, hard-to-confuse name for a specific particle, which in fact has quite an interesting etymology. I just love quark. It is actually one of the best examples of naming I could come up with off the top of my head.

IANALinguist but... (0)

Anonymous Coward | more than 5 years ago | (#23315838)

Maybe a linguist can pitch in to explain why tebibyte sounds so awful?

Maybe it's your accent? I'm entirely serious. I don't have trouble saying either word, or find an aesthetic difference.

So, what's your accent? This is bemusing. You've got me trying the words in various regional intonations to figure out why you find tebibyte a bad coinage. No luck so far.

Re:Preempting the prefix war (1)

pizzach (1011925) | more than 5 years ago | (#23315882)

It really didn't start out that complicated, but it's the manufacturers who keep F*ing it up because they are trying to stretch the numbers. I think it's the consumers who are the victims in this.

Hard drives have the MiB-MB problem because manufacturers wanted to be able to say 60GB instead of 54GB. When you buy a monitor, you have to look for the viewable size in much smaller print. Then there is the dithering you hear about on modern LCDs. I've also heard that early monitors were measured by their horizontal instead of their diagonal. Of course, the diagonal is longer so...

Meh. Buyer beware.

This is going to sound crazy but... (1)

pizzach (1011925) | more than 5 years ago | (#23315918)

Wouldn't it be easier to make manufacturers use the old MB=1024 type standard than to get the common people to understand a new prefix that they just won't remember?

Re:Preempting the prefix war (2, Interesting)

VirusEqualsVeryYes (981719) | more than 5 years ago | (#23316196)

I am not a professional linguist, but I think I can explain.

In any spoken language, different sounds are loosely associated with different ideas. As a simple example, voiceless sounds, like p, k, t, f, and s, are well suited for pointed use, as in pejoratives; and r, especially the alveolar trill variety, is associated with intimidation or primality. These associations are made either because it sounds like something else ("rrrr" sounds like an animal's growl or roar -- notice the Rs in "growl" and "roar"?) or because the sound serves a purpose (hard, clipped sounds serve well as punctuation -- notice all the hard sounds in "punctuate"?). In the latter case, combinations of sounds can invoke a wide array of ideas or feelings. Utilization of these things is key to a good punchline or to controlling semiconscious undertones of speech. I admire Dr. Seuss in particular for his mastery of sound combinations in making up suitable words to balance sing-song silliness with gravity and purpose.

Now, returning to "tebibyte" and all the other -bibytes, soft, voiced consonants like B are associated with childishness (a baby might make these sounds), silliness, bounciness, or informality. Two Bs in a row are especially so: bib, baboon, babble, bob, boob. The reason tebibyte sounds "stupid" is that it describes a technical idea using unsuitable sounds.

This being said, if you were used to using the word, you wouldn't think twice about it. You would probably complain about "flop" and "watt" in the same way if the words were new, but established use overrides the weak sound associations. The President could be instead called the Biggyloppalo and few would care, as long as the term were already established in the common vocabulary. I'd think people would move on even if a video game console were named something as ridiculous as "Wii" ... but that's just a wild guess.

As for my opinion on the matter, I'm in favor of it. The SI prefixes are already assumed to be powers of 10 in all other fields except the computer and information sciences. Tebibyte will maybe sound silly for a while, but the problem will go away given time. And I, for one, look forward to buying futuristic data storage without feeling a little cheated.

Wait, what? (1)

jesdynf (42915) | more than 5 years ago | (#23314492)

Did you see the section on timestamps? Nanosecond resolution out to 2514.

Nanoseconds.

We're dealing with a process whose maximum useful precision is "has the green light gone off yet", and we've got nanosecond timestamps.

Re:Wait, what? (1)

Uncle Focker (1277658) | more than 5 years ago | (#23314534)

The nanosecond resolution is there for mission critical systems that need a finer resolution than seconds.

Re:Wait, what? (1)

jesdynf (42915) | more than 5 years ago | (#23314990)

I'm having trouble with that one. I mean, the statement would've been true without invoking "mission critical" -- of course the ability to get resolution better than seconds is for applications that need resolution better than seconds. I'm not sure why you've invoked the specter of "mission critical" here, and I'm having a damn hard time picturing any utterly important, world-ending task that's going to (a) rely on the *timestamp in the filesystem* and (b) run on Linux and ext4fs.

And the timestamp isn't in milliseconds (an incremental improvement that wouldn't've even raised eyebrows) or microseconds (which would have been future-proofed overkill) but nanoseconds. I know how PC clocks work -- you'll have a hard time convincing me that you can maintain nanosecond timing long enough for the difference between two nanosecond timestamps to be accurate down to the nanosecond.

Re:Wait, what? (4, Informative)

Waffle Iron (339739) | more than 5 years ago | (#23315472)

They're probably using a 64-bit number to hold the timestamp. That gives you 1.8e19 discrete time intervals, so you're going to get ridiculous precision, dates ridiculously far into the future, or both. I assume that they went for precision because that arguably has more potential for use in the real world than worrying about files thousands of years into the future.

IIRC, today's PCs have high-resolution timers available that surpass the old 14.318MHz clock chip. If you can't get accurate nanoseconds out of the timers yet, they'll just round the numbers off. No big deal.

BTW, NTFS uses 100ns timestamp granularity, and it was designed when systems were almost 100X slower than today. So it had a similar amount of overkill, but that certainly doesn't seem to have had any negative impact on the acceptance of NTFS.
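
For what it's worth, the ext4 design notes describe the on-disk format not as one flat 64-bit counter but as the classic 32-bit seconds field plus a 32-bit "extra" field: 30 bits of nanoseconds and 2 bits that extend the epoch. That layout is where both the nanosecond resolution and the year-2514 figure come from:

echo $(( (1 << 34) / (365 * 24 * 3600) ))   # 34 bits of seconds is roughly 544 years past 1970, i.e. ~2514
echo $(( 1 << 30 ))                         # 30 bits comfortably cover 0..999,999,999 nanoseconds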

Re:Wait, what? (1)

bfields (66644) | more than 5 years ago | (#23316476)

IIRC, today's PCs have high-resolution timers available that surpass the old 14.318MHz clock chip.

Last I checked the actual time source used for file timestamps was actually jiffies, so even though the filesystem inode may in theory have space for lots of precision, in practice the resolution is only hundredths of a second.

That's a problem if you're trying to use the timestamp to decide whether a file has changed or not--if you happen to check the time between two writes that come within (say) a millisecond of each other, then you may not ever see the second write. (E.g., consider the case where you're an nfs client trying to check whether your cached data for a file is still up-to-date.)

Re:Wait, what? (1)

flnca (1022891) | more than 5 years ago | (#23315582)

Nanoseconds isn't that high a precision, when you think about the fact that everyone can buy a computer with 2.0 GHz nowadays, which would be a clock cycle of half a nanosecond (500 picoseconds).

Even my old Amiga 1000 computer had a timer with 2 nanosecond precision in 1986 with a CIA 8520 chip.

High-precision timestamps are used in cases when time stamps of files have to be compared to detect changes, as in the ubiquitous "make" program, which uses timestamps to detect whether a file has been modified.

The same applies to some scripting situations ...

Anyway, the resolution is also geared at the big irons, on which processors run that exceed the teraflop range (1 THz cycle duration = 1 picosecond).
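
You can already see the difference in granularity with stat: on ext3 the fractional part of the timestamp is all zeros, while ext4 (given a kernel that feeds it sub-second time) keeps it. The output below is only illustrative:

stat -c '%y' Makefile     # ext3:  2008-05-06 14:03:21.000000000 +0200
stat -c '%y' Makefile     # ext4:  2008-05-06 14:03:21.734186512 +0200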

Re:Wait, what? (1)

flnca (1022891) | more than 5 years ago | (#23315626)

BTW, every IA-32 compliant CPU (PI,PII,III,4,etc. or Athlon etc.) has an instruction called RDTSC, which reads the CPU clock tick counter into a 64 bit register. This creates a timing facility that has a precision as high as the CPU clock. So 1 GHz clock cycle = 1 nanosecond precision.

Re:Wait, what? (1)

skulgnome (1114401) | more than 5 years ago | (#23316336)

However, on a Core 2 chip (and presumably its descendants), a RDTSC is sometimes undone by the speculative execution scheduling unit and can, in some circumstances, report time going backwards. Not to mention that on a dualcore processor these counts are not strictly in sync, and do not tick forward predictably enough to be used as a clock.

Good idea though. Unworkable in practice, but good idea.

Re:Wait, what? (1)

techno-vampire (666512) | more than 5 years ago | (#23316300)

you'll have a hard time convincing me that you can maintain nanosecond timing long enough for the difference between two nanosecond timestamps to be accurate down to the nanosecond.


Not now, and not in the near future, sure. However, who's to say that it won't happen, possibly sooner than we think? The developers had the room to store times that accurate, so they probably just put it in to allow for future developments.

Re:Wait, what? (0)

Anonymous Coward | more than 5 years ago | (#23314554)

Drive access is a red light. And that may be the extent of its usefulness to desktop computers but for other applications timestamps can be important (nonetheless I'm not sure nanoseconds is going to be that useful).

Re:Wait, what? (1)

flnca (1022891) | more than 5 years ago | (#23315890)

Yeah, it's not precise enough for some applications. Picoseconds or femtoseconds would've been more reasonable. Also, why not make a timestamp 128 bits instead of just 64 bits? Those 8 extra bytes wouldn't have made much of a difference. So, we'll have to wait for ext5 to support that ... ;-)

Re:Wait, what? (2, Interesting)

VeNoM0619 (1058216) | more than 5 years ago | (#23314688)

Processing keeps getting faster and faster: in batch file processing (just an example), tens of files get processed in a single second. It may be useful to know which files were processed in what order, in which case the precision could be useful. Think of it more as a feature than a necessity, though, I suppose.

Re:Wait, what? (1)

flnca (1022891) | more than 5 years ago | (#23315936)

Only in some applications it's not tens of files per second, it's thousands or hundreds of thousands of files per second, as when you're checking and/or updating lots of small files, like source trees (for programs or web sites).

Re:Wait, what? (0)

Anonymous Coward | more than 5 years ago | (#23315656)

Oh great. Now not only will we have to use some complicated leap-second calculation to find out what year it is from a timestamp - we'll have to factor in the altitude of the computer so as to correct for relativistic effects as well!

To all ext3 users... (5, Informative)

c0l0 (826165) | more than 5 years ago | (#23314514)

...who are on the lookout for a new fs to entrust with keeping their precious data: make sure to check out btrfs ( http://oss.oracle.com/projects/btrfs/ [oracle.com] ). It's a really neatly spec'd filesystem (with all the zfsish stuff like data checksumming and so on), developed by Oracle employees under GPLv2, which will feature a converter application for ext3's on-disk-format - so you can migrate from ext3 to the much more feature-packed and modern btrfs without having to mkfs anew.

On a related sidenote: I'm very happy with SGI's xfs right now. ext\d isn't the only player in the field, so please, go out and boldly evaluate available alternatives. You won't be disappointed, I promise.
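
The converter mentioned above ships with the btrfs-progs tools as btrfs-convert. Roughly, with a placeholder device name -- and, given how young btrfs is, only on data you can afford to lose:

umount /dev/sdb1
btrfs-convert /dev/sdb1             # convert in place; the old ext3 image is kept as a snapshot
mount -t btrfs /dev/sdb1 /mnt/data
btrfs-convert -r /dev/sdb1          # roll back to the original ext3 filesystem if needed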

Re:To all ext3 users... (2, Insightful)

DJProtoss (589443) | more than 5 years ago | (#23314758)

I agree btrfs looks nice, but it's somewhat behind ext4 in terms of implementation and stability (which is saying something) - there's the small matter of not yet handling ENOSPC, for instance

Re:To all ext3 users... (1)

EvilRyry (1025309) | more than 5 years ago | (#23316110)

btrfs is a completely new, ground-up filesystem. I would expect it to take a while longer than ext4, which is just another incremental improvement on ext2. btrfs isn't stabilized at all yet; I would consider the running-out-of-space issue a non-issue at its current stage of development.

Re:To all ext3 users... (1)

miscz (888242) | more than 5 years ago | (#23314832)

The only thing that keeps many people using Ext3 is the availability of drivers on operating systems other than Linux and a clear upgrade path. Being supported by third-party partition management tools helps a lot too. I can read my data on Windows (Ext2 IFS for Windows is awesome), OS X (though the only driver doesn't have write support) and more.

I don't want to entrust my data to a single OS.

Re:To all ext3 users... (1, Troll)

swilver (617741) | more than 5 years ago | (#23315386)

There is no way I'm installing anything Oracle on my Linux system ever. I will definitely not entrust my data to them after having witnessed over the past years what a mess their flagship product is.

Re:To all ext3 users... (3, Interesting)

Anonymous Coward | more than 5 years ago | (#23315542)

I'm an XFS fan as well. I have been using it for years. I usually have my root/boot partition as ext3 (so grub works) and all data on XFS.

XFS kills ext in terms of not losing data. I have recovered lots of data from failed drives that were XFS formatted. Not so with ext3 which tends to flake out and destroy itself when it gets bad data.

And don't even mention ReiserFS, that has always sucked. I have lost more data to Reiser than any other filesystem (ext is a close second though). Sometimes it would corrupt files just from rebooting the machine. I have never lost data on an XFS partition that wasn't due to hardware failure.

Re:To all ext3 users... (1)

johannesg (664142) | more than 5 years ago | (#23316438)

I notice they are happily throwing benchmarks around, which is funny considering that Oracle will not allow benchmarking at all for their flagship product...

undelete (4, Informative)

Nimey (114278) | more than 5 years ago | (#23314656)

Oh, please. ext2 had "undelete" capability, just as it had filesystem compression capability. Neither was ever implemented.

Re:undelete (0)

Anonymous Coward | more than 5 years ago | (#23316106)

Filesystem compression was implemented, it was just never accepted into the mainstream kernel.

Better option: (0)

Anonymous Coward | more than 5 years ago | (#23314788)

Buy a Mac and use OS X, then you won't have to worry about this kind of shit.

THINK DIFFERENT. THINK BETTER. THINK APPLE!

Re:Better option: (0)

Anonymous Coward | more than 5 years ago | (#23315152)

You do realize people come to Slashdot because they *want* to talk about this stuff, right? That's not going to change, so you might want to leave...

Re:Better option: (2, Insightful)

jdinkel (1028708) | more than 5 years ago | (#23315608)

Buy a Mac, then you won't even be tempted with having a choice of something better.

But does it undelete... (1, Interesting)

swilver (617741) | more than 5 years ago | (#23315166)

That's all fine and dandy, but will it allow me to somehow undelete/recover when I accidentally type rm -Rf /hugedir -- yes, I know there are other ways to delete stuff, I just find it ridiculous that all Linux file systems with the exception of ext2 make no effort at all to be able to recover from such a common mistake. Of course, rm not giving any indication at all about how many bytes and files it is about to remove doesn't help either.

Re:But does it undelete... (3, Informative)

mlwmohawk (801821) | more than 5 years ago | (#23315492)

The whole "undelete" thing is a DOS FAT stupidity. The *only* reason why people think that you *can* undelete is that the DOS FAT file system was designed in such a way that file changes could be recovered *IF* you managed not to change the file system too much. DOS being a mainly single tasker, with the exception of the standard "indos" flag games.

POSIX was not and should not be designed in such a way that "undelete" is reliably possible. That's like saying can I unlight that match. Can I unbreak that egg?

An unreliable system that may, on the odd chance that the file structure has not changed too much, recover files from a disk that have not been over-written yet is no replacement for NOT being an idiot and being careful when you delete something.

Re:But does it undelete... (1)

swilver (617741) | more than 5 years ago | (#23316022)

Fine, assume I'm an idiot then. No expert would ever want such a feature, or expect to be able to recover files in some way after they had made a mistake, even if that means taking the drive offline immediately and performing a full disk scan.

Re:But does it undelete... (1)

Hatta (162192) | more than 5 years ago | (#23315514)

That's not the file system's job, that's the tool's job. You'll find on Windows that when you use 'del' to delete something, it doesn't end up in the recycle bin.

So if you want some sort of soft delete, don't use rm or del. Use a tool that 'soft deletes' a file by moving it into a trash bin, which you can 'hard delete' when you need more space. This is how Windows and KDE both work.

Personally, I think file systems aren't aggressive enough when it comes to deleting files. When I delete something I want it well and truly gone.

Re:But does it undelete... (1)

EvanED (569694) | more than 5 years ago | (#23315724)

So if you want some sort of soft delete, don't use rm or del. Use a tool that 'soft deletes' a file by moving it into a trash bin, which you can 'hard delete' when you need more space. This is how Windows and KDE both work.
Know of a command line tool that fits this description that comes with most *nix distributions?

Re:But does it undelete... (1)

Xtravar (725372) | more than 5 years ago | (#23316024)

mv file.txt ~/.Trash/

That's essentially what any shell does. Personally, I find it annoying and do all my deleting from the command line.
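
A slightly safer variant of the same idea, sketched as a shell function -- the name "trash" and the ~/.Trash directory are just conventions here, not a standard tool:

trash() {
    mkdir -p ~/.Trash
    for f in "$@"; do
        mv -- "$f" ~/.Trash/"$(basename "$f").$(date +%s)"   # timestamp suffix avoids name collisions
    done
}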

Re:But does it undelete... (1)

swilver (617741) | more than 5 years ago | (#23315974)

I don't expect it to manage "deleted" files at all. It's that sinking feeling you get when you think you're deleting a directory that should be mostly empty, and rm is taking longer than expected (your only indication that something is horribly wrong).

It's not unreasonable to expect to be able to undo that action when I immediately cancel it and make sure the drive is not written to anymore. Ext2 can do this easily. Ext3 goes out of its way to make this impossible. XFS/ZFS/ReiserFS etc. all make it way too hard or impossible.

I'd alias rm to something with a more sane cmdline interface, but that would break every shell script in existence.

Recycle bin solutions are crap. I want stuff to hang around for 30 minutes at most, not weeks on end, taking up valuable free space and causing unnecessary disk fragmentation.

Getting off topic, but... (1)

teh_commodore (1099079) | more than 5 years ago | (#23315632)

Might I recommend passing the -I option? I have rm aliased to 'rm -I' on my work machine.

Re:Getting off topic, but... (1)

swilver (617741) | more than 5 years ago | (#23315874)

Yes, I have that too, and it's useless.
Frankly, the whole command is poorly designed. I don't want to confirm each and every directory; I want an overview of what is getting deleted, and then I want it to ask me "are you sure?". It doesn't even offer a "yes to all" option in interactive mode, for when you finally tire of pressing "y" 50,000 times.
I've even gone looking for replacements that did just that, but that turned up nothing.
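
A rough sketch of that "show me everything, then ask once" behaviour as a wrapper function -- the name rmtree is made up, and it still ends in an rm -rf, so treat it with the same respect:

rmtree() {
    find "$@" -depth | less                 # review the complete list of what would go
    printf 'Delete all of the above? [y/N] '
    read ans
    [ "$ans" = y ] && rm -rf -- "$@"
}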

Re:But does it undelete... (1)

blitzkrieg3 (995849) | more than 5 years ago | (#23315746)

I don't know about you, but I rarely ever have to type -f. Usually I only do that after I can't do it with a regular rm, in which case I'm already considering not deleting /hugedir in the first place.

Wrong arguments (1)

The Mighty Buzzard (878441) | more than 5 years ago | (#23315774)

I think you're looking for rm -rI /hugedir then. Adding the -f option is you specifically stating that you know exactly what you're doing and do not want to be asked if you're sure you want to remove /home and all its subdirectories.

Re:Wrong arguments (1)

swilver (617741) | more than 5 years ago | (#23316160)

Capital I not being an option on my rm, I assume you are talking about the "interactive" mode. The problem is that rm's behaviour is ridiculous in this mode. Watch this:

[root@MyServer 0 ~]# rm -Ri MPdeletethis
rm: descend into directory `MPdeletethis'? y
rm: descend into directory `MPdeletethis/Gui'? y
rm: descend into directory `MPdeletethis/Gui/wm'? y
rm: remove regular file `MPdeletethis/Gui/wm/ws.c'? y
rm: remove regular file `MPdeletethis/Gui/wm/ws.h'? y
rm: remove regular file `MPdeletethis/Gui/wm/wskeys.h'? y
rm: remove regular file `MPdeletethis/Gui/wm/wsmkeys.h'? y
rm: remove regular file `MPdeletethis/Gui/wm/wsxdnd.c'? y
rm: remove regular file `MPdeletethis/Gui/wm/wsxdnd.h'? y
rm: remove directory `MPdeletethis/Gui/wm'? y
rm: descend into directory `MPdeletethis/Gui/skin'? y
rm: remove regular file `MPdeletethis/Gui/skin/cut.c'? y
rm: remove regular file `MPdeletethis/Gui/skin/cut.h'? y
rm: remove regular file `MPdeletethis/Gui/skin/font.c'? y
rm: remove regular file `MPdeletethis/Gui/skin/font.h'? y
rm: remove regular file `MPdeletethis/Gui/skin/skin.c'? y
rm: remove regular file `MPdeletethis/Gui/skin/skin.h'? y
rm: remove directory `MPdeletethis/Gui/skin'? y
rm: remove regular file `MPdeletethis/Gui/interface.c'? y
rm: remove regular file `MPdeletethis/Gui/interface.h'? y
rm: remove regular file `MPdeletethis/Gui/Makefile'? y
rm: remove regular file `MPdeletethis/Gui/bitmap.c'? y
rm: remove regular file `MPdeletethis/Gui/bitmap.h'? y
rm: remove regular file `MPdeletethis/Gui/app.c'? y
rm: remove regular file `MPdeletethis/Gui/app.h'? y
rm: remove regular file `MPdeletethis/Gui/cfg.c'? y
rm: remove regular file `MPdeletethis/Gui/cfg.h'? y
rm: descend into directory `MPdeletethis/Gui/win32'? y
rm: remove regular file `MPdeletethis/Gui/win32/interface.c'? y
... few 1000 lines more removed ...
You get the idea. It's no wonder I type -f when I want to remove a directory.

Re:But does it undelete... (3, Funny)

Anonymous Coward | more than 5 years ago | (#23315938)

Perhaps you should try prm (pansy rm) or psh (pansy shell).

ext3 tops out at 16GB files? (1, Interesting)

XorNand (517466) | more than 5 years ago | (#23315174)

Ext3 tops out at 32 tebibyte (TiB) file systems and 2 TiB files, but practical limits may be lower than this depending on your architecture and system settings--perhaps as low as 2 TiB file systems and 16 gibibyte (GiB) files.
Is this really the case? I created a 100GB file on ext3 earlier this week. It contains a virtual machine image that I am currently running under Xen. I haven't yet had a problem. I would guess that >16GB files are pretty commonly used in the world of Xen.
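
If you just want to sanity-check large-file support on a given filesystem, a sparse file is enough; the sizes and names here are only examples:

dd if=/dev/zero of=bigfile.img bs=1M count=0 seek=102400   # a 100 GiB sparse file, created instantly
ls -lh bigfile.img                                         # reports 100G
du -h bigfile.img                                          # but almost no blocks are actually allocated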

Re:ext3 tops out at 16GB files? (1)

Uncle Focker (1277658) | more than 5 years ago | (#23315624)

Did you miss the part where it said limits "may be" lower? As in, in some cases that might be true, but not in others.

Why bother? (3, Informative)

jabuzz (182671) | more than 5 years ago | (#23315898)

ext4 is the biggest waste of time and effort in Linux. There are already good extent-based filesystems for Linux. Why anyone would consider using what is an experimental filesystem for a multi-TB production filesystem is beyond me.

Whatever they do, XFS and JFS will have way more testing and use than ext4 will ever have. I just don't get the point of ext4. It would be far more useful to fix the one remaining issue with XFS, the inability to shrink the filesystem non-destructively, than to flog the dead horse that is ext2/3 even more with ext4, which is not on-disk compatible anyway.

Re:Why bother? (3, Insightful)

skulgnome (1114401) | more than 5 years ago | (#23316382)

It has value as an experiment, even if it ultimately doesn't turn into much. These people have ideas, and they want to implement them. They aren't maintenance programmers and should not be shoehorned into that task even at the level of J. Random Person On Slashdot's thought.

Remember how reiserfs was the first filesystem to have journaling in Linux, and how some people were ready to state that there was no need to do an ext3 any more?