SL Baur's Journal: Why Unix has survived the test of time

I wrote the following in a recent article:

I have been impressed with Unix and its descendants since I first encountered them in college. The big Blue and Green books documenting Version 7 Unix were useful for everything Unixy at the time, and I've always liked the multiuser/multiprocessing aspect of the system. System V/R2 was a disaster on the order of Microsoft Windows XP (so I've read; I only used Microsoft Windows XP/SP2 for about half a year, and it was only less stable than System V/R2 with patches), but it was released two decades earlier and has since had all the problems worked out.

The Unix model, as first designed by Ken Thompson and Dennis Ritchie, has withstood the test of time as no other software project ever has. They killed the proprietary O/S model on minis and mainframes. They killed the idea of non-portable OSes, though Microsoft has resurrected that idea. They so excited the minds and hearts of programmers that dozens of reimplemented spinoffs were written ... and survive to this day.

And now, I want to expand on it.

The Blue and Green books, which were published around 1980, were critical to the success of Unix. They documented every system call and every library call, and they did not leave anything out. This was the true start of Open Systems. No longer did you have to have huge support contracts with your O/S and equipment manufacturers to figure out how to do things[1]. This was underscored by the fact that the crucial user interface to the system, the one every user had to endure, /bin/sh, was just an ordinary program, and many people who were unsatisfied could and did reimplement it (/bin/csh being the first).

* Open Interface, interchangeable parts.

Unix has long been criticized for its cryptic commands. Well, it was until certain people decided the command line had gone out of fashion. The small but numerous commands included with Unix emphasized a philosophy of Small is Beautiful and Do One Thing and Do It Well. They may have been difficult to remember at first (though the Blue Book had all the manpages on nice, removable, 3-hole-punched pages), but they were easy to type. And let's face it, whether you are working at a 110 baud hardcopy teletype or an xterm, it's easier to type `ls' than `directory' or something like that. Later advances in Unix shell technology made that rather moot with user-defined aliases and bound function keys, as in zsh. The point to remember is that those later advances were enabled by the fact that /bin/sh, wonderful innovation that it was at the time, was an interchangeable part.

The original Unix guys were proud of the pipeline, and indeed it is a useful concept, but I doubt many folks today make much use of it. Programs like Richard Stallman's Emacs, XEmacs, and Expect handle most of the complicated cases in greatest usage today, and the rest are things like <some command that will produce a lot of output> | less.

As multicore systems become prevalent, we may see someone take advantage of them to make really wonderful pipelines possible, though I'm not optimistic. But hey, I'll be happy to hack at zsh, with all of its wonderful pipeline primitive extensions, if someone throws money at me. On second thought, I'd rather the zsh guys be the target (of the money).

* Flexible design methodology.
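
To make the pipeline point concrete, here is a rough C sketch (mine, not anything from the V7 sources) of more or less what a shell does for `ls | less': make a pipe, fork two children, wire one child's stdout and the other's stdin to the pipe ends, and exec the programs. The particular commands are only for illustration.

    /* Roughly what a shell does for "ls | less". */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) == -1) { perror("pipe"); exit(1); }

        if (fork() == 0) {               /* writer: ls */
            dup2(fd[1], STDOUT_FILENO);  /* stdout -> pipe write end */
            close(fd[0]);
            close(fd[1]);
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp ls");
            _exit(127);
        }

        if (fork() == 0) {               /* reader: less */
            dup2(fd[0], STDIN_FILENO);   /* stdin <- pipe read end */
            close(fd[0]);
            close(fd[1]);
            execlp("less", "less", (char *)NULL);
            perror("execlp less");
            _exit(127);
        }

        close(fd[0]);                    /* parent keeps neither end */
        close(fd[1]);
        while (wait(NULL) > 0)           /* reap both children */
            ;
        return 0;
    }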

Unix has been multiuser since the very beginning. Well, after they got the bugs out that caused people to shout "a.out" when they were running programs they had just compiled. For historical perspective, the original PDPs had similar amounts of memory to the first IBM PCs. Separate I & D space (64K of instructions plus 64K of data, for a whopping 128K of address space) was a huge win in those days.

* Multiuser/multiprocessing because no one needs to be administrator on a computer to use it and no one does just one thing at a time.

The Unix file system has simplicity at its core. Files are just files, handled as a stream of data. Encoding a file's purpose in the file name or as metadata had been the norm, and Unix broke with that. VMS's RMS (Record Management Services) was the complete opposite, and older folks remember the shelves of bookcase space VMS programmers routinely kept nearby, filled with programming manuals. Open, read, write, close. It doesn't get any simpler than that.

* Files are just files[2].
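
To show how small that interface really is, here is a minimal sketch of a file copy written with nothing but open, read, write and close. Error handling is abbreviated and the buffer size is arbitrary; it is an illustration, not a replacement for cp(1).

    /* Copy a file using only open, read, write, close. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s src dst\n", argv[0]);
            return 1;
        }

        int in = open(argv[1], O_RDONLY);
        int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in == -1 || out == -1) { perror("open"); return 1; }

        char buf[4096];                  /* files are just byte streams */
        ssize_t n;
        while ((n = read(in, buf, sizeof buf)) > 0)
            if (write(out, buf, n) != n) { perror("write"); return 1; }

        close(in);
        close(out);
        return 0;
    }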

Over the years, cruft has crept in, as it will with any going[3] software project, but not enough to invalidate what Ken & Dennis started almost four decades ago. Certainly the lack of an rmdir(2) system call was an oversight[4], but not a crippling one. My number one criticism of Unix is the short-sightedness of using only 32-bit signed integers for time computation, which makes the End Of The World come around January 19, 2038. Fortunately, modern Unix derivatives use 64-bit time, and I guarantee you, many of the ones you see today will still be in use three decades from now.
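
The 2038 limit is easy to demonstrate. The sketch below feeds the largest value a signed 32-bit counter can hold to gmtime(3) and prints the resulting date; it assumes you run it on a system whose own time_t is already 64 bits, so the value itself does not wrap.

    /* Print the moment a signed 32-bit time_t runs out. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t t = 0x7fffffff;           /* 2147483647 seconds past the epoch */
        char buf[64];

        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&t));
        printf("32-bit time_t overflows after %s\n", buf);
        /* Prints: 2038-01-19 03:14:07 UTC */
        return 0;
    }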

Unix. Live Free, or die.

Footnotes:
[1] I am pointing the finger at the former Digital Equipment Corporation here. In the mid 1980s, I was employed at one of the first 100 .coms, with all the associated huge defense contracts of the time, and was trying to code for myself a feature that was in one of the DEC-supplied standard VMS utilities. I could not make it behave the same way as the DEC program. I spent many hours troubleshooting the problem, and many hours on the phone with a DEC consultant who claimed to be reading the source of the program that implemented the feature I was trying to implement, but no luck. Just Say No! to undocumented interfaces.

[2] Yes, at times I have read mail with cat(1).

[3] As in "going concern" an accounting term referring to businesses with a future.

[4] Which causes the command `/bin/rm -rf /' run as root to fail after removing the /bin/rmdir program (do not try that at home unless you know what you are doing).

This discussion has been archived. No new comments can be posted.

  • 1. Dude, if you can go without a given syscall, then you do. So, if rmdir can be a separate program, it will be.

    2. BTW, VMS's filesystem is actually quite cool, 'cuz it can treat files as pretty much separate file systems, and can store ONE file in two or more directories (does that remind you of something? /bin, /sbin, /usr/bin, /usr/sbin, /usr/local/bin ...), you just need a wrapper for dumber programs that treat files as byte streams. As you said, the problem is undocumented interfaces.

    Cheers!

    • by SL Baur ( 19540 )

      1. It depends on the system call. The problem with implementing rmdir in userland is a matter of atomicity. `rmdir foo', where foo is an empty directory, consists of three steps: unlink("foo/."); unlink("foo/.."); unlink("foo"). That's inherently racy (rough sketch below).

      2. I have respect for certain aspects of VMS. Logical tables were cool. The way you could turn any synchronous system call into an asynchronous call with various kinds of completion was cool. Fine-grained privileges were nice. VMS was decades ahead of any
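
      A rough historical sketch of point 1: before rmdir(2) existed, a userland rmdir had to issue those three unlinks itself (superuser only, since unlinking directory entries was privileged). Modern kernels refuse unlink(2) on directories entirely, so this is an illustration of the race, not working code.

        /* Historical sketch: a pre-rmdir(2) userland rmdir.  Modern
         * kernels reject unlink(2) on directories, so expect failure. */
        #include <stdio.h>
        #include <unistd.h>

        static int old_style_rmdir(const char *dir)
        {
            char path[1024];

            snprintf(path, sizeof path, "%s/.", dir);
            if (unlink(path) == -1)      /* step 1: remove "."  */
                return -1;
                                         /* another process can race in here */
            snprintf(path, sizeof path, "%s/..", dir);
            if (unlink(path) == -1)      /* step 2: remove ".." */
                return -1;

            return unlink(dir);          /* step 3: remove the directory */
        }

        int main(int argc, char *argv[])
        {
            if (argc != 2) {
                fprintf(stderr, "usage: %s directory\n", argv[0]);
                return 1;
            }
            if (old_style_rmdir(argv[1]) == -1) {
                perror("old_style_rmdir");   /* expected on a modern kernel */
                return 1;
            }
            return 0;
        }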
