
The Humane Interface

timothy posted more than 12 years ago | from the killing-you-softly dept.

Programming 169

Reader Torulf contributed the review below of Jef Raskin's The Humane Interface. Though the book does not spend much time on Open Source software, it emphasizes ideas that every programmer probably ought to bear in mind -- at least if they want their programs to have users. (And yes, he takes explicit exception to some UNIX truisms.)


The Humane Interface: New Directions for Interface Design, by Jef Raskin, the creator of the Macintosh project, is an interesting read chock full of controversial ideas. As the book's full title suggests, Raskin focuses mainly on how things ideally should be made, rather than offering advice and recipes that can be immediately applied to problems in today's systems.

Don't Think!

The approach taken by Raskin stems from his definition of a humane interface: "An interface is humane if it is responsive to human needs and considerate of human frailties." In practice, this means that the interface he suggests is based on ergonomics and cognetics (psychology).

Basically, the idea is that we can do only one thing well at a time consciously. Most of us can walk, chew bubblegum, hold our bladder and speak (semi-intelligently) with a friend, all at the same time. This is because the only thing we are consciously doing (the only thing we are concentrating on) is the semi-intelligent babble. On the other hand most of us run into trouble when trying to study calculus at the same time we're hitting on a sexy lady (make that a handsome man for those 5% of ladies here at Slashdot, or some sort of hermaphrodite freak for those 11% with doubts about their sex).

The point of this is that the one thing we're consciously working on should, with as few interruptions as possible, be the content we are producing with the computer, not the computer itself. That is, all interaction with the system should, after the initial learning phase, become habitual or automated, just as we can walk or drive a car without ever consciously thinking about it. This way we can maximize productivity and concentrate on the content of our work.

There's Only One Way to Do it

For commands to become habitual as quickly as possible some interface-guidelines are given. First of all, all modes (differing types of responses based on context) should be eliminated: the system should always react in the same way to a command. Modes generate user errors whenever the user isn't paying attention to which mode the system is currently in, and the user should not have to pay attention to the system's current mode at all, only to the content he or she is producing. An example of a mode error happened to me while writing this review, just a few lines up: I unintentionally left overwrite on when I thought I was in insert mode, and thus overwrote a word by mistake. Of course, this was no big deal, but I had to take my mind off formulating the sentence to figure out what had happened; only for a second, but long enough to derail my thoughts, and that derailing should never happen.

Another way to speed the transition to habitual use is monotony. You should never have to think about which way to do something; there should always be one, and only one, obvious way. Offering multiple ways of doing the same thing makes the user think about the interface instead of the actual work to be done. At the very least, this slows down the process of making the interface habitual.

Unorthodox Suggestions

There are of course a lot of other suggestions in the book, some expected, some very surprising and unorthodox. The more conventional wisdom includes the observation that the interface should be visible; that is one of the downsides of the otherwise efficient command line interface: you cannot see the commands at your disposal just by looking at it. A method called GOMS, for evaluating keystroke efficiency, and Fitts' law (the time it takes to hit a GUI button is a function of the distance from the cursor to the button and of the button's size) are also among the less surprising ideas in the book.
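Fitts' law is commonly written in the Shannon formulation MT = a + b * log2(D/W + 1), where D is the distance from the cursor to the target and W the target's width; a and b are device- and user-dependent constants found by measurement. A small sketch in Python, with made-up constants purely for illustration:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted time (s) to acquire a target: MT = a + b * log2(D/W + 1).
    The constants a and b here are illustrative placeholders, not
    measured values."""
    return a + b * math.log2(distance / width + 1)

# A big, nearby button is predicted to be faster to hit
# than a small, distant one.
near_big = fitts_movement_time(distance=100, width=50)
far_small = fitts_movement_time(distance=800, width=10)
assert near_big < far_small
```

This is also why screen edges and corners make good targets: a target you cannot overshoot has, in effect, a very large width.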

The more unorthodox suggestions include Raskin's proposal for a universal undo/redo function, not just in the different applications but in the system itself. The same gesture would always undo, no matter what application or setting you last touched.
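Such a system-wide undo can be sketched as a single shared history into which every component records reversible actions, so one gesture always reverses the most recent change, wherever it happened. A minimal Python sketch (all names invented; the book describes the principle, not this code):

```python
class SystemHistory:
    """One undo/redo stack shared by every component, so the same
    gesture always undoes the most recent change, whatever made it."""
    def __init__(self):
        self._undo, self._redo = [], []

    def record(self, do, undo):
        """Execute an action and remember how to reverse it."""
        do()
        self._undo.append((do, undo))
        self._redo.clear()

    def undo(self):
        if self._undo:
            do, undo = self._undo.pop()
            undo()
            self._redo.append((do, undo))

    def redo(self):
        if self._redo:
            do, undo = self._redo.pop()
            do()
            self._undo.append((do, undo))

# Two unrelated "components" (a text buffer and a volume setting)
# share the same history:
history = SystemHistory()
text, volume = ["hello"], [5]
history.record(lambda: text.append("world"), lambda: text.pop())
history.record(lambda: volume.__setitem__(0, 7),
               lambda: volume.__setitem__(0, 5))
history.undo()  # reverts the volume change, not the text edit
assert volume == [5] and text == ["hello", "world"]
```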

The really controversial idea, though, is to abandon applications altogether. There would be only the system, with its commands, and the users' content. Even file-hierarchies should vanish along with the applications, according to Raskin.


The Humane Interface focuses on the principles of user interfaces and the theory behind them. The ideas presented in the book generally make sense once you consider the background and arguments Raskin presents for them, even if some seem very drastic when first encountered (I can hear Slashdotters firing up their flamethrowers after reading the end of the last paragraph). As mentioned before, the book does not provide many ready-to-use recipes. It does provide good insight into the background of user interfaces, which the reader can apply to the project at hand.

Some related ideas were discussed about a year ago on Slashdot. The Anti-Mac paper discussed then came to pretty much the opposite conclusions from the ones that Raskin presents (Raskin makes a case against beginner/expert interface levels). After reading both sides of the story, I'm inclined to believe more in Raskin's reasoning.

The only Open Source or Free Software the book mentions is Emacs, in a discussion about string searches. (The incremental model in Emacs is preferable to systems where the search does not begin until you click the search button.) I do, however, believe that the alternative interface models could be an opportunity for the open source community, and that Raskin's ideas are perhaps more likely to be implemented and tested in such an environment than by Microsoft greatly simplifying the thousands of commands that make up MS Office. I therefore warmly recommend this book to anyone doing software development, and I would love to see some of the ideas used in KDE or GNOME.
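The incremental model is simple to sketch: the match updates after every keystroke, so there is no separate moment at which a search "begins". A toy Python illustration (Emacs's actual isearch is, of course, far richer than this):

```python
def incremental_search(text, keystrokes):
    """Return the match position after each keystroke; -1 once the
    growing query no longer matches. A toy model of Emacs-style
    incremental search, not Emacs's implementation."""
    query, positions = "", []
    for ch in keystrokes:
        query += ch
        positions.append(text.find(query))
    return positions

# Each keystroke immediately refines the match.
assert incremental_search("the humane interface", "hum") == [1, 4, 4]
```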

You can purchase this book at Fatbrain.



Re:This is the ... READ HIS STUFF! (2)

Anonymous Coward | more than 12 years ago | (#216465)

A few quotes from a page on his site. [jefraskin.com]
Perhaps the most important point is that most of what we do with computers are basically text-based applications.
GUI interfaces are becoming VERY, VERY "cognitively intensive".
If people weren't good at finding tiny things in long lists, the Wall Street Journal would have gone out of business years ago. Would you rather have the stocks listed fifteen to a page, each page decorated like a modern GUI screen?

Here's the kind of idea that would break Gnome, or KDE away from the GUI pack (again, from JR's site):

So we stored an image of the screen on disk so that when you turned on the Cat, that image promptly came up, which took less than a second. I knew from published experimental results that a person typically takes at least eight seconds to re-engage their thinking when coming into a new environment (e.g. moving from talking to a co-worker to using their computer). People stare at the screen for a few seconds, oblivious to time passing, regaining context. By the time they started working, the Cat screen was real (the only visual change was that the cursor started blinking).

In case they started typing before the screen was ready, we captured the keystrokes which all popped into the cursor location at once when the screen went live. In practice (and we did a lot of user observation to find this out) this almost never happened. When it did it did not unduly upset users. In any case, it was a LOT better than having to wait a minute or two as with PCs and Macs.

It would be possible, on today's computers, to quickly load a small routine to capture and display keystrokes so that you could sort-of get started or at least capture an idea while the rest of the system drifted in. Then you could paste what you've done where it belongs. But nobody seems to care. More's the pity.
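The keystroke-capture idea described above is easy to sketch: buffer keys typed before the system is ready, then replay them into the live handler in order. A toy Python model of the behaviour (not the Cat's actual code):

```python
from collections import deque

class TypeAheadBuffer:
    """Hold keystrokes typed before the system is ready, then flush
    them, in order, into the real handler once it goes live."""
    def __init__(self):
        self._pending = deque()
        self._handler = None

    def key(self, ch):
        if self._handler:
            self._handler(ch)
        else:
            self._pending.append(ch)

    def go_live(self, handler):
        self._handler = handler
        while self._pending:  # early keystrokes pop in all at once
            handler(self._pending.popleft())

buf = TypeAheadBuffer()
typed = []
for ch in "hi ":           # user starts typing before the screen is up
    buf.key(ch)
buf.go_live(typed.append)  # system comes up; buffered keys appear
buf.key("t")
assert "".join(typed) == "hi t"
```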

Different Strokes for Different Folks (5)

Anonymous Coward | more than 12 years ago | (#216466)

I think something that a lot of people miss is the fact that the average computer user is very different from the average Slashdot reader or Linux user.

A lot of users (my mom, for example) use their computer for three things: reading email, surfing the web and word processing (Word). Now tell me why any of these three tasks require a user to think about "a file". Tell me why any of these three tasks require the user to think about "C:\My Documents\WorkStuff" (or "/home/foo/docs/workStuff", if you prefer).

These concepts are all hold-overs from an era where the people designing software were the only users of the software. If I'm a programmer, of course I'm going to design my shell so I can write shell scripts. Of course I'm going to give myself the ability to create complicated hierarchies -- that's how I think!

Now, we have a lot of users of software who are not programmers and geeks. If we care about them using technology, we need to think about the easiest thing for them. This doesn't mean we have to get rid of command lines, we just have to come up with something else for users who do not want (or need) command lines. This is not a zero-sum game. The growth and popularity of technology is a win for everyone.

To make this happen, a portion of the software community has to realize that they are designing software for people who are not programmers and who are not (gasp!) interested in technology for technology's sake. Some of us (but not all) need to get rid of the attitude exemplified by the following quote:

"Linux *is* user friendly. It's not idiot-friendly or fool-friendly!"

The majority of users are not "idiots" or "fools". Some are doctors, lawyers or artists who have chosen to concentrate in an area outside of computers. Saying they are idiots because they don't understand symbolic links is like an eye surgeon saying that a programmer is an idiot because he can't remove a cataract.

Re:Just read this myself (1)

Andrej Marjan (1012) | more than 12 years ago | (#216469)

I haven't read the book yet, but I did attend a presentation by Raskin last week, and he gave an example of why your suggestion -- visual cues indicating the current mode -- doesn't work.

As stated in the review, people can only consciously concentrate on one thing at a time. They will be concentrating on the task they wish to perform in vi, rather than on the current text colour and by extension, the mode the program is in.

His specific example was of a study he did with experienced users of a particular CAD program. This program has several selection tools, all indicated by different, distinctive mouse pointers. Those are modes.

Users invariably made the same error repeatedly, even with experience: they did something with a specialized selection tool, then moved to select an object with the regular selection tool without switching to it first, even though they were looking right at the visual mode indicator the whole time.

The users were focussing on the task of selecting an object, not on what selection tool (mode) was currently enabled. Adding your visual cue to vi won't help any; people will still get it wrong because they won't be paying attention to the visual cue.

Besides, wouldn't that mess up syntax highlighting?
Change is inevitable.

It's called "editing" (1)

cradle (1442) | more than 12 years ago | (#216471)

at least if they wants hisprograms to have users
I understand Slashdot is not a professional news organization, but this is pretty embarrassing.

And as has been mentioned, the title in the summary box substitutes "Human" for "Humane".

Maybe you should borrow an idea from XP and practice pair publishing ...

Re:Fitts' law (1)

ROC (2945) | more than 12 years ago | (#216472)

The logic seems to be

1. He's pulling it down - he wants it to go down...

2. He's pulling it further down - he wants it to go further down...

3. He's still pulling it even further down - AHA, he wants it to go up!

This goes for Windows, Mac, KDE and probably a wholelotta other interfaces.

I think that's windows only. I don't have a Mac at the moment but my KDE (or any other X application) doesn't do this.

That feature annoys me a lot when working on windows just like the scrollbars popping back if you leave the area to any other side.


Re:Sounds familiar (1)

bjohnson (3225) | more than 12 years ago | (#216473)

More to the point, it sounds exactly like the Newton interface, which he mentions explicitly in the book.

Jazz ZUI (1)

richieb (3277) | more than 12 years ago | (#216474)

Here is an open source implementation of a Zooming UI in Java: http://www.cs.umd.edu/hcil/jazz/


OT: "Hermaphroditic freak" was uncalled-for (2)

Just Some Guy (3352) | more than 12 years ago | (#216475)

You know, I'm willing to bet that hermaphrodites are well-aware that people think of them as strange or weird without being called "freaks" in a large public forum completely non-related to sexuality or biology issues.

Furthermore, I'm pretty sure that most gay people would not list a "hermaphroditic freak" as their distraction-of-choice.

The traditional neutral phrase is "member of the appropriate sex", or MOTAS (straight from the Jargon File). Better still, leave the weird imagery out of future book reviews.

Re:Just read this myself (1)

cpt kangarooski (3773) | more than 12 years ago | (#216477)

I've heard that the Canon Cat sold great; it was just the prize in a war between various divisions in the company, and was killed off due to politics.

OTOH, it would not have been particularly expandable into any kind of general purpose computer. But hey - lots of things that are good ideas don't get bought; that doesn't invalidate them.

Re:Blah blah well DO something then (1)

cpt kangarooski (3773) | more than 12 years ago | (#216478)

He started the Mac project and introduced Apple to the concept of GUIs. He went to Canon and brought out a word processor that's generally agreed upon to have been very nice indeed.

So, what precisely is he not doing about things? In the end, with both him and Tog and others, someone's got to listen to them and follow their advice - neither have the ability or inclination to code a whole fully-featured UI themselves. At least they remain influential. I know I'm paying attention in my own projects, insofar as I can.

"Mother Nature" (1)

counsell (4057) | more than 12 years ago | (#216484)

The two most powerful designed interfaces I use are my computer keyboard and my (digital) piano keyboard. To me they are intuitive, but that's because I use them every day. In both cases I am rarely doing one thing at a time. In both cases no one has devised a mechanism which improves significantly on them---in performing the same tasks, that is.

However, although they nod towards the "human"---the key sizes relate to the size of human fingers---neither interface could in any way be described as "humane", in the sense referred to in the book, especially from the point of view of the new user.

This is the nature of life. Life is difficult. Life is hard work. (If you'd heard my piano playing you'd know how painfully true this is. But this is only a matter of time; practice makes perfect.)

The point I am trying to make is that the most powerful and popular interfaces are the ones which offer maximum speed and control---not the ones which have the shallowest learning curve, not the ones which focus on one task at a time, not the ones which confuse the user least.

Remember learning to walk, or learning to invert your vision? Quite a few million years of evolution went into "designing" the interfaces between your brain and your legs/eyes. It still hurts learning to use them. But look at the pay-off. I mean literally...

Re:No file hierarchies? (1)

Doctor Memory (6336) | more than 12 years ago | (#216491)

I'm not sure how you'd organise your documents though.

Possibly through a set of filters/queries? With no filter, *everything* on your system would be presented. You could create filters to specify files by date (creation/modification), "subject" (whatever you decide that should be), type, or a variety of user-specified attributes. Filters could be combined with AND and OR operators. NTFS has some of these capabilities, but no support as far as I can tell; I understand BeOS (or its file browser) has something like it.
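The filter idea can be made concrete: predicates over file metadata that combine with AND and OR, so saved queries take the place of folders. A toy Python sketch (the metadata records and attribute names are invented for the example):

```python
from datetime import date

class Filter:
    """A predicate over file metadata; & and | build compound
    queries, replacing folder navigation with filtering."""
    def __init__(self, pred):
        self.pred = pred
    def __and__(self, other):
        return Filter(lambda f: self.pred(f) and other.pred(f))
    def __or__(self, other):
        return Filter(lambda f: self.pred(f) or other.pred(f))
    def __call__(self, items):
        return [f for f in items if self.pred(f)]

# Invented example records; a real system would read these from
# the filesystem's metadata index.
files = [
    {"name": "budget", "type": "spreadsheet", "modified": date(2001, 5, 1)},
    {"name": "memo",   "type": "text",        "modified": date(2001, 4, 2)},
    {"name": "logo",   "type": "image",       "modified": date(2001, 5, 3)},
]

recent = Filter(lambda f: f["modified"] >= date(2001, 5, 1))
texty = Filter(lambda f: f["type"] in ("text", "spreadsheet"))

# With no filter everything is shown; filters only narrow the view.
assert [f["name"] for f in (recent & texty)(files)] == ["budget"]
assert [f["name"] for f in (recent | texty)(files)] == ["budget", "memo", "logo"]
```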

Modes and vi (2)

Luke (7869) | more than 12 years ago | (#216493)

For commands to become habitual as quickly as possible some interface-guidelines are given.

No, for commands to become habitual you need to practice them. People have a difficult time learning the intricacies of vi because they don't use it for 100% of their text editing. Once I started thinking about every command I entered in vi before doing it (such as hitting 10j instead of the down arrow 10 times, or the various ex commands) they quickly became habitual. No interface is intuitive automatically (except, of course, the nipple :-) ). True, some are easier, but those that are the most powerful are usually those that require the most effort to learn.

This doesn't just apply to vi, of course, but anything sufficiently complex on a computer. Stick with one way and learn it.

Re:good ideas from raskin (1)

scrytch (9198) | more than 12 years ago | (#216495)

show me a successful application based on raskin's principles and i'll buy this fawning fanboy crap. good ideas are a dime a dozen. proven ideas aren't. on that note, given the proliferation of "GUI Bloopers" type books and sites out there like the interface hall of shame [iarchitect.com], it sure would be nice if people managed to just invest a little quality in existing GUI design principles. The start menu in windows, for example, violates microsoft's own design guidelines on menus (not to cascade more than one level).

The trouble with intuitiveness is.. (1)

Si (9816) | more than 12 years ago | (#216496)

it only becomes intuitive when you've done it a thousand times. (see that nipple quote in someone else's .sig).

So go ahead, find the 'one and only one, obvious way' to achieve everything you need to do to generate your content. I'll still be here trying to :wq in a web form, and performing searches with / in pine.

Oh and vi (or at least vim) has incremental searches too:

:set incsearch

Re:good ideas from raskin (2)

Col. Klink (retired) (11632) | more than 12 years ago | (#216498)

> all visionaries are subject to much abuse.

And therefore, all those that are subject to abuse must be visionaries? MicroSoft must therefore truly be the most innovative company out there.

True, they laughed at the Wright brothers and they laughed at Marconi. They also laughed at Bozo the Clown...

I take solace in knowing that if I'm modded down, it will only be because I, too, am a visionary.

Opendoc (3)

mcc (14761) | more than 12 years ago | (#216502)

> The really controversial idea, though, is to abandon applications altogether. There would be only the system, with its commands, and the users' content

This sounds like opendoc. Does Raskin actually mention that opendoc is what he's talking about? Is opendoc what he's talking about?

How does Raskin propose that this state he advocates in his book be reached? If he wants to follow the opendoc model of applications being split up into as many tiny interlocking pieces as possible, with a framework for pieces from disparate apps being allowed to interlock with one another as long as they operate on the same kinds of data, then how does he propose that the parts be coordinated together in such a way that they appear, as he wants them to, modeless and consistent?

Basically what i'm asking is this: the things he proposes (a single modeless system that does EVERYTHING and in which every bit of interface is consistent with every other) are things which are extremely difficult to achieve unless every bit of said system is written by the same company. Does he suggest any way that disparate groups could work on the creation of a system such as he proposes without the different voices involved ruining the purity of the whole thing -- like, the people writing apps starting to do their own thing until the different parts of the system become inconsistent again?

I also would be curious to see what Raskin would think of the mac os x "services" model, which attempts to revive the opendoc idea in a less extreme-marxism manner -- applications can rewrite parts of themselves as "services" that do certain actions on certain types of data. If the user is using a program which uses a type of data that a service the user has installed can work with, the user is allowed to let the service wrest control of the data from the application and operate on it. Is this good because it's a step in the right direction, or bad because, unlike opendoc, the focus remains on the application and not the document?


mcc (14761) | more than 12 years ago | (#216503)

What you say is very true. However, i don't think that the thing you are saying and the thing Raskin is saying are mutually exclusive.

In an ideal world, there should be only one way to do it, BUT the USER should be able to determine what this one way is.

In my humble opinion, the direction that GUI design needs to go and, inevitably -- i have no idea when, but it HAS to go this way eventually -- will go, is in the direction of infinite modularity: the direction of separating style from content, separating form from function. Up until this point, interface design has been a constant war between the macintosh "all applications should act roughly consistently with one another" mindset, where you take a single set of guidelines and stick with them, and the Xwindows/UNIX "interface guidelines are fascism" mindset. The UNIX mindset has an extremely good point -- but makes the mistake that it just trades off apple's single point of interface fascism for a billion tiny points of interface fascism, one for the author of each application. The author of each application is still the one in control, and is in control only of the application they created. The user is in control of nothing. And from the user's standpoint, being powerless over the behavior of a bunch of applications that all act the same (as on the mac) is better than being powerless over the behavior of a bunch of applications that all act differently (as on UNIX).

Ideally, the person who dictates interface behavior would be neither Apple nor the application designer, but the user. Ideally Apple and the application designer would both offer *suggestions* as to how the application would behave, and the user would determine which, if either, of these suggestions the application would follow.

Ideally, eventually we will move to where programmers don't say "i want a text field, two buttons, and a menu with these specific sizes and positions", they will say "i want a text field that you can do these two specific things to and which you have these options for, and the text field and the potential buttons and the potential menu have these specific relationships to each other", and the system will build the interface based on the rules the user prefers.
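That separation can be sketched: the program declares which elements and actions it needs and how they relate, and the user's own rules decide how each element is realised. A deliberately tiny Python model (all names and the rule format are invented for illustration):

```python
# The program declares WHAT it needs, not where it goes or what it looks like.
spec = {
    "elements": [
        {"id": "query", "kind": "text_field"},
        {"id": "go",    "kind": "button", "triggers": "search"},
        {"id": "wipe",  "kind": "button", "triggers": "clear"},
    ],
    "relations": [("go", "acts_on", "query"), ("wipe", "acts_on", "query")],
}

def render(spec, user_rules):
    """Realise each declared element according to the user's rules;
    the same declaration yields different concrete interfaces."""
    return [f"{user_rules.get(el['kind'], el['kind'])}:{el['id']}"
            for el in spec["elements"]]

keyboard_user = {"button": "keybinding"}  # this user wants key commands
pointer_user = {}                         # this user keeps on-screen buttons

assert render(spec, keyboard_user) == [
    "text_field:query", "keybinding:go", "keybinding:wipe"]
assert render(spec, pointer_user) == [
    "text_field:query", "button:go", "button:wipe"]
```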

Hmm. I'm kind of rambling now, and i have to go. But how i was going to end this is by saying that the direction things should take from here, and the direction things ARE taking from here (at least with the things that GNOME and apple are putting forth), is that common things like text fields and scrollbars should be done by a single systemwide service, but *abstractly* -- such that the user can augment the behavior of those text fields or whatever at will; and that we should move toward the model used by CSS, the glade files of the GNOME api, and the nib files of apple's Cocoa API -- where you define functions, and then define an interface, and then link the two together. I, the user, can open up the nib files inside of these mac os x applications i use, and because the relationship between interface and function is abstract rather than hardwired into code, i can rewire that relationship -- i can delete, rearrange, and redesign the function of interface elements of existing programs in InterfaceBuilder despite not having the source code to the program. This is the way things should be.

OK.. i'm done. thanks for listening :)

"I have a firm belief that computers should conform themselves to my behavior, and not the other way around." --Steven Levy, on the original Newton release of the Graffiti pen input system now used in the palmpilot

Re:Different Strokes for Different Folks (2)

Raven667 (14867) | more than 12 years ago | (#216504)

"Linux *is* user friendly. It's not idiot-friendly or fool-friendly!"
The majority of users are not "idiots" or "fools". Some are doctors, lawyers or artists who have chosen to concentrate in an area outside of computers. Saying they are idiots because they don't understand symbolic links is like an eye surgeon saying that a programmer is an idiot because he can't remove a cataract.

I've had to deal with this kind of situation before and your example is flawed. The people that this statement refers to are people who refuse to learn, either through direct teaching or through experience. Many times these are the same people who refuse to follow detailed instructions and are then confused and indignant when things don't work. Also the people who I am thinking of will use a software package for years and never attempt to learn more about the package or become more efficient using it.

My second point is that comparing the effort to understand symlinks with the ability to perform eye surgery is really lopsided. I can, hopefully, explain the concepts of symlinks in 5 minutes to anyone who isn't a vegetable and is willing to learn; it takes years of study and hard work to become even basically competent in the medical field.

Computers are not all Magic Fairies and Pixie Dust!!

-1, Redundant (or, why UI is not the problem) (4)

weston (16146) | more than 12 years ago | (#216507)

I've said this before, but I'm not sure it sticks yet. I think that Raskin has a good point about consistency across the system interface. However, I've come to believe that UI is not the bugaboo of human-computer interaction. The real problem is system configuration/administration.

My parents, for example, are competent using any number of applications (all with varying purposes and slightly different UIs). But ask them to change any system setting, and they will either give you a blank look or freak out. They don't have the faintest idea of how to start. They're even wary of navigating control panels until they find the right tab/checkbox.

Fair enough, right? The big realization came when I admitted that I'm afraid of system administration too, especially when it comes to unix systems. Even with all those neat-o little configuration tools that come with Linux now, it can be a nightmare to set up X or networking if things aren't just how you found them.

Compared to these sorts of trials, learning to type the right commands or navigate a hierarchy of menus is easy. Most humans are born with the ability to pick up language, so typing commands isn't too much of a stretch. Pointing and clicking isn't hard either. What we're not equipped to do is manage a lot of detail, or absorb a lot of underlying principles quickly. Until someone manages to address those concerns, UI may be great but human-computer interaction will not move far forward.


Re:Maybe he's being too simple. (2)

TWR (16835) | more than 12 years ago | (#216508)

We do it for cars, why is it wrong for computers?

Because the number of people killed by incompetent computer usage is vanishingly small.


humane != success (1)

rakjr (18074) | more than 12 years ago | (#216509)

A common point. But buried behind it is another fact: we are all different. What is humane and efficient for one person is a nightmare for another. The Mac GUI is a godsend for some and the anti-christ for others. When I am doing desktop work, windows (X or MS) is a comfortable environment in which to work. But when I am doing server administration, text is the way to go. So even for a single individual, what is good and intuitive for one task is a nightmare for another.

Humane Business Applications? (1)

mlosh (18885) | more than 12 years ago | (#216510)

I read this book when it first came out. I think it is great! Well-written and thought-provoking. It makes me want to try a Canon Cat, a relatively successful product that Jef shaped. I've thought about programming a modern user environment with these ideas, but I haven't much time to devote to it. Perhaps someone out there does!

But my biggest question after reading and agreeing with much of the book is: how the hell does it apply to business applications? I can see how Jef's ideas work well for content creation like writing, emailing, and drawing. I don't yet see a good way to apply LEAP and non-modal interfaces to typical repetitive business applications like order entry, general ledger, and so on... especially in a networked, enterprise setting. You can't possibly have dedicated keys for every application function, and "highlight command and press execute" gets you back to a variation on command-line interfaces or menus. Modality or navigation inefficiencies seem to creep back in!

Any thoughts on this among other readers of _Humane Interface_? Jef, are you lurking?

Accessibility=more than one way to do it (1)

Squeedle (20031) | more than 12 years ago | (#216511)

Having strictly one way to do each task in an application limits accessibility for the disabled. This limitation might be solved, however, by violating one of his other guidelines (having more than one mode): one could have a Keyboard-Only mode or a Mouse-Only mode, for example.

Or maybe better, allow users to have their own, potentially system-wide interface configuration, with an interface API that every program receives keyboard, mouse, or other input device events from.

Re:NeXT scroll bars (1)

mrfrostee (30198) | more than 12 years ago | (#216514)

I'm pretty sure NeXT had the scroll bars on the left, for this very reason.

As did Smalltalk, where the scroll bars could also "flop out" only when the window was active (so that they didn't take up any space when it was inactive).

This is still an option in Squeak. You can see some examples of it in the Squeak screenshot gallery at: http://minnow.cc.gatech.edu/squeak/683

Canon Cat (1)

gambletron 3000 (33775) | more than 12 years ago | (#216519)

The bit about 'application-less' interaction with the computer sounds like a further development of the Canon Cat [landsnail.com] , Raskin's post-Mac project from the mid-late 80's.

Re:Blah blah well DO something then (2)

macpeep (36699) | more than 12 years ago | (#216521)

Considering that Raskin could be called the father of the Apple Macintosh, I think it's a little unfair to say he hasn't done anything for making user interfaces better.

Re:No file hierarchies? (1)

Pentagram (40862) | more than 12 years ago | (#216522)

I haven't read the book, but I'd like to see less reliance on file hierarchies. They might be a useful categorization at the system level, but I think they should be invisible to the general user.

Why should I need to tell an application to install itself to c:\program files\company\... or that its config file is at ~/.thingy? All that information should be hidden from me and handled by the O/S. I should only need to know about the hierarchy structure if I'm actually writing code.

I'm not sure how you'd organise your documents, though. Still in a hierarchy, but a separate one (at least from the point of view of the user)? Perhaps you could retrieve them intelligently through the application you use them with, although it's difficult to see how that might work.

Re:Modes and vi (2)

drivers (45076) | more than 12 years ago | (#216526)

So, you NEVER make mistakes in vi? You never forget that you are not in insert mode and start typing? You never accidentally leave CAPS LOCK on while inserting some caps text, and then, back in command mode, start executing the shifted commands instead of the regular ones? If you never make a mistake, is it because you have to stop and think about it?
And then there is the fact that if you type 10 (as in your 10j example) and then change your mind and decide to delete just the current line (dd), you have to stop and think, and hit Esc to cancel the pending count first. Otherwise you will delete 10 lines instead of one.

I use vi all the time, and I don't usually make mistakes, but when I do, it's always a modal mistake. (Even worse is a co-worker who uses caps lock just to type a capital letter. Then he uses vi. A lethal combination. It's painful to watch.)

Commands are gestures. If you are concentrating on your text and not the interface (after the gestures have become habituated) you won't have to think about the interface at all. You will execute the command before you consciously decide what buttons to push.

You are right, vi can become habituated. Once I am outside of vi, I try to use all the same commands in other places. I use ksh set to vi editing mode so I can edit the command line. Sometimes when I am working on a particularly gnarly command I hit "ESC:wq" to "save" it, which of course doesn't work; I'm in ksh, not vi. No big deal, just another mode error, but it is a distraction.

Read the book. It's good stuff. Or at least read the summary on Jef Raskin's web site.

Humane Interface (3)

drivers (45076) | more than 12 years ago | (#216527)

I heartily recommend this book. Jef Raskin is a highly misunderstood HCI expert. I say that because about 6 months ago he got a lot of flames for criticizing Apple for continuing to make the mistakes he preaches against, mistakes inherent in the WIMP (windows, icons, mouse, pointer) interface. Raskin himself replied and tried to explain some things. He said that to understand him you had to read his book.
I bought the book soon after that, and as I read it, I was blown away. Sometimes when you read a book, it just gets into your head and it's all you can think about. That's how it was for me.
Unfortunately, although it describes many of the principles by which a humane interface should be designed, it does not present a design for one specific interface. Perhaps that's because there is no single right way to make an interface, though there are many wrong ways. Software producers continue to make the same mistakes about what they think is user-friendly (yes, including GNOME and KDE, by following the WIMP example), but Raskin shows that many of the usual assumptions, e.g. "icons are user friendly", are wrong (pretty much everything we currently understand about user interfaces).
After reading it, I felt that if I followed the principles of the book, I too could design a radically different yet vastly improved system for beginners and experts alike. I emailed Raskin with my thoughts. The response to the possibility that I could program such a thing was (paraphrased) "You will need a spec, which I am still working on."
I suppose I am still interested in making an interface with the principles outlined in the book, but I think it would require as much work as a GNOME or KDE project (including applications), perhaps even an entire new operating system, depending on how far you wanted to take it.

Jef Raskin's homepage is here [jefraskin.com] . Be sure to check out the summary of The Humane Interface [jefraskin.com] at least, if you aren't going to read the book.

Re:Modalities (1)

Monte (48723) | more than 12 years ago | (#216529)

Not necessarily. Automatics disguise this to some extent, and there is the less well known, but used since the '30s, CVT - the Continuously Variable Transmission.

I can't speak for CVT but every automatic I've driven has clearly had two modes: Go Forward and Go Backward.

(I'm probably setting myself up, somewhere in history there's probably a car that couldn't go backwards)

Re:No file hierarchies? (3)

dubl-u (51156) | more than 12 years ago | (#216531)

Categories and subcategories map well to the real world. If I have only a few files (documents, whatever) they can be in one place. But if I have many, then I'll want them organized somehow, and a hierarchy makes a lot of sense for that.

That's nearly true. Categories and subcategories map well to a particular view of the world. But to make effective use of the hierarchy, you are forced to use that view of the world.

Take an example. Say I create an Excel spreadsheet for a 2002 budget projection for a project for one of my clients. What folder and subfolder do I put it in? The top level of the hierarchy could be my role (personal documents vs business documents); it could be the entity the document belongs to (the client's name); it could be the topic of the document (a budget); the period it covers (2002); the type of document (a spreadsheet); the application used to create it (Excel); the date it was created (2001-05-17); or the name of the project. Or something else entirely.

So which is my top-level folder and which are subfolders? It depends on which I consider most "important" or "intuitive", which varies from person to person and from day to day. Heck, if you grew up with Windows you may believe the best place for an Excel document is in the C:\Program Files\Excel directory. I know secretaries who keep all their files right in with the program files because they never learned to make or change directories.

I haven't read Raskin's book yet, but when I dream of better ways to do this, I imagine history lists, search boxes, and hierarchies with multiple roots and automatic filing of documents based on things like name, date, creator, type, and keywords and categories chosen by the author. So when I'm looking for that budget projection, I can browse based on any of those criteria I mentioned above.
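Even today's stock Unix tools gesture at this: a flat store plus searches over attributes gives you several "roots" at once. A toy sketch (directory, file names, and attributes below are all hypothetical, chosen just to mirror the budget example):

```shell
# A flat document store; each attribute lives in the file name.
mkdir -p /tmp/docs_demo
touch /tmp/docs_demo/acme-budget-2002.xls
touch /tmp/docs_demo/acme-contract-2001.doc
touch /tmp/docs_demo/personal-taxes-2002.xls

# The same store browsed by client, by topic, or by year:
find /tmp/docs_demo -name '*acme*' | sort     # by client
find /tmp/docs_demo -name '*budget*' | sort   # by topic
find /tmp/docs_demo -name '*2002*' | sort     # by year
```

No single hierarchy is privileged: the same document turns up under every criterion it satisfies, which is roughly what a multi-rooted, attribute-indexed filing scheme would give you natively.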

Already reality (2)

joq (63625) | more than 12 years ago | (#216534)

I posted something about this in a thread yesterday that wasn't very informative, so here's more information on this product, which I saw in Scientific American.
Input devices will have to miniaturize as well and become more direct, intuitive and able to be used while your hands (and part of your attention) are engaged elsewhere.

The Cyberlink System represents this next step in the evolution of the human-computer input interface. The Cyberlink System is a BrainBody actuated control technology that combines eye-movement, facial muscle, and brain wave bio-potentials detected at the user's forehead to generate computer inputs that can be used for a variety of tasks and recreations.

[a paragraph skipped]

The forehead is a convenient, noninvasive measuring site rich in a variety of bio-potentials. Signals detected by three plastic sensors in a headband are sent to a Cyberlink interface box which contains a bio-amplifier and signal processor. The interface box connects to the PC computer's serial port. The forehead signals are amplified, digitized and translated by a patented decoding algorithm into multiple command signals, creating an efficient, intuitive and easily learned hands-free control interface.

Three different types or channels of control signals are derived from the forehead signals by the Cyberlink Interface. The lowest frequency channel is called the ElectroOculoGraphic or EOG signal. This is a frequency region of the forehead bio-potential that is responsive primarily to eye movements. The EOG signal is typically used to detect left and right eye motion. This signal can be mapped to left and right cursor motion or on/off switch control.

http://www.brainfingers.com/technical.htm [brainfingers.com]

Maybe the author of the book can work on a spinoff of this product.

Power versus Usability? (1)

Simon Tatham (66941) | more than 12 years ago | (#216536)

So one thing I'd like to know is, how can a non-AI-complete interface manage to be powerful and usable?

Currently, we have GUIs which are intuitively easy to use (unless you're my mum, who is virtually computer illiterate but much prefers CLIs) but provide relatively little power, and if you want to do a complex multi-stage operation you have to run through it step by step by hand. Then you get CLIs, which have massive scripting power and allow arbitrarily complex automated tasks, but which aren't the most obvious thing for the beginner. Hence the division into "beginner" and "expert" modes, which the review says Raskin thinks shouldn't exist.

The thing is, I can't imagine an interface that is 100% intuitive to use, without requiring the user to think about details of the interface, and simultaneously allows the sort of massive power you'd get from, say, putting together a five-command Unix pipeline that picks out all the lines that appear exactly twice in a text file. So if you aren't allowed separate beginner and expert modes, what can you possibly do?
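For the curious, one plausible rendering of that five-command pipeline (my own sketch, not necessarily what the poster had in mind):

```shell
# Sample input (hypothetical): "apple" and "banana" each appear exactly twice.
printf 'apple\nbanana\napple\ncherry\nbanana\ncherry\ncherry\n' > /tmp/lines.txt

# Sort so duplicates become adjacent, count each run, keep runs of
# exactly two, then strip the count column to recover the bare lines.
cat /tmp/lines.txt | sort | uniq -c | awk '$1 == 2' | sed 's/^ *2 //'
# prints: apple
#         banana
```

That is exactly the kind of compositional power the poster means: none of the five tools knows anything about "lines that appear twice", yet the pipeline expresses it in one breath. The open question is whether any interface this powerful can also be discoverable.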

The more I think about this, the more I think it has to be AI-complete, in fact; because the only way you can express a generic requirement like that to a non-AI computer is by programming. But that's precisely a process of concentrating on the details of the interface (language)! The only alternative to that is simply to describe to the computer what you want, and let it figure out how to get it - which requires AI. In other words, Raskin has to be advocating the HAL 9000 complete natural language interface. Intuitive, and powerful. Nothing else comes close.

Open the pod bay doors, HAL.

Re:No file hierarchies? (2)

Simon Tatham (66941) | more than 12 years ago | (#216537)

The obvious alternative to a fixed category system is arbitrary search criteria.

"Computer, give me the document I was working on last week about Genghis Khan."
"Computer? Didn't I once write something that compared Bill Gates to a hibernating wombat?"

... and so on. The computer can retrieve your document based on any criteria you like, not just the one (or, counting symlinks, the several) you happened to file it under.

Re:Just read this myself (1)

Eponymous, Showered (73818) | more than 12 years ago | (#216540)

Now I'm not going to try and start a vi vs. emacs war - I use and enjoy them both. But to agree with Raskin's "modal==bad" and then to adore vi is to offer a fairly stark contradiction. Let's face it - vi is nothing if not modal. Unless you are keenly aware of whether you're in insert vs. command mode, you either a) can't get anything done or b) can really dork up your file by accident.

Sounds familiar (2)

Eponymous, Showered (73818) | more than 12 years ago | (#216541)

With the exception of the universal undo, it sounds a lot like the interface for the Palm OS. No filesystem, the same widgets do the same thing, keep it extremely simple. In short, it's simply obvious what to do as soon as you turn the thing on the first time.

1 app with 1000 feat. vs. 1000 apps with 1 feat. (1)

Shook (75517) | more than 12 years ago | (#216542)

I'm sure there has to be a happy medium in this somewhere. You cite Word as a program that tries to do everything, to the point that you get lost in the features. I agree. But at the other extreme, I have my latest Linux-Mandrake CD set with thousands of applications. Most of these apps have only one use.

I have always been frustrated by free software, because I am always certain there is a program that does just what I want, but it usually takes me 2 days of searching on the net, only to discover that I already have it installed on my computer. I think the key would be finding some ideal number of features to include in each application.

Maybe a program like Word could be split up into two or three apps. Maybe a separate program that handles mailing labels (to use your example), form letters, and printing envelopes. Then a standard Word Processor. Then a web-page editor. All of these could have similar interfaces, but only show the features needed for these unique tasks. Combining all three into one monolithic program is definitely a bad idea.

Related reading material. (1)

humungusfungus (81155) | more than 12 years ago | (#216543)

A book that my girlfriend gave me for my birthday last year (yes, she kicks ass) might prove to be a good complementary read:

Neal Stephenson's In the Beginning...Was the Command Line [amazon.com]

It's a quick, sometimes funny essay on the merits of *nix and its paradigms...including the CLI. IMHO most of it, if not all, is bang-on.

Preaching to the choir? Maybe. Personally, I'll read anything by Stephenson at least once.

Re:Fitts' law (1)

acebone (94535) | more than 12 years ago | (#216547)

That's a good one!

How about this:

When I pull the scrollbar downwards, and keep pulling below the window - all of a sudden the scroll bar pops back to the top??

The logic seems to be

1. He's pulling it down - he wants it to go down...

2. He's pulling it further down - he wants it to go further down...

3. He's still pulling it even further down - AHA, he wants it to go up!

This goes for Windows, Mac, KDE and probably a wholelotta other interfaces.

It bugs me, and it makes me miss my Amiga :-(

(Dunno how the Amiga did it actually - but I miss it anyway)

I'm confused (3)

Fjord (99230) | more than 12 years ago | (#216548)

Is it "human interface" or "humane interface"? They mean very different things.

Sexy lady (1)

andy@petdance.com (114827) | more than 12 years ago | (#216550)

On the other hand most of us run into trouble when trying to study calculus at the same time we're hitting on a sexy lady

Never mind hitting on, how about just physical proximity? That's prob'ly why I did so poorly in Calculus class my first semester of college. Sandy Gann... <sigh> She had even flown down to Phoenix to see, and had a T-shirt from, the Who's (first) Farewell Tour (of many).

Re:Different Strokes for Different Folks (1)

GCP (122438) | more than 12 years ago | (#216553)

Well, you pretty much exemplify his point. Learning about symlinks really doesn't take much time -- assuming you've already learned countless other little things first. None of these things take much time by themselves, but there are thousands of equally useful things in the *nix world that your symlink argument could apply to. The time adds up. Most Slashdotters have learned mountains of details by spending massive amounts of time at their computers engaging in activities that were partly productive and partly just figuring out how to do things with their computer.

A doctor or lawyer should not have to stop their doctorin' and lawyerin' to learn about details like symlinks in order to qualify as non-idiots to runnynosed geeks who can't tell a platelet from a patella. They have better things to do with their time. Many of them "refuse to learn" for the same reason that they refuse to learn how to repair their own automobile engines: it's a low value-added activity best subcontracted to others.

Somebody needs to know the details, but a high-value-added activity for *them* would be to encapsulate those details behind a UI so clever that it amplifies the power of doctors and lawyers instead of dissipating it on trivia.


Wateshay (122749) | more than 12 years ago | (#216555)

In a way, though, I think you just agreed with his statement by saying that you pick one method of doing things and stick to it. It's been a while since I've read the book, but if I remember right his point is more that you should be able to do the same thing the same way, every time you do it. Not that you should never have a choice of which way to do it.

This book (1)

Edward Kmett (123105) | more than 12 years ago | (#216556)

Having read this book I can say that I agree with many of Raskin's points about a number of subjects and 'agree to disagree' regarding quite a few others.

I believe very strongly in his points about increasing the orthogonality of the operations you let users perform; by all means, let them perform them anywhere.

I agree whole-heartedly that one should try to build a mode-less interface wherever possible and that modes are the source of the majority of user error, because they force conscious thought about things that should be reflex.

I agree that 'user input is sacred' and should be preserved and that you shouldn't steal the focus away from the user midsentence.

I agree with his arguments that most popup messages are useless and should not steal the focus away from the application, especially if there is only one action you can perform (i.e., click OK). On the other hand, I am somewhat dubious about the merit of his desire for see-through overlays. I feel that they would only result in so much clutter, and the visibility of the text below would make them difficult to read. This could perhaps be managed through some form of opacity control, but that's a lot to ask the user to use.

I also think that he hasn't done sufficient research on usability when it comes to some of his more radical proposals later on in the book.

E.g., his desire to use 'multiple selection' for everything, and the fixed scope of document tags. I think that he doesn't take into account the complex state he is making the user retain in their head in the first case, and that he doesn't generalize the latter sufficiently to be useful.

However, a system based entirely on xml tags, where files are just tags in the larger system would fit his design goals nicely, and bypass this problem entirely.

All in all, I believe that I am better for having read this book, but more so for having taken it with a grain of salt.

The real problem that I have with this book is its lack of direct applicability. The user interface he describes is so alien to current users that it's not something you could graft over what they use today and expect them to pick up in a reasonable timeframe. So you won't be building fully 'humane' Windows applications any time soon.

Re:Just read this myself (1)

timbu2 (128121) | more than 12 years ago | (#216561)

I assume you mean that Raskin criticizes modality in interface design, and that vi can't be used without considering whether you are in insert or command mode.

You are right, of course... but on other fronts vi is very good at letting me edit large pieces of text with a minimum of keystrokes, without using the mouse to confirm "OK" in some dialog box separate from the text. When I use Emacs I feel like I have to remember too many keystroke combinations to do the same job.

Also, just because a programmer can become used to modality doesn't mean that everyone should have to. In theory, each key on the keyboard could change the global modality of a second key pressed while the first is held down. (That would be a nightmare.) But most of us manage to figure out that Shift will uppercase the keys without too many problems.

The kind of modality Raskin points out as a problem is like the CAPS LOCK key, which gives almost no indication that it has been pressed but quite obviously changes the mode. (That's why I use the Happy Hacking keyboard: no CAPS LOCK.)

Re:Just read this myself (1)

timbu2 (128121) | more than 12 years ago | (#216562)

Yes, but I am full of contradictions. Contradictions work for me.

"I am large, I contain multitudes" -Walt Whitman

Just read this myself (3)

timbu2 (128121) | more than 12 years ago | (#216563)

I liked the book a lot because it focused on making the human-machine interface more efficient. I pretty much hate GUIs that force you to jump through umpteen dialogs to configure something that should take 5 or 6 keystrokes, and Raskin seems to understand this.

I even had my wife, a non-programmer, read the section on modality, which has greatly enhanced her ability to turn on the TV and actually play the TiVo or DVD successfully without calling me.

After reading the book I am even more rabid in my adoration of ViM [vim.org].

One problem... in the book he talks a lot about products, like that Canon word processor, that don't seem to have been commercial successes. They may have been "humane" or even efficient, but no one bought them. What good is that?

On modes (2)

The Pim (140414) | more than 12 years ago | (#216564)

All modes generate user errors

I don't know how extreme he is on this point, but my sense is he takes it too far. Modes in some form are integral to our interactions with machines, people, and the world.

If we forbid modes, a TV remote control must have an on button and an off button. Nobody wants this, because it's much harder to remember which button is which than it is to observe that the TV is off and deduce that the button will turn it on.

The system should always react in the same way to a command.

Very few things in the real world always react the same way to the same command. You don't behave the same way in the library as you would at a friend's home. Your car responds much better to the gas pedal when it's in drive than in neutral. You read people's moods before deciding how to communicate with them.

People can observe and remember a certain amount of context, and tune their behaviors accordingly. It's frequently convenient to enter a mode, because you can forget about everything not appropriate to that mode. This is just as applicable to computers as to the real world--when we perform different activities, we enter different mindsets and adopt different expectations. Computer interfaces should take advantage of this human capability (with appropriate restraint, of course).

Re:Maybe he's being too simple. (1)

Lozzer (141543) | more than 12 years ago | (#216565)

Read Tog on Interfaces. It gives an interesting insight into why you may find command lines perfectly usable whereas the majority of people would prefer something more humane in the Raskin sense.

Re:This is the same Jef Raskin... (3)

MasterOfMuppets (144808) | more than 12 years ago | (#216567)

..of course "Unix" backwards is "Xinu", which is awfully close to "Xenu" [xenu.net] , super being. Ask your local Scientologist if you're not sure.
The Master Of Muppets,

good ideas from raskin (3)

thewrongtrousers (160539) | more than 12 years ago | (#216569)

Slashdot has discussed Raskin before with disappointing results.
Instead of trying to understand concepts that may, at first, sound strange, people ridicule and flame them out of hand.
Flamers grab a little bit of text, twist it without really trying to comprehend it, and proclaim Raskin an idiot. A shame, because Raskin has some *very* good ideas that he generally supports quite well.
The only positive in what's going to be a large number of flames is that Raskin can take some solace in the fact that all visionaries are subject to much abuse. Most people are unwilling to give up or reconsider what they think they "know" to be true, often to their own detriment.

Hermaphroditic Freaks and the Interface of functio (2)

table and chair (168765) | more than 12 years ago | (#216571)

"On the other hand most of us run into trouble when trying to study calculus at the same time we're hitting on a sexy lady (make that a handsome man for those 5% of ladies here at Slashdot, or some sort of hermaphrodite freak for those 11% with doubts about their sex)."

Interestingly, this strange, awkward, stale moment pulled me out of the article long enough to cause me to forget what point the author was trying to make.

Let's rewrite a later sentence:

Of course, this was no big deal, but I had to take my mind off reading the sentence to figure out what had happened. Just for a second, but long enough to derail my thoughts, and that derailing should never happen.
Maybe these rules for interface design apply equally well to good writing for a broad audience, or to attempts at "humor" in general. ;)

Fitts' law (4)

ortholattice (175065) | more than 12 years ago | (#216574)

...Fitts' law (the time it takes for you to hit a GUI button is a function of the distance from the cursor and the size of the button)...

This is the first I'd heard that this law had a name, but one thing I've wondered about is why most GUI editors put scroll bars on the right and not the left, where most work is done. E.g., for cut-and-paste you have to go all the way to the left to select and all the way to the right to move.

On UNDO and Mac OS X (1)

selkirk (175431) | more than 12 years ago | (#216575)

The new Finder in Mac OS X is much maligned by the Mac Community, but one thing that it does offer is an undo command. Right now, the undo menu in my finder says "Undo move of 8 items." I was organizing files last night.

There is some controversy about the dock on the Mac, but if you evaluate it using the cognitive-science-based methods in this book (GOMS, Fitts' law, Hick's law), the dock fares very well compared to Windows and, in my limited experience, KDE.

I feel that the new Mac OS X was very much written in the same spirit as this book.

Re:Just read this myself (1)

Golias (176380) | more than 12 years ago | (#216576)

Now that black&white displays are disappearing (even from servers), maybe it is time for most unices to patch vi so it changes the text color according to mode... you know, a nice dark blue for insert mode, perhaps a light gray for command mode.

Maybe I should quit jabbering about it and try to make such a patch myself... hmmm....


RiBread (181983) | more than 12 years ago | (#216578)

You should never have to think about what way to do something, there should always be one, and only one, obvious way. Offering multiple ways of doing the same thing makes the user think about the interface instead of the actual work that should be done. At the very least this slows down the process of making the interface habitual.

I wholeheartedly disagree. I like to have multiple ways to do things. I try them all and figure out the one that suits me best, then make it a habit.

Someone else's idea of "the most natural way to do x" often isn't mine. I guess that's why I always set custom keys in games, use Emacs, and think Perl is fun to hack around with.

Re:Maybe he's being too simple. (1)

Ereth (194013) | more than 12 years ago | (#216582)

Yeah, these new-fangled automobiles are too hard. They need to be easier. There's levers and pedals and wheels and tires and air pressure and oil pressure and buttons and lights and they aren't all the same and it's just too hard to figure out! People will never get the hang of it on their own. We'll have to force the industry to standardize on one interface so that people have a chance to understand it.

Or, we could just offer training classes and require tests that demonstrate competency before you are allowed to use one alone.

We do it for cars, why is it wrong for computers?

I see some problems... (2)

Altrag (195300) | more than 12 years ago | (#216583)

The general idea is good (especially global undo/redo! Imagine being able to undo the last time you stupidly walked right in front of a rocket!).. I see some problems with it though..
Firstly, not all applications can use the same interface. Period. Can you imagine trying to play Quake using an MS Word (or emacs.. or vi.. or pretty much anything not remotely related to 3D navigation) interface? Or, for that matter, trying to type out that document the boss wants in 20 minutes if you have to use a mouse/keyboard combination to pick letters from your latest Quake/text-editor mix? It just can't be done.. it's like trying to build a car using the "interface" of a pair of rollerskates.
Secondly.. modes.. I agree that modes can be a pain in the @$$ (especially when Word decides to change modes for you when you don't want it to).. but really, how are you going to get around this? If you want to allow bold/italic/underscored/different-font text, you need some way of specifying what you want. You obviously can't have an enormous keyboard with a set of keys for bold, another set for italic, etc. (not to mention that that would likely be more confusing and distracting than accidentally bolding some text).. that leaves basically two options: don't allow bold (unlikely), or have the text editor do it all for you.. and we've all seen that THAT doesn't work very well these days.. (maybe when they get direct brain control working and we can just think it the way we want :)
Thirdly, legal.. MS has already attempted to meld things together, and found themselves slapped with a nice big antitrust lawsuit (not that I don't agree with that, but the point stands).. so we know this can't be done within one company (at least, not one with any reasonably sized portion of the market share, and smaller companies don't have the funds for the R&D needed, I'd guess).. open source/free software might be able to have a go at it here, but I doubt anyone would bother on any sort of large scale :P..
Oh, and about that bit about removing the file structures and stuff from the user's view.. uhmm.. no. A user might not care whether the extension is ".rtf" or ".doc", but they DO care where their files are put.. having a billion files in the same place is NOT cool.. that's why we invented directories and libraries and whatnot in the first place! Organization!

Re:Fitts' law (3)

sv0f (197289) | more than 12 years ago | (#216586)

More precisely (although still not perfectly): the time it takes to move a distance D to a target of size S is proportional to the logarithm of D/S. Academic HCI types make a big deal about Fitts' law and other empirical regularities of human performance.

Another is Hick's law, which (roughly) states that the time to choose among a set of N alternatives (e.g., in a menu) is proportional to the information-theoretic entropy of the decision. If the N choices are equally probable, the entropy is the logarithm of N.
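In symbols, the usual Shannon formulations of the two laws look like this (a and b are empirically fitted constants; the "+1" is the standard refinement of the plain logarithm described above - this is my summary, not a quote from the book):

```latex
% Fitts' law: time to move a distance D to a target of size S
T_{\mathrm{move}} = a + b \log_2\!\left(\frac{D}{S} + 1\right)

% Hick's law: time to choose among N equally probable alternatives
T_{\mathrm{choose}} = a + b \log_2(N + 1)
```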

Academic HCI, and the application of Fitts' and Hick's laws to this domain, begins with Card, Moran, and Newell's 1983 book "The Psychology of Human-Computer Interaction". I recommend chapter 2 for those particularly interested in the psychological basis of their recommendations. This is the book that introduced the GOMS modeling technique that Raskin covers as well.

Personally, I've never found much insight in this line of thinking about HCI. Knowing this stuff does not make you a good designer of computer interfaces. Artistic flair and empathy for users play a crucial role that is not addressed in this tradition, and perhaps not addressable at all within mechanistic approaches to cognition. All of this is IMHO, of course.

Card and Moran were at Xerox PARC in the late 1970s and early 1980s, I believe, which is where Raskin was before Apple hired him to develop the machine that would be taken over by Jobs and become the Macintosh.

Different Cathedral, Different Bazaar (1)

T1girl (213375) | more than 12 years ago | (#216590)

Open Source has turned the idea of the Cathedral and the Bazaar on its head: the people who just want to use a computer to do their jobs, communicate, or shop will buy the most commonly distributed software on the open market, while the monks who want to write code that most people don't know how to use will continue to do so, and will label everyone else idiots.

It only takes one idiot to raze a village.

problem with the "nipple" quote (1)

brlewis (214632) | more than 12 years ago | (#216592)

Ask anybody who has breastfed a newborn -- not even the nipple is intuitive. Sucking is instinctive, but getting a good latch on the nipple is learned. A more correct saying would be, "Sucking is the only intuitive interface; everything else is learned."

Hmmm...that doesn't sound right either.

Raskin's sort of a curmudgeon (2)

Philbert Desenex (219355) | more than 12 years ago | (#216595)

Raskin has some curmudgeon in him. Just read this page on what Raskin thinks we should do with the word "intuitive [asktog.com] ".

Unlike Tog, who seems to think that anything the Mac interface does is The Right Way To Do It, Raskin appears to draw some influence from actual experimentation.

On the other hand, going hard-over against modal interfaces seems a little odd. As far as I know, only one study has ever been done on this sort of thing: "A Comparative Study of Moded and Modeless Text Editing by Experienced Editor Users", by Poller, M.F., Garter, S.K., appearing in Proceedings of CHI '83, pp 166-170.

One interesting conclusion from that study:

Moded errors do not seem to be a problem for experienced vi users. The vi group made few moded errors, and those few were rapidly corrected. Furthermore, modeless editing may not totally avoid moded-type errors, since the emacs group made errors that were similar in nature to the vi moded errors.

Re:Modes and vi (1)

bziman (223162) | more than 12 years ago | (#216596)

So, you NEVER make mistakes in vi? You never forget that you are not in insert mode and start typing? You never accidentally use CAPS LOCK when inserting some caps text and then, when you are back in command mode, start executing the shifted commands instead of the regular commands? If you never make a mistake, is it because you have to stop and think about it?

No, but I do frequently find myself typing in random "h", "j", "k", and "l" letters all over my MS Office documents -- no matter how many times I whack the ESC key.

And I frequently find myself searching through menus for stuff that I ought to be able to do without leaving home position on the keyboard.

I also had to remap the F1 key on my Thinkpad to act like ESC in vi because they're placed one above the other, rather than next to each other, and I got sick of help popping up when switching modes due to incoordination. But that's a keyboard design issue, not a software design issue.


Modalities (2)

devnullkac (223246) | more than 12 years ago | (#216597)

I haven't read the book, of course, but interface designers almost always have to take advantage of modality. Even in the "simple" case of driving an automobile, the accelerator has a "modal" effect on the behavior of the vehicle, depending on which gear is selected. In computer interface design, the most fundamentally modal input device is the keyboard. In a windowing system, it has dramatically different effects depending on which window is selected.

The important item to keep in mind is the number of modalities and the complexity of their interactions. Keeping modalities obvious and intuitive will keep most users out of trouble. (e.g. a tiny little "Ovwrt" isn't always an obvious indicator that you've left Insert mode). Then all you have to do is figure out what's "intuitive" for your users :-)

Is *everything* about Open-source ? (1)

tmark (230091) | more than 12 years ago | (#216598)

The only Open Source or Free Software the book mentions is Emacs, in a discussion about string searches.

Really, since the book isn't *about* open-source/free software, why is this even relevant? Don't people get tired of beating the same drum? What does open/free software even have to do with interface design?

And while we are on the topic, why not confront the issue that maybe the reason Open/Free software was not mentioned is because there is a paucity of good interface examples in that realm ? I mean, Emacs is in toto (arguably) a very poor example of good interface design, often taking a plethora of unintuitive keystrokes to accomplish things. GNOME/KDE are hardly intuitive, and all the Open/Free desktop environments still lag far behind, for instance, the UIs in classic MacOS, OS X, and OS/2, and usually the best features of the Open/Free desktops are stolen almost directly from commercial desktops.

It's good to see a modern, fresh perspective... (2)

hillct (230132) | more than 12 years ago | (#216599)

It's good to see a fresh perspective on this issue. I'm tired of reading stodgy texts from 30 years ago about how it would be a great advance to use a 'mouse'.

Over the past 20 years though, interface design has been pretty much dictated by OS vendors such as Apple and Microsoft. This trend is going full circle with the proliferation of applications to which various 'skins' can be applied, producing an entirely new look. This trend has appeared, I believe, as a direct result of the advent of dynamic layout methodologies similar to HTML. Let's all remember it wasn't always so.

Apple was the first to implement draconian interface specifications for 3rd party applications (since 1984). Microsoft has generally left it more to the compiler vendors, although all design seemed to sprout from Microsoft anyway...

It's thrilling to see new thought in this very important area of the field.



Re:Fitts' law (1)

Majik Sznak (230190) | more than 12 years ago | (#216600)

Hmm. Just tried that (Win2000, MSIE). Doesn't happen. I seem to remember what you're talking about, though. Confused.

Another good book about user interface... (3)

UsonianAutomatic (236235) | more than 12 years ago | (#216601)

...is "The Inmates are Running the Asylum" by Alan Cooper, ISBN 0672316498. You can probably find it cheap, too... got my copy at Borders for about $4.00, hardcover even.

Maybe he's being too simple. (2)

ojplg (249361) | more than 12 years ago | (#216603)

I cannot help thinking that the author is using the same logic that brought us the language Esperanto. (One way to do it, complete consistency, very few rules, etc.) I think the problem is that humans themselves are not really that way.

I suppose it is a criticism of the English language, just as it is of, say, Perl, that in different contexts the same word means different things. Many might argue that this is not a defect and that people intuitively use English despite all its modes because they are so well practiced in it.

This is not to say that computers should not be designed to be easier, merely that there is perhaps some utility in having different programs function differently. And even having one program respond differently to the same input at different times. For instance vi and emacs (and other programs for editing code) respond differently to a carriage return depending on the situation, by tabbing to a different point in the following line. Though I could disable this feature, I do not since it is so useful.

Oh, yeah, modality sucks. (2)

MagikSlinger (259969) | more than 12 years ago | (#216604)

You know, I hate using paper and pencil. It's so modal. If I want to erase something, I have to either turn my pencil upside down, or drop the pencil and grab an eraser. Oh, and painting is even worse! I constantly have to change modes by changing instruments. Why can't my palette knife do everything?

The lesson here is don't give users everything they think they want. In my experience, users don't even really know what they want. For applications, etc., focus on use cases: how people want to use the system. Design the interface to make those cases easy to do. Basically, goal oriented interfaces (which is what that VB guy who wrote Inmates Running the Asylum is a big fan of). Unfortunately, Microsoft took his ideas and created "wizards" which are the most useless goal-oriented interfaces I've ever had to use.

Even for hardware, like VCR's, focusing on how users want to use the system would make more efficient interfaces. Users really just want to say, "Record 'Manimal' at 8 o'clock on NBC" and have their VCR know what the heck they are talking about. They don't want to go into their TV Guide and type in numbers to a VCR+. Another option is the way I think TiVo does it, where you have an on-line list of programs, you select the program you want, and you indicate you want to record it. It's better than working through an on-screen interface to try to tell the VCR what time to start and stop recording.

Another example of a goal-oriented interface is the Wacom Intuos digitizing tablets. You can get three extra pens, each with its own unique digital id, and an airbrush pen (which looks and feels like an airbrush). Each pen can then be assigned a unique function in Photoshop (or another app). This way, the mode change becomes as obvious as picking up a different pen or picking up the airbrush tool. The tablet knows you've chosen the airbrush tool, so it automatically changes to airbrushing. You take the first pen, and it knows you want to draw. You change your mind, turn the pen around to the "eraser" on the end, and voila, you're erasing. This is an intuitive mode change that I think is far more useful than trying to make a modeless interface.

The point here is that interfaces must be engineered towards what people want to do with them, and must behave in a consistent way. For example, if you have modes, then make sure that there is a standard way in each mode to get out of it (like the ESCAPE key). Users can learn arbitrary interfaces, as long as they are consistent and geared to helping them do what they want to do.

Re:Modes and vi (1)

Sven Tuerpe (265795) | more than 12 years ago | (#216605)

This doesn't just apply to vi, of course, but to anything sufficiently complex on a computer. Stick with one way and learn it.

The bad thing about vi is: once you learned it, you have to stick with it, because most other text editors will swallow your escapes and let the commands go through into the text.


Re:Humane Interface (2)

Sven Tuerpe (265795) | more than 12 years ago | (#216606)

Perhaps it's because there is no one single right way to make an interface, but there are many wrong ways. Software producers continue to make the same mistakes about what they think is user-friendly (yes, including GNOME and KDE by following the WIMP example), but Raskin shows that many of the usual assumptions are wrong (pretty much everything we currently understand about user interfaces, e.g. "icons are user friendly").

I don't think right and wrong are adequate categories here. In my opinion, the main problems in interface design are that there are no strict, absolute rules, and that there are so many rules and trade-offs. As a result, most interfaces are a compromise between all those rules; they have to be. The interface designer's job is a multi-dimensional optimization problem.

Take colors, for example. Designing a color scheme is a really hard task. One has to watch for side effects, like brightness and contrast. One has to consider the possibility of color-blind users. Some combinations of colors strongly suggest a metaphor to the user, and should not be used if this is not intended; this applies namely to red-yellow-green, which is likely to be associated with traffic lights -- in the western world. So cultural differences have to be taken into consideration, too.

Software developers tend to simplify and generalize the rules. That's what they are used to from programming. And they are seeking recipes, patterns. But there are no recipes and not many patterns for designing the user interfaces of sophisticated applications. Interface design needs skill, experience, the ability to watch your own creation with another's eyes, and a will of iron to take every single problem of one's trial users seriously. And, of course, managers who do not sacrifice usability to deadlines and project plans.

Re:On modes (2)

Sven Tuerpe (265795) | more than 12 years ago | (#216607)

If we forbid modes, a TV remote control must have an on button and an off button.

On page 180 of Raskin's book you will find a case study showing this is not necessarily true. Raskin's example is about saving workplace state to a floppy disk, and he had the same considerations about modes. After describing a one-button solution, he concludes that the modality in this situation depends on the user's model.

Applied to the remote control: If one thinks of an on function and an off function behind the same button, there are modes. But if we view it as, for example, a general "change activity state" function (in Raskin's book it's a "do the right thing with the disk" function), there are no modes.

Re:It's called "flame the flamer flamers" (2)

micje (302653) | more than 12 years ago | (#216609)

Yep, and in the box it says: "Title: The Human Interface".

No file hierarchies? (2)

Some call me...Tim (307785) | more than 12 years ago | (#216611)

I might buy the book just to see what he means by "no file hierarchies". I've read about this book before, and that was what stuck out the last time as well.

How would one organize files? Ok, say you don't have "files" but just documents. Is the device only good for editing one at a time? Or is he suggesting a replacement by a more random-access structure, so that you need to give it keywords to find the file you're looking for?

Categories and subcategories map well to the real world. If I have only a few files (documents, whatever) they can be in one place. But if I have many, then I'll want them to be organized somehow, and hierarchically makes a lot of sense for that.

My fear is that he's advocating a "sufficiently advanced technology" interface that somehow magically finds the file that you want based on your limited, possibly ambiguous query. If anyone has read the book and knows more specifically what he's advocating, please reply to this.



stevewahl (311107) | more than 12 years ago | (#216612)

In an ideal world, there should be only one way to do it, BUT the USER should be able to determine what this one way is.

I don't think I'd like to borrow your car under these circumstances. :-) (Maybe you like it better that way, but what about emergencies?)

Not to be elitist, but I think maybe there will always be differences between what's best for the masses and the desires of "the geeks". The automobile I use for general transportation differs in many ways from both the rigs of semi drivers and the cars on a NASCAR race track. Why aren't they trying to pound everything into a one size fits all scenario?

Re:No file hierarchies? (1)

stevewahl (311107) | more than 12 years ago | (#216613)

Categories and subcategories map well to a particular view of the world. But to make effective use of the hierarchy, you are forced to use that view of the world.

Reminds me of a M*A*S*H episode. "Radar, where did you file the maps to the minefields?" "They're under 'B' for 'Boom'!"

Re:1 app with 1000 feat. vs. 1000 apps with 1 feat (1)

tb3 (313150) | more than 12 years ago | (#216615)

Apple's OpenDoc was a step in this direction. You started with a document, and then invoked the editors to create the different pieces of your document. I think it was heading away from the application-centric approach, but like a lot of Apple projects of the mid-nineties, it got toasted before it really got off the ground.

More features: good or bad? (2)

melquiades (314628) | more than 12 years ago | (#216616)

Some random thoughts in the spirit of this discussion: We often judge user interfaces by their feature sets. More features mean more power, and are better, right? I think not. I actually want my software to have a minimal feature set which is maximally robust.

The idea behind building feature-rich software runs something like this: figure out what a program will generally be used for, and come up with a set of tasks -- use cases -- which are representative of what the majority of users will want to do with the software. Design the software around the use cases, using the ease and efficiency of each of the use cases as a yardstick for the program's success. If users demand more features, the market for the software expands, or the company simply has more resources to build up the software, then add more use cases and repeat.

This works reasonably well, but it has a serious flaw: as the amount of functionality and thus the number of use cases go to infinity, the number of features goes to infinity. Think of M$ Word -- at a certain point, the wizard that will make multi-column address labels is useless to me, because there are so many damned wizards I don't know where to find it.

What software really should try to do is maximize robustness while minimizing the feature set that achieves that functionality. If you want your software to achieve a wider base of functionality, you have to make your interface's features more generalized, and more expressive. As the amount of functionality goes to infinity, the feature set stays bounded, and just approaches some sort of Turing-complete programming language.

So when designing software, don't ask "What features should we add?" Ask "What functionality can we add, and what features can we generalize in the hope of removing others?"

I realize that this is all very idealistic, but it seems like a guiding principle that could keep software bloat in check.

Sounds like a good read.. (1)

myschae (317401) | more than 12 years ago | (#216618)

.. I remember many many years ago when I was creating a spreadsheet in Lotus 123. I spent about 1/2 a day looking for the command to search for data.

I looked under: Search, find, look, seek..... finally found it under Query. Of course.. why didn't I think of that sooner?

Obscure and hard to use interfaces are silly.

Re:Modalities (1)

mccalli (323026) | more than 12 years ago | (#216619)

"Even in the "simple" case of driving an automobile, the accelerator has a "modal" effect on the behavior of the vehicle, depending on which gear is selected."

Not necessarily. Automatics disguise this to some extent, and there is the less well known CVT (Continuously Variable Transmission), in use since the '30s.

I rather think this comment cuts to the point the book's author is trying to make. He is pointing out that it is usually possible to avoid modes, and for standard user-level code, and even cars for the purposes of this discussion, I have to agree.

"The important item to keep in mind is the number of modalities and the complexity of their interactions."

Now here we can agree.


What about the programmer's humane work condition? (1)

Tricolor Paulista (323547) | more than 12 years ago | (#216620)

Who is gonna code that beautiful "humane" interface in a single monolithic block? How are we supposed to add our own programs to it? What if the API changes?

No ready recipes, really!! It just makes me remember why I don't like Windoze's integrated GUI+OS: it's a crash waiting to happen!

proof is in the pudding (3)

janpod66 (323734) | more than 12 years ago | (#216621)

Rather than revolutionary, I think this book describes pretty traditional ideas in UI design and proposes somewhat superficial applications of results from psychology and cognitive science. For example, while attention and automation matter, the implication that interfaces shouldn't be modal is unjustified: lots of things in the real world are modal, and people deal just fine with them. In fact, while modality may have some cost, it also has benefits, and whether it is part of a good UI depends on the tradeoffs. Similarly, Fitts's law and other experimental results are often used in UI design far beyond their actual significance.

The really controversial idea, though, is to abandon applications altogether. There would be only the system, with its commands, and the users' content. Even file-hierarchies should vanish along with the applications, according to Raskin.

I don't see anything controversial about that. There were several systems that mostly behaved that way before the Macintosh. The idea was to eventually move towards a system with persistent object store and direct manipulation of objects, eliminating the need for applications and allowing easy reuse among applications. Generally, the way that was implemented at the time (early 1980's) was via "workspaces" in which all data and objects lived, together with implementations in safe languages that allowed code to co-exist in a single address space.

What killed this approach was, in fact, the Macintosh, later copied by Microsoft. Using Pascal and later C++ as its language made it very difficult for "applications" to safely coexist or to share data in memory. The Macintosh and Windows merely put a pretty face on a DOS-like system of files and applications.

I'd still recommend reading Raskin's book. It does have historical significance, and it gives you a good idea of what mainstream thinking in UI design is. Raskin himself, of course, has contributed significantly to the state of the art and the body of knowledge he describes. There are some ideas in there that are probably not so widely known and that may be helpful.

But don't turn off your brain while reading it, and don't take it as gospel truth. UI design is not a science, and many of the connections to experimental results are tenuous at best. And a lot more goes into making a UI successful than how quickly a user can hit a button. If anybody really knew how to design a UI that is significantly better in practice than what we have, it would be obvious from their development of highly distinctive, novel systems and applications.

Re:Blah blah well DO something then (1)

Tech187 (416303) | more than 12 years ago | (#216623)

Okay, we can just say he hasn't done anything for about fifteen years, then.

Re:Maybe he's being too simple. (1)

Ryan_Terry (444764) | more than 12 years ago | (#216624)

I agree. There seems to be a theme developing here that we are too stupid to use computers. The problem I see is that inherently computers are not perfect. Eventually every program runs into a bug, and what happens when these "dumbed down" users run into one? Is the OS going to take them by the hand and sit down next to them on the couch while it fixes the bug?
Ease of use is one thing, but productivity and usability over time are affected by more than just pointing and clicking. I find that customization and variety offer more functionality to me. I make modifications every day to how I do certain things in a given environment. I do _not_ believe that having only one way to do something helps form a pattern and thus becomes more productive.
I am babbling now so I'll shut up......

NeXT scroll bars (1)

JimRankin (450416) | more than 12 years ago | (#216626)

This is the first I heard that this law had a name, but one thing I've wondered about is why most GUI editors have scroll bars on the right and not the left, where most work is done.

I'm pretty sure NeXT had the scroll bars on the left, for this very reason.


Word processors... (1)

Chasing Amy (450778) | more than 12 years ago | (#216627)

Unfortunately, the best products don't always win out commercially. Especially in the software world, it's a matter of satisfying these factors:

1) Compatibility--if it won't play well with other users on other systems, don't expect people to jump to embrace it.

2) Familiarity--if it doesn't work and look like the apps that came before it, with similar layout and controls, people will just stick with what they've got; which is what many will do when something is new and different and requires re-learning.

3) Big Name Company--like it or not, if users know the name of the company that puts out the software, they'll be more likely to use it.

Of course, there are other factors, but I think these are the main ones. You can look at most software in the light of these points and have a few good insights on why it failed.

For example, since you mentioned word processors, look at MS Word. It started out as a knock-off of WordPerfect, the dominant app at the time--so it had compatibility, familiarity, and came from the company that made the OS it was likely to be run on (yeah, OS/2, but...). Then Microsoft got savvy and broke compatibility with their new formats, and broke familiarity a little by changing interface elements. Bang--users are locked in because they want compatibility, and they want their new word processor to look and work just like their old one. And once Microsoft had edged out WordPerfect, who could make an app that was at least as compatible, looked and worked similar, and had as much name recognition as Microsoft? No one.

Of course, as an aside, I always thought the best word processing app in terms of how well it worked and how easy it was to use, for most situations, was ClarisWorks. By version 3 it did everything you could want unless you were using your word processor for very weird things. Even small desktop-publishing businesses years ago were using it, it was so simple to use but versatile. MS Word is unintuitive bloatware compared to the elegance and simplicity that was ClarisWorks. And it was available for both Mac and Windows. To this day I'd be using ClarisWorks 4 if only I didn't need compatibility with the rest of the world, who are by and large using newer and newer MS .doc formats that ClarisWorks and AppleWorks can't handle.

Which is sad really, because a more user-friendly word processor was never written. But anyway--out of curiosity, any WordStar fans here?

Chasing Amy
(We all chase Amy...)

Change is a good thing (2)

cyberlync (450786) | more than 12 years ago | (#216628)

I have not actually read the book, but I will attempt to get around to it as soon as I can.

Computer interfaces really haven't changed in at least the last ten years. They have gotten prettier, maybe faster, but there has been no fundamental change in all that time. In fact, as far as I can see they have only changed twice in computing history (let me know if I am wrong; lord knows I am not omniscient). They changed from simple printouts/registers to character-based interfaces, then from character-based interfaces to our present GUI interfaces.

I can't see this lack of change as a good thing. In an industry where rapid change is standard, it surprises me that interfaces have remained stagnant. So if this book can foster some original thinking, and perhaps some newer, more efficient designs for interfaces, we should probably take it seriously, no matter how controversial the content.

As always just my 2c

It May Seem Backwards (2)

Decedence (451940) | more than 12 years ago | (#216629)

..until you read the book.
I personally just purchased this little ditty at Border's not four nights ago, and flew through it. What Raskin does, and why, in the end, you cannot help but agree with him, is to walk up to each concept he presents clearly, and explain: yes, UI designers often think this way, but I think they are forgetting x and y, and overlooking z. Very similar to Aquinas's Summa Theologica in voice.
What he points out, quite poignantly, is that engineers design the interfaces, not people. We engineers are a different breed, and, when creating our interactions, we tend more towards Neal Stephenson's essay "In The Beginning..", where every intimate interaction with the machine, as well as each step in the process, should be visible to us.
However, as Raskin states, this is rarely transferred into an interface that makes its current state obvious to its user; instead, engineers often assume the user is a vast idiot.
Unlike Tog, who claims to be the definitive source on interfaces, Raskin admits that his ideas are difficult to actually place in an interface, and instead seems to prefer that the book be used as a guideline for meeting halfway.
Raskin also explains, in very simple terms, just how the human mind is thought to work, and asks: if these fundamentals are true, then why does working with a computer feel like x when, on review, it should feel like z?
For instance, perhaps the most harped-upon item in the work is the idea of modes. Raskin believes the human brain takes about 10 seconds to switch gears between two tasks, and that modes actually slow a user down. He seems to believe there should be only one mode, 'edit', giving the user complete control of all aspects of the document at any time. Most UI people believe modes save users from themselves, allowing certain changes to a document only when in mode x, which you have to click 'Ok' to enter. However, Raskin answers this argument by simply calling for a universal Undo/Redo button (as found on the Canon Cat), the inability to delete anything permanently, and automatic saving of documents.
One of the more intricate ideas, and one everyone seems to wonder about, is Raskin's idea of removing file hierarchies. Here's how Raskin believes it should work: within a machine there are only documents. The system itself is invisible to the user. You don't launch applications or move around system extensions; you just work. With that removed, you then look upon the hard drive as an infinitely long scroll, where documents simply occupy blocks of space and can grow within their space indefinitely. While Raskin never explains exactly how this will work as hard drives become radically more crammed, I don't think it was ever his intention to explain exactly how to do anything.
What many reviewers miss is Raskin's assertion that his ideas, the sum of this book, are just that: ideas and guidelines. Basically the book is a statement of "in an ideal world, this would happen." However, he does provide several methods for testing current interfaces, as well as examining ways to improve them now.
I recommend it as shelf material for anyone who works day-to-day in UI. At its best, the book is a profound guide for the ideal that computers, or any device for that matter, should be less complicated, more thought-out structures.

Let us have our computers back!!! (1)

Alfiekins (452495) | more than 12 years ago | (#216631)

Having just moved from a Linux development environment to a Windows one I have found it interesting how hard it has been to adjust to point and click again. I tried to re-create my Linux/WindowMaker ways on Windows (this post is being written using GNU Emacs for Windows). I delved into the shallow depths of DOS just to get a command line.

I have found the lack of control and flexibility of Windows frustrating (purely from a UI point of view, not MS bashing, sorry).

But my colleagues here find my old-fashioned, command-line ways quaint and, frankly, past their sell-by date. I, of course, do not agree. Their knowledge of their systems -- how they work, what they do, what they could do and what they should do -- is woeful.

What is wrong? The problem is that the GUI presents a seductive and often insurmountable barrier to what a user could be producing.

A command line gives you the ultimate power and flexibility that a computer user needs to wring the best out of the machine and out of themselves. To use a command line requires an enquiring mind, knowledge and intelligence: qualities that are sadly lacking in most people with a mouse in their hand.

Take away the GUI and give the user back the computer.
