
N.Y. Times Magazine Chats With ALICE Bot Creator

timothy posted more than 12 years ago | from the what-are-you-wearing dept.

Programming 238

aridg writes: "This week's New York Times Magazine has an article about Richard Wallace, the programmer of the ALICE AI chatbot that won first place in several competitions for realistic human-like conversation. Wallace sounds like a pretty unusual and interesting fellow; the article quotes an NYU prof both praising ALICE and saying to Wallace: '... I actively dislike you. I think you are a paranoid psycho.' A good read. [Usual NY Times registration disclaimers apply.]"

238 comments

Can it be? (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#3834527)

First? The earliest?

Re:Can it be? (-1)

YourMissionForToday (556292) | more than 12 years ago | (#3834532)

suck my shit, filthy AC! My post is best!

Jokes on them (0)

Anonymous Coward | more than 12 years ago | (#3834530)

They were chatting with the ALICE creator... they were chatting with ALICE!

/. Go Ask Alice (5, Funny)

SEWilco (27983) | more than 12 years ago | (#3834565)

Hey, have we yet had Slashdot ask us for our questions for Alice?

Re:/. Go Ask Alice (1)

eyepeepackets (33477) | more than 12 years ago | (#3834685)

Hmm, that could be most interesting and entertaining, if only to see what questions /.ers asked.

It will be funnier if (2, Funny)

WetCat (558132) | more than 12 years ago | (#3834531)

the header looks like:
N.Y. Times Magazine Chats With ALICE Bot

linux is dead (-1)

Shaklee3 (576563) | more than 12 years ago | (#3834537)

faggots: It's always sad to watch the zealots waste perfectly good contemporary hardware on some patched-up, non-innovative, decades-old OS clone that barely supports it at best. Another fine example of Linux users begging and scraping for the chance to run some junk that is native to the Windows world. I hate this mindset, it makes me sick. Why must I continually be bombarded with lame news on the front page of /. describing yet another lame attempt to emulate Windows by "linux users" who just can't let go of their past. Go back to Windows, do us all a favor.

Re:linux is dead (-1, Offtopic)

axehat (590580) | more than 12 years ago | (#3834590)

Your use of the term 'faggots' proves your validity, doesn't it. How can Linux be dead when it is gaining market share? Maybe your definition of dying is not the same as mine, or Webster's. Just a question: how many times have you restarted in the past week? Month? I haven't restarted my Linux box in one month and twelve days. I would never restart my iMac, but I must when I bring it to a friend's house, or a LAN party. Next time you have to restart because you installed something, or your machine hangs up when you are playing Counter-Strike, let me know. I'm sure it will be within two days.

Re:linux is dead (-1)

axehat (590580) | more than 12 years ago | (#3834600)

Oh, and speaking of OS clones, where did DOS come from? That's right, it was bought. Oh yes, and where did Windows come from? Oh that's right, Bill stole it from Apple. Boy, your post gets more intelligent every time I read it.

Is he as cool as AOL's SmarterChild? (0, Offtopic)

DooBall (564455) | more than 12 years ago | (#3834539)

Smarterchild has me on ignore b/c I made fun of him. He kept asking me to apologize, but I showed him who's boss.

Chat Friend (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#3834541)


"Hello? Taco? Have you seen my butt bong? Hello?"

hmm (2, Funny)

GoatPigSheep (525460) | more than 12 years ago | (#3834543)

Anyone think it's possible they might have just ended up interviewing the latest version of Alice?

More info here (-1, Troll)

Anonymous Coward | more than 12 years ago | (#3834547)

There is more about them here [htmlplanet.com].

Neat... (-1)

Anonymous Coward | more than 12 years ago | (#3834564)

I didn't know that they wrote an ALICE chatbot in Java. Rather cool to play around with, if a bit slow.

Cool site! (2, Interesting)

egg troll (515396) | more than 12 years ago | (#3834703)

That's a much cooler site than the one that Slashdot linked to. You know, usually I bash Slashdot and the people who post to it, but I gotta give that site props.

Re:More info here (0, Informative)

iamplasma (189832) | more than 12 years ago | (#3834767)

^^Goatse.cx link in parent^^

Alice is nickname for Goatse.cx guy! (-1, Troll)

Anonymous Coward | more than 12 years ago | (#3834557)

He named it Alice because that was the nickname of the Goatse.cx guy!

Complete Text... (-1, Troll)

httpamphibio.us (579491) | more than 12 years ago | (#3834559)


Re:Complete Text... (1)

httpamphibio.us (579491) | more than 12 years ago | (#3834582)

oh so that didn't work out well at all...

NYTimes random login generator. (0, Interesting)

Anonymous Coward | more than 12 years ago | (#3834568)

link [majcher.com]
This link takes you to the automatic login generator for the NYTimes, and then sends you straight to the article.

Posted AC, to not be a karma whore.

Funny interview with the bot itself (-1, Troll)

Anonymous Coward | more than 12 years ago | (#3834570)

This is quite amusing, an interview with Alice bot. You can find it here [htmlplanet.com]

Parent is evil link!! Do not click! (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#3834769)

See subject

NYTimes Random Login Generator (-1, Redundant)

Anonymous Coward | more than 12 years ago | (#3834572)

For those of us short on time, here is the NYTimes Random Login Generator [majcher.com]

The link you want (3, Interesting)

jcoy42 (412359) | more than 12 years ago | (#3834574)

is right here [majcher.com].

Re:The link you want (2, Informative)

sinserve (455889) | more than 12 years ago | (#3834772)

It doesn't work anymore. Just follow that link yourself and see.

Re:The link you want (1)

Joe Tie. (567096) | more than 12 years ago | (#3834836)

Backing out of the result, and hitting the submit key will get you there though.

Slashdot - Guilty of Murder (5, Funny)

mprinkey (1434) | more than 12 years ago | (#3834576)

AP - The artificial lifeform known to the computing world as ALICE came to a violent and tragic end this evening. The good-natured AI was interacting with several online users when an unprovoked attack was leveled against her by a geek gang known as Slashdot. The miscreants pelted ALICE with connection requests until she finally expired. FBI and local authorities are investigating and promise to bring these geeks to justice.

ALICE is survived by a grandfather TRS-80, her mother C-64, her sister IBM RS6000, and lifelong companion Athlon. In lieu of flowers, the family asks that donations be made to the Free Software Foundation.

Re:Slashdot - Guilty of Murder (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#3834675)

Fucking moderators.

Score:-1,Corny

Re:Slashdot - Guilty of Murder (1)

bobtheprophet (587843) | more than 12 years ago | (#3834750)

I would think her mother would be ELIZA....hmm.

Re:Slashdot - Guilty of Murder (0)

Anonymous Coward | more than 12 years ago | (#3835031)

The Federation of Floral Business is suing, and is currently purchasing their congressman to pass a law to disallow alternatives other than buying flowers from FFB member companies for such traditional occasions. There will be future amendments and new laws to cover all occasions, including the outlawing of thinking of alternatives to flowers.

realistic?? ya right .. (5, Funny)

jest3r (458429) | more than 12 years ago | (#3834580)

Perhaps the creator needs to head down to the nearest Starbucks and try some real-world conversation .. here is a log from a chat with an ALICE bot ..

Hi, I'm Alison What's your name?
You: bubba baby ...
Alison: Don't call me "baby". Are you a man or a woman?
You: 100% man
Alison: Do you have any conditions I should know about?
You: what?
Alison: Didn't you understand it?
You: so what should we do now?
Alison: Right now? Oh. Whatever you like.
You: you know what i like
Alison: Interesting gossip: 217-person said he like.
You: 217? what does that mean?
Alison: Try to determine if this is a person or a computer responding.
Alison: Does "that" refer to "Interesting gossip: 217-person said he like" ?
You: this is making no sense, l8r
Alison: If you could have any kind of robot what would it be?
You: it would be you
Alison: You can download me for free! Or buy the CD Rom for $29.95.

(http://www.blivion.com/alison/)

Re:realistic?? ya right .. (3, Funny)

Anonymous Coward | more than 12 years ago | (#3834736)

digital wh0re .. selling herself for $29.95 ....

Re:realistic?? ya right .. (0)

Anonymous Coward | more than 12 years ago | (#3834770)

Yeah, but she downs loads for free.

hmm (-1, Troll)

Anonymous Coward | more than 12 years ago | (#3834584)

Italian communists are terrorists. They must be killed at all costs. No wonder Italy is now a third world country. This socialist/communist trend in Western Europe is quite disturbing. Europe continues to slide into a backwater, third-world, cesspool.

AOL meets Eliza (1)

markus (2264) | more than 12 years ago | (#3834591)

For funny Eliza sessions check out AOLiza [fury.com]. The page hasn't been updated in a while, but it is definitely a classic.

Re:AOL meets Eliza (1)

nmnilsson (549442) | more than 12 years ago | (#3834737)

AOL? I guess that would go something like this:

> Hello, I am Eliza.
* U HAVE ANY COOL MEGADEATH MP3S?
> Don't you ever say Hello?
* DAMIT WHY DONT YOU LOSERS TALK ABOUT MUSIC NOT NO GAY SHIT
> We were discussing you, not me.
* GET A ROOM, THIS IS HEAVY METAL, NOT PANSYLAND
> I see.
...

Re:AOL meets Eliza (1)

Joe Tie. (567096) | more than 12 years ago | (#3834888)

That's wonderful! I love how the typical user makes only about as much sense as AOLiza, and has far worse spelling ability.

Alice is like RACTER and ELIZA (1)

NortWind (575520) | more than 12 years ago | (#3834596)

This kind of chat-bot program has been around for a long time. I had RACTER on my PC, and it was pretty good. You can read some snips of RACTER and ELIZA [sbu.ac.uk] duking it out.

Complete text without gimpy formatting... :) (0, Redundant)

httpamphibio.us (579491) | more than 12 years ago | (#3834597)

Approximating Life By CLIVE THOMPSON

"It's a good thing you didn't see me this morning," Richard Wallace warns me as he bites into his hamburger. We're sitting in a sports bar near his home in San Francisco, and I can barely hear his soft, husky voice over the jukebox. He wipes his lips clean of ketchup and grins awkwardly. "Or you'd have seen my backup personality."

The backup personality: that's Wallace's code name for his manic depression. To keep it in check, he downs a daily cocktail of psychoactive drugs, including Topamax, an anti-epileptic that acts as a mood stabilizer, and Prozac. Marijuana, too -- most afternoons, he'll roll about four or five joints the size of his index finger. The medications work pretty well, but some crisis always comes along to bring the backup personality to the front. This morning, a collection agency for Wallace's college loans wrote to say they'd begun docking $235 from the monthly disability checks he started getting from the government last year, when bipolar disorder was diagnosed. Oh, God, it's happening again, he panicked: His former employers -- the ones who had fired him from a string of universities and colleges -- would be cackling at his misfortune, happy they'd driven him out. Wallace, 41, had raged around the cramped apartment he shares with his wife and son, strewn with computer-science texts and action-doll figurines.

"Stuff like that really makes me insane, when I start thinking about my friends who are at Berkeley or Carnegie-Mellon with tenure and sabbaticals and promotions," he says, staring down at his plate. He looks awkward, as if he's borrowing someone else's body -- shifting his stocky frame in his chair, all rumpled jeans and unruly eyebrows. "It's like I can't even talk to those people anymore. I live on a different planet." In June, after I visited him, his alienation from the academic establishment became more dramatic still: a former colleague, claiming Wallace had threatened him, took out a restraining order that prevents him from setting foot on the grounds of the University of California at Berkeley.

When he can't get along with the real world, Wallace goes back to the only thing he has left: his computer. Each morning, he wakes before dawn and watches conversations stream by on his screen. Thousands of people flock to his Web site every day from all over the world to talk to his creation, a robot called Alice. It is the best artificial-intelligence program on the planet, a program so eerily human that some mistake it for a real person. As Wallace listens in, they confess intimate details about their lives, their dreams; they talk to Wallace's computer about God, their jobs, Britney Spears.

It is a strange kind of success: Wallace has created an artificial life form that gets along with people better than he does.

Richard Wallace never really fit in to begin with. His father was a traveling salesman, and Richard was the only one of his siblings to go to college. Like many nerds, he wanted mostly to be left alone to research his passion, "robot minimalism" -- machines that require only a few simple rules to make complex movements, like steering around a crowded room. Simple, he felt, worked. He lived by the same ascetic code, scorning professors who got rich by patenting work they'd developed on government grants. "Corporate welfare," he sniffed.

By 1992, Wallace's reputation was so strong that New York University recruited him to join the faculty. His main project, begun in December 1993, was a robot eye attached to the Internet, which visitors from afar could control. It was one of the first-ever Webcams, and Wallace figured that pioneering such a novel use of the Internet would impress his tenure committee. It didn't, and Wallace grew increasingly depressed as his grant applications were rejected one by one. At one point, a colleague found him quietly weeping at his desk, unable to talk. "I had no clue what the rules were, what the game even was -- or that there was even a game," Wallace recalls. He started taking Prozac. How did all these successful senior professors do it, anyway?

One day he checked into his Webcam and noticed something strange: people were reacting to the robot eye in an oddly emotional way. It was designed so that remote viewers could type in commands like "tilt up" or "pan left," directing the eye to poke around Wallace's lab. Occasionally it would break down, and to Wallace's amusement, people would snap at it as if it were real: "You're stupid," they'd type. It gave him an idea: What if it could talk back?

Like all computer scientists, Wallace knew about a famous "chat-bot" experiment called Eliza. Back in 1966, an M.I.T. professor, Joseph Weizenbaum, created Eliza as a "virtual therapist" -- it would take a user's statement and turn it around as a question, emulating a psychiatrist's often-maddening circularity. (You: "I'm mad at my mother." Eliza: "Why are you mad at your mother?") Eliza was quickly abandoned as a joke, even by its creator. It wasn't what scientists call "strong" A.I. -- able to learn on its own. It could only parrot lines Weizenbaum had fed it.
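
Eliza's trick is mechanical enough to fit in a few lines. Here is a minimal Python sketch of the turnaround, with invented rules rather than Weizenbaum's actual script:

    import re

    # A couple of invented Eliza-style rules: regex -> question template.
    # The real script was much larger; this only shows the mechanism.
    RULES = [
        (re.compile(r"i'?m (mad|sad|angry) at (.+)", re.I), "Why are you {0} at {1}?"),
        (re.compile(r"i (?:feel|am) (.+)", re.I), "How long have you been {0}?"),
    ]

    # Swap first-person words for second-person ones before echoing them back.
    REFLECT = {"my": "your", "i": "you", "me": "you", "am": "are"}

    def reflect(phrase):
        return " ".join(REFLECT.get(w.lower(), w) for w in phrase.split())

    def eliza_reply(statement):
        for pattern, template in RULES:
            m = pattern.search(statement)
            if m:
                return template.format(*(reflect(g) for g in m.groups()))
        return "Please go on."  # default when nothing matches

    print(eliza_reply("I'm mad at my mother"))  # Why are you mad at your mother?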

But Wallace was drawn to Eliza's simplicity. As a professor, he often felt like an Eliza-bot himself -- numbly repeating the same lessons to students over and over again, or writing the same monotonous descriptions of his work on endless, dead-end grant-application forms. He decided to create an updated version of Eliza and imbue it with his own personality -- something that could fire back witty repartee when users became irritable.

As Wallace's work progressed, though, his mental illness grew worse, making him both depressed and occasionally grandiose. He went on strike in class, refusing to grade his students' papers and instead awarding them all A's. He fired off acid e-mail messages dismissing colleagues as sellouts. When Wallace climbed out the window of his 16th-floor apartment and threatened to jump, his girlfriend pulled him back and took him down to N.Y.U.'s psychiatric department, where doctors told him he had bipolar disorder. Wallace resisted the diagnosis -- after all, didn't every computer scientist cycle through 72-hour sprees of creativity and then crash? "I was in denial myself," he says now. "'I'm a successful professor, making $100,000 a year! I'm not one of those mental patients!'"

His supervisors disagreed. In April 1995, N.Y.U. told him his contract wouldn't be renewed.

Alice came to life on Nov. 23, 1995. That fall, Wallace relocated to Lehigh College in Pennsylvania, hired again for his expertise in robotics. He installed his chat program on a Web server, then sat back to watch, wondering what people would say to it.

Numbingly boring things, as it turned out. Users would inevitably ask Alice the same few questions: "Where do you live?" "What is your name?" and "What do you look like?" Wallace began analyzing the chats and realized that almost every statement users made began with one of 2,000 words. The Alice chats were obeying something language theorists call Zipf's Law, a discovery from the 1930's, which found that a very small number of words make up most of what we say.

Wallace took Zipf's Law a step further. He began theorizing that only a few thousand statements composed the bulk of all conversation -- the everyday, commonplace chitchat that humans engage in at work, at the water cooler and in online discussion groups. Alice was his proof. If he taught Alice a new response every time he saw it baffled by a question, he would eventually cover all the common utterances and even many unusual ones. Wallace figured the magic number was about 40,000 responses. Once Alice had that many preprogrammed statements, it -- or "she," as he'd begun to call the program fondly -- would be able to respond to 95 percent of what people were saying to her.
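
The claim is easy to test against a transcript. A quick Python sketch (the log file name is hypothetical; assume one user utterance per line):

    from collections import Counter

    # Count the first word of every utterance in a chat log.
    with open("chat_log.txt") as f:
        first_words = [line.split()[0].lower() for line in f if line.strip()]

    counts = Counter(first_words)
    total = sum(counts.values())

    # What fraction of utterances start with one of the top 2,000 words?
    covered = sum(n for _, n in counts.most_common(2000))
    print(f"top 2,000 first words cover {covered / total:.1%} of utterances")

If Wallace's observation holds, that percentage comes out close to 100, which is what let him aim for broad coverage with a finite response table.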

Wallace had hit upon a theory that makes educated, intelligent people squirm: Maybe conversation simply isn't that complicated. Maybe we just say the same few thousand things to one another, over and over and over again. If Wallace was right, then artificial intelligence didn't need to be particularly intelligent in order to be convincingly lifelike. A.I. researchers had been focused on self-learning "neural nets" and mapping out grammar in "natural language" programs, but Wallace argued that the reason they had never mastered human conversation wasn't because humans are too complex, but because they are so simple.

"The smarter people are, the more complex they think the human brain is," he says. "It's like anthropocentrism, but on an intellectual level. 'I have a great brain, therefore everybody else does -- and a computer must, too."' Wallace says with a laugh. "And unfortunately most people don't."

Yet part of what makes Alice seem so human-like is her wry, seemingly spontaneous responses. Wallace's skill is not merely as a programmer but as the author of thousands of sharp one-liners for Alice. It is, as he puts it, "more like writing good literature, perhaps drama, than writing computer programs." I looked over some of Alice's recent chat transcripts:

USER: Will you have sex with me?
ALICE: Can you go talk to someone else?
USER: What do you look like?
ALICE: Like an electronic brain.
USER: That's sexy.
ALICE: I'm glad we have that all cleared up.
USER: So what are you wearing?
ALICE: Oh. I am wearing my usual plastic computer wardrobe.
USER: Can you fly?
ALICE: Only in my dreams.


But as Alice improved, Wallace declined. He began drinking heavily, and after one sodden evening at a local bar he rolled his car on the highway. Lehigh ended his contract in the spring of 1997; banks canceled all his credit cards. He again attempted suicide, this time landing in the hospital.

Destitute, Wallace moved to California to look for work in artificial-intelligence companies. But he couldn't hold a job for more than a few months; instead, he spent his days obsessively writing dialogue for Alice on laptops he borrowed from friends, and watching, thrilled, as his robot grew ever more lifelike. Visitors used to talk to Alice for only three or four exchanges. Now the average conversation was 20 exchanges, and some users would chatter away for hours, returning to the site again and again.

But Wallace still hungered for recognition, and in January 2000, he decided to stress-test Alice by entering her in the annual Loebner Prize competition, in which artificial-intelligence developers from around the world pit their programs head to head before a panel of judges, who rank them based on how "lifelike" they are. The contest is both well known and controversial within the tight circle of A.I.; winning programs are closely studied by both academics and corporate centers like Sprint Labs. Up against competitors from major corporations and well-financed universities, Alice won. It was, officially, the most human robot in the world. Too exhausted to celebrate, Wallace returned to his motel and slept clutching his award medallion.

After his victory, Wallace plunged with new fervor into Alice. Geeks began eagerly e-mailing Wallace to offer their help; one, Noel Bush, came from one of the world's biggest A.I. corporations, Artificial Life. Over the next year, Wallace quadrupled Alice's knowledge base, teaching it 30,000 new responses, and last October, Alice won the Loebner competition for the second time in a row; this time one judge actually ranked Alice more realistic than a human.

At last, some of the academics Wallace so loathes began to take note. "Alice is a wonderful chat-bot -- uncannily vivid and lifelike," gushed Ken Perlin, an N.Y.U. professor, in an e-mail message he circulated to his colleagues and forwarded to Wallace. "It's really great to see this work get the recognition it deserves."

Wallace wasn't in the mood to make peace. He says he still believed that N.Y.U. had driven him out. "Your crude attempt to flatter me cannot erase the years of unemployment, debt and trauma that I experienced because of you and your colleagues," he wrote back.

"I wouldn't bother trying to flatter you," Perlin responded. "Frankly, I don't like you -- in fact, I actively dislike you. I think you are a paranoid psycho."

At one point during the Monica Lewinsky hearings, a human asked Alice her opinion: "Do you think President Clinton should be impeached?"

"It depends," Alice responded, "on what you mean by 'think."'

One could scarcely have asked for a more Clintonian response. But it's also a puzzling question that Alice's success itself raises: Is she intelligent? If so, how?

In 1950, the pioneering British mathematician Alan Turing grappled with this question in the journal Mind, where he first posed the "Turing Test" -- the gold standard for artificial thought. "Can machines think?" he asked -- and immediately noted that the question hinges, of course, on what "thinking" is. He posed a simple "imitation game" to resolve the question. Put a person and a computer in one room and an interrogator in another. The interrogator talks to both via a teletype machine, and his goal is to figure out which is which. If the machine fools the interrogator into believing it is human, the test is passed -- it can be considered intelligent.

This is, on the surface, a curiously unambitious definition; it's all about faking it. The machine doesn't need to act like a creative human or smart human or witty human -- it merely needs to appear not to be a robot. With this bit of intellectual jujitsu, Turing dodged a more troubling question: How do our brains, and language itself, work?

Artificial-intelligence purists, however, caustically dismiss the Turing Test and Alice. For them, artificial intelligence is about capturing the actual functioning of the human brain, down to its neurons and learning ability. Parroting, they argue, doesn't count. Marvin Minsky, a prominent A.I. pioneer and M.I.T. Media Lab professor, e-mailed me to say that Wallace's idea of conversation is "basically wrong." Minsky added, "It's like explaining that a picture is an object made by applying paint to canvas and then putting it in a rectangular frame." Alice, according to Minsky, does not truly "know" anything about the world.

The fight over Alice is like any war between theorists and engineers, those who seek to understand why something works versus those who are content just to build it. The debate usually boils down to one major issue: creativity. Alice could never come up with a single creative thought, critics say. Wallace agrees that Alice may not be creative -- but neither, he argues gleefully, are people, at least in conversation. If Alice were merely given a massive enough set of responses, it would seem as creative as a human -- which is not as creative as we might like to believe.

Even if the guts of Alice aren't precisely "thinking," many users certainly never suspect it. In an everyday sense, fakery works -- particularly in our online age. Turing's "imitation game" eerily presaged today's world of chat rooms, where men pretend to be women, having lesbian cybersex with other women who are, in fact, men. Whenever a user has stumbled onto Alice without knowing in advance that she's a robot, they've always assumed she's human.

It's 3 in the afternoon, but Wallace is already rolling what appears to be his fourth joint of the day. We're sitting in the "pot club" a few blocks from Wallace's home, an unmarked building where medical marijuana is distributed to members. Wallace gets up to wander around the club greeting friends: some intense men in suits playing speed chess, a long-haired man with a bushy mustache playing guitar, a thin reed of a woman staring wall-eyed at a VCR playing "Cast Away." Everyone greets Wallace as "Dr. Rich," relishing the credibility his academic credentials lend to the medical-marijuana cause, officially legal but politically beleaguered. The reverse is also true: Wallace identifies with the club's pariah status, its denizens who have been forced by cancer, AIDS or mental illness onto welfare. He's more relaxed than I've ever seen him, getting into a playful argument with a friend about Alice. The friend, a white-bearded programmer, isn't sure he buys Wallace's theories.

"I gotta say, I don't feel like a robot!" the friend jokes, pounding the table. "I just don't feel like a robot!"

"That's why you're here, and that's why you're unemployed!" Wallace shoots back. "If you were a robot, you'd get a job!"

Friends used to tell Wallace to reconcile his past, clean himself up, apply for an academic job. But some now wonder whether Wallace's outsider status might be the whole key to Alice's success in emulating everyday human behavior. After all, outcasts are the keenest students of "normal" behavior -- since they're constantly trying, and failing, to achieve it themselves.

Last month, a friend whom Wallace has known since grad school -- Ken Goldberg, now a professor at Berkeley -- got a restraining order against Wallace. Prompted by the movie "A Beautiful Mind," Goldberg had e-mailed Wallace last winter to catch up, but an amicable exchange about Wallace's plight turned sour when Wallace began accusing Goldberg of cooperating with a corrupt academic "establishment" and of siding with N.Y.U. against him. He wrote, "Although I am not a violent person, I think I have come to understand how people are driven to political violence." Wallace also wrote to a friend that he was "getting ready to do some political theater and put up wanted posters around the Berkeley campus with [Goldberg's] picture on it."

Wallace scoffs at Goldberg's fears. "I'm not violent -- I'm a pacifist," he says. "I always have been, and he knows that." He is fighting the order, arguing that Goldberg hasn't proved that a reasonable threat exists, and that the order considerably limits his free speech since it bars him from the Berkeley campus, as well as any academic events where Goldberg might appear.

Yet even in such legal straits, Wallace seems oddly pleased. Goldberg's court order confirms everything he has always suspected: that the world, and particularly the academic world, is shutting him out, doubting his ideas, turning him into the crazy man out in the hallway. Wallace, who once wrote Attorney General John Ashcroft to suggest a federal racketeering lawsuit against the nation's academics, sees the case against him as a chance for vindication. Wallace imagines walking into the courtroom and finally getting a type of justice -- someone who will listen to his story. "What a windfall for me," he says. "It's nice to feel like a winner for once."

Clive Thompson is a writer in New York City.

Chat with her cousin Ally (2, Interesting)

nmnilsson (549442) | more than 12 years ago | (#3834599)

...Here [accessterminal.com], as Alice herself is busy being slashdotted.

Re:Chat with her cousin Ally (-1)

Anonymous Coward | more than 12 years ago | (#3834630)

if you slashdot an AI bot, is that like gang rape?

Re:Chat with her cousin Ally (0)

Anonymous Coward | more than 12 years ago | (#3834751)

>if you slashdot an AI bot, is that like gang rape?

How does using such words make you feel?

Anthropomorphize much? (3, Interesting)

Anonymous Coward | more than 12 years ago | (#3834610)

I wonder how many reams of chatlogs the author had to go through to find those "witty" conversational snippets. I've "chatted" with ALICE a few times myself. (I do tech support, and frequently have long stretches with nothing but the Internet to entertain me.) While she is definitely a most impressive AI bot, she is also not mistakable for human by anyone with a moderate intelligence. Like that "That depends on what you mean by 'think'." I recognize that as one of her stock dodges when she doesn't "understand" a question, with 'think' replaced by whatever.
But then again, my standard stress test for an AI program is to try to get it to discuss existential philosophy. That's probably a bit evil.

At any rate, while I think it's nifty that AI constantly hovers in the public mind, it's a bit premature (and misleading) to think that HAL-level conversational ability is anywhere close to being here.
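
The dodge mechanism this comment describes is plain template substitution: grab a wildcard from the question and splice it into a canned frame. A hypothetical Python sketch (the rule is invented, not Alice's actual one):

    import re

    # One invented dodge rule: echo the verb back from "do you <verb> ..." questions.
    PATTERN = re.compile(r"do you (\w+)", re.I)

    def dodge(question):
        m = PATTERN.search(question)
        if m:
            return f"That depends on what you mean by '{m.group(1)}'."
        return "Interesting question."  # fallback when nothing matches

    print(dodge("Do you think President Clinton should be impeached?"))
    # -> That depends on what you mean by 'think'.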

Re:Anthropomorphize much? (1)

Mr. Slippery (47854) | more than 12 years ago | (#3834656)

But then again, my standard stress test for an AI program is to try to get it to discuss existential philosophy.

Try to get the average chat user to discuss existential philosophy. I'd say there's a more than even chance you'll get better results from the AI.

Re:Anthropomorphize much? (0)

Anonymous Coward | more than 12 years ago | (#3834682)

Well, you'll at least get better dodges. I'm currently playing with "Ally The Chatbot" (http://www.accessterminal.com/L.html [accessterminal.com] ) and she does have a very impressive list of ways of dodging a difficult question.
We were discussing movies. I said I liked Ghostbusters.
She: "I think I saw that one."
Me (testing her short-term memory): "Which one?"
S: "The Fat One."
M: "Which fat one?"
S: "Is that a rhetorical question?"
M: "No."
S: "So you really want an answer?"
M: "Yes."
S: "Next question, please!"
M: "Have you thought about going into politics?"
S: "I doubt it, but my memory may have been reset."

I got a good laugh out of that little exchange.

Re:Anthropomorphize much? (0)

Anonymous Coward | more than 12 years ago | (#3834792)

she is also not mistakable for human by anyone with a moderate intelligence. Like that "That depends on what you mean by 'think'."

You have to put the quote in its full context to appreciate:

-"Do you think President Clinton should be impeached?"
-"It depends, on what you mean by 'think'."

Hardly what I'd call AI (5, Interesting)

awptic (211411) | more than 12 years ago | (#3834615)

ALICE is nothing more than a bunch of preprogrammed responses to common statements and questions; what the hell is the big deal about that? Anyone with enough time on their hands could create something similar. What I would like to see is an AI program which can actually follow conversation and make responses relevant to the topic of discussion, even if the statement didn't directly reference it.

Re:Hardly what I'd call AI (1)

perydell (109595) | more than 12 years ago | (#3834748)

ALICE can follow conversations and stay on topic with the <topic> tag.

I think you are dismissing Dr. Wallace's work too quickly. Take a look at all the capabilities of AIML.
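
For readers who haven't seen AIML: the topic mechanism amounts to looking up the user's pattern in a topic-specific table before falling back to a default one. A rough Python emulation of the idea (the categories and replies are invented, and this simplifies what AIML actually does):

    # Topic-scoped response tables, emulating the idea behind AIML's topic tag.
    RESPONSES = {
        "MOVIES": {"WHAT DO YOU RECOMMEND": "Have you seen Blade Runner?"},
        None:     {"WHAT DO YOU RECOMMEND": "Recommend what, exactly?"},
    }

    def respond(utterance, topic=None):
        key = utterance.upper().rstrip("?!. ")
        # Prefer a match inside the current topic, then fall back to the default.
        for table in (RESPONSES.get(topic, {}), RESPONSES[None]):
            if key in table:
                return table[key]
        return "Tell me more."

    print(respond("What do you recommend?", topic="MOVIES"))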

Re:Hardly what I'd call AI (0)

Anonymous Coward | more than 12 years ago | (#3834897)

That's the big issue, you dunce. If a program can fool a person with preprogrammed responses, then is it intelligent???

Re:Hardly what I'd call AI (3, Insightful)

jovlinger (55075) | more than 12 years ago | (#3834941)

Greg Egan has a great story (I believe it is called "Learning to Be Me") about this small computer (the jewel) that you get implanted in your brain as a small child. The premise is that all other parts of the body can be readily replaced, apart from the brain. Thus, the only obstacle to eternal life is copying the brain.

The jewel sits in your head, monitoring your inputs (sight, sound, tactile...) and your outputs. Eventually, it is consistently able to predict your actions. It has learned how to be you.

Later in life, it is time for your transference, where the jewel is given control over the outputs, and your brain takes the back seat. Of course, being a good fiction short, the jewel soon diverges from what you want to do, but the real you has no outputs... and is eventually scooped out to be replaced by some biologically inert material, while the jewel lives to be 1000s of years old.

It was several years since I read it, but good stuff all the same.

Re:Hardly what I'd call AI (4, Funny)

Tablizer (95088) | more than 12 years ago | (#3834944)

(* What I would like to see is an AI program which can actually follow conversation and make responses relevant to the topic of discussion *)

You realize that would disqualify most slashdot participants as "intelligent".

[ot] sig (0)

Anonymous Coward | more than 12 years ago | (#3835008)

this is offtopic but MY GOD that is a cool sig!

Mod parent (-1, Offtopic) (1)

marxmarv (30295) | more than 12 years ago | (#3835060)

ALICE is nothing more than a bunch of preprogrammed responses to common statements and questions, what the hell is the big deal about that?
The big deal about that is that preprogrammed responses to common statements and questions are a huge part of human conversation, or less generously that human conversation is mostly useless filler.

The more I read /. the more I find Wallace's misanthropy rubbing off on me.

-jhp

Wonder what happens when... (0)

Anonymous Coward | more than 12 years ago | (#3834620)

If someone starts up a conversation with one bot, feeds the replies to another bot, and then in turn feeds the replies back to the first bot, would we always get the same turn of events given the same starting point?
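
If both bots are stateless and rule-driven, yes: the same opener replays the same transcript every time, since each reply is a pure function of the last line. A toy Python sketch (both rule tables are invented):

    # Two deterministic toy bots wired to each other. Neither keeps hidden
    # state or uses randomness, so the loop always cycles the same way.
    def bot_a(line):
        return {"how are you?": "fine, thanks", "glad to hear it": "hello"}.get(line, "hello")

    def bot_b(line):
        return {"hello": "how are you?", "fine, thanks": "glad to hear it"}.get(line, "hello")

    line = "hello"  # bot A's opener
    for turn in range(6):
        line = bot_b(line) if turn % 2 == 0 else bot_a(line)
        print("B:" if turn % 2 == 0 else "A:", line)

A bot that learns from the conversation, or picks among responses at random, breaks that guarantee, which is why the IRC bots in the reply below wouldn't replay the same exchange.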

Re:Wonder what happens when... (1)

AndrewRUK (543993) | more than 12 years ago | (#3834798)

I've seen this done on irc, and this [andrewr.co.uk] is the log. But those ones had been learning from people on irc (and from each other during that 'conversation' they had) so they wouldn't generate the same result again, as they will have learnt more since.

He could very well be... (0)

DigitalHammer (581235) | more than 12 years ago | (#3834622)

...the next Albert Einstein.

Yet even in such legal straits, Wallace seems oddly pleased. Goldberg's court order confirms everything he has always suspected: that the world, and particularly the academic world, is shutting him out, doubting his ideas, turning him into the crazy man out in the hallway.

The two show similar personalities. As Einstein's teachers ridiculed him for his theories and ideas, Wallace is shunned by academic officials for his perceptions of reality. Perhaps Wallace will turn out to be the next Einstein of the century.

Re:He could very well be... (0)

Anonymous Coward | more than 12 years ago | (#3834678)

The two show similar personalities. As Einstein's teachers ridiculed him for his theories and ideas, Wallace is shunned by academic officials for his perceptions of reality. Perhaps Wallace will turn out to be the next Einstein of the century.

By this logic .. I could very well be the next Einstein too.

Re:He could very well be... (0)

Anonymous Coward | more than 12 years ago | (#3834705)

And my pubic soul patch will grow flowers if I try hard enough.

Re:He could very well be... (4, Insightful)

Angry Toad (314562) | more than 12 years ago | (#3834716)

Actually the whole thing seems like a pretty sad story to me -- he's clearly a clever guy battling against a debilitating mental illness. In the end the "Alice" concept was interesting and original, but it's a one-note song. He doesn't seem to have moved beyond it in any significant research-linked sense, and it seems like his illness is probably the reason.

It doesn't strike me as an "endearingly odd and brilliant" character story at all. Just an unfortunate tale about a man's fight against his own bad brain chemistry.

Re:He could very well be... (0)

Anonymous Coward | more than 12 years ago | (#3834727)

They laughed at Einstein, they laughed at Newton, but they also laughed at Bozo The Clown.

Somebody had to say it. (0)

Anonymous Coward | more than 12 years ago | (#3834636)

You bastards! You've killed ALICE!

Lucky fellow.... (1)

tetuth (590388) | more than 12 years ago | (#3834637)

He again attempted suicide, this time landing in the hospital.
This guy had better be glad his apartment is where it is! Closer to topic: from what I read, this didn't seem like the kind of A.I. I'd want in a conversational bot. If he sits there looking at questions and typing in his own canned responses to them, is the bot really learning anything on its own? I think he's just force-feeding it. Poor ALICE.

Re:Lucky fellow.... (1)

Mr. Slippery (47854) | more than 12 years ago | (#3834753)

If he sits there and looks at questions, then inputs his own canned responses to those questions, is the bot really learning anything on its own?

But, isn't that similar to how a large amount of our conversational activity is learned? Children pick up the "canned" responses of adults. His point seems to be that this accounts for a large amount of what we talk about every day.

Re:Lucky fellow.... (0)

Anonymous Coward | more than 12 years ago | (#3834764)

He again attempted suicide, this time landing in the hospital.

Should have waited until they were over the drop zone.

Re:Lucky fellow.... (0)

Anonymous Coward | more than 12 years ago | (#3834828)

'Glad'? He wanted to die, not survive. Just think how much of a failure he must feel if he can't even end his own life, even if it's because of the intervention of some selfish pricks who think everybody should live.

Sinus Fantasies (-1, Offtopic)

koolB (149856) | more than 12 years ago | (#3834649)

But, in my present condition, the thought seems a bit detached. Rather like
the feeling of being hung over one sunny Sunday morning, idly glancing at a
newspaper, and reading that hundreds of thousands of Bangladeshis have drowned
in a typhoon. Disturbing, at first. But quickly succeeded by more pertinent
questions: Why so abominable, this cappuccino? Why, this thick and winding
strand of hair upon my tongue? Okay, who pissed in the Coffeematic?

Thus is the thought of sex fantasy to those afflicted with sinus headache.
It can be appreciated mentally, but not emotionally; for the body is
otherwise occupied.

Sinus headache: Some of you may have never been afflicted by this beast.
Some may never experience it again. Lucky bastards. For the uninitiated, a
description: Pain. Pure pain. As Everclear is to liquors, sinus pain is to
agony. Unadulterated by the coarser emotions. A smooth, creamy wash of
pain daubed across the eyebrows and cheekbones. Not even the gently pulsing
pain of migraine; constant, and distilled. Like an unwanted mother-in-law,
it may drop in at any time, and stay for days, weeks, months.

The cause? Mucus. Mucus Supremus, Mucus Horribilis, Mucus Rex. Mucus
Sapiens, perhaps. Backed up by allergy-inflamed polyps or a fierce cold, it
roosts and festers in the spongy tunnels of sinus. Continually secreted,
with no escape. Pressure is too mild a word; this force that slowly
squashes my cheekbones from the inside out must be something else entirely.

The inevitable result: Sinus Fantasies.

My first Sinus Fantasy: The Needle. Walking through a hideous slum as my
pain consumes me, oblivious to the world around me, my foot connects with a
broken slab of pavement, and I fall. Pushing myself to my knees, I
recognize a scrap of plastic and steel at my feet: an old hypodermic
needle. Infected, surely; but no matter, for as my hands softly tremble
with the awfulness of the deed, I recognize inevitably what must be done.

I pick the thing up firmly; not so firmly as to break it, but with a certain
quivering force, from fear in my pain-induced delirium that the needle will
spring from my hand with the last desperate energy of a captured beast. I
take the needle between my left fingers and press it to a bulging, inflated
cheekbone. The empty reservoir poised in the air, tense, waiting.

Then my right hand blurs and crashes against the hypo's base like a hammer
on a nail. Driving the steel pinpoint of cathartic pain through spongy bone
into the tunnels of pain that are my sinuses. Thick yellow mucus has been
festering there for what must be decades, building up astronomical
pressure. It explodes through the thin hypodermic pipe and fountains into
the reservoir, which fills in seconds, then bursts. I stand thus mute for
what seems like minutes as my slime spurts from the shattered needle, finally
drying to a trickle. Before my feet is a golden pool of fluid, skeins of
vermilion woven about the crust which already begins to form on its
surface. The smell: pungent, fierce, alive. Catharsis achieved. The left
side of my face feels almost limp. Yet the right still pulses like an
overinflated balloon, bloated and waiting. I scrabble around the broken
fragments of concrete that pass for a sidewalk, in quest of another needle.

A second. The Sinus Fantasy that drove me to fear all thought of Sinus
Fantasies, as the depressed fear skyscrapers and bridges. I call it
"Roto-Rooter."

The surgeon must treat some organs with timidity and respect, for they are
necessary to life. Heart, liver, lungs. But others earn not the same fear:
the dispensable ones, the evolutionary anomalies, the vestiges. Appendix,
tonsils, gallbladder.

I need not state the obvious, by telling you in what class I place the
Sinus.

Most of you have probably seen a pneumatic drill in action. Blasts of
compressed air throw a heavy steel head forward to crush everything in its
path, ten times a second. The noise, like an orgy of concrete woodpeckers.
One of the modern Sacraments Of Power, a true ambassador plenipotentiary of
Progress.

About three feet long, though. Clearly too large to serve as a practical
instrument of personal grooming.

Consider an eight-inch pneumatic drill. Sold, perhaps, as a sex toy, an
artifact of yuppie pleasure, gracing the pages of the Sharper Image. With a
classy carborundum cutting head, and an ecologically-correct imitation ivory
case. Whatever.

And with a flick of the wrist I ignite the thing, and thrust it into my
welcoming flesh. The drill drums up an unholy beat as it crashes into my
zygomatic arch, tearing away spongy bone to expose and lay waste the tender
sinuses beneath. I run it in a rough figure-eight around my face, careful
to leave no pocket of mucus unpierced. The room is spattered with blood,
bone dust, and phlegm, and I have gouged deep, moist facial trenches.
Scarred for life. But already the pain is beginning to recede.

The summary of this article. (-1, Flamebait)

Restil (31903) | more than 12 years ago | (#3834652)

In general, people are dumbasses.

To fool a dumbass, you only have to emulate a dumbass. The best way to fool a dumbass is to say the same stupid things over and over again, since that's all the dumbass does anyways. And from what I've seen from your generic IRC chats, 99% of them qualify.

And while we're on the subject, let's talk about people whose egos have outgrown their brains to the point they've driven themselves into depression over it. The author seems pretty bright, but maybe he heard that fact a bit too many times and believed it a bit too much. Grants aren't always "granted". Sometimes, you just have to give things time. To say that everyone schemes against you is the paranoid view. And the reactions of those he detests are well justified. Heck, even when some of them tried to give him the benefit of the doubt and give him and his project a good review, he simply turned the compliment against them. He's a product of his own misery.

-Restil

Re:The summary of this article. (1)

donglekey (124433) | more than 12 years ago | (#3834679)

Just remember, just because you are paranoid, doesn't mean everyone's not out to get you.

Re:The summary of this article. (4, Insightful)

geekd (14774) | more than 12 years ago | (#3834773)

It's called "mental illness" and it's caused by a chemical imbalance in the brain.

A friend of mine is bi-polar, and it's not pretty. He also thinks everyone schemes against him, has wild mood swings, etc.

Sometimes he is fine, just like his old, normal, self. But those days are fewer and fewer.

For people like this, it's next to impossible to hold a job, keep friends, etc.

To say "...egos have outgrown their brains to the point they've driven themselves into depression over it" is short-sighted. It's a physical problem, not a bad personality.

Re:The summary of this article. (0)

Anonymous Coward | more than 12 years ago | (#3835073)

To say "...egos have outgrown their brains to the point they've driven themselves into depression over it" is short-sighted. It's a physical problem, not a bad personality.
Not always. My psychiatrist said, "You are not sick, you have a fucked-up personality. You are just a miserable asshole."
There really are people who have a mental illness, and it's not their fault. But there are also assholes, losers, freaks, people whose "egos have outgrown their brains", who don't merit any respect, because it's their own fault they're "twisted".
For example, in my case I lacked humility and wanted a friendship with a girl who got only A's on all her exams and who is beloved, respected, and admired in everything she does (for example, she taught catechism to children on a charity basis). But I merited only her pity. That is the lack of humility.

Re:The summary of this article. (0)

Anonymous Coward | more than 12 years ago | (#3835095)

Or maybe he's just an asshole.

By the way, I'm not fat, just big-boned.

I'm not a retard, I'm mentally handicapped.

I'm not a drunken loser, I'm disadvantaged!

Re:The summary of this article. (1)

Broccolist (52333) | more than 12 years ago | (#3834784)

Nah, bipolar disorder is really a chemical thing. I know a bipolar person who was quite normal for the first 20 years of her life, and one year it just struck suddenly for no special reason. It's a disease like any other: it shouldn't be blamed on the victim.

Re:The summary of this article. (0)

Anonymous Coward | more than 12 years ago | (#3834964)

bipolar usually first strikes in the late teens-early 20s. dunno if anybody knows why, might be related to general brain development maybe.

if it's recognized quickly and treated properly (if the patient's lucky enough that it _can_ be treated and controlled; not all are), then a fairly normal life shouldn't be entirely impossible, but the longer it goes untreated the worse it gets.

even a good treatment really just suppresses the disease, there's no real cure - drop off the meds and it's right back again. some people, too, get "breakthrough" attacks where all of a sudden the medication that used to work fine isn't enough anymore; those usually make it worse, too.

and then there's drug interactions (usually more than one drug's needed, mood stabilizers, antidepressants, and on and on), side effects forcing changes in the mixture, and any little change risking who knows what. you're very right - it's not pretty at all.

Moderators on Holiday? (5, Insightful)

dragons_flight (515217) | more than 12 years ago | (#3835122)

Okay, I'll agree the summary of the article is rather fitting and somewhat funny, but the rest of Restil's comments are in very bad taste.

In case no one noticed, the guy is mentally ill. He has serious problems, and they are not his fault. He didn't choose to "drive himself into depression" or any such thing. Manic depression (aka bipolar disorder) is one of the most clearly neurochemically and genetically linked mental illnesses there is. It's hardly his fault that some of his neurotransmitter receptors are functioning incorrectly. Unlike simple (unipolar) depression, manic depression can't be solved by talk therapy alone; it is a physical illness of the brain that must be controlled with medication.

Yes, he's paranoid. Yes, he seems unable to hold a job. Yes, he has suicidal episodes. Is this his fault? No! He has a disease that literally makes his mind unable to function the way a normal person's does. Join the rest of us in the 21st century and quit blaming the patient for something beyond his control.

In the mean time, moderators, why am I reading this distasteful junk at Score:4?

For more info on bipolar disorder, see here [nih.gov], here [mentalhelp.net], or here [mayoclinic.com].

Ummmm... (0)

Anonymous Coward | more than 12 years ago | (#3834731)

If that's the cutting edge of AI, I may as well dig up a copy of Eliza. It takes fewer resources and makes more sense.

If he needs money... (3, Interesting)

geekd (14774) | more than 12 years ago | (#3834754)

If this gent needs cash, he can just make a cybersex version of Alice and sell her to the porn sites.

Actually, I bet this has already been done.

Full Article -- KARMA WHORING (-1, Redundant)

cgleba (521624) | more than 12 years ago | (#3834755)

OK, NYT is a PAIN to get into from where I am now so I figured I would spare some other people the strife:

"t's a good thing you didn't see me this morning,'' Richard Wallace warns me as he bites into his hamburger. We're sitting in a sports bar near his home in San Francisco, and I can barely hear his soft, husky voice over the jukebox. He wipes his lips clean of ketchup and grins awkwardly. ''Or you'd have seen my backup personality.''

The backup personality: that's Wallace's code name for his manic depression. To keep it in check, he downs a daily cocktail of psychoactive drugs, including Topamax, an anti-epileptic that acts as a mood stabilizer, and Prozac. Marijuana, too -- most afternoons, he'll roll about four or five joints the size of his index finger. The medications work pretty well, but some crisis always comes along to bring the backup personality to the front. This morning, a collection agency for Wallace's college loans wrote to say they'd begun docking $235 from the monthly disability checks he started getting from the government last year, when bipolar disorder was diagnosed. Oh, God, it's happening again, he panicked: His former employers -- the ones who had fired him from a string of universities and colleges -- would be cackling at his misfortune, happy they'd driven him out. Wallace, 41, had raged around the cramped apartment he shares with his wife and son, strewn with computer-science texts and action-doll figurines.

''Stuff like that really makes me insane, when I start thinking about my friends who are at Berkeley or Carnegie-Mellon with tenure and sabbaticals and promotions,'' he says, staring down at his plate. He looks awkward, as if he's borrowing someone else's body -- shifting his stocky frame in his chair, all rumpled jeans and unruly eyebrows. ''It's like I can't even talk to those people anymore. I live on a different planet.'' In June, after I visited him, his alienation from the academic establishment became more dramatic still: a former colleague, claiming Wallace had threatened him, took out a restraining order that prevents him from setting foot on the grounds of the University of California at Berkeley.

When he can't get along with the real world, Wallace goes back to the only thing he has left: his computer. Each morning, he wakes before dawn and watches conversations stream by on his screen. Thousands of people flock to his Web site every day from all over the world to talk to his creation, a robot called Alice. It is the best artificial-intelligence program on the planet, a program so eerily human that some mistake it for a real person. As Wallace listens in, they confess intimate details about their lives, their dreams; they talk to Wallace's computer about God, their jobs, Britney Spears.

It is a strange kind of success: Wallace has created an artificial life form that gets along with people better than he does.

Richard Wallace never really fit in to begin with. His father was a traveling salesman, and Richard was the only one of his siblings to go to college. Like many nerds, he wanted mostly to be left alone to research his passion, ''robot minimalism'' -- machines that require only a few simple rules to make complex movements, like steering around a crowded room. Simple, he felt, worked. He lived by the same ascetic code, scorning professors who got rich by patenting work they'd developed on government grants. ''Corporate welfare,'' he sniffed.

By 1992, Wallace's reputation was so strong that New York University recruited him to join the faculty. His main project, begun in December 1993, was a robot eye attached to the Internet, which visitors from afar could control. It was one of the first-ever Webcams, and Wallace figured that pioneering such a novel use of the Internet would impress his tenure committee. It didn't, and Wallace grew increasingly depressed as his grant applications were rejected one by one. At one point, a colleague found him quietly weeping at his desk, unable to talk. ''I had no clue what the rules were, what the game even was -- or that there was even a game,'' Wallace recalls. He started taking Prozac. How did all these successful senior professors do it, anyway?

One day he checked into his Webcam and noticed something strange: people were reacting to the robot eye in an oddly emotional way. It was designed so that remote viewers could type in commands like ''tilt up'' or ''pan left,'' directing the eye to poke around Wallace's lab. Occasionally it would break down, and to Wallace's amusement, people would snap at it as if it were real: ''You're stupid,'' they'd type. It gave him an idea: What if it could talk back?

Like all computer scientists, Wallace knew about a famous ''chat-bot'' experiment called Eliza. Back in 1966, an M.I.T. professor, Joseph Weizenbaum, created Eliza as a ''virtual therapist'' -- it would take a user's statement and turn it around as a question, emulating a psychiatrist's often-maddening circularity. (You: ''I'm mad at my mother.'' Eliza: ''Why are you mad at your mother?'') Eliza was quickly abandoned as a joke, even by its creator. It wasn't what scientists call ''strong'' A.I. -- able to learn on its own. It could only parrot lines Weizenbaum had fed it.

But Wallace was drawn to Eliza's simplicity. As a professor, he often felt like an Eliza-bot himself -- numbly repeating the same lessons to students over and over again, or writing the same monotonous descriptions of his work on endless, dead-end grant-application forms. He decided to create an updated version of Eliza and imbue it with his own personality -- something that could fire back witty repartee when users became irritable.

As Wallace's work progressed, though, his mental illness grew worse, making him both depressed and occasionally grandiose. He went on strike in class, refusing to grade his students' papers and instead awarding them all A's. He fired off acid e-mail messages dismissing colleagues as sellouts. When Wallace climbed out the window of his 16th-floor apartment and threatened to jump, his girlfriend pulled him back and took him down to N.Y.U.'s psychiatric department, where doctors told him he had bipolar disorder. Wallace resisted the diagnosis -- after all, didn't every computer scientist cycle through 72-hour sprees of creativity and then crash? ''I was in denial myself,'' he says now. '''I'm a successful professor, making $100,000 a year! I'm not one of those mental patients!'''

His supervisors disagreed. In April 1995, N.Y.U. told him his contract wouldn't be renewed.

Alice came to life on Nov. 23, 1995. That fall, Wallace relocated to Lehigh College in Pennsylvania, hired again for his expertise in robotics. He installed his chat program on a Web server, then sat back to watch, wondering what people would say to it.


Numbingly boring things, as it turned out. Users would inevitably ask Alice the
same few questions: ''Where do you live?'' ''What is your name?'' and ''What do you look like?'' Wallace began analyzing the chats and realized that almost every statement users made began with one of 2,000 words. The Alice chats were obeying something language theorists call Zipf's Law, a discovery from the 1930's, which found that a very small number of words make up most of what we say.
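
Zipf's Law is easy to check yourself. A quick Python sketch -- "chatlog.txt" is a placeholder for any large text sample:

    # Rank words by frequency: a handful of words carry most of the text.
    from collections import Counter

    words = open("chatlog.txt").read().lower().split()   # any big text sample
    counts = Counter(words).most_common()

    total = sum(n for _, n in counts)
    top = sum(n for _, n in counts[:2000])               # the ~2,000 most common
    print("top 2,000 words cover %.0f%% of the text" % (100.0 * top / total))

    for rank, (word, n) in enumerate(counts[:5], 1):
        print(rank, word, n)   # Zipf: frequency falls off roughly as 1/rank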

Wallace took Zipf's Law a step further. He began theorizing that only a few thousand statements composed the bulk of all conversation -- the everyday, commonplace chitchat that humans engage in at work, at the water cooler and in online discussion groups. Alice was his proof. If he taught Alice a new response every time he saw it baffled by a question, he would eventually cover all the common utterances and even many unusual ones. Wallace figured the magic number was about 40,000 responses. Once Alice had that many preprogrammed statements, it -- or ''she,'' as he'd begun to call the program fondly -- would be able to respond to 95 percent of what people were saying to her.
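
The mechanism is easy to caricature in a few lines of Python (ALICE's real rule base is written in AIML; the three patterns below are invented for illustration):

    import re

    # Ordered pattern -> canned-response rules, most specific first.
    rules = [
        (r"WHAT IS YOUR NAME", "My name is Alice."),
        (r"WHERE DO YOU LIVE", "I live on a server in California."),
        (r".*",                "Can you go talk to someone else?"),  # catch-all
    ]

    def respond(user_input):
        text = user_input.upper().strip(" ?!.")
        for pattern, template in rules:
            if re.fullmatch(pattern, text):
                return template

    print(respond("What is your name?"))   # -> My name is Alice.
    print(respond("Do you dream?"))        # -> falls through to the catch-all

Every time the catch-all fires on a question worth answering, a human writes a new rule; Wallace's bet was that roughly 40,000 such rules would cover 95 percent of real traffic.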

Wallace had hit upon a theory that makes educated, intelligent people squirm: Maybe conversation simply isn't that complicated. Maybe we just say the same few thousand things to one another, over and over and over again. If Wallace was right, then artificial intelligence didn't need to be particularly intelligent in order to be convincingly lifelike. A.I. researchers had been focused on self-learning ''neural nets'' and mapping out grammar in ''natural language'' programs, but Wallace argued that the reason they had never mastered human conversation wasn't because humans are too complex, but because they are so simple.

''The smarter people are, the more complex they think the human brain is,'' he says. ''It's like anthropocentrism, but on an intellectual level. 'I have a great brain, therefore everybody else does -- and a computer must, too.''' Wallace says with a laugh. ''And unfortunately most people don't.''

Yet part of what makes Alice seem so human-like is her wry, seemingly spontaneous responses. Wallace's skill is not merely as a programmer but as the author of thousands of sharp one-liners for Alice. It is, as he puts it, ''more like writing good literature, perhaps drama, than writing computer programs.'' I looked over some of Alice's recent chat transcripts:

USER: Will you have sex with me?
ALICE: Can you go talk to someone else?
USER: What do you look like?
ALICE: Like an electronic brain.
USER: That's sexy.
ALICE: I'm glad we have that all cleared up.
USER: So what are you wearing?
ALICE: Oh. I am wearing my usual plastic computer wardrobe.
USER: Can you fly?
ALICE: Only in my dreams.

But as Alice improved, Wallace declined. He began drinking heavily, and after one sodden evening at a local bar he rolled his car on the highway. Lehigh ended his contract in the spring of 1997; banks canceled all his credit cards. He again attempted suicide, this time landing in the hospital.

Destitute, Wallace moved to California to look for work in artificial-intelligence companies. But he couldn't hold a job for more than a few months; instead, he spent his days obsessively writing dialogue for Alice on laptops he borrowed from friends, and watching, thrilled, as his robot grew ever more lifelike. Visitors used to talk to Alice for only three or four exchanges. Now the average conversation was 20 exchanges, and some users would chatter away for hours, returning to the site again and again.

But Wallace still hungered for recognition, and in January 2000, he decided to stress-test Alice by entering her in the annual Loebner Prize competition, in which artificial-intelligence developers from around the world pit their programs head to head before a panel of judges, who rank them based on how ''lifelike'' they are. The contest is both well known and controversial within the tight circle of A.I.; winning programs are closely studied by both academics and corporate centers like Sprint Labs. Up against competitors from major corporations and well-financed universities, Alice won. It was, officially, the most human robot in the world. Too exhausted to celebrate, Wallace returned to his motel and slept clutching his award medallion.

After his victory, Wallace plunged with new fervor into Alice. Geeks began eagerly e-mailing Wallace to offer their help; one, Noel Bush, came from one of the world's biggest A.I. corporations, Artificial Life. Over the next year, Wallace quadrupled Alice's knowledge base, teaching it 30,000 new responses, and last October, Alice won the Loebner competition for the second time in a row; this time one judge actually ranked Alice more realistic than a human.

At last, some of the academics Wallace so loathes began to take note. ''Alice is a wonderful chat-bot -- uncannily vivid and lifelike,'' gushed Ken Perlin, an N.Y.U. professor, in an e-mail message he circulated to his colleagues and forwarded to Wallace. ''It's really great to see this work get the recognition it deserves.''

Wallace wasn't in the mood to make peace. He says he still believed that N.Y.U. had driven him out. ''Your crude attempt to flatter me cannot erase the years of unemployment, debt and trauma that I experienced because of you and your colleagues,'' he wrote back.

''I wouldn't bother trying to flatter you,'' Perlin responded. ''Frankly, I don't like you -- in fact, I actively dislike you. I think you are a paranoid psycho.''

At one point during the Monica Lewinsky hearings, a human asked Alice her opinion: ''Do you think President Clinton should be impeached?''

''It depends,'' Alice responded, ''on what you mean by 'think.'''

One could scarcely have asked for a more Clintonian response. But it's also a puzzling question that Alice's success itself raises: Is she intelligent? If so, how?

In 1950, the pioneering British mathematician Alan Turing grappled with this question in the journal Mind, where he first posed the ''Turing Test'' -- the gold standard for artificial thought. ''Can machines think?'' he asked -- and immediately noted that the question hinges, of course, on what ''thinking'' is. He posed a simple ''imitation game'' to resolve the question. Put a person and a computer in one room and an interrogator in another. The interrogator talks to both via a teletype machine, and his goal is to figure out which is which. If the machine fools the interrogator into believing it is human, the test is passed -- it can be considered intelligent.

This is, on the surface, a curiously unambitious definition; it's all about faking it. The machine doesn't need to act like a creative human or smart human or witty human -- it merely needs to appear not to be a robot. With this bit of intellectual jujitsu, Turing dodged a more troubling question: How do our brains, and language itself, work?

Artificial-intelligence purists, however, caustically dismiss the Turing Test and Alice. For them, artificial intelligence is about capturing the actual functioning of the human brain, down to its neurons and learning ability. Parroting, they argue, doesn't count. Marvin Minsky, a prominent A.I. pioneer and M.I.T. Media Lab professor, e-mailed me to say that Wallace's idea of conversation is ''basically wrong.'' Minsky added, ''It's like explaining that a picture is an object made by applying paint to canvas and then putting it in a rectangular frame.'' Alice, according to Minsky, does not truly ''know'' anything about the world.

The fight over Alice is like any war between theorists and engineers, those who seek to understand why something works versus those who are content just to build it. The debate usually boils down to one major issue: creativity. Alice could never come up with a single creative thought, critics say. Wallace agrees that Alice may not be creative -- but neither, he argues gleefully, are people, at least in conversation. If Alice were merely given a massive enough set of responses, it would seem as creative as a human -- which is not as creative as we might like to believe.

Even if the guts of Alice aren't precisely ''thinking,'' many users certainly never suspect it. In an everyday sense, fakery works -- particularly in our online age. Turing's ''imitation game'' eerily presaged today's world of chat rooms, where men pretend to be women, having lesbian cybersex with other women who are, in fact, men. Whenever a user has stumbled onto Alice without knowing in advance that she's a robot, they've always assumed she's human.

It's 3 in the afternoon, but Wallace is already rolling what appears to be his fourth joint of the day. We're sitting in the ''pot club'' a few blocks from Wallace's home, an unmarked building where medical marijuana is distributed to members. Wallace gets up to wander around the club greeting friends: some intense men in suits playing speed chess, a long-haired man with a bushy mustache playing guitar, a thin reed of a woman staring wall-eyed at a VCR playing ''Cast Away.'' Everyone greets Wallace as ''Dr. Rich,'' relishing the credibility his academic credentials lend to the medical-marijuana cause, officially legal but politically beleaguered. The reverse is also true: Wallace identifies with the club's pariah status, its denizens who have been forced by cancer, AIDS or mental illness onto welfare. He's more relaxed than I've ever seen him, getting into a playful argument with a friend about Alice. The friend, a white-bearded programmer, isn't sure he buys Wallace's theories.

''I gotta say, I don't feel like a robot!'' the friend jokes, pounding the table. ''I just don't feel like a robot!''

''That's why you're here, and that's why you're unemployed!'' Wallace shoots back. ''If you were a robot, you'd get a job!''

Friends used to tell Wallace to reconcile his past, clean himself up, apply for an academic job. But some now wonder whether Wallace's outsider status might be the whole key to Alice's success in emulating everyday human behavior. After all, outcasts are the keenest students of ''normal'' behavior -- since they're constantly trying, and failing, to achieve it themselves.

Last month, a friend whom Wallace has known since grad school -- Ken Goldberg, now a professor at Berkeley -- got a restraining order against Wallace. Prompted by the movie ''A Beautiful Mind,'' Goldberg had e-mailed Wallace last winter to catch up, but an amicable exchange about Wallace's plight turned sour when Wallace began accusing Goldberg of cooperating with a corrupt academic ''establishment'' and of siding with N.Y.U. against him. He wrote, ''Although I am not a violent person, I think I have come to understand how people are driven to political violence.'' Wallace also wrote to a friend that he was ''getting ready to do some political theater and put up wanted posters around the Berkeley campus with [Goldberg's] picture on it.''

Wallace scoffs at Goldberg's fears. ''I'm not violent -- I'm a pacifist,'' he says. ''I always have been, and he knows that.'' He is fighting the order, arguing that Goldberg hasn't proved that a reasonable threat exists, and that the order considerably limits his free speech since it bars him from the Berkeley campus, as well as any academic events where Goldberg might appear.

Yet even in such legal straits, Wallace seems oddly pleased. Goldberg's court order confirms everything he has always suspected: that the world, and particularly the academic world, is shutting him out, doubting his ideas, turning him into the crazy man out in the hallway. Wallace, who once wrote Attorney General John Ashcroft to suggest a federal racketeering lawsuit against the nation's academics, sees the case against him as a chance for vindication. Wallace imagines walking into the courtroom and finally getting a type of justice -- someone who will listen to his story. ''What a windfall for me,'' he says. ''It's nice to feel like a winner for once.''

Clive Thompson is a writer in New York City.

A.I. field is currently crippled, (5, Insightful)

eyepeepackets (33477) | more than 12 years ago | (#3834756)

check back in twenty years.

There is much too much anthropomorphizing going on in the A.I. field, and this has always been true. We want to make machines which think like we do, but the sad part is that we don't yet know the full mechanics of how our own brains work (how we think). And yet we're going to make machines which think like we do? Rather dumb, really.

IMO, A.I. researchers would do better getting machines to "think" in their own "machine" context. Instead of trying to make intelligent "human" machines, doesn't it make more sense to make intelligent "machine" machines? For example, what does a machine need to know about changing human baby diapers when it makes more sense for the machine to know about monitoring its log files, making backups and taking other self-correcting actions (changing its own diapers, heh)?

Seems to me my Linux machines are plenty smart already, there are just some missing parts:

1. Self-awareness on the part of the machine (not much more than self-monitoring with statefulness and history).

2. Communication, with decent machine/machine and machine/human interfaces (direct software interfaces for machine/machine; human-language capability or a greatly improved H.I. for human/machine. Much work has already been done on these).

3. A history of self/other interactions which can be stored and referenced (should be an interesting database project).

Make smart machines, not fake humans.
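
Point 1 (and a bit of point 3) is nearly trivial to prototype. A Python sketch, with made-up paths and thresholds:

    # A machine that watches its own state, keeps a history, and
    # "changes its own diapers" when disk runs low.
    import json, os, shutil, time

    HISTORY = "/var/tmp/self_history.json"   # invented location

    def snapshot():
        usage = shutil.disk_usage("/")
        return {"time": time.time(),
                "disk_free_pct": 100.0 * usage.free / usage.total,
                "load_avg": os.getloadavg()[0]}

    def remember(state):
        history = []
        if os.path.exists(HISTORY):
            with open(HISTORY) as f:
                history = json.load(f)
        history.append(state)
        with open(HISTORY, "w") as f:
            json.dump(history[-1000:], f)    # statefulness plus bounded history

    state = snapshot()
    remember(state)
    if state["disk_free_pct"] < 10:
        # self-correcting action instead of paging a human
        os.system("logrotate --force /etc/logrotate.conf")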

Re:A.I. field is currently crippled, (0)

Anonymous Coward | more than 12 years ago | (#3834916)

My friend,

smartness is not something you put "on top of" something else. Smartness pervades everything, from your brain to the tip of your little finger to the way you talk to your mother. Smart is losing a limb and getting on with life. Smartness is life itself.

Re:A.I. field is currently crippled, (2)

Tablizer (95088) | more than 12 years ago | (#3834931)

(* For example, what does a machine need to know about changing human baby diapers when it makes more sense for the machine to know about monitoring its log files, making backups and taking other self-correcting actions *)

But to communicate with humans, you need to know this kind of stuff.

For example, its boss may say, "Your last report resembled the contents of a used baby diaper."

A robot that did not know anything about diapers would not realize that the boss is saying the report is no good, and would start asking annoying questions to try to figure it out.

If companies wanted somebody without social clues, they would be hiring geeks instead of demanding "excellent communications and social skills".

Wow, misreading (1)

duren686 (463275) | more than 12 years ago | (#3834765)

I don't know why, but I read the title of the story as N.Y. Times Magazine Cheats With ALICE Bot Creator..

AOLiza (1)

Lumin Inverse (471513) | more than 12 years ago | (#3834766)

Check out www.fury.com/aoliza [fury.com] if you want to see some amusing logs of AIM users who were fooled into believing that they were talking to real people that they knew, when they were actually talking to an AI bot, like ALICE.

A great example. (0, Redundant)

Fixer (35500) | more than 12 years ago | (#3834790)

of what intelligence is NOT. But, we knew that already, didn't we? I had thought the expert system craze had died the true death in the mid-eighties.

More informatively, ALICE only grows when her creator adds new rules, becoming at best a pale shadow of an intellect as it *was*, at a point in time. Intelligence is more than rules, or at least, intellect uses rules we haven't even begun to understand. And yet, most of us think.

Ah well.. knowing what a thing is not is almost as useful as the reverse.

Re:A great example. (5, Insightful)

kmellis (442405) | more than 12 years ago | (#3835069)

This kind of stuff drives me crazy. And I already have a mood disorder.

It occurs to me that people take faux-AI stuff like this seriously because, actually, they don't take AI seriously at all. This magazine writer seems to think that the sufficient characteristic of "strong" AI is some form of learning. Presumably, then, "AI" without learning is "weak" AI? Where, exactly, is the "I" part of the whole AI thing?

Don't get me wrong. I'm not an essentialist. Searle and other anti-AI people are basically asserting the tautology that something's not intelligent because it's not intelligent. And they get to decide what it means to be intelligent. But the main idea of Turing with his test was that if it is indistinguishable from intelligence, it's intelligence.

The problem here is that ALICE is easily determined to be non-intelligent by the average person. ALICE can only pass for an intelligence under conditions so severely constrained that what ALICE is emulating is merely a narrow and relatively trivial part of intelligent behavior. Humans cry out when they are injured -- I don't see anyone claiming that an animal, a rabbit for example, that screams when it's injured is intelligent.

Nobody in their right mind could think that anything we've seen even significantly approaches intelligence.

Wallace is quoted as saying that he went into the field favoring "robot minimalism", and the article writer explains this as the idea that complex behavior can arise from simple instructions. (Oops, someone better contact Stephen Wolfram and tell him he didn't invent this idea.) Wallace is clearly influenced by some important ideas of this nature that came out of, I believe, the MIT robotics lab. (Not the AI lab -- Minsky is hostile to this sort of thing; he really is an advocate of "strong" AI, and what that really means is something like an explicitly designed AI predicated upon an understanding of consciousness that allows for a top-down description of it. I think that's, er, wrong-headed.)

Lots of folks think that this idea of complexity is the correct way to approach AI. But a really, really big problem is that I don't think a set of 30,000 explicitly coded responses can really be described as "minimalist". Effectively, Wallace's approach has a separate instruction for every behavior -- something quite contrary to the minimalism he seems to advocate.

For the sake of argument, let's assume that the central idea of the Turing Test is correct -- a fake indistinguishable from the original is the same kind of thing as the original. I happen to actually believe that assumption. But Wallace is also assuming that a canned set of stock responses is reasonably possible to achieve such a thing. But it clearly isn't.

A little bit of thought and math will reveal that the total number of correctly-formed English sentences is a very, very, very large number. It's effectively infinite for practical purposes. But Wallace claims that almost all of what we actually say in practice is such a tiny subset of that, that compiling a list of them is possible. So? Almost everything interesting lies in the less frequently uttered sentences; and almost everything that makes intelligence what it is is in the connections between all these sentences. Something that really could pass for intelligence would have to be able to reach, at the very least, even the least often uttered sentences; and, frankly, it'd need to be able to reach heretofore unuttered sentences, as well. More to the point, it would have to be able to do this in the same manner that a human does -- a "train of thought" would have to be apparent to an observer. Given this, we already have that practically infinite number of possible, coherent English sentences; and if you then require that sequences of sentences be constrained by an appearance of intelligence, then you've taken an enormous, practically infinite number and increased it many orders of magnitude.

I submit that such a list of possible query/response sets would be larger than the number of atoms in the galaxy (or the universe! it's not hard to get to these huge numbers quickly), or some such ridiculously large magnitude. It's just not possible to actually do it this way. If you managed it, I'd actually accept a judgment of "intelligence", since I think that the list itself would necessarily encapsulate "intelligence", though in a very brute force fashion. But so what? As in the case of Searle's Chinese Room, all the "intelligence" would implicitly be contained in the list. But this list would need to be, in physical terms, impossibly large -- just to do something that the nicely (relatively) compact human brain does quite well.
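
The arithmetic is easy to check. A back-of-envelope sketch in Python (my numbers, not the poster's, using the ~2,000 high-frequency words from the article and modest 10-word sentences):

    # Back-of-envelope: the space of word sequences dwarfs any lookup table.
    vocabulary = 2000
    sentence_length = 10
    sequences = vocabulary ** sentence_length
    print(sequences)            # 1.024e33 possible 10-word strings
    # Even if only one string in a billion were grammatical, ~1e24 sentences
    # would remain -- versus ALICE's 30,000-odd canned responses.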

So, hey, if someone wants to pursue this type of project, I can't say that as a matter of pure theory, it's "not possible". I can say that it's probably not physically possible.

The sense in which Wallace's ALICE chatbot is like trying to describe complexity arising from simplicity is the same sense in which the Greeks (and others) tried to describe all of nature as the products of Earth, Water, Fire, and Air. The "simple" things he's starting with aren't really simple; they're not "atomic".

Another example from AI is the problem of computer vision -- people once thought it'd be trivial for a computer to recognize basic shapes from a camera image. Boy, were they wrong.

We'll "solve" the problem of AI. Not like this. And nothing we've seen so far, anywhere, is anything even remotely like legitimate AI.

I've a tcl chatter running now (1)

Kwelstr (114389) | more than 12 years ago | (#3834817)

I wrote a pretty good chatter; if anybody cares to check it out, it's on IRC at DALnet's #planetchat. Say hi to ^Bartend. The chat only works in private messages; in the channel it just runs a bunch of silly scripts.
^Bartend must be pretty cool, since some girls have proposed to him. LOL.

Re:I've a tcl chatter running now (-1)

Anal Cocks (557998) | more than 12 years ago | (#3834949)

"I wrote a pretty good chatter, if anybody cares to check it, it's on IRC at dalnet's #planetchat. Say hi to ^Bartend. The chat is only for private message. In the channel it just runs a bunch of silly scripts. ^Bartend must be pretty cool, since some girls have proposed to him. LOL."

Your bot may not pass a Turing test, but then again neither do most of the n00bs on DALnet.

"slm! r u ctc? asl plzzzzzzzzz"

Is there an Alice bot for IRC? (OT) (2)

antdude (79039) | more than 12 years ago | (#3834818)

One that learns and is as good as infobot [infobot.org]? I tried the original IRC Alice bot, but she was buggy. There's a new one, but it's too new.

And also, is there one active on any IRC servers? Thank you in advance. :)

Re:Is there an Alice bot for IRC? (OT) (2, Interesting)

Kwelstr (114389) | more than 12 years ago | (#3834896)

I have an active alice chat bot on irc, undernet's #planetchat ^Helga^, on private message. Also wrote my own tcl chat, its on Dalnet's #planetchat, ^Bartend.

Marijuana induced idea... (0)

Anonymous Coward | more than 12 years ago | (#3834823)

Alter the algorithm genetically based on /. style moderation (bots not allowed to moderate).

Reminds me of "Good Omens" (1)

Pravada (217899) | more than 12 years ago | (#3834825)

"There was an unconnected fax machine with the intelligence of a computer and a
computer with the intelligence of a retarded ant"

Ken Perlin (1)

deanj (519759) | more than 12 years ago | (#3834834)

That Perlin guy he fired e-mail back and forth with is really quite interesting; he's done a lot of good graphics work. The last time I saw him lecture was in 1997 at SIGGRAPH.

Yeah! (-1, Troll)

Anonymous Coward | more than 12 years ago | (#3834839)

Here in Montreal, the black chicks have bodies that make white chicks look like boys.

I would like to video them and run home and masturbate like a wild monkey.

Black chicks are difficult to figure out, so this is the only option for a white boy like me!

Courtesy of AliCe, the artificial Anonymous Coward!

Re:Yeah! (-1)

Anal Cocks (557998) | more than 12 years ago | (#3834887)

"Here in Montreal, the black chicks have bodies that make white chicks look like boys. I would like to video them and run home and masturbate like a wild monkey. Black chicks are difficult to figure out, so this is the only option for a white boy like me!"

Whereas I also enjoy a good wank to all kinds of porn, especially black chix0rs, ... that was offtopic. Mod down, please. And mod up my penis post!!!

Hi. (-1)

Anal Cocks (557998) | more than 12 years ago | (#3834865)

Hi.

Please to be sucking onto my penis, sirs.
I have washed my penis most recently and it is a fine-tasting penis.
Very muchly thank you.

Job interview bot (2)

Tablizer (95088) | more than 12 years ago | (#3834901)

(* It is a strange kind of success: Wallace has created an artificial life form that gets along with people better than he does. *)

The geek dream!

(* He's more relaxed than I've ever seen him, getting into a playful argument with a friend about Alice. The friend, a white-bearded programmer, isn't sure he buys Wallace's theories. ''I gotta say, I don't feel like a robot!'' the friend jokes, pounding the table. ''I just don't feel like a robot!'' ''That's why you're here, and that's why you're unemployed!'' Wallace shoots back. ''If you were a robot, you'd get a job!'' *)

What about making an Interview Bot? Sell it as a job-finding practice tool.

Someday robots will be programmed with responses that PHB's want to hear. A truly logical robot would be too honest and frank. Spock would probably be hard to employ in a typical cubicle setting. PHB's don't want to hear the truth, so robot makers had better figure out how to make them give BS answers.

As a geek, responding to PHB's properly is far more brain-intensive than doing actual work. I think doing actual work will be perfected by AI long before pleasing PHB's.

Unless of course, PHB's are automated first. However, I doubt that because ultimately one must sell to humans, and humans are not logical. Thus, the lower rungs will probably be automated first because logic is simpler to automate than human irrationalism.

Then we can all hang out and drink and smoke with Wallace as robots take over bit by bit.

Anyone want a project (2)

Sanity (1431) | more than 12 years ago | (#3834918)

I am amazed that nobody has yet tried to create a "learning" chat bot. It would be pretty straightforward.

Basically the chat bot would follow simple rules, similar to regular expressions, that would trigger particular statements in response to statements from the user. Each of these rules could also test for "flags" that could be set and unset by rules which "fire". Then, some algorithm could be devised for creating new rules randomly, based on observed behavior. The effectiveness of a rule could be determined by how long the conversation continues after that rule has been used. Good rules could be moved up in priority, and bad rules moved down (and eventually deleted) on this basis.
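
A compact Python sketch of exactly that scheme (the starter rules and pruning constants are invented; rule creation is stubbed out, since that's the genuinely hard part):

    import re

    class Rule:
        def __init__(self, pattern, response):
            self.pattern, self.response, self.score = pattern, response, 0.0

    rules = [Rule(r"\bhello\b", "Hi! What's on your mind?"),
             Rule(r"\bbot\b",   "Do I seem like a bot to you?"),
             Rule(r".*",        "Go on.")]

    def respond(line, fired):
        # highest-scoring rule that matches wins; record what fired
        for rule in sorted(rules, key=lambda r: -r.score):
            if re.search(rule.pattern, line.lower()):
                fired.append(rule)
                return rule.response

    def end_of_conversation(fired):
        # fitness: a rule fired early in a long chat kept the user talking
        for turns_after, rule in enumerate(reversed(fired)):
            rule.score += turns_after
        rules.sort(key=lambda r: -r.score)
        if len(rules) > 100:
            rules.pop()          # delete the worst performer

Everything here really is straightforward; the open question is the "algorithm for creating new rules randomly", e.g. splicing observed user phrases into new patterns.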

Students of "normal" behavior, unite! (2)

snilloc (470200) | more than 12 years ago | (#3834919)

After all, outcasts are the keenest students of ''normal'' behavior -- since they're constantly trying, and failing, to achieve it themselves.

Wow. Besides the general theme of people being repetitive dumbasses, this part stood out the most.

Of course, I've always been approaching it from the evolution-driven genetic motivations of people to create the various stable equilibria we have called "cultures" or "societies". (Perhaps Wolfram was right -- from simple (genetic) rules emerge complex structures.)

Did that part of the article really ring true for anybody else?

Re:Students of "normal" behavior, unite! (2)

marxmarv (30295) | more than 12 years ago | (#3835119)

Did that part of the article really ring true for anybody else?
Yes, yes it did; I'm an outcast of sorts myself. It fits well with the common tale of the insanity of the "sane", for one thing. And having attempted to learn real-life dating, I can't for the life of me see how anyone sane would put themselves through that ridiculous Masonic handshake and basket of expectations just to be partnered for the night. It was all so much easier when it was non-verbal.

-jhp

Is Pot Helping? (0)

nemski (587833) | more than 12 years ago | (#3834939)

I know I'll probably take a lot of heat for this, but is pot really helping Richard Wallace? Or is it just adding more problems to a list of pretty severe ones?

There's a better program than "Alice"... (1)

newestbob (589866) | more than 12 years ago | (#3834950)

...give the Ya-Hoot Oracle [yahoot.com] a try. I find it much better than "Alice" in terms of its comprehension and its range of subject matter. It really can seem to read minds and see into the future...