
Alicebot Creator Dr. Richard Wallace Expounds

Roblimo posted about 12 years ago | from the only-on-Slashdot dept.

News | 318 comments

Okay, here are Alicebot inventor Dr. Richard Wallace's answers to your questions. You're about to enter a world that contains interesting thoughts on A.I., a bit of marijuana advocacy, a courtroom drama, tales of academic politics and infighting, personal ranting, discussion of the nature of mental illness, and comments about the state of American society and the world in general. Yes, all this in one interview so long and strong we had to break it up into three parts to make it fit on our pages. This is an amazing work, well worth reading all the way to the end.

1) AI through simulation?
by Jeppe Salvesen

Do you think that ever-increasing processing power will eventually enable us to fully simulate the human brain? What ramifications would this have for the A.I. discipline?

Dr. Wallace:

My longstanding opinion is that neural networks are the wrong level of abstraction for understanding intelligence, human or machine.

Neurons are the transistors of the brain. They are the low level switching components out of which higher-order functionality is built. But like the individual transistor, studying the individual neuron tells us little about these higher functions.

Suppose an alien who had never seen a computer before came down to Earth (assuming interstellar travel is possible without a computer!). The alien might be tempted to break the machine open, and discover that it is made of millions of tiny transistors. It might try to discover how the computer works by measuring the electronic signals in the transistors, but it would miss the operating system completely. The transistors tell us nothing about the software.

Similarly, neurons tell us little about the higher order software running on our brains.

Significantly, no one has ever proved that the brain is a *good* computer. It seems to run some tasks like visual recognition better than our existing machines, but it is terrible at math, prone to errors, susceptible to distraction, and it requires half its uptime for food, sleep, and maintenance.

It sometimes seems to me that the brain is actually a very shitty computer. So why would you want to build a computer out of slimy, wet, broken, slow, hungry, tired neurons? I chose computer science over medical school because I don't have the stomach for those icky, bloody body parts. I prefer my technology clean and dry, thank you. Moreover, it could be the case that an electronic, silicon-based computer is more reliable, faster, more accurate, and cheaper.

I find myself agreeing with the Churchlands that the notion of consciousness belongs to "folk psychology" and that there may be no clear brain correlates for the ego, id, emotions as they are commonly classified, and so on. But to me that does not rule out the possibility of reducing the mind to a mathematical description, which is more or less independent of the underlying brain architecture. That baby doesn't go out with the bathwater. A.I. is possible precisely because there is nothing special about the brain as a computer. In fact the brain is a shitty computer. The brain has to sleep, needs food, thinks about sex all the time. Useless!

I always say, if I wanted to build a computer from scratch, the very last material I would choose to work with is meat. I'll take transistors over meat any day. Human intelligence may even be a poor kludge of the intelligence algorithm on an organ that is basically a glorified animal eyeball. From an evolutionary standpoint, our supposedly wonderful cognitive skills are a very recent innovation. It should not be surprising if they are only poorly implemented in us, like the lung of the first mudfish. We can breathe the air of thought and imagination, but not that well yet.

And remember, no one has proved that our intelligence is a successful adaptation, over the long term. It remains to be seen if the human brain is powerful enough to solve the problems it has created.

Functionalism is basically the view that the mind is the software, and the brain is the hardware. It holds that mental states are equivalent to the states of a Turing Machine. Behaviorism was a pre-computational theory, which imagined the nervous system as a complex piece of machinery like a telephone exchange, but the behaviorists didn't think much about software. Dualism goes back to Descartes. It is the view that the mind and brain are separate and distinct things, possibly affecting each other, or possibly mirroring each other.

My view is a kind of modified dualism in which I claim that the soul, spirit, or consciousness may exist, but for most people, most of the time, it is almost infinitesimally small, compared with the robotic machinery responsible for most of our thought and action. Descartes never talked about the relative weights of brain and mind, but you can read an implicit 50-50 assumption into most Dualist literature. My idea is more like 99-1, or even 99.999999% automatic machinery and 0.000001% self-awareness, creativity, consciousness, spirit or what have you.

That's not to say that some people can't be more enlightened than others. But for the vast herd out there, on average, consciousness is simply not a significant factor. Not even a second- or third-order effect. Consciousness is marginal.

I say this with such confidence because of my experience building robot brains over the past seven years. Almost everything people ever say to our robot falls into one of about 45,000 categories. Considering the astronomical number of things people could say, if every sentence was an original line of poetry, 45,000 is a very, very small number.

2) Turing Test
by Transient0

I noticed that your AliceBot won the 2000 Loebner Prize for most human responses. My question is: "As an Artificial Intelligence researcher, do you feel that the Loebner Prize represents a legitimate variety of testing, or did you just want the $2000?"

I was pretty sure that almost all AI researchers agreed about thirty years ago that the original imitation game as proposed by Turing in 1950 was useful only as a mental exercise, not in practice. Do you feel that the types of developments the Loebner Prize encourages (intentional, hard-coded spelling mistakes, etc.) are actually productive in terms of the AI research project?

Dr. Wallace:

In case you haven't noticed, the field of Artificial Intelligence (defined however you wish) has almost nothing to do with science. It is all about politics. When you look at all the people working professionally in the field of A.I., it brings to mind the old joke:

Q: How many Carnegie Mellon Ph.D.s does it take to screw in a light bulb?
A: Two. One to change the bulb, and one to pull the chair out from under him.

The only rule most of these people know is: undermine the competition at all costs, by whatever legal means, or whatever they can get away with. That is how you become King of the A.I. Anthill.

Having a good theory or better implementation of anything is beside the point. Being able to "play the game" and knock out the competition, that is what it is all about. Swim with sharks or be eaten by them.

Especially in the age of increased competition for diminishing jobs and funding, scientific truth takes a back seat to save-your-ass.

Unfortunately it seems that the A.I. problem is inseparable from politics.

When I say that academia is corrupt in America, I don't mean that professors are accepting bribes and giving kickbacks for government contracts. There may be a financial motive in some cases, such as the use of overhead funds for a "course buyout" to reduce a professor's workload, but I am not talking about the kind of corruption associated with Wall Street and Washington exactly. I am talking about the replacement of science with politics as the main item on the academic agenda.

It must not have always been so. At one time, I believe academics were appointed and promoted primarily on the basis of merit and accomplishment. Within the last 20 years or so in the United States this has gradually changed into a system in which political correctness, slickness, and good salesmanship are more highly valued than good science. I don't pretend to understand the reasons for this, but I can point to many examples within our own community.

I have written that it is like a dysfunctional family. Those in positions of leadership and authority have mental health, drug and/or alcohol problems that make them incapable of carrying out their administrative responsibilities. In response, people who are skilled at "enabling" or "nursing" the dysfunctional leaders get promoted and advanced. Those who are prone to logical thinking and speaking the truth are discarded, because they make the authorities face their unconscious anxieties.

I often say, people don't go into computer science because they enjoy working with the public. But as the field has matured, I think it has attracted people who are more comfortable wearing business suits and attending strategy meetings than tinkering on a lab bench or writing a research paper. As computer science departments matured, the people already in them began to want everything to remain the same until they retired. They didn't want to hire young professors with a lot of new ideas about the administration. They hired young professors who wanted everything to stay exactly like it was, no matter what.

You may think that the politicization of a field like computer science is no big deal. We can have slick politicians instead of scientists running university CS departments, and not cause a lot of problems. But I think it is a really big problem in other fields, especially in medical science, especially in drugs and mental health.

Take LSD for example. Discovered by Albert Hofmann in 1943, LSD is the most powerful drug ever developed. If you have ever gotten a prescription for any drug, you may have noticed that the dosage is usually given in "milligrams". But the dosage of LSD is "micrograms". It has the lowest ED50 of any known drug.

In the early 1960's there was some very promising research at Harvard applying LSD to depressed patients like me. The work was never completed or published for, guess what, political reasons. Subsequently, LSD was classified as a "Schedule I" drug with no useful medical value. This was not a decision based on sound science but on politics and fear. Even today there is zero research on this topic. Did you ever wonder why there is no Department of Psychedelic Studies on any university campus? It is a gaping hole in the academic curriculum, filled only by the informal undergraduate ratings of colleges as "party schools".

Even the very name of the federal agency that provides funding for drug research, the National Institute on Drug Abuse, prejudices the applications and the results. The native born American hippie agronomy student who got his Ph.D. in the 1970's is growing pot underground in California today. The immigrant doctor who "proved" that marijuana causes cancer got the NIDA grant and has tenure at UCLA. What's wrong with this picture?

Until 2 years ago, there was no federally funded research on the medical benefits of marijuana since the 1970's. Even now the only funded research is for terminal illnesses, and it seems like it will take a long time before they consider mental illnesses like mine. I conducted a survey of patients in San Francisco and discovered that "pain" was the #1 symptom for medical marijuana but "depression" was #2, and terminal illnesses like AIDS and cancer were lower on the list. So I am not alone in the perception that there is a patient need for research on this drug.

The problem here, my friends, is that NIDA is part of a spectrum of trouble that includes once respected agencies such as NASA, NSF and DARPA. It is an octopus of political corruption that reaches into MIT and CMU and Berkeley and darkens everything it touches. It calls into question the quality and even the veracity of the scientific results and publications. We all witnessed the beginning of this, even when we were all friends together at the ICRA conferences, in the acrimonious interchanges between academia and industry. I myself saw enough of the system from the inside at NYU and Lehigh to know that science plays almost no role in the hiring, promoting or review process. It's all politics.

Not to place blame, but I think graduate advisors should be more straightforward with students about this point. It would be better to put more time into training them how to "schmooze" and "work the system" than how to solve mathematical problems, if they want their students to be successful. Either that, or they should work on changing the system back to merit-based promotion.

3) My question (with answer)
by outlier

Historically, AI has done poorly managing public expectations. People expected thinking, understanding computers, while researchers had trouble getting computers to successfully disambiguate simple sentences. This is not good PR. Do you think the field has learned from this? If so, what should the public expect, and how do we excite them about it?

Just for fun, I asked slashwallace a shortened version of the question. Do you think your response would differ?

Human: Historically AI has done poorly managing the public's expectations, do you think this will continue?
SlashWallace: Where did he get it?

Dr. Wallace:

Hugh Loebner is an independently wealthy, eccentric businessman, activist and philanthropist. In 1990 Dr. Loebner, who holds a Ph.D. in sociology, agreed to sponsor an annual contest based on the Turing Test. The contest awards medals and cash prizes for the "most human" computer. Since its inception, the Loebner contest has been a magnet for controversy.

One of the central disputes arose over Hugh Loebner's decision to award the Gold Medal and $100,000 top cash prize only when a robot is capable of passing an "audio-visual" Turing Test. The rules for this Grand Prize contest have not even been written yet. So it remains unlikely that anyone will be awarded the gold Loebner medal in the near future. The Silver and Bronze medal competitions are based on the standard (text-only) Turing Test, the STT. In 2001, eight programs played alongside two human confederates. A group of 10 judges rotated through each of ten terminals and chatted about 15 minutes with each. The judges then ranked the terminals on a scale of "least human" to "most human." Winning the Silver Medal and its $25,000 prize requires that the judges rank the program higher than half the human confederates. In fact one judge ranked A.L.I.C.E. higher than one of the human confederates in 2001. Had all the judges done so, she might have been eligible for the Silver Medal as well, because there were only two confederates.

To really understand how we accomplished this, I have to teach you some AIML.

CATEGORIES

The basic unit of knowledge in AIML is called a category. Each category consists of an input question, an output answer, and an optional context.

The question, or stimulus, is called the pattern. The answer, or response, is called the template. The two types of optional context are called "that" and "topic."

The AIML pattern language is simple, consisting only of words, spaces, and the wildcard symbols _ and *.

The words may consist of letters and numerals, but no other characters. The pattern language is case invariant.

Words are separated by a single space, and the wildcard characters function like words.

The first versions of AIML allowed only one wildcard character per pattern.

The AIML 1.01 standard permits multiple wildcards in each pattern, but the language is designed to be as simple as possible for the task at hand, simpler even than regular expressions.
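As an illustration of multiple wildcards, here is a category of my own invention (a sketch, not from the A.L.I.C.E. knowledge base; AIML 1.01 selects among multiple wildcard bindings with the index attribute of <star/>):

<category>
<pattern>MY NAME IS * AND I LIKE *</pattern>
<!-- star index="1" echoes the first wildcard binding, index="2" the second -->
<template>Nice to meet you, <star index="1"/>. Why do you like <star index="2"/>?</template>
</category>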

The template is the AIML response or reply. In its simplest form, the template consists of only plain, unmarked text.

More generally, AIML tags transform the reply into a mini computer program which can save data, activate other programs, give conditional responses, and recursively call the pattern matcher to insert the responses from other categories.

Most AIML tags in fact belong to this template-side sublanguage.

AIML currently supports two ways to interface other languages and systems. The <system> tag executes any program accessible as an operating system shell command, and inserts the results in the reply. Similarly, the <javascript> tag allows arbitrary scripting inside the templates.
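For example, a clock category along these lines is possible (a sketch, not necessarily in the A.L.I.C.E. distribution; it assumes a Unix-like host where the shell command "date" exists):

<category>
<pattern>WHAT TIME IS IT</pattern>
<!-- <system> runs the shell command and splices its output into the reply -->
<template>The time here is <system>date</system>.</template>
</category>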

The optional context portion of the category consists of two variants, called <that> and <topic>. The <that> tag appears inside the category, and its pattern must match the robot's last utterance.

Remembering one last utterance is important if the robot asks a question. The <topic> tag appears outside the category, and collects a group of categories together.

The topic may be set inside any template. AIML is not exactly the same as a simple database of questions and answers. The pattern matching "query" language is much simpler than something like SQL. But a category template may contain the recursive <srai> tag, so that the output depends not only on one matched category, but also any others recursively reached through <srai>.

RECURSION

AIML implements recursion with the <srai> operator. No agreement exists about the meaning of the acronym.

The "A.I." stands for artificial intelligence, but "S.R." may mean "stimulus-response," "syntactic rewrite," "symbolic reduction," "simple recursion," or "synonym resolution." The disagreement over the acronym reflects the variety of applications for <srai> in AIML. Each of these is described in more detail in a subsection below:

(1). Symbolic Reduction: Reduce complex grammatical forms to simpler ones.
(2). Divide and Conquer: Split an input into two or more subparts, and combine the responses to each.
(3). Synonyms: Map different ways of saying the same thing to the same reply.
(4). Spelling or grammar corrections.
(5). Detecting keywords anywhere in the input.
(6). Conditionals: Certain forms of branching may be implemented with <srai>.
(7). Any combination of (1)-(6).

The danger of <srai> is that it permits the botmaster to create infinite loops. Though posing some risk to novice programmers, we surmised that including <srai> was much simpler than any of the iterative block structured control tags which might have replaced it.
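The degenerate case is a category whose template reduces to its own pattern; an interpreter with no recursion limit would loop forever on the input "FOO" (a minimal illustration, not a useful category):

<category>
<pattern>FOO</pattern>
<!-- <srai>FOO</srai> re-matches this same pattern: an infinite loop -->
<template><srai>FOO</srai></template>
</category>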

(1). Symbolic Reduction
Symbolic reduction refers to the process of simplifying complex grammatical forms into simpler ones. Usually, the atomic patterns in categories storing robot knowledge are stated in the simplest possible terms; for example, we tend to prefer patterns like "WHO IS SOCRATES" to ones like "DO YOU KNOW WHO SOCRATES IS" when storing biographical information about Socrates. Many of the more complex forms reduce to simpler forms using AIML categories designed for symbolic reduction:

<category>
<pattern>DO YOU KNOW WHO * IS</pattern>
<template><srai>WHO IS <star/></srai></template>
</category>

Whatever input matched this pattern, the portion bound to the wildcard * may be inserted into the reply with the markup <star/>. This category reduces any input of the form "Do you know who X is?" to "Who is X?"

(2). Divide and Conquer
Many individual sentences may be reduced to two or more subsentences, and the reply formed by combining the replies to each. A sentence beginning with the word "Yes" for example, if it has more than one word, may be treated as the subsentence "Yes." plus whatever follows it.

<category>
<pattern>YES *</pattern>
<template><srai>YES</srai> <sr/></template>
</category>

The markup <sr/> is simply an abbreviation for <srai><star/></srai>.

(3). Synonyms
The AIML 1.01 standard does not permit more than one pattern per category. Synonyms are perhaps the most common application of <srai>. Many ways to say the same thing reduce to one category, which contains the reply:

<category>
<pattern>HELLO</pattern>
<template>Hi there!</template>
</category>
<category>
<pattern>HI</pattern>
<template><srai>HELLO</srai></template>
</category>
<category>
<pattern>HI THERE</pattern>
<template><srai>HELLO</srai></template>
</category>
<category>
<pattern>HOWDY</pattern>
<template><srai>HELLO</srai></template>
</category>
<category>
<pattern>HOLA</pattern>
<template><srai>HELLO</srai></template>
</category>

(4). Spelling and Grammar correction
The single most common client spelling mistake is the use of "your" when "you're" or "you are" is intended. Not every occurrence of "your," however, should be turned into "you're." A small amount of grammatical context is usually necessary to catch this error:

<category>
<pattern>YOUR A *</pattern>
<template>I think you mean "you're" or "you are" not "your."
<srai>YOU ARE A <star/></srai>
</template>
</category>

Here the bot both corrects the client input and acts as a language tutor.

(5). Keywords
Frequently we would like to write an AIML template which is activated by the appearance of a keyword anywhere in the input sentence. The general format of four AIML categories is illustrated by this example borrowed from ELIZA:

<category>
<pattern>MOTHER</pattern>
<template>Tell me more about your family.</template>
</category>
<category>
<pattern>_ MOTHER</pattern>
<template><srai>MOTHER</srai></template>
</category>
<category>
<pattern>MOTHER _</pattern>
<template><srai>MOTHER</srai></template>
</category>
<category>
<pattern>_ MOTHER *</pattern>
<template><srai>MOTHER</srai></template>
</category>

The first category both detects the keyword when it appears by itself, and provides the generic response. The second category detects the keyword as the suffix of a sentence. The third detects it as the prefix of an input sentence, and finally the last category detects the keyword as an infix. Each of the last three categories uses <srai> to link to the first, so that all four cases produce the same reply, but it needs to be written and stored only once.

(6). Conditionals
It is possible to write conditional branches in AIML, using only the <srai> tag. Consider three categories:

<category>
<pattern>WHO IS HE</pattern>
<template><srai>WHOISHE <get name="he"/></srai></template>
</category>
<category>
<pattern>WHOISHE *</pattern>
<template>He is <get name="he"/>.</template>
</category>
<category>
<pattern>WHOISHE UNKNOWN</pattern>
<template>I don't know who he is.</template>
</category>
Provided that the predicate "he" is initialized to "Unknown," the categories execute a conditional branch depending on whether "he" has been set. As a convenience to the botmaster, AIML also provides the equivalent function through the <condition> tag.
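A sketch of the same branch written with <condition> (this uses the multi-branch <li> form of AIML 1.01; the predicate "he" and its initial value "Unknown" are the ones from the example above):

<category>
<pattern>WHO IS HE</pattern>
<template>
<condition name="he">
<!-- taken while the predicate still holds its initial value -->
<li value="Unknown">I don't know who he is.</li>
<!-- default branch: the predicate has been set to something else -->
<li>He is <get name="he"/>.</li>
</condition>
</template>
</category>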

CONTEXT

The keyword "that" in AIML refers to the robot's previous utterance. Specifically, if the robot responds with a multiple sentence paragraph, the value of that is set to the last sentence in the sequence. The choice of the keyword "that" is motivated by its use in ordinary language:

R: Today is yesterday.
C: That makes no sense.
R: The answer is 3.1415926 approximately.
C: That is cool.

In AIML the syntax <that>...</that> encloses a pattern that matches the robot's previous utterance. A common application of <that> is found in yes-no questions:

<category>
<pattern>YES</pattern>
<that>DO YOU LIKE MOVIES</that>
<template>What is your favorite movie?</template>
</category>

This category is activated when the client says YES. The robot must find out what the client is saying "yes" to. If the robot asked, "Do you like movies?," this category matches, and the response, "What is your favorite movie?," continues the conversation along the same lines. One interesting application of <that> is categories that enable the robot to respond to knock-knock jokes.

The categories:

<category>
<pattern>KNOCK KNOCK</pattern>
<template>Who is there?</template>
</category>
<category>
<pattern>*</pattern>
<that>WHO IS THERE</that>
<template><person/> who?</template>
</category>
<category>
<pattern>*</pattern>
<that>* WHO</that>
<template>Ha ha very funny, <get name="name"/>.</template>
</category>

produce the following dialogue:
C: Knock knock.
R: Who's there?
C: Banana.
R: Banana who?
C: Knock knock.
R: Who's there?
C: Banana.
R: Banana who?
C: Knock knock.
R: Who's there?
C: Orange.
R: Orange who?
C: Orange you glad I didn't say banana.
R: Ha ha very funny, Nancy.

Internally the AIML interpreter stores the input pattern, that pattern, and topic pattern along a single path, like: INPUT <that> THAT <topic> TOPIC. When the values of <that> or <topic> are not specified, the program implicitly sets the values of the corresponding THAT or TOPIC pattern to the wildcard *.
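For instance, the yes-no movie category shown earlier would be stored along a path like this (a sketch of the internal form, with the unspecified topic defaulting to the wildcard):

YES <that> DO YOU LIKE MOVIES <topic> *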

The first part of the path to match is the input. If more than one category has the same input pattern, the program may distinguish between them depending on the value of <that>. If two or more categories have the same <pattern> and <that>, the final step is to choose the reply based on the <topic>. This structure suggests a design rule: never use <that> unless you have written two categories with the same <pattern>, and never use <topic> unless you write two categories with the same <pattern> and <that>. Still, one of the most useful applications for <topic> is to create subject-dependent "pickup lines," like:

<topic name="CARS">
<category>
<pattern>*</pattern>
<template>
<random>
<li>What's your favorite car?</li>
<li>What kind of car do you drive?</li>
<li>Do you get a lot of parking tickets?</li>
<li>My favorite car is one with a driver.</li>
</random>
</template>
</category>
</topic>
Considering the vast size of the set of things people could say that are grammatically correct or semantically meaningful, the number of things people actually do say is surprisingly small. Steven Pinker, in his book How the Mind Works, wrote, "Say you have ten choices for the first word to begin a sentence, ten choices for the second word (yielding 100 two-word beginnings), ten choices for the third word (yielding a thousand three-word beginnings), and so on. (Ten is in fact the approximate geometric mean of the number of word choices available at each point in assembling a grammatical and sensible sentence.) A little arithmetic shows that the number of sentences of 20 words or less (not an unusual length) is about 10^20."

Fortunately for chat robot programmers, Pinker's calculations are way off. Our experiments with A.L.I.C.E. indicate that the number of choices for the "first word" is more than ten, but still only about two thousand. Specifically, about 2000 words cover 95% of all the first words input to A.L.I.C.E. The number of choices for the second word is only about two. To be sure, there are some first words ("I" and "You" for example) that have many possible second words, but the overall average is just under two. The average branching factor decreases with each successive word.

We have plotted some beautiful images of the A.L.I.C.E. brain contents represented by this graph (http://alice.sunlitsurf.com/documentation/gallery/).

More than just elegant pictures of the A.L.I.C.E. brain, these spiral images (see more) outline a territory of language that has been effectively "conquered" by A.L.I.C.E. and AIML. No other theory of natural language processing can better explain or reproduce the results within our territory. You don't need a complex theory of learning, neural nets, or cognitive models to explain how to chat within the limits of A.L.I.C.E.'s 25,000 categories. Our stimulus-response model is as good a theory as any other for these cases, and certainly the simplest. If there is any room left for "higher" natural language theories, it lies outside the map of the A.L.I.C.E. brain. Academics are fond of concocting riddles and linguistic paradoxes that supposedly show how difficult the natural language problem is. "John saw the mountains flying over Zurich" or "Fruit flies like a banana" reveal the ambiguity of language and the limits of an A.L.I.C.E.-style approach (though not these particular examples, of course; A.L.I.C.E. already knows about them).

In the years to come we will only advance the frontier further. The basic outline of the spiral graph may look much the same, for we have found all of the "big trees" from "A *" to "YOUR *". These trees may become bigger, but unless language itself changes we won't find any more big trees (except of course in foreign languages). The work of those seeking to explain natural language in terms of something more complex than stimulus response will take place beyond our frontier, increasingly in the hinterlands occupied by only the rarest forms of language. Our territory of language already contains the highest population of sentences that people use. Expanding the borders even more we will continue to absorb the stragglers outside, until the very last human critic cannot think of one sentence to "fool" A.L.I.C.E..

[Continue to part 2 of the interview.]


318 comments


Be like Mike (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3959084)


FP BIOTCHES!

ALICE IS A TROLL!!!! (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3959294)

> Is Rob Malda gay?

He never told me if he is or not.

> Does the CLIT 0wn?

I think it does the CLIT 0wn.

> Does Michael have no penis?

I think it does Michael have no penis.

> Does Slashdot suck?

I think it does Slashdot suck.

A.I. will never happen. Just face it. (-1)

Anonymous Coward | about 12 years ago | (#3959096)

Yep.

Re:A.I. will never happen. Just face it. (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3959111)

You're wrong. It was released in theaters last year I think, or possibly the year before.

Re:A.I. will never happen. Just face it. (0)

Anonymous Coward | about 12 years ago | (#3959415)

It was this year.

MY CATS BREATH SMELLS LIKE CATFOOD (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3959106)

Re:MY CATS BREATH SMELLS LIKE CATFOOD (-1, Troll)

Anonymous Coward | about 12 years ago | (#3959339)

...and your breath smells like cat genitalia.

THATS BECAUSE I KISSED YOUR GIRLFRIEND (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3959495)


wow! (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3959117)

now all the trolls get 3 simultaneous chances at that glorious FP!

In case of slashdotting (-1)

JismTroll (588456) | about 12 years ago | (#3959126)

Alicebot Creator Dr. Richard Wallace Expounds
News | Posted by Roblimo on 11:30 AM -- Friday July 26 2002
from the only-on-Slashdot dept.
Okay, here are Alicebot inventor Dr. Richard Wallace's answers to your questions. You're about to enter a world that contains interesting thoughts on A.I., a bit of marijuana advocacy, a courtroom drama, tales of academic politics and infighting, personal ranting, discussion of the nature of mental illness, and comments about the state of American society and the world in general. Yes, all this in one interview so long and strong we had to break it up into three parts to make it fit on our pages. This is an amazing work, well worth reading all the way to the end.

1) AI through simulation?
by Jeppe Salvesen

Do you think that the ever increasing processing power will eventually enable us to fully simulate the human brain? What ramifications would this have for the A.I. discipline?

Dr. Wallace:

My longstanding opinion is that neural networks are the wrong level of abstraction for understanding intelligence, human or machine.

Neurons are the transistors of the brain. They are the low level switching components out of which higher-order functionality is built. But like the individual transistor, studying the individual neuron tells us little about these higher functions.

Suppose an alien came down to Earth who had never seen a computer before. Assuming interstellar travel is possible without a computer! He/she might be tempted to break it open, and discover that it is made of millions of tiny transistors. The alien may try to discover how the computer works by measuring the electronic signals in the transistors. But they would miss the operating system completely. The transistors tell us nothing about the software.

Similarly, neurons tell us little about the higher order software running on our brains.

Significantly, no one has ever proved that the brain is a *good* computer. It seems to run some tasks like visual recognition better than our existing machines, but it is terrible at math, prone to errors, susceptible to distraction, and it requires half its uptime for food, sleep, and maintenance.

It sometimes seems to me that the brain is actually a very shitty computer. So why would you want to build a computer out of slimy, wet, broken, slow, hungry, tired neurons? I chose computer science over medical school because I don't have the stomach for those icky, bloody body parts. I prefer my technology clean and dry, thank you. Moreover, it could be the case that an electronic, silicon-based computer is more reliable, faster, more accurate, and cheaper.

I find myself agreeing with the Churchlands that the notion of consciousness belongs to "folk psychology" and that there may be no clear brain correlates for the ego, id, emotions as they are commonly classified, and so on. But to me that does not rule out the possibility of reducing the mind to a mathematical description, which is more or less independent of the underlying brain archiecture. That baby doesn't go out with the bathwater. A.I. is possible precisely because there is nothing special about the brain as a computer. In fact the brain is a shitty computer. The brain has to sleep, needs food, thinks about sex all the time. Useless!

I always say, if I wanted to build a computer from scratch, the very last material I would choose to work with is meat. I'll take transistors over meat any day. Human intelligence may even be a poor kludge of the intelligence algorithm on an organ that is basically a glorified animal eyeball. From an evolutionary standpoint, our supposedly wonderful cognitive skills are a very recent innovation. It should not be surprising if they are only poorly implemented in us, like the lung of the first mudfish. We can breathe the air of thought and imagination, but not that well yet.

And remember, no one has proved that our intelligence is a successful adaption, over the long term. It remains to be seen if the human brain is powerful enough to solve the problems it has created.

Functionalism is basically the view that the mind is the software, and the brain is the hardware. It holds that mental states are equivalent to the states of a Turing Machine. Behaviorism was a pre-computational theory, which imagines the nervous system as a complex piece of machinery like a telephone exchange, but they didn't think much about software. Dualism goes back to Descartes. It is the view that the mind and brain are separate and distinct things, possibly affecting each other, or possibly mirroring each other.

My view is a kind of modified dualism in which I claim that the soul, spirit, or consciousness may exist, but for most people, most of the time, it is almost infentesimally small, compared with the robotic machinery responsible for most of our thought and action. Descartes never talked about the relative weights of brain and mind, but you can read in an implicit 50-50 assumption in most Dualist literature. My idea is more like 99-1, or even 99.999999% automatic machinery and .00000001% self-awareness, creativity, consciousness, spirit or what have you.

That's not to say that some people can't be more enlightened than others. But for the vast herd out there, on average, consciousness is simply not a significant factor. Not even a second- or third-order effect. Consciousness is marginal.

I say this with such confidence because of my experience building robot brains over the past seven years. Almost everything people ever say to our robot falls into one of about 45,000 categories. Considering the astronomical number of things people could say, if every sentence was an original line of poetry, 45,000 is a very, very small number.

2) Turing Test
by Transient0

I noticed that your AliceBot won the 2000 Loebner Prize for most human responses. My question is: "As an Artificial Intelligence researcher, do you feel that the Loebner Prize represents a legitimate variety of testing, or did you just want the $2000?"

I was pretty sure that almost all AI researchers came to the agreement about thirty years ago that the original imitation game as proposed by Turing in 1951 was useful only as a mental exercise, not in practice. Do you feel that the types of developments that the Loebner prize supports(intentional, hard-coded spelling mistakes, etc.) are actually productive in terms of the AI research project?

Dr. Wallace:

In case you haven't noticed, the field of Artificial Intelligence (defined however you wish) has almost nothing to do with science. It is all about politics. When you look at all the people working professionally in the field of A.I., it brings to mind the old joke:

Q: How many Carnegie Mellon Ph.D.s does it take to screw in a light bulb?
A: Two. One to change the bulb, and one to pull the chair out from under him.

The only rule most of these people know is: undermine the competition at all costs, by whatever legal means, or whatever they can get away with. That is how you become King of the A.I. Anthill.

Having a good theory or better implementation of anything is beside the point. Being able to "play the game" and knock out the competition, that is what it is all about. Swim with sharks or be eaten by them.

Especially in the age of increased competition for diminishing jobs and funding, scientific truth takes a back seat to save-your-ass.

Unfortunately it seems that the A.I. problem is inseperable from politics.

When I say that academia is corrupt in America, I don't mean that professors are accepting bribes and giving kickbacks for government contracts. There may be a financial motive in some cases, such as the use of overhead funds for a "course buyout" to reduce a professor's workload, but I am not talking about the kind of corruption associated with Wall Street and Washington exactly. I am talking about the replacement of science with politics as the main item on the academic agenda.

It must not have always been so. At one time, I believe academics were appointed and promoted primarily on the basis of merit and accomplishment. Within the last 20 years or so in the United States this has gradually changed into a system in which political correctness, slickness, and good salesmanship are more highly valued than good science. I don't pretend to understand the reasons for this, but I can point to many examples within our own community.

I have written that it is like a dysfunctional family. Those in positions of leadership and authority have mental health, drug and/or alcohol problems that make them incapable of carrying out their administrative responsibilities. In response, people who are skilled at "enabling" or "nursing" the dysfunctional leaders get promoted and advanced. Those who are prone to logical thinking and speaking the truth are discarded, because they make the authorities face their unconscious anxieties.

I often say, people don't go into computer science because they enjoy working with the public. But as the field has matured, I think it has attracted people who are more comfortable wearing business suits and attending strategy meetings than tinkering on a lab bench or writing a research paper. As computer science departments matured, the people already in them began to want everything to remain the same until they retired. They didn't want to hire young professors with a lot of new ideas about the administration. They hired young professors who wanted everything to stay exactly like it was, no matter what.

You may think that the politicization of a field like computer science is no big deal. We can have slick politicians instead of scientists running university CS departments, and not cause a lot of problems. But I think it is a really big problem in other fields, especially in medical science, especially in drugs and mental health.

Take LSD for example. Discovered by Albert Hoffmann in 1945, LSD is the most powerful drug ever developed. If you have ever gotten a prescription for any drug, you may have noticed that the dosage is usally given in "milligrams". But the dosage of LSD is "micrograms". It has the lowest ED50 of any known drug.

In the early 1960's there was some very promising research at Harvard applying LSD to depressed patients like me. The work was never completed or published for, guess what, political reasons. Subsequently, LSD was classified as a "Schedule I" drug with no useful medical value. This was not a decision based on sound science but on politics and fear. Even today there is zero research on this topic. Did you ever wonder why there is no Department of Psychedelic Studies on any university campus? It is a gaping hole in the academic curriculum, filled only by the informal undergraduate ratings of colleges as "party schools".

Even the very name of the federal agency that provides funding for drug research, the National Institute on Drug Abuse, prejudices the applications and the results. The native born American hippie agronomy student who got his Ph.D. in the 1970's is growing pot underground in California today. The immigrant doctor who "proved" that marijuana causes cancer got the NIDA grant and has tenure at UCLA. What's wrong with this picture?

Until 2 years ago, there was no federally funded research on the medical benefits of marijuana since the 1970's. Even now the only funded research is for terminal illnesses, and it seems like it will take a long time before they consider mental illnesses like mine. I conducted a survey of patients in San Francisco and discovered that "pain" was the #1 symptom for medical marijuana but "depression" was #2, and terminal illnesses like AIDS and cancer were lower on the list. So I am not alone in the perception that there is a patient need for research on this drug.

The problem here, my friends, is that NIDA is part of a specturm of trouble that includes once respected agencies such as NASA, NSF and DARPA. It is an octopus of political corruption that reaches into MIT and CMU and Berkeley and darkens everything it touches. It calls into question the quality and even the veracity of the scientific results and publications. We all witnessed the beginning of this even when we were all friends together at the ICRA conferences in the acrimonious interchanges between academia and industry. I myself saw enough of the system from the inside at NYU and Lehigh to know that science plays almost no role in the hiring, promoting or review process. It's all politics.

Not to place blame, but I think graduate advisors should be more straightforward with students about this point. It would be better to put more time into training them how to "shmooze" and "work the system" than how to solve mathematical problems, if they want their students to be successful. Either that, or they should work on changing the system back to merit based promotion.

3) My question (with answer)
by outlier

Historically, AI has done poorly managing public expectations. People expected thinking, understanding computers, while researchers had trouble getting computers to successfully disambiguate simple sentences. This is not good PR. Do you think the field has learned from this? If so, what should the public expect, and how do we excite them about it?

Just for fun, I asked slashwallace a shortened version of the question, do you think your response would differ?

Human: Historically AI has done poorly managing the public's expectations, do you think this will continue?
SlashWallace: Where did he get it?

Dr. Wallace:

Hugh Loebner is an independently wealthy, eccentric businessman, activist and philanthropist. In 1990 Dr. Loebner, who holds a Ph.D. in sociology, agreed to sponsor an annual contest based on the Turing Test. The contest awards medals and cash prizes for the "most human" computer. Since its inception, the Loebner contest has been a magnet for controversy.

One of the central disputes arose over Hugh Loebner's decision to award the Gold Medal and $100,000 top cash prize only when a robot is capable of passing an "audio-visual" Turing Test. The rules for this Grand Prize contest have not even been written yet. So it remains unlikely that anyone will be awarded the gold Loebner medal in the near future. The Silver and Bronze medal competitions are based on the STT. In 2001, eight programs played alongside two human confederates. A group of 10 judges rotated through each of ten terminals and chatted about 15 minutes with each. The judges then ranked the terminals on a scale of "least human" to "most human." Winning the Silver Medal and its $25,000 prize requires that the judges rank the program higher than half the human confederates. In fact one judge ranked A.L.I.C.E. higher than one of the human confederates in 2001. Had all the judges done so, she might have been eligible for the Silver Medal as well, because there were only two confederates.

To really understand how we accomplished this, I have to teach you some AIML.

CATEGORIES

The basic unit of knowledge in AIML is called a category. Each category consists of an input question, an output answer, and an optional context.

The question, or stimulus, is called the pattern. The answer, or response, is called the template. The two types of optional context are called "that" and "topic."

The AIML pattern language is simple, consisting only of words, spaces, and the wildcard symbols _ and *.

The words may consist of letters and numerals, but no other characters. The pattern language is case invariant.

Words are separated by a single space, and the wildcard characters function like words.

The first versions of AIML allowed only one wild card character per pattern.

The AIML 1.01 standard permits multiple wildcards in each pattern, but the language is designed to be as simple as possible for the task at hand, simpler even than regular expressions.

The template is the AIML response or reply. In its simplest form, the template consists of only plain, unmarked text.

More generally, AIML tags transform the reply into a mini computer program which can save data, activate other programs, give conditional responses, and recursively call the pattern matcher to insert the responses from other categories.

Most AIML tags in fact belong to this template side sublanguage.

AIML currently supports two ways to interface other languages and systems. The tag executes any program accessible as an operating system shell command, and inserts the results in the reply. Similarly, the tag allows arbitrary scripting inside the templates.

The optional context portion of the category consists of two variants, called and . The tag appears inside the category, and its pattern must match the robot's last utterance.

Remembering one last utterance is important if the robot asks a question. The tag appears outside the category, and collects a group of categories together.

The topic may be set inside any template. AIML is not exactly the same as a simple database of questions and answers. The pattern matching "query" language is much simpler than something like SQL. But a category template may contain the recursive tag, so that the output depends not only on one matched category, but also any others recursively reached through .

RECURSION

AIML implements recursion with the operator. No agreement exists about the meaning of the acronym.

The "A.I." stands for artificial intelligence, but "S.R." may mean "stimulus-response," "syntactic rewrite," "symbolic reduction," "simple recursion," or "synonym resolution." The disagreement over the acronym reflects the variety of applications for in AIML. Each of these is described in more detail in a subsection below:

(1). Symbolic Reduction-Reduce complex grammatic forms to simpler ones.
(2). Divide and Conquer-Split an input into two or more subparts, and combine the responses to each.
(3). Synonyms-Map different ways of saying the same thing to the same reply.
(4). Spelling or grammar corrections.
(5). Detecting keywords anywhere in the input.
(6). Conditionals-Certain forms of branching may be implemented with .
(7). Any combination of (1)-(6).

The danger of is that it permits the botmaster to create infinite loops. Though posing some risk to novice programmers, we surmised that including was much simpler than any of the iterative block structured control tags which might have replaced it.

(1). Symbolic Reduction
Symbolic reduction refers to the process of simplifying complex grammatical forms into simpler ones. Usually, the atomic patterns in categories storing robot knowledge are stated in the simplest possible terms, for example we tend to prefer patterns like "WHO IS SOCRATES" to ones like "DO YOU KNOW WHO SOCRATES IS" when storing biographical information about Socrates. Many of the more complex forms reduce to simpler forms using AIML categories designed for symbolic reduction:

DO YOU KNOW WHO * IS
WHO IS

Whatever input matched this pattern, the portion bound to the wildcard * may be inserted into the reply with the markup . This category reduces any input of the form "Do you know who X is?" to "Who is X?"

(2). Divide and Conquer
Many individual sentences may be reduced to two or more subsentences, and the reply formed by combining the replies to each. A sentence beginning with the word "Yes" for example, if it has more than one word, may be treated as the subsentence "Yes." plus whatever follows it.

YES *
YES

The markup is simply an abbreviation for .

(3). Synonyms
The AIML 1.01 standard does not permit more than one pattern per category. Synonyms are perhaps the most common application of . Many ways to say the same thing reduce to one category, which contains the reply:

HELLO
Hi there!

HI
HELLO

HI THERE
HELLO

HOWDY
HELLO

HOLA
HELLO

(4). Spelling and Grammar correction
The single most common client spelling mistake is the use of "your" when "you're" or "you are" is intended. Not every occurrence of "your" however should be turned into "you're." A small amount of grammatical context is usually necessary to catch this error:

YOUR A *
I think you mean "you're" or "you are" not "your."
YOU ARE A

Here the bot both corrects the client input and acts as a language tutor.

(5). Keywords
Frequently we would like to write an AIML template which is activated by the appearance of a keyword anywhere in the input sentence. The general format of four AIML categories is illustrated by this example borrowed from ELIZA:

MOTHER Tell me more about your family.

_ MOTHER MOTHER

MOTHER _
MOTHER

_ MOTHER *
MOTHER

The first category both detects the keyword when it appears by itself, and provides the generic response. The second category detects the keyword as the suffix of a sentence. The third detects it as the prefix of an input sentence, and finally the last category detects the keyword as an infix. Each of the last three categories uses to link to the first, so that all four cases produce the same reply, but it needs to be written and stored only once.

(6). Conditionals
It is possible to write conditional branches in AIML, using only the tag. Consider three categories:
WHO IS HE WHOISHE

WHOISHE *
He is .

WHOISHE UNKNOWN
I don't know who he is.

Provided that the predicate "he" is initialized to "Unknown," the categories execute a conditional branch depending on whether "he" has been set. As a convenience to the botmaster, AIML also provides the equivalent function through the tag.

CONTEXT

The keyword "that" in AIML refers to the robot's previous utterance. Specifically, if the robot responds with a multiple sentence paragraph, the value of that is set to the last sentence in the sequence. The choice of the keyword "that" is motivated by its use in ordinary language:

R: Today is yesterday.
C: That makes no sense.
R: The answer is 3.1412926 approximately.
C: That is cool.

In AIML the syntax ... encloses a pattern that matches the robot's previous utterance. A common application of is found in yes-no questions:

YES
DO YOU LIKE MOVIES
What is your favorite movie?

This category is activated when the client says YES. The robot must find out what is he saying "yes" to. If the robot asked, "Do you like movies?," this category matches, and the response, "What is your favorite movie?," continues the conversation along the same lines. One interesting application of are categories that enable the robot to respond to knock-knock jokes.

The categories:

KNOCK KNOCK
Who is there?

*
WHO IS THERE
who?

*
* WHO
Ha ha very funny, .

produce the following dialogue:
C: Knock knock.
R: Who's there?
C: Banana.
R: Banana who?
C: Knock knock.
R: Who's there?
C: Banana.
R: Banana who? C: Knock knock.
R: Who's there?
C: Orange.
R: Orange who?
C: Orange you glad I didn't say banana.
R: Ha ha very funny, Nancy.

Internally the AIML interpreter stores the input pattern, that pattern and topic pattern along a single path, like: INPUT THAT TOPIC When the values of or are not specified, the program implicitly sets the values of the corresponding THAT or TOPIC pattern to the wildcard *.

The first part of the path to match is the input. If more than one category have the same input pattern, the program may distinguish between them depending on the value of . If two or more categories have the same and , the final step is to choose the reply based on the . This structure suggests a design rule: never use unless you have written two categories with the same , and never use unless you write two categories with the same and . Still, one of the most useful applications for is to create subject-dependent "pickup lines," like:

*

What's your favorite car?

What kind of car do you drive?

Do you get a lot of parking tickets?

My favorite car is one with a driver. Considering the vast size of the set of things people could say that are grammatically correct or semantically meaningful, the number of things people actually do say is surprisingly small. Steven Pinker,in his book How the Mind Works wrote, "Say you have ten choices for the first word to begin a sentence, ten choices for the second word (yielding 100 two-word beginnings), ten choices for the third word (yielding a thousand three-word beginnings), and so on. (Ten is in fact the approximate geometric mean of the number of word choices available at each point in assembling a grammatical and sensible sentence). A little arithmetic shows that the number of sentences of 20 words or less (not an unusual length) is about 1020."

Fortunately for chat robot programmers, Pinker's calculations are way off. Our experiments with A.L.I.C.E. indicate that the number of choices for the "first word" is more than ten, but it is only about two thousand. Specifically, about 2000 words covers 95% of all the first words input to A.L.I.C.E.. The number of choices for the second word is only about two. To be sure, there are some first words ("I" and "You" for example) that have many possible second words, but the overall average is just under two words. The average branching factor decreases with each successive word.

We have plotted some beautiful images of the A.L.I.C.E. brain contents represented by this graph (http://alice.sunlitsurf.com/documentation/gallery /).

More than just elegant pictures of the A.L.I.C.E. brain, these spiral images (see more) outline a territory of language that has been effectively "conquered" by A.L.I.C.E. and AIML. No other theory of natural language processing can better explain or reproduce the results within our territory. You don't need a complex theory of learning, neural nets, or cognitive models to explain how to chat within the limits of A.L.I.C.E.'s 25,000 categories. Our stimulus-response model is as good a theory as any other for these cases, and certainly the simplest. If there is any room left for "higher" natural language theories, it lies outside the map of the A.L.I.C.E. brain. Academics are fond of concocting riddles and linguistic paradoxes that supposedly show how difficult the natural language problem is. "John saw the mountains flying over Zurich" or "Fruit flies like a banana" reveal the ambiguity of language and the limits of an A.L.I.C.E.-style approach (though not these particular examples, of course, A.L.I.C.E. already knows about them).

In the years to come we will only advance the frontier further. The basic outline of the spiral graph may look much the same, for we have found all of the "big trees" from "A *" to "YOUR *". These trees may become bigger, but unless language itself changes we won't find any more big trees (except of course in foreign languages). The work of those seeking to explain natural language in terms of something more complex than stimulus response will take place beyond our frontier, increasingly in the hinterlands occupied by only the rarest forms of language. Our territory of language already contains the highest population of sentences that people use. Expanding the borders even more, we will continue to absorb the stragglers outside, until the very last human critic cannot think of one sentence to "fool" A.L.I.C.E.

Re:In case of slashdotting (2, Funny)

hekk (471747) | about 12 years ago | (#3959154)

I always say, if I wanted to build a computer from scratch, the very last material I would choose to work with is meat.

well.. you could cook it pretty well with an OC'ed athlon. cook dinner and waste time at the same time.

Re:In case of slashdotting (0)

the way, what're you (591901) | about 12 years ago | (#3959406)


well.. you could cook it pretty well with an OC'ed athlon. cook dinner and waste time at the same time.

No kidding, some dude cooked an egg [handyscripts.co.uk] with his Athlon XP!

Yes but what does the acronym A.L.I.C.E stand for? (3, Funny)

Anonymous Coward | about 12 years ago | (#3959134)

I can't find the answer to this on their pages anywhere, and if you ask the ALICE program it gives back some cryptic bull-crap asking what I think it means. Someone just tell me!!!

Re:Yes but what does the acronym A.L.I.C.E stand f (4, Informative)

I Want GNU! (556631) | about 12 years ago | (#3959336)

A.L.I.C.E. = artificial linguistic Internet computer entity

From page 2 of the interview: (0)

Anonymous Coward | about 12 years ago | (#3959577)

A.L.I.C.E. was not the original name of A.L.I.C.E. The first prototype was called PNAMBIC, in tribute to the hoaxes, deceptions and tricks that have littered the history of artificial intelligence.

"PNAMBIC-(acronym) Pay No Attention to that Man Behind the Curtain [from The Wizard of Oz]. Denoting any supposedly fully automated system that in fact requires human intervention to achieve the desired result."-New Hacker's Dictionary

But the machine hosting PNAMBIC was already named Alice by a forgotten systems administrator, so people began to call her "Alice." At that point, we invented the "retronym": Artificial Linguistic Internet Computer Entity.

That's the Alicebot's father alright. (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3959138)

He likes to *talk*, a lot.

To me, it Sounds Like... (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3959144)

Burned out ramblings

Great programming! (-1)

JismTroll (588456) | about 12 years ago | (#3959146)

> Do you have a smelly vagina?

That's an interesting question I don't hear everyday: Do I have a smelly vagina.
I have a great programmer.

Pictures? (2, Informative)

httpamphibio.us (579491) | about 12 years ago | (#3959151)

Anyone have more pictures of this guy? The article on nytimes.com had that tiiiiny little picture where he just looked like a muppet.

Here ya go (5, Funny)

Codex The Sloth (93427) | about 12 years ago | (#3959412)

Picture right here [anabaena.net].

How do we know... (4, Informative)

bucklesl (73547) | about 12 years ago | (#3959153)

...that this is actually him, eh?

Re:How do we know... (1)

Camulus (578128) | about 12 years ago | (#3959333)

Good point! Seeing as how so many people can give a page-and-a-half rant on AIML seemingly off the top of their head, it probably is an imposter.

Dick Wallace is a dick (-1)

CreamOfWheat (593775) | about 12 years ago | (#3959157)

Turd bandit Rump wrangler

first (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3959167)

or not

who gives a crap about this? this [goatse.cx] is all you need.

eh? (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3959172)

The wipotrollse.cx lawyer said we needed a warning, so if your under 18 or find this photograph offensive please don't look at it thank you

Wipo troll [archive.org]

[ the snot ] [ contrib ] [ feedback ]

** Trollaxor [trollaxor.com] **
** Troll food [klerck.org] **

Is this an Onion article? (0)

gosand (234100) | about 12 years ago | (#3959186)

Damn, is it just me, or is this interview a lot like The Onion Advice [theonion.com] articles?

Re:Is this an Onion article? (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3959282)

hahahaha

it's not just you - I was thinking the same thing.

Drugs (-1, Offtopic)

af_robot (553885) | about 12 years ago | (#3959204)

LSD is the most powerful drug ever developed

Yeah, and also I really enjoy its side-effects ...

Re:Drugs (1)

jaymz168 (555580) | about 12 years ago | (#3959419)

I think one of its more important effects is the ability to stand outside one's thought processes (as much as one can) and somewhat objectively analyze the mind. It's possible that use of this drug, in the right hands, can help us understand more about consciousness and developing AI. Of course this isn't anything new, but I'm not sure this approach has ever been taken in an attempt to develop AI.

Marcello could learn a thing or two... (0)

JohnFluxx (413620) | about 12 years ago | (#3959220)

about longer replies *grin*

JohnFlux

Re:Marcello could learn a thing or two... (2)

capt.Hij (318203) | about 12 years ago | (#3959303)

No kidding. It seems clear that this guy is an academic. Why use three words when five hundred will do?

Maybe the real problem with academia is that there are a boatload of people like this trying to talk to one another?

Re:Marcello could learn a thing or two... (0)

JohnFluxx (413620) | about 12 years ago | (#3959456)

Bah, Well _I_ thought I was funny....

oh well :)

JohnFlux

But great answers, thanks wallace!

Re:Marcello could learn a thing or two... (0)

JohnFluxx (413620) | about 12 years ago | (#3959553)

Hmm, Why are my posts at score 0 now?

ME RUN MOZLLA ON OS X (-1)

Anonymous Coward | about 12 years ago | (#3959224)

See, anyone can use a Mac!

Speak for Yourself (0, Offtopic)

zet0n (266284) | about 12 years ago | (#3959229)

I don't know about yours, but my brain runs Linux

THATS WHY YOURE A DROOLING MORON (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3959269)

Best interview ever! (2, Interesting)

moldar (536869) | about 12 years ago | (#3959238)

Not only is the length of these replies very exciting - it seems that he has taken great care to provide technical details that are invaluable. In all the classes I have taken I haven't seen such excitement for this kind of material. And, just to put those statements in context, I completed an MS in CS with a focus on AI . . .

Re:Best interview ever! (1)

JPriest (547211) | about 12 years ago | (#3959530)

You read all 3 pages of that in time to make the 4th post to /.? What is your clock speed?

Don't be fooled. (3, Funny)

natefaerber (143261) | about 12 years ago | (#3959243)

This is obviously A.L.I.C.E. answering.

Re:Don't be fooled. (1)

b0bd0bbs (592231) | about 12 years ago | (#3959397)

If that is A.L.I.C.E answering, then alice smokes some mad chronic.

I used to think I was intelligent. (4, Funny)

Nomad7674 (453223) | about 12 years ago | (#3959247)

Then I read this interview and began to begin to sense that my brain was about to explode. Guess I need to ratchet down my self assessment and get some Tylenol for the headache!

Pot's Horrible Horrible Side-Effects (-1, Offtopic)

punkrider (176796) | about 12 years ago | (#3959258)

California is always leading the way. Medical marijuana is now held up in court as a viable treatment for people who can benefit from its side-effects.

The feds can still bust you, but the state has realized that people as smart as this can smoke pot and still contribute a great amount to society.

Go ask A.L.I.C.E (-1, Offtopic)

ehorizon (591829) | about 12 years ago | (#3959259)

When she's 100 billion transistors tall...

whew, now my brain can rest. oh wait.... (4, Funny)

dubiousmike (558126) | about 12 years ago | (#3959263)

there's part two to the interview.

I am exhausted already.

Re:whew, now my brain can rest. oh wait.... (0)

Anonymous Coward | about 12 years ago | (#3959306)

And then there's also a part three...

DAMN YOU RICHARD SPECK (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3959273)

"Do you like fucking men, Speck?"]

"Absolutely."

Re:DAMN YOU RICHARD SPECK (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3959290)

"Is the quality of this cocaine satisfactory, Mr. Delorian?"

"Good as gold."

OT: RICHARD SPECK (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3959465)

One time when he was videotaped in prison, he was partying with another guy (sex & drugs). He looked into the camera after taking a smoke and said "If they knew what a good time I was having, they'd let me go."

Slightly worrying (3, Insightful)

streetlawyer (169828) | about 12 years ago | (#3959280)

That's not to say that some people can't be more enlightened than others. But for the vast herd out there, on average, consciousness is simply not a significant factor. Not even a second- or third-order effect. Consciousness is marginal.

Does this not have the implication that there would be nothing very terrible about rounding up large numbers of the "vast herd" and painlessly slaughtering them? Has he thought through the consequences of this view?

Re:Slightly worrying (3)

SirSlud (67381) | about 12 years ago | (#3959343)

Only if you think something without consciousness should be slaughtered.

Considering that we (mostly) agree that even the lack of a consciousness in a human doesn't excuse you from slaughtering them, what's the problem?

Re:Slightly worrying (2, Insightful)

streetlawyer (169828) | about 12 years ago | (#3959361)

Considering that we (mostly) agree that even the lack of a consciousness in a human doesn't excuse you from slaughtering them, what's the problem?

If this were true, surely abortion would be illegal?

Re:Slightly worrying (1)

SirSlud (67381) | about 12 years ago | (#3959417)

Shoulda known from your sig.

By human, I am speaking specifically about humans that have been born.

I will not get dragged into anything more complicated than that.

Re:Slightly worrying (0)

Anonymous Coward | about 12 years ago | (#3959505)

no because fetuses arent human or animal. theyre part of the birthing stage. they cannot survive outside an articial environment of the womb and are therefore not relevant.

Re:Slightly worrying (1)

ThereIsNoSporkNeo (587688) | about 12 years ago | (#3959622)

I looked up articial. I couldn't find it.

I'll assume you meant artificial. If so, how can you consider the womb artificial? The womb's environment is all natural.

"... fetuses arent human or animal. theyre part of the birthing stage."
Yes... the -human- birthing stage. By that logic you could say that infants aren't human because they are part of the "Growing stage"

"They cannot survive outside an articial environment of the womb and are therefore not relevant" ... tell me, can you survive without all the "articial" enhancements that we have? Could you rampage through the wild and kill game with your bare hands? No? Well, then, obviously you are not relevant, making this conversation a moot point. And you (I'm assuming) are at least a mostly grown adult. An infant is not capable of surviving on its own in the world. Does this also make it non-human?

At least come up with a better argument for killing people than those. Perhaps mental instability.

Re:Slightly worrying (0)

Anonymous Coward | about 12 years ago | (#3959432)

No, it does not have that implication. I would guess that he has.

Ok. Hold up. (1, Interesting)

rash (83406) | about 12 years ago | (#3959297)

As I'm reading this I am getting a bit irritated.

It seems as if whenever he tries to prove something he brings up a bunch of "facts" without backing them up with anything "real", and then draws a conclusion that doesn't have anything to do with the "proof" he gave.

So from my view he makes up evidence to justify his own views. Instead I think he should adapt his views to reality and the rules of western society.

Re:Ok. Hold up. (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3959337)

What the fuck does "the rules of western society" have to do with anything?

Hate to break it to you, dippy, but the "rules of eastern society" aren't any better; in fact I think it could be argued that they are actually worse.

Re:Ok. Hold up. (3, Insightful)

SirSlud (67381) | about 12 years ago | (#3959389)

Uh, how do you give proof?

This is so funny - short of him doing an experiment in your living room, any reference he provided could be easily dismissed by you. You sound like you don't want to believe anything. How could he provide proof?

Take your blinders off. Suggesting our 'western rules' must be upheld in scientific discovery is exactly the problem he's discussing: that politics is superseding any actual search for scientific truth.

And by the way, if you want to discredit him, why not provide some facts and proof yourself? People's distrust of counter-institution thinking is hilarious given how history suggests that it's the only type of thinking that generally leads to the 'progress' we so enjoy today. If everybody thought like you, we'd still think that Earth was the center of the universe.

Re:Ok. Hold up. (1)

rash (83406) | about 12 years ago | (#3959526)

What I am talking about is relevance.
You can't talk for 10 hours about stuff that isn't relevant to your point.

If I were to say, "The owner of the store hates me, so therefore I won't shop at the store next to that store," it wouldn't make any sense.

Re:Ok. Hold up. (2)

SirSlud (67381) | about 12 years ago | (#3959575)

>You cant talk for 10 hours about stuff that isnt relevant to your point.

Fortunately, its a free world, and you can.

And I believe he does address the questions, ultimately, in his answers.

But man, there is a whackload of bonus information and thinking in there that I am *glad* he includes. You can never expound too much; its up to the person asking the question to filter the reply and use what information is relevent to them.

Re:Ok. Hold up. (1)

rash (83406) | about 12 years ago | (#3959611)

hahaha
you are funny

For a moment there.... (0)

Anonymous Coward | about 12 years ago | (#3959328)

I read the title as "Alicebot Creator Dr. Richard Wallace Explodes". Whew! Always gotta watch out for spontaneous combustion.

not a troll, i swear (3, Insightful)

macsox (236590) | about 12 years ago | (#3959332)

i certainly appreciate good technology, don't get me wrong. but, after reading a new york times magazine article on the good doctor, i revisited ALICE, and was not impressed, as i hadn't been the first time. i messed with it for about ten minutes, thinking maybe i was missing something, and then showed it to my girlfriend, who asked ALICE about three questions and then gave me one of those looks.

i know, i know, baby steps, but, in a behavioral sense, this neither approximates nor even reasonably simulates intelligent thought. why are people so blown away?

Re:not a troll, i swear (1)

Zurk (37028) | about 12 years ago | (#3959528)

it talks back. its a (slightly better) eliza. we arent blown away. we're just happy that someone is building it. sure its a toy. but its a step towards the real thing (combining google with alicebot and cyc would be a great start). and maybe just maybe modifications to the open code of alicebot can lead to some real progress.

Re:not a troll, i swear (1)

JPriest (547211) | about 12 years ago | (#3959584)

I think I liked Eliza better. For some cheap entertainment check out AOLiza [fury.com]. It's a list of some chat logs where some unsuspecting AIM users end up talking to Eliza.

Newral Networks are Wrong Level? (5, Informative)

Louis Savain (65843) | about 12 years ago | (#3959348)

My longstanding opinion is that neural networks are the wrong level of abstraction for understanding intelligence, human or machine.

Not a very valid opinion since the behavioral complexity and robustness of biological neural networks are many, many orders of magnitude greater than that of any robot or program in existence. Alice is a good example. But this view is to be expected from a GOFAI (good old fashioned AI) guru whose livelihood depends on hawking the hopelessly flawed symbolic intelligence and knowledge representation approach to AI. This approach is over fifty years old and they still can't use it to make a machine as smart as a cockroach. Not a very good track record, IMO.

For a better take on why neural networks are the only hope for achieving human level AI, click on the links below:

Temporal Intelligence [gte.net]
Animal [gte.net]

Re:Newral Networks are Wrong Level? (0)

Anonymous Coward | about 12 years ago | (#3959541)

yes and please tell us exactly how your machine simulated perceptron is even close to the real thing. its all bullshit. no perceptron can ever match a single biological neuron.

Re:Neural Networks are Wrong Level? (0)

Anonymous Coward | about 12 years ago | (#3959657)

Not a very valid opinion since the behavioral complexity and robustness of biological neural networks are many, many orders of magnitude greater than that of any robot or program in existence.

Yeah! Brains may be shitty at math, but they're fantastic at interacting with the real world. Would you rather have a robot that can do matrix multiplication, or one that can walk to the kitchen and return with an (intact) beer? The substrate matters.

Re:NeUral Networks are Wrong Level? (1)

imta11 (129979) | about 12 years ago | (#3959662)

There are two levels to the AI problem: the symbolic and the manipulation. Symbols should be used to assign meanings to things, and the neural net for processing things. That's how the brain works. Signals fly around in the frontal lobe and produce some kind of emerging answer. That answer has no meaning outside of the brain, but it produces a stimulus. This stimulus causes the training that humans get as an infant to "make" the learned behavior happen. Or if you like, terminate the signal path. In reality nothing terminates, other things just take over. What you need is a neural network that adjusts its weights based on its environment, and then produces a canned response at some point. This canned response ideally could be the result of the environment. mnjnjmmnjmn,mn,

fuck it. I'll just write a paper.

Re:Newral Networks are Wrong Level? (1)

mike3411 (558976) | about 12 years ago | (#3959665)

I agree, I'm studying neurobiology (at CMU :) and am amazed at Dr. Wallace's disdain for the functionality of the human brain, and similar approaches. I haven't gotten a chance to play with ALICE much (the site is getting rather slow), but the bottom line is that all of its "intelligence" and abilities have been hard-coded in, and it has no ability to adapt or learn, some of the fundamentals of intelligence.

lsd (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3959351)

lsd is simply fantastic.

if lsd frightens you then the government loves you.

have a nice afternoon.

This is the Best Interview On Slashdot Ever (1)

BlueRain (90236) | about 12 years ago | (#3959363)

Thank you Dr. Wallace. Really.

Re:This is the Best Interview On Slashdot Ever (0)

Anonymous Coward | about 12 years ago | (#3959670)

Suck - sukc -slurp!

Kind of hard to get past the first answer. (4, Insightful)

Christianfreak (100697) | about 12 years ago | (#3959391)

Disclaimer: I haven't read the whole thing yet since it's long; I'm going to comment on my observations so far.

That's not to say that some people can't be more enlightened than others. But for the vast herd out there, on average, consciousness is simply not a significant factor. Not even a second- or third-order effect. Consciousness is marginal.

Okay, I'm sure this guy is a huge expert and all, but this sounds rather elitist. Lots of people create lots of wonderful things; to say that most people don't use their consciousness simply ignores all the massive achievements of the last 100 years. He goes on to talk about how people say only about 45,000 things to his robots... well, it seems to me the obvious answer is that most people perceive robots a certain way ... as machines. In fact I'm impressed he got that many responses; most people don't ask their electric can-opener what the meaning of life is, and I venture to guess that most people don't see a robot much differently.

Also he talks about how the brain is such a horrible computer but completely ignores human interaction, something that our computers can't do and I don't see them doing very well anytime in the near future (ever talked to that crappy robot voice on Sprint PCS customer service?). He talks about how the brain is horrible at math but ignores the fact that every time we move, the brain makes complex calculations to put our legs in the right place and keep us balanced. Just because we aren't conscious of it doesn't mean it doesn't happen.

So really I think he's comparing humans from the perspective of his robots ... I don't think it's a very good comparison. In fact, switch good visual recognition with good math skills in what he's saying and you would have a better description of a robot than a person ...

Just my opinions, not meant as a troll.

Re:Kind of hard to get past the first answer. (0)

Anonymous Coward | about 12 years ago | (#3959550)

Ah. Walking. Muscle memory.

Is it me or (-1, Flamebait)

Anonymous Coward | about 12 years ago | (#3959392)

Have the stories on slashdot really sucked lately.

Oh well off to fuck ALICE up the ass.

LSD (2, Interesting)

hanwen (8589) | about 12 years ago | (#3959414)

In the early 1960's there was some very promising research at Harvard applying LSD to depressed patients like me. [...] Even today there is zero research on this topic.

It all depends on where the "topic" ends precisely, but there have been studies on the effect of LSD on religious experiences. Some of them are cited in "Zen and the Brain" by James Austin.

Re:LSD (1)

Ashtangi (583372) | about 12 years ago | (#3959544)

Yeah, he blew this one. There is a lot of academic study going on now on hallucinogens in general. Check out MAPS [maps.org] for a good starting point.

As for the consciousness remark he made ("consciousness is marginal"), I for one will disagree. And that is what MAPS is all about (along with the likes of Richard Schultes, the late Terence McKenna, Dennis McKenna, and a slew of other "psychonauts" out there).

Read the answer to #2 (3, Insightful)

sconeu (64226) | about 12 years ago | (#3959430)

1. He didn't answer the question
2. <SARCASM>Good thing he's not bitter or anything, isn't it?</SARCASM>

Re:Read the answer to #2 (0)

Anonymous Coward | about 12 years ago | (#3959568)

It seems that answer 3 goes to question 2 and answer 2 goes to question 3.

"what's wrong with this picture?" (0)

Anonymous Coward | about 12 years ago | (#3959447)

The native born American hippie agronomy student who got his Ph.D. in the 1970's is growing pot underground in California today. The immigrant doctor who "proved" that marijuana causes cancer got the NIDA grant and has tenure at UCLA. What's wrong with this picture?
What's wrong with this picture is the very fact that he mentions ethnicity. I've always kind of assumed that scientific minds- especially computer scientific minds- can move past irrelevant things like that....and I find it especially sad that someone complaining about the amount of backstabbing going on in american science (and that's a valid argument) has to resort to essentially the same tactics to drive a point home.

Re:"what's wrong with this picture?" (1)

alicebotmaster (134416) | about 12 years ago | (#3959625)

You brought up ethnicity, not me. I just said he was an immigrant.

Question I wish I'd thought of.... (1)

buffer-overflowed (588867) | about 12 years ago | (#3959452)

It seems to me that Dr. Wallace is half right on his interpretation. His transistor/operating system analogy would seem to be fairly compelling and makes a lot of sense to me.

The question I wish had been asked is: we all know emulation is slower and normally less accurate than a native system. If you are approaching AI from the standpoint of developing the operating system before developing the system itself, how is this a more accurate approach, or will both approaches yield a final positive result?

His answers basically make me think that a true AI is most likely to evolve on two fronts. First, the development of models that emulate the structure of the brain (neural networks/etc.), and second the development of models that emulate the way it actually behaves. NNs are quite good at learning things from an input layer, but how do you go about getting that input layer without an appropriate model of what human behavior is?

This is why I think that models like ALICE will be used to approximate behavior and then a neural network will be used to learn how to emulate that logic with an adaptive input layer(being a next generation ALICE equivalent). IANITF(I am not in the field) however. Last thing I read in it was on perceptors, logic grammars, and kohenegan[sic] SONs. Any other /.ers who may be more informed have any thoughts?

Re:Question I wish I'd thought of.... (0)

Anonymous Coward | about 12 years ago | (#3959597)

I'd question the whole "hardwired" approach in either silicon or meat. Perhaps a better approach would be to combine something like Transmeta's chips with neural networks. Then we may begin to approximate an evolving approach to self learning AI actually capable of some kind of behaviour rather than rigid, codified programming.
Just my 2 c

I can't believe people are being taken in by this! (5, Interesting)

chrisseaton (573490) | about 12 years ago | (#3959460)

This has nothing to do with AI!

All the Alice Bot does is respond to your statements and questions. It never initiates anything, it never thinks for itself, it just loads responses and sends them off.

This is not AI because it never creates answers to questions, just picks them from a list. Sure, it uses the current context to pick the responses, and it modifies the responses to fit what you have already told it, but it never creates anything itself, and nothing ever changes. Ask it the same question a hundred times, and you get the same answer a hundred times.

This is just a database with a human text interface - nothing more. There is no creativity, no adaptability, no intelligence, and it really annoys me when people sing about this being AI.

Re:I can't believe people are being taken in by th (0)

Anonymous Coward | about 12 years ago | (#3959547)

Do you even know what A.I. stands for? If it actually created its own answers to the questions, it would be just an I. But because it simply feigns intelligence, it is referred to as an A.I.

Re:I can't believe people are being taken in by th (3, Informative)

slashkitty (21637) | about 12 years ago | (#3959589)

It really annoys me how many slashdotters like yourself don't seem to know the definition of AI.

This is the definition of AI. (m-w.com)

Main Entry: artificial intelligence
Function: noun
Date: 1956
1 : the capability of a machine to imitate intelligent human behavior
2 : a branch of computer science dealing with the simulation of intelligent behavior in computers
Clearly, ALICE falls under the first definition. It's not simulating intelligence, it's only imitating the behavior of conversation.

I also beg to differ about the creativity and adaptability of it. The bot master must be very creative to create something that can fool humans. In addition, there are some ways the bot can adapt and change, as he states in the interview.

If you need to see more robots, 50,000 have been created at RunABot.com [runabot.com], or you can make one yourself.

He's not shy (2, Interesting)

f00zbll (526151) | about 12 years ago | (#3959463)

I read through most of the answers and I have to say this guy has strong beliefs. I'm not going to bother making judgments on his beliefs, but it was an interesting read.

I personally think the idea of "consciousness" is over-rated and gets in the way most of the time. But I'm not about to make the quantum leap from that and say "people have no consciousness and most people are cows." What a computer does and what a human can do are completely different things. For example, a great architect can look at a structure for 2 minutes and deconstruct it. Can a machine do the same thing? A robot may be able to measure a building down to millimeters, but would it be able to take the next step and recommend a way to add two rooms to the house?

In my opinion, one of the hardest things for AI to imitate/model is creative thinking. Take mathematical proofs, for example. How long would it take for a bot to realize pi is infinite? Would it have to calculate it to 10^10 digits to finally say "this number may be infinite"? Look at the recent high-profile proofs that were solved with brute force. Could a robot come up with those theorems spontaneously? Sometimes a problem isn't computationally feasible and intuition is needed. Maybe the good professor is too focused on computer science and has forgotten how to live a full life and learn to appreciate humanity with all its flaws and gems.

This is awesome... (3, Insightful)

imta11 (129979) | about 12 years ago | (#3959510)

This guy knows what it is about. His response to question #2 pegs the fundamental problem with the CS discipline as an undergraduate or graduate field of study, and maybe the sciences in general. The people that do things by the book, solve the same problem sets, and schmooze with the professors the most get the A's, the promotions, etc... How many times do I have to solve the same problem? Is this just so the people that waste their study time can bullshit their parents? Someone in my classes actually said "I'm a CS major because my father told me to be. I had no idea what it was." Guess what, she still doesn't, but that didn't stop her from getting elected ACM president for my school's chapter. These types of people need to get the fuck out of CS and go into management, so that the other brood of worthless CS majors, those that think technical knowledge (defined to be something they read about the linux kernel while sitting at home smacking their pud around a d&d table on a friday night) counts for everything, can bitch about them when they get jobs as sysadmins. If you don't like the science, go to a technical school or business school so that people will know they should never take you seriously.

Re:This is awesome... (1)

imscarr (246204) | about 12 years ago | (#3959687)

You snooze, you lose -- You shmooze, you win!

Not quite (4, Insightful)

iocat (572367) | about 12 years ago | (#3959512)

He said

I say this with such confidence because of my experience building robot brains over the past seven years. Almost everything people ever say to our robot falls into one of about 45,000 categories. Considering the astronomical number of things people could say, if every sentence was an original line of poetry, 45,000 is a very, very small number.

I say:

The fact that people only say 45,000 different things to a robot shouldn't indicate to you that people only have about 45,000 things to say, just that they only have 45,000 things to say to a robot in what is essentially a lab setting!

That said, I think this is a pretty fascinating interview.

Totally unimpressed so far (3, Insightful)

disappear (21915) | about 12 years ago | (#3959535)

Well, I'm so totally unimpressed so far. One out of three, and we have a whole bunch of nonsense.

For starters, AI, neural nets, and brains. We have the assertion that the brain is a computer, and we should really be concerned with the software on the computer, not the state of the neurons.

Even accepting the good doctor's view that the brain is a computer, this is an absurd position. After all, the software is in the brain. It's not like it gets bootstrapped from outside sources. So either the software is built into the whole structure of the brain and we can only learn about it by studying the rules (a la neural nets) or we have to figure out which part of the brain bootstraps the rest of it. Which we'd have to study the wet squishy bits to figure out. Which can best be done with a combination of noninvasive study (MRI, for example) and simulation. Like neural nets.

(The third possibility is that the brain is a computer, but the program is stored on a shared network drive... that is, in a non-material 'soul.' Which would bring us back to Cartesian dualism, God, and a whole bunch of things you'd better reject if you want to work in AI. Not rejecting the notion of God per se, just in the degree of investment in the nonmaterial world in which a being needs to take part...)

Second, academic politics. Dr. Wallace seems to believe in a golden age (that occurred, not coincidentally, just before his professional career) where professors were promoted and supported on the basis of merit.

Right. Anyone who believes in any society at any time in the West that existed without politics is invited to check into the nearest mental institution. To accept the idea of a 'golden age' just tantalizingly out of his reach is pathetic. It's like imagining an era where writers received acclaim based on the quality of their work.

Newsflash: Emily Dickinson's writings were discovered after her death. Everything we read by Melville was written long after his popularity had waned. Any number of great artists were 'discovered' after their deaths. And the most popular writers and artists at any time have been the ones who played the political game successfully. (Personal politics, not governmental politics, of course.) Anyone who's read any medieval philosophy or theology knows that there hasn't been a meritocracy in Western academia for at least eight hundred years.

As far as LSD and politics, it was the professors involved in those experiments (ie Tim Leary) who engaged in politics. And they were bad at it. And they lost. And the substances ended up scheduled. And their academic careers were ruined.

On to part two, to see what he says there. Perhaps it gets better.

been there, done that (0)

Anonymous Coward | about 12 years ago | (#3959542)

King's Quest had AI logic just as good in 1983!!!

He predicted the questions. (1)

sadclown (303554) | about 12 years ago | (#3959548)

It would appear he wrote these answers before he received the questions. He then randomly applied these essays to the questions. After all, his theory of question and answer is that human conversations are banal and predictable and that creating a reasonable response is elementary programming.

language? (2)

macsox (236590) | about 12 years ago | (#3959601)

belated question -- maybe some ai geek out there can answer:

is it possible to create an ai like this that is scalable to multiple languages, or would the wheel have to be reinvented each time? is it too reliant on idioms?

Re:language? (0)

Anonymous Coward | about 12 years ago | (#3959694)

Alice can do German, French, and English, so yes.

Amazing! (1)

EvilBudMan (588716) | about 12 years ago | (#3959642)

Amazing interview. He didn't always answer the questions and I didn't always agree, but it was very interesting still.

First Answer, Re: Simulated A.I. (2, Insightful)

f8xmulder (588686) | about 12 years ago | (#3959674)

Dr. Wallace wrote in the answer to the first question: "Significantly, no one has ever proved that the brain is a *good* computer. It seems to run some tasks like visual recognition better than our existing machines, but it is terrible at math, prone to errors, susceptible to distraction, and it requires half its uptime for food, sleep, and maintenance.

It sometimes seems to me that the brain is actually a very shitty computer. So why would you want to build a computer out of slimy, wet, broken, slow, hungry, tired neurons? I chose computer science over medical school because I don't have the stomach for those icky, bloody body parts. I prefer my technology clean and dry, thank you. Moreover, it could be the case that an electronic, silicon-based computer is more reliable, faster, more accurate, and cheaper.

I find myself agreeing with the Churchlands that the notion of consciousness belongs to "folk psychology" and that there may be no clear brain correlates for the ego, id, emotions as they are commonly classified, and so on. But to me that does not rule out the possibility of reducing the mind to a mathematical description, which is more or less independent of the underlying brain archiecture. That baby doesn't go out with the bathwater. A.I. is possible precisely because there is nothing special about the brain as a computer. In fact the brain is a shitty computer. The brain has to sleep, needs food, thinks about sex all the time. Useless!

I always say, if I wanted to build a computer from scratch, the very last material I would choose to work with is meat. I'll take transistors over meat any day. Human intelligence may even be a poor kludge of the intelligence algorithm on an organ that is basically a glorified animal eyeball. From an evolutionary standpoint, our supposedly wonderful cognitive skills are a very recent innovation. It should not be surprising if they are only poorly implemented in us, like the lung of the first mudfish. We can breathe the air of thought and imagination, but not that well yet.

And remember, no one has proved that our intelligence is a successful adaption, over the long term. It remains to be seen if the human brain is powerful enough to solve the problems it has created. "

It's not that I don't appreciate Dr. Wallace's contributions to the field of A.I., nor am I ignoring his obvious expertise in programming and computer science. Those skills have made him the foremost expert on A.I. today. Yet he has denigrated the very organ by which he is able to formulate his thoughts, and seems to see little, if any, use in modelling or even studying its structure and arrangement to gain any insight into the possible ramifications for A.I.

I just find it interesting that we humans, as rational beings, with certain innate intelligences and thinking abilities, often rail against the very things that allow us the liberty and (dare I say) privilege of saying them.

That was my only complaint - the interview was insightful and interesting, a great read.

Answer mixup? (1)

HitchHik (103069) | about 12 years ago | (#3959677)

Isn't the answer to question 2 presented as the answer to question 3?

AI and Complexity (1)

hyperizer (123449) | about 12 years ago | (#3959699)

But to me that does not rule out the possibility of reducing the mind to a mathematical description, which is more or less independent of the underlying brain archiecture.

But what if "the mind," aka the illusion of consciousness, is an emergent property of the brain's complex system?

I would think it would be very difficult to create a computer model of the brain, since there's likely a high degree of probability involved that can't be directly measured. But has any work been done along these lines (agent-based models, etc.)?